\section{Introduction} Since the foundation of Picard-Vessiot theory as a Galois theory for linear differential equations (cf.~\cite{ep:ta-tome3}), many analogues have evolved. For example, there are Picard-Vessiot theories for difference equations \cite{mvdp-mfs:gtde}, for iterative differential equations \cite{bhm-mvdp:ideac}, for $C$-ferential fields \cite{mt:haapvt}, for Artinian simple module algebras \cite{ka-am:pveasma} and others.\\ In all these theories the base ring is a commutative ring with some operators acting on it, and the main objects are modules over that ring with the same operators acting.\\ The setting of Artinian simple module algebras generalises the setting of (iterative) differential fields as well as that of inversive difference pseudo-fields (i.e.~simple difference rings which are a product of fields), but it does not generalise the difference setting where the given endomorphism is not bijective, as in \cite{mw:gdgt}. Y.~Andr\'e in \cite{ya:dnctgdd} already gave a setting which unifies the case of difference pseudo-fields and differential fields in characteristic zero; however, it does not contain the Picard-Vessiot theory for differentially simple rings given in \cite{am:pvtdsr}.\\ One could go further and generalise the operators even more or loosen the conditions on the base ring. However, there might still be cases not covered by such generalisations.\\ The present approach therefore restricts to the categorical properties which all the categories of differential modules, difference modules etc.~share, and hence gives unified proofs for all these Picard-Vessiot theories (and more general ones).
The main results of this paper are the construction of a universal solution ring for a given ``module'' $M$ such that all Picard-Vessiot rings (PV-rings) for $M$ are quotients of this ring (Thm.~\ref{thm:exists-sol-ring} and Thm.~\ref{thm:simple-minimal-solution-rings-are-quotients}), the existence of PV-rings up to a finite extension of constants (Thm.~\ref{thm:existence-of-pv-ring}), and uniqueness of PV-rings inside a given simple solution ring with same constants (Prop.~\ref{prop:unique-pv-inside-simple-sol-ring}). Furthermore, we prove a correspondence between isomorphism classes of fibre functors $\omega:\tenscat{M}\to \mathsf{vect}_{\tilde{k}}$ and isomorphism classes of PV-rings $R$ for $M\otimes_k \tilde{k}$, where $k$ is the field of constants of the base ring $S$ and $\tilde{k}$ is any finite extension of $k$ (Thm.~\ref{thm:pv-rings-equiv-to-fibre-functors}). We also prove that the group scheme of automorphisms $\underline{\Aut}^\partial(R/S)$ of $R$ over $S$ that commute with the extra structure, is isomorphic to the affine group scheme of automorphisms $\underline{\Aut}^\otimes (\omega)$ of the corresponding fibre functor $\omega$ (Cor.~\ref{cor:auts-are-isomorphic}). These two statements are direct generalisations of the corresponding facts given for example in \cite[Ch.~9]{pd:ct} or \cite[Sect.~3.4 and 3.5]{ya:dnctgdd}.\\ Finally, we give a Galois correspondence between closed normal subgroup schemes of the Galois group scheme and subalgebras of the PV-ring which are PV-rings for some other ``module''. \medskip At this point we should mention that the setup of this article does not cover the parametrized Picard-Vessiot theories where the constants are equipped with an additional differential or difference operator as given for example in \cite{pjc-mfs:gtpdeldag}, \cite{ldv-ch-mw:dgtlde}, \cite{ch-mfs:dgtlde}. 
\medskip \paragraph{\bf Differential setting} We now recall the main properties of the differential setting, to allow a better comparison with its analogues in the abstract setting. Classically, one starts with some differential field $(F,\partial)$ of characteristic zero, and its field of differentially constant elements $k:=F^\partial=\{x\in F\mid \partial(x)=0\}$. The basic objects are differential modules ($\partial$-modules) $(M,\partial_M)$, i.e.~$F$-vector spaces $M$ with a derivation $\partial_M:M\to M$. Morphisms of $\partial$-modules (called \textit{differential homomorphisms}) are homomorphisms $f:M\to N$ of the underlying $F$-vector spaces which are compatible with the derivations, i.e.~satisfy $f\circ \partial_M=\partial_N\circ f$. This implies that kernels and cokernels of $\partial$-homomorphisms are again $\partial$-modules, turning the category of $\partial$-modules over $(F,\partial)$ into an abelian category.\\ For $\partial$-modules $(M,\partial_M)$ and $(N,\partial_N)$ the tensor product $M\otimes_F N$ is naturally equipped with a derivation given by $\partial(m\otimes n):=\partial_M(m)\otimes n+m\otimes \partial_N(n)$. This provides the category of $\partial$-modules with the structure of a symmetric monoidal category with unit object $\1$ given by the differential field $(F,\partial)$. Furthermore, for every $\partial$-module $(M,\partial_M)$ that is finitely generated as an $F$-vector space the dual vector space $M^\vee$ carries a differential structure $\partial_{M^\vee}$ such that the natural homomorphisms of evaluation ${\rm ev}:M\otimes M^\vee\to F$ and coevaluation $\delta:F\to M^\vee\otimes M$ are $\partial$-homomorphisms. This means that $(M^\vee, \partial_{M^\vee})$ is a dual of $(M,\partial_M)$ in the category of $\partial$-modules.\\ As we consider all $\partial$-modules -- and not only those which are finitely generated as $F$-vector spaces -- this category is even closed under inductive limits.
This is due to the fact that for a directed system $(M_i,\partial_i)_{i\in I}$ of differential modules, the inductive limit $\varinjlim_{i\in I} M_i$ of $F$-vector spaces can be equipped uniquely with a derivation compatible with the homomorphisms $M_i\to \varinjlim_{i\in I} M_i$. The differential constants of a $\partial$-module $(M,\partial_M)$ are given as $M^\partial:=\{m\in M\mid \partial_M(m)=0\}$. This is a $k$-vector space of dimension at most $\dim_F(M)$. Therefore, one is interested in differential field extensions of $F$ over which the corresponding dimensions are the same. From the viewpoint of linear differential equations this means that the differential field extension contains a full set of solutions. We assume now that the field of constants $k$ is algebraically closed. A Picard-Vessiot extension of $F$ for a $\partial$-module $(M,\partial_M)$ with $\dim_F(M)<\infty$ is defined to be a minimal differential field extension $(E,\partial_E)$ of $F$ such that $\dim_k((E\otimes_F M)^\partial)=\dim_E(E\otimes_F M)=\dim_F(M)$. A main theorem states that a Picard-Vessiot extension always exists and is unique up to differential isomorphism.\\ The differential Galois group $\Gal(E/F)$ of a Picard-Vessiot extension $E/F$ is then defined to be the group $\Aut^\partial(E/F)$ of differential automorphisms of $E$ fixing $F$. It has the structure of ($k$-rational points of) a linear algebraic group over $k$, and one obtains a Galois correspondence between the Zariski-closed subgroups of $\Gal(E/F)$ and differential subfields of $E$ containing $F$.\\ A main role is played by the Picard-Vessiot ring $R$ in $E$. It is the subring of $E$ which is generated as an $F$-algebra by the entries of a fundamental solution matrix and its inverse\footnote{A fundamental solution matrix is a base change matrix over $E$ mapping an $F$-basis of $M$ to a $k$-basis of $(E\otimes_F M)^\partial$, both bases seen as $E$-bases of $E\otimes_F M$.}.
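For illustration, consider the classical example $F={\mathbb C}(x)$ with $\partial=\frac{d}{dx}$ and the one-dimensional $\partial$-module $M=Fe$ with $\partial_M(e)=-e$, corresponding to the equation $\partial(y)=y$: an element $ye\in E\otimes_F M$ is constant if and only if $\partial(y)=y$. A fundamental solution matrix is the $(1\times 1)$-matrix $(e^x)$, and in this example
$$R=F[e^x,e^{-x}], \qquad E=\Quot(R)={\mathbb C}(x,e^x), \qquad \Gal(E/F)\cong {\mathbb G}_{m,{\mathbb C}},$$
where $c\in {\mathbb C}^\times={\mathbb G}_m({\mathbb C})$ acts via $e^x\mapsto c\cdot e^x$.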
$R$ is a $\partial$-simple $\partial$-ring extension of $F$ minimal with the property that $R\otimes_F M$ has a basis of constant elements. Here, $\partial$-simple means that $R$ has no nontrivial ideals stable under the derivation. Furthermore, $E$ is the field of fractions of $R$, and $\Aut^\partial(R/F)=\Aut^\partial(E/F)$. Moreover, the spectrum $\spec(R)$ is a torsor of $\Gal(E/F)$ over $F$. The Galois correspondence is more or less a consequence of this torsor property, as the subfield $E^{\mathcal H}$ corresponding to a closed subgroup ${\mathcal H}\leq \Gal(E/F)$ is nothing else than the field of rational functions on the scheme $\spec(R)/{\mathcal H}$. If the field of constants $k$ is not algebraically closed (cf.~\cite{td:tipdgtfrz} and \cite{am:gticngg}), some things become more involved. First of all, one also requires that a Picard-Vessiot field $E$ has the same field of constants $k$ -- a condition which is automatically fulfilled if $k$ is algebraically closed. Furthermore, the Galois group has to be replaced by a representable group functor $\underline{\Gal}(E/F)$, i.e.~an affine group scheme, whose group of $k$-rational points is $\Aut^\partial(E/F)$. Then as above, $\spec(R)$ is a $\underline{\Gal}(E/F)$-torsor over $F$ and one obtains a Galois correspondence between closed subgroups of $\underline{\Gal}(E/F)$ and differential subfields of $E$ containing $F$. However, since the constants are not algebraically closed, existence of a Picard-Vessiot field or a Picard-Vessiot ring is not guaranteed, and also uniqueness might fail. Furthermore, even if one is given a PV-field $E$, the Galois group scheme does not act algebraically on the PV-field but only on the PV-ring. On the other hand, one does not get a full Galois correspondence on the ring level. The geometric reason is that for a closed subgroup ${\mathcal H}\leq \underline{\Gal}(E/F)$ the invariant ring $R^{\mathcal H}$ is the ring of global sections of the orbit space $\spec(R)/{\mathcal H}$.
If the latter is not affine, $R^{\mathcal H}$ becomes ``too small''.\\ On the ring level, at least one has a restricted Galois correspondence between closed normal subgroups of $\underline{\Gal}(E/F)$ and differential subrings of $R$ containing $F$ which are Picard-Vessiot rings for some $\partial$-module (cf.~\cite{am:pvtdsr}). In the abstract setting of this article, we will stay on the ring level, since the action of the Galois group is naturally algebraic there. \medskip \paragraph{\bf Iterative differential and difference setting} In iterative differential Galois theory in arbitrary characteristic, derivations are replaced by so-called iterative derivations (cf.~\cite{bhm-mvdp:ideac}). These are a collection $\theta=\left( \theta^{(n)}\right)_{n\in{\mathbb N}}$ of additive maps satisfying $\theta^{(0)}={\rm id}$, $\theta^{(n)}(ab)=\sum_{i+j=n}\theta^{(i)}(a)\theta^{(j)}(b)$ as well as $\theta^{(n)}\circ \theta^{(m)}=\binom{n+m}{n}\theta^{(n+m)}$ for all $n,m\in {\mathbb N}$. This means that $\partial:=\theta^{(1)}$ is a derivation and $\theta^{(n)}$ resembles $\frac{1}{n!}\partial^n$ -- the $n$-th iterate of $\partial$ divided by $n$ factorial. Indeed, in characteristic zero, an iterative derivation is determined by the derivation $\partial=\theta^{(1)}$ via $\theta^{(n)}=\frac{1}{n!}\partial^n$. In particular, the differential setting in characteristic zero is a special case of the iterative differential setting. The constants of an iterative differential field $(F,\theta)$ are given by $F^\theta:=\{x\in F\mid \theta^{(n)}(x)=0 \, \forall n\geq 1 \}$. The basic objects are iterative differential modules $(M,\theta_M)$, and one is interested in minimal iterative differential extensions $E$ of $F$ (with same constants) such that $\dim_{F^\theta}\left( (E\otimes_F M)^\theta \right)=\dim_F(M)$. Everything about Picard-Vessiot rings and fields turns out the same as in the differential setting.
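For illustration, on the rational function field $F=k(t)$ the iterative derivation $\theta_t$ with respect to $t$ is given on monomials by
$$\theta_t^{(n)}(t^m)=\binom{m}{n}\,t^{m-n},$$
and the iteration rule amounts to the Vandermonde-type identity $\binom{r}{m}\binom{r-m}{n}=\binom{n+m}{n}\binom{r}{n+m}$. In characteristic $p>0$ one has $F^{\theta_t}=k$, whereas the field of constants of the single derivation $\partial=\theta_t^{(1)}$ is the larger field $k(t^p)$.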
However, even in the case that $k=F^\theta$ is algebraically closed, one has to consider the Galois group as an affine group scheme which might be nonreduced (if $E/F$ is not separable) (cf.~\cite{am:igsidgg}, \cite{am:gticngg}). \smallskip In difference Galois theory, derivations are replaced by automorphisms and constants by invariants, i.e.~one starts with some field $F$ together with an automorphism $\sigma:F\to F$ and its field of invariant elements $k:=F^\sigma:=\{ x\in F\mid \sigma(x)=x\}$. The basic objects are difference modules $(M,\sigma_M)$, i.e.~$F$-vector spaces $M$ together with a $\sigma$-linear automorphism $\sigma_M:M\to M$. Again, the set of invariants $M^\sigma:=\{ m\in M\mid \sigma_M(m)=m\}$ is a $k$-vector space of dimension at most $\dim_F(M)$, and one is interested in a difference extension of $F$ over which the corresponding dimensions are the same. In this setting another aspect appears, since in some situations every solution ring has zero divisors. Hence even if $k$ is algebraically closed, there does not exist a Picard-Vessiot {\bf field} in general. Nevertheless, if $k$ is algebraically closed, there always exists a Picard-Vessiot ring $R$ over $F$, i.e.~a $\sigma$-simple $\sigma$-ring extension $R$ of $F$ minimal with the property that $R\otimes_F M$ has a basis of invariant elements, and instead of the Picard-Vessiot field one considers $E=\Quot(R)$, the total ring of fractions of $R$. With these definitions one again obtains a Galois group scheme $\underline{\Gal}(R/F)$ as a representable functor whose $k$-rational points are exactly $\Aut^\sigma(R/F)=\Aut^\sigma(E/F)$, as well as a Galois correspondence between closed subgroup schemes of $\underline{\Gal}(R/F)$ and total difference subrings of $E$ containing $F$. \medskip \paragraph{\bf Other settings} The three basic settings described above have been generalised in various ways.
First of all, the operators acting have become more general: Takeuchi in \cite{mt:haapvt} considered an action of a pointed irreducible cocommutative coalgebra $C$ on the base field $F$ (which he then calls a $C$-ferential field). This amounts to having a collection of several commuting higher derivations. Later, Amano and Masuoka in \cite{ka-am:pveasma} considered an action of a pointed cocommutative Hopf-algebra $D$ on the base field $F$ (then called a $D$-module algebra), thereby generalising to a collection of commuting iterative derivations and automorphisms. Andr\'e in \cite{ya:dnctgdd} used so-called noncommutative differentials in characteristic $0$ resembling a collection of derivations and endomorphisms. On the other hand, the base rings have also become more general: the base field $F$ has been generalised to (i) an Artinian algebra (i.e.~a finite product of fields) which is simple as a $D$-module algebra in \cite{ka-am:pveasma}, (ii) a Noetherian ring which is simple with respect to the differentials in \cite{ya:dnctgdd}, and (iii) any differentially simple (iterative) differential ring in \cite{am:pvtdsr}. In \cite[Ch.~2]{nk:esde}, N.~Katz even considers schemes ${\mathcal X}$ of finite type over $k$, and obtains Picard-Vessiot extensions for finitely generated ${\mathcal O}_{\mathcal X}$-modules with integrable connections. \medskip All these settings have in common that one starts with a base ring (or even base scheme) $F$ with some extra structure such that no non-trivial ideal of $F$ is respected by the extra structure, i.e.~that $F$ is simple. The basic objects for which one considers Picard-Vessiot rings are finitely generated modules over $F$ with corresponding extra structure having a dual in the category of modules with extra structure, and the Picard-Vessiot rings are algebra objects in the category of (all) modules with extra structure.
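\medskip For illustration, a base covered by several of these generalisations is the field $F={\mathbb Q}(t)$ equipped with the two commuting operators
$$\partial=\frac{d}{dt} \qquad \text{and} \qquad \sigma: f(t)\mapsto f(t+1),$$
i.e.~a difference-differential field: the basic objects are then $F$-vector spaces carrying both a derivation and a $\sigma$-linear automorphism, and this situation fits for instance into the setting of \cite{ya:dnctgdd}.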
\medskip \paragraph{\bf Abstract setting} In the abstract setting this is reflected by the following basic setup: \begin{enumerate} \item[(C1)] ${\mathcal C}$ is an abelian symmetric monoidal category with unit object $\1\in {\mathcal C}$. We assume that $\1$ is a simple object in ${\mathcal C}$. \item[(C2)] ${\mathcal C}$ is cocomplete, i.e.~${\mathcal C}$ is closed under small inductive limits. \item[(F1)] There is a scheme ${\mathcal X}$, and an additive tensor functor $\upsilon:{\mathcal C}\to \mathsf{Qcoh}({\mathcal X})$ from ${\mathcal C}$ to the category of quasi-coherent ${\mathcal O}_{\mathcal X}$-modules which is faithful, exact and preserves small inductive limits. (In particular, $\upsilon(\1)={\mathcal O}_{\mathcal X}$.) \item[(F2)] $M\in {\mathcal C}$ is dualizable whenever $\upsilon(M)$ is a finitely generated ${\mathcal O}_{\mathcal X}$-module. \end{enumerate} It is this basic setup from which all the statements on Picard-Vessiot rings and their Galois groups follow. For stating those, one has to transfer several concepts into the abstract setting; most importantly, the concept of constants/invariants: \\ It is not hard to see that for every differential module $(M,\partial_M)$ over $F$ the constants $M^\partial$ of $M$ can also be given as the vector space $\Hom_F^\partial(F,M)$ of differential homomorphisms $f:F\to M$, since every $F$-homomorphism $f:F\to M$ is uniquely determined by the image of $1\in F^\partial\subseteq F$. Similarly, the invariants $M^\sigma$ of a difference module $(M,\sigma_M)$ can be given as $\Hom_F^\sigma(F,M)$. Hence, in the abstract setting, ``taking constants'' is given by the functor $()^{\mathcal C}:=\Mor_{\mathcal C}(\1,-):{\mathcal C}\to {\bf Vect}_k$ where $k$ is the field $k=\End_{\mathcal C}(\1)$ corresponding to the constants of a differential field $F$ resp.~the invariants of a difference field $F$.
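For illustration, in the differential setting the inverse of the identification $\Hom_F^\partial(F,M)\to M^\partial$, $f\mapsto f(1)$, sends a constant element $m\in M^\partial$ to the $F$-linear map $a\mapsto a\cdot m$. This map is indeed a differential homomorphism, since
$$\partial_M(a\cdot m)=\partial(a)\cdot m + a\cdot \partial_M(m) = \partial(a)\cdot m \quad \text{ for all } a\in F$$
precisely because $\partial_M(m)=0$.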
The condition on a Picard-Vessiot ring $R$ for $M$ that the module $R\otimes_F M$ has a basis of constants/invariants is given abstractly by the condition that the natural morphism $\varepsilon_{R\otimes M}:R\otimes \iota\left((R\otimes M)^{\mathcal C}\right) \to R\otimes M$ is an isomorphism in the category ${\mathcal C}$ (cf.~Prop.~\ref{prop:on-iota-r}). Here $\iota:{\bf Vect}_k\to {\mathcal C}$ is a functor corresponding to the construction of a differential/difference module out of an $F^\partial$-vector space by tensoring with the base differential/difference ring $F$. \medskip The article is structured as follows. In Section \ref{sec:comm-alg-thm}, we prove a theorem on commutative algebras which will later be used to show that the constants of minimal simple solution rings are just a finite extension of the constants $k$, and in particular to guarantee the existence of Picard-Vessiot rings up to a finite extension of constants. In Section \ref{sec:setup}, we investigate some properties of the functors $()^{\mathcal C}$ and $\iota$. In particular, we show that the functor $()^{\mathcal C}$ is right adjoint to $\iota$. Furthermore, we show that the unit $\eta:{\rm id}_{{\bf Vect}_k}\to ()^{\mathcal C}\circ \iota$ of the adjunction is a natural isomorphism, and that the counit $\varepsilon:\iota\circ ()^{\mathcal C}\to {\rm id}_{\mathcal C}$ of the adjunction provides a monomorphism $\varepsilon_M$ for every $M\in {\mathcal C}$. The latter corresponds to the fact in the differential setting that the natural homomorphism $F\otimes_k M^\partial \to M$ is injective. Section \ref{sec:c-algebras} is dedicated to commutative algebras $R$ in the category ${\mathcal C}$ and the category ${\mathcal C}_R$ of $R$-modules in ${\mathcal C}$ as given in \cite{sml:ca}, as well as properties of the functors $\iota_R$ and $()^{{\mathcal C}_R}$ similar to those of $\iota$ and $()^{\mathcal C}$, under certain assumptions on the algebra $R$.
Solution rings and Picard-Vessiot rings are then the subject of Section \ref{sec:solution-rings}, where also the theorems on existence and uniqueness of Picard-Vessiot rings are proven. The objective of Section \ref{sec:pv-rings-and-fibre-functors} is the correspondence between isomorphism classes of Picard-Vessiot rings for a given dualizable $M\in {\mathcal C}$ and isomorphism classes of fibre functors from the strictly full abelian tensor subcategory $\tenscat{M}$ of ${\mathcal C}$ to ${\bf Vect}_k$. In Section \ref{sec:galois-groups} we consider the group functors $\underline{\Aut}_{{\mathcal C}-\text{alg}}(R)$ of automorphisms of $R$ and $\underline{\Aut}^\otimes(\omega_R)$ of automorphisms of the corresponding fibre functor $\omega_R$, and we show that they are both isomorphic to the spectrum of the $k$-algebra $\omega_R(R)=(R\otimes R)^{\mathcal C}$. As the latter will be proven to be a Hopf-algebra of finite type over $k$, both group functors are indeed affine group schemes of finite type over $k$. Finally, in Section \ref{sec:galois-correspondence} we prove the Galois correspondence between normal closed subgroups of the Galois group scheme $\underline{\Aut}_{{\mathcal C}-\text{alg}}(R)$ and ${\mathcal C}$-subalgebras of $R$ that are Picard-Vessiot rings for some dualizable $N\in {\mathcal C}$. \begin{ack} I would like to thank G.~B\"ockle and F.~Heiderich for their comments on earlier versions which helped a lot to improve the paper. I would also like to thank M.~Wibmer, as only a common project with him drew my attention to this general abstract setting. \end{ack} \section{A commutative algebra theorem}\label{sec:comm-alg-thm} We will be faced with the question of whether there exists a Picard-Vessiot ring up to a finite extension of constants. The following theorem will be a key ingredient for the existence proof. All algebras are assumed to be commutative with unit.
\begin{thm}\label{thm:abstract-algebra} Let $k$ be a field, $S$ an algebra over $k$ and $R$ a finitely generated flat $S$-algebra. Furthermore, let $\ell$ be a field extension of $k$ such that $S\otimes_k \ell$ embeds into $R$ as an $S$-algebra. Then $\ell$ is a finite extension of $k$. \end{thm} \begin{proof} The proof is split into several steps: \textbf{1) Reduction to $S$ being a field} \ \\ Choose a minimal prime ideal ${\mathfrak{p}}$ of $S$, and let $S_{\mathfrak{p}}$ denote the localization of $S$ at ${\mathfrak{p}}$. Since localizations are flat, the inclusion of rings $S\subseteq S\otimes_k \ell\subseteq R$ induces an inclusion of rings $$S_{\mathfrak{p}}\subseteq S_{\mathfrak{p}}\otimes_k \ell\subseteq S_{\mathfrak{p}}\otimes_S R,$$ and $S_{\mathfrak{p}}\otimes_S R$ is a finitely generated $S_{\mathfrak{p}}$-algebra. Since flatness is stable under base change, $S_{\mathfrak{p}}\otimes_S R$ is a flat $S_{\mathfrak{p}}$-algebra.\\ Since ${\mathfrak{p}} S_{\mathfrak{p}}$ is the maximal ideal of $S_{\mathfrak{p}}$, $\bar{S}:=S_{\mathfrak{p}}/{\mathfrak{p}} S_{\mathfrak{p}}$ is a field, and $\bar{R}:=S_{\mathfrak{p}}/{\mathfrak{p}} S_{\mathfrak{p}}\otimes_S R$ is a finitely generated flat algebra over $\bar{S}$. It remains to show that $\bar{S}\otimes_k \ell$ embeds into $\bar{R}$.
Since $S_{\mathfrak{p}}\otimes_k \ell$ and $S_{\mathfrak{p}}\otimes_S R$ are both flat over $S_{\mathfrak{p}}$, the exact sequence $0\to {\mathfrak{p}} S_{\mathfrak{p}}\to S_{\mathfrak{p}}\to S_{\mathfrak{p}}/{\mathfrak{p}} S_{\mathfrak{p}}\to 0$ leads to a commutative diagram with exact rows $$\xymatrix{ 0 \ar[r] & {\mathfrak{p}} S_{\mathfrak{p}}\otimes_k \ell \ar[r] \ar@{^{(}->}[d] & S_{\mathfrak{p}}\otimes_k \ell \ar[r] \ar@{^{(}->}[d] & \left(S_{\mathfrak{p}}/{\mathfrak{p}} S_{\mathfrak{p}}\right)\otimes_k \ell \ar[r] \ar[d] & 0 \\ 0 \ar[r] & {\mathfrak{p}} S_{\mathfrak{p}}\otimes_S R\ar[r] & S_{\mathfrak{p}}\otimes_S R\ar[r] & \left( S_{\mathfrak{p}}/{\mathfrak{p}} S_{\mathfrak{p}}\right)\otimes_S R\ar[r] & 0. }$$ The last vertical arrow is injective if the left square is a pullback diagram. Hence, we have to prove that any element of $S_{\mathfrak{p}}\otimes_k \ell$ whose image in $S_{\mathfrak{p}}\otimes_S R$ actually lies in ${\mathfrak{p}} S_{\mathfrak{p}}\otimes_S R$ is an element of $ {\mathfrak{p}} S_{\mathfrak{p}}\otimes_k \ell$. Hence, let $z=\sum_{i=1}^n s_i\otimes x_i\in S_{\mathfrak{p}}\otimes_k \ell$ with $k$-linearly independent $x_1,\dots, x_n\in \ell$, and let $w=\sum_{j=1}^m a_j\otimes r_j\in {\mathfrak{p}} S_{\mathfrak{p}}\otimes_S R$ such that their images in $S_{\mathfrak{p}}\otimes_S R$ are the same. Since all elements in ${\mathfrak{p}} S_{\mathfrak{p}}$ are nilpotent (${\mathfrak{p}}$ being a minimal prime ideal), there is $e_1\geq 0$ maximal such that $a_1^{e_1}\ne 0$. Inductively for $j=2,\dots, m$, there is $e_j\geq 0$ maximal such that $a_1^{e_1}\cdots a_j^{e_j}\ne 0$. Let $a:=\prod_{j=1}^m a_j^{e_j}\in S_{\mathfrak{p}}$. Then by construction, $a\ne 0$ but $a\cdot w=\sum_{j=1}^m a a_j\otimes r_j = 0$. So $0=a\cdot z=\sum_{i=1}^n as_i\otimes x_i$, i.e.~$as_i=0$ for all $i$. Since $a\ne 0$, one obtains $s_i\not\in (S_{\mathfrak{p}})^\times$, i.e.~$s_i\in {\mathfrak{p}} S_{\mathfrak{p}}$. \medskip From now on, we may and will assume that $S$ is a field.
In this case $R$ is Noetherian as it is a finitely generated $S$-algebra. \textbf{2) Proof that $\ell$ is algebraic over $k$} \ \\ Assume that $\ell$ is not algebraic over $k$; then there is an element $a\in \ell$ transcendental over $k$. By assumption, $a$ is also transcendental over $S$ inside $R$, i.e.~the polynomial ring $S[a]$ is a subring of $R$. The image of the corresponding morphism $\psi:\spec(R)\to \spec(S[a]) \cong \AA_S^1$ is a dense subset of $\spec(S[a])$, since the ring homomorphism is an inclusion, and it is locally closed by \cite[Cor.~3, Ch.~V, \S 3.1]{nb:cac1-7}. Hence, the image is open. But for all $0\ne f\in k[a]$, the irreducible factors of $f$ in $S[a]$ are invertible in $\ell\subseteq R$. Hence, infinitely many closed points of $\spec(S[a])$ are not in the image of $\psi$ -- contradicting that the image is open. \textbf{3) Proof that $\ell$ is finite over $k$} \ \\ For showing that $\ell$ is indeed finite over $k$, we give a bound on $[\ell':k]$ for any $\ell'\subseteq \ell$ which is finite over $k$, and this bound only depends on data of $R$. Since $\ell$ is the union of all its finite subextensions, this proves finiteness of $\ell$. For simplicity we again write $\ell$ for the finite extension $\ell'$ of $k$.\\ Let $$(0)=\bigcap_{i=1}^c {\mathfrak{q}}_i$$ be a primary decomposition of the zero ideal $(0)\subseteq R$ and ${\mathfrak{p}}_i:=\sqrt{{\mathfrak{q}}_i}$ the corresponding prime ideals. Furthermore, let $N_i\in {\mathbb N}$ satisfy ${\mathfrak{p}}_i^{N_i}\subseteq {\mathfrak{q}}_i$, i.e.~for all $y_1,\dots, y_{N_i}\in {\mathfrak{p}}_i$, one has $y_1\cdot y_2\cdots y_{N_i}\in {\mathfrak{q}}_i$.\footnote{This $N_i$ exists since $R$ is Noetherian and therefore ${\mathfrak{p}}_i$ is finitely generated.} Furthermore, for each $i=1,\dots, c$ let ${\mathfrak{m}}_i\subseteq R$ be a maximal ideal containing ${\mathfrak{p}}_i$. Then $d_i:=\dim_S(R/{\mathfrak{m}}_i)$ is finite for all $i$.
We claim that $\dim_k(\ell)$ is bounded by $2\cdot \sum_{i=1}^c d_i\cdot N_i$: First of all, $R\to \prod_{i=1}^c R/{\mathfrak{q}}_i$ is an injective $S$-algebra homomorphism and $R/{\mathfrak{q}}_i$ is irreducible with unique minimal prime ideal ${\mathfrak{p}}_i$.\\ Letting $\tilde{{\mathfrak{q}}_i}:={\mathfrak{q}}_i\cap (S\otimes_k \ell)$ and $\tilde{{\mathfrak{p}}_i}:={\mathfrak{p}}_i\cap (S\otimes_k \ell)=\sqrt{\tilde{{\mathfrak{q}}_i}}$, the algebra $(S\otimes_k \ell)/\tilde{{\mathfrak{q}}_i}$ embeds into $R/{\mathfrak{q}}_i$, and $S\otimes_k \ell \to \prod_{i=1}^c (S\otimes_k \ell)/\tilde{{\mathfrak{q}}_i}$ is injective. It therefore suffices to show that $\dim_S\left( (S\otimes_k \ell)/\tilde{{\mathfrak{q}}_i} \right)\leq 2d_iN_i$ holds for each $i$. In the following we therefore consider an arbitrary component and will omit the index $i$. Since $(S\otimes_k \ell)/\tilde{{\mathfrak{q}}}$ is a finite $S$-algebra, and $\tilde{{\mathfrak{p}}}$ is its unique minimal prime ideal, $(S\otimes_k \ell)/\tilde{{\mathfrak{q}}}$ is a local Artinian algebra with residue field $(S\otimes_k \ell)/\tilde{{\mathfrak{p}}}$. Since $(S\otimes_k \ell)/\tilde{{\mathfrak{p}}}$ is a field, the composition $$(S\otimes_k \ell)/\tilde{{\mathfrak{p}}} \hookrightarrow R/{\mathfrak{p}} \to R/{\mathfrak{m}}$$ is injective. Hence, $$\dim_S\left( (S\otimes_k \ell)/\tilde{{\mathfrak{p}}}\right)\leq \dim_S\left( R/{\mathfrak{m}}\right)=d.$$ It remains to show that $\dim_{(S\otimes_k \ell)/\tilde{{\mathfrak{p}}}}\left( (S\otimes_k \ell)/\tilde{{\mathfrak{q}}} \right)\leq 2N$. As a tensor product of fields and as $\ell/k$ is finite, $S\otimes_k \ell$ is a finite direct product of local Artinian algebras whose residue fields are finite extensions of $S$. The local algebra over some finite extension $S'$ of $S$ is given as $S'\otimes_{k'} \tilde{k}$ for a finite extension $k'$ of $k$ contained in $S'$ and a purely inseparable extension $\tilde{k}/k'$.
In particular, also the algebra $(S\otimes_k \ell)/\tilde{{\mathfrak{q}}}$ is of that form (as it is just isomorphic to one factor of $S\otimes_k \ell$). Hence, let $S'$, $k'$ and $\tilde{k}$ be such that $(S\otimes_k \ell)/\tilde{{\mathfrak{p}}}\cong S'$ and $(S\otimes_k \ell)/\tilde{{\mathfrak{q}}}\cong S'\otimes_{k'} \tilde{k}$. As $\tilde{k}$ is purely inseparable over $k'$, there are $x_1,\dots, x_t\in \tilde{k}$, $m_1,\dots, m_t\in {\mathbb N}$ and $a_1,\dots, a_t\in k'$ such that $$\tilde{k}=k'[x_1,\dots, x_t]/\left(x_1^{p^{m_1}}-a_1,\dots, x_t^{p^{m_t}}-a_t\right),$$ where $p$ denotes the characteristic of the fields. As $S'\otimes_{k'} \tilde{k}$ is local with residue field $S'$, there are also $s_1,\dots, s_t\in S'$ such that $s_j^{p^{m_j}}=a_j$ for all $j=1,\dots, t$, and $S'\otimes_{k'} \tilde{k}$ is given as $$S'\otimes_{k'} \tilde{k}\cong S'[x_1,\dots, x_t]/\left((x_1-s_1)^{p^{m_1}},\dots, (x_t-s_t)^{p^{m_t}}\right).$$ In particular, its nilradical (corresponding to $\tilde{{\mathfrak{p}}}$) is generated by $(x_1-s_1,\dots, x_t-s_t)$. Since $\tilde{{\mathfrak{p}}}^N\subseteq \tilde{{\mathfrak{q}}}$, and $(x_1-s_1)^{p^{m_1}-1}\cdots (x_t-s_t)^{p^{m_t}-1}\ne 0$, we obtain that $$N>\sum_{j=1}^t (p^{m_j}-1)\geq \sum_{j=1}^t \frac12 p^{m_j}= \frac12 \dim_{S'}(S'\otimes_{k'} \tilde{k}).$$ Therefore, we have shown that $\dim_{(S\otimes_k \ell)/\tilde{{\mathfrak{p}}}}\left( (S\otimes_k \ell)/\tilde{{\mathfrak{q}}} \right)<2N$. \end{proof} \section{Setup and basic properties}\label{sec:setup} In this section, we set up an abstract framework in which we can prove theorems on Picard-Vessiot extensions, as well as on their Galois groups. The theorems thus apply to all kinds of differential and difference Galois theories which match the basic setup given below. The setup therefore provides a uniform approach to the existing theories.
\medskip We consider the following setup: \begin{enumerate} \item[(C1)] ${\mathcal C}$ is a locally small abelian symmetric monoidal category with unit object $\1\in {\mathcal C}$. We assume that $\1$ is a simple object in ${\mathcal C}$. \item[(C2)] ${\mathcal C}$ is cocomplete, i.e.~${\mathcal C}$ is closed under small inductive limits. \item[(F1)] There is a scheme ${\mathcal X}$, and an additive tensor functor $\upsilon:{\mathcal C}\to \mathsf{Qcoh}({\mathcal X})$ from ${\mathcal C}$ to the category of quasi-coherent ${\mathcal O}_{\mathcal X}$-modules which is faithful, exact and preserves small inductive limits. (In particular, $\upsilon(\1)={\mathcal O}_{\mathcal X}$.) \item[(F2)] $M\in {\mathcal C}$ is dualizable whenever $\upsilon(M)$ is a finitely generated ${\mathcal O}_{\mathcal X}$-module. \end{enumerate} \begin{rem} \begin{enumerate} \item The presence of a faithful functor $\upsilon:{\mathcal C}\to \mathsf{Qcoh}({\mathcal X})$ as stated in (F1) already implies that all $\Mor_{{\mathcal C}}(M,N)$ are sets, i.e.~that ${\mathcal C}$ is locally small. Hence, we could have omitted this condition in (C1). However, in this section and Section \ref{sec:c-algebras}, we will not use conditions (F1) and (F2) and therefore need the condition ``locally small'' in (C1).
\item By an object $M\in {\mathcal C}$ being \textit{dualizable}, we mean that $M$ admits a (right) dual, i.e.~an object $M^\vee\in {\mathcal C}$ together with two morphisms ${\rm ev}_M:M\otimes M^\vee\to \1$ (\textit{evaluation}) and $\delta_M:\1\to M^\vee \otimes M$ (\textit{coevaluation}) such that the diagrams \[ \xymatrix@C+6pt{ M^\vee\cong \1\negotimes M^\vee \ar[r]^(.53){\delta_M\otimes {\rm id}_{M^\vee} } \ar[dr]_{{\rm id}_{M^\vee}}& M^\vee \negotimes M\negotimes M^\vee \ar[d]^{{\rm id}_{M^\vee}\otimes {\rm ev}_M} \\ & M^\vee \negotimes \1\cong M^\vee } \quad \text{and} \quad \xymatrix{ M\cong M \negotimes \1 \ar[r]^{{\rm id}_M\otimes \delta_M} \ar[rd]_{{\rm id}_M} & M\negotimes M^\vee \negotimes M \ar[d]^{{\rm ev}_M\otimes {\rm id}_M} \\ & \1 \negotimes M\cong M } \] commute. \end{enumerate} \end{rem} \medskip \begin{exmp} All the settings mentioned in the introduction are examples of the category ${\mathcal C}$. \end{exmp} \medskip In the remainder of this section, ${\mathcal C}$ will be a category satisfying properties (C1) and (C2). Let $k:=\End_{\mathcal C}(\1)$ denote the ring of endomorphisms of the unit object $\1$. Then by simplicity of $\1$, $k$ is a division ring, and even a field, as $\End_{\mathcal C}(\1)$ is always commutative. Let ${\bf Vect}_k$ denote the category of $k$-vector spaces, and $\mathsf{vect}_k$ the subcategory of finite dimensional $k$-vector spaces. There is a functor $\otimes_k:{\mathcal C}\times \mathsf{vect}_k\to {\mathcal C}$ such that $M\otimes_k k^n=M^n$ and in general $M\otimes_k V\cong M^{\dim(V)}$ (cf.~\cite[p.~21]{pd-jsm:tc} for details).
\\ As ${\mathcal C}$ is cocomplete, the functor $\otimes_k$ can be extended to $\otimes_k:{\mathcal C}\times {\bf Vect}_k\to {\mathcal C}$ via $$M\otimes_k V :=\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim.}}} M\otimes_k W.$$ This functor satisfies a functorial isomorphism of $k$-vector spaces $$\Mor_{\mathcal C}(N,M\otimes_k V) \cong \Mor_{\mathcal C}(N,M) \otimes_k V\text{ for all } M,N\in {\mathcal C}, V\in {\bf Vect}_k,$$ where the tensor product on the right hand side is the usual tensor product of $k$-vector spaces. Recall that $\Mor_{\mathcal C}(N,M)$ is a $k$-vector space via the action of $k=\End_{\mathcal C}(\1)$. The functor $\otimes_k$ induces a tensor functor $\iota:{\bf Vect}_k\to {\mathcal C}$ given by $\iota(V):=\1\otimes_k V$, and one obviously has $M\otimes_k V\cong M\otimes \iota(V)$ (the second tensor product taken in ${\mathcal C}$). The functor $\iota$ is faithful and exact by construction. Since $\iota$ is an exact tensor functor and all finite dimensional vector spaces have a dual (in the categorical sense), all objects $\iota(V)$ for $V\in \mathsf{vect}_k$ are dualizable in ${\mathcal C}$. There is also a functor $(-)^{\mathcal C}:=\Mor_{\mathcal C}(\1, -):{\mathcal C}\to {\bf Vect}_k$ from the category ${\mathcal C}$ to the category of all $k$-vector spaces. \begin{rem} As already mentioned in the introduction, in the differential case $M^{\mathcal C}=M^\partial$ is just the $k$-vector space of constants of the differential module $M$. In the difference case (with endomorphism $\sigma$), $M^{\mathcal C}$ equals the invariants $M^\sigma$ of the difference module $M$.\\ The functor $\iota$ corresponds to the construction of ``trivial'' differential (resp.~difference) modules by tensoring a $k$-vector space with the differential (resp.~difference) base field $F$. \end{rem} The following proposition gives some properties of the functors $\iota$ and $(-)^{\mathcal C}$ which are well known for differential resp.~difference modules.
\begin{prop}\label{prop:first-properties} Let ${\mathcal C}$ be a category satisfying (C1) and (C2), and let $\iota$ and $()^{\mathcal C}$ be as above. Then the following hold. \begin{enumerate} \item If $V\in {\bf Vect}_k$, then any subobject and any quotient of $\iota(V)$ is isomorphic to $\iota(W)$ for some $W\in {\bf Vect}_k$. \item If $V\in \mathsf{vect}_k$, then $\iota(V)\in {\mathcal C}$ has finite length and ${\rm length}(\iota(V))=\dim_k(V)$. \item If $M\in {\mathcal C}$ has finite length, then $M^{\mathcal C}\in \mathsf{vect}_k$ and $\dim_k(M^{\mathcal C})\leq {\rm length}(M)$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate}[leftmargin=*, widest*=3] \item First consider the case that $V\in {\bf Vect}_k$ is of finite dimension. We show the claim by induction on $\dim(V)$.\\ The case $\dim(V)=0$ is trivial. Let $V\in \mathsf{vect}_k$ and $N\in {\mathcal C}$ be a subobject of $\iota(V)$, and let $V'\subseteq V$ be a $1$-dimensional subspace. Then one has a split exact sequence of $k$-vector spaces $0\to V'\to V\to V/V' \to 0$ and therefore a split exact sequence $$0\to \iota(V')\to \iota(V)\to \iota(V/V')\to 0$$ in ${\mathcal C}$. Since $N$ is a subobject of $\iota(V)$, the pullback $N\cap \iota(V')$ is a subobject of $\iota(V')\cong \1$. As $\1$ is simple, $N\cap \iota(V')=0$ or $N\cap \iota(V')=\iota(V')$.\\ In the first case, $N\hookrightarrow \iota(V/V')$, and the claim follows by induction on $\dim(V)$.\\ In the second case, by induction $N/\iota(V')$ is isomorphic to $\iota(W)$ for some subspace $W\subseteq V/V'$. If $W'$ denotes the preimage of $W$ under the epimorphism $V\to V/V'$, one has a commutative diagram with exact rows \[ \xymatrix{ 0 \ar[r] & \iota(V') \ar[r] \ar[d]^{\cong}& N \ar[r] \ar[d] & \iota(W) \ar[r] \ar[d]^{\cong}& 0\\ 0 \ar[r] & \iota(V') \ar[r] & \iota(W') \ar[r] & \iota(W) \ar[r] & 0 }, \] and therefore $N\cong \iota(W')$.
If $V\in {\bf Vect}_k$ has infinite dimension, we recall that $\iota(V)=\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim.}}} \iota(W)$ and hence, for any subobject $N\subseteq \iota(V)$, one has $$N=\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim.}}} N\cap \iota(W).$$ From the special case of finite dimension, we obtain $N\cap \iota(W)=\iota(W')$ for some subspace $W'\subseteq W$, and therefore $$N=\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim.}}} \iota(W')=\iota \left(\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim.}}} W'\right).$$ Now let $V\in {\bf Vect}_k$ be arbitrary, and let $N$ be a quotient of $\iota(V)$. Then by the previous, $\Ker(\iota(V)\to N)$ is of the form $\iota(V')$ for some $V'\subseteq V$, and hence $N\cong \iota(V)/\iota(V')\cong \iota(V/V')$, as $\iota$ is exact. \item By part (i), every sequence of subobjects $0=N_0\subsetneq N_1\subsetneq \dots \subsetneq \iota(V)$ is induced via $\iota$ by a sequence of subspaces $0=W_0\subsetneq W_1\subsetneq \dots \subsetneq V$. Hence, ${\rm length}(\iota(V))=\dim_k(V)$. \item We use induction on the length of $M$. If $M$ has length $1$, then $M$ is a simple object. Since $\1$ is also simple, every morphism in $M^{\mathcal C}= \Mor_{\mathcal C}(\1, M)$ is either $0$ or an isomorphism. In particular, $k=\End_{\mathcal C}(\1)$ acts transitively on the non-zero elements of $\Mor_{\mathcal C}(\1, M)$, which shows that $\dim_k(\Mor_{\mathcal C}(\1, M))$ is $0$ or $1$. For the general case, take a subobject $0\ne N\ne M$ of $M$. Applying the functor $()^{\mathcal C}=\Mor_{\mathcal C}(\1, -)$ to the exact sequence $0\to N \to M \to M/N\to 0$ leads to an exact sequence $$ 0 \to N^{\mathcal C} \to M^{\mathcal C} \to (M/N)^{\mathcal C},$$ as the functor $ \Mor_{\mathcal C}(X,-)$ is always left-exact.\\ Hence, $\dim_k(M^{\mathcal C})\leq \dim_k(N^{\mathcal C})+\dim_k((M/N)^{\mathcal C})$.
Since $N$ and $M/N$ have smaller length than $M$, we obtain the claim by induction using ${\rm length}(M)={\rm length}(N)+{\rm length}(M/N)$. \end{enumerate} \end{proof} \begin{prop}\label{prop:adjointness} Let ${\mathcal C}$ be a category satisfying (C1) and (C2) and let $\iota$ and $()^{\mathcal C}$ be as above. Then the following hold. \begin{enumerate} \item The functor $\iota$ is left adjoint to the functor $()^{\mathcal C}$, i.e.~for all $V\in {\bf Vect}_k$, $M\in {\mathcal C}$, there are isomorphisms of $k$-vector spaces $\Mor_{\mathcal C}(\iota(V),M) \cong \Hom_k(V,M^{\mathcal C})$ functorial in $V$ and $M$. \item For every $V\in {\bf Vect}_k$, the homomorphism $\eta_V:V\to (\iota(V))^{\mathcal C}$ which is adjoint to ${\rm id}_{\iota(V)}$ is an isomorphism. \item For every $M\in {\mathcal C}$, the morphism $\varepsilon_M:\1 \otimes_k \Mor_{\mathcal C}(\1, M)= \iota(M^{\mathcal C})\to M$ which is adjoint to ${\rm id}_{M^{\mathcal C}}$ is a monomorphism. \end{enumerate} \end{prop} \begin{rem}\label{rem:iota-full} \begin{enumerate} \item Whereas in the differential resp.~difference settings, parts (i) and (ii) are easily seen, part (iii) amounts to saying that any constant (resp.~invariant) elements $v_1,\dots ,v_n\in M^{\mathcal C}$ of $M$ which are $k$-linearly independent are also linearly independent over the differential (resp.~difference) field $F$. This is proven in each setting separately. However, Amano and Masuoka provide an abstract proof (which is given in \cite[Prop.~3.1.1]{ka:ridepvt}) which relies on the Freyd embedding theorem. \item The collection of homomorphisms $(\eta_V)_{V\in {\bf Vect}_k}$ is just the natural transformation $\eta:{\rm id}_{{\bf Vect}_k}\to (-)^{\mathcal C}\circ \iota$ (unit of the adjunction) whereas the morphisms $\varepsilon_M$ form the natural transformation $\varepsilon:\iota\circ (-)^{\mathcal C}\to {\rm id}_{\mathcal C}$ (counit of the adjunction).
By the general theory on adjoint functors, for all $V,W\in {\bf Vect}_k$, the maps $\Hom_k(V,W)\to \Mor_{{\mathcal C}}(\iota(V),\iota(W))$ induced by applying $\iota$ are just the compositions $$\hspace*{15mm} \xymatrix@1@C+20pt{ \Hom_{k}(V,W) \ar[r]^(0.45){\eta_W\circ (-)} & \Hom_{k}(V,\iota(W)^{{\mathcal C}}) &\Mor_{{\mathcal C}}(\iota(V),\iota(W)) \ar[l]^{\simeq}_{\rm adjunction} }$$ (cf.~\cite[p.~81, eq.~(3)]{sml:cwm} and the definition of $\eta$). This implies that $\eta_W$ is a monomorphism for all $W\in {\bf Vect}_k$ if and only if $\Hom_k(V,W)\to \Mor_{{\mathcal C}}(\iota(V),\iota(W))$ is injective for all $V,W\in {\bf Vect}_k$, i.e.~if $\iota$ is a faithful functor. Furthermore, $\eta_W$ is a split epimorphism for all $W\in {\bf Vect}_k$ if and only if $\Hom_k(V,W)\to \Mor_{{\mathcal C}}(\iota(V),\iota(W))$ is surjective for all $V,W\in {\bf Vect}_k$, if and only if $\iota$ is a full functor. In particular, $\eta_W$ being an isomorphism for all $W\in {\bf Vect}_k$ is equivalent to $\iota$ being a fully faithful functor. \end{enumerate} \end{rem} \begin{proof}[Proof of Prop.~\ref{prop:adjointness}] \begin{enumerate}[leftmargin=*, widest*=3] \item For $V\in \mathsf{vect}_k$ and $M\in {\mathcal C}$ we have natural isomorphisms \begin{eqnarray*} \Mor_{\mathcal C}(\iota(V),M) &\cong & \Mor_{\mathcal C}(\1, M\otimes \iota(V)^\vee) \cong \Mor_{\mathcal C}(\1, M\otimes_k V^\vee)\\ &\cong & \Mor_{\mathcal C}(\1,M)\otimes_k V^\vee\cong \Hom_k(V, \Mor_{\mathcal C}(\1,M) ) \\ &= & \Hom_k(V, M^{\mathcal C}). \end{eqnarray*} If $V$ is of infinite dimension, the statement is obtained using that $\Mor_{\mathcal C}$ and $\Hom_k$ commute with inductive limits, i.e.
\begin{eqnarray*} \Mor_{\mathcal C}(\iota(V),M) &=& \Mor_{\mathcal C}(\varinjlim\limits_{\substack{W\subset V\\ \text{fin.dim}}} \iota(W), M) = \varprojlim\limits_{\substack{W\subset V\\ \text{fin.dim}}} \Mor_{\mathcal C}(\iota(W),M)\\ &\cong & \varprojlim\limits_{\substack{W\subset V\\ \text{fin.dim}}} \Hom_k(W,M^{\mathcal C}) = \Hom_k(V,M^{\mathcal C}). \end{eqnarray*} \item We have $(\iota(V))^{\mathcal C}=\Mor_{\mathcal C}(\1,\1\otimes_k V)\cong \Mor_{\mathcal C}(\1,\1)\otimes_k V\cong k\otimes_k V=V$, and the morphism ${\rm id}_{\iota(V)}$ corresponds to ${\rm id}_V:V\xrightarrow{\eta_V} (\iota(V))^{\mathcal C}\cong V$ via all these natural identifications. \item Let $M\in {\mathcal C}$ and $N:=\Ker(\varepsilon_M)\subseteq \iota(M^{\mathcal C})$. By Prop.~\ref{prop:first-properties}(i), there is a subspace $W$ of $M^{\mathcal C}$ such that $N=\iota(W)$. Hence, we have an exact sequence of morphisms $$0 \to \iota(W)\to \iota(M^{\mathcal C}) \xrightarrow{\varepsilon_M} M.$$ Since $()^{\mathcal C}$ is left exact, this leads to the exact sequence $$0\to (\iota(W))^{\mathcal C}\to (\iota(M^{\mathcal C}))^{\mathcal C} \xrightarrow{(\varepsilon_M)^{\mathcal C}}\ M^{\mathcal C}.$$ But $\eta_V:V\to (\iota(V))^{\mathcal C}$ is an isomorphism for all $V$ by part (ii). So we obtain an exact sequence $$0\to W\to M^{\mathcal C} \xrightarrow{(\varepsilon_M)^{\mathcal C} \circ \eta_{M^{\mathcal C}}} M^{\mathcal C},$$ and the composite $(\varepsilon_M)^{\mathcal C} \circ \eta_{M^{\mathcal C}}$ is the identity on $M^{\mathcal C}$ by general theory on adjoint functors (cf.~\cite[Ch.~IV, Thm.~1]{sml:cwm}). Hence, $W=0$. \end{enumerate} \end{proof} \section{${\mathcal C}$-algebras and base change}\label{sec:c-algebras} We recall some notation which is already present in \cite[Ch.~17 \& 18]{sml:ca}, and refer to loc.~cit.~for more details.
The reader should be aware that a ``tensored category'' in \cite{sml:ca} is the same as an ``abelian symmetric monoidal category'' here.\\ A {\bf commutative algebra in ${\mathcal C}$} (or a {\bf ${\mathcal C}$-algebra} for short) is an object $R\in {\mathcal C}$ together with two morphisms $u_R:\1\to R$ and $\mu_R:R\otimes R\to R$ satisfying several commuting diagrams corresponding to associativity, commutativity and the unit. For instance, $$\mu_R\circ (u_R\otimes {\rm id}_R)={\rm id}_R=\mu_R\circ ({\rm id}_R\otimes u_R)$$ says that $u_R$ is a unit for the multiplication $\mu_R$ (cf.~\cite[Ch.~17]{sml:ca}; although the term ``${\mathcal C}$-algebra'' in \cite{sml:ca} does not include commutativity). For a ${\mathcal C}$-algebra $R$ we define ${\mathcal C}_R$ to be the category of $R$-modules in ${\mathcal C}$, i.e.~the category of pairs $(M,\mu_M)$ where $M\in {\mathcal C}$ and $\mu_M:R\otimes M\to M$ is a morphism in ${\mathcal C}$ satisfying the usual commuting diagrams for turning $M$ into an $R$-module (cf.~\cite[Ch.~18]{sml:ca}).\footnote{Most of the time, we will omit $\mu_M$ from our notation, and just write $M\in {\mathcal C}_R$.} The morphisms in ${\mathcal C}_R$ are morphisms in ${\mathcal C}$ which commute with the $R$-action. The category ${\mathcal C}_R$ is also an abelian symmetric monoidal category with tensor product $\otimes_R$ defined as $$M\otimes_R N:={\rm Coker}((\mu_M\circ \tau)\otimes {\rm id}_N- {\rm id}_M\otimes \mu_N:M\otimes R\otimes N\to M\otimes N),$$ where $\tau:M\otimes R\to R\otimes M$ is the twist morphism (see~\cite[Prop.~18.3]{sml:ca}). A {\bf ${\mathcal C}$-ideal} $I$ of a ${\mathcal C}$-algebra $R$ is a subobject of $R$ in the category ${\mathcal C}_R$, and $R$ is called a {\bf simple} ${\mathcal C}$-algebra, if $0$ and $R$ are the only ${\mathcal C}$-ideals of $R$, i.e.~if $R$ is a simple object in ${\mathcal C}_R$.
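For orientation, in the classical differential setting (${\mathcal C}$ the category of differential modules over a differential field $(F,\partial)$) these notions specialise as follows; this dictionary is an illustration only and not part of the abstract theory.

```latex
% Dictionary for the differential setting (illustration only).
\begin{align*}
  \text{${\mathcal C}$-algebra } R
    &\;\longleftrightarrow\; \text{commutative differential $F$-algebra } (R,\partial_R),\\
  \text{${\mathcal C}$-ideal } I\subseteq R
    &\;\longleftrightarrow\; \text{differential ideal, i.e.~an ideal with } \partial_R(I)\subseteq I,\\
  R \text{ simple ${\mathcal C}$-algebra}
    &\;\longleftrightarrow\; \text{$R$ has no differential ideals other than $0$ and $R$},\\
  {\mathcal C}_R
    &\;\longleftrightarrow\; \text{$R$-modules with a derivation compatible with $\partial_R$}.
\end{align*}
```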
\begin{defn} For a ${\mathcal C}$-algebra $R$, the additive right-exact functor $()_R:({\mathcal C},\otimes)\to ({\mathcal C}_R,\otimes_R), M\mapsto M_R:=(R\otimes M, \mu_R\otimes {\rm id}_M)$ is called the \emph{base change functor}. It is even a tensor functor, and it is left adjoint to the forgetful functor ${\mathcal C}_R\to {\mathcal C}$ (see~\cite[Thm.~18.2]{sml:ca}). \\ We can also base change the functors $\iota$ and $()^{\mathcal C}$. In more detail, keeping in mind that $\End_{{\mathcal C}_R}(R)=\Mor_{\mathcal C}(\1,R)=R^{\mathcal C}$: $$\iota_R:\mod_{R^{\mathcal C}}\to {\mathcal C}_R, V\mapsto R\otimes_{\iota(R^{\mathcal C})} \iota(V)$$ and $$()^{{\mathcal C}_R}:{\mathcal C}_R\to \mod_{R^{\mathcal C}}, M\mapsto \Mor_{{\mathcal C}_R}(R,M)=\Mor_{{\mathcal C}}(\1,M)=M^{\mathcal C}.$$ \medskip A special case arises if $R=\iota(A)$ for some commutative $k$-algebra $A$. In this case, $\iota_R$ is ``the same'' as $\iota$. This case corresponds to an extension by constants in the theory of differential or difference modules. \end{defn} \begin{prop} The functor $\iota_R$ is left adjoint to the functor $()^{{\mathcal C}_R}$. \end{prop} \begin{proof} Let $V\in \mod_{R^{\mathcal C}}$ and $M\in {\mathcal C}_R$, then $$\Mor_{{\mathcal C}_R}(\iota_R(V),M)=\Mor_{{\mathcal C}_R}(R\otimes_{\iota(R^{\mathcal C})} \iota(V),M)=\Mor_{{\mathcal C}_{\iota(R^{\mathcal C})}}(\iota(V),M)$$ is the subset of $\Mor_{{\mathcal C}}(\iota(V),M)$ given by all $f\in \Mor_{{\mathcal C}}(\iota(V),M)$ such that the diagram $$\xymatrix{ \iota(R^{\mathcal C})\otimes \iota(V) \ar[r]^{{\rm id}\otimes f} \ar[d]^{\iota(\mu_V)} & \iota(R^{\mathcal C})\otimes M \ar[d]^{\mu_M} \\ \iota(V) \ar[r]^{f} & M }$$ commutes.
On the other hand, $\Hom_{R^{\mathcal C}}(V,M^{{\mathcal C}_R})=\Hom_{R^{\mathcal C}}(V,M^{\mathcal C})$ is the subset of $\Hom_{k}(V,M^{\mathcal C})$ given by all $g\in \Hom_{k}(V,M^{\mathcal C})$ such that the diagram $$\xymatrix{ R^{\mathcal C}\otimes_k V \ar[r]^{{\rm id}\otimes g} \ar[d]^{\mu_V} & R^{\mathcal C}\otimes_k M^{\mathcal C} \ar[d]^{(\mu_M)^{\mathcal C}} \\ V \ar[r]^{g} & M^{\mathcal C} }$$ commutes. Assume that $f$ and $g$ are adjoint morphisms (i.e.~correspond to each other via the bijection $\Mor_{{\mathcal C}}(\iota(V),M)\cong \Hom_{k}(V,M^{\mathcal C})$ of Prop.~\ref{prop:adjointness}(i)), then the commutativity of the first diagram is equivalent to the commutativity of the second, since the bijection of the hom-sets is natural. \end{proof} \begin{lemma}\label{lemma:ideal-bijection::abstract} Let $A$ be a commutative $k$-algebra. Then $\iota_{\iota(A)}$ and $()^{{\mathcal C}_{\iota(A)}}$ define a bijection between the ideals of $A$ and the ${\mathcal C}$-ideals of $\iota(A)$. \end{lemma} \begin{proof} By definition $\iota_{\iota(A)}(I)=\iota(I)$ for any $I\in \mod_A$. Furthermore, by Prop.~\ref{prop:first-properties}(i), $\iota$ induces a bijection between the $k$-subvector spaces of $A$ and the subobjects of $\iota(A)$ in ${\mathcal C}$. The condition on $I$ being an ideal of $A$ (resp.~of $\iota(I)$ being an ideal of $\iota(A)$) is equivalent to the condition that the composite $A\otimes_k I\xrightarrow{\mu_A} A\to A/I$ (resp.~the composite $\iota(A)\otimes \iota(I)\xrightarrow{\mu_{\iota(A)}} \iota(A)\to \iota(A)/\iota(I)$) is the zero map. Hence, the condition for $\iota(I)$ is obtained from the one for $I$ by applying $\iota$, and using that $\iota$ is an exact tensor functor. Since $\iota$ is also faithful, these two conditions are indeed equivalent. \end{proof} In the special case that $A$ is a field, one obtains the following corollary.
\begin{cor}\label{cor:still-simple::abstract} Let $\ell$ be a field extension of $k$. Then $\iota(\ell)$ is a simple ${\mathcal C}$-algebra. \end{cor} \begin{rem} As $\iota_R$ and $()^{{\mathcal C}_R}$ are adjoint functors, there are again the unit and the counit of the adjunction. By abuse of notation, we will again denote the unit by $\eta$ and the counit by $\varepsilon$. There might be an ambiguity as to which morphism is meant by $\varepsilon_M$ if $(M,\mu_M)$ is an object in ${\mathcal C}_R$. However, when $M$ is explicitly given as an object of ${\mathcal C}_R$, then $\varepsilon_M:\iota_R(M^{{\mathcal C}_R})\to M$ is meant. This is the case, for example, if $M=N_R$ is the base change of an object $N\in {\mathcal C}$.\\ In cases where the right meaning of $\varepsilon_M$ would not be clear, we always give the source and the target of $\varepsilon_M$. \end{rem} \begin{prop}\label{prop:on-iota-r} Assume that $\iota_R$ is exact and faithful\footnote{For differential rings this means that the ring $R$ is faithfully flat over $\iota(R^{\mathcal C})$.} and that any subobject of $R^n$ is of the form $\iota_R(W)$ for some $W\in {\bf Mod}_{R^{\mathcal C}}$. Then the following hold. \begin{enumerate} \item For every $V\in {\bf Mod}_{R^{\mathcal C}}$, every subobject of $\iota_R(V)$ is isomorphic to $\iota_R(W)$ for some $W\subseteq V$. \item For every $V\in {\bf Mod}_{R^{\mathcal C}}$, the morphism $\eta_V:V\to (\iota_{R}(V))^{{\mathcal C}_R}$ is an isomorphism. \item For every $M\in {\mathcal C}_R$, the morphism $\varepsilon_M:\iota_R(M^{{\mathcal C}_R})\to M$ is a monomorphism. \end{enumerate} \end{prop} The most important cases where the proposition applies are on the one hand the case $R=\iota(A)$ for some commutative $k$-algebra $A$ (in which case $\iota_R=\iota$), and on the other hand the case that $R$ is a simple ${\mathcal C}$-algebra.
\begin{proof} \begin{enumerate}[leftmargin=*, widest*=3] \setcounter{enumi}{1} \item We show that $\eta_V:V\to (\iota_{R}(V))^{{\mathcal C}_R}$ is an isomorphism for all $V\in {\bf Mod}_{R^{\mathcal C}}$. As $\iota_R$ is faithful by assumption, all $\eta_V$ are monomorphisms (cf.~Rem.~\ref{rem:iota-full}). For showing that $\eta_V$ is an epimorphism, it is enough to show that the natural map $$R^{\mathcal C}\otimes_k V=(R\otimes \iota(V))^{{\mathcal C}_R}\to (\iota_R(V))^{{\mathcal C}_R}$$ is an epimorphism, where on the left hand side, $V$ is considered just as a $k$-vector space. Saying that this map is epimorphic is equivalent to saying that any morphism $g:R\to \iota_R(V)$ in ${\mathcal C}_R$ can be lifted to a morphism $f:R\to R\otimes \iota(V)$ in ${\mathcal C}_R$. So let $g:R\to \iota_R(V)$ be a morphism in ${\mathcal C}_R$, and let $P$ be the pullback of the diagram $$\xymatrix{ P \ar@{->>}[r]^{{\rm pr}_1} \ar[d]^{{\rm pr}_2} & R\ar[d]^{g} \\ R\otimes \iota(V) \ar@{->>}[r]^(.35){p} & \iota_R(V) }.$$ Then $P$ is a subobject of $R\oplus (R\otimes \iota(V))\cong R^{1+\dim_k(V)}$, and hence by assumption, $P=\iota_R(W)$ for some $W\in \mod_{R^{\mathcal C}}$. By adjointness, ${\rm pr}_1$ corresponds to some $R^{\mathcal C}$-homomorphism $q:W\to R^{{\mathcal C}_R}=R^{\mathcal C}$, i.e.~${\rm pr}_1=\varepsilon_R\circ \iota_R(q)$. Since $\varepsilon_R:R=\iota_R\left(R^{{\mathcal C}_R}\right)\to R$ is the identity, and ${\rm pr}_1$ is an epimorphism, faithfulness of $\iota_R$ implies that also $q$ is an epimorphism. Therefore, as $R^{\mathcal C}$ is free as a module over itself, there is an $R^{\mathcal C}$-homomorphism $s:R^{\mathcal C}\to W$ such that $q\circ s={\rm id}_{R^{\mathcal C}}$. Let $f$ be the morphism $f:={\rm pr}_2\circ \iota_R(s):R\to R\otimes \iota(V)$, then $$p\circ f=p\circ {\rm pr}_2\circ \iota_R(s) = g\circ {\rm pr}_1\circ \iota_R(s)=g\circ \iota_R(q\circ s)=g.$$ Hence, $f$ is a lift of $g$.
\setcounter{enumi}{0} \item We show that any subobject of $\iota_R(V)$ is of the form $\iota_R(W)$ for some submodule $W$ of $V$. The case of a quotient of $\iota_R(V)$ then follows in the same manner as in Prop.~\ref{prop:first-properties}. Let $N\subseteq \iota_R(V)$ be a subobject in ${\mathcal C}_R$. Then the pullback of $N$ along $p:R\otimes \iota(V)\to \iota_R(V)$ is a subobject of $R\otimes \iota(V)$, hence by assumption of the form $\iota_R(\tilde{W})$ for some $\tilde{W}\subseteq (R^{\mathcal C})^{\dim_k(V)}$. Furthermore, as $\eta_V$ is an isomorphism, the restriction $p|_{\iota_R(\tilde{W})}: \iota_R(\tilde{W})\to \iota_R(V)$ is induced by some homomorphism $f:\tilde{W}\to V$ (cf.~Remark \ref{rem:iota-full}). By exactness of $\iota_R$, we finally obtain $N=\im(\iota_R(f))=\iota_R(\im(f))=\iota_R(W)$ for $W:= \im(f)$. \setcounter{enumi}{2} \item The proof that $\varepsilon_M:\iota_R(M^{{\mathcal C}_R})\to M$ is a monomorphism is the same as in Prop.~\ref{prop:adjointness}. \end{enumerate} \end{proof} \begin{lemma}\label{lemma:when-eps-is-iso} Let $R$ be a simple ${\mathcal C}$-algebra. Then for $N\in {\mathcal C}_R$, the morphism $\varepsilon_N$ is an isomorphism if and only if $N$ is isomorphic to $\iota_R(V)$ for some $V\in {\bf Mod}_{R^{\mathcal C}}$. \end{lemma} \begin{proof} If $\varepsilon_N$ is an isomorphism, then $N\cong \iota_R(V)$ for $V:=N^{{\mathcal C}_R}$. On the other hand, let $N\cong \iota_R(V)$ for some $V\in {\bf Mod}_{R^{\mathcal C}}$. Since $\varepsilon_{\iota_R(V)}\circ \iota_R(\eta_V)={\rm id}_{\iota_R(V)}$ (cf.~\cite[Ch.~IV, Thm.~1]{sml:cwm}) and $\eta_V$ is an isomorphism, $\varepsilon_{\iota_R(V)}$ is an isomorphism. Hence, $\varepsilon_N$ is an isomorphism. \end{proof} \begin{prop}\label{prop:subcat-of-trivial-modules} Let $R$ be a simple ${\mathcal C}$-algebra.
Then the full subcategory of ${\mathcal C}_R$ consisting of all $N\in {\mathcal C}_R$ such that $\varepsilon_N$ is an isomorphism is a monoidal subcategory of ${\mathcal C}_R$ and is closed under taking direct sums, subquotients, small inductive limits, and duals of dualizable objects in ${\mathcal C}_R$. \end{prop} \begin{proof} Using the previous lemma, this follows directly from Prop.~\ref{prop:on-iota-r}(i), and the fact that $\iota_R$ is an additive exact tensor functor. \end{proof} \section{Solution rings and Picard-Vessiot rings}\label{sec:solution-rings} From now on we assume that ${\mathcal C}$ satisfies all conditions (C1), (C2), (F1) and (F2). \begin{lemma}\label{lemma:dualizables-are-projective-of-finite-rank} Let $M\in {\mathcal C}$ be dualizable. Then $\upsilon(M)$ is a finitely generated locally free ${\mathcal O}_{{\mathcal X}}$-module of constant rank. \end{lemma} \begin{proof} If $M\in {\mathcal C}$ is dualizable, then $\upsilon(M)$ is dualizable in $\mathsf{Qcoh}({\mathcal X})$, since $\upsilon$ is a tensor functor, and tensor functors map dualizable objects to dualizable objects (and their duals to the duals of the images). By \cite[Prop.~2.6]{pd:ct}, dualizable objects in $\mathsf{Qcoh}({\mathcal X})$ are exactly the finitely generated locally free ${\mathcal O}_{\mathcal X}$-modules. Hence, $\upsilon(M)$ is finitely generated and locally free whenever $M$ is dualizable. To see that the rank is constant, let $d\in{\mathbb N}$ be the maximal local rank of $\upsilon(M)$, and consider the $d$-th exterior power $\Lambda:=\Lambda^d(M)\in {\mathcal C}$ which is non-zero by the choice of $d$. Hence, the evaluation morphism ${\rm ev}_{\Lambda}:\Lambda\otimes \Lambda^\vee\to \1$ is non-zero. Since $\1$ is simple, and the image of $ {\rm ev}_{\Lambda}$ is a subobject of $\1$, the morphism ${\rm ev}_{\Lambda}$ is indeed an epimorphism. 
Hence the evaluation $${\rm ev}_{\upsilon(\Lambda)}=\upsilon( {\rm ev}_{\Lambda}):\upsilon(\Lambda)\otimes_{{\mathcal O}_{\mathcal X}} \upsilon(\Lambda)^\vee\to {\mathcal O}_{\mathcal X}$$ is surjective, which implies that $\upsilon(\Lambda)\otimes_{{\mathcal O}_{\mathcal X}} {\mathcal O}_{{\mathcal X},x}\ne 0$ for any point $x$ of ${\mathcal X}$. But this means that any local rank of $\upsilon(M)$ is at least $d$, i.e.~$\upsilon(M)$ has constant rank $d$. \end{proof} \begin{rem}\label{rem:dualizables-are-projective} In view of the previous lemma, condition (F2) implies that if $\upsilon(M)$ is finitely generated for some $M\in{\mathcal C}$, then $\upsilon(M)$ is even locally free and of constant rank. This also implies the following:\\ If $M$ is dualizable, then $\upsilon(M)$ is finitely generated and locally free. Further, for every epimorphic image $N$ of $M$, the ${{\mathcal O}_{\mathcal X}}$-module $\upsilon(N)$ is also finitely generated and hence locally free. But then for any subobject $N'\subseteq M$ the sequence $0\to \upsilon(N')\to \upsilon(M)\to \upsilon(M/N')\to 0$ is split exact, since $\upsilon(M/N')$, being an epimorphic image, is locally free. Therefore $\upsilon(N')$ is also a quotient of $\upsilon(M)$, in particular $\upsilon(N')$ is finitely generated and locally free.\\ So given a dualizable $M\in {\mathcal C}$, all subquotients of finite direct sums of objects $M^{\otimes n}\otimes (M^\vee)^{\otimes m}$ ($n,m\in {\mathbb N}$) are dualizable. Hence, the strictly full tensor subcategory of ${\mathcal C}$ generated by $M$ and $M^\vee$ -- which is exactly the full subcategory of ${\mathcal C}$ consisting of all objects isomorphic to subquotients of finite direct sums of objects $M^{\otimes n}\otimes (M^\vee)^{\otimes m}$ ($n,m\in {\mathbb N}$) -- is a rigid abelian tensor category and will be denoted by $\tenscat{M}$.
Furthermore by definition, $\upsilon$ is a fibre functor and therefore $\tenscat{M}$ is even a Tannakian category (cf.~\cite[Section 2.8]{pd:ct}). By \cite[Cor.~6.20]{pd:ct}, there exists a finite extension $\tilde{k}$ of $k$ and a fibre functor $\omega:\tenscat{M}\to \mathsf{vect}_{\tilde{k}}$. In view of Thm.~\ref{thm:pv-rings-equiv-to-fibre-functors} in Section \ref{sec:pv-rings-and-fibre-functors}, this implies that there is a Picard-Vessiot ring for $M$ over $\tilde{k}$. We will see later (cf.~Cor.~\ref{cor:properties-of-simple-minimal-solution-rings}) that for every simple minimal solution ring $R$, the field $R^{\mathcal C}=\End_{{\mathcal C}_R}(R)$ is a finite field extension of $k$. \end{rem} \begin{defn} Let $M\in {\mathcal C}$.\\ A \emph{solution ring} for $M$ is a ${\mathcal C}$-algebra $R$ such that the morphism $$\varepsilon_{M_R}: \iota_R\left( (M_R)^{{\mathcal C}_R} \right) \to M_R=R\otimes M$$ is an isomorphism. A \emph{Picard-Vessiot ring} for $M$ is a minimal solution ring $R$ which is a simple ${\mathcal C}$-algebra, and satisfies $ R^{\mathcal C}:=\End_{{\mathcal C}_R}(R)=k$. Here, \emph{minimal} means that for any solution ring $\tilde{R}\in {\mathcal C}$ that admits a monomorphism of ${\mathcal C}$-algebras to $R$, this monomorphism is indeed an isomorphism. \end{defn} \begin{rem} Comparing with the differential setting, $(M_R)^{{\mathcal C}_R}$ is just the so-called solution space $(R\otimes_F M)^\partial$ of $M$ over $R$, and $\varepsilon_{M_R}$ is the canonical homomorphism $R\otimes_{R^\partial} (R\otimes_F M)^\partial \to R\otimes_F M$.\\ When $R$ is a simple ${\mathcal C}$-algebra (i.e.~in the differential setting a simple differential ring), then by Prop.~\ref{prop:on-iota-r}(iii), $\varepsilon_{M_R}$ is always a monomorphism.
Hence, for a simple ${\mathcal C}$-algebra $R$, the condition for being a solution ring means that the solution space is as large as possible, or in other words that $R\otimes M$ has a basis of constant elements, i.e.~is a trivial differential module over $R$. \end{rem} \begin{prop}\label{prop:image-of-solution-rings} Let $R$ be a solution ring for some dualizable $M\in {\mathcal C}$, and let $f:R\to R'$ be an epimorphism of ${\mathcal C}$-algebras. Assume either that $R'$ is a simple ${\mathcal C}$-algebra or that $(R\otimes M)^{\mathcal C}$ is a free $R^{\mathcal C}$-module. Then $R'$ is a solution ring for $M$ as well. \end{prop} \begin{rem} If $(R\otimes M)^{\mathcal C}$ is a free $R^{\mathcal C}$-module, then it is automatically free of finite rank, and the rank is the same as the global rank of $\upsilon(M)$ as ${\mathcal O}_{\mathcal X}$-module which exists by Lem.~\ref{lemma:dualizables-are-projective-of-finite-rank}. \end{rem} \begin{proof}[Proof of Prop.~\ref{prop:image-of-solution-rings}] As $f:R\to R'$ is an epimorphism and $M$ is dualizable, $f\otimes {\rm id}_M:R\otimes M\to R'\otimes M$ is an epimorphism, too. As the diagram $$\xymatrix@C+20pt{ \iota_R\left( (R\otimes M)^{\mathcal C}\right) \ar[r]^{\varepsilon_{M_R}} \ar[d] & M_R=R\otimes M \ar@{->>}[d]^{f\otimes {\rm id}_M} \\ \iota_{R'}\left( (R'\otimes M)^{\mathcal C}\right) \ar[r]^{\varepsilon_{M_{R'}}} & M_{R'}=R'\otimes M }$$ commutes and $\varepsilon_{M_R}$ is an isomorphism by assumption on $R$, the morphism $\varepsilon_{M_{R'}}$ is an epimorphism. If $R'$ is simple, then by Prop.~\ref{prop:on-iota-r}(iii) the morphism $\varepsilon_{M_{R'}}$ is a monomorphism, hence an isomorphism. Therefore, $R'$ is a solution ring. Assume now that $(R\otimes M)^{\mathcal C}$ is a free $R^{\mathcal C}$-module of rank $n$. Then $\iota_R\left((R\otimes M)^{\mathcal C}\right)\cong \iota_R\left((R^{\mathcal C})^n\right)=R^n$. Composing with $\varepsilon_{M_R}$ leads to an isomorphism $R^n\xrightarrow{\cong}R\otimes M$.
We therefore obtain an isomorphism $\alpha:(R')^n\to R'\otimes M$ by tensoring with $R'$. Applying the natural transformation $\varepsilon$ to this isomorphism, we get a commutative square $$\xymatrix@C+20pt{ R'^n=\iota_{R'}\bigl( (R'^n)^{\mathcal C}\bigr) \ar[r]^{\iota_{R'}(\alpha^{\mathcal C})}_{\cong} \ar[d]^{\varepsilon_{R'^n}}_{\cong} & \iota_{R'}\bigl( (R'\negotimes M)^{\mathcal C}\bigr) \ar[d]^{\varepsilon_{M_{R'}}} \\ R'^n \ar[r]^{\alpha}_{\cong} & R'\negotimes M, }$$ which shows that $\varepsilon_{M_{R'}}$ is an isomorphism, too. \end{proof} \begin{thm}\label{thm:exists-sol-ring} Let $M\in {\mathcal C}$ be dualizable. Then there exists a non-zero solution ring for~$M$. \end{thm} \begin{proof} We show the theorem by explicitly constructing a solution ring. This construction is motivated by the Tannakian point of view in \cite{pd-jsm:tc} and by Section 3.4 in \cite{ya:dnctgdd}.\\ Let $n:={\rm rank}(\upsilon(M))$ be the global rank of the ${\mathcal O}_{\mathcal X}$-module $\upsilon(M)$ which exists by Lemma \ref{lemma:dualizables-are-projective-of-finite-rank}. We then define $U$ to be the quotient of $\Sym\Bigl( \left(M\otimes (\1^n)^\vee\right) \oplus \left(\1^n\otimes M^\vee\right) \Bigr)$ by the ideal generated by the image of the morphism \begin{eqnarray*} (-{\rm ev}, {\rm id}_{M}\otimes \delta_{\1^n}\otimes {\rm id}_{M^\vee}):M\otimes M^\vee &\to & \1\oplus \left( M\otimes (\1^n)^\vee \otimes \1^n \otimes M^\vee\right)\\ &\subset& \Sym\Bigl( \left(M\otimes (\1^n)^\vee\right) \oplus \left(\1^n\otimes M^\vee\right) \Bigr) . \end{eqnarray*} First we show that $U\ne 0$ by showing $\upsilon(U)\ne 0$.
By exactness of $\upsilon$, the ring $\upsilon(U)$ is the quotient of $\Sym\left( (\upsilon(M) \otimes_{{\mathcal O}_{\mathcal X}} ({\mathcal O}_{\!{\mathcal X}}^{\,n})^\vee) \oplus ({\mathcal O}_{\!{\mathcal X}}^{\,n}\otimes_{{\mathcal O}_{\mathcal X}} \upsilon(M)^\vee)\right)$ by the ideal generated by the image of $(-{\rm ev}_{\upsilon(M)}, {\rm id}\otimes \delta_{{\mathcal O}_{\!{\mathcal X}}^{\,n}}\otimes {\rm id})$. Let ${\mathcal U}=\spec(S)\subseteq {\mathcal X}$ be an affine open subset such that $\tilde{M}:=\upsilon(M)({\mathcal U})$ is free over $S$. Let $\{b_1,\ldots,b_n\}$ be a basis of $\tilde{M}$ and $b_1^\vee,\ldots, b_n^\vee\in \tilde{M}^\vee$ the corresponding dual basis. Then $\upsilon(U)({\mathcal U})$ is generated by $x_{ij}:=b_i\otimes e_j^\vee\in \tilde{M} \otimes (S^{n})^\vee$ and $y_{ji}:=e_j\otimes b_i^\vee\in S^{n}\otimes (\tilde{M})^\vee$ for $i,j=1,\ldots, n$, where $\{e_1,\ldots, e_n\}$ denotes the standard basis of $S^n$ and $\{e_1^\vee,\ldots, e_n^\vee\}$ the dual basis. The relations are generated by $$b_k^\vee(b_i)={\rm ev}_{\tilde{M}}(b_i\otimes b_k^\vee)= ({\rm id}_{\tilde{M}}\otimes \delta_{S^n}\otimes {\rm id}_{\tilde{M}^\vee})(b_i\otimes b_k^\vee)= \sum_{j=1}^n (b_i\otimes e_j^\vee)\otimes (e_j\otimes b_k^\vee),$$ i.e.~$\delta_{ik}=\sum_{j=1}^n x_{ij}y_{jk}$ for all $i,k=1,\ldots, n$. This just means that the matrix $Y=(y_{jk})$ is the inverse of the matrix $X=(x_{ij})$. Hence $\upsilon(U)({\mathcal U})=S[X,X^{-1}]$ is the localisation of a polynomial ring over $S$ in $n^2$ variables at $\det(X)$.
For showing that $U$ is indeed a solution ring, we consider the following diagram $$\xymatrix@R+10pt@C+30pt{ M \ar[r]^{ {\rm id}_M\otimes \delta_{\1^n}} \ar[d]^{ {\rm id}_M\otimes \delta_M} & (M \negotimes (\1^n)^\negvee)\negotimes \1^n \ar[r]^{ \text{incl.}\otimes {\rm id}_{\1^n}} \ar[d]^{ {\rm id}\otimes \delta_M} & U\negotimes \1^n \ar[d]^{ {\rm id}\otimes \delta_M} \\ M\negotimes M^\negvee \negotimes M \ar[r]^(.4){{\rm id}_M\otimes \delta_{\1^n}\otimes {\rm id}} \ar[d]^{ {\rm ev}_M\otimes {\rm id}_M} & (M\negotimes (\1^n)^\negvee)\negotimes (\1^n \negotimes M^\negvee)\negotimes M \ar[r]^(.55){\text{incl.}\otimes {\rm id}} \ar[d]^{ \mu_U\otimes {\rm id}_M}& U\negotimes (\1^n\negotimes M^\negvee) \negotimes M \ar[d]^{\mu_U\otimes {\rm id}_M} \\ \1\negotimes M \ar[r]^{u_U\otimes {\rm id}_M} & U\negotimes M \ar[r]^{{\rm id}}& U\negotimes M. }$$ It is easy to see that the upper left, upper right and lower right squares all commute. The lower left square also commutes by definition of $U$, since the difference of the two compositions in question is just $(-{\rm ev}_M,{\rm id}_{M}\otimes \delta_{\1^n}\otimes {\rm id}_{M^\vee})\otimes {\rm id}_M$. Furthermore the composition of the two vertical arrows on the left is just the identity on $M$ by definition of the dual. Tensoring the big square with $U$ leads to the left square of the next diagram $$\xymatrix@C+30pt{ U\otimes M \ar[r] \ar[d]^{{\rm id}} & U\otimes U\otimes \1^n \ar[r]^{ \mu_U\otimes {\rm id}_{\1^n}} \ar[d]^{{\rm id}_U \otimes\alpha} & U\otimes\1^n \ar[d]^{\alpha}\\ U\otimes M \ar[r]^(.45){{\rm id}_U\otimes u_U\otimes {\rm id}_M}& U\otimes U \otimes M \ar[r]^{ \mu_U\otimes{\rm id}_M}& U\otimes M\\ }$$ where $\alpha:=( \mu_U\otimes{\rm id}_M)\circ ({\rm id}\otimes \delta_M)$. The right square of this diagram also commutes, as is easily checked, and the composition in the bottom row is just the identity according to the constraints on the unit morphism $u_U$ and the multiplication map $\mu_U$. 
Hence, $\alpha:U\otimes \1^n\to U\otimes M$ is a split epimorphism in ${\mathcal C}$, and even in ${\mathcal C}_U$ (since the right square commutes). Since the rank of $\upsilon(U\otimes \1^n)=\upsilon(U)^n$ and the rank of $\upsilon(U\otimes M)$ as $\upsilon(U)$-modules are both $n$, the split epimorphism $\upsilon(\alpha)$ is in fact an isomorphism, i.e.~$\alpha$ is an isomorphism. Applying the natural transformation $\varepsilon$, we finally obtain the commutative square $$\xymatrix@C+20pt{ U^n=\iota_U\bigl( (U \negotimes \1^n)^{\mathcal C}\bigr) \ar[r]^{\iota_U(\alpha^{\mathcal C})}_{\cong} \ar[d]^{\varepsilon_{U^n}}_{\cong} & \iota_U\bigl( (U\negotimes M)^{\mathcal C}\bigr) \ar[d]^{\varepsilon_{M_U}} \\ U^n=U\negotimes \1^n \ar[r]^{\alpha}_{\cong} & U\negotimes M, }$$ which shows that $\varepsilon_{M_U}$ is an isomorphism. Hence, $U$ is a solution ring for $M$. \end{proof} \begin{rem}\label{rem:universal-solution-ring} In the case of difference or differential modules over a difference or differential field $F$, respectively, the ring $U$ constructed in the previous proof is just the usual universal solution algebra $F[X,\det(X)^{-1}]$ for a fundamental solution matrix $X$ having indeterminates as entries. We will therefore call $U$ the {\bf universal solution ring} for $M$.\\ This is moreover justified by the following theorem which states that $U$ indeed satisfies a universal property. \end{rem} \begin{thm}\label{thm:univ-sol-ring} Let $R$ be a solution ring for $M$, such that $(R\otimes M)^{\mathcal C}$ is a free $R^{\mathcal C}$-module, and let $U$ be the solution ring for $M$ constructed in Thm.~\ref{thm:exists-sol-ring}. Then there exists a morphism of ${\mathcal C}$-algebras $f:U\to R$. Furthermore, the image of $\iota(R^{\mathcal C})\otimes U\xrightarrow{\varepsilon_R\otimes f} R\otimes R\xrightarrow{\mu_R} R$ does not depend on the choice of $f$. 
\end{thm} \begin{proof} By assumption, we have an isomorphism in ${\mathcal C}_R$: $$\alpha: R^n\xrightarrow{\cong} \iota_R\left( (M_R)^{{\mathcal C}_R} \right)=R\otimes_{\iota(R^{\mathcal C})} \iota\left((R\otimes M)^{\mathcal C}\right) \xrightarrow{\cong} R\otimes M.$$ Since $M$ is dualizable, one has bijections \begin{align*} \Mor_{{\mathcal C}_R}(R^n,R\otimes M) &\simeq \Mor_{{\mathcal C}_R}(R\otimes (\1^n\otimes M^\vee), R) &\simeq \Mor_{\mathcal C}(\1^n\otimes M^\vee,R) \\ \alpha &\mapsto \tilde{\alpha}_R:=({\rm id}_R\otimes {\rm ev}_M)\circ (\alpha\otimes {\rm id}_{M^\vee}) &\mapsto \tilde{\alpha}:=\tilde{\alpha}_R|_{\1^n\otimes M^\vee} \end{align*} Similarly, for the inverse morphism $\beta:=\alpha^{-1}:R\otimes M\to R^n$, one has \begin{align*} \Mor_{{\mathcal C}_R}(R\otimes M,R^n) &\simeq \Mor_{{\mathcal C}_R}(R\otimes (M\otimes (\1^n)^\vee), R) &\simeq \Mor_{\mathcal C}(M\otimes (\1^n)^\vee,R) \\ \beta &\mapsto \tilde{\beta}_R:=({\rm id}_R\otimes {\rm ev}_{\1^n})\circ (\beta\otimes {\rm id}_{(\1^n)^\vee}) &\mapsto \tilde{\beta}:=\tilde{\beta}_R|_{M\otimes (\1^n)^\vee} \end{align*} Therefore the isomorphism $\alpha$ induces a morphism of ${\mathcal C}$-algebras $$f:\Sym\Bigl( (M\otimes (\1^n)^\vee )\oplus (\1^n\otimes M^\vee) \Bigr)\to R.$$ We check that this morphism factors through $U$, i.e.~we have to check that the morphisms $$M\otimes M^\vee \xrightarrow{{\rm id}\otimes\delta_{\1^n}\otimes {\rm id}} M \otimes (\1^n)^\vee\otimes \1^n \otimes M^\vee \xrightarrow{ \tilde{\beta} \otimes \tilde{\alpha}}R\otimes R\xrightarrow{\mu_R} R $$ and $$M\otimes M^\vee \xrightarrow{{\rm ev}_M} \1 \xrightarrow{u_R} R$$ are equal. For this we consider the $R$-linear extensions in the category ${\mathcal C}_R$. 
By \cite[Sect.~2.4]{pd:ct}, the composition $$M_{\!R}^\vee\xrightarrow{\delta_{R^n}\otimes {\rm id}_{M_{\!R}^\vee}} (R^n)^\vee \otimes_R R^n\otimes_R M_{\!R}^\vee \xrightarrow{{\rm id}\otimes \alpha\otimes {\rm id}} (R^n)^\vee\otimes_R M_{\!R}\otimes_R M_{\!R}^\vee \xrightarrow{ {\rm id}\otimes {\rm ev}_{M_{\!R}}} (R^n)^\vee$$ is just the transpose $\tp{\alpha}:M_R^\vee\to (R^n)^\vee$ of the morphism $\alpha$, and this equals the contragredient $\beta^\vee$ of $\beta=\alpha^{-1}$.\\ Hence the equality of the two morphisms reduces to the commutativity of the diagram $$\xymatrix@C+20pt{ M_R \otimes_R M_R^\vee \ar[r]^{\beta \otimes \beta^\vee} \ar[dr]_{{\rm ev}_{M_R}} & R^n \otimes_R (R^n)^\vee \ar[d]^{{\rm ev}_{R^n}} \\ & R. }$$ But by definition of the contragredient (see \cite[Sect.~2.4]{pd:ct}), this diagram commutes. It remains to show that the image of $\iota(R^{\mathcal C})\otimes U\xrightarrow{\varepsilon_R\otimes f} R\otimes R\xrightarrow{\mu_R} R$ does not depend on the chosen morphism $f:U\to R$.\\ Given two morphisms of ${\mathcal C}$-algebras $f,g:U\to R$, let $\tilde{\alpha}_f, \tilde{\alpha}_g\in \Mor_{{\mathcal C}}(\1^n \otimes M^\vee, R)$ be the restrictions of $f$ resp. of $g$ to $\1^n\otimes M^\vee\subseteq U$, and let $\tilde{\beta}_f, \tilde{\beta}_g\in \Mor_{{\mathcal C}}(M\otimes (\1^n)^\vee,R)$ be the restrictions of $f$ resp. of $g$ to $M\otimes (\1^n)^\vee\subseteq U$. Furthermore, let $\alpha_f,\alpha_g\in \Mor_{{\mathcal C}_R}(R^n,M_R)$ and $\beta_f,\beta_g\in \Mor_{{\mathcal C}_R}(M_R,R^n)$ denote the corresponding isomorphisms. Then by similar considerations as above one obtains that $\beta_f$ and $\beta_g$ are the inverses of $\alpha_f$ and $\alpha_g$, respectively.
Then $$\beta_g\circ \alpha_f\in \Mor_{{\mathcal C}_R}(R^n,R^n)\simeq \Hom_{R^{\mathcal C}}((R^{\mathcal C})^n,(R^n)^{\mathcal C}) \simeq \Mor_{{\mathcal C}_{\iota(R^{\mathcal C})}}(\iota(R^{\mathcal C})^n,\iota(R^{\mathcal C})^n)$$ is induced by an isomorphism on $\iota(R^{\mathcal C})^n$ (which we also denote by $\beta_g\circ \alpha_f$). Therefore for the $\iota(R^{\mathcal C})$-linear extension $\tilde{\alpha}_{f,\iota(R^{\mathcal C})},\tilde{\alpha}_{g,\iota(R^{\mathcal C})}:\iota(R^{\mathcal C})\otimes \1^n\otimes M^\vee \to R$, one has \begin{eqnarray*} \tilde{\alpha}_{f,\iota(R^{\mathcal C})} &=& ({\rm id}_R\otimes {\rm ev}_M)\circ (\alpha_f|_{\iota(R^{\mathcal C})^n}\otimes {\rm id}_{M^\vee}) \\ &=& ({\rm id}_R\otimes {\rm ev}_M)\circ (\alpha_g|_{\iota(R^{\mathcal C})^n}\otimes {\rm id}_{M^\vee})\circ \left((\beta_g\circ \alpha_f)\otimes {\rm id}_{M^\vee}\right) \\ &=& \tilde{\alpha}_{g,\iota(R^{\mathcal C})}\circ \left((\beta_g\circ \alpha_f)\otimes {\rm id}_{M^\vee}\right). \end{eqnarray*} and similarly, $$\tilde{\beta}_{f,\iota(R^{\mathcal C})}= \tilde{\beta}_{g,\iota(R^{\mathcal C})}\circ \left((\alpha_g\circ \beta_f)\otimes {\rm id}_{(\1^n)^\vee}\right).$$ Hence, the morphism $\mu_R\circ (\varepsilon_R\otimes f):\iota(R^{\mathcal C})\otimes U\to R$ factors through $\mu_R\circ (\varepsilon_R\otimes g)$ and by interchanging the roles of $f$ and $g$, the morphism $\mu_R\circ (\varepsilon_R\otimes g)$ factors through $\mu_R\circ (\varepsilon_R\otimes f)$. So the images are equal. \end{proof} \begin{rem} In the classical settings, every Picard-Vessiot ring for some module $M$ is a quotient of the universal solution ring $U$. This is also the case in this abstract setting (see Thm.~\ref{thm:simple-minimal-solution-rings-are-quotients} below). More generally, we will see that every simple minimal solution ring for $M$ (i.e.~without the assumption on the constants) is a quotient of $U$.
Conversely, in Cor.~\ref{cor:special-quotients-are-pv-rings} we show that every quotient of $U$ by a maximal ${\mathcal C}$-ideal ${\mathfrak{m}}$ is a Picard-Vessiot ring if $(U/{\mathfrak{m}})^{\mathcal C}=k$.\\ Dropping the assumption $(U/{\mathfrak{m}})^{\mathcal C}=k$, however, one still has a simple solution ring $U/{\mathfrak{m}}$ (by Prop.~\ref{prop:image-of-solution-rings}), but $U/{\mathfrak{m}}$ may not be minimal. To see this, let $M=\1$. Then trivially $R:=\1$ is a Picard-Vessiot ring for $M$, and the only one, since it is contained in any other ${\mathcal C}$-algebra.\\ The universal solution ring for $M=\1$, however, is given by $U\cong \1\otimes_k k[x,x^{-1}]$. Hence, for every maximal ideal $I$ of $k[x,x^{-1}]$, ${\mathfrak{m}}:=\iota(I)$ is a maximal ${\mathcal C}$-ideal of $U=\iota( k[x,x^{-1}])$ by Lemma~\ref{lemma:ideal-bijection::abstract}. But $U/{\mathfrak{m}}\cong \iota(k[x,x^{-1}]/I)$ is a minimal solution ring only if $k[x,x^{-1}]/I\cong k$, i.e.~only if $U/{\mathfrak{m}}\cong \1$. \end{rem} We continue with properties of quotients of $U$. \begin{prop}\label{prop:properties-of-quotients-of-U} Let $U$ be the universal solution ring for some dualizable $M\in {\mathcal C}$, and let $R$ be a quotient algebra of $U$. Then $\upsilon(R)$ is a finitely generated faithfully flat ${\mathcal O}_{\mathcal X}$-algebra. If in addition $R$ is a simple ${\mathcal C}$-algebra, then $R^{\mathcal C}$ is a finite field extension of $k$. \end{prop} \begin{proof} Since $R$ is a quotient of $U$, it is a quotient of $T:=\Sym\Bigl( (M\otimes (\1^n)^\vee ) \oplus (\1^n\otimes M^\vee) \Bigr)$. Since $\upsilon(M)$ is finitely generated, $\upsilon(T)$ is a finitely generated ${\mathcal O}_{\mathcal X}$-algebra and therefore also $\upsilon(R)$ is a finitely generated ${\mathcal O}_{\mathcal X}$-algebra. Since $M$ is dualizable, $\tenscat{M}$ is a Tannakian category (see Rem.~\ref{rem:dualizables-are-projective}), and $T$ is an ind-object of $\tenscat{M}$.
Being a quotient of $T$, $R$ is also an ind-object of $\tenscat{M}$. Therefore by \cite[Lemma 6.11]{pd:ct}, $\upsilon(R)$ is faithfully flat over ${\mathcal O}_{\mathcal X}$. If in addition $R$ is simple, $\ell:=R^{\mathcal C}$ is a field. By exactness of $\iota$ and Prop.~\ref{prop:adjointness}(iii), we have a monomorphism $\iota(\ell)\hookrightarrow R$, and hence by exactness of $\upsilon$, an inclusion of ${\mathcal O}_{\mathcal X}$-algebras ${\mathcal O}_{\mathcal X} \otimes_k \ell=\upsilon(\iota(\ell))\hookrightarrow \upsilon(R)$. After localising to some affine open subset of ${\mathcal X}$, we can apply Thm.~\ref{thm:abstract-algebra}, and obtain that $\ell$ is a finite extension of $k$. \end{proof} \begin{thm}\label{thm:simple-minimal-solution-rings-are-quotients} Let $M$ be a dualizable object of ${\mathcal C}$, and let $U$ be the universal solution ring for $M$. Then every simple minimal solution ring for $M$ is isomorphic to a quotient of the universal solution algebra $U$. In particular, every Picard-Vessiot ring for $M$ is isomorphic to a quotient of $U$. \end{thm} \begin{proof} Let $R$ be a simple minimal solution ring for $M$. Since $R$ is simple, $R^{\mathcal C}$ is a field, and therefore $(R\otimes M)^{\mathcal C}$ is a free $R^{\mathcal C}$-module. Hence $R$ satisfies the assumptions of Theorem \ref{thm:univ-sol-ring}, and there is a morphism of ${\mathcal C}$-algebras $f:U\to R$. As $(U\otimes M)^{\mathcal C}$ is a free $U^{\mathcal C}$-module, the image $f(U)$ is a solution ring by Prop.~\ref{prop:image-of-solution-rings}. As $R$ is minimal, we obtain $f(U)=R$. Hence, $R$ is the quotient of $U$ by the kernel of $f$. \end{proof} \begin{cor}\label{cor:properties-of-simple-minimal-solution-rings} Let $R\in {\mathcal C}$ be a simple minimal solution ring for some dualizable $M\in {\mathcal C}$. Then $\upsilon(R)$ is a finitely generated faithfully flat ${\mathcal O}_{\mathcal X}$-algebra, and $R^{\mathcal C}$ is a finite field extension of $k$.
\end{cor} \begin{proof} This follows directly from Thm.~\ref{thm:simple-minimal-solution-rings-are-quotients} and Prop.~\ref{prop:properties-of-quotients-of-U}. \end{proof} \begin{prop}\label{prop:unique-pv-inside-simple-sol-ring} Let $M$ be a dualizable object of ${\mathcal C}$, and let $R$ be a simple solution ring for $M$ with $R^{\mathcal C}=k$. Then there is a unique Picard-Vessiot ring for $M$ inside $R$. This is the image of the universal solution ring $U$ under a morphism $f:U\to R$. \end{prop} \begin{proof} As in the proof of Thm.~\ref{thm:simple-minimal-solution-rings-are-quotients}, $R$ satisfies the assumptions of Theorem \ref{thm:univ-sol-ring}, so there is a morphism of ${\mathcal C}$-algebras $f:U\to R$. By assumption on $R$, we have $\iota(R^{\mathcal C})=\iota(k)=\1$, and hence $\varepsilon_R\otimes f=f:\1\otimes U=U\to R$. So by the second part of Theorem \ref{thm:univ-sol-ring}, the image $f(U)$ does not depend on the choice of $f$. In particular, $f(U)$ (which is a solution ring by Prop.~\ref{prop:image-of-solution-rings}) is the unique minimal solution ring inside $R$. It remains to show that $f(U)$ is a simple algebra. Let $I\subseteq U$ be a maximal subobject in ${\mathcal C}_U$ (i.e.~a maximal ideal of $U$), let $R':=U/I$ and let $g:U\to R'$ be the canonical epimorphism. Furthermore, let ${\mathfrak{m}}$ be a maximal ${\mathcal C}$-ideal of $R'\otimes R$. Since $R$ and $R'$ are simple, the natural morphisms $R\to (R'\otimes R)/{\mathfrak{m}}$ and $R'\to (R'\otimes R)/{\mathfrak{m}}$ considered in ${\mathcal C}_R$ and ${\mathcal C}_{R'}$, respectively, are monomorphisms, and it suffices to show that $\1\otimes f(U)\subseteq (R'\otimes R)/{\mathfrak{m}}$ is simple. \[ \xymatrix{ U \ar^f[rr] \ar_g[d] && R \ar^{1\otimes {\rm id}_R}[d]\\ R' \ar^(.4){{\rm id}_{R'}\otimes 1}[rr] && (R'\otimes R)/{\mathfrak{m}} \\ } \] $g(U)=R'$ is simple by construction, and so is $g(U)\otimes \1\subseteq (R'\otimes R)/{\mathfrak{m}}$.
By Theorem \ref{thm:univ-sol-ring}, we have $\iota(l)\cdot (g(U)\otimes \1)=\iota(l)\cdot (1\otimes f(U))$, where $l=\left((R'\otimes R)/{\mathfrak{m}}\right)^{\mathcal C}$, and $l$ is a field, since $(R'\otimes R)/{\mathfrak{m}}$ is simple. By Corollary \ref{cor:still-simple::abstract}, applied to the category ${\mathcal C}_{R'}$, $\iota(l)\cdot (g(U)\otimes \1)$ is also simple, i.e.~$\iota(l)\cdot (\1\otimes f(U))$ is simple. Since $\iota(l)\cdot (\1\otimes f(U))\cong l\otimes_{k} f(U)$ is a faithfully flat extension of $f(U)$, this implies that $f(U)$ is also simple. \end{proof} \begin{rem} The previous proposition ensures the existence of Picard-Vessiot rings in special cases. For example, in the differential setting over e.g.~$F=\mathbb{C}(t)$, if $x$ is a point which is non-singular for the differential equation, then one knows that the ring of holomorphic functions on a small disc around that point is a solution ring for the equation. Hence, there exists a Picard-Vessiot ring (even unique) for the corresponding differential module inside this ring of holomorphic functions.\\ Similarly, in the case of rigid analytically trivial pre-$t$-motives (which form a special case of the difference setting) the field of fractions of a given ring of restricted power series is a simple solution ring for all these modules (cf.~\cite{mp:tdadmaicl}). \end{rem} \begin{cor}\label{cor:special-quotients-are-pv-rings} Let $M\in {\mathcal C}$ be dualizable, and let ${\mathfrak{m}}$ be a maximal ${\mathcal C}$-ideal of the universal solution ring $U$ for $M$ such that $(U/{\mathfrak{m}})^{\mathcal C}=k$. Then $U/{\mathfrak{m}}$ is a Picard-Vessiot ring for $M$. \end{cor} \begin{proof} By Prop.~\ref{prop:image-of-solution-rings}, $U/{\mathfrak{m}}$ satisfies the conditions of $R$ in the previous proposition. Hence, the image of the morphism $U\to U/{\mathfrak{m}}$ (which clearly is $U/{\mathfrak{m}}$) is a Picard-Vessiot ring.
\end{proof} \begin{cor}\label{cor:pv-rings-isom-over-finite-ext} Let $M\in {\mathcal C}$ be dualizable, and let $R$ and $R'$ be two simple minimal solution rings for $M$. Then there exists a finite field extension $\ell$ of $k$ containing $R^{\mathcal C}$ and $(R')^{\mathcal C}$ such that $R\otimes_{R^{\mathcal C}} \ell\cong R'\otimes_{(R')^{\mathcal C}} \ell$. \end{cor} \begin{proof} As in the proof of the previous theorem, let $f:U\to R$ and $g:U\to R'$ be epimorphisms of ${\mathcal C}$-algebras whose existence is guaranteed by Thm.~\ref{thm:simple-minimal-solution-rings-are-quotients}. Let ${\mathfrak{m}}$ be a maximal ${\mathcal C}$-ideal of $R'\otimes R$, and let $\ell:=\left(R'\otimes R/{\mathfrak{m}}\right)^{\mathcal C}$. Then $R'$ and $R$ embed into $R'\otimes R/{\mathfrak{m}}$ and hence $(R')^{\mathcal C}$ and $R^{\mathcal C}$ both embed into $\ell$. Furthermore by Thm.~\ref{thm:univ-sol-ring}, the subrings $\iota(\ell)(g(U)\otimes 1)$ and $\iota(\ell)(1\otimes f(U))$ are equal. As $\ell$ contains both $R^{\mathcal C}$ and $(R')^{\mathcal C}$, one has $\iota(\ell)(g(U)\otimes 1)=\iota(\ell)(R'\otimes 1)\cong R'\otimes_{(R')^{\mathcal C}} \ell$ and $\iota(\ell)(1\otimes f(U))\cong R\otimes_{R^{\mathcal C}} \ell$. Hence, $R'\otimes_{(R')^{\mathcal C}} \ell \cong R\otimes_{R^{\mathcal C}} \ell$. As in the proof of Prop.~\ref{prop:properties-of-quotients-of-U}, one shows that $\ell$ is indeed finite over $k$. \end{proof} \begin{thm}\label{thm:existence-of-pv-ring} Let $M\in {\mathcal C}$ be dualizable. Then there exists a Picard-Vessiot ring for $M$ up to a finite field extension of $k$, i.e.~there exists a finite field extension $\ell$ of $k$ and a ${\mathcal C}_{\iota(\ell)}$-algebra $R$ such that $R$ is a PV-ring for $M_{\iota(\ell)}\in {\mathcal C}_{\iota(\ell)}$. \end{thm} \begin{proof} Let $U$ be the universal solution ring for $M$, and let ${\mathfrak{m}}\subset U$ be a maximal ${\mathcal C}$-ideal of $U$. 
Then $R:=U/{\mathfrak{m}}$ is a simple solution ring for $M$ by Prop.~\ref{prop:image-of-solution-rings}, and $\ell:=R^{\mathcal C}$ is a finite field extension of $k$ by Prop.~\ref{prop:properties-of-quotients-of-U}.\\ Considering now $M_{\iota(\ell)}\in {\mathcal C}_{\iota(\ell)}$, and $R$ as an algebra in ${\mathcal C}_{\iota(\ell)}$ via $\varepsilon_R:\iota(R^{\mathcal C})=\iota(\ell)\to R$, we obtain that $R$ is a simple solution ring for $M_{\iota(\ell)}$ with $R^{\mathcal C}=\ell$. Hence by Prop.~\ref{prop:unique-pv-inside-simple-sol-ring}, with $k$ replaced by $\ell$ (and ${\mathcal C}$ by ${\mathcal C}_{\iota(\ell)}$ etc.), there is a unique Picard-Vessiot ring for $M_{\iota(\ell)}$ inside $R$. Indeed also by Prop.~\ref{prop:unique-pv-inside-simple-sol-ring}, this Picard-Vessiot ring is $R$ itself, since the canonical morphism $\iota(\ell)\otimes U\to R$ is an epimorphism, and $\iota(\ell)\otimes U$ is easily seen to be the universal solution ring for $M_{\iota(\ell)}$. \end{proof} \section{Picard-Vessiot rings and fibre functors}\label{sec:pv-rings-and-fibre-functors} Throughout this section, we fix a dualizable object $M\in {\mathcal C}$. Recall that we denote by $\tenscat{M}$ the strictly full tensor subcategory of ${\mathcal C}$ generated by $M$ and $M^\vee$, i.e.~the full subcategory of ${\mathcal C}$ containing all objects isomorphic to subquotients of direct sums of objects $M^{\otimes n}\otimes (M^\vee)^{\otimes m}$ for $n,m\geq 0$. In this section we consider the correspondence between Picard-Vessiot rings $R$ for $M$ and fibre functors $\omega:\tenscat{M}\to \mathsf{vect}_k$. The main result is Thm.~\ref{thm:pv-rings-equiv-to-fibre-functors} which states that there is a bijection between their isomorphism classes. This generalises \cite[Thm.~3.4.2.3]{ya:dnctgdd} to our abstract setting. \begin{prop}\label{prop:fibre-functor-associated-to-pv-ring} Assume $R$ is a Picard-Vessiot ring for $M$. 
Then the functor $$\omega_R:\tenscat{M}\to \mathsf{vect}_k, N\mapsto (R\otimes N)^{\mathcal C}$$ is an exact faithful tensor-functor, i.e.~a fibre functor.\\ We call the fibre functor $\omega_R$ the {\bf fibre functor associated to $R$}. \end{prop} \begin{proof} By definition of a Picard-Vessiot ring, the morphism $\varepsilon_{M_R}:R\otimes_k (R\otimes M)^{\mathcal C} \to R\otimes M$ is an isomorphism. Hence, by Prop.~\ref{prop:subcat-of-trivial-modules}, $\varepsilon_{N_R}$ is an isomorphism for all $N\in \tenscat{M}$. Recall $R\otimes_k (R\otimes N)^{\mathcal C}=\iota_R((N_R)^{\mathcal C})=\iota_R(\omega_R(N))$ for all $N$. As $\upsilon(R)$ is faithfully flat over ${\mathcal O}_{\mathcal X}=\upsilon(\1)$ by Cor.~\ref{cor:properties-of-simple-minimal-solution-rings}, the functor $N\mapsto R\otimes N$ is exact and faithful. Hence, given a short exact sequence $0\to N'\to N\to N''\to 0$ in $\tenscat{M}$, the sequence $$0\to R\otimes N'\to R\otimes N\to R\otimes N''\to 0$$ is exact, and $R\otimes N=0$ if and only if $N=0$. Using the isomorphisms $\varepsilon_{N_R}$ etc., the sequence $$0\to R\otimes_k \omega_R(N')\to R\otimes_k \omega_R(N)\to R\otimes_k \omega_R(N'')\to 0$$ is exact. As $\iota_R$ is exact and faithful, this implies that $$0\to \omega_R(N')\to \omega_R(N)\to \omega_R(N'')\to 0$$ is exact. Furthermore, $ \omega_R(N)=0$ if and only if $ R\otimes_k \omega_R(N)=0$ if and only if $ R\otimes N=0$ if and only if $N=0$. It remains to show that $\omega_R$ is a tensor-functor; this follows by showing that $\varepsilon_{(N\otimes N')_R}$ is an isomorphism whenever $\varepsilon_{N_R}$ and $\varepsilon_{N'_R}$ are. \end{proof} Given a fibre functor $\omega:\tenscat{M} \to \mathsf{vect}_k$, we want to obtain a Picard-Vessiot ring associated to $\omega$.\\ In fact, this Picard-Vessiot ring is already given in the proof of \cite[Thm.~3.2]{pd-jsm:tc}, although the authors don't claim that it is a Picard-Vessiot ring.
We will recall the construction to be able to prove the necessary facts:\\ For $N\in \tenscat{M}$, one defines $P_N$ to be the largest subobject of $N\otimes_k \omega(N)^\vee$ such that for all $n\geq 1$ and all subobjects $N'\subseteq N^n$, the morphism $$P_N\to N\otimes_k \omega(N)^\vee\xrightarrow{\text{diag}}N^n\otimes_k \omega(N^n)^\vee \to N^n\otimes_k \omega(N')^\vee$$ factors through $N'\otimes_k \omega(N')^\vee$.\\ For monomorphisms $g:N'\to N$ and epimorphisms $g:N\to N'$, one obtains morphisms $\phi_{g}:P_N\to P_{N'}$, and therefore $$R_\omega:= \varinjlim\limits_{N\in\tenscat{M}} P_N^\vee\in{\rm Ind}(\tenscat{M})\subseteq {\mathcal C}$$ is well-defined. The multiplication $\mu_{R_\omega}:R_\omega\otimes R_\omega\to R_\omega$ is induced by the natural morphisms $P_{N\otimes L}\to P_N\otimes P_L$ via dualizing and taking inductive limits. \begin{lemma}\label{lem:R-omega-representing} The functor ${\mathcal C}{\rm -}\mathsf{Alg} \to \mathsf{Sets}$ which associates to each ${\mathcal C}$-algebra $R'$ the set of natural tensor-transformations from the functor $R'\otimes (\iota\circ \omega):\tenscat{M}\to {\mathcal C}_{R'}$ to the functor $R'\otimes {\rm id}_{\tenscat{M}}:\tenscat{M}\to {\mathcal C}_{R'}$ is represented by the ${\mathcal C}$-algebra $R_\omega$, i.e.~there is a natural bijection between the natural transformations $R'\otimes (\iota\circ \omega)\to R'\otimes {\rm id}_{\tenscat{M}}$ of tensor functors and the morphisms of ${\mathcal C}$-algebras $R_\omega\to R'$. \end{lemma} \begin{proof} Let $R'$ be a ${\mathcal C}$-algebra, and let $\alpha$ be a natural transformation not necessarily respecting the tensor structure.
Then for every $N\in \tenscat{M}$ one has a morphism \begin{eqnarray*} \alpha_N & \in & \Mor_{{\mathcal C}_{R'}}(R'\otimes \iota(\omega(N)), R'\otimes N) \simeq \Mor_{{\mathcal C}}(\iota(\omega(N)), R'\otimes N) \\ &\simeq & \Mor_{{\mathcal C}}(\1, R'\otimes N\otimes \iota(\omega(N))^\vee)= (R'\otimes N\otimes \iota(\omega(N))^\vee)^{\mathcal C} \end{eqnarray*} It is straightforward to check that such a collection of morphisms $(\alpha_N)_{N}$ where $\alpha_N\in \Mor_{{\mathcal C}}(\1, R'\otimes N\otimes \iota(\omega(N))^\vee)$ defines a natural transformation if and only if $\alpha_N\in \Mor_{{\mathcal C}}(\1, R'\otimes P_N)$ for all $N$, and $\alpha_{N'}=({\rm id}_{R'}\otimes \phi_{g})\circ \alpha_N$ whenever $\phi_{g}:P_N\to P_{N'}$ is defined.\\ On the other hand, one has \begin{eqnarray*} \Mor_{{\mathcal C}}(R_\omega,R') &=& \Mor_{{\mathcal C}}(\varinjlim\limits_{N\in\tenscat{M}} P_N^\vee, R') \\ &=& \varprojlim\limits_{N\in\tenscat{M}}\Mor_{{\mathcal C}}(P_N^\vee, R') \simeq \varprojlim\limits_{N\in\tenscat{M}}\Mor_{{\mathcal C}}(\1,R'\otimes P_N) \end{eqnarray*} Hence, giving such a compatible collection of morphisms $\alpha_N$ is equivalent to giving a ${\mathcal C}$-morphism $R_\omega\to R'$.\\ It is also not hard to check that the natural transformations that respect the tensor structure correspond to morphisms of ${\mathcal C}$-algebras $R_\omega\to R'$ under this identification. \end{proof} Before we show that $R_\omega$ is a simple solution ring for $M$, we need some more results from \cite{pd-jsm:tc} resp.
from \cite{pd:ct}:\\ As $\omega$ has values in $k$-vector spaces, $\tenscat{M}$ together with $\omega$ is a neutral Tannakian category (see \cite{pd:ct}), and therefore equivalent to the category of representations of the algebraic group scheme $G=\underline{\Aut}^\otimes(\omega)$.\\ This also induces an equivalence of their ind-categories, and $R_\omega$ corresponds to the coordinate ring $k[G]$ with the right regular representation (cf.~proof of \cite[Theorem 3.2]{pd-jsm:tc}). \begin{prop}\label{prop:pv-ring-associated-to-fibre-functor} The object $R_\omega\in {\rm Ind}(\tenscat{M})\subseteq {\mathcal C}$ associated to $\omega$ is a simple solution ring for $M$, and satisfies $(R_\omega)^{\mathcal C}=k$. \end{prop} \begin{rem}\label{rem:pv-ring-associated-to-fibre-functor} By Prop.~\ref{prop:unique-pv-inside-simple-sol-ring}, $R_\omega$ therefore contains a unique Picard-Vessiot ring for $M$. This Picard-Vessiot ring will be called the {\bf PV-ring associated to $\omega$}. Indeed, $R_\omega$ is already minimal and hence a Picard-Vessiot ring itself. This will be seen at the end of the proof of Thm.~\ref{thm:pv-rings-equiv-to-fibre-functors}. There is also a way of directly showing that $R_\omega$ is isomorphic to a quotient of the universal solution ring for $M$ which would also imply that $R_\omega$ is a PV-ring (cf.~Cor.~\ref{cor:special-quotients-are-pv-rings}). But we don't need this here, so we will omit it. \end{rem} \begin{proof} As $\omega$ defines an equivalence of categories $\tenscat{M}\to {\rm Rep}_k(G)$ (and also of their ind-categories), and $\omega(R_\omega)=k[G]$, one obtains $$(R_\omega)^{{\mathcal C}}=\Mor_{\mathcal C}(\1,R_\omega)\simeq \Hom_G(k,k[G])=k[G]^G=k.$$ For showing that $R_\omega$ is simple, let $I\ne R_\omega$ be an ideal of $R_\omega$ in ${\mathcal C}$. We even have $I\in {\rm Ind}(\tenscat{M})$, as it is a subobject of $R_\omega$.
By the equivalence of categories $\omega(I)$ belongs to ${\rm Ind}({\rm Rep}_k(G))$, and $\omega(I)$ is an ideal of $\omega(R_\omega)=k[G]$. But $k[G]$ does not have non-trivial $G$-stable ideals. Hence, $\omega(I)=0$, and therefore $I=0$. As seen in Lemma \ref{lem:R-omega-representing}, ${\rm id}_{R_\omega}\in \Mor_{{\mathcal C}}(R_\omega,R_\omega)$ induces a natural transformation $\alpha:R_\omega\otimes (\iota\circ \omega) \to R_\omega\otimes {\rm id}_{\tenscat{M}}$, in particular it induces a ${\mathcal C}_{R_\omega}$-morphism $\alpha_M:R_\omega\otimes \iota(\omega(M))\to R_\omega\otimes M$. By \cite[Prop.~1.13]{pd-jsm:tc}, such a natural transformation is an isomorphism, as $\tenscat{M}$ is rigid\footnote{Rigidity of the target category, which is assumed in loc.~cit., is not needed. See also \cite[Prop.~1.1]{ab:ttnc}.}. Therefore, the morphism $\alpha_M$ is an isomorphism. As $R_\omega\otimes \iota(\omega(M))=\iota_{R_\omega}(\omega(M))$, Lemma \ref{lemma:when-eps-is-iso} implies that $\varepsilon_{M_{R_\omega}}$ is an isomorphism.\\ Hence, $R_\omega$ is a solution ring for $M$. \end{proof} \begin{thm}\label{thm:pv-rings-equiv-to-fibre-functors} Let $M\in {\mathcal C}$ be dualizable, and let $\ell$ be a field extension of $k$. Then there is a bijection between isomorphism classes of Picard-Vessiot rings $R$ for $M_{\iota(\ell)}$ over $\widetilde{\1}:=\iota(\ell)$ and isomorphism classes of fibre functors $\omega$ from $\tenscat{M_{\iota(\ell)}}$ into $\ell$-vector spaces.\\ This bijection is induced by $R\mapsto \omega_R$ and $\omega\mapsto (\text{PV-ring inside }R_\omega)$ given in Prop.~\ref{prop:fibre-functor-associated-to-pv-ring} and Rem.~\ref{rem:pv-ring-associated-to-fibre-functor}, respectively. \end{thm} \begin{proof} Clearly isomorphic Picard-Vessiot rings give rise to isomorphic fibre functors and isomorphic fibre functors give rise to isomorphic Picard-Vessiot rings.
Hence, we only have to show that the maps are inverse to each other up to isomorphisms.\\ By working directly in the category ${\mathcal C}_{\iota(\ell)}$ we can assume that $\ell=k$. On one hand, for given $\omega$ and corresponding PV-ring $R$, one has natural isomorphisms $$\iota_R(\omega(N))=R\otimes_k \omega(N)\to N_R$$ (see proof of Prop.~\ref{prop:pv-ring-associated-to-fibre-functor}). By adjunction these correspond to natural isomorphisms $$\lambda_N:\omega(N)\cong(N_R)^{\mathcal C}=\omega_R(N),$$ i.e.~the functors $\omega$ and $\omega_R$ are isomorphic. Conversely, given a Picard-Vessiot ring $R$ and associated fibre functor $\omega_R$, let $R_\omega$ be the simple solution ring constructed above.\\ As $\iota_R=R\otimes \iota$ and $(N_R)^{{\mathcal C}_R}=\omega_R(N)$ for all $N\in \tenscat{M}$, the natural isomorphisms $\varepsilon_{N_R}:\iota_R\left( (N_R)^{{\mathcal C}_R} \right) \to N_R$ form a natural transformation $R\otimes (\iota\circ \omega_R) \to R\otimes {\rm id}_{\tenscat{M}}$. By Lemma \ref{lem:R-omega-representing}, this natural transformation corresponds to a morphism of ${\mathcal C}$-algebras $\varphi:R_\omega\to R$. As $R_\omega$ is a simple ${\mathcal C}$-algebra, $\varphi$ is a monomorphism. But $R$ is a minimal solution ring, and hence $\varphi$ is even an isomorphism. Therefore, $R_\omega$ is isomorphic to $R$ and already minimal, i.e.~$R_\omega$ is a Picard-Vessiot ring itself. 
\end{proof} \section{Galois group schemes}\label{sec:galois-groups} Given a dualizable object $M\in {\mathcal C}$ and a Picard-Vessiot ring $R$ for $M$, one considers the group functor $$\underline{\Aut}_{{\mathcal C}-\text{alg}}(R): \mathsf{Alg}_k \to \mathsf{Groups}$$ which associates to each $k$-algebra $D$ the group of automorphisms of $R\otimes_k D$ as an algebra in ${\mathcal C}_{\iota(D)}$, i.e.~the subset of $\Mor_{{\mathcal C}_{\iota(D)}}(R\otimes_k D,R\otimes_k D)$ consisting of all isomorphisms which are compatible with the algebra structure of $R\otimes_k D$.\\ This functor is called the {\bf Galois group of $R$} over $\1$. On the other hand, given a fibre functor $\omega:\tenscat{M}\to \mathsf{vect}_k$, one considers the group functor $$\underline{\Aut}^\otimes(\omega): \mathsf{Alg}_k \to \mathsf{Groups}$$ which associates to each $k$-algebra $D$ the group of natural automorphisms of the functor $D\otimes_k \omega: N\mapsto D\otimes_k\omega(N)$.\\ As $\tenscat{M}$ together with the fibre functor $\omega$ is a neutral Tannakian category, this group functor is called the {\bf Tannakian Galois group} of $(\tenscat{M},\omega)$. In \cite{pd:ct} it is shown that this group functor is indeed an algebraic group scheme. The aim of this section is to show that both group functors are isomorphic algebraic group schemes if $\omega=\omega_R$ is the fibre functor associated to $R$. \medskip We start by recalling facts about group functors, (commutative) Hopf-algebras and affine group schemes. All of this can be found in \cite{ww:iags}. A group functor $ \mathsf{Alg}_k \to \mathsf{Groups}$ is an affine group scheme over $k$ if it is representable by a commutative algebra over $k$. This commutative algebra then carries the structure of a Hopf-algebra.
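A standard example may help to fix ideas: the group functor $D\mapsto D^\times$ of units is represented by the commutative algebra $k[x,x^{-1}]$, i.e.~it is the multiplicative group scheme $\mathbb{G}_{m,k}=\spec(k[x,x^{-1}])$, and the Hopf-algebra structure on $k[x,x^{-1}]$ is given by $$\Delta(x)=x\otimes x,\qquad c(x)=1,\qquad s(x)=x^{-1},$$ where $\Delta$, $c$ and $s$ denote the comultiplication, counit and antipode, respectively.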
The group functor is even an algebraic group scheme (i.e.~of finite type over $k$) if the corresponding Hopf-algebra is finitely generated.\\ The category of commutative Hopf-algebras over $k$ and the category of affine group schemes over $k$ are equivalent. This equivalence is given by taking the spectrum of a Hopf-algebra in one direction and by taking the ring of regular functions in the other direction.\\ For a Hopf-algebra $H$ over $k$, and corresponding affine group scheme ${\mathcal G}:=\spec(H)$, the category $\mathsf{Comod}(H)$ of right comodules of $H$ and the category $\Rep({\mathcal G})$ of representations of ${\mathcal G}$ are equivalent. This equivalence is given by attaching to a comodule $V$ with comodule map $\rho:V\to V\otimes_k H$ the following representation $\varrho:{\mathcal G}\to \End(V)$ of ${\mathcal G}$: For any $k$-algebra $D$ and $g\in {\mathcal G}(D)=\Hom_{k\rm{-alg}}(H,D)$, the endomorphism $\varrho(g)$ on $V\otimes_k D$ is the $D$-linear extension of $$g\circ \rho:V\to V\otimes_k H\to V\otimes_k D.$$ On the other hand, for any representation $\varrho:{\mathcal G}\to \End(V)$, the universal element ${\rm id}_H\in \Hom_{k\rm{-alg}}(H,H)={\mathcal G}(H)$ gives an $H$-linear homomorphism $\varrho({\rm id}_H):V\otimes_k H\to V\otimes_k H$, and its restriction to $V\otimes 1$ is the desired comodule map $\rho:V\to V\otimes_k H$. \medskip For showing that the group functors $\underline{\Aut}_{{\mathcal C}-\text{alg}}(R)$ and $\underline{\Aut}^\otimes(\omega_R)$ are isomorphic algebraic group schemes, we show that they are both represented by the $k$-vector space $H:=(R\otimes R)^{\mathcal C}=\omega_R(R)$. The next lemma shows that $H$ is a finitely generated (commutative) $k$-Hopf-algebra, and hence $\spec(H)$ is an algebraic group scheme over $k$. \begin{rem} This fact is shown for differential modules over algebraically closed constants in \cite[Thm.~2.33]{mvdp-mfs:gtlde}, and for t-motives in \cite[Sections 3.5-4.5]{mp:tdadmaicl}.
\end{rem} \begin{lemma}\label{lemma:H-is-Hopf-algebra} Let $R$ be a PV-ring for $M$ and $H:=\omega_R(R)=(R\otimes R)^{\mathcal C}$. \begin{enumerate} \item The morphism $\varepsilon_{R_R}:R\otimes_k H \to R_R=R\otimes R$ is an isomorphism in ${\mathcal C}_R$ (with $R$-module structure on $R\otimes R$ given on the first factor). \item $H$ is a finitely generated commutative $k$-algebra where the structure maps $u_H:k\to H$ (unit), $\mu_H:H\otimes_k H\to H$ (multiplication) are given by $$u_H:=\omega_R(u_R) \quad \text{and} \quad \mu_H:=\omega_R(\mu_R),$$ respectively. \item The $k$-algebra $H$ is even a Hopf-algebra where the structure maps $c_H:H\to k$ (counit), $\Delta:H\to H\otimes_k H$ (comultiplication) and $s:H\to H$ (antipode) are given as follows: Counit and antipode are given by $$c_H:=(\mu_R)^{\mathcal C} \quad \text{and} \quad s:=(\tau)^{\mathcal C},$$ respectively, where $\tau\in \Mor_{\mathcal C}(R\otimes R, R\otimes R)$ denotes the twist morphism. The comultiplication is given by $$\Delta:=\omega_R\bigl( \varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R)\bigr)\ \footnote{Hence, $\Delta$ is the image under $\omega_R$ of the morphism $R\xrightarrow{u_R \otimes {\rm id}_R} R\otimes R \xrightarrow{\varepsilon_{R_R}^{-1}} R\otimes_k H$}$$ \end{enumerate} \end{lemma} \begin{rem} The definition of $\Delta$ might look strange. Compared to other definitions (e.g.~in \cite[Sect.~2]{mt:haapvt}), where $\Delta$ is the map on constants/invariants induced by the map $R\otimes R\to R\otimes R\otimes R, a\otimes b\mapsto a\otimes 1\otimes b$, one might think that $\Delta$ should be defined as $({\rm id}_R\otimes u_R\otimes {\rm id}_R)^{\mathcal C}=\omega_R(u_R\otimes {\rm id}_R)$. 
The reason for the difference is that in \cite{mt:haapvt} and others, one uses $(R\otimes R)\otimes_R (R\otimes R)\cong R\otimes R\otimes R$ with right-$R$-module structure on the left tensor factor $(R\otimes R)$ and left-$R$-module structure on the right tensor factor $(R\otimes R)$.\\ In our setting, however, we are always using left-$R$-modules. In particular, the natural isomorphism $\omega_R(R)\otimes_k \omega_R(R)\to \omega_R(R\otimes R)$ reads as $$\Mor_{{\mathcal C}_R}(R,R\otimes R)\otimes_k \Mor_{{\mathcal C}_R}(R,R\otimes R)\to \Mor_{{\mathcal C}_R}(R,R\otimes R\otimes R)$$ where the left hand side is isomorphic to $\Mor_{{\mathcal C}_R}(R,(R\otimes R)\otimes_R (R\otimes R))$. But here, this is the tensor product of left-$R$-modules.\\ The additional $\varepsilon_{R_R}^{-1}$ in the definition of $\Delta$ solves the problem. It is also implicitly present in the identification $H\otimes_k H\cong (R\otimes R\otimes R)^{\mathcal C}$ in \cite{mt:haapvt} (cf.~proof of Lemma 2.4(b) loc.~cit.). \end{rem} \begin{proof}[Proof of Lemma \ref{lemma:H-is-Hopf-algebra}] As $R$ is an object of ${\rm Ind}(\tenscat{M})$, part (i) follows from Prop.~\ref{prop:subcat-of-trivial-modules}. As $\omega_R$ is a tensor functor, it is clear that the structure of a commutative algebra of $R$ induces a structure of a commutative algebra on $\omega_R(R)=H$ via the maps $u_H$ and $\mu_H$ defined in the lemma. As in the proof of Prop.~\ref{prop:properties-of-quotients-of-U}, one verifies that $H=\omega_R(R)$ is finitely generated as $k$-algebra.\\ Part (iii) is obtained by checking that the necessary diagrams commute. We only show that $\Delta$ is coassociative, i.e.~that $(\Delta\otimes_k {\rm id}_H)\circ \Delta=({\rm id}_H\otimes_k \Delta)\circ \Delta$, and leave the rest to the reader. 
As $\Delta=\omega_R\bigl( \varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R)\bigr)$, $\Delta\otimes_k {\rm id}_H=\omega_R\bigl( (\varepsilon_{R_R}^{-1}\otimes_k {\rm id}_H) \circ (u_R\otimes {\rm id}_R\otimes_k {\rm id}_H)\bigr)$ and ${\rm id}_H\otimes_k \Delta=\omega_R( {\rm id}_R\otimes_k \Delta)$, it suffices to show that the morphisms $$ (\varepsilon_{R_R}^{-1}\otimes_k {\rm id}_H) \circ (u_R\otimes {\rm id}_R\otimes_k {\rm id}_H) \circ \varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R) \quad \text{and}$$ $$( {\rm id}_R\otimes_k \Delta) \circ \varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R)$$ are equal. This is seen by showing that the following diagram commutes: $$\xymatrix@C+25pt{ R \ar[r]^{u_R\otimes {\rm id}_R} \ar[d]_{u_R\otimes {\rm id}_R} & R\otimes R \ar[r]^{\varepsilon_{R_R}^{-1}} \ar[d]_(0.45){u_R\otimes {\rm id}_{R\otimes R}} & R\otimes_k H \ar[d]^{u_R\otimes {\rm id}_{R\otimes_k H}} \\ R\otimes R \ar[r]^(0.45){{\rm id}_R\otimes u_R\otimes {\rm id}_R}\ar[d]_{\varepsilon_{R_R}^{-1}} & R\otimes R\otimes R \ar[r]^{{\rm id}_R\otimes \varepsilon_{R_R}^{-1}} & R\otimes R\otimes_k H \ar[d]^{\varepsilon_{R_R}^{-1}\otimes_k {\rm id}_H} \\ R\otimes_k H \ar[rr]^{{\rm id}_R\otimes_k \Delta=\iota_R(\Delta) }& & R\otimes_k H\otimes_k H }$$ Obviously the upper squares commute. Let $\delta:=\varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R)$. Then the middle horizontal morphism equals ${\rm id}_R\otimes \delta$ and the lower horizontal morphism is $\iota_R(\Delta)=\iota_R(({\rm id}_R\otimes \delta)^{{\mathcal C}_R})$. As $\varepsilon$ is a natural transformation $\iota_R\circ ()^{{\mathcal C}_R}\to {\rm id}_{{\mathcal C}_R}$, and as $\varepsilon_{R_R}^{-1}\otimes_k {\rm id}_H=\varepsilon_{(R\otimes_k H)_R}^{-1}$, also the lower square commutes. \end{proof} \begin{thm}\label{thm:Aut-R-represented-by-H} Let $R$ be a PV-ring for $M$. 
Then the group functor $$\underline{\Aut}_{{\mathcal C}-\text{alg}}(R): \mathsf{Alg}_k \to \mathsf{Groups}$$ is represented by the Hopf-algebra $H=\omega_R(R)=(R\otimes R)^{\mathcal C}$. Furthermore, $\spec(\upsilon(R))$ is a torsor of $\underline{\Aut}_{{\mathcal C}-\text{alg}}(R)$ over $X$. \end{thm} \begin{proof} This is shown similarly to \cite[Prop.~10.9]{am:gticngg} or \cite{td:tipdgtfrz}. One has to use that $$\delta: R\xrightarrow{u_R \otimes {\rm id}_R} R\otimes R \xrightarrow{\varepsilon_{R_R}^{-1}} R\otimes_k H$$ defines a right coaction of $H$ on $R$. The property of a right coaction, however, is given by the commutativity of the diagram in the proof of Lemma~\ref{lemma:H-is-Hopf-algebra}. The torsor property is obtained by the isomorphism $\upsilon(\varepsilon_{R_R}^{-1}):\upsilon(R)\otimes_{{\mathcal O}_{\mathcal X}} \upsilon(R)\to \upsilon(R)\otimes_k H$. \end{proof} \begin{thm}\label{thm:H-acting-on-omega_R} Let $R$ be a PV-ring for $M$ and $H=\omega_R(R)$. \begin{enumerate} \item For all $N\in \tenscat{M}$, $\rho_N:\omega_R(N)\to H\otimes_k \omega_R(N)$ given by $$\rho_N:=\omega_R\bigl( \varepsilon_{N_R}^{-1}\circ (u_R\otimes {\rm id}_{N}) \bigr)\ \footnote{The map $\varepsilon_{N_R}^{-1}\circ (u_R\otimes {\rm id}_{N})$ is a morphism in ${\mathcal C}$: $N\to R\otimes N \to R\otimes_k \omega_R(N)$}$$ defines a left coaction of $H$ on $\omega_R(N)$. \item The collection $\rho:=(\rho_N)_{N\in \tenscat{M}}$ is a natural transformation of tensor functors $\omega_R\to H\otimes_k \omega_R$, where $H\otimes_k \omega_R$ is a functor $\tenscat{M}\to {\bf Mod}_H$. \end{enumerate} \end{thm} \begin{rem} By going to the inductive limit one also gets a map $\rho_R:\omega_R(R)\to H\otimes_k \omega_R(R)$. This map is nothing else than the comultiplication $\Delta:H\to H\otimes_k H$. \end{rem} \begin{proof}[Proof of Thm.~\ref{thm:H-acting-on-omega_R}] Part (i) is proven in the same manner as the coassociativity of $\Delta$.
For proving the second part, recall that $\varepsilon$ is a natural transformation. Hence, for every morphism $f:N\to N'$ the diagram $$\xymatrix@C+10pt{ N \ar[r]^(0.4){u_R \otimes {\rm id}_N} \ar[d]_{f} & R\otimes N \ar[r]^(0.4){\varepsilon_{N_R}^{-1}} \ar[d]_{{\rm id}_R\otimes f}& R\otimes_k \omega_R(N) \ar[d]^{\iota_R(({\rm id}_R\otimes f)^{\mathcal C})} \\ N' \ar[r]^(0.45){u_R \otimes {\rm id}_{N'}} & R\otimes N' \ar[r]^(0.4){\varepsilon_{N'_R}^{-1}} & R\otimes_k \omega_R(N') }$$ commutes. As $\iota_R(({\rm id}_R\otimes f)^{\mathcal C})={\rm id}_R\otimes_k \omega_R(f)$, applying $\omega_R$ to the diagram gives the desired commutative diagram for $\rho$ being a natural transformation. Compatibility with the tensor product is seen in a similar way. \end{proof} \begin{thm}\label{thm:Aut-omega_R-represented-by-H} Let $R$ be a PV-ring for $M$ and $H=\omega_R(R)$. Then the group functor $$\underline{\Aut}^\otimes(\omega_R):\mathsf{Alg}_k \to \mathsf{Groups}$$ is represented by the Hopf-algebra $H$.\footnote{As shown in the following proof, the representing Hopf-algebra naturally is the coopposite Hopf-algebra $H^{\rm cop}$ of $H$. However, the antipode $s$ is an isomorphism of Hopf-algebras $s:H\to H^{\rm cop}$, hence $H^{\rm cop}\cong H$.} \end{thm} \begin{proof} As $\rho:=(\rho_N)_{N\in \tenscat{M}}$ defines a left coaction of $H$ on the functor $\omega_R$ by natural transformations, one obtains a right action of $\spec(H)$ on $\omega_R$. Composing with the antipode (i.e. 
taking inverse group elements), one therefore gets a homomorphism of group functors $$\varphi:\spec(H)\to \underline{\Aut}^\otimes(\omega_R).$$ Explicitly, for any $k$-algebra $D$ and $h\in H(D)=\Hom_{k-\rm{alg}}(H,D)$, one defines $\varphi(h)\in \underline{\Aut}^\otimes(\omega_R)(D)=\Aut^\otimes(D\otimes_k \omega_R)$ as the natural transformation which for $N\in\tenscat{M}$ is the $D$-linear extension of the composition $$\omega_R(N)\xrightarrow{\rho_N} H\otimes_k \omega_R(N) \xrightarrow{s\otimes {\rm id}_{\omega_R(N)}} H\otimes_k \omega_R(N) \xrightarrow{h\otimes {\rm id}_{\omega_R(N)}}D\otimes_k \omega_R(N).$$ For showing that the homomorphism $\varphi$ is indeed an isomorphism, we give the inverse map:\\ For any $k$-algebra $D$ and $g\in \underline{\Aut}^\otimes(\omega_R)(D)$, one has the homomorphism $g_R\in \End_D(D\otimes_k \omega_R(R))=\End_D(D\otimes_k H)$, and one defines $\psi(g)\in H(D)$ as the composition $$H\xrightarrow{s} H\xrightarrow{u_D\otimes {\rm id}_H} D\otimes_k H \xrightarrow{g_R}D\otimes_k H \xrightarrow{{\rm id}_D\otimes c_H}D.$$ It is a straightforward calculation to check that $\psi(g)$ is indeed a homomorphism of $k$-algebras and that $\varphi$ and $\psi$ are inverse to each other. \end{proof} \begin{cor}\label{cor:auts-are-isomorphic} The affine group schemes $\underline{\Aut}_{{\mathcal C}\text{-}\mathsf{Alg}}(R)$ and $\underline{\Aut}^\otimes(\omega_R)$ are isomorphic. \end{cor} \begin{proof} By Thm.~\ref{thm:Aut-R-represented-by-H} and Thm.~\ref{thm:Aut-omega_R-represented-by-H}, both functors are represented by the Hopf-algebra $H=\omega_R(R)$. \end{proof} \section{Galois correspondence}\label{sec:galois-correspondence} In this section we will establish a Galois correspondence between subalgebras of a PV-ring and closed subgroups of the corresponding Galois group.
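\medskip The following standard toy example (not taken from the sources cited here) illustrates the kind of correspondence between closed subgroup schemes and Hopf-algebra data that will be used below. Take ${\mathcal G}=\mathbb{G}_m=\spec(H)$ with $H=k[x,x^{-1}]$, $\Delta(x)=x\otimes x$, $c_H(x)=1$ and $s(x)=x^{-1}$. The ideal $I=(x^n-1)$ is a normal Hopf-ideal and cuts out the closed normal subgroup scheme $\mu_n\trianglelefteq \mathbb{G}_m$ of $n$-th roots of unity. The associated sub-Hopf-algebra is $$H(I)=\Ker\left(H\xrightarrow{\Delta - {\rm id}_H\otimes u_H}H\otimes_k H\to H\otimes_k (H/I)\right)=k[x^n,x^{-n}],$$ since $x^m\otimes x^m-x^m\otimes 1$ maps to zero in $H\otimes_k (H/I)$ if and only if $n$ divides $m$; conversely, one checks $(H')^+H=(x^n-1)$ for $H'=k[x^n,x^{-n}]$.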
As in \cite{am:pvtdsr}, the Galois correspondence will only take into account subalgebras which are PV-rings themselves on the one hand, and normal subgroups on the other. \medskip We start by recalling facts about sub-Hopf-algebras and closed subgroup schemes which can be found in \cite{ww:iags}. In the equivalence of affine group schemes and Hopf-algebras, closed subgroup schemes correspond to Hopf-ideals, and closed normal subgroup schemes correspond to so-called normal Hopf-ideals. As there is a correspondence between closed normal subgroup schemes and factor group schemes of ${\mathcal G}$ by taking the cokernel and the kernel, respectively, there is also a correspondence between normal Hopf-ideals and sub-Hopf-algebras (\cite[Thm.~4.3]{mt:cbhisha}). This correspondence is given by $$I\mapsto H(I):=\Ker\left(H\xrightarrow{\Delta - {\rm id}_H\otimes u_H}H\otimes_k H\to H\otimes_k (H/I)\right),$$ for a normal Hopf-ideal $I$, and by $$H'\mapsto (H')^+H,$$ for a sub-Hopf-algebra $H'$, where $(H')^+$ is defined to be the kernel of the counit $c_{H'}:H'\to k$. Furthermore, for a sub-Hopf-algebra $H'\subseteq H$, the category $\mathsf{Comod}(H')$ embeds into $\mathsf{Comod}(H)$ as a full subcategory.
Then there is a bijection between $${\mathfrak T}:=\{ T \in {\mathcal C}\text{-}\mathsf{Alg} \mid T\subseteq R \text{ is PV-ring for some }N\in \tenscat{M} \}$$ and $${\mathfrak N}:=\{ {\mathcal N} \mid {\mathcal N}\leq {\mathcal G} \text{ closed normal subgroup scheme of }{\mathcal G} \}$$ given by $\Psi:{\mathfrak T}\to {\mathfrak N}, T\mapsto \underline{\Aut}_{{\mathcal C}_T\text{-}\mathsf{Alg}}(R)$ resp.~$\Phi:{\mathfrak N}\to {\mathfrak T}, {\mathcal N}\mapsto R^{{\mathcal N}}$. \end{thm} Here, the ring of invariants $R^{\mathcal N}$ is the largest subobject $T$ of $R$ such that for all $k$-algebras $D$ and all $\sigma\in {\mathcal N}(D)\subset \Aut_{{\mathcal C}_{\iota(D)}}(R\otimes_k D)$, one has $\sigma|_{T\otimes_k D}={\rm id}_{T\otimes_k D}$. Equivalently, $R^{\mathcal N}$ is the equalizer of the morphisms ${\rm id}_R\otimes u_{k[{\mathcal N}]}:R\to R\otimes_k k[{\mathcal N}]$~\footnote{$k[{\mathcal N}]:={\mathcal O}_{\mathcal N}({\mathcal N})$ denotes the ring of regular functions on the affine scheme ${\mathcal N}$.} and $R\xrightarrow{\delta} R\otimes_k H\twoheadrightarrow R\otimes_k k[{\mathcal N}]$, where $\delta=\varepsilon_{R_R}^{-1}\circ (u_R\otimes {\rm id}_R)$ is the comodule map of $R$ as $H$-comodule, and $H\twoheadrightarrow k[{\mathcal N}]$ is the canonical epimorphism. \begin{proof}[Proof of Thm.~\ref{thm:galois-correspondence}] The functor $\omega_R$ is an equivalence of categories $$\omega_R:\tenscat{M}\to \mathsf{comod}(H),$$ and also of their ind-categories.\footnote{Here, $\mathsf{comod}(H)$ denotes the category of left-$H$-comodules which are finite-dimensional as $k$-vector spaces.} Hence, it provides a bijection between subalgebras of $R$ in ${\mathcal C}$ and subalgebras of $H$ stable under the left comodule structure.\\ We will show that under this bijection sub-PV-rings correspond to sub-Hopf-algebras and that this bijection can also be described as given in the theorem. 
First, let $T\subseteq R$ be a PV-ring for some $N\in \tenscat{M}$. Then $\tenscat{N}$ is a full subcategory of $\tenscat{M}$, and the fibre functor $\omega_T:\tenscat{N}\to \mathsf{vect}_k$ corresponding to $T$ is nothing else than the restriction of $\omega_R$ to the subcategory $\tenscat{N}$, as $T$ is a subobject of $R$. Hence, $H':=\omega_R(T)=\omega_T(T)$ is a sub-Hopf-algebra of $H$. Therefore, we obtain a closed normal subgroup scheme of ${\mathcal G}=\spec(H)$ as the kernel of $\spec(H)\twoheadrightarrow \spec(H')$. As $\spec(H)=\underline{\Aut}_{{\mathcal C}\text{-}\mathsf{Alg}}(R)$ and $\spec(H')=\underline{\Aut}_{{\mathcal C}\text{-}\mathsf{Alg}}(T)$, this kernel is exactly $\underline{\Aut}_{{\mathcal C}_T\text{-}\mathsf{Alg}}(R)$. On the other hand, let ${\mathcal N}$ be a closed normal subgroup scheme of ${\mathcal G}=\spec(H)$ defined by a normal Hopf-ideal $I$ of $H$, and $$H'=\Ker\left(H\xrightarrow{\Delta - {\rm id}_H\otimes u_H}H\otimes_k H\twoheadrightarrow H\otimes_k (H/I)\right)$$ the corresponding sub-Hopf-algebra of $H$.\\ The subcategory $\mathsf{comod}(H')$ is generated by one object $V$ (as every category of finite comodules is), and the object $N\in \tenscat{M}$ corresponding to $V$ via $\omega_R$, has a PV-ring $T$ inside $R$ by Thm.~\ref{thm:existence-of-pv-ring}, since $R$ is a simple solution ring for $N$ with $R^{\mathcal C}=k$. Furthermore, since $T$ is the PV-ring corresponding to the fibre functor $\omega_R:\tenscat{N}\to \mathsf{comod}(H')$, we have $\omega_R(T)=H'$. It remains to show that $T=R^{\mathcal N}$, i.e. 
that $$T=\Ker\left( R\xrightarrow{\delta\, -\,\, {\rm id}_R\otimes_k u_H}R\otimes_k H\twoheadrightarrow R\otimes_k k[{\mathcal N}]=R\otimes_k (H/I)\right).$$ As $\omega_R$ is an equivalence of categories, this is equivalent to $$\omega_R(T)=\Ker\left( \omega_R(R)\xrightarrow{\omega_R(\delta) \, -\,\, \omega_R({\rm id}_R)\otimes_k u_H}\omega_R(R)\otimes_k H\twoheadrightarrow \omega_R(R)\otimes_k (H/I)\right).$$ But, as $\omega_R(T)=H'$, $\omega_R(R)=H$ and $\omega_R(\delta)=\Delta$, this is just the definition of $H'$. \end{proof} \bibliographystyle{plain} \def\cprime{$'$}
\section{PanJoin} \label{sec:pan} \subsection{Architecture} \label{sec:archtect} PanJoin supports a sliding window theta join over two data streams (Stream $S$ and Stream $R$ in the following). PanJoin processes incoming data at two levels of parallelism: the \textit{architecture level}, or node level, where multiple worker nodes generate partial results in parallel, and the \textit{thread level}, where each worker node executes with multiple threads. \begin{figure}[!htb] \centering \begin{minipage}[b]{.40\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_sys_arch.pdf} \caption{System architecture.} \label{fig:sys_arch} \end{minipage}\hfill% \begin{minipage}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_sys_process.pdf} \caption{Process procedure. The buffer of Stream $S$ (blue) is full and is processed.} \label{fig:sys_process} \end{minipage} \hfill% \end{figure} The architecture of PanJoin is shown in Figure \ref{fig:sys_arch}. There are several worker nodes and one manager node. Each worker node has some number of subwindows. The subwindows of the same stream are chronologically chained and allocated in a round-robin fashion among the worker nodes, and there is no communication between worker nodes, which is different from (Low Latency) Handshake Join \cite{teubner2011soccer, roy2014low}. The manager node manages the topology and the locations of all subwindows. The manager node also preprocesses incoming raw data and distributes the preprocessed data together with processing commands to the worker nodes. The processing commands include: \textsl{create} subwindow, \textsl{insert} tuples, \textsl{probe} subwindow, and \textsl{expire} subwindow. To generate correct commands, the manager node needs to collect the running status from every worker node. This information is sent together with the result data from the worker nodes to the manager node with very low overhead (several bits).
The two most important messages are whether the oldest subwindow is empty and whether the newest subwindow is full. Additionally, because the manager node needs to distribute data to worker nodes, to improve network and processor utilization, PanJoin can process more than one tuple packed as a batch simultaneously, which is referred to as \textit{batch mode}. The high-level algorithm of PanJoin implements a five-step procedure, as demonstrated in Figure \ref{fig:sys_process}. Each of the steps is defined as follows: \begin{enumerate}[label=Step \arabic*] \item \textit{Collecting}: The manager node collects tuples and places them into two independent buffers, one for each stream. \item \textit{Preprocessing}: The manager node retrieves the joining field from the incoming tuples and other fields that are necessary for the theta join. In batch mode, the manager node sorts the tuples by the values of the joining field. Then, the manager node decides whether a new subwindow has to be created (i.e., whether the current newest subwindow is full), as well as whether the oldest subwindows have to expire tuples. Subsequently, the manager node generates processing commands and packs the commands with the incoming tuples into messages. \item \textit{Sending}: The manager node sends the messages to the worker nodes and ensures that the \textsl{probe} commands are sent to all the worker nodes with nonempty subwindow(s) of the opposite stream. In Figure \ref{fig:sys_process}, when the system is processing tuples from Stream $S$, the second worker node does not have a nonempty subwindow of Stream $R$. Thus, it has no need to perform probing, and it will not receive a \textsl{probe} command. The message sent to the newest subwindow has an \textsl{insert} command, and the message sent to the oldest subwindow has an \textsl{expire} command. If necessary, the manager node sends a \textsl{create} command to a worker node to create a new subwindow.
\item \textit{Processing}: The worker nodes receive messages and perform the processing commands. \item \textit{Feedback}: The worker nodes send the probing result (optional, depending on the topology of the processing nodes, e.g., worker nodes may directly forward their result to the nodes that process the ``select'' operation) and their running status back to the manager node. \end{enumerate} Note that the manager node can perform Steps 1 and 2 in parallel with Steps 4 and 5 while the worker nodes are processing the data. We can also add one or several prefilter nodes ahead of the manager node to retrieve the joining field from the incoming tuples. In our preliminary experiment, the manager node can process more than 300M tuples per second, and the main bottlenecks are Step 4 in the worker nodes and the network communication between the manager and worker nodes. To accelerate Step 4, we introduce three data structures to manage tuples inside a subwindow: \textit{Range Partition Table} (RaP-Table), \textit{Wide B$^+$-Tree} (WiB$^+$-Tree), and \textit{Buffered Indexed Sort} (BI-Sort). RaP-Table and WiB$^+$-Tree perform well when batch size and selectivity are small, while BI-Sort runs faster when batch size or selectivity is large. WiB$^+$-Tree is slower but more powerful than RaP-Table because it can handle increasing values. Users can choose one of these data structures for a subwindow, where the chosen data structure serves as an ``index'' and further divides the subwindow into partitions. Therefore, in the probing step, rather than scanning the whole window, the worker node only needs to check the tuples in a limited number of partitions, which significantly reduces the number of comparisons and improves the system throughput by several orders of magnitude. The following subsections provide detailed discussions about these data structures. \begin{comment} PanJoin supports sliding window theta join over data streams.
PanJoin handles the three major challenges of stream join processing by range partitioning the tuples in the sliding window of each stream. Using range partitioning brings several advantages. First, it naturally accelerates non-equi-join. By comparing the boundaries of join conditions, it is easy to select the partitions that store the possible tuples that may satisfy the join condition. Instead of scanning the whole window, we only need to check the tuples in the selected partitions, which significantly reduces the number of comparisons. Second, it also accelerates equi-join. We can consider range partitioning as a special form of hash partitioning. Third, when the partition is smaller than a threshold, the overhead of preparing the data and post-comparison processing will exceed the time spent on the actual comparison. In other words, the necessary number of partitions is limited to a certain value. This value depends on many factors, such as the input parsing time, the partitioning time, the memory access delay, tuple storage time, and other overheads during the join processing. This feature enables us to design a joiner with limited resource requirements, as the size of the partition table is limited. It also allows us to deploy our joiner to a heterogeneous computing platform, e.g., an FPGA. The main difficulty in using range partitioning is keeping the range partitioner adaptive to data skew. Because our target system processes real-time data, there is a strong possibility that the distribution of data values varies frequently and unpredictably. Example situations include breaking news on Twitter, a new buzzword spreading across the Internet, or some repetitive data generated by random errors in a sensor network. Those events can result in unbalanced partitioning outcomes, i.e., some partitions receive more tuples than average, which can significantly reduce the performance since it takes more time to process the tuples joining with those unbalanced partitions.
It also brings about difficulties in storage; because the size of every partition is not predictable, we need to reserve more space for each partition than the average partition size. In the worst case, the size of the reserved storage for each partition can equal the window size, which is not acceptable for a system with a large number of partitions. To handle the skewed data, PanJoin samples the incoming tuples frequently. The time interval between two sampling actions can be configured to be shorter than the period of filling the entire window with incoming data. During each sampling action, PanJoin analyzes the histograms that are generated during this interval and calculates a new partition table for the next interval. In this way, we utilize a critical feature of stream processing that has been overlooked: the tuples in a small time interval share the same information on the value distribution with many of their successors. In practice, we can calculate an approximate variance from the histograms before generating the partition table. When the data distribution changes, the variance is above a predefined threshold, and we compute the new partition table. When the distribution stays unchanged, the variance remains under the threshold, and we reuse the current partition table for the next interval. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{diagrams/Sigmod2018_-_Overview2-3.pdf} \caption{Overview of a sliding window in PanJoin.} \label{fig:overview_algorithm} \end{figure} Since the data distribution may change during each interval, in the same window, the tuples adhering to different partition tables need to be managed separately. PanJoin realizes this by dividing the entire sliding window into subwindows based on the arrival time of incoming tuples, as shown in Figure \ref{fig:overview_algorithm}.
Each subwindow holds the partitioned tuples within a time interval and an associated header which includes a partition table and several run-time histograms. There is also a global table called \textit{expiration table} dealing with expiration for the entire window. The subwindows are organized as a queue: every new tuple is inserted into the newest subwindow, and the oldest tuple is expired from the oldest subwindow. When the new tuple is inserted, the header of the newest subwindow is also updated. When the newest subwindow is full, a new subwindow is created and becomes the newest subwindow, and its partition table is calculated based on the histograms in its predecessor. Every time a new tuple arrives, the \textit{expiration table} checks whether there are tuples to be expired and lets the oldest subwindow execute expiration if needed. When the oldest subwindow is empty, it is popped from the subwindow queue, and its successor becomes the oldest subwindow. Our algorithm implements a methodology similar to RBSNLJ \cite{ya2006indexed}. We manage the tuples in each subwindow by range partitioning instead of a red-black tree. Different from handshake join \cite{teubner2011soccer, roy2014low}, each tuple is inserted into a fixed subwindow and it will not be passed through the data flow, which means that each tuple will not be compared with all the tuples in the window of the partner stream. Compared with SplitJoin \cite{najafi2016splitjoin}, ScaleJoin \cite{gulisano2016scalejoin} and CellJoin \cite{gedik2009celljoin}, we add additional tuple management (range partitioning) inside each subwindow to reduce comparisons during the probing phase of join processing. \end{comment} \subsection{RaP-Table} RaP-Table range partitions the tuples in a subwindow. The partition boundaries, which are called \textit{splitters}, are stored in the \textit{partition table}.
To find the target partition during insertion and probing, RaP-Table performs a binary search on the partition table. The main challenge for using range partitioning is handling the skew in real-time data, i.e., the data may be unevenly distributed among the partitions according to the current splitters such that the subwindow needs more time to scan some large partitions to obtain the join result. In addition, data skew introduces another challenge for storing the tuples because RaP-Table also attempts to store the tuples of the same partition contiguously to accelerate probing, and skewed data make it difficult to predict the proper size of each partition to allocate in memory. RaP-Table provides a solution to the two challenges by implementing an \textit{adjustment algorithm} and a data storage structure called \textit{Linked List Adaptive Table} (LLAT). \subsubsection{Adjustment Algorithm} \label{adjust_algo} When a new subwindow is created, it receives a new partition table calculated based on the sampling information in its predecessor. The sampling information includes three histograms: the tuple count $count_i$, the maximum tuple value $max_i$, and the minimum tuple value $min_i$ of each partition. The main idea is to scan the histogram of the tuple count to find an approximate range for each splitter and use the two other histograms to calculate a new value for each splitter.
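The splitter lookup just described can be sketched in a few lines of Python (a hedged illustration with hypothetical names, not PanJoin's actual code); the standard `bisect` module performs the binary search on the sorted partition table:

```python
import bisect

def find_partition(splitters, value):
    """Return the index of the partition that should hold `value`.

    `splitters` is the sorted partition table; splitters[i] is the
    (inclusive) upper boundary of partition i, and values above the
    last splitter fall into the final partition.
    """
    # binary search for the first splitter that is >= value
    return bisect.bisect_left(splitters, value)
```

Both insertion and probing can use the same lookup; for a non-equi-join condition such as $r.x < s.y$, the indices returned for the two boundary values delimit the contiguous range of partitions that must be scanned.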
\begin{figure*}[!htb] \centering \begin{minipage}[b]{.26\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2018_-_Adjust1.pdf} \caption{Scan the count histogram.} \label{fig:adjust1} \end{minipage}\hfill% \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_rap_worst.pdf} \caption{An adjustment worst case.} \label{fig:adjust2} \end{minipage} \hfill% \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2018_-_LLAT-2.pdf} \caption{An LLAT with $P = 4$.} \label{fig:llat} \end{minipage} \end{figure*} First, we calculate the prefix sums $sum_i = sum_{i-1} + count_i$ of the tuple count histogram and the \textit{balancing indicator} $bal_i = N / P \cdot i $, where $N$ is the total tuple count and $P$ is the partition count. The value of $bal_i$ is the ideal value of $sum_i$ if the tuples are evenly partitioned. In the example shown in Figure \ref{fig:adjust1} where $N = 16$ and $P = 4$, we have $bal_2 = 8$, which means that the first 2 partitions should have 8 tuples in the ideal case. For any two partitions $i$ and $j$, if $bal_j \in (sum_{i-1}, sum_i]$, we know that the new $j^{th}$ splitter $s^{new}_j$ should be the value of a tuple stored in the $i^{th}$ partition, i.e., $s^{new}_j \in [min_i, max_i]$. For example, in Figure \ref{fig:adjust1}, $bal_2 = 8 \in (sum_2, sum_3] = (5, 10]$, which indicates that $s^{new}_2$ should be a tuple value stored in the $3^{rd}$ partition, i.e., $s^{new}_2 \in [min_3, max_3]$. Then, we compute the value of $s^{new}_j$. Since we do not have more information about the distribution inside a partition, we assume that the tuple values follow a uniform distribution. Thus, by linear interpolation within $[min_i, max_i]$, the value of $s^{new}_j$ should be: $$ s^{new}_j = min_i + \frac{bal_j - sum_{i-1}}{count_i} \cdot (max_i - min_i) $$ The calculation of the new splitters can be performed in a single loop.
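This single loop can be sketched in Python as follows (a hypothetical rendering with illustrative names, not PanJoin's actual code; the interpolation is taken within $[min_i, max_i]$ as the uniform-distribution assumption requires):

```python
def adjust_splitters(count, mins, maxs, P):
    """Recompute the P-1 splitters from the three per-partition
    histograms (tuple count, minimum value, maximum value)."""
    N = sum(count)
    splitters = []
    sums = 0            # running prefix sum of the count histogram
    j = 1               # index of the next splitter to place
    bal = N / P         # balancing indicator bal_1
    for i in range(P):
        prev = sums
        sums += count[i]
        # every balancing indicator in (prev, sums] yields a splitter
        # inside partition i, placed by linear interpolation under the
        # uniform-distribution assumption
        while j < P and bal <= sums:
            splitters.append(mins[i] + (bal - prev) * (maxs[i] - mins[i]) / count[i])
            j += 1
            bal += N / P
    return splitters
```

For instance, with $P=2$ and a count histogram $[3,1]$ over value ranges $[0,6]$ and $[10,10]$, the single splitter moves to $4.0$, below which two of the four tuples would lie in the ideal uniform case.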
We present the pseudocode in Algorithm \ref{alg:adjust}. For each iteration, we compute the prefix sum $sum_i$ (Line 4). Because there may be more than one $bal_j$ that satisfies $bal_j \in (sum_{i-1}, sum_i]$, once we find a valid $bal_j$ (Line 5), we use a while loop to check all possible $bal_j$ values (Line 6) and calculate the new splitters (Line 7). The time complexity of the algorithm is $\mathcal{O}(P+P) = \mathcal{O}(P)$ because we generate and scan the two arrays ($sum_i$ and $bal_i$) of size $P$ only once. \begin{algorithm}[ht] \KwData{Histograms $count_i$, $max_i$, $min_i$, $i$ in $1...P$} \KwResult{New splitters $s^{new}$} \Begin{ $sum_0 \longleftarrow 0$; $bal_1 \longleftarrow N / P$; $i \longleftarrow 0$; $j \longleftarrow 1$\; \While{$sum_i < N$}{ $i \longleftarrow i + 1$; $sum_{i} \longleftarrow sum_{i-1} + count_i$\; \If{$bal_j \in (sum_{i-1}, sum_i]$}{ \While{$bal_j \le sum_i$}{ $s^{new}_j \longleftarrow min_i + \frac{bal_j - sum_{i-1}}{count_i} \cdot (max_i - min_i)$\; $j \longleftarrow j + 1$; $bal_{j} \longleftarrow bal_{j-1} + N / P $\; } } } } \caption{Splitter Adjustment} \label{alg:adjust} \end{algorithm} For this algorithm, we have constructed a worst-case scenario, as shown in Figure \ref{fig:adjust2}. In this case, before calling the adjustment algorithm, all of the data are inserted into one partition (for convenience, let us assume that it is the $1^{st}$ partition). The data are distributed inside that partition as follows: the largest value equals the splitter $s_1$, the second largest value equals $s_1/P$, the third largest value equals $s_1/P^2$, and so forth. Now, because $min_1 = 0$, $max_1 = s_1$, $bal_i = N/P \cdot i$, $count_1 = N$ and $sum_0 = 0$, the algorithm provides $s^{new}_1 = s_1/P$, and the other splitters $s^{new}_i = s_1 \cdot i/P$. However, after this adjustment, all partitions are still empty except the $P^{th}$ partition, which holds one tuple, and the $1^{st}$ partition, which holds the remaining tuples.
The same situation will repeat after another adjustment and will end once the range $[min_1, max_1]$ can no longer be subdivided by $P$. If we assume that the values are 32-bit integers, then the maximum number of adjustments is $\lceil \log_{P}2^{32}\rceil$. If $P = 1024$, then $\lceil \log_{P}2^{32}\rceil = 4$. If $P = 65536$, then $\lceil \log_{P}2^{32}\rceil = 2$. Thus, the algorithm needs only a few calls to adjust the splitters. In Section \ref{sec:eva_rap}, we show that for some commonly used distributions, the algorithm converges after only 1-3 adjustments. The size of a histogram equals the partition count $P$. RaP-Table uses three histograms (count, maximum value, and minimum value) and a partition table, which are four arrays that hold $4P$ elements in total. An element has the same size as a tuple value. Since the partition count is considerably smaller than the subwindow size $N_{Sub}$, e.g., $P=64K$ and $N_{Sub}=8M$, the overhead of memory allocation for those arrays is negligible ($64K \cdot 4 / 8M < 0.04$). \begin{comment} Suppose we have a case that the data is evenly distributed in the value range of 32-bit integers, and the distribution is suddenly narrowed down to a small range $R$. In the case of $P$ partitions in the subwindow, $R$ can be as small as $P$ (if $R < P$, the data is not partition-able by $P$ partitions). Figure \ref{fig:adjust2} shows the worst case, where only one tuple has a value at one extremum, and the values of the remaining tuples concentrate on other extremum. Because the algorithm assumes that the data follows a uniform distribution, one tuple is partitioned into one partition and the remaining $N_{sub}-1$ tuples are partitioned into another partition, which means that we have to adjust the latter partition again. The worst case can repeat recursively for several times until the value range $R$ of the tuples in one partition is equal to $P$, when the algorithm can compute the splitters in the correct way.
From the figure, we can see that this procedure takes $log_{P}R_{sub}$ iterations, where $R_{sub}$ is equal to $2^{32}$ when the values are 32-bit integers. If $P = 1000$, $\lceil log_{P}R_{sub} \rceil = 4$. Thus, the algorithm needs only a few iterations to adjust the splitters. \begin{figure} \centering \includegraphics[width=0.40\textwidth]{diagrams/Sigmod2018_-_Adjust3.pdf} \caption{A global worst case for the adjustment algorithm.} \label{fig:adjust3} \end{figure} However, in the real-time use cases, the distribution can be arbitrary. The worse case scenario could be as shown in Figure \ref{fig:adjust3}. In this case, $P-2$ partitions hold values concentrated on one extremum, and two partitions have values that are distributed among the rest of the data range. In other words, the algorithm needs to adjust two partitions with the worst case scenario discussed in the previous paragraph. Similarly, it needs $log_2[R_{sub}-(P-2)]$ iterations to get the balanced result. When $(P-2)\ll R_{sub}$, we have $log_2[R_{sub}-(P-2)] \approx log_2R_{sub}$. Given $R_{sub} = 2^{32}$, the algorithm needs 32 iterations to converge. Nevertheless, in Section \ref{sec:adjust}, we show that for some commonly used distributions, the algorithm can converge in only 1-2 iterations. Note that for each iteration, we need to repartition the tuples inside the subwindow; the total time complexicity for this operation will be $N_{sub} \cdot log_2R_{sub}$. Thus, we introduce a strategy called \textit{preliminary sampling}, meaning that we sample and calculate the new splitters when the subwindow receives a small number of tuples $N$ where $N \ll N_{sub}$. We can trigger preliminary sampling by calculating the mean absolute error (MAE) of the count histogram. If the MAE is larger than a threshold, we can either calculate the new partition table or dynamically shrink the subwindow size and allocate a new subwindow. Because MAE is easy to compute on FPGA, this strategy is hardware friendly. 
\end{comment} \subsubsection{LLAT} \begin{comment} For each sliding window, there are three kinds of data we need to store: the global expiration table, the subwindows' headers, and the partitioned tuples in each subwindow. In the following subsections, we discuss each of the categories in detail. \subsubsection{Global Expiration Table} \label{sec:expire} The global expiration table stores the partition index of every tuple in the window. The partition index is generated by the partition function of the subwindow the tuple belongs to. The expiration table stores those indexes in order of the arrival time of the tuples. Each time a new tuple arrives, it is not only processed by the subwindow but sends its partition index to the expiration table to add a new entry for itself as well. When the window is full, every time a new tuple is inserted, the expiration table pops its oldest entry and sends it to the oldest subwindow. Because the tuples of each partition inside the subwindow are also saved in order of their arrival, once the subwindow gets the partition index, it expires the oldest tuple stored in that partition. The global expiration table requires the ability to store entries in order of their arrival. For a count-based sliding window, it can be implemented as a circular buffer. For a time-based sliding window, we can realize it by a queue. However, for a time-based sliding window, each entry also needs to store a timestamp for each tuple. When a new tuple arrives, it scans one or more tuples in the tail and pops out all the tuples with an expired timestamp. Note that expired tuples may come from more than one subwindow. \subsubsection{Subwindow Header} The storage of the headers is trivial. We save the partition table and histograms as arrays. We have three histograms: one that keeps the highest value of each partition, one that preserves the lowest value of each partition, and one that saves the number of tuples in each partition. 
The histograms are updated every time a tuple arrives in the subwindow. \subsubsection{Partitioned Tuples} There are three requirements that a good storage strategy needs to satisfy to store the range-partitioned tuples. First, it must be adaptive to the data skew. Because the distribution is unknown, it must be able to deal with the most extreme condition without a high overhead, that is, all the tuples belong to a partition with an unpredictable index. Second, it should keep the memory access pattern continuous. We do not expect a strategy that allocates random addresses for each tuple in a chunk of data and links them together as a conventional linked list, which can significantly reduce the performance for comparison. Third, it should be trivial to apply on FPGA. That is the reason why we do not consider the dynamically size-adaptive containers in the standard C++ libraries, e.g., vector or list. \end{comment} RaP-Table requires a storage strategy that keeps the memory access pattern continuous and adapts to the data skew. To meet these requirements, we designed a data structure called \textit{Linked List Adaptive Table} (LLAT). Figure \ref{fig:llat} illustrates the structure of an LLAT. Each LLAT has $2P$ entries, where $P$ is the partition count. The first $P$ entries, the \textit{normal entries}, are used for all partitions, with one entry per partition. The last $P$ entries are called \textit{reserved entries} which store skewed data. The LLAT has a global pointer \textsl{PtrG} that points to the first unused reserved entry. Each entry has an array that holds $(N_{Sub} / P) \cdot \sigma$ tuples belonging to the same partition, where $N_{Sub}$ is the subwindow size and $\sigma$ is a threshold larger than 1. We suggest that $\sigma$ should be approximately 1.10-1.25 to handle the small skew in the real-time data. Each entry has two data pointers: \textsl{Head} and \textsl{Tail}. 
Similar to a linked list, each entry also has a \textsl{Next} pointer referring to another entry in the LLAT. The insertion and deletion operations are defined as follows. Initially, in every entry, the \textsl{Head} and the \textsl{Tail} point to the zero position of its array, and \textsl{PtrG} is set to $P$. Once a tuple obtains its partition index $i$, it is inserted at the partition's \textsl{Head}, which is then increased by 1. If an entry is full after an insertion, its \textsl{Next} pointer is set to the entry that \textsl{PtrG} refers to, and \textsl{PtrG} is increased by 1. In this way, we create a linked list for an unbalanced partition. Once an unbalanced partition receives another new tuple, LLAT goes through its linked list and inserts the tuple at the last entry. Deletion is straightforward. When the LLAT receives an expiration request carrying a partition index, it increases the \textsl{Tail} of the corresponding entry by 1. If the entry is empty (its \textsl{Head} equals its \textsl{Tail}), LLAT checks the entry indicated by the \textsl{Next} pointer. This process continues iteratively until LLAT finds a nonempty entry and expires the tuple. It is easy to prove that $2P$ entries are enough for all cases. Suppose that $2P$ entries were not enough. Then there must be at least $P$ entries that are full (since they require $P$ reserved entries). Because $\sigma$ is larger than 1, the subwindow would hold at least $(N_{Sub} / P) \cdot \sigma \cdot P = N_{Sub} \cdot \sigma > N_{Sub}$ tuples, which is impossible because a subwindow has at most $N_{Sub}$ tuples. The storage overhead of LLAT can be calculated as $(N_{Sub} / P \cdot \sigma) \cdot 2P / N_{Sub}-1 = 2\sigma - 1$. When $\sigma$ is 1.25, the overhead equals 150\%, meaning that we need an additional 1.5 times the data size to store all the data, i.e., the space utilization is 40\%, which is acceptable in practice.
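The insertion and expiration logic above can be sketched as follows (a simplified illustration with our own names; pointers are stored as entry indices, and we assume expirations are only requested for tuples that exist):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified LLAT sketch: 2P entries, each holding a fixed-capacity array.
// Entries 0..P-1 serve the P partitions; entries P..2P-1 are reserved for
// overflow and handed out through ptrG. Head is the insert offset, Tail
// the expire offset; Next (-1 if unset) chains an overflow entry.
struct LLAT {
    struct Entry {
        std::vector<int32_t> data;
        std::size_t head = 0, tail = 0;
        int next = -1;
    };
    std::vector<Entry> entries;
    std::size_t ptrG;  // first unused reserved entry

    LLAT(std::size_t P, std::size_t capacity) : entries(2 * P), ptrG(P) {
        for (auto& e : entries) e.data.resize(capacity);
    }

    void insert(std::size_t partition, int32_t tuple) {
        std::size_t e = partition;
        while (entries[e].next != -1) e = entries[e].next;  // walk chain
        entries[e].data[entries[e].head++] = tuple;
        if (entries[e].head == entries[e].data.size())      // entry full:
            entries[e].next = static_cast<int>(ptrG++);     // link reserve
    }

    int32_t expire(std::size_t partition) {
        std::size_t e = partition;
        // skip fully expired entries in the chain
        while (entries[e].tail == entries[e].head && entries[e].next != -1)
            e = entries[e].next;
        return entries[e].data[entries[e].tail++];  // oldest tuple
    }
};
```

Since tuples of one partition are expired in arrival order, `expire` always returns the oldest stored tuple of that partition.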
\subsubsection{Known Issues} RaP-Table has a limitation in handling incoming data with monotonically increasing values. For example, the id field of the tuples generally contains increasing values. Because RaP-Table generates the new partition table based on the data pattern in the previous subwindow, the new partition table cannot partition newly arrived tuples evenly but delivers them into one or a few partitions. For example, if the value range of the previous subwindow is $[0, 1000]$, the generated new partition table will attempt to partition values between $[0, 1000]$. If the following data have a value range of $[1000, 2000]$, the new partition table becomes useless. To address this issue, we designed a new method based on the B$^+$-tree called the \textit{Wide B$^+$-Tree} (WiB$^+$-Tree), a slower but more robust data structure, which is presented in the following subsection. \subsection{WiB$^+$-Tree} The \textit{Wide B$^+$-Tree} (WiB$^+$-Tree) is a data structure adapted from the regular B$^+$-tree; therefore, it naturally adapts to skewed data. A leaf node in a WiB$^+$-Tree is designed similarly to a partition in RaP-Table, while the internal nodes serve as the partition table that indexes the leaf nodes. The major difference between a WiB$^+$-Tree and a B$^+$-tree is that the leaf nodes are configured differently from the internal nodes. First, a leaf node has more elements than an internal node. We would like to keep the internal nodes small enough that they remain in the CPU cache. An internal node with 64 elements of 32-bit integers needs at least 256 Bytes of memory, which means that a modern CPU core is able to hold 4096 nodes in its private 1 MB L2 cache or approximately 85,000 nodes in the shared 22 MB L3 cache, which is enough for a WiB$^+$-Tree to index a large subwindow (larger than 1M). Second, the elements in the leaf nodes are unsorted, unlike the internal nodes, where all the elements are sorted.
We sort a leaf node only when we need to split it into two nodes. Suppose that the width of a full leaf node is $W$; if we kept the leaf node sorted at all times, the time complexity of filling a node (inserting $W$ elements into it) would be $\mathcal{O}(W^2)$, while a single sort at the time of node splitting has a complexity of only $\mathcal{O}(W \log W)$. In our preliminary experiment, we found that a WiB$^+$-Tree with unsorted leaf nodes commonly has 3-5 times lower insertion time than one with sorted leaf nodes. Third, no internal node has duplicate elements. All the tuples with the same value are inserted into the same leaf node, which may cause some leaf nodes to have more elements than their maximum width. To efficiently store the extra tuples, we use an LLAT to organize all the tuples stored in all the leaf nodes. A slight difference between the LLAT in WiB$^+$-Tree and in RaP-Table is that the \textsl{PtrG} pointer initially points to the first partition, since there is no leaf node serving as a partition when the subwindow is empty. Each leaf node holds a pointer to an entry in the LLAT, while the other information of the leaf node, such as the node size, can be held in the CPU cache. \subsection{BI-Sort}\label{sec:bisort} After we designed RaP-Table and WiB$^+$-Tree, we observed that both data structures sort the subwindow only at a coarse level. RaP-Table behaves similarly to a bucket sort, while WiB$^+$-Tree indexes those ``buckets'' with a heterogeneous B$^+$-Tree. Thus, we designed a data structure that genuinely sorts the entire subwindow at a fine level. There are two challenges in designing a sort-based data structure. First, we have to spend $\mathcal{O}(N)$ time to insert a new tuple into the correct position. Although we can find the address in $\mathcal{O}(\log N)$ time through a binary search, we still need $\mathcal{O}(N)$ time to shift the tuples larger than the inserted tuple to their new addresses.
Second, in the probing operation, even though we can find the target tuple in $\mathcal{O}(\log N)$ time with a binary search, each step of the binary search incurs what is effectively a random memory access. Although the first few accesses may hit the cache, when $N$ becomes large, e.g., 8M = $2^{23}$, most of the memory accesses have a considerable delay, which slows the probing operation. BI-Sort overcomes the first challenge with an \textit{insertion buffer}. When a new tuple arrives, it is first inserted into the insertion buffer. The insertion buffer has a limited size $B$, and its data are unsorted. The new tuple remains in the buffer until the buffer is full; then the buffer is sorted and merged into the subwindow data, which are stored in memory as a sorted array called the \textit{main array}. Merging two sorted arrays requires $\mathcal{O}(M+B)$ time ($M$ is the size of the main array), which is amortized over $B$ tuples. Therefore, we can significantly reduce the insertion time such that insertion is not the bottleneck of stream join processing. When the subwindow is probed, the tuples in both the main array and the insertion buffer are probed. Since the insertion buffer has a limited size, it can be kept in the cache, so a linear scan over it is cheap. To address the second challenge, BI-Sort adds an \textit{index array} for the main array. The index array is updated immediately after the insertion buffer is merged into the main array. The index array samples the value of every $(M/P)$-th tuple in the main array, where $P$ is the size of the index array, i.e., the partition count. The memory space between two adjacent indexed tuples is called a \textit{partition}. When $P$=64K, an index of 32-bit integers needs only 256 KB, which can easily be stored in the private L2 cache.
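The buffered insertion path can be sketched as follows (a minimal illustration with our own names; rebuilding the index array after each merge is noted but omitted):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of BI-Sort's insertion path: new tuples go into a small unsorted
// buffer; when the buffer reaches capacity B, it is sorted and merged into
// the sorted main array, amortizing the O(M+B) merge over B insertions.
struct BISort {
    std::vector<int32_t> main_arr;  // always kept sorted
    std::vector<int32_t> buffer;    // unsorted, bounded by B
    std::size_t B;

    explicit BISort(std::size_t buffer_size) : B(buffer_size) {}

    void insert(int32_t v) {
        buffer.push_back(v);
        if (buffer.size() == B) flush();
    }

    void flush() {
        std::sort(buffer.begin(), buffer.end());
        std::vector<int32_t> merged(main_arr.size() + buffer.size());
        std::merge(main_arr.begin(), main_arr.end(),
                   buffer.begin(), buffer.end(), merged.begin());
        main_arr.swap(merged);
        buffer.clear();
        // The index array would be rebuilt here by sampling every
        // (M/P)-th value of main_arr; omitted in this sketch.
    }
};
```

A probe must scan both `main_arr` (via the index) and the small in-cache `buffer`, matching the description above.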
When BI-Sort needs to perform a binary search on the main array, it first performs the binary search on the index to find the target partition, and then it performs a binary or linear search (batch mode, discussed in the following subsection) inside the partition. Note that usually $P \ll M$; therefore, the time complexity of merging the main array together with updating the index is $\mathcal{O}(M+B+P) = \mathcal{O}(M+B)$. Thus, updating the index has little effect on the performance. In conclusion, to overcome the disadvantages of the traditional sort-based solution, we add a buffer and an index to a ``merge sort''-based data structure, which is named \textit{Buffered Indexed Sort}. BI-Sort has the smallest storage overhead among the three data structures: the main array has the same size as the subwindow, while RaP-Table and WiB$^+$-Tree use the LLAT, which needs more space than BI-Sort. With the same number of partitions, the index array and the insertion buffer of BI-Sort also require less space than the partition table and histograms in RaP-Table, as well as the tree nodes in WiB$^+$-Tree. BI-Sort is also the simplest algorithm, which allows us to implement it on FPGA. Furthermore, because BI-Sort saves the tuples in order, the theta join result of a probing tuple can be represented as a $<\mathsf{id_{start}}, \mathsf{id_{end}}>$ record (with a label \textit{not} for the condition $\neq$), where $\mathsf{id_{start}}$ is the index of the main array where the result starts and $\mathsf{id_{end}}$ is the index where the result ends. If the join condition has more than one band, e.g., $a \in [b-5, b+5] \lor a \in [b+20, b+35]$, BI-Sort uses as many records as there are bands to represent the result. Therefore, when the selectivity is very high (e.g., most of the tuple values in the stream are the same), RaP-Table and WiB$^+$-Tree need to copy a large number of tuples into the result, while BI-Sort simply returns index records.
In this way, BI-Sort can significantly reduce the size of the feedback message to the manager node. To support this, the manager node has to maintain a mirror of the main array of every subwindow. The main array in the manager node will perform the same insertion as in the corresponding worker node. In our preliminary experiment, there is no performance overhead to keep a mirror of each subwindow, since the insertion in the manager node is performed in parallel with the newest worker node. \begin{comment} Compare to a tree structure, LLAT offers two advantages summarized in the two following paragraphs. First, it keeps the memory access pattern continuous. In each entry of an LLAT, the tuples are saved continuously as an array. Those tuples share only one random memory access penalty. In a balanced tree, each node is added based on its created time. Thus, the memory location of those nodes is continuous in the aspect of the arriving time of tuples instead of their logical relationship. Furthermore, their logical relationships are frequently changed during insertion of nodes. Thus, in practice, each read or write to a tree node is a random memory access. Because our sliding window can be large (millions of tuples), there is not enough space to hold all the trees of every subwindow in the CPU cache. Therefore, we have to perform a large amount of random access to main memory during the comparison phase of join processing. Suppose the access penalty is 200 cycles. During this time, we can linearly scan 200 tuples in a fully pipelined fashion on CPU and may scan more if the implementation is empowered by SIMD! Thus, when the selectivity is higher than a threshold (in this example is $1/200=0.5\%$), it is even better to use nested-loop join instead of a tree-based indexed join. Second, it makes hardware design simple. Similar to CPU, an FPGA chip also has limited internal memory resources, called Block RAMs (BRAMs). 
A typical size of total BRAM is around 1.6-8.6MB on the newest Xilinx FPGA \cite{xilinx2017ultrascalep}, which means that an FPGA can hold less than 1.1 million tuples if each tuple has a 32-bit value and 32-bit key. Recent studies have presented a tree implementation that keeps a maximum number of 219, or 512K nodes, on an FPGAs with a large capacity of BRAMs \cite{qu2013high}. Apparently, we can not store large trees on the FPGA chip. If we put trees in the external memory, we will encounter the same problem we have discussed above. Besides, implementing trees is not trivial on FPGA. However, PanJoin only needs a few pointers, while tuples are stored in the external memory. In the real implementation, those pointers are configured as integers that serve as offsets of the entries. Thus, it is straightforward to implement LLAT on FPGA. Note that for every insertion, the LLAT has to go through the entire linked list of one partition to find the destination entry. This process will slow down the performance when most of the tuples concentrate on a few partitions. A trick is to maintain a table that saves the id of all the final entries for each partition. Thus, for the insertion, only once access is necessary. Similarly, a table for expiration is also required to reduce the processing time. \end{comment} \subsection{Batch Mode} \begin{figure} \centering \includegraphics[width=.40\textwidth]{diagrams/Sigmod2019_-_rebounding_bi.pdf} \caption{Rebounding binary search.} \label{fig:rebounding} \end{figure} In PanJoin, the manager node needs to send incoming data to the worker nodes. To fully utilize the network bandwidth, PanJoin supports \textit{batch mode} where a batch of tuples will be processed simultaneously. In addition to batching the tuples at the architecture level, all three of the data structures support batch mode for insertion and probing. RaP-Table uses a modified partitioning algorithm for insertion and probing in batch mode. 
Rather than a regular binary search, RaP-Table uses a \textit{rebounding binary search}, shown in Figure \ref{fig:rebounding}. A rebounding binary search has two phases: a forward phase and a backward phase. Starting from an initial element of the sorted array, the algorithm moves in one direction in the forward phase, turns back into the backward phase once it reaches a value larger than the target value, and then performs a regular binary search. The step length is doubled after each comparison in the forward phase and halved in the backward phase. Because all the tuples in the batch are presorted by the manager node, the target partition of the next tuple must have an index larger than or equal to that of the current tuple. Therefore, during insertion and probing, RaP-Table uses the rebounding binary search to find the target partitions of the batched incoming tuples: each tuple starts its search from the partition id of its previous tuple, which is faster than performing a ``complete'' binary search per tuple. In this manner, the partition phase accounts for less than 10\% of the total execution time of the insertion and probing operations, which is competitive with a hash-based solution, e.g., SHJ. When probing the subwindow, the tuples with the same target partition can share the same memory accesses, which also notably improves the performance. When traversing the tree, WiB$^+$-Tree starts from the node path of the previous batch tuple, backtracks to the upper levels, and then goes downwards to the correct leaf node if necessary. During probing, similar to RaP-Table, WiB$^+$-Tree first finds the target partitions for all the batch tuples, and then it scans the partitions to improve the performance. Both RaP-Table and WiB$^+$-Tree use a nested loop (inner: related batch tuples, outer: partition tuples) to scan a target partition for several related batch tuples.
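The rebounding binary search can be sketched as follows (our own illustration, essentially an exponential/galloping search; it assumes the answer lies at or after `start`, which holds because the batch is presorted):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the rebounding binary search: starting from `start`, the
// forward phase doubles the step until the probed value exceeds the
// target; the backward phase is a regular binary search over the
// bracketed range. Returns the index of the first element > target,
// i.e., the target partition id when applied to the splitter table.
std::size_t rebounding_search(const std::vector<int32_t>& sorted,
                              std::size_t start, int32_t target) {
    std::size_t lo = start, hi = start, step = 1;
    // forward phase: every index below lo is known to be <= target
    while (hi < sorted.size() && sorted[hi] <= target) {
        lo = hi + 1;
        hi += step;
        step *= 2;
    }
    if (hi > sorted.size()) hi = sorted.size();
    // backward phase: binary search in [lo, hi)
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] <= target) lo = mid + 1; else hi = mid;
    }
    return lo;
}
```

Consecutive batch tuples pass the previous result as `start`, so each search only walks forward from the last hit instead of restarting over the whole table.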
BI-Sort first compares the sizes of the batch and the insertion buffer before the actual insertion. If the batch is larger than the buffer, BI-Sort directly merges the batch into the main array. Otherwise, it places all the batch tuples into the insertion buffer. When probing, BI-Sort first uses the rebounding binary search to find the target partitions of the batch tuples, and then it scans the corresponding partitions. Because the tuples in the partition and the related batch tuples are both sorted, the result index of tuple $t_{i+1}$ must be larger than or equal to the result index of $t_i$. Thus, rather than a nested loop, BI-Sort can simply scan the partition with a merge-like loop: if the current partition tuple is smaller than the join condition of the current batch tuple, we increase the iterator of the partition tuples; otherwise, we increase the iterator of the batch tuples. In this way, each partition tuple is accessed only once; thus, BI-Sort can achieve higher performance than the other two data structures. We illustrate this algorithm using equi-join as an example. In RaP-Table and WiB$^+$-Tree, because a partition is unsorted, a partition tuple in partition $P$ needs to be joined with all the batch tuples with the same target partition $P$. In BI-Sort, when a batch tuple $t_1$ reaches a partition tuple with a larger value than $t_1$, it stops probing and this partition tuple becomes the start of the next batch tuple $t_2$. Here, we assume that $t_2$ has a larger value than $t_1$; otherwise, $t_2$ has the same value as $t_1$ (the batch is sorted), and BI-Sort copies the join result from $t_1$ to $t_2$. In this way, a partition tuple in partition $P$ only needs to be joined with a single batch tuple. Although RaP-Table and WiB$^+$-Tree could sort their partitions after the subwindow is full and stays unchanged, doing so causes extra overhead and may significantly reduce the performance in some cases.
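The merge-like probe loop can be sketched for equi-join as follows (a simplified illustration with our own names; both the partition slice and the batch are sorted ascending, and duplicate batch values share the same partition tuple):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Sketch of BI-Sort's merge-like equi-join probe: one pass over the
// partition tuples suffices because both inputs are sorted. Emits
// (batch index, partition index) pairs for every match.
std::vector<std::pair<std::size_t, std::size_t>>
probe_partition(const std::vector<int32_t>& part,
                const std::vector<int32_t>& batch) {
    std::vector<std::pair<std::size_t, std::size_t>> matches;
    std::size_t p = 0, b = 0;
    while (p < part.size() && b < batch.size()) {
        if (part[p] < batch[b]) {
            ++p;  // advance the partition iterator
        } else if (part[p] > batch[b]) {
            ++b;  // advance the batch iterator
        } else {
            // equal values: pair this partition tuple with every batch
            // tuple of the same value, then move on in the partition
            for (std::size_t k = b;
                 k < batch.size() && batch[k] == part[p]; ++k)
                matches.emplace_back(k, p);
            ++p;
        }
    }
    return matches;
}
```

Each partition tuple is read exactly once, in contrast to the nested loop used by RaP-Table and WiB$^+$-Tree over unsorted partitions.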
All the partitions in RaP-Table and WiB$^+$-Tree therefore remain unsorted in this paper. In practice, the manager node sets two conditions for batch mode, a maximum collecting time and a maximum tuple count: once either condition is satisfied, the manager node packs all the collected tuples into a batch and starts processing. \subsection{Equi-Join vs. Non-Equi-Join} \label{sec:join_con} The three data structures each have different strategies for probing partitions under equi-join and non-equi-join conditions. \subsubsection{Equi-join} In RaP-Table and WiB$^+$-Tree, each probing tuple has a single target partition. In BI-Sort, a probing tuple may have multiple target partitions because tuples with the same value may be stored across several partitions. Therefore, in BI-Sort, the rebounding binary search provides the target partition $P_i$ with the lowest id $i$. Then, BI-Sort converts the current equi-join $x=v$ into a non-equi-join $x \in [v, v^+)$, where $v^+$ denotes the smallest value larger than $v$. \subsubsection{Non-equi-join} Except for the condition $\neq$, which can be handled by an equi-join with a filtering operation, PanJoin calculates the upper bound and lower bound of the band of the join condition for each probing tuple. Then, each data structure probes the target partition $P_{low}$ of the lower bound and, if $P_{low} \neq P_{up}$, the target partition $P_{up}$ of the upper bound. Additionally, RaP-Table and WiB$^+$-Tree copy the partitions between $P_{low}$ and $P_{up}$, while BI-Sort can skip this procedure if it chooses $<\mathsf{id_{start}}, \mathsf{id_{end}}>$ as its result format. \subsection{Miscellaneous Implementation Decisions} \subsubsection{Expiration} PanJoin expires an entire subwindow (the oldest one) instead of several tuples in the oldest subwindow. Therefore, none of the three data structures currently has a deletion operation (LLAT supports deletion, but RaP-Table does not use it).
Since there will be a number of subwindows for a stream and all of them are probed in parallel, one extra subwindow does not cause much overhead. While probing the oldest subwindow, we employ a filtering operation to remove expired tuples from the result. \subsubsection{Count-based Window vs. Time-based Window vs. Out-of-Order Window} A time-based window needs extra fields in each tuple to save the event time or arrival time, depending on the application requirements; the three data structures themselves are used without modification. The running status of each worker node saved in the manager node is slightly different: for a count-based window, the manager node monitors the tuple count in each subwindow, while for a time-based window, the manager node also saves the time fields of the oldest and newest tuples in each subwindow. These fields are used for expiring the subwindow: when all the tuples are older than the watermark, the entire subwindow is expired. This mechanism can also handle out-of-order tuples. In addition, when there are a few late-arriving tuples in a subwindow, the subwindow can put them into a small buffer instead of its data structure so that a probing request does not need to probe the whole data structure to find the join result. \subsubsection{Multithreading in Worker Nodes} When we implemented the three data structures in batch mode, we found that a single thread on a CPU cannot fully utilize the memory bandwidth. Therefore, in batch mode, all three data structures perform probing with multithreading. The workload is divided based on how many partitions each thread needs to probe. In addition, BI-Sort also performs insertion with multiple threads, where BI-Sort tries to balance the size of each piece of the partially merged main array. \section{Evaluation} \label{sec:eva} In this section, we first show the analytical performance of the three data structures on CPU and of BI-Sort on FPGA.
Then, we show the throughput of PanJoin as a whole system and the comparisons with other stream join solutions. All the throughput axes in the following figures are in \textbf{log} scale. \subsection{Analytical Evaluation} \label{sec:ana_eva} \begin{figure*}[!htb] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/rap_insert_1.tikz} \caption{Insertion, $N_{sub}$ = 8M.} \label{fig:rap_insert_1} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/rap_insert_2.tikz} \caption{Insertion, $P$ = 64K.} \label{fig:rap_insert_2} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/rap_probe_1.tikz} \caption{Equi-join probing, selectivity $S$ = 1.} \label{fig:rap_probe_1} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/rap_probe_2.tikz} \caption{Equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:rap_probe_2} \end{subfigure} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/rap_probe_3.tikz} \caption{Non-equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:rap_probe_3} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/rap_adjust.tikz} \caption{Adjustment algorithm. $N_{sub}$ = 510K for Youtube and $N_{sub}$ = 8M for the rest.} \label{fig:rap_adjust} \end{subfigure} \caption{Performance of RaP-Table.} \label{fig:rap} \end{figure*} We implemented the three data structures in C++17. The program was tested on one node of a high-performance cluster released in 2018. The node has two 20-core Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} Gold 6148 processors and 192 GB DDR4 memory with a total bandwidth of 150 GB/s.
We evaluate the performance of the insertion and probing operations for each data structure by measuring the throughput of input tuples (unit: tuples per second) on the host subwindow of the data structure. Each input tuple has a format of $<\mathsf{key}, \mathsf{value}>$, where $\mathsf{key}$ and $\mathsf{value}$ are both 32-bit integers. We use band join to test the performance of PanJoin processing a non-equi-join. The band join for Stream S is defined as: $$ \bm{\mathsf{WHERE}}\; \mathsf{s.value}\; \bm{\mathsf{BETWEEN}}\; \mathsf{r.value} - \epsilon\; \bm{\mathsf{AND}}\; \mathsf{r.value} + \epsilon $$ and vice versa for Stream R, where $\epsilon$ is used to control the selectivity. The throughput is mainly influenced by the following factors: subwindow size $N_{Sub}$, batch size $N_{Bat}$, partition count $P$, partition size $N_P = N_{Sub}/P$, and selectivity $S$, defined as the average number of matching tuples in a full subwindow per probing tuple. We use count-based windows to present the throughput because they make it easier to show the correlation between throughput and the factors it depends on, whereas changing the input rate of a time-based window changes both the window size and the throughput. Features specific to individual data structures are also presented in the remainder of this section. RaP-Table and WiB$^+$-Tree perform insertion with 1 thread, and BI-Sort performs insertion with 8 threads. The probe operations of all data structures are parallelized with 8 threads when $N_{Bat} \geqslant 128$ and are executed with 1 thread when $N_{Bat} < 128$. \subsubsection{RaP-Table on CPU} \label{sec:eva_rap} We first show the insertion throughput with a fixed subwindow size $N_{Sub}$=8M in Figure \ref{fig:rap_insert_1}. A larger batch size provides better performance because tuples in the same partition can share memory accesses to the LLAT, and fewer partitions (smaller $P$) save time during binary search on the partition table.
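The partition lookup that dominates insertion time can be sketched as a binary search over the sorted lower bounds of the partition table. All names here are ours for illustration (the paper does not expose this interface); we assume the first lower bound is 0 so that every key falls into some partition.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <algorithm>

// Hypothetical sketch: locate the target partition of a tuple key by binary
// search on the partition table, modeled as the sorted lower bound of the
// value range covered by each partition. Assumes bounds[0] == 0 so every
// 32-bit key maps to exactly one partition.
std::size_t find_partition(const std::vector<uint32_t>& partition_lower_bounds,
                           uint32_t key) {
    // upper_bound returns the first bound strictly greater than key;
    // the tuple belongs to the partition just before that position.
    auto it = std::upper_bound(partition_lower_bounds.begin(),
                               partition_lower_bounds.end(), key);
    return static_cast<std::size_t>(it - partition_lower_bounds.begin()) - 1;
}
```

In batch mode, tuples destined for the same partition can reuse the result of one such lookup, which is consistent with the observation above that larger batches amortize the binary-search cost.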
When $N_{Bat}$=8M, RaP-Table reaches its peak throughput (approximately 100M tuples/s). In Figure \ref{fig:rap_insert_2} with a fixed $P$, we observe a slight but not significant advantage of a smaller subwindow size $N_{Sub}$, which indicates that the insertion throughput of RaP-Table mainly relies on the scan of the partition table and memory access to the LLAT. Figure \ref{fig:rap_probe_1} presents the throughput of equi-join with a selectivity $S$=1, meaning that on average, a probing tuple matches one tuple in the subwindow. Here, a smaller $N_P$ provides better performance because fewer tuples are accessed per probing tuple. Additionally, a smaller subwindow runs faster because it has a smaller partition table to scan. When $S$ increases, as shown in Figure \ref{fig:rap_probe_2}, the throughput begins to decrease. Meanwhile, when $S$ is small, a larger partition count $P$ provides higher throughput since it corresponds to a smaller $N_P$. The performance of non-equi-join probing shown in Figure \ref{fig:rap_probe_3} is similar to that of equi-join probing: a smaller $S$ and a larger $P$ provide higher throughput. In Figure \ref{fig:rap_adjust}, we also present the performance, i.e., the normalized MAE (mean absolute error), under the multimodal normal distributions with legends ``N(normalized $\sigma$, modal count, $P$)'', uniform distributions with legends ``U(normalized range, modal count, $P$)'', and a real dataset of Youtube videos (first file, depth 4) \cite{chen2008youtube}. In the Youtube data, the values (view counts) follow a rank-size distribution, where 99\% of the values fall within 1\% of the data range, or 0.01\% of the range of a 32-bit integer. Initially, the partition table assumes that the values are evenly distributed over the range of a 32-bit integer. We can observe that subwindows with smaller $P$ require fewer iterations to converge and deliver more balanced outcomes (with a lower MAE).
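The balance metric used above can be sketched as follows; the function name and interface are ours. The normalized MAE is the mean absolute error of per-partition tuple counts divided by the ideal partition size $N_P = N_{Sub}/P$, so a value near 0 means the adjusted partition table splits the subwindow almost evenly.

```cpp
#include <vector>
#include <cstddef>
#include <cmath>

// Illustrative sketch of the normalized MAE metric: mean absolute deviation
// of each partition's tuple count from the ideal size N_P, divided by N_P.
double normalized_mae(const std::vector<std::size_t>& partition_counts,
                      std::size_t subwindow_size) {
    double ideal = static_cast<double>(subwindow_size)
                   / static_cast<double>(partition_counts.size());
    double abs_err = 0.0;
    for (std::size_t c : partition_counts)
        abs_err += std::fabs(static_cast<double>(c) - ideal);
    return (abs_err / partition_counts.size()) / ideal;  // MAE / N_P
}
```

For example, four partitions holding $\{150, 50, 100, 100\}$ tuples out of a 400-tuple subwindow give a normalized MAE of 0.25, while a perfectly balanced split gives 0.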
Under each distribution, RaP-Table is able to converge in 3 iterations, which proves our statement in Section \ref{adjust_algo}. \begin{comment} We first explore the throughput of insertion in RaP-Table. In Figure \ref{fig:rap_insert_1} with a subwindow size of 8M, different lines show the average insertion throughputs using left y-axis. We can observe that increasing the batch size can improve the insertion performance significantly. With the same batch size, larger partition count can slow down the performance because RaP-Table has a larger partition table to process. However, when the batch size gets larger, the throughput difference gets smaller, which shows the effectiveness of the rebounding binary search in the batch mode of RaP-Table. The bars in Figure \ref{fig:rap_insert_1} shows the throughput of inserting the last batch before the subwindow is full, using the right y-axis. This data is important because the subwindow can be a bottleneck if the last insertion is slow. We can see that when the batch size is smaller than 128, the throughput is even lower than 10000. Therefore, when the input rate is high, we need a large batch size to handle the incoming data. We also explore the relationship between the throughput and the subwindow size. In Figure \ref{fig:rap_insert_2}, with a fixed partition count, larger batch size still gives higher throughput, while the subwindow size does not affect the performance much. Note that in Figure \ref{fig:rap_insert_2}, for each line, the batch size must be smaller than $N_{Sub}$; otherwise the subwindow is overfilled. For probing, we first explore equi-join. Figure \ref{fig:rap_probe_1} gives the throughput of equi-join with a selectivity of 1, meaning that on average a probing tuple matches one tuple in the subwindow. This figure shows two series of data: a series with average partition size $N_P = 64$ and another with $N_P = 512$, where $N_P = N_{Sub}/P$. 
Therefore, a bigger subwindow size $N_{Sub}$ corresponds to a bigger partition count $P$. We can see that a large batch size can significantly improve the throughput by as much as three orders of magnitude. When batch size gets larger, a smaller average partition size $N_P$ gives higher throughput because each probing tuple needs to compare with $N_P$ tuples in the target partition. With the same $N_P$ and batch size, smaller subwindow runs faster, since it has fewer partitions to probe and a smaller partition table to visit. When the selectivity $S$ increases, as shown in Figure \ref{fig:rap_probe_2}, the throughput starts to drop when selectivity is larger than 1K ($2^{10}$). When $S$ is small, a larger partition count $P$ gives higher throughput, since it corresponds to a smaller $N_P$, the number of the total tuples for each probing tuple to compare. Note that when selectivity $S$ becomes larger than 1K ($2^{10}$), $N_{Sub}/S < P$, meaning that the number of non-empty partitions is equal to $N_{Sub}/S$ (we are testing equi-join), and the number of tuples in a non-empty partition is $S$. In these case, the throughput does not differ among the given $P$. The performance of non-equi-join probing (here we use band-join with different selectivities) shown in Figure \ref{fig:rap_probe_3} is similar to the equi-join probing, except that there is a throughput drop in each line, e.g., $P$=256K at selectivity $S=2^6$. In this case, $S > N_P$, meaning that RaP-Table needs to probe both the partition with the lower bound of the join condition and the partition with the upper bound of the join condition, as well as copy the middle partitions to the result. 
Comparing with Figure \ref{fig:rap_probe_2} and Figure \ref{fig:rap_probe_3}, we can observe that non-equi-join is slower than equi-join by approximately 30\% when selectivity is small, due to the extra check of the partition table for both the lower bound and upper bound of the join condition for each tuple, where in equi-join RaP-Table checks the partition table for the tuple value only once per tuple. However, with a larger selectivity, non-equi-join is faster than equi-join, because in non-equi-join RaP-Table does not really compare the value in the middle partitions but instead copies them straightly to the result, which is faster than reading and comparing the same amount of tuples as being done in the equi-join probing. We also test the adjustment algorithm under normal distributions, uniform distributions and a real data set of Youtube videos \cite{chen2008youtube, chen2008youtubedata}. We measure the performance by the normalized mean absolute error (MAE) of tuple count in every partition of the subwindow. The normalized value is the raw MAE divided by the average partition size $N_P$. Figure \ref{fig:rap_adjust} shows the normalized MAE under different distributions. The thin solid lines present performance under the multimodal normal distributions with legends ``(normalized $\sigma$, the number of modals, partition count $P$)''. We can see that the normalized MAE quickly reduces to 0.02 - 0.06 after 2-3 iterations of adjustment, which can be considered as a balanced status. Here, subwindows with smaller $P$ (blue or green lines) have smaller MAE (0.1-0.24) than those with larger $P$ (red or orange lines, with MAE around 0.5). Furthermore, a smaller $\sigma = \frac{1}{60000}$ causes 1 extra iteration than a larger $\sigma = \frac{1}{600}$ to reach the balance status. The number of modals does not affect the performance much. 
The thin dashed lines present throughput under the multimodal uniform distributions with legends ``(normalized range, the number of modals, partition count $P$)''. Here, we only use one range value $\frac{1}{6000}$ to illustrate the performance as we do not find much difference in performance among different ranges. Similar to the situation under the normal distributions, subwindows with smaller $P$ have smaller MAE (around 0.1) than those with larger $P$ (around 0.5). For the Youtube data, we use the view count of 510K Youtube videos stored in the depth 4 of the first file in the dataset \cite{chen2008youtube, chen2008youtubedata}. We tested the performance with 2 different $P=128$ and $P=512$, shown as bold solid lines in Figure \ref{fig:rap_adjust}. We do not use large $P$ because the data cannot be divided by $P \geqslant 1024$. The data follows a rank-size distribution, where 99\% falls in the 1\% of the data range, or 0.01\% of the value range of a 32-bit integer. Though the data is highly skewed, RaP-Table only needs 1-3 iterations to adjust, where smaller $P$ gives smaller MAE than larger $P$. 
\end{comment} \subsubsection{WiB$^+$-Tree on CPU} \begin{figure*}[!htb] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/wib_insert_1.tikz} \caption{Insertion, $N_{sub}$ = 8M.} \label{fig:wib_insert_1} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/wib_insert_2.tikz} \caption{Insertion, $P$ = 64K.} \label{fig:wib_insert_2} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/wib_probe_1.tikz} \caption{Equi-join probing, selectivity $S$ = 1.} \label{fig:wib_probe_1} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/wib_probe_2.tikz} \caption{Equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:wib_probe_2} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/wib_probe_3.tikz} \caption{Non-equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:wib_probe_3} \end{subfigure} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/wib_comp.tikz} \caption{Speedup of WiB$^+$-Tree over B-Tree. $N_{Sub}$ = 8M, $P$=128K.} \label{fig:wib_comp} \end{subfigure} \caption{Performance of WiB$^+$-Tree.} \label{fig:wib} \end{figure*} We use the same metrics as for RaP-Table to test the performance of WiB$^+$-Tree. Here, the partition count $P$ is the number of leaf nodes in the tree. The number is approximate because the tree structure may vary with different randomly generated data. Similar to RaP-Table, in Figure \ref{fig:wib_insert_1} and Figure \ref{fig:wib_insert_2}, we can also observe the benefit brought by a large batch size for insertion operations. 
However, with a fixed $N_{Sub}$, the insertion throughput of WiB$^+$-Tree is not sensitive to the value of $P$, as shown in Figure \ref{fig:wib_insert_1}. This is because we do not keep the tuples in the leaf node sorted; therefore, the size of the leaf node (=$N_{Sub} / P$) does not affect the throughput much. Figure \ref{fig:wib_insert_2} shows that with a fixed $P$, larger subwindows ($>$512K) generally perform worse because the LLAT is too large to fit in the cache, which leads to more memory accesses and lower throughput. In Figure \ref{fig:wib_probe_1}, we can observe that the impact of $N_P$ on the probing throughput of WiB$^+$-Tree is similar to that on RaP-Table. Figure \ref{fig:wib_probe_2} and Figure \ref{fig:wib_probe_3} also show similar throughput decreases when the selectivity is large. Both WiB$^+$-Tree and RaP-Table reach an ideal throughput when $P>$16K. Figure \ref{fig:wib_comp} shows the speedup of WiB$^+$-Tree over a regular B-Tree implemented by Google \cite{btree2011google}, where $N_{Sub}$ = 8M, $P$=128K, and $S$ varies from 1 to 16K. The insertion speedup is 3.5-4.5x because WiB$^+$-Tree does not sort the leaf nodes. When the batch size $N_{Bat}$ is larger, the speedup of equi-join and non-equi-join probing becomes higher (up to 2.8x for equi-join and 1.3x for non-equi-join), which shows the efficiency of batch mode. These results confirm the correctness of our design rationale.
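The leaf-level difference behind the insertion speedup can be sketched as follows (names and representation are ours, not the paper's implementation): WiB$^+$-Tree simply appends into an unsorted leaf, while a classic B-tree leaf must keep its keys ordered and therefore shifts elements on every insert.

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>

// Illustrative sketch: the unsorted-leaf append of WiB+-Tree vs. the
// sorted insert of a conventional B-tree leaf, each leaf modeled as a vector.
void wib_leaf_insert(std::vector<uint32_t>& leaf, uint32_t key) {
    leaf.push_back(key);                 // O(1): order is restored lazily
}

void btree_leaf_insert(std::vector<uint32_t>& leaf, uint32_t key) {
    auto pos = std::lower_bound(leaf.begin(), leaf.end(), key);
    leaf.insert(pos, key);               // O(leaf size): keeps the leaf sorted
}

// Helper for illustration: insert a fixed key sequence with either routine.
std::vector<uint32_t> demo_leaf_insert(void (*ins)(std::vector<uint32_t>&,
                                                   uint32_t)) {
    std::vector<uint32_t> leaf;
    for (uint32_t k : {5u, 3u, 9u, 1u}) ins(leaf, k);
    return leaf;
}
```

The append path avoids both the binary search and the element shift, which is consistent with the 3.5-4.5x insertion speedup reported above.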
\subsubsection{BI-Sort on CPU} \begin{figure*}[!htb] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/bis_insert_1.tikz} \caption{Insertion, $N_{sub}$ = 8M.} \label{fig:bis_insert_1} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/bis_insert_2.tikz} \caption{Insertion, $P$ = 64K.} \label{fig:bis_insert_2} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/bis_probe_1.tikz} \caption{Equi-join probing, selectivity $S$ = 1.} \label{fig:bis_probe_1} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/bis_probe_2.tikz} \caption{Equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:bis_probe_2} \end{subfigure} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/bis_probe_3.tikz} \caption{Non-equi-join probing, $N_{sub}$ = 8M \\ and batch size $N_{Bat}$ = 32K.} \label{fig:bis_probe_3} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{figures/bis_buffer.tikz} \caption{Insertion throughput for different buffer sizes, $N_{Sub}$ = 64K.} \label{fig:bis_buffer} \end{subfigure} \caption{Performance of BI-Sort.} \label{fig:bis} \end{figure*} We use the same metrics as for RaP-Table and WiB$^+$-Tree to test the performance of BI-Sort. The default size of the insertion buffer is 1K. Figure \ref{fig:bis_insert_1} and Figure \ref{fig:bis_insert_2} show a significant impact of the insertion buffer: the throughput remains at the same value when $N_{Bat} < $1K. Figure \ref{fig:bis_insert_1} also shows that the insertion throughput of BI-Sort is not sensitive to $P$ because the index array is very small and requires little computation time compared to the main array.
However, Figure \ref{fig:bis_insert_2} shows that a larger $N_{Sub}$ has considerably lower throughput when $N_{Bat}$ is small. In these cases, the main array is so large that merging the insertion buffer or tuple batch into it is costly. The only solution is to increase $N_{Bat}$ to amortize the merging time. As shown in Figure \ref{fig:bis_probe_1}, $N_P$ does not greatly affect BI-Sort's probing throughput except when $N_{Bat}$ is small. Figure \ref{fig:bis_probe_2} and Figure \ref{fig:bis_probe_3} show the main difference and advantage of BI-Sort compared with RaP-Table and WiB$^+$-Tree: the probing throughput of BI-Sort is not sensitive to selectivity $S$ because BI-Sort only finds the indices of the join result in the subwindow rather than copying the actual tuples into the result. For equi-join, shown in Figure \ref{fig:bis_probe_2}, as $S$ increases, each tuple may need to probe two partitions instead of one. Thus, there is a throughput decrease when $P$ is sufficiently large such that $N_P$ is smaller than $S$, whereas Figure \ref{fig:bis_probe_3} has no such decreases because during non-equi-join, BI-Sort always checks the upper and lower bounds of the join condition per probing tuple. Figure \ref{fig:bis_probe_2} and \ref{fig:bis_probe_3} also show that a larger $P$ improves the throughput because more tuples share the same memory access to the same target partition. In Figure \ref{fig:bis_buffer}, we use $N_{sub}$=64K to illustrate the impact of buffer size on handling inputs with a small batch size. As shown, the throughput with a large buffer size is 2-3 orders of magnitude higher than with a small buffer size. However, because the buffer is unsorted, the tuple batch needs to perform a nested-loop join with the buffer during the probing operation. Therefore, we do not recommend a large buffer size, as it can reduce system performance.
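The buffered insertion path described above can be condensed into a hedged sketch (all names are ours): tuples first land in a small unsorted buffer, and only when it fills up is the buffer sorted and merged into the sorted main array, amortizing the merge cost over many insertions.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <algorithm>
#include <iterator>

// Illustrative single-threaded sketch of BI-Sort's buffered insertion,
// not the paper's multithreaded implementation.
struct BISortSketch {
    std::vector<uint32_t> main_array;    // always kept sorted
    std::vector<uint32_t> buffer;        // small, unsorted staging area
    std::size_t buffer_capacity = 1024;  // default buffer size of 1K

    void insert(uint32_t key) {
        buffer.push_back(key);
        if (buffer.size() >= buffer_capacity) flush();
    }

    void flush() {
        std::sort(buffer.begin(), buffer.end());
        std::vector<uint32_t> merged;
        merged.reserve(main_array.size() + buffer.size());
        std::merge(main_array.begin(), main_array.end(),
                   buffer.begin(), buffer.end(), std::back_inserter(merged));
        main_array.swap(merged);
        buffer.clear();
    }
};

// Helper for illustration: insert keys with a given capacity, then flush.
std::vector<uint32_t> demo_insert(std::vector<uint32_t> keys, std::size_t cap) {
    BISortSketch s;
    s.buffer_capacity = cap;
    for (uint32_t k : keys) s.insert(k);
    s.flush();
    return s.main_array;
}
```

The sketch also makes the trade-off above concrete: a larger buffer means fewer expensive merges, but any probe arriving before a flush must scan the unsorted buffer with a nested-loop join.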
\subsubsection{Comparison} \begin{figure}[!htb] \centering \captionsetup[subfigure]{justification=centering} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/com_insert.tikz} \caption{Insertion.} \label{fig:com_insert} \end{subfigure}\hfill% \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/com_probe.tikz} \caption{Probing.} \label{fig:com_probe} \end{subfigure} \caption{Performance comparisons, $N_{Sub}$=8M and $P$=64K.} \label{fig:com} \end{figure} We compare the throughput of insertion and non-equi-join probing with the configuration $N_{Sub}$=8M and $P$=64K. For insertion, shown in Figure \ref{fig:com_insert}, BI-Sort outperforms the other two only when the batch size is larger than 64K. When $N_{Bat} <$64K, the insertion of BI-Sort becomes the bottleneck of the system. For non-equi-join probing with $N_{Bat} =$32K, shown in Figure \ref{fig:com_probe}, when selectivity $S<$1K ($2^{10}$), RaP-Table is 1.7-2.4x faster than WiB$^+$-Tree, and BI-Sort is about 1.6-7x faster than RaP-Table. When $S$ is large, BI-Sort can be 100x faster than the other two because it is not sensitive to $S$, as stated above. Therefore, we suggest the following strategy: if $N_{Bat}$ is large or $S$ is large, we should choose BI-Sort; otherwise, if the tuple values do not vary too often and do not gradually increase, we can use RaP-Table; if not, we should use WiB$^+$-Tree.
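The selection strategy above can be condensed into a small sketch. The numeric thresholds (64K for $N_{Bat}$, 1K for $S$, taken from the measurements in this section) and the two boolean flags describing the value distribution are illustrative; the paper states the rule only qualitatively.

```cpp
#include <string>
#include <cstddef>

// Hypothetical encoding of the data-structure selection strategy.
// Thresholds are illustrative readings of the evaluation figures.
std::string choose_structure(std::size_t batch_size, double selectivity,
                             bool values_stable, bool values_increasing) {
    if (batch_size >= 64 * 1024 || selectivity >= 1024.0)
        return "BI-Sort";        // large batches or large S favor BI-Sort
    if (values_stable && !values_increasing)
        return "RaP-Table";      // stable, non-increasing value distribution
    return "WiB+-Tree";          // fallback for drifting distributions
}
```
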
\subsection{BI-Sort on FPGA} \begin{figure}[!htb] \centering \captionsetup[subfigure]{justification=centering} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/fpga_insert.tikz} \caption{Insertion.} \label{fig:fpga_insert} \end{subfigure}\hfill% \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/fpga_probe.tikz} \caption{Probing.} \label{fig:fpga_probe} \end{subfigure} \caption{Performance of BI-Sort on FPGA.} \label{fig:fpga} \end{figure} \input{tables/fpga_util} We implement our FPGA subwindow on a Terasic DE5a-Net FPGA Development Kit which contains an Arria 10 (10AX115N\-2F45E1SG) FPGA and two channels of DDR3-1066 4 GB memory. We present the performance comparison of the FPGA version with the 3 data structures on CPU in Figure \ref{fig:fpga}, where $N_{Sub}$=8M and $P$=64K. There are 8 mergers and 8 probers on the FPGA. More mergers or probers cannot provide better performance because these 16 units are enough to fully utilize the memory bandwidth on the FPGA. Figure \ref{fig:fpga_insert} presents the insertion throughput: when $N_{Bat}$ becomes larger than 64K, BI-Sort on FPGA is faster than RaP-Table and WiB$^+$-Tree on CPU. BI-Sort on FPGA reaches approximately 0.4x-0.6x the throughput of BI-Sort on CPU because the DDR3 memory (2 channels) used by the FPGA is 8.8x slower than the DDR4 memory (6 channels) used by the CPU. However, the energy efficiency ratio (EER) on FPGA is approximately 4x that on CPU, shown as bars using the right y-axis. This result occurs because the TDP (Thermal Design Power) of the CPU is 150 W and we use 8 out of 20 cores, so that the power of the CPU version is 60 W, while the power of the FPGA solution is only 7.9 W. For equi-join probing with $S$=1, shown in Figure \ref{fig:fpga_probe}, when $N_{Bat}$ is larger than 64K, BI-Sort on FPGA provides better throughput than RaP-Table and WiB$^+$-Tree on CPU.
It also provides an EER greater than 1 relative to BI-Sort on CPU. Therefore, when $N_{Bat}$ is large enough, BI-Sort on FPGA becomes an excellent choice when both throughput and power are considered. Table \ref{tab:fpga} shows that our FPGA solution uses 42\% of the logic resources and 31\% of the internal block memory (BRAMs), which means that we can place more processing units for other operations on the same chip. \subsection{System Performance} \begin{figure*}[!htb] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_per_1.tikz} \caption{Equi-join: $W$=128M, $S$=1, $P$=64.} \label{fig:sys_per_1} \end{subfigure}\hfill% \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_per_2.tikz} \caption{Non-equi-join: $W$=128M, $N_{Bat}$ =32K, $P$=64.} \label{fig:sys_per_2} \end{subfigure}\hfill% \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_per_3.tikz} \caption{Receive overhead.
} \label{fig:sys_per_3} \end{subfigure}\hfill \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_per_4.tikz} \caption{$W$=1G, $N_{Bat}$ = 8M, $P$=64K.} \label{fig:sys_per_4} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_com_1.tikz} \caption{System comparison: $W$=8M.} \label{fig:sys_com_1} \end{subfigure}\hfill% \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_com_2.tikz} \caption{System comparison: $W$=128M.} \label{fig:sys_com_2} \end{subfigure}\hfill% \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_storm_1.tikz} \caption{PanJoin in Storm: throughput comparison.} \label{fig:sys_storm_1} \end{subfigure}\hfill% \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/sys_storm_2.tikz} \caption{PanJoin in Storm: throughput ratio.} \label{fig:sys_storm_2} \end{subfigure} \caption{System performance of PanJoin.} \label{fig:sys} \end{figure*} We test the complete PanJoin on a cluster. Each node has the same configuration as discussed in Section \ref{sec:ana_eva}. Nodes are connected via InfiniBand with a bandwidth of 100 Gb/s (12.5 GB/s). We first compare the throughput of equi-join with selectivity $S$=1, as shown in Figure \ref{fig:sys_per_1}. The window size is $W$=128M for both streams. Each stream has 16 worker nodes and there is one subwindow per node, i.e., $N_{Sub}$=128M$/16$=8M. We find that PanJoin with BI-Sort has the highest throughput, up to 28.9M tuples/s. At this input rate, we can process 1G input tuples within 34.6 seconds. We also observe that RaP-Table is faster than WiB$^+$-Tree by approximately 1.5x when $N_{Bat}>$64K. Figure \ref{fig:sys_per_1} also suggests that if the input rate is high, we should use a large batch size such that the system can handle the data.
Figure \ref{fig:sys_per_2} shows the system performance of non-equi-join with a smaller batch size ($N_{Bat}$=32K). We can observe that the throughput of RaP-Table and WiB$^+$-Tree decreases when the selectivity $S$ becomes larger. Note that the throughputs of RaP-Table and WiB$^+$-Tree are nearly identical because, under this configuration, the overhead of data transmission between the manager and the workers is more than 60\% of the processing time, which masks the throughput differences inside the subwindows. Therefore, their lines overlap in the figure. Figure \ref{fig:sys_per_3} shows the overhead of data transmission from the workers to the manager (not manager to workers). When the selectivity becomes larger than $2^{10}$ (1K), the result-receiving overhead for RaP-Table and WiB$^+$-Tree can be more than 90\% of the processing time. For this reason, we marked result receiving as an optional operation in Step 5 of the system architecture (Section \ref{sec:archtect}). We also evaluate non-equi-join with a large window $W$=1G processed by different numbers of worker nodes, as shown in Figure \ref{fig:sys_per_4}, where we use lines for selectivity $S$=1 and bars for $S$=64. Here, adding more worker nodes does not always give better performance because the network communication overhead dominates the execution time. Still, the system with BI-Sort provides a throughput of approximately 10M/s ($10^7$), whereas the other two data structures are 1-10x slower when $S$=1 and 10-100x slower when $S$=64. We compare PanJoin with Low Latency Handshake Join and SplitJoin (ScaleJoin), which are adapted to support a large $N_{Bat}=$32K and more subwindows/nodes. We consider SplitJoin and ScaleJoin together because their architectures are similar.
With window size $W$=8M and 16 worker nodes in Figure \ref{fig:sys_com_1}, the speedup of BI-Sort is more than 1000x over Low Latency Handshake Join and SplitJoin (ScaleJoin), while RaP-Table and WiB$^+$-Tree are more than 100x faster when the selectivity is small, and RaP-Table is approximately 1.5x faster than WiB$^+$-Tree. When the selectivity is larger than 4K, RaP-Table and WiB$^+$-Tree have the same throughput as Handshake Join and SplitJoin (ScaleJoin). When $W$ increases to 128M in Figure \ref{fig:sys_com_2}, the speedup of BI-Sort increases to approximately 5000x, while the speedups of RaP-Table and WiB$^+$-Tree are approximately 100x-5000x when $S\leqslant$1K and 2-100x when $S>$1K. This shows how powerful PanJoin is as an integrated design compared with existing stream join solutions, which use a nested-loop join inside their subwindows/nodes. We also integrate PanJoin into Apache Storm \cite{toshniwal2014storm}. Because the system overhead is large compared with the pure PanJoin solution, we compare the performance of the integrated PanJoin with a system where every subwindow is empty, i.e., the join processor does nothing but receive the input tuples and discard them afterward. Figure \ref{fig:sys_storm_1} shows the absolute throughput and Figure \ref{fig:sys_storm_2} shows the throughput ratio of PanJoin to the empty system, where we set $N_{Sub}$=1M. The throughput ratio is always approximately 90 percent, which shows that PanJoin works well with Storm and is fast enough for an existing stream processing engine. \begin{comment} \subsection{Throughput on CPU} We implemented our algorithm in C++11. The program is tested on a 2013 released notebook with a Core i5 4258U 2.4 GHz dual-core processor and 8GB DDR3L 1600 memory. We choose this notebook because its processor has better performance than several desktop processors we have tested. Furthermore, we would like to show how robust our algorithm can be under imperfect conditions.
The algorithm runs with one thread. We have tested the multi-threaded version, and its performance was much slower than the single-threaded version. This is because the algorithm processes one incoming tuple with several threads. When the throughput is very high (>100K tuples/s), the cost of creating and managing the threads is much higher than the actual processing. An alternative way is to process multiple tuples in parallel like \textit{micro-batching} in Apache Spark. However, because the achieved throughput is already very high, in this work, we focus on the single-threaded approach. We have tested our algorithm on both equi-join and non-equi-join. In this evaluation, we use the configuration of uniform distribution as the ideal case to test the peak performance of our algorithm. The non-uniform cases are shown in Section \ref{sec:adjust} to illustrate the effectiveness of our adjustment algorithm under unevenly distributed data. Note that the input data must be created by a random data generator rather than an arithmetic progression to mimic a uniform distribution. This is because a arithmetic progression will force the algorithm to have an excellent memory locality (a series of tuples will be stored in the same local partition and probe the same partner partition), which can not simulate the real-time situation when the values of tuples follow a random pattern. In our experiment, we find that arithmetic sequences lead to 2-3x better performance than random data. This number happens to be the bandwidth ratio of two adjacent levels of caches. \subsubsection{Equi-Join} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/throughput_cpu_4m.tikz} \caption{Throughput comparison under different subwindow counts and partition counts. 
The window size is 4M.} \label{fig:throu_cpu_4m} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/throughput_cpu.tikz} \caption{Throughput comparison under different window sizes and partition counts. The subwindow count is 1+1.} \label{fig:throu_cpu_all} \end{figure*} There are four parameters that can affect the throughput: the \textit{window size}, the number of partitions of each stream (\textit{partition count}), the number of subwindows in each stream (\textit{subwindow count}), and the \textit{data type} of tuple values. Figure \ref{fig:throu_cpu_4m} shows the throughput of our algorithm when the window size is 4M (2M for each stream) and the data type is a 32-bit unsigned integer. We choose 4M because it is slightly larger than the window tested in handshake join (a combined input rate 2000-3000 tuples/s $\cdot$ 15 min window = 1.8M-2.7M) \cite{teubner2011soccer, roy2014low}. We also divided the window into between 1 and 16 subwindows. During runtime, neither the oldest nor the newest subwindows is full, therefore an extra subwindow is needed, denoted as $+1$ in the figure. We can see that the peak throughput reduces when the \textit{subwindow count} increases. This is because the algorithm needs to probe more subwindows, during which it has to probe the partition table $N$ times if the subwindow count is $N$. For a fixed \textit{subwindow count}, the throughput increases with the growth of \textit{partition count} to a peak value and decreases when the \textit{partition count} is too large. When the \textit{partition count} is large, the algorithm needs to spend more time to probe the partition tables in both local stream and partner stream, hence the performance reduces. Figure \ref{fig:throu_cpu_all} shows how the performance changes when the \textit{window size} increases. Here we set the \textit{subwindow count} as $1+1$ to compare the maximum throughput for each case. 
The peak throughput reduces as the window size increases. The reason is that even though the partition sizes are equal, the \textit{partition count} increases with the growth of \textit{window size}, meaning that the partition tables become larger. Thus, the algorithm takes more time to probe the partition tables. In addition, as the size of the partition tables grow, it becomes increasingly likely that the CPU cache is not able to hold the tables. Because the algorithm uses binary search to probe the table, it may generate more cache misses in these cases. From the figure, we can also observe that the algorithm performs well even when the \textit{window size} is very large. For a \textit{window size} of 256M (128M per stream), it can process more than 200K tuples per second. This window size is equal to a 21-minute window when the combined input rate is 200K tuples/s. Even when we reduce the partition count to 2K, we can still process 15469 tuples per second or 7734 tuples per second per stream. If the input rate per stream is 7.8K tuples/s, each stream (128M) can hold a window of 4.6 hours. Consider the fact that there are 6000-7800 tweets posted per second in the whole world \cite{tweet2017persecond, sayce2017persecond}, our algorithm is very powerful if the input data is partition-able (for integer, the value range is larger than the \textit{partition count}) by the number of partitions given in the figure. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/throughput_cpu_float.tikz} \caption{Throughput comparison under different data type and window size. The subwindow count is 1+1.} \label{fig:throu_cpu_float} \end{figure} Figure \ref{fig:throu_cpu_float} shows the throughput comparison with different data types of tuple values. Here we present the highest throughput for each given window size. We can see that using a floating type slightly decreases the performance by 10-25\% when the window size is larger than 2M. 
Thus, our algorithm can perform well with more complex data types. \subsubsection{Non-equi-join} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/throughput_cpu_non_equal.tikz} \caption{Throughput comparison for non-equi-join under different selectivities and window sizes. The subwindow count is 1+1.} \label{fig:throu_cpu_non_equal} \end{figure} We evaluate our algorithm using band join with different selectivities from $\frac{1}{250000}$ to $\frac{1}{25}$. Here the data type of tuple values is a 32-bit unsigned integer. To process band join, the algorithm scans the two boundary partitions and copies all the tuples of the internal partitions into the result array. Figure \ref{fig:throu_cpu_non_equal} shows the throughput for different \textit{window size}s; the y-axis is set to a logarithmic scale. In this figure, we present the maximum throughput for each given window size. When the selectivity is $\frac{1}{250000}$ and the \textit{window size} is 4M, the throughput is higher than 500K tuples per second, which is two orders of magnitude higher than handshake join. We can also see that the throughput drops significantly when the selectivity increases. However, when the selectivity is lower than $\frac{1}{250}$, our algorithm can still process more than 5500 tuples per second with a \textit{window size} of 256M. \subsection{Adjustment Algorithm}\label{sec:adjust} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/adjust_uni.tikz} \caption{Adjustment accuracy under uniform distribution} \label{fig:adjust_uni} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/adjust_normal.tikz} \caption{Adjustment accuracy under normal distribution} \label{fig:adjust_normal} \end{figure} We tested our adjustment algorithm on both uniform and normal distributions. Here we use the configuration \textit{window size} = 4M, \textit{subwindow count} = 1+1, and a 32-bit unsigned integer \textit{data type}.
We measure the convergence accuracy as the mean absolute error (MAE) of the count histogram (which records how many tuples are stored in each partition) divided by the average number of tuples a partition should hold (the ideal partition size, obtained by dividing the subwindow size by the partition count). This ratio indicates how unbalanced the partitions are compared with the ideal case. Initially, we assume the data is evenly distributed over the range of a 32-bit unsigned integer. Its accuracy is likewise calculated from data generated by the C++ random function. Figure \ref{fig:adjust_uni} shows the adjustment accuracy under multi-modal uniform distributions with different configurations. The configuration is denoted as a vector (data range, mode count, partition count). The data range represents the ratio of the shrunk new range to the original range, i.e., how narrow the new distribution becomes. We can see that no matter how narrow the new distribution becomes, or how many modes there are, the algorithm needs only one iteration to reach a low accuracy ratio, and becomes stable afterward. This ratio happens to be the one we obtained from the original distribution with a given partition count (0.022 for 1K partitions and 0.091 for 16K partitions); thus our algorithm converges quickly. Note that the dashed line represents a special case where the data range is $\frac{1}{60000}$, the mode count is 1, and the partition count is 16K. In this case, each partition holds only $\mathtt{0xFFFFFFFF} \cdot \frac{1}{60000} \div 16\mathrm{K} \approx 4$ distinct values. We find that if a large number of partitions is concentrated on such a narrow range, the accuracy ratio increases, meaning that the adjusted result is more unbalanced. Thus, if the data range is too narrow, a better practice is to reduce the partition count, provided the throughput remains acceptable for the system.
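As a concrete reading of this metric, the following sketch (our own naming, not the paper's code) computes the accuracy ratio from a count histogram:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the accuracy metric described above: the MAE of the count
// histogram divided by the ideal partition size
// (subwindow size / partition count). Naming is illustrative.
double adjustmentAccuracy(const std::vector<std::size_t>& countHistogram,
                          std::size_t subwindowSize) {
    double ideal = static_cast<double>(subwindowSize) / countHistogram.size();
    double mae = 0.0;
    for (std::size_t count : countHistogram)
        mae += std::fabs(static_cast<double>(count) - ideal);
    mae /= countHistogram.size();
    return mae / ideal;  // 0 = perfectly balanced; larger = more unbalanced
}
```

A perfectly balanced histogram yields a ratio of 0; the 0.022 and 0.091 values reported above are this ratio measured after adjustment.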
\input{tables/compare} Figure \ref{fig:adjust_normal} shows the adjustment accuracy under multi-modal normal distributions. The configuration is denoted as a vector ($\sigma$, mode count, partition count). Here, the value of $\sigma$ we show is not the real $\sigma$ value we use, but the normalized $\sigma_{nor}$, which is the ratio $\frac{\sigma_{real}}{\mathtt{0xFFFFFFFF}}$. We can see that when $\sigma$ is large ($\geq\frac{1}{600}$), the algorithm converges after only one iteration. When $\sigma$ is very small ($\frac{1}{60000}$), it needs two iterations, and then reaches a stable accuracy ratio that is close to the ratio obtained from the original ideal distribution. Similarly, the number of modes does not affect the performance much. Therefore, from the cases we studied, we conclude that our algorithm adjusts well to data skew. \subsection{Performance on FPGA} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/throughput_fpga.tikz} \caption{Throughput of FPGA solution under different window sizes. The subwindow count is set to 1+1 and the partition count to 4096.} \label{fig:through_fpga_slow} \end{figure} \input{tables/fpga_util} We implement our system on an Alpha-Data ADM-PCIE-8K5 board \cite{alpha20178k5}. It has a Xilinx Kintex Ultrascale XCKU115-FLVA1517E-2 FPGA. The board is equipped with two banks of 8GB DDR4-2400 as the external memory of the FPGA. Figure \ref{fig:through_fpga_slow} shows the throughput of PanJoin on FPGA. Because the current memory controller supports only 4GB of memory, our system cannot hold a window size larger than 128M. The maximum \textit{partition count} that can be configured on this FPGA is 4K. We can see that the throughput on FPGA is about 3-30\% of the throughput on CPU shown in Figure \ref{fig:throu_cpu_all}. This is because the current AXI Interconnect module constrains the number of burst read transactions to a very small number (around 2).
Nevertheless, the core can still process more than 12K tuples per second if the \textit{window size} is 128M. Considering that the energy consumption of our FPGA is only 3.8W (our CPU consumes 28W), the energy efficiency of our FPGA solution is high. In addition, the resource utilization shown in Table \ref{tab:util} tells us that there are still free areas on the FPGA that can be configured into other processing units. \subsection{Comparison with Other Solutions} \label{sec:comp} We compared our algorithm with several stream join implementations shown in Table \ref{tab:compare}. Because the other solutions are deployed either on a cluster or on specialized hardware, we compare the performance of our single-threaded solution with the performances of the other solutions as reported in their original papers. For our algorithm, we set the same input format as CellJoin \cite{gedik2009celljoin}, which is also used in the other solutions. The input data value range is [1, 10000] and the selectivity is $\frac{1}{250000}$. Here the partition table is two-dimensional, with one dimension for each attribute to be compared. We can see that our algorithm is 22-144x faster than the other solutions (not including handshake join with index) when the partition count is set to 2K. In addition, if the data is divisible by 32K partitions, our algorithm is 44-283x faster than the others. In this way, we show the power of range partitioning to reduce comparisons during join processing. In addition, though our algorithm runs on only one core, we still outperform indexed handshake join by 0.3x (with a partition count of 2K) or 1.5x (with a partition count of 32K). Because the subwindows in our design are independent of each other, it is easy to extend our algorithm to multi-node systems. In this way, we can increase the parallelism of our algorithm to achieve better performance in future work.
\end{comment} \section{Conclusions} \label{sec:con} In this paper, we present a stream join solution called PanJoin that has high throughput, supports non-equi-join, and adapts to skewed data. We present three new data structures for subwindows to manage their data, along with a strategy for users to choose among them. Our evaluation shows that the performance of PanJoin is more than three orders of magnitude higher than that of several recently proposed stream join solutions. The limitation of PanJoin on large window sizes is the network bandwidth. Since InfiniBand is one of the fastest solutions on the market, we expect that future network technology will solve this limitation. \section{BI-Sort on FPGA} \label{sec:imp} Because BI-Sort is simple enough to implement in hardware, we build a worker node with BI-Sort on an FPGA using Intel OpenCL in order to benefit from the two major advantages of FPGAs: high throughput and low energy consumption. The system architecture is shown in Figure \ref{fig:fpga_arch}. The system has an insertion engine and a probing engine. Both engines can access the data buffered in the external memory on the FPGA board. The data can come either from the network I/O port that connects to the manager node or from the host computer, depending on the configuration of the system.
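Since BI-Sort keeps its main array sorted, the insertion engine's core task is a two-way merge. A minimal software sketch of that merge (illustrative naming; the FPGA version streams both arrays from external memory instead of holding them in vectors) might look like:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch (not the paper's exact implementation): merge a sorted
// insertion buffer into the sorted main array, the way the hardware
// merger does with two input streams, one output stream, and a comparator.
std::vector<uint32_t> mergeIntoMainArray(const std::vector<uint32_t>& mainArray,
                                         const std::vector<uint32_t>& insertionBuffer) {
    std::vector<uint32_t> out;
    out.reserve(mainArray.size() + insertionBuffer.size());
    std::size_t i = 0, j = 0;
    // Each iteration compares the heads of the two streams and emits
    // the smaller one, then advances the stream it came from.
    while (i < mainArray.size() && j < insertionBuffer.size()) {
        if (mainArray[i] <= insertionBuffer[j]) out.push_back(mainArray[i++]);
        else out.push_back(insertionBuffer[j++]);
    }
    while (i < mainArray.size()) out.push_back(mainArray[i++]);
    while (j < insertionBuffer.size()) out.push_back(insertionBuffer[j++]);
    return out;
}
```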
\begin{figure}[!htb] \centering \begin{minipage}[b]{.40\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_fpga_arch.pdf} \caption{System architecture of FPGA solution.} \label{fig:fpga_arch} \end{minipage} \centering \begin{minipage}[b]{.20\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_merger.pdf} \caption{Merger.} \label{fig:merger} \end{minipage} \begin{minipage}[b]{0.20\textwidth} \centering \includegraphics[width=\textwidth]{diagrams/Sigmod2019_-_prober.pdf} \caption{Prober.} \label{fig:prober} \end{minipage} \hfill% \end{figure} The insertion engine has several \textsf{merger}s that merge the insertion buffer into the main array. The structure of a merger is shown in Figure \ref{fig:merger}: it has two input stream ports, one output stream port, and a comparator. Here, we implement continuous memory access as a data stream to utilize the memory bandwidth. Initially, the merger reads one tuple from each input stream. Then, in each iteration, it compares the two tuples, writes the smaller one into the output stream, and reads one tuple from the chosen stream. When all the mergers finish their work, an \textsf{indexer} generates the index array. During probing, the \textsf{boundary generator} inside the probing engine first calculates the upper and lower bounds of the join condition for each probing tuple. Then, the \textsf{partitioner}s find the target partition(s) of each probing tuple. Subsequently, the \textsf{prober}s probe the partitions and generate the result. The design of a prober, shown in Figure \ref{fig:prober}, is similar to that of a merger. It has an input port for target partitions, an input port for bounds (upper or lower bounds), an output port for results, and a comparator. Initially, it reads a partition tuple and a bound.
Then, in each iteration, if the value of the partition tuple matches the join condition, it writes the tuple index $i$ into the result and reads the next bound. If the tuple exceeds the join condition, it reads the next bound. Otherwise, it reads the next tuple. \begin{comment} \begin{figure} \centering \includegraphics[width=0.50\textwidth]{diagrams/Sigmod2018_-_fpga_arch.pdf} \caption{System architecture.} \label{fig:arch} \end{figure} \begin{figure} \centering \includegraphics[width=0.50\textwidth]{diagrams/Sigmod2018_-_fpga_core.pdf} \caption{An example of a \textit{join core} which has two input streams, S and R. Each stream has four sub-windows. The two \textit{stream modules} have the same content.} \label{fig:core} \end{figure} \subsection{Overview} We divide our FPGA implementation into three parts: input, processing, and communication. Figure \ref{fig:arch} shows the system architecture of our implementation. Their functionalities, high-level architectures, and design rationales are described as follows: \subsubsection*{Input} The input part is responsible for generating input data and dispatching it to the corresponding \textit{Stream Module} inside the processing part. For a system with two streams, there are two \textit{data generator}s. A \textit{data generator} can create tuples that follow different value distributions, e.g., uniform distribution or normal distribution. \subsubsection*{Processing} The processing part consists of a \textit{join core} that performs our stream join algorithm on the input data streams. It has $M$ input stream data ports, one output stream data port, and an I/O port for DDR4 memory data. Here $M$ is the number of input streams. \subsubsection*{Communication} The communication part has an \textit{AXI Interconnect}, a \textit{DDR4 memory controller} and a \textit{MicroBlaze} processor.
The main function of the \textit{AXI Interconnect} is to interconnect the memory port of the \textit{join core} and the data port of the \textit{DDR4 memory controller}, as well as to distribute the control signals from the \textit{MicroBlaze} processor. The \textit{MicroBlaze} processor is a softcore CPU that serves as a control unit for the system. It can initiate the \textit{data generators} and the \textit{join core}. It can also communicate with the host PC so that we can monitor the system and its performance at runtime. \subsection{Join Core} The join core has several \textit{stream module}s and a \textit{memory prober}, as shown in Figure \ref{fig:core}. Each \textit{stream module} is used for managing the data in its corresponding stream. These stream modules are responsible for processing the incoming tuples for storage and for sending probing requests to their partner \textit{stream module}(s), which then probe their streams for the tuple. When there are memory requests (memory writes during storage and memory reads during probing), the \textit{stream module} delivers those requests to the \textit{memory prober}, which performs the actual communication with the other modules in the system. Inside each \textit{stream module}, there are three histograms implemented as Block-RAMs. The histograms are kept only for the current newest sub-window. We do not save the histograms for the other sub-windows because they are not used after the generation of a new partition table by the \textit{splitter adjuster}. There is also a hardware version of our LLAT data structure for each sub-window. However, this hardware LLAT does not hold any data: the tuples are stored in the external memory. It only saves the values of the \texttt{head} and \texttt{tail} pointers, which are implemented as offset integers stored in Block-RAMs. When there is a memory operation, the \textit{memory requester} sends the request to the \textit{memory prober}.
The \textit{expiration table} performs the same function as the \textit{global expiration table} described in Section \ref{sec:expire}. We implemented the \textit{join core} and the \textit{data generators} using High-Level Synthesis (HLS). HLS is a technology that converts C/C++ code into a hardware description at the register-transfer level (RTL), such as Verilog or VHDL. In other words, by using HLS, we only need to write C/C++ code and let the IDE tool compile the software into hardware. The hardware code is derived from the software code: we only changed the code related to external memory access for the tuple data, which accounts for 5.5\% of the total code. \end{comment} \section{Related Work} \label{sec:rel} In this section, we mainly consider stream join and its hardware acceleration. We do not include non-streaming cases because it is not straightforward to map these works directly to stream join processing: traditional join processes a large batch of data at once, while stream join has to produce results on the fly. \textit{Symmetric Hash Join} (SHJ) \cite{wilschut1993dataflow} is one of the first proposals for bounded stream-like input data. It continuously updates its two hash tables (one for each relation), which store the incoming tuples, as it receives tuples from each relation in turn. SHJ uses hash functions to reduce comparisons for equi-join. It also has an extension called XJoin \cite{urhan2001dynamic} to handle disk-resident data. XJoin became the basis for many later algorithms, such as \textit{Rate-based Progressive Join} (RPJ) \cite{tao2005rpj}, which uses a smarter memory replacement algorithm. After XJoin, Kang et al. presented the first formalized 3-step paradigm for window-based stream join \cite{kang2003evaluating}. Red Black indexing tree based Symmetric Nested Loop Join (RBSNLJ) \cite{ya2006indexed} is one of the first works on processing non-equi stream joins.
RBSNLJ partitions the sliding window into subwindows based on the arrival time of tuples. Complex data structures (e.g., the red-black trees in RBSNLJ) are built per subwindow. Only the data structure in the newest subwindow is updated at runtime, while the other subwindows reuse their data structures for probing operations. In this way, the cost of updating the complex data structure is significantly reduced, which increases the overall performance. However, RBSNLJ does not explore how to process a subwindow in parallel. The red-black tree is also unnecessarily complicated and generates a large number of random memory accesses during processing, compared with the data structures used in PanJoin. To reduce comparisons during probing, some research works focus on increasing the parallelism of join processing. CellJoin \cite{gedik2009celljoin} divides a sliding window into subwindows based on the arrival time of tuples, which is similar to RBSNLJ. CellJoin maps subwindows to processing cores on heterogeneous multicore Cell processors. Because CellJoin implements nested-loop-based join processing, its performance is limited to several thousand tuples per second on a 15-minute window (several million tuples in total). Nevertheless, we believe CellJoin can easily be extended to support our PanJoin by adding our specially designed data structures into its subwindows. (Low Latency) Handshake Join \cite{teubner2011soccer, roy2014low} parallelizes join processing by maintaining a bidirectional data flow similar to a doubly linked list. Each node (subwindow) in the flow is mapped to a processing core. New tuples flow from the starting node to the ending node and join with the tuples of the opposite stream saved in all bypassed nodes. The performance of handshake join is also constrained by its nested-loop-based join processing. SplitJoin \cite{najafi2016splitjoin} parallelizes join processing in a more straightforward manner.
Rather than forwarding tuples through the data flow, SplitJoin stores each tuple in a fixed node. It also splits storage and probing into two separate steps, where probing can be further divided into several independent processes that run in parallel. BiStream \cite{lin2015scalable} processes stream join in a similar fashion and is applied to large-scale distributed stream join systems. BiStream also implements indexes to accelerate equi-join and non-equi-join. Another similar work, ScaleJoin, first merges all incoming tuples into one stream and then distributes them to \textit{processing threads} (PTs) \cite{gulisano2016scalejoin}. Each tuple is dispatched to all PTs but is stored in only one PT in a round-robin fashion. However, (Low Latency) Handshake Join, SplitJoin and ScaleJoin lack discussions on the data management inside the subwindows (nodes or PTs), while PanJoin provides three new data structures for users to choose from. Their subwindows also update and expire tuples frequently, which is inefficient for a complex data structure to manage. PanJoin solves this problem by providing an architecture with a highly simplified expiration mechanism. Recently, FPGAs have received increasing attention for accelerating stream processing. Pioneering works show the tremendous potential of FPGAs in stream processing \cite{hagiescu2009computing, woods2010complex, sadoghi2012multi}. Handshake Join has an FPGA implementation \cite{teubner2011soccer}. ScaleJoin \cite{gulisano2016scalejoin} also has an FPGA version \cite{kritikakis2016fpga} that uses 4 Virtex-6 FPGAs. In addition to pure stream join processors, M. Najafi et al. suggest a more general FPGA-based stream processing architecture, the Flexible Query Processor (FQP) \cite{najafi2015configurable}, which performs stream join similarly to Handshake Join. FQP introduces the OP-Block, a connectable stream processing unit, for constructing a Flexible Query Processor to process complex queries.
Each OP-Block can be configured as a join core, a projection filter, or a selection filter. FQP maps stream processing queries to a chain of OP-Blocks; thus, handshake join can be realized as a chain of OP-Blocks configured as join cores. Similar to handshake join, in FQP each tuple is passed from the first partition, stored in the first OP-Block (near the input), to the next partition (OP-Block). Therefore, FQP also dynamically partitions the window based on the arrival time of each tuple. To collect results, each OP-Block is connected to a result filter, which inserts the result into a result aggregation buffer; the result is then output to the result stream. FQP implements nested-loop join inside each OP-Block. Therefore, as a pioneering work, the peak throughput of FQP is limited. Both FQP and the FPGA version of Handshake Join use the internal memory of the FPGA to store tuples, which limits the maximum window size to several thousand tuples, whereas PanJoin's subwindows on FPGA store data in the external DDR3 memory, whose capacity determines the maximum window size. \begin{comment} \subsection{Hardware Acceleration on Relational Database Processing} In addition to stream processing, many other works explore the usage of FPGAs in relational database processing. \cite{teubner2013data} presents a detailed survey about data processing on FPGAs. \cite{mueller2009data} compares FPGAs and different kinds of CPUs on processing basic database queries and shows that FPGAs have better performance and lower energy consumption. This work also introduces a query compiler that translates queries into an FPGA implementation \cite{mueller2009streams}. A. Putnam et al. proposed an FPGA-based accelerator for large-scale data center services \cite{putnam2014reconfigurable}. It is the first paper that illustrates how customized FPGAs succeed in a commercial database engine. On the FPGA there is a custom multi-core processor with 60 cores that process Free Form Expressions (FFEs).
Each processor core is implemented as an 8-stage pipeline. The onboard CPU processes each query and dispatches jobs to the FPGA. The system contains 1632 server nodes and has been evaluated on the Bing ranking engine applications, showing an improvement of 95\% in throughput and a reduction of 29\% in tail latency. J. Casper et al. implement a selector, a merge join core, and a merge sort algorithm on FPGAs, and combine them to construct a sort-merge join processor \cite{casper2014hardware}. In their work, the selector or the merge join core consists of an array of comparators and input/output control modules. They also implement a sorting tree to realize the merge sort algorithm. The system consists of 4 FPGAs working in parallel and 4 DDR3 RAMs running at 400MHz with a total bandwidth of 153.6GB/s. The memory bandwidth utilization varies from 5 to 25 percent. L. Wu et al. proposed the first hardware accelerator for range partitioning, called the \textit{Hardware-Accelerated Range Partitioner} (HARP) \cite{wu2014hardware}. This framework can seamlessly integrate existing hardware and software by plugging an ASIC HARP block into a CPU core. Each HARP can pull and push data through the memory controller and compare the input keys in a highly pipelined way. The partitioner is programmed by software with extra instructions, including setting values for each splitter (boundary). There are many other data processing solutions on hardware. Q100, proposed by L. Wu et al. \cite{wu2014q100}, is a relational database query processor which contains a collection of heterogeneous ASIC tiles. It is the first DPU (database processing unit) that targets database applications. A dedicated FPGA accelerator for a Large Semantic Web Database was proposed in \cite{werner2013hardware} and reached a speedup of 1.3-10.2x for Symmetric Hash Join compared to CPU-based counterparts. GROUP BY aggregation on FPGA was also explored in \cite{woods2014ibex}, which achieved improvements of around 7-11x.
\cite{halstead2015fpga} presented an FPGA-based multi-threading solution for in-memory hash join and claimed that the throughput is 2-3.4x higher than that of multicore solutions. \cite{istvan2015hash} described a new hash table design on FPGA, which is implemented without a large overflow buffer and outperforms a CPU-based solution by a factor of 10. Recent works have also explored CPU-FPGA heterogeneous architectures to accelerate pattern matching and data partitioning \cite{sidler2017accelerating, kara2017fpga}. \end{comment} \section{Introduction} Stream processing techniques are used in many modern applications to analyze data in real time. In stream processing, stream join is a commonly used operator for relating information in different streams. Many researchers \cite{gulisano2016scalejoin, teubner2011soccer, ananthanarayanan2013photon, lin2015scalable} consider stream join to be one of the most critical and expensive operations in stream processing. The definition of stream join is extended from the relational (theta) join. A commonly used model is sliding-window-based stream join (henceforth referred to as \textit{stream join} in this paper), where each stream maintains a sliding window and each incoming tuple from one stream is joined with the sliding window(s) of the other stream(s). Stream join has been intensively studied over the past two decades \cite{kang2003evaluating, xie2007survey, teubner2011soccer, roy2014low, vitorovic2016squall, najafi2016splitjoin, lin2015scalable, gulisano2016scalejoin}. The main challenge in processing a stream join is producing results at runtime with high throughput, particularly when the window size is large and the input data rate is high. Recent works \cite{teubner2011soccer, roy2014low, najafi2016splitjoin, gulisano2016scalejoin} attempted to address this challenge by dividing a stream window into several subwindows and assigning them to multiple processors or join cores that work in parallel.
In this way, their solutions can handle several thousand input tuples per second with a window size of several million. The limitation of these works is that they mainly parallelize stream join at the architectural level: there are few discussions on how to parallelize a subwindow internally or how to design a data structure to accelerate the processing. Furthermore, a recent report from LinkedIn \cite{noghabi2017samza} shows a real-world requirement to handle input rates of 3.5K-150K tuples per second, which is higher than the maximum throughput of the aforementioned works. \begin{comment} The third challenge is to add support for non-equi-join. Non-equi-join, especially band join, is useful in many stream processing applications. However, it is poorly supported by the popular stream processing platforms. All three of the most popular platforms, Apache Storm \cite{joinstorm2017doc}, Apache Spark Streaming \cite{joinspark2017doc} and Apache Flink DataStream \cite{joinflink2017doc}, only provide functions for equi-join. Apache Kafka Streams also supports only limited modes of equi-join \cite{joinkafka2017doc}. To achieve high performance (high throughput and low latency), we can partition the sliding window into a series of subwindows. A tuple then only needs to join with one or several subwindows, which reduces the required comparisons for each tuple. This approach also increases the parallelism of stream join. Therefore, a partition method that supports non-equi-join could help us overcome the challenges in designing a stream joiner. L. Wu et al.
have listed three properties a good partitioner should have \cite{wu2014hardware}: \textit{ordered partitions}, whereby there is an order among the value ranges of partitions, so the input data is sorted in a coarse way; \textit{record order preservation}, whereby the order inside a partition is the same as the arrival order of tuples; and \textit{skew tolerance}, whereby the partitioner maintains the same throughput regardless of the distribution of tuples among partitions. In this paper, we add a further requirement for \textit{skew tolerance}: the partitioner should distribute the tuples evenly among partitions regardless of the data skew. There are two partitioning methods commonly used by database systems: \textit{hash partitioning} and \textit{range partitioning} \cite{oracledb2017doc}. Both of them satisfy \textit{record order preservation} if the tuples are saved in their arrival order. Hash partitioning is adaptive to data skew since a good hash function can evenly distribute the data regardless of its distribution (assuming the portion of repetitive values is small). However, it does not preserve the ordering of values among partitions. It is difficult to determine whether a partition contains values that are larger or smaller than a given range. Thus, it cannot be used to accelerate non-equi-join. On the other hand, range partitioning preserves the ordering information, but it does not adapt to data skew with a fixed partition table. Therefore, if we choose range partitioning to support non-equi-join, we need to find a method that updates the partition table at runtime and separates the results generated by different partition tables. Since the partition table needs to be updated online, we can take a snapshot of the sliding window in a given period. The snapshot contains the values of all tuples in the current window.
We can use this information to create a range partition table that evenly partitions the current window and use the new table for the next period. By analyzing the histograms among the current partitions, we designed an algorithm that gives an approximate estimation of the new partition table, which partitions the window in a nearly even fashion. We show that we can get the new partition table after $\log_P R$ iterations ($P$ is the partition count, $R$ is the range of tuple values, calculated as $Value_{max} - Value_{min}$). For example, given $P=1000$ and $R = 2^{32}-1$ (the range of a 32-bit integer), we need at most 4 iterations to calculate the new table. This algorithm also works on floating-point values whose precision is close to commonly used granularities (like 0.01 in currency representation). Furthermore, we studied the features of stream join and found an important characteristic we can exploit: given a time period in which the value distribution of incoming tuples is unchanged, we can use a few early-arriving tuples as samples to construct the partition table for the tuples arriving later. Thus, instead of analyzing the histograms of the entire sliding window, we can divide the window into subwindows based on the arrival time (or order) of tuples and build a new partition table for each hop (subwindow). New tuples are inserted into the newest subwindow and a new partition table is constructed once the newest subwindow is full, while the oldest tuples expire from the oldest subwindow and the other subwindows stay unchanged. This mechanism is close to the idea used in \cite{ya2006indexed}, where complex data structures (like trees) are built per subwindow: only the data structure of the newest subwindow is updated, while the data structures of the other subwindows stay unchanged and are retained for probing.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{diagrams/Sigmod2018_-_Overview.pdf} \caption{Overview of PanJoin.} \label{fig:overview} \end{figure} \end{comment} In this paper, we first propose a new architecture generalized from an algorithm named Red Black indexing tree based Symmetric Nested Loop Join (RBSNLJ) \cite{ya2006indexed}. In the new architecture, the window is divided into subwindows based on the arrival time of tuples, and the subwindows are chained similarly to a circular buffer. New tuples are inserted only into the newest subwindow, and the oldest subwindow(s) expire as a whole, while the middle subwindows remain unchanged. This architecture is suitable for implementing a complex data structure such as a tree inside each subwindow, since expiration is highly simplified and the processing overhead caused by remaining expired tuples decreases with the number of subwindows. We then propose three new data structures that are specifically designed for the subwindows of this architecture. Each of the data structures has advantages over the others under different categories of configuration, e.g., different subwindow sizes or selectivity values. The main idea of these data structures is to further divide a subwindow into several partitions based on the tuple value. In this way, we can avoid a considerable amount of memory accesses in the probing operation compared with the nested-loop join used in related works \cite{teubner2011soccer, najafi2016splitjoin, gulisano2016scalejoin}, thereby significantly accelerating the overall stream join by more than 1000x. The three data structures can also process requests with multiple threads, and we design mechanisms for our data structures to handle highly skewed data, where the tuple values are not evenly distributed over the value range.
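A minimal sketch of this subwindow chaining (illustrative naming only; PanJoin's real subwindows hold the partitioned data structures just described, not a flat vector) could be:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Sketch of the architecture described above: a window is a chain of
// subwindows; inserts touch only the newest subwindow, and expiration
// drops the oldest subwindow as a whole.
struct Subwindow {
    std::vector<uint32_t> tuples;  // placeholder for the partitioned data
};

class SlidingWindow {
public:
    SlidingWindow(std::size_t subwindowSize, std::size_t subwindowCount)
        : subSize_(subwindowSize), maxSubs_(subwindowCount) {
        subs_.emplace_back();
    }
    void insert(uint32_t value) {
        if (subs_.back().tuples.size() == subSize_) {
            subs_.emplace_back();         // newest subwindow is full:
            if (subs_.size() > maxSubs_)  // open a new one and, if needed,
                subs_.pop_front();        // expire the oldest as a whole
        }
        subs_.back().tuples.push_back(value);
    }
    std::size_t size() const {
        std::size_t n = 0;
        for (const auto& s : subs_) n += s.tuples.size();
        return n;
    }
private:
    std::size_t subSize_, maxSubs_;
    std::deque<Subwindow> subs_;
};
```

The middle subwindows are never touched by inserts or expiration, which is what makes per-subwindow data structures cheap to maintain.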
Therefore, we name our algorithm PanJoin (Partitioned Adaptive uNiformization join), which partitions and parallelizes a sliding window at both the architectural level and the subwindow level, and adaptively manages highly skewed data to achieve performance as good as for data following a uniform distribution. Furthermore, since FPGAs are commercially available from several cloud computing service providers (e.g., IBM, Amazon and Microsoft) and are becoming increasingly popular, we implement the most hardware-friendly data structure, named BI-Sort, on FPGA to achieve high throughput and high energy efficiency. Our experiments show that PanJoin can handle an input rate of 10M-28M tuples per second for a window size of 8M-1G on a cluster, which is more than 1000x faster than recently proposed solutions \cite{teubner2011soccer, roy2014low, najafi2016splitjoin, gulisano2016scalejoin}. Our data structures adapt well to highly skewed data from the real world. The subwindows on FPGA also provide energy-efficient performance compared to the subwindows on CPU. The contributions of this paper are four-fold: \begin{enumerate} \item A new stream join architecture which significantly simplifies the expiration operation and avoids communication between worker nodes. \item Three novel data structures which reduce the number of comparisons in the probing operation by more than three orders of magnitude compared with nested-loop join. \item Several innovative data structures (e.g., Linked List Adaptive Table) and algorithms to implement the three new data structures or provide an efficient storage strategy. \item An FPGA solution which has more than 4x the energy efficiency of the corresponding CPU solution.
\end{enumerate} The organization of this paper is as follows: Section \ref{sec:rel} discusses related work; Section \ref{sec:pan} presents the PanJoin algorithm; Section \ref{sec:imp} presents the implementation of PanJoin on FPGA; Section \ref{sec:eva} shows the performance results of PanJoin.
\section{Introduction} Among the characteristics of the top-quark, the total decay width, one of its fundamental properties, is measured with very good precision from the partial decay width $\Gamma(t \to Wb)$. In addition, its large mass $m_t = 173.0\pm 0.4\hspace{0.8mm}GeV$ \cite{Data2018}, as well as its anomalous couplings to bosons in the $t\bar t \gamma$, $t\bar t Z$, $t\bar t g$, $Wtb$, $\bar t t H$ and $tq\gamma$ vertices, have turned the top-quark into one of the most attractive particles for new physics searches. Measurements of the properties of the top-quark offer an interesting probe for understanding the electroweak sector \cite{CPYuan} and physics Beyond the Standard Model (BSM). These and other characteristics have led to the development of a dedicated top-quark physics program for present and future $pp$, $e^-p$ and $e^+e^-$ colliders. Therefore, top-quark physics is one of the most attractive topics at the Large Hadron Collider (LHC), as well as at the High-Luminosity Large Hadron Collider (HL-LHC) and the High-Energy Large Hadron Collider (HE-LHC). However, in the post-LHC era a very attractive and interesting option to study the physics of the top-quark, mainly its anomalous couplings, is through future electron-proton $(e^-p)$ hybrid colliders, such as the Future Circular Collider Hadron-Electron (FCC-he) \cite{FCChe,Fernandez,Fernandez1,Fernandez2,Huan,Acar}. The $e^-p$ colliders will open up new perspectives in the field of fundamental physics, especially for particle physics. Several potential features in favor of this type of electron-proton collider are the following: 1) It would represent a high-resolution collider, with a cleaner environment for exploring the substructure and dynamics inside matter, with unmatched precision and sensitivity. 2) The center-of-mass energies are much higher than those of the future International Linear Collider (ILC) and the Compact Linear Collider (CLIC).
3) With concurrent $e^-p$ and $pp$ operation, the FCC-he would transform the LHC into an energy-frontier accelerator facility. 4) With very precise strong and electroweak interaction measurements and with suppressed backgrounds from strong interactions, the $e^-p$ results would make the FCC-he a much more powerful search and measurement laboratory than present facilities based on $pp$ collisions. 5) The joint $pp + e^-p$ facility can become a Higgs boson and top-quark factory for studying the physics of both with an unprecedented impact. 6) Owing to its high energy and high luminosity, while maintaining a very clean experimental environment, the FCC-he has an outstanding opportunity to discover new physics BSM, such as in the Higgs sector, top-quark physics, exotic Higgs bosons, dark matter, heavy neutrinos, the matter-antimatter asymmetry and possible discoveries in QCD. All these topics are being studied very actively. In conclusion, the physics program of high-energy, high-luminosity $e^-p$ + $pp$ collisions is vast and contains many unique opportunities. It provides high-precision Higgs boson, top-quark, QCD and electroweak physics complementary to $e^+e^-$ colliders. Furthermore, $e^-p$ is an attractive, realistic option for a future energy-frontier collider for particle physics. For an exhaustive study of the physics and detector design concepts see Refs. \cite{FCChe,Fernandez,Fernandez1,Fernandez2,Huan,Acar}. In the first instants of the creation of the universe, the Big Bang should have created equal amounts of matter and antimatter. However, there is an unexplained dominance of matter over antimatter observed in the universe. CP violation offers one explanation for the asymmetry in baryonic matter; however, the currently observed sources of CP violation are not sufficient to account for the total matter-antimatter asymmetry. For this reason, it is necessary to search for new sources of CP violation.
On this topic, Electric Dipole Moments (EDM) are very sensitive to CP violation in the quark and lepton sectors. EDM searches are thus in the ideal situation that an observation in the next generation of experiments would be a clear indication of new physics BSM. From this perspective, the Anomalous Magnetic Dipole Moment (AMDM) and EDM of the top-quark are currently under intense theoretical, phenomenological and experimental scrutiny. Within the scope of this project, CP violation can be parameterized by the presence of anomalous couplings in the $t\bar t\gamma$ vertex of top-quark production. In this study, we focus on the AMDM $(\hat a_V)$ and the EDM $(\hat a_A)$ of the top-quark. Since the AMDM and EDM of the top-quark are chirality-flipping, they can be significantly enhanced compared to the AMDM and EDM of light fermions by the large top coupling. For the dominant $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ production channel considered here, we find that the proposed FCC-he with $\sqrt{s}=7.07, 10\hspace{0.8mm}TeV$, ${\cal L}=50, 100, 300, 500, 1000\hspace{0.8mm}fb^{-1}$ can probe the dipole moments of the top-quark with good sensitivity. We focus on two different signals: (i) the hadronic channel with polarized electron beam for $P_{e^-}=-80, 0, 80\hspace{0.8mm}\%$, and (ii) the leptonic channel with polarized electron beam for $P_{e^-}=-80, 0, 80\hspace{0.8mm}\%$ in the $\bar t \nu_e \gamma$ final state. We show that the expected sensitivity on the electromagnetic anomalous couplings in the $t\bar t\gamma$ vertex can be probed at $95\%$ Confidence Level (C.L.) with center-of-mass energies $\sqrt{s}=7.07, 10\hspace{0.8mm}TeV$ and integrated luminosity ${\cal L}=1000\hspace{0.8mm}fb^{-1}$ at the FCC-he.
\begin{table}[!ht] \caption{Sensitivities achievable on the electromagnetic dipole moments of the top-quark in the SM and in different processes and colliders.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline\hline {\bf Model} & {\bf Sensitivity of the SM} & {\bf C. L.} & {\bf References}\\ \hline SM & $a_t= 0.02$, $d_t < 10^{-30} ({\rm ecm})$ & $68 \%$ & \cite{Benreuther}, \cite{Hoogeveen,Pospelov,Soni} \\ \hline \hline \hline {\bf Model} & {\bf Theoretical sensitivity: $\hat a_V$, $\hat a_A$} & {\bf C. L.} & {\bf Reference}\\ \hline Top-quark pair production at LHC & $ (-0.041, 0.043), (-0.035, 0.038) $ & $68 \%$ & \cite{Juste} \\ \hline $t\bar t\gamma$ production at LHC & $ (-0.2, 0.2), (-0.1, 0.1) $ & $90 \%$ & \cite{Baur} \\ \hline Radiative $b\to s\gamma$ transitions at Tevatron and LHC & $ (-2, 0.3), (-0.5, 1.5) $ & $90 \%$ & \cite{Bouzas} \\ \hline Process $pp \to p\gamma^*\gamma^*p\to pt\bar t p $ at LHC & $ (-0.6389, 0.0233), (-0.1158, 0.1158) $ & $68 \%$ & \cite{Sh} \\ \hline Measurements of $\gamma p \to t\bar t$ at LHeC & $ (-0.05, 0.05), (-0.20, 0.20) $ & $90 \%$ & \cite{Bouzas1} \\ \hline Top-quark pair production $e^+e^- \to t\bar t$ at ILC & $ (-0.002, 0.002), (-0.001, 0.001) $ & $68 \%$ & \cite{Aguilar} \\ \hline\hline \end{tabular} \end{center} \end{table} AMDM and EDM searches for the top-quark are performed in the Standard Model (SM) and in a variety of physics processes. The sensitivities estimated on the AMDM and EDM of the top-quark in the SM, as well as in different processes and colliders, are reported in Table I. Other direct collider probes of the AMDM and EDM have been studied extensively \cite{Ibrahim,Atwood,Polouse,Choi,Polouse1,Aguilar0,Amjad,Juste,Asner,Abe,Aarons,Brau,Baer,Grzadkowski:2005ye,murat,Billur}. The plan of the article is as follows: In Section II, we introduce the top-quark effective electromagnetic interactions.
In Section III, we present the sensitivity measurement of the top-quark anomalous electromagnetic couplings through the $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ signal. Finally, we present our conclusions in Section IV. \section{Single top-quark production via the process $e^-p \to e^-\bar b \to \bar t \nu_e \gamma$} \subsection{Effective Lagrangian of the $t\bar t \gamma$ interaction of the top-quark} The SM predicts that CP violation outside the K, D and B meson systems is too small to be observed. However, in some extensions of the SM, CP violation might be considerably enhanced, especially in the presence of heavy particles such as the top-quark. In particular, the CP-violating EDM of the top-quark could be enhanced. Single top-quark production processes are sensitive to the anomalous couplings in the $t\bar t \gamma$ vertex. Furthermore, since the top-quark lifetime is shorter than the timescale of spin decoherence induced by QCD, its decay products preserve the information on its polarization imprinted by the production mechanism. This provides additional powerful tools in the search for BSM physics in single top-quark studies. On the other hand, due to the absence so far of any signal of new heavy particles decaying into top-quarks, an attractive approach for describing possible new physics effects in a model-independent way is based on effective Lagrangians. In this approach, all the heavy degrees of freedom are integrated out, leaving effective interactions between the SM particles. This is justified because the related observables have not shown any significant deviation from the SM predictions so far. The Lagrangian describing the anomalous $t\bar t\gamma$ coupling, including the SM contribution and BSM terms related to new physics, has the structure: \begin{equation} {\cal L}_{eff}={\cal L}^{(4)}_{SM} + \frac{1}{\Lambda^2}\sum_n \Bigl[C_n{\cal O}^{(6)}_n + C^*_n{\cal O}^{\dagger(6)}_n \Bigr].
\end{equation} \noindent Here, ${\cal L}_{eff}$ is the gauge-invariant effective Lagrangian containing a series of dimension-six operators built with the SM fields, ${\cal L}^{(4)}_{SM}$ is the renormalizable dimension-four SM Lagrangian, $\Lambda$ is the scale at which new physics is expected to be observed, the $C_n$ are dimensionless Wilson coefficients and ${\cal O}^{(6)}_n$ are the dimension-six gauge-invariant operators. The operators ${\cal O}^{(6)}_n$ and the unknown coefficients $C_n$, combined with $\Lambda^{-2}$, produce the non-standard coupling constants, that is, they generate anomalous contributions to the photon-top-quark interaction vertex which are similar in structure to radiative corrections in the SM. The most general Lagrangian term that one can write for the $t\bar t\gamma$ coupling up to dimension-six gauge-invariant operators \cite{Sh,Kamenik,Baur,Aguilar,Aguilar1} is: \begin{equation} {\cal L}_{t\bar t\gamma}=-g_eQ_t\bar t \Gamma^\mu_{ t\bar t \gamma} t A_\mu, \end{equation} \noindent This equation includes the SM coupling and contributions from dimension-six effective operators. $g_e$ is the electromagnetic coupling constant, $Q_t$ is the top-quark electric charge and the Lorentz-invariant vertex function $\Gamma^\mu_{t\bar t \gamma}$ is given by: \begin{equation} \Gamma^\mu_{t\bar t\gamma}= \gamma^\mu + \frac{i}{2m_t}(\hat a_V + i\hat a_A\gamma_5)\sigma^{\mu\nu}q_\nu. \end{equation} \noindent $m_t$ and $q$ are the mass of the top-quark and the momentum transferred to the photon, respectively. The couplings $\hat a_V$ and $\hat a_A$ in Eq. (3) are real and are related to the AMDM and the EDM of the top-quark. They are directly related to $a_t$ and $d_t$ via the relations: \begin{eqnarray} \hat a_V&=&Q_t a_t, \\ \hat a_A&=&\frac{2m_t}{e}d_t. \end{eqnarray} As shown in Refs.
\cite{Buhmuller,Aguilar2,Antonio,Grzadkowski:2005ye}, among the operators of dimension-six there exist only two relevant for the $t\bar t\gamma$ interaction: \begin{eqnarray} {\cal O}_{uW}^{33}&=&\bar q_{L3}\sigma^{\mu\nu}\tau^a t_{R}{\tilde \phi} W_{\mu\nu}^{a}+{\mbox{h.c.}},\\ {\cal O}_{uB\phi}^{33}&=&\bar q_{L3}\sigma^{\mu\nu}t_{R}{\tilde \phi} B_{\mu\nu}+{\mbox{h.c.}}, \end{eqnarray} \noindent where the index 3 denotes the third quark generation, $\bar q_{L3}$ is the left-handed quark doublet of the third generation, $\tau^a$ are the Pauli matrices, ${\tilde \phi}=i\tau_2\phi^*$, $\phi$ is the SM Higgs doublet, and $B_{\mu\nu}$ and $W_{\mu\nu}^{a}$ are the $U(1)_Y$ and $SU(2)_L$ gauge field strength tensors, which are defined as: \begin{eqnarray} B_{\mu\nu}&=&\partial_\mu B_\nu-\partial_\nu B_\mu, \\ W^a_{\mu\nu}&=&\partial_\mu W^a_\nu-\partial_\nu W^a_\mu-g\epsilon^{abc}W^b_\mu W^c_\nu, \end{eqnarray} \noindent with $a,b,c=1,2,3$. From the parametrization given by Eq. (3), and from the dimension-six operators given in Eqs. (6) and (7), after replacing $\langle{\tilde \phi}\rangle \to \frac{\upsilon}{\sqrt{2}}$ one obtains the corresponding CP-even ${\hat a_V}$ and CP-odd ${\hat a_A}$ couplings: \begin{eqnarray} \hat a_V=\frac{2 m_t}{e} \frac{\sqrt{2}\upsilon}{\Lambda^{2}} Re\Bigl[\cos\theta _{W} C_{uB\phi}^{33} + \sin\theta _{W} C_{uW}^{33}\Bigr],\\ \hat a_A=\frac{2 m_t}{e} \frac{\sqrt{2}\upsilon}{\Lambda^{2}} Im\Bigl[\cos\theta _{W} C_{uB\phi}^{33} + \sin\theta _{W} C_{uW}^{33}\Bigr], \end{eqnarray} \noindent which are related to the AMDM and EDM of the top-quark. The ${\hat a_V}$ and ${\hat a_A}$ couplings contain $\upsilon=246$ GeV, the electroweak symmetry breaking scale, $\sin\theta _{W} (\cos\theta _{W})$, the sine (cosine) of the weak mixing angle, and $\Lambda$, the new physics scale.
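For illustration, Eqs. (10) and (11) can be evaluated numerically. The following is a minimal sketch; the specific inputs ($\alpha = 1/137.036$, $\sin^2\theta_W = 0.231$, real $O(1)$ Wilson coefficients at $\Lambda = 1$ TeV) are our assumptions for the example, not values taken from the text.

```python
import math

def a_hat(C_uBphi, C_uW, Lambda_GeV,
          m_t=173.0, v=246.0, alpha=1/137.036, sin2_thetaW=0.231):
    """Evaluate Eqs. (10)-(11): the CP-even/CP-odd couplings (a_V, a_A)
    generated by the Wilson coefficients C_uBphi^{33} and C_uW^{33}."""
    e = math.sqrt(4 * math.pi * alpha)   # electromagnetic coupling g_e
    sw = math.sqrt(sin2_thetaW)          # sin(theta_W)
    cw = math.sqrt(1 - sin2_thetaW)      # cos(theta_W)
    combo = cw * C_uBphi + sw * C_uW     # complex Wilson-coefficient combination
    prefactor = (2 * m_t / e) * (math.sqrt(2) * v / Lambda_GeV**2)
    return prefactor * combo.real, prefactor * combo.imag

# Illustrative point: real coefficients of O(1) at Lambda = 1 TeV,
# so the CP-odd coupling a_A vanishes and only a_V is generated.
aV, aA = a_hat(1 + 0j, 1 + 0j, 1000.0)
```

With these assumed inputs the couplings come out of order a few times $10^{-1}$, illustrating the $\upsilon\, m_t/\Lambda^2$ suppression of the dimension-six contributions.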
\subsection{Cross-section of the $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ signal} The FCC-he project \cite{FCChe,Fernandez,Fernandez1,Fernandez2,Huan,Acar} offers $e^-p$ collisions at TeV-scale center-of-mass energies and luminosities of order 1000 times larger than that of the Hadron-Electron Ring Accelerator (HERA), the first and to date the only lepton-hadron collider worldwide, providing fascinating probes of QCD and hadron structure as well as a novel configuration for Higgs boson, top-quark and BSM physics. The physics highlights available from combining the capabilities of these facilities were already mentioned in the introduction. However, a general goal is to maximize the BSM physics search potential at high energies by exploiting the unique capabilities of an $e^-p$ collider. The most significant top-quark production processes at $e^-p$ colliders are single top-quark, $t\bar t$, and associated $tW$ production. Ref. \cite{Bouzas1} shows the values of the associated cross-sections for these processes. The main source of production is single-top via the charged-current $W$ t-channel \cite{Moretti}, whereas for the $t\bar t$ and $tW$ signals the cross-section is smaller. In addition, taking into account the advantage of a cleaner experimental environment than that of $pp$ colliders, we can anticipate the potential efficiency of these colliders for studying top-quark physics. The deep inelastic $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ scattering process is measured at the FCC-he via the exchange of a $W^{\pm}$ boson in charged-current scattering, as shown in Figs. 1 and 2.
For the calculation of the charged-current cross-section, we consider the CTEQ6L1 PDFs \cite{CTEQ6L1} and apply the following detector acceptance cuts on the pseudorapidity of the photon and on the transverse momenta of the photon and the neutrino, respectively, to reduce the background and to optimize the signal sensitivity: \begin{eqnarray} |\eta^{\gamma}|&<& 2.5, \nonumber \\ p^\gamma_T &>& 20\hspace{0.8mm}GeV, \\ p^{(\nu)}_T &>& 20\hspace{0.8mm}GeV. \nonumber \end{eqnarray} The proposed FCC-he is well-suited for discovering physics BSM and for precisely unraveling the structure of fundamental physics with unpolarized and polarized electron beams. In particle physics, polarization refers to the extent to which a particle's spin is aligned along a particular direction. The t-channel single top-quarks of the process $e^-p \to e^-\bar b \to \bar t \nu_e \gamma$ are produced with a strong degree of polarization along the direction of the momentum of the light spectator quark, whose direction then defines the top-quark spin axis. In this respect, the physics under study can be maximized by the use of polarized beams. This paper shows the important role of a polarized beam and summarizes the benefits obtained from polarizing the electron beam. The polarized $e^-$ beam, combined with the clean experimental environment provided by the FCC-he, will allow a substantial improvement in the potential of searches for the dipole moments, which opens the possibility of resolving shortcomings of the SM. With these arguments, we consider a polarized electron beam in our study.
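The acceptance cuts of Eq. (12) amount to a simple event filter. A minimal sketch follows; the event representation and field names (`eta_gamma`, `pt_gamma`, `pt_nu`) are our own illustration, not part of the analysis code used in the text.

```python
def passes_cuts(event, eta_gamma_max=2.5, pt_gamma_min=20.0, pt_nu_min=20.0):
    """Apply the detector acceptance cuts of Eq. (12):
    |eta_gamma| < 2.5, pT_gamma > 20 GeV, pT_nu > 20 GeV."""
    return (abs(event["eta_gamma"]) < eta_gamma_max
            and event["pt_gamma"] > pt_gamma_min
            and event["pt_nu"] > pt_nu_min)

# Toy events (GeV units for momenta); only the first passes all three cuts.
events = [
    {"eta_gamma": 1.2,  "pt_gamma": 35.0, "pt_nu": 50.0},  # passes
    {"eta_gamma": 2.9,  "pt_gamma": 35.0, "pt_nu": 50.0},  # fails |eta| cut
    {"eta_gamma": -0.5, "pt_gamma": 12.0, "pt_nu": 50.0},  # fails pT_gamma cut
]
selected = [e for e in events if passes_cuts(e)]
```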
The formula for the total cross-section for an arbitrary degree of longitudinal $e^-$ beam polarization is given by \cite{XiaoJuan}: \begin{eqnarray} \sigma_{e^-_r}= \sigma_{e^-_0}\cdot (1-P_{e^-_r}), \hspace{7mm} \sigma_{e^-_l} + \sigma_{e^-_r} &=& 2\sigma_{e^-_0}, \end{eqnarray} \noindent where $\sigma_{e^-_r}$, $\sigma_{e^-_l}$ and $\sigma_{e^-_0}$ represent the cross-sections for a right-polarized, left-polarized and unpolarized electron beam, respectively, and $P_{e^-}$ is the polarization degree of the electron. We have implemented the effective $t\bar t \gamma$ coupling given by the Lagrangian in Eq. (2) in CalcHEP \cite{Belyaev} to compute the tree-level amplitudes relevant for the process. The partonic cross-section is convoluted with the CTEQ6L1 PDFs \cite{CTEQ6L1}. Finally, we use CalcHEP to compute numerically the cross-section $\sigma(\sqrt{s}, \hat a_V, \hat a_A, P_{e^-})$ as a function of the center-of-mass energy and the effective couplings. We display the $7.07\hspace{0.8mm}TeV$ and $10\hspace{0.8mm}TeV$ cross-sections of the $2 \to 3$ process $e^-p \to e^-\bar b \to \bar t \nu_e \gamma$ with $P_{e^-}=0\%$, $P_{e^-}=-80\%$ and $P_{e^-}=80\%$ in Eqs. (14)-(25) for the FCC-he:\\ $i)$ Total cross-section for $\sqrt{s}=7.07\hspace{0.8mm} TeV$ and $P_{e^-}=0\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.0236)\hat a^2_V +(0.0000489)\hat a_V +0.737 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.0236)\hat a^2_A + 0.737 \Bigr] (pb). \end{eqnarray} $ii)$ Total cross-section for $\sqrt{s}=10\hspace{0.8mm} TeV$ and $P_{e^-}=0\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.0593)\hat a^2_V +(0.000618)\hat a_V + 1.287 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.0593)\hat a^2_A + 1.287 \Bigr] (pb). \end{eqnarray} $iii)$ Total cross-section for $\sqrt{s}=7.07\hspace{0.8mm} TeV$ and $P_{e^-}=80\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.00472)\hat a^2_V +(0.0000109)\hat a_V +0.148 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.00472)\hat a^2_A + 0.148 \Bigr] (pb).
\end{eqnarray} $iv)$ Total cross-section for $\sqrt{s}=10\hspace{0.8mm} TeV$ and $P_{e^-}=80\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.0119)\hat a^2_V +(0.000127)\hat a_V +0.257 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.0119)\hat a^2_A + 0.257 \Bigr] (pb). \end{eqnarray} $v)$ Total cross-section for $\sqrt{s}=7.07\hspace{0.8mm} TeV$ and $P_{e^-}=-80\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.0423)\hat a^2_V +(0.000417)\hat a_V + 1.328 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.0423)\hat a^2_A + 1.328 \Bigr] (pb). \end{eqnarray} $vi)$ Total cross-section for $\sqrt{s}=10\hspace{0.8mm} TeV$ and $P_{e^-}=-80\%$: \begin{eqnarray} \sigma(\hat a_V)&=&\Bigl[(0.107)\hat a^2_V +(0.00196)\hat a_V +2.315 \Bigr] (pb), \\ \sigma(\hat a_A)&=&\Bigl[(0.107)\hat a^2_A + 2.315 \Bigr] (pb). \end{eqnarray} Our results given by Eqs. (14)-(25) show the effect of taking $-80\%$ beam polarization for the electron, which results in an enhancement of the SM and non-SM single-top production cross-sections, as the cross-section scales as $(1 - P_{e^-})$ (cf. Eq. (13)), $P_{e^-}$ being the degree of polarization of the electron. The variation of the single top-quark production cross-section with the effective $t\bar t \gamma$ couplings $\hat a_V$ or $\hat a_A$, taking one anomalous coupling at a time, is shown in Figs. 3-6. The curves depict the cross-section for $e^-p \to e^-\bar b \to \bar t \nu_e \gamma$ for the $80\%$, $-80\%$ polarized and unpolarized $e^-$ beam, respectively, with the energy of the $e^-$ beam fixed to $E_e = 250 \hspace{0.8mm}GeV$, $500\hspace{0.8mm}GeV$ and the energy of the $p$ beam to $E_p = 50\hspace{0.8mm}TeV$. These figures show a strong dependence of the cross-section on the anomalous coupling $\hat a_V(\hat a_A)$ in the allowed range of these parameters. Our results indicate that, considering the proposed $10\hspace{0.8mm}TeV$ energy, detector acceptance cuts (see Eq.
(12)) and $-80\%$ electron polarization, the cross-sections as a function of $\hat a_V$ or $\hat a_A$ are higher. For instance, the projected cross-section is $\sigma(\hat a_V, -80\%)=(1.66)\sigma(\hat a_V, 0\%)$ for $\sqrt{s}= 7.07\hspace{0.8mm}TeV$, while $\sigma(\hat a_V, -80\%)=(1.78)\sigma(\hat a_V, 0\%)$ for $\sqrt{s}= 10\hspace{0.8mm}TeV$; that is, there is an improvement in the cross-section by a factor of 1.66 (1.78) for the polarized case with respect to the unpolarized case. Similar results are obtained for $\sigma(\hat a_A, -80\%)$. The cross-sections for the energies $\sqrt{s}= 7.07\hspace{0.8mm}TeV$ and $10\hspace{0.8mm}TeV$ are shown in Figs. 7 and 8. As can be seen from Figs. 7-8, the surfaces $\sigma(e^-p \to \bar t \nu_e \gamma)$ as functions of $\hat a_V$ and $\hat a_A$ have extreme points: a maximum and a minimum. The minimum value corresponds to the SM, while the maximum value corresponds to the anomalous contribution, which is consistent with Eqs. (14)-(25). In both figures, the cross-section depends significantly on the observables $\hat a_V$ and $\hat a_A$. For the purpose of comparison and analysis, we compare our results for the anomalous couplings $\hat a_V$ and $\hat a_A$ with those quoted in the papers \cite{Sh,Bouzas} (see also Table I). The authors of Ref. \cite{Bouzas} specifically discussed the bounds on the AMDM and EDM of the top-quark that can be obtained from measurements of the semi-inclusive decays $B \to X_s\gamma$, and of $t\bar t\gamma$ production at the Tevatron and the LHC. Performing their analysis, they find that the AMDM is bounded by $-2 < \kappa < 0.3$ whereas the EDM is bounded by $-0.5 < \tilde\kappa < 1.5$. For our case, we consider the process of single anti-top-quark production through the charged current with the $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ signal.
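The fitted cross-sections in Eqs. (14)-(25) are quadratic polynomials in the coupling and can be evaluated directly. A minimal sketch, with the $\sigma(\hat a_A)$ coefficients copied from Eqs. (15), (17), (23) and (25); the function and dictionary names are ours:

```python
def sigma_aA(aA, quad, const):
    """Total cross-section sigma(a_A) = quad * a_A**2 + const, in pb,
    matching the form of Eqs. (15), (17), (19), (21), (23), (25)."""
    return quad * aA**2 + const

# Coefficients read off the fits in the text: (quad, const) in pb,
# keyed by (sqrt(s) in TeV, electron polarization in %).
FITS = {
    (7.07, 0):   (0.0236, 0.737),   # Eq. (15)
    (10.0, 0):   (0.0593, 1.287),   # Eq. (17)
    (7.07, -80): (0.0423, 1.328),   # Eq. (23)
    (10.0, -80): (0.107,  2.315),   # Eq. (25)
}

# At a_A = 0 each fit reduces to its SM cross-section, and the -80%
# polarized SM rate exceeds the unpolarized one at both energies.
sm_7_unpol = sigma_aA(0.0, *FITS[(7.07, 0)])
sm_7_pol   = sigma_aA(0.0, *FITS[(7.07, -80)])
```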
We base our results on the data at $\sqrt{s}=10\hspace{0.8mm}TeV$, ${\cal L}=1000\hspace{0.8mm}fb^{-1}$, $\delta_{sys}=0\%$, $P_{e^-}=0\%$ and $95\%\hspace{0.8mm}C.L.$, obtaining $\hat a_V=(-0.2308, 0.2204)$, $\hat a_A=|0.2259|$ and $\hat a_V=(-0.3067, 0.2963)$, $\hat a_A=|0.3019|$ for the hadronic and leptonic modes, respectively. Although the conditions for the study of the dipole moments of the top-quark through the $b\to s\gamma$ transitions at the Tevatron and the LHC, and through $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$, are different, our results are competitive with respect to the results reported in Ref. \cite{Bouzas}. More recently, using the process $pp\to p\gamma^* \gamma^* p \to pt\bar t p$, a detailed study of the top-quark anomalous couplings $\hat a_V$ and $\hat a_A$ for the LHC at $14\hspace{0.8mm}TeV$ with $300\hspace{0.8mm} fb^{-1}$ of data was done \cite{Sh}. The $68\%$ C.L. bounds that they obtained are found to be in the intervals $\hat a_V= (-0.6389, 0.0233)$ and $\hat a_A=(-0.1158, 0.1158)$. From the comparison of our study via the process $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ at the FCC-he with the process $pp\to p\gamma^* \gamma^* p \to pt\bar t p$ at the LHC, our results indicate a significant improvement in the measurements of $\hat a_V$ and $\hat a_A$. Additionally, it is noteworthy that the ratio of the total cross-sections between $e^-p \to e^-\bar b \to \bar t \nu_e \gamma \to \bar t(\to W^- \to (qq', l^- \bar\nu_l)+b) \nu_e\gamma$ and $pp\to p\gamma^* \gamma^* p \to pt\bar t p$ is a factor ${\cal O}(10^3)$, indicating that our projected cross-sections are about three orders of magnitude higher than those reported in Ref. \cite{Sh}.
These predictions indicate that the sensitivity on the anomalous couplings $\hat a_V$ and $\hat a_A$ can be measured at the FCC-he better by a few orders of magnitude in comparison with the predictions of the LHC. \section{Model-independent sensitivity estimates on $\hat a_V$ and $\hat a_A$} Tables II-VII show the results for the model-independent sensitivity achievable at $95\%$ C.L. for the non-standard couplings $\hat a_V$ and $\hat a_A$, obtained from an analysis of the process $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$ at the FCC-he. At the FCC-he, we assume the center-of-mass energies $\sqrt{s}=7.07, 10\hspace{0.8mm}TeV$ and luminosities ${\cal L}= 50, 100, 300, 500, 1000\hspace{0.8mm}fb^{-1}$ with unpolarized and polarized electron beams $P_{e^-}=-80\%, 0\%, 80\%$. Additionally, we impose the acceptance cuts for the FCC-he given by Eq. (12) and take into account the systematic uncertainties $\delta_{sys}=0\%, 3\%, 5\%$. In order to extract the expected sensitivity at $95\%$ C.L. on the effective operator couplings $\hat a_V$ and $\hat a_A$, we compute the cross-section $\sigma_{BSM}(\hat a_V, \hat a_A)$ of the process $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$ as a function of the effective couplings, as discussed in the previous section, assume the measured cross-section to coincide with the SM prediction, and construct the following $\chi^2$ function: \begin{equation} \chi^2(\hat a_V, \hat a_A)=\biggl(\frac{\sigma_{SM}-\sigma_{BSM}(\sqrt{s}, \hat a_V, \hat a_A, P_{e^-})}{\sigma_{SM}\sqrt{(\delta_{st})^2 +(\delta_{sys})^2}}\biggr)^2. \end{equation} \noindent $\sigma_{SM}$ is the SM cross-section and $\sigma_{BSM}(\sqrt{s}, \hat a_V, \hat a_A, P_{e^-})$ is the total cross-section containing contributions from the SM and BSM, while $\delta_{st}=\frac{1}{\sqrt{N_{SM}}}$ and $\delta_{sys}$ are the statistical and systematic uncertainties, respectively.
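For a single coupling varied at a time, the $95\%$ C.L. limit follows from setting $\chi^2 = 3.84$ in Eq. (26) and inverting the quadratic fits of Eqs. (14)-(25). The sketch below is our reconstruction (the $\chi^2 = 3.84$ one-parameter threshold, the closed-form inversion, and the event count $N_{SM} = {\cal L}\cdot\sigma_{SM}\cdot BR(t\to Wb)\cdot BR(W)\cdot\epsilon_b$ with $\epsilon_b = 0.8$, as used in the text):

```python
import math

def aA_bound_95(quad, sigma_sm_pb, lumi_fb, br_top, br_W, eff_b, delta_sys=0.0):
    """Invert chi^2(a_A) = 3.84 (95% C.L., one parameter) for the fits
    sigma_BSM(a_A) = quad * a_A**2 + sigma_SM of Eqs. (14)-(25)."""
    # Event count: convert sigma from pb to fb to match L in fb^-1.
    n_sm = lumi_fb * (sigma_sm_pb * 1000.0) * br_top * br_W * eff_b
    delta_st = 1.0 / math.sqrt(n_sm)
    delta = math.sqrt(delta_st**2 + delta_sys**2)
    # chi^2 = (quad * a_A**2 / (sigma_SM * delta))**2 = 3.84 => solve for a_A.
    return math.sqrt(math.sqrt(3.84) * sigma_sm_pb * delta / quad)

# sqrt(s) = 10 TeV, P = 0%, hadronic W decay, delta_sys = 0, L = 1000 fb^-1:
bound_had = aA_bound_95(quad=0.0593, sigma_sm_pb=1.287, lumi_fb=1000.0,
                        br_top=1.0, br_W=0.674, eff_b=0.8)
```

With these inputs the bound evaluates to roughly $0.226$, consistent with the $\hat a_A = |0.2259|$ quoted above for this configuration; using the light-leptonic $BR(W\to l\nu_l) = 0.213$ instead reproduces the leptonic-mode value of about $0.302$.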
The number of events $N_{SM}$ for the process $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$ is calculated as $N_{SM}={\cal L}_{int}\times \sigma_{SM} \times BR(\bar t \to W^-b)\times BR(W^- \to qq' (l^-\nu_l))\times \epsilon_{b-tag}$, where ${\cal L}_{int}$ is the integrated FCC-he luminosity and the $b$-jet tagging efficiency is $\epsilon_b=0.8$ \cite{atlas}. The top-quark decays weakly and almost $100\%$ of the time to a $W$ boson and a $b$ quark, specifically $\bar t\to \bar bW^-$, where the $W$ boson decays either hadronically ($W \to qq'$) or leptonically ($W\to l^-\nu_l$), with Branching Ratios of $BR(W \to q q')=0.674$ for the hadronic decay, $BR(W \to l\nu_l)(l=e, \mu)=0.213$ for the light leptonic decays and $BR(W \to \tau\nu_\tau)=0.113$ \cite{Data2018}. Next, we present the sensitivity measurement for the anomalous couplings $\hat a_V$ and $\hat a_A$ as shown in Tables II-VII, which are obtained for $\sqrt{s}=7.07, 10\hspace{0.8mm}TeV$, ${\cal L}= 50-1000\hspace{0.8mm}fb^{-1}$ and $P_{e^-}=-80\%, 0\%, 80\%$, where only one coupling at a time is varied. From Tables II-VII, the results for the dipole moments $\hat a_V$ and $\hat a_A$, for the specific values $\sqrt{s}= 10\hspace{0.8mm}TeV$, ${\cal L}=1000\hspace{0.8mm}fb^{-1}$, $P_{e^-}= -80\%, 0\%, 80\%$ and $\delta_{sys}=0\%$, are as follows:\\ $i)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=-80\%$ and BR($W^- \to$ hadronic): \begin{eqnarray} -0.2041 \leq & \hat a_V & \leq 0.1858, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.1939 \leq & \hat a_A & \leq 0.1939, \hspace{3mm} \mbox{$95\%$ C.L.}. \end{eqnarray} $ii)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=-80\%$ and BR($W^- \to$ leptonic): \begin{eqnarray} -0.2695 \leq & \hat a_V & \leq 0.2512, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.2592 \leq & \hat a_A & \leq 0.2592, \hspace{3mm} \mbox{$95\%$ C.L.}.
\end{eqnarray} $iii)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=0\%$ and BR($W^- \to$ hadronic): \begin{eqnarray} -0.2308 \leq & \hat a_V & \leq 0.2204, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.2259 \leq & \hat a_A & \leq 0.2259, \hspace{3mm} \mbox{$95\%$ C.L.}. \end{eqnarray} $iv)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=0\%$ and BR($W^- \to$ leptonic): \begin{eqnarray} -0.3067 \leq & \hat a_V & \leq 0.2963, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.3019 \leq & \hat a_A & \leq 0.3019, \hspace{3mm} \mbox{$95\%$ C.L.}. \end{eqnarray} $v)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=80\%$ and BR($W^- \to$ hadronic): \begin{eqnarray} -0.3428 \leq & \hat a_V & \leq 0.3321, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.3371 \leq & \hat a_A & \leq 0.3371, \hspace{3mm} \mbox{$95\%$ C.L.}. \end{eqnarray} $vi)$ Sensitivity on $\hat a_V$ and $\hat a_A$ for $\sqrt{s}=10\hspace{0.8mm} TeV$, $P_{e^-}=80\%$ and BR($W^- \to$ leptonic): \begin{eqnarray} -0.4563 \leq & \hat a_V & \leq 0.4456, \hspace{3mm} \mbox{$95\%$ C.L.}, \\ -0.4505 \leq & \hat a_A & \leq 0.4505, \hspace{3mm} \mbox{$95\%$ C.L.}. \end{eqnarray} A direct comparison of Eqs. (27)-(38) for $\hat a_V$ and $\hat a_A$ shows that the sensitivity improves by up to $12\%$ for the case with $P_{e^-}=- 80\%$ and BR($W^- \to$ hadronic, leptonic) with respect to the case with $P_{e^-}=0\%$ and BR($W^- \to$ hadronic, leptonic). Meanwhile, from Eqs. (27)-(30) and (35)-(38), the sensitivity improves by up to $67\%$ for the case with $P_{e^-}=-80\%$ and BR($W^- \to$ hadronic, leptonic) with respect to the case with $P_{e^-}=80\%$ and BR($W^- \to$ hadronic, leptonic), respectively. For $P_{e^-}=0\%$ and the hadronic channel of the $W$ boson, Figs. 9-10 show the prospects for the sensitivity on the electromagnetic anomalous couplings $\hat a_V$ and $\hat a_A$ at the FCC-he.
Also, in order to obtain the plots we have assumed $\sqrt{s}=7.07, 10\hspace{0.8mm}TeV$ and ${\cal L}= 50, 250, 1000\hspace{0.8mm}fb^{-1}$ at the $95\%$ C.L. From these contour plots, our forecast for the future sensitivity to the observables $\hat a_V$ and $\hat a_A$ is based on the process $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$, as well as on the future projections of the FCC-he for $\sqrt{s}$ and ${\cal L}$. The allowed regions for the top-quark AMDM, EDM and collider sensitivity are colored pink, blue and purple, respectively, while the SM prediction can be obtained from Eqs. (14)-(25). The $95\%$ C.L. regions for each of these couplings separately are $\hat a_V \hspace{1mm}\in \hspace{1mm}[-0.25, 0.25]$, $\hat a_A \hspace{1mm}\in \hspace{1mm}[-0.29, 0.29]$ for $\sqrt{s}=7.07\hspace{0.8mm}TeV$ and ${\cal L}= 250 \hspace{0.8mm}fb^{-1}$. In addition, $\hat a_V \hspace{1mm}\in \hspace{1mm}[-0.20, 0.20]$ and $\hat a_A \hspace{1mm}\in \hspace{1mm}[-0.20, 0.20]$ for $\sqrt{s}=10\hspace{0.8mm}TeV$ and ${\cal L}= 1000 \hspace{0.8mm}fb^{-1}$. These results are consistent with those shown in Tables IV and V. It is worth mentioning that the results obtained in Tables IV and V, as well as the corresponding results obtained from the contours in Figs. 9 and 10, are in some cases more sensitive than those reported in Table I. In particular, an improvement is reachable in comparison with the constraints obtained from the radiative $b \to s\gamma$ transitions at the Tevatron and LHC \cite{Bouzas} and from the $pp\to p\gamma^* \gamma^* p \to pt\bar t p$ \cite{Sh} searches mentioned in Table I and subsection B. \section{Conclusions} As mentioned above, due to its sizable $t\bar t\gamma$ coupling, the top-quark is one of the most attractive particles and provides one of the most convincing alternatives for probing new physics BSM, such as the AMDM ($\hat a_V$) and the EDM ($\hat a_A$).
Furthermore, the EDM is particularly interesting because it is very sensitive to possible new sources of CP violation in the quark and lepton sectors. In this paper, we have studied the sensitivity expected at the FCC-he to the electromagnetic anomalous couplings in the $t\bar t \gamma$ vertex. We consider the single top-quark production mode $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$ with $W$ boson exchange via the $t$-channel, that is, through charged-current production. This channel has the largest cross-section and is hence the dominant single top-quark production mode \cite{Sun}. Our study is based on the projections of the FCC-he for the center-of-mass energies $\sqrt{s}$, the integrated luminosity ${\cal L}$ and the electron beam polarization. Additionally, we take into account kinematic cuts and systematic uncertainties $\delta_{sys}=0\%, 3\%, 5\%$. The cut-based optimization at $7.07\hspace{0.8mm}TeV$ and $10\hspace{0.8mm}TeV$ involves a set of selection cuts on various kinematic variables, carefully chosen with the criterion that they not be built from kinematic properties of one or part of the decay products of the top-quark, which could in principle bias the sensitivity to the anomalous couplings. The kinematic variables selected for these cuts include $\eta^{\gamma}$, $p^\gamma_T$ and $p^{(\nu)}_T$, as defined in Eq. (12). The final results of this optimization for the resulting cross-section of the $e^-p \to e^- \bar b \to \bar t \nu_e \gamma$ signal, and the prospective sensitivities for the dipole moments $\hat a_V$ and $\hat a_A$ indicated in Figs. 3-10 and Tables II-VII, as well as in Eqs. (14)-(25) and (27)-(38), imply that the process $e^-p \to \bar t \nu_e\gamma$ at the FCC-he is an excellent option for probing the physics of the top-quark. This makes a future $e^-p$ collider an ideal tool to study the electromagnetic properties of the top-quark through the $t\bar t \gamma$ vertex.
From Tables III, V and VII, the sensitivities estimated on the dipole moments of the top-quark are $\hat a_V=[-0.2308, 0.2204]$ and $|\hat a_A|=0.2259$ at $95\%$ C.L. in the hadronic channel with an unpolarized electron beam, $P_{e^-}=0\%$. With a polarized electron beam, the corresponding results for $P_{e^-}=80\%$ and $P_{e^-}=-80\%$ are $\hat a_V=[-0.3428, 0.3321]$, $|\hat a_A|=0.3371$ and $\hat a_V=[-0.2041, 0.1858]$, $|\hat a_A|=0.1939$ at $95\%$ C.L., respectively. The corresponding results for the leptonic channel with $P_{e^-}=0\%, 80\%, -80\%$ are $\hat a_V=[-0.3067, 0.2963]$, $|\hat a_A|=0.3019$; $\hat a_V=[-0.4563, 0.4456]$, $|\hat a_A|=0.4505$; and $\hat a_V=[-0.2695, 0.2512]$, $|\hat a_A|=0.2592$, respectively. The bounds on $\hat a_V$ and $\hat a_A$ in the leptonic channel are weaker than those in the hadronic channel, the hadronic bounds being about a factor of $0.75$ of the leptonic ones. From these results, we find that the sensitivities estimated on the dipole moments of the top-quark are of the same order of magnitude as those reported in Table I and Refs. \cite{Ibrahim,Atwood,Polouse,Choi,Polouse1,Aguilar0,Amjad,Juste,Asner,Abe,Aarons,Brau,Baer,Grzadkowski:2005ye,murat,Billur}. In particular, in comparison with the constraints obtained from the radiative $b \to s\gamma$ transitions at the Tevatron and LHC \cite{Bouzas} and from the process $pp\to p\gamma^* \gamma^* p \to pt\bar t p$ at the LHC \cite{Sh}, our results are more sensitive. Given these prospective sensitivities, we highlight that the FCC-he is a potential top-quark factory, particularly well suited to studying the dipole moments of the top-quark in a cleaner environment. Summarizing, the FCC-he offers significant opportunities to study the anomalous couplings of the top-quark. However, more extensive studies at the theoretical, phenomenological and experimental level are needed.
These new possibilities for investigating the electromagnetic properties of the top-quark will eventually open new avenues in the understanding of top-quark physics, as well as new physics BSM. \vspace{1.5cm} \begin{center} {\bf Acknowledgments} \end{center} A. G. R. and M. A. H. R. acknowledge support from SNI and PROFOCIE (M\'exico). \vspace{1.5cm} \begin{table}[!ht] \caption{Sensitivities on the AMDM $\hat a_V$ and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{ $\sqrt{s}=$ 7.07 TeV, \hspace{5mm} $P_{e^-} = 0 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.6599, 0.6578] & 0.6587 & [-0.8815, 0.8794] & 0.8802 \\ 50 & $3\%$ & [-1.3758, 1.3737] & 1.3742 & [-1.4138, 1.4118] & 1.4123 \\ 50 & $5\%$ & [-1.7606, 1.7585] & 1.7589 & [-1.7792, 1.7772] & 1.7776 \\ \hline 100 & $0\%$ & [-0.5551, 0.5530] & 0.5539 & [-0.7414, 0.7393] & 0.7401 \\ 100 & $3\%$ & [-1.3666, 1.3645] & 1.3651 & [-1.3864, 1.3843] & 1.3849 \\ 100 & $5\%$ & [-1.7563, 1.7542] & 1.7546 & [-1.7657, 1.7636] & 1.7640 \\ \hline 300 & $0\%$ & [-0.4221, 0.4199] & 0.4208 & [-0.5636, 0.5615] & 0.5624 \\ 300 & $3\%$ & [-1.3604, 1.3583] & 1.3589 & [-1.3672, 1.3651] & 1.3656 \\ 300 & $5\%$ & [-1.7534, 1.7513] & 1.7517 & [-1.7565, 1.7545] & 1.7549 \\ \hline 500 & $0\%$ & [-0.3715, 0.3695] & 0.3704 & [-0.4962, 0.4941] & 0.4949 \\ 500 & $3\%$ & [-1.3591, 1.3571] & 1.3576 & [-1.3632, 1.3612] & 1.3617 \\ 500 & $5\%$ & [-1.7528, 1.7507] & 1.7511 & [-1.7547, 1.7526] & 1.7530 \\ \hline 1000 & $0\%$ &
[-0.3126, 0.3105] & 0.3115 & [-0.4174, 0.4153] & 0.4162 \\ 1000 & $3\%$ & [-1.3582, 1.3561] & 1.3567 & [-1.3602, 1.3582] & 1.3587 \\ 1000 & $5\%$ & [-1.7523, 1.7503] & 1.7507 & [-1.7533, 1.7512] & 1.7516 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[!ht] \caption{Sensitivities on the AMDM $\hat a_V$ and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{ $\sqrt{s}=$ 10 TeV, \hspace{5mm} $P_{e^-} = 0 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.4824, 0.4719] & 0.4777 & [-0.6428, 0.6324] & 0.6384 \\ 50 & $3\%$ & [-1.1429, 1.1326] & 1.1391 & [-1.1618, 1.1514] & 1.1579 \\ 50 & $5\%$ & [-1.4667, 1.4563] & 1.4632 & [-1.4757, 1.4653] & 1.4722 \\ \hline 100 & $0\%$ & [-0.4065, 0.3961] & 0.4017 & [-0.5414, 0.5309] & 0.5368 \\ 100 & $3\%$ & [-1.1386, 1.1281] & 1.1347 & [-1.1482, 1.1378] & 1.1443 \\ 100 & $5\%$ & [-1.4647, 1.4542] & 1.4612 & [-1.4692, 1.4588] & 1.4657 \\ \hline 300 & $0\%$ & [-0.3101, 0.2997] & 0.3052 & [-0.4126, 0.4022] & 0.4079 \\ 300 & $3\%$ & [-1.1356, 1.1252] & 1.1317 & [-1.1388, 1.1284] & 1.1349 \\ 300 & $5\%$ & [-1.4633, 1.4529] & 1.4598 & [-1.4648, 1.4544] & 1.4613 \\ \hline 500 & $0\%$ & [-0.2735, 0.2631] & 0.2686 & [-0.3638, 0.3534] & 0.3589 \\ 500 & $3\%$ & [-1.1349, 1.1246] & 1.1311 & [-1.1369, 1.1265] & 1.1330 \\ 500 & $5\%$ & [-1.4629, 1.4526] & 1.4595 & [-1.4639, 1.4535] & 1.4604 \\ \hline 1000 & $0\%$ & [-0.2308, 0.2204] & 0.2259 & [-0.3067, 0.2963] & 0.3019 \\ 1000 & $3\%$ & [-1.1345, 1.1241] & 1.1306 & 
[-1.1355, 1.1251] & 1.1316 \\ 1000 & $5\%$ & [-1.4628, 1.4524] & 1.4593 & [-1.4632, 1.4528] & 1.4597 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sensitivities on the AMDM $\hat a_V$ and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{$\sqrt{s}=$ 7.07 TeV, \hspace{5mm} $P_e = -80 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.5756, 0.5574] & 0.5696 & [-0.7694, 0.7512] & 0.7612 \\ 50 & $3\%$ & [-1.3809, 1.3627] & 1.3685 & [-1.4029, 1.3848] & 1.3904 \\ 50 & $5\%$ & [-1.7727, 1.7545] & 1.7582 & [-1.7832, 1.7651] & 1.7687 \\ \hline 100 & $0\%$ & [-0.4835, 0.4654] & 0.4789 & [-0.6469, 0.6288] & 0.6401 \\ 100 & $3\%$ & [-1.3757, 1.3575] & 1.3633 & [-1.3869, 1.3688] & 1.3746 \\ 100 & $5\%$ & [-1.7702, 1.7521] & 1.7558 & [-1.7756, 1.7574] & 1.7611 \\ \hline 300 & $0\%$ & [-0.3659, 0.3477] & 0.3639 & [-0.4910, 0.4729] & 0.4864 \\ 300 & $3\%$ & [-1.3722, 1.3540] & 1.3599 & [-1.3760, 1.3579] & 1.3637 \\ 300 & $5\%$ & [-1.7686, 1.7504] & 1.7541 & [-1.7704, 1.7522] & 1.7559 \\ \hline 500 & $0\%$ & [-0.3209, 0.3027] & 0.3203 & [-0.4316, 0.4134] & 0.4280 \\ 500 & $3\%$ & [-1.3715, 1.3533] & 1.3592 & [-1.3738, 1.3556] & 1.3615 \\ 500 & $5\%$ & [-1.7683, 1.7501] & 1.7538 & [-1.7693, 1.7512] & 1.7549 \\ \hline 1000 & $0\%$ & [-0.2678, 0.2496] & 0.2694 & [-0.3618, 0.3436] & 0.3599 \\ 1000 & $3\%$ & [-1.3709, 1.3528] & 1.3586 & [-1.3721, 1.3539] & 1.3598 \\ 1000 & $5\%$ & [-1.7680, 1.7499] & 1.7536 & [-1.7686, 1.7504] & 1.7541 \\ 
\hline\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sensitivities on the AMDM $\hat a_V$ and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{ $\sqrt{s}=$ 10 TeV, \hspace{5mm} $P_e = -80 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.4210, 0.4027] & 0.4103 & [-0.5595, 0.5411] & 0.5482 \\ 50 & $3\%$ & [-1.1423, 1.1239] & 1.1287 & [-1.1529, 1.1346] & 1.1394 \\ 50 & $5\%$ & [-1.4679, 1.4496] & 1.4531 & [-1.4729, 1.4546] & 1.4581 \\ \hline 100 & $0\%$ & [-0.3556, 0.3372] & 0.3450 & [-0.4719, 0.4536] & 0.4609 \\ 100 & $3\%$ & [-1.1399, 1.1215] & 1.1263 & [-1.1453, 1.1269] & 1.1317 \\ 100 & $5\%$ & [-1.4668, 1.4484] & 1.4520 & [-1.4693, 1.4509] & 1.4545 \\ \hline 300 & $0\%$ & [-0.2724, 0.2541] & 0.2621 & [-0.3609, 0.3425] & 0.3503 \\ 300 & $3\%$ & [-1.1382, 1.1198] & 1.1246 & [-1.1400, 1.1216] & 1.1264 \\ 300 & $5\%$ & [-1.4660, 1.4477] & 1.4512 & [-1.4669, 1.4485] & 1.4520 \\ \hline 500 & $0\%$ & [-0.2409, 0.2226] & 0.2307 & [-0.3187, 0.3004] & 0.3083 \\ 500 & $3\%$ & [-1.1379, 1.1195] & 1.1243 & [-1.1389, 1.1206] & 1.1254 \\ 500 & $5\%$ & [-1.4659, 1.4475] & 1.4510 & [-1.4664, 1.4480] & 1.4515 \\ \hline 1000 & $0\%$ & [-0.2041, 0.1858] & 0.1939 & [-0.2695, 0.2512] & 0.2592 \\ 1000 & $3\%$ & [-1.1376, 1.1192] & 1.1240 & [-1.1382, 1.1198] & 1.1246 \\ 1000 & $5\%$ & [-1.4657, 1.4474] & 1.4509 & [-1.4660, 1.4476] & 1.4512 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sensitivities on the AMDM $\hat a_V$ 
and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{ $\sqrt{s}=$ 7.07 TeV, \hspace{5mm} $P_e = 80 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.9861, 0.9838] & 0.9842 & [-1.3174, 1.3150] & 1.3152 \\ 50 & $3\%$ & [-1.4429, 1.4406] & 1.4406 & [-1.5905, 1.5882] & 1.5881 \\ 50 & $5\%$ & [-1.7936, 1.7915] & 1.7913 & [-1.8772, 1.8749] & 1.8746 \\ \hline 100 & $0\%$ & [-0.8294, 0.8271] & 0.8276 & [-1.1079, 1.1056] & 1.1059 \\ 100 & $3\%$ & [-1.4019, 1.3996] & 1.3997 & [-1.4874, 1.4851] & 1.4852 \\ 100 & $5\%$ & [-1.7731, 1.7708] & 1.7705 & [-1.8177, 1.8153] & 1.8151 \\ \hline 300 & $0\%$ & [-0.6305, 0.6282] & 0.6289 & [-0.8421, 0.8398] & 0.8404 \\ 300 & $3\%$ & [-1.3725, 1.3702] & 1.3702 & [-1.4046, 1.4023] & 1.4024 \\ 300 & $5\%$ & [-1.7588, 1.7565] & 1.7563 & [-1.7744, 1.7721] & 1.7719 \\ \hline 500 & $0\%$ & [-0.5550, 0.5527] & 0.5535 & [-0.7413, 0.7389] & 0.7396 \\ 500 & $3\%$ & [-1.3663, 1.3640] & 1.3641 & [-1.3861, 1.3838] & 1.3839 \\ 500 & $5\%$ & [-1.7559, 1.7536] & 1.7534 & [-1.7654, 1.7630] & 1.7628 \\ \hline 1000 & $0\%$ & [-0.4669, 0.4646] & 0.4654 & [-0.6236, 0.6212] & 0.6219 \\ 1000 & $3\%$ & [-1.3617, 1.3594] & 1.3595 & [-1.3718, 1.3695] & 1.3696 \\ 1000 & $5\%$ & [-1.7537, 1.7514] & 1.7512 & [-1.7585, 1.7562] & 1.7559 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sensitivities on the AMDM $\hat a_V$ and the EDM $\hat a_A$ of the top-quark through the process $e^-p \to e^-\bar b \to \bar t \nu_e\gamma$ at the 
FCC-he.} \begin{center} \begin{tabular}{|cc|cc|cc|} \hline\hline \multicolumn{6}{|c|}{ $\sqrt{s}=$ 10 TeV, \hspace{5mm} $P_e = 80 \%$, \hspace{5mm} $95\%$ C.L.} \\ \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Leptonic} \\ \hline \cline{1-6} ${\cal L} \, (fb^{-1})$ & \hspace{0.5cm} $ \delta_{sys}$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} & \hspace{1.5cm} $\hat a_V$ \hspace{1.5cm} & \hspace{0.5cm} $|\hat a_A|$ \hspace{0.5cm} \\ \hline 50 & $0\%$ & [-0.7189, 0.7083] & 0.7129 & [-0.9589, 0.9482] & 0.9526 \\ 50 & $3\%$ & [-1.1770, 1.1663] & 1.1703 & [-1.2567, 1.2460] & 1.2499 \\ 50 & $5\%$ & [-1.4835, 1.4728] & 1.4765 & [-1.5256, 1.5149] & 1.5185 \\ \hline 100 & $0\%$ & [-0.6054, 0.5947] & 0.5995 & [-0.8072, 0.7965] & 0.8011 \\ 100 & $3\%$ & [-1.1563, 1.1456] & 1.1497 & [-1.2003, 1.1896] & 1.1936 \\ 100 & $5\%$ & [-1.4733, 1.4627] & 1.4663 & [-1.4953, 1.4846] & 1.4882 \\ \hline 300 & $0\%$ & [-0.4613, 0.4506] & 0.4555 & [-0.6146, 0.6039] & 0.6087 \\ 300 & $3\%$ & [-1.1419, 1.1312] & 1.1352 & [-1.1576, 1.1469] & 1.1509 \\ 300 & $5\%$ & [-1.4665, 1.4558] & 1.4594 & [-1.4739, 1.4633] & 1.4669 \\ \hline 500 & $0\%$ & [-0.4067, 0.3959] & 0.4009 & [-0.5416, 0.5308] & 0.5357 \\ 500 & $3\%$ & [-1.1389, 1.1282] & 1.1323 & [-1.1485, 1.1378] & 1.1419 \\ 500 & $5\%$ & [-1.4651, 1.4544] & 1.4581 & [-1.4696, 1.4589] & 1.4626 \\ \hline 1000 & $0\%$ & [-0.3428, 0.3321] & 0.3371 & [-0.4563, 0.4456] & 0.4505 \\ 1000 & $3\%$ & [-1.1367, 1.1259] & 1.1300 & [-1.1415, 1.1309] & 1.1348 \\ 1000 & $5\%$ & [-1.4640, 1.4533] & 1.4570 & [-1.4663, 1.4556] & 1.4593 \\ \hline\hline \end{tabular} \end{center} \end{table}
\section{Introduction} \label{sec:intro} Quantum Monte Carlo (QMC) methods can accurately calculate the electronic structure of real materials\cite{QMCreview,NigUmr-BOOK-99,Kolorenc11}. The two most commonly used QMC methods for zero temperature calculations are variational Monte Carlo (VMC), which can compute expectation values of operators for optimized trial wave functions, and fixed-node diffusion Monte Carlo (DMC), which improves upon VMC results by using the imaginary-time evolution operator to project the trial wave function onto the ground state subject to the fixed-node boundary condition\cite{Anderson77}. QMC has been used to calculate a variety of properties such as cohesive energies, defect formation energies, and phase transition pressures\cite{Yao96, Gaudoin02, Hood03, Maezono03, Needs03, Alfe04b, Alfe05a, Alfe05b, Drummond06, Batista06, Maezono07, Pozzo08, Kolorenc08, Sola09, Hennig10, Driver10, Maezono10, Parker11, Abbasnejad12, Schwarz12, Hood12, Azadi13, Ertekin13, Shulenburger13, Chen14, Benali14, Azadi14, Foyevtsova14}. The accuracy is limited mostly by the fixed-node approximation\cite{Anderson77, Parker11} and the computational power required to reduce statistical uncertainty (the subject of this paper). Minimizing the time for a QMC calculation of a property (e.g., energy) to a given statistical accuracy requires minimizing the evaluation cost of the orbitals -- used in the trial wave function $\Psi({\bf R})$ -- at each sampling point ${\bf R}$ of the electron coordinates. The QMC energy, $E_{\rm QMC}$, is a weighted average of the {\it local energy}, \begin{equation} E_L(\Rvec) = \frac{H \Psi(\Rvec)}{\Psi(\Rvec)}, \end{equation} at $N_{\rm MC}$ stochastically-chosen configurations: \begin{equation} E_{\rm QMC} = \frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} w_i E_L(\Rvec_i). \label{qmc_energy} \end{equation} The statistical uncertainty in $E_{\rm QMC}$ is proportional to $1/\sqrt{N_{\rm MC}}$. 
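As a toy illustration of the weighted average in Eq.~(\ref{qmc_energy}) and of the $1/\sqrt{N_{\rm MC}}$ behaviour of the statistical uncertainty, one might write the following sketch; the function names and the synthetic Gaussian local energies are ours, not part of any production QMC code:

```python
import numpy as np

# Toy illustration of the weighted local-energy average of Eq. (qmc_energy)
# and of the 1/sqrt(N_MC) decrease of its statistical uncertainty.
# Synthetic Gaussian "local energies" stand in for real QMC samples.
rng = np.random.default_rng(0)

def qmc_energy(local_energies, weights=None):
    """Weighted average (1/N) sum_i w_i E_L(R_i)."""
    e = np.asarray(local_energies, dtype=float)
    w = np.ones_like(e) if weights is None else np.asarray(weights, dtype=float)
    return np.mean(w * e)

def error_of_mean(n_mc, sigma=0.5, e_exact=-1.0, n_repeats=200):
    """Spread of the sample mean over independent runs of length n_mc."""
    means = [qmc_energy(rng.normal(e_exact, sigma, n_mc)) for _ in range(n_repeats)]
    return np.std(means)

# Quadrupling the sample size roughly halves the statistical uncertainty:
ratio = error_of_mean(400) / error_of_mean(100)
```

The observed `ratio` is close to $1/2$, consistent with the $1/\sqrt{N_{\rm MC}}$ scaling stated above.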
Thus, reducing the statistical uncertainty in the calculated property requires repeated evaluation of the wave function $\Psi(\Rvec)$ and of the Hamiltonian $H$ acting on the wave function, which involves both the wave function and its first and second derivatives. The root-mean-square fluctuation of the local energy in VMC \begin{equation} \sigma_{\rm VMC} = \sqrt{\frac{1}{N_{\rm MC}} \sum_{i=1}^{N_{\rm MC}} (E_L(\Rvec_i)-E_{\rm VMC})^2} \label{qmc_sigma} \end{equation} indicates the quality of $\Psi(\Rvec)$ because the individual local energies equal the average when $\Psi(\Rvec)$ is an exact eigenfunction of $H$. QMC simulations frequently use the Slater-Jastrow form of the wave function\cite{QMCreview}, $\Psi(\Rvec) = J(\Rvec) D(\Rvec)$, where $J(\Rvec)$ is a Jastrow factor\cite{JastrowFactor} (in this work, a simple electron-electron Jastrow with no free parameters is used to impose the electron-electron cusp condition) and $D(\Rvec)$ is a Slater determinant\cite{SlaterDeterminant} of single-particle orbitals. The orbitals used in QMC wave functions typically come from density-functional or Hartree-Fock calculations and, in periodic systems, are Bloch functions of the form \begin{equation} \phi_{n {\bf k}} ({\bf r}) = u_{n {\bf k}}({\bf r}) e^{i{\bf k}\cdot{\bf r}}, \end{equation} where $u_{n {\bf k}}({\bf r})$ has the periodicity of the crystal lattice, $n$ is the band index, and ${\bf k}$ the crystal momentum. The periodic function, $u_{n {\bf k}}({\bf r})$, is represented by a linear combination of basis functions. Frequently QMC calculations are performed using simulation cells larger than the primitive cell to reduce Coulomb finite-size errors.
However, since $u_{n {\bf k}}({\bf r})$ is periodic in the primitive cell, representing it by basis-function expansions in just the primitive cell is sufficient to simulate larger cells. The computational cost per $N$-electron Monte Carlo move of evaluating the Slater determinant is ${\it O}(N^3)$, when spatially-extended basis functions are used to represent the orbitals, since $N$ orbitals are evaluated for each of the $N$ electrons, and each orbital is a sum over $O(N)$ basis functions. Spatially-localized basis functions avoid the linear scaling of the number of basis functions with system size since only those basis functions that are non-zero at a given point contribute to the wave function value at that point, resulting in ${\it O}(N^2)$ scaling. Planewaves, despite their undesirable scaling, are a popular choice of basis functions for the density-functional and Hartree-Fock methods because of their desirable analytic properties. The advantage of a planewave representation is that planewaves form an orthogonal basis, and, in the infinite sum, a complete single-particle basis. Thus, adding more planewaves to a truncated basis (as is always used in practice) systematically improves the wave function representation towards the infinite single-particle basis limit. The energy of the highest frequency planewave included in the sum, the cutoff energy $E_{\rm cut} = \hbar^2 {G_{\rm cut}}^2 / 2\,m_{\rm e}$, characterizes a given truncated planewave basis by setting the smallest length scale about which the wave function has information.
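The direct evaluation of such a truncated planewave sum, whose cost grows with the number of planewaves kept, can be sketched as follows (an illustration with our own function names, not the implementation used in this work):

```python
import numpy as np

# Brute-force evaluation of a planewave-represented orbital,
# u(r) = sum_G c_G exp(i G . r).  The cost is O(N_G) per sample point,
# which is the expense the polynomial approximations below avoid.
def pw_orbital(r, g_vectors, coeffs):
    """r: length-3 point; g_vectors: (n_G, 3) array; coeffs: (n_G,) complex."""
    phases = np.exp(1j * (np.asarray(g_vectors, dtype=float)
                          @ np.asarray(r, dtype=float)))
    return np.dot(np.asarray(coeffs, dtype=complex), phases)

# Two-planewave toy orbital with c = 1 for G = (0,0,0) and G = (1,0,0):
value_at_origin = pw_orbital([0.0, 0.0, 0.0], [[0, 0, 0], [1, 0, 0]], [1.0, 1.0])
```

Every sample point requires touching all $N_G$ coefficients, which is the origin of the unfavourable scaling discussed above.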
Thus, a planewave-based orbital $\phi_{\rm PW}$ is a sum over all planewaves ${\bf G}$ below the cutoff, each multiplied by a real- or complex-valued coefficient $c_{{\bf G} n {\bf k}}$ unique to that planewave, the band index $n$, and the crystal momentum ${\bf k}$: \begin{equation} \label{eq:pw_orbital} u_{n {\bf k}}({\bf r}) = \phi_{\rm PW}({\bf r}) = \sum_{\bf G} c_{{\bf G} n {\bf k}} \exp(\imath {\bf G}\cdot {\bf r}). \end{equation} Williamson {\it et al.}\cite{Williamson01} first applied the pp-form spline interpolation method to approximate planewave-based orbitals by localized basis functions in QMC calculations, reporting an $O(N)$ reduction in the time scaling. Alf\`e and Gillan\cite{Alfe04}, who introduced the related method of B-spline approximation in QMC, report a significant reduction in the calculation time while maintaining planewave-level accuracy. This work compares the three methods previously applied to QMC (pp-splines\cite{Williamson01}, interpolating B-splines\cite{Esler_unpub}, and smoothing B-splines\cite{Hernandez97, Alfe04}) with a fourth method (Lagrange polynomials) originally implemented by one of us in QMC but heretofore unpublished. Section~\ref{sec:methods} introduces, compares and contrasts the four methods. Section~\ref{sec:accuracy} compares the accuracy of the polynomial methods in reproducing the QMC energies and fluctuations in the local energy relative to the corresponding values from the planewave expansion. Section~\ref{sec:accuracy} also studies whether it is advantageous to construct separate approximations for the gradient and the Laplacian of the orbitals. Section~\ref{sec:speed} compares the time cost in QMC calculations of the polynomial methods and planewave expansions. Section~\ref{sec:memory} compares the memory requirements of the polynomial methods and planewaves.
Section~\ref{sec:conclusions} concludes that higher accuracy and a lower memory requirement make smoothing B-splines, with a separate approximation for the Laplacian, the best choice. The appendix describes the details of the approximating polynomials. \section{Methods} \label{sec:methods} The four methods of approximating the planewave-represented single-particle orbitals with polynomials that we study in this paper are: interpolating Lagrange polynomials\cite{Lagrange}, interpolating piecewise-polynomial-form splines (pp-splines)\cite{deBoor01} (often simply called interpolating splines), and basis-form splines (B-splines), both interpolating\cite{B_spline,deBoor01} and smoothing\cite{Hernandez97,Alfe04}. For the pp-splines, we employ the Princeton \textsc{pspline} package\cite{PrincetonSpline}, and, for interpolating B-splines, the \textsc{einspline} package\cite{Einspline}, interfaced to the \textsc{champ} QMC program\cite{Cha-PROG-XX}. {\it Common features.} The methods share several aspects. They construct the orbital approximation by a trivariate polynomial tensor product. Each of the methods can employ polynomials of arbitrary order, $n$. We use cubic polynomials, the customary choice. The methods transform the Cartesian coordinates to reduced coordinates prior to evaluating the polynomial approximation (see Eq.~(\ref{eq:reducedcoordinate}) and the Coordinates paragraph). They use a grid of real-space points with associated coefficients and have a natural grid spacing defined by the highest-energy planewave in the planewave sum representing the orbital (see Eq.~(\ref{eq:natural_spacing}) and the Grid paragraph). They share two possibilities for evaluating the required derivatives of a polynomial-represented function: (1) derivatives of the polynomials or (2) separate polynomial approximations of the planewave-represented derivatives.
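As a one-dimensional sketch of the cubic ($n=3$) case, one factor of the trivariate tensor product can be written as a Lagrange interpolation through the four grid values surrounding the evaluation point; this is an illustration under our own naming, not the \textsc{champ} implementation:

```python
import numpy as np

# One-dimensional cubic Lagrange interpolation through the four grid values
# f(-1), f(0), f(1), f(2); the trivariate approximation is a tensor product
# of such factors in x, y and z.
NODES = np.array([-1.0, 0.0, 1.0, 2.0])

def lagrange_basis(x):
    """The four cubic Lagrange basis polynomials evaluated at x in [0, 1)."""
    b = np.empty(4)
    for i, ti in enumerate(NODES):
        others = np.delete(NODES, i)
        b[i] = np.prod((x - others) / (ti - others))
    return b

def interp(values, x):
    """values: f at NODES; x: point in [0, 1)."""
    return np.dot(lagrange_basis(x), values)

# Any cubic polynomial is reproduced exactly (up to roundoff):
f = lambda t: 2 * t**3 - t + 1
residual = abs(interp(f(NODES), 0.3) - f(0.3))
```

Because each factor uses only the four nearest grid points, the evaluation cost per point is independent of the basis-set size, in contrast to the planewave sum.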
{\it Distinctive features.} As Lagrange-interpolated functions have discontinuous derivatives at the grid points, ensuring continuity in the derivative-dependent energies requires a separate interpolation for each of the components of the gradient and for the Laplacian of the orbitals, increasing the memory requirement by a factor of five. In contrast to Lagrange interpolation, splines of degree $n$ have continuous derivatives up to order $n-1$ at the grid points, and, thus, the gradient and Laplacian of the splined function can approximate the gradient and Laplacian of the planewave sum, though this choice leads to a loss of accuracy. Spline functions have two free parameters in each dimension that are used to set the boundary conditions. Since planewave-based orbitals are periodic, we choose the boundary conditions to have matching first and second derivatives at the boundaries in each dimension. The formulation of both B-splines and pp-splines may be either interpolating (exact function values at the grid points)\cite{Einspline} or smoothing\cite{Hernandez97,Alfe04}. Smoothing splines are advantageous when the data is noisy, but this is not the case in our application. Instead our rationale is the following: the planewave coefficients of each orbital specify that orbital, and the particular form of smoothing spline we use\cite{Hernandez97,Alfe04} is constructed to exactly reproduce the nonzero coefficients (see~\ref{smoothing_spline}). Since fixing the values at the grid points and specifying the boundary conditions uniquely determines the interpolating spline function, interpolating B-splines and pp-splines yield identical function values\cite{deBoor01}. Due to the reduced number of coefficients stored per point, interpolating B-splines are preferable to pp-splines provided the time required for their evaluation is not larger than for pp-splines. {\it Grid.} Each of the interpolation methods permits either uniform or nonuniform grids. 
For simplicity, we employ uniform grids, but the number of grid points in each dimension need not be the same. The highest-energy planewave in the planewave sum representing a given orbital defines a natural maximal grid spacing, above which short length scale information is lost. One point per maximum and minimum of the highest-energy planewave $G_{\rm max}$, or two points per wavelength, $\lambda_{\rm min} = 2 \pi / G_{\rm max}$, defines this natural spacing $h_{\rm natural}$: \begin{equation} h_{\rm natural} = \frac{\lambda_{\rm min}}{2} = \frac{\pi}{G_{\rm max}}. \label{eq:natural_spacing} \end{equation} {\it Coordinates.} To simplify the form of the polynomials for the evaluation of the splines, we formulate the methods such that the point where the function is evaluated lies in the interval $[0,1)$ in each dimension. The spline evaluation for a given point requires the coefficients at the four neighboring grid points in each dimension and requires transforming the Cartesian coordinates to reduced coordinates. The primitive-cell lattice vectors of the crystal ${\bf a}_i$ need not be orthogonal. The reduced vector, ${\tilde{\bf r}} = (\tilde{r}_1,\tilde{r}_2,\tilde{r}_3) = (\tilde{x},\tilde{y},\tilde{z})$, corresponding to the Cartesian vector ${\bf r}$, is \begin{eqnarray} \tilde{r}_i = N_{{\rm grid},i} \left( {\left( {\bf a}^{-1} {\bf r} \right)}_i - \lfloor {\left( {\bf a}^{-1} {\bf r} \right)}_i \rfloor \right) - \\ \nonumber \left\lfloor N_{{\rm grid},i} \left( {\left( {\bf a}^{-1} {\bf r} \right)}_i - \lfloor {\left( {\bf a}^{-1} {\bf r} \right)}_i \rfloor \right) \right\rfloor, \label{eq:reducedcoordinate} \end{eqnarray} where $\lfloor \, \rfloor$ is the floor function, which returns the greatest integer less than or equal to its argument, and ${\bf a}$ is the $3\times 3$ matrix of lattice vectors ${\bf a}_i$.
Multiplying the Cartesian coordinate by ${\bf a}^{-1}$ transforms it to crystal coordinates. Subtracting the integer part of the crystal coordinates forces the coordinate inside the primitive cell, restricting the magnitude of the coordinate values to be between zero and one. Multiplying the crystal coordinates by the number of grid points $N_{{\rm grid},i}$ along the $i^{th}$ lattice vector transforms the crystal coordinates into units of the grid point interval. Subtracting the integer part of the interval-unit crystal coordinate (i.e., the index of the grid point smaller than and closest to $r_i$) yields the reduced coordinate ${\tilde{\bf r}}$, each component of which is in the interval $[0,1)$. Obtaining the Cartesian-coordinate gradient and Laplacian from the reduced-coordinate gradient and Hessian requires the chain rule, yielding ${\bf \nabla}\phi({\bf r}) = N_{{\rm grid},i} ({\bf a}^{-1})^{\rm T} \tilde{\bf \nabla}\phi(\tilde{\bf r})$ and $\nabla^{2}\phi({\bf r}) = N_{{\rm grid},i}^2 \sum_{i,j}(({\bf a}^{-1})({\bf a}^{-1})^{\rm T})_{ij} \, \partial^{2}\phi(\tilde{\bf r})/\partial \tilde{r}_i \partial \tilde{r}_j$. \ref{sec:explicitforms} gives the explicit forms of the approximating functions. \section{Results} \label{sec:results} Three quantities compare the performance of Lagrange interpolation, pp-spline interpolation, B-spline interpolation and B-spline smoothing within quantum Monte Carlo calculations of periodic systems: (i) the {\it accuracy} in reproducing the planewave orbital values, (ii) the {\it speedup} relative to the planewave-based calculation, and (iii) the computer {\it memory} required. The results presented in this section are obtained for single-particle orbitals at the $\Gamma$ point, generated with the LDA exchange-correlation functional in density functional theory and Troullier-Martins pseudopotentials\cite{TroullierMartins91}.
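The steps above can be sketched as a short routine; as a convention for this illustration only, we assume the lattice matrix stores the primitive vectors ${\bf a}_i$ as columns:

```python
import numpy as np

# Sketch of the reduced-coordinate transform of Eq. (eq:reducedcoordinate):
# fold the point into the primitive cell, rescale to grid-interval units,
# and keep the fractional part, so each component lies in [0, 1).
# Convention (ours, for illustration): lattice columns are the vectors a_i.
def reduced_coordinates(r, lattice, n_grid):
    crystal = np.linalg.solve(np.asarray(lattice, dtype=float),
                              np.asarray(r, dtype=float))
    frac = crystal - np.floor(crystal)               # inside the primitive cell
    scaled = np.asarray(n_grid, dtype=float) * frac  # grid-interval units
    return scaled - np.floor(scaled)                 # reduced coordinates

# Simple-cubic cell of side 2 with a 4-point grid along each lattice vector:
red = reduced_coordinates([2.6, 0.1, -0.3], 2.0 * np.eye(3), [4, 4, 4])
```

Note that the floor function handles points outside the cell (including negative coordinates, as in the third component above) without any special casing.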
\footnote{Mg -- valence configuration: $2s^22p^63d^{0.1}$; cutoff radii: $1.20, 1.50, 1.80 \:{\rm a.u.}$, respectively. O -- valence configuration: $2s^22p^43d^0$; cutoff radii: $1.0, 1.0, 1.0\:{\rm a.u.}$, respectively, with Hamann's generalized state method\cite{Hamann89} for the d-channel. Si -- valence configuration: $3s^23p^23d^0$; cutoff radii: $2.25, 2.25, 2.25\:{\rm a.u.}$, respectively. Al -- valence configuration: $3s^23p^13d^0$; cutoff radii: $2.28, 2.28, 2.28\:{\rm a.u.}$, respectively, with the generalized state method for the d-channel.} \subsection{Accuracy} \label{sec:accuracy} To understand the accuracy of the polynomial approximation to the planewave sum in the context of QMC, we compare the error in the orbital value, the gradient of the orbital, the Laplacian of the orbital, the total VMC energy, and the root-mean-square fluctuation in the VMC local energy relative to the corresponding quantity computed using the planewave sum. Increasing the planewave cutoff tests the accuracy of the approximations as the planewave basis becomes more complete. \footnote{Reducing the grid spacing for fixed planewave cutoff results in all quantities converging to their planewave values for the selected cutoff\cite{supplementary}.} We find that the choice of $k$-point in our tests does not affect the conclusions, as similar results were obtained for various $k$-points ($\Gamma$, L, and X high-symmetry points in diamond Si) and different simulation cell sizes (2-, 8-, 16-, and 32-atom cells in diamond Si). However, significant differences in the accuracy of the approximations occur for the three different materials tested: diamond Si, fcc Al, and rock-salt MgO\cite{supplementary}. Since the approximation methods show the greatest differences from each other for the case of MgO in the rock-salt structure, we focus here on those results.
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{accuracy_cutoff_all_MAE_over_MAV+uncertainty_MgO_rock-salt_2_Gamma.eps} \end{center} \caption{(Color online) Relative mean absolute error of the orbitals, gradient of the orbitals, and Laplacian of the orbitals as a function of planewave cutoff in rock-salt MgO (chosen for greatest contrast in results) at natural grid spacing for each of the approximation methods: Lagrange polynomials (squares), pp-splines and interpolating B-splines (diamonds), and smoothing B-splines (triangles). Error bars indicate one standard deviation from the mean of the data point. The error in the gradient and Laplacian of the orbitals is the error in a direct approximation of the gradient and Laplacian, respectively, of the orbital, not the gradient or Laplacian of an approximation of the orbital. The large fluctuation in the error of the Laplacian is due to the small, stochastically chosen sample used to calculate the mean. Smoothing B-splines show the smallest error relative to the planewave value at all grid spacings although the statistical uncertainty in the average over the components of the gradient obscures that result. } \label{fig:orbdorbddorb_error} \vskip 9mm \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{accuracy_cutoff_VMC_E_MgO_rocksalt_2_Gamma.eps} \end{center} \caption{(Color online) VMC total energy as a function of planewave basis cutoff in rock-salt MgO for each of the approximation methods at the natural grid spacing for that cutoff. Error bars on points indicate one standard deviation of statistical uncertainty in the total energies. Smoothing B-splines with separate approximations for the orbitals and the Laplacian of the orbitals lie within one standard deviation of statistical uncertainty of the planewave value for all cutoffs tested. (Note: Lagrange polynomials appear on the figure only at a cutoff of 250~Ha [$\approx 6800$ eV]). 
} \label{fig:energy_with_cutoff} \vskip 9mm \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{sigma_cutoff_VMC_MgO_rocksalt_2_Gamma.eps} \end{center} \caption{(Color online) Root-mean-square fluctuation of the VMC local energy, $\sigma$, as a function of planewave basis cutoff in rock-salt MgO for each of the approximation methods at the natural grid spacing for that cutoff. Error bars on points indicate one standard deviation of statistical uncertainty in the values. Once the planewave value has converged, near 120~Ha ($\approx 3270$~eV), the smoothing B-splines with a separate approximation for the Laplacian are within statistical uncertainty of the planewave $\sigma$. (Note: Lagrange polynomials and smoothing B-splines without a separate approximation for the Laplacian appear on the figure only at a cutoff of 250~Ha [$\approx 6800$~eV]; interpolating B-splines without a separate interpolation for the Laplacian do not appear at all on this scale.)} \label{fig:sigma_with_cutoff} \vskip 9mm \end{figure} Figure~\ref{fig:orbdorbddorb_error} shows the relative mean absolute error in the orbital, its gradient, and its Laplacian as a function of the planewave cutoff for the four approximation methods at natural grid spacing in rock-salt MgO. The relative mean absolute error is the mean absolute error divided by the mean absolute value computed from the planewave sum. Since pp-splines and interpolating B-splines give identical function values\cite{deBoor01}, they lie on a single curve and set of points. For each of the approximations, the gradient and Laplacian used by the QMC calculation can be obtained either by taking derivatives of the polynomial-approximated orbital or by constructing separate polynomial approximations for the gradient and the Laplacian of the planewave sum. The central and lower panels of Figure~\ref{fig:orbdorbddorb_error} show the accuracy of separate approximations of the gradient and Laplacian of the planewave sum.
When separately approximating any derivatives of the orbitals, the resulting energy need not be an upper bound to the true energy, but the separate approximations recover the planewave value in the limit of infinite basis set. Spline interpolation is more accurate than Lagrange interpolation for all planewave cutoffs. Splines utilize all the tabulated function values (a global approximation) to enforce first and second derivative continuity across grid points, whereas Lagrange interpolation uses just the closest 64 points (a local approximation) and has derivative discontinuities at the grid points. This leads to larger fluctuations in the error of Lagrange interpolation compared to splines. Figures~\ref{fig:energy_with_cutoff} and~\ref{fig:sigma_with_cutoff} show the quantities of importance to QMC calculations, the total VMC energy $E_\mathrm{VMC}$ and the standard deviation of the local energy $\sigma_\mathrm{VMC}$, respectively, as a function of planewave cutoff in rock-salt MgO for the four approximations. The deviations of $E_\mathrm{VMC}$ and $\sigma_\mathrm{VMC}$ from the planewave values reflect the errors in the orbitals, their gradients and Laplacian. Smoothing B-splines are more accurate than interpolating splines, which in turn are more accurate than Lagrange interpolation. Furthermore, separately approximating the Laplacian in the spline approximations significantly improves the accuracy of $E_\mathrm{VMC}$ and $\sigma_\mathrm{VMC}$. The standard deviation of the local energy $\sigma_\mathrm{VMC}$ is more sensitive than the total energy $E_\mathrm{VMC}$ to the errors in the approximations because errors in the local energies partially cancel when averaging the local energy to obtain $E_\mathrm{VMC}$. In all systems tested, convergence of $E_\mathrm{VMC}$ to within 1~mHa is observed for planewave cutoff energies 9-25~Ha smaller than for the convergence of $\sigma_\mathrm{VMC}$ to the same level. 
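The relative mean absolute error used throughout this comparison is straightforward to compute. The sketch below, with made-up sample values, illustrates the quantity plotted in Figure~\ref{fig:orbdorbddorb_error}; it is not taken from the codes used in this work.

```python
import numpy as np

def relative_mae(approx, planewave):
    """Mean absolute error of the approximation divided by the
    mean absolute value of the reference planewave quantity."""
    return np.mean(np.abs(approx - planewave)) / np.mean(np.abs(planewave))

# made-up orbital values at a few sampled points
pw = np.array([1.0, -2.0, 0.5, 1.5])   # planewave-sum reference
ap = np.array([1.1, -1.9, 0.4, 1.6])   # polynomial approximation
err = relative_mae(ap, pw)             # 0.1 / 1.25 = 0.08
```

The same formula applies component-wise to the gradient and to the Laplacian when those are approximated separately.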
\subsection{Speedup} \label{sec:speed} Figure~\ref{fig:speed} illustrates that the three methods of polynomial approximation speed up the planewave calculation by the same factor which scales as $O(N)$. Tests on three different computer platforms (3.0 GHz Intel Pentium 4, 2.4 GHz Intel Xeon, 900 MHz Intel Itanium 2) show that the time scaling (in seconds) with the number of atoms $N$ for the approximating polynomials is of the order of $10^{-4}\,N^2 + 10^{-6}\,N^3$ compared to the scaling for planewaves of $10^{-3}\,N^3$. The difference in computational time between the Lagrange polynomials and the B-splines is less than 10\% and varies between the different computers. While pp-splines store eight coefficients at each grid point, and Lagrange interpolation and B-splines store just one, all methods require accessing the same number of coefficients (64 for cubic polynomials in 3D) from memory for each function evaluation. However, Lagrange and B-splines access one coefficient from each of the 64 nearest-neighbor points whereas pp-splines access eight coefficients from each of the eight nearest-neighbor points. This reduction in accessed neighbor points could make pp-splines faster since the data access is more local. However, the calculations show that, for the implementation of pp-splines used here\cite{PrincetonSpline}, no speedup occurs in practice. Additionally, further optimization of the smoothing B-splines routines has reduced the time scaling prefactor by an order of magnitude. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{speed_tMC_Natoms_Si_diamond.eps} \end{center} \caption{(Color online) VMC time per Monte Carlo step versus number of atoms. All three approximations speed up the planewave calculation by a factor of $O(N)$ with nearly the same prefactor, recommending all approximation methods equally {\em on the basis of speed}. 
Further optimization of the smoothing B-splines routines has reduced the time scaling prefactor by an order of magnitude from the data shown here. The similarity in form of the approximation methods (see Eqs.~(\ref{eq:lagrangeinterpolation3}), (\ref{eq:pp-splineinterpolation3}), and (\ref{eq:b-splineapproximation})) and the fact that all methods need to retrieve a similar number of coefficients from memory account for the similarity in evaluation speed despite differences in approximation properties. } \label{fig:speed} \end{figure} \subsection{Memory} \label{sec:memory} At the natural grid spacing, the polynomial approximations store a total number of values equal to or greater than the number of planewaves. \footnote{If the planewave expansion includes only planewaves that lie within a parallelepiped defined by three reciprocal lattice vectors, then the number of grid points at natural spacing (see Eq.~(\ref{eq:natural_spacing})) equals the number of planewave coefficients. However, it is customary to include planewaves in a sphere up to some energy cutoff in the planewave sum, in which case the number of grid points is larger than the number of planewave coefficients. For example, in a cubic lattice, the ratio of the number of grid points at natural grid spacing to the number of planewaves is the ratio of the volume of a cube to the volume of the inscribed sphere, namely $6/\pi$. } Trivariate, cubic pp-splines store eight values per grid point for each function, namely the function values, the three second derivatives along each direction, three mixed fourth-order derivatives, and one mixed sixth-order derivative (see Ref.~\cite{PrincetonSpline} or Eq.~(\ref{eq:pp-splinecoefficients3}) for details). Lagrange interpolation and B-splines store only one value per grid point for each function.
In the case of Lagrange interpolation, the stored values are the function values, whereas, for B-splines, the stored values are the derived B-spline coefficients (see Appendix~\ref{sec:explicitforms}). All the approximations can obtain the gradient and the Laplacian either by taking appropriate derivatives of the splined functions or by generating separate approximations for the gradient and the Laplacian. Separate approximations for the Laplacian increase the memory requirement by a factor of two, and separate approximations for both the Laplacian and the gradient increase the memory requirement by a factor of five. Since the gradient and Laplacian of the Lagrange interpolation are not continuous, we always use separate approximations for the Laplacian and the gradient when using Lagrange interpolation. For splines, the increased accuracy achieved by using separate approximations warrants using a separate approximation for the Laplacian but not for the gradient. \section{Conclusions} \label{sec:conclusions} The four polynomial approximation methods -- interpolating Lagrange polynomials, interpolating pp-splines, interpolating B-splines, and smoothing B-splines -- speed up planewave-based quantum Monte Carlo (QMC) calculations by $O(N)$, where $N$ is the number of atoms in the system. At natural grid spacing, smoothing B-splines are more accurate than interpolating splines, which are in turn more accurate than Lagrange interpolation for all planewave cutoff values tested. Separately approximating the Laplacian of the orbitals brings the total energy and the root-mean-square fluctuation of the local energy closest to the values obtained using the planewave sum. High accuracy and low memory requirements make smoothing B-splines, with the Laplacian splined separately from the orbitals, the best choice for approximating planewave-based orbitals in QMC calculations.
\section{Acknowledgments} This work was supported by the Department of Energy Basic Energy Sciences, Division of Materials Sciences (DE-FG02-99ER45795 and DE-FG05-08OR23339), and the National Science Foundation (CHE-1112097 and DMR-1056587). Computational resources were provided by the Ohio Supercomputing Center, the National Center for Supercomputing Applications, and the National Energy Research Scientific Computing Center (supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231). We thank Neil Drummond, Ken Esler, Jeongnim Kim, Mike Towler, and Andrew Williamson for helpful discussions, Ken Esler for recommending and helping with implementation of his Einspline library, and Jos{\'e} Lu{\'i}s Martins for the use of his pseudopotential generation and density functional programs. \bibliographystyle{apsrev}
\section{Introduction}\label{introduction} \IEEEPARstart{I}{n} the era of big data, distributed machine learning (DML) is increasingly applied in various areas of our daily lives, especially with the proliferation of training data. Typical applications of DML include machine-aided prescription \cite{fredrikson2014privacy}, natural language processing \cite{le2014distributed}, and recommender systems \cite{wang2015collaborative}, to name a few. Compared with the traditional single-machine model, DML is more competent for large-scale learning tasks due to its scalability and robustness to faults. The alternating direction method of multipliers (ADMM), a commonly used parallel computing approach in the optimization community, is a simple but efficient algorithm for multiple servers to collaboratively solve learning problems \cite{boyd2011distributed}. Our DML framework also uses ADMM as the underlying algorithm. However, privacy is a significant issue that has to be considered in DML. In many machine learning tasks, users' data for training the prediction model contains sensitive information, such as genotypes, salaries, and political orientations. For example, if we adopt DML methods to predict HIV-1 infection \cite{qi2010semi}, the data used for protein-protein interaction identification mainly includes patients' information about their proteins, labels indicating whether they are HIV-1 infected or not, and other kinds of health data. Such information, especially the labels, is extremely sensitive for the patients. Moreover, there exist potential risks of privacy disclosure. On one hand, when users report their data to servers, illegal parties can eavesdrop on the data transmission processes or penetrate the servers to steal the reported data. On the other hand, the communicated information between servers, which is required to train a common prediction model, can also disclose users' private data.
If these disclosure risks are not properly controlled, users may refuse to contribute their data to servers even though DML could bring convenience to them. Various privacy-preserving solutions have been proposed in the literature. Differential privacy (DP) \cite{dwork2008differential} is one of the standard non-cryptographic approaches and has been applied in distributed computing scenarios \cite{wang2019privacy, nozari2017differentially, dpc2012, wang2018differentially}. Other schemes, which are not DP-preserving, can be found in \cite{mo2017privacy, manitara2013privacy, he2019consensus}. In addition, privacy-aware machine learning problems \cite{chaudhuri2011differentially, zhang2017dynamic, ding2019optimal, gade2018privacy} have attracted a lot of attention, and many researchers have proposed ADMM-based solutions \cite{lee2015maximum, zhang2018improving, zhang2019admm}. However, there exists an underlying assumption in most privacy-aware schemes that the data contributors trust the servers collecting their data. This trust assumption may lead to privacy disclosure in many cases. For instance, when the server is penetrated by an adversary, the information obtained by the adversary may be the users' original private data. Moreover, most existing schemes provide the same privacy guarantee for the entire data sample of a user, though different data pieces are likely to have distinct sensitive levels. In the example of HIV-1 infection prediction \cite{qi2010semi} mentioned above, it is obvious that the label indicating HIV-1 infected or uninfected is more sensitive than other health data. Thus, the data pieces with higher sensitive levels should obtain stronger protection. On the other hand, as claimed in \cite{wang2019privacy}, different servers present diverse trust degrees to users due to their distinct permissions to users' data. The servers having no direct connection with a user, compared with the server collecting his/her data, may be less trustworthy.
Here, the user would require that the less trustworthy servers obtain his/her information under stronger privacy preservation. Therefore, we investigate a privacy-aware DML framework that preserves heterogeneous privacy, where users' data pieces with distinct sensitive levels obtain different privacy guarantees against servers of diverse trust degrees. One challenging issue is to reduce the accumulation of privacy losses over ADMM iterations as much as possible, especially for the privacy guarantee of the most sensitive data pieces. Most existing ADMM-based private DML frameworks preserve privacy by perturbing the intermediate results shared by servers. Since each intermediate result is computed with users' original data, its release will disclose part of the private information, implying that the privacy loss may increase as iterations proceed. Moreover, these private DML frameworks only provide the same privacy guarantee for all data pieces. In addition to intermediate information perturbation, original data randomization methods can be combined to provide heterogeneous privacy protection. However, such an approach introduces coupled uncertainties into the classification model, and the lack of uncertainty decoupling methods makes performance quantification a challenging task. In this paper, we propose a privacy-preserving distributed machine learning (PDML) framework to address these challenging issues. After removing the assumption of trustworthy servers, we incorporate the users' data reporting into the DML process, which forms a two-phase training scheme together with the distributed computing process. For privacy preservation, we adopt different approaches in the two phases. In Phase~1, a user first leverages a local randomization approach to obfuscate the most sensitive data pieces and sends the randomized version to a server. This technique provides the user with a self-controlled privacy guarantee for the most sensitive information.
Further, in Phase~2, multiple servers collaboratively train a common prediction model, where they use a combined noise-adding method to perturb the communicated messages, which preserves privacy for users' less sensitive data pieces. Such perturbation also strengthens the privacy preservation of the data pieces with the highest sensitive level. For the performance of the PDML framework, we analyze the generalization error of the current classifiers trained by different servers. The main contributions of this paper are threefold: \begin{enumerate} \item A two-phase PDML framework is proposed to provide heterogeneous privacy protection in DML, where users' data pieces obtain different privacy guarantees depending on their sensitive levels and the servers' trust degrees. \item In Phase~1, we design a local randomization approach, which preserves DP for the users' most sensitive information. In Phase~2, a combined noise-adding method is devised to compensate the privacy protection of the other data pieces. \item The convergence property of the proposed ADMM-based privacy-aware algorithm is analyzed. We also give a theoretical bound on the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \end{enumerate} The remainder of this paper is organized as follows. Related works are discussed in Section~\ref{related_works}. We provide some preliminaries and formulate the problem in Section~\ref{pro_formulation}. Section~\ref{pp_framework} presents the designed privacy-preserving framework, and its performance is analyzed in Section~\ref{performance_ana}. To validate the classification performance, we conduct experiments on multiple real-world datasets in Section~\ref{evaluation}. Finally, Section~\ref{conclusion} concludes the paper. A preliminary version \cite{wang2019differential} of this paper was accepted for presentation at IEEE CDC 2019.
This paper contains a different privacy-preserving approach with a fully distributed ADMM setting, full proofs of the main results, and more experimental results. \section{Related Works} \label{related_works} As one of the important applications of distributed optimization, DML has received widespread attention from researchers. Besides ADMM schemes, many distributed approaches have been proposed in the literature, e.g., subgradient descent methods \cite{nedic2009distributed}, local message-passing algorithms \cite{predd2009collaborative}, adaptive diffusion mechanisms \cite{chen2012diffusion}, and dual averaging approaches \cite{duchi2011dual}. Compared with these approaches, ADMM schemes achieve faster empirical convergence \cite{shi2014linear}, making them more suitable for large-scale DML tasks. For privacy-preserving problems, cryptographic techniques \cite{biham2012differential, brakerski2014efficient, shoukry2016privacy} are often used to protect information from being inferred when the key is unknown. In particular, homomorphic encryption methods \cite{brakerski2014efficient}, \cite{shoukry2016privacy} allow untrustworthy servers to compute with encrypted data, and this approach has been applied in an ADMM scheme \cite{zhang2019admm}. Nevertheless, such schemes unavoidably bring extra computation and communication overhead. Another commonly used approach to preserving privacy is random value perturbation \cite{dwork2008differential}, \cite{erlingsson2014rappor}, \cite{xu2012building}. DP has been increasingly acknowledged as the de facto criterion for non-encryption-based data privacy. This approach requires lower costs but still provides a strong privacy guarantee, though there exist tradeoffs between privacy and performance \cite{wang2019privacy}. In recent years, random value perturbation-based approaches have been widely used to address privacy protection in distributed computing, especially in consensus problems \cite{francesco2019lectures}.
For instance, \cite{wang2019privacy, nozari2017differentially, dpc2012}, \cite{mo2017privacy, manitara2013privacy, he2019consensus} provide privacy-preserving average consensus paradigms, where the mechanisms in \cite{wang2019privacy, nozari2017differentially, dpc2012} provide DP guarantee. Moreover, for a maximum consensus algorithm, \cite{wang2018differentially} gives a differentially private mechanism. Since these solutions mainly focus on simple statistical analysis (e.g., computation of average and maximum elements), there may exist difficulties in directly applying them to DML. Privacy-preserving machine learning problems have also attracted a lot of attention recently. Under centralized scenarios, Chaudhuri et al. \cite{chaudhuri2011differentially} proposed a DP solution for an empirical risk minimization problem by perturbing the objective function with well-designed noise. For privacy-aware DML, Han et al. \cite{han2016differentially} also gave a differentially private mechanism, where the underlying distributed approach is subgradient descent. The works \cite{zhang2017dynamic} and \cite{ding2019optimal} present dynamic DP schemes for ADMM-based DML, where privacy guarantee is provided in each iteration. However, if a privacy violator uses the published information in all iterations to make inference, there will be no privacy guarantee. In addition, an obfuscated stochastic gradient method via correlated perturbations was proposed in \cite{gade2018privacy}, though it cannot provide DP preservation. Different from these works, in this paper we remove the trustworthy servers assumption. Moreover, we take into consideration the distinct sensitive levels of data pieces and the diverse trust degrees of servers, and propose the PDML framework providing heterogeneous privacy preservation. \section{Preliminaries and Problem Statement} \label{pro_formulation} In this section, we introduce the overall computation framework of DML and the ADMM algorithm used there. 
Moreover, the privacy-preserving problem for the framework is formulated with the definition of local differential privacy. \subsection{System Setting} We consider a collaborative DML framework to carry out classification problems based on data collected from a large number of users. Fig.~\ref{framework} gives a schematic diagram. There are two parties involved: users (or data contributors) and computing servers. The DML's goal is to train a classification model based on the data of all users. The framework has two phases, data collection and distributed computation, called Phase~1 and Phase~2, respectively. In Phase~1, each user sends his/her data to the server responsible for collecting all the data from the user's group. In Phase~2, each computing server utilizes a distributed computing approach to cooperatively train the classifier through information interaction with other servers. The proposed DML framework is based on the one in \cite{wang2019privacy}, but the learning tasks are much more complex than the basic statistical analysis considered in \cite{wang2019privacy}. \textbf{Network Model}. Consider $n\geq2$ computing servers participating in the framework, where the $i$th server is denoted by $s_i$. We use an undirected and connected graph $\mathcal{G}=(\mathcal{S}, \mathcal{E})$ to describe the underlying communication topology, where $\mathcal{S}=\{s_i\;| \;i=1, 2, \ldots, n\}$ is the set of servers and $\mathcal{E}\subseteq\mathcal{S}\times\mathcal{S}$ is the set of communication links between servers. The number of communication links in $\mathcal{G}$ is denoted by $E$, i.e., $E=|\mathcal{E}|$. Let the set of neighbor servers of $s_i$ be $\mathcal{N}_i=\{s_l\in \mathcal{S}\;| \;(s_i, s_l)\in \mathcal{E}\}$. The degree of server $s_i$ is denoted by $N_i=|\mathcal{N}_i|$. Different servers collect data from different groups of users, and thus all users can be divided into $n$ distinct groups.
The $i$th group of users, whose data is collected by server $s_i$, is denoted by the set $\mathcal{U}_i$, and $m_i=|\mathcal{U}_i|$ is the number of users in $\mathcal{U}_i$. Each user $j\in \mathcal{U}_i$ has a data sample $\mathbf{d}_{i, j}=(\mathbf{x}_{i, j}, y_{i, j})\in \mathcal{X}\times\mathcal{Y} \subseteq \mathbb{R}^{d+1}$, which is composed of a feature vector $\mathbf{x}_{i, j}\in\mathcal{X}\subseteq\mathbb{R}^d$ and the corresponding label $y_{i, j}\in\mathcal{Y} \subseteq \mathbb{R}$. In this paper, we consider a binary-classification problem. That is, there are two types of labels as $y_{i, j}\in\{-1,1\}$. Suppose that all data samples $\mathbf{d}_{i, j}, \forall i,j$, are drawn from an underlying distribution $\mathcal{P}$, which is unknown to the servers. Here, the learning goal is that the classifier trained with limited data samples can match the ideal model trained with known $\mathcal{P}$ as much as possible. \begin{figure} \centering \includegraphics[scale=0.5]{figure/framework_new.pdf} \caption{{\small Illustration of the DML framework}}\label{framework} \end{figure} \subsection{Classification Problem and ADMM Algorithm} \label{admm_alg} We first introduce the classification problem solved by the two-phase DML framework. Let $\mathbf{w}: \mathcal{X}\rightarrow\mathcal{Y}$ be the trained classification model. The trained classifier $\mathbf{w}$ should guarantee that the accuracy of mapping any feature vector $\mathbf{x}_{i, j}$ (sampled from the distribution $\mathcal{P}$) to its correct label $y_{i, j}$ is high. We employ the method of regularized empirical risk minimization, which is a commonly used approach to find an appropriate classifier \cite{vapnik2013nature}. Denote the classifier trained by server $s_i$ as $\mathbf{w}_i\in\mathbb{R}^d$. 
The objective function (or the empirical risk) of the minimization problem is defined as \begin{equation}\label{original_objective} \small J(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^n\left[\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i)\right], \end{equation} where $\ell: \mathbb{R}\times\mathbb{R}\rightarrow \mathbb{R}$ is the loss function measuring the performance of the trained classifier $\mathbf{w}_i$. The regularizer $N(\mathbf{w}_i)$ is introduced to mitigate overfitting, and $a>0$ is a constant. We take a bounded classifier class $\mathcal{W}\subset\mathbb{R}^d$ such that $\mathbf{w}_i\in\mathcal{W}, \forall i$. For the loss function $\ell(\cdot)$ and the regularizer $N(\cdot)$, we introduce the following assumptions \cite{chaudhuri2011differentially}, \cite{zhang2017dynamic}. \begin{assumption}\label{loss_assumption} The loss function $\ell(\cdot)$ is convex and twice differentiable in $\mathbf{w}$. In particular, $\ell(\cdot)$, $\frac{\partial\ell(\cdot)}{\partial \mathbf{w}}$ and $\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}$ are bounded over the class $\mathcal{W}$ as \begin{equation}\nonumber \small |\ell(\cdot)|\leq c_1, \left|\frac{\partial\ell(\cdot)}{\partial \mathbf{w}}\right|\leq c_2, \left|\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}\right|\leq c_3, \end{equation} where $c_1$, $c_2$ and $c_3$ are positive constants. Moreover, it holds that $\frac{\partial^2\ell(y,\mathbf{w}^\mathrm{T}\mathbf{x})}{\partial {\mathbf{w}}^2}=\frac{\partial^2\ell(-y,\mathbf{w}^\mathrm{T}\mathbf{x})}{\partial {\mathbf{w}}^2}$.
\end{assumption} \begin{assumption}\label{regularizer_assumption} The regularizer $N(\cdot)$ is twice differentiable and strongly convex with parameter $\kappa>0$, i.e., $\forall \mathbf{w}_1, \mathbf{w}_2 \in \mathcal{W}$, \vspace{-0.4cm} \begin{equation}\label{strongly_convex} \small N(\mathbf{w}_2)-N(\mathbf{w}_1)\geq \nabla N(\mathbf{w}_1)^\mathrm{T}(\mathbf{w}_2-\mathbf{w}_1)+\frac{\kappa}{2}\|\mathbf{w}_2-\mathbf{w}_1\|_2^2, \end{equation} \vspace{-0.2cm} where $\nabla N(\cdot)$ denotes the gradient with respect to $\mathbf{w}$. \end{assumption} We note that $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ in (\ref{original_objective}) can be separated into $n$ different parts, where each part is the objective function of the local minimization problem to be solved by each server. The objective function of server $s_i$ is \begin{equation}\label{local_objective} \small J_i(\mathbf{w}_i):=\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i). \end{equation} Since $\mathbf{w}_i$ is trained based on the data of the $i$th group of users, it may only partially reflect the data characteristics. To find a common classifier taking account of all participating users, we would place a global consensus constraint in the minimization problem, namely $\mathbf{w}_i=\mathbf{w}_l, \forall s_i, s_l\in\mathcal{S}$. However, since we use a connected graph to describe the interaction between servers, we instead utilize local consensus constraints: \begin{equation}\label{constraint} \small \mathbf{w}_i=\mathbf{z}_{il}, \quad \mathbf{w}_l=\mathbf{z}_{il}, \quad \forall (s_i, s_l)\in \mathcal{E}, \end{equation} where $\mathbf{z}_{il}\in\mathbb{R}^d$ is an auxiliary variable enforcing consensus between neighbor servers $s_i$ and $s_l$. Obviously, (\ref{constraint}) also implies global consensus. We can now write the whole regularized empirical risk minimization problem as follows \cite{forero2010consensus}.
\begin{problem} \label{problem_1} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}, \{\mathbf{z}_{i, l}\}} & \sum_{i=1}^n\left[\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i)\right] \label{minimization_pro} \\ \mathrm{s.t.} \quad & \mathbf{w}_i=\mathbf{z}_{il}, \quad \mathbf{w}_l=\mathbf{z}_{il}, \quad \forall (s_i, s_l)\in \mathcal{E}. \end{alignat} \end{small} \end{problem} Next, we establish a compact form of Problem~\ref{problem_1}. Let $\mathbf{w}:=[\mathbf{w}_1^\mathrm{T} \cdots \mathbf{w}_n^\mathrm{T}]^\mathrm{T}\in\mathbb{R}^{nd}$ and $\mathbf{z}\in\mathbb{R}^{2Ed}$ be vectors aggregating all classifiers $\mathbf{w}_i$ and auxiliary variables $\mathbf{z}_{il}$, respectively. To express all local consensus constraints in matrix form, we introduce two block matrices $A_1, A_2\in \mathbb{R}^{{2Ed}\times{nd}}$, which are partitioned into $2E\times n$ submatrices of dimension $d\times d$. For the communication link $(s_i, s_l)\in\mathcal{E}$, if $\mathbf{z}_{il}$ is the $m$th block of $\mathbf{z}$, then the $(m, i)$th submatrix of $A_1$ and the $(m, l)$th submatrix of $A_2$ are the $d\times d$ identity matrix $I_d$; otherwise, these submatrices are the $d\times d$ zero matrix $0_d$. We write $J(\mathbf{w})=\sum_{i=1}^n J_i(\mathbf{w}_i)$, $A:=[A_1^{\mathrm{T}} A_2^{\mathrm{T}}]^{\mathrm{T}}$, and $B:=[-I_{2Ed}\; -\!I_{2Ed}]^{\mathrm{T}}$. Then, Problem~\ref{problem_1} can be written in the compact form \begin{alignat}{2} \min_{\mathbf{w}, \mathbf{z}} \quad & J(\mathbf{w}) \label{minimization_pro_mat} \\ \mathrm{s.t.} \quad & A\mathbf{w}+B\mathbf{z}=0. \label{constraint_mat} \end{alignat} To solve this problem, we introduce the fully distributed ADMM algorithm from \cite{shi2014linear}.
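As a quick sketch (illustrative code, not part of the formulation itself), the block matrices $A_1$ and $A_2$ can be assembled as follows, assuming one auxiliary block of $\mathbf{z}$ per ordered pair of each communication link. The final assertion checks that any consensus point satisfies the compact constraint $A\mathbf{w}+B\mathbf{z}=0$.

```python
import numpy as np

def build_constraint_matrices(edges, n, d):
    """Build the block matrices A1, A2 of the compact constraint A w + B z = 0
    (hypothetical helper for illustration).  One d-dimensional block of z is
    allocated per ordered pair (i, l) and (l, i) of each undirected link."""
    arcs = [(i, l) for (i, l) in edges] + [(l, i) for (i, l) in edges]
    A1 = np.zeros((len(arcs) * d, n * d))
    A2 = np.zeros((len(arcs) * d, n * d))
    for m, (i, l) in enumerate(arcs):
        A1[m*d:(m+1)*d, i*d:(i+1)*d] = np.eye(d)   # encodes w_i = z_{il}
        A2[m*d:(m+1)*d, l*d:(l+1)*d] = np.eye(d)   # encodes w_l = z_{il}
    return A1, A2

# 3 servers on a path graph, d = 2
n, d = 3, 2
A1, A2 = build_constraint_matrices([(0, 1), (1, 2)], n, d)
A = np.vstack([A1, A2])
B = np.vstack([-np.eye(A1.shape[0]), -np.eye(A1.shape[0])])

# A consensus point w_i = w* with every z_{il} = w* satisfies A w + B z = 0.
w_star = np.array([1.0, -2.0])
w = np.tile(w_star, n)
z = A1 @ w                         # each auxiliary block equals w*
assert np.allclose(A @ w + B @ z, 0)
```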
The augmented Lagrange function associated with (\ref{minimization_pro_mat}) and (\ref{constraint_mat}) is given by $\mathcal{L}(\mathbf{w}, \mathbf{z}, \boldsymbol{\lambda}) := J(\mathbf{w}) +\boldsymbol{\lambda}^{\mathrm{T}}(A\mathbf{w}+B\mathbf{z})+ \frac{\beta}{2}\|A\mathbf{w}+B\mathbf{z}\|_2^2$, where $\boldsymbol{\lambda}\in\mathbb{R}^{4Ed}$ is the dual variable ($\mathbf{w}$ is correspondingly called the primal variable) and $\beta\in \mathbb{R}$ is the penalty parameter. At iteration $t+1$, the optimal auxiliary variable $\mathbf{z}$ satisfies $\nabla_{\mathbf{z}} \mathcal{L}(\mathbf{w}(t+1), \mathbf{z}(t+1), \boldsymbol{\lambda}(t))=0$. A simple manipulation then yields $B^\mathrm{T}\boldsymbol{\lambda}(t+1)=0$. Let $\boldsymbol{\lambda}=[\boldsymbol{\xi}^{\mathrm{T}} \boldsymbol{\zeta}^{\mathrm{T}}]^{\mathrm{T}}$ with $\boldsymbol{\xi}, \boldsymbol{\zeta}\in\mathbb{R}^{2Ed}$. If we initialize $\boldsymbol{\lambda}$ such that $\boldsymbol{\xi}(0)=-\boldsymbol{\zeta}(0)$, then $\boldsymbol{\xi}(t)=-\boldsymbol{\zeta}(t), \forall t\geq 0$. Thus, we can obtain the complete dual variable $\boldsymbol{\lambda}$ by solving for $\boldsymbol{\xi}$. Let \begin{equation}\nonumber \small L_{+} := \frac{1}{2}(A_1+A_2)^\mathrm{T}(A_1+A_2), L_{-} := \frac{1}{2}(A_1-A_2)^\mathrm{T}(A_1-A_2). \end{equation} Define a new dual variable $\boldsymbol{\gamma}:=(A_1-A_2)^\mathrm{T}\boldsymbol{\xi}\in\mathbb{R}^{nd}$. Through the simplification process in \cite{shi2014linear}, we obtain the fully distributed ADMM for solving Problem~\ref{problem_1}, which is composed of the following iterations: \begin{alignat}{2} \small \nabla J(\mathbf{w}(t+1)) +\boldsymbol{\gamma}(t) + \beta(L_{+}+L_{-})\mathbf{w}(t+1) - \beta L_{+}\mathbf{w}(t)& = 0, \nonumber \\ \small \boldsymbol{\gamma}(t+1) - \boldsymbol{\gamma}(t) - \beta L_{-}\mathbf{w}(t+1)& = 0.
\nonumber \end{alignat} Note that $\boldsymbol{\gamma}$ is also a compact vector of all local dual variables $\boldsymbol{\gamma}_i\in\mathbb{R}^{d}$ for $s_i\in\mathcal{S}$, i.e., $\boldsymbol{\gamma}=[\boldsymbol{\gamma}_1^\mathrm{T} \cdots \boldsymbol{\gamma}_n^\mathrm{T}]^\mathrm{T}$. The above ADMM iterations can be separated into $n$ parts, which are solved by the $n$ different servers. At iteration $t+1$, the information used by server $s_i$ to update the new primal variable $\mathbf{w}_i(t+1)$ includes the users' data $\mathbf{d}_{i,j}, \forall j$, the current classifiers $\left\{\mathbf{w}_l(t)\;|\; l\in\mathcal{N}_i\bigcup\{i\}\right\}$ and the dual variable $\boldsymbol{\gamma}_i(t)$. The local augmented Lagrange function $\mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))$ associated with the primal variable update is given by \begin{equation}\nonumber \small \begin{split} & \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}},\boldsymbol{\gamma}_i(t)) \\ & :=J_i(\mathbf{w}_i) +\boldsymbol{\gamma}_i^{\mathrm{T}}(t)\mathbf{w}_i +\beta\sum_{l\in\mathcal{N}_i}\left\|\mathbf{w}_i-\frac{1}{2}(\mathbf{w}_i(t)+\mathbf{w}_l(t))\right\|_2^2. \end{split} \end{equation} At each iteration, server $s_i$ updates its primal variable $\mathbf{w}_i(t+1)$ and dual variable $\boldsymbol{\gamma}_i(t+1)$ as follows: \begin{alignat}{2} \small \mathbf{w}_i(t+1)& = \arg\min_{\mathbf{w}_i} \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t)), \label{new_primal_local} \\ \small \boldsymbol{\gamma}_i(t+1)& = \boldsymbol{\gamma}_i(t) + \beta \sum_{l\in\mathcal{N}_i}\left(\mathbf{w}_i(t+1) - \mathbf{w}_l(t+1)\right). \label{new_dual_local} \end{alignat} Clearly, in (\ref{new_primal_local}) and (\ref{new_dual_local}), the information communicated between computing servers consists of the newly updated classifiers.
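To make the local updates (\ref{new_primal_local}) and (\ref{new_dual_local}) concrete, the following sketch runs them on a toy instance in which each local objective is the quadratic $J_i(\mathbf{w}_i)=\frac{1}{2}\|\mathbf{w}_i-\mathbf{b}_i\|_2^2$, a hypothetical stand-in for the regularized empirical risk chosen so that the primal update, which solves $\nabla\mathcal{L}_i=0$, has a closed form; the graph and data are illustrative only.

```python
import numpy as np

# Toy instance: server i holds J_i(w) = 0.5 * ||w - b_i||^2 (a stand-in
# for the regularized empirical risk).  Solving grad L_i = 0 gives
# (1 + 2*beta*|N_i|) w = b_i - gamma_i + beta * sum_l (w_i(t) + w_l(t)).
n, d, beta, T = 3, 2, 1.0, 1000
neighbors = {0: [1], 1: [0, 2], 2: [1]}                # path graph
b = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])

w = np.zeros((n, d))
gamma = np.zeros((n, d))                               # local dual variables
for t in range(T):
    w_prev = w.copy()
    for i in range(n):                                 # primal update
        Ni = neighbors[i]
        rhs = b[i] - gamma[i] + beta * sum(w_prev[i] + w_prev[l] for l in Ni)
        w[i] = rhs / (1.0 + 2.0 * beta * len(Ni))
    for i in range(n):                                 # dual update
        gamma[i] += beta * sum(w[i] - w[l] for l in neighbors[i])

# All servers reach consensus on the minimizer of sum_i J_i, i.e. mean(b).
assert np.allclose(w, b.mean(axis=0), atol=1e-3)
```

Note that the dual updates are antisymmetric across each link, so $\sum_i \boldsymbol{\gamma}_i$ stays at zero throughout; at a fixed point this forces $\sum_i \nabla J_i(\mathbf{w}^\ast)=0$, i.e., consensus on the global minimizer.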
\subsection{Privacy-preserving Problem} In this subsection, we introduce the privacy-preserving problem in the DML framework. We first define the private information to be preserved, and then introduce the privacy violators and the information used for privacy inference. Further, we present the objectives of the two phases. \textbf{Private information}. For users, both the feature vectors and the labels of the data samples contain sensitive information. The private information contained in the feature vectors may include the ID, gender, general health data and so on. The labels, however, may indicate, for example, whether a patient has contracted a disease (e.g., HIV-1 infection) or whether a user has a special identity (e.g., membership of a certain group). Thus, compared with the feature vectors, the labels may be more sensitive for the users. In this paper, we consider the labels of users' data to be the most sensitive information, which should be protected with priority and receive a stronger privacy guarantee than the feature vectors. \textbf{Privacy attacks}. All computing servers are viewed as untrustworthy potential privacy violators desiring to infer the sensitive information contained in users' data. At the same time, different servers enjoy different degrees of trust from the users. User $j\in\mathcal{U}_i$ divides the potential privacy violators into two types. The server $s_i$, collecting user $j$'s data directly, is the first type. The other servers $s_l\in\mathcal{S}, s_l\neq s_i$, having no direct connection with user $j$, are the second type. Compared with server $s_i$, the other servers are less trustworthy for user $j$. To conduct privacy inference, the first type of privacy violator leverages user $j$'s reported data, while the second type can utilize only the intermediate information shared by the servers. \textbf{Privacy protections in Phases 1 \& 2}.
Since the label of user $j\in\mathcal{U}_i$ is the most sensitive information, its original value should not be disclosed to any server, including server $s_i$. Thus, during the data reporting process in Phase~1, user $j$ must obfuscate the private label on his/her local device. For the less sensitive feature vector, considering that server $s_i$ is more trustworthy, user $j$ can choose to transmit the original version to that server. Nevertheless, the user is still unwilling to disclose the raw feature vector to servers with lower trust degrees. Hence, in this paper, when server $s_i$ interacts with other servers to find a common classifier in Phase~2, the released information about user $j$'s data will be further processed before communication. More specifically, in Phase 1, to obfuscate the labels, we use a local randomization approach, whose privacy-preserving property will be measured by local differential privacy (LDP) \cite{erlingsson2014rappor}. LDP is developed from differential privacy (DP), which was originally defined for trustworthy databases publishing aggregated private information \cite{dwork2008differential}. The privacy preservation idea of DP is that for any two neighbor databases differing in one record (e.g., one user chooses to report or not to report his/her data to the server) as input, a randomized mechanism is adopted to guarantee that the corresponding outputs are highly similar, so that privacy violators cannot identify the differing record with high confidence. Since there is no trusted server for data collection in our setting, users locally perturb their original labels and report the noisy versions to the servers. To this end, we define a randomized mechanism $M: \mathbb{R}^{d+1}\rightarrow \mathbb{R}^{d+1}$, which takes a data sample as input and outputs its noisy version. The definition of LDP is given as follows. \begin{definition}\label{LDP_def} ($\epsilon$-LDP).
Given $\epsilon>0$, a randomized mechanism $M(\cdot)$ preserves $\epsilon$-LDP if for any two data samples $\mathbf{d}_1=(\mathbf{x}_1, y_1)$ and $\mathbf{d}_2=(\mathbf{x}_2, y_2)$ satisfying $\mathbf{x}_2=\mathbf{x}_1$ and $y_2=-y_1$, and any observation set $\mathcal{O}\subseteq \textrm{Range}(M)$, it holds that \begin{equation}\label{eq_ldp} \small \Pr[M(\mathbf{d}_1)\in \mathcal{O}] \leq e^{\epsilon}\Pr[M(\mathbf{d}_2)\in \mathcal{O}]. \end{equation} \end{definition} In (\ref{eq_ldp}), the parameter $\epsilon$ is called the privacy preserving degree (PPD), which quantifies the strength of the privacy guarantee of $M(\cdot)$. A smaller $\epsilon$ implies a stronger privacy guarantee, since it means that the output distributions of $M(\mathbf{d}_1)$ and $M(\mathbf{d}_2)$ are closer, making it more difficult for privacy violators to infer the difference between $\mathbf{d}_1$ and $\mathbf{d}_2$ (i.e., between $y_1$ and $y_2$). \subsection{System Overview} In this paper, we propose the PDML framework, where users obtain heterogeneous privacy protection. The heterogeneity is characterized by two aspects: i) When a user faces a privacy violator, his/her data pieces with distinct sensitivity levels (i.e., the feature vector and the label) obtain different privacy guarantees; ii) for one type of private data piece, the privacy protection provided by the framework is stronger against privacy violators with low trust degrees than against those with higher trust degrees. In particular, in our approach, the privacy preservation strength of users' labels is controlled by the users themselves. Moreover, a modified ADMM algorithm is proposed to meet the heterogeneous privacy protection requirement. The workflow of the proposed PDML framework is illustrated in Fig. \ref{workflow}. Some details are explained below. \begin{enumerate} \item In Phase 1, a user first appropriately randomizes the private label, and then sends the noisy label and the original feature vector to a computing server.
The randomization approach used here determines the PPD of the label. \item In Phase 2, multiple computing servers collaboratively train a common classifier based on their collected data. To protect the privacy of feature vectors against less trustworthy servers, we further use a combined noise-adding method to perturb the ADMM algorithm, which also strengthens the privacy guarantee of users' labels. \item The performance of the trained classifiers is analyzed in terms of their generalization errors. To decompose the effects of the uncertainties introduced in the two phases, we modify the loss function in Problem~\ref{problem_1}. We finally quantify the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.5]{figure/workflow_new.pdf} \caption{{\small Workflow of the PDML framework}}\label{workflow} \end{figure} \section{Privacy-Preserving Framework Design} \label{pp_framework} In this section, we introduce the privacy-preserving approaches used in Phases 1 and 2, and analyze their properties. \subsection{Privacy-Preserving Approach in Phase 1} In this subsection, we propose an approach for Phase~1 that provides privacy preservation for the most sensitive labels. In particular, this protection is controlled by the users and will not be weakened in Phase~2. We adopt the idea of randomized response (RR) \cite{erlingsson2014rappor} to obfuscate the users' labels. Originally, RR was used to provide plausible deniability for respondents answering survey questions on sensitive topics (e.g., HIV-1 infected or uninfected). Under RR, respondents answer questions according to their true situations only with a certain probability, making the server unable to determine with certainty whether the reported answers are true. In our setting, user $j\in\mathcal{U}_i$ randomizes the label through RR and sends the noisy version to server $s_i$.
This is done by the randomized mechanism $M$ defined below. \begin{definition}\label{randomized_M} For $p\in(0,\frac{1}{2})$, the randomized mechanism $M$ with input data sample $\mathbf{d}_{i, j}=(\mathbf{x}_{i, j}, y_{i, j})$ is given by $M(\mathbf{d}_{i, j})=(\mathbf{x}_{i, j}, y'_{i, j})$, where \begin{equation} \label{randomization} \small y'_{i, j}= \begin{cases} 1, &\text{with probability $p$} \\ -1, &\text{with probability $p$} \\ y_{i, j}, &\text{with probability $1-2p$}. \end{cases} \end{equation} \end{definition} In the above definition, $p$ is the randomization probability controlling the level of data obfuscation. Obviously, a larger $p$ implies higher uncertainty in the reported label, making it harder for the server to learn the true label. Denote the output $M(\mathbf{d}_{i, j})$ as $\mathbf{d}'_{i, j}$, i.e., $\mathbf{d}'_{i, j}=M(\mathbf{d}_{i, j})=(\mathbf{x}_{i, j}, y'_{i, j})$. After the randomization, $\mathbf{d}'_{i, j}$ will be transmitted to the server. In this case, server $s_i$ can use only $\mathbf{d}'_{i, j}$ to train the classifier, and the released information about the true label $y_{i, j}$ in Phase~2 is computed based on $\mathbf{d}'_{i, j}$. This implies that once $\mathbf{d}'_{i, j}$ is reported to the server, no more information about the true label $y_{i, j}$ will be released. In this paper, we set the randomization probability $p$ in (\ref{randomization}) as \begin{equation}\label{eq_p} \small p=\frac{1}{1+e^\epsilon}, \end{equation} where $\epsilon>0$. The following proposition gives the privacy-preserving property of the randomized mechanism in Definition~\ref{randomized_M}, justifying this choice of $p$ from the viewpoint of LDP. \begin{proposition}\label{privacy_preservation} Under (\ref{eq_p}), the randomized mechanism $M(\mathbf{d}_{i, j})$ preserves $\epsilon$-LDP for $\mathbf{d}_{i, j}$. \end{proposition} The proof can be found in Appendix \ref{proof_p1}.
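As an illustrative sketch (not part of the protocol), the mechanism of Definition~\ref{randomized_M} with $p=1/(1+e^{\epsilon})$ can be implemented and checked empirically: the true label survives with probability $1-p$ and is flipped with probability $p$, so the likelihood ratio $(1-p)/p=e^{\epsilon}$ matches the $\epsilon$-LDP bound.

```python
import numpy as np

def randomize_label(y, eps, rng):
    """Randomized response: output +1 w.p. p, -1 w.p. p, and the true
    label y w.p. 1 - 2p, where p = 1 / (1 + e^eps)."""
    p = 1.0 / (1.0 + np.exp(eps))
    u = rng.random()
    if u < p:
        return 1
    if u < 2 * p:
        return -1
    return y

rng = np.random.default_rng(0)
eps = 1.0
p = 1.0 / (1.0 + np.exp(eps))
N = 200_000
flipped = sum(randomize_label(1, eps, rng) == -1 for _ in range(N))

# Empirically P[y' = -y] ~ p, and (1-p)/p = e^eps gives the LDP ratio.
assert abs(flipped / N - p) < 5e-3
assert np.isclose((1 - p) / p, np.exp(eps))
```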
Proposition \ref{privacy_preservation} indicates that the users can tune the randomization probability according to their privacy demands. Specifically, given a randomization probability $p$, by (\ref{eq_p}) the PPD provided by $M(\mathbf{d}_{i, j})$ is $\epsilon=\ln \frac{1-p}{p}$. Obviously, a larger randomization probability leads to a smaller PPD, indicating a stronger privacy guarantee. If all data samples $\mathbf{d}_{i, j}, \forall i, j$, drawn from the distribution $\mathcal{P}$ are randomized through $M$, the noisy data $\mathbf{d}'_{i, j}, \forall i, j$, can be regarded as drawn from a new distribution $\mathcal{P}_{\epsilon}$, which depends on the PPD $\epsilon$. Note that $\mathcal{P}_{\epsilon}$ is also unknown since $\mathcal{P}$ is unknown. \subsection{Privacy-Preserving Approach in Phase 2} \label{ppa_2} To deal with less trustworthy servers in Phase~2, we devise a combined noise-adding approach to simultaneously preserve the privacy of users' feature vectors and enhance the privacy guarantee of users' labels. We first adopt the method of objective function perturbation \cite{chaudhuri2011differentially}. That is, before solving Problem \ref{problem_1}, the servers perturb the objective function $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ with random noise. For server $s_i\in \mathcal{S}$, the perturbed objective function is given by \begin{equation}\label{per_obj} \small \widetilde{J}_i(\mathbf{w}_i):=J_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i, \end{equation} where $J_i(\mathbf{w}_i)$ is the local objective function given in (\ref{local_objective}), and $\boldsymbol{\eta}_i\in\mathbb{R}^{d}$ is a bounded random noise with arbitrary distribution. Let $R$ be the bound of the noises $\boldsymbol{\eta}_i, \forall i$, namely, $\|\boldsymbol{\eta}_i\|_{\infty}\leq R$.
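The objective perturbation in (\ref{per_obj}) can be sketched as follows; the quadratic local objective is a hypothetical stand-in, and the uniform distribution is just one admissible choice of bounded noise.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, R = 4, 3, 0.5

# Bounded noise with arbitrary distribution; here uniform on [-R, R]^d.
eta_i = rng.uniform(-R, R, size=d)
assert np.linalg.norm(eta_i, ord=np.inf) <= R

J_i = lambda w: 0.5 * w @ w                      # toy local objective
Jt_i = lambda w: J_i(w) + eta_i @ w / n          # perturbed objective J~_i

# For this toy J_i the linear term shifts the minimizer from 0 to -eta_i/n:
# an observer of grad J~_i sees grad J_i masked by the unknown eta_i / n.
w_min = -eta_i / n
for delta in np.eye(d) * 0.1:
    assert Jt_i(w_min) <= Jt_i(w_min + delta)
    assert Jt_i(w_min) <= Jt_i(w_min - delta)
```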
Denote the sum of $\widetilde{J}_i(\mathbf{w}_i)$ as $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^{n} \widetilde{J}_i(\mathbf{w}_i)$. \textbf{Limitation of objective function perturbation}. We remark that in our setting, the objective function perturbation in (\ref{per_obj}) alone is not sufficient to provide a reliable privacy guarantee. This is because each server publishes its current classifier multiple times, and each publication utilizes users' reported data. Note that in the more centralized setting of \cite{chaudhuri2011differentially}, the classifier is published only once. More specifically, according to (\ref{new_primal_local}), $\mathbf{w}_i(t+1)$ is the solution to $\nabla \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))=0$. In this case, it holds that $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))= -\boldsymbol{\gamma}_i(t) +\beta\sum_{l\in\mathcal{N}_i}\left(\mathbf{w}_i(t)+\mathbf{w}_l(t)-2\mathbf{w}_i(t+1)\right)$. As (\ref{new_dual_local}) shows, the dual variable $\boldsymbol{\gamma}_i(t)$ can be deduced from the updated classifiers. Thus, if $s_i$'s neighbor servers have access to $\mathbf{w}_i(t+1)$ and $\left\{\mathbf{w}_l(t)\;|\;l\in\{i\}\bigcup \mathcal{N}_i\right\}$, they can easily compute $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$. We should highlight that multiple releases of $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ increase the risk of users' privacy disclosure. This can be explained as follows. First, note that $\nabla \widetilde{J}_i(\mathbf{w}_i)=\nabla J_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i$, where $\nabla J_i(\mathbf{w}_i)$ contains users' private information. The goal of the $\boldsymbol{\eta}_i$-perturbation is to prevent other servers from directly deriving $\nabla J_i(\mathbf{w}_i)$. However, after publishing an updated classifier $\mathbf{w}_i(t+1)$, server $s_i$ releases a new gradient $\nabla \widetilde{J}_i(\cdot)$.
Since the noise $\boldsymbol{\eta}_i$ is fixed for all iterations, each release of $\nabla \widetilde{J}_i(\cdot)$ discloses more information about $\nabla J_i(\cdot)$. In particular, we have $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))-\nabla \widetilde{J}_i(\mathbf{w}_i(t))=\nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))$. That is, the effect of the added noise $\boldsymbol{\eta}_i$ can be cancelled by combining the gradients of the objective function at different time instants. \textbf{Modified ADMM by primal variable perturbation}. To ensure appropriate privacy preservation in Phase~2, we adopt an extra perturbation method, which makes it harder for other servers to obtain the gradient $\nabla J_i(\cdot)$. Specifically, after deriving the classifier $\mathbf{w}_i(t)$, server $s_i$ first perturbs $\mathbf{w}_i(t)$ with a Gaussian noise $\boldsymbol{\theta}_i(t)$ whose variance decays as the iterations proceed, and then sends the noisy version of $\mathbf{w}_i(t)$ to its neighbor servers. This is denoted by $\widetilde{\mathbf{w}}_i(t):=\mathbf{w}_i(t) + \boldsymbol{\theta}_i(t)$, where $\boldsymbol{\theta}_i(t)\sim\mathcal{N}(0, \rho^{t-1}V_i^2I_d)$ with decaying rate $0<\rho<1$. The local augmented Lagrange function associated with the $\boldsymbol{\eta}_i$-perturbed objective function $\widetilde{J}_i(\mathbf{w}_i)$ in (\ref{per_obj}) is given by \begin{equation}\nonumber \small \begin{split} & \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}},\boldsymbol{\gamma}_i(t)) \\ & :=\widetilde{J}_i(\mathbf{w}_i) +\boldsymbol{\gamma}_i^{\mathrm{T}}(t)\mathbf{w}_i +\beta\sum_{l\in\mathcal{N}_i}\left\|\mathbf{w}_i-\frac{1}{2}(\widetilde{\mathbf{w}}_i(t)+\widetilde{\mathbf{w}}_l(t))\right\|_2^2.
\end{split} \end{equation} We then introduce the perturbed version of the ADMM algorithm in (\ref{new_primal_local}) and (\ref{new_dual_local}) as \begin{alignat}{2} \small \mathbf{w}_i(t+1)& = \arg\min_{\mathbf{w}_i} \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t)), \label{wi_update} \\ \small \widetilde{\mathbf{w}}_i(t+1)& = \mathbf{w}_i(t+1) + \boldsymbol{\theta}_i(t+1), \label{wi_perturb}\\ \small \boldsymbol{\gamma}_i(t+1)& = \boldsymbol{\gamma}_i(t) + \beta \sum_{l\in\mathcal{N}_i}\left(\widetilde{\mathbf{w}}_i(t+1) - \widetilde{\mathbf{w}}_l(t+1)\right). \label{gammai_update} \end{alignat} At iteration $t+1$, a new classifier $\mathbf{w}_i(t+1)$ is first obtained by solving $\nabla \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))=0$. Then, server $s_i$ sends $\widetilde{\mathbf{w}}_i(t+1)$ out and waits for the updated classifiers from its neighbor servers. At the end of the iteration, the server updates the dual variable $\boldsymbol{\gamma}_i(t+1)$. \subsection{Discussions} \label{discussion_privacy} We now discuss the effectiveness of the primal variable perturbation. We emphasize that at each iteration, $s_i$ releases only a small amount of information about $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ through the communicated $\widetilde{\mathbf{w}}_i(t+1)$. Although $\boldsymbol{\gamma}_i(t)$ and $\left\{\widetilde{\mathbf{w}}_l(t)\;|\;l\in\{i\}\bigcup \mathcal{N}_i\right\}$ are known to $s_i$'s neighbors, $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ cannot be directly computed due to the unknown $\boldsymbol{\theta}_i(t+1)$.
More specifically, observe that by (\ref{wi_update}), we have $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))= -\boldsymbol{\gamma}_i(t)+\beta\sum_{l\in\mathcal{N}_i}\left(\widetilde{\mathbf{w}}_i(t)+\widetilde{\mathbf{w}}_l(t)\right) -2\beta N_i(\widetilde{\mathbf{w}}_i(t+1) -\boldsymbol{\theta}_i(t+1))$, where $N_i$ is the degree of $s_i$. On the other hand, using the available information, other servers can compute only $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))$, i.e., the gradient with respect to the perturbed classifier $\widetilde{\mathbf{w}}_i(t+1)$. We have $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))=-\boldsymbol{\gamma}_i(t)-\beta\sum_{l\in\mathcal{N}_i}\left[2\widetilde{\mathbf{w}}_i(t+1)-(\widetilde{\mathbf{w}}_i(t)+ \widetilde{\mathbf{w}}_l(t))\right]$. Thus, we obtain $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))-\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t))= \nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))-2\beta N_i(\boldsymbol{\theta}_i(t+1)-\boldsymbol{\theta}_i(t))$. Hence, due to $\boldsymbol{\theta}_i$, combining the gradients of the objective function at different iterations no longer helps in inferring $\nabla J_i(\cdot)$. We should also observe that since $\lim_{t\rightarrow\infty} \boldsymbol{\theta}_i(t+1)=0$, $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ can be derived as $t\rightarrow\infty$. Moreover, the relation $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))-\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t))= \nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))$ holds as $t\rightarrow\infty$. However, $\nabla \widetilde{J}_i(\cdot)$ is the result of $\nabla J_i(\cdot)$ under the $\boldsymbol{\eta}_i$-perturbation. Moreover, due to the local consensus constraint (\ref{constraint}), the trained classifiers $\mathbf{w}_i(t)$ differ only slightly as $t\rightarrow\infty$.
Such limited information is not sufficient for privacy violators to infer $\nabla J_i(\cdot)$ with high confidence. \textbf{Differential privacy analysis}. We remark that in our scheme, the noise $\boldsymbol{\eta}_i$ added to the objective function provides the underlying privacy protection in Phase~2. Even if privacy violators make inferences with the published $\widetilde{\mathbf{w}}_i$ of all iterations, the disclosed information is users' reported data under extra noise perturbation. If the objective function perturbation were removed, the primal variable perturbation method could not provide a DP guarantee as $t\rightarrow\infty$. It is proved in \cite{zhang2017dynamic} and \cite{ding2019optimal} that the $\mathbf{w}_i$-perturbation in (\ref{wi_perturb}) preserves dynamic DP. According to the composition theorem of DP \cite{dwork2008differential}, the PPD will increase (indicating a weaker privacy guarantee) when other servers obtain the perturbed classifiers $\widetilde{\mathbf{w}}_i$ of multiple iterations. In particular, if the perturbed classifiers of all iterations are used for inference, the PPD will be $\infty$, implying no privacy guarantee anymore. \begin{remark} The objective function perturbation given in (\ref{per_obj}) preserves the so-called $(\epsilon_p, \delta_p)$-DP \cite{he2017differential}. Also, according to \cite{chaudhuri2011differentially}, the perturbation in (\ref{per_obj}) preserves $\epsilon_2$-DP if $\boldsymbol{\eta}_i$ has density $f(\boldsymbol{\eta}_i)=\frac{1}{\nu}e^{-\epsilon_2\|\boldsymbol{\eta}_i\|}$ with normalizing parameter $\nu$. Note that the noise with this density is unbounded, which is inconsistent with our setting. Although we use a bounded noise, this kind of perturbation still provides an $(\epsilon_p, \delta_p)$-DP guarantee, which is a relaxed form of pure $\epsilon_p$-DP. \end{remark} \textbf{Strengthened privacy guarantee}. For users' labels, the privacy guarantee in Phase~2 is stronger than that of Phase~1.
Since differential privacy is immune to post-processing \cite{dwork2008differential}, the PPD $\epsilon$ of Phase~1 will not increase during the iterations of the ADMM algorithm executed in Phase~2. However, such immunity rests on the strong assumption that there is no limit to the capability of privacy violators. In our problem, this assumption would be satisfied only if all servers had access to user $j$'s reported data $\mathbf{d}'_{i, j}$, which may not be realistic. Instead, in our problem setting, one server (i.e., server $s_i$) obtains $\mathbf{d}'_{i, j}$ while the other servers can access only the classifiers trained with users' reported data. \begin{remark} The $(\epsilon_p, \delta_p)$-DP guarantee is provided for users' feature vectors. Thus, in Phase~2, the sensitive information in those vectors is not disclosed much to the servers with lower trust degrees. The labels, in turn, obtain extra $(\epsilon_p, \delta_p)$-DP preservation in Phase~2. Since the privacy-preserving scheme in Phase~1 preserves $\epsilon$-DP for the labels, the information about them released in Phase~2 enjoys a stronger privacy guarantee under the joint effect of the $\epsilon$-DP of Phase~1 and the $(\epsilon_p, \delta_p)$-DP of Phase~2. We will investigate the joint privacy-preserving degree in future work. \end{remark} \section{Performance Analysis} \label{performance_ana} In this section, we analyze the performance of the classifiers trained by the proposed PDML framework. Note that three different uncertainties are introduced into the ADMM algorithm, and these uncertainties are coupled together. The difficulty in analyzing the performance lies in decomposing the effects of the three uncertainties and quantifying the role of each uncertainty. Further, it is also challenging to mitigate the effects of the perturbations on the trained classifiers, especially the influence of users' wrong labels.
Here, we first give the definition of generalization error as the metric for the performance of the trained classifiers. Then, we establish a modified version of the loss function $\ell(\cdot)$, which simultaneously achieves uncertainty decomposition and mitigation of the label obfuscation. We finally derive a theoretical bound for the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \subsection{Performance Metric} To measure the quality of the trained classifiers, we use the generalization error, which describes the expected error of a classifier on future predictions \cite{shalev2008svm}. Recall that users' data samples are drawn from the unknown distribution $\mathcal{P}$. The generalization error of a classifier $\mathbf{w}$ is defined as the expectation of $\mathbf{w}$'s loss function with respect to $\mathcal{P}$, i.e., $\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}} \left[\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})\right]$. Further, define the regularized generalization error by \begin{equation}\label{general_error} \small J_{\mathcal{P}}(\mathbf{w}):=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}} \left[\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})\right]+\frac{a}{n} N(\mathbf{w}). \end{equation} We denote the classifier minimizing $J_{\mathcal{P}}(\mathbf{w})$ as $\mathbf{w}^\star$, i.e., $\mathbf{w}^\star:=\arg\min_{\mathbf{w}\in\mathcal{W}} J_{\mathcal{P}}(\mathbf{w})$. We call $\mathbf{w}^\star$ the ideal optimal classifier. Here, $J_{\mathcal{P}}(\mathbf{w}^\star)$ is the reference regularized generalization error under the classifier class $\mathcal{W}$ and the loss function $\ell(\cdot)$. A trained classifier can be viewed as a good predictor if it achieves a generalization error close to $J_{\mathcal{P}}(\mathbf{w}^\star)$. Thus, as the performance metric for the classifiers, we use the difference between the generalization error of the trained classifiers and $J_{\mathcal{P}}(\mathbf{w}^\star)$.
The difference is denoted as $\Delta J_{\mathcal{P}}(\mathbf{w})$, that is, $\Delta J_{\mathcal{P}}(\mathbf{w}):=J_{\mathcal{P}}(\mathbf{w})-J_{\mathcal{P}}(\mathbf{w}^\star)$. Furthermore, to measure the performance of the classifiers trained by different servers over multiple iterations, we introduce a comprehensive metric. First, considering that the classifiers $\mathbf{w}_i$ solved by server $s_i$ at different iterations may differ until the consensus constraint (\ref{constraint}) is satisfied, we define a classifier $\overline{\mathbf{w}}_i(t)$ aggregating $\mathbf{w}_i$ over the first $t$ rounds as $\overline{\mathbf{w}}_i(t):=\frac{1}{t} \sum_{k=1}^{t} \mathbf{w}_i(k)$, where $\mathbf{w}_i(k)$ is the classifier obtained by solving (\ref{wi_update}). Moreover, due to the diversity of users' reported data, the classifiers solved by different servers may also differ (especially in the initial iterations). For this reason, we will later study the accumulated difference over the $n$ servers, that is, $\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$. \subsection{Modified Loss Function in ADMM Algorithm} \label{sub_b} To mitigate the effect of the label obfuscation executed in Phase~1, we modify the loss function $\ell(\cdot)$ in Problem~\ref{problem_1}. We use the noisy labels and the corresponding PPD $\epsilon$ of Phase~1 to adjust the loss function $\ell(\cdot)$ in (\ref{minimization_pro}). (Note that the other parts of Problem~\ref{problem_1} are not affected by the noisy labels.) Define the modified loss function $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ by \vspace{-0.3cm} \begin{equation}\label{modified_loss} \small \hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon):=\frac{e^\epsilon\ell(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})-\ell(-y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})}{e^\epsilon-1}.
\end{equation} This function has the following properties. \begin{proposition} \label{unbiased_loss} \begin{enumerate}[itemindent=0.3em, label=(\roman*),labelsep=0.3em] \item $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ is an unbiased estimate of $\ell(y_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})$, i.e., \begin{equation}\label{unbiased_estimator} \small \mathbb{E}_{y'_{i, j}}\left[\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right]=\ell(y_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}). \end{equation} \item $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ is Lipschitz continuous with Lipschitz constant \begin{equation}\label{lipschitz} \small \hat{c}_2:= \frac{e^{\epsilon}+1}{e^{\epsilon}-1}c_2, \end{equation} where $c_2$ is the bound of $\left|\frac{\partial\ell(\cdot)}{\partial \mathbf{w}_i}\right|$ given in Assumption \ref{loss_assumption}. \end{enumerate} \end{proposition} The proof can be found in Appendix~\ref{proof_l1}. Now, we make server $s_i$ use $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ in (\ref{modified_loss}) as the loss function. Thus, the objective function in (\ref{local_objective}) is replaced as follows: \begin{equation}\label{modified_objective} \small \widehat{J}_i(\mathbf{w}_i):=\sum_{j=1}^{m_i}\frac{1}{m_i}\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)+\frac{a}{n}N(\mathbf{w}_i). \end{equation} Similar to $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ in (\ref{original_objective}), we denote the objective function with the modified loss function as $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^{n} \widehat{J}_i(\mathbf{w}_i)$. Then, the following lemma holds, whose proof can be found in Appendix \ref{proof_l2}.
\begin{lemma}\label{objective_convex} If the loss function $\ell(\cdot)$ and the regularizer $N(\cdot)$ satisfy Assumptions \ref{loss_assumption} and \ref{regularizer_assumption}, respectively, then $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ is $a\kappa$-strongly convex. \end{lemma} To simplify the notation, let $\hat{\kappa}:=a\kappa$. With the objective function $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$, the whole optimization problem for finding a common classifier can be stated as follows: \begin{problem} \label{problem_2} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}} & \quad \widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}) \nonumber \\ \mathrm{s.t.}& \quad \mathbf{w}_i=\mathbf{w}_l, \forall i, l. \nonumber \end{alignat} \end{small} \end{problem} \begin{lemma} \label{modified_solution} Problem~\ref{problem_2} has an optimal solution set $\{\widehat{\mathbf{w}}_i\}_{i\in\mathcal{S}}\subset\mathcal{W}$ such that $\widehat{\mathbf{w}}_\mathrm{opt} = \widehat{\mathbf{w}}_i = \widehat{\mathbf{w}}_l, \forall i, l$. \end{lemma} Lemma~\ref{modified_solution} can be proved directly from Lemma~1 in \cite{forero2010consensus}, whose condition is satisfied by Lemma~\ref{objective_convex}. We are now ready to state the optimization problem to be solved in this paper. To this end, for the modified objective function in (\ref{modified_objective}), we define the perturbed version as in (\ref{per_obj}) by $\widetilde{J}_i(\mathbf{w}_i):=\widehat{J}_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i$. Then, the whole objective function becomes \begin{equation}\nonumber \small \widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})=\sum_{i=1}^{n} \left[ \widehat{J}_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i\right].
\end{equation} The problem for finding the classifier with randomized labels and perturbed objective functions is as follows: \begin{problem} \label{problem_3} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}} & \quad \widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}) \nonumber \\ \mathrm{s.t.}& \quad \mathbf{w}_i=\mathbf{w}_l, \forall i, l. \nonumber \end{alignat} \end{small} \end{problem} For $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$, we have the following lemma showing its convexity and smoothness properties. \begin{lemma}\label{J_tildle_convex} $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ is $\hat{\kappa}$-strongly convex. If $N(\cdot)$ satisfies $\|\nabla^2 N(\cdot)\|_2\leq \varrho$, then $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ has an $(nc_3+a\varrho)$-Lipschitz continuous gradient, where $c_3$ is the bound of $\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}$ given in Assumption~\ref{loss_assumption}. \end{lemma} The proof can be found in Appendix~\ref{proof_l4}. For simplicity, we denote the Lipschitz constant of the gradient of $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ by $\varrho_{\widetilde{J}}$, namely, $\varrho_{\widetilde{J}}:=nc_3 + a\varrho$. We now observe that Problem~\ref{problem_3} associated with the objective function $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ has an optimal solution set $\{\widetilde{\mathbf{w}}_i\}_{i\in\mathcal{S}}\subset \mathcal{W}$ where \begin{equation}\label{optimal_perturbation} \small \widetilde{\mathbf{w}}_\mathrm{opt} = \widetilde{\mathbf{w}}_i = \widetilde{\mathbf{w}}_l, \forall i, l. \end{equation} In fact, this can be shown by an argument similar to Lemma~\ref{modified_solution}, where Lemma~\ref{J_tildle_convex} establishes the convexity of the objective function (as in Lemma~\ref{objective_convex}).
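The debiasing performed by (\ref{modified_loss}) can be verified numerically. Assuming the labels are randomized by standard binary randomized response, i.e., the true label is kept with probability $e^{\epsilon}/(e^{\epsilon}+1)$ and flipped otherwise (an assumption about Phase~1 made for this illustration), averaging $\hat{\ell}$ over the noisy label recovers the clean loss exactly, as stated in Proposition~\ref{unbiased_loss}(i); the logistic loss is an illustrative stand-in for $\ell(\cdot)$:

```python
import math

def ell(y, z):
    # illustrative stand-in for the base loss l(y, w^T x): logistic loss
    return math.log(1.0 + math.exp(-y * z))

def ell_hat(y_noisy, z, eps):
    """Modified loss as in (modified_loss), debiasing a randomized-response label."""
    e = math.exp(eps)
    return (e * ell(y_noisy, z) - ell(-y_noisy, z)) / (e - 1.0)

def expected_ell_hat(y_true, z, eps):
    # exact expectation over the noisy label, assuming the true label is
    # kept with probability e^eps / (e^eps + 1) and flipped otherwise
    e = math.exp(eps)
    p_keep = e / (e + 1.0)
    return p_keep * ell_hat(y_true, z, eps) + (1.0 - p_keep) * ell_hat(-y_true, z, eps)

# the expectation coincides with the clean loss
print(expected_ell_hat(+1.0, 0.7, 1.0), ell(+1.0, 0.7))
```

The same computation also shows the price of debiasing: the factor $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ in (\ref{lipschitz}) blows up as $\epsilon\to 0$.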
\subsection{Generalization Error Analysis} In this subsection, we analyze the accumulated difference between the generalization error of trained classifiers and $J_{\mathcal{P}}(\mathbf{w}^\star)$, i.e., $\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$. For the analysis, we use the technique from \cite{li2017robust}, which considers the problem of ADMM learning in the presence of erroneous updates. Here, our problem is more complicated because besides the erroneous updates brought by primal variable perturbation, there is also uncertainty in the training data and the objective functions. All these uncertainties are coupled together, which brings extra challenges for performance analysis. We first decompose $\Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ in terms of the different uncertainties. To do so, we introduce a new regularized generalization error associated with the modified loss function $\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)$ and the noisy data distribution $\mathcal{P}_{\epsilon}$. Similar to (\ref{general_error}), for a classifier $\mathbf{w}$, it is defined by \begin{equation}\nonumber \small J_{\mathcal{P}_{\epsilon}}(\mathbf{w}) =\mathbb{E}_{(\mathbf{x},y')\sim\mathcal{P}_{\epsilon}} \left[\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)\right]+\frac{a}{n} N(\mathbf{w}). \end{equation} According to Proposition~\ref{unbiased_loss}, $\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)$ is an unbiased estimate of $\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})$. Thus, it is straightforward to obtain the following lemma, whose proof is omitted. \begin{lemma}\label{equal_error} For a classifier $\mathbf{w}$, we have $J_{\mathcal{P}_{\epsilon}}(\mathbf{w})=J_{\mathcal{P}}(\mathbf{w})$.
\end{lemma} Now, we can decompose $\Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ as follows: \begin{equation}\label{deltaJ_wi} \small \begin{split} & \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t)) =J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))-J_{\mathcal{P}}(\mathbf{w}^\star) \\ & =J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & =\widetilde{J}_i({\overline{\mathbf{w}}_i}(t))-\widetilde{J}_i(\widetilde{\mathbf{w}}_\mathrm{opt}) + \widehat{J}_i(\widetilde{\mathbf{w}}_\mathrm{opt})-\widehat{J}_i(\widehat{\mathbf{w}}_\mathrm{opt}) \\ & \quad +\widehat{J}_i(\widehat{\mathbf{w}}_\mathrm{opt})-\widehat{J}_i({\mathbf{w}^\star}) +J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t)) \\ & \quad +\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) + \boldsymbol{\eta}_i^{\mathrm{T}} (\widetilde{\mathbf{w}}_\mathrm{opt}-\overline{\mathbf{w}}_i(t)). \end{split} \end{equation} We will analyze each term on the far right-hand side of (\ref{deltaJ_wi}). The term $\widetilde{\mathbf{w}}_\mathrm{opt}-\overline{\mathbf{w}}_i(t)$ describes the difference between the classifier $\overline{\mathbf{w}}_i(t)$ and the optimal solution $\widetilde{\mathbf{w}}_\mathrm{opt}$ to Problem~\ref{problem_3}. Before analyzing this difference, we first consider the deviation between the perturbed classifier $\widetilde{\mathbf{w}}_i(t)$ and $\widetilde{\mathbf{w}}_\mathrm{opt}$, for which a bound can be obtained from \cite{li2017robust}. Here, we introduce some notation related to the bound.
Let the compact forms of vectors be $\widetilde{\mathbf{w}}(t):=[\widetilde{\mathbf{w}}_1^\mathrm{T}(t) \cdots \widetilde{\mathbf{w}}_n^\mathrm{T}(t)]^\mathrm{T}$, $\boldsymbol{\theta}(t):=[\boldsymbol{\theta}_1^\mathrm{T}(t)\cdots \boldsymbol{\theta}_n^\mathrm{T}(t)]^\mathrm{T}$, and $\boldsymbol{\eta} := [\boldsymbol{\eta}_1^\mathrm{T} \cdots \boldsymbol{\eta}_n^\mathrm{T}]^\mathrm{T}$. Also, let $\widehat{\mathbf{w}}^{*}:=[I_d \cdots I_d]^\mathrm{T}\cdot\widehat{\mathbf{w}}_\mathrm{opt}$, $\widetilde{\mathbf{w}}^{*}:=[I_d \cdots I_d]^\mathrm{T}\cdot\widetilde{\mathbf{w}}_\mathrm{opt}$, and $\overline{L}:=\frac{1}{2} (L_{+}+L_{-})$. An auxiliary sequence $\mathbf{r}(t)$ is defined as $\mathbf{r}(t) := \sum_{k=0}^{t} Q\widetilde{\mathbf{w}}(k)$ with $Q:=\bigl(\frac{L_{-}}{2}\bigr)^{\frac{1}{2}}$ \cite{makhdoumi2017convergence}. The sequence $\mathbf{r}(t)$ has an optimal value $\mathbf{r}_{\mathrm{opt}}$, which is the solution to the equation $Q\mathbf{r}_{\mathrm{opt}}+\frac{1}{2\beta} \nabla \widetilde{J}(\widetilde{\mathbf{w}}_\mathrm{opt})=0$. Further, we define some important parameters to be used in the next lemma. The first two parameters, $b\in(0,1)$ and $\lambda_1>1$, are related to the underlying network topology $\mathcal{G}$ and will be used to establish the convergence property of the perturbed ADMM algorithm. Let $\varphi := \frac{\lambda_1-1}{\lambda_1}\frac{2\hat{\kappa} \sigma_{\min}^2(Q)\sigma_{\min}^2(L_{+})}{\varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+})+ 2\hat{\kappa} \sigma_{\max}^2(L_{+})}$, where $\sigma_{\max}(\cdot)$ and $\sigma_{\min}(\cdot)$ denote the maximum and minimum nonzero eigenvalues of a matrix, respectively.
Also, we define $M_1$ and $M_2$ with constant $\lambda_2>1$ as \begin{equation}\nonumber \small \begin{split} M_1 & := \frac{b(1+\varphi) \sigma_{\min}^2(L_{+}) (1-1/{\lambda_2})}{4b\sigma_{\min}^2(L_{+}) (1-1/{\lambda_2}) + 16\sigma_{\max}^2(\overline{L})}, \\ M_2 & := \frac{(1-b) (1+\varphi)\sigma_{\min}^2(L_{+}) - \sigma_{\max}^2(L_{+})}{4\sigma_{\max}^2(L_{+})+4(1-b)\sigma_{\min}^2(L_{+})}. \end{split} \end{equation} Then, we have the following lemma from \cite{li2017robust}, which gives a bound for $\widetilde{\mathbf{w}}(t)-\widetilde{\mathbf{w}}^{*}$. \begin{lemma}\label{classifier_converge} Suppose that the conditions of Lemma \ref{J_tildle_convex} hold and that the parameters $b$ and $\lambda_1$ are chosen such that \begin{equation}\label{b_delta} \small (1-b)(1+\varphi)\sigma_{\min}^2(L_{+})-\sigma_{\max}^2(L_{+})>0. \end{equation} Take $\beta$ in (\ref{gammai_update}) as $\beta = \sqrt{\frac{\lambda_1 \lambda_3 (\lambda_4-1)\varrho_{\widetilde{J}}^2}{\lambda_4(\lambda_1-1)\sigma_{\max}^2(L_{+}) \sigma_{\min}^2(Q)}}$, where $\lambda_4:=1+\sqrt{\frac{\varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+}) +2\hat{\kappa} \sigma_{\max}^2(L_{+})}{\alpha \lambda_3 \varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+})}}$ with $0<\alpha< \min\{M_1, M_2\}$, and $\lambda_3 := 1+\frac{2\hat{\kappa} \sigma_{\max}^2(L_{+})}{\varrho_{\widetilde{J}}^2\sigma_{\min}^2(L_{+})}$.
Then, it holds that \begin{equation}\label{classifer_bound} \small \left\|\widetilde{\mathbf{w}}(t)-\widetilde{\mathbf{w}}^{*}\right\|_2^2 \leq C^{t} \left(H_1 + \sum_{k=1}^{t} C^{-k} H_2 \|\boldsymbol{\theta}(k)\|_2^2\right), \end{equation} where $C := \frac{(1+4\alpha)\sigma_{\max}^2(L_{+})}{(1-b)(1+\varphi-4\alpha)\sigma_{\min}^2(L_{+})}$, and $H_1:= \left\|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\right\|_2^2+\frac{4}{(1+4\alpha)\sigma_{\max}^2(L_{+})}\left\|\mathbf{r}(0)- \mathbf{r}_{\mathrm{opt}}\right\|_2^2$, $H_2 := \frac{b(\lambda_2 -1)}{1-b} + \frac{\frac{4\varphi \lambda_1 \sigma_{\max}^2(\overline{L})}{\sigma_{\min}^2(Q)} + \sigma_{\max}^2(L_{+}) \left(\sqrt{\varphi} + \sqrt{\frac{2(\lambda_1-1)\sigma_{\min}^2(Q)}{\alpha \lambda_1 \lambda_3\varrho_{\widetilde{J}}^2}}\right)^2}{(1-b) (1+\varphi) (1+\varphi-4\alpha)\sigma_{\min}^2(L_{+})}$. \end{lemma} Lemma~\ref{classifier_converge} implies that given a connected graph $\mathcal{G}$ and the objective function in Problem~\ref{problem_3}, if the parameters $b$ and $\lambda_1$ satisfy (\ref{b_delta}), then $C$ in (\ref{classifer_bound}) is guaranteed to be less than 1. In this case, the obtained classifiers converge to a neighborhood of the optimal solution $\widetilde{\mathbf{w}}_{\mathrm{opt}}$, where the radius of the neighborhood is $\lim_{t\rightarrow\infty} \sum_{k=1}^{t} C^{t-k} H_2 \|\boldsymbol{\theta}(k)\|_2^2$. The modified ADMM algorithm can achieve different radii depending on the added noises $\boldsymbol{\theta}(k)$. Since many parameters are involved, meeting condition (\ref{b_delta}) may not be straightforward. In order to make $C$ smaller and thus achieve a better convergence rate, in addition to tuning the parameters, one may change, for example, the graph $\mathcal{G}$ to make the value $\frac{\sigma_{\max}^2(L_{+})}{\sigma_{\min}^2(L_{+})}$ smaller.
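To get a feel for how these spectral quantities interact, one can plug sample values into the formulas for $\varphi$, $M_2$ and $C$. The numbers below are arbitrary placeholders mimicking a well-connected graph where $\sigma_{\max}^2(L_{+})\approx\sigma_{\min}^2(L_{+})$; they are not derived from any particular topology:

```python
# placeholder spectral data; none of these values come from a real graph
smax2_Lp, smin2_Lp = 1.0, 1.0   # sigma_max^2(L+), sigma_min^2(L+)
smin2_Q = 1.0                   # sigma_min^2(Q), Q = (L-/2)^(1/2)
kappa_hat = 5.0                 # strong convexity modulus of J-tilde
rho_J = 1.0                     # Lipschitz constant of its gradient
b, lam1 = 0.1, 2.0              # tuning parameters, b in (0,1), lambda_1 > 1

# phi as defined in the text
phi = ((lam1 - 1.0) / lam1) * (2.0 * kappa_hat * smin2_Q * smin2_Lp) \
      / (rho_J**2 * smin2_Lp + 2.0 * kappa_hat * smax2_Lp)

# condition (b_delta): (1-b)(1+phi) sigma_min^2(L+) - sigma_max^2(L+) > 0
cond = (1.0 - b) * (1.0 + phi) * smin2_Lp - smax2_Lp
print("condition (b_delta):", cond)

# picking alpha below M2 is exactly what makes C < 1
M2 = cond / (4.0 * smax2_Lp + 4.0 * (1.0 - b) * smin2_Lp)
alpha = 0.5 * M2

C = ((1.0 + 4.0 * alpha) * smax2_Lp) \
    / ((1.0 - b) * (1.0 + phi - 4.0 * alpha) * smin2_Lp)
print("contraction factor C:", C)
```

A short algebraic check shows that $\alpha < M_2$ is equivalent to the numerator of $C$ being smaller than its denominator, which is how condition (\ref{b_delta}) guarantees $C<1$.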
Theorem \ref{delta_Jt} to be stated below gives an upper bound of the accumulated difference $\sum_{i=1}^n \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ in the sense of expectation. In the theorem, we employ the important concept of Rademacher complexity \cite{shalev2014understanding}. It is defined on the classifier class $\mathcal{W}$ and the collected data used for training, that is, $\mathrm{Rad}_i(\mathcal{W}):=\frac{1}{m_i} \mathbb{E}_{\nu_j}\left[\sup_{\mathbf{w}\in \mathcal{W}} \sum_{j=1}^{m_i} \nu_j \mathbf{w}^\mathrm{T}\mathbf{x}_{i, j}\right]$, where $\nu_1, \nu_2, \ldots, \nu_{m_i}$ are independent random variables drawn from the Rademacher distribution, i.e., $\Pr (\nu_j=1)=\Pr (\nu_j=-1)=\frac{1}{2}$ for $j=1, 2, \ldots, m_i$. In addition, we use the notation $\|\mathbf{v}\|_{A}^2$ to denote the squared norm of a vector $\mathbf{v}$ weighted by a positive semidefinite matrix $A$, i.e., $\|\mathbf{v}\|_{A}^2=\mathbf{v}^{\mathrm{T}}A\mathbf{v}$. \begin{theorem} \label{delta_Jt} Suppose that the conditions in Lemma~\ref{classifier_converge} are satisfied and the decaying rate of the noise variance is set as $\rho\in (0,C)$.
Then, for $\epsilon>0$ and $\delta\in(0,1)$, the aggregated classifier $\overline{\mathbf{w}}_i(t)$ obtained by the privacy-aware ADMM scheme (\ref{wi_update})-(\ref{gammai_update}) satisfies, with probability at least $1-\delta$, \begin{equation}\label{eq_delta_Jt} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))\right\} \leq \frac{1}{2t}\frac{C}{1-C}\left(H_1 + \frac{\rho H_2}{C- \rho}\sum_{i=1}^{n} d V_i^2\right) \\ & + \frac{n}{2}R^2 + \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]\\ & +\frac{1}{n \hat{\kappa}}R^2 + 4 \frac{e^{\epsilon}+1}{e^{\epsilon}-1}\sum_{i=1}^n\left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right), \end{split} \end{equation} where $H_3 =\|\mathbf{r}(0)\|_2^2 + \|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2$, and the parameters $C$, $H_1$, $H_2$ and $\beta$ are given in Lemma \ref{classifier_converge}. \end{theorem} \begin{proof} In what follows, we evaluate the terms on the far right-hand side of (\ref{deltaJ_wi}) by dividing them into three groups. The first group consists of the terms $J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star)$. We can bound these from above as \begin{equation}\nonumber \small \begin{split} & J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & \leq2\max_{\mathbf{w}\in\mathcal{W}} \left|J_{\mathcal{P}_{\epsilon}}(\mathbf{w})-\widehat{J}_i({\mathbf{w}})\right|.
\end{split} \end{equation} According to Theorem 26.5 in \cite{shalev2014understanding}, with probability at least $1-\delta$, we have \begin{equation}\nonumber \small \begin{split} & \max_{\mathbf{w}\in\mathcal{W}} \left|J_{\mathcal{P}_{\epsilon}}(\mathbf{w})-\widehat{J}_i({\mathbf{w}})\right| \\ & \leq2\mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})+4\left|\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right|\sqrt{\frac{2\ln (4/\delta)}{m_i}}, \end{split} \end{equation} where $\mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})$ is the Rademacher complexity of $\mathcal{W}$ with respect to $\hat{\ell}$. Further, by the contraction lemma in \cite{shalev2014understanding}, \begin{equation}\nonumber \small \mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})\leq \hat{c}_2\mathrm{Rad}_i(\mathcal{W})=c_2\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \mathrm{Rad}_i(\mathcal{W}), \end{equation} where we have used Proposition~\ref{unbiased_loss}. Also, from (\ref{modified_loss}), we derive \begin{equation}\nonumber \small \left|\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right| \leq \frac{e^{\epsilon}+1}{e^{\epsilon}-1}c_1, \end{equation} where $c_1$ is the bound of the original loss function $\ell(\cdot)$ (Assumption \ref{loss_assumption}). Then, it follows that \begin{equation}\label{rademacher_final} \small \begin{split} & J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i({\overline{\mathbf{w}}_i}(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & \leq 4\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\left(c_2\mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{2\ln (4/\delta)}{m_i}}\right). \end{split} \end{equation} The second group in (\ref{deltaJ_wi}) consists of the terms involving $\widetilde{J}_i(\cdot)$ and $\widehat{J}_i(\cdot)$.
In their aggregated forms, by Lemma~\ref{modified_solution}, it holds that \begin{equation}\label{summation_form} \small \begin{split} & \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) + \widehat{J}(\widehat{\mathbf{w}}^{*}) - \widehat{J}({\mathbf{w}^\star}) \\ & \leq \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \\ & \leq \frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))-\widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}), \end{split} \end{equation} where the first inequality holds since $\widehat{\mathbf{w}}^{*}$ minimizes $\widehat{J}(\cdot)$ so that $\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \widehat{J}({\mathbf{w}^\star})$, and the second follows from Jensen's inequality together with the convexity of $\widetilde{J}(\cdot)$. For the first two terms in (\ref{summation_form}), by Theorem~1 of \cite{li2017robust}, we have \begin{equation}\label{taking_expectation} \small \begin{split} & \frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) \leq \frac{\beta}{t} \left(\|\mathbf{r}(0)\|_2^2 + \|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2\right) \\ & + \frac{\beta}{t} \sum_{k=1}^{t} \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} \|\boldsymbol{\theta}(k)\|_2^2 + 2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right). \end{split} \end{equation} Take the expectation on both sides of (\ref{taking_expectation}) with respect to $\boldsymbol{\theta}(k)$.
Given $\mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\|\boldsymbol{\theta}(k)\|_2^2\right\}= \sum_{i=1}^{n} d V_i^2 \rho^{k-1}$, we derive \begin{equation}\nonumber \small \begin{split} \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right\}& = \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{2 \|Q \boldsymbol{\theta}(k)\|_2^2\right\} \\ & \leq 2\sigma_{\max}^2(Q) \sum_{i=1}^{n} d V_i^2 \rho^{k-1}, \end{split} \end{equation} where we used $\mathbb{E}\left\{\boldsymbol{\theta}(k)\right\}=0$ and $\mathbb{E}_{\{\boldsymbol{\theta}(k)\}}\left\{\boldsymbol{\theta}(k-1)^{\mathrm{T}} \boldsymbol{\theta}(k)\right\}=0$. Thus, it follows that \begin{equation}\nonumber \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{ \sum_{k=1}^{t} \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} \|\boldsymbol{\theta}(k)\|_2^2 + 2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right)\right\} \\ & \leq \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\sum_{i=1}^{n} d V_i^2 \sum_{k=1}^{t} \rho^{k-1} \\ & \leq \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right) \frac{\sum_{i=1}^{n} d V_i^2}{1-\rho}. \end{split} \end{equation} Then, for (\ref{taking_expectation}), we arrive at \begin{equation}\label{exp_J_tilde} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{ \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})\right\} \\ & \leq \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]. \end{split} \end{equation} Next, we focus on the latter two terms in (\ref{summation_form}).
Due to (\ref{optimal_perturbation}), we have $\widetilde{J}(\widetilde{\mathbf{w}}^{*})\leq \widetilde{J}(\widehat{\mathbf{w}}^{*})$, which yields \begin{equation}\nonumber \small \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \frac{1}{n} \boldsymbol{\eta}^{\mathrm{T}} (\widehat{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}^{*}) \leq \frac{1}{n} \|\boldsymbol{\eta}\|\|\widetilde{\mathbf{w}}^{*}-\widehat{\mathbf{w}}^{*}\|. \end{equation} By Lemma 7 in \cite{chaudhuri2011differentially}, we obtain $\|\widetilde{\mathbf{w}}^{*}-\widehat{\mathbf{w}}^{*}\| \leq \frac{1}{n} \frac{\|\boldsymbol{\eta}\|}{\hat{\kappa}}$. It follows that \begin{equation}\label{bound_J_hat} \small \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \frac{1}{\hat{\kappa}} \frac{\|\boldsymbol{\eta}\|^2}{n^2} \leq \frac{1}{n \hat{\kappa}} R^2, \end{equation} where $R$ is the bound of the noise $\boldsymbol{\eta}_i$. Substituting (\ref{exp_J_tilde}) and (\ref{bound_J_hat}) into (\ref{summation_form}), we derive \begin{equation}\label{exp_objective} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \{\widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) + \widehat{J}(\widehat{\mathbf{w}}^{*}) - \widehat{J}({\mathbf{w}^\star}) \} \\ & \leq \frac{1}{n \hat{\kappa}} R^2 + \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]. \end{split} \end{equation} The third group in (\ref{deltaJ_wi}) is the term $\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))$.
We have \begin{equation}\nonumber \small \boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t)) = \boldsymbol{\eta}^{\mathrm{T}} \left(\widetilde{\mathbf{w}}^{*}-\frac{1}{t} \sum_{k=1}^{t}(\widetilde{\mathbf{w}}(k)-\boldsymbol{\theta}(k))\right). \end{equation} Taking the expectation with respect to $\boldsymbol{\theta}(k)$, we obtain \begin{equation}\nonumber \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))\right\} \\ & = \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}\left(\widetilde{\mathbf{w}}^{*}- \frac{1}{t} \sum_{k=1}^{t} \widetilde{\mathbf{w}}(k)\right)\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\frac{1}{2t^2} \left\|\sum_{k=1}^{t} (\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)) \right\|_2^2\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\frac{1}{2t} \sum_{k=1}^{t} \left\|\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)\right\|_2^2\right\}. \end{split} \end{equation} By Lemma \ref{classifier_converge}, we have, for each $k$, \begin{equation}\nonumber \small \left\|\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)\right\|_2^2 \leq C^{k} \left(H_1 + \sum_{l=1}^{k} C^{-l} H_2 \|\boldsymbol{\theta}(l)\|_2^2\right).
\end{equation} Then, it follows that \begin{equation}\label{exp_eta_w} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \frac{1}{2t} \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\sum_{k=1}^{t} C^{k} \left(H_1 + \sum_{l=1}^{k} C^{-l} H_2 \|\boldsymbol{\theta}(l)\|_2^2\right) \right\} \\ & \leq \frac{n}{2} R^2 + \frac{1}{2t}\frac{C}{1-C} \left(H_1 + \frac{\rho H_2}{C- \rho}\sum_{i=1}^{n} d V_i^2\right), \end{split} \end{equation} where we have used $0<\rho<C<1$. Substituting (\ref{rademacher_final}), (\ref{exp_objective}) and (\ref{exp_eta_w}) into (\ref{deltaJ_wi}), we arrive at the bound in (\ref{eq_delta_Jt}). \end{proof} Theorem \ref{delta_Jt} provides guidance for both users and servers to obtain a classification model with the desired performance. In particular, the effects of the three uncertainties on the bound of $\sum_{i=1}^n \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ have been successfully decomposed. Note that these effects are not simply superimposed but coupled together. Specifically, the terms in (\ref{eq_delta_Jt}) related to the primal variable perturbation decrease with iterations at the rate of $O\left(\frac{1}{t}\right)$. This also implies that the whole framework achieves convergence in expectation at this rate. Compared with \cite{ding2019optimal} and \cite{li2017robust}, where bounds of $\frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})$ are provided, we derive the difference between the generalization error of the aggregated classifier $\overline{\mathbf{w}}(t)$ and that of the ideal optimal classifier $\mathbf{w}^{\star}$, which is moreover given in closed form.
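To illustrate the Rademacher-complexity term in the bound, $\mathrm{Rad}_i(\mathcal{W})$ can be estimated by sampling sign vectors. For a norm-bounded linear class $\mathcal{W}=\{\mathbf{w}:\|\mathbf{w}\|_2\leq B\}$ (an assumed example class, not one fixed by the text), the supremum inside the expectation has the closed form $B\,\|\sum_j \nu_j \mathbf{x}_{i,j}\|_2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rademacher_linear(X, B, num_draws=2000):
    """Monte Carlo estimate of Rad(W) = (1/m) E_nu[ sup_{w in W} sum_j nu_j w^T x_j ]
    for the ball W = { w : ||w||_2 <= B }; the sup equals B * ||sum_j nu_j x_j||_2."""
    m = X.shape[0]
    nu = rng.choice([-1.0, 1.0], size=(num_draws, m))  # Rademacher sign vectors
    return B * np.linalg.norm(nu @ X, axis=1).mean() / m

m, d, B = 400, 5, 2.0
X = rng.normal(size=(m, d))  # synthetic stand-in for the data x_{i,j}
est = rademacher_linear(X, B)
bound = B * np.linalg.norm(X, axis=1).max() / np.sqrt(m)  # classical O(1/sqrt(m)) bound
print(est, bound)
```

The comparison against the classical bound $B\max_j\|\mathbf{x}_{i,j}\|_2/\sqrt{m_i}$ makes visible the $1/\sqrt{m_i}$ decay that also drives the last term of (\ref{eq_delta_Jt}).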
The bound in (\ref{eq_delta_Jt}) contains the effect of the unknown data distribution $\mathcal{P}$, while the bound of $\frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})$ covers only the role of the existing data. Although \cite{zhang2017dynamic} also considers the generalization error of the obtained classifiers, no closed form of the bound is given, and the obtained bound may not decrease with iterations since the reference classifier therein is not $\mathbf{w}^{\star}$ but a time-varying one. In the centralized setting of \cite{chaudhuri2011differentially}, $\Delta J_{\mathcal{P}}(\mathbf{w})$ is analyzed for the derived classifier $\mathbf{w}$, but there is no convergence issue since $\mathbf{w}$ is perturbed and published only once. Moreover, different from the works \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} and \cite{li2017robust}, our analysis captures the effect of the classifier class $\mathcal{W}$ via Rademacher complexity. Such effects have been studied in \cite{shalev2014understanding} in non-private centralized machine learning scenarios. Furthermore, in the privacy-aware (centralized or distributed) frameworks of \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} and the robust ADMM scheme for erroneous updates \cite{li2017robust}, there is only one type of noise perturbation, and the uncertainty in the training data is not considered. \subsection{Comparisons and Discussions} \label{com_dis} Here, we compare the proposed framework with existing schemes from the perspective of privacy and performance, and discuss how each parameter contributes to the results. First, we find that the bound in (\ref{eq_delta_Jt}) is larger than those in \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} if we adopt the approach in this paper to conduct performance analysis on these works. This is expected since there are more perturbations in our setting.
However, as we have discussed in Section \ref{discussion_privacy}, these existing frameworks do not meet the heterogeneous privacy requirements, and some of them cannot avoid accumulation of privacy losses, resulting in no protection at all. It should be emphasized that extra performance costs must be paid when the data contributors want to obtain a stronger privacy guarantee. These existing frameworks may be better than ours in the sense of performance, but the premise is that users accept the privacy preservation provided by them. If users require heterogeneous privacy protection, our framework can be more suitable. Further, compared with \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal}, \cite{li2017robust} and \cite{shalev2014understanding}, we provide a more systematic result on the performance analysis in Theorem~\ref{delta_Jt}, where most parameters related to useful measures of classifiers (and also privacy preservation) are included. Servers and users can set these parameters as needed, and thus obtain classifiers which appropriately balance privacy and performance. We will discuss the roles of these parameters after some further analysis of the theoretical result. According to Lemma~\ref{classifier_converge}, the classifiers solved by different servers converge to $\widetilde{\mathbf{w}}_\mathrm{opt}$ in the sense of expectation. The performance of $\widetilde{\mathbf{w}}_\mathrm{opt}$ can be analyzed in a similar way as in Theorem~\ref{delta_Jt}. This is given in the following corollary. \begin{corollary} \label{corollary1} For $\epsilon>0$ and $\delta\in(0,1)$, with probability at least $1-\delta$, we have \begin{equation}\label{Jp_w_tilde} \small \begin{split} & \Delta J_{\mathcal{P}}(\widetilde{\mathbf{w}}_\mathrm{opt}) \\ & \leq \frac{4}{n}\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \sum_{i=1}^n \left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right)+ \frac{1}{n\hat{\kappa}}R^2.
\end{split} \end{equation} \end{corollary} For the sake of comparison, the next theorem provides a performance analysis when the privacy-preserving approach in Phase~2 is removed, and a corresponding result on the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ is given in the subsequent corollary. \begin{theorem} \label{thm3} For $\epsilon>0$ and $\delta\in(0,1)$, the aggregated classifier $\overline{\mathbf{w}}_i(t)$ obtained by the original ADMM scheme (\ref{new_primal_local}) and (\ref{new_dual_local}) satisfies with probability at least $1-\delta$ \begin{equation}\label{delta_Jt_unper} \small \begin{split} \sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t)) & \leq \frac{\beta}{t} \left(\|\mathbf{w}(0)-\widehat{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2 + \|\mathbf{r}(0)\|_2^2\right) \\ & + 4 \frac{e^{\epsilon}+1}{e^{\epsilon}-1}\sum_{i=1}^n\left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right). \end{split} \end{equation} \end{theorem} \begin{corollary} \label{corollary2} For $\epsilon>0$ and $\delta\in(0,1)$, with probability at least $1-\delta$, we have \begin{equation}\label{Jp_w_hat} \small \Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt}) \leq\frac{4}{n}\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \sum_{i=1}^n \left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right). \end{equation} \end{corollary} It is observed that the bound in (\ref{delta_Jt_unper}) is not in expectation since there is no noise perturbation during the ADMM iterations. It is interesting to note that the convergence rate of the unperturbed ADMM algorithm is also $O(\frac{1}{t})$. This implies that the modified ADMM algorithm preserves the convergence speed of the general distributed ADMM scheme. However, there exists a tradeoff between performance and privacy protection. 
Comparing (\ref{eq_delta_Jt}) and (\ref{delta_Jt_unper}), we find that the extra terms in (\ref{eq_delta_Jt}) result from the perturbations in Phase~2. Also, the effect of the objective function perturbation is reflected in (\ref{Jp_w_tilde}), namely, in the term $\frac{1}{n\hat{\kappa}}R^2$. When $R$ (the bound of $\boldsymbol{\eta}_i$) increases, the generalization error of the trained classifier increases as well, indicating worse performance. Similarly, if we use noise with a larger initial variance and decaying rate to perturb the solved classifiers in each iteration, the bound in (\ref{eq_delta_Jt}) will also increase. \textbf{Effect of data quality}. We observe that the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ in (\ref{Jp_w_hat}) also appears in (\ref{eq_delta_Jt}), (\ref{Jp_w_tilde}) and (\ref{delta_Jt_unper}). This bound reflects the effect of users' reported data, whose labels are randomized in Phase~1. It can be seen that besides the probability $\delta$, the bound in (\ref{Jp_w_hat}) is affected by three factors: the PPD $\epsilon$, the Rademacher complexity $\mathrm{Rad}_i(\mathcal{W})$, and the number of data samples $m_i$. Here, we discuss the roles of these factors. For the effect of the PPD, we find that when $\epsilon$ is small, the bound decreases as $\epsilon$ increases. However, when $\epsilon$ is sufficiently large, it has limited influence on the bound. In particular, by taking $\epsilon\rightarrow\infty$, the bound reduces to that for the optimal solution of Problem~\ref{problem_1}, since $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ goes to 1 in (\ref{Jp_w_hat}). Note that $\mathrm{Rad}_i(\mathcal{W})$ and $m_i$ still remain and affect the performance. For the effect of $\mathrm{Rad}_i(\mathcal{W})$, we observe that the generalization errors of the trained classifiers may become larger when $\mathrm{Rad}_i(\mathcal{W})$ increases.
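To make the role of the PPD concrete, the following snippet (an illustration written for this discussion, not code from the paper) evaluates the factor $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ that multiplies the data-quality terms in (\ref{Jp_w_hat}):

```python
import math

# Numerical illustration of the privacy factor (e^eps + 1) / (e^eps - 1)
# appearing in the performance bounds: it blows up for small eps and
# approaches 1 as eps grows (weaker privacy, smaller bound).
def privacy_factor(eps):
    return (math.exp(eps) + 1.0) / (math.exp(eps) - 1.0)

for eps in (0.4, 1.0, 2.0, 5.0, 10.0):
    print(f"eps = {eps:>4}: factor = {privacy_factor(eps):.4f}")
```

For small $\epsilon$ this factor dominates the bound, while for $\epsilon$ beyond roughly 5 it is within about $1\%$ of its limit 1, matching the observation that a sufficiently large $\epsilon$ has limited influence on the bound.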
The Rademacher complexity is directly related to the size of the classifier class $\mathcal{W}$. If there are only a small number of candidate classifiers in $\mathcal{W}$, the solutions are likely to have a small deviation between their generalization errors and the reference generalization error $J_{\mathcal{P}}(\mathbf{w}^{\star})$. Nevertheless, we should guarantee the richness of the class $\mathcal{W}$ to make $J_{\mathcal{P}}(\mathbf{w}^{\star})$ small; otherwise, the reference classifier $\mathbf{w}^\star$ trained over $\mathcal{W}$ will have a large generalization error. In that case, even though the deviation $\Delta J_{\mathcal{P}}(\cdot)$ may be small, the trained classifiers are not good predictors due to the bad performance of $\mathbf{w}^\star$. Thus, setting an appropriate classifier class is important for obtaining a classifier with qualified performance. Finally, we consider the effect of the number of users. From the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ in (\ref{Jp_w_hat}), we know that if $m_i$ becomes larger, the last term of the bound will decrease. In general, more data samples imply access to more information about the underlying distribution $\mathcal{P}$. Then, the trained classifier can predict the labels of newly sampled data from $\mathcal{P}$ with higher accuracy. Moreover, it can be seen that the bound is the average of $n$ local errors generated by different servers. When new servers participate in the DML framework, these servers should make sure that they have collected a sufficient amount of training data. Otherwise, the bound may not decrease even though the total number of data samples increases. This is because unbalanced local errors may lead to an increase in their average, implying a larger bound of $\Delta J_{\mathcal{P}}(\cdot)$.
\begin{figure} \vspace*{-2mm} \begin{center} \subfigure[Norm of the errors]{\label{consensus_obs} \includegraphics[scale=0.45]{figure/consensus.pdf}} \subfigure[Empirical risks]{\label{com_risks} \includegraphics[scale=0.45]{figure/convergence_com.pdf}} \caption{{\small The convergence properties of PDML.}} \end{center} \end{figure} \section{Experimental Evaluation} \label{evaluation} In this section, we conduct experiments to validate the obtained theoretical results and study the classification performance of the proposed PDML framework. Specifically, we first use a real-world dataset to verify the convergence property of the PDML framework and study how the key parameters affect the performance. Also, we leverage another seven datasets to verify the classification accuracy of the classifiers trained by the framework. \subsection{Experiment Setup} \subsubsection{Datasets} We use two kinds of publicly available datasets, described below, to validate the convergence property and classification accuracy of the PDML. (i) Adult dataset \cite{Dua2019}. The dataset contains census data of 48,842 individuals, where there are 14 attributes (e.g., age, work-class, education, occupation and native-country) and a label indicating whether a person's annual income is over \$50,000. After removing the instances with missing values, we obtain a training dataset with 45,222 samples. To preprocess the dataset, we adopt a unary (one-hot) encoding approach to transform the categorical attributes into binary vectors, and further normalize each feature vector so that its norm is at most 1. The preprocessed feature vector is a 105-dimensional vector. For the labels, we mark an annual income over \$50,000 as 1 and label it as $-1$ otherwise. (ii) Gunnar R\"{a}tsch's benchmark datasets \cite{ucidata}. There are thirteen data subsets from the UCI repository in the benchmark datasets.
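The preprocessing steps above can be sketched as follows; the column layout and helper names are hypothetical and not taken from the paper's implementation:

```python
import numpy as np

def one_hot(values, categories):
    """Unary (one-hot) encoding of a single categorical column."""
    idx = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(values), len(categories)))
    for r, v in enumerate(values):
        out[r, idx[v]] = 1.0
    return out

def preprocess(numeric, categorical, categories, incomes, threshold=50_000):
    """Encode categorical columns, clip feature norms to 1, map labels to {-1, +1}."""
    x = np.hstack([np.asarray(numeric, dtype=float)] +
                  [one_hot(col, cats) for col, cats in zip(categorical, categories)])
    # Normalize every feature vector to norm <= 1 (vectors already inside
    # the unit ball are left unchanged).
    norms = np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1.0)
    x = x / norms
    y = np.where(np.asarray(incomes) > threshold, 1, -1)
    return x, y
```

This mirrors the description above: categorical attributes become binary vectors, each full feature vector is scaled into the unit ball, and incomes over \$50,000 are labeled 1, otherwise $-1$.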
To mitigate the effect of data quality, we select the seven datasets with the largest data sizes to conduct experiments. The seven datasets are \textit{German}, \textit{Image}, \textit{Ringnorm}, \textit{Banana}, \textit{Splice}, \textit{Twonorm} and \textit{Waveform}, where the numbers of instances are 1,000, 2,086, 7,400, 5,300, 2,991, 7,400 and 5,000, respectively. Each dataset is partitioned into training and test data, with a ratio of approximately $70\%:30\%$. \subsubsection{Underlying classification approach} Logistic regression (LR) is utilized for training the prediction model, where the loss function and regularizer are $\ell_{LR} (y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})= \log \bigl(1+e^{-y_{i, j} \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j}}\bigr)$ and $N(\mathbf{w}_i) = \frac{1}{2}\|\mathbf{w}_i\|^2$, respectively. Then, the local objective function is given by \begin{equation}\nonumber \small J_i(\mathbf{w}_i) = \sum_{j=1}^{m_i}\frac{1}{m_i}\log \left(1+e^{-y_{i, j} \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j}}\right)+\frac{a}{2n}\|\mathbf{w}_i\|^2. \end{equation} It is easy to check that when the classifier class $\mathcal{W}$ is bounded (e.g., a bounded set $\mathcal{W}= \{\mathbf{w}\in\mathbb{R}^d\; |\; \|\mathbf{w}\|\leq W\}$), $\ell_{LR} (\cdot)$ satisfies Assumption~\ref{loss_assumption}. Due to the strong convexity of $N(\mathbf{w}_i)$, $J_i(\mathbf{w}_i)$ is strongly convex. Then, according to Lemma~\ref{modified_solution}, Problems~\ref{problem_2} and \ref{problem_3} have optimal solution sets, and thus we can use LR to train the classifiers. \subsubsection{Network topology} We consider $n=10$ servers that collaboratively train a prediction model. A connected random graph is used to describe the communication topology of the 10 servers. The graph used has $E=13$ communication links in total. Each server is responsible for collecting the data from a group of users, and thus there are 10 groups of users.
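The local LR objective $J_i$ above can be sketched as follows, assuming hypothetical NumPy arrays `X` ($m_i \times d$) and `y` (labels in $\{-1,+1\}$); `np.logaddexp` evaluates $\log(1+e^{\cdot})$ in a numerically stable way:

```python
import numpy as np

# A minimal sketch of the local objective J_i above (not the paper's code):
# average logistic loss over the m_i local samples plus the scaled
# l2 regularizer (a / 2n) * ||w||^2.
def local_objective(w, X, y, a=1.0, n=10):
    margins = -y * (X @ w)
    losses = np.logaddexp(0.0, margins)   # log(1 + exp(margins)), stably
    return losses.mean() + (a / (2 * n)) * np.dot(w, w)
```

At $\mathbf{w}_i=\mathbf{0}$ every logistic loss equals $\log 2$ and the regularizer vanishes, a convenient sanity check.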
In the experiments, we assume that each group has the same number of users, that is, $m_i=m_l, \forall i, l$. For example, when we use $m=45,000$ instances sampled from the Adult dataset to train the classifier, each server collects data from $m_i = 4,500$ users. \subsection{Experimental Results with Adult Dataset} Based on the Adult dataset, we first verify the convergence property of the PDML framework. Fig.~\ref{consensus_obs} illustrates the maximum distance between the norms of any two classifiers found by different servers. We set the bound of $\boldsymbol{\eta}_i$ to 1. Other settings are the same as those in the experiments with the synthetic dataset. For the sake of comparison, we also draw the variation curve (with circle markers) of the maximum distance when the privacy-preserving approach in Phase~2 is removed. We observe that both distances converge to 0, implying that the consensus constraint is eventually satisfied. Fig.~\ref{com_risks} shows the variation of the empirical risks (the objective function in (\ref{original_objective})) as the iterations proceed. Here, the green dashed line depicts the final empirical risk achieved by general ADMM with the original data, which we call the reference empirical risk. There are also two curves showing the varying empirical risks with privacy preservation. Comparing the two curves, we find that the ADMM algorithm with the combined noise-adding scheme preserves the convergence property of the general ADMM algorithm. Due to the noise perturbations in Phase~2, the convergence time becomes longer. In addition, it can be seen that regardless of whether the privacy-preserving approach in Phase~2 is used, neither ADMM scheme achieves the same final empirical risk as that of the green line, which is consistent with the analysis in Section~\ref{com_dis}.
\begin{figure} \begin{center} \subfigure[Different $R$ (with $\rho=0.8$)]{\label{effects_M} \includegraphics[scale=0.45]{figure/different_m.pdf}} \subfigure[Different $\rho$ (with $R=1$)]{\label{effects_rho} \includegraphics[scale=0.45]{figure/different_rho.pdf}} \subfigure[Different $\epsilon$]{\label{effects_epsilon} \includegraphics[scale=0.45]{figure/different_epsilon.pdf}} \caption{{\small The effects of key parameters.}} \end{center} \end{figure} \begin{table*}[thb!] \caption{{\small Classification accuracy with test data (\%)}} \label{table} \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{*{8}{c}} \hline \hline \multirow{2}{*}{Dataset} & \multirowcell{2}{Without privacy protection} &\multicolumn{2}{c}{Modified loss} &\multicolumn{4}{c}{Perturbed ADMM} \\ \cmidrule(lr){3-4}\cmidrule(lr){5-8} \morecmidrules& &{$\epsilon=0.4$, $R=0$} &{$\epsilon=1$, $R=0$} &{$\epsilon=0.4$, $R=1$} &{$\epsilon=0.4$, $R=9$} &{$\epsilon=1$, $R=1$} &{$\epsilon=1$, $R=9$} \\ \hline {German} &75.00 &71.00 &74.00 &69.67 &64.00 &74.33 &67.67 \\ {Image} &75.56 &70.13 &72.84 &69.33 &63.10 &70.45 &65.50 \\ {Ringnorm} &77.38 &73.44 &76.82 &73.74 &66.18 &75.77 &70.23 \\ {Banana} &58.22 &54.33 &56.06 &54.28 &43.11 &55.89 &54.44 \\ {Splice} &56.60 &46.84 &56.60 &54.94 &46.39 &55.83 &52.50 \\ {Twonorm} &97.90 &96.59 &97.38 &96.51 &92.28 &97.41 &94.77 \\ {Waveform} &88.93 &84.60 &87.93 &84.07 &80.47 &87.67 &81.73 \\ \hline \hline \end{tabular} \end{table*} We then study the effects of the key parameters on the performance. In Fig.~\ref{effects_M}, we examine the impact of the noise bound $R$ when the decaying rate $\rho$ is fixed at $0.8$. It is observed that $R$ affects the final empirical risks of the trained classifiers. The larger the noise bound, the greater the gap between the achieved empirical risks and the reference value, which is consistent with Corollary~\ref{corollary1}.
In Fig.~\ref{effects_rho}, we inspect the effect of the Gaussian noise decaying rate $\rho$ when $R$ is fixed at $1$. We find that the convergence time is affected by $\rho$. A larger $\rho$ implies that the communicated classifiers are still perturbed by noise with a larger variance even after iterating over multiple steps. Thus, more iterations are needed to obtain the same final empirical risk as that with a smaller $\rho$. Such a property can be derived from the bound in (\ref{eq_delta_Jt}). Fig.~\ref{effects_epsilon} illustrates the variation of the final empirical risks when the PPD $\epsilon$ changes. The final empirical risks decrease with a larger PPD (weaker privacy guarantee), which reflects the tradeoff between privacy protection and performance. Further, the extra perturbations in Phase~2 lead to larger empirical risks for all the PPDs in the experiments. We also find that when $\epsilon$ is large ($\epsilon>0.6$), the achieved empirical risks are close to the reference value and do not change significantly. Again, this result is consistent with the analysis of the bound in (\ref{Jp_w_hat}). \subsection{Classification Accuracy Evaluation} We use the test data of the seven datasets to evaluate the prediction performance of the trained classifiers, which is shown in Table~\ref{table}. The classification accuracy is defined as the fraction of test samples whose labels predicted by the trained classifier match the true labels. For comparison, we present the classification accuracy achieved by general ADMM with the original data. For validation of the classification accuracy under the PDML framework, we choose six different sets of parameter configurations to conduct the experiments. The specific configurations can be found in the second row of Table~\ref{table}. We find that a larger $\epsilon$ and a smaller $R$ yield better accuracy.
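The accuracy metric just defined can be computed as in the following sketch (illustrative, with hypothetical placeholders for the trained classifier and test data):

```python
import numpy as np

# Classification accuracy: the fraction of test samples whose predicted sign
# matches the true {-1, +1} label. The linear classifier predicts sign(w^T x).
def classification_accuracy(w, X_test, y_test):
    preds = np.where(X_test @ w >= 0, 1, -1)
    return float(np.mean(preds == y_test))
```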
According to the theoretical results, the upper bounds for the differences $\Delta J_{\mathcal{P}}(\widetilde{\mathbf{w}}_\mathrm{opt})$ and $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ will decrease with a larger $\epsilon$ and a smaller $R$, implying better performance of the trained classifiers. Thus, the bound in Theorem~\ref{delta_Jt} also provides a guideline for choosing appropriate parameters to obtain a prediction model with satisfactory classification accuracy. It is notable that even under the strongest privacy setting ($\epsilon=0.4$, $R=9$), the proposed framework achieves classification accuracy comparable to the reference precision. We also notice that on the datasets \textit{Banana} and \textit{Splice}, PDML achieves inferior accuracy in all settings. For a binary classification problem, an accuracy of around $50\%$ is meaningless. The reason for the poor accuracy may be that LR is not a suitable classification approach for these two datasets. Overall, the proposed PDML framework achieves competitive classification accuracy while providing strong privacy protection. \section{Conclusion} \label{conclusion} In this paper, we have provided a privacy-preserving ADMM-based distributed machine learning framework. Through a local randomization approach, data contributors obtain self-controlled DP protection for the most sensitive labels, and the privacy guarantee does not decrease as the ADMM iterations proceed. Further, a combined noise-adding method has been designed for perturbing the ADMM algorithm, which simultaneously preserves privacy for users' feature vectors and strengthens the protection for the labels. Lastly, the performance of the proposed PDML framework has been analyzed in theory and validated by extensive experiments. For future investigations, we will study the joint privacy-preserving effects of the local randomization approach and the combined noise-adding method.
Moreover, it is interesting yet challenging to extend the PDML framework to non-empirical risk minimization problems. When users allocate distinct sensitive levels to different attributes, we are also interested in designing a new privacy-aware scheme providing heterogeneous privacy protection for different attributes. \section{Introduction}\label{introduction} \IEEEPARstart{I}{n} the era of big data, distributed machine learning (DML) is increasingly applied in various areas of our daily lives, especially with the proliferation of training data. Typical applications of DML include machine-aided prescription \cite{fredrikson2014privacy}, natural language processing \cite{le2014distributed}, and recommender systems \cite{wang2015collaborative}, to name a few. Compared with the traditional single-machine model, DML is more competent for large-scale learning tasks due to its scalability and robustness to faults. The alternating direction method of multipliers (ADMM), a commonly used parallel computing approach in the optimization community, is a simple but efficient algorithm for multiple servers to collaboratively solve learning problems \cite{boyd2011distributed}. Our DML framework also uses ADMM as the underlying algorithm. However, privacy is a significant issue that has to be considered in DML. In many machine learning tasks, users' data for training the prediction model contains sensitive information, such as genotypes, salaries, and political orientations. For example, if we adopt DML methods to predict HIV-1 infection \cite{qi2010semi}, the data used for protein-protein interaction identification mainly includes patients' information about their proteins, labels indicating whether they are HIV-1 infected or not, and other kinds of health data. Such information, especially the labels, is extremely sensitive for the patients. Moreover, there exist potential risks of privacy disclosure.
On the one hand, when users report their data to servers, illegal parties can eavesdrop on the data transmission or penetrate the servers to steal the reported data. On the other hand, the communicated information between servers, which is required to train a common prediction model, can also disclose users' private data. If these disclosure risks are not properly controlled, users would refuse to contribute their data to servers even though DML may bring convenience to them. Various privacy-preserving solutions have been proposed in the literature. Differential privacy (DP) \cite{dwork2008differential} is one of the standard non-cryptographic approaches and has been applied in distributed computing scenarios \cite{wang2019privacy, nozari2017differentially, dpc2012, wang2018differentially}. Other schemes which are not DP-preserving can be found in \cite{mo2017privacy, manitara2013privacy, he2019consensus}. In addition, privacy-aware machine learning problems \cite{chaudhuri2011differentially, zhang2017dynamic, ding2019optimal, gade2018privacy} have attracted a lot of attention, and many researchers have proposed ADMM-based solutions \cite{lee2015maximum, zhang2018improving, zhang2019admm}. However, there exists an underlying assumption in most privacy-aware schemes that the data contributors trust the servers collecting their data. This trust assumption may lead to privacy disclosure in many cases. For instance, when a server is penetrated by an adversary, the information obtained by the adversary may be the users' original private data. Moreover, most existing schemes provide the same privacy guarantee for the entire data sample of a user, though different data pieces are likely to have distinct sensitive levels. In the example of HIV-1 infection prediction \cite{qi2010semi} mentioned above, it is obvious that the label indicating HIV-1 infected or uninfected is more sensitive than the other health data.
Thus, the data pieces with higher sensitive levels should obtain stronger protection. On the other hand, as claimed in \cite{wang2019privacy}, different servers present diverse trust degrees to users due to their distinct permissions to access users' data. The servers having no direct connection with a user, compared with the server collecting his/her data, may be less trustworthy. Here, the user would require that the less trustworthy servers obtain his/her information under stronger privacy preservation. Therefore, we investigate a privacy-aware DML framework that preserves heterogeneous privacy, where users' data pieces with distinct sensitive levels obtain different privacy guarantees against servers of diverse trust degrees. One challenging issue is to reduce the accumulation of privacy losses over the ADMM iterations as much as possible, especially for the privacy guarantee of the most sensitive data pieces. Most existing ADMM-based private DML frameworks preserve privacy by perturbing the intermediate results shared by the servers. Since each intermediate result is computed with users' original data, its release will disclose part of the private information, implying that the privacy loss may increase as the iterations proceed. Moreover, these private DML frameworks only provide the same privacy guarantee for all data pieces. In addition to intermediate information perturbation, original data randomization methods can be combined to provide heterogeneous privacy protection. However, such an approach introduces coupled uncertainties into the classification model, and the lack of uncertainty-decoupling methods makes performance quantification a challenging task. In this paper, we propose a privacy-preserving distributed machine learning (PDML) framework to address these challenges.
After removing the assumption of trustworthy servers, we incorporate the users' data reporting into the DML process, which forms a two-phase training scheme together with the distributed computing process. For privacy preservation, we adopt different approaches in the two phases. In Phase~1, a user first leverages a local randomization approach to obfuscate the most sensitive data pieces and sends the randomized version to a server. This technique provides the user with a self-controlled privacy guarantee for the most sensitive information. Further, in Phase~2, multiple servers collaboratively train a common prediction model, where they use a combined noise-adding method to perturb the communicated messages, which preserves privacy for users' less sensitive data pieces. Such perturbation also strengthens the privacy preservation of the data pieces with the highest sensitive level. For the performance of the PDML framework, we analyze the generalization error of the current classifiers trained by different servers. The main contributions of this paper are threefold: \begin{enumerate} \item A two-phase PDML framework is proposed to provide heterogeneous privacy protection in DML, where users' data pieces obtain different privacy guarantees depending on their sensitive levels and the servers' trust degrees. \item In Phase~1, we design a local randomization approach, which preserves DP for the users' most sensitive information. In Phase~2, a combined noise-adding method is devised to provide privacy protection for the other data pieces. \item The convergence property of the proposed ADMM-based privacy-aware algorithm is analyzed. We also give a theoretical bound on the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \end{enumerate} The remainder of this paper is organized as follows. Related works are discussed in Section~\ref{related_works}.
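As an illustration of Phase~1-style label obfuscation, the sketch below implements randomized response, a standard $\epsilon$-LDP mechanism for binary labels; this is an assumption made for illustration and not necessarily the exact randomization designed in the paper:

```python
import math
import random

# Illustrative sketch (an assumption, not the paper's exact Phase-1
# mechanism): randomized response on binary labels in {-1, +1}, keeping the
# true label with probability e^eps / (e^eps + 1) and flipping it otherwise.
def randomize_labels(labels, eps, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility of the sketch
    keep_prob = math.exp(eps) / (math.exp(eps) + 1.0)
    return [y if rng.random() < keep_prob else -y for y in labels]
```

Keeping a label with probability $e^{\epsilon}/(e^{\epsilon}+1)$ is consistent with the factors of the form $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ that appear in the performance bounds of Section~\ref{performance_ana}.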
We provide some preliminaries and formulate the problem in Section~\ref{pro_formulation}. Section~\ref{pp_framework} presents the designed privacy-preserving framework, and the performance is analyzed in Section~\ref{performance_ana}. In order to validate the classification performance, we use multiple real-world datasets and conduct experiments in Section~\ref{evaluation}. Finally, Section~\ref{conclusion} concludes the paper. A preliminary version \cite{wang2019differential} of this paper was accepted for presentation at IEEE CDC 2019. This paper contains a different privacy-preserving approach with a fully distributed ADMM setting, full proofs of the main results, and more experimental results. \section{Related Works} \label{related_works} As one of the important applications of distributed optimization, DML has received widespread attention from researchers. Besides ADMM schemes, many distributed approaches have been proposed in the literature, e.g., subgradient descent methods \cite{nedic2009distributed}, local message-passing algorithms \cite{predd2009collaborative}, adaptive diffusion mechanisms \cite{chen2012diffusion}, and dual averaging approaches \cite{duchi2011dual}. Compared with these approaches, ADMM schemes achieve faster empirical convergence \cite{shi2014linear}, making them more suitable for large-scale DML tasks. For privacy-preserving problems, cryptographic techniques \cite{biham2012differential, brakerski2014efficient, shoukry2016privacy} are often used to protect information from being inferred when the key is unknown. In particular, homomorphic encryption methods \cite{brakerski2014efficient}, \cite{shoukry2016privacy} allow untrustworthy servers to compute on encrypted data, and this approach has been applied in an ADMM scheme \cite{zhang2019admm}. Nevertheless, such schemes unavoidably bring extra computation and communication overheads.
Another commonly used approach to preserve privacy is random value perturbation \cite{dwork2008differential}, \cite{erlingsson2014rappor}, \cite{xu2012building}. DP has been increasingly acknowledged as the de facto criterion for non-encryption-based data privacy. This approach incurs lower costs but still provides strong privacy guarantees, though there exist tradeoffs between privacy and performance \cite{wang2019privacy}. In recent years, random value perturbation-based approaches have been widely used to address privacy protection in distributed computing, especially in consensus problems \cite{francesco2019lectures}. For instance, \cite{wang2019privacy, nozari2017differentially, dpc2012} and \cite{mo2017privacy, manitara2013privacy, he2019consensus} provide privacy-preserving average consensus paradigms, where the mechanisms in \cite{wang2019privacy, nozari2017differentially, dpc2012} provide a DP guarantee. Moreover, for a maximum consensus algorithm, \cite{wang2018differentially} gives a differentially private mechanism. Since these solutions mainly focus on simple statistical analysis (e.g., computation of average and maximum elements), there may exist difficulties in directly applying them to DML. Privacy-preserving machine learning problems have also attracted a lot of attention recently. Under centralized scenarios, Chaudhuri et al. \cite{chaudhuri2011differentially} proposed a DP solution for an empirical risk minimization problem by perturbing the objective function with well-designed noise. For privacy-aware DML, Han et al. \cite{han2016differentially} also gave a differentially private mechanism, where the underlying distributed approach is subgradient descent. The works \cite{zhang2017dynamic} and \cite{ding2019optimal} present dynamic DP schemes for ADMM-based DML, where a privacy guarantee is provided in each iteration. However, if a privacy violator uses the published information from all iterations to make inferences, there will be no privacy guarantee.
In addition, an obfuscated stochastic gradient method via correlated perturbations was proposed in \cite{gade2018privacy}, though it cannot provide DP preservation. Different from these works, in this paper we remove the assumption of trustworthy servers. Moreover, we take into consideration the distinct sensitive levels of data pieces and the diverse trust degrees of servers, and propose the PDML framework providing heterogeneous privacy preservation. \section{Preliminaries and Problem Statement} \label{pro_formulation} In this section, we introduce the overall computation framework of DML and the ADMM algorithm used there. Moreover, the privacy-preserving problem for the framework is formulated with the definition of local differential privacy. \subsection{System Setting} We consider a collaborative DML framework to carry out classification problems based on data collected from a large number of users. Fig.~\ref{framework} gives a schematic diagram. There are two parties involved: users (or data contributors) and computing servers. The goal of DML is to train a classification model based on the data of all users. It has two phases, data collection and distributed computation, called Phase~1 and Phase~2, respectively. In Phase~1, each user sends his/her data to the server which is responsible for collecting all the data from the user's group. In Phase~2, each computing server utilizes a distributed computing approach to cooperatively train the classifier through information interaction with other servers. The proposed DML framework is based on the one in \cite{wang2019privacy}, but the learning tasks are much more complex than the basic statistical analysis considered in \cite{wang2019privacy}. \textbf{Network Model}. Consider $n\geq2$ computing servers participating in the framework, where the $i$th server is denoted by $s_i$.
We use an undirected and connected graph $\mathcal{G}=(\mathcal{S}, \mathcal{E})$ to describe the underlying communication topology, where $\mathcal{S}=\{s_i\;| \;i=1, 2, \ldots, n\}$ is the set of servers and $\mathcal{E}\subseteq\mathcal{S}\times\mathcal{S}$ is the set of communication links between servers. The number of communication links in $\mathcal{G}$ is denoted by $E$, i.e., $E=|\mathcal{E}|$. Let the set of neighbor servers of $s_i$ be $\mathcal{N}_i=\{s_l\in \mathcal{S}\;| \;(s_i, s_l)\in \mathcal{E}\}$. The degree of server $s_i$ is denoted by $N_i=|\mathcal{N}_i|$. Different servers collect data from different groups of users, and thus all users can be divided into $n$ distinct groups. The $i$th group of users, whose data is collected by server $s_i$, is denoted by the set $\mathcal{U}_i$, and $m_i=|\mathcal{U}_i|$ is the number of users in $\mathcal{U}_i$. Each user $j\in \mathcal{U}_i$ has a data sample $\mathbf{d}_{i, j}=(\mathbf{x}_{i, j}, y_{i, j})\in \mathcal{X}\times\mathcal{Y} \subseteq \mathbb{R}^{d+1}$, which is composed of a feature vector $\mathbf{x}_{i, j}\in\mathcal{X}\subseteq\mathbb{R}^d$ and the corresponding label $y_{i, j}\in\mathcal{Y} \subseteq \mathbb{R}$. In this paper, we consider a binary classification problem, that is, the labels take two values, $y_{i, j}\in\{-1,1\}$. Suppose that all data samples $\mathbf{d}_{i, j}, \forall i,j$, are drawn from an underlying distribution $\mathcal{P}$, which is unknown to the servers. Here, the learning goal is that the classifier trained with limited data samples should match the ideal model trained with known $\mathcal{P}$ as closely as possible. \begin{figure} \centering \includegraphics[scale=0.5]{figure/framework_new.pdf} \caption{{\small Illustration of the DML framework}}\label{framework} \end{figure} \subsection{Classification Problem and ADMM Algorithm} \label{admm_alg} We first introduce the classification problem solved by the two-phase DML framework.
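The network model above can be sketched in a few lines; servers are indexed $1,\dots,n$ and `edges` is a hypothetical undirected edge list:

```python
# A minimal sketch of the network model above: given an undirected edge list
# for servers 1..n, compute each server's neighbor set N_i and degree |N_i|.
def neighbor_sets(n, edges):
    neighbors = {i: set() for i in range(1, n + 1)}
    for (i, l) in edges:
        neighbors[i].add(l)
        neighbors[l].add(i)
    degrees = {i: len(neighbors[i]) for i in neighbors}
    return neighbors, degrees
```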
Let $\mathbf{w}: \mathcal{X}\rightarrow\mathcal{Y}$ be the trained classification model. The trained classifier $\mathbf{w}$ should guarantee that the accuracy of mapping any feature vector $\mathbf{x}_{i, j}$ (sampled from the distribution $\mathcal{P}$) to its correct label $y_{i, j}$ is high. We employ the method of regularized empirical risk minimization, which is a commonly used approach to find an appropriate classifier \cite{vapnik2013nature}. Denote the classifier trained by server $s_i$ as $\mathbf{w}_i\in\mathbb{R}^d$. The objective function (or the empirical risk) of the minimization problem is defined as \begin{equation}\label{original_objective} \small J(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^n\left[\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i)\right], \end{equation} where $\ell: \mathbb{R}\times\mathbb{R}\rightarrow \mathbb{R}$ is the loss function measuring the performance of the trained classifier $\mathbf{w}_i$. The regularizer $N(\mathbf{w}_i)$ is introduced to mitigate overfitting, and $a>0$ is a constant. We take a bounded classifier class $\mathcal{W}\subset\mathbb{R}^d$ such that $\mathbf{w}_i\in\mathcal{W}, \forall i$. For the loss function $\ell(\cdot)$ and the regularizer $N(\cdot)$, we introduce the following assumptions \cite{chaudhuri2011differentially}, \cite{zhang2017dynamic}. \begin{assumption}\label{loss_assumption} The loss function $\ell(\cdot)$ is convex and doubly differentiable in $\mathbf{w}$. In particular, $\ell(\cdot)$, $\frac{\partial\ell(\cdot)}{\partial \mathbf{w}}$ and $\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}$ are bounded over the class $\mathcal{W}$ as \begin{equation}\nonumber \small |\ell(\cdot)|\leq c_1, \left|\frac{\partial\ell(\cdot)}{\partial \mathbf{w}}\right|\leq c_2, \left|\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}\right|\leq c_3, \end{equation} where $c_1$, $c_2$ and $c_3$ are positive constants.
Moreover, it holds that $\frac{\partial^2\ell(y,\mathbf{w}^\mathrm{T}\mathbf{x})}{\partial {\mathbf{w}}^2}=\frac{\partial^2\ell(-y,\mathbf{w}^\mathrm{T}\mathbf{x})}{\partial {\mathbf{w}}^2}$. \end{assumption} \begin{assumption}\label{regularizer_assumption} The regularizer $N(\cdot)$ is doubly differentiable and strongly convex with parameter $\kappa>0$, i.e., $\forall \mathbf{w}_1, \mathbf{w}_2 \in \mathcal{W}$, \vspace{-0.4cm} \begin{equation}\label{strongly_convex} \small N(\mathbf{w}_2)-N(\mathbf{w}_1)\geq \nabla N(\mathbf{w}_1)^\mathrm{T}(\mathbf{w}_2-\mathbf{w}_1)+\frac{\kappa}{2}\|\mathbf{w}_2-\mathbf{w}_1\|_2^2, \end{equation} \vspace{-0.2cm} where $\nabla N(\cdot)$ denotes the gradient with respect to $\mathbf{w}$. \end{assumption} We note that $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ in (\ref{original_objective}) can be separated into $n$ different parts, where each part is the objective function of the local minimization problem to be solved by each server. The objective function of server $s_i$ is \begin{equation}\label{local_objective} \small J_i(\mathbf{w}_i):=\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i). \end{equation} Since $\mathbf{w}_i$ is trained based on the data of the $i$th group of users, it may reflect only the characteristics of that group's data. To find a common classifier that takes all participating users into account, one would impose the global consensus constraint $\mathbf{w}_i=\mathbf{w}_l, \forall s_i, s_l\in\mathcal{S}$. However, since servers interact only over the connected graph $\mathcal{G}$, we instead impose local consensus constraints: \begin{equation}\label{constraint} \small \mathbf{w}_i=\mathbf{z}_{il}, \quad \mathbf{w}_l=\mathbf{z}_{il}, \quad \forall (s_i, s_l)\in \mathcal{E}, \end{equation} where $\mathbf{z}_{il}\in\mathbb{R}^d$ is an auxiliary variable enforcing consensus between neighbor servers $s_i$ and $s_l$.
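As a concrete illustration of Assumptions~\ref{loss_assumption} and \ref{regularizer_assumption} and of the local objective (\ref{local_objective}), the sketch below assumes the logistic loss $\ell(y, t)=\ln(1+e^{-yt})$ and the regularizer $N(\mathbf{w})=\frac{1}{2}\|\mathbf{w}\|_2^2$; this particular loss/regularizer pair is our own choice for the example, not fixed by the text, though it satisfies both assumptions over a bounded class $\mathcal{W}$.

```python
import math

def logistic_loss(y, margin):
    """Logistic loss l(y, w^T x) = ln(1 + e^{-y * w^T x}); convex and
    doubly differentiable, with bounded derivatives on a bounded class W."""
    return math.log(1.0 + math.exp(-y * margin))

def local_objective(w, data, a, n):
    """J_i(w_i) = (1/m_i) * sum_j l(y_ij, w_i^T x_ij) + (a/n) * N(w_i),
    with N(w) = ||w||^2 / 2 (strongly convex with kappa = 1)."""
    m = len(data)
    risk = sum(logistic_loss(y, sum(wk * xk for wk, xk in zip(w, x)))
               for x, y in data) / m
    reg = 0.5 * sum(wk * wk for wk in w)
    return risk + (a / n) * reg

# Two hypothetical samples (x, y) with d = 2, held by one server.
data = [([1.0, 0.5], 1), ([-0.8, 0.2], -1)]
print(round(local_objective([0.3, -0.1], data, a=0.1, n=3), 4))
```

The regularization weight $a/n$ and the $1/m_i$ averaging mirror the structure of (\ref{local_objective}) term by term.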
Since $\mathcal{G}$ is connected, (\ref{constraint}) also implies global consensus. We can now write the whole regularized empirical risk minimization problem as follows \cite{forero2010consensus}. \begin{problem} \label{problem_1} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}, \{\mathbf{z}_{il}\}} & \sum_{i=1}^n\left[\sum_{j=1}^{m_i}\frac{1}{m_i}\ell(y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})+\frac{a}{n}N(\mathbf{w}_i)\right] \label{minimization_pro} \\ \mathrm{s.t.} \quad & \mathbf{w}_i=\mathbf{z}_{il}, \quad \mathbf{w}_l=\mathbf{z}_{il}, \quad \forall (s_i, s_l)\in \mathcal{E}. \end{alignat} \end{small} \end{problem} Next, we establish a compact form of Problem~\ref{problem_1}. Let $\mathbf{w}:=[\mathbf{w}_1^\mathrm{T} \cdots \mathbf{w}_n^\mathrm{T}]^\mathrm{T}\in\mathbb{R}^{nd}$ and $\mathbf{z}\in\mathbb{R}^{2Ed}$ be vectors aggregating all classifiers $\mathbf{w}_i$ and auxiliary variables $\mathbf{z}_{il}$, respectively. To express all local consensus constraints in matrix form, we introduce two block matrices $A_1, A_2\in \mathbb{R}^{{2Ed}\times{nd}}$, which are partitioned into $2E\times n$ submatrices of dimension $d\times d$. For the communication link $(s_i, s_l)\in\mathcal{E}$, if $\mathbf{z}_{il}$ is the $m$th block of $\mathbf{z}$, then the $(m, i)$th submatrix of $A_1$ and the $(m, l)$th submatrix of $A_2$ are the $d\times d$ identity matrix $I_d$; otherwise, these submatrices are the $d\times d$ zero matrix $0_d$. We write $J(\mathbf{w})=\sum_{i=1}^n J_i(\mathbf{w}_i)$, $A:=[A_1^{\mathrm{T}} A_2^{\mathrm{T}}]^{\mathrm{T}}$, and $B:=[-I_{2Ed}\; -\!I_{2Ed}]^{\mathrm{T}}$. Then, Problem~\ref{problem_1} can be written in a compact form as \begin{alignat}{2} \min_{\mathbf{w}, \mathbf{z}} \quad & J(\mathbf{w}) \label{minimization_pro_mat} \\ \mathrm{s.t.} \quad & A\mathbf{w}+B\mathbf{z}=0. \label{constraint_mat} \end{alignat} To solve this problem, we adopt the fully distributed ADMM algorithm from \cite{shi2014linear}.
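The block construction of $A_1$ and $A_2$ above can be sketched as follows (assumptions of the sketch: $d=1$, so each block is a scalar, and each undirected link is expanded into two ordered pairs so that $\mathbf{z}$ has $2E$ blocks, matching $\mathbf{z}\in\mathbb{R}^{2Ed}$):

```python
n = 3
links = [(0, 1), (1, 2)]                      # undirected edges of G (path graph)
pairs = links + [(l, i) for (i, l) in links]  # 2E ordered pairs, one z_m each

rows = len(pairs)
A1 = [[0.0] * n for _ in range(rows)]
A2 = [[0.0] * n for _ in range(rows)]
for m, (i, l) in enumerate(pairs):
    A1[m][i] = 1.0   # (m, i)th block of A1 is I_d
    A2[m][l] = 1.0   # (m, l)th block of A2 is I_d

def constraint_residual(w, z):
    """||A w + B z||_1 with A = [A1; A2] and B = [-I; -I], i.e. the rows
    encode w_i - z_m = 0 and w_l - z_m = 0 from the local constraints."""
    r1 = [sum(A1[m][j] * w[j] for j in range(n)) - z[m] for m in range(rows)]
    r2 = [sum(A2[m][j] * w[j] for j in range(n)) - z[m] for m in range(rows)]
    return sum(abs(v) for v in r1 + r2)

w = [0.7, 0.7, 0.7]                # a consensus point
z = [w[i] for (i, l) in pairs]     # z_m = w_i = w_l
print(constraint_residual(w, z))   # 0.0 at consensus
```

At any consensus point the residual of (\ref{constraint_mat}) vanishes, while any disagreement between linked servers makes it strictly positive.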
The augmented Lagrange function associated with (\ref{minimization_pro_mat}) and (\ref{constraint_mat}) is given by $\mathcal{L}(\mathbf{w}, \mathbf{z}, \boldsymbol{\lambda}) := J(\mathbf{w}) +\boldsymbol{\lambda}^{\mathrm{T}}(A\mathbf{w}+B\mathbf{z})+ \frac{\beta}{2}\|A\mathbf{w}+B\mathbf{z}\|_2^2$, where $\boldsymbol{\lambda}\in\mathbb{R}^{4Ed}$ is the dual variable ($\mathbf{w}$ is correspondingly called the primal variable) and $\beta\in \mathbb{R}$ is the penalty parameter. At iteration $t+1$, the optimal auxiliary variable $\mathbf{z}$ satisfies $\nabla_{\mathbf{z}} \mathcal{L}(\mathbf{w}(t+1), \mathbf{z}(t+1), \boldsymbol{\lambda}(t))=0$. Simple manipulation then yields $B^\mathrm{T}\boldsymbol{\lambda}(t+1)=0$. Let $\boldsymbol{\lambda}=[\boldsymbol{\xi}^{\mathrm{T}} \boldsymbol{\zeta}^{\mathrm{T}}]^{\mathrm{T}}$ with $\boldsymbol{\xi}, \boldsymbol{\zeta}\in\mathbb{R}^{2Ed}$. If we initialize $\boldsymbol{\lambda}$ so that $\boldsymbol{\xi}(0)=-\boldsymbol{\zeta}(0)$, then $\boldsymbol{\xi}(t)=-\boldsymbol{\zeta}(t), \forall t\geq 0$. Thus, we can recover the complete dual variable $\boldsymbol{\lambda}$ by tracking only $\boldsymbol{\xi}$. Let \begin{equation}\nonumber \small L_{+} := \frac{1}{2}(A_1+A_2)^\mathrm{T}(A_1+A_2), L_{-} := \frac{1}{2}(A_1-A_2)^\mathrm{T}(A_1-A_2). \end{equation} Define a new dual variable $\boldsymbol{\gamma}:=(A_1-A_2)^\mathrm{T}\boldsymbol{\xi}\in\mathbb{R}^{nd}$. Through the simplification process in \cite{shi2014linear}, we obtain the fully distributed ADMM for solving Problem~\ref{problem_1}, which is composed of the following iterations: \begin{alignat}{2} \small \nabla J(\mathbf{w}(t+1)) +\boldsymbol{\gamma}(t) + \beta(L_{+}+L_{-})\mathbf{w}(t+1) - \beta L_{+}\mathbf{w}(t)& = 0, \nonumber \\ \small \boldsymbol{\gamma}(t+1) - \boldsymbol{\gamma}(t) - \beta L_{-}\mathbf{w}(t+1)& = 0.
\nonumber \end{alignat} Note that $\boldsymbol{\gamma}$ is also a compact vector of all local dual variables $\boldsymbol{\gamma}_i\in\mathbb{R}^{d}$ for $s_i\in\mathcal{S}$, i.e., $\boldsymbol{\gamma}=[\boldsymbol{\gamma}_1^\mathrm{T} \cdots \boldsymbol{\gamma}_n^\mathrm{T}]^\mathrm{T}$. The above ADMM iterations can be separated into $n$ different parts, which are solved by the $n$ servers in parallel. At iteration $t+1$, the information used by server $s_i$ to update the new primal variable $\mathbf{w}_i(t+1)$ includes users' data $\mathbf{d}_{i,j}, \forall j$, the current classifiers $\left\{\mathbf{w}_l(t)\;|\; l\in\mathcal{N}_i\bigcup\{i\}\right\}$ and the dual variable $\boldsymbol{\gamma}_i(t)$. The local augmented Lagrange function $\mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))$ associated with the primal variable update is given by \begin{equation}\nonumber \small \begin{split} & \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}},\boldsymbol{\gamma}_i(t)) \\ & :=J_i(\mathbf{w}_i) +\boldsymbol{\gamma}_i^{\mathrm{T}}(t)\mathbf{w}_i +\beta\sum_{l\in\mathcal{N}_i}\left\|\mathbf{w}_i-\frac{1}{2}(\mathbf{w}_i(t)+\mathbf{w}_l(t))\right\|_2^2. \end{split} \end{equation} At each iteration, server $s_i$ updates its primal variable $\mathbf{w}_i(t+1)$ and dual variable $\boldsymbol{\gamma}_i(t+1)$ as follows: \begin{alignat}{2} \small \mathbf{w}_i(t+1)& = \arg\min_{\mathbf{w}_i} \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t)), \label{new_primal_local} \\ \small \boldsymbol{\gamma}_i(t+1)& = \boldsymbol{\gamma}_i(t) + \beta \sum_{l\in\mathcal{N}_i}\left(\mathbf{w}_i(t+1) - \mathbf{w}_l(t+1)\right). \label{new_dual_local} \end{alignat} Clearly, in (\ref{new_primal_local}) and (\ref{new_dual_local}), the only information communicated between computing servers is the newly updated classifiers.
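A toy run of the local iterations (\ref{new_primal_local}) and (\ref{new_dual_local}) can make the update structure concrete. The sketch below assumes a hypothetical quadratic local objective $J_i(w)=\frac{1}{2}(w-b_i)^2$ with $d=1$ so that the primal argmin has a closed form; this stands in for the paper's classification objective (\ref{local_objective}), purely for illustration.

```python
beta = 1.0
b = [1.0, 2.0, 3.0]                       # local data summaries of s_1, s_2, s_3
neighbors = {0: [1], 1: [0, 2], 2: [1]}   # path graph s_1 - s_2 - s_3

w = [0.0] * 3
gamma = [0.0] * 3
for t in range(500):
    w_new = []
    for i, nbrs in neighbors.items():
        # Setting the gradient of L_i to zero:
        # (w - b_i) + gamma_i + 2*beta*sum_l (w - (w_i(t) + w_l(t))/2) = 0
        num = b[i] - gamma[i] + beta * sum(w[i] + w[l] for l in nbrs)
        w_new.append(num / (1.0 + 2.0 * beta * len(nbrs)))
    gamma = [gamma[i] + beta * sum(w_new[i] - w_new[l] for l in nbrs)
             for i, nbrs in neighbors.items()]
    w = w_new

print([round(v, 3) for v in w])   # entries near the consensus value 2.0 (mean of b)
```

With separable strongly convex objectives, the iterates of all servers converge to a common minimizer of the sum, here the mean of the $b_i$, even though each server only ever communicates classifiers with its neighbors.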
\subsection{Privacy-preserving Problem} In this subsection, we introduce the privacy-preserving problem in the DML framework. The private information to be preserved is first defined, followed by the introduction of the privacy violators and the information used for privacy inference. Further, we present the objectives of the two phases. \textbf{Private information}. For users, both the feature vectors and the labels of the data samples contain sensitive information. The private information contained in the feature vectors may include the ID, gender, general health data and so on. The labels, in contrast, may indicate, for example, whether a patient has contracted a disease (e.g., HIV-1 infection) or whether a user has a special identity (e.g., membership in a certain group). Thus, compared with the feature vectors, the labels may be more sensitive for the users. In this paper, we consider the labels of users' data to be the most sensitive information, which should be protected with priority and receive a stronger privacy guarantee than the feature vectors. \textbf{Privacy attacks}. All computing servers are viewed as untrustworthy potential privacy violators desiring to infer the sensitive information contained in users' data. Meanwhile, users place different degrees of trust in different servers. User $j\in\mathcal{U}_i$ divides the potential privacy violators into two types. The server $s_i$, collecting user $j$'s data directly, is the first type. Other servers $s_l\in\mathcal{S}, s_l\neq s_i$, having no direct connection with user $j$, are the second type. Compared with server $s_i$, these other servers are less trustworthy for user $j$. To conduct privacy inference, the first type of privacy violator leverages user $j$'s reported data, while the second type can utilize only the intermediate information shared by the servers. \textbf{Privacy protections in Phases 1 \& 2}.
Since the label of user $j\in\mathcal{U}_i$ is the most sensitive information, its original value should not be disclosed to any server, including server $s_i$. Thus, during the data reporting process in Phase~1, user $j$ must obfuscate the private label on his/her local device. For the less sensitive feature vector, considering that server $s_i$ is more trustworthy, user $j$ can choose to transmit the original version to that server. Nevertheless, the user is still unwilling to disclose the raw feature vector to servers with lower trust degrees. Hence, in this paper, when server $s_i$ interacts with other servers to find a common classifier in Phase~2, the released information about user $j$'s data will be further processed before communication. More specifically, in Phase 1, to obfuscate the labels, we use a local randomization approach, whose privacy-preserving property will be measured by local differential privacy (LDP) \cite{erlingsson2014rappor}. LDP is developed from differential privacy (DP), which was originally defined for settings where a trusted database publishes aggregated private information \cite{dwork2008differential}. The privacy preservation idea of DP is that for any two neighbor databases differing in one record (e.g., one user chooses to report or not to report his/her data to the server) as input, a randomized mechanism guarantees that the two outputs are statistically similar, so that privacy violators cannot identify the differing record with high confidence. Since there is no trusted server for data collection in our setting, users locally perturb their original labels and report the noisy versions to the servers. To this end, we define a randomized mechanism $M: \mathbb{R}^{d+1}\rightarrow \mathbb{R}^{d+1}$, which takes a data sample as input and outputs its noisy version. The definition of LDP is given as follows. \begin{definition}\label{LDP_def} ($\epsilon$-LDP).
Given $\epsilon>0$, a randomized mechanism $M(\cdot)$ preserves $\epsilon$-LDP if for any two data samples $\mathbf{d}_1=(\mathbf{x}_1, y_1)$ and $\mathbf{d}_2=(\mathbf{x}_2, y_2)$ satisfying $\mathbf{x}_2=\mathbf{x}_1$ and $y_2=-y_1$, and any observation set $\mathcal{O}\subseteq \textrm{Range}(M)$, it holds that \begin{equation}\label{eq_ldp} \small \Pr[M(\mathbf{d}_1)\in \mathcal{O}] \leq e^{\epsilon}\Pr[M(\mathbf{d}_2)\in \mathcal{O}]. \end{equation} \end{definition} In (\ref{eq_ldp}), the parameter $\epsilon$ is called the privacy preserving degree (PPD), which describes the strength of the privacy guarantee of $M(\cdot)$. A smaller $\epsilon$ implies a stronger privacy guarantee, since a smaller $\epsilon$ forces the two outputs $M(\mathbf{d}_1)$ and $M(\mathbf{d}_2)$ to be closer, making it more difficult for privacy violators to infer the difference between $\mathbf{d}_1$ and $\mathbf{d}_2$ (i.e., between $y_1$ and $y_2$). \subsection{System Overview} In this paper, we propose the PDML framework, where users can obtain heterogeneous privacy protection. The heterogeneity is characterized by two aspects: i) When a user faces a privacy violator, his/her data pieces with distinct sensitivity levels (i.e., the feature vector and the label) obtain different privacy guarantees; ii) For one type of private data piece, the privacy protection provided by the framework is stronger against privacy violators with low trust degrees than against those with higher trust degrees. In particular, in our approach, the privacy preservation strength of users' labels is controlled by the users themselves. Moreover, a modified ADMM algorithm is proposed to meet the heterogeneous privacy protection requirement. The workflow of the proposed PDML framework is illustrated in Fig. \ref{workflow}. Some details are explained below. \begin{enumerate} \item In Phase 1, a user first appropriately randomizes the private label, and then sends the noisy label and the original feature vector to a computing server.
The randomization approach used here determines the PPD of the label. \item In Phase 2, multiple computing servers collaboratively train a common classifier based on their collected data. To protect the privacy of feature vectors against less trustworthy servers, we further use a combined noise-adding method to perturb the ADMM algorithm, which also strengthens the privacy guarantee of users' labels. \item The performance of the trained classifiers is analyzed in terms of their generalization errors. To decompose the effects of the uncertainties introduced in the two phases, we modify the loss function in Problem~\ref{problem_1}. We finally quantify the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.5]{figure/workflow_new.pdf} \caption{{\small Workflow of the PDML framework}}\label{workflow} \end{figure} \section{Privacy-Preserving Framework Design} \label{pp_framework} In this section, we introduce the privacy-preserving approaches used in Phases 1 and 2, and analyze their properties. \subsection{Privacy-Preserving Approach in Phase 1} In this subsection, we propose an approach for Phase~1 that provides privacy preservation for the most sensitive labels. In particular, it is controlled by the users and will not be weakened in Phase~2. We adopt the idea of randomized response (RR) \cite{erlingsson2014rappor} to obfuscate the users' labels. Originally, RR was used to provide plausible deniability to respondents answering survey questions about sensitive topics (e.g., HIV-1 infected or uninfected). Under RR, respondents answer truthfully only with a certain probability, so the server cannot determine with certainty whether a reported answer is true. In our setting, user $j\in\mathcal{U}_i$ randomizes the label through RR and sends the noisy version to server $s_i$.
This is done by the randomized mechanism $M$ defined below. \begin{definition}\label{randomized_M} For $p\in(0,\frac{1}{2})$, the randomized mechanism $M$ with input data sample $\mathbf{d}_{i, j}=(\mathbf{x}_{i, j}, y_{i, j})$ is given by $M(\mathbf{d}_{i, j})=(\mathbf{x}_{i, j}, y'_{i, j})$, where \begin{equation} \label{randomization} \small y'_{i, j}= \begin{cases} 1, &\text{with probability $p$} \\ -1, &\text{with probability $p$} \\ y_{i, j}, &\text{with probability $1-2p$}. \end{cases} \end{equation} \end{definition} In the above definition, $p$ is the randomization probability controlling the level of data obfuscation. Obviously, a larger $p$ implies higher uncertainty on the reported label, making it harder for the server to learn the true label. Denote the output $M(\mathbf{d}_{i, j})$ as $\mathbf{d}'_{i, j}$, i.e., $\mathbf{d}'_{i, j}=M(\mathbf{d}_{i, j})=(\mathbf{x}_{i, j}, y'_{i, j})$. After the randomization, $\mathbf{d}'_{i, j}$ will be transmitted to the server. In this case, server $s_i$ can use only $\mathbf{d}'_{i, j}$ to train the classifier, and the released information about the true label $y_{i, j}$ in Phase~2 is computed based on $\mathbf{d}'_{i, j}$. This implies that once $\mathbf{d}'_{i, j}$ is reported to the server, no more information about the true label $y_{i, j}$ will be released. In this paper, we set the randomization probability $p$ in (\ref{randomization}) as \begin{equation}\label{eq_p} \small p=\frac{1}{1+e^\epsilon}, \end{equation} where $\epsilon>0$. The following proposition gives the privacy-preserving property of the randomized mechanism in Definition~\ref{randomized_M}, justifying this choice of $p$ from the viewpoint of LDP. \begin{proposition}\label{privacy_preservation} Under (\ref{eq_p}), the randomized mechanism $M(\mathbf{d}_{i, j})$ preserves $\epsilon$-LDP for $\mathbf{d}_{i, j}$. \end{proposition} The proof can be found in Appendix \ref{proof_p1}.
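A minimal sketch of the mechanism $M$ of Definition~\ref{randomized_M} under the choice (\ref{eq_p}), together with a direct check of the $\epsilon$-LDP ratio in (\ref{eq_ldp}) for the label component (the feature vector passes through unchanged, so only the label probabilities matter):

```python
import math, random

def randomize_label(y, eps, rng=random):
    """Randomized response on a label y in {-1, 1} with p = 1/(1 + e^eps)."""
    p = 1.0 / (1.0 + math.exp(eps))
    u = rng.random()
    if u < p:
        return 1          # forced to 1 with probability p
    elif u < 2 * p:
        return -1         # forced to -1 with probability p
    return y              # kept with probability 1 - 2p

def label_prob(y_true, y_out, eps):
    """Pr[y' = y_out | y_true]: the output equals y_true w.p. (1-2p)+p = 1-p,
    and equals -y_true w.p. p."""
    p = 1.0 / (1.0 + math.exp(eps))
    return 1.0 - p if y_out == y_true else p

eps = 1.0
worst = max(label_prob(1, o, eps) / label_prob(-1, o, eps) for o in (1, -1))
print(worst <= math.exp(eps) + 1e-12)   # True: the worst-case ratio is exactly e^eps
```

The worst-case likelihood ratio $(1-p)/p$ equals $e^\epsilon$ exactly under (\ref{eq_p}), which is why this choice of $p$ is tight for $\epsilon$-LDP.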
Proposition \ref{privacy_preservation} clearly indicates that the users can tune the randomization probability according to their privacy demands. To see this, note that by (\ref{eq_p}), a given randomization probability $p$ corresponds to the PPD $\epsilon=\ln \frac{1-p}{p}$ provided by $M(\mathbf{d}_{i, j})$. Clearly, a larger randomization probability leads to a smaller PPD, i.e., a stronger privacy guarantee. If all data samples $\mathbf{d}_{i, j}, \forall i, j$, drawn from the distribution $\mathcal{P}$ are randomized through $M$, the noisy data $\mathbf{d}'_{i, j}, \forall i, j$, can be considered to be drawn from a new distribution $\mathcal{P}_{\epsilon}$, which depends on the PPD $\epsilon$. Note that $\mathcal{P}_{\epsilon}$ is also an unknown distribution due to the unknown $\mathcal{P}$. \subsection{Privacy-Preserving Approach in Phase 2} \label{ppa_2} To deal with less trustworthy servers in Phase~2, we devise a combined noise-adding approach to simultaneously preserve the privacy of users' feature vectors and enhance the privacy guarantee of users' labels. We first adopt the method of objective function perturbation \cite{chaudhuri2011differentially}. That is, before solving Problem \ref{problem_1}, the servers perturb the objective function $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ with random noises. For server $s_i\in \mathcal{S}$, the perturbed objective function is given by \begin{equation}\label{per_obj} \small \widetilde{J}_i(\mathbf{w}_i):=J_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i, \end{equation} where $J_i(\mathbf{w}_i)$ is the local objective function given in (\ref{local_objective}), and $\boldsymbol{\eta}_i\in\mathbb{R}^{d}$ is a bounded random noise with arbitrary distribution. Let $R$ be the bound of the noises $\boldsymbol{\eta}_i, \forall i$, namely, $\|\boldsymbol{\eta}_i\|_{\infty}\leq R$.
Denote the sum of $\widetilde{J}_i(\mathbf{w}_i)$ as $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^{n} \widetilde{J}_i(\mathbf{w}_i)$. \textbf{Limitation of objective function perturbation}. We remark that in our setting, the objective function perturbation in (\ref{per_obj}) alone is not sufficient to provide a reliable privacy guarantee. This is because each server publishes its current classifier multiple times and each publication utilizes users' reported data. Note that in the more centralized setting of \cite{chaudhuri2011differentially}, the classifier is published only once. More specifically, according to (\ref{new_primal_local}), $\mathbf{w}_i(t+1)$ is the solution to $\nabla \mathcal{L}_i(\mathbf{w}_i, \{\mathbf{w}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))=0$. In this case, it holds that $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))= -\boldsymbol{\gamma}_i(t) +\beta\sum_{l\in\mathcal{N}_i}\left(\mathbf{w}_i(t)+\mathbf{w}_l(t)-2\mathbf{w}_i(t+1)\right)$. As (\ref{new_dual_local}) shows, the dual variable $\boldsymbol{\gamma}_i(t)$ can be deduced from the updated classifiers. Thus, if $s_i$'s neighbor servers have access to $\mathbf{w}_i(t+1)$ and $\left\{\mathbf{w}_l(t)\;|\;l\in\{i\}\bigcup \mathcal{N}_i\right\}$, then they can easily compute $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$. We should highlight that multiple releases of $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ increase the risk of users' privacy disclosure. This can be explained as follows. First, note that $\nabla \widetilde{J}_i(\mathbf{w}_i)=\nabla J_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i$, where $\nabla J_i(\mathbf{w}_i)$ contains users' private information. The goal of the $\boldsymbol{\eta}_i$-perturbation is to prevent other servers from deriving $\nabla J_i(\mathbf{w}_i)$ directly. However, after publishing an updated classifier $\mathbf{w}_i(t+1)$, server $s_i$ releases a new gradient $\nabla \widetilde{J}_i(\cdot)$.
Since the noise $\boldsymbol{\eta}_i$ is fixed over all iterations, each release of $\nabla \widetilde{J}_i(\cdot)$ discloses more information about $\nabla J_i(\cdot)$. In particular, we have $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))-\nabla \widetilde{J}_i(\mathbf{w}_i(t))=\nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))$. That is, the effect of the added noise $\boldsymbol{\eta}_i$ can be cancelled by differencing the gradients of the objective function at different time instants. \textbf{Modified ADMM by primal variable perturbation}. To ensure appropriate privacy preservation in Phase~2, we adopt an extra perturbation method, which obstructs other servers from obtaining the gradient $\nabla J_i(\cdot)$. Specifically, after deriving the classifier $\mathbf{w}_i(t)$, server $s_i$ first perturbs $\mathbf{w}_i(t)$ with a Gaussian noise $\boldsymbol{\theta}_i(t)$ whose variance decays as the iterations proceed, and then sends the noisy version of $\mathbf{w}_i(t)$ to its neighbor servers. This is denoted by $\widetilde{\mathbf{w}}_i(t):=\mathbf{w}_i(t) + \boldsymbol{\theta}_i(t)$, where $\boldsymbol{\theta}_i(t)\sim\mathcal{N}(0, \rho^{t-1}V_i^2I_d)$ with decaying rate $0<\rho<1$. The local augmented Lagrange function associated with the $\boldsymbol{\eta}_i$-perturbed objective function $\widetilde{J}_i(\mathbf{w}_i)$ in (\ref{per_obj}) is given by \begin{equation}\nonumber \small \begin{split} & \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}},\boldsymbol{\gamma}_i(t)) \\ & :=\widetilde{J}_i(\mathbf{w}_i) +\boldsymbol{\gamma}_i^{\mathrm{T}}(t)\mathbf{w}_i +\beta\sum_{l\in\mathcal{N}_i}\left\|\mathbf{w}_i-\frac{1}{2}(\widetilde{\mathbf{w}}_i(t)+\widetilde{\mathbf{w}}_l(t))\right\|_2^2.
\end{split} \end{equation} We then introduce the perturbed version of the ADMM algorithm in (\ref{new_primal_local}) and (\ref{new_dual_local}) as \begin{alignat}{2} \small \mathbf{w}_i(t+1)& = \arg\min_{\mathbf{w}_i} \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t)), \label{wi_update} \\ \small \widetilde{\mathbf{w}}_i(t+1)& = \mathbf{w}_i(t+1) + \boldsymbol{\theta}_i(t+1), \label{wi_perturb}\\ \small \boldsymbol{\gamma}_i(t+1)& = \boldsymbol{\gamma}_i(t) + \beta \sum_{l\in\mathcal{N}_i}\left(\widetilde{\mathbf{w}}_i(t+1) - \widetilde{\mathbf{w}}_l(t+1)\right). \label{gammai_update} \end{alignat} At iteration $t+1$, a new classifier $\mathbf{w}_i(t+1)$ is first obtained by solving $\nabla \tilde{\mathcal{L}}_i(\mathbf{w}_i, \{\widetilde{\mathbf{w}}_l(t)\}_{l\in \mathcal{N}_i\bigcup\{i\}}, \boldsymbol{\gamma}_i(t))=0$. Then, server $s_i$ sends $\widetilde{\mathbf{w}}_i(t+1)$ out and waits for the updated classifiers from its neighbor servers. At the end of the iteration, the server updates the dual variable $\boldsymbol{\gamma}_i(t+1)$. \subsection{Discussions} \label{discussion_privacy} We now discuss the effectiveness of the primal variable perturbation. We emphasize that at each iteration, $s_i$ releases only a small amount of information about $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ through the communicated $\widetilde{\mathbf{w}}_i(t+1)$. Although $\boldsymbol{\gamma}_i(t)$ and $\left\{\widetilde{\mathbf{w}}_l(t)\;|\;l\in\{i\}\bigcup \mathcal{N}_i\right\}$ are known to $s_i$'s neighbors, $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ cannot be directly computed due to the unknown $\boldsymbol{\theta}_i(t+1)$.
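To make the perturbed updates (\ref{wi_update})--(\ref{gammai_update}) concrete, the sketch below runs them on the same hypothetical quadratic objective $J_i(w)=\frac{1}{2}(w-b_i)^2$ with $d=1$ (our illustrative stand-in, not the paper's classification objective): $\boldsymbol{\eta}_i$ perturbs the objective as in (\ref{per_obj}), and a Gaussian noise with variance $\rho^{t-1}V_i^2$ perturbs each communicated classifier.

```python
import random

random.seed(0)
beta, rho, V, n = 1.0, 0.5, 0.1, 3
b = [1.0, 2.0, 3.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
# Bounded objective-perturbation noise, here ||eta_i||_inf <= R = 0.05.
eta = [random.uniform(-0.05, 0.05) for _ in range(n)]

w = [0.0] * n
w_tilde = list(w)
gamma = [0.0] * n
for t in range(1, 201):
    w_new = []
    for i, nbrs in neighbors.items():
        # Setting the gradient of the perturbed local Lagrangian to zero:
        # (w - b_i) + eta_i/n + gamma_i
        #           + 2*beta*sum_l (w - (w~_i(t) + w~_l(t))/2) = 0
        num = b[i] - eta[i] / n - gamma[i] \
              + beta * sum(w_tilde[i] + w_tilde[l] for l in nbrs)
        w_new.append(num / (1.0 + 2.0 * beta * len(nbrs)))
    sigma = (rho ** (t - 1)) ** 0.5 * V          # std of theta_i(t)
    w_tilde = [wi + random.gauss(0.0, sigma) for wi in w_new]   # primal perturbation
    gamma = [gamma[i] + beta * sum(w_tilde[i] - w_tilde[l] for l in nbrs)
             for i, nbrs in neighbors.items()]                  # dual update
    w = w_new

print([round(v, 2) for v in w])   # near consensus around 2.0, slightly offset by eta
```

Because the variance of the communicated noise decays geometrically, the iterates still reach (approximate) consensus, while early iterations, where the privacy risk of gradient differencing is highest, are the most heavily perturbed.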
More specifically, observe that by (\ref{wi_update}), we have $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))= -\boldsymbol{\gamma}_i(t)+\beta\sum_{l\in\mathcal{N}_i}\left(\widetilde{\mathbf{w}}_i(t)+\widetilde{\mathbf{w}}_l(t)\right) -2\beta N_i(\widetilde{\mathbf{w}}_i(t+1) -\boldsymbol{\theta}_i(t+1))$, where $N_i$ is the degree of $s_i$. On the other hand, using the available information, other servers can compute only $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))$, i.e., the gradient with respect to the perturbed classifier $\widetilde{\mathbf{w}}_i(t+1)$. We have $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))=-\boldsymbol{\gamma}_i(t)-\beta\sum_{l\in\mathcal{N}_i}\left[2\widetilde{\mathbf{w}}_i(t+1)-(\widetilde{\mathbf{w}}_i(t)+ \widetilde{\mathbf{w}}_l(t))\right]$. Thus, we obtain $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))-\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t))= \nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))-2\beta N_i(\boldsymbol{\theta}_i(t+1)-\boldsymbol{\theta}_i(t))$. Hence, due to $\boldsymbol{\theta}_i$, differencing the gradients of the objective function at different iterations is no longer helpful for inferring $\nabla J_i(\cdot)$. We should also observe that since the variance of $\boldsymbol{\theta}_i(t)$ decays to zero, i.e., $\lim_{t\rightarrow\infty} \boldsymbol{\theta}_i(t+1)=0$, $\nabla \widetilde{J}_i(\mathbf{w}_i(t+1))$ can be derived as $t\rightarrow\infty$. Moreover, the relation $\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t+1))-\nabla \widetilde{J}_i(\widetilde{\mathbf{w}}_i(t))= \nabla J_i(\mathbf{w}_i(t+1))-\nabla J_i(\mathbf{w}_i(t))$ then holds as $t\rightarrow\infty$. However, $\nabla \widetilde{J}_i(\cdot)$ is the result of $\nabla J_i(\cdot)$ under the $\boldsymbol{\eta}_i$-perturbation. Moreover, due to the local consensus constraint (\ref{constraint}), the trained classifiers $\mathbf{w}_i(t)$ may not differ significantly as $t\rightarrow\infty$.
Such limited information is not sufficient for privacy violators to infer $\nabla J_i(\cdot)$ with high confidence. \textbf{Differential privacy analysis}. We remark that in our scheme, the noise $\boldsymbol{\eta}_i$ added to the objective function provides the underlying privacy protection in Phase~2. Even if privacy violators make inferences with the published $\widetilde{\mathbf{w}}_i$ of all iterations, the disclosed information is users' reported data plus extra noise perturbation. If the objective function perturbation were removed, the primal variable perturbation method alone could not provide a DP guarantee as $t\rightarrow\infty$. It is proved in \cite{zhang2017dynamic} and \cite{ding2019optimal} that the $\mathbf{w}_i$-perturbation in (\ref{wi_perturb}) preserves dynamic DP. According to the composition theorem of DP \cite{dwork2008differential}, the PPD will increase (indicating a weaker privacy guarantee) when other servers obtain the perturbed classifiers $\widetilde{\mathbf{w}}_i$ of multiple iterations. In particular, if the perturbed classifiers of all iterations are used for inference, the PPD will be $\infty$, implying no privacy guarantee at all. \begin{remark} The objective function perturbation given in (\ref{per_obj}) preserves the so-called $(\epsilon_p, \delta_p)$-DP \cite{he2017differential}. Also, according to \cite{chaudhuri2011differentially}, the perturbation in (\ref{per_obj}) preserves $\epsilon_2$-DP if $\boldsymbol{\eta}_i$ has density $f(\boldsymbol{\eta}_i)=\frac{1}{\nu}e^{-\epsilon_2\|\boldsymbol{\eta}_i\|}$ with normalizing parameter $\nu$. Note that noise with this density is unbounded, which is inconsistent with our setting. Although we use a bounded noise, this kind of perturbation still provides an $(\epsilon_p, \delta_p)$-DP guarantee, which is a relaxed form of pure $\epsilon_p$-DP. \end{remark} \textbf{Strengthened privacy guarantee}. For users' labels, the privacy guarantee in Phase~2 is stronger than that of Phase~1.
Since differential privacy is immune to post-processing \cite{dwork2008differential}, the PPD $\epsilon$ of Phase~1 will not increase during the iterations of the ADMM algorithm executed in Phase~2. However, such immunity relies on the strong assumption that the capability of privacy violators is unlimited. In our problem, this assumption would be satisfied only if all servers could access user $j$'s reported data $\mathbf{d}'_{i, j}$, which may not be realistic. Hence, in our problem setting, one server (i.e., server $s_i$) obtains $\mathbf{d}'_{i, j}$ while the other servers can access only the classifiers trained with users' reported data. \begin{remark} The $(\epsilon_p, \delta_p)$-DP guarantee is provided for users' feature vectors. Thus, in Phase~2, the sensitive information in those vectors is not disclosed much to the servers with lower trust degrees. The labels, in turn, obtain extra $(\epsilon_p, \delta_p)$-DP preservation in Phase~2. Since the privacy-preserving scheme in Phase~1 preserves $\epsilon$-LDP for the labels, the information about them released in Phase~2 enjoys a stronger privacy guarantee under the joint effect of the $\epsilon$-LDP of Phase~1 and the $(\epsilon_p, \delta_p)$-DP of Phase~2. We will investigate the joint privacy-preserving degree in future work. \end{remark} \section{Performance Analysis} \label{performance_ana} In this section, we analyze the performance of the classifiers trained by the proposed PDML framework. Note that three different uncertainties are introduced into the ADMM algorithm, and these uncertainties are coupled together. The difficulty in analyzing the performance lies in decomposing the effects of the three uncertainties and quantifying the role of each. Further, it is also challenging to mitigate the effects of these perturbations on the trained classifiers, especially the influence of users' flipped labels.
Here, we first give the definition of the generalization error as the metric for the performance of the trained classifiers. Then, we establish a modified version of the loss function $\ell(\cdot)$, which simultaneously achieves uncertainty decomposition and mitigation of the label obfuscation. We finally derive a theoretical bound on the difference between the generalization error of the trained classifiers and that of the ideal optimal classifier. \subsection{Performance Metric} To measure the quality of the trained classifiers, we use the generalization error, which describes the expected error of a classifier on future predictions \cite{shalev2008svm}. Recall that users' data samples are drawn from the unknown distribution $\mathcal{P}$. The generalization error of a classifier $\mathbf{w}$ is defined as the expectation of $\mathbf{w}$'s loss with respect to $\mathcal{P}$, i.e., $\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}} \left[\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})\right]$. Further, define the regularized generalization error by \begin{equation}\label{general_error} \small J_{\mathcal{P}}(\mathbf{w}):=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}} \left[\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})\right]+\frac{a}{n} N(\mathbf{w}). \end{equation} We denote the classifier minimizing $J_{\mathcal{P}}(\mathbf{w})$ as $\mathbf{w}^\star$, i.e., $\mathbf{w}^\star:=\arg\min_{\mathbf{w}\in\mathcal{W}} J_{\mathcal{P}}(\mathbf{w})$, and call $\mathbf{w}^\star$ the ideal optimal classifier. Here, $J_{\mathcal{P}}(\mathbf{w}^\star)$ is the reference regularized generalization error under the classifier class $\mathcal{W}$ and the chosen loss function $\ell(\cdot)$. A trained classifier can be viewed as a good predictor if it achieves a generalization error close to $J_{\mathcal{P}}(\mathbf{w}^\star)$. Thus, as the performance metric of the classifiers, we use the difference between the generalization error of the trained classifiers and $J_{\mathcal{P}}(\mathbf{w}^\star)$.
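The regularized generalization error $J_{\mathcal{P}}(\mathbf{w})$ can be estimated by Monte Carlo whenever samples from $\mathcal{P}$ are available. The sketch below uses an assumed synthetic distribution chosen by us purely for illustration: $x\sim\mathcal{N}(0,1)$, $y=\operatorname{sign}(x)$, the logistic loss, $N(w)=\frac{1}{2}w^2$, and $d=1$.

```python
import math, random

random.seed(1)

def gen_error(w, a=0.1, n=3, samples=20000):
    """Monte Carlo estimate of E_{(x,y)~P}[l(y, w*x)] + (a/n) * w^2 / 2
    for the synthetic P described above (logistic loss, d = 1)."""
    total = 0.0
    for _ in range(samples):
        x = random.gauss(0.0, 1.0)
        y = 1 if x >= 0 else -1
        total += math.log(1.0 + math.exp(-y * w * x))
    return total / samples + (a / n) * 0.5 * w * w

# On this separable toy distribution, a larger positive w fits better
# than the trivial classifier w = 0 (whose loss is exactly ln 2).
print(gen_error(0.0) > gen_error(1.0))   # True
```

Such an estimate is only a stand-in: in the paper's setting $\mathcal{P}$ is unknown, which is exactly why the analysis bounds $J_{\mathcal{P}}$ of the trained classifiers against $J_{\mathcal{P}}(\mathbf{w}^\star)$ rather than computing either directly.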
The difference is denoted as $\Delta J_{\mathcal{P}}(\mathbf{w})$, that is, $\Delta J_{\mathcal{P}}(\mathbf{w}):=J_{\mathcal{P}}(\mathbf{w})-J_{\mathcal{P}}(\mathbf{w}^\star)$. Furthermore, to measure the performance of the classifiers trained by different servers at multiple iterations, we introduce a comprehensive metric. First, considering that the classifiers $\mathbf{w}_i$ solved by server $s_i$ at different iterations may differ until the consensus constraint (\ref{constraint}) is satisfied, we define a classifier $\overline{\mathbf{w}}_i(t)$ aggregating $\mathbf{w}_i$ over the first $t$ rounds as $\overline{\mathbf{w}}_i(t):=\frac{1}{t} \sum_{k=1}^{t} \mathbf{w}_i(k)$, where $\mathbf{w}_i(k)$ is the classifier obtained by solving (\ref{wi_update}). Moreover, due to the diversity of users' reported data, the classifiers solved by different servers may also differ (especially in the initial iterations). For this reason, we will later study the accumulated difference among the $n$ servers, that is, $\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$. \subsection{Modified Loss Function in ADMM Algorithm} \label{sub_b} To mitigate the effect of the label obfuscation executed in Phase~1, we modify the loss function $\ell(\cdot)$ in Problem~\ref{problem_1}. Specifically, we use the noisy labels and the corresponding PPD $\epsilon$ of Phase~1 to adjust the loss function $\ell(\cdot)$ in (\ref{minimization_pro}). (Note that the other parts of Problem~\ref{problem_1} are not affected by the noisy labels.) Define the modified loss function $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ by \vspace{-0.3cm} \begin{equation}\label{modified_loss} \small \hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon):=\frac{e^\epsilon\ell(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})-\ell(-y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})}{e^\epsilon-1}.
\end{equation} This function has the following properties. \begin{proposition} \label{unbiased_loss} \begin{enumerate}[itemindent=0.3em, label=(\roman*),labelsep=0.3em] \item $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ is an unbiased estimate of $\ell(y_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j})$, i.e., \begin{equation}\label{unbiased_estimator} \small \mathbb{E}_{y'_{i, j}}\left[\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right]=\ell(y_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}). \end{equation} \item $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ is Lipschitz continuous with Lipschitz constant \begin{equation}\label{lipschitz} \small \hat{c}_2:= \frac{e^{\epsilon}+1}{e^{\epsilon}-1}c_2, \end{equation} where $c_2$ is the bound of $\left|\frac{\partial\ell(\cdot)}{\partial \mathbf{w}_i}\right|$ given in Assumption \ref{loss_assumption}. \end{enumerate} \end{proposition} The proof can be found in Appendix~\ref{proof_l1}. Now, we let server $s_i$ use $\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)$ in (\ref{modified_loss}) as the loss function. Accordingly, the objective function in (\ref{local_objective}) is replaced by \begin{equation}\label{modified_objective} \small \widehat{J}_i(\mathbf{w}_i):=\sum_{j=1}^{m_i}\frac{1}{m_i}\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)+\frac{a}{n}N(\mathbf{w}_i). \end{equation} Similar to $J(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ in (\ref{original_objective}), we denote the objective function with the modified loss function as $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}):=\sum_{i=1}^{n} \widehat{J}_i(\mathbf{w}_i)$. Then, Lemma~\ref{objective_convex} below holds, whose proof can be found in Appendix \ref{proof_l2}.
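Both properties in Proposition~\ref{unbiased_loss} can be verified numerically. The sketch below assumes that Phase~1 applies binary randomized response, keeping each label with probability $e^{\epsilon}/(1+e^{\epsilon})$ (an assumption consistent with $\epsilon$-DP; the actual mechanism is defined earlier in the paper), and uses the logistic loss, for which $c_2=1$.

```python
import numpy as np

def loss(y, z):
    # logistic loss as a concrete ell(.), with |d ell / d z| <= 1, i.e. c2 = 1
    return np.log1p(np.exp(-y * z))

def mod_loss(y_prime, z, eps):
    # hat-ell from the modified-loss definition: debiases the randomized labels
    return (np.exp(eps) * loss(y_prime, z) - loss(-y_prime, z)) / (np.exp(eps) - 1.0)

eps, z, y = 0.8, 0.37, 1.0
# ASSUMED Phase-1 mechanism: keep the label w.p. e^eps / (1 + e^eps)
p_keep = np.exp(eps) / (1.0 + np.exp(eps))

# property (i): E_{y'}[hat-ell(y', z, eps)] recovers the clean loss exactly
expected = p_keep * mod_loss(y, z, eps) + (1 - p_keep) * mod_loss(-y, z, eps)
assert abs(expected - loss(y, z)) < 1e-12

# property (ii): the slope of hat-ell never exceeds hat-c2 = (e^eps+1)/(e^eps-1) * c2
zs = np.linspace(-5.0, 5.0, 2001)
slopes = np.gradient(mod_loss(1.0, zs, eps), zs)
c2_hat = (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)  # c2 = 1 for logistic loss
assert np.max(np.abs(slopes)) <= c2_hat + 1e-6
```

The unbiasedness check works out exactly because the coefficient of $\ell(-y,\cdot)$ in the expectation cancels, mirroring the algebra behind (\ref{unbiased_estimator}).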
\begin{lemma}\label{objective_convex} If the loss function $\ell(\cdot)$ and the regularizer $N(\cdot)$ satisfy Assumptions \ref{loss_assumption} and \ref{regularizer_assumption}, respectively, then $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ is $a\kappa$-strongly convex. \end{lemma} To simplify the notation, let $\hat{\kappa}:=a\kappa$. With the objective function $\widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$, the whole optimization problem for finding a common classifier can be stated as follows: \begin{problem} \label{problem_2} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}} & \quad \widehat{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}) \nonumber \\ \mathrm{s.t.}& \quad \mathbf{w}_i=\mathbf{w}_l, \forall i, l. \nonumber \end{alignat} \end{small} \end{problem} \begin{lemma} \label{modified_solution} Problem~\ref{problem_2} has an optimal solution set $\{\widehat{\mathbf{w}}_i\}_{i\in\mathcal{S}}\subset\mathcal{W}$ such that $\widehat{\mathbf{w}}_\mathrm{opt} = \widehat{\mathbf{w}}_i = \widehat{\mathbf{w}}_l, \forall i, l$. \end{lemma} Lemma~\ref{modified_solution} can be proved directly from Lemma~1 in \cite{forero2010consensus}, whose condition is satisfied by Lemma~\ref{objective_convex}. We are now ready to state the optimization problem actually solved in this paper. To this end, for the modified objective function in (\ref{modified_objective}), we define the perturbed version as in (\ref{per_obj}) by $\widetilde{J}_i(\mathbf{w}_i):=\widehat{J}_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i$. Then, the whole objective function becomes \begin{equation}\nonumber \small \widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})=\sum_{i=1}^{n} \left[ \widehat{J}_i(\mathbf{w}_i)+\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i\right].
\end{equation} The problem for finding the classifier with randomized labels and perturbed objective functions is as follows: \begin{problem} \label{problem_3} \begin{small} \begin{alignat}{2} \min_{\{\mathbf{w}_i\}} & \quad \widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}}) \nonumber \\ \mathrm{s.t.}& \quad \mathbf{w}_i=\mathbf{w}_l, \forall i, l. \nonumber \end{alignat} \end{small} \end{problem} For $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$, we have the following lemma showing its convexity properties. \begin{lemma}\label{J_tildle_convex} $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ is $\hat{\kappa}$-strongly convex. If $N(\cdot)$ satisfies $\|\nabla^2 N(\cdot)\|_2\leq \varrho$, then $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ has a $(nc_3+a\varrho)$-Lipschitz continuous gradient, where $c_3$ is the bound of $\frac{\partial^2\ell(\cdot)}{\partial \mathbf{w}^2}$ given in Assumption~\ref{loss_assumption}. \end{lemma} The proof can be found in Appendix~\ref{proof_l4}. For simplicity, we denote the Lipschitz constant of the gradient of $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ by $\varrho_{\widetilde{J}}$, namely, $\varrho_{\widetilde{J}}:=nc_3 + a\varrho$. We now observe that Problem~\ref{problem_3} associated with the objective function $\widetilde{J}(\{\mathbf{w}_i\}_{i\in\mathcal{S}})$ has an optimal solution set $\{\widetilde{\mathbf{w}}_i\}_{i\in\mathcal{S}}\subset \mathcal{W}$ where \begin{equation}\label{optimal_perturbation} \small \widetilde{\mathbf{w}}_\mathrm{opt} = \widetilde{\mathbf{w}}_i = \widetilde{\mathbf{w}}_l, \forall i, l. \end{equation} In fact, this can be shown by an argument similar to Lemma~\ref{modified_solution}, where Lemma~\ref{J_tildle_convex} establishes the convexity of the objective function (as in Lemma~\ref{objective_convex}).
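The strong convexity claimed in Lemma~\ref{J_tildle_convex} is expected because $\widetilde{J}_i$ differs from $\widehat{J}_i$ only by the linear term $\frac{1}{n}\boldsymbol{\eta}_i^{\mathrm{T}}\mathbf{w}_i$, which leaves all second-order behavior untouched. A minimal numerical check of this (the logistic loss, ridge regularizer, and synthetic data below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, a, n = 3, 50, 1.0, 5
X = rng.standard_normal((m, d))
y = rng.choice([-1.0, 1.0], size=m)
eta = rng.standard_normal(d)             # hypothetical noise vector eta_i

def J_hat_i(w):
    # one server's modified objective: average logistic loss + (a/n) * 0.5||w||^2
    return np.log1p(np.exp(-y * (X @ w))).mean() + (a / n) * 0.5 * float(w @ w)

def J_tilde_i(w):
    # perturbed objective: J_hat_i + (1/n) eta^T w (a purely linear shift)
    return J_hat_i(w) + (1.0 / n) * float(eta @ w)

# second-order central differences along a random direction: the linear
# perturbation leaves the curvature (hence strong convexity) unchanged
w0, v, h = rng.standard_normal(d), rng.standard_normal(d), 1e-3
curv = lambda f: (f(w0 + h * v) - 2.0 * f(w0) + f(w0 - h * v)) / h**2
assert abs(curv(J_hat_i) - curv(J_tilde_i)) < 1e-6
assert curv(J_hat_i) > 0.0
```

The positive curvature comes from the ridge term alone in the worst case, matching the role of $N(\cdot)$ in guaranteeing $\hat{\kappa}$-strong convexity.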
\subsection{Generalization Error Analysis} In this subsection, we analyze the accumulated difference between the generalization error of trained classifiers and $J_{\mathcal{P}}(\mathbf{w}^\star)$, i.e., $\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$. For the analysis, we use the technique from \cite{li2017robust}, which considers the problem of ADMM learning in the presence of erroneous updates. Here, our problem is more complicated because, besides the erroneous updates caused by the primal variable perturbation, there is also uncertainty in the training data and the objective functions. All these uncertainties are coupled together, which brings extra challenges for the performance analysis. We first decompose $\Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ in terms of the different uncertainties. To do so, we must introduce a new regularized generalization error associated with the modified loss function $\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)$ and the noisy data distribution $\mathcal{P}_{\epsilon}$. Similar to (\ref{general_error}), for a classifier $\mathbf{w}$, it is defined by \begin{equation}\nonumber \small J_{\mathcal{P}_{\epsilon}}(\mathbf{w}) =\mathbb{E}_{(\mathbf{x},y')\sim\mathcal{P}_{\epsilon}} \left[\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)\right]+\frac{a}{n} N(\mathbf{w}). \end{equation} According to Proposition~\ref{unbiased_loss}, $\hat{\ell}(y', \mathbf{w}^{\mathrm{T}}\mathbf{x}, \epsilon)$ is an unbiased estimate of $\ell(y, \mathbf{w}^{\mathrm{T}}\mathbf{x})$. Thus, it is straightforward to obtain the following lemma, whose proof is omitted. \begin{lemma}\label{equal_error} For a classifier $\mathbf{w}$, we have $J_{\mathcal{P}_{\epsilon}}(\mathbf{w})=J_{\mathcal{P}}(\mathbf{w})$.
\end{lemma} Now, we can decompose $\Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ as follows: \begin{equation}\label{deltaJ_wi} \small \begin{split} & \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t)) =J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))-J_{\mathcal{P}}(\mathbf{w}^\star) \\ & =J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & =\widetilde{J}_i({\overline{\mathbf{w}}_i}(t))-\widetilde{J}_i(\widetilde{\mathbf{w}}_\mathrm{opt}) + \widehat{J}_i(\widetilde{\mathbf{w}}_\mathrm{opt})-\widehat{J}_i(\widehat{\mathbf{w}}_\mathrm{opt}) \\ & \quad +\widehat{J}_i(\widehat{\mathbf{w}}_\mathrm{opt})-\widehat{J}_i({\mathbf{w}^\star}) +J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t)) \\ & \quad +\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) + \boldsymbol{\eta}_i^{\mathrm{T}} (\widetilde{\mathbf{w}}_\mathrm{opt}-\overline{\mathbf{w}}_i(t)). \end{split} \end{equation} We will analyze each term on the far right-hand side of (\ref{deltaJ_wi}). The term $\widetilde{\mathbf{w}}_\mathrm{opt}-\overline{\mathbf{w}}_i(t)$ describes the difference between the classifier $\overline{\mathbf{w}}_i(t)$ and the optimal solution $\widetilde{\mathbf{w}}_\mathrm{opt}$ to Problem~\ref{problem_3}. Before analyzing this difference, we first consider the deviation between the perturbed classifier $\widetilde{\mathbf{w}}_i(t)$ and $\widetilde{\mathbf{w}}_\mathrm{opt}$; a bound for it can be obtained from \cite{li2017robust}. Here, we introduce some notation related to the bound.
Let the compact forms of vectors be $\widetilde{\mathbf{w}}(t):=[\widetilde{\mathbf{w}}_1^\mathrm{T}(t) \cdots \widetilde{\mathbf{w}}_n^\mathrm{T}(t)]^\mathrm{T}$, $\boldsymbol{\theta}(t):=[\boldsymbol{\theta}_1^\mathrm{T}(t)\cdots \boldsymbol{\theta}_n^\mathrm{T}(t)]^\mathrm{T}$, and $\boldsymbol{\eta} := [\boldsymbol{\eta}_1^\mathrm{T} \cdots \boldsymbol{\eta}_n^\mathrm{T}]^\mathrm{T}$. Also, let $\widehat{\mathbf{w}}^{*}:=[I_d \cdots I_d]^\mathrm{T}\cdot\widehat{\mathbf{w}}_\mathrm{opt}$, $\widetilde{\mathbf{w}}^{*}:=[I_d \cdots I_d]^\mathrm{T}\cdot\widetilde{\mathbf{w}}_\mathrm{opt}$, and $\overline{L}:=\frac{1}{2} (L_{+}+L_{-})$. An auxiliary sequence $\mathbf{r}(t)$ is defined as $\mathbf{r}(t) := \sum_{k=0}^{t} Q\widetilde{\mathbf{w}}(k)$ with $Q:=\bigl(\frac{L_{-}}{2}\bigr)^{\frac{1}{2}}$ \cite{makhdoumi2017convergence}. The sequence $\mathbf{r}(t)$ has an optimal value $\mathbf{r}_{\mathrm{opt}}$, which is the solution to the equation $Q\mathbf{r}_{\mathrm{opt}}+\frac{1}{2\beta} \nabla \widetilde{J}(\widetilde{\mathbf{w}}_\mathrm{opt})=0$. Further, we define some important parameters to be used in the next lemma. The first two parameters, $b\in(0,1)$ and $\lambda_1>1$, are related to the underlying network topology $\mathcal{G}$ and will be used to establish the convergence property of the perturbed ADMM algorithm. Let $\varphi := \frac{\lambda_1-1}{\lambda_1}\frac{2\hat{\kappa} \sigma_{\min}^2(Q)\sigma_{\min}^2(L_{+})}{\varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+})+ 2\hat{\kappa} \sigma_{\max}^2(L_{+})}$, where $\sigma_{\max}(\cdot)$ and $\sigma_{\min}(\cdot)$ denote the maximum and minimum nonzero eigenvalues of a matrix, respectively.
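As a concrete reading of these definitions, the following sketch evaluates $\varphi$ on a toy three-node path graph. It assumes $L_{+}=D+A$ and $L_{-}=D-A$, the signless and standard graph Laplacians (defined earlier in the paper, not in this section), and uses purely illustrative values for $\hat{\kappa}$, $\varrho_{\widetilde{J}}$, and $\lambda_1$.

```python
import numpy as np

def min_max_nonzero_eig(M, tol=1e-9):
    # sigma_min(.) and sigma_max(.) in the paper's sense:
    # extreme nonzero eigenvalues of a symmetric PSD matrix
    ev = np.linalg.eigvalsh(M)
    ev = ev[ev > tol]
    return float(ev.min()), float(ev.max())

# Toy 3-node path graph. ASSUMPTION: L_+ = D + A (signless Laplacian) and
# L_- = D - A (standard Laplacian), as used by the decentralized ADMM scheme.
A = np.array([[0.0, 1, 0], [1, 0, 1], [0, 1, 0]])
D = np.diag(A.sum(axis=1))
L_plus, L_minus = D + A, D - A

kappa_hat, rho_J, lam1 = 1.0, 4.0, 2.0        # illustrative parameter values
lm_min, _ = min_max_nonzero_eig(L_minus)
sQ_min2 = lm_min / 2.0                        # sigma_min^2(Q), since Q = (L_-/2)^(1/2)
lp_min, lp_max = min_max_nonzero_eig(L_plus)
sLp_min2, sLp_max2 = lp_min**2, lp_max**2

phi = ((lam1 - 1.0) / lam1) * (2.0 * kappa_hat * sQ_min2 * sLp_min2) / (
      rho_J**2 * sLp_min2 + 2.0 * kappa_hat * sLp_max2)
assert 0.0 < phi < 1.0
```

For this toy graph both Laplacians have nonzero eigenvalues $\{1, 3\}$, so all quantities are available in closed form and the computed $\varphi$ can be checked by hand.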
Also, we define $M_1$ and $M_2$ with constant $\lambda_2>1$ as \begin{equation}\nonumber \small \begin{split} M_1 & := \frac{b(1+\varphi) \sigma_{\min}^2(L_{+}) (1-1/{\lambda_2})}{4b\sigma_{\min}^2(L_{+}) (1-1/{\lambda_2}) + 16\sigma_{\max}^2(\overline{L})}, \\ M_2 & := \frac{(1-b) (1+\varphi)\sigma_{\min}^2(L_{+}) - \sigma_{\max}^2(L_{+})}{4\sigma_{\max}^2(L_{+})+4(1-b)\sigma_{\min}^2(L_{+})}. \end{split} \end{equation} Then, we have the following lemma from \cite{li2017robust}, which gives a bound for $\widetilde{\mathbf{w}}(t)-\widetilde{\mathbf{w}}^{*}$. \begin{lemma}\label{classifier_converge} Suppose that the conditions of Lemma \ref{J_tildle_convex} hold and that the parameters $b$ and $\lambda_1$ are chosen such that \begin{equation}\label{b_delta} \small (1-b)(1+\varphi)\sigma_{\min}^2(L_{+})-\sigma_{\max}^2(L_{+})>0. \end{equation} Moreover, take $\beta$ in (\ref{gammai_update}) as $\beta = \sqrt{\frac{\lambda_1 \lambda_3 (\lambda_4-1)\varrho_{\widetilde{J}}^2}{\lambda_4(\lambda_1-1)\sigma_{\max}^2(L_{+}) \sigma_{\min}^2(Q)}}$, where $\lambda_4:=1+\sqrt{\frac{\varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+}) +2\hat{\kappa} \sigma_{\max}^2(L_{+})}{\alpha \lambda_3 \varrho_{\widetilde{J}}^2 \sigma_{\min}^2(L_{+})}}$ with $0<\alpha< \min\{M_1, M_2\}$, and $\lambda_3 := 1+\frac{2\hat{\kappa} \sigma_{\max}^2(L_{+})}{\varrho_{\widetilde{J}}^2\sigma_{\min}^2(L_{+})}$.
Then, it holds that \begin{equation}\label{classifer_bound} \small \left\|\widetilde{\mathbf{w}}(t)-\widetilde{\mathbf{w}}^{*}\right\|_2^2 \leq C^{t} \left(H_1 + \sum_{k=1}^{t} C^{-k} H_2 \|\boldsymbol{\theta}(k)\|_2^2\right), \end{equation} where $C := \frac{(1+4\alpha)\sigma_{\max}^2(L_{+})}{(1-b)(1+\varphi-4\alpha)\sigma_{\min}^2(L_{+})}$, and $H_1:= \left\|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\right\|_2^2+\frac{4}{(1+4\alpha)\sigma_{\max}^2(L_{+})}\left\|\mathbf{r}(0)- \mathbf{r}_{\mathrm{opt}}\right\|_2^2$, $H_2 := \frac{b(\lambda_2 -1)}{1-b} + \frac{\frac{4\varphi \lambda_1 \sigma_{\max}^2(\overline{L})}{\sigma_{\min}^2(Q)} + \sigma_{\max}^2(L_{+}) \left(\sqrt{\varphi} + \sqrt{\frac{2(\lambda_1-1)\sigma_{\min}^2(Q)}{\alpha \lambda_1 \lambda_3\varrho_{\widetilde{J}}^2}}\right)^2}{(1-b) (1+\varphi) (1+\varphi-4\alpha)\sigma_{\min}^2(L_{+})}$. \end{lemma} Lemma~\ref{classifier_converge} implies that given a connected graph $\mathcal{G}$ and the objective function in Problem~\ref{problem_3}, if the parameters $b$ and $\lambda_1$ satisfy (\ref{b_delta}), then $C$ in (\ref{classifer_bound}) is guaranteed to be less than 1. In this case, the obtained classifiers will converge to a neighborhood of the optimal solution $\widetilde{\mathbf{w}}_{\mathrm{opt}}$, where the radius of the neighborhood is $\lim_{t\rightarrow\infty} \sum_{k=1}^{t} C^{t-k} H_2 \|\boldsymbol{\theta}(k)\|_2^2$. The modified ADMM algorithm can achieve different radii depending on the added noises $\boldsymbol{\theta}(k)$. Since many parameters are involved, meeting condition (\ref{b_delta}) may not be straightforward. In order to make $C$ smaller and thus achieve a better convergence rate, in addition to tuning the parameters, one may change, for example, the graph $\mathcal{G}$ to make the value $\frac{\sigma_{\max}^2(L_{+})}{\sigma_{\min}^2(L_{+})}$ smaller.
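The radius term $\sum_{k=1}^{t} C^{t-k} H_2 \|\boldsymbol{\theta}(k)\|_2^2$ behaves well under the decaying noise variance used later in the analysis ($\mathbb{E}\{\|\boldsymbol{\theta}(k)\|_2^2\}\propto\rho^{k-1}$): when $0<\rho<C<1$, the expected radius is a convolution of two geometric sequences and vanishes as $t$ grows. A short check of this, up to the constant factor $H_2\sum_i dV_i^2$ and with illustrative values of $C$ and $\rho$:

```python
import numpy as np

C, rho = 0.8, 0.5                      # illustrative values with 0 < rho < C < 1

def expected_radius(t):
    # sum_{k=1}^t C^(t-k) * rho^(k-1): the expected radius up to constants
    k = np.arange(1, t + 1)
    return float(np.sum(C ** (t - k) * rho ** (k - 1)))

# closed form of the convolution: C^(t-1) * (1 - (rho/C)^t) / (1 - rho/C)
t = 25
closed = C ** (t - 1) * (1 - (rho / C) ** t) / (1 - rho / C)
assert abs(expected_radius(t) - closed) < 1e-12

# the expected radius shrinks geometrically with t
assert expected_radius(60) < expected_radius(25) < expected_radius(10)
```

This is consistent with the later observation that the perturbation-related terms in the generalization bound decay with the number of iterations.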
Theorem~\ref{delta_Jt} below gives an upper bound of the accumulated difference $\sum_{i=1}^n \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ in the sense of expectation. In the theorem, we employ the important concept of Rademacher complexity \cite{shalev2014understanding}. It is defined on the classifier class $\mathcal{W}$ and the collected data used for training, that is, $\mathrm{Rad}_i(\mathcal{W}):=\frac{1}{m_i} \mathbb{E}_{\nu_j}\left[\sup_{\mathbf{w}\in \mathcal{W}} \sum_{j=1}^{m_i} \nu_j \mathbf{w}^\mathrm{T}\mathbf{x}_{i, j}\right]$, where $\nu_1, \nu_2, \ldots, \nu_{m_i}$ are independent random variables drawn from the Rademacher distribution, i.e., $\Pr (\nu_j=1)=\Pr (\nu_j=-1)=\frac{1}{2}$ for $j=1, 2, \ldots, m_i$. In addition, we use the notation $\|\mathbf{v}\|_{A}^2$ to denote the squared norm of a vector $\mathbf{v}$ weighted by a positive definite matrix $A$, i.e., $\|\mathbf{v}\|_{A}^2=\mathbf{v}^{\mathrm{T}}A\mathbf{v}$. \begin{theorem} \label{delta_Jt} Suppose that the conditions in Lemma~\ref{classifier_converge} are satisfied and the decaying rate of the noise variance is set as $\rho\in (0,C)$.
Then, for $\epsilon>0$ and $\delta\in(0,1)$, the aggregated classifier $\overline{\mathbf{w}}_i(t)$ obtained by the privacy-aware ADMM scheme (\ref{wi_update})-(\ref{gammai_update}) satisfies with probability at least $1-\delta$ \begin{equation}\label{eq_delta_Jt} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))\right\} \leq \frac{1}{2t}\frac{C}{1-C}\left(H_1 + \frac{\rho H_2}{C- \rho}\sum_{i=1}^{n} d V_i^2\right) \\ & + \frac{n}{2}R^2 + \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]\\ & +\frac{1}{n \hat{\kappa}}R^2 + 4 \frac{e^{\epsilon}+1}{e^{\epsilon}-1}\sum_{i=1}^n\left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right), \end{split} \end{equation} where $H_3 =\|\mathbf{r}(0)\|_2^2 + \|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2$, and the parameters $C$, $H_1$, $H_2$ and $\beta$ are found in Lemma \ref{classifier_converge}. \end{theorem} \begin{proof} In what follows, we evaluate the terms on the far right-hand side of (\ref{deltaJ_wi}) by dividing them into three groups. The first group consists of the terms $J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star)$, which we can bound from above as \begin{equation}\nonumber \small \begin{split} & J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i(\overline{\mathbf{w}}_i(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & \leq2\max_{\mathbf{w}\in\mathcal{W}} \left|J_{\mathcal{P}_{\epsilon}}(\mathbf{w})-\widehat{J}_i({\mathbf{w}})\right|.
\end{split} \end{equation} According to Theorem 26.5 in \cite{shalev2014understanding}, with probability at least $1-\delta$, we have \begin{equation}\nonumber \small \begin{split} & \max_{\mathbf{w}\in\mathcal{W}} \left|J_{\mathcal{P}_{\epsilon}}(\mathbf{w})-\widehat{J}_i({\mathbf{w}})\right| \\ & \leq2\mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})+4\left|\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right|\sqrt{\frac{2\ln (4/\delta)}{m_i}}, \end{split} \end{equation} where $\mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})$ is the Rademacher complexity of $\mathcal{W}$ with respect to $\hat{\ell}$. Further, by the contraction lemma in \cite{shalev2014understanding}, \begin{equation}\nonumber \small \mathrm{Rad}_i(\hat{\ell}\circ\mathcal{W})\leq \hat{c}_2\mathrm{Rad}_i(\mathcal{W})=c_2\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \mathrm{Rad}_i(\mathcal{W}), \end{equation} where we have used the Lipschitz constant $\hat{c}_2$ given in Proposition~\ref{unbiased_loss}. Also, from (\ref{modified_loss}), we derive \begin{equation}\nonumber \small \left|\hat{\ell}(y'_{i, j}, \mathbf{w}_i^{\mathrm{T}}\mathbf{x}_{i, j}, \epsilon)\right| \leq \frac{e^{\epsilon}+1}{e^{\epsilon}-1}c_1, \end{equation} where $c_1$ is the bound of the original loss function $\ell(\cdot)$ (Assumption \ref{loss_assumption}). Then, it follows that \begin{equation}\label{rademacher_final} \small \begin{split} & J_{\mathcal{P}_{\epsilon}}(\overline{\mathbf{w}}_i(t))-\widehat{J}_i({\overline{\mathbf{w}}_i}(t))+\widehat{J}_i({\mathbf{w}^\star})-J_{\mathcal{P}_{\epsilon}}(\mathbf{w}^\star) \\ & \leq 4\frac{e^{\epsilon}+1}{e^{\epsilon}-1}\left(c_2\mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{2\ln (4/\delta)}{m_i}}\right). \end{split} \end{equation} The second group in (\ref{deltaJ_wi}) consists of the terms involving $\widetilde{J}_i(\cdot)$ and $\widehat{J}_i(\cdot)$.
In their aggregated forms, by Lemma~\ref{modified_solution}, it holds that \begin{equation}\label{summation_form} \small \begin{split} & \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) + \widehat{J}(\widehat{\mathbf{w}}^{*}) - \widehat{J}({\mathbf{w}^\star}) \\ & \leq \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \\ & \leq \frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))-\widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}), \end{split} \end{equation} where the first inequality uses $\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \widehat{J}({\mathbf{w}^\star})$ (by the optimality of $\widehat{\mathbf{w}}_\mathrm{opt}$) and the second follows from Jensen's inequality, which applies since $\widetilde{J}(\cdot)$ is convex. For the first two terms in (\ref{summation_form}), by Theorem~1 of \cite{li2017robust}, we have \begin{equation}\label{taking_expectation} \small \begin{split} & \frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) \leq \frac{\beta}{t} \left(\|\mathbf{r}(0)\|_2^2 + \|\mathbf{w}(0)-\widetilde{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2\right) \\ & + \frac{\beta}{t} \sum_{k=1}^{t} \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} \|\boldsymbol{\theta}(k)\|_2^2 + 2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right). \end{split} \end{equation} Take the expectation on both sides of (\ref{taking_expectation}) with respect to $\boldsymbol{\theta}(k)$.
Given $\mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\|\boldsymbol{\theta}(k)\|_2^2\right\}= \sum_{i=1}^{n} d V_i^2 \rho^{k-1}$, we derive \begin{equation}\nonumber \small \begin{split} \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right\}& = \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{2 \|Q \boldsymbol{\theta}(k)\|_2^2\right\} \\ & \leq 2\sigma_{\max}^2(Q) \sum_{i=1}^{n} d V_i^2 \rho^{k-1}, \end{split} \end{equation} where we have used $\mathbb{E}\left\{\boldsymbol{\theta}(k)\right\}=0$ and $\mathbb{E}_{\{\boldsymbol{\theta}(k)\}}\left\{\boldsymbol{\theta}(k-1)^{\mathrm{T}} \boldsymbol{\theta}(k)\right\}=0$. Thus, it follows that \begin{equation}\nonumber \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{ \sum_{k=1}^{t} \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} \|\boldsymbol{\theta}(k)\|_2^2 + 2\boldsymbol{\theta}(k)^{\mathrm{T}}Q\mathbf{r}(k)\right)\right\} \\ & \leq \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\sum_{i=1}^{n} d V_i^2 \sum_{k=1}^{t} \rho^{k-1} \\ & \leq \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right) \frac{\sum_{i=1}^{n} d V_i^2}{1-\rho}. \end{split} \end{equation} Then, for (\ref{taking_expectation}), we arrive at \begin{equation}\label{exp_J_tilde} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{ \widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})\right\} \\ & \leq \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]. \end{split} \end{equation} Next, we focus on the latter two terms in (\ref{summation_form}).
Due to (\ref{optimal_perturbation}), we have $\widetilde{J}(\widetilde{\mathbf{w}}^{*})\leq \widetilde{J}(\widehat{\mathbf{w}}^{*})$, which yields \begin{equation}\nonumber \small \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \frac{1}{n} \boldsymbol{\eta}^{\mathrm{T}} (\widetilde{\mathbf{w}}^{*}-\widehat{\mathbf{w}}^{*}) \leq \frac{1}{n} \|\boldsymbol{\eta}\|\|\widetilde{\mathbf{w}}^{*}-\widehat{\mathbf{w}}^{*}\|. \end{equation} By Lemma 7 in \cite{chaudhuri2011differentially}, we obtain $\|\widetilde{\mathbf{w}}^{*}-\widehat{\mathbf{w}}^{*}\| \leq \frac{1}{n} \frac{\|\boldsymbol{\eta}\|}{\hat{\kappa}}$. It follows that \begin{equation}\label{bound_J_hat} \small \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) \leq \frac{1}{\hat{\kappa}} \frac{\|\boldsymbol{\eta}\|^2}{n^2} \leq \frac{1}{n \hat{\kappa}} R^2, \end{equation} where $R$ is the bound of the noise $\boldsymbol{\eta}_i$. Substituting (\ref{exp_J_tilde}) and (\ref{bound_J_hat}) into (\ref{summation_form}), we derive \begin{equation}\label{exp_objective} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \{\widetilde{J}(\overline{\mathbf{w}}(t))- \widetilde{J}(\widetilde{\mathbf{w}}^{*}) + \widehat{J}(\widetilde{\mathbf{w}}^{*})-\widehat{J}(\widehat{\mathbf{w}}^{*}) + \widehat{J}(\widehat{\mathbf{w}}^{*}) - \widehat{J}({\mathbf{w}^\star}) \} \\ & \leq \frac{1}{n \hat{\kappa}} R^2 + \frac{\beta}{t}\left[H_3 + \left(\frac{\sigma_{\max}^2(L_{+})}{2\sigma_{\max}^2(L_{-})} + 2\sigma_{\max}^2(Q)\right)\frac{\sum_{i=1}^n d V_i^2}{1-\rho}\right]. \end{split} \end{equation} The third group in (\ref{deltaJ_wi}) is the term $\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))$.
We have \begin{equation}\nonumber \small \boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t)) = \boldsymbol{\eta}^{\mathrm{T}} \left(\widetilde{\mathbf{w}}^{*}-\frac{1}{t} \sum_{k=1}^{t}(\widetilde{\mathbf{w}}(k)-\boldsymbol{\theta}(k))\right). \end{equation} Taking the expectation with respect to $\boldsymbol{\theta}(k)$, we obtain \begin{equation}\nonumber \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))\right\} \\ & = \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}\left(\widetilde{\mathbf{w}}^{*}- \frac{1}{t} \sum_{k=1}^{t} \widetilde{\mathbf{w}}(k)\right)\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\frac{1}{2t^2} \left\|\sum_{k=1}^{t} (\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)) \right\|_2^2\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\frac{1}{2t} \sum_{k=1}^{t} \left\|\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)\right\|_2^2\right\}. \end{split} \end{equation} By Lemma \ref{classifier_converge}, we have \begin{equation}\nonumber \small \left\|\widetilde{\mathbf{w}}^{*}-\widetilde{\mathbf{w}}(k)\right\|_2^2 \leq C^{k} \left(H_1 + \sum_{j=1}^{k} C^{-j} H_2 \|\boldsymbol{\theta}(j)\|_2^2\right).
\end{equation} Then, it follows that \begin{equation}\label{exp_eta_w} \small \begin{split} & \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\boldsymbol{\eta}^{\mathrm{T}}(\widetilde{\mathbf{w}}^{*}-\overline{\mathbf{w}}(t))\right\} \\ & \leq \frac{1}{2} \|\boldsymbol{\eta}\|_2^2 + \frac{1}{2t} \mathbb{E}_{\{\boldsymbol{\theta}(k)\}} \left\{\sum_{k=1}^{t} C^{k} \left(H_1 + \sum_{j=1}^{k} C^{-j} H_2 \|\boldsymbol{\theta}(j)\|_2^2\right) \right\} \\ & \leq \frac{n}{2} R^2 + \frac{1}{2t}\frac{C}{1-C} \left(H_1 + \frac{\rho H_2}{C- \rho}\sum_{i=1}^{n} d V_i^2\right), \end{split} \end{equation} where we have used $0<\rho<C$. Substituting (\ref{rademacher_final}), (\ref{exp_objective}) and (\ref{exp_eta_w}) into (\ref{deltaJ_wi}), we arrive at the bound in (\ref{eq_delta_Jt}). \end{proof} Theorem \ref{delta_Jt} provides guidance for both users and servers to obtain a classification model with the desired performance. In particular, the effects of the three uncertainties on the bound of $\sum_{i=1}^n \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t))$ have been successfully decomposed. Note that these effects are not simply superimposed but coupled together. Specifically, the terms in (\ref{eq_delta_Jt}) related to the primal variable perturbation decrease with iterations at the rate of $O\left(\frac{1}{t}\right)$. This also implies that the whole framework achieves convergence in expectation at this rate. Compared with \cite{ding2019optimal} and \cite{li2017robust}, where bounds of $\frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})$ are provided, we derive the difference between the generalization error of the aggregated classifier $\overline{\mathbf{w}}(t)$ and that of the ideal optimal classifier $\mathbf{w}^{\star}$, which is moreover given in closed form.
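The Rademacher complexity terms in (\ref{eq_delta_Jt}) can be estimated by simulation. The sketch below assumes, purely for illustration, a norm-bounded linear class $\mathcal{W}=\{\mathbf{w}:\|\mathbf{w}\|_2\leq B\}$ and synthetic feature vectors ($\mathcal{W}$ and the data are specified earlier in the paper); for this class the supremum has the closed form $\sup_{\|\mathbf{w}\|_2\leq B}\mathbf{w}^{\mathrm{T}}\mathbf{v}=B\|\mathbf{v}\|_2$, so $\mathrm{Rad}_i(\mathcal{W})$ reduces to the expectation of a norm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, B = 100, 5, 2.0                      # assumed: W = {w : ||w||_2 <= B}
X = rng.standard_normal((m, d))            # hypothetical feature vectors x_{i,j}

# Rad_i(W) = (B/m) * E || sum_j nu_j x_{i,j} ||_2 with Rademacher signs nu_j,
# estimated by averaging over many independent sign draws
trials = 4000
nus = rng.choice([-1.0, 1.0], size=(trials, m))
rad_est = (B / m) * np.linalg.norm(nus @ X, axis=1).mean()

# classical upper bound via Jensen: Rad_i(W) <= (B/m) * sqrt(sum_j ||x_{i,j}||^2)
rad_bound = (B / m) * np.sqrt((X ** 2).sum())
assert 0.0 < rad_est <= rad_bound
```

The estimate sits strictly below the Jensen bound, and both shrink as $m$ grows, matching the $\mathrm{Rad}_i(\mathcal{W})$ and $\sqrt{\ln(4/\delta)/m_i}$ terms in the theorem.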
The bound in (\ref{eq_delta_Jt}) contains the effect of the unknown data distribution $\mathcal{P}$, while the bound of $\frac{1}{t} \sum_{k=1}^{t} \widetilde{J}(\mathbf{w}(k))- \widetilde{J}(\widetilde{\mathbf{w}}^{*})$ covers only the role of the existing data. Although \cite{zhang2017dynamic} also considers the generalization error of found classifiers, no closed form of the bound is given, and the obtained bound may not decrease with iterations since the reference classifier therein is not $\mathbf{w}^{\star}$ but a time-varying one. In the more centralized setting of \cite{chaudhuri2011differentially}, $\Delta J_{\mathcal{P}}(\mathbf{w})$ is analyzed for the derived classifier $\mathbf{w}$, but there is no convergence issue since $\mathbf{w}$ is perturbed and published only once. Moreover, different from the works \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} and \cite{li2017robust}, our analysis considers the effects of the classifier class $\mathcal{W}$ through the Rademacher complexity. Such effects have been studied in \cite{shalev2014understanding} in non-private centralized machine learning scenarios. Furthermore, in the privacy-aware (centralized or distributed) frameworks of \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} and the robust ADMM scheme for erroneous updates \cite{li2017robust}, there is only one type of noise perturbation, and the uncertainty in the training data is not considered. \subsection{Comparisons and Discussions} \label{com_dis} Here, we compare the proposed framework with existing schemes from the perspective of privacy and performance, and discuss how each parameter contributes to the results. First, we find that the bound in (\ref{eq_delta_Jt}) is larger than those in \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal} if we adopt the approach in this paper to conduct performance analysis on these works. This is natural since there are more perturbations in our setting.
However, as we have discussed in Section \ref{discussion_privacy}, these existing frameworks do not meet the heterogeneous privacy requirements, and some of them cannot avoid the accumulation of privacy losses, resulting in no protection at all. It should be emphasized that extra performance costs must be paid when the data contributors want to obtain a stronger privacy guarantee. These existing frameworks may be better than ours in terms of performance, but the premise is that users accept the privacy preservation provided by them. If users require heterogeneous privacy protection, our framework can be more suitable. Further, compared with \cite{chaudhuri2011differentially,zhang2017dynamic,ding2019optimal}, \cite{li2017robust} and \cite{shalev2014understanding}, we provide a more systematic result on the performance analysis in Theorem~\ref{delta_Jt}, where most parameters related to useful measures of the classifiers (and of privacy preservation) are included. Servers and users can set these parameters as needed, and thus obtain classifiers which appropriately balance privacy and performance. We will discuss the roles of these parameters after some further analysis of the theoretical result. According to Lemma~\ref{classifier_converge}, the classifiers solved by different servers converge to $\widetilde{\mathbf{w}}_\mathrm{opt}$ in the sense of expectation. The performance of $\widetilde{\mathbf{w}}_\mathrm{opt}$ can be analyzed in a similar way as in Theorem~\ref{delta_Jt}. This is given in the following corollary. \begin{corollary} \label{corollary1} For $\epsilon>0$ and $\delta\in(0,1)$, with probability at least $1-\delta$, we have \begin{equation}\label{Jp_w_tilde} \small \begin{split} & \Delta J_{\mathcal{P}}(\widetilde{\mathbf{w}}_\mathrm{opt}) \\ & \leq \frac{4}{n}\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \sum_{i=1}^n \left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right)+ \frac{1}{n\hat{\kappa}}R^2.
\end{split} \end{equation} \end{corollary} For the sake of comparison, the next theorem provides a performance analysis when the privacy-preserving approach in Phase~2 is removed, and a corresponding result on the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ is given in the subsequent corollary. \begin{theorem} \label{thm3} For $\epsilon>0$ and $\delta\in(0,1)$, the aggregated classifier $\overline{\mathbf{w}}_i(t)$ obtained by the original ADMM scheme (\ref{new_primal_local}) and (\ref{new_dual_local}) satisfies with probability at least $1-\delta$ \begin{equation}\label{delta_Jt_unper} \small \begin{split} \sum_{i=1}^{n} \Delta J_{\mathcal{P}}(\overline{\mathbf{w}}_i(t)) & \leq \frac{\beta}{t} \left(\|\mathbf{w}(0)-\widehat{\mathbf{w}}^{*}\|_{\frac{L_+}{2}}^2 + \|\mathbf{r}(0)\|_2^2\right) \\ & + 4 \frac{e^{\epsilon}+1}{e^{\epsilon}-1}\sum_{i=1}^n\left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right). \end{split} \end{equation} \end{theorem} \begin{corollary} \label{corollary2} For $\epsilon>0$ and $\delta\in(0,1)$, with probability at least $1-\delta$, we have \begin{equation}\label{Jp_w_hat} \small \Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt}) \leq\frac{4}{n}\frac{e^{\epsilon}+1}{e^{\epsilon}-1} \sum_{i=1}^n \left(c_2 \mathrm{Rad}_i(\mathcal{W})+2c_1\sqrt{\frac{ 2\ln(4/\delta)}{m_i}}\right). \end{equation} \end{corollary} It is observed that the bound in (\ref{delta_Jt_unper}) is not in expectation since there is no noise perturbation during the ADMM iterations. It is interesting to note that the convergence rate of the unperturbed ADMM algorithm is also $O(\frac{1}{t})$. This implies that the modified ADMM algorithm preserves the convergence speed of the general distributed ADMM scheme. However, there exists a tradeoff between performance and privacy protection. 
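The role of the PPD in these bounds can be examined numerically. The following sketch (illustrative, not from the paper's code) evaluates the factor $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ that multiplies the Rademacher and sample-size terms in the bounds above.

```python
import numpy as np

# Illustrative sketch: the PPD-dependent factor (e^eps + 1)/(e^eps - 1) in the
# bounds above. It decreases monotonically in eps and tends to 1 as eps grows,
# so a very large PPD brings little additional change to the bound.
def ppd_factor(eps):
    return (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)

eps_grid = np.array([0.2, 0.4, 1.0, 2.0, 5.0, 10.0])
factors = ppd_factor(eps_grid)
```

The factor drops from above 5 at $\epsilon=0.4$ to about 1.3 at $\epsilon=2$, after which it is essentially flat, matching the saturation effect discussed below.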
Comparing (\ref{eq_delta_Jt}) and (\ref{delta_Jt_unper}), we find that the extra terms in (\ref{eq_delta_Jt}) are the results of perturbations in Phase~2. Also, the effect of the objective function perturbation is reflected in (\ref{Jp_w_tilde}), that is, the term $\frac{1}{n\hat{\kappa}}R^2$. When $R$ (the bound of $\boldsymbol{\eta}_i$) increases, the generalization error of the trained classifier would increase as well, indicating worse performance. Similarly, if we use noises with larger initial variances and decaying rates to perturb the solved classifiers in each iteration, the bound in (\ref{eq_delta_Jt}) will also increase. \textbf{Effect of data quality}. We observe that the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ in (\ref{Jp_w_hat}) also appears in (\ref{eq_delta_Jt}), (\ref{Jp_w_tilde}) and (\ref{delta_Jt_unper}). This bound reflects the effect of users' reported data, whose labels are randomized in Phase~1. It can be seen that besides the probability $\delta$, the bound in (\ref{Jp_w_hat}) is affected by three factors: PPD $\epsilon$, Rademacher complexity $\mathrm{Rad}_i(\mathcal{W})$, and the number of data samples $m_i$. Here, we discuss the roles of these factors. For the effect of PPD, we find that when $\epsilon$ is small, the bound will decrease with an increase in $\epsilon$. However, when $\epsilon$ is sufficiently large, it has limited influence on the bound. In particular, by taking $\epsilon\rightarrow\infty$, the bound reduces to that for the optimal solution of Problem~\ref{problem_1}, where $(e^{\epsilon}+1)/(e^{\epsilon}-1)$ goes to 1 in (\ref{Jp_w_hat}). Note that $\mathrm{Rad}_i(\mathcal{W})$ and $m_i$ still remain and affect the performance. For the effect of $\mathrm{Rad}_i(\mathcal{W})$, we observe that the generalization errors of trained classifiers may become larger when $\mathrm{Rad}_i(\mathcal{W})$ increases. 
The Rademacher complexity is directly related to the size of the classifier class $\mathcal{W}$. If there are only a small number of candidate classifiers in $\mathcal{W}$, the solutions have a high probability of achieving a small deviation between their generalization errors and the reference generalization error $J_{\mathcal{P}}(\mathbf{w}^{\star})$. Nevertheless, we should guarantee the richness of the class $\mathcal{W}$ to make $J_{\mathcal{P}}(\mathbf{w}^{\star})$ small; otherwise, $\mathbf{w}^\star$, the best classifier within a restricted $\mathcal{W}$, may itself have a large generalization error. In that case, though the deviation $\Delta J_{\mathcal{P}}(\cdot)$ may be small, the trained classifiers are not good predictors due to the poor performance of $\mathbf{w}^\star$. Thus, setting an appropriate classifier class is important for obtaining a classifier with qualified performance. Finally, we consider the effect of the amount of training data and the number of servers. From the bound of $\Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ in (\ref{Jp_w_hat}), we know that if $m_i$ becomes larger, the last term of the bound will decrease. In general, more data samples imply access to more information about the underlying distribution $\mathcal{P}$. Then, the trained classifier can predict the labels of newly sampled data from $\mathcal{P}$ with higher accuracy. Moreover, it can be seen that the bound is the average of $n$ local errors generated in different servers. When new servers participate in the DML framework, these servers should make sure that they have collected a sufficient amount of training data. Otherwise, the bound may not decrease even though the total number of data samples increases, because unbalanced local errors may raise their average, implying a larger bound of $\Delta J_{\mathcal{P}}(\cdot)$.
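The Rademacher complexity term discussed above can also be estimated empirically. The following is a minimal Monte Carlo sketch, assuming the bounded linear class $\{\mathbf{w} : \|\mathbf{w}\| \leq W\}$ used in the experiments; the data and constants are placeholders, not the paper's setup.

```python
import numpy as np

# Monte Carlo estimate of the empirical Rademacher complexity of the bounded
# linear class {w : ||w|| <= W_bound}. For this class, by Cauchy-Schwarz,
# sup_w (1/m) sum_j sigma_j <w, x_j> = W_bound * ||(1/m) sum_j sigma_j x_j||,
# which is what each trial evaluates. Data and constants are illustrative.
def empirical_rademacher(X, W_bound=1.0, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    est = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)  # Rademacher signs
        est += W_bound * np.linalg.norm(sigma @ X) / m
    return est / n_trials

rng = np.random.default_rng(1)
def unit_rows(m, d=5):
    X = rng.normal(size=(m, d))
    return X / np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))

rad_small = empirical_rademacher(unit_rows(25))
rad_large = empirical_rademacher(unit_rows(2500))
# rad_large is roughly 10x smaller: the complexity term shrinks like 1/sqrt(m).
```

This illustrates the $m_i$ dependence discussed in the text: the complexity contribution to the bound decays as the local sample size grows.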
\begin{figure} \vspace*{-2mm} \begin{center} \subfigure[Norm of the errors]{\label{consensus_obs} \includegraphics[scale=0.45]{figure/consensus.pdf}} \subfigure[Empirical risks]{\label{com_risks} \includegraphics[scale=0.45]{figure/convergence_com.pdf}} \caption{{\small The convergence properties of PDML.}} \end{center} \end{figure} \section{Experimental Evaluation} \label{evaluation} In this section, we conduct experiments to validate the obtained theoretical results and study the classification performance of the proposed PDML framework. Specifically, we first use a real-world dataset to verify the convergence property of the PDML framework and study how key parameters affect the performance. We then leverage seven further datasets to verify the classification accuracy of the classifiers trained by the framework. \subsection{Experiment Setup} \subsubsection{Datasets} We use two kinds of publicly available datasets, described below, to validate the convergence property and classification accuracy of the PDML. (i) Adult dataset \cite{Dua2019}. The dataset contains census data of 48,842 individuals, where there are 14 attributes (e.g., age, work-class, education, occupation and native-country) and a label indicating whether a person's annual income is over \$50,000. After removing the instances with missing values, we obtain a training dataset with 45,222 samples. To preprocess the dataset, we adopt a unary encoding approach to transform the categorical attributes into binary vectors, and further normalize the whole feature vector to have a norm of at most 1. The preprocessed feature vector is a 105-dimensional vector. For the labels, we mark an annual income over \$50,000 as 1; otherwise it is labeled as $-1$. (ii) Gunnar R\"{a}tsch's benchmark datasets \cite{ucidata}. There are thirteen data subsets from the UCI repository in the benchmark datasets.
To mitigate the effect of data quality, we select the seven datasets with the largest data sizes to conduct experiments. The seven datasets are \textit{German}, \textit{Image}, \textit{Ringnorm}, \textit{Banana}, \textit{Splice}, \textit{Twonorm} and \textit{Waveform}, where the numbers of instances are 1,000, 2,086, 7,400, 5,300, 2,991, 7,400 and 5,000, respectively. Each dataset is partitioned into training and test data, with a ratio of approximately $70\%:30\%$. \subsubsection{Underlying classification approach} Logistic regression (LR) is utilized for training the prediction model, where the loss function and regularizer are $\ell_{LR} (y_{i, j}, \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j})= \log \bigl(1+e^{-y_{i, j} \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j}}\bigr)$ and $N(\mathbf{w}_i) = \frac{1}{2}\|\mathbf{w}_i\|^2$, respectively. Then, the local objective function is given by \begin{equation}\nonumber \small J_i(\mathbf{w}_i) = \sum_{j=1}^{m_i}\frac{1}{m_i}\log \left(1+e^{-y_{i, j} \mathbf{w}_i^\mathrm{T}\mathbf{x}_{i, j}}\right)+\frac{a}{2n}\|\mathbf{w}_i\|^2. \end{equation} It is easy to check that when the classifier class $\mathcal{W}$ is bounded (e.g., a bounded set $\mathcal{W}= \{\mathbf{w}\in\mathbb{R}^d\; |\; \|\mathbf{w}\|\leq W\}$), $\ell_{LR} (\cdot)$ satisfies Assumption~\ref{loss_assumption}. Due to the strong convexity of the regularizer $N(\mathbf{w}_i)$, $J_i(\mathbf{w}_i)$ is strongly convex. Then, according to Lemma~\ref{modified_solution}, Problems~\ref{problem_2} and \ref{problem_3} have optimal solution sets, and thus, we can use LR to train the classifiers. \subsubsection{Network topology} We consider $n=10$ servers that collaboratively train a prediction model. A connected random graph with $E=13$ communication links in total is used to describe the communication topology of the 10 servers. Each server is responsible for collecting the data from a group of users, and thus there are 10 groups of users.
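As a concrete illustration, the local LR objective $J_i$ above can be evaluated in a few lines; the values of $a$, $n$ and the toy data below are placeholders, not the experimental configuration.

```python
import numpy as np

# Sketch of the local logistic-regression objective J_i defined above.
# a, n and the toy data are illustrative placeholders.
def local_objective(w, X, y, a=1.0, n=10):
    margins = -y * (X @ w)                     # -y_{i,j} w^T x_{i,j}
    loss = np.mean(np.log1p(np.exp(margins)))  # average logistic loss
    reg = (a / (2.0 * n)) * np.dot(w, w)       # (a/2n) ||w||^2
    return loss + reg

X = np.array([[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]])
y = np.array([1.0, -1.0, 1.0])
w0 = np.zeros(2)
# At w = 0 every margin is 0 and the regularizer vanishes, so J_i(0) = log(2).
```

The value $J_i(\mathbf{0}) = \log 2$ is a convenient sanity check, since it holds for any data of this form.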
In the experiments, we assume that each group has the same number of users, that is, $m_i=m_l, \forall i, l$. For example, when we use $m=45,000$ instances sampled from the Adult dataset to train the classifier, each server collects data from $m_i = 4,500$ users. \subsection{Experimental Results with Adult Dataset} Based on the Adult dataset, we first verify the convergence property of the PDML framework. Fig.~\ref{consensus_obs} illustrates the maximum distances between the norms of arbitrary two classifiers found by different servers. We set the bound of $\boldsymbol{\eta}_i$ to 1. Other settings are the same as those with experiments under the synthetic dataset. For the sake of comparison, we also draw the variation curve (with circle markers) of the maximum distance when the privacy-preserving approach in Phase~2 is removed. We observe that both distances converge to 0, implying that the consensus constraint is eventually satisfied. Fig.~\ref{com_risks} shows the variation of empirical risks (the objective function in (\ref{original_objective})) as iterations proceed. Here, the green dashed line depicts the final empirical risk achieved by general ADMM with the original data, which we call the reference empirical risk. There are also two curves showing the varying empirical risks with privacy preservation. Comparing the two curves, we find that the ADMM with the combined noise-adding scheme preserves the convergence property of the general ADMM algorithm. Due to the noise perturbations in Phase~2, the convergence time becomes longer. In addition, it can be seen that regardless of whether the privacy-preserving approach in Phase~2 is used, both ADMM schemes cannot achieve the same final empirical risk as that of the green line, which is consistent with the analysis in Section~\ref{com_dis}.
\begin{figure} \begin{center} \subfigure[Different $R$ (with $\rho=0.8$)]{\label{effects_M} \includegraphics[scale=0.45]{figure/different_m.pdf}} \subfigure[Different $\rho$ (with $R=1$)]{\label{effects_rho} \includegraphics[scale=0.45]{figure/different_rho.pdf}} \subfigure[Different $\epsilon$]{\label{effects_epsilon} \includegraphics[scale=0.45]{figure/different_epsilon.pdf}} \caption{{\small The effects of key parameters.}} \end{center} \end{figure} \begin{table*}[thb!] \caption{{\small Classification accuracy with test data (\%)}} \label{table} \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{*{8}{c}} \hline \hline \multirow{2}{*}{Dataset} & \multirowcell{2}{Without privacy protection} &\multicolumn{2}{c}{Modified loss} &\multicolumn{4}{c}{Perturbed ADMM} \\ \cmidrule(lr){3-4}\cmidrule(lr){5-8} \morecmidrules& &{$\epsilon=0.4$, $R=0$} &{$\epsilon=1$, $R=0$} &{$\epsilon=0.4$, $R=1$} &{$\epsilon=0.4$, $R=9$} &{$\epsilon=1$, $R=1$} &{$\epsilon=1$, $R=9$} \\ \hline {German} &75.00 &71.00 &74.00 &69.67 &64.00 &74.33 &67.67 \\ {Image} &75.56 &70.13 &72.84 &69.33 &63.10 &70.45 &65.50 \\ {Ringnorm} &77.38 &73.44 &76.82 &73.74 &66.18 &75.77 &70.23 \\ {Banana} &58.22 &54.33 &56.06 &54.28 &43.11 &55.89 &54.44 \\ {Splice} &56.60 &46.84 &56.60 &54.94 &46.39 &55.83 &52.50 \\ {Twonorm} &97.90 &96.59 &97.38 &96.51 &92.28 &97.41 &94.77 \\ {Waveform} &88.93 &84.60 &87.93 &84.07 &80.47 &87.67 &81.73 \\ \hline \hline \end{tabular} \end{table*} We then study the effects of the key parameters on the performance. In Fig.~\ref{effects_M}, we examine the impact of the noise bound $R$ when the decaying rate $\rho$ is fixed at $0.8$. It is observed that $R$ affects the final empirical risks of the trained classifiers. The larger the noise bound, the greater the gap between the achieved empirical risks and the reference value, which is consistent with Corollary~\ref{corollary1}.
In Fig.~\ref{effects_rho}, we inspect the effect of the Gaussian noise decaying rate $\rho$ when $R$ is fixed at $1$. We find that the convergence time is affected by $\rho$. A larger $\rho$ implies that the communicated classifiers are still perturbed by noises with larger variance even after iterating over multiple steps. Thus, more iterations are needed to obtain the same final empirical risk as with a smaller $\rho$. Such a property can be derived from the bound in (\ref{eq_delta_Jt}). Fig.~\ref{effects_epsilon} illustrates the variation of the final empirical risks when the PPD $\epsilon$ changes. The final empirical risks decrease with a larger PPD (weaker privacy guarantee), which reflects the tradeoff between privacy protection and performance. Further, the extra perturbations in Phase~2 lead to larger empirical risks for all the PPDs in the experiments. We also find that when $\epsilon$ is large ($\epsilon>0.6$), the achieved empirical risks are close to the reference value and do not change significantly. Again, this result is consistent with the analysis of the bound in (\ref{Jp_w_hat}). \subsection{Classification Accuracy Evaluation} We use the test data of the seven datasets to evaluate the prediction performance of the trained classifiers, which is shown in Table~\ref{table}. The classification accuracy is defined as the fraction of test samples whose labels predicted by the trained classifier match the true labels. For comparison, we present the classification accuracy achieved by general ADMM with the original data. To validate the classification accuracy under the PDML framework, we choose six different sets of parameter configurations to conduct the experiments. The specific configurations can be found in the second row of Table~\ref{table}. We find that a larger $\epsilon$ and a smaller $R$ generate better accuracy.
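The accuracy metric used in Table~\ref{table} can be made concrete with a short sketch (toy classifier and data, purely illustrative):

```python
import numpy as np

# Classification accuracy: fraction of test points whose predicted sign
# matches the true +/-1 label. Toy classifier and data, illustrative only.
def accuracy(w, X_test, y_test):
    preds = np.where(X_test @ w >= 0, 1.0, -1.0)
    return np.mean(preds == y_test)

w = np.array([1.0, -1.0])
X_test = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0]])
y_test = np.array([1.0, -1.0, 1.0, -1.0])
# Scores are [1, -1, 0, -1]; taking sign gives [1, -1, 1, -1], matching all labels.
```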
According to the theoretical results, the upper bounds for the differences $\Delta J_{\mathcal{P}}(\widetilde{\mathbf{w}}_\mathrm{opt})$ and $ \Delta J_{\mathcal{P}}(\widehat{\mathbf{w}}_\mathrm{opt})$ decrease with a larger $\epsilon$ and a smaller $R$, implying better performance of the trained classifiers. Thus, the bound in Theorem~\ref{delta_Jt} also provides a guideline to choose appropriate parameters to obtain a prediction model with satisfactory classification accuracy. It is noteworthy that even under the strongest privacy setting ($\epsilon=0.4$, $R=9$), the proposed framework achieves classification accuracy comparable to the reference precision. We also notice that under the datasets \textit{Banana} and \textit{Splice}, PDML achieves inferior accuracy in all settings. For a binary classification problem, it is meaningless to obtain a precision of around $50\%$. The reason for the poor accuracy may be that LR is not a suitable classification approach for these two datasets. Overall, the proposed PDML framework achieves competitive classification accuracy while providing strong privacy protection. \section{Conclusion} \label{conclusion} In this paper, we have provided a privacy-preserving ADMM-based distributed machine learning framework. Through a local randomization approach, data contributors obtain self-controlled DP protection for the most sensitive labels, and the privacy guarantee does not decrease as ADMM iterations proceed. Further, a combined noise-adding method has been designed for perturbing the ADMM algorithm, which simultaneously preserves privacy for users' feature vectors and strengthens the protection for the labels. Lastly, the performance of the proposed PDML framework has been analyzed in theory and validated by extensive experiments. For future investigations, we will study the joint privacy-preserving effects of the local randomization approach and the combined noise-adding method.
Moreover, it is interesting yet challenging to extend the PDML framework to non-empirical risk minimization problems. When users assign distinct sensitivity levels to different attributes, we are interested in designing a new privacy-aware scheme providing heterogeneous privacy protection for different attributes.
\section{Cumulants at arbitrary density} \subsection{Law of the distance between two particles} We consider two tagged particles (TPs) in the SEP with density $\rho$. The initial distance between them is $L$ and we derive the equilibrium distribution of the distance (in particular, we show that the distribution of the distance is time-independent at large time). We proceed as follows: (i) We write the law of the number of particles $N_p$ between the TPs. This number is fixed initially and does not evolve. (ii) We write the law of the number of vacancies $N_v$ between the tracers at equilibrium; this law depends on $N_p$. Initially there are $L-1$ sites between the TPs. Each site is occupied with probability $\rho$. This gives us a binomial law for $N_p$: \begin{equation} \mathbb{P}(N_p = k) = \binom{L-1}{k} \rho^k (1-\rho)^{L-1-k} \end{equation} At large time, the number of vacancies between the two TPs, knowing that there are $k$ particles between them, is given by the law of the number $m$ of failures before $k+1$ successes in a game in which the probability of a success is $\rho$. It is a negative binomial law: \begin{equation} \mathbb{P}(N_v = m|N_p = k) = \binom{m+k}{m} (1-\rho)^m \rho^{k+1} \end{equation} The distance $D$ between the tracers is given by $D=N_p + N_v + 1$; its law is: \begin{align} \mathbb{P}(D=\delta) &= \sum_{k=0}^{L-1} \mathbb{P}(N_p = k)\mathbb{P}(N_v = \delta-k-1|N_p = k) \\ &= \sum_{k=0}^{L-1}\binom{L-1}{k} \rho^k (1-\rho)^{L-1-k} \binom{\delta-1}{\delta-k-1} (1-\rho)^{\delta-k-1} \rho^{k+1} \\ &= \sum_{k=0}^{L-1}\binom{L-1}{k} \binom{\delta-1}{k} \rho^{2k+1} (1-\rho)^{L+\delta-2k-2} \label{si:eq:predDist} \end{align} This law is in very good agreement with the numerical simulations (Fig.~\ref{si:fig:dist}). \subsection{Large deviations of the law of the distance} From the law of the distance \eqref{si:eq:predDist}, one can derive the generating function $G_D(z)$.
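As a quick sanity check before computing $G_D$, the law \eqref{si:eq:predDist} can be verified numerically (illustrative sketch with arbitrary parameters): the closed form should be normalized and should match direct sampling of $D = N_p + N_v + 1$.

```python
import numpy as np
from math import comb

# Check of the distance law: N_p ~ Binomial(L-1, rho) and, given N_p = k,
# N_v ~ NegativeBinomial(k+1, rho) (failures before k+1 successes).
def p_dist(delta, L, rho):
    return sum(comb(L - 1, k) * comb(delta - 1, delta - k - 1)
               * rho ** (2 * k + 1) * (1 - rho) ** (L + delta - 2 * k - 2)
               for k in range(min(L, delta)))

L, rho = 5, 0.5
total = sum(p_dist(d, L, rho) for d in range(1, 200))  # should be ~1

rng = np.random.default_rng(0)
n_samples = 200_000
Np = rng.binomial(L - 1, rho, size=n_samples)
D = Np + rng.negative_binomial(Np + 1, rho) + 1
mc_prob = np.mean(D == L)  # compare with p_dist(L, L, rho)
```

We now turn to the generating function itself.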
\begin{align} G_D(z) &\equiv \sum_{\delta=1}^\infty \mathbb{P}(D=\delta) z^\delta \\ &= \sum_{\delta=1}^\infty z^\delta \sum_{k=0}^{L-1} \mathbb{P}(N_p = k)\mathbb{P}(N_v = \delta-k-1|N_p = k) \\ &= z \sum_{k=0}^{L-1} z^k \mathbb{P}(N_p = k) \sum_{m=0}^\infty z^m \mathbb{P}(N_v = m|N_p = k) \\ &= z \sum_{k=0}^{L-1} z^k \binom{L-1}{k} \rho^k (1-\rho)^{L-1-k} \sum_{m=0}^\infty \binom{m+k}{m} (1-\rho)^m \rho^{k+1} \\ &= z \sum_{k=0}^{L-1} z^k \binom{L-1}{k} \rho^k (1-\rho)^{L-1-k} \left(\frac{\rho}{1-(1-\rho)z}\right)^{k+1} \\ &= \left(\frac{\rho z}{1-(1-\rho)z}\right) \left(\frac{\rho^2 z}{1-(1-\rho)z} + 1-\rho\right)^{L-1} \end{align} We can then derive a large deviation scaling in the limit $L\to\infty$ (with $z=e^t$): \begin{equation} \frac{1}{L} \ln G_D(e^t) \xrightarrow[L\to\infty]{} \ln\left(\frac{\rho^2 e^t}{1-(1-\rho)e^t} + 1-\rho\right) \equiv \phi(t) \end{equation} From the Gärtner-Ellis theorem \cite{Touchette:2009}, this implies: \begin{align} P(D=L(1+\tilde d)) &\asymp e^{-LI(\tilde d)} \\ I(\tilde d) &= \sup_{t\in\mathbb{R}} \(t(1+\tilde d) - \phi(t)\) \label{si:eq:SI_extremum} \end{align} We now consider the high-density limit: $\rho = 1-\rho_0$ with $\rho_0 \ll 1$. We obtain: \begin{equation} \phi(t) = t + 2\rho_0 \(\cosh t - 1\) + \mathcal{O}(\rho_0^2) \end{equation} Thus, the supremum in \eqref{si:eq:SI_extremum} is at $t^\ast$ such that: \begin{equation} \sinh t^\ast = \frac{\tilde d}{2\rho_0} \end{equation} Finally, \begin{equation} I(\tilde d) = 2\rho_0\left\{ 1-\sqrt{1+\(\frac{\tilde d}{2\rho_0}\)^2} + \frac{\tilde d}{2\rho_0} \log\(\frac{\tilde d}{2\rho_0} + \sqrt{1+\(\frac{\tilde d}{2\rho_0}\)^2}\) \right\} \end{equation} This is exactly what is found with our high-density approach: Eq.~\eqref{eq:probDist} of the main text. \subsection{Large time behavior of the cumulants of $N$ particles} We consider $N$ TPs having displacements $Y_1, \dots Y_N$.
We know that the moments of a single particle scale as $t^{1/2}$ \cite{Imamura:2017} while the moments of the distance scale as $t^0$ (previous section). \begin{align} \langle Y_1^{2p} \rangle &= \mathcal{O}(t^{1/2})\qquad \forall p \in \mathbb{N} \label{si:eq:SI_X1} \\ \langle (Y_i - Y_1)^{2p} \rangle &= \mathcal{O}(t^0)\qquad \forall i \leq N, \forall p \in \mathbb{N} \label{si:eq:SI_dist} \end{align} From this we want to show that \begin{equation} \label{si:eq:SI_HR} A_{p_1, \dots p_N}^{(N)} \equiv \langle Y_1^{p_1}\dots Y_N^{p_N} \rangle - \langle Y_1^{p_1 + \dots + p_N} \rangle = \mathcal{O}(t^{1/4}) \qquad \forall p_1, \dots, p_N \end{equation} We proceed by induction: the case $N=1$ is straightforward. Now assuming that (\ref{si:eq:SI_HR}) holds for a given $N$, we want to prove it for $N+1$. As we have $A_{p_1, \dots p_N, 0}^{(N+1)} = A_{p_1, \dots p_N}^{(N)} = \mathcal{O}(t^{1/4})$, we prove by induction that $A_{p_1, \dots p_N, q}^{(N+1)}= \mathcal{O}(t^{1/4})\ \forall q\geq 0$. 
Indeed, if $A_{p_1, \dots p_N, q'}^{(N+1)}= \mathcal{O}(t^{1/4})\ \forall q'<q$, we can write: \begin{align} &A_{p_1, \dots p_N, q}^{(N+1)} = \langle Y_1^{p_1}\dots Y_N^{p_N}Y_{N+1}^q \rangle - \langle Y_1^{p_1 + \dots + p_N + q} \rangle \\ &= \left\langle Y_1^{p_1}\dots Y_N^{p_N}\left[(Y_{N+1}-Y_1)^q -\sum_{r=1}^q \binom{q}{r} (-1)^r Y_1^r Y_{N+1}^{q-r}\right] \right\rangle - \langle Y_1^{p_1 + \dots + p_N + q} \rangle \\ &= \langle Y_1^{p_1}\dots Y_N^{p_N}(Y_{N+1}-Y_1)^q \rangle -\sum_{r=1}^q \binom{q}{r} (-1)^r \langle Y_1^{p_1+r} Y_2^{p_2} \dots Y_{N+1}^{q-r}\rangle - \langle Y_1^{p_1 + \dots + p_N + q} \rangle \\ &= \langle Y_1^{p_1}\dots Y_N^{p_N}(Y_{N+1}-Y_1)^q \rangle -\sum_{r=1}^q \binom{q}{r} (-1)^r \left[ \langle Y_1^{p_1+r} Y_2^{p_2} \dots Y_{N+1}^{q-r}\rangle - \langle Y_1^{p_1 + \dots + p_N + q}\rangle \right] \end{align} All the terms in the sum are of order $\mathcal{O}(t^{1/4})$ from (\ref{si:eq:SI_HR}) and the first term can be bounded by the Cauchy–Schwarz inequality: \begin{equation} \left|\langle Y_1^{p_1}\dots Y_N^{p_N}(Y_{N+1}-Y_1)^q \rangle\right| \leq \sqrt{\langle \(Y_1^{p_1}\dots Y_N^{p_N}\)^2 \rangle\langle (Y_{N+1}-Y_1)^{2q} \rangle} = \sqrt{\mathcal{O}(t^{1/2})\mathcal{O}(t^0)} = \mathcal{O}(t^{1/4}) \end{equation} using (\ref{si:eq:SI_X1}), (\ref{si:eq:SI_dist}) and (\ref{si:eq:SI_HR}). This ends the proof of (\ref{si:eq:SI_HR}). This implies that if $p_1 + \dots + p_N$ is even: \begin{equation} \langle Y_1^{p_1}\dots Y_N^{p_N} \rangle \equi{t\to\infty} \langle Y_1^{p_1 + \dots + p_N} \rangle = \mathcal{O}(t^{1/2}) \end{equation} In the large time limit, the moments of $N$ particles are thus given by the moments of a single particle. This extends to the cumulants $\kappa^{(N)}_{p_1, \dots, p_N}$ defined from the second characteristic function.
\begin{equation} \psi(k_1, \dots k_N) \equiv \sum_{p_1, \dots p_N} \kappa^{(N)}_{p_1, \dots, p_N} \frac{(ik_1)^{p_1} \dots (ik_N)^{p_N}}{p_1!\dots p_N!} \equiv \log\left[\sum_{p_1, \dots p_N} \langle Y_1^{p_1}\dots Y_N^{p_N} \rangle \frac{(ik_1)^{p_1} \dots (ik_N)^{p_N}}{p_1!\dots p_N!}\right] \end{equation} \begin{equation} \kappa^{(N)}_{p_1, \dots, p_N} \equi{t\to\infty} \kappa^{(1)}_{p_1 + \dots + p_N} \equi{t\to\infty} B_{p_1+\dots + p_N} \sqrt{t} \end{equation} The coefficients $B_p$ are computed in Ref.~\cite{Imamura:2017}. \begin{figure} \includegraphics[width=15cm]{figDist} \caption{Comparison of the numerical probability distribution of the distance of two TPs (dots) with the prediction \eqref{si:eq:predDist} (lines) for $\rho = 0.25, 0.5, 0.75$ and $L=5, 10, 20$. The average is performed over $10^6$ simulations at final time $t=2\cdot 10^4$ (we checked that this is enough for the convergence).} \label{si:fig:dist} \end{figure} \section{Detailed calculations in the high-density limit} \subsection{Approximation and thermodynamic limit} Let us consider a system of size $\mathcal{N}$ with $M$ vacancies and denote by $\vec Y(t) = (X_i(t)-X_i^0)_{i=1}^N$ the vector of the displacements of the TPs. The probability $P^{(t)} (\vec Y|\{Z_j\})$ of having displacements $\vec Y$ at time $t$ knowing that the $M$ vacancies started at sites $Z_1\dots Z_M$ is exactly given by~: \begin{equation} P^{(t)} (\vec Y|\{Z_j\}) = \sum_{\vec Y_1,\dots,\vec Y_M} \delta_{\vec Y, \vec Y_1+\dots+\vec Y_M} \mathcal{P}^{(t)} (\{\vec Y_j\}|\{Z_j\}) \end{equation} where $\mathcal{P}^{(t)} (\{\vec Y_j\}|\{Z_j\})$ is the probability of displacement $\vec Y_j$ due to the vacancy $j$ for all $j$, knowing the initial positions of all the vacancies.
Assuming that in the high-density limit ($\rho\to 1$) the vacancies interact independently with the TPs, we can link it to the probability $p^{(t)}_Z(\vec Y)$ that the tracers have moved by $\vec Y$ at time $t$ due to a single vacancy that was initially at site $Z$: \begin{equation} \mathcal{P}^{(t)} (\{\vec Y_j\}|\{Z_j\}) \underset{\rho\to 1}{\sim} \prod_{j=1}^M p^{(t)}_{Z_j}(\vec Y_j) \end{equation} so that \begin{equation}\label{si:eq:approx} P^{(t)} (\vec Y|\{Z_j\}) \underset{\rho\to 1}{\sim} \sum_{\vec Y_1,\dots,\vec Y_M} \delta_{\vec Y, \vec Y_1+\dots+\vec Y_M} \prod_{j=1}^M p^{(t)}_{Z_j}(\vec Y_j) \end{equation} We take the Fourier transform and we average over the initial positions of the vacancies: \begin{equation} \tilde p^{(t)}(\vec k) \equiv \frac{1}{\mathcal{N}-N} \sum_{Z \notin \{X_i^0\}} \sum_{\vec Y} p^{(t)}_Z(\vec Y) e^{i\vec k\vec Y} \end{equation} and mutatis mutandis for $\tilde P^{(t)} (\vec k)$. Furthermore we write $\tilde p_Z^{(t)}(\vec k) = 1 + \tilde q_Z^{(t)}(\vec k)$ ($\tilde q_Z$ corresponds to the deviation from a Dirac delta centered at $0$). \begin{equation} \tilde P^{(t)} (\vec k) \underset{\rho\to 1}{\sim} \left[\tilde p^{(t)}(\vec k)\right]^M = \left[\frac{1}{\mathcal{N}-N}\sum_{Z\notin \{X_i^0\}}\tilde p_Z^{(t)}(\vec k)\right]^M = \left[1 + \frac{1}{\mathcal{N}-N}\sum_{Z\notin \{X_i^0\}} \tilde q_Z^{(t)}(\vec k)\right]^M \end{equation} We now take the limit $\mathcal{N},M\to\infty$ with the density of vacancies $\rho_0 \equiv 1-\rho = M / \mathcal{N}$ remaining constant.
The second characteristic function reads: \begin{equation} \label{si:eq:psi1} \lim_{\rho_0\to 0} \frac{\psi^{(t)}(\vec k)}{\rho_0} \equiv \lim_{\rho_0\to 0} \frac{\ln \left[\tilde P^{(t)}(\vec k)\right]}{\rho_0} = \sum_{Z \notin \{X_i^0\}} \tilde q_Z^{(t)}(\vec k) \end{equation} \subsection{Expression of the single-vacancy propagator} One can partition over the first passage of the vacancy to the site of one of the tracers to get an expression for $\tilde q_Z$, which is the main quantity involved in (\ref{si:eq:psi1}): \begin{align} p_Z^{(t)}(\vec Y) &= \delta_{\vec Y, \vec 0} \(1 - \sum_{j=0}^t \sum_{\nu = \pm 1, \pm 2} F^{(j)}_{\nu, Z}\) + \sum_{j=0}^t \sum_{\nu = \pm 1, \pm 2} p_{-\nu}^{(t-j)}(\vec Y - \vec e_\nu) F^{(j)}_{\nu, Z} \\ \tilde p_Z^{(t)}(\vec k) &= 1 - \sum_{j=0}^t \sum_{\nu = \pm 1, \pm 2} F^{(j)}_{\nu, Z} + \sum_{j=0}^t \sum_{\nu = \pm 1, \pm 2} \tilde p_{-\nu}^{(t-j),\zeta} (\vec k)\, e^{i\,\vec k\vec e_\nu} F^{(j)}_{\nu, Z} \\ \label{si:eq:qAll} \tilde q_Z^{(t)}(\vec k) &= - \sum_{j=0}^t \sum_{\nu = \pm 1, \pm 2} \left[1 - \(1+\tilde q_{-\nu}^{(t-j), \zeta} (\vec k)\) e^{i\,\vec k\vec e_\nu}\right] F^{(j)}_{\nu, Z} \end{align} A superscript $\zeta$ on a quantity means that this quantity is computed taking into account $\zeta = \zeta(Z)$. We now need an expression for $\tilde q_\eta^\zeta$ where $\eta$ is a special site. To do so we decompose the propagator of the displacements over the successive passages of the vacancy to the position of one of the tracers: \begin{multline} \label{si:eq:decompo} p^{(t),\zeta}_\eta (\vec Y) = \delta_{\vec Y,\vec 0} \(1 - \sum_{j=0}^t \sum_\mu F^{(j),\zeta}_{\mu, \eta}\) \\ + \sum_{p=1}^\infty \sum_{m_1,\dots,m_p=1}^{\infty} \sum_{m_{p+1}=0}^\infty \delta_{t, \sum_i\! m_i} \sum_{\nu_1, \dots, \nu_p} \delta_{\vec Y, \sum_i\!
\vec e_{\nu_i}} \(1- \sum_{j=0}^{m_{p+1}} \sum_\mu F^{(j),\zeta}_{\mu, -\nu_p}\) F^{(m_p),\zeta}_{\nu_p, -\nu_{p-1}} \dots F^{(m_2),\zeta}_{\nu_2, -\nu_1} F^{(m_1),\zeta}_{\nu_1, \eta} \end{multline} The sums over $\mu$ and $\nu_i$ run over the special sites ($\pm 1, \dots, \pm N$). The discrete Laplace transform (power series) of a function of time $g(t)$ is $\hat g(\xi)\equiv \sum_{t=0}^\infty g(t) \xi^t$. We can now take both the Laplace and Fourier transforms of (\ref{si:eq:decompo}) to get: \begin{equation} \hat p_{\eta}^\zeta(\vec Y, \xi) = \frac{1}{1-\xi}\left\{ \delta_{\vec Y,\vec 0} \(1 - \sum_\mu \hat F_{\mu,\eta}^\zeta\) + \sum_{p=1}^\infty \sum_{\nu_1, \dots, \nu_p} \delta_{\vec Y, \sum_i\! \vec e_{\nu_i}} \(1-\sum_\mu \hat F_{\mu,-\nu_p}^\zeta\) \hat F_{\nu_p, -\nu_{p-1}}^\zeta \dots \hat F_{\nu_2, -\nu_1}^\zeta \hat F_{\nu_1, \eta}^\zeta \right\} \end{equation} \begin{equation} \label{si:eq:qBorder} \hat{\tilde q}_\eta^\zeta (\vec k, \xi) \equiv \hat{\tilde p}_\eta^\zeta(\vec k, \xi) - \frac{1}{1-\xi} = \frac{1}{1-\xi} \sum_{\mu,\nu} \{[1-T^\zeta(\vec k, \xi)]^{-1}\}_{\nu\mu} \times \(1-e^{-i\,\vec k\vec e_\nu}\) e^{i\,\vec k\vec e_\mu} \hat F^\zeta_{\mu\eta}(\xi) \end{equation} The matrix $T$ is defined by $T^\zeta(\vec k,\xi)_{\nu\mu} = \hat F^\zeta_{\nu,-\mu} (\xi) e^{i\,\vec k \vec e_{\nu}}$. \subsection{Expression of the characteristic function} Introducing (\ref{si:eq:qAll}) into (\ref{si:eq:psi1}) directly gives: \begin{align} \lim_{\rho_0\to 0}\frac{\hat \psi (\vec k, \xi)}{\rho_0} &= - \sum_\nu \bigg[ \frac{1}{1-\xi} - \(\frac{1}{1-\xi} + \hat{\tilde q}^{(\zeta)}_{-\nu} (\vec k, \xi)\) e^{i\,\vec k\vec e_\nu} \bigg] h_\zeta(\xi) \\ h_\zeta(\xi) &= \sum_{Z\notin \{X_i^0\}} \hat F_{\zeta,Z} (\xi) = \sum_{Z=X_\zeta^0 + 1}^{X_{\zeta+1}^0 - 1} \hat F_{\zeta,Z} (\xi) \end{align} with $\hat{\tilde q}_{\nu}^\zeta$ given by (\ref{si:eq:qBorder}).
\subsection{Expression of the quantities of interest ($\hat F_{\nu, \eta}^{(\zeta)}$ and $h_\mu$)} The two results \cite{Hughes:1995} that we recalled in the article are: \begin{itemize} \item The Laplace transform of the first-passage density at the origin at time $t$ of a symmetric 1d Polya walk starting from site $l$ is: \begin{equation} \label{si:eq:walk1} \hat f_l(\xi) = \alpha^{|l|} \mbox{ with } \alpha = \frac{1-\sqrt{1-\xi^2}}{\xi} \end{equation} \item The probability of first passage at site $s_1$ starting from $s_0$ and considering $s_2$ as an absorbing site is given by \begin{equation} \label{si:eq:walk2} \hat F^\dagger (s_1|s_0,\xi) = \frac{\hat f_{s_1-s_0}(\xi) - \hat f_{s_1-s_2}(\xi) \hat f_{s_2-s_0}(\xi)} {1-\hat f_{s_1-s_2}(\xi)^2} \end{equation} \end{itemize} From (\ref{si:eq:walk1}), we have: \begin{equation} \hat F_{-1,-1} = \hat F_{+N, +N} = \alpha \end{equation} From (\ref{si:eq:walk1}, \ref{si:eq:walk2}), and recalling that the distance between TP $\mu$ and TP $\mu+1$ is $L_\mu^{(\zeta)}$, we have \begin{align} \hat F_{\mu,\mu}^{(\zeta)} = \hat F_{-\mu+1,-\mu-1}^{(\zeta)} &= \frac{\alpha -\alpha^{2L_\mu^{(\zeta)}-1}}{1-\alpha^{2L_\mu^{(\zeta)}}} \\ \hat F_{\mu,-\mu+1}^{(\zeta)} = \hat F_{-\mu+1,\mu}^{(\zeta)} &= \frac{\alpha^{L_\mu^{(\zeta)}-1} -\alpha^{L_\mu^{(\zeta)} + 1}}{1-\alpha^{2L_\mu^{(\zeta)}}} \end{align} And the sums are: \begin{align} h_0 = h_N &= \sum_{Z=-\infty}^{-1} \alpha^{|Z|} = \frac{\alpha}{1-\alpha} \\ h_\mu &= \sum_{Z=1}^{L_\mu-1} \frac{\alpha^Z-\alpha^{2L_\mu-Z}}{1-\alpha^{2L_\mu}} = \frac{\alpha (1-\alpha^{L_\mu-1})(1-\alpha^{L_\mu})}{(1-\alpha)(1-\alpha^{2L_\mu})} \label{si:eq:exprEnd} \end{align} \section{Results in the high density limit} \subsection{Scaling of the characteristic function} Using computer algebra (Mathematica), we obtain the following result for the discrete Laplace transform of the second characteristic function: \begin{multline} \label{si:eq:psiResSI} \lim_{\rho_0\to 0}\frac{\hat \psi(\vec k,
\xi)}{\rho_0} = \frac{1}{(1-\alpha^2)(1-\xi)} \sum_{n=0}^{N-1} \sum_{i=1}^{N-n} \alpha^{\mathcal{L}_i^n} \bigg\{ 2\alpha (1-\alpha^{L_{i-1}})(1-\alpha^{L_{i+n}}) \cos(k_i + \dots + k_{i+n}) \\ + (1-\alpha) Q_n(k_i, \dots, k_{i+n}) + C \bigg\} \end{multline} with $C$ a constant enforcing $\hat\psi(\vec k =\vec 0) = 0$ and \begin{align} \mathcal{L}_i^n &= L_i + \dots + L_{i+n-1} \\ Q_2(k_1, k_2) &= \alpha^{L_1}\(e^{ik_1} + e^{-ik_2}\) \label{si:eq:SI_Q2} \\ Q_3(k_1, k_2, k_3) &= \alpha^{L_1}\(e^{ik_1} + e^{-ik_2}\) + \alpha^{L_2}\(e^{ik_2} + e^{-ik_3}\) + \alpha^{L_1+L_2}\(e^{i(k_1+k_2)} + e^{-i(k_2+k_3)} + 2\cos k_2\) \end{align} Similar expressions exist for $Q_n, n>3$. We define the total length $L = L_1 + \dots + L_{N-1}$ and the rescaled variables $l_i^n = \mathcal{L}_i^n/L$. We define $p=1-\xi$ and $\tilde p = p L^2$. We take the limit $L\to\infty$ keeping $\tilde p$ constant; this corresponds to a large-time limit. The asymptotic behavior of $\alpha$ is given by~: \begin{equation} \alpha = \frac{1-\sqrt{1-\xi^2}}{\xi} = 1-\sqrt{2p} + \mathcal{O}_{p\to 0}(p) \end{equation} so that \begin{equation} \alpha^{rL} \underset{L\to\infty}{\sim} e^{-r\sqrt{2\tilde p}} \end{equation} Then (\ref{si:eq:psiResSI}) becomes~: \begin{multline} \label{si:eq:psiResSI2} \lim_{\rho_0\to 0}\frac{\hat \psi(\vec k, p=\tilde p/L^2)}{\rho_0} \underset{L\to\infty}{\sim} \frac{L^3}{\sqrt{2} \tilde p^{3/2}} \sum_{n=0}^{N-1} \sum_{i=1}^{N-n} \(e^{-\sqrt{2\tilde p}\, l_i^n} - e^{-\sqrt{2\tilde p}\, l_{i-1}^{n+1}} - e^{-\sqrt{2\tilde p}\, l_i^{n+1}} + e^{-\sqrt{2\tilde p}\, l_{i-1}^{n+2}} \) \\ \times \(\cos(k_i + \dots + k_{i+n}) - 1\) \end{multline} The following continuous inverse Laplace transform is known: \begin{equation} \hat h(p) = \frac{e^{-r\sqrt{2p}}}{p^{3/2}} \Leftrightarrow h(t) = 2\sqrt\frac{t}{\pi} g\(\frac{r}{\sqrt{2t}}\) \end{equation} \begin{equation} g(u) = e^{-u^2} - \sqrt\pi\, u\, \erfc(u) \end{equation} Equation (\ref{si:eq:psiResSI2}) can then be inverted and we find: \begin{multline}
\lim_{\rho_0\to 0}\frac{\psi(\vec k, t)}{\rho_0} \underset{t\to\infty}{\sim} \sqrt\frac{2t}{\pi} \sum_{n=0}^{N-1} \sum_{i=1}^{N-n} \left[ g\(\frac{\mathcal{L}_i^n}{\sqrt{2t}}\) - g\(\frac{\mathcal{L}_{i-1}^{n+1}}{\sqrt{2t}}\) - g\(\frac{\mathcal{L}_i^{n+1}}{\sqrt{2t}}\) + g\(\frac{\mathcal{L}_{i-1}^{n+2}}{\sqrt{2t}}\) \right] \\ \times \(\cos(k_i + \dots + k_{i+n}) - 1\) \end{multline} \subsection{Effects of the initial conditions on the odd cumulants} We consider two TPs at an initial distance $L$. We show that, due to the fact that we impose their initial positions, the two TPs separate slightly from one another. From (\ref{si:eq:psiResSI}) and (\ref{si:eq:SI_Q2}) we obtain: \begin{equation} \lim_{\rho_0\to 0} \frac{\langle X_2\rangle (\xi)}{\rho_0} = \frac{\alpha^L}{(1+\alpha)(1-\xi)} \equi{L\to\infty} L^2 \frac{e^{-\sqrt{2\tilde p}}}{2\tilde p} \end{equation} with $\tilde p = (1-\xi)L^2$. The inversion of the Laplace transform gives: \begin{equation} \lim_{\rho_0\to 0} \frac{\langle X_2\rangle (t)}{\rho_0} \equi{t\to\infty} \frac{1}{2} \text{erfc}\(\frac{L}{\sqrt{2t}}\) \end{equation} and similarly \begin{equation} \lim_{\rho_0\to 0} \frac{\langle X_1\rangle (t)}{\rho_0} \equi{t\to\infty} -\frac{1}{2} \text{erfc}\(\frac{L}{\sqrt{2t}}\) \end{equation} We indeed see that the two tracers separate a little bit. This effect is obvious when $L=1$: initially the two TPs are on neighboring sites, and at large time there is a probability $\rho_0$ that there is a vacancy in between: $\langle X_2-X_1\rangle(t\to\infty) = \rho_0$. Similar effects are expected on all the odd cumulants in the $N$-tag problem. These effects also add a term of order $t^0$ in the even cumulants. \subsection{Large deviation function for a single TP} For a single tracer at high density our approach (which coincides with \cite{Illien:2013}) gives: \begin{equation} \psi(k, t) = \mu_t \(\cos k - 1\) \end{equation} with $\mu_t = \rho_0 \sqrt\frac{2t}{\pi}$.
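As a quick consistency check (ours), the single-tracer characteristic function above implies that all even cumulants equal $\mu_t$ while all odd ones vanish; a finite-difference sketch with $\mu_t$ set to $1$:

```python
import math

# Single-TP second characteristic function: psi(k) = mu_t * (cos k - 1).
# From the expansion psi(k) = sum_n kappa_n (ik)^n / n!, all even cumulants
# equal mu_t and all odd ones vanish. Check kappa_2 and kappa_4 by finite
# differences (mu_t = 1 for simplicity).

def psi(k, mu=1.0):
    return mu * (math.cos(k) - 1.0)

h = 1e-2
# kappa_2 = -psi''(0) and kappa_4 = psi''''(0), from the (ik)^n / n! expansion
k2 = -(psi(h) - 2 * psi(0.0) + psi(-h)) / h**2
k4 = (psi(2 * h) - 4 * psi(h) + 6 * psi(0.0) - 4 * psi(-h) + psi(-2 * h)) / h**4
```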
The G\"artner-Ellis theorem gives: \begin{align} \label{si:eq:SM_ld1} P\(Y_i=\mu_t\tilde y\) &\asymp e^{-\mu_t I(\tilde y)} \\ I(\tilde y) &= \sup_{q\in\mathbb{R}} \(q \tilde y - (\cosh q - 1)\) \end{align} Solving for the extremum gives: \begin{equation} e^{\pm q} = \pm \tilde y + \sqrt{1 + \tilde y^2} \end{equation} and finally: \begin{equation} I(\tilde y) = 1 - \sqrt{1+\tilde y^2} + \tilde y\ln\left[\tilde y + \sqrt{1+\tilde y^2}\right] \end{equation} \subsection{Large deviation function for $N$ TPs} We assume that the limits $\rho_0\to 0$ and $t\to\infty$ can be exchanged (this would need to be proven) and we write \begin{multline} \label{si:eq:psiResSI3} \psi(i\vec q, t) \underset{\substack{\rho_0\to 0\\ t\to\infty}}{\sim} \mu_t \sum_{n=0}^{N-1} \sum_{i=1}^{N-n} \left[ g\(\frac{\Lambda_i^n}{\sqrt\pi}\) - g\(\frac{\Lambda_{i-1}^{n+1}}{\sqrt\pi}\) - g\(\frac{\Lambda_i^{n+1}}{\sqrt\pi}\) + g\(\frac{\Lambda_{i-1}^{n+2}}{\sqrt\pi}\) \right] \\ \times \(\cosh(q_i + \dots + q_{i+n}) - 1\) \end{multline} with $\mu_t = \rho_0\sqrt\frac{2t}{\pi}$ and $\Lambda_i^n = L_i^n \sqrt\frac{\pi}{2t}$. In the following, the limits will be implicit. The Gärtner-Ellis theorem \cite{Touchette:2009} for $\mu_t\to\infty$ then gives us the probability distribution (at exponential order, denoted $\asymp$). 
\begin{align} P\(\{Y_i = \mu_t\tilde y_i\}\) &\asymp e^{-\mu_t J(\{\tilde y_i\})} \\ J(\{\tilde y_i\}) &= \sup_{\{q_i\}\in\mathbb{R}} \(\sum_{i=1}^N q_i \tilde y_i - \mu_t^{-1} \psi(-i\vec q)\) \label{si:eq:funJSI1} \end{align} To simplify the problem we define the following variables: \begin{equation} Y = \frac{Y_1 + Y_N}{2} \qquad D_i = \frac{Y_{i+1}-Y_i}{2}\quad (i=1,\dots,N-1) \end{equation} \begin{equation} u = q_1+\dots+q_N \qquad v_i = -q_1 - \dots - q_i + q_{i+1} + \dots + q_N\quad (i=1,\dots,N-1) \end{equation} \textbf{Small rescaled length.} \begin{equation} g(\Lambda/\sqrt{\pi}) = 1 - \Lambda + \mathcal{O}_{\Lambda\to 0}(\Lambda^2) \end{equation} At first order in the rescaled lengths, only the ``boundary'' terms contribute in (\ref{si:eq:psiResSI3}). Using the variables defined above, one checks that (\ref{si:eq:funJSI1}) becomes simpler: \begin{multline} J(\tilde y, \{\tilde d_i\}) \underset{\Lambda_i \ll 1}{\sim} \sup_{u,\{v_i\}\in\mathbb{R}} \bigg\{ u\tilde y + \sum_{i=1}^{N-1} v_i\tilde d_i - \(\cosh(u) - 1\) - \sum_{i=1}^{N-1} \Lambda_i\(2\cosh\frac{u}{2} \cosh\frac{v_i}{2} - \cosh(u) - 1\) \bigg\} \end{multline} This extremum can only be solved for in the special cases defined in the main text: $\Lambda_i = 0$, marginal probability of the distances (this corresponds to $u=0$) and the Gaussian limit ($\tilde y \ll 1$, $\tilde d_i \ll 1$). \textbf{Two tracers, arbitrary length.} One should note that for two tracers, (\ref{si:eq:funJSI1}) is already rather simple if written with the right variables: we don't need to assume $\Lambda\ll 1$. \begin{equation} J(\tilde y, \tilde d) = \sup_{u,v\in\mathbb{R}} \bigg\{ u\tilde y + v\tilde d - \(\cosh(u) - 1\) - \(1-g\(\frac{\Lambda}{\sqrt{\pi}}\)\)\(2\cosh\frac{u}{2} \cosh\frac{v}{2} - \cosh(u) - 1\) \bigg\} \end{equation} This expression is used to provide a numerical prediction in the main text.
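The two-tracer supremum above can be evaluated by a brute-force grid search; this is our own sketch (grid bounds and resolution are arbitrary), checking only that the rate function vanishes at the typical value and is positive away from it:

```python
import math

# g(u) = exp(-u^2) - sqrt(pi) * u * erfc(u), as defined in the text.
def g(u):
    return math.exp(-u * u) - math.sqrt(math.pi) * u * math.erfc(u)

def J_num(y, d, Lam, qmax=5.0, n=200):
    """Brute-force sup over (u, v) of the two-tracer rate function.

    Coarse grid sketch: qmax and n are arbitrary choices, adequate for
    moderate values of (y, d)."""
    c = 1.0 - g(Lam / math.sqrt(math.pi))
    grid = [-qmax + 2 * qmax * i / n for i in range(n + 1)]
    return max(u * y + v * d
               - (math.cosh(u) - 1.0)
               - c * (2 * math.cosh(u / 2) * math.cosh(v / 2)
                      - math.cosh(u) - 1.0)
               for u in grid for v in grid)

# The rate function vanishes at the typical value and is positive elsewhere.
print(J_num(0.0, 0.0, 1.0), J_num(0.5, 0.2, 1.0))
```

By symmetry of the objective in $v\to-v$, the result is also even in $\tilde d$.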
\subsection{Numerical verification of the expression of $\kappa^{(2)}_{11}$ at arbitrary density} From the Edwards-Wilkinson equation, one expects~: \begin{equation}\label{si:eq:cumEW} \kappa^{(2)}_{11} = \kappa^{(1)}_{2} g\(\frac{1}{\sqrt{2\tau}}\) = \frac{1-\rho}{\rho} \sqrt\frac{2t}{\pi} g\(\frac{1}{\sqrt{2\tau}}\) \end{equation} This behavior is in good agreement with numerical simulations (see Fig.~\ref{si:fig:cumInterm}, left). For the other cumulants, we showed that \begin{equation} \label{si:eq:equal1N} \lim_{t\to\infty}\frac{\kappa^{(N)}_{p_1, \dots, p_N}}{\sqrt{t}} = \lim_{t\to\infty}\frac{\kappa^{(1)}_{p_1 + \dots + p_N}}{\sqrt{t}} = B_{p_1+\dots+p_N}, \end{equation} where the constants $B_{k}$ characteristic of a single tracer have been determined in Ref.~\cite{Imamura:2017}. \begin{figure} \includegraphics[width=15cm]{figCumInterm2} \caption{Evolution of the cumulants associated with two TPs at different densities and different distances. The cumulants are rescaled by the single-tag cumulants, following \eqref{si:eq:cumEW}. The dashed line corresponds to $g((2\tau)^{-1/2})$. } \label{si:fig:cumInterm} \end{figure} \subsection{Argument for the breaking of our scaling shape at arbitrary density} In analogy with \eqref{si:eq:cumEW}, one would like to be able to make the following bold conjecture~: \[\kappa^{(2)}_{p, 2n-p} = B_{2n} \sqrt{t}\, g\((2\tau)^{-1/2}\)\] Unfortunately we show that this is incompatible with the law of the distance \eqref{si:eq:predDist}.
Let us focus on the 4th cumulant of the distance (we denote by $\langle\cdot\rangle_c$ the cumulants)~: \begin{equation} \langle (X_2-X_1)^4 \rangle_c = \langle X_1^4\rangle_c + \langle X_2^4\rangle_c -4\langle X_1^3 X_2 \rangle_c - 4\langle X_1 X_2^3 \rangle_c + 6\langle X_1^2 X_2^2 \rangle_c \end{equation} We \textit{assume} that~: \begin{align} \langle X_1^4\rangle_c = \langle X_2^4\rangle_c &= B_4\sqrt{t} + o(1) \\ \langle X_1^3 X_2\rangle_c = \langle X_1 X_2^3\rangle_c = \langle X_1^2 X_2^2\rangle_c &= B_4\sqrt{t} g\(\frac{1}{\sqrt{2\tau}}\) = B_4\sqrt{t} - B_4\sqrt\frac{\pi}{2} L + o(1) \end{align} This leads us to: \begin{equation} \label{si:eq:assumeEq} \langle (X_2-X_1)^4 \rangle_c = \sqrt{2\pi} B_4 L \end{equation} From Ref.~\cite{Imamura:2017}, \begin{equation} B_4 = \sqrt\frac{2}{\pi} \frac{1-\rho}{\rho^3} \left[1 - (4-(8-3\sqrt{2})\rho)(1-\rho) + \frac{12}{\pi} (1-\rho)^2\right] \end{equation} while from \eqref{si:eq:predDist}, \begin{equation} \langle (X_2-X_1)^4 \rangle_c \equi{L\to\infty} 2L \frac{1-\rho}{\rho^3} \(12 - 24\rho + 13\rho^2\) \end{equation} At an arbitrary density, this is inconsistent with \eqref{si:eq:assumeEq}, thus our conjecture must be wrong. Note that \eqref{si:eq:assumeEq} does hold as expected when $\rho\to 1$. In Fig.~\ref{si:fig:cumInterm} center (resp. right), we tried the following guess: $\kappa^{(2)}_{22} = B_4\sqrt{t}\, g\((2\tau)^{-1/2}\)$ (resp. $\kappa^{(2)}_{31} = B_4\sqrt{t}\, g\((2\tau)^{-1/2}\)$). We see that it is valid only at high density, as expected. \section{Description of numerical simulations} \subsection{Continuous time simulations on a lattice (for the cumulants)} $N_\text{parts}$ particles are put uniformly at random on the line of size $N_\text{size}$, except the $N$ tagged particles which are put deterministically on their initial positions. We used $N_\text{size}=5000$. Each particle has an exponential clock of time constant $\tau = 1$.
Thus, the whole system has an exponential clock of time constant $\tau_\text{all} = \tau / N_\text{parts}$. When it ticks, a particle is chosen at random and tries to move either to the left or to the right with probability $1/2$. If the arrival site is already occupied, the particle stays where it was. The cumulants of the $N$ TPs are averaged over 100\,000 to 500\,000 simulations to obtain their time dependence. \subsection{Vacancy-based simulations (for the probability distribution)} The previous approach does not enable one to get sufficient statistics to investigate the probability law. In the case of pointlike Brownian particles, Ref.~\cite{Sabhapandit:2015} was able to use the propagator of the displacement to directly obtain the state of the system at a given time and investigate the probability distribution. Here we used a numerical scheme close to our theoretical approach: at high density and in discrete time, we simulate the behavior of the vacancies, considered as independent random walkers. The displacement $\Delta x$ of a vacancy at (discrete) time $t$ is given by a binomial law: \begin{align} \Delta x &= 2 n_\text{right} - t \\ P(n_\text{right}, t) &= \frac{1}{2^t} \binom{t}{n_\text{right}} \end{align} One is able to recover the final positions of the TPs from the final positions of the vacancies. For two TPs at distance $L$, we put a vacancy at each site between the TPs with probability $\rho_0$ (the density of vacancies). We consider a number of sites $N_\text{sites}$ ($N_\text{sites} = 100\,000$) on the left of the first TP and on the right of the second TP, and we put a deterministic number of vacancies $N_\text{vac} = \rho_0 N_\text{sites}$ at random positions on these sites. We make $10^8$ repetitions of the simulation before outputting the probability law. \end{widetext} \end{document}
\section{Introduction} The relativistic heavy-ion collider experiments indicate that quark-gluon plasma (QGP) may have been produced in the laboratory \cite{torres2015flavor}. It is generally believed that some quark bound states can survive in the QGP. In thermal quantum chromodynamics (QCD), the thermal properties of the QGP can be probed by studying the behavior of these quark bound states in the hot medium. In 1986, Matsui and Satz pointed out that the suppression of $J/\psi$ could be a signature of QGP formation in relativistic heavy-ion collisions \cite{matsui1986j}. Since then, the melting of charmonium and bottomonium has been studied systematically. However, the dissociation temperature of the nucleon, the lightest baryon, has not been studied systematically. This is due to the difficulties of solving the three-body system and of obtaining the temperature-dependent quark-quark interaction potential within the nucleon. QCD is the fundamental theory of the strong interaction. It works well in the perturbative region but not in the non-perturbative region, so it is difficult to use QCD directly to study the thermal properties of quark bound states. In this situation, one has to resort to models \cite{karsch1988color,satz2006colour,alberico2005heavy,zhen2012dissociation} to study the dissociation of quarkonium states. At high temperature and density, the interaction between quarks is screened \cite{digal2005heavy} and the binding energy decreases. As a result, the nucleon starts to melt when the binding energy becomes low enough. The melting of the nucleon can be studied by solving the three-body Schr\"odinger equation. For this calculation, we need the interaction potentials among the quarks of the nucleon in the hot medium, which are temperature-dependent. Unfortunately, these potentials are not yet well understood.
The free energy of a static heavy three-quark system $F_{qqq}(r,T)$ can be calculated in lattice QCD, and the internal energy can be obtained by using a thermodynamic relation. In the present approach, the needed potential is assumed to be the internal energy, i.e. $V=F+sT$ with $s$ being the entropy $s=-\partial F/\partial T$. The temperature-dependent form of $F_{q\bar q}(r,T)$ can be constructed based on Debye-H\"uckel theory \cite{dixit1990charge}, as was done in Ref.~\cite{digal2005heavy}. Then, we can determine the T-dependent parameters in the free energy by fitting it to the lattice data. According to the relation between $F_{q\bar q}(r,T)$ and $F_{qqq}(r,T)$, we can obtain the free energy of the heavy three-quark system. We assume the conclusions are applicable to the light-quark system owing to the flavor independence of the strong interaction. After constructing the potential of the nucleon system at finite temperature, we can obtain the temperature dependence of the binding energy and radius by solving the corresponding Schr\"odinger equation. The dissociation temperature is the point where the binding energy decreases to zero. The Gaussian expansion method (GEM), which is an efficient and powerful method for few-body systems \cite{hiyama2003gaussian}, is employed to calculate the dissociation temperature of the nucleon in this paper. This paper is organized as follows. In Sec.~\ref{sec:quakonium}, we demonstrate the reliability of GEM for studying the melting of charmonium and bottomonium by comparing our results with others. In Sec.~\ref{sec:forma}, we construct the potential of the nucleon and apply GEM to solve the corresponding Schr\"odinger equation. In Sec.~\ref{sec:res}, we show the results in quenched and 2-flavor QCD, respectively. Sec.~\ref{sec:summ} contains the summary and conclusions.
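The thermodynamic step $V = U = F + sT = F - T\,\partial F/\partial T$ can be sketched numerically; the quadratic toy free energy below is purely illustrative and is not lattice data:

```python
# Internal energy from free energy: U(T) = F(T) + s*T with s = -dF/dT,
# i.e. U = F - T * dF/dT, evaluated via a central finite difference.
# F(T) = -a*T^2 is a toy model (then exactly U = +a*T^2), NOT lattice data.

a = 0.8

def F(T):
    return -a * T * T

def U(T, h=1e-5):
    dFdT = (F(T + h) - F(T - h)) / (2 * h)   # central difference for dF/dT
    return F(T) - T * dFdT

T = 0.3
print(U(T), a * T * T)  # the two values should agree
```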
\section{The Results on Dissociation Temperature of Quarkonium}\label{sec:quakonium} Before studying the dissociation of the nucleon, we test the reliability of GEM for the dissociation temperatures of quarkonium by comparing our results with others. To compare with Satz's results, the potential of quarkonium at finite temperature we use is the same as in Satz's work \cite{satz2006colour}. The dissociation temperatures of charmonium and bottomonium from Ref.~\cite{satz2006colour} and from our calculation are listed in Tables~\ref{tab:temp} and \ref{tab:temp2}, \begin{table}[!h] \begin{center} \caption{\label{tab:temp}Dissociation temperature $T_d/T_c$ of charmonium in Ref.~\cite{satz2006colour} and our results.} \renewcommand\arraystretch{1.8} \begin{tabular}{cccc} \hline Charmonium & 1S & 1P & 2S \\ \hline Ref.~\cite{satz2006colour} & 2.10 & 1.16 & 1.12 \\ \hline Our Results & 2.08 & 1.16 & 1.14 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!h] \begin{center} \caption{\label{tab:temp2}Dissociation temperature $T_d/T_c$ of bottomonium in Ref.~\cite{satz2006colour} and our results.} \renewcommand\arraystretch{1.8} \begin{tabular}{cccccc} \hline Bottomonium & 1S & 1P & 2S & 2P & 3S \\ \hline Ref.~\cite{satz2006colour} & $>$ 4.0 & 1.76 & 1.60 & 1.19 & 1.17 \\ \hline Our Results & 5.83 & 1.72 & 1.59 & 1.18 & 1.17\\ \hline \end{tabular} \end{center} \end{table} which show that our results are consistent with those in Ref.~\cite{satz2006colour}. So GEM can give accurate results on the dissociation temperature of quarkonium. Its accurate binding energies and wave functions \cite{hiyama2003gaussian} make GEM very suitable for studying the dissociation temperatures of quark bound states (more detail can be found in Appendix A). In the following, we will use this method to calculate the dissociation temperature of the nucleon.
\section{Formalism}\label{sec:forma} \subsection{Constituent Quark Model} The constituent quark model is a non-relativistic quark model \cite{yang2008dynamical}. In the constituent quark model, baryons are formed by three constituent quarks, which are confined by a confining potential and interact with each other \cite{yoshida2015spectrum}. The potential of a baryon can be described by a sum of the potentials of the corresponding two-quark systems. In Kaczmarek's work \cite{PhysRevD.75.054504}, it has been found in lattice QCD that the potential of a diquark system is about half of that of the corresponding quark-antiquark system, i.e. $V_{q q}=\frac 12 V_{q \bar q}$. The simplest and most frequently used potential for a $q \bar q$ system is the Cornell potential \cite{digal2005heavy}, \begin{equation} V_{q \bar q}(r) = -\frac{\alpha}{r} + \sigma r \label{eq:cornel} \end{equation} where $\alpha$ is the coupling constant, and $\sigma$ is the string tension. In the present work, we neglect the spin-dependent part of the potential. Thus our Hamiltonian is written as \begin{equation} H = \sum_{i=1}^3 (m_i + \frac{\boldsymbol{p}_i^2}{2m_i}) - T_{cm} + \sum_{1=i<j}^3 \frac 12 V(r_{ij}) \label{eq:hamil} \end{equation} \begin{equation} V(r_{ij}) = \sigma r_{ij} - \frac{\alpha}{r_{ij}} \label{eq:cor1} \end{equation} where $m_i$ is the constituent quark mass of the $i$-th quark, and $T_{cm}$ is the kinetic energy of the center of mass. $\boldsymbol{r}_{ij}=\boldsymbol{r}_i-\boldsymbol{r}_j$ is the relative coordinate between the $i$-th and $j$-th quarks. In this model, the constituent mass of the light quarks (u and d) is taken to be $300$ MeV. The parameters of the Cornell potential are: $\alpha=1.4$, $\sqrt{\sigma}=0.131$ GeV. Solving the corresponding Schr\"odinger equation, $ H\Psi_{total}=E_m \Psi_{total}$, with GEM, we can get the mass $E_m$ and the corresponding wave function $\Psi_{total}$ of the nucleon.
We define the radius of the nucleon as \begin{equation} R=\frac 13 \sum_{i=1}^3 \sqrt{\langle r_i^2 \rangle} \label{eq:radii} \end{equation} with \begin{equation} \langle r_i^2 \rangle=\int \Psi^*_{total} r_i^2 \Psi_{total} d\tau \label{eq:radii1} \end{equation} where $r_i$ is the distance between the center of the nucleon and the $i$-th quark. Using the calculated wave function, we can compute the radius of the nucleon. The calculated mass and radius of the nucleon are $939$ MeV and $0.83933$ fm, respectively, while the corresponding experimental values are about $939$ MeV and $0.841$ fm. We can see that this model gives a good estimation of the properties of the nucleon even if the spin-dependent part is neglected. So it is reasonable for us to use this potential model to study the dissociation of the nucleon. Of course, we should note that the spin-dependent part plays an important role in the baryon spectrum. \subsection{Wave Function} Here, we solve the Schr\"odinger equation with GEM. In this method, three sets of Jacobi coordinates (Fig.~\ref{fig:jacobi}) are introduced to express the three-quark wave function. The Jacobi coordinates in each channel $c$ ($c=1,2,3$) are defined as \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{jacobi.png} \caption{\label{fig:jacobi} Three sets of Jacobi coordinates for a three-body system \cite{hiyama2003gaussian}.} \end{figure} \begin{equation} \boldsymbol{r}_i=\boldsymbol{x}_j-\boldsymbol{x}_k \label{eq:recoor1} \end{equation} \begin{equation} \boldsymbol{R}_i=\boldsymbol{x}_i-\frac {m_j\boldsymbol{x}_j+m_k\boldsymbol{x}_k}{m_j+m_k} \label{eq:recoor2} \end{equation} where $\boldsymbol{x}_i$ is the coordinate of the $i$-th quark and $(i,j,k)$ are given by Table~\ref{tab:jacobi}.
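For concreteness, a minimal sketch (ours) of the Jacobi coordinates of Eqs.~(\ref{eq:recoor1}) and (\ref{eq:recoor2}), with the cyclic channel assignments and hypothetical one-dimensional positions and equal constituent masses:

```python
# Jacobi coordinates for a three-body system (1D positions for brevity):
#   r_i = x_j - x_k,   R_i = x_i - (m_j x_j + m_k x_k) / (m_j + m_k)
# with (i, j, k) assigned cyclically over the three channels.

CHANNELS = {1: (1, 2, 3), 2: (2, 3, 1), 3: (3, 1, 2)}

def jacobi(c, x, m):
    """x, m: dicts particle -> position/mass; c: channel 1, 2 or 3."""
    i, j, k = CHANNELS[c]
    r = x[j] - x[k]
    R = x[i] - (m[j] * x[j] + m[k] * x[k]) / (m[j] + m[k])
    return r, R

x = {1: 0.0, 2: 1.0, 3: 3.0}   # illustrative positions, not from the paper
m = {1: 1.0, 2: 1.0, 3: 1.0}   # equal constituent masses
for c in (1, 2, 3):
    print(c, jacobi(c, x, m))
```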
\begin{table}[!h] \begin{center} \caption{\label{tab:jacobi}The quark assignments $(i,j,k)$ for Jacobi channels.} \renewcommand\arraystretch{1.8} \begin{tabular}{cccc} \hline channel & i & j & k \\ \hline 1 & 1 & 2 & 3 \\ \hline 2 & 2 & 3 & 1 \\ \hline 3 & 3 & 1 & 2 \\ \hline \end{tabular} \end{center} \end{table} The total wave function is given as a sum over the three rearrangement channels ($c=1,2,3$) \begin{equation} \Psi_{total}^{JM} = \sum_{c,\alpha} C_{c,\alpha} \Psi_{JM,\alpha}^{(c)}(\boldsymbol{r}_c,\boldsymbol{R}_c) \label{eq:totalwave} \end{equation} where the index $\alpha$ represents ($s,S,l,L,I,n,N$). Here $s$ is the spin of the $(i,j)$ quark pair, $S$ is the total spin, $l$ and $L$ are the orbital angular momenta for the coordinates $\boldsymbol{r}$ and $\boldsymbol{R}$, respectively, and $I$ is the total orbital angular momentum. The wave function for channel $c$ is given by \begin{equation} \Psi_{JM,\alpha}^{(c)}(\boldsymbol{r}_c,\boldsymbol{R}_c) = \phi_c \otimes [X_{S,s}^{(c)} \otimes \Phi_{l,L,I}^{(c)}]_{JM} \otimes H_{T,t}^{(c)} \end{equation} as given in Ref.~\cite{yoshida2015spectrum}. The orbital wave function $\Phi_{l,L,I}^{(c)}$ is given in terms of the Gaussian basis functions written in the Jacobi coordinates $\boldsymbol{r}_c$ and $\boldsymbol{R}_c$ \begin{equation} \Phi_{l,L,I}^{(c)} = [\phi_l^{(c)}(\boldsymbol{r}_c) \psi_L^{(c)}(\boldsymbol{R}_c)]_I \label{eq:orbital} \end{equation} \begin{equation} \phi_{lm}^{(c)}(\boldsymbol{r}_c) = N_{nl} r_c^l e^{-\nu_n r_c^2} Y_{lm}(\hat{\boldsymbol{r}}_c) \label{eq:orbital1} \end{equation} \begin{equation} \psi_{LM}^{(c)}(\boldsymbol{R}_c) = N_{NL} R_c^L e^{-\lambda_N R_c^2} Y_{LM}(\hat{\boldsymbol{R}}_c) \label{eq:orbital2} \end{equation} where the range parameters, $\nu_n$ and $\lambda_N$, are given by \begin{gather} \nu_n = 1/{r_n^2}, \quad r_n = r_1 a^{n-1} \quad (n=1,\dots,n_{max}), \notag \\ \lambda_N = 1/{R_N^2}, \quad R_N = R_1 A^{N-1} \quad (N=1,\dots,N_{max}). \label{eq:rangepara} \end{gather} In Eqs.
(\ref{eq:orbital1}) and (\ref{eq:orbital2}), $N_{nl}$ ($N_{NL}$) \cite{hiyama2003gaussian} denotes the normalization constant of the Gaussian basis. The coefficients $C_{c,\alpha}$ of the variational wave function, Eq.~(\ref{eq:totalwave}), are determined by the Rayleigh-Ritz variational principle. \subsection{Potential model for nucleon} The potential of the nucleon at zero temperature has been discussed above, and its parameters have been determined by fitting the properties of the nucleon. To determine the dissociation temperature of the nucleon, we need the potential in the hot medium, i.e. $V_{qqq}(\boldsymbol{r},T)$ (the index $q$ represents a u or d quark). Here, we assume that the potential is just the internal energy \begin{equation} \begin{split} V_{qqq}(\boldsymbol{r},T) &= U_{qqq}(\boldsymbol{r},T) \\ &= F_{qqq}(\boldsymbol{r},T) + sT \end{split} \label{eq:poten} \end{equation} where $s$ is the entropy $s=-\partial F_{qqq}/\partial T$. Kaczmarek's works \cite{hubner2008heavy,hubner2005free,huebner2005heavy} show that the color-singlet free energy of the heavy three-quark system ($F_{qqq}^1$) can be described by the sum of the antitriplet free energies of the corresponding diquark systems ($F_{qq}^{\overline 3}$) plus self-energy contributions when the temperature is above $T_c$. It can be expressed as \begin{equation} F_{qqq}^1(P,T) \simeq \sum_{i<j} F_{qq}^{\overline 3}(R_{ij},T)-3F_q(T) \label{eq:free} \end{equation} where $P=\sum_{i<j}R_{ij}$ and the self energy is $F_q(T)=\frac 12 F_{qq}^{\overline 3}(\infty,T)$. In Ref.~\cite{PhysRevD.75.054504}, O.~Kaczmarek suggests a simple relation between the free energies of antitriplet $qq$ states and color-singlet $q\bar{q}$ states \begin{equation} F_{q\bar{q}}^1(r,T) \simeq 2(F_{qq}^{\overline 3}(r,T)-F_q(T)) \label{eq:free1} \end{equation} The form of $F_{q\bar{q}}^1$ can be obtained based on studies of screening in Debye-H\"uckel theory.
It can be written as \cite{digal2005heavy} \begin{eqnarray} F_{q\bar{q}}^1(r,T)=-\frac{\alpha}{r}\left[e^{-\mu r}+\mu r\right]+\frac{\sigma}{\mu}[\frac{\Gamma\left(1/4\right)}{2^{3/2} \Gamma\left(3/4\right)} \notag \\ -\frac{\sqrt{\mu r}}{2^{3/4}\Gamma\left(3/4\right)}K_{1/4}\left[\left(\mu r\right)^2+\kappa \left(\mu r\right)^4\right]] \label{eq:free2} \end{eqnarray} where the screening mass $\mu$ and the parameter $\kappa$ are temperature-dependent, and $K_{1/4}[x]$ is the modified Bessel function. We can determine the T-dependent $\mu$ and $\kappa$ by fitting $F_{q\bar{q}}^1(r,T)$ to the lattice results obtained in quenched \cite{kaczmarek2002heavy} and 2-flavor \cite{kaczmarek2005static} QCD. At $r=\infty$, the free energy $F_{q\bar{q}}^1(T)$ is written as \begin{equation} F_{q\bar{q}}^1(T) = \frac {\sigma}{\mu\left(T\right)} \frac {\Gamma\left(1/4\right)}{2^{3/2} \Gamma\left(3/4\right)} - \alpha \mu\left(T\right) \label{eq:inffree} \end{equation} Thus, $\mu(T)$ is given as a function of $F_{q\bar{q}}^1(T)$ \begin{equation} \mu\left(T\right) = \frac {\left[\sqrt{{F_{q\bar{q}}^1(T)}^2+4\sigma\alpha \frac {\Gamma\left(1/4\right)}{2^{3/2} \Gamma\left(3/4\right)}}-F_{q\bar{q}}^1(T)\right]}{2\alpha} \label{eq:fmu} \end{equation} Once we obtain the temperature dependence of $\mu(T)$, we fit Eq.~(\ref{eq:free2}) to the lattice data to obtain $\kappa(T)$. The results for $\mu(T)$ and $\kappa(T)$ are shown in Figs.~\ref{fig:miu} and \ref{fig:kai}, respectively. In Fig.~\ref{fig:lattfree}, we show our fit curves (solid lines) together with the lattice results. We can see that the resulting $F_{q\bar q}^1(r,T)$ fits the lattice results quite well for all $r$ and in a broad range of temperatures from $T_c$ to $4T_c$ in the two cases. At higher temperatures, the resulting $F_{q\bar q}^1(r,T)$ cannot be fitted as well to the lattice results in quenched QCD; there can be higher-order corrections to the Poisson equation \cite{digal2005heavy}.
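Equation (\ref{eq:fmu}) simply solves the quadratic $\alpha\mu^2 + F_{q\bar q}^1(T)\mu - \sigma c = 0$ obtained from Eq.~(\ref{eq:inffree}), with $c=\Gamma(1/4)/(2^{3/2}\Gamma(3/4))$; a round-trip check (ours), using the paper's $\alpha$ and $\sqrt{\sigma}$ and an arbitrary trial screening mass:

```python
import math

# Round trip: mu -> F_infinity via Eq. (inffree), then back via Eq. (fmu).
# alpha and sigma are from the text; mu0 is an arbitrary trial value (GeV).

alpha = 1.4
sigma = 0.131 ** 2                      # sqrt(sigma) = 0.131 GeV
c = math.gamma(0.25) / (2 ** 1.5 * math.gamma(0.75))

def F_inf(mu):                          # Eq. (inffree)
    return sigma * c / mu - alpha * mu

def mu_of_F(F):                         # Eq. (fmu)
    return (math.sqrt(F * F + 4 * sigma * alpha * c) - F) / (2 * alpha)

mu0 = 0.5
print(mu_of_F(F_inf(mu0)))  # recovers mu0
```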
\begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{miu.png} \includegraphics[width=0.4\textwidth]{2flavormiu.png} \caption{\label{fig:miu} Results for $\mu(T)$ in quenched (upper figure) and 2-flavor (lower figure) QCD.} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{kai.png} \includegraphics[width=0.4\textwidth]{2flavorkai.png} \caption{\label{fig:kai}Results for $\kappa(T)$ in quenched (upper figure) and 2-flavor (lower figure) QCD.} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{free.png} \includegraphics[width=0.4\textwidth]{2flavorfree.png} \caption{\label{fig:lattfree}Results for the free energy ($F_{q\bar{q}}^1$) in quenched (upper figure) and 2-flavor (lower figure) QCD.} \end{figure} To obtain the binding energies of the nucleon, we define an effective potential as \begin{equation} \tilde V_{qqq}(\boldsymbol{R} ,T) = V_{qqq}(\boldsymbol{R} ,T) - V_{qqq}(\infty,T) \label{eq:effpoten} \end{equation} Combining Eqs.~(\ref{eq:poten}-\ref{eq:free1},\ref{eq:effpoten}), we can get a relation between the effective potential and the free energy of the $q\bar q$ system \begin{equation} \tilde V_{qqq}(\boldsymbol{R},T) = \sum_{i<j} \frac 12 (\tilde F_{q\bar{q}}^1(R_{ij},T) - T\frac {\partial \tilde F_{q\bar{q}}^1(R_{ij},T)}{\partial T}) \label{eq:effpoten1} \end{equation} where \begin{equation} \tilde F_{q\bar{q}}^1(R_{ij},T) = F_{q\bar{q}}^1(R_{ij},T) - F_{q\bar{q}}^1(\infty,T) \label{eq:efffree} \end{equation} Replacing the potential term, $\sum_{1=i<j}^3 \frac 12 V(r_{ij})$, in Eq.~(\ref{eq:hamil}) with this effective potential $\tilde V_{qqq}(\boldsymbol{R},T)$, we get a new Hamiltonian for the nucleon at finite temperature, written as \begin{equation} H_{new} = \sum_{i=1}^3 \frac{\boldsymbol{p}_i^2}{2m_i} - T_{cm} + \tilde V_{qqq}(\boldsymbol{R},T) \label{eq:hamil1} \end{equation} Solving the corresponding Schr\"odinger equation, $$ H_{new}\Psi_{total}^{JM}=\epsilon(T) \Psi_{total}^{JM}, $$ with GEM,
we can get the binding energy $\Delta E(T)(=-\epsilon(T))$ and the corresponding wave function at finite temperature. Using the wave function, we can calculate the T-dependent radius according to Eq.~(\ref{eq:radii}). \section{Numerical Results}\label{sec:res} In Fig.~\ref{fig:bind}, we show the resulting binding-energy behaviour of the nucleon in quenched and 2-flavor QCD. We can see that there is little difference between the two lines. When the binding energy vanishes, the nucleon no longer exists, so $\Delta E(T)=0$ determines the dissociation temperature. From Fig.~\ref{fig:bind}, we find that the dissociation temperatures in quenched and 2-flavor QCD are about $3.0T_c$ and $3.3T_c$, respectively; in Fig.~\ref{fig:ra}, we show the corresponding nucleon radius. The dissociation temperature determined from Fig.~\ref{fig:ra} is consistent with that determined from Fig.~\ref{fig:bind}. It is seen that the divergence of the radius defines quite well the dissociation points in the two cases. The resulting dissociation temperatures differ only slightly between the two cases. \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{bindingenergy1.png} \caption{\label{fig:bind}Temperature dependence of the binding energy in quenched and 2-flavor QCD.} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{radii1.png} \caption{\label{fig:ra}Temperature dependence of the radius in quenched and 2-flavor QCD.} \end{figure} \section{Summary and Conclusion}\label{sec:summ} The free energy of the quark-antiquark system that we construct based on Debye-H\"uckel theory at finite temperature fits the lattice results quite well from $T_c$ to $4T_c$, but less well at higher temperatures. According to Kaczmarek's works, we can get a relation between the color-singlet free energy of the heavy $qqq$ system $F_{qqq}^1$ and the color-singlet free energy of the heavy $q\bar q$ system $F_{q\bar q}^1$, written as $F_{qqq}^1 \simeq \sum_{i<j} \frac 12 F_{q\bar q}^1$.
The dissociation temperatures of the nucleon that we calculate in quenched and 2-flavor QCD are about $3.0T_c$ and $3.3T_c$, respectively; the two results differ only slightly. Compared with $J/\psi$, the dissociation temperature of the nucleon is higher, so the nucleon is more difficult to melt than charmonium. In this work we neglect the spin-dependent part of the potential, which may have some effect on the resulting dissociation temperature. The effects arising from the spin-dependent part deserve further study. \section*{Acknowledgement} We are grateful to Emiko Hiyama, Pengfei Zhuang, Yunpeng Liu, Min He and Fan Wang for their work and helpful discussions. This work is supported in part by the National Natural Science Foundation of China (under Grants Nos. 11475085, 11535005, 11690030 and 11775118), the Fundamental Research Funds for the Central Universities (under Grant No. 020414380074), and the International Science \& Technology Cooperation Program of China (under Grant No. 2016YFE0129300).
\section{Introduction} Let $\tau(n) = \sum_{d|n} 1$ be the divisor function, and \begin{align*} S(\alpha) = \sum_{n\le x} \tau(n)e(n\alpha), \qquad e(\alpha) = e^{2\pi i \alpha}. \end{align*} In 2001 Br\"udern \cite{Brudern} considered the $L^1$ norm of $S(\alpha)$ and claimed to prove \begin{equation} \sqrt{x} \ll \int_0^1 |S(\alpha)| d\alpha \ll \sqrt{x}. \end{equation} However, the proof given there contains a mistake: it depends on a lemma which is false. In this note we prove the following result. \begin{theorem} \label{thm} We have \begin{equation} \sqrt{ x} \ll \int_0^1 |S(\alpha)|d\alpha\ll \sqrt{x}\log x. \end{equation} \end{theorem} The upper bound here is obtained by following Br\"udern's proof with corrections. The lower bound is based on the method Vaughan introduced to study the $L^1$ norm of exponential sums over primes \cite{Va}, and also makes use of a more recent result of Pongsriiam and Vaughan \cite{PV} on the divisor sum in arithmetic progressions. We do not know whether the upper bound or the lower bound reflects the actual size of the $L^1$ norm here. \section{Proof of the upper bound} Let $u$ and $v$ always be positive integers. Following Br\"udern, we have \[ \begin{split} S(\alpha) &= \sum_{n\le x} \left( \sum_{uv=n}1\right) e(n\alpha) \\&=\sum_{uv\le x}e( uv\alpha) \\&= 2\sum_{u\le\sqrt{x}}\sum_{u < v\le x/u} e( uv\alpha) + \sum_{u\le\sqrt{x}} e( u^2\alpha)\\ & := 2T(\alpha) + V(\alpha) . \end{split} \] By Cauchy-Schwarz and Parseval \[ \int_0^1 |V(\alpha)|\, d\alpha \le \left( \int_0^1 |V(\alpha)|^2\, d\alpha \right)^{\frac12} = \sqrt{ \lfloor \sqrt{x}\rfloor} \le x^{\frac14} ,\] and therefore by the triangle inequality \[ \int_0^1 |S(\alpha)|\, d\alpha = 2 \int_0^1 |T(\alpha)|\, d\alpha + O(x^{\frac14}). \] Thus to prove the upper bound in Theorem 1 we need to establish \begin{equation} \label{integral} \int_0^1 |T(\alpha)| d\alpha\ll\sqrt{x}\log x. \end{equation} We proceed as in the circle method.
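The decomposition $S(\alpha)=2T(\alpha)+V(\alpha)$ is an exact identity (the pairs $uv\le x$ split into $u<v$, $u>v$ and $u=v$), and can be checked numerically for small $x$; a short sketch, with function names of our choosing:

```python
import cmath
import math

def e(t):
    # e(alpha) = exp(2*pi*i*alpha)
    return cmath.exp(2j * math.pi * t)

def S(alpha, x):
    # sum_{n<=x} tau(n) e(n alpha), written as a sum over pairs uv <= x
    return sum(e(u * v * alpha)
               for u in range(1, x + 1)
               for v in range(1, x // u + 1))

def T(alpha, x):
    # sum over u <= sqrt(x), u < v <= x/u
    total = 0
    u = 1
    while u * u <= x:
        for v in range(u + 1, x // u + 1):
            total += e(u * v * alpha)
        u += 1
    return total

def V(alpha, x):
    # diagonal terms u = v, u^2 <= x
    total = 0
    u = 1
    while u * u <= x:
        total += e(u * u * alpha)
        u += 1
    return total

x, alpha = 50, 0.3141
assert abs(S(alpha, x) - (2 * T(alpha, x) + V(alpha, x))) < 1e-8
```

At $\alpha=0$ the sum $S(0,x)$ reduces to $\sum_{n\le x}\tau(n)$, which gives an independent sanity check.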
Clearly in \eqref{integral} we can replace the integration range $[0,1]$ by $[1/Q, 1+1/Q]$. By Dirichlet's theorem for any $\alpha \in [1/Q, 1+1/Q]$ we can find a fraction $\frac{a}{q}$, $1\le q \le Q$, $1\leq a \le q$, $(a,q)=1$, with $|\alpha - \frac{a}{q}| \le 1/(qQ)$. Thus the intervals $ [\frac{a}{q}- \frac{1}{qQ}, \frac{a}{q} +\frac{1}{qQ} ] $ cover the interval $[1/Q,1+1/Q]$. Taking \begin{equation} \label{Qequation} 2\sqrt{x} \le Q \ll \sqrt{x}, \end{equation} we obtain \[ \int_0^1 |T(\alpha)|d\alpha \le \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert T(\alpha)\right\vert d\alpha. \] On each interval $ [\frac{a}{q}- \frac{1}{2q\sqrt{x}}, \frac{a}{q} +\frac{1}{2q\sqrt{x}} ] $ we decompose $T(\alpha)$ into \[ T(\alpha) = F_q(\alpha) + G_q(\alpha) \] where \[ F_q(\alpha) = \sum_{\substack{u\le\sqrt{x}\\ q|u}}\sum_{u < v\le x/u} e( uv\alpha) \] and \[ G_q(\alpha) = \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}}\sum_{u < v\le x/u} e( uv\alpha), \] and have \[ \begin{split} \int_0^1 |T(\alpha)|d\alpha &\le \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert F_q(\alpha)\right\vert d\alpha + \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert G_q(\alpha)\right\vert d\alpha\\ & := I_F + I_G. \end{split} \] The upper bound in Theorem 1 follows from the following two lemmas. \begin{lemma} [Br\"udern] We have \[ I_F \ll \sqrt{x}. \] \end{lemma} \begin{lemma} We have \[ I_G \ll \sqrt{x}\log x . \] \end{lemma} In what follows we always assume $(a,q)=1$, and define the new variable $\beta$ by \begin{equation} \label{alpha} \alpha = \frac{a}{q} +\beta . 
\end{equation} \begin{proof}[Proof of Lemma 2] The proof follows from the estimate \begin{equation} \label{Festimate} F_q(\alpha) \ll \left\{ \begin{array}{ll} {\min\left(x,\frac{1}{|\beta|}\right)\frac{\log \frac{2\sqrt{x}}{q}}{q} ,} & \mbox{if $q\le \sqrt{x}$, $ |\beta|\le \frac{1}{2q\sqrt{x}}$;} \\ 0, & \mbox{if $q>\sqrt{x}$;} \\ \end{array} \right. \end{equation} since this implies \[ \begin{split} I_F &\ll \sum_{q\le \sqrt{x}} q \int_0^{1/(2q\sqrt{x})}\min\left(x, \frac{1}{|\beta|}\right) \frac{\log \frac{2\sqrt{x}}{q}}{q}\, d\beta \\& \ll \sum_{q\le \sqrt{x}} \log \frac{2\sqrt{x}}{q}\left(\int_0^{1/(2x)} x \, d\beta + \int_{1/(2x)}^{1/(2q\sqrt{x})} \frac{1}{\beta} \, d\beta \right) \\& \ll \sum_{q\le \sqrt{x}} \left( \log \frac{2\sqrt{x}}{q}\right)^2 \quad \ll \sqrt{x}. \end{split} \] To prove \eqref{Festimate}, we first note that the conditions $q|u$ and $u \le \sqrt{x}$ force $F_q(\alpha)=0$ when $q>\sqrt{x}$. Next, when $q\le \sqrt{x}$ we write $u=jq$ and have \[ F_q(\alpha) = \sum_{j\le \frac{\sqrt{x}}{q}}\sum_{jq < v\le \frac{x}{jq}} e( jqv\beta).\] Making use of the estimate \begin{equation} \label{basic} \sum_{N_1 < n \le N_2} e(n\alpha) \ll \min\left( N_2-N_1, \frac{1}{\Vert \alpha \Vert}\right)\end{equation} we have \[ F_q(\alpha) \ll \sum_{j\le \frac{\sqrt{x}}{q}}\min\left(\frac{x}{jq}, \frac{1}{\Vert jq\beta \Vert}\right).\] In this sum $jq\le \sqrt{x}$ so that $|jq\beta | \le |\beta| \sqrt{x}$, and hence the condition $ |\beta|\le \frac{1}{2q\sqrt{x}}$ implies $|jq\beta | \le \frac{1}{2q} \le \frac12$.
Hence $ \Vert jq\beta \Vert = jq|\beta|$, and we have \[ F_q(\alpha) \ll \sum_{j\le \frac{\sqrt{x}}{q} } \frac{1}{jq} \min\left(x, \frac{1}{|\beta|}\right)\ll \min\left(x, \frac{1}{|\beta|}\right)\frac{\log\frac{2\sqrt{x}}{q}}{q} .\] \end{proof} \begin{proof}[Proof of Lemma 3] The proof follows from the estimate \begin{equation} \label{Gestimate} G_q(\alpha) \ll (\sqrt{x} + q)\log q , \quad \text{for} \ \alpha = \frac{a}{q} + \beta, \quad |\beta| \le \frac{1}{2q\sqrt{x}}, \end{equation} since this implies \[ \begin{split} I_G &\ll \sum_{q\le Q} q \int_0^{1/(2q\sqrt{x})}(\sqrt{x} +q) \log q\, d\beta \\& \ll \frac{1}{\sqrt{x}}\, Q(\sqrt{x} +Q)\log Q \ll \sqrt{x}\log x \end{split} \] by \eqref{Qequation}. To prove \eqref{Gestimate}, we apply \eqref{basic} to the sum over $v$ in $G_q(\alpha)$ and obtain \[ G_q(\alpha) \ll \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \min\left( \frac{x}{u},\frac{1}{\Vert u\alpha \Vert}\right). \] Recalling $\Vert x \Vert = \Vert -x \Vert $ and the triangle inequality $\Vert x+y\Vert \le \Vert x\Vert +\Vert y\Vert$, and using the conditions $1\le u\le \sqrt{x}$, $q\nmid u$, $ |\beta| \le \frac{1}{2q\sqrt{x}}$, we have \[ \begin{split}\left \Vert u\alpha\right \Vert &\ge \left\Vert \frac{au}{q}\right \Vert - \Vert u \beta \Vert \\& \ge \left \Vert \frac{au}{q} \right\Vert - u|\beta| \\& \ge\left \Vert \frac{au}{q} \right\Vert - \frac{u} {2q\sqrt{x}} \\& \ge \left \Vert \frac{au}{q} \right \Vert - \frac{1} {2q} \\& \ge \frac12\left \Vert \frac{au}{q} \right \Vert , \end{split} \] and therefore \[ G_q(\alpha) \ll \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q} \right \Vert}.
\] Here $\left\Vert \frac{au}{q} \right \Vert = \frac{b}{q}$ for some integer $1\le b \le \frac{q}{2}$, and, since $(a,q)=1$, the integers $\{au: 1\le u \le \frac{q}{2}\}$ are distinct modulo $q$; hence \[ \sum_{1\le u \le \frac{q}{2} } \frac{1}{\left\Vert \frac{au}{q}\right \Vert} \ll \sum_{1\le b \le \frac{q}{2}}\frac{q}{b} \ll q\log q .\] If $q> \sqrt{x}$ then \[ \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q} \right \Vert} \le 2 \sum_{1\le u \le \frac{q}{2} } \frac{1}{\left\Vert \frac{au}{q}\right \Vert} \ll q\log q ,\] while if $q\le \sqrt{x}$ then the sum bounding $G_q(\alpha)$ can be split into $\ll \frac{ \sqrt{x}}{q}$ sums of this type and \[ G_q(\alpha) \ll \frac{\sqrt{x}}{q} (q\log q )\ll \sqrt{x}\log q. \] \end{proof} \section{Proof of the lower bound} Following Br\"udern, consider the intervals $|\alpha - \frac{a}{q}| \le 1/(4x)$ for $1\le a\le q\le Q $, where we take $\frac12 \sqrt{x} \le Q \le \sqrt{x}$. These intervals are pairwise disjoint because for two distinct fractions $|a/q - a'/q'| \ge 1/(qq') \ge 1/x$. (We will see later why these intervals have been chosen shorter than required to be disjoint.) Hence, using \eqref{alpha}, \[ \int_0^1 |S(\alpha)|\, d\alpha = \int_{1/Q}^{1+1/Q} |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} }\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{-1/(4x)}^{1/(4x)} \left\vert S\left(\frac{a}{q} +\beta\right)\right\vert d\beta. \] Next we follow Vaughan's method \cite{Va} and apply the triangle inequality to obtain the lower bound \[ \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left\vert \sum_{\substack{a = 1\\ (a,q)=1}}^qS\left(\frac{a}{q} +\beta\right)\right\vert d\beta.
\] Letting \begin{equation} \label{Uq} U_q(x;\beta) := \sum_{\substack{a = 1\\ (a,q)=1}}^qS\left(\frac{a}{q} +\beta\right) = \sum_{n\le x} \tau(n) c_q(n)e(n\beta) , \end{equation} where \[ c_q(n) = \sum_{\substack{a = 1\\ (a,q)=1}}^q e\left(\frac{an}{q}\right) \] is the Ramanujan sum, our lower bound may now be written as \begin{equation} \label{lowerbound} \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left\vert U_q(x;\beta)\right\vert d\beta. \end{equation} To complete the proof of the lower bound we need the following lemma, which we prove at the end of this section. \begin{lemma} \label{Usum} For $ q\ge 1$ we have \begin{align} U_q(x; 0) = \frac{\varphi(q)}{q}x(\log (x/q^2) + 2\gamma - 1) + O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon), \end{align} where $\gamma$ is Euler's constant. \end{lemma} \begin{proof}[Proof of the lower bound in Theorem 1] For any exponential sum $T(x; \beta) =\sum_{n\le x} a_n e(n\beta)$ we have by partial summation or direct verification \[ T(x; \beta) = e(\beta x) T(x;0) - 2\pi i \beta \int_1^x e(\beta y) T(y;0) \, dy.\] Taking $T(x;\beta) = U_q(x;\beta)$ we thus obtain from \eqref{lowerbound} and the triangle inequality \begin{equation} \label{lower} \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left( |U_q(x;0)| - 2\pi |\beta | \int_1^x |U_q(y; 0)| \, dy \right) d\beta.\end{equation} By Lemma 4, with $q\le \frac12\sqrt{x}$, \[ \begin{split} \int_1^x |U_q(y; 0)| \, dy & \le \frac{\varphi(q)}{q} \left( \int_1^x y | \log(y/q^2)| +(2\gamma -1)y \, dy\right) + O(xq\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon) \\& \le \frac{\varphi(q)}{q} \left( \int_1^{q^2} y \log(q^2/y) \, dy + \int_{q^2}^x y \log(y/q^2) \, dy+ \frac12 x^2 (2\gamma -1) \right) \\ & \hskip 2.5in + O(xq\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon) \\& = \frac{x}{2}\left( \frac{\varphi(q)}{q}\left( x(\log (x/q^2)+ 2\gamma -1) - \frac{x}{2} + \frac{q^4}{x} \right) + O(q\tau(q) 
(x^{\frac13} + q^{\frac12})x^\epsilon)\right)\\& \le \frac{x}{2} U_q(x;0) + O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon). \end{split}\] Using $|\beta | \le 1/(4x)$, we have \[ |U_q(x;0)| - 2\pi |\beta| \int_1^x |U_q(y; 0)| \, dy \ge \left( 1 - \frac{\pi}{4} \right)|U_q(x;0)| - O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon). \] We conclude, returning to \eqref{lower} and making use of Lemma 4 again, \[ \begin{split} \int_0^1 |S(\alpha)|\, d\alpha & \ge \frac{4-\pi}{8 x} \sum_{q\le \frac12\sqrt{x} } \left(|U_q(x;0)| - O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon)\right) \\& \ge \frac{4-\pi}{8} \sum_{q\le \frac12\sqrt{x} } \frac{\varphi(q)}{q}(\log (x/q^2) + 2\gamma - 1) -O(x^{\frac13+\epsilon}) .\end{split} \] It is easy to see that the sum above is $\gg \sqrt{x} $, which suffices to prove the lower bound. More precisely, using \[ \frac{\varphi(n)}{n} = \sum_{d|n} \frac{ \mu(d) }{d} \] a simple calculation gives the well-known result \[ \sum_{ n\le x} \frac{\varphi(n)}{n} = \frac{6}{\pi^2} x +O(\log x), \] and then by partial summation we find \[ \sum_{q\le \frac12\sqrt{x} } \frac{\varphi(q)}{q}(\log (x/q^2) + 2\gamma - 1)\sim \frac{6}{\pi^2}(\log 2 +\gamma -1)\sqrt{x} .\] \end{proof} \begin{proof}[Proof of Lemma 4] Pongsriiam and Vaughan \cite{PV} recently proved the following very useful result on the divisor function in arithmetic progressions. For integers $a$ and $d\ge 1$ and real $x\ge 1$ we have \[ \sum_{\substack{ n\le x \\ n\equiv a(\text{mod}\, d)}}\tau(n) = \frac{x}{d} \sum_{r | d}\frac{c_r(a)}{r}\left(\log\frac{x}{r^2} + 2\gamma -1\right) +O( (x^{\frac13} + d^{\frac12})x^\epsilon),\] where $\gamma$ is Euler's constant and $c_r(a)$ is the Ramanujan sum. We need the special case $a=0$, which, along with the situation $(a,d)>1$, is explicitly allowed in this formula.
Hence we have \begin{equation} \label{PV} \sum_{\substack{ n\le x \\ d|n}} \tau(n) = \frac{x}{d}f_x(d) + O( (x^{\frac13} + d^{\frac12})x^\epsilon) \end{equation} where \begin{equation}\label{f-g} f_x(d) = \sum_{r|d} g_x(r), \quad g_x(r) = \frac{\varphi(r)}{r}(\log(x/r^2) + 2\gamma - 1). \end{equation} Making use of \[ c_q(n) = \sum_{d|(n, q)}d\mu\left(\frac{q}{d}\right), \] and \eqref{PV}, we have \[ \begin{split} U_q(x;0) &= \sum_{n\le x}\tau(n)c_q(n)\\& = \sum_{d|q}d\mu\left(\frac{q}{d}\right)\underset{d|n}{\sum_{n\le x}}\tau(n) \\& = x\sum_{d|q}\mu\left(\frac{q}{d}\right)f_x(d) + O( q\tau(q)(x^{\frac13} + q^{\frac12})x^\epsilon). \end{split} \] We evaluate the sum above using Dirichlet convolution and the identity $1*\mu = \delta$, where $\delta(n)$ is the identity for Dirichlet convolution, defined to be $1$ if $n=1$ and zero otherwise. Hence \[ \sum_{d|q}\mu\left(\frac{q}{d}\right)f_x(d) = (f_x * \mu)(q) = ( (g_x*1)*\mu)(q) = (g_x * \delta)(q) = g_x(q), \] and Lemma 4 is proved. \end{proof}
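Both identities used in the proof of Lemma 4 — the Möbius expansion of the Ramanujan sum and the inversion step $(f_x*\mu)(q)=g_x(q)$ — are finite computations and can be verified numerically; a small sketch (helper names ours, with a truncated value of $\gamma$):

```python
import math
from math import gcd

def mobius(n):
    # Moebius function by trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def ramanujan_sum(q, n):
    # c_q(n) from the exponential-sum definition (imaginary parts cancel)
    total = sum(math.cos(2 * math.pi * a * n / q)
                for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(total)

# identity c_q(n) = sum_{d | (n,q)} d * mu(q/d)
for q in range(1, 30):
    for n in range(1, 30):
        rhs = sum(d * mobius(q // d) for d in divisors(gcd(n, q)))
        assert ramanujan_sum(q, n) == rhs

# Moebius-inversion step of Lemma 4: sum_{d|q} mu(q/d) f_x(d) = g_x(q)
x = 1000.0
EULER_GAMMA = 0.5772156649  # truncated

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def g(r):
    return phi(r) / r * (math.log(x / r**2) + 2 * EULER_GAMMA - 1)

def f(d):
    return sum(g(r) for r in divisors(d))

for q in range(1, 30):
    lhs = sum(mobius(q // d) * f(d) for d in divisors(q))
    assert abs(lhs - g(q)) < 1e-9
```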
\subsection{Sample Questions \label{sec:Sample-Questions}} \begin{itemize} \item \textbf{Q1} (\emph{Ordinal}): How would you describe your day today\textemdash{}has it been a typical day, a particularly good day, or a particularly bad day? \item \textbf{Q7} (\emph{Binary}): Now thinking about our country, overall, are you satisfied or dissatisfied with the way things are going in our country today? \item \textbf{Q5} (\emph{Multicategorical}): What do you think is the most important problem facing you and your family today? \{Economic problems / Housing / Health / Children and education / Work / Social relations / Transportation / Problems with government / Crime / Terrorism and war / No problems / Other / Don't know / Refused\} \item \textbf{Q10,11} (\emph{Category-ranking}): In your opinion, which one of these poses the greatest/second greatest threat to the world: \{the spread of nuclear weapons / religious and ethnic hatred / AIDS and other infectious diseases / pollution and other environmental problems / or the growing gap between the rich and poor\}? \item \textbf{Q74} (\emph{Continuous}): How old were you at your last birthday? \item \textbf{Q91} (\emph{Categorical}): Are you currently married or living with a partner, widowed, divorced, separated, or have you never been married? \end{itemize} \subsection{Mean-field Approximation \label{sub:Mean-field-Approximation}} We present here a simplification of $P\left(v_{i}\mid\vb_{\neg C}\right)$ in Eq.~(\ref{eq:predictive-distribution}) using the mean-field approximation. Recall that $P\left(v_{i}\mid\vb_{\neg C}\right)=\sum_{\hb}P\left(v_{i},\hb\mid\vb_{\neg C}\right)$, where $P\left(v_{i},\hb\mid\vb_{\neg C}\right)$ is defined in Eq.~(\ref{eq:output-hidden-prob}). We approximate $P\left(v_{i},\hb\mid\vb_{\neg C}\right)$ by a fully factorised distribution \[ Q\left(v_{i},\hb\mid\vb_{\neg C}\right)=Q\left(\v_{i}\mid\vb_{\neg C}\right)\prod_{k}Q\left(\h_{k}\mid\vb_{\neg C}\right).
\] The approximate distribution $Q\left(v_{i},\hb\mid\vb_{\neg C}\right)$ is obtained by minimising the Kullback-Leibler divergence \[ \mathcal{D}_{KL}\left(Q\left(v_{i},\hb\mid\vb_{\neg C}\right)\parallel P\left(v_{i},\hb\mid\vb_{\neg C}\right)\right)=\sum_{\v_{i}}\sum_{\hb}Q\left(v_{i},\hb\mid\vb_{\neg C}\right)\log\frac{Q\left(v_{i},\hb\mid\vb_{\neg C}\right)}{P\left(v_{i},\hb\mid\vb_{\neg C}\right)} \] with respect to $Q\left(\v_{i}\mid\vb_{\neg C}\right)$ and $\left\{ Q\left(\h_{k}\mid\vb_{\neg C}\right)\right\} _{k=1}^{K}$. This results in the following recursive relations: \begin{eqnarray*} Q\left(\v_{i}\mid\vb_{\neg C}\right) & \propto & \exp\left\{ G_{i}(\v_{i})+\sum_{k}H_{ik}(\v_{i})Q\left(\h_{k}\mid\vb_{\neg C}\right)\right\} ,\\ Q\left(\h_{k}\mid\vb_{\neg C}\right) & = & \frac{1}{1+\exp\{-w_{k}-\sum_{\v_{i}}H_{ik}(\v_{i})Q\left(\v_{i}\mid\vb_{\neg C}\right)-\sum_{j\in\neg C}H_{ik}(\v_{j})\}}. \end{eqnarray*} Now we make a further assumption that $\left|\sum_{\v_{i}}H_{ik}(\v_{i})Q\left(\v_{i}\mid\vb_{\neg C}\right)\right|\ll\left|\sum_{j\in\neg C}H_{ik}(\v_{j})\right|$, e.g., when the set $\neg C$ is sufficiently large. This results in $Q\left(\h_{k}\mid\vb_{\neg C}\right)\approx P\left(\h_{k}\mid\vb_{\neg C}\right)$ and \[ Q\left(\v_{i}\mid\vb_{\neg C}\right)\propto\exp\left\{ G_{i}(\v_{i})+\sum_{k}H_{ik}(\v_{i})P\left(\h_{k}^{1}\mid\vb_{\neg C}\right)\right\} , \] which is essentially the data model $P\left(\v_{i}\mid\hb\right)$ in Eq.~(\ref{eq:data-model}) with $h_{k}$ being replaced by $P\left(\h_{k}^{1}\mid\vb_{\neg C}\right)$. The overall complexity of computing $Q\left(\v_{i}\mid\vb_{\neg C}\right)$ is the same as that of evaluating $P\left(\v_{i}\mid\vb_{\neg C}\right)$ in Eq.~(\ref{eq:predictive-distribution}). However, the approximation is often numerically faster, and in the case of continuous variables, it has a simpler functional form.
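A minimal sketch of the resulting fixed-point iteration, with randomly generated stand-in potentials $G$, $H$, biases $w$ and observed field (none of these are learned parameters of the model):

```python
import math
import random

# Toy mean-field iteration alternating the two recursive relations above.
# All potentials are random stand-ins, not learned parameters.
random.seed(0)
V, K = 4, 3                                      # |values of v_i|, hidden units
G = [random.gauss(0, 1) for _ in range(V)]        # G_i(v_i)
H = [[random.gauss(0, 1) for _ in range(K)] for _ in range(V)]  # H_ik(v_i)
w = [random.gauss(0, 1) for _ in range(K)]        # hidden biases w_k
field = [random.gauss(0, 1) for _ in range(K)]    # sum_{j in ~C} H_jk(v_j)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

Qv = [1.0 / V] * V                                # initialise Q(v_i) uniformly
for _ in range(100):                              # fixed-point iteration
    Qh = [sigmoid(w[k] + field[k] + sum(H[v][k] * Qv[v] for v in range(V)))
          for k in range(K)]
    unnorm = [math.exp(G[v] + sum(H[v][k] * Qh[k] for k in range(K)))
              for v in range(V)]
    Z = sum(unnorm)
    Qv = [u / Z for u in unnorm]                  # Q(v_i) proportional to exp{...}
```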
\subsection{Setting} In this experiment, we run the $\MODEL$ on a large-scale survey of general world opinion, which was published by the Pew Global Attitudes Project% \footnote{http://pewglobal.org/datasets/% } in the summer of $2002$. The survey was based on interviews with people in $44$ countries during the period $2001$--$2002$. Some sample questions are listed in Appendix~\ref{sec:Sample-Questions}. After some pre-processing, we obtain a dataset of $38,263$ people, each of whom provides answers to a subset of $189$ questions over multiple topics ranging from globalization and democracy to terrorism. Many answers are deliberately left empty because it may be inappropriate to ask certain types of questions in certain areas or ethnic groups. Of all answers, $43$ are binary, $12$ are categorical, $3$ are multicategorical, $125$ are ordinal, $2$ are category-ranking, and $3$ are continuous. To suppress the scale difference in continuous responses, we normalise them to zero mean and unit variance~\footnote{It may be desirable to learn the variance structure, but we keep it simple by fixing it to unit variance. For more sophisticated variance learning, we refer to a recent paper \cite{le2011learning}.}. We evaluate each data type separately.
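The normalisation of the continuous responses mentioned above is a plain z-score transform; a one-function sketch with illustrative values:

```python
import math

def standardise(values):
    """Normalise continuous responses to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

ages = [23.0, 35.0, 41.0, 58.0, 19.0]   # illustrative answers to Q74
z = standardise(ages)
```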
In particular, let $u$ be the user index, let $\hat{\v}{}_{i}$ be the predicted value of the $i$-th variable, and let $N_{t}$ be the number of variables of type $t$ in the test data. We compute the prediction errors as follows: \begin{tabular}{ll} \noalign{\vskip0.2cm} --Binary & :~~$\frac{1}{N_{bin}}\sum_{u}\sum_{i}\mathbb{I}\left[\v_{i}^{(u)}\ne\hat{\v}_{i}^{(u)}\right]$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} --Categorical & :~~$\frac{1}{N_{cat}}\sum_{u}\sum_{i}\mathbb{I}\left[\v_{i}^{(u)}\ne\hat{\v}_{i}^{(u)}\right]$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} --Multicategorical & :~~$1-\nicefrac{2\mathrm{R}\mathrm{P}}{\left(\mathrm{R}+\mathrm{P}\right)}$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} --Continuous & :~~$\sqrt{\frac{1}{D_{cont}}\sum_{u}\sum_{i}\left(\v_{i}^{(u)}-\hat{\v}_{i}^{(u)}\right)^{2}}$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} --Ordinal & :~~$\frac{1}{N_{ord}}\sum_{u}\sum_{i}\frac{1}{M_{i}-1}\left|\v_{i}^{(u)}-\hat{\v}_{i}^{(u)}\right|$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} --Category-ranking & :~~$\frac{1}{D_{rank}}\sum_{u}\sum_{i}\frac{2}{M_{i}(M_{i}-1)}\sum_{l,m>l}\mathbb{I}\left[(\pi_{il}^{(u)}-\pi_{im}^{(u)})(\hat{\pi}_{il}^{(u)}-\hat{\pi}_{im}^{(u)})<0\right]$,\tabularnewline[0.2cm] \noalign{\vskip0.2cm} \end{tabular}\\ where $\mathbb{I}\left[\cdot\right]$ is the indicator function, $\pi_{im}\in\{1,2,...,M_{i}\}$ is the rank of the $m$-th category of the $i$-th variable, $\mathrm{R}$ is the recall rate and $\mathrm{P}$ is the precision.
The recall and precision are defined as: \[ \mathrm{R}=\frac{\sum_{u}\sum_{i}\frac{1}{M_{i}}\sum_{m=1}^{M_{i}}\mathbb{I}\left[a_{im}^{(u)}=\hat{a}_{im}^{(u)}\right]}{\sum_{u}\sum_{i}\frac{1}{M_{i}}\sum_{m=1}^{M_{i}}a_{im}^{(u)}};\quad\quad\quad\mathrm{P}=\frac{\sum_{u}\sum_{i}\frac{1}{M_{i}}\sum_{m=1}^{M_{i}}\mathbb{I}\left[a_{im}^{(u)}=\hat{a}_{im}^{(u)}\right]}{\sum_{u}\sum_{i}\frac{1}{M_{i}}\sum_{m=1}^{M_{i}}\hat{a}_{im}^{(u)}}, \] where $a_{im}\in\{0,1\}$ is the $m$-th component of the $i$-th multicategorical variable. Note that the summation over $i$ for each type only consists of relevant variables. To create baselines, we use the $\MODEL$ without the hidden layer, i.e., by assuming that variables are independent\footnote{To the best of our knowledge, there has been no totally comparable work addressing the issues we study in this paper. Existing survey analysis methods are suitable for individual tasks such as measuring pairwise correlation among variables, or building individual regression models where complex co-variates are coded into binary variables.}. \subsection{Results} \subsubsection{Feature Extraction and Visualisation} \begin{table*} \begin{centering} \begin{tabular}{|l|c|ccccc|} \hline & Baseline & $K=20$ & $K=50$ & $K=100$ & $K=200$ & $K=500$\tabularnewline \hline Binary & 32.9 & 23.6 & 20.1 & 16.3 & 13.2 & 9.8\tabularnewline \hline Categorical & 52.3 & 29.8 & 22.0 & 17.0 & 13.2 & 7.1\tabularnewline \hline Multicategorical & 49.6 & 46.6 & 42.2 & 36.9 & 29.2 & 23.8\tabularnewline \hline Continuous({*}) & 100.0 & 89.3 & 84.1 & 78.4 & 69.5 & 65.5\tabularnewline \hline Ordinal & 25.2 & 19.5 & 16.2 & 13.5 & 10.9 & 7.7\tabularnewline \hline Category ranking & 19.3 & 11.7 & 6.0 & 5.0 & 3.2 & 2.3\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Error rates (\%) when reconstructing data from posteriors. The baseline is essentially the $\MODEL$ without hidden layer (i.e., assuming variables are independent). 
({*}) The continuous variables have been normalised to account for different scales between items, thus the baseline error will be $1$ (i.e., the unit variance).\label{tab:Reconstruction-error-rates}} \end{table*} Recall that our $\MODEL$ can be used as a feature extraction tool through the posterior projection. The projection converts a multimodal input into a real-valued vector of the form $\hat{\hb}=\left(\hat{\h}_{1},\hat{\h}_{2},...,\hat{\h}_{K}\right)$, where $\hat{\h}_{k}=P\left(\h_{k}=1\mid\vb\right)$. Clearly, numerical vectors are much easier to process further than the original data, and in fact the vectorial form is required for the majority of modern data handling tools (e.g., for transformation, clustering, comparison and visualisation). To evaluate the faithfulness of the new representation, we reconstruct the original data using $\hat{\v}_{i}=\arg\max_{\v_{i}}P\left(\v_{i}\mid\hat{\hb}\right)$, that is, in Eq.~(\ref{eq:data-model}), the binary vector $\hb$ is replaced by $\hat{\hb}$. The use of $P\left(\v_{i}\mid\hat{\hb}\right)$ can be reasoned through the mean-field approximation framework presented in Appendix~\ref{sub:Mean-field-Approximation}. Table~\ref{tab:Reconstruction-error-rates} presents the reconstruction results. The trends are not surprising: with more hidden units, the model becomes more flexible and accurate in capturing the data content. \begin{figure} \begin{centering} \includegraphics[height=0.5\linewidth]{Pew_02_44_nations_country_projection.pdf} \par\end{centering} \caption{t-SNE projection of posteriors ($K=50$) with country information removed. Each point is a person from one of the $10$ countries: Angola, Argentina, Bangladesh, Bolivia, Brazil, Bulgaria, Canada, China, Czech Republic, and Egypt. Each colour represents a country. Best viewed in colour. 
\label{fig:t-SNE-projection}} \end{figure} For visualisation, we first learn our $\MODEL$ (with $K=50$ hidden units) using $3,830$ randomly chosen users, with the country information removed. Then we use t-SNE \cite{van2008visualizing} to project the posteriors further into 2D. Figure~\ref{fig:t-SNE-projection} shows the distribution of people's opinions in $10$ countries (Angola, Argentina, Bangladesh, Bolivia, Brazil, Bulgaria, Canada, China, Czech Republic, and Egypt). It is interesting to see how opinions cluster geographically and culturally: Europe \& North America (Bulgaria, Canada \& Czech Republic), South America (Argentina, Bolivia, Brazil), East Asia (China), South Asia (Bangladesh), North Africa (Egypt) and South Africa (Angola). \subsubsection{Data Completion} In this task, we need to fill in missing answers for each survey response. Missing answers are common in real survey data because respondents may forget to answer or simply ignore some questions. We create an evaluation test by randomly removing a portion $\rho\in(0,1)$ of the answers for each person. The $\MODEL$ is then trained on the remaining answers in a generative fashion (Section~\ref{sub:Estimating-Data-Distribution}). Missing answers are then predicted as in Section~\ref{sub:Prediction}. The idea here is that the missing answers of a person can be interpolated from the available answers of other respondents. This is essentially a multimodal generalisation of the collaborative filtering problem. Table~\ref{tab:Completion-errors-full} reports the completion results for a subset of the data.
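The held-out completion protocol can be sketched as follows; `responses` is a hypothetical `{user: {question: answer}}` dictionary, not the actual Pew data format:

```python
import random

def split_for_completion(responses, rho=0.2, seed=0):
    """For each respondent, hold out a random portion rho of the observed
    answers as the test set and keep the remainder for training."""
    rng = random.Random(seed)
    train, test = {}, {}
    for user, answers in responses.items():
        qs = list(answers)
        rng.shuffle(qs)
        n_test = max(1, int(rho * len(qs)))   # at least one held-out answer
        test[user] = {q: answers[q] for q in qs[:n_test]}
        train[user] = {q: answers[q] for q in qs[n_test:]}
    return train, test

# Hypothetical responses keyed by the sample-question IDs
responses = {"u1": {"Q1": 2, "Q7": 1, "Q74": 35.0, "Q91": 3, "Q5": 0},
             "u2": {"Q1": 1, "Q7": 0, "Q74": 58.0, "Q91": 1, "Q5": 2}}
train, test = split_for_completion(responses, rho=0.2)
```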
\begin{table*} \begin{centering} \begin{tabular}{|l|c|ccccc|} \hline & Baseline & $K=20$ & $K=50$ & $K=100$ & $K=200$ & $K=500$\tabularnewline \hline Binary & 32.7 & 26.0 & 24.2 & 23.3 & 22.7 & 22.3\tabularnewline \hline Categorical & 52.1 & 34.3 & 30.0 & 28.2 & 27.5 & 27.1\tabularnewline \hline Multicategorical & 49.5 & 48.3 & 45.7 & 43.6 & 42.4 & 42.0\tabularnewline \hline Continuous({*}) & 101.6 & 93.5 & 89.9 & 87.9 & 87.3 & 87.9\tabularnewline \hline Ordinal & 25.1 & 20.7 & 19.3 & 18.6 & 18.2 & 17.9\tabularnewline \hline Category ranking & 19.3 & 15.4 & 14.7 & 14.2 & 14.1 & 13.9\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Completion error rates (\%) with $\rho=0.2$ of the answers missing at random. ({*}) See Table~\ref{tab:Reconstruction-error-rates}.\label{tab:Completion-errors-full} } \end{table*} \subsubsection{Learning Predictive Models} We study six predictive problems, each of which is representative of a data type. This means six corresponding variables are reserved as outputs and the rest as input co-variates. The predictive problems are: (i) satisfaction with the country (\emph{binary}), (ii) country of origin (\emph{categorical}, of size $44$), (iii) problems facing the country (\emph{multicategorical}, of size $11$), (iv) age of the person (\emph{continuous}), (v) ladder of life (\emph{ordinal}, of size $11$), and (vi) rank of dangers of the world (\emph{category-ranking}, of size $5$). All models are trained discriminatively (see Section~\ref{sub:Learning-Predictive-Models}). We randomly split the users into a training subset and a testing subset. The predictive results are presented in Table~\ref{tab:Predictive-errors-full}. It can be seen that learning predictive models requires far fewer hidden units than the tasks of reconstruction and completion. This is because in discriminative training, the hidden layer acts as an information filter that allows the relevant bits to pass from the input to the output.
Since there is only one output per prediction task, the number of required bits, and therefore the number of hidden units, is relatively small. In reconstruction and completion, on the other hand, we need many bits to represent all the available information. \begin{table*} \begin{centering} \begin{tabular}{|l|c|cccccc|} \hline & Baseline & $K=3$ & $K=5$ & $K=10$ & $K=15$ & $K=20$ & $K=50$\tabularnewline \hline Satisfaction (\emph{bin}.) & 26.3 & 18.0 & 17.7 & 17.7 & 17.8 & 18.0 & 18.0\tabularnewline \hline Country (\emph{cat}.) & 92.0 & 70.2 & 61.0 & 21.6 & 11.0 & 9.9 & 5.9\tabularnewline \hline Probs. (\emph{multicat}.) & 49.6 & 47.6 & 41.9 & 39.2 & 38.8 & 39.1 & 39.2\tabularnewline \hline Age (\emph{cont}.{*}) & 99.8 & 67.3 & 67.6 & 66.3 & 66.4 & 65.8 & 66.3\tabularnewline \hline Life ladder (\emph{ord}.) & 16.9 & 12.2 & 12.2 & 11.9 & 11.9 & 12.2 & 11.8\tabularnewline \hline Dangers (\emph{cat.-rank}) & 31.2 & 27.1 & 24.6 & 24.0 & 23.2 & 23.0 & 22.5\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Predictive error rates (\%) with $80/20$ train/test split. ({*}) See Table~\ref{tab:Reconstruction-error-rates}. \label{tab:Predictive-errors-full}} \end{table*} \subsection{Parameter Learning \label{sub:Learning}} We now present parameter estimation for $\{w_{k},U_{i},U_{im},V_{ik},V_{imk}\}$, which clearly depends on the specific application. \subsubsection{Estimating Data Distribution \label{sub:Estimating-Data-Distribution}} The problem of estimating a distribution from data is typically solved by maximising the data likelihood $\mathcal{L}_{1}=\sum_{\vb}\tilde{P}(\vb)\log P(\vb)$, where $\tilde{P}(\vb)$ denotes the empirical distribution of the visible variables, and $P(\vb)=\sum_{\hb}P(\vb,\hb)$ is the model distribution. Since the $\MODEL$ belongs to the exponential family, the gradient of $\mathcal{L}_{1}$ with respect to the parameters takes the form of a difference of expectations.
For example, in the case of binary variables, the gradient reads \[ \frac{\partial\mathcal{L}_{1}}{\partial V_{ik}}=\left\langle v_{i}h_{k}\right\rangle _{\tilde{P}(v_{i},h_{k})}-\left\langle v_{i}h_{k}\right\rangle _{P(v_{i},h_{k})} \] where $\tilde{P}(h_{k},v_{i})=P(h_{k}|\vb)\tilde{P}(v_{i})$ is the empirical distribution, and $P(h_{k},v_{i})=P(h_{k}|\vb)P(v_{i})$ the model distribution. Due to space constraints, we omit the derivation details here. The empirical expectation $\left\langle v_{i}h_{k}\right\rangle _{\tilde{P}(v_{i},h_{k})}$ is easy to estimate due to the factorisation in Eq.~(\ref{eq:posterior}). However, the model expectation $\left\langle v_{i}h_{k}\right\rangle _{P(v_{i},h_{k})}$ is intractable to evaluate exactly, and thus we must resort to approximate methods. Due to the factorisations in Eqs.~(\ref{eq:posterior},\ref{eq:data-model}), Markov Chain Monte Carlo samplers are efficient to run. More specifically, the sampler alternates between $\left\{ \widehat{h}_{k}\sim P(h_{k}|\vb)\right\} _{k=1}^{K}$ and $\left\{ \widehat{\v}_{i}\sim P(\v_{i}|\hb)\right\} _{i=1}^{N}$. Note that in the case of multicategorical variables, we make use of the factorisation in Eq.~(\ref{eq:multilabel-data-model}) and sample $\{a_{im}\}_{m=1}^{M_{i}}$ simultaneously. On the other hand, in the case of category-ranked variables, we do not sample directly from $P(\v_{i}|\hb)$ but from its relaxation $\left\{ P_{i}(c_{il}\triangleright c_{im}|\hb)\right\} _{l,m>l}$, which have the form of multinomial distributions. To speed up training, we follow the method of Contrastive Divergence (CD) \cite{Hinton02}, in which the MCMC chain is restarted from the observed data $\vb$ and stopped after just a few steps for every parameter update. This is known to introduce bias into the model estimate, but it is often fast and effective in many applications. For the data completion application, only some variables are observed in the data while the others are missing.
There are two ways to handle a missing variable during training time: one is to treat it as hidden, and the other is to ignore it. In this paper, we follow the latter for simplicity and efficiency, especially when the data is highly sparse\footnote{Ignoring missing data may be inadequate if the missing patterns are not at random. However, treating missing data as zero observations (e.g., in the case of binary variables) may not be accurate either, since it may introduce bias into the data marginals.}. \subsubsection{Learning Predictive Models \label{sub:Learning-Predictive-Models}} In our $\MODEL$, a predictive task can be represented by an output variable conditioned on input variables. Denote by $v_{i}$ the $i$-th output variable, and by $\vb_{\neg i}$ the set of input variables, that is, $\vb=(\v_{i},\vb_{\neg i})$. The learning problem then translates into estimating the conditional distribution $P(\v_{i}\mid\vb_{\neg i})$. There are three general ways to learn a predictive model. The \emph{generative} method first learns the joint distribution $P(\v_{i},\vb_{\neg i})$ as in the problem of estimating the data distribution. The \emph{discriminative} method, on the other hand, effectively ignores $P(\vb_{\neg i})$ and concentrates only on $P(\v_{i}\mid\vb_{\neg i})$. In the latter, we typically maximise the conditional likelihood $\mathcal{L}_{2}=\sum_{\v_{i}}\sum_{\vb_{\neg i}}\tilde{P}(\v_{i},\vb_{\neg i})\log P(\v_{i}\mid\vb_{\neg i})$. This problem is inherently easier than the former because we do not have to make inferences about $\vb_{\neg i}$. The learning strategy is almost identical to that of the generative counterpart, except that we \emph{clamp} the input variables $\vb_{\neg i}$ to their observed values. For tasks where the size of the output space is small (e.g., standard binary, ordinal and categorical variables) we can perform exact evaluations and use any non-linear optimisation method for parameter estimation.
The conditional distribution $P(\v_{i}\mid\vb_{\neg i})$ can be computed as in Eq.~(\ref{eq:predictive-distribution}). We omit the likelihood gradient here due to space limitations. It is often argued that the discriminative method is preferable since there is no waste of effort in learning $P(\vb_{\neg i})$, which we do not need at test time. In our setting, however, learning $P(\vb_{\neg i})$ may yield a more faithful representation% \footnote{As we do not need labels to learn $P(\vb_{\neg i})$, this is actually a form of \emph{semi-supervised learning}.% } of the data through the posterior $P(\hb\mid\vb_{\neg i})$. This suggests a third, \emph{hybrid} method: combining the generative and discriminative objectives. One way is to optimise a hybrid likelihood: \[ \mathcal{L}_{3}=\lambda\sum_{\vb_{\neg i}}\tilde{P}(\vb_{\neg i})\log P(\vb_{\neg i})+(1-\lambda)\sum_{\v_{i}}\sum_{\vb_{\neg i}}\tilde{P}(\v_{i},\vb_{\neg i})\log P(\v_{i}\mid\vb_{\neg i}), \] where $\lambda\in(0,1)$ is the hyper-parameter controlling the relative contribution of the generative and discriminative components. Another way is to use a $2$-stage procedure: first we \emph{pre-train} the model $P(\vb_{\neg i})$ in an unsupervised manner, and then \emph{fine-tune} the predictive model% \footnote{We can also avoid tuning parameters associated with $\vb_{\neg i}$ by using the posteriors as features and learning $P\left(\v_{i}\mid\hat{\hb}\right)$, where $\hat{\h}_{k}=P\left(h_{k}^{1}\mid\vb_{\neg i}\right).$% } $P(\v_{i}\mid\vb_{\neg i})$. \subsection{Prediction \label{sub:Prediction}} Once the model has been learnt, we are ready to perform prediction. We study two predictive applications: completing missing data, and predicting output labels in predictive modelling. The former leads to the inference of $P(\vb_{C}\mid\vb_{\neg C})$, where $\vb_{\neg C}$ is the set of observed variables, and $\vb_{C}$ is the set of unseen variables to be predicted. 
Ideally, we should predict all unseen variables simultaneously but the inference is likely to be difficult. Thus, we resort to estimating $P(v_{i}|\vb_{\neg C})$, for $i\in C$. The label prediction application requires the estimation of $P(v_{i}|\vb_{\neg i})$, which is clearly a special case of $P(v_{i}|\vb_{\neg C})$, i.e., when $C=\{i\}$. The output is predicted as follows \begin{align} \hat{\v}_{i} & =\arg\max_{\v_{i}}P(v_{i}|\vb_{\neg C})=\arg\max_{\v_{i}}\sum_{\hb}P(v_{i},\hb|\vb_{\neg C});\quad\mbox{where}\label{eq:prediction}\\ P(v_{i},\hb|\vb_{\neg C}) & =\frac{1}{Z(\vb_{\neg C})}\exp\left\{ G_{i}(v_{i})+\sum_{k}w_{k}h_{k}+\sum_{j\in\{\neg C,i\}}\sum_{k}H_{jk}(v_{j})h_{k}\right\} ,\label{eq:output-hidden-prob} \end{align} where $Z(\vb_{\neg C})$ is the normalising constant. Noting that $\h_{k}\in\{0,1\}$, the computation of $P(v_{i}|\vb_{\neg C})$ can be simplified as \begin{align} P(v_{i}|\vb_{\neg C}) & =\frac{1}{Z(\vb_{\neg C})}\exp\{G_{i}(v_{i})\}\prod_{k}\left[1+\frac{\exp\{H_{ik}(v_{i})\}}{1/P(h_{k}^{1}|\vb_{\neg C})-1}\right]\label{eq:predictive-distribution} \end{align} where $P(h_{k}^{1}|\vb_{\neg C})$ is computed using Eq.~(\ref{eq:posterior}) as \[ P(h_{k}^{1}|\vb_{\neg C})=\frac{1}{1+\exp\{-w_{k}-\sum_{j\in\neg C}H_{jk}(v_{j})\}}. \] For the cases of binary, categorical and ordinal outputs, the estimation in Eq.~(\ref{eq:prediction}) is straightforward using Eq.~(\ref{eq:predictive-distribution}). 
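To make the predictive distribution concrete, the following sketch evaluates it for a categorical output, using the categorical parameterisation $G_{i}(m)=U_{im}$ and $H_{ik}(m)=V_{imk}$ from the summary table. This is our own illustrative sketch, with the sums over observed variables assumed precomputed; variable names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_categorical(U_i, V_i, w, H_obs):
    """P(v_i = m | observed variables) for a categorical output.
    U_i: (M,) biases U_im; V_i: (M, K) weights V_imk; w: (K,) hidden biases;
    H_obs: (K,) precomputed sums of H_jk(v_j) over the observed variables j."""
    p_h = sigmoid(w + H_obs)  # P(h_k = 1 | v_obs), the posterior factor
    # exp{H_ik(m)} / (1/p_k - 1) rewritten as exp{H_ik(m)} * p_k / (1 - p_k)
    score = np.exp(U_i) * np.prod(1.0 + np.exp(V_i) * (p_h / (1.0 - p_h)), axis=1)
    return score / score.sum()
```

As a sanity check, when all interaction weights `V_i` are zero the hidden-unit product is constant across categories, so the prediction reduces to a softmax over the biases `U_i`.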
However, for other output types, suitable simplifications must be made: \begin{itemize} \item For multicategorical and category-ranking variables, we do not enumerate over all possible assignments of $v_{i}$, but rather proceed in an indirect manner: \begin{itemize} \item For multiple categories (Section~\ref{sub:Multi-categorical-Variables}), we first estimate $\left\{ P_{i}(a_{im}=1|\vb_{\neg i})\right\} _{m=1}^{M_{i}}$ and then output $a_{im}=1$ if $P_{i}(a_{im}=1|\vb_{\neg i})\ge\nu$ for some threshold% \footnote{Raising the threshold typically leads to better precision at the expense of recall. Typically we choose $\nu=0.5$ when there is no preference between precision and recall.% } $\nu\in(0,1)$. \item For category-ranking (Section~\ref{sub:Category-ranking-Variables}), we first estimate $\left\{ P_{i}(c_{il}\succ c_{im}|\vb_{\neg i})\right\} _{l,m>l}$. The complete ranking over the set $\{c_{i1},c_{i2},...,c_{iM_{i}}\}$ can then be obtained by aggregating over the pairwise preference probabilities. For example, the score for $c_{im}$ can be estimated as $s(c_{im})=\sum_{l\ne m}P_{i}(c_{im}\succ c_{il}|\vb_{\neg i})$, which can be used for sorting the categories% \footnote{Note that we do not estimate the event of ties during prediction.% }. \end{itemize} \item For continuous variables, the problem leads to a non-trivial nonlinear optimisation: even for the case of Gaussian variables, $P(v_{i}|\vb_{\neg C})$ in Eq.~(\ref{eq:predictive-distribution}) is no longer Gaussian. For efficiency and simplicity, we can take a \emph{mean-field} approximation by substituting $\hat{\h}_{k}=P(h_{k}^{1}|\vb_{\neg C})$ for $\h_{k}$. For example, in the case of Gaussian outputs, we then obtain a simplified expression for $P(v_{i}|\vb_{\neg C})$: \[ P(v_{i}|\vb_{\neg C})\propto\exp\left\{ -\frac{\v_{i}^{2}}{2\sigma_{i}^{2}}+U_{i}\v_{i}+\sum_{k}V_{ik}\v_{i}\hat{\h}_{k}\right\} , \] which is also Gaussian. 
Thus the optimal value is the mean itself: $\hat{\v}_{i}=\sigma_{i}^{2}\left(U_{i}+\sum_{k}V_{ik}\hat{\h}_{k}\right).$ Details of the mean-field approximation are presented in Appendix~\ref{sub:Mean-field-Approximation}.\end{itemize} \subsection{Model Definition} Denote by $\vb=(\v_{1},\v_{2},...,\v_{N})$ the set of \emph{mixed-variate} visible variables where each $\v_{i}$ can be one of the following types: \emph{binary}, \emph{categorical}, \emph{multicategorical}, \emph{continuous}, \emph{ordinal} or \emph{category-ranked}. Let $\vb_{disc}$ be the joint set of discrete elements and $\vb_{cont}$ be the continuous set, so that $\vb=(\vb_{disc},\vb_{cont})$. Denoting by $\hb=(h_{1},h_{2},...,h_{K})\in\{0,1\}^{K}$ the hidden variables, the model distribution of $\MODEL$ is defined as \begin{equation} P(\vb,\hb)=\frac{1}{Z}\exp\{-E(\vb,\hb)\},\label{eq:model-def} \end{equation} where $E(\vb,\hb)$ is the model energy and $Z$ is the normalisation constant. The model energy is further decomposed into a sum of singleton and pairwise energies: \[ E(\vb,\hb)=\sum_{i=1}^{N}E_{i}(\v_{i})+\sum_{k=1}^{K}E_{k}(\h_{k})+\sum_{i=1}^{N}\sum_{k=1}^{K}E_{ik}(\v_{i},\h_{k}), \] where $E_{i}(\v_{i})$ depends only on the $i$-th visible unit, $E_{k}(\h_{k})$ on the $k$-th hidden unit, and $E_{ik}(\v_{i},\h_{k})$ on the interaction between the $i$-th visible and the $k$-th hidden units. The $\MODEL$ is thus a $2$-layer mixed-variate Markov random field with pairwise connectivity across layers. For the distribution in Eq.~(\ref{eq:model-def}) to be properly specified, we need to keep the normalisation constant finite. In other words, the following integral \[ Z=\int_{\vb_{cont}}\left(\sum_{\vb_{disc}}\sum_{\hb}\exp\{-E(\vb_{disc},\vb_{cont},\hb)\}\right)d(\vb_{cont}) \] must be bounded from above. One way is to choose appropriate continuous variable types with bounded moments, e.g., Gaussian. 
Another way is to explicitly bound the continuous variables to some finite ball, i.e., $\left\Vert \vb_{cont}\right\Vert \le R$. In our $\MODEL$, we further assume that the energies have the following form: \begin{equation} E_{i}(\v_{i})=-G_{i}(\v_{i});\quad\quad\quad\quad E_{k}(h_{k})=-w_{k}h_{k};\quad\quad\quad\quad E_{ik}(\v_{i},\h_{k})=-H_{ik}(v_{i})h_{k},\label{eq:energies} \end{equation} where $w_{k}$ is the bias parameter for the $k$-th hidden unit, and $G_{i}(v_{i})$ and $H_{ik}(v_{i})$ are functions to be specified for each data type. An important consequence of this energy decomposition is the factorisation of the \emph{posterior}: \begin{eqnarray} P(\hb\mid\vb) & = & \prod_{k}P(h_{k}\mid\vb);\quad\quad\quad\quad P(h_{k}^{1}\mid\vb)=\frac{1}{1+\exp\{-w_{k}-\sum_{i}H_{ik}(\v_{i})\}},\label{eq:posterior} \end{eqnarray} where $h_{k}^{1}$ denotes the assignment $h_{k}=1$. This posterior is efficient to evaluate, and thus the vector $\left(P(h_{1}^{1}\mid\vb),P(h_{2}^{1}\mid\vb),...,P(h_{K}^{1}\mid\vb)\right)$ can be used as extracted features for the mixed-variate input $\vb$. Similarly, the \emph{data model} $P(\vb|\hb)$ has the following factorisation \begin{equation} P(\vb\mid\hb)=\prod_{i}P_{i}(\v_{i}\mid\hb);\quad\quad\quad\quad P_{i}(\v_{i}\mid\hb)=\frac{1}{Z(\hb)}\exp\{G_{i}(\v_{i})+\sum_{k}H_{ik}(\v_{i})h_{k}\},\label{eq:data-model} \end{equation} where $Z(\hb)=\sum_{\v_{i}}\exp\{G_{i}(\v_{i})+\sum_{k}H_{ik}(\v_{i})h_{k}\}$ if $\v_{i}$ is discrete and $Z(\hb)=\int_{\v_{i}}\exp\{G_{i}(\v_{i})+\sum_{k}H_{ik}(\v_{i})h_{k}\}d(\v_{i})$ if $\v_{i}$ is continuous, assuming that the integral exists. Note that we deliberately use the subscript index $i$ in $P_{i}(\cdot\mid\hb)$ to emphasise the heterogeneous nature of the input variables. \subsection{Type-specific Data Models} We now specify $P_{i}(\v_{i}|\hb)$ in Eq.~(\ref{eq:data-model}), or equivalently, the functionals $G_{i}(\v_{i})$ and $H_{ik}(\v_{i})$. 
Denote by $\mathbb{S}_{i}=(c_{i1},c_{i2},...,c_{iM_{i}})$ the set of categories in the case of discrete variables. In this section, for continuous types, we limit ourselves to Gaussian variables as they are by far the most common. Interested readers are referred to \cite{le2011learning} for Beta variables in the context of image modelling. The data model and related functionals for the binary, Gaussian and categorical data types are well-known, and thus we only provide a summary here: \begin{flushleft} \begin{tabular}{lccc} & $G_{i}(\v_{i})$ & $H_{ik}(\v_{i})$ & $P_{i}(\v_{i}|\hb)$\tabularnewline \hline \noalign{\vskip0.2cm} --Binary & $U_{i}v_{i}$ & $V_{ik}v_{i}$ & $\frac{\exp\{U_{i}v_{i}+\sum_{k}V_{ik}h_{k}v_{i}\}}{1+\exp\{U_{i}+\sum_{k}V_{ik}h_{k}\}}$\tabularnewline \noalign{\vskip0.2cm} --Gaussian & $-\nicefrac{\v_{i}^{2}}{2\sigma_{i}^{2}}+U_{i}v_{i}$ & $V_{ik}v_{i}$ & $\mathcal{N}\left(\sigma_{i}^{2}\left(U_{i}+\sum_{k}V_{ik}h_k\right);\sigma_{i}\right)$\tabularnewline \noalign{\vskip0.2cm} --Categorical & $\sum_{m}U_{im}\delta_{m}[v_{i}]$ & $\sum_{m,k}V_{imk}\delta_{m}[v_{i}]$ & $\frac{\exp\{\sum_{m}U_{im}\delta_{m}[v_{i}]+\sum_{m,k}V_{imk}\delta_{m}[v_{i}]h_k\}}{\sum_{l}\exp\{U_{il}+\sum_{k}V_{ilk}h_{k}\}}$\tabularnewline[0.4cm] \end{tabular}\\ where $m=1,2,...,M_{i}$; $U_{i},V_{ik},U_{im},V_{imk}$ are model parameters; and $\delta_{m}[v_{i}]=1$ if $v_{i}=c_{im}$ and $0$ otherwise. \par\end{flushleft} The cases of multicategorical, ordinal and category-ranking variables are, however, much more involved, and thus some further simplification may be necessary. In what follows, we describe the specification details for these three cases. \subsubsection{Multicategorical Variables \label{sub:Multi-categorical-Variables}} An assignment to a multicategorical variable has the form of a subset from a set of categories. 
For example, a person may be interested in \textsf{games} and \textsf{music} from a set of offers: \textsf{games}, \textsf{sports}, \textsf{music}, and \textsf{photography}. More formally, let $\mathbb{S}_{i}$ be the set of categories for the $i$-th variable, and $\mathcal{P}_{i}=2^{\mathbb{S}_{i}}$ be the power set of $\mathbb{S}_{i}$ (the set of all possible subsets of $\mathbb{S}_{i}$). Each variable assignment is a non-empty element of $\mathcal{P}_{i}$, i.e. $\v_{i}\in\mathcal{P}_{i}\backslash\{\emptyset\}$. Since there are $2^{M_{i}}-1$ possible ways to select a non-empty subset, directly enumerating $P_{i}(\v_{i}|\hb)$ proves to be highly difficult even for moderate sets. To handle this state explosion, we first associate each category $c_{im}$ with a binary indicator $a_{im}\in\{0,1\}$ that indicates whether the $m$-th category is active, that is $\v_{i}=\left(a_{i1},a_{i2},...,a_{iM_{i}}\right)$. We then assume the following factorisation: \begin{equation} P_{i}(\v_{i}|\hb)=\prod_{m=1}^{M_{i}}P_{i}(a_{im}|\hb).\label{eq:multilabel-data-model} \end{equation} Note that this does not say that the binary indicators are independent in their own right; they are only conditionally independent given the hidden variables $\hb$. Since the hidden variables are never observed, the binary indicators remain interdependent. Now, the probability of activating a binary indicator is defined as \begin{equation} P_{i}(a_{im}=1|\hb)=\frac{1}{1+\exp(-U_{im}-\sum_{k}V_{imk}\h_{k})}.\label{eq:multicat-model} \end{equation} Note that this specification is equivalent to the following decomposition of the functionals $G_{i}(v_{i})$ and $H_{ik}(\v_{i})$ in Eq.~(\ref{eq:energies}): \begin{align*} G_{i}(v_{i}) & =\sum_{m=1}^{M_{i}}U_{im}a_{im};\quad\quad\quad\quad H_{ik}(\v_{i})=\sum_{m=1}^{M_{i}}V_{imk}a_{im}. 
\end{align*} \subsubsection{Ordinal Variables} An ordinal variable receives individual values from an ordered set $\mathbb{S}_{i}=\{c_{i1}\prec c_{i2}\prec...\prec c_{iM_{i}}\}$ where $\prec$ denotes the order in some sense. For example, $c_{im}$ can be a numerical rating from a review, or it can be a sentiment expression such as \textsf{love}, \textsf{neutral} and \textsf{hate}. There are two straightforward ways to treat an ordinal variable: (i) simply ignore the order and consider it as a multinomial variable, or (ii) convert the ordinal expression into some numerical scale, for example, $\{-1,0,+1\}$ for the triple \{\textsf{love}\textsf{\emph{,}}\textsf{neutral}\textsf{\emph{,}}\textsf{hate}\}, and then proceed as if it were a continuous variable. However, in the first treatment, substantial ordinal information is lost, and in the second treatment, there is no satisfying interpretation using numbers. In this paper, we adapt the Stereotype Ordered Regression Model (SORM) of \cite{anderson1984rao}. More specifically, the SORM defines the conditional distribution as follows \[ P(v_{i}=m\mid\hb)=\frac{\exp\{U_{im}+\sum_{d=1}^{D}\sum_{k=1}^{K}V_{idk}\phi_{id}(m)h_{k}\}}{\sum_{l}\exp\{U_{il}+\sum_{d=1}^{D}\sum_{k=1}^{K}V_{idk}\phi_{id}(l)h_{k}\}} \] where $U_{im},V_{idk}$ are free parameters, $D\le M_{i}$ is the dimensionality of the ordinal variable% \footnote{This should not be confused with the dimensionality of the whole data $\vb$.% } $\v_{i}$, and $\phi_{id}(m)$ is a monotonically increasing function of $m$: \[ \phi_{id}(1)<\phi_{id}(2)<...<\phi_{id}(M_{i}). \] A shortcoming of this setting is that when $\hb=\boldsymbol{0}$, the model reduces to the standard multiclass logistic, effectively removing the ordinal property. 
To deal with this, we propose to make the input bias parameters order dependent: \begin{equation} P(v_{i}=m\mid\hb)\propto\exp\left\{ \sum_{d=1}^{D}\phi_{id}(m)\left(U_{id}+\sum_{k=1}^{K}V_{idk}h_{k}\right)\right\} \label{eq:ordinal-model} \end{equation} where $U_{id}$ is the newly introduced parameter. Here we choose $D=M_{i}$, and $\phi_{id}(m)=\nicefrac{\left(m-d\right)}{\left(M_{i}-1\right)}$. \subsubsection{Category-ranking Variables \label{sub:Category-ranking-Variables}} In category ranking, a variable assignment has the form of a ranked list of a set of categories. For example, from a set of offers, namely \textsf{games}, \textsf{sports}, \textsf{music}, and \textsf{photography}, a person may express their preferences in a particular decreasing order: \textsf{sports} $\succ$ \textsf{music} $\succ$ \textsf{games} $\succ$ \textsf{photography}. Sometimes, they may like sports and music equally, creating a situation known as \emph{ties} in ranking, or \emph{indifference} in preference. When there are no ties, we say that the rank is \emph{complete}. More formally, given a set of categories $\mathbb{S}_{i}=\{c_{i1},c_{i2},...,c_{iM_{i}}\}$, a variable assignment without ties is a permutation of the elements of $\mathbb{S}_{i}$. Thus, there are $M_{i}!$ possible complete rank assignments. When we allow ties to occur, however, the number of possible assignments grows extremely large. To see why, let us group categories of the same rank into a partition. Orders within a partition are not important, but orders between partitions are. Thus, the problem of rank assignment turns out to be choosing from the set of all possible schemes for partitioning and ordering a set. The number of such schemes is known in combinatorics as the \emph{Fubini number} \cite[pp. 396--397]{muresan2008concrete}, which is extremely large even for small sets. 
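The growth of the Fubini (ordered Bell) numbers can be checked with a short recurrence: choose the $k$ categories that form the top-ranked partition, then partition and order the remainder. This standalone sketch (ours, not part of the model) reproduces the values quoted next.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def fubini(n):
    """Ordered Bell (Fubini) number: the number of ways to partition
    a set of n categories into ranked groups, ties allowed."""
    if n == 0:
        return 1
    # pick the k categories in the top-ranked group, then rank the rest
    return sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))
```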
For example, $\mbox{Fubini}\left(1\right)=1$, $\mbox{Fubini}\left(3\right)=13$, $\mbox{Fubini}\left(5\right)=541$ and $\mbox{Fubini}\left(10\right)=102,247,563$. Directly modelling ranking with ties thus proves to be intractable, and we resort to approximate methods. One way is to model just pairwise comparisons: we treat each pair of categories separately \emph{when conditioned on the hidden layer}. More formally, denote by $c_{il}\succ c_{im}$ the preference of category $c_{il}$ over $c_{im}$, and by $c_{il}\simeq c_{im}$ the indifference. We replace the data model $P_{i}(\v_{i}|\hb)$ with a product of pairwise comparisons $\prod_{l}\prod_{m>l}P_{i}(c_{il}\triangleright c_{im}|\hb)$, where $\triangleright$ denotes one of the preference relations ($\succ$, $\prec$ or $\simeq$). This effectively translates the original problem with Fubini-number complexity into $M_{i}(M_{i}-1)/2$ pairwise sub-problems, each of which has only three preference choices. The drawback is that this relaxation loses the guarantee of \emph{transitivity }(i.e., $c_{il}\succeq c_{im}$ and $c_{im}\succeq c_{in}$ would entail $c_{il}\succeq c_{in}$, where $\succeq$ means \emph{better} or \emph{equal-to}). The hope is that the hidden layer is rich enough to absorb this property, that is, the probability of preserving transitivity is sufficiently high. It now remains to specify $P_{i}(c_{il}\triangleright c_{im}|\hb)$ in detail. 
In particular, we adapt Davidson's model \cite{davidson1970extending} of pairwise comparison: \begin{eqnarray} P_{i}(c_{il}\succ c_{im}|\hb) & = & \frac{1}{Z_{ilm}(\hb)}\varphi(c_{il},\hb)\nonumber \\ P_{i}(c_{il}\simeq c_{im}|\hb) & = & \frac{1}{Z_{ilm}(\hb)}\gamma\sqrt{\varphi(c_{il},\hb)\varphi(c_{im},\hb)}\label{eq:label-rank-model}\\ P_{i}(c_{il}\prec c_{im}|\hb) & = & \frac{1}{Z_{ilm}(\hb)}\varphi(c_{im},\hb)\nonumber \end{eqnarray} where $Z_{ilm}(\hb)=\varphi(c_{il},\hb)+\varphi(c_{im},\hb)+\gamma\sqrt{\varphi(c_{il},\hb)\varphi(c_{im},\hb)}$, $\gamma>0$ is the tie parameter, and \[ \varphi(c_{im},\hb)=\exp\{\frac{1}{M_{i}}(U_{im}+\sum_{k}V_{imk}h_{k})\}. \] The term $\nicefrac{1}{M_{i}}$ normalises the occurrence frequency of a category in the model energy, leading to better numerical stability. \section{Introduction} \input{intro.tex} \section{Mixed-Variate Restricted Boltzmann Machines \label{sec:Mixed-State-Restricted}} \input{model.tex} \section{Learning and Inference \label{sec:Learning-and-Inference}} \input{learning.tex} \section{A Case Study: World Attitudes \label{sub:Filling-Missing-Data}} \input{experiments.tex} \section{Related Work} The most popular use of RBMs is in modelling individual data types, for example, binary variables \cite{freund1994unsupervised}, Gaussian variables \cite{hinton2006rdd,ranzato2010modeling}, categorical variables \cite{Salakhutdinov-et-alICML07}, rectifier linear units \cite{nair2010rectified}, Poisson variables \cite{gehler2006rap}, counts \cite{salakhutdinov2009replicated} and Beta variables \cite{le2011learning}. When RBMs are used for classification \cite{larochelle2008classification}, categorical variables might be employed for the labels in addition to the features. Other than that, there has been a model called the Dual-Wing RBM for modelling both continuous and binary variables \cite{xing2005mining}. 
However, there have been no attempts to address all \emph{six} data types in a single model as we do in the present paper. The literature on ordinal variables is sufficiently rich in statistics, especially after the seminal work of \cite{mccullagh1980rmo}. In machine learning, on the other hand, the literature is quite sparse and recent (e.g. see \cite{shashua2002tlm,yu2006collaborative}), and it is often limited to a single ordinal output (given numerical input co-variates). The RBM-based modelling of ordinal variables in \cite{Truyen:2009a} is similar to ours, except that our treatment is more general and principled. Mixed-variate modelling has been previously studied in statistics, under a variety of names such as \emph{mixed outcomes}, \emph{mixed data}, or \emph{mixed responses} \cite{sammel1997latent,dunson2000bayesian,shi2000latent,mcculloch2008joint}. Most papers focus on the mix of ordinal, Gaussian and binary variables under the \emph{latent variable} framework. More specifically, each observed variable is assumed to be generated from one or more underlying continuous latent variables. Inference becomes complicated since we need to integrate out these correlated latent variables, making it difficult to handle hundreds of variables and large-scale datasets. In machine learning, the problem of predicting a single multicategorical variable is also known as multilabel learning (e.g., see \cite{tsoumakas2007multi}). Previous ideas that we have adapted into our context include the shared structure among labels \cite{ji2008extracting}. In our model, the sharing is captured by the hidden layer in a probabilistic manner, and we consider many multicategorical variables at the same time. Finally, the problem of predicting a single category-ranked variable is also known as label-ranking (e.g., see \cite{dekel2003llm,hullermeier2008lrl}). The idea we adopt is the pairwise comparison between categories. 
However, the previous work neither considered the hidden correlation between those pairs nor attempted multiple category-ranked variables. \section{Conclusion} We have introduced Mixed-Variate Restricted Boltzmann Machines ($\MODEL$) as a generalisation of RBMs for modelling correlated variables of multiple modalities and types. The six types considered were: binary, categorical, multicategorical, continuous, ordinal, and category-ranked. We have shown that the $\MODEL$ is capable of handling a variety of machine learning tasks including feature extraction, dimensionality reduction, data completion, and label prediction. We demonstrated the capacity of the model on a large-scale world-wide survey. We plan to extend the present work in several directions. First, the model has the capacity to handle multiple related predictive models simultaneously by learning a shared representation through the hidden posteriors, making it applicable to the setting of multitask learning. Second, there may exist strong interactions between variables which the RBM architecture may not be able to capture. The theoretical question is then how to model inter-type dependencies directly without going through an intermediate hidden layer. Finally, we plan to enrich the range of applications of the proposed model. \paragraph{Acknowledgment:} We thank the anonymous reviewers for their insightful comments.
\section{Abstract} Quantum entanglement, one of the most counterintuitive effects in quantum mechanics \cite{Einstein35}, plays an essential role in quantum information and communication technology. Thus far, many efforts have been made to create multipartite entanglement in photon polarization \cite{Walther2005, Lu2007, Gao2010, Wieczorek2009, Prevedel2009}, quadrature amplitude \cite{Furusawa2009}, and ions \cite{Haffner2005}, for the demonstration and precise operation of quantum protocols. These efforts have mainly concentrated on the generation of pure entangled states, such as GHZ \cite{Greenberger1989}, W \cite{Dur2000}, and cluster \cite{Briegel2001} states. By contrast, bound entanglement \cite{Horodecki1998} cannot be distilled into pure entangled states, and had been considered useless for quantum information protocols such as quantum teleportation \cite{Boumeester1997, Furusawa1998}. However, it is interesting that some bound entanglement can be distilled by certain procedures \cite{Smolin2001} or by interaction with auxiliary systems \cite{Acin2004, Shor2003}. These properties enable new quantum communication schemes, for instance, remote information concentration \cite{Murao2001}, secure quantum key distribution \cite{Horodecki2005}, super-activation \cite{Shor2003}, and convertibility of pure entangled states \cite{Ishizaka2004}. Recently, a distillation protocol from the bound entangled state, the so-called `unlocking' \cite{Smolin2001}, has been experimentally demonstrated \cite{Amselem2009,Lavoie2010}. In this protocol, as depicted in Fig. 1 (a), four-party bound entanglement in the Smolin state \cite{Smolin2001} can be distilled into two parties (e.g., A and D) when the other two parties (e.g., B and C) come together and make joint measurements on their qubits. The unlocking protocol, in principle, can distill pure and maximal entanglement into the two qubits. 
However, it does not belong to the category of LOCC, since the two parties have to meet to distill the entanglement. \begin{tiny} \begin{figure}[t!] \includegraphics[width= \columnwidth, clip]{activation_unlock.eps} \label{ } \caption{Principle of the distillation of the bound entanglement in the Smolin state $\rho_s$. Each circle represents a qubit, and the blue line and squares show the entanglement of parties. BSM, Bell state measurement; CC, classical channel. (a) Unlocking. (b) Activation of the bound entanglement. Both of them can distill entanglement from the Smolin state; however, the activation protocol can be carried out under LOCC, while the unlocking needs two parties coming together and making joint measurements. } \end{figure} \end{tiny} The activation of bound entanglement that we demonstrate here is another protocol by which one can distill the Smolin-state bound entanglement by means of LOCC. The principle of the activation is sketched in Fig. 1 (b). Consider four parties, A, B, C, and D, each of which has a qubit in the Smolin state. The Smolin state is a statistically equal mixture of pairs of the four Bell states, and its density matrix $\rho_s$ is given by \begin{align} \rho_s &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AB}\otimes |\phi^i\rangle \langle \phi^i |_{CD} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AC}\otimes |\phi^i\rangle \langle \phi^i |_{BD} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AD}\otimes |\phi^i\rangle \langle \phi^i |_{BC}, \end{align} where $|\phi^i\rangle \in \{ |\phi^{\pm} \rangle ,|\psi^{\pm} \rangle \}$ are the two-qubit Bell states given by \begin{align} |\phi^{\pm} \rangle &= \frac{1}{\sqrt{2}} \left( |00 \rangle \pm |11 \rangle \right) \notag \\ |\psi^{\pm} \rangle &= \frac{1}{\sqrt{2}} \left( |01 \rangle \pm |10 \rangle \right), \end{align} where $\ket{0}$ and $\ket{1}$ are the qubit bases. 
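The equality of the three decompositions above, i.e., the permutation symmetry of the Smolin state, can be verified numerically. The following is our own NumPy sketch (with the $1/4$ factor making $\rho_s$ a unit-trace equal mixture), not part of the experimental analysis.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Bell states |phi+>, |phi->, |psi+>, |psi-> over the basis |00>,|01>,|10>,|11>
BELLS = [np.array([s, 0, 0, s]), np.array([s, 0, 0, -s]),
         np.array([0, s, s, 0]), np.array([0, s, -s, 0])]

def smolin(pairing):
    """Smolin state with the Bell pairs placed on the given qubit pairing,
    e.g. pairing (0,1,2,3) puts one Bell state on AB and one on CD."""
    rho = np.zeros((16, 16))
    for b in BELLS:
        v = np.kron(b, b).reshape(2, 2, 2, 2)             # axes ordered as `pairing`
        v = v.transpose(np.argsort(pairing)).reshape(16)  # reorder axes to A,B,C,D
        rho += np.outer(v, v) / 4                         # equal mixture over Bell pairs
    return rho
```

Running `smolin` for the pairings AB|CD, AC|BD and AD|BC yields numerically identical density matrices, matching the three expressions in the equation above.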
Since the $\rho_s$ state is symmetric with respect to the exchange of any two parties, $\rho_s$ is separable in any two-two bipartite cut. This implies that there is no distillable entanglement in any two-two bipartite cut: $D_{AB|CD}(\rho_s)=D_{AC|BD}(\rho_s)=D_{AD|BC}(\rho_s)=0$, where $D_{i|j}(\rho)$ is the distillable entanglement of $\rho$ in an $i|j$ bipartite cut. In addition to the Smolin state $\rho_s$, two of the parties (e.g., B and C) share distillable entanglement in the two-qubit Bell state (e.g., $\ket{\psi^+}_{B'C'}$). Hence, the initial state $\rho_I$ is given by \begin{align} \rho_I &= \rho_s \otimes \ket{\psi^+}\bra{\psi^+}_{B'C'} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} \ket{\phi^i}\bra{\phi^i}_{AD}\otimes \ket{\phi^i}\bra{\phi^i} _{BC} \otimes \ket{\psi^+}\bra{\psi^+}_{B'C'}. \end{align} Neither $\rho_s$ nor $\ket{\psi^+}_{B'C'}$ alone gives distillable entanglement to A and D: $D_{A|D}(\rho_s) = D_{A|D}(\ket{\psi^+}_{B'C'})=0$, since $D_{A|D}(\rho_s) \leq D_{AB|CD}(\rho_s)=0$. To distill entanglement into A and D, the Bell state measurements (BSMs), i.e., the projection measurements onto the Bell bases, are performed on the qubit pairs B-B' and C-C', and the results are communicated to A and D via classical channels. Due to the property that A-D and B-C share the same Bell states $\ket{\phi^i}_{AD}\otimes\ket{\phi^i}_{BC}$ in $\rho_s$, the result of the BSMs on each $\ket{\phi^i}_{BC}\otimes \ket{\psi^+}_{B'C'}$ reveals the type of the Bell state shared by A and D. Table I lists the resulting states shared by A and D for all the possible combinations of the results of the BSMs on B-B' and C-C'. Given this information one can determine the state shared by A and D and then convert any $\ket{\phi^i}_{AD}$ into $\ket{\psi^-}_{AD}$ by local unitary operations. Hence, the activation protocol can distill entanglement from the Smolin state by the four parties' LOCC with the help of the auxiliary two-qubit Bell state. 
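The key property behind Table I, namely that each pair of BSM outcomes on B-B' and C-C' is compatible with exactly one Bell state $\ket{\phi^i}_{BC}$ and hence identifies $\ket{\phi^i}_{AD}$, can be checked numerically. The following is our own sketch of the double entanglement swapping, not the authors' analysis code.

```python
import numpy as np
from itertools import product

s = 1 / np.sqrt(2)
BELL = {'phi+': np.array([s, 0, 0, s]), 'phi-': np.array([s, 0, 0, -s]),
        'psi+': np.array([0, s, s, 0]), 'psi-': np.array([0, s, -s, 0])}

def bsm_outcomes(name):
    """Outcome probabilities of joint BSMs on B-B' and C-C' when
    B-C hold the Bell state `name` and B'-C' hold |psi+>."""
    # state with tensor axes ordered (B, C, B', C')
    psi = np.outer(BELL[name], BELL['psi+']).reshape(2, 2, 2, 2)
    probs = {}
    for b, c in product(BELL, BELL):
        # amplitude <b|_{BB'} <c|_{CC'} |psi>; Bell vectors are real
        amp = np.einsum('bcpq,bp,cq->', psi,
                        BELL[b].reshape(2, 2), BELL[c].reshape(2, 2))
        probs[(b, c)] = abs(amp) ** 2
    return probs
```

For each of the four possible $\ket{\phi^i}_{BC}$, exactly 4 of the 16 outcome pairs occur, each with probability $1/4$, and no outcome pair is shared between two different $i$; the outcome therefore determines the Bell state held by A and D.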
This is in strong contrast to the unlocking protocol (see Fig. 1 (a)), which requires non-local joint BSM between the two parties (B and C). It is noteworthy that in our activation protocol, the distillable entanglement between A and D is superadditive: \begin{equation} D_{A|D}(\rho_I) > D_{A|D}(\rho_s) +D_{A|D}(\ket{\psi^+}_{B'C'}) = 0. \end{equation} This superadditivity means that the bound entanglement is activated with the help of the auxiliary distillable entanglement, although A and D share no distillable entanglement for $\rho_s$ or $\ket{\psi^+}_{B'C'}$ alone. This protocol can also be regarded as an entanglement transfer from B-C to A-D. In this context, it is interesting that two parties (B-C) can transfer the Bell state to the other two parties (A-D) despite being separated: $D_{AD|BC}(\rho_s) = 0$. In other words, entanglement can be transferred by the mediation of the undistillable, bound entanglement. This unique feature is quite different from two-stage entanglement swapping \cite{Goebel2008}, which needs distillable entanglement shared by senders and receivers to transfer the Bell states. \begin{table}[b!] \caption{Relationship between the combination of the results of the BSM of B-B' and C-C', and the state of AD. Each combination of the BSMs tells the state of A-D. 
} \label{table} \begin{center}\small \def\arraystretch{1.8} \begin{tabular}{lcccccc}\hline & & & $\ket{\phi^+}_{BB'}$ & $\ket{\phi^-}_{BB'}$ & $\ket{\psi^+}_{BB'}$& $\ket{\psi^-}_{BB'}$ \\ \hline $\ket{\phi^+}_{CC'}$ & & & $\ket{\psi^+}_{AD}$ & $\ket{\psi^-}_{AD}$ & $\ket{\phi^+}_{AD}$ & $\ket{\phi^-}_{AD} $ \\ $\ket{\phi^-}_{CC'}$ & & & $\ket{\psi^-}_{AD}$ & $\ket{\psi^+}_{AD}$ & $\ket{\phi^-}_{AD}$ & $\ket{\phi^+}_{AD} $ \\ $\ket{\psi^+}_{CC'}$ & & & $\ket{\phi^+}_{AD}$ & $\ket{\phi^-}_{AD}$ & $\ket{\psi^+}_{AD}$ & $\ket{\psi^-}_{AD} $ \\ $\ket{\psi^-}_{CC'}$ & & & $\ket{\phi^-}_{AD}$ & $\ket{\phi^+}_{AD}$ & $\ket{\psi^-}_{AD}$ & $\ket{\psi^+}_{AD} $ \\ \hline \end{tabular} \end{center} \vspace*{-4mm} \end{table} \begin{tiny} \begin{figure}[t!] \includegraphics[width=\columnwidth, clip]{activation_exsetup05.eps} \label{ } \caption{Scheme of the activation of the bound entanglement. Each source of spontaneous parametric down-conversion (SPDC) produces $\ket{\psi^+}$. The four-photon states emitted from SPDC1 and SPDC2 pass through liquid crystal variable retarders (LCVRs) to be transformed into the Smolin states. The polarization of two-photon states in mode A and D are analyzed on the condition that the pair of the Bell state, $\ket{\phi^+}_{BB'}\otimes\ket{\phi^+}_{CC'}$ is detected. P, a polarizer. } \end{figure} \end{tiny} Figure 2 illustrates the experimental scheme of our activation protocol. In our experiment, the physical qubits are polarized photons, having horizontal $\ket{H}$ and vertical $\ket{V}$ polarizations as the state bases. By using three sources of spontaneous parametric down-conversion (SPDC) \cite{Kwiat95}, we produced three $\ket{\psi^+}$ states simultaneously. The state $\rho(\psi^+) \equiv \ket{\psi^+}\bra{\psi^+}_{AB}\otimes \ket{\psi^+}\bra{\psi^+}_{CD}$ emitted from the SPDC1 and SPDC2 was transformed into the Smolin state by passing through the synchronized liquid-crystal variable retarders (LCVRs, see Supplementary Material). 
The state $\ket{\psi^+}_{B'C'}$ emitted from SPDC3 was used as the auxiliary Bell state for the activation protocol. A polarizing beam splitter (PBS) and a $\ket{+}_{B}\ket{+}_{B'}$ ($\ket{+}_{C}\ket{+}_{C'}$) coincidence event at modes B and B' (C and C') allow the projection onto the Bell state $\ket{\phi^+}_{BB'}$ ($\ket{\phi^+}_{CC'}$), where $\ket{+}_{i}= (\ket{H}_i+\ket{V}_i)/\sqrt{2}$ \cite{Goebel2008}. Given the simultaneous BSMs at B-B' and C-C', we post-selected the events of detecting $\ket{\phi^+}_{BB'} \otimes \ket{\phi^+}_{CC'}$ out of the 16 combinations (Table I). The result, i.e., the state after the activation, was expected to be $\ket{\psi^+}_{AD}$. To characterize the experimentally obtained Smolin state $\rho_s^{exp}$ and the state after the activation process $\rho_{AD}$, maximum-likelihood state tomography \cite{James2001} was performed (see Supplementary Material). \begin{tiny} \begin{figure}[t!] \includegraphics[width=0.7\columnwidth, clip]{2010112701mlre.eps} \caption{ Real part of the measured Smolin state $\rho_s^{exp}$. } \end{figure} \end{tiny} Figure 3 shows the real part of the reconstructed density matrix of the Smolin state $\rho_s^{exp}$ we obtained. The fidelity to the ideal Smolin state $F_s$ = $ \left({\rm Tr} \sqrt{\sqrt{\rho_s} \rho_s^{exp} \sqrt{\rho_s} }\right)^2$ was calculated to be 82.2$\pm$0.2$\%$. From $\rho_s^{exp}$, we evaluated the separability of the generated state across the bipartite cuts AB$|$CD, AC$|$BD, and AD$|$BC in terms of the logarithmic negativity (LN) \cite{Vidal2002}, an entanglement monotone that gives an upper bound on the distillable entanglement under LOCC. The LN of a density matrix $\rho$ composed of two subsystems $i$ and $j$ is given by \begin{equation} LN _{i|j}(\rho ) =\log_2 ||\rho^{T_{i}}||, \end{equation} where $\rho^{T_{i}}$ represents the partial transpose with respect to the subsystem $i$, and $||\rho^{T_{i}}||$ is the trace norm of $\rho^{T_{i}}$.
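The LN defined above can be evaluated numerically by permuting the tensor indices of the density matrix; a minimal sketch (illustrative only) that reproduces the ideal values for $\rho_s$ and $\rho(\psi^+)$ quoted in Table II:

```python
import numpy as np

# Two-qubit Bell states: phi+, phi-, psi+, psi-
f = 1 / np.sqrt(2)
bells = [f * np.array([1, 0, 0, 1]), f * np.array([1, 0, 0, -1]),
         f * np.array([0, 1, 1, 0]), f * np.array([0, 1, -1, 0])]

def proj(v):
    return np.outer(v, v.conj())

# Smolin state: equal mixture of |Bell_i>_AB (x) |Bell_i>_CD (order A,B,C,D)
rho_s = sum(np.kron(proj(b), proj(b)) for b in bells) / 4
rho_psi = np.kron(proj(bells[2]), proj(bells[2]))   # psi+_AB (x) psi+_CD

def log_negativity(rho, subsys):
    """LN across the cut subsys|rest for a 4-qubit density matrix.
    The partial transpose swaps the row/column tensor indices of
    the chosen qubits; the result is still Hermitian."""
    t = rho.reshape((2,) * 8)
    perm = list(range(8))
    for q in subsys:
        perm[q], perm[q + 4] = perm[q + 4], perm[q]
    pt = t.transpose(perm).reshape(16, 16)
    return np.log2(np.abs(np.linalg.eigvalsh(pt)).sum())

for cut, name in [((0, 1), 'AB|CD'), ((0, 2), 'AC|BD'), ((0, 3), 'AD|BC')]:
    print(name, round(log_negativity(rho_s, cut), 6),
          round(log_negativity(rho_psi, cut), 6))
# rho_s gives 0 for every two-two cut; rho(psi+) gives 0, 2, 2
```
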
The LN values thus obtained for the three bipartite cuts of $\rho_s^{exp}$ are presented in Table II, together with those of the Smolin state $\rho_s$ and the state $\rho(\psi^+)$. The state $\rho_s$ has zero negativity for all three bipartite cuts, while the state $\rho(\psi^+)$ has finite values, i.e., finite distillable entanglement originating from the two Bell states, for the AC$|$BD and AD$|$BC cuts. For $\rho_s^{exp}$, the LN values are all close to zero, indicating that $\rho_s^{exp}$ has a very small amount, if any, of distillable entanglement. \begin{table}[b!] \caption{Logarithmic negativities (LNs) for the two-two bipartite cuts. } \label{tableLN} \begin{center}\small \def\arraystretch{1.3} \begin{tabular}{lccc}\hline & $\rho_s^{exp}$ & $\rho_s$ & $\rho(\psi^+)$ \\ \hline $LN_{AB|CD}(\rho)$ & $0.076 \pm 0.012 $ & 0 & 0 \\ $LN_{AC|BD}(\rho)$ & $0.183 \pm 0.012 $ & 0 & 2 \\ $LN_{AD|BC}(\rho)$ & $0.178 \pm 0.012 $ & 0 & 2 \\\hline \end{tabular} \end{center} \vspace*{-4mm} \end{table} To test the remaining separability properties, we calculated the entanglement witness Tr$\{W \rho_s^{exp} \}$, which takes non-negative values on separable states, so that a negative value certifies entanglement. The witness for our four-qubit states is $W = I^{\otimes 4}-\sigma_x^{\otimes 4}-\sigma_y^{\otimes 4}-\sigma_z^{\otimes 4}$ \cite{Amselem2009,Toth2005}. We obtained Tr$\{W \rho_s^{exp} \}= -1.30 \pm 0.02$, while the values for the ideal Smolin state and the state $\rho(\psi^+)$ are both -2. The negative witness value indicates that $\rho_s^{exp}$ is not separable. Taking into account the result that $\rho_s^{exp}$ has almost no distillable entanglement across the two-two bipartite cuts, $\rho_s^{exp}$ should have distillable entanglement across one-three bipartite cuts and/or tripartite cuts. \begin{tiny} \begin{figure}[t!] 
\includegraphics[width=0.484\columnwidth, clip]{smolin_ad03.eps} \includegraphics[width=0.49\columnwidth, clip]{rhoad.eps} \caption{Density matrices of the qubits A and D before (a) and after (b) the activation experiment. (a) Reduced density matrix $ \rho_{sAD}^{exp}$ = Tr$_{BC} \rho_s^{exp}$. (b) The density matrix $\rho_{AD}$, triggered by the two BSMs at B-B' and C-C'. } \end{figure} \end{tiny} In the following we describe the results of our activation experiment. Figure 4 (a) shows the reduced density matrix $ \rho_{sAD}^{exp}$ = Tr$_{BC} \rho_s^{exp}$, i.e., the density matrix before the activation. We confirmed that $ \rho_{sAD}^{exp}$ gives no distillable entanglement to A and D: $ LN_{A|D}(\rho_{sAD}^{exp}) = 0$. Figure 4 (b) shows the density matrix after the activation $\rho_{AD}$, the reconstructed two-qubit density matrix in modes A and D triggered by the two BSMs at B-B' and C-C'. In the process of the state reconstruction we subtracted the accidental coincidences caused by higher-order emission of SPDC (see Supplementary Material). The fidelity $F_{AD} = \bra{\psi^+}\rho_{AD}\ket{\psi^+}_{AD} $ to the ideally activated state $\ket{\psi^+}_{AD}$ was 85$\pm 5$\%, which is larger than the classical limit of 50\%. The obtained LN was $LN_{A|D}(\rho_{AD}) = 0.83\pm 0.08$, indicating that we gained a certain amount of entanglement via the activation process, whereas A and D initially shared no distillable entanglement. We quantified the increase of the distillable entanglement achieved by the activation by evaluating lower and upper bounds on the distillable entanglement before and after the process, as follows. The upper bound of $D_{A|D}(\rho_s^{exp})$, the distillable entanglement before the activation, was given by \begin{align} D_{A|D}(\rho_s^{exp}) \leq LN_{AB|CD}(\rho_s^{exp} ) = 0.076. \end{align} The observed LN value after the activation process, $LN_{A|D}(\rho_{AD} ) = 0.83$, is larger than this value. 
However, since these are just upper bounds on the distillable entanglement, we should examine a lower bound for $\rho_{AD}$ to confirm the increase of the distillable entanglement. To quantify the lower bound of $D_{A|D}(\rho)$, we used \begin{align} D_H(\rho) \leq D_{A|D}(\rho ), \end{align} where $D_H(\rho)$ is the distillable entanglement achievable via a particular distillation protocol, the so-called hashing method, known as the best method for Bell-diagonal states of rank 2 \cite{Bennett1996}. $D_H(\rho)$ is given by \begin{align} D_H(\rho ) = 1+F\log_2(F)+(1-F)\log_2\frac{(1-F)}{3}, \end{align} where $F = {\rm max}_i(\bra{\phi^i}\rho\ket{\phi^i})$ is the maximum state fidelity over the four Bell states. It is known that $D_H(\rho) > 0$ for $F > 0.8107$ \cite{Bennett1996}. The fidelity for $\rho_{AD}$ does satisfy this criterion, and the value of $D_H(\rho_{AD})$ is calculated to be 0.15. The combination of Eqs. (6) and (7) shows a clear increase of the distillable entanglement via our activation experiment: $D_{A|D}(\rho_s^{exp}) \leq 0.076 < 0.15 \leq D_{A|D}(\rho_{AD})$. In conclusion, we have experimentally demonstrated the activation of bound entanglement, unleashing the entanglement bound in the Smolin state by means of LOCC with the help of the auxiliary entanglement of a two-qubit Bell state. We reconstructed the density matrices of the states before and after the activation protocol by full state tomography. We observed an increase of distillable entanglement via the activation process, examining two inequalities that bound the distillable entanglement. The gain of distillable entanglement clearly demonstrates the activation protocol, in which the undistillable, bound entanglement in $\rho_s$ is essential. 
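As a numerical cross-check of the figures quoted above (a sketch using the fidelity values from the text), the hashing formula gives $D_H \approx 0$ at the threshold $F = 0.8107$ and $D_H \approx 0.15$ at the measured $F_{AD} = 0.85$:

```python
import numpy as np

def hashing_bound(F):
    """Hashing lower bound D_H = 1 + F log2(F) + (1-F) log2((1-F)/3)
    for a Bell-diagonal state with largest Bell-state fidelity F."""
    return 1 + F * np.log2(F) + (1 - F) * np.log2((1 - F) / 3)

print(hashing_bound(0.8107))  # ≈ 0 (threshold for distillability by hashing)
print(hashing_bound(0.85))    # ≈ 0.15, the lower bound used for rho_AD
```
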
Our result will be fundamental for novel multipartite quantum-communication schemes, for example, quantum secret sharing, communication complexity reduction, and remote information concentration, in which general classes of entanglement, including bound entanglement, are important. This work was supported by a Grant-in-Aid for Creative Scientific Research (17GS1204) from the Japan Society for the Promotion of Science.
\section{Introduction} Since its inception by Robin Forman \cite{F-98}, discrete Morse theory has been a powerful and versatile tool, used not only in diverse fields of mathematics but also in applications to other areas \cite{Real-15} and as a computational tool \cite{C-G-N2016}. Its adaptability stems in part from the fact that it is a discrete version of the beautiful and successful ``smooth'' Morse theory \cite{M-63}. Among the many branches of mathematics touched by smooth Morse theory, one that has its origins in critical point theory is Lusternik--Schnirelmann category. The (smooth) Lusternik--Schnirelmann category or L--S category of a smooth manifold $X$, denoted $\cat(X)$, was first introduced in \cite{L-S}, where the authors proved what is now known as the Lusternik--Schnirelmann Theorem (see also \cite{CLOT} for a detailed survey of the topic). A version of this result can be stated as follows: \begin{theorem} If $M$ is a compact smooth manifold and $f\colon M \to \RR$ is a smooth map, then $$\cat(M)+1\leq \sharp(\Crit(f))$$ where $\Crit(f)$ is the set of all critical points of the function $f$. \end{theorem} There are many ``smooth'' versions of this theorem in various contexts (see for example \cite{Palais-66}). The aim of this paper is to view Forman's discrete Morse theory from a different perspective in order to prove a discrete version of the L--S theorem compatible with the recently defined simplicial L--S category developed by three of the authors \cite{F-M-V}. This simplicial version of L--S category is suitable for simplicial complexes. Other attempts have been made to develop such a ``discrete'' L--S category. In \cite{AS}, one of the authors developed a discrete version of L--S category and proved an analogous L--S theorem for discrete Morse functions. 
Our version of the L--S Theorem, Theorem \ref{sLS theorem}, relates a new generalized notion of critical object of a discrete Morse function to the simplicial L--S category defined in \cite{F-M-V}. This simplicial approach to L--S category uses strong collapses in the sense of Barmak and Minian \cite{B-M-12} as a framework for developing categorical subcomplexes. As opposed to standard collapses, strong collapses are natural to consider in the simplicial setting since they correspond to simplicial maps (see Figure~\ref{strong_collapse}). \begin{figure}[htbp] \begin{center} \includegraphics[width=.5\linewidth]{strong_collapse} \caption{An elementary strong collapse from $K$ to $K-\{v\}$.} \label{strong_collapse} \end{center} \end{figure} This contrasts with (standard) elementary collapses, which in general do not correspond to simplicial maps (see Figure~\ref{standard_collapse}). \begin{figure}[htbp] \begin{center} \includegraphics[width=.5\linewidth]{standard_collapse} \caption{An elementary (standard) collapse from $K$ to $K-\{\sigma , \tau\}$.} \label{standard_collapse} \end{center} \end{figure} Furthermore, elementary strong collapses correspond to the deletion of the open star of a dominated vertex. Notice that in general, the deletion of a vertex is not a simplicial map (see Figure~\ref{vertex_deletion}). \begin{figure}[htbp] \begin{center} \includegraphics[width=.6\linewidth]{vertex_deletion} \caption{Deletion of a vertex that does not correspond to a simplicial map.} \label{vertex_deletion} \end{center} \end{figure} This paper is organized as follows. Section \ref{Fundamentals of simplicial complexes} contains the necessary background and basics of simplicial complexes and collapsibility. Section \ref{Strong discrete Morse theory} is devoted to both reviewing discrete Morse theory and introducing a generalized notion of critical object in this context. 
Here we develop a collapsing theorem for discrete Morse functions which is analogous to the classical result of Forman (Theorem 3.3 of \cite{F-98}). Section \ref{simplicial Lusternik--Schnirelmann category} is the heart of the paper. In this section, we recall the definitions and basic properties of the simplicial L--S category and prove the simplicial L--S theorem in Theorem \ref{sLS theorem}. The rest of the section is devoted to examples and immediate applications. \section{Fundamentals of simplicial complexes}\label{Fundamentals of simplicial complexes} In this section we review some of the basics of simplicial complexes (see \cite{Munkres} for a more detailed exposition). Let us start with the definition of a simplicial complex. The most common way to introduce this notion is geometrically. First, let us introduce the basic building blocks, that is, the notion of a simplex. Given $n+1$ points $v_0,\dots,v_n$ in general position in a Euclidean space, the $n$-simplex $\sigma$ generated by them, $\sigma=(v_0,\dots,v_n)$, is defined as their convex hull. A simplex $\sigma$ contains lower-dimensional simplices $\tau$, denoted by $\tau\leq \sigma$ and called faces, namely the simplices generated by the subsets of its vertices. A simplicial complex is a collection of simplices satisfying two conditions: \begin{itemize} \item Every face of a simplex in the complex is also a simplex of the complex. \item If two simplices in the complex intersect, then the intersection is also a simplex of the complex. \end{itemize} Alternatively, it is possible to define a simplicial complex abstractly. This avoids confusion when parts of the simplicial structure are removed. Given a finite set $[n]:=\{0,1,2,3,\ldots, n\}$, an \emph{abstract simplicial complex $K$ on $[n]$} is a collection of subsets of $[n]$ such that: \begin{itemize} \item If $\sigma \in K$ and $\tau \subseteq \sigma$, then $\tau \in K$. \item $\{i\} \in K$ for every $i\in [n]$. 
\end{itemize} The set $[n]$ is called the \emph{vertex set} of $K$ and the elements $\{i\}$ are called \emph{vertices} or \emph{$0$-simplices.} We may also write $V(K)$ to denote the vertex set of $K$. An element $\sigma \in K$ of cardinality $i+1$ is called an \emph{$i$-dimensional simplex} or \emph{$i$-simplex} of $K$. The \emph{dimension} of $K$, denoted $\dim(K)$, is the maximum of the dimensions of all its simplices. \begin{example} Let $n=5$ and $V(K):=\{0,1,2,3,4,5\}$.\par \noindent Define $K:=\{\{0\},\{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{1,2\}, \{1,4\}, \{2,3\}, \{1,3\}, \{3,4\}, \{3,5\},$\par \hspace*{1.55cm}$\{4,5\}, \{3,4,5\}, \emptyset\}$.\par This abstract simplicial complex may be regarded geometrically as follows: \begin{center} \includegraphics[width=.3\linewidth]{simplicial_complex} \end{center} \end{example} Further definitions are in order. \begin{definition} A \emph{subcomplex} $L$ of $K$, denoted $L\subseteq K$, is a subset $L$ of $K$ such that $L$ is also a simplicial complex. \end{definition} We use $\sigma^{(i)}$ to denote a simplex of dimension $i$, and we write $\tau < \sigma^{(i)}$ to denote any subsimplex of $\sigma$ of dimension strictly less than $i$. If $\sigma, \tau\in K$ with $\tau < \sigma$, then $\tau$ is a \emph{face} of $\sigma$ and $\sigma$ is a \emph{coface} of $\tau$. A simplex of $K$ that is not properly contained in any other simplex of $K$ is called a \emph{facet} of $K$.\\ At this point we recall a key concept in simple-homotopy theory: the notion of simplicial collapse \cite{C-73}. \begin{definition} Let $K$ be a simplicial complex and suppose that there is a pair of simplices $\sigma^{(p)}<\tau^{(p+1)}$ in $K$ such that $\sigma$ has no cofaces other than $\tau$. Such a pair $\{\sigma, \tau\}$ is called a \emph{free pair}. Then $K-\{\sigma,\tau\}$ is a simplicial complex called an \emph{elementary collapse} of $K$ (see Figure~\ref{standard_collapse}). 
The action of collapsing is denoted $K \searrow K-\{\sigma, \tau\}$. More generally, $K$ is said to \emph{collapse} onto $L$ if $L$ can be obtained from $K$ through a finite series of elementary collapses, denoted $K \searrow L$. In the case where $L=\{v\}$ is a single vertex, we say that $K$ is \emph{collapsible}. \end{definition} Now we introduce some basic subcomplexes related to a vertex. They play a role in the simplicial setting analogous to that played by the closed ball, sphere, and open ball in the continuous approach. \begin{definition} Let $K$ be a simplicial complex and let $v\in K$ be a vertex. The \emph{star of $v$ in $K$}, denoted $\st(v)$, is the subcomplex of simplices $\sigma \in K$ such that $\sigma\cup \{v\}\in K$. The \emph{link of $v$ in $K$}, denoted $\lk(v)$, is the subcomplex of $\st(v)$ of simplices which do not contain $v$. The \emph{open star of $v$ in $K$} is $\st^o(v):=\st(v)-\lk(v)$. Note that $\st^o(v)$ is not a simplicial subcomplex. Finally, given a simplicial complex $K$ and a vertex $v\notin K$, the \emph{cone} $vK$ is the simplicial complex whose simplices are $\{v_0,\dots,v_n\}$ and $\{v,v_0,\dots,v_n\}$, where $\{v_0,\dots,v_n\}$ is any simplex of $K$. \end{definition} It is important to point out that simplicial collapses are not simplicial maps in general. This suggests that it is natural to consider a special kind of collapse which is a simplicial map. The notion of strong collapse, introduced by Barmak and Minian in \cite{BARMAK2011,B-M-12}, satisfies this requirement. \begin{definition} Let $K$ be a simplicial complex and suppose there exists a pair of vertices $v,v^\prime\in K$ such that every maximal simplex containing $v$ also contains $v^\prime$. Then we say that $v^\prime$ \emph{dominates} $v$ and $v$ is \emph{dominated by} $v^\prime$. If $v$ is dominated by $v^\prime$, then the inclusion $i: K - \{v\} \to K$ is a strong equivalence. 
Its homotopical inverse is the retraction $r \colon K \to K - \{v\}$ which is the identity on $K - \{v\}$ and such that $r(v) = v^\prime$. This retraction is called an \emph{elementary strong collapse} from $K$ to $K - \{v\}$, denoted by $K \searrow\searrow K - \{v\}$. A \emph{strong collapse} is a finite sequence of elementary strong collapses. The inverse of a strong collapse is called a \emph{strong expansion}, and two complexes $K$ and $L$ \emph{have the same strong homotopy type} if there is a sequence of strong collapses and strong expansions that transform $K$ into $L$. \end{definition} Figure~\ref{strong_collapse} illustrates the above notions. Equivalently, $v$ is a dominated vertex if and only if its link is a cone \cite{BARMAK2011}. \section{Strong discrete Morse theory}\label{Strong discrete Morse theory} We are now ready to introduce a key object of our study, namely discrete Morse functions in Forman's sense \cite{F-98}. More precisely, we are interested in a generalized notion of critical object suitable for codifying the strong homotopy type of a complex. \begin{definition} Let $K$ be a simplicial complex. A \emph{discrete Morse function} $f\colon K \to \RR$ is a function satisfying for any $p$-simplex $\sigma\in K$: \begin{enumerate} \item[(M1)] $\sharp\left(\{\tau^{(p+1)}>\sigma : f(\tau)\leq f(\sigma)\}\right) \leq 1$. \item[(M2)] $\sharp\left(\{\upsilon^{(p-1)}<\sigma : f(\upsilon)\geq f(\sigma)\}\right) \leq 1$. \end{enumerate} \end{definition} Roughly speaking, a discrete Morse function is a weakly increasing function with the property that if $f(\sigma)=f(\tau)$ for incident simplices $\sigma<\tau$, then $\tau$ is a coface of $\sigma$ of dimension exactly one higher. \begin{definition} A \emph{critical simplex} of $f$ is a simplex $\sigma$ satisfying: \begin{enumerate} \item[(C1)] $\sharp\left(\{\tau^{(p+1)}>\sigma : f(\tau)\leq f(\sigma)\}\right) =0$. \item[(C2)] $\sharp\left(\{\upsilon^{(p-1)}<\sigma : f(\upsilon)\geq f(\sigma)\}\right)=0$. 
\end{enumerate} \end{definition} If $\sigma$ is a critical simplex, the number $f(\sigma)\in \RR$ is called a \emph{critical value}. Any simplex that is not critical is called a \emph{regular simplex} while any output value of the discrete Morse function which is not a critical value is a \emph{regular value}. The set of critical simplices of $f$ is denoted by $\Crit(f)$.\\ Given any real number $c$, the \emph{level subcomplex} of $f$ at level $c$, $K(c)$, is the subcomplex of $K$ consisting of all simplices $\tau$ with $f(\tau)\leq c$, as well as all of their faces, that is, $$ K(c)=\bigcup_{f(\tau)\leq c}\bigcup_ {\sigma \leq \tau} \sigma .$$ Any discrete Morse function induces a gradient vector field. \begin{definition}\label{GVF} Let $f$ be a discrete Morse function on $K$. The \emph{induced gradient vector field} $V_f$ or $V$ when the context is clear, is defined by the following set of pairs of simplices: $$V_f:=\{(\sigma^{(p)}, \tau^{(p+1)}) : \sigma<\tau, f(\sigma)\geq f(\tau)\}. $$ \end{definition} Note that critical simplices are easily identified in terms of the gradient field as precisely those simplices not contained in any pair in the gradient field. \begin{definition}\label{min outside} Let $f\colon K\to \RR$ be a discrete Morse function and $V_f$ its induced gradient vector field. For each vertex/edge pair $(v,uv)\in V_f$, write $\mathrm{St}(v,u):=\st^o(v)\cap \st(u)$. Define $m_v:=\min\{f(\tau): f(\tau)>f(uv), \tau \in (K-\mathrm{St}(v,u))\bigcup (\Crit(f)\cap \mathrm{St}(v,u) )\}$. \end{definition} \begin{example}\label{cutoff illustration_a} We illustrate Definition \ref{min outside}. Let $K$ be the simplicial complex with discrete Morse function $f$ given in Figure~\ref{DMT_example1a}. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=.95\linewidth]{DMT_example1a} \caption{A 2-dimensional simplicial complex with a discrete Morse function.} \label{DMT_example1a} \end{center} \end{figure} Observe that taking $v=f^{-1}(10)$ and $u=f^{-1}(0)$, we have $(v,uv)\in V_f$ and $$\mathrm{St}(v,u)=f^{-1}(\{9,10,11,12,13,14\})\,.$$ Since $\mathrm{St}(v,u)$ does not contain any critical simplex, determining $m_v$ consists in finding the smallest value greater than $f(uv)$ outside of $\mathrm{St}(v,u)$. In this case, $m_v=15$. \end{example} \begin{example}\label{cutoff illustration_b} Now $f$ will be slightly modified to create a new discrete Morse function $g$ on $K$ (Figure~\ref{DMT_example1b}). It is clear that $(v,uv)\in V_g$, but in this case $m_v=11$. \begin{figure}[htbp] \begin{center} \includegraphics[width=.95\linewidth]{DMT_example1b} \caption{The same 2-dimensional simplicial complex of Figure~\ref{DMT_example1a} with another discrete Morse function.} \label{DMT_example1b} \end{center} \end{figure} \end{example} Notice that the values taken by the functions $f$ and $g$ coincide on some simplices. It is interesting to point out that both examples have the same induced gradient vector field and consequently contain the same critical simplices in the usual Forman sense. This justifies the need to take into account additional information in this new approach, and thus a more general concept of critical object. \begin{definition}\label{strong definition} Continuing with the notation used in Definition \ref{min outside}, define $l_v$ as the largest regular value in $f(\mathrm{St}(v,u))$ such that \begin{itemize} \item $f(uv)\leq l_v\leq m_v$ \item every maximal regular simplex of $K(l_v)\cap \mathrm{St}(v,u)$ contains the vertex $u$. 
\end{itemize} \end{definition} \begin{definition} Under the notation of Definition \ref{min outside}, the \emph{strong collapse of $v$ under $f$},\linebreak denoted $S^f_v$, is given by $S^{f}_v:=\{ (\sigma, \tau)\in V_f : f(uv)\leq f(\tau)\leq l_v\}$ and the interval $I(S^f_v)=[f(uv),l_v]$ is called the \emph{strong interval} of $S_v$. The elements of the set $C(V_f):=V_f-\bigcup S^f_v$ are the \emph{critical pairs} of $f$ while each element in $\bigcup S_v$ is a \emph{regular pair} of $f$. If $(\sigma, \tau)$ is a critical pair, the value $f(\tau)$ is a \emph{critical value} of $f$. A \emph{critical object} is either a critical simplex (in the standard Forman sense) or a critical pair. The set of all critical objects of $f$ is denoted by $\scrit(f)$. In order to avoid confusion, we call the images of all critical objects (either in the Forman sense or from a critical pair) \emph{strong critical values}. \end{definition} \begin{remark} It is worthwhile to mention that critical pairs detect when a standard collapse has been made. Notice that this may happen either due to combinatorial reasons (e.g. the non-existence of dominated vertices in the corresponding subcomplex) or to a bad choice of the values of the discrete Morse function (i.e. noise). This second option induces a bad ordering in the way the standard collapses are made inside a potential strong one. The key idea is that every elementary strong collapse should ideally be made as an uninterrupted sequence of standard collapses. So every time it is not made in this way, a new critical object appears. \end{remark} \begin{example} Define $f$ and $g$ as in Examples \ref{cutoff illustration_a} and \ref{cutoff illustration_b}. We see that $l^f_v=14$ while $l_v^g=10$. Hence (by abuse of notation), $(16,15)$ is a critical pair under $f$ while $(14,13)$, $(16,15)$ and $(18,17)$ are critical pairs under $g$. Notice that the strong intervals are $I(S^f_v)=[9,14]$ and $I(S^g_v)=[9,10]$, respectively. 
\end{example} \begin{example}\label{arg exam} To further illustrate Definition \ref{strong definition}, we shall consider two different discrete Morse functions defined on a collapsible but non-strongly collapsible triangulation of the $2$-disc. Notice that both of them have only a single critical simplex (in the Forman sense), but very different numbers of critical objects, due to a bad choice in the ordering of the standard collapses of $g$. Let $f$ be given in Figure~\ref{DMT_example2a}. \begin{figure}[htbp] \begin{center} \includegraphics[width=.7\linewidth]{DMT_example2a} \caption{A discrete Morse function defined on a non-strongly collapsible triangulation of the $2$-disc.} \label{DMT_example2a} \end{center} \end{figure} By abuse of notation, we refer to the simplices by their labeling under the discrete Morse function (the fact that pairs in $V_f$ are given the same label should not cause confusion). For each of the pairs $(9,9), (6,6), (3,3),(2,2),(1,1)\in V_f$, we have the corresponding values $l_9=11, l_6=8, l_3=5, l_2=2$, and $l_1=1$. The corresponding strong collapses under the indicated vertices are given by $$ \begin{array}{ccl} S^f_9 & = & \{ (9,9), (10,10), (11,11)\},\\ S^f_6 & = & \{(6,6),(7,7), (8,8)\}, \\ S^f_3 & = & \{(3,3), (4,4), (5,5)\}, \\ S^f_2 & = & \{(2,2)\} \mbox{ and }\\ S^f_1 & = & \{(1,1)\}\,. \end{array}$$ Thus we obtain the following strong intervals: $$\begin{array}{c} I(S^f_9)=[9,11], I(S^f_6)=[6,8], I(S^f_3)=[3,5], I(S^f_2)=[2,2]=\{ 2 \} \\ \mbox{ and } I(S^f_1)=[1,1]=\{ 1 \}\,. \end{array}$$ Hence there is a single critical pair, namely, $(13,13)$, so that $\scrit(f)=\{0, (13,13)\}$. It is interesting to point out that this discrete Morse function can be considered optimal in the sense that it minimizes the number of critical objects, as we will see in Theorem \ref{sLS theorem}. Now let $g$ be the discrete Morse function given in Figure~\ref{DMT_example2b}. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=.7\linewidth]{DMT_example2b} \caption{A different discrete Morse function defined on a non-strongly collapsible triangulation of the $2$-disc.} \label{DMT_example2b} \end{center} \end{figure} Now we consider the pairs $(9,9),(8,8),(7,7), (5,5), (3,3)\in V_g$ and obtain corresponding values $ l_9=10, l_8=8, l_7=7,l_5=6$ and $l_3=3.$ The corresponding strong collapses are then given by $$\begin{array}{ccl} S^g_9 & = & \{(9,9)\}, \\ S^g_8 & = & \{(8,8)\}, \\ S^g_7 & = & \{(7,7)\}, \\ S^g_5 & = & \{(5,5),(6,6)\}\mbox{ and } \\ S^g_3 & = & \{(3,3)\}\,. \end{array}$$ It follows that the strong intervals are: $$\begin{array}{c} I(S^g_9)=[9,9]=\{ 9 \}, I(S^g_8)=[8,8]=\{ 8 \}, I(S^g_7)=[7,7]=\{ 7 \}, I(S^g_5)=[5,6]\\ \mbox{ and } I(S^g_3)=[3,3]=\{3 \}\,. \end{array}$$ We conclude that $\scrit(g)=\{ 0, (15,15), (13,13), (14,14), (10,10),(12,12), (11,11)\}$ for a total of $7$ critical objects. \end{example} The following theorem is the analogue of Forman's classical result (Theorem 3.3 of \cite{F-98}) and also of the classical result in the smooth setting establishing the homotopy equivalence of sublevel sets in the absence of critical values. \begin{theorem}\label{analagous collapse} Let $f$ be a discrete Morse function on $K$ and suppose that $f$ has no strong critical values on $[a,b]$. Then $K(b)\searrow\searrow K(a)$. In particular, if $I(S^f_v)=[f(uv),l_v]$ is a strong interval for some vertex $v\in K$, then $K(l_v)\searrow\searrow K(f(uv))$. \end{theorem} \begin{proof} Let us consider the interval $[a,b]$ such that it does not contain any strong critical value. If additionally this interval does not contain any strong interval, then we conclude that $K(a)=K(b)$. Hence, assume that $[a, b]$ contains strong intervals. We may then partition $[a,b]$ into subintervals such that each one of them contains exactly one strong interval. 
Let us suppose $I(S_v)=[f(uv), l_v]\subseteq [a, b]$ is the unique strong interval in $[a, b]$. Again, since $f$ does not take values in $(a, f(uv))$ or $(l_v,b)$, it follows that $K(a)=K(f(uv))$ and $K(l_v)=K(b)$. Thus it suffices to show that $K(l_v)\searrow\searrow K(f(uv))$. By definition of $l_v$, $u$ is contained in every maximal regular simplex of $K(l_v) \cap \mathrm{St}(v, u).$ Furthermore, since $l_v< m_v$, all the new simplices that are attached from $K(f(uv))$ to $K(l_v)$ are contained within $\mathrm{St}(v, u)$. Hence $K(l_v)-K(f(uv)-\epsilon)$ is an open cone with apex $u$ and thus, $K(l_v)\searrow\searrow K(f(uv))$. \end{proof} \section{Simplicial Lusternik--Schnirelmann category}\label{simplicial Lusternik--Schnirelmann category} In this section, we recall the fundamental definitions and results found in \cite{F-M-V}. \begin{definition} Let $K,L$ be simplicial complexes. We say that two simplicial maps\linebreak $\phi, \psi \colon K\to L$ are \emph{contiguous}, denoted $\phi\sim_c \psi$, if for any simplex $\sigma \in K$, we have that $\phi(\sigma)\cup \psi(\sigma)$ is a simplex of $L$. Because this relation is reflexive and symmetric but not transitive, we say that $\phi$ and $\psi$ are in the same \emph{contiguity class}, denoted $\phi\sim \psi$, if there is a sequence $\phi=\phi_0\sim_c \phi_1 \sim_c\ldots \sim_c \phi_n =\psi$ of contiguous simplicial maps $\phi_i\colon K\to L, 0\leq i \leq n$. A map $\phi \colon K \to L$ is a \emph{strong equivalence} if there exists $\psi\colon L \to K$ such that $\psi \phi \sim \id_K$ and $\phi \psi \sim \id_L$. In this case, we say that $K$ and $L$ are \emph{strongly equivalent}, denoted by $K \sim L$. \end{definition} There is a nice link between strong equivalences and strong collapses. \begin{theorem}\cite[Cor. 2.12]{B-M-12} Two complexes $K$ and $L$ have the same strong homotopy type if and only if $K \sim L$. 
\end{theorem} \begin{definition}\cite{F-M-V}\label{scat definition} Let $K$ be a simplicial complex. We say that the subcomplex $U \subseteq K$ is \emph{categorical} in $K$ if there exists a vertex $v \in K$ such that the inclusion $i\colon U \to K$ and the constant map $c_v \colon U \to K$ are in the same contiguity class. The \emph{simplicial L--S category}, denoted \emph{$\scat (K)$}, of the simplicial complex $K$, is the least integer $m \geq 0$ such that $K$ can be covered by $m + 1$ categorical subcomplexes. \end{definition} One of the basic results of classical Lusternik--Schnirelmann theory states that the L--S category is a homotopy invariant. The next result shows that the simplicial L--S category satisfies the analogous property in the discrete setting. \begin{theorem}\label{FMV}\cite[Theorem 3.4]{F-M-V} Let $K \sim L$ be two strongly equivalent complexes. Then $\scat(K) = \scat(L)$. \end{theorem} We refer the interested reader to the papers \cite{F-M-V,F-M-M-V} for a detailed study of this topic. \subsection{Simplicial Lusternik--Schnirelmann theorem} Our main result is the following simplicial version of the Lusternik--Schnirelmann Theorem: \begin{theorem}\label{sLS theorem} Let $f\colon K\to \RR$ be a discrete Morse function. Then $$\scat(K)+1\leq \sharp(\scrit(f))\,.$$ \end{theorem} \begin{proof} For any natural number $n$, define $c_n :=\min\{a\in \RR : \scat(K(a))\geq n-1\}$. We claim that $c_n$ is a strong critical value of $f$. If $c_n$ is a regular value, then it is either contained in a strong interval or it is not. If $c_n$ is contained in a strong interval $I(S^f_v)=[f(uv),l_v]$, then by Theorem \ref{analagous collapse} we have $\scat(K(f(uv)))=\scat(K(c_n))$, contradicting the minimality of $c_n$. Otherwise, $c_n$ lies outside every strong interval. Then, by Theorem \ref{analagous collapse} there exists $\epsilon >0$ such that $K(c_n)\searrow\searrow K(c_n-\epsilon)$. By Theorem \ref{FMV}, $\scat(K(c_n))=\scat(K(c_n-\epsilon))$. 
But $c_n>c_n-\epsilon$ and $c_n$ was the minimum value such that $\scat(K(c_n))=n-1$, which is a contradiction. Thus each $c_n$ is a strong critical value of $f$. \smallskip We now prove by induction on $n$ that $K(c_n)$ must contain at least $n$ strong critical objects. Since the set $\im(f)$ is finite, it has a minimum, say $f(v)=0$ for some $0$-simplex $v\in K$. For $n=1$, $c_1=0$, so that $K(c_1)$ contains $1$ strong critical object. For the inductive hypothesis, suppose that $K(c_n)$ contains at least $n$ strong critical objects. Since all the strong critical values of $f$ are distinct, $c_n<c_{n+1}$, so that there is at least one new strong critical object in $f^{-1}(c_{n+1})$. Thus $K(c_{n+1})$ contains at least $n+1$ strong critical objects. Hence, if $c_1<c_2<\ldots < c_{\scat(K)+1}$ are the strong critical values, then $K(c_{\scat(K)+1})\subseteq K$ contains at least $\scat(K)+1$ strong critical objects. Thus $\scat(K)+1\leq \sharp(\scrit(f))$. \end{proof} \begin{example}\label{LS equality} We give an example where the upper bound of Theorem \ref{sLS theorem} is attained. Let $A$ be the non-strongly collapsible triangulation of the $2$-disc considered several times above, with the discrete Morse function $f$ given in Example \ref{arg exam}. This satisfies $\sharp(\scrit(f))=2$. Since $A$ has no dominating vertex, $\scat(A)>0$, whence $\scat(A)+1=\sharp(\scrit(f))=2$.\\ Notice that by adding just one simplex to $A$, the above equality turns into a strict inequality; thus the number of critical objects may increase while the simplicial category stays the same. To see this, let $A'$ be the clique complex of $A$, that is, the complex obtained by adding to $A$ the triangle generated by its three bounding vertices. This is a simplicial $2$-sphere, so by the Morse inequalities it follows that every discrete Morse function $f$ defined on $A'$ has at least two critical simplices: a critical vertex (global minimum) and a critical triangle (global maximum).
In addition, since $A'$ is a simplicial $2$-sphere, it does not contain any dominated vertex. Moreover, after removing the critical triangle (and hence obtaining $A$) no new dominated vertices appear, so at least one critical pair arises in order to collapse $A$ to a subcomplex containing dominated vertices. Hence, we conclude that any discrete Morse function $f\colon A' \to \RR$ must have at least $3$ critical objects; a function attaining this bound is optimal in the sense that it minimizes the number of critical objects. Furthermore, it is easy to cover $A'$ with $2$ categorical subcomplexes, so that $\scat(A')=1$. Hence $\scat(A')+1=2<3\leq \sharp(\scrit(f))$. \end{example} \bibliographystyle{amcjoucc}
\section{Introduction}{\label{sec1}} Flux variations across the electromagnetic spectrum are one of the defining characteristics of Active Galactic Nuclei (AGN). They occur on a wide range of timescales, from hours to days to months, making this particular property of AGN an efficient tool to understand the physics of these objects (\citealt{1995PASP..107..803U}; \citealt{1995ARA&A..33..163W}). In AGN where the flux is dominated by relativistic jets of non-thermal emission pointed towards the observer, the observed intensity variations will be rapid and of large amplitude (\citealt{1984RvMP...56..255B}). Such rapid and large amplitude variations, generally explained by invoking relativistic jets (\citealt{1985ApJ...298..114M}; \citealt{1992ApJ...396..469H}; \citealt{1992vob..conf...85M}), have been observed in the blazar class of AGN (\citealt{1990AJ....100..347C}; \citealt{1989Natur.337..627M}), mostly on hour-like timescales and recently on sub-hour timescales as well (\citealt{2010ApJ...719L.153R}; \citealt{2011ApJS..192...12I}). \begin{table*} \caption{List of the $\gamma$-NLSy1 galaxies monitored in this work. Column information is as follows: (1) IAU name; (2) other name; (3) right ascension in the epoch 2000; (4) declination in the epoch 2000; (5) redshift; (6) absolute B magnitude; (7) apparent V magnitude; (8) observed optical polarization; (9) radio spectral index and (10) radio loudness parameter R = f$_{1.4 GHz}$/f$_{440 nm}$ (\citealt{2011nlsg.confE..24F}).}
\begin{tabular}{llcccccrrr} \hline IAU Name & Other Name & RA (2000)$^a$ & Dec (2000)$^a$ & $z$$^a$ & M$_B$$^a$ & V$^a$ & P$_{opt}$ & $\alpha_R$$^d$ & R$^e$ \\ & & & & & (mag) & (mag) & (\%) & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ \hline J0324+3410 & 1H 0323+342 & 03:24:41.2 & +34:10:45 & 0.063 & $-$22.2 & 15.72 & $<$ 1$^{b}$ & $-$0.35 & 318 \\ J0948+0022 & PMN J0948+0022 & 09:48:57.3 & +00:22:24 & 0.584 & $-$23.8 & 18.64 & 18.8$^{c}$ & 0.81 & 846 \\ J1505+0326 & PKS 1502+036 & 15:05:06.5 & +03:26:31 & 0.409 & $-$22.8 & 18.64 & --- & 0.41 & 3364 \\ \hline \end{tabular} \hspace*{-11.0cm}$^a$ \citet{2010A&A...518A..10V}\\ \hspace*{-12.2cm}$^b$ \citet{2011nlsg.confE..49E}\\ \hspace*{-12.3cm}$^c$ \citet{2011PASJ...63..639I} \\ \hspace*{-0.9cm}$^d$ radio spectral index calculated using the 6 cm and 20 cm flux densities given in \citet{2010A&A...518A..10V} ($S_\nu \propto \nu^{\alpha}$) \\ \hspace*{-12.6cm}\noindent $^e$ \citet{2011nlsg.confE..24F} \label{tab1} \end{table*} Narrow Line Seyfert 1 (NLSy1) galaxies are an interesting class of AGN, similar to Seyfert galaxies, that were identified as a class about 25 years ago by \citet{1985ApJ...297..166O}. Their optical spectra contain narrower than usual permitted lines from the Broad Line Region (BLR), with FWHM(H$_{\beta}$) $<$ 2000 km s$^{-1}$. Normally, they have [O~{\sc iii}]/H$_{\beta} <$ 3; however, exceptions are possible if they have strong [Fe~{\sc vii}] and [Fe~{\sc x}] lines (see \citealt{2011nlsg.confE...2P} and references therein). Both the BLR and the Narrow Line Region (NLR) are present in NLSy1s, but the permitted lines from the BLR are narrower than those of Seyfert 1 galaxies (\citealt{2000ApJ...538..581R}). NLSy1 galaxies were found to show rapid flux variability in the optical (\citealt{2004ApJ...609...69K}; \citealt{2000NewAR..44..539M}).
They also show the radio-loud/radio-quiet dichotomy (\citealt{2000ApJ...543L.111L}); however, the radio-loud fraction of NLSy1s is about 7$\%$ (\citealt{2003ApJ...584..147Z}; \citealt{2006AJ....132..531K}), smaller than the fraction of 15$\%$ known in the population of quasars (\citealt{1995PASP..107..803U}). These radio-loud NLSy1 galaxies have flat radio spectra and high brightness temperatures, suggesting the presence of relativistic jets in them (\citealt{2003ApJ...584..147Z}; \citealt{2006PASJ...58..829D}). Since the launch of the {\it Fermi} Gamma-ray Space Telescope in 2008, high energy (E $>$ 100 MeV) $\gamma$-rays have been detected from a few radio-loud Narrow-Line Seyfert 1 galaxies. We therefore refer to these sources as $\gamma$-ray loud NLSy1 ($\gamma$-NLSy1) galaxies. These sources are found to have high energy variable $\gamma$-ray radiation as detected by {\it Fermi}/LAT. They are also found to have compact radio structure with a core-jet morphology (\citealt{2006PASJ...58..829D}; \citealt{2011A&A...528L..11G}; \citealt{2007ApJ...658L..13Z}; \citealt{2012arXiv1205.0402O}), superluminal motion and high brightness temperatures (\citealt{2012arXiv1205.0402O}; \citealt{2006PASJ...58..829D}). All these characteristics provide distinctive evidence for the presence of relativistic jets in them (\citealt{2009ApJ...699..976A}; \citealt{2009ApJ...707..727A}; \citealt{2009ApJ...707L.142A}; \citealt{2010ASPC..427..243F}). Another independent proof of the existence of relativistic jets oriented at small angles to the observer in these $\gamma$-NLSy1 sources is the detection of intranight optical variability (INOV) on hour to sub-hour (minute) timescales. \citet{2010ApJ...715L.113L} reported the first detection of INOV in the $\gamma$-NLSy1 galaxy PMN J0948+0022, wherein the authors detected INOV with amplitudes as large as 0.5 mag on timescales of several hours. Recently, \citet{2011nlsg.confE..59M} also found this source to show INOV.
In this work, we aim to understand the INOV characteristics of this new class of $\gamma$-NLSy1 galaxies, in particular to detect rapid INOV in these sources on sub-hour (minute) timescales and also to see how their INOV characteristics compare with those of the blazar class of AGN known to have relativistic jets. We detail in Section \ref{sec2} the sample of $\gamma$-NLSy1s selected for the intranight optical monitoring. Observations are described in Section \ref{sec3}. Section \ref{sec4} is devoted to the results of this work, followed by discussion and conclusions in the final Section \ref{sec5}. \section{Sample}{\label{sec2}} One of the discoveries by {\it Fermi}/LAT was the detection of $\gamma$-rays from the NLSy1 galaxy PMN J0948+0022 (\citealt{2009ApJ...699..976A}). An analysis of the publicly available LAT data during the period August 2008 to February 2011 has led to the detection of $\gamma$-rays in a total of seven NLSy1 galaxies, including the source PMN J0948+0022 (\citealt{2011nlsg.confE..24F}). From this list of seven sources, we have selected for this work only sources having significant detections in {\it Fermi}/LAT, with a test statistic (TS) larger than 100 and an integrated $\gamma$-ray flux in the 0.1$-$100 GeV range greater than 5 $\ensuremath{\times}$ 10$^{-8}$ photons cm$^{-2}$ s$^{-1}$. A TS value of 10 roughly corresponds to 3$\sigma$ (\citealt{1996ApJ...461..396M}). This has led to the final selection of three sources for intranight optical monitoring, namely 1H 0323+342, PMN J0948+0022 and PKS 1502+036. The general properties of these sources are given in Table \ref{tab1}. \begin{figure*} \vspace*{-6.0cm} \hspace*{-1.0cm}\psfig{file=1H323_new.ps} \vspace*{-13.0cm} \caption{Intranight DLCs of the source 1H 0323+342. The date of observation and the duration of monitoring (within brackets) are given on the top of each panel.
The bottom panel for each night shows the variation of the FWHM of the stellar images during the monitoring period.} \label{fig1} \end{figure*} \section{Observations and Reductions}{\label{sec3}} Our observations were carried out on the newly commissioned 130 cm telescope (\citealt{2010ASInC...1..203S}) located at Devasthal and operated by the Aryabhatta Research Institute for Observational Sciences (acronym ARIES), Nainital. This telescope is a modified Ritchey--Chr\'etien system with an f/4 beam (\citealt{2010ASInC...1..203S}). We have used two detectors for our observations. One is a 2k $\ensuremath{\times}$ 2k conventional CCD having a gain of 1.39 e$^{-}$/ADU and a readout noise of 6.14 e$^{-}$. Each pixel in this CCD has a dimension of 13.5 $\mu$m, corresponding to 0.54 arcsec/pixel on the sky and thereby covering a field of 12 $\ensuremath{\times}$ 12 arcmin$^{2}$. The second detector used in our observations is a 512 $\ensuremath{\times}$ 512 Electron Multiplying Charge Coupled Device (EMCCD). It has a very low readout noise (0.02 e$^{-}$) and a variable gain which can be selected by the observer. The preliminary science results from observations with these CCDs are given by \citet{2012aj}. For the observations reported here, we used a gain of 225 e$^{-}$/ADU. It is well known from optical monitoring observations of blazars that the probability of finding INOV can be enhanced by continuous monitoring of a source (\citealt{1990BAAS...22R1337C}; \citealt{1995PhDT.........2N}); thus, in this work we tried to monitor each source continuously for a minimum of 4 hours during a night. However, due to weather constraints, on some of the nights we were able to monitor sources only for durations as short as 1 hour. All of the observations were done in the Cousins R ($R_C$, hereafter) filter, as the CCD response is maximum in this band.
The exposure time was typically between 30 seconds and 15 minutes, depending on the brightness of the source, the phase of the moon and the sky transparency on that night. The target $\gamma$-NLSy1 galaxies were also suitably placed in the field of view (FOV), so as to have at least three good comparison stars in the observed FOV. Preliminary processing of the images, such as bias subtraction and flat fielding, was done through standard procedures in IRAF\footnote [1]{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. under co-operative agreement with the National Science Foundation}. Aperture photometry on both the $\gamma$-NLSy1 and the comparison stars present on the cleaned image frames was carried out using the {\tt phot} task available within the APPHOT package in IRAF. The optimum aperture radius for the photometry was determined using the procedure outlined in \citet{2004JApA...25....1S}. Firstly, star$-$star differential light curves were generated for a series of aperture radii, starting from the median seeing FWHM of that night. The aperture that gave the minimum scatter for the different pairs of comparison stars was selected as the optimum aperture for the photometry of the target $\gamma$-NLSy1. Table \ref{tab2} lists the positions and apparent magnitudes (taken from USNO\footnote[2]{http://www.nofs.navy.mil/data/fchpix/}) of the comparison stars used in the differential photometry. It should be noted that the uncertainty in the magnitudes taken from this catalogue may be up to 0.25 mag. The log of observations is given in Table \ref{tab3}. \section{Results}{\label{sec4}} \subsection{Intranight optical variability} From the instrumental magnitudes derived from the photometry, differential light curves (DLCs) were generated for the given $\gamma$-NLSy1 relative to steady comparison stars.
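The aperture-selection step just described (pick the radius whose star$-$star DLCs show the least scatter) can be sketched as follows. This is a minimal illustration under our own naming conventions, not the authors' IRAF-based pipeline:

```python
import numpy as np

def differential_light_curve(m_target, m_comp):
    # DLC: target instrumental magnitudes minus comparison-star magnitudes
    return np.asarray(m_target) - np.asarray(m_comp)

def optimum_aperture(star_mags_by_aperture):
    # star_mags_by_aperture: {aperture_radius: {star_name: magnitude series}}
    # Return the aperture whose star-star DLCs have the smallest mean scatter.
    best_r, best_scatter = None, np.inf
    for r, mags in star_mags_by_aperture.items():
        names = sorted(mags)
        # standard deviation of every star-star DLC at this aperture radius
        scatters = [np.std(differential_light_curve(mags[a], mags[b]))
                    for i, a in enumerate(names) for b in names[i + 1:]]
        scatter = float(np.mean(scatters))
        if scatter < best_scatter:
            best_r, best_scatter = r, scatter
    return best_r
```

In practice the magnitude series would come from aperture photometry repeated at each trial radius, starting from the median seeing FWHM of the night, as described in the text.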
The optimum aperture used here was, most of the time, close to the median FWHM of the images on that particular night. Also, as the central $\gamma$-NLSy1 nucleus dominates its host galaxy, the contribution of the host galaxy to the photometry should be negligible (\citealt{2000AJ....119.1534C}). In Figs \ref{fig1}, \ref{fig2} and \ref{fig3} we present the DLCs for the objects 1H 0323+342, PMN J0948+0022 and PKS 1502+036 relative to two non-variable and steady comparison stars present in their observed frames. In order to judge the variability nature of the sources, we have tried to use more than two comparison stars. However, in Figs \ref{fig1}, \ref{fig2} and \ref{fig3} we have shown the DLCs of the $\gamma$-NLSy1 galaxies relative to only two comparison stars. These comparison stars were then used to characterise the variability of the $\gamma$-NLSy1 galaxies. A $\gamma$-NLSy1 is considered to be variable only when it shows correlated variations, both in amplitude and in time, relative to all the selected comparison stars. Great care was taken in the selection of comparison stars, such that they are in close proximity to the source and also have brightness similar to the source. However, it is not easy to find such comparison stars, firstly due to the small FOV covered by the EMCCD used on some nights of the observations reported here, and secondly due to the constraint of using the same standard stars irrespective of which CCD was used for the observations. We note here that the uncertainty in the magnitudes of the comparison stars given in Table \ref{tab2} has no effect on the DLCs, as they involve the differential instrumental magnitudes between the $\gamma$-NLSy1 and the comparison stars. Also, the typical error of each point in the DLCs is around 0.01 mag. To assess the variability nature of the sources, we have employed the following two criteria.
\begin{table} \caption{Positions and apparent magnitudes of the comparison stars from the USNO catalogue (\citealt{2003AJ....125..984M}).} \begin{tabular}{lccccc} \hline Source & Star & RA(J2000) & Dec.(J2000) & R & B\\ & & & & (mag) & (mag)\\ \hline 1H 0323+342 & S1 & 03:24:44.26 & +34:09:31.84 & 15.17 & 14.82 \\ & S2 & 03:24:51.24 & +34:12:26.53 & 15.06 & 14.91 \\ & S3 & 03:24:35.01 & +34:09:36.73 & 15.81 & 15.43 \\ PMN J0948+0022 & S1 & 09:49:00.44 & +00:22:35.02 & 16.47 & 16.32 \\ & S2 & 09:48:57.30 & +00:24:18.53 & 16.15 & 16.35 \\ & S3 & 09:48:53.69 & +00:24:55.14 & 16.14 & 16.34 \\ PKS 1502+036 & S1 & 15:05:11.30 & +03:22:25.57 & 15.24 & 16.20 \\ & S2 & 15:05:15.90 & +03:19:10.91 & 15.44 & 16.25 \\ & S3 & 15:05:26.99 & +03:24:37.25 & 15.71 & 16.95 \\ & S4 & 15:05:15.37 & +03:25:40.90 & 17.12 & 18.85 \\ & S5 & 15:05:15.65 & +03:25:27.62 & 16.55 & 17.92 \\ & S6 & 15:05:14.53 & +03:24:56.34 & 16.74 & 17.08 \\ \hline \end{tabular} \label{tab2} \end{table} \begin{figure*} \vspace*{-6.0cm} \hspace*{-1.0cm}\psfig{file=PMN0948_new.ps} \vspace*{-13.0cm} \caption{Intranight DLCs for the $\gamma$-NLSy1 galaxy PMN J0948+0022. The variation of the FWHM over the course of the night is given in the bottom panel. Above the top panel, the date and duration of observations are given.} \label{fig2} \end{figure*} \begin{figure*} \vspace*{-6.0cm} \hspace*{-1.0cm}\psfig{file=PKS1502_new.ps} \vspace*{-13.0cm} \caption{Intranight DLCs for the $\gamma$-NLSy1 galaxy PKS 1502+036. The FWHM variation during the night is given in the bottom panel. The dates of observations and the duration of monitoring are given on the top of each panel.} \label{fig3} \end{figure*} \subsubsection{C-Statistics} To decide on the INOV nature of the sources on any given night of observations, we have employed the widely used statistical criterion known as the {\it C} parameter.
Following \citet{1997AJ....114..565J}, for any given DLC it is defined as \begin{center} \begin{equation}\label{eq1} C = \frac{\sigma_{T}}{\sigma}, \end{equation} \end{center} where $\sigma_{T}$ and $\sigma$ are the standard deviations of the source and the comparison star differential light curves, respectively. As we have used three comparison stars, we have three star$-$star DLCs. Of these, we consider the DLC whose standard deviation is minimum, as this involves the steadiest pair of comparison stars. The $\sigma$ of this steadiest DLC is used in Eq.~\ref{eq1} to get two estimates of the {\it C}-statistic for the source$-$star DLCs, corresponding to each of the two comparison stars. A source is considered to be variable if {\it C} $\geq$ 2.576, which corresponds to a 99$\%$ confidence level (\citealt{1997AJ....114..565J}). Here, we get two values of {\it C}, corresponding to the two DLCs of the source relative to each of the two comparison stars. Using the {\it C}-statistics, we consider a source to be variable when both the calculated {\it C} values exceed 2.576. \subsubsection{F-statistics} Recently, \citet{2010AJ....139.1269D} has argued that the {\it C}-statistic widely used in characterising AGN variability is not a properly established statistic. An alternative which, according to \citet{2010AJ....139.1269D}, can better assess the variations in AGN light curves is the {\it F}-statistic. This statistic is the ratio of two variances, given as \begin{equation} F = \frac {\sigma^2_T}{\sigma^2} \end{equation} where ${\sigma^2_T}$ is the variance of the source$-$comparison star DLC and $\sigma^2$ is the variance of the comparison star DLC. In computing the {\it F}-statistic, as for the {\it C}-statistic, we have selected from the three comparison star DLCs the one involving the steadiest pair of comparison stars. Using Eq.
2, for each source, two values of the {\it F}-statistic were computed for the source$-$star DLCs corresponding to each of these two comparison stars. The calculated {\it F}-statistic is then compared with the critical F-value, F$^{\alpha}_{\nu}$, where $\alpha$ is the significance level and $\nu$ (= N$_{p}$ $-$ 1) is the degree of freedom of the DLC. We have used a significance level of $\alpha$ = 0.01, corresponding to a confidence level of p $>$ 99 percent. Thus, if both the computed {\it F}-values, corresponding to the DLCs of the source relative to each of the two comparison stars, are above the critical {\it F}-value corresponding to p $>$ 0.99, we consider the source to be variable. However, we point out that the {\it C}-statistic might be a more compelling measure of the presence of variability, particularly when the comparison star light curves are clearly not steady. Also, variations in the FWHM of the point sources in the observed CCD frames during the course of the night might give rise to spurious variations in the target NLSy1 galaxies. However, the variations of the NLSy1s detected in the observations reported here do not show any correlation with the FWHM variations and are thus genuine variations of the NLSy1s. \subsubsection{Amplitude of Variability ($\psi$)} The actual variation displayed by the $\gamma$-NLSy1 galaxies on any given night is quantified using the INOV variability amplitude, after correcting for the error in the measurements. This amplitude, $\psi$, is determined using the definition of \citet{1999A&AS..135..477R}: \begin{equation}\label{eq6} \psi = 100\sqrt{(D_{max} - D_{min})^{2} - 2\sigma^{2}}\,\% \end{equation} with \\ D$_{max}$ = maximum in the $\gamma$-NLSy1 differential light curve, \\ D$_{min}$ = minimum in the $\gamma$-NLSy1 differential light curve, \\ $\sigma^{2}$ = variance in the star$-$star DLC involving the steadiest pair of comparison stars. \\ The amplitude of variability calculated using Eq.
\ref{eq6} for the nights on which INOV was observed is given in Table \ref{tab4}. The results of the INOV analysis are also given in Table \ref{tab4}. We also mention that, given the random variability nature of the sources, with occasional short timescale and large amplitude flares, it is possible to detect large amplitude variability in these NLSy1s if they are monitored for a longer duration of time. \subsubsection{Duty Cycle} We used the definition of the duty cycle (DC) given by \citet{1999A&AS..135..477R}. Objects may not show flux variations on all the nights they are monitored; it is therefore appropriate to evaluate the DC as the ratio of the time over which the object shows variations to the total observing time, instead of considering the fraction of variable objects. Thus, it can be written as \begin{equation}\label{eq7} DC = 100\frac{\sum_{i=1}^n N_i(1/\Delta t_i)}{\sum_{i=1}^n (1/\Delta t_i)} {\rm ~\%} \end{equation} where $\Delta t_{i}$ = $\Delta t_{i,obs}(1 + z)^{-1}$ is the duration of the monitoring session of a source on the {\itshape i}$^{th}$ night, corrected for its cosmological redshift $z$. N$_{i}$ was set equal to 1 if INOV was detected, otherwise N$_{i}$ = 0. Using only the {\it C}-statistics to judge the presence of INOV on any particular night, we find a DC of 57 percent for the INOV of these $\gamma$-ray NLSy1 galaxies. However, using the {\it F}-statistics, we find an increased INOV DC of 85 percent. \subsection{Long term Optical Variability (LTOV)} Since the total time-span covered by the observations ranges from days to months, we were also able to search for the long term optical variability (LTOV) of the sources. The LTOV results are summarized in the last column of Table \ref{tab4}. Here, the values indicate the difference of the mean $\gamma$-NLSy1 DLC with respect to the previous epoch of observation. \begin{table} \caption[Log of observations]{Log of observations.
Columns:- (1) source name; (2) date of observations; (3) duration of monitoring in hrs; (4) number of data points in the DLC; (5) exposure time in seconds; (6) CCD mode used; here "normal" refers to the 2k $\ensuremath{\times}$ 2k pixels$^2$ CCD and "EM" refers to the 512 $\ensuremath{\times}$ 512 pixels$^2$ EMCCD.} \begin{tabular}{lccccc} \hline Source & Date & Duration & N & Exp.time & CCD mode\\ & & (hrs.) & & (secs.) & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline 1H 0323+342 & 24.01.12 & 1.3 & 105 & 30 & EM\\ & 25.01.12 & 2.1 & 47 & 120 & EM\\ & 26.01.12 & 3.1 & 89 & 120 & EM\\ & 02.02.12 & 3.5 & 35 & 150 & Normal\\ PMN J0948+0022 & 26.01.12 & 6.0 & 165 & 120 & EM\\ & 02.02.12 & 2.4 & 18 & 200 & Normal\\ & 11.03.12 & 4.9 & 40 & 150 & Normal\\ & 19.04.12 & 3.9 & 73 & 180 & EM\\ PKS 1502+036 & 18.04.12 & 2.1 & 24 & 300 & EM\\ & 22.05.12 & 3.2 & 11 & 900 & Normal\\ & 23.05.12 & 3.8 & 13 & 900 & Normal\\ & 24.05.12 & 5.2 & 12 & 900 & Normal\\ \hline \end{tabular} \label{tab3} \end{table} \begin{table*} \caption[Log of INOV and LTOV observations]{Log of INOV and LTOV observations.
Columns:- (1) source name; (2) date of observation; (3) INOV amplitude in percent; (4) and (5) {\it F}-values computed for the $\gamma$-NLSy1 galaxy DLCs relative to the steadiest pair of comparison stars on any night; (6) variability status according to {\it F}-statistics; (7) and (8) values of {\it C} for the two $\gamma$-NLSy1 galaxy DLCs relative to the two comparison stars; (9) variability status as per {\it C}-statistics and (10) magnitude difference for LTOV relative to the previous epoch of observation.} \begin{center} \begin{tabular}{lcrrrcrrcr} \hline Source & Date & $\psi$ & {\it F1} & {\it F2} & Status & {\it C1} & {\it C2} & Status & LTOV \\ & dd.mm.yy & ($\%$) & & & & & & & ($\Delta$m) \\ (1) & (2) & (3) & (4) & (5)& (6) & (7) & (8) & (9) & (10) \\ \hline 1H 0323+342 & 24.01.12 & 12.69 & 1.903 & 1.736 & V & 1.380 & 1.318 & NV & \\ & 25.01.12 & ---- & 1.414 & 1.357 & NV & 1.189 & 1.165 & NV & $-$0.01 \\ & 26.01.12 & 7.35 & 12.445 &12.406 & V & 3.528 & 3.522 & V & 0.04 \\ & 02.02.12 & ---- & 1.766 & 2.153 & NV & 1.329 & 1.467 & NV & $-$0.17 \\ \\ PMN J0948+0022 & 26.01.12 & 51.92 & 114.427 &107.992 & V & 10.697 & 10.392 & V & \\ & 02.02.12 & 17.12 & 54.448 & 56.270 & V & 7.411 & 7.501 & V & $-$0.24 \\ & 11.03.12 & 33.13 & 168.340 &161.495 & V & 12.975 & 12.708 & V & 0.47 \\ & 19.04.12 & 25.25 & 24.071 & 25.500 & V & 4.906 & 5.050 & V & $-$0.01 \\ \\ PKS 1502+036 & 18.04.12 & 6.49 & 3.263 & 3.913 & V & 1.806 & 1.978 & NV & \\ & 22.05.12 & 3.58 & 21.744 & 23.753 & V & 4.663 & 4.874 & V & \\ & 23.05.12 & 3.16 & 8.429 & 10.186 & V & 2.903 & 3.192 & V & \\ & 24.05.12 & 10.09 & 93.043 & 86.512 & V & 9.646 & 9.301 & V & \\ \hline \end{tabular} \label{tab4} \end{center} \end{table*} \begin{figure*} \includegraphics{ltv_1H_new.ps} \caption{Long term light curves of 1H 0323+342.} \label{fig4} \end{figure*} \section{Notes on the individual sources}{\label{sec5}} \subsection{1H 0323+342} This source was found to be a $\gamma$-ray emitter by {\it
Fermi}/LAT (\citealt{2010ApJ...715..429A}). \citet{2011nlsg.confE..49E} found an optical polarization of $<$ 1\% on Feb. 07 and Feb. 10, 2011. HST images were well decomposed using a central point source component above a S\'{e}rsic profile (\citealt{2007ApJ...658L..13Z}). VLBI observations give clear evidence of a core-jet structure for this source (\citealt{2007ApJ...658L..13Z}). Using the 6 cm and 20 cm flux densities given in the \citet{2010A&A...518A..10V} catalogue, the source is found to have a flat radio spectrum with $\alpha_R$=$-$0.35 ($S_{\nu} \propto \nu^{\alpha}$). This source had not been studied for INOV prior to this work. Observations with good time resolution were obtained on a total of four nights. To judge the variability of 1H 0323+342, we selected three comparison stars, namely S1, S2 and S3. However, as S3 was clearly non-steady, it was not used for generating the DLCs of 1H 0323+342. On 24.01.2012, the comparison stars were not found to be stable. On this night, the observations have an average time resolution of 30 seconds. From the {\it F}-statistics, we find the $\gamma$-NLSy1 DLCs to be variable. However, from the {\it C}-statistics, the $\gamma$-NLSy1 galaxy has {\it C} $<$ 2.576, making it non-variable on that night. One day later, on 25.01.2012, the source was again observed for a total duration of 2.1 hours with a typical sampling of 1 data point every 2 min. The source was non-variable on this night using both the {\it C}- and {\it F}-statistics. On 26.01.2012, about 3.5 hours of data were gathered on the source with a temporal resolution of 2 min, and the source was found to show clear variations on that night. It was also found to be non-variable over the 5 hours of monitoring done on 02.02.2012. On this night we have, on average, one data point around every 5 min. Thus, the source was found to show unambiguous evidence of INOV on one of the 4 nights of monitoring.
However, on 24.01.2012, according to the {\it F}-statistics the source is variable, while it is not when the {\it C}-statistics is used. Considering the LTOV of the source, the total time baseline covered is 8 days, over which the source was found to show variations. It faded by 0.01 mag in the first 24 hours, but brightened by 0.04 mag in the next 24 hours, and again became fainter by 0.17 mag over the 6 days between 26.01.2012 and 02.02.2012 (Figure \ref{fig4}). \begin{figure*} \includegraphics{ltv_PMN_new.ps} \caption{Long term light curves of PMN J0948+0022.} \label{fig5} \end{figure*} \subsection{PMN J0948+0022} This was the first NLSy1 galaxy detected in the $\gamma$-ray band, during the initial months of operation of {\it Fermi} (\citealt{2009ApJ...707..727A}). It was found to have a high optical polarization of 18.8\% when observed during March$-$April 2009 by \citet{2011PASJ...63..639I}. However, it showed a low optical polarization of about 1.85\% when observed again on Feb. 10, 2011 (\citealt{2011nlsg.confE..49E}). Such polarization variations are not uncommon in blazars. It has an inverted radio spectrum with $\alpha_R$ = 0.81, evaluated using the 6 cm and 20 cm flux densities given in the \citet{2010A&A...518A..10V} catalogue. VLBI observations revealed a high brightness temperature and a compact structure (\citealt{2006PASJ...58..829D}), with a possible core-jet morphology (\citealt{2011A&A...528L..11G}). Previous INOV observations showed the source to display violent variations, with amplitudes of variability as large as 0.5 mag on timescales of hours (\citealt{2010ApJ...715L.113L}). Recently, \citet{2011nlsg.confE..59M} also detected INOV in PMN J0948+0022. This source was monitored by us on four nights, with durations ranging from two to six hours, between January and April 2012.
To characterise the variability of PMN J0948+0022 during the nights it was monitored, we selected three comparison stars, S1, S2 and S3, all of which were found to be non-variable. However, for all the variability analysis we have considered only the DLCs of PMN J0948+0022 relative to S1 and S2. On 26.01.2012, it was monitored for a total duration of 7 hours, with a good time resolution of about 2 minutes. Clear evidence of variability was found on this night, with amplitudes of variability as large as 52\%. A flare with a fast increase in brightness of 0.1 mag followed by a slow decline, peaking at $\sim$17.9 UT, was found. The source then displayed a gradual decrease in brightness during the course of the night. Superimposed on this brightness change of the source, we found several mini-flares with timescales as short as 12 min. One such mini-flare occurred towards the end of the night. Between 22.6 and 22.9 UT, the source increased in brightness by 0.12 mag, reaching maximum brightness at 22.76 UT, and then gradually decreased in brightness. These mini-flares are real and cannot be attributed to seeing fluctuations during the course of the night, as we do not see any correlation between the occurrence of the mini-flares and fluctuations in the FWHM. The FWHM after 22.5 UT was nearly steady, whereas a brightness change of 0.12 mag was noticed in the $\gamma$-NLSy1. In the observations done on 02.02.2012, for a duration of 2.4 hours with a temporal resolution of about 7 min, the source showed a gradual brightness change of about 0.2 magnitude during the course of our observations. A large flare over a period of 3 hours was found during the 4.9 hours of observations on 11.03.2012. On this night, the average time resolution of the observations is about 7 min. Again on this night, superimposed on the large flare, we noticed two mini-flares, one at 17 UT and the other at 19.5 UT.
The mini-flare at 19.5 UT showed a fast rise in brightness of 0.05 mag between 19.28 and 19.52 UT and then gradually returned to the original brightness level at 19.87 UT. The total flare duration is $\sim$35 min, with a rise time of $\sim$14 min and a decay time of $\sim$21 min. Around 17 UT the FWHM became poorer by 0.2$^{\prime\prime}$, whereas the $\gamma$-NLSy1 galaxy increased in brightness by 0.05 mag. The increase in brightness of the NLSy1 at 19.5 UT is not associated with the FWHM becoming poorer by 0.2$^{\prime\prime}$, as we would expect the source to become fainter due to FWHM degradation. Thus, the two mini-flares observed on this night are also real and not due to changes in the seeing during those times. INOV was also detected in the observations of 19.04.2012. On this night, the light curve is densely sampled, with on average one data point every 3 min. Thus, the source has shown variations on all four nights monitored by us. The LTOV of this source can be assessed from the four epochs of monitoring spanning 4 months. Between the first two epochs, separated by six days, the source decreased in brightness by about 0.2 mag. It then brightened by $\sim$0.5 mag between 02.02.2012 and 11.03.2012, and again became fainter by about 0.01 mag when observed on 19.04.2012 (Figure \ref{fig5}). \subsection{PKS 1502+036} This source was found to be emitting in the $\gamma$-ray band by {\it Fermi}/LAT (\citealt{2010ApJ...715..429A}) and is, as of now, the faintest $\gamma$-ray loud NLSy1 known in the northern hemisphere. It was found to have a core-jet structure from VLBA imaging (\citealt{2012arXiv1205.0402O}). Its radio spectrum is inverted, with $\alpha_R$ = 0.41. We monitored PKS 1502+036 on four nights for INOV.
We used six comparison stars brighter than PKS 1502+036 to detect the presence of variability, mainly owing to the non-availability of suitable comparison stars of brightness similar to PKS 1502+036 in the observed field. For the characterization of variability, using either the {\it C}-statistics or the {\it F}-statistics, we used the stars S4 and S6 for the observations of 18.04.2012, whereas for the other three nights we used the stars S1 and S2. The three nights of observations carried out in May have an average time resolution of 19 min, whereas on 18.04.2012 we have a better sampling of one data point every 5 min. In the observations of 18.04.2012, INOV could not be detected. Clear INOV was detected when the source was monitored on 22.05.2012 for a duration of 3.2 hours, though the fluctuations in magnitude were found to be very small: a gradual increase in brightness of 0.03 mag over a period of 2 hours, followed by a decrease of 0.035 mag over the next one and a half hours. The source showed the largest variability on 24.05.2012, with an amplitude of variability as large as $\sim$10 percent. The observations of this source cover a total time baseline of about a month. As the comparison stars used during the April and May observations were different, the LTOV during this period could not be ascertained. However, from the observations done in May, the source brightness remained the same on both 22 and 23 May, but faded by $\sim$0.045 mag when last observed on 24.05.2012. The large error bars in the DLCs of PKS 1502+036 are mainly due to its faintness. Though from visual examination it is difficult to unambiguously identify the variations, using the conservative {\it C}-statistics we classify the source as variable on three of the four nights it was monitored.
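As an illustrative sketch (not the pipeline actually used for the observations reported here), the $C$-test compares the scatter of the target DLC with that of a star--star DLC; the toy light curves, noise levels and the 99\% cut-off value of 2.576 below are assumptions for demonstration only:

```python
import numpy as np

def c_statistic(target_dlc, comparison_dlc, threshold=2.576):
    """C-test sketch: ratio of the target DLC scatter to the
    star-star DLC scatter; C >= 2.576 is the usual 99% cut-off."""
    c = np.std(target_dlc, ddof=1) / np.std(comparison_dlc, ddof=1)
    return c, c >= threshold

# hypothetical DLCs in magnitudes: a steady star-star pair and a
# target with a slow 0.15 mag trend on top of 0.01 mag noise
rng = np.random.default_rng(0)
steady = rng.normal(0.0, 0.01, 60)
variable = rng.normal(0.0, 0.01, 60) + 0.15 * np.sin(np.linspace(0.0, 3.0, 60))
c_value, is_variable = c_statistic(variable, steady)
```

The same two DLCs feed the $F$-test as well (a ratio of variances rather than standard deviations), which is why the two criteria can disagree on marginal nights.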
\section{Conclusions and Discussion}{\label{sec6}} All three sources of the present study have flat/inverted radio spectra and also show $\gamma$-ray flux variability (\citealt{2003ApJ...584..147Z}; \citealt{2009ApJ...699..976A}; \citealt{2009ApJ...707..727A}). The source PMN J0948+0022 showed a large polarization of 18.8\% in 2009 (\citealt{2011PASJ...63..639I}) but was found in a low-polarization state with P$_{opt}$ of 1.85\% in 2011 (\citealt{2011nlsg.confE..49E}). Optical polarization of $<$ 1\% was also observed for the source 1H 0323+342 (\citealt{2011nlsg.confE..49E}). High resolution radio observations of these $\gamma$-NLSy1 galaxies point to the presence of core-jet structures, superluminal motion and high brightness temperatures (\citealt{2006PASJ...58..829D}; \citealt{2011A&A...528L..11G}; \citealt{2007ApJ...658L..13Z}; \citealt{2012arXiv1205.0402O}). All these observations clearly point to the presence of relativistically beamed jets in these sources, closely aligned to the observer's line of sight. It is known that $\gamma$-ray detected blazars show more INOV than non-$\gamma$-ray detected blazars, pointing to an association between INOV and relativistic jets that are more closely aligned to the observer's line of sight (\citealt{2011MNRAS.416..101G}). Earlier studies have shown that large amplitudes ($\psi > 3$\%) and a high DC of around 70\% are exhibited by the BL Lac class of AGN (\citealt{2004JApA...25....1S}; \citealt{2004MNRAS.348..176S}). The spectral energy distribution (SED) of $\gamma$-NLSy1 galaxies is found to be similar to that of blazars (\citealt{2012arXiv1207.3092D}; \citealt{2009ApJ...707..727A}; \citealt{2011MNRAS.413.1671F}). This non-thermal continuum spectrum consists of distinct low-energy (synchrotron emission) and high-energy (inverse Compton emission) components.
The polarized optical flux seen in $\gamma$-NLSy1 galaxies (\citealt{2011nlsg.confE..49E}; \citealt{2011PASJ...63..639I}) is therefore a manifestation of relativistically beamed synchrotron emission, which also accounts for the low-energy component of their SED, similar to the blazar class of AGN. High resolution VLBI observations and optical monitoring data of AGN show that optical flares are often associated with the emergence of new superluminal blobs of synchrotron plasma (knots) in the relativistic jet (\citealt{2010ApJ...715..355L}; \citealt{2010MNRAS.401.1231A}). Correlations between flux and polarization variations were observed in blazars such as Mrk 421 (\citealt{1998A&A...339...41T}) and AO 0235+164 (\citealt{2008ApJ...672...40H}). Similar to blazars, the flux variations in $\gamma$-NLSy1 galaxies can be explained by the shock-in-jet model (\citealt{1985ApJ...298..114M}): turbulent jet plasma passing through shocks in the jet downstream can give rise to increased multiband synchrotron emission and polarization (\citealt{2012A&A...544A..37G} and references therein). Recently, \citet{2012A&A...544A..37G} found that sources with strong optical polarization also show high INOV. Though there is ample observational evidence for the presence of closely aligned relativistic jets in these $\gamma$-NLSy1 galaxies, an independent way to test their presence is to look for INOV in them. The prime motivation for this work is therefore to understand the INOV characteristics of this new class of $\gamma$-ray loud NLSy1 galaxies and to look for similarities and differences with respect to the $\gamma$-ray emitting blazar population of AGN. The observations presented here report the INOV characteristics of a sample of three $\gamma$-NLSy1 galaxies.
The sample in the present study consists of three of the seven known $\gamma$-ray loud NLSy1 galaxies, and therefore the INOV results found here may be representative of the INOV characteristics of this new population. Our high temporal sampling observations, carried out on some of the nights using the EMCCD, have enabled us to detect ultra-rapid continuum flux variations in the source PMN J0948+0022. Such rapid flux variations are possible because the jets in $\gamma$-ray bright AGN have large bulk Lorentz factors and are thereby oriented at small angles to the line of sight, leading to stronger relativistic beaming (\citealt{2009A&A...507L..33P}). From the observations of 3 sources over 10 nights, using the {\it C}-statistics we find a DC of variability of 57 percent. This increased to 85 percent when the {\it F}-statistics discussed in \citet{2010AJ....139.1269D} was used. Also, the amplitude of variability ($\psi$) is found to be greater than 3\% most of the time. Such high-amplitude ($\psi >$ 3\%), high-DC ($\sim$ 70 percent) INOV is characteristic of the BL Lac class of AGN (\citealt{2004JApA...25....1S}), and thus we conclude that the INOV characteristics of $\gamma$-NLSy1 galaxies are similar to those of blazars. The present study therefore provides yet another independent argument for the presence of relativistic jets in these $\gamma$-ray loud NLSy1 galaxies, closely aligned to the observer, similar to the blazar class of AGN. Our observations also indicate that $\gamma$-ray loud NLSy1 galaxies show LTOV on day-to-month timescales, similar to other classes of AGN (\citealt{2000ApJ...540..652W}; \citealt{2004JApA...25....1S}). However, due to the limited nature of our observations, with each source observed over a different time baseline, definitive estimates of the LTOV of this sample of sources could not be made.
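For reference, the variability amplitude $\psi$ quoted above is conventionally the peak-to-peak spread of the DLC corrected for the mean photometric error; the sketch below is an illustrative implementation under that assumption, with a hypothetical light curve:

```python
import numpy as np

def variability_amplitude(dlc_mag, mag_err):
    """Sketch of the INOV amplitude in percent:
    psi = 100 * sqrt((A_max - A_min)^2 - 2*sigma^2),
    where sigma^2 is the mean squared photometric error."""
    spread = np.max(dlc_mag) - np.min(dlc_mag)
    sigma2 = np.mean(np.square(mag_err))
    return 100.0 * np.sqrt(max(spread ** 2 - 2.0 * sigma2, 0.0))

dlc = np.array([0.00, 0.03, 0.08, 0.12, 0.06, 0.02])  # hypothetical DLC (mag)
err = np.full_like(dlc, 0.01)                          # per-point errors (mag)
psi = variability_amplitude(dlc, err)                  # roughly 12 percent here
```

The error correction matters for faint sources such as PKS 1502+036, where the photometric scatter alone would otherwise mimic a few percent of spurious amplitude.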
Though there is ample evidence for the presence of jets in these sources, both from the INOV observations reported here and from other multiwavelength and multimode observations available in the literature, their optical spectra show no resemblance to those of the blazar class of AGN with relativistic jets. Seyferts in general have spiral host galaxies. Optical imaging observations of 1H 0323+342 show a ring-like structure, which hints at a possible collision with another galaxy (\citealt{2008A&A...490..583A}); such an interaction could trigger AGN activity (\citealt{2007MNRAS.375.1017A}). Also, the image obtained with the HST Wide Field Planetary Camera using the F702W filter, corresponding to $\lambda_{eff}$ = 6919 \AA, is well represented when decomposed with a central point source plus a S\'{e}rsic component (\citealt{2007ApJ...658L..13Z}). If the other two sources are also conclusively found to be hosted in spiral galaxies, this calls for a rethink of the well-known paradigm that jets are associated with elliptical galaxies. Further dedicated flux and optical polarization monitoring observations, coupled with high resolution optical imaging studies, will give clues to the nature of this new class of $\gamma$-ray loud NLSy1 galaxies. \section*{Acknowledgments} The authors thank the anonymous referee for his/her critical review and constructive suggestions that helped to improve the presentation. \bibliographystyle{mn}
\section{Introduction} \label{intro} \subsection{Phase retrieval problem} \indent In optics, most detectors can only record the intensity of the signal while losing the information about the phase. Recovering the signal from amplitude-only measurements is called the phase retrieval (PR) problem, which arises in a wide range of applications such as Fourier ptychography microscopy, diffraction imaging, X-ray crystallography and so on \cite{Bunk2007Diffractive}\cite{Miao1999Extending}\cite{Zheng2013Wide}. The phase retrieval problem can be cast as solving a system of quadratic equations: \begin{eqnarray} \centering y_i=|\langle\mathbf{a}_i,\mathbf{x}\rangle|^2+\mathbf{\varepsilon}_i,~~i=1,...,m, \end{eqnarray} where $\mathbf{x}\in\mathbb{C}^n$ is the signal of interest, $\mathbf{a}_i\in\mathbb{C}^n$ is the measurement vector, $y_i\in \mathbb{R}$ is the observed measurement, and $\varepsilon_i$ is the noise.\\ \indent (1.1) is a non-convex, NP-hard problem, and traditional methods usually fail to find its solutions. Moreover, if $\tilde{\mathbf{x}}$ is a solution of (1.1), then $\tilde{\mathbf{x}}e^{i\theta}$ also satisfies (1.1) for any $\theta\in[0,2\pi]$, so the uniqueness of the solution to (1.1) is defined only up to a global phase factor. \subsection{Prior art} For the classical PR problem, $\{\mathbf{a}_i\}_{1\leq i\leq m}$ are Fourier measurement vectors, and a series of methods has been proposed to solve (1.1). Error-reduction methods such as the Gerchberg--Saxton algorithm and the hybrid input--output method \cite{Fienup1982Phase}\cite{Gerchberg1971A} deal with phase retrieval by repeatedly projecting the estimate between the transform domain and the spatial domain subject to certain constraints. These methods often get stuck in local minima; besides, fundamental mathematical questions concerning their convergence remain unsolved.
In fact, without any additional assumptions on $\mathbf{x}$, it is hard to recover $\mathbf{x}$ from $\{y_i\}_{1\leq i\leq m}$. For Fourier measurement vectors, the trivial ambiguities of (1.1) include global phase shift, conjugate inversion and spatial shift. Indeed, it has been proven that the 1D Fourier phase retrieval problem has no unique solution even after excluding these trivialities. To alleviate this ill-posedness, one can exploit prior conditions such as nonnegativity and sparsity; GESPAR \cite{Shechtman2013GESPAR} and the dictionary learning method \cite{tillmann2016dolphin} both made progress in this direction.\\ \indent With the development of compressed sensing and random matrix theory, the measurement vectors $\{\mathbf{a}_i\}_{1\leq i\leq m}$ are no longer constrained to a fixed deterministic type. When $m\geq c_0n\log n$ and $\mathbf{a}_i\overset{i.i.d}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$, the Wirtinger flow (WF) method \cite{candes2015phase} can efficiently find the global optimum of (1.1) given a careful initialization. The objective function of WF is the fourth-degree smooth model \begin{eqnarray} \mathop {\mathrm{minimize}}\limits_{\mathbf{z}\in\mathbb{C}^n}~~\mathit{f}(\mathbf{z})=\frac{1}{2m}\sum_{i=1}^{m}(|\langle\mathbf{a}_i,\mathbf{z}\rangle|^2-y_i)^2, \end{eqnarray} where $y_i=|\langle\mathbf{a}_i,\mathbf{x}\rangle|^2$ and $\mathbf{x}$ is the signal to be reconstructed.\\ \indent WF computes a good initialization by the power method and applies gradient descent with the Wirtinger derivative to solve (1.2). It has been established that for real signals $2n-1$ random measurements guarantee uniqueness with high probability \cite{Balan2006On}, while for complex signals $4n-4$ generic measurements are sufficient \cite{Bandeira2013Saving}. In general, when $m/n\geq 4.5$, WF yields a stable empirical success rate ($\geqslant$ 95\%).
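For concreteness, the objective (1.2) and its Wirtinger gradient can be sketched in a few lines of NumPy. This is an illustrative implementation, not the code used in the experiments below; the dimensions, seed and the sanity check at the true signal are our own choices:

```python
import numpy as np

def wf_loss_and_grad(z, A, y):
    """Objective (1.2) and its Wirtinger gradient.
    A has rows a_i^*, so A @ z stacks the inner products <a_i, z>."""
    m = len(y)
    Az = A @ z
    r = np.abs(Az) ** 2 - y                  # residuals |<a_i,z>|^2 - y_i
    loss = np.sum(r ** 2) / (2 * m)
    grad = (A.conj().T @ (r * Az)) / m       # (1/m) sum_i r_i a_i a_i^* z
    return loss, grad

# sanity check: the true signal is a global minimiser with zero gradient
rng = np.random.default_rng(1)
n, m = 8, 64
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x) ** 2
loss, grad = wf_loss_and_grad(x, A, y)       # both vanish at z = x
```

The gradient expression matches the weighted gradient (2.2) below with all $\omega_i=1$.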
However, when $m/n\leq3$, WF has a low recovery rate; there is a gap between the sampling complexity of WF and the known information-theoretic limit. Along this line, a number of related works have appeared. In \cite{chen2015solving}, Chen et al. suggested a truncated Wirtinger flow (TWF) method based on a Poisson model; TWF largely improves the performance of WF by truncating some outlying components. Zhang et al. proposed a reshaped Wirtinger flow model of the form \cite{zhang2016reshaped}: \begin{eqnarray} \mathop {\mathrm{minimize}}\limits_{\mathbf{z}\in\mathbb{C}^n}~~\mathit{f}(\mathbf{z})=\frac{1}{2m}\sum_{i=1}^{m}(|\langle\mathbf{a}_i,\mathbf{z}\rangle|-\sqrt{y_i})^2. \end{eqnarray} (1.3) is a lower-order model compared to (1.2). Though it is not differentiable at the points in $\big\{\mathbf{z}\,\big|\,\mathbf{a}^*_i\mathbf{z}=0,i\in\{1,...,m\}\big\}$, this has little influence on the convergence analysis near the optimal points. Reshaped WF uses a sub-gradient algorithm to search for the global minima. Numerical tests demonstrated its comparatively lower sample complexity: for real signals it attains a 100\% recovery rate when $m/n\approx3.8$, and for complex signals when $m/n\approx4.2$. In \cite{wang2016solving}, a truncated reshaped WF was proposed which decreases the sampling complexity further. Recently, a stochastic gradient descent algorithm based on this reshaped model was also proposed for large scale problems \cite{wang2016solving1}. \subsection{Algorithm in this paper} \indent In this paper, a reweighted Wirtinger flow (RWF) algorithm is proposed to deal with the phase retrieval problem. It is based on the higher-degree model (1.2), yet it attains a lower sampling complexity. In RWF, the weights of the individual components change during the iterations; these weights have an indirect truncation effect when the current estimate is far away from the optimum.
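The (sub)gradient of the reshaped model (1.3) can likewise be sketched in NumPy. This is a hedged illustration, up to the constant factor fixed by the Wirtinger convention; the small guard `eps` for the non-differentiable points $\mathbf{a}_i^*\mathbf{z}=0$ and the test data are assumptions of ours:

```python
import numpy as np

def reshaped_wf_grad(z, A, y, eps=1e-12):
    """(Sub)gradient sketch of (1.3), f(z) = (1/2m) sum (|<a_i,z>| - sqrt(y_i))^2:
    (|u|-b)(u/|u|) a_i summed over i, written as one matrix product."""
    m = len(y)
    Az = A @ z
    phase = Az / np.maximum(np.abs(Az), eps)   # guard against a_i^* z = 0
    return A.conj().T @ (Az - np.sqrt(y) * phase) / m

# the subgradient also vanishes at the true signal
rng = np.random.default_rng(3)
n, m = 8, 64
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x) ** 2
g = reshaped_wf_grad(x, A, y)
```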
Theoretical analysis also shows that once the weights are bounded between 1 and $\frac{10}{9}$, RWF converges to the global optimum exponentially.\\ \indent The remainder of this paper is organized as follows. In section 2, we introduce the proposed RWF and establish its theoretical performance. In section 3, numerical tests compare RWF with state-of-the-art approaches. Section 4 concludes. Technical details can be found in the Appendix.\\ \indent In this article, bold uppercase and lowercase letters represent matrices and vectors, respectively. $(\cdot)^*$ denotes the conjugate transpose, $j=\sqrt{-1}$ is the imaginary unit, $\mathrm{Re}(\cdot)$ is the real part of a complex number, $|\cdot|$ denotes the absolute value of a real number or the modulus of a complex number, and $||\cdot||$ is the Euclidean norm of a vector. \section{Reweighted Wirtinger Flow} \subsection{Algorithm of RWF} Reweighting techniques have appeared in several related fields; in \cite{chartrand2008iteratively}\cite{daubechies2010iteratively}, iteratively reweighted least squares algorithms were proposed for problems in compressed sensing. Following the same idea, we suggest the reweighted Wirtinger flow model \begin{eqnarray} \mathop {\mathrm{minimize}}\limits_{\mathbf{z}\in\mathbb{C}^n}~~\mathit{f}(\mathbf{z})=\frac{1}{2m}\sum_{i=1}^{m}f_i(\mathbf{z})=\frac{1}{2m}\sum_{i=1}^{m}\omega_i(|\langle\mathbf{a}_i,\mathbf{z}\rangle|^2-y_i)^2, \end{eqnarray} where $\omega_i\geqslant 0$ are weights. One easily checks that $\mathbf{x}$ is a global minimum of (2.1); WF is the special case of RWF with $\omega_i=1$ for $i=1,...,m$. Once $\{\omega_i\}_{1\leq i\leq m}$ are determined, (2.1) can be solved by gradient descent with the Wirtinger gradient $\nabla\mathit{f}(\mathbf{z})$: \begin{eqnarray} \nabla \mathit{f}(\mathbf{z})=\frac{1}{m}\sum_{i=1}^{m}\nabla f_i(\mathbf{z})=\frac{1}{m}\sum_{i=1}^{m}\omega_i(|\langle\mathbf{a}_i,\mathbf{z}\rangle|^2-y_i)\mathbf{a}_i\mathbf{a}_i^*\mathbf{z}.
\end{eqnarray} The details of Wirtinger derivatives can be found in \cite{candes2015phase}.\\ \indent The key of our algorithm is to determine the $\omega_i$. TWF \cite{chen2015solving} sets a parameter $C$ to truncate the terms $(|\langle\mathbf{a}_i,\mathbf{z}\rangle|^2-y_i)\mathbf{a}_i\mathbf{a}_i^*\mathbf{z}$ with $i\in W=\big\{i~\big|~\big||\langle\mathbf{a}_i,\mathbf{x}\rangle|^2-y_i\big|\geq C,~i=1,...,m\big\}$, since those components may make $\nabla\mathit{f}(\mathbf{z})$ deviate from the correct direction. To avoid the selection of $C$, we choose the special weights $\omega_i$ given in (2.3); they are computed adaptively within the algorithm from the value of $\mathbf{z}_{k-1}$. From (2.3), we see that the corresponding $\omega_i$ is comparatively small when $i\in W$, which diminishes the contribution of $\nabla f_i(\mathbf{z})$ to $\nabla f(\mathbf{z})$; adding weights is thus an indirect adaptive truncation. \begin{eqnarray} \omega_i^k=\frac{1}{\Big|\big|\langle\mathbf{a}_i,\mathbf{z}_{k-1}\rangle\big|^2-y_i\Big|+\eta_i},~i=1,...,m, \end{eqnarray} where $\mathbf{z}_{k-1}$ is the result of the $(k-1)$th iteration and $\eta_i$ is a parameter which may change during the iterations or remain fixed throughout.\\ \indent We then design the RWF algorithm to search for the global minimum $\mathbf{x}$. RWF updates $\mathbf{z}_k$ from a proper initialization $\mathbf{z}_0$, which is calculated by the power method of \cite{candes2015phase}. The details of RWF can be seen in Algorithm 1.
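The weight update (2.3) can be sketched as follows; the example point, dimensions and seed are illustrative assumptions, and $\eta_i$ is kept constant as in the experiments of section 3:

```python
import numpy as np

def rwf_weights(z_prev, A, y, eta=0.9):
    """Adaptive weights (2.3): components with a large residual
    ||<a_i,z>|^2 - y_i| receive a small weight, which indirectly
    truncates their contribution to the gradient."""
    residual = np.abs(np.abs(A @ z_prev) ** 2 - y)
    return 1.0 / (residual + eta)

rng = np.random.default_rng(4)
n, m = 8, 64
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x) ** 2
w = rwf_weights(x + 0.1 * rng.normal(size=n), A, y)  # weights at a perturbed point
```

By construction $0<\omega_i\leq 1/\eta_i$, so no hard threshold analogous to the $C$ of TWF has to be tuned.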
\\ \begin{algorithm} \caption{\textbf{Reweighted Wirtinger Flow}($\mathbf{RWF}$)} \label{alg:Framwork} % \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \begin{algorithmic} \REQUIRE$\{\{y_i\}_{1\leq i\leq m},\{\mathbf{a}_i\}_{1\leq i\leq m},\{\eta_i\}_{1\leq i\leq m}\ ,T\}$ \STATE $\{\mathbf{a}_i\}_{i=1}^m$: Gaussian vectors\\ $y_i=|\langle\mathbf{a}_i,\mathbf{x}\rangle|^2$: measurements\\ $\eta_i$: the parameter\\ $T$: the maximum number of iterations\\ \ENSURE$\mathbf{x}^*$ \STATE $\mathbf{x}^*$: the reconstructed signal\\ \vskip 4mm \hrule \vskip 2mm $\mathbf{Initialization}$ \STATE set $\lambda^2=n\frac{\sum_iy_i}{\sum_i||\mathbf{a}_i||^2}$\\ set $\mathbf{z}_0$ with $||\mathbf{z}_0||=\lambda$ to be the eigenvector corresponding to the largest eigenvalue of \begin{eqnarray*} \mathbf{Y}=\frac{1}{m}\sum_{i=1}^{m}y_i\mathbf{a}_i\mathbf{a}_i^* \end{eqnarray*} \vskip 2mm \FOR{$k=1$; $k\leq T$; $k++$ } \STATE $\mathbf{z}_{k+1}\in \mathrm{argmin}f^k(\mathbf{z})$ \ENDFOR\\ $\mathbf{x}^*=\mathbf{z}_T$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{\textbf{Gradient descent method solver of (2.4)}} \label{alg:GDsolver} % \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \begin{algorithmic} \REQUIRE$\{\mathbf{z}_k,\{y_i\}_{1\leq i\leq m},\{\mathbf{a}_i\}_{1\leq i\leq m},\{\eta_i\}_{1\leq i\leq m}\ ,T_1\}$ \STATE $\mathbf{z}_k$: the estimate at the $k$th iteration\\ $\mathbf{a}_i$: Gaussian vectors\\ $y_i=|\langle\mathbf{a}_i,\mathbf{x}\rangle|^2$: measurements\\ $\eta_i$: the parameter\\ $T_1$: the maximum number of gradient descent steps\\ \ENSURE$\mathbf{z}_{k+1}$ \vskip 4mm \hrule \vskip 2mm $\mathbf{Initialization}$ \STATE Set $\mathbf{z}_{k}^{0}=\mathbf{z}_k$\\ \vskip 2mm \FOR{$t=1$; $t\leq T_1$; $t++$ } \STATE $\mathbf{z}_{k}^{t}=\mathbf{z}_{k}^{t-1}-\mu_t\nabla\mathit{f}^k(\mathbf{z}_{k}^{t-1})$ \ENDFOR\\ $\mathbf{z}_{k+1}=\mathbf{z}_{k}^{T_1}$
\end{algorithmic} \end{algorithm} \indent From Algorithm 1, we see that an optimization problem is solved in each iteration $k$: \begin{eqnarray} \mathbf{z}_{k+1}\in\mathrm{argmin}~f^k(\mathbf{z})=\frac{1}{m}\sum_{i=1}^{m}f_i^k(\mathbf{z}), \end{eqnarray} where $f_i^k(\mathbf{z})=\omega_i^k(|\langle\mathbf{a}_i,\mathbf{z}\rangle|^2-y_i)^2$.\\ \indent For simplicity, we solve (2.4) by gradient descent in this paper; details are given in Algorithm 2. On the other hand, a wide range of alternatives can also be used: Sun et al. \cite{sun2016geometric} proposed the trust region method, Gao et al. \cite{gao2016gauss} utilized the Gauss--Newton method, and Li et al. \cite{li2016gradient} suggested the conjugate gradient and L-BFGS methods.\\ \indent It is critical to choose a proper stepsize $\mu_t$ in every step of gradient descent. In this paper, we use the backtracking method. For fairness, we also add this backtracking method to WF and TWF in every numerical test.
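The spectral initialisation of Algorithm 1 can be sketched as follows. This is a minimal NumPy version: a full Hermitian eigendecomposition stands in for the power method of \cite{candes2015phase}, and the test dimensions, seed and oversampling ratio are our own choices:

```python
import numpy as np

def spectral_init(A, y):
    """Spectral initialisation sketch: leading eigenvector of
    Y = (1/m) sum_i y_i a_i a_i^*, scaled to norm lambda with
    lambda^2 = n * sum_i y_i / sum_i ||a_i||^2."""
    m, n = A.shape
    Y = (A.conj().T * y) @ A / m          # (1/m) sum_i y_i a_i a_i^*
    eigvals, eigvecs = np.linalg.eigh(Y)  # ascending eigenvalues
    v = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    lam = np.sqrt(n * y.sum() / (np.abs(A) ** 2).sum())
    return lam * v

rng = np.random.default_rng(5)
n, m = 8, 1600                            # heavy oversampling for a clean check
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x) ** 2
z0 = spectral_init(A, y)
corr = abs(np.vdot(x, z0)) / (np.linalg.norm(x) * np.linalg.norm(z0))
```

With enough measurements, $\mathbf{z}_0$ is strongly correlated (up to global phase) with the true signal, which is exactly what Lemmas 2.1 and 2.2 quantify.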
The details of the backtracking method can be seen in Algorithm 3.\\ \begin{algorithm} \caption{\textbf{Stepsize Choosing via Backtracking Method}} \label{alg:Backtracking} % \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \begin{algorithmic} \REQUIRE$\{\mathit{f}(\mathbf{z}), \nabla\mathit{f}(\mathbf{z}),\mathbf{z}^{(k)},\beta\}$ \STATE $\beta$: a predetermined parameter in $(0,1)$ \ENSURE$\mu^{(k)}$ \vskip 4mm \hrule \vskip 2mm $\mathbf{General~step}$ \end{algorithmic} \begin{algorithmic}[1] \STATE set $\tau=1$\\ \STATE Repeat $\tau\leftarrow 0.5\tau$ until\\ $\mathit{f}(\mathbf{z}^{(k)}-\tau\nabla\mathit{f}(\mathbf{z}^{(k)}))<\mathit{f}(\mathbf{z}^{(k)})-\tau\beta||\nabla\mathit{f}(\mathbf{z}^{(k)})||^2$ \STATE $\mu^{(k)}=\tau$ \end{algorithmic} \end{algorithm} \indent Empirically, the reweighting procedure changes the objective function $f^{k}$, which prevents the algorithm from remaining stagnated in local minima and allows it to proceed to search for the global optimum. When the ratio between $m$ and $n$ is comparatively large, the geometry of $\mathit{f}^{1}(\mathbf{z})$ is benign, with fewer local minima, and a comparatively accurate solution is reached within a few iterations. Figure 2.1 shows the function landscape of $\mathit{f}^{1}(\mathbf{z})$ with $\mathbf{x}=\{[0.5;0.5],[-0.5;-0.5]\}$. We can see that the weighted function is steeper than the unweighted one in the neighborhood of the global optima; from a geometric perspective, the weighted function is therefore more likely to converge to the optimum. \\ \indent Next, we give the convergence analysis of RWF. \subsection{Convergence of RWF} \begin{figure} \centering \includegraphics[width=8cm,height=5cm]{figure4} \caption{Left: the landscape of $f^1(\mathbf{z})$; right: the landscape of $f(\mathbf{z})$ defined by (1.2).
For both pictures, $\mathbf{x}=\big\{[0.5;0.5],[-0.5;-0.5]\big\}$, $m=100$.} \end{figure} \indent To establish the convergence of RWF, we first define the distance of any estimate $\mathbf{z}$ to the solution set as \begin{eqnarray} dist(\mathbf{z},\mathbf{x}) =\mathop {\mathrm{min}}\limits_{\phi\in[0,2\pi]}||\mathbf{z}-\mathbf{x}e^{\mathrm{j}\phi}||. \end{eqnarray} \indent Lemmas 2.1 and 2.2 give bounds on $dist(\mathbf{z}_0,\mathbf{x})$. \begin{lemma}\cite{candes2015phase} Let $\mathbf{x}\in\mathbb{C}^n$ be an arbitrary vector and $\mathbf{y}=|\mathbf{A}\mathbf{x}|^2\in\mathbb{R}^m$ with $\mathbf{A}=[\mathbf{a}_1,...,\mathbf{a}_m]^*$, $\mathbf{a}_i^*\overset{i.i.d}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$. Then, when $m\geq c_0n\log n$ for a sufficiently large constant $c_0$, the Wirtinger flow initial estimate $\mathbf{z}_0$, normalized so that its squared Euclidean norm equals $m^{-1}\sum_iy_i$, obeys \begin{eqnarray} dist(\mathbf{z}_0,\mathbf{x})\leq\frac{1}{8}||\mathbf{x}||. \end{eqnarray} \end{lemma} \begin{lemma}\cite{candes2015phase} Let $\mathbf{x}\in\mathbb{C}^n$ be an arbitrary vector and assume that $L$ admissible coded diffraction patterns are collected with $L\geq c_0(\log n)^4$, where $c_0$ is a sufficiently large numerical constant. Then $\mathbf{z}_0$ satisfies \begin{eqnarray} dist(\mathbf{z}_0,\mathbf{x})\leq\frac{1}{8\sqrt{n}}||\mathbf{x}||. \end{eqnarray} \end{lemma} \indent The selection of $\eta_i$ does affect the performance of RWF: as Figure 2.2 shows, different values of the parameter lead to different convergence rates. \begin{figure} \centering \includegraphics[width=7cm,height=6cm]{figure11} \caption{The $\mathbf{NMSE}$ of different $\alpha$ for RWF. $m/n=3$ for $\mathbf{x}\in \mathrm{R}^{256}$} \end{figure} For convenience in the proofs, we assume $\eta_i=0.9$ is a constant for all $i$. Using the Lemmas above, we can establish the convergence theory of RWF.
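The distance (2.5) need not be computed by searching over $\phi$: the minimiser is $\phi=\arg(\mathbf{x}^*\mathbf{z})$, which gives a closed form. A minimal sketch (the test vectors are illustrative):

```python
import numpy as np

def dist(z, x):
    """Distance (2.5), min over phi of ||z - x e^{j phi}||.
    The optimal phase aligns x with z: e^{j phi} = x^H z / |x^H z|."""
    c = np.vdot(x, z)                        # x^H z (vdot conjugates x)
    phase = c / abs(c) if c != 0 else 1.0
    return np.linalg.norm(z - x * phase)

rng = np.random.default_rng(6)
x = rng.normal(size=8) + 1j * rng.normal(size=8)
z = x * np.exp(0.7j)                         # same signal, globally rotated
```

A globally rotated copy of $\mathbf{x}$ has distance zero, while a rescaled copy $2\mathbf{x}$ has distance $\|\mathbf{x}\|$, as expected from the definition.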
\ \begin{theorem} Let $\mathbf{z}_k$ be the estimate at the $k$th iteration of Algorithm 1, and suppose $\mathop{\mathrm{max}}\limits_{i=1,...,m}\Big|\big|\langle\mathbf{a}_i,\mathbf{z}_k\rangle\big|^2-y_i\Big|<0.1$, i.e. $1\leq\omega_i\leq\frac{10}{9}$. Take a constant stepsize $\mu_t=\mu$ for $t=1,2,...$ with $\mu<c_1/n$ for some fixed numerical constant $c_1$. Then, with probability at least $1-13e^{-\lambda n}-me^{-1.5m}-8/{n^2}$ for some constant $\lambda$, the estimate in Algorithm 1 satisfies \begin{eqnarray} dist(\mathbf{z}_k^t,\mathbf{x})\leq\frac{1}{8}(1-\frac{\mu}{4})^{t-1}||\mathbf{x}||. \end{eqnarray} \end{theorem} The details of the proof are given in the appendix.\\ \indent Note that $\mathbf{x}$ is the global optimum of (2.4) for each $k$, and the solution of (2.4) gets close enough to $\mathbf{x}$ during the iterations; we assume $dist(\mathbf{z}_k,\mathbf{x})\leq dist (\mathbf{z}_0,\mathbf{x})$ at the $k$th iteration. During the iterations, $\mathbf{z}_k$ moves into the region \begin{eqnarray*} E(\mathbf{z})=\Big\{\mathbf{z}\Big|\mathop {\mathrm{max}}\limits_{ i=1,...,m}\Big|\big|\langle\mathbf{a}_i,\mathbf{z}\rangle\big|^2-y_i\Big|<0.1\Big\}. \end{eqnarray*} Once $\mathbf{z}_k$ drops into $E(\mathbf{z})$, RWF converges geometrically by Theorem 2.3. Theorem 2.3 also ensures the exponential convergence of WF, because WF is a special case of RWF. Figure 2.3 shows how RWF converges to the optimum. Here, the maximum number of gradient descent steps in each RWF iteration is $500$, $m/n=2.5$, and $n=256$. In the first 500 steps, neither algorithm decreases the $\mathbf{NMSE}$ (the definition of $\mathbf{NMSE}$ can be found in section 3); both get stuck in local minima. The reweighting procedure can empirically pull the estimate out of these holes by changing the objective function.
So RWF can continue to search for the optimum.\\ \begin{figure} \centering \includegraphics[width=7cm,height=6cm]{figure5} \caption{The $\mathbf{NMSE}$ of different methods during iteration. $m/n=2.5$ for $\mathbf{x}\in \mathrm{R}^{256}$} \end{figure} \section{Numerical testing} This section presents numerical results showing the performance of RWF together with WF and TWF. All tests are carried out on a Lenovo desktop with a 3.60 GHz Intel Core i7 processor and 4GB DDR3 memory. We adopt the normalized mean square error ($\mathbf{NMSE}$), calculated as \begin{eqnarray*} \centering \mathbf{NMSE}=\frac{\mathrm{dist}(\hat{\mathbf{x}},\mathbf{x})}{||\mathbf{x}||}, \end{eqnarray*} where $\hat{\mathbf{x}}$ is the numerical estimate of $\mathbf{x}$.\\ \indent In the simulations, $\mathbf{x}$ was drawn either as a real Gaussian random vector $\mathcal{N}(\mathbf{0},\mathbf{I})$ or as a complex Gaussian random vector $\mathcal{N}(\mathbf{0},\mathbf{I}/2)+j\mathcal{N}(\mathbf{0},\mathbf{I}/2)$, and $\{\mathbf{a}_i\}_{1\leq i\leq m}$ are drawn i.i.d. from either $\mathcal{N}(\mathbf{0},\mathbf{I})$ or $\mathcal{N}(\mathbf{0},\mathbf{I}/2)+j\mathcal{N}(\mathbf{0},\mathbf{I}/2)$. In all simulations, $\eta_i=0.9$ is a constant. Figure 3.1 shows the exact recovery rates of RWF, TWF and WF. The length $n$ of $\mathbf{x}$ is $256$. For RWF, we set the maximum number of iterations to 300 and the maximum number of gradient descent steps in each iteration to 500; for fairness, the maximum numbers of iterations of WF and TWF are both 150000. Once the $\mathbf{NMSE}$ falls below $10^{-5}$, we stop the iteration and declare the signal exactly recovered. We let $m/n$ vary from 1 to 8 and, at each $m/n$ ratio, perform $50$ replications; the empirical recovery rate is the number of successful trials divided by 50 at each $m/n$.\\ \indent The performance of the different algorithms in the simulation described above is shown in Figure 3.1.
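The success criterion above can be sketched directly from the definitions; the test vectors and perturbation level are illustrative assumptions:

```python
import numpy as np

def nmse(x_hat, x):
    """NMSE = dist(x_hat, x) / ||x||, with the distance taken up to
    a global phase via the closed-form alignment e^{j phi} = x^H x_hat / |x^H x_hat|.
    A trial counts as an exact recovery when NMSE < 1e-5."""
    c = np.vdot(x, x_hat)
    phase = c / abs(c) if c != 0 else 1.0
    return np.linalg.norm(x_hat - x * phase) / np.linalg.norm(x)

rng = np.random.default_rng(7)
x = rng.normal(size=16) + 1j * rng.normal(size=16)
perfect = nmse(x * np.exp(1.2j), x)            # global phase is not an error
noisy = nmse(x + 0.01 * rng.normal(size=16), x)  # small additive perturbation
```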
In figure 3.1(a), for real-valued signals, RWF achieves a 90\% recovery rate when $m=2.4n$; there are even some instances of successful recovery when $m=1.6n$. In contrast, TWF and WF need $m=3.3n$ and $m=4.6n$, respectively, to reach the same recovery rate as RWF. In figure 3.1(b), for the complex case, the performance of RWF is superior to that of TWF and WF as well: RWF reaches a high recovery probability of about 85\% at $m=3.5n$. These numerical simulations show that the sampling complexity of RWF is close to the information-theoretic limit, which demonstrates the high capability of RWF to deal with the PR problem. \\ \begin{figure} \centering \begin{subfigure}[t]{3in} \centering \includegraphics[width=3in]{figure1} \caption{}\label{fig:1a} \end{subfigure} \quad \begin{subfigure}[t]{3in} \centering \includegraphics[width=3in]{figure2} \caption{}\label{fig:1b} \end{subfigure} \caption{The comparison between RWF, WF and TWF for the noiseless signal.}\label{fig:1} \end{figure} \indent The number of iterations determines the convergence rate of RWF. Figure 3.2 shows the number of iterations RWF needs to recover the signal for $m/n$ from $2$ to $8$. At each $m/n$, we record 50 successful tests and average their iteration counts. The maximum number of gradient descent steps in each iteration is 500. If the $\mathbf{NMSE}$ is below $10^{-5}$, we declare the trial successful. We can see that when $m/n$ is small, RWF needs more iterations to search for the global minimum, which incurs a higher computational cost.
As $m/n$ increases, the number of iterations gradually decreases to nearly one, because of the benign geometric properties of the objective function when $m/n$ becomes large.\\ \begin{figure} \centering \begin{subfigure}[t]{3in} \centering \includegraphics[width=3in]{figure6} \caption{}\label{fig:1a} \end{subfigure} \quad \begin{subfigure}[t]{3in} \centering \includegraphics[width=3in]{figure7} \caption{}\label{fig:1b} \end{subfigure} \caption{The average number of iterations of RWF.}\label{fig:1} \end{figure} \indent Our reweighting idea can also be extended to the coded diffraction pattern (CDP) model. The details of the CDP model can be found in \cite{Cand2013Phase}. The weighted CDP model can be described as below:\\ \begin{eqnarray} y_r^k=\big|\sum_{t=0}^{n-1}\omega_r^kz^k[t]\bar{d}_l(t)e^{-j2\pi k_1t/n}\big|^2,\\ r=(k_1,l),0\leqslant k_1\leqslant n-1,1\leqslant l\leqslant L\nonumber, \end{eqnarray} where \begin{eqnarray*} \omega^k_r=1\Big/\Big(\Big|\big|\sum_{t=0}^{n-1}z^{k-1}[t]\bar{d}_l(t)e^{-j2\pi k_1t/n}\big|^2-y_r\Big|+\eta_r\Big), \end{eqnarray*} \begin{eqnarray*} y_r=\big|\sum_{t=0}^{n-1}x[t]\bar{d}_l(t)e^{-j2\pi k_1t/n}\big|^2,\\ r=(k_1,l),0\leqslant k_1\leqslant n-1,1\leqslant l\leqslant L, \end{eqnarray*} where $\mathbf{x}$ is the true signal. We can also obtain a high-accuracy estimate of $\mathbf{x}$ with algorithm 1. Figure 3.3 is the comparison between RWF and WF under the CDP model. We generated $\mathbf{x}$ from $ \mathcal{N}(0,\mathbf{I}/2)+j\mathcal{N}(0,\mathbf{I}/2)$. The length of $\mathbf{x}$ is 256, and $L$ varies from 2 to 8. Other settings are the same as those for the real or complex Gaussian signals above. We can see that RWF has a slight advantage over WF for the 1D CDP model.\\ \begin{figure} \centering \includegraphics[width=7cm,height=6cm]{figure8} \caption{The comparison between RWF and WF under the CDP model for the noiseless signal.} \end{figure} \indent Figure 3.4 shows the result for the 2D CDP model: a 3D caffeine molecule projected onto the 2D plane.
The size of the picture is 128$\times$128 and $L$ is 7. Since it is an RGB picture, we apply RWF and WF to each of the R, G, B channels independently. For RWF, we set the maximum number of iterations to 300 and the maximum number of gradient descent steps in each iteration to 500. The maximum number of iterations is 150000 for WF. We can see that the picture recovered by RWF is better than that recovered by WF. \begin{figure} \centering \begin{subfigure}[t]{1.5in} \centering \includegraphics[width=1.5in]{figure9} \caption{}\label{fig:1a} \end{subfigure} \quad \begin{subfigure}[t]{1.5in} \centering \includegraphics[width=1.5in]{figure10} \caption{}\label{fig:1b} \end{subfigure} \quad \begin{subfigure}[t]{1.5in} \centering \includegraphics[width=1.5in]{figure12} \caption{}\label{fig:1b} \end{subfigure} \caption{The 3D caffeine molecule. (a) is the original, (b) is recovered by WF, (c) is recovered by RWF.}\label{fig:1} \end{figure} \section{Conclusion} In this paper, we propose a reweighted Wirtinger flow algorithm for the phase retrieval problem. By reweighting the objective function in each iteration, it makes the gradient descent algorithm more likely to converge to the global minimum when the sampling complexity is low. Compared with WF, however, this algorithm has a higher computational cost, so in future work we plan to accelerate it, for example by using stochastic gradient methods. \section{Acknowledgement} This work was supported in part by the National Natural Science Foundation of China (0901020115014).
\section{Introduction} \label{sect:intro} With the commercial deployment of the fifth-generation wireless communication systems (5G), emerging applications including augmented reality/virtual reality (AR/VR) and high density video streaming have triggered the explosive growth of wireless traffic \cite{1}. Although massive multiple-input multiple-output (MIMO), orthogonal frequency-division multiplexing (OFDM), and advanced link adaptation techniques are quite effective for improving the achievable spectral efficiency \cite{2}, the most powerful tool to improve the overall throughput for each base station (BS) is carrier aggregation (CA) \cite{3}, i.e., aggregating multiple available transmission bands together to achieve extremely high data rates. In general, the CA technology can be divided into three classes, namely {\em intra-band contiguous CA}, {\em intra-band non-contiguous CA}, and {\em inter-band non-contiguous CA} \cite{DCA}, with one primary component carrier (PCC) and several contiguous or non-contiguous secondary component carriers (SCCs). For intra-band contiguous or non-contiguous CA, the PCC and SCCs share similar coverage, and the traffic migration among different component carriers is straightforward \cite{5}, while for inter-band non-contiguous CA, the traffic migration task is much more challenging due to the diversified propagation properties of different frequency bands. In the conventional fourth generation wireless systems (4G), the operating frequency bands are below 6 GHz, and the number of supported component carriers is limited to 5, which corresponds to a total bandwidth of 100 MHz. To support enhanced mobile broadband (eMBB) and ultra-reliable low-latency communications (URLLC) in 5G networks, the millimeter wave (mmWave) technology \cite{6,7} has been proposed to deal with the extremely crowded frequency bands below 6 GHz.
With more frequency bands available, the number of supported component carriers for 5G CA is increased to 16, with 1 GHz total bandwidth \cite{(6)}. Since mmWave bands suffer from high isotropic propagation losses, the corresponding link adaptation schemes, including the modulation and coding scheme (MCS) and re-transmission processes, as well as the channel outage events, are quite different from those of sub-6 GHz bands \cite{8}. This leads to several novel designs in mmWave bands, especially when interacting with the above 5G CA technology. For example, a digital pre-distortion technique and a filter bank multicarrier (FBMC) technique have been proposed in \cite{9} and \cite{FBMC,IoT}, respectively, to deal with the linearization issues for extremely wide band power amplifiers. In the physical layer, a novel beamwidth selection and optimization scheme has been proposed in \cite{D2D} to deal with the potential interference among mmWave links and sub-6 GHz links, and the extensions to non-orthogonal multiple access (NOMA) based and MIMO based transmission strategies have been discussed in \cite{2021carrier} and \cite{CAReceive} as well. In the medium access control (MAC) layer, a novel mechanism to dynamically select sub-6 GHz and mmWave bands has been proposed in \cite{10}, and later extended to incorporate resource block (RB) level allocation in \cite{CALet}, which guarantees quality-of-experience (QoE) performance for CA enabled users. In the higher layers, collaborative transmission for video streaming applications has been proposed in \cite{deng2020dash}, which uses sub-6 GHz bands for control information and mmWave bands for data transmission. A more efficient approach to utilize inter-band non-contiguous CA is to boost the transmission rates of mmWave bands with the assistance of sub-6 GHz bands \cite{NCA}.
Typical examples include content distribution and processing in vehicular networks \cite{11} and exploiting the channel state information (CSI) of a sub-6 GHz channel to choose a mmWave beam \cite{12}. With recent progress in achieving high data rates in traffic hotspots, simultaneous transmission via sub-6 GHz and mmWave bands has been proposed in \cite{13} and \cite{14}, which generally requires smart traffic splitting mechanisms in the packet data convergence protocol (PDCP) layer. The above schemes usually consider low mobility users, for which a simple time-invariant strategy is sufficient to achieve promising results, as demonstrated in \cite{15}. However, when user mobility increases, the time-invariant strategy often leads to mismatched traffic demands and transmission capabilities, and the link utilization ratio is limited. Meanwhile, the computational complexity of conventional traffic splitting algorithms grows exponentially with the number of available bands, which is not suitable for practical implementations of 5G networks with up to 16 component carriers. In this paper, we consider a novel traffic splitting mechanism for multi-stream CA in a hybrid sub-6 GHz and mmWave band scenario. In order to address the above issues, we model the different propagation losses of different frequency bands and propose a low-complexity traffic splitting algorithm based on a fuzzy proportional integral derivative (PID) control mechanism, where the main contributions are listed below. \begin{itemize} \item {\em Reduced Feedback Overhead.} Different from the conventional feedback based traffic splitting mechanisms, our proposed approach relies on observing the local RLC buffer statuses of component carriers to approximate UEs' behaviors, which is more favorable for practical deployment.
\item {\em Low-complexity Implementation.} To reduce the implementation complexity, we approximate the original time-varying mixed-integer optimization problem with a short-term expectation maximization. Meanwhile, we utilize fuzzy control-based PID adaptation instead of a reinforcement learning scheme to achieve lower complexity and quicker convergence. \end{itemize} Since the proposed algorithm only relies on the local RLC buffer information of sub-6 GHz and mmWave bands and minimizes frequent feedback from the user equipment (UE) side, it can also be easily extended to machine-to-machine \cite{RLSTM,xu2020deep} or vehicular-to-vehicular communications \cite{RDDPG}. Through numerical examples, we show that our proposed traffic splitting mechanism can achieve more than 90\% link resource utilization ratio for different UE transmission requirements with different mobilities, which corresponds to a 10\% throughput improvement compared with several conventional baseline schemes, such as \cite{15} and \cite{16}. The remainder of the paper is organized as follows. In Section~\ref{sect:sys} we describe the high-layer splitting model for multi-stream carrier aggregation in 5G non-standalone networks, in Section~\ref{sect:Pro} we formulate the transmission duration minimization problem, and in Section~\ref{sect:des} we discuss the design and deployment of the allocation mechanism. In Section~\ref{sect:exper} we report some examples and results, and we conclude the paper and provide insights on future works in Section~\ref{sect:con}.
\begin{table*}[h] \caption{Notation and Acronym} \label{table:Notation} \begin{tabular}{p{4cm}|p{12cm}} \hline \emph{Notation and Acronym} & \emph{Definition} \\ \hline $N_{SCC}$, $S$, $s^{th}$& Number of SCCs, set of SCCs and the $s^{th}$ SCC, respectively \\ $Q^{P}(t)$ & Set of data packets buffered in the PDCP layer \\ $Q^{P}_{i}(t)$, $\vert Q^{P}_{i}(t)\vert$ & Set and quantity of arriving packets from the SDAP layer \\ $Q^{P}_{P/s,o}(t)$, $\vert Q^{P}_{P/s,o}(t)\vert$ & Set and quantity of data packets departed to the PCC and the $s^{th}$ SCC \\ $N^{P}_{\max}(t)$, $N^{P}_{\min}(t)$ & Maximum and minimum packet indices in $Q^{P}(t)$, respectively \\ $A_{P/s}(t)$ & Transmission strategy for the PCC and the $s^{th}$ SCC, respectively \\ $Q_{P/s}^R(t)$, $\vert Q_{P/s}^R(t)\vert $ & Set and quantity of data packets in the PCC and the $s^{th}$ SCC RLC buffer \\ $ S_{P/s}^M(t)$, $\vert S_{P/s}^M(t)\vert $ & Set and quantity of data packets transmitted in the MAC and PHY layers of the PCC and the $s^{th}$ SCC \\ $\rho_{P/s}$ & Normalization factors of the PCC and the $s^{th}$ SCC, respectively \\ $\gamma_{P/s}(t)$ & Signal-to-interference-and-noise ratio (SINR) of the PCC and the $s^{th}$ SCC, respectively \\ $N_{th}$ & Normalized threshold for successful packet delivery \\ $\vert Q_U^{P}(t)\vert$ & Quantity of data packets successfully received at the UE side \\ $T$, $L$, $N$ & Duration of transmission time slots, the data packets generated from IP flows and the length of prediction period, respectively \\ $B(t)$ & $B(t)=\vert Q_P^R(t)\vert-\sum_{s=1}^{N_{SCC}}\vert Q_s^R(t)\vert$ \\ $K_p(t)$, $K_i(t)$, $K_d(t)$ & Time-varying coefficients of proportion, integration and derivation, respectively \\ \emph{CA} & Carrier Aggregation \\ \emph{PCC} & Primary Component Carrier \\ \emph{SCC} & Secondary Component Carrier \\ \emph{SDAP} & Service Data Adaptation Protocol \\ \emph{PDCP} & Packet Data Convergence Protocol \\ \emph{RLC} & Radio Link Control \\ \emph{MAC} & Medium
Access Control \\ \emph{PHY} & Physical \\ \emph{PID} & Proportional Integral Derivative \\ \hline \end{tabular} \end{table*} \section{System Model} \label{sect:sys} \begin{figure*} \centering \includegraphics[width = 165mm, height =80 mm]{Fig1.eps} \caption{Diagram of carrier aggregation across different cells. } \label{fig:system} \end{figure*} Consider an inter-band non-contiguous CA system as shown in Fig.~\ref{fig:system}, where a user equipment (UE) connects to a primary 5G new radio BS (gNB) and a secondary gNB simultaneously. The primary gNB transmits on a sub-6 GHz PCC and the secondary gNB can deliver information on $N_{SCC}$ SCCs (denoted by $\mathcal{S}$) in mmWave bands. The two gNBs can communicate with each other via the Xn link as defined in \cite{17}, and the associated delay is simply normalized to $d_{X_n}$. Based on the 5G U-plane protocol stack \cite{18}, an inter-band non-contiguous CA transmission contains the following procedures.\footnote{ In our formulation, we do not consider any specific constraints on sub-6 GHz and mmWave bands, but in the numerical evaluation we adopt practical values for sub-6 GHz and mmWave bands to obtain insightful results for deployed 5G networks.} \begin{itemize} \item{\em Service Data Adaptation Protocol (SDAP):} The main target of the SDAP layer is to map different QoS requirements to data radio bearers (DRBs). For illustration purposes, we assume a simple transparent transmission policy is adopted and the lower layers can directly receive $L$ data packets generated from higher layers in each time slot. \item{\em Packet Data Convergence Protocol (PDCP):} Let $Q^{P}(t)=\{N^{P}_{\min}(t), N^{P}_{\min}(t)+1,\ldots, N^{P}_{\max}(t)\}$ denote the buffered packets of the PDCP layer at time slot $t$.
We then have the following status update expressions.\footnote{ According to the 3GPP Release 15 specification \cite{Nstand}, the SDAP layer is implemented in the PCC only to guarantee different QoS requirements to DRBs.} \begin{eqnarray} Q^{P}(t+1) & = & Q^{P}(t) \cup Q^{P}_{i}(t) - Q^{P}_{P,o}(t) \nonumber \\ && - \cup_{s \in \mathcal{S}} Q^{P}_{s,o}(t), \label{eqn:PDCP_Q} \end{eqnarray} where $Q^{P}_{i}(t)$ denotes the arrival packets from upper layers, and $Q^{P}_{P/s,o}(t)$ denotes the departure packets to the PCC or the $s^{th}$ SCC, respectively. The maximum and minimum packet indices in $Q^{P}(t+1)$ are updated by, \begin{eqnarray} N^{P}_{\max}(t+1)&= N^{P}_{\max}(t) + |Q^{P}_{i}(t)|, \\ N^{P}_{\min}(t+1)&= N^{P}_{\min}(t) + |Q^{P}_{P,o}(t)|\nonumber\\&+ |\cup_{s \in \mathcal{S}} Q^{P}_{s,o}(t)|, \end{eqnarray} where $|\cdot|$ denotes the cardinality of the inner set, $|Q^{P}_{i}(t)| = L$, and $|\mathcal{S}| = N_{SCC}$. Let $A_{P/s}(t)$ denote the transmission strategy for the PCC and the $s^{th}$ SCC at time slot $t$ in the traffic splitting mechanism. Then $Q^{P}_{P/s,o}(t)$ can be updated via the following expressions. \begin{eqnarray} \vert Q^{P}_{P/s,o}(t+1)\vert &=\vert Q^{P}_{P/s,o}(t)\vert+A_{P/s}(t), \label{eqn:P_OUT} \end{eqnarray} where $A_{P/s}(t) = 1$ indicates that a buffered packet of the PDCP layer is successfully transmitted to the PCC or the $s^{th}$ SCC, and $A_{P/s}(t) = 0$ otherwise. \item{\em Radio Link Control (RLC):} This layer receives the departure packets of the PDCP layer and sends the processed packets to the UE side via the lower layers. Therefore, the RLC buffer status of the PCC and the $s^{th}$ SCC, e.g., $Q_{P/s}^R(t+1)$, can be updated via the following relations. \begin{eqnarray} Q_{P/s}^R(t+1)&=&Q_{P/s}^R(t) \cup Q_{P/s,o}^P(t)\nonumber\\&&- S_{P/s}^M(t), \label{eqn:RLC_Q} \end{eqnarray} where $S_P^M(t)$ and $S_s^M(t)$ denote the processing capabilities of the lower layers.
\item{\em Medium Access Control (MAC) \& Physical (PHY):} Here, we use an abstracted model of the MAC and PHY layers, and the MAC and PHY layer transmission rates of the PCC and the SCCs are given by, \begin{eqnarray} \vert S_{P}^M(t)\vert&=& \left\{ \begin{aligned} &\lfloor \rho_{P}/\rho_{s} \rfloor ,~~~~~~~\textrm{if} \\&~~\rho_{s} \log_2(1+\gamma_{P}(t))\geq N_{th}, \label{eqn:M_P}\\ &0 , ~~~~~~~ \textrm{otherwise}. \end{aligned} \right.\\ \vert S_{s}^M(t)\vert& =& \left\{ \begin{aligned} 1 & , ~~~~~~~~~~~~~~~~~~~ \textrm{if} \\&\rho_{s}\log_2(1+\gamma_{s}(t))\geq N_{th},\label{eqn:M_s} \\ 0 & , ~~~~~~~ \textrm{otherwise}. \end{aligned} \right. \end{eqnarray} In the above equations, $\rho_{P/s}$ are the normalization factors, which include the effects of bandwidth, payloads and transmission durations. $\gamma_{P/s}(t)$ denote the signal-to-interference-and-noise ratio (SINR) of the PCC and the $s^{th}$ SCC, respectively. $N_{th}$ represents the normalized threshold for successful packet delivery. In practical deployments, we often utilize a sub-6 GHz band for the PCC and mmWave bands for the SCCs, and the corresponding SINR expressions are given by \cite{LTEstudy,2016study}, \begin{eqnarray} \gamma_{P}(t)&=&PT_P-10\cdot\log_{10}(h_P\cdot\alpha_P(t))\label{eqn:sub_6G}\\ \gamma_{s}(t)&=&PT_s-10\cdot\log_{10}(h_s\cdot\alpha_s(t))\label{eqn:mmwave} \end{eqnarray} where $PT_{P/s}$ denotes the transmission power of the PCC and the $s^{th}$ SCC, and $h_{P/s}$ are the normalized path losses of the PCC-UE and the $s^{th}$ SCC-UE links. $\alpha_{P/s}(t)$ are the time-varying fading coefficients, where $\alpha_{P}(t)$ follows a Rice distribution with unit mean and variance $\sigma_{P}^2$, and $\alpha_{s}(t)$ follows a Rayleigh distribution with unit mean and variance $\sigma_s^2$, as specified in \cite{Rice,Ray}.
\end{itemize} With the CA transmission strategy and the abstracted MAC and PHY layer models, the target UE collects all the data packets from the different component carriers in the RLC layer and sends them to the PDCP layer according to \cite{19}. Finally, the packets received by the UE in the PDCP layer can be modeled as follows. \begin{eqnarray} \vert Q_U^{P}(t) \vert&=&\vert S_{P}^M(t)\vert+\sum_{s=1}^{N_{SCC}} \vert S_{s}^M(t)\vert, \label{eqn:f_func} \end{eqnarray} where $ \vert Q_U^{P}(t)\vert$ indicates the number of data packets successfully received at the UE side. The following assumptions are adopted throughout the rest of this paper. First, the data processing among different layers is assumed to be delay-free and error-free. Second, the packet lengths of different layers are assumed to be fixed for simplicity, and the headers of different layers are considered to be negligible. Third, infinite buffer sizes are assumed for different layers, such that the buffer overflow effect is not considered. Moreover, the RLC layer works in the acknowledged mode according to \cite{20}. For illustration purposes, we summarize all the notations and acronyms in Table~\ref{table:Notation}. \section{Problem Formulation} \label{sect:Pro} In this section, we formulate the transmission duration minimization problem based on the above CA transmission model. In order to adapt to the wireless fading environment, we consider a dynamic packet transmission strategy $A_{P/s}(t)$ in the PDCP layer. With the exact expressions of $Q^{P}_{P/s,o}(t)$, the transmission duration $T$ is determined by accumulating the value of $ \vert Q_U^P(t)\vert$, and the optimal packet transmission strategy in the traffic splitting mechanism can be obtained by solving the following minimization problem. \begin{Prob} [Original Problem] \label{prob:ori} The optimal transmission duration can be achieved by solving the following optimization problem.
\begin{eqnarray} \underset{\{A_{P/s}(t)\}}{\textrm{minimize}} && T, \label{eqm:ori_obj}\\ \textrm{subject to} && \eqref{eqn:PDCP_Q}-\eqref{eqn:f_func},\ A_{P/s}(t)\in\{0, 1\},\notag \\ &&\sum_{t=0}^{T}\vert Q_U^{P}(t)\vert\geq L ,\label{eqn:cons2} \\ && Q^{P}_{P,o}(t) \cup \{\cup_{s \in \mathcal{S}} Q^{P}_{s,o}(t)\}\subset Q^P(t), \label{eqn:cons3} \nonumber\\ &&\\ && Q^{P}_{i,o}(t) \cap Q^{P}_{j,o}(t)= \emptyset, \notag \\ && \forall \ i,j \in \{P\} \cup \mathcal{S}, \textrm{and} \ i \neq j \label{eqn:cons4}. \end{eqnarray} where \eqref{eqn:cons3} and \eqref{eqn:cons4} guarantee that all delivered packets to the PCC and SCC RLC layers are taken from the current PDCP buffer and do not overlap with each other. \end{Prob} The above problem is in general difficult to solve due to the following reasons. First, the searching space of packet transmission strategies grows exponentially with the size of $Q^{P}(t)$, and finding the optimal strategy is a typical mixed-integer optimization problem (MIOP). Second, due to the time-varying wireless conditions, e.g., $\gamma_P(t)$ and $\gamma_s(t)$, the searching spaces of $Q_{P/s,o}^P(t)$ are dynamically changing, which further increases the searching complexity. To make it mathematically tractable, we focus on the following approximated dynamic programming problem, where the instantaneous transmission action, e.g., $\{A_{P/s}(t)\}$, can be determined by the following maximization problem. \begin{Prob}[Approximated Problem] \label{prob:equ} For any given time slot $t^{\prime}\in \left[0,T\right]$, the optimal packet allocation action can be determined as follows.
\begin{eqnarray} \underset{A_{P/s}(t^{\prime})}{\textrm{maximize}} && \sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ \vert Q_U^{P}(t)\vert\big| \{Q_{P/s}^R(t^{\prime})\}, A_{P/s}(t^{\prime}) \right], \nonumber\label{eqn:equ}\\\\ \textrm{subject to} && \eqref{eqn:M_P}-\eqref{eqn:mmwave}, \eqref{eqn:cons3}-\eqref{eqn:cons4}, \nonumber \\ &&\vert Q_{P/s}^R(t^{\prime})\vert \in \{0,1,...,L\} \label{eqn:cons5}. \end{eqnarray} where $N \ll T$ denotes the length of the prediction period in the near future. \end{Prob} \begin{theorem}\label{thm:approx} Problem \ref{prob:ori} can be well approximated by recursively solving Problem \ref{prob:equ}. \end{theorem} \begin{proof} Please refer to Appendix~\ref{appd:thm1} for the proof. \end{proof} By applying Theorem~\ref{thm:approx}, the original transmission duration minimization problem has been modified to maximizing the expected number of successfully received packets in a known state, where conventional single-slot greedy based algorithms can be used \cite{21}. However, since the evaluation of $ \vert Q_U^{P}(t)\vert$ highly relies on the time-varying variables $S_{P/s}^M(t)$ and on an ideal feedback scheme from the target UE, greedy based algorithms can hardly be implemented in practice. To deal with that, we observe that the RLC layer buffer status can be utilized to model the behaviors of $S_{P/s}^M(t)$, and with some mathematical manipulations as specified in Appendix~\ref{appd:thm2}, we have the following simplified version. \begin{Prob}[Simplified Problem] For any given time slot $t^{\prime}\in \left[0,T\right]$, the simplified transmission strategies can be determined as follows.
\label{prob:rec} \begin{eqnarray} \underset{A_{P/s}(t^{\prime})}{\textrm{minimize}} && \big\vert B(t^{\prime})+(N+1)\cdot\big[A_{P}(t^{\prime})-\notag\\&&\sum_{s=1}^{N_{SCC}}A_{s}(t^{\prime})-(\lfloor\rho_{P}/\rho_{s}\rfloor-N_{SCC})\big]\big\vert,\notag\\\\ \textrm{subject to} && \eqref{eqn:M_P}-\eqref{eqn:mmwave}, \eqref{eqn:cons5}, t^{\prime}\in \left[0,T\right],\nonumber\\ && B(t^{\prime})=\vert Q_P^R(t^{\prime})\vert-\sum_{s=1}^{N_{SCC}}\vert Q_s^R(t^{\prime})\vert. \label{eqn:b(t)} \end{eqnarray} \end{Prob} Based on Problem \ref{prob:rec}, we can derive several low-complexity algorithms as shown later, because all the changing variables in Problem \ref{prob:rec} can be obtained locally without any interactions with the terminal side. \begin{figure} \centering \includegraphics[width = 3.3in]{Fig2.eps} \caption{The schematic diagram of the traffic splitting mechanism based on the RLC buffer status.} \label{fig:Mec} \end{figure} \section{Proposed Traffic Splitting Mechanism} \label{sect:des} In this section, instead of solving the above optimization problem using a brute force approach, we propose a low-complexity dynamic traffic splitting mechanism using a fuzzy logic control structure \cite{22}. As shown in Fig.~\ref{fig:Mec}, the proposed traffic splitting mechanism\footnote{ In our formulation, we adopt an abstract general model, in order to develop a generic algorithm. } collects RLC buffer information to obtain $B(t')$, and then determines the dynamic packet transmission strategy $A_{P/s}(t')$ by solving Problem \ref{prob:rec}. As shown in Algorithm~\ref{Alg:Algorithm}, the proposed fuzzy logic control-based algorithm can be divided into the following two stages, i.e., initialization and adaptation. \begin{itemize} \item{\em Stage 1 (Initialization)}: In the initialization stage when $t^{\prime}\leq N$, we simply let $A_{P/s}(t') = 1$, since there is limited information about the packet transmission process.
\item{\em Stage 2 (Adaptation)}: In the adaptation stage when $t^{\prime} > N$, we first obtain $B(t')$ according to \eqref{eqn:b(t)}, and then determine the transmission mode according to the sign of $B(t')\cdot B(t'-1)$. {\em Static Mode $\left(B(t')\cdot B(t'-1) > 0\right)$}: In the static mode, we follow the previous packet transmission strategy and simply choose the current actions $A_{P/s}(t')$ as, \begin{eqnarray} A_{P/s}(t') = A_{P/s}(t'-1), \forall t'. \end{eqnarray} {\em Dynamic Mode $\left(B(t')\cdot B(t'-1) \leq 0\right)$}: In the dynamic mode, the current transmission actions are determined from the previous actions in the last $N$ time slots. For illustration purposes, we define $k(t')$ as an auxiliary variable, whose mathematical expression is given as, \begin{eqnarray} k(t')&=& \left\lfloor\frac{\sum_{s=1}^{N_{SCC}}\sum_{n=1}^{N}A_s(t'-n)}{\sum_{n=1}^{N}A_P(t'-n)} \right\rfloor. \label{eqn:k_func} \end{eqnarray} \floatname{algorithm}{Algorithm} \renewcommand{\algorithmicrequire}{\textbf{input:}} \renewcommand{\algorithmicensure}{\textbf{output:}} \begin{algorithm}[h] \caption{Proposed Fuzzy Logic Control Based Algorithm} \label{Alg:Algorithm} \begin{algorithmic}[1] \REQUIRE the RLC buffer status $\vert Q_P^R(t^{\prime})\vert$, $\vert Q_s^R(t^{\prime})\vert$; \ENSURE $A_P(t^{\prime})$, $A_s(t^{\prime})$ \STATE Stage 1: \IF{$t^{\prime}\leq N$} \STATE $A_{P/s}(t^{\prime}) = 1$; \ELSE \STATE go to Stage 2 \ENDIF \STATE Stage 2: \STATE obtain $B(t^{\prime})$ according to \eqref{eqn:b(t)} \IF{$\left(B(t^{\prime})\cdot B(t^{\prime}-1) > 0\right)$} \STATE $ A_{P/s}(t^{\prime}) = A_{P/s}(t^{\prime}-1)$ \ELSE \IF{$(t' \bmod N==0)$} \STATE update $K_p(t')$,$K_i(t')$,$K_d(t')$ according to \STATE equation \eqref{eqn:K_update} \ELSE \STATE $K_p(t')=K_p(t'-1)$, $K_i(t')=K_i(t'-1)$, \STATE $K_d(t')=K_d(t'-1)$ \ENDIF \STATE update $A_P(t^{\prime})$ according to equation \eqref{eqn:A_P} \STATE update $A_s(t^{\prime})$ according to equation \eqref{eqn:A_s} \ENDIF
\STATE output $A_P(t^{\prime})$,$A_s(t^{\prime})$; \end{algorithmic} \end{algorithm} To enable a PID based control strategy, we form a second-order filtering algorithm to obtain the incremental value $G(t')$ \cite{23}, i.e., \begin{eqnarray} G(t')&= &K_p(t')\cdot\left[B(t^{\prime})-B(t^{\prime}-1) \right]\nonumber\\&&+K_i(t')\cdot B(t^{\prime})+K_d(t')\nonumber\\&&\cdot\left[B(t')-2B(t'-1)+B(t'-2) \right]\label{eqn:PID}.\nonumber\\ \end{eqnarray} where $K_p(t')$, $K_i(t')$, and $K_d(t')$ represent the time-varying coefficients of proportion, integration and derivation, respectively. With the above calculated variables $k(t^{\prime})$ and $G(t^{\prime})$, we have the current transmission actions as, \begin{eqnarray} &&A_{P}(t')= \left\{ \begin{aligned} &\delta\left(t'\bmod N-i\cdot k(t')\right), \textrm{if} \\ &~~~t'\bmod N \leq G(t')\cdot k(t').\label{eqn:A_P}\\ &\delta\left(t'\bmod N-j\cdot\left[ k(t')+1\right]\right),\\ &~~~~~~~\textrm{otherwise}. \end{aligned} \right.\\ &&A_{s}(t')=1-A_{P}(t'),\forall s \in S, \label{eqn:A_s} \end{eqnarray} where $i\in [1,G(t^{\prime})]$, $j\in[1,N-G(t^{\prime})]$, and $\delta(\cdot)$ is the unit-impulse function as defined in \cite{24}. \end{itemize} The convergence property of the above proposed algorithm is greatly affected by the values of $K_p(t')$, $K_i(t')$, and $K_d(t')$, as proved in \cite{25}. In order to adapt to different application scenarios, we use a fuzzy control-based solution to dynamically adjust the above PID control parameters \cite{Fuzzy_Prove}, which is able to achieve the desired performance with unknown nonlinearities, processing delays, and disturbances. As shown in Fig.~\ref{fig:PID}, the fuzzy control-based PID parameter optimization consists of three modules, namely the fuzzifier, fuzzy inference, and defuzzifier.
In the fuzzifier module, we first normalize the value of $B(t')$ and $B(t') - B(t'-1)$ by the maximum value $B_{max}$, and then calculate the corresponding membership degrees $\mathcal{D}_{B}(t')$ and $\mathcal{D}_{E}(t')$ according to the triangular membership function $M(\cdot)$ \cite{26}, respectively. \begin{eqnarray} \mathcal{D}_{B}(t') & = & M\left( \frac{B(t')}{B_{\max}} \right), \\ \mathcal{D}_{E}(t') & = & M\left( \frac{B(t') - B(t'-1)}{2 \times B_{\max}} \right). \end{eqnarray} In the fuzzy inference module, three fuzzy rule tables, including $\mathcal{T}_p(\cdot)$, $\mathcal{T}_i(\cdot)$, and $\mathcal{T}_d(\cdot)$ as defined in \cite{27} are applied to update $K_p(t')$, $K_i(t')$ and $K_d(t')$, respectively. In the defuzzifier module, the incremental output values of $K_p(t')$, $K_i(t')$ and $K_d(t')$ are obtained through, \begin{eqnarray} \left\{ \begin{aligned} &K_{p}(t')=K_{p}(t'-1)+\Delta K_{p}(t'),\label{eqn:K_update}\\ &K_{i}(t')=K_{i}(t'-1)+\Delta K_{i}(t'),\\ &K_{d}(t')=K_{d}(t'-1)+\Delta K_{d}(t'), \end{aligned} \right. \end{eqnarray} where $\Delta K_{p/i/d}(t^{\prime})$ can be calculated via \begin{eqnarray} &&\Delta K_{p/i/d}(t^{\prime})=\begin{bmatrix}\mathcal{D}_{B}(t')&1-\mathcal{D}_{B}(t')\end{bmatrix} \cdot \mathcal{T}_{p/i/d}\nonumber\\&&\cdot\left(\begin{tiny}\begin{bmatrix} \mathcal{D}_{B}(t')\cdot \mathcal{D}_{E}(t')& \mathcal{D}_{B}(t')\cdot(1-\mathcal{D}_{E}(t'))\\ (1-\mathcal{D}_{B}(t'))\cdot \mathcal{D}_{E}(t')&(1-\mathcal{D}_{B}(t'))\cdot (1-\mathcal{D}_{E}(t')) \end{bmatrix}\end{tiny}\right)\notag\\ &&\cdot\begin{bmatrix}\mathcal{D}_{E}(t')\\1-\mathcal{D}_{E}(t')\end{bmatrix}. 
\end{eqnarray} \begin{figure} \centering \includegraphics[width = 3.3in]{Fig3.eps} \caption{Fuzzy-PID structure.} \label{fig:PID} \end{figure} \begin{figure}[h] \centering \includegraphics[width = 3.3in]{Fig4.eps} \caption{Convergence of PID control and the proposed fuzzy logic control under different $N_{SCC}$ settings.} \label{fig:split} \end{figure} The above traffic splitting mechanism has the following advantages. First, we use a two-stage control algorithm to quickly approach the optimal strategy during the initialization period and keep the algorithm stable during the adaptation period. In the initialization period, we simply set all the actions to be active, i.e., $A_{P/s} = 1, \forall s$, to fill the buffers in the shortest period. In the adaptation period, we update the transmission actions based on a historical observation of $N$ time slots and the output value $G(t')$ of the PID algorithm. Through this approach, we can quickly increase the number of transmitted packets to explore the optimal transmission strategy, and then make small adjustments according to the historical transmission strategy and buffer difference to maintain the stability of the algorithm. Second, we use the PID control algorithm with fuzzy-based parameter optimization to guarantee quick convergence in different scenarios, as proved in \cite{28}, \cite{29}. As shown in Fig.~\ref{fig:split}, the proposed fuzzy logic control based traffic splitting algorithm can quickly converge to the optimal value\footnote{In the static user scenario with flat fading channel conditions, the optimal value should be equal to the ratio of link transmission capacities of PCC and $N_{SCC}$ SCCs as derived in \cite{GoodPUT}.} within two adaptation rounds for both the $N_{SCC} = 2$ and $N_{SCC} = 3$ cases.
\section{Experiment Results} \label{sect:exper} In this section, we provide numerical results to verify the proposed fuzzy logic control-based adaptive packet transmission mechanism. To provide a fair comparison, we use Network Simulator 3 (NS-3), which currently implements a wide range of protocols in C++ \cite{30}, with the most up-to-date 5G CA protocols as defined in \cite{31}. We simulate realistic network scenarios, using non-line-of-sight (NLOS) communication under an urban macro fading condition \cite{2016study}; other important simulation parameters are listed in Table~\ref{table:Parameters}. All the numerical simulations are performed on a Dell Latitude-7490 with an i7-8650 CPU and 16GB memory. \defTable{Table} \begin{table*} \renewcommand\arraystretch{1.5} \centering \caption{Simulation Parameters.} \begin{tabular}{c c c c} \hline \textbf{ Parameter } & \textbf{Value}&\textbf{ Parameter } & \textbf{Value} \\ \hline Frequency of PCC& 4.9GHz &Bandwidth of PCC & 100MHz \\ \hline Frequency of SCC & 28GHz & Bandwidth of SCC & 100MHz \\ \hline Transmission power of PCC & 28dBm & Transmission power of SCC & 35dBm \\ \hline TCP Congestion Control Algorithm & NewReno &UE TCP Receive Window Size& 512KB\\ \hline RLC Layer Transport Mode & AM& RLC layer Polling PDU Threshold & 100\\ \hline Xn link delay& 2ms& Xn link data rate & 1Gbps\\ \hline Variance $\sigma_P^2$ & 0.0004 & Variance $\sigma_s^2$ & 0.27 \\ \hline \end{tabular} \label{table:Parameters} \end{table*} In order to provide a more intuitive result, we define the link resource utilization ratio, $\eta$, as the performance measure, given by \begin{eqnarray} \eta=\frac{\sum_{t=1}^T \vert Q_U^P(t)\vert}{\sum_{t_p=1}^T \vert Q_U^P(t_p)\vert+\sum_{t_S=1}^T \vert Q_U^P(t_S)\vert}\times 100\%. \label{eqn:eta} \end{eqnarray} In the above expression, $t_p$ and $t_S$ denote the time indices of the PCC-only and $N_{SCC}$-SCC-only transmission modes, respectively.
The physical interpretation is the ratio of the average end-to-end PDCP throughput of CA transmission to the maximum end-to-end PDCP throughput provided by the PCC and $N_{SCC}$ SCC links. In the following numerical examples, we test the average end-to-end PDCP throughput and the link resource utilization ratio, $\eta$, against several baselines. {\em Baseline 1 (BWA)} \cite{15}: The packet transmission strategy is to allocate the PDCP packets according to the available bandwidths of PCC and $N_{SCC}$ SCCs. {\em Baseline 2 (LTR)} \cite{16}: The packet transmission strategy is to allocate the PDCP packets according to the measured end-to-end link delay. {\em Baseline 3 (No Fuzzy):} The packet transmission strategy is similar to our proposed mechanism except that the PID control parameters are not optimized using fuzzy processes. {\em Baseline 4 (Q-learning)} \cite{Q-learn}: The packet transmission strategy is to allocate the PDCP packets based on a conventional reinforcement learning approach as proposed in \cite{de2019dm}. \subsection{Buffer Status versus Average Throughput} In Appendix~\ref{appd:thm2}, we derive that the buffer difference $\vert B(t)\vert$ is negatively correlated with the PDCP-layer throughput, so that we can obtain the maximum throughput by minimizing the buffer difference $\vert B(t)\vert$. To numerically demonstrate this relationship, we plot the average end-to-end throughput and the buffer difference $\vert B(t)\vert$ under different transmission strategies and different numbers of $N_{SCC}$.
\begin{figure} \centering \includegraphics[width = 3.3in]{Fig5.eps} \caption{Numerical results on throughput and average buffer difference $\vert B(t)\vert$ under different transmission strategies $k(t)$ and different $N_{SCC}$ settings.} \label{fig:buffer_differ} \end{figure} As shown in Fig.~\ref{fig:buffer_differ}, the buffer difference $\vert B(t)\vert$ decreases as the PDCP throughput gradually increases, and vice versa. The PDCP throughput reaches its maximum only when the buffer difference reaches its minimum. Therefore, the negative correlation between the buffer difference $\vert B(t)\vert$ and the throughput is verified, and we can adjust the transmission strategy through the buffer difference to obtain the maximum throughput. \subsection{Static User Scenario} In the static user scenario, the receiving UE for multi-stream CA transmission remains static, located $100$ meters away from the primary gNB during the entire transmission period. Without UE mobility, the long-term channel statistics remain stable in this case, and we plot the link resource utilization ratio versus $N_{SCC}$ in Fig.~\ref{fig:stationary} to demonstrate the benefits of the proposed adaptive traffic splitting mechanism. As shown in Fig.~\ref{fig:stationary}, the proposed adaptive traffic splitting scheme outperforms all four conventional baselines under different $N_{SCC}$ settings. For baselines 1 to 4, the achievable link resource utilization ratios are between 80\% and 92\%, while our proposed scheme can reach as much as 95\%. Meanwhile, since the total bandwidth of the SCCs becomes much greater than that of the PCC as the number of SCCs increases, the advantage of the proposed splitting scheme over the four baselines in terms of the link resource utilization ratio decreases. The gains of the proposed scheme come from the following three aspects.
First, by comparing with baseline 1, the proposed splitting scheme can dynamically adjust the packet transmission strategy for PCC and SCCs, making it more robust to dynamic channel variations. Second, by comparing with baseline 2 and baseline 4, the proposed splitting scheme estimates the transmission capability without explicit feedback from UEs or an end-to-end throughput evaluation as a reward, which saves the transmission bandwidth required for delay feedback and the computational cost of policy evaluation. Third, by comparing with baseline 3, the proposed splitting scheme dynamically optimizes the PID control parameters to obtain an additional 2-3\% improvement of the link resource utilization ratio. \begin{figure} \centering \includegraphics[width = 3.3in]{Fig6.eps} \caption{Link resource utilization ratio for static users with different carrier numbers under different traffic splitting mechanism deployments.} \label{fig:stationary} \end{figure} \begin{figure}[h] \centering \includegraphics[width = 3.3in]{Fig7.eps} \caption{Link resource utilization ratio $\eta$ and average throughput over time in the mobile user scenario under different traffic splitting mechanism deployments.} \label{fig:move} \end{figure} \subsection{Mobile User Scenario} In the mobile user scenario, the receiving UE for multi-stream CA transmission moves away from the primary gNB at a constant speed of $10\,$m/s and then returns along the same path after 10 seconds. Different from the static user scenario, the end-to-end PDCP throughput suffers from severe degradation due to the significant path loss when the UE and gNB are far apart. In this experiment, we keep the number of SCCs at $N_{SCC} = 3$ and plot the link resource utilization ratio as well as the end-to-end PDCP throughput versus time in Fig.~\ref{fig:move}.
As shown in Fig.~\ref{fig:move}, the proposed adaptive traffic splitting scheme still outperforms all four baselines in terms of both the link resource utilization ratio and the end-to-end PDCP throughput. The link resource utilization ratios of the four baselines are between 73\% and 89\%, while it reaches 91\% for the proposed scheme, corresponding to a 3\% improvement over baseline 3 and more than 10\% over the other baselines. These numerical results further confirm that the proposed adaptive packet transmission scheme can quickly adapt to drastically changing transmission capabilities. \subsection{Complexity and Storage} In this experiment, we again keep the number of SCCs at $N_{SCC} = 3$ and test the computational complexity and RAM usage of the static and mobile user scenarios, respectively. As shown in Table~\ref{table:Compute}, the proposed adaptive traffic splitting mechanism has similar RAM usage to baselines 1 and 3, and its computational complexity is slightly higher than that of baselines 1 and 3. This is because the fuzzy process requires additional optimization calculations. Baseline 2 shows the highest RAM usage owing to its large amount of data acquisition. Baseline 4 shows the highest computational complexity, more than twice that of ours. The proposed adaptive traffic splitting mechanism thus has lower complexity while ensuring optimal performance. This is because our mechanism tracks the time-varying link capacity by observing the RLC buffer difference, so the required variables can be obtained locally without any interaction with the UE side.
\begin{table}[] \caption{Computational complexity (ms) and RAM usage (\%) in the static and mobile user scenarios under different traffic splitting mechanism deployments.} \begin{tabular}{cccc} \hline \textbf{Scenario} & \textbf{\begin{tabular}[c]{@{}c@{}} Splitting \\ Mechanism\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}} Computational\\ Complexity\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}RAM\\ Usage\end{tabular}} \\ \hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Static\\ User\end{tabular}} & Baseline 1 & 3.6 & 0.56 \\ & Baseline 2 & 8.1 & 0.61 \\ & Baseline 3 & 3.7 & 0.56 \\ & Baseline 4 & 8.2 & 0.57 \\ & Proposed & 3.9 & 0.57 \\ \hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Mobile\\ User\end{tabular}} & Baseline 1 & 4.6 & 0.62 \\ & Baseline 2 & 9.0 & 0.72 \\ & Baseline 3 & 4.5 & 0.62 \\ & Baseline 4 & 10.2 & 0.63 \\ & Proposed & 4.8 & 0.63 \\ \hline \end{tabular} \label{table:Compute} \end{table} \section{Conclusion} \label{sect:con} In this paper, we propose a low-complexity traffic splitting algorithm based on fuzzy PID in the multi-stream CA scenario to better utilize the transmission capacities provided by sub-6 GHz and mmWave bands. Through end-to-end modeling of the protocol stacks, our proposed traffic splitting algorithm is able to minimize the entire transmission duration with local RLC buffer information, which eventually improves the end-to-end PDCP throughput. Based on the numerical experiments, the proposed traffic splitting scheme can achieve more than 90\% link resource utilization ratio for both static and mobile user scenarios, and a 50\% computational complexity reduction simultaneously. Through the above studies, we believe the proposed traffic splitting mechanism can be efficiently deployed in practical 5G networks and achieve significant throughput improvement for multi-stream CA transmission with sub-6 GHz and mmWave bands.
Meanwhile, since the proposed scheme does not rely on any specific constraints of mmWave bands, it can be easily extended to multi-stream carriers with different transmission rates. \section*{Acknowledgement} \label{sect:Ack} This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 62071284, 61871262, 61901251 and 61904101, the National Key Research and Development Program of China under Grants 2019YFE0196600, the Innovation Program of Shanghai Municipal Science and Technology Commission under Grant 20JC1416400, Pudong New Area Science \& Technology Development Fund, and research funds from Shanghai Institute for Advanced Communication and Data Science (SICS). \theendnotes \begin{appendices} \numberwithin{equation}{section} \section{Appendix A} \label{appd:thm1} Proof of Theorem~\ref{thm:approx}. With a limited number of packets $L$, the problem of minimizing the time $T$ can be transformed into maximizing the number of successfully received packets. Thus, the objective function \eqref{eqm:ori_obj} in Problem~\ref{prob:ori} can be given as follows. \begin{eqnarray} \underset{\{A_{P/s}(t)\}}{\textrm{maximize}} && \sum_{t=0}^{T}\vert Q_U^P(t)\vert. \label{eqn:app_obj} \end{eqnarray} Without loss of generality, we use the standard Markov Decision Process (MDP) formulation to represent the above problem. \textbf{Definition 1 (System State)}: Define the state $s(t)$ as the RLC buffer status, \begin{equation} s(t) =\{ Q_P^R(t),Q_s^R(t)\},\forall s \in \mathcal{S}, \end{equation} where $\vert Q_{P/s}^R(t)\vert \in \{0,1,\ldots,L\}$, and the initial state is $s(0) = (0,0,\ldots,0)$. The state $s(t)$ is Markovian, satisfying $P[s(t+1)|s(t)]=P[s(t+1)|s(0),\ldots,s(t)]$, where $P[\cdot]$ denotes the state transition probability \cite{markov}. \textbf{Definition 2 (Action)}: The action at time $t$ is denoted as $a(t)=A_{P/s}(t)$. It indicates whether the data packet is transmitted to the PCC or the $s^{th}$ SCC RLC buffer.
\textbf{Definition 3 (Reward)}: Define the reward function $r(t)$ as the number of packets successfully received at the UE side. It is affected by the current state $s(t)$ and action $A_{P/s}(t)$: \begin{equation} r(t)= \vert Q_U^{P}(t)\vert. \end{equation} Thus, the objective function \eqref{eqm:ori_obj} in Problem~\ref{prob:ori} can be rewritten as follows. \begin{eqnarray} \underset{\pi=\{a(t)\}}{\textrm{maximize}} && \sum_{t=0}^{T}\mathbb{E}\left[r(t)|s(t)\right], \label{eqn:app_obj2} \end{eqnarray} where $\pi=\{a(t)\}$ denotes the entire action set. By choosing $Q^{\pi}(s,a) = \mathbb{E}_{\pi} \big[r(t)| s(t) = s, a(t) = a\big]$ to be the expected Q-value function of taking action $a$ in state $s$ under a policy $\pi$, the optimal policy to solve \eqref{eqn:app_obj2}, $\pi^{\star}$, can be obtained as \begin{eqnarray} \pi^{\star} = \underset{\pi}{\arg \max} \ Q^{\pi}(s,a). \end{eqnarray} According to \cite{32}, we obtain the Bellman equation for $Q^{\pi}(s,a)$: \begin{align} Q^{\pi}(s,a)=\mathbb{E}_{\pi}\left[\right.r(t+1)+\gamma Q^{\pi}(s(t+1),a(t+1))\nonumber\\ |s(t)=s,a(t)=a\left.\right]. \end{align} Although standard value and policy iteration can be used to find the optimal solution \cite{21}, the large state space makes the computational complexity and memory requirements of the optimal solution prohibitive. Therefore, we consider an $N$-step horizon objective expressed as follows: \begin{eqnarray} \underset{a(t^{\prime})}{\textrm{maximize}} && \sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[r(t)|s(t^{\prime})\right]. \label{eqn:app_objN} \end{eqnarray} \section{Appendix B} \label{appd:thm2} Simplification of Problem~\ref{prob:equ}. We assume that $L$ packets are transmitted, which is sufficient to saturate the maximum capacity of all links.
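For intuition, a deterministic, tabular specialization of the Bellman equation above can be written as a one-line backup (a hedged sketch; the dictionary-based Q-table and all names are our own, not part of the paper's algorithm):

```python
def bellman_backup(q, s, a, r, s_next, actions, gamma=0.9):
    # Deterministic Bellman backup:
    # Q(s, a) = r + gamma * max_{a'} Q(s', a'),
    # with unseen state-action pairs defaulting to 0.
    q[(s, a)] = r + gamma * max(q.get((s_next, a2), 0.0) for a2 in actions)
    return q[(s, a)]
```

Iterating such backups over all states is exactly the value-iteration procedure whose cost the text argues is prohibitive for the full state space.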
According to equation \eqref{eqn:P_OUT} and equation \eqref{eqn:RLC_Q}, the evolution of $Q_{P/s}^R(t^{\prime})$ is equivalent to the following expression \begin{eqnarray} \vert Q_{P/s}^R(t^{\prime}+1)\vert&=&\vert Q_{P/s}^R(t^{\prime})\vert\nonumber\\&&+A_{P/s}(t)-\vert S_{P/s}^M(t)\vert. \end{eqnarray} We define $\vert Q_{P}^{\star}(t^{\prime})\vert$ and $\vert Q_{s}^{\star}(t^{\prime})\vert$ to represent the sizes of the PCC and $s^{th}$ SCC RLC buffers over the next $N$ time slots, \begin{eqnarray} &&\vert Q_{P/s}^{\star}(t^{\prime})\vert=\vert Q_{P/s}^{R}(t^{\prime})\vert+\nonumber\\ &&\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[\vert Q_{P/s}^R(t+1)\vert-\vert Q_{P/s}^R(t)\vert\right]. \end{eqnarray} According to the relationship between received and sent data packets in the RLC buffer, this can be rewritten as \begin{eqnarray} \vert Q_{P/s}^{\star}(t^{\prime})\vert&=&\vert Q_{P/s}^{R}(t^{\prime})\vert+\nonumber\\ &&\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[A_{P/s}(t)-\vert S_{P/s}^M(t)\vert\right]. \label{eqn:Q_e} \end{eqnarray} The RLC buffer status reflects the dynamic relationship between the transmission strategy $A_{P/s}(t)$ and the transmission capacity $S_{P/s}^M(t)$. If the size of the packets sent to one RLC buffer exceeds its transmission capacity, packets will accumulate in that RLC buffer, which may cause the bufferbloat problem \cite{33}. Meanwhile, the other buffer may be starved of packets, so its achieved throughput falls far below its transmission capacity. To observe whether the transmission strategy matches the transmission capacities of the different links, we let $\Delta H(t^{\prime})=\vert Q_P^{\star}(t^{\prime})\vert-\sum_{s=1}^{N_{SCC}}\vert Q_s^{\star}(t^{\prime})\vert$. We consider two cases: one in which excessive data packets are sent to the PCC, and one in which excessive data packets are sent to the SCCs.
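The buffer recursion above can be simulated directly; the sketch below adds a clip at zero (our assumption, since a physical buffer cannot hold a negative number of packets):

```python
def step_rlc_buffer(q, arrivals, capacity):
    # RLC buffer recursion |Q(t+1)| = |Q(t)| + A(t) - |S^M(t)|,
    # clipped at zero because the buffer cannot go negative.
    return max(0, q + arrivals - capacity)
```

Iterating this step per carrier reproduces the accumulation (bufferbloat) and starvation behaviors discussed in the text.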
\begin{case} \label{case1} There is an excessive number of data packets sent to the PCC. According to equation \eqref{eqn:M_P}, $S_{P}^M(t)=\lfloor \rho_{P}/\rho_{s} \rfloor$. For the SCCs, the arriving data packets $A_s(t)$ are fewer than the transmission capacity $S_{s}^M(t)$ over the $N$ time slots, so the packets in the RLC buffer will not accumulate. Thus, the SCC buffer size satisfies: \begin{eqnarray} \vert Q_s^{\star}(t^{\prime})\vert&\approx&0. \label{eqn:Q_s} \end{eqnarray} From equation \eqref{eqn:f_func}, the data packets received by the PDCP layer over $N$ slots satisfy: \begin{eqnarray} \sum_{t=t^{\prime}}^{t^{\prime}+N} \mathbb{E}\left[ \vert Q_U^{P}(t)\vert\right]=\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\big[\vert S_{P}^M(t)\vert\nonumber\\+\sum_{s=1}^{N_{SCC}}\vert S_s^M(t)\vert\big]. \end{eqnarray} From equation \eqref{eqn:Q_e}, we can get the following equation for the PCC: \begin{eqnarray} &&\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[\vert S_{P}^M(t)\vert\right]=-\vert Q_P^{\star}(t^{\prime})\vert\nonumber\\ &&\quad+\vert Q_{P}^{R}(t^{\prime})\vert+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ A_{P}(t)\right]. \end{eqnarray} And from equations \eqref{eqn:Q_e} and \eqref{eqn:Q_s}, we can get the following equation for the $N_{SCC}$ SCCs: \begin{eqnarray} &&\sum_{t=t^{\prime}}^{t^{\prime}+N}\sum_{s=1}^{N_{SCC}}\vert S_s^M(t)\vert=-\sum_{s=1}^{N_{SCC}}\vert Q_s^{\star}(t^{\prime})\vert\nonumber\\ &&+ \sum_{s=1}^{N_{SCC}}\vert Q_{s}^{R}(t^{\prime})\vert+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[\sum_{s=1}^{N_{SCC}}A_{s}(t)\right]\notag\\ &&=\sum_{s=1}^{N_{SCC}}\vert Q_{s}^{R}(t^{\prime})\vert+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[\sum_{s=1}^{N_{SCC}}A_{s}(t)\right].
\end{eqnarray} Thus, the throughput of the PDCP layer over $N$ time slots can be rewritten as follows: \begin{eqnarray} &&\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ \vert Q_U^{P}(t)\vert\right]=-\vert Q_P^{\star}(t^{\prime})\vert+\vert Q_{P}^{R}(t^{\prime})\vert+\notag\\ &&\sum_{s=1}^{N_{SCC}}\vert Q_{s}^{R}(t^{\prime})\vert +\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[A_{P}(t)+\sum_{s=1}^{N_{SCC}}A_{s}(t)\right].\notag\\ \end{eqnarray} Over the $N$ time slots, $L$ packets are sent and $\vert Q_P^{\star}(t^{\prime})\vert=\Delta H(t^{\prime})$. Therefore, the expression of $\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ \vert Q_U^{P}(t)\vert\right]$ can finally be written as \begin{eqnarray} \sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ \vert Q_U^{P}(t)\vert\right]=-\Delta H(t^{\prime})+L. \end{eqnarray} \end{case} \begin{case} In the case of excessive data packets sent to the SCCs, we similarly have $S_{s}^M(t)=1$. For the PCC, the arriving data packets $A_P(t)$ are fewer than the transmission capacity $S_{P}^M(t)$ over the $N$ slots, so that \begin{eqnarray} Q_P^{\star}(t^{\prime})& \approx&0.\label{eqn:Q_p} \end{eqnarray} Similar to the derivation in Case \ref{case1}, from equations \eqref{eqn:Q_e} and \eqref{eqn:Q_p} we can get the throughput of the PDCP layer over the $N$ time slots: \begin{eqnarray} &&\sum_{t=t^{\prime}}^{t^{\prime}+N} \mathbb{E}\left[ \vert Q_U^{P}(t)\vert\right]=-\sum_{s=1}^{N_{SCC}}\vert Q_s^{\star}(t^{\prime})\vert+\vert Q_{P}^{R}(t^{\prime})\vert+\notag\\&& \sum_{s=1}^{N_{SCC}}\vert Q_s^{R}(t^{\prime})\vert +\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ A_{P}(t)+\sum_{s=1}^{N_{SCC}}A_{s}(t)\right]\notag\\ && =\Delta H(t^{\prime})+L. \end{eqnarray} \end{case} Taking the absolute value $\vert \Delta H(t^{\prime})\vert$, the sum of $ \vert Q_U^{P}(t)\vert$ over the $N$ time slots is negatively correlated with $\vert\Delta H(t^{\prime})\vert $ in both cases. Problem \ref{prob:equ} is thus equivalent to the following form.
\begin{eqnarray} \underset{A_{P/s}(t^{\prime})}{\textrm{minimize}} && \vert\Delta H(t^{\prime})\vert. \end{eqnarray} Expanding $\Delta H(t^{\prime})$, we have \begin{eqnarray} &&\Delta H(t^{\prime})=\vert Q_P^{\star}(t^{\prime})\vert-\sum_{s=1}^{N_{SCC}}\vert Q_s^{\star}(t^{\prime})\vert\notag\\ &&=\vert Q_P^{R}(t^{\prime})\vert-\sum_{s=1}^{N_{SCC}}\vert Q_s^{R}(t^{\prime})\vert\notag\\ &&+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\Bigg[\vert Q_{P}^R(t+1)\vert -\sum_{s=1}^{N_{SCC}}\vert Q_{s}^R(t+1)\vert\notag\\ &&-(\vert Q_{P}^R(t)\vert-\sum_{s=1}^{N_{SCC}}\vert Q_{s}^R(t)\vert)\Bigg]\notag\\ &&=B(t^{\prime})+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ B(t+1)-B(t)\right].\label{eqn:Q_B} \end{eqnarray} Assuming that the environmental parameters and the transmission strategy after time slot $t^{\prime}$ remain constant, equation \eqref{eqn:Q_B} can be expanded as follows \begin{eqnarray} &&\Delta H(t^{\prime})=B(t^{\prime})+\sum_{t=t^{\prime}}^{t^{\prime}+N}\mathbb{E}\left[ B(t^{\prime}+1)-B(t^{\prime})\right]\notag\\ &&=B(t^{\prime})+ B(t^{\prime}+1)-B(t^{\prime})+B(t^{\prime}+2)\notag\\ &&-B(t^{\prime}+1)+...+B(t^{\prime}+N+1)-B(t^{\prime}+N)\notag \\ &&=B(t^{\prime})+(N+1)\cdot\bigg[A_{P}(t^{\prime})-\vert S_{P}^M(t)\vert-\notag \\ &&\quad\sum_{s=1}^{N_{SCC}}\left(A_{s}(t^{\prime})-\vert S_{s}^M(t)\vert\right)\bigg]\notag\\ &&=B(t^{\prime})+(N+1)\cdot\bigg[A_{P}(t^{\prime})-\sum_{s=1}^{N_{SCC}}A_{s}(t^{\prime})-\notag\\ &&\quad(\lfloor \rho_{P}/\rho_{s} \rfloor-N_{SCC})\bigg]. \end{eqnarray} The optimization problem is then transformed into the following form, \begin{eqnarray} \underset{A_{P/s}(t^{\prime})}{\textrm{minimize}} && \big\vert B(t^{\prime})+(N+1)\cdot\big[A_{P}(t^{\prime})-\notag\\&&\sum_{s=1}^{N_{SCC}}A_{s}(t^{\prime}) -(\lfloor \rho_{P}/\rho_{s} \rfloor-N_{SCC})\big]\big\vert\notag.\\ \end{eqnarray} \end{appendices} \bibliographystyle{IEEEtran}
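Under the final form above, with $A_s(t') = 1 - A_P(t')$ for every SCC, the minimization reduces to a binary choice that can be enumerated directly. The following is a hedged sketch (names and the binary restriction are ours):

```python
def best_pcc_action(b, n, n_scc, cap_ratio):
    # Choose A_P in {0, 1} (each SCC action is 1 - A_P) minimizing
    # |B(t') + (N+1)*(A_P - sum_s A_s - (floor(rho_P/rho_s) - N_SCC))|.
    c = cap_ratio - n_scc
    def cost(a_p):
        a_s_sum = n_scc * (1 - a_p)
        return abs(b + (n + 1) * (a_p - a_s_sum - c))
    return min((0, 1), key=cost)
```

A large negative buffer difference (PCC buffer shorter) favors sending to the PCC, and a large positive one favors the SCCs, matching the intuition behind the splitting rule.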
\section{Introduction} How do conceptual knowledge and language work together to affect the way we partition the world into categories? Some researchers \cite{lupyan2007} claim that verbal labels facilitate category learning by helping learners pick out relevant category dimensions. However, others \cite{brojde2011} have shown contrasting results indicating that the presence of words can actually slow down learning. Recent work by \citeA{ivanovahofer} developed a Bayesian computational model that reconciles these conflicting results. The authors argued that word labels impose a set of priors on the learning process. Thus, if the word-induced prior aligns with the true category structure, then words will facilitate learning. If the word-induced prior does not align with the true structure, they will make learning harder by biasing the learner toward irrelevant stimulus features. However, although this `label effect' is explored in depth by the authors, domain priors are not. In this work, we alter and expand upon an existing Bayesian modeling framework \cite{ivanovahofer} for examining the effects of linguistic overhypotheses on category learning. In particular, we examine the interaction between linguistic ``label-based'' priors and ``domain-based'' conceptual priors. By modeling the interactions between general concept priors and word-induced priors, we attain a more accurate representation of the priors involved in category learning. For example, how do priors that drive the belief that shape is more likely than texture to determine category membership compare and interact with priors induced by labels? This work centers on examining and implementing these domain and label biases in a hierarchical Bayesian model. Through this model, we can better understand how learning is driven by interactions between language-independent and language-dependent biases.
Specifically, we show that these two different biases both affect category learning, with the degree of each one's effect being variable and learnable. \section{Prior Work} Efforts have been made in prior work to describe the effect of labels on category learning. A study conducted by \citeA{lupyan2007} presented participants with two `alien' stimuli and had them learn to distinguish between which aliens to approach and which to avoid in a supervised learning setup. The distinguishing feature between the categories was shape, and the authors performed experiments by providing participants with labels for the aliens (`leebish' and `grecious') that aligned with category membership. These labels did not provide additional information about category membership to the participants (and thus could not be considered as features themselves). The study found that the labels facilitated category learning and posited that this was because the labels helped to emphasize features where object variance aligned with category membership. However, another study by \citeA{brojde2011} raised doubts about the beneficial nature of labels in category learning. Specifically, the authors found that providing shape-based labels with data that is categorized based on other dimensions such as texture or hue actually hindered category learning. This provided a contrasting view to the \textit{label advantage effect} of linguistic labels on learning. Hence, \citeA{brojde2011} concluded that instead of emphasizing category-distinguishing differences across all features, labels serve to bring the participants' attention towards dimensions that have been `historically relevant' for classification. To reconcile these two views, \citeA{ivanovahofer} proposed that verbal labels serve as overhypotheses for the learner in category learning. They assert that learners have a set of biases imposed by, for example, words whose meanings are characteristic of certain categories.
Thus, when labels are used in the learning process they can either facilitate or hinder learning depending on whether the priors induced by the overhypotheses align with the true category structure. However, their model does not account for the potential conceptual domain biases that a learner may have. Specifically, in their model, the absence of label biases is considered a `no bias' situation, which they acknowledge is not truly representative of the learner due to the fact that even without any label biases the learner may still have inherent domain biases. \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/catdog.png} \caption{Six example objects for category learning that vary among numerous features (e.g. shape, texture, color, size, and position).} \label{fig:catdog} \end{figure} \section{Concept Overhypotheses} The need for modeling of not only the label biases in category learning but also the domain biases is the impetus for this work. When a learner considers a category learning task, they tend to have some form of conceptual prior over which features are more indicative of category membership. An example of this can be demonstrated by looking at Fig \ref{fig:catdog}. Learners faced with these six objects for classification may have some inherent domain biases when learning the categories. For example, English speakers (a language where most nouns for objects are based on shape) tend to be more likely to classify objects by their shape than other features such as texture \cite{landau1988}. In the example in Fig \ref{fig:catdog}, there are numerous features that the data can be partitioned by: shape, texture (hairless or furry), color (light or dark), size, and position (lying down, upright), among others. Focusing on each of these features as deterministic for categorization yields a different set of final categories. 
The conceptual domain biases that learners have when learning to categorize objects ultimately interplay with biases they develop when given linguistic labels for objects. One example of this is the case when a learner approaches the aforementioned category learning task with a conceptual bias that objects like these are more likely to be categorized by shape. Then, when provided with labels ``furry'' and ``hairless,'' they develop a label bias towards texture being the diagnostic feature (according to their understanding of the meanings of the labels). These biases (whether contrasting or constructive) both have effects on the learner's learning of category structure, especially in the early stages of supervised learning. To better understand the processes by which humans perform these category learning tasks, we aim to account for the conceptual biases learners have about which dimensions are more informative of category membership in addition to any label-induced biases they may acquire. \section{Model Setup} In this model, we consider a learning scenario where the goal is to learn to categorize object exemplars into $C$ disjoint categories. These exemplars possess $F$ perceptual dimensions (i.e. features) along which they may vary. The categories themselves vary along at least one of the $F$ features. \begin{figure*}[!ht] \centering \begin{tikzcd} & & \vec{\alpha_d} \arrow[d] \arrow[ld] & \vec{\alpha_l} \arrow[d] \arrow[rd] & & \\ & domain \; bias_1 \arrow[d, "\omega"'] & domain \; bias_2 \arrow[rrd, "\omega"'] & label \; bias_1 \arrow[lld, "1-\omega"] & label \; bias_2 \arrow[d, "1-\omega"] & \\ & bias_1 \arrow[d] & & & bias_2 \arrow[d] & \\ & \Sigma_1 \arrow[ld] \arrow[rd] & & & \Sigma_2 \arrow[ld] \arrow[rd] & \\ {\mu_{1,1}} \arrow[d] & & {\mu_{1,2}} \arrow[d] & {\mu_{2,1}} \arrow[d] & & {\mu_{2,2}} \arrow[d] \\ {y_{1,1}} & & {y_{1,2}} & {y_{2,1}} & & {y_{2,2}} \end{tikzcd} \caption{Illustration of the model structure with $F=C=2$. 
From the top down, $\omega$ and all $\vec{\alpha}$'s are fixed parameters; all biases, $\Sigma$'s, and $\mu$'s are stochastic variables; and all $y$ values are observed during learning.} \label{fig:modeltree} \end{figure*} We focus on two types of bias that affect the way the learner evaluates and learns from the exemplars: 1) the bias induced in the learner by the presence of linguistic labels for the data; 2) the inherent domain bias of the learner. These domain and label biases were integrated into the model by considering a vector of bias parameters for each of the two biases. More specifically, the domain bias is parameterized by a vector $\vec{\alpha_d}$ that determines the learner's relative domain biases over each of the $F$ features. Likewise, the label bias is parameterized in the same manner by a vector $\vec{\alpha_l}$. The domain bias itself is a vector $\vec{p}$ containing $F$ values sampled from a Dirichlet distribution with parameter $\vec{\alpha_d}$. Similarly, the label bias is a vector $\vec{k}$ sampled from a Dirichlet distribution with aforementioned parameter $\vec{\alpha_l}$: \begin{equation*} \begin{split} \vec{p} = (p_1, ..., p_F)^T \sim \textrm{Dirichlet}(\vec{\alpha_d}), \\ \vec{k} = (k_1, ..., k_F)^T \sim \textrm{Dirichlet}(\vec{\alpha_l}). \end{split} \end{equation*} These bias vectors $\vec{p}$ and $\vec{k}$ serve as overhypotheses during learning. The model also takes parameters $w$ and $s$ which serve to constrain how the domain and label biases interact. Particularly, $w$ and $s$ provide the mean and standard deviation that are used to sample a weight $\omega$ that will define a convolution of the domain and label biases; \begin{equation*} \omega \sim \textrm{Normal}(w, s) \; \textrm{truncated at 0}. 
\end{equation*} By sampling $\omega$ from a truncated normal distribution centered at $w$ with lower bound zero, we are essentially choosing an estimate $w$ of how much belief the learner places on the domain bias and then adding a small amount of noise. This can capture to some extent the variability in how much more important a learner may consider the domain bias compared to the label bias or vice versa. Using the weight $\omega$, we can define $\omega \vec{p} + (1-\omega)\vec{k}$, a convex combination of the two biases, which is the total bias that the learner has when learning. Using $\omega$ and $1-\omega$ as the weights for the biases enforces that increased weight given to the domain bias corresponds to equally decreased weight for the label bias (and vice versa). The final bias values for each feature $i$ are transformed further using a power law function $r(p_i, k_i)$ scaled between -1 and 1 as follows: \begin{equation*} r(p_i, k_i) = 2\left((\omega p_i + (1-\omega)k_i)^{\frac{1}{\gamma}} - 0.5\right). \end{equation*} Since the domain and label biases $\vec{p}$ and $\vec{k}$ were both drawn from Dirichlet distributions, they satisfy that $\sum_{i=1}^F{p_i}=1$ and $\sum_{i=1}^F{k_i}=1$. Accordingly, the convex combination of $\vec{p}$ and $\vec{k}$ still maintains the same property $\sum_{i=1}^F{\omega p_i + (1-\omega)k_i}=1$. This allows the domain and label biases to be considered with different weights, but still aligns with the so-called `conservation of belief', where if the learner is biased towards a particular feature then they are less likely to discover category distinctions based on other features.
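The sampling and combination steps above can be sketched in a few lines of numpy (a hypothetical illustration; the $\vec{\alpha}$ values are placeholders, and the actual model is fitted with PyMC3):

```python
import numpy as np

rng = np.random.default_rng(0)

F = 2             # number of perceptual features
gamma = 10        # power-law parameter
w, s = 0.3, 0.03  # mean and sd of the domain weight

# Placeholder Dirichlet parameters for the domain and label biases.
alpha_d = np.array([1.0, 10.0])
alpha_l = np.array([10.0, 10.0])
p = rng.dirichlet(alpha_d)  # domain bias over the F features
k = rng.dirichlet(alpha_l)  # label bias over the F features

# Domain weight: Normal(w, s) truncated at 0 (resample until nonnegative).
omega = rng.normal(w, s)
while omega < 0:
    omega = rng.normal(w, s)

# Convex combination of the two biases; entries still sum to 1.
bias = omega * p + (1 - omega) * k

# Power-law transform scaled to [-1, 1].
r = 2 * (bias ** (1 / gamma) - 0.5)
```

Because the combined bias stays on the simplex, the transformed values $r$ always land in $[-1, 1]$.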
\begin{figure}[!th] \centering \begin{minipage}{0.5\linewidth} \centering \includegraphics[width=0.9\linewidth, height=0.17\textheight]{figures/cor_vs_bias.pdf} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=.6\linewidth, height=0.17\textheight]{figures/cor_versus_p_k.pdf} \end{minipage} \caption{\textbf{Left panel:} Nonlinear effect of different $\gamma$ values on the relationship between overall bias and correlations of category means. The rate of change for correlation values gets more extreme as bias approaches 0. Thus, a change in the bias closer to 0 results in a larger change in the distribution of the sampled category means. \; \textbf{Right panel:} A surface plot of the correlation between category means versus biases $p_1$ and $k_1$ for $p_1 \in [0.2, 0.5]$ and $k_1\in [0, 0.5]$. The resultant non-additive effect of the biases $p$ and $k$ is reflected in the nonlinear surface plot.} \label{fig:corrs} \end{figure} Using the function $r(p_i, k_i)$ to scale and transform the bias gives the correlation between the category means for the $i$th feature. More specifically, the correlation between the means of categories $m$ and $n$ for feature $i$ is defined as follows $\forall m, n \in 1, \ldots, C$: \begin{equation*} \operatorname{corr}\left(\mu_{i, m}, \mu_{i, n}\right)=\left\{\begin{array}{ll} r\left(p_i, k_{i}\right) & \text { if } m \neq n \\ 1 & \text { otherwise. } \end{array}\right. \end{equation*} \begin{figure*}[!t] \centering \begin{minipage}{.53\textwidth} \includegraphics[width=1\linewidth]{figures/past_results.JPG} \end{minipage} \begin{minipage}{.46\textwidth} \includegraphics[width=1.05\linewidth]{figures/modeling_0.3_largertext.pdf} \end{minipage} \caption{Average accuracy of the model over 4 learning blocks compared to previous studies' results. The model ($w=0.3, s=0.03$) results corroborate previous findings. 
Specifically, the presence of a `right bias' facilitates category learning (higher accuracies) while the presence of a `wrong bias' hinders learning (lower accuracies) compared to the `no bias' case. Additionally, the model shows that in general, a wrong label bias results in greater hindrance to learning compared to a wrong domain bias (the same applies for right biases). This occurs because for these settings of $w$ and $s$, the domain weight $\omega$ is less than the label weight $1-\omega$.} \label{fig:modelingaccuracy} \end{figure*} The transformation $r$ serves to account for the need for nonlinear correlation values as discussed in \citeA{ivanovahofer}, where linear correlation values produce very little difference between right-bias, none-bias, and wrong-bias conditions. By using a power law function with parameter $\gamma$, we can specify how conservatively the learner generates hypotheses. When $\gamma$ is close to 1 (linear), the learner is more likely to learn category structures that do not align with their biases. In contrast, larger values of $\gamma$ specify that the learner will primarily consider category structure hypotheses that align with their biases. The nonlinear transformation also turns the additive combination of the label and domain biases into a non-additive final effect (Fig \ref{fig:corrs}). From here, the model infers the category means and variances for each feature. Let $\vec{\sigma_{i}}^2$ be the vector of the variances of the $i$th feature for all $C$ categories: \begin{equation*} \vec{\sigma_{i}}^2 = (\sigma_{i, 1}^2, ..., \sigma_{i, C}^2)^T. \end{equation*} Using the correlation values obtained from the biases, we can define a covariance matrix $\Sigma_i$ for each feature $i$: \begin{equation*} \Sigma_i = diag(\vec{\sigma_{i}}) \cdot R_i \cdot diag(\vec{\sigma_{i}}), \end{equation*} where $R_i$ is the correlation matrix with entry $(m, n)$ being $\textrm{corr}(\mu_{i,m}, \mu_{i, n})$.
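The covariance construction can be sketched directly (the value of $r_i$ and the standard deviations are hypothetical):

```python
import numpy as np

C = 2  # number of categories

# Off-diagonal correlation for feature i, i.e. the transformed bias r(p_i, k_i).
r_i = 0.6
R_i = np.full((C, C), r_i)
np.fill_diagonal(R_i, 1.0)

# Per-category standard deviations for feature i (hypothetical values).
sigma_i = np.array([1.0, 1.5])

# Sigma_i = diag(sigma_i) . R_i . diag(sigma_i)
Sigma_i = np.diag(sigma_i) @ R_i @ np.diag(sigma_i)
```

The resulting matrix is symmetric with the variances on its diagonal and $r_i \sigma_{i,m} \sigma_{i,n}$ off the diagonal.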
Then, we assume that the learner considers the category means for each feature as adhering to a Normal prior distribution with covariance matrix $\Sigma_i$: \begin{equation*} \vec{\mu_i} = (\mu_{i,1}, ..., \mu_{i,C})^T \sim \textrm{Normal}(0, \Sigma_i) \end{equation*} These category means $\vec{\mu_i}$ and category variances $\vec{\sigma_{i}}^2$ (which are subject to some perceptual noise $\sigma_s^2$) are inferred by the learner based on observed category exemplars $y$. In this model, we make the simplifying assumption that the learner considers exemplars to be sampled from a Multivariate Normal distribution as follows: \begin{equation*} \vec{y_i} = (y_{i,1},...,y_{i,C})^T \sim \textrm{MVN}(\vec{\mu_i}, diag(\vec{\sigma_{i}}^2) + \sigma_s^2I_C), \end{equation*} where $\vec{y_i}$ contains the learner's estimated values of feature $i$ in all categories. Through this model setup, graphically visualized in Fig \ref{fig:modeltree}, we can explore the combined effect that conceptual domain priors and linguistic label priors have on category learning. \begin{figure*} \centering \setlength{\tabcolsep}{-10pt} \begin{tabular}{ccc} $w=0.2, s=0$ & $w=0.3, s=0.03$ & $w=0.5, s=0$\\ \includegraphics[width=0.36\textwidth, height=0.23\textheight]{figures/heatmapweight02.pdf} & \includegraphics[width=0.36\textwidth, height=0.23\textheight]{figures/heatmapweight03.pdf} & \includegraphics[width=0.36\textwidth, height=0.23\textheight]{figures/heatmapweight05.pdf} \end{tabular} \caption{Heatmaps showing the average accuracy for each model for three different settings of $w$ and $s$. Note that when $w=0.5$ with no additional variance the domain and label biases are considered equally by the learner, so the accuracies are symmetric along the anti-diagonal. 
When $w=0.2$ or $w=0.3$, the learner places less belief in the domain bias compared to the label bias, and this imbalance is reflected in the resulting accuracies.} \label{fig:heatmaps} \end{figure*} \section{Model Fitting} \subsubsection{Data} To fit the model, simulated data were generated to approximate the experiments in \citeA{brojde2011}. A total of 16 exemplars were generated, with 8 in each of two categories. These simulated data varied along 2 dimensions, which, for simplicity, will subsequently be referred to as the ``texture'' and ``shape'' features. In the data, the shape feature was diagnostic of category. \subsubsection{Biases} The model was fitted 9 separate times to consider a variety of possible interactions between domain and label biases. We establish three levels of bias: the bias-induced prior aligns with the true category structure (``right bias''), the bias-induced prior does not align with the true category structure (``wrong bias''), and no priors are induced by the bias (``none bias''). The 9 fitted models consisted of all combinations of \{\texttt{right, none, wrong}\} domain bias and \{\texttt{right, none, wrong}\} label bias. \subsubsection{Markov Chain Monte Carlo} As in \citeA{lupyan2007}, \citeA{brojde2011}, and \citeA{ivanovahofer}, each model was given all 16 exemplars as observed data in every learning block. The models were fitted to the data using PyMC3, a Python-based probabilistic programming framework \cite{salvatier2016}. In the fitting process, a Markov chain Monte Carlo (MCMC) simulation-based approach is used to obtain a Markov chain of samples from the posterior distribution of the category means given the observed data. During the MCMC sampling, the No-U-Turn Sampler (NUTS) is used to generate posterior samples of the category means. NUTS is especially useful in our model due to its ability to handle many continuous parameters, a situation where other MCMC algorithms work very slowly.
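For illustration, the generative process that the sampler inverts can be written as ancestral sampling in numpy (placeholder covariance values; this is a sketch, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
C, F = 2, 2
sigma_s2 = 1.0  # perceptual noise

# One covariance matrix per feature, built from the bias-derived
# correlations (placeholder values here).
Sigma = [np.array([[1.0, 0.6], [0.6, 1.0]]) for _ in range(F)]
sigma2 = np.ones((F, C))  # per-feature, per-category variances

# Category means for each feature i: mu_i ~ Normal(0, Sigma_i).
mu = np.array([rng.multivariate_normal(np.zeros(C), Sigma[i]) for i in range(F)])

# Exemplar feature values: y_i ~ MVN(mu_i, diag(sigma_i^2) + sigma_s^2 I).
y = np.array([
    rng.multivariate_normal(mu[i], np.diag(sigma2[i]) + sigma_s2 * np.eye(C))
    for i in range(F)
])
```

In the actual fitting, the posterior over the category means is obtained with NUTS rather than by forward sampling.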
NUTS computes the model gradients via automatic differentiation of the log-posterior density to find the regions where higher probabilities are present. \section{Results} We evaluated the performance of the 9 fitted models by classifying the observed exemplars into the two categories. This was done by comparing the exemplars' feature values to the learner's estimated feature means and standard deviations of each category. \begin{table}[!h] \centering \begin{tabular}{ll} \hline Bias & $\vec{\alpha}$ \\ \hline Right & (1, 10) \\ None & (10, 10) \\ Wrong & (10, 1) \end{tabular} \caption{Settings of $\vec{\alpha}$ for each class of bias} \label{tab:biastypes} \end{table} In the model, some parameters were held constant. These were the perceptual noise $(\sigma_s^2=1)$ and the nonlinear transform parameter $(\gamma=10)$. We also established settings of $\vec{\alpha_d}, \vec{\alpha_l} \in$ \{\texttt{(1, 10), (10, 1), (10, 10)}\} according to Table \ref{tab:biastypes} to represent the domain and label biases. In setting these $\vec{\alpha}$'s, we are choosing a fixed representation for the overhypotheses. However, in principle these overhypotheses can also be learned from data \cite{kemp2007}. \subsubsection{Comparison of Models} The model was able to demonstrate the effects of the domain and label biases on learning. At a basic level, holding the domain bias constant at `none' produced results that align with those in \citeA{ivanovahofer}, where we see that a `right label bias' facilitated learning while a `wrong label bias' hindered it (Fig \ref{fig:modelingaccuracy}). In addition, the other models fitted with various label and domain biases for $w=0.3, s=0.03$ (Fig \ref{fig:modelingaccuracy}) show that the label bias has a stronger effect (whether positive or negative) on learning. This aligns with the expected results, since the tested settings of $w$ and $s$ generate a domain weight $\omega$ that is less than the label weight $1-\omega$.
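The classification rule used for evaluation is not spelled out in detail; one plausible sketch, assuming independent Gaussian features with the learner's estimated means and standard deviations, is:

```python
import numpy as np

def classify(x, means, sds):
    """Assign exemplar x (length-F feature vector) to the category whose
    estimated per-feature Gaussians make x most likely.

    means, sds: (C, F) arrays of the learner's estimated category
    feature means and standard deviations (hypothetical estimates here).
    """
    # Gaussian log-density summed over features, per category
    # (constants dropped since they are shared across categories).
    log_lik = -0.5 * np.sum(((x - means) / sds) ** 2 + 2 * np.log(sds), axis=1)
    return int(np.argmax(log_lik))

# Hypothetical estimates: feature 1 ("shape") is diagnostic, feature 0 is not.
means = np.array([[0.0, -2.0],
                  [0.0,  2.0]])
sds = np.ones((2, 2))
```

With these placeholder estimates, an exemplar whose shape value is near 2 is assigned to category 1, and one near $-2$ to category 0.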
When looking at the average accuracies yielded by each of the 9 fitted models, we see a reflection of the hypothesized effects of the domain and label biases (Fig \ref{fig:heatmaps}). When the two biases are considered with equal weight by the learner, the resulting accuracy matrix for the different combinations of domain and label bias is symmetric along the anti-diagonal (Fig \ref{fig:heatmaps}). This aligns with the expectation that when the learner considers the two biases equally while learning, changes in one bias have very similar effects to equivalent changes in the other bias on overall accuracy. \begin{figure*} \centering \setlength{\tabcolsep}{-10pt} \begin{tabular}{ccc} $w=0.2, s=0$ & $w=0.3, s=0.03$ & $w=0.5, s=0$\\ \includegraphics[width=0.36\textwidth]{figures/interaction_0.2.pdf} & \includegraphics[width=0.36\textwidth]{figures/interaction_0.3.pdf} & \includegraphics[width=0.36\textwidth]{figures/interaction_0_5.pdf} \end{tabular} \caption{Interaction plots for three different settings of $w$ and $s$ showing average accuracy by model. When $s=0$, the lines are essentially parallel and indicate minimal interaction between the domain and label biases. However, of particular interest is the introduction of a nonzero $s$ value (as is the case when $w=0.3$). In this case, the additional variability in $\omega$ makes the lines in the plot begin to deviate from parallel.} \label{fig:interactions} \end{figure*} However, for unequal weights (e.g. when $w=0.2$ and $w=0.3$ in Fig \ref{fig:heatmaps}), the effect of changes in one bias (label bias in the example) is stronger than that of equivalent changes in the other bias. Additionally, the inclusion of a nonzero $s$ when $w=0.3$ introduces noise to the bias weights -- a scenario more representative of a real human learner. Finally, another important observation from the results is demonstrated in Fig \ref{fig:dotplot}. Within each particular bias type and class (e.g.
right domain bias), changing the other bias from `wrong' to `none' to `right' generally increases accuracy. However, within a particular setting of both bias types and classes (e.g. wrong domain bias and wrong label bias), the accuracy trends vary when parameter $w$ is changed. For example, when domain bias and label bias are both wrong, increasing $w$ (and therefore $\omega$, the degree to which the learner believes their domain bias) decreases accuracy. However, when there is no domain bias and a wrong label bias, increasing $w$ has a negligible effect on the accuracy. These results all align with our initial hypotheses about the behavior of the domain and label biases. The model also attempts to capture the interactions between the domain and label biases. The convex combination of the two biases and the decision to put both on the same level of the Bayesian hierarchical framework were informed by adherence to the principle of `conservation of belief', where increased belief or weight placed in one type of bias should result in a corresponding decrease in others. Although the initial combination was additive, the final contribution of the combined biases to the correlations between the category means is not linear (Fig \ref{fig:corrs}). This could potentially lead to interaction effects between the two biases. However, such interactions were minimal in our simulation results.
\begin{table*}[t] \centering \setlength{\tabcolsep}{2.5pt} \renewcommand{\arraystretch}{1} \begin{tabular}{l|r|cc|cc|cc} \hline & &\multicolumn{2}{|c|}{$w=0.2, s=0$}&\multicolumn{2}{c|}{$w=0.3, s=0.03$}& \multicolumn{2}{c}{$w=0.5, s=0$} \\ & & GEE & GLMER & GEE & GLMER & GEE & GLMER \\ Effect & Df & p-value & F-value & p-value & F-value & p-value & F-value \\ \hline Block & 1 & 0 & 1896.8 & 0 & 1824.6 & 0 & 1784.3 \\ Label Bias & 2 & 0 & 249.2 & 4.00e-15 & 205.2 & 1.83e-09 & 124.8 \\ Domain Bias & 2 & 0.016 & 22.4 & 5.49e-05 & 57.9 & 3.33e-10 & 121.5 \\ block:Label Bias & 2 & 5.13e-05 & 65.9 & 3.69e-04 & 55.2 & 0.099 & 16.5 \\ block:Domain Bias & 2 & 0.941 & 0.5 & 0.177 & 12.0 & 0.021 & 25.0 \\ Label Bias:Domain Bias & 4 & 0.931 & 1.3 & 0.741 & 2.7 & 0.583 & 3.8 \\ \hline \end{tabular} \caption{Results from the mixed effects logistic regression and the generalized estimating equation with logit link and exchangeable correlation structure. The interaction effect of label and domain bias is not significant in either the GEE or the GLMER model.} \label{tab:testresult} \end{table*} As seen in Fig \ref{fig:interactions}, changes in the model parameters $w$ and $s$ (and therefore the weight parameter $\omega$) demonstrate mildly different levels of interaction between domain and label biases. For the cases when $s=0$ ($w=0.2$ and $w=0.5$), the lines in the plot do not intersect at any point. This indicates that there are no significant interactions between the domain and label biases when the learner's belief in their domain bias is not subject to any variation. Finally, parallel to the procedure used in \citeNP{ivanovahofer}, we simulated individual-level predictions with 75 participants per combination of domain and label bias classes in each learning block.
These were analyzed using a mixed effects logistic regression model that was fitted using the \texttt{glmer} command from the \texttt{lme4} package \cite{bates2015} with formula: \begin{equation*} accuracy \sim (label + domain + block)^2 + (1 | object), \end{equation*} and a generalized estimating equation (GEE) model with observations from the same object having an exchangeable correlation structure and formula: \begin{equation*} accuracy \sim (block + label + domain)^2. \end{equation*} These two models have identical correlation matrices, but GEE provides a test for the fixed effects while \texttt{glmer} does not. We used both models to assess whether the interaction effect between the label and domain biases is significant. We concluded that the interaction effect for the model fitted with different settings of $(w,s)$ was not statistically significant, based on both the $p$-values from the GEE model and the F-statistics from the mixed model, as shown in Table \ref{tab:testresult}. This quantitatively confirms the patterns we observed in the interaction plot (Fig \ref{fig:interactions}), where there are no significant interactions between domain and label biases (at least at the level of weight variation $s$ tested). Although the examples modeled did not show significant interaction between biases, the use of more complex data and therefore more complex bias priors could very likely amplify any domain-label interactions. We did observe significant effects for block \& label, block \& domain, and main effects of block, label, and domain. Some of these effects were also reported by \citeNP{brojde2011}. \section{Discussion} \subsubsection{Domain weight flexibility} In this work, the additional parameters $w$ and $s$ that further parameterize the domain bias weight $\omega$ played an important role in making the learner's consideration of domain vs.\ label biases flexible. It appears very reasonable that these parameters are also learnable in category learning tasks -- i.e.
learners learn to either emphasize or discount their domain bias as they go through the supervised learning process, thus altering their $\omega$. \subsubsection{Importance of parameters $w$ and $s$} Different learners have different inherent degrees of belief in the two biases. This variability can be partially captured by the noise introduced by the $s$ parameter. Hence, the capability to adjust $s$ is important for modeling different types of learners. Beyond these smaller variations between learners, learners may also vary widely in their preference for domain or label belief. One key example of this is in comparing adult learners to infant learners. Infants will have a less developed understanding of word meanings, and therefore the label bias effect may be negligible for them (high $\omega$). By contrast, adults may be strongly influenced by label biases because they understand the structure underlying the labels (high $1-\omega$). The model can account for these differences by adjusting the magnitude of $w$, which is where the distribution of $\omega$ is centered, to model learners with vastly different beliefs in the two biases. \subsubsection{Limitations} It is important to acknowledge the limitations of this model. Although the model detailed in this work provides an implementation of the domain and label biases, additional data collection to tune the model's assumptions was outside the scope of this project and thus was not conducted. Additionally, the model is just one implementation of the possible combined behaviors of the label and domain biases. Hence, alternative implementations may be worth investigating to compare and contrast results. \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/pointrange.JPEG} \caption{Dot plot of the average model accuracies and standard errors for each of the 9 model fittings for three different settings of ($w, s$): (0.2, 0), (0.3, 0.03), and (0.5, 0).
We can see that the different values of $w$ have slightly different effects on the magnitude and direction of the accuracy trends for any fixed domain bias and label bias.} \label{fig:dotplot} \end{figure} \subsection{Future Research} Given the limitations detailed previously, a key future direction for this work is adjusting the structure of the Bayesian hierarchical model. In this work, we discussed that the combination of the domain and label biases is additive. Despite this additive implementation, we use a combination of the nonlinear transformation and the learners' variation $s$ in the degree to which they believe in their domain biases to introduce potential interactions between the biases. However, as we saw in Fig. \ref{fig:interactions}, these interactions are still minimal. One way to model more complex interactions between the domain and label biases during category learning is to consider the domain bias on a separate level from the label bias. This is particularly relevant given our assumption about the label bias for this work: that the label biases are generated by the learner's understanding of word meanings and their alignment with certain dimensions. Since the label bias depends in part on the learner's understanding of the domain, the label biases and domain biases may be more interconnected than defined in this model. Considering the domain bias on a different level of the hierarchical framework, or using it to put priors on the label bias (which would then propagate down to generate priors for the category mean correlations), would be interesting ways to implement the domain and label biases differently. Finally, data collection to compare the simulated results of this model with human data is an important next step for further verifying the conclusions generated in this work. Overall, this paper clarifies and conceptualizes the importance of considering both label and domain biases in category learning.
By representing the biases as overhypotheses that impose priors over the learning process, we are able to model the effects each bias has on learning, as well as show how they interact. \section{Acknowledgments} Thanks to Anna Ivanova and Matthias Hofer for participating in discussion at the early stages of this project. \nocite{brojde2011} \nocite{ivanovahofer} \nocite{kemp2007} \nocite{landau1988} \nocite{lupyan2015} \nocite{lupyan2007} \nocite{salvatier2016} \bibliographystyle{apacite} \setlength{\bibleftmargin}{.125in} \setlength{\bibindent}{-\bibleftmargin}
\section{Acknowledgement} \label{sec:acknowledgement} This work is partially supported by the Big Data Collaboration Research grant from SenseTime Group (CUHK Agreement No. TS1610626) and the General Research Fund (GRF) of Hong Kong (No. 14236516). \section{Methodology} \label{sec:method} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{graph} \caption{\small Visual links and temporal links in our graph. We keep only the strongest link for each pair of tracklets. These two kinds of links are complementary: the former allows the identity information to be propagated among those instances that are similar in appearance, while the latter allows the propagation along a continuous tracklet, in which the instances can look significantly different. With both types of links incorporated, we can construct a more connected graph, which allows the identities to be propagated much further. } \label{fig:graph} \end{figure} In this work, we aim to develop a method to find all the occurrences of a person in a long video, \eg~a movie, with just a single portrait. The challenge of this task lies in the vast gap of visual appearance between the portrait (query) and the candidates in the gallery. Our basic idea is to tackle this problem by leveraging the inherent \emph{identity invariance} along a person tracklet and propagating the identities among instances via both visual and temporal links. The visual and temporal links are complementary. The use of both types of links allows identities to be propagated much further than using either type alone. However, how to propagate over a large, diverse, and noisy dataset reliably remains a very challenging problem, considering that we begin with just a small number of labeled samples (the portraits). The key to overcoming this difficulty is to be \emph{prudent}, only propagating the information which we are certain about.
To this end, we propose a new propagation framework called \emph{Progressive Propagation via Competitive Consensus}, which can effectively identify confident labels in a competitive way. \subsection{Graph Formulation} \label{sub:graph} The propagation is carried out over a graph among person instances. Specifically, the propagation graph is constructed as follows. Suppose there are $C$ cast in the query set, $M$ tracklets in the gallery set, and the length of the $k$-th tracklet (denoted by $\tau_k$) is $n_k$, \ie~it contains $n_k$ instances. The cast portraits and all the instances along the tracklets are treated as graph nodes. Hence, the graph contains $N = C + \sum_{k=1}^M n_k$ nodes. In particular, the identities of the $C$ cast portraits are known, and the corresponding nodes are referred to as \emph{labeled nodes}, while the other nodes are called \emph{unlabeled nodes}. The propagation framework aims to propagate the identities from the labeled nodes to the unlabeled nodes through both \emph{visual} and \emph{temporal} links between them. The \emph{visual links} are based on feature similarity. For each instance (say the $i$-th), we can extract a feature vector, denoted as $\vv_i$. Each visual link is associated with an affinity value -- the affinity between two instances $\vv_i$ and $\vv_j$ is defined to be their cosine similarity as $w_{ij} = \vv_i^T \vv_j / (\|\vv_i\| \cdot \|\vv_j\|)$. Generally, a higher affinity value $w_{ij}$ indicates that $\vv_i$ and $\vv_j$ are more likely to be from the same identity. The \emph{temporal links} capture the \emph{identity invariance} along a tracklet, \ie~all instances along a tracklet should share the same identity. In this framework, we treat the identity invariance as a hard constraint, which is enforced via a \emph{competitive consensus} mechanism. For two tracklets with lengths $n_k$ and $n_l$, there can be $n_k \cdot n_l$ links between their nodes.
Among all these links, the strongest link, \ie~the one between the most similar pair, best reflects the visual similarity. Hence, we keep only the strongest link for each pair of tracklets, as shown in Figure~\ref{fig:graph}, which makes the propagation more reliable and efficient. Also, thanks to the temporal links, this reduction does not compromise the connectivity of the whole graph. As illustrated in Figure~\ref{fig:graph}, the visual and temporal links are complementary. The former allows the identity information to be propagated among those instances that are similar in appearance, while the latter allows the propagation along a continuous trajectory, in which the instances can look significantly different. With only visual links, we can obtain clusters in the feature space. With only temporal links, we only have isolated tracklets. However, with both types of links incorporated, we can construct a more connected graph, which allows the identities to be propagated much further. \subsection{Propagating via Competitive Consensus} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{competitive_pool} \caption{\small An example to show the difference between competitive consensus and linear diffusion. There are four nodes here, and their probability vectors are shown beside them. We propagate labels from the left nodes to the right node; however, two of its neighbor nodes are noise. The calculation processes of linear diffusion and competitive consensus are shown on the right side. We can see that in a graph with much noise, our competitive consensus, which aims to propagate the most confident information, is more robust. } \label{fig:cp} \end{figure} Each node of the graph is associated with a probability vector $\vp_i \in \mathbb{R}^C$, which will be iteratively updated as the propagation proceeds.
To begin with, we set the probability vector for each labeled node to be a one-hot vector indicating its label, and initialize all others to be zero vectors. Due to the identity invariance along tracklets, we enforce all nodes along a tracklet $\tau_k$ to share the same probability vector, denoted by $\vp_{\tau_k}$. At each iteration, we traverse all tracklets and update their associated probability vectors one by one. \paragraph{\bf Linear Diffusion.} Linear diffusion is the most widely used propagation scheme, where a node would update its probability vector by taking a linear combination of those from the neighbors. In our setting with identity invariance, the linear diffusion scheme can be expressed as follows: \begin{equation} \label{eq:linear-diffusion} \vp_{\tau_k}^{(t+1)} = \sum_{j \in \cN(\tau_k)} \alpha_{kj} \vp_j^{(t)}, \quad \text{ with } \alpha_{kj} = \frac{\tilde{w}_{kj}}{\sum_{j' \in \cN(\tau_k)} \tilde{w}_{kj'}}. \end{equation} Here, $\cN(\tau_k) = \cup_{i \in \tau_k} \cN_i$ is the set of all visual neighbors of those instances in $\tau_k$. Also, $\tilde{w}_{kj}$ is the \emph{affinity} of a neighbor node $j$ to the tracklet $\tau_k$. Due to the constraint that there is only one visual link between two tracklets (see Sec.~\ref{sub:graph}), each neighbor $j$ will be connected to just one of the nodes in $\tau_k$, and $\tilde{w}_{kj}$ is set to the affinity between the neighbor $j$ to that node. However, we found that the linear diffusion scheme yields poor performance in our experiments, even far worse than the naive visual matching method. An important reason for the poor performance is that errors will be mixed into the updated probability vector and then propagated to other nodes. This can cause catastrophic errors downstream, especially in a real-world dataset that is filled with noise and challenging cases. 
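The linear diffusion update of Eq.~\eqref{eq:linear-diffusion} can be sketched in numpy as follows (the graph layout, neighbor lists, and affinities are hypothetical):

```python
import numpy as np

def linear_diffusion_step(P, neighbors, weights):
    """One linear-diffusion update of tracklet probability vectors.

    P         : (M, C) array, current probability vector per tracklet
    neighbors : neighbors[k] lists the tracklets visually linked to tracklet k
    weights   : weights[k] lists the corresponding affinities w~_kj
    """
    P_new = P.copy()
    for t in range(len(P)):
        if not neighbors[t]:
            continue  # isolated tracklet: nothing to diffuse from
        w = np.asarray(weights[t], dtype=float)
        alpha = w / w.sum()                 # normalized affinities
        P_new[t] = alpha @ P[neighbors[t]]  # weighted average of neighbors
    return P_new

# Two labeled tracklets diffuse their identities into a third one.
P = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
neighbors = [[], [], [0, 1]]
weights = [[], [], [3.0, 1.0]]
P1 = linear_diffusion_step(P, neighbors, weights)  # P1[2] == [0.75, 0.25]
```

A single noisy neighbor directly contaminates this weighted average, which is exactly the failure mode described above.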
\paragraph{\bf Competitive Consensus.} To tackle this problem, it is crucial to improve the reliability and propagate only the most confident information. Particularly, we should only trust those neighbors that provide strong evidence instead of simply taking the weighted average of all neighbors. Following this intuition, we develop a novel scheme called \emph{competitive consensus}. When updating $\vp_{\tau_k}$, the probability vector for the tracklet $\tau_k$, we first collect the strongest evidence to support each identity $c$ from all the neighbors in $\cN(\tau_k)$, as \begin{equation} \eta_k(c) = \max_{j \in \cN(\tau_k)} \alpha_{kj} \cdot p_j^{(t)}(c), \end{equation} where the normalized coefficient $\alpha_{kj}$ is defined in Eq.\eqref{eq:linear-diffusion}. Intuitively, an identity is \emph{strongly} supported for $\tau_k$ if one of its neighbors assigns a high probability to it. Next, we turn the evidence for individual identities into a probability vector via a tempered softmax function as \begin{equation}\label{eq:cc} p^{(t+1)}_{\tau_k}(c) = \exp(\eta_k(c)/T) / \sum_{c'=1}^C \exp(\eta_k(c')/T). \end{equation} Here, $T$ is a temperature that controls how much the probabilities concentrate on the strongest identity. In this scheme, all identities compete for high probability values in $\vp^{(t+1)}_{\tau_k}$ by collecting the strongest supports from the neighbors. This allows the strongest identity to stand out. Competitive consensus can be considered a coordinate ascent method to solve Eq.~\ref{eq:gf}, where we introduce a binary variable $z_{kj}^{(c)}$ to indicate whether the $j$-th neighbor is a trustable source for the class $c$ for the $k$-th tracklet. Here, $\mathcal{H}$ is the entropy. The constraint means that one trustable source is selected for each class $c$ and each tracklet $k$.
\begin{equation} \label{eq:gf} \max ~~ \sum_{c=1}^{C} p_{\tau_k}^{(c)} \sum_{j \in \cN(\tau_k)} \alpha_{kj} z_{kj}^{(c)} p_j^{(c)} + \sum_{c=1}^{C}\mathcal{H}(p_{\tau_k}^{(c)}) \quad \text{s.t.} \quad \sum_{j \in \cN(\tau_k)} z_{kj}^{(c)} = 1. \end{equation} Figure~\ref{fig:cp} illustrates how linear diffusion and our Competitive Consensus work. Experiments on CSM also show that competitive consensus significantly improves the performance on the person search problem. \subsection{Progressive Propagation} \label{subsec:pp} In conventional label propagation, the labels of all the nodes are updated until convergence. This can be prohibitively expensive when the graph contains a large number of nodes. However, for the person search problem, this is unnecessary -- when we are very confident about the identity of a certain instance, we don't have to keep updating it. Motivated by the analysis above, we propose a \emph{progressive propagation} scheme to accelerate the propagation process. At each iteration, we fix the labels for a certain fraction of nodes that have the highest confidence, where the confidence is defined to be the maximum probability value in $\vp_i$. We found empirically that a simple freezing schedule, \eg~adding $10\%$ of the instances to the label-frozen set at each iteration, can already bring notable benefits to the propagation process. Note that the progressive scheme not only reduces computational cost but also improves propagation accuracy. The reason is that without freezing, the noisy and uncertain nodes keep affecting all the other nodes, which can sometimes cause additional errors. Experiments in Sec.~\ref{subsec:exp_csm} will show more details. \section{Conclusion} \label{sec:conclusion} In this paper, we studied a new problem named \emph{Person Search in Videos with One Portrait}, which is challenging but practical in the real world.
To promote the research on this problem, we constructed a large-scale dataset \emph{CSM}, which contains $127K$ tracklets of $1,218$ cast from $192$ movies. To tackle this problem, we proposed a new framework that incorporates both visual and temporal links for identity propagation, with a novel \emph{Progressive Propagation via Competitive Consensus} scheme. Both quantitative and qualitative studies show the challenges of the problem and the effectiveness of our approach. \section{Cast Search in Movies Dataset} \label{sec:dataset} \begin{table}[t] \centering \caption{Comparing CSM with related datasets} \label{tab:dataset-scale} \begin{tabular}{l|ccccccc} \hline Dataset & ~~CSM~ & ~MARS\cite{zheng2016mars} & ~iLIDS\cite{wang2016person} & ~PRID\cite{hirzer2011person}~ & ~Market\cite{zheng2015scalable} & PSD\cite{xiao2017joint} & PIPA\cite{zhang2015beyond} \\ \hline \hline task & ~~search~ & re-id & re-id & re-id & re-id & det.+re-id & recog. \\ \hline type & ~~video~ & video & video & video & image & image & image \\ \hline identities & ~~1,218~ & 1,261 & 300 & 200 & 1,501 & 8,432 & 2,356 \\ \hline tracklets & ~~127K~ & 20K & 600 & 400 & - & - & - \\ \hline instances & ~~11M~ & 1M & 44K & 40K & 32K & 96K & 63K \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{dataset} \caption{ Examples from the \emph{CSM} dataset. In each row, the photo on the left is the query portrait and the tracklets that follow are its ground-truth tracklets in the gallery. } \label{fig:dataset-example} \end{figure} While there have been a number of public datasets for person re-id~\cite{zheng2016mars,li2014deepreid,wang2016person,zheng2015scalable,hirzer2011person,gou2017dukemtmc4reid,karanam2016systematic} and album-based person recognition~\cite{zhang2015beyond}, a dataset for our task, namely person search with a single portrait, remains lacking.
In this work, we constructed a large-scale dataset \emph{Cast Search in Movies (CSM)} for this task. \emph{CSM} comprises a \emph{query set} that contains the portraits for $1,218$ cast (the actors and actresses) and a \emph{gallery set} that contains $127K$ tracklets (with $11M$ person instances) extracted from $192$ movies. We compare \emph{CSM} with other datasets for person re-id and person recognition in Table~\ref{tab:dataset-scale}. We can see that CSM is significantly larger, with $6$ times more tracklets and $11$ times more instances than MARS~\cite{zheng2016mars}, which is the largest dataset for person re-id to our knowledge. Moreover, CSM has a much wider range of tracklet durations (from $1$ to $4686$ frames) and instance sizes (from $23$ to $557$ pixels in height). Figure~\ref{fig:dataset-example} shows several example tracklets as well as their corresponding portraits, which are very diverse in pose, illumination, and clothing. It can be seen that the task is very challenging. \begin{figure}[t] \centering \subfloat[\label{fig:statistic-1}]{\includegraphics[width=0.28\linewidth]{tracklet-movie}} \hfill \subfloat[\label{fig:statistic-2}]{\includegraphics[width=0.67\linewidth]{movie-tracklet}} \\ \vspace{-5pt} \subfloat[\label{fig:statistic-3}]{\includegraphics[width=0.32\linewidth]{tracklet-cast}} \subfloat[\label{fig:statistic-4}]{\includegraphics[width=0.32\linewidth]{length-tracklet}} \subfloat[\label{fig:statistic-5}]{\includegraphics[width=0.32\linewidth]{height-tracklet}} \caption{ Statistics of the CSM dataset. (a): the tracklet number distribution over movies. (b): the tracklet number of each movie, both credited cast and ``others''. (c): the distribution of tracklet number over cast. (d): the distribution of length (frames) over tracklets. (e): the distribution of height (px) over tracklets. } \label{fig:dataset-statistic} \end{figure} \paragraph{\bf Query Set.} For each movie in \emph{CSM}, we acquired the cast list from IMDB.
For those movies with more than $10$ cast, we only keep the top $10$ according to the IMDB order, which covers the main characters for most of the movies. In total, we obtained $1,218$ cast, which we refer to as the \emph{credited cast}. For each credited cast, we downloaded a portrait from either their IMDB or TMDB homepage, which serves as the query portrait in \emph{CSM}. \paragraph{\bf Gallery Set.} We obtained the tracklets in the gallery set through five steps: \begin{enumerate} \item \emph{Detecting shots.} A movie is composed of a sequence of shots. Given a movie, we first detected the shot boundaries using a fast shot segmentation technique~\cite{apostolidis2014fast,sidiropoulos2011temporal}, resulting in a total of $200K$ shots for all movies. For each shot, we selected $3$ frames as the \emph{keyframes}. \item \emph{Annotating bounding boxes on keyframes.} We then \emph{manually} annotated the person bounding boxes on keyframes and obtained around $700K$ bounding boxes. \item \emph{Training a person detector.} We trained a person detector with the annotated bounding boxes. Specifically, all the keyframes are partitioned into a training set and a testing set by a ratio of $7:3$. We then finetuned a Faster-RCNN~\cite{ren2015faster} pre-trained on MSCOCO~\cite{lin2014microsoft} on the training set. On the testing set, the detector achieves around $91\%$ mAP, which is good enough for tracklet generation. \item \emph{Generating tracklets.} With the person detector described above, we performed per-frame person detection over all the frames. By concatenating the bounding boxes across frames with $\text{IoU} > 0.7$ \emph{within each shot}, we obtained $127K$ tracklets from the $192$ movies. \item \emph{Annotating identities.} Finally, we manually annotated the identities of all the tracklets. Particularly, each tracklet is annotated as one of the credited cast or as ``others''.
Note that the identities of the tracklets in each movie are annotated independently to ensure high annotation quality with a reasonable budget. Hence, being labeled as ``others'' means that the tracklet does not belong to any credited cast of the corresponding movie. \end{enumerate} \section{Related Work} \label{sec:related} \noindent{\bf Person Re-id}. Person re-id~\cite{zajdel2005keeping,gheissari2006person,gong2014person}, which aims to match pedestrian images (or tracklets) from different cameras within a short period, has drawn much attention in the research community. Many datasets~\cite{zheng2016mars,li2014deepreid,wang2016person,zheng2015scalable,hirzer2011person,gou2017dukemtmc4reid,karanam2016systematic} have been proposed to promote the research on re-id. However, the videos are captured by just a few cameras in nearby locations within a short period. For example, the Airport~\cite{karanam2016systematic} dataset is captured in an airport from 8 a.m. to 8 p.m. in one day. So the instances of the same identities are usually similar enough to identify by visual appearance, despite occlusion and pose changes. Based on such characteristics of the data, most re-id methods focus on how to match a query and a gallery instance by visual cues. In early works, the matching process was split into feature designing~\cite{hamdoun2008person,gray2008viewpoint,ma2012local,ma2014covariance} and metric learning~\cite{prosser2010person,koestinger2012large,liao2015person}. Recently, many deep learning based methods have been proposed to jointly handle the matching problem. \emph{Li et al.}~\cite{li2014deepreid} and \emph{Ahmed et al.}~\cite{ahmed2015improved} designed siamese-based networks which employ a binary verification loss to train the parameters. \emph{Ding et al.}~\cite{ding2015deep} and \emph{Cheng et al.}~\cite{cheng2016person} exploited triplet loss to train more discriminative features.
\emph{Xiao et al.}~\cite{xiao2016learning} and \emph{Zheng et al.}~\cite{zheng2016mars} proposed to learn features by classifying identities. Although the feature learning methods of re-id can be adopted for the Person Search with One Portrait problem, the two tasks are substantially different, as the query and the gallery have a huge visual appearance gap in person search, which makes one-to-one matching fail. \noindent{\bf Person Recognition in Photo Album}. Person recognition~\cite{lin2010joint,zhang2015beyond,joon2015person,li2016multi,huang2018unifying} is another related problem, which usually focuses on the persons in photo albums. It aims to recognize the identities of the queries given a set of labeled persons in the gallery. \emph{Zhang et al.}~\cite{zhang2015beyond} proposed a Pose Invariant Person Recognition method (PIPER), which combines three types of visual recognizers based on ConvNets, respectively on face, full body, and poselet-level cues. The PIPA dataset published in~\cite{zhang2015beyond} has been widely adopted as a standard benchmark to evaluate person recognition methods. \emph{Oh et al.}~\cite{joon2015person} evaluated the effectiveness of different body regions, and used a weighted combination of the scores obtained from different regions for recognition. \emph{Li et al.}~\cite{li2016multi} proposed a multi-level contextual model, which integrates person-level, photo-level and group-level contexts. But person recognition is also quite different from the person search problem we aim to tackle in this paper, since the samples of the same identities in query and gallery are still similar in visual appearance and the methods mostly focus on recognizing by visual cues and context. \noindent{\bf Person Search}. Several works focus on the person search problem.
\emph{Xiao et al.}~\cite{xiao2017joint} proposed a person search task which aims to search for the corresponding instances in the images of the gallery without bounding box annotations. The associated data is similar to that in re-id. The key difference is that the bounding box is unavailable in this task. Actually, it can be seen as a task combining pedestrian detection and person re-id. Some other works try to search for persons with different modalities of data, such as language-based~\cite{li2017person} and attribute-based~\cite{su2016deep,feris2014attribute} search, which focus on application scenarios that are different from the portrait-based problem we aim to tackle in this paper. \noindent{\bf Label Propagation}. Label propagation (LP)~\cite{zhu2002learning,zhou2004learning}, also known as Graph Transduction~\cite{wang2008graph,rohrbach2013transfer,sener2016learning}, is widely used as a semi-supervised learning method. It relies on the idea of building a graph in which nodes are data points (labeled and unlabeled) and the edges represent similarities between points, so that labels can propagate from labeled points to unlabeled points. Different kinds of LP-based approaches have been proposed for face recognition~\cite{kumar2014face,zoidi2014person}, semantic segmentation~\cite{sheikh2016real}, object detection~\cite{tripathi2016detecting}, and saliency detection~\cite{li2015inner} in the computer vision community. In this paper, we develop a novel LP-based approach called Progressive Propagation via Competitive Consensus, which differs from conventional LP in two respects: (1) propagating by competitive consensus rather than linear diffusion, and (2) iterating in a progressive manner.
\section{Experiments} \label{sec:exp} \subsection{Evaluation protocol and metrics of CSM} \label{subsec:exp_proto} \begin{table}[t] \centering \begin{minipage}{.55\linewidth} \caption{train/val/test splits of CSM} \centering \label{tab:dataset-splits} \begin{tabular}{l|c|c|c|c} \hline & movies & cast & tracklets & credited tracklets \\ \hline \hline train & 115 & 739 & 79K & 47K \\ val & 19 & 147 & 15K & 8K \\ test & 58 & 332 & 32K & 18K \\ \hline total & 192 & 1,218 & 127K & 73K \\ \hline \end{tabular} \end{minipage} \begin{minipage}{.4\linewidth} \caption{query/gallery size} \centering \label{tab:dataset-qgsize} \begin{tabular}{c|c|c} \hline setting & ~query~ & ~gallery~ \\ \hline \hline \begin{tabular}[c]{@{}c@{}}IN \\ \small{(per movie)}\end{tabular} & 6.4 & 560.5 \\ \hline ACROSS & 332 & 17,927 \\ \hline \end{tabular} \end{minipage} \end{table} The $192$ movies in CSM are partitioned into training (train), validation (val) and testing (test) sets. Statistics of these sets are shown in Table~\ref{tab:dataset-splits}. Note that we make sure that there is no overlap between the cast of different sets, \ie~the cast in the testing set do not appear in training and validation. This ensures the reliability of the testing results. Under the Person Search with One Portrait setting, one should rank all the tracklets in the gallery given a query. For this task, we use \emph{mean Average Precision (mAP)} as the evaluation metric. We also report the recall of tracklet identification results in our experiments in terms of R@k. Here, we rank the identities for each tracklet according to their probabilities. R@k means the fraction of tracklets for which the correct identity is listed within the top $k$ results. We consider two test settings in the CSM benchmark, named ``search cast in a movie'' (IN) and ``search cast across all movies'' (ACROSS).
The setting ``IN'' means the gallery consists of just the tracklets from one movie, including the tracklets of the credited cast and those of ``others''. In the ``ACROSS'' setting, the gallery comprises all the tracklets of credited cast in the testing set. Here we exclude the tracklets of ``others'' in the ``ACROSS'' setting because ``others'' only means that a tracklet does not belong to any credited cast of a particular movie, rather than of all the movies in the dataset, as mentioned in Sec.~\ref{sec:dataset}. Table~\ref{tab:dataset-qgsize} shows the query/gallery sizes of each setting. \subsection{Implementation Details} \label{subsec:exp_imp} We use two kinds of visual features in our experiments. The first one is the IDE feature~\cite{zheng2016mars} widely used in person re-id. The IDE descriptor is a CNN feature of the whole person instance, extracted by a Resnet-50~\cite{he2016deep}, which is pre-trained on ImageNet~\cite{russakovsky2015imagenet} and finetuned on the training set of CSM. The second one is the \emph{face feature}, extracted by a Resnet-101, which is trained on MS-Celeb-1M~\cite{guo2016ms}. For each instance, we extract its IDE feature and the face feature of the face region, which is detected by a face detector~\cite{zhang2016joint}. All the visual similarities in the experiments are calculated by cosine similarity between the visual features.
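The similarity computation can be sketched as follows; the helper names are our own, and representing a tracklet by the mean feature of its instances follows the FACE baseline described in the next subsection.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tracklet_similarity(query_feat, instance_feats):
    """Similarity of a query portrait feature to a tracklet, where the
    tracklet is represented by the mean feature of its instances.

    query_feat     -- array (D,): feature of the query portrait
    instance_feats -- array (M, D): features of the M instances in the tracklet
    """
    return cosine_similarity(query_feat, np.mean(instance_feats, axis=0))
```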
\begin{table}[t] \centering \caption{Results on CSM under Two Test Settings} \begin{tabular}{l|c|ccc|c|ccc} \hline & \multicolumn{4}{c|}{IN} & \multicolumn{4}{c}{ACROSS} \\ \hline & ~~mAP~~ & ~~R@1~~ & ~~R@3~~ & ~~R@5~~ & ~~mAP~~ & ~~R@1~~ & ~~R@3~~ & ~~R@5~~ \\ \hline\hline FACE & 53.33 & 76.19 & 91.11 & 96.34 & 42.16 & 53.15 & 61.12 & 64.33 \\ IDE & 17.17 & 35.89 & 72.05 & 88.05 & 1.67 & 1.68 & 4.46 & 6.85 \\ FACE+IDE & 53.71 & 74.99 & 90.30 & 96.08 & 40.43 & 49.04 & 58.16 & 62.10 \\ LP & 8.19 & 39.70 & 70.11 & 87.34 & 0.37 & 0.41 & 1.60 & 5.04 \\ \hline PPCC-v & 62.37 & \textbf{84.31} & \textbf{94.89} & \textbf{98.03} & 59.58 & \textbf{63.26} & \textbf{74.89} & \textbf{78.88} \\ PPCC-vt & \textbf{63.49} & 83.44 & 94.40 & 97.92 & \textbf{62.27} & 62.54 & 73.86 & 77.44 \\ \hline \end{tabular} \label{tab:exp-csm} \end{table} \subsection{Results on CSM} \label{subsec:exp_csm} We set up four baselines for comparison: \textbf{(1) FACE:} To match the portrait with the tracklets in the gallery by face feature similarity. Here we use the mean feature of all the instances in the tracklet to represent it. \textbf{(2) IDE:} Similar to FACE, except that the IDE features are used rather than the face features. \textbf{(3) FACE+IDE:} To combine face similarity and IDE similarity for matching, with weights $0.8$ and $0.2$ respectively. \textbf{(4) LP:} Conventional label propagation with linear diffusion, using both visual and temporal links. Specifically, we use face similarity for the visual links between portraits and candidates and IDE similarity for the visual links between different candidates. We also consider two settings of the proposed Progressive Propagation via Competitive Consensus method. \textbf{(5) PPCC-v:} using only visual links. \textbf{(6) PPCC-vt:} the full config with both visual and temporal links.
From the results in Table~\ref{tab:exp-csm}, we can see that: (1) Even with a very powerful CNN trained on a large-scale dataset, matching portraits and candidates by visual cues cannot solve the person search problem well, due to the large gap in visual appearance between the portraits and the candidates. Although face features are generally more stable than IDE features, they fail when the faces are invisible, which is very common in real-world videos like movies. (2) Label propagation with linear diffusion gets very poor results, even worse than the matching-based methods. (3) Our approach raises the performance by a considerable margin. Particularly, the performance gain is especially remarkable in the more challenging ``ACROSS'' setting ($62.27$ with ours vs. $42.16$ with the visual matching method). \paragraph{\bf Analysis on Competitive Consensus}. To show the effectiveness of \emph{Competitive Consensus}, we study different settings of the Competitive Consensus scheme in two aspects: (1) The $\max$ in Eq.~\eqref{eq:cc} can be relaxed to a top-$k$ average. Here $k$ indicates the number of neighbors to receive information from. When $k=1$, it reduces to only taking the maximum, which is what we use in PPCC. Performances obtained with different $k$ are shown in Fig.~\ref{fig:exp-cc}. (2) We also study the softmax in Eq.~\eqref{eq:cc} and compare results across different temperatures. The results are also shown in Fig.~\ref{fig:exp-cc}. Clearly, using a smaller softmax temperature significantly boosts the performance. This study supports what we claimed when designing \emph{Competitive Consensus}: we should only propagate the most confident information in this task.
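The competitive-consensus update analyzed here, with the $\max$ relaxed to a top-$k$ average, can be sketched as a minimal NumPy function; array shapes and the function name are our own assumptions, not the paper's code.

```python
import numpy as np

def competitive_consensus_step(p_neighbors, affinities, T=0.1, k=1):
    """One competitive-consensus update for a tracklet, with the max in
    eta_k(c) relaxed to a top-k average as in the ablation study.

    p_neighbors -- array (J, C): probability vectors of the J neighbors
    affinities  -- array (J,): affinities w_kj (normalized to alpha_kj)
    T           -- softmax temperature; smaller T concentrates the mass
                   on the strongest identity
    k           -- number of strongest supporters averaged per identity;
                   k=1 reduces to the plain max used in PPCC
    """
    alpha = affinities / affinities.sum()      # alpha_kj from Eq. (1)
    scores = alpha[:, None] * p_neighbors      # alpha_kj * p_j(c)
    topk = np.sort(scores, axis=0)[::-1][:k]   # k strongest supporters per identity
    eta = topk.mean(axis=0)                    # evidence eta_k(c)
    logits = eta / T
    logits -= logits.max()                     # numerical stability
    e = np.exp(logits)
    return e / e.sum()                         # tempered softmax, Eq. (3)
```

With a small $T$, the output concentrates on the identity whose single strongest supporter is largest, which is exactly the "winner stands out" behavior discussed above.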
\begin{figure}[t] \centering \subfloat[Under ``IN'' setting]{ \includegraphics[width=0.45\linewidth]{exp_cc} \label{fig:exp-cc-1} } \hfill \subfloat[Under ``ACROSS'' setting]{ \includegraphics[width=0.45\linewidth]{exp_cc2} \label{fig:exp-cc-2} } \vspace{-5pt} \caption{\footnotesize mAP of different settings of competitive consensus. Comparison between different temperatures ($T$) of softmax and different settings of $k$ (in top-$k$ average). } \label{fig:exp-cc} \end{figure} \paragraph{\bf Analysis on Progressive Propagation}. Here we show the comparison between our progressive updating scheme and the conventional scheme that updates all the nodes at each iteration. For progressive propagation, we try two kinds of freezing mechanisms: (1) The \emph{Step} scheme means that we set the freezing ratio of each iteration, and the ratio is raised step by step. More specifically, the freezing ratio $r$ is set to $r = 0.5 + 0.1 \times \text{iter}$ in our experiment. (2) The \emph{Threshold} scheme means that we set a threshold, and each time we freeze the nodes whose maximum probability for a particular identity is greater than the threshold. In our experiments, the threshold is set to $0.5$. The results are shown in Table~\ref{tab:exp-progressive}, from which we can see the effectiveness of the progressive scheme. \begin{table}[t] \centering \caption{Results of Different Updating Schemes} \begin{tabular}{l|c|ccc|c|ccc} \hline & \multicolumn{4}{c|}{IN} & \multicolumn{4}{c}{ACROSS} \\ \hline & ~~mAP~~ & ~~R@1~~ & ~~R@3~~ & ~~R@5~~ & ~~mAP~~ & ~~R@1~~ & ~~R@3~~ & ~~R@5~~ \\ \hline\hline Conventional & 60.54 & 76.64 & 91.63 & 96.70 & 57.42 & 54.60 & 63.31 & 66.41 \\ Threshold & 62.51 & 81.04 & 93.61 & 97.48 & 61.20 & 61.54 & 72.31 & 76.01 \\ Step & \textbf{63.49} & \textbf{83.44} & \textbf{94.40} & \textbf{97.92} & \textbf{62.27} & \textbf{62.54} & \textbf{73.86} & \textbf{77.44} \\ \hline \end{tabular} \label{tab:exp-progressive} \end{table} \paragraph{\bf Case Study}.
We show some samples that are correctly searched in different iterations in Fig.~\ref{fig:cases}. We can see that the easy cases, which usually have clear frontal faces, can be identified at the beginning. After iterative propagation, the information can be propagated to the harder samples. By the end of the propagation, even some very hard samples, which are non-frontal, blurred, occluded, or under extreme illumination, can be assigned the right identity. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cases} \caption{ Some samples that are correctly searched in different iterations. } \label{fig:cases} \end{figure} \section{Introduction} \label{sec:intro} Searching for persons in videos is frequently needed in real-world scenarios. To catch a wanted criminal, the police may have to go through thousands of hours of videos collected from multiple surveillance cameras, probably with just a single portrait. To find the movie shots featuring a popular star, a retrieval system has to examine many hour-long films, with just a few facial photos as the references. In applications like these, the reference photos are often taken in an environment that is very different from the target environments where the search is conducted. As illustrated in Figure~\ref{fig:teaser}, such settings are very challenging. Even state-of-the-art recognition techniques would find it difficult to reliably identify all occurrences of a person, facing the dramatic variations in pose, makeup, clothing, and illumination. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{teaser} \caption{\small Person re-id differs significantly from the person search task. The first row shows a typical example in person re-id from the \emph{MARS dataset}~\cite{zheng2016mars}, where the reference and the targets are captured under similar conditions.
The second row shows an example from our person search dataset \emph{CSM}, where the reference portrait is dramatically different from the targets that vary significantly in pose, clothing, and illumination.} \label{fig:teaser} \end{figure} It is noteworthy that two related problems, namely \emph{person re-identification (re-id)} and \emph{person recognition in albums}, have drawn increasing attention from the research community. However, they are substantially different from the problem of \emph{person search with one portrait}, which we aim to tackle in this work. Specifically, in typical settings of person re-id~\cite{zheng2016mars,li2014deepreid,wang2016person,zheng2015scalable,hirzer2011person,gou2017dukemtmc4reid,karanam2016systematic}, the queries and the references in the gallery set are usually captured under similar conditions, \eg~from different cameras along a street, and within a short duration. Even though some queries can be subject to issues like occlusion and pose changes, they can still be identified via other visual cues, \eg~clothing. For person recognition in albums~\cite{zhang2015beyond}, one is typically given a diverse collection of gallery samples, which may cover a wide range of conditions and therefore can be directly matched to various queries. Hence, for both problems, the references in the gallery are often good representatives of the targets, and therefore the methods based on visual cues can perform reasonably well~\cite{li2014deepreid,ahmed2015improved,ding2015deep,cheng2016person,xiao2016learning,zheng2016mars,zhang2015beyond,joon2015person,huang2018unifying}. On the contrary, our task is to bridge a single portrait with a highly diverse set of samples, which is much more challenging and requires new techniques that go beyond visual matching. To tackle this problem, we propose a new framework that propagates labels through both visual and temporal links.
The basic idea is to take advantage of the \emph{identity invariance} along a person trajectory, \ie~all person instances along a \emph{continuous} trajectory in a video should belong to the same identity. The connections induced by tracklets, which we refer to as the \emph{temporal links}, are complementary to the \emph{visual links} based on feature similarity. For example, a trajectory can sometimes cover a wide range of facial images that cannot be easily associated based on visual similarity. With both \emph{visual} and \emph{temporal} links incorporated, our framework can form a large connected graph, thus allowing the identity information to be propagated over a very diverse collection of instances. While the combination of visual and temporal links provides a broad foundation for identity propagation, it remains a very challenging problem to carry out the propagation \emph{reliably} over a large real-world dataset. As we begin with only a single portrait, a few wrong labels during propagation can result in catastrophic errors downstream. Actually, our empirical study shows that conventional schemes like linear diffusion~\cite{zhu2002learning,zhou2004learning} even lead to substantially worse results. To address this issue, we develop a novel scheme called \emph{Progressive Propagation via Competitive Consensus}, which performs the propagation \emph{prudently}, spreading a piece of identity information only when there is high certainty. To facilitate the research on this problem setting, we construct a dataset named \emph{Cast Search in Movies (CSM)}, which contains $127K$ tracklets of $1,218$ cast identities from $192$ movies. The identities of all the tracklets are \emph{manually annotated}. Each cast identity also comes with a reference portrait. The benchmark is very challenging, where the person instances for each identity vary significantly in makeup, pose, clothing, illumination, and even age.
On this benchmark, our approach gets $63.49\%$ and $62.27\%$ mAP under the two settings. Compared to the $53.33\%$ and $42.16\%$ mAP of the conventional visual-matching method, this shows that matching by visual cues alone cannot solve this problem well, and that our proposed framework -- \emph{Progressive Propagation via Competitive Consensus} -- can significantly raise the performance. In summary, the main contributions of this work lie in four aspects: (1) We systematically study the problem of \emph{person search in videos}, which often arises in real-world practice, but remains widely open in research. (2) We propose a framework, which incorporates both the visual similarity and the identity invariance along a tracklet, thus allowing the search to be carried out much further. (3) We develop the \emph{Progressive Propagation via Competitive Consensus} scheme, which significantly improves the reliability of propagation. (4) We construct a dataset \emph{Cast Search in Movies (CSM)} with $127K$ manually annotated tracklets to promote the study of this problem.
\section{Introduction} \label{intro} Frame synchronization refers to the problem of locating a sync pattern periodically embedded into data and received over a channel (see, e.g., \cite{Ma2,LT,Sc,Ni}). In \cite{Ma2} Massey considered the situation of binary data transmitted across a white Gaussian noise channel. He showed that, given received data of fixed size which the sync pattern is known to belong to, the maximum likelihood rule consists of selecting the location that maximizes the sum of the correlation and a correction term. We are interested in the situation where the receiver wants to locate the sync pattern on the basis of sequential observations, which Massey refers to as the `one-shot' frame synchronization problem in \cite{Ma2}. Surprisingly, this setting has received much less attention than the fixed-length frame setting. In particular, it seems that this problem has not yet been given a precise formulation. In this note we propose a formulation where the decoder has to locate the sync pattern exactly and without delay, with the foreknowledge that the pattern is sent within a certain time interval that characterizes the level of asynchronism between the transmitter and the receiver. Our main result is the asymptotic characterization of the largest asynchronism level, with respect to the size of the sync pattern, for which a decoder can correctly locate the pattern with arbitrarily high probability.
\section{Problem formulation and result} \label{pform} We consider discrete-time communication over a discrete memoryless channel characterized by its finite input and output alphabets $\cal{X}$ and $\cal{Y}$, respectively, transition probability matrix $Q(y|x)$, for all $y\in {\cal{Y}}$ and $x\in {\cal{X}}$, and `noise' symbol $\star\in {\cal{X}}$.\footnote{Throughout this note we always assume that for all $y\in {\cal{Y}}$ there is some $x\in {\cal{X}}$ for which $Q(y|x)>0$.} The sync pattern $s^N$ consists of $N\geq 1$ symbols from $\cal{X}$ --- possibly also the $\star$ symbol. The transmission of the sync pattern starts at a random time $\nu$, uniformly distributed in $[1,2,\ldots,A]$, where the integer $A\geq 1$ characterizes the asynchronism level between the transmitter and the receiver. We assume that the receiver knows $A$ but not $\nu$. Before and after the transmission of the sync pattern, i.e., before time $\nu$ and after time $\nu+N-1$, the receiver observes noise. Specifically, conditioned on the value of $\nu$, the receiver observes independent symbols $Y_1,Y_2,\ldots$ distributed as follows. If $i\leq \nu-1$ or $i\geq \nu+N$, the distribution is $Q(\cdot|\star)$. At any time $i\in [\nu ,\nu+1,\ldots, \nu+N-1]$ the distribution is $Q(\cdot|{s_{i-\nu+1}})$, where $s_n$ denotes the $n$\/th symbol of $s^N$. To identify the instant when the sync pattern starts being emitted, the receiver uses a sequential decoder in the form of a stopping time $\tau$ with respect to the output sequence $Y_1,Y_2,\ldots$\footnote{Recall that a stopping time $\tau$ is an integer-valued random variable with respect to a sequence of random variables $\{Y_i\}_{i=1}^\infty$ so that the event $\{\tau=n\}$, conditioned on $\{Y_i\}_{i=1}^{n}$, is independent of $\{Y_{i}\}_{i=n+1}^{\infty}$ for all $n\geq 1$.} If $\tau=n$ the receiver declares that the sync pattern started being sent at time $n-N+1$ (see Fig.~\ref{grapheesss}). 
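A minimal simulation of this observation model (noise symbols before time $\nu$ and after time $\nu+N-1$, pattern symbols in between, with $\nu$ drawn uniformly from $\{1,\ldots,A\}$) might look as follows; encoding the channel $Q$ as nested dictionaries is our own choice for illustration.

```python
import random

def simulate_observations(Q, star, pattern, A, rng=random):
    """Sample nu uniformly from {1, ..., A} and the induced outputs Y_1..Y_{A+N-1}.

    Q       -- dict mapping each input symbol x to a dict {y: Q(y|x)}
    star    -- the designated 'noise' symbol in the input alphabet
    pattern -- the sync pattern s^N as a sequence of input symbols
    A       -- the asynchronism level
    """
    N = len(pattern)
    nu = rng.randint(1, A)  # transmission start time, uniform on {1..A}
    ys = []
    for i in range(1, A + N):  # times 1 .. A+N-1
        if nu <= i <= nu + N - 1:
            x = pattern[i - nu]  # s_{i - nu + 1}
        else:
            x = star             # noise before and after the pattern
        outs, probs = zip(*Q[x].items())
        ys.append(rng.choices(outs, weights=probs)[0])
    return nu, ys
```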
\begin{figure} \begin{center} \input{canaltemps} \caption{\label{grapheesss} Time representation of what is sent (upper arrow) and what is received (lower arrow). The `$\star$' represents the `noise' symbol. At time $\nu$ the sync pattern starts being sent and is detected at time $\tau$.} \end{center} \end{figure} The associated error probability is defined as $${\mathbb{P}}(\tau\ne \nu+N-1)\;.$$ \noindent We now define the {\emph{synchronization threshold}}. \begin{defs}\label{tasso} An {\emph{asynchronism exponent $\alpha$ is achievable}} if there exists a sequence of sync pattern/decoder pairs $\{(s^N,\tau_N)\}_{N\geq 1}$ such that $s^N$ and $\tau_N$ operate under asynchronism level $A=e^{\alpha N}$, and so that $${\mathbb{P}}(\tau_N\ne \nu+N-1)\overset{N\to \infty}{\longrightarrow}0\;.$$ The {\emph{synchronization threshold}}, denoted $\alpha(Q)$, is the supremum of the set of achievable asynchronism exponents. \end{defs} \noindent Our main result is the following theorem. \begin{thm}\label{unow} The synchronization threshold as defined above is given by $$\alpha(Q)=\max_x D(Q(\cdot|x)||Q(\cdot|\star))\;,$$ where $D(Q(\cdot|x)||Q(\cdot|\star))$ is the divergence (Kullback-Leibler distance) between $Q(\cdot|x)$ and $Q(\cdot|\star)$. Furthermore, if the asynchronism exponent is above the synchronization threshold, a maximum likelihood decoder that is revealed the entire received sequence of size $A+N-1$ makes an error with a probability that tends to one as $N\rightarrow \infty$.
\end{thm} A direct consequence of the theorem is that a sequential decoder can (asymptotically) locate the sync pattern as well as the optimal maximum likelihood decoder that operates on a non-sequential basis, having access to sequences of maximum size $A+N-1$.\footnote{Note that if the receiver has no foreknowledge on $A$, i.e., if $A$ can a priori be arbitrarily large, the problem is ill-posed: for any decoder, the probability of mislocation of the sync pattern can be made arbitrarily close to one for $A$ large enough.} Note that the synchronization threshold is the same as the one in \cite{TCW}, which is defined as the largest asynchronism level for which reliable communication can be achieved over point-to-point asynchronous channels. This should not come as a surprise since the limit of asynchronous communication is obtained in the zero rate regime where decoding errors are mainly due to mislocation of the transmitted message. We now prove the theorem by first presenting the direct part and then its converse. Recall that a type, or empirical distribution, induced by a sequence $z^N\in {\cal{Z}}^N$ is the probability distribution $\hat{P}$ on $\cal{Z}$ where $\hat{P}(a)$, $a\in {\cal{Z}}$, is equal to the number of occurrences of $a$ in $z^N$ divided by $N$. \begin{proof}[Proof of achievability] We show that a suitable sync pattern together with the sequential typicality decoder\footnote{\label{piednote}The sequential typicality decoder operates as follows. At time $n$, it computes the empirical distribution $\hat{P}$ induced by the sync pattern and the previous $N$ output symbols $y_{n-N+1},y_{n-N+2},\ldots,y_n$. If this distribution is close enough to $P$, i.e., if $|\hat{P}(x,y)-P(x,y)|\leq \mu$ for all $x,y$, the decoder stops, and declares $n-N+1$ as the time the sync pattern started being emitted. Otherwise it moves one step ahead and repeats the procedure.
Throughout the argument we assume that $\mu$ is a negligible strictly positive quantity.} achieves an asynchronism exponent arbitrarily close to the synchronization threshold. The intuition is as follows. Let $\bar{x}$ be a `maximally divergent symbol,' i.e., so that $$D(Q(\cdot|\bar{x})||Q(\cdot|\star)) = \alpha(Q)\;.$$ Suppose the sync pattern consists of $N$ repetitions of $\bar{x}$. With the sequential typicality decoder we already have almost all the properties we need. Indeed, if $\alpha<\alpha(Q)$, with negligible probability the noise generates a block of $N$ output symbols that is jointly typical with the sync pattern. Similarly, the block of output symbols generated by the sync pattern is jointly typical with the sync pattern with high probability. The only problem occurs when a block of $N$ output symbols is generated partly by noise and partly by the sync pattern. Indeed, consider for instance the block of $N$ output symbols from time $\nu-1$ up to $\nu+N-2$. These symbols are all generated according to the sync pattern, except for the first. Hence, whenever the decoder observes this portion of symbols, it makes an error with constant probability. The argument extends to any fixed length shift. The decoder is unable to locate the sync pattern exactly because a constant sync pattern has the undesirable property of looking almost the same when shifted to the right. Therefore, to prove the direct part of the theorem, we consider a sync pattern mainly composed of $\bar{x}$'s, but with a few $\star$'s mixed in\footnote{Indeed, any symbol different from $\bar{x}$ can be used.} to make sure that shifts of the sync pattern look sufficiently different from the original sync pattern. This allows the decoder to identify the sync pattern exactly, with no delay, and with probability tending to one as $N$ goes to infinity, for any asynchronism exponent less than $\alpha(Q)$. We formalize this below.
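As a sketch of how the threshold and the sequential typicality decoder described above could be implemented (our illustration; function names and the dictionary data representations are assumptions, not code from the paper):

```python
from math import log

def sync_threshold(Q, star):
    """alpha(Q) = max_x D(Q(.|x) || Q(.|star)).  Infinite when some
    input puts mass on an output that noise never produces."""
    def D(p, q):
        if any(p[y] > 0 and q.get(y, 0) == 0 for y in p):
            return float('inf')
        return sum(p[y] * log(p[y] / q[y]) for y in p if p[y] > 0)
    return max(D(Q[x], Q[star]) for x in Q)

def typicality_decoder(pattern, y, P, mu):
    """Stop at the first time n whose last N outputs have a joint type
    within mu (entrywise) of the target distribution P(x, y); declare
    n - N + 1 as the start of the sync pattern."""
    N = len(pattern)
    for n in range(N, len(y) + 1):
        window = y[n - N:n]
        Phat = {}
        for x, out in zip(pattern, window):
            Phat[(x, out)] = Phat.get((x, out), 0) + 1.0 / N
        if all(abs(Phat.get(k, 0) - P.get(k, 0)) <= mu
               for k in set(Phat) | set(P)):
            return n - N + 1  # declared start time
    return None  # pattern never detected
```

In the deterministic toy channel the threshold is infinite, matching the footnote on outputs that the noise never produces.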
Suppose that, for any arbitrarily large $K$, we can construct a sequence of patterns $\{s^N\}$ of increasing lengths such that each $s^N=s_1,s_2,\ldots,s_N$ satisfies the following two properties: \begin{itemize} \item[I.] all $s_i$'s are equal to $\bar{x}$, except for a fraction smaller than $1/K$ that are equal to $\star$; \item[II.] the Hamming distance between the pattern and any of its shifts of the form $$\underbrace{\star, \star,\ldots, \star}_{i\text{ times}}, s_1,s_2,\ldots,s_{N-i}\;\;\quad i\in [1,2,\ldots,N]$$ is linear in $N$. \end{itemize} Now let $A = e^{N (\alpha(Q) - \epsilon)}$, for some $\epsilon > 0$, and consider using patterns with the properties I and II in conjunction with the sequential typicality decoder $\tau=\tau_N$. By \cite[Lemma 2.6, p.32]{CK} and property I, the probability that $N$ output symbols entirely generated by noise are typical with the sync pattern is upper bounded by $\exp \left(- N (1-1/K)(\alpha(Q)-\delta)\right)$, where $\delta>0$ goes to zero as the typicality constant $\mu$ goes to zero.\footnote{See footnote \ref{piednote}.} Hence, by the union bound $${\mathbb{P}}\left(\{\tau<\nu\}\cup \{\tau\geq \nu+2N-1\}\right) \leq e^{- N (\epsilon-\delta-(\alpha(Q)-\delta)/K)}$$ which tends to zero as $N$ grows, for $\mu$ small enough and $K$ sufficiently large.\footnote{If $\alpha(Q)=\infty$ the upper bound is zero if $\mu$ is small enough.} If the $N$ observed symbols are partly generated by noise and partly by the sync pattern, by property II, the Chernoff bound, and the union bound we obtain $${\mathbb{P}}\left( \tau \in [\nu,\nu+1,\ldots, \nu+N-2]\right) \leq (N-1)e^{- \Omega(N)}$$ which vanishes as $N$ tends to infinity. We then deduce that $${\mathbb{P}}(\tau=\nu+N-1) \rightarrow 1$$ as $N \rightarrow \infty$. To conclude we give an explicit construction of a sequence of sync patterns satisfying the properties I and II above. To that aim we use maximal length shift register sequences (see, e.g.,\cite{Gol}).
For our purposes, the only property we use of such binary sequences of length $l=2^m-1$, $m\in [1,2,\ldots)$, is that they are of Hamming distance $(l+1)/2$ from any of their circular shifts. To construct the sync pattern we start by setting $s_i = \bar{x}$ for all $i \not\equiv 0$ mod $K$ where, without loss of generality, $K$ is chosen to satisfy $\lfloor \frac{N}{K} \rfloor=2^m-1$ for some $m\in [1,2,\ldots)$.\footnote{We use $\left\lfloor x\right\rfloor$ to denote the largest integer not exceeding $x$.} With this choice, property I is already satisfied. To specify the $\lfloor \frac{N}{K} \rfloor$ positions $i$ with $i \equiv 0$ mod $K$, pick a maximal length shift register sequence $m_1,m_2,\ldots, m_{\lfloor \frac{N}{K} \rfloor}$, and set $s_{jK} = \bar{x}$ if $m_j = 0$ and $s_{jK} = \star$ if $m_j = 1$, for any integer $j\leq \lfloor \frac{N}{K} \rfloor$. It can be readily verified, using the circular shift property of maximal length shift register sequences, that this construction yields patterns that satisfy property~II. \end{proof} \begin{proof}[Proof of the converse] We assume that $A=e^{N\alpha}$ with $$\alpha>\max_x D(Q(\cdot|x)||Q(\cdot|\star))$$ and show that the (optimal) maximum likelihood decoder that operates on the basis of sequences of maximum length $A+N-1$ yields a probability of error going to one as $N$ tends to infinity. We assume that the sync pattern $s^N$ is composed of $N$ identical symbols $s\in \cal{X}$. The case with multiple symbols is obtained by a straightforward extension. Suppose the maximum likelihood decoder not only is revealed the complete sequence $$y_1,y_2,\ldots,y_{A+N-1}\;,$$ but also knows that the sync pattern was sent in one of the $r$ distinct block periods of duration $N$, where $r$ denotes the integer part of $(A+N-1)/N$, as shown in Fig.~\ref{graphees2}.
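The pattern construction in the achievability proof above can be checked numerically. The sketch below is ours (function names are illustrative); the recurrence $(2,3)$, i.e.\ $a_n = a_{n-2} \oplus a_{n-3}$, encodes the primitive polynomial $x^3+x+1$:

```python
def msequence(seed, recurrence, length):
    """Binary sequence a_n = XOR_{j in recurrence} a_{n-j}; with a
    primitive feedback polynomial this is a maximal length shift
    register sequence of period 2^m - 1."""
    a = list(seed)
    while len(a) < length:
        bit = 0
        for j in recurrence:
            bit ^= a[-j]
        a.append(bit)
    return a[:length]

def make_pattern(N, K, mseq, xbar='x', star='*'):
    """Property I: all symbols xbar except at the positions jK
    (1-indexed); those positions carry star where the m-sequence is 1,
    which yields property II via the circular-shift distance."""
    s = [xbar] * N
    for j in range(1, N // K + 1):
        if mseq[j - 1] == 1:
            s[j * K - 1] = star
    return s

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))
```

For $m=3$ ($l=7$) the sequence is the classic $1110010$, at Hamming distance $(l+1)/2 = 4$ from every circular shift.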
\begin{figure} \begin{center} \input{line} \caption{\label{graphees2} Parsing of the entire received sequence of size $A+N-1$ into $r$ blocks $y^{(t_1)},y^{(t_2)},\ldots,y^{(t_r)}$ of length $N$, where the $i$\/th block starts at time $t_i$.} \end{center} \end{figure} Assuming $Q(y|\star)>0$ for all $y\in \cal Y $,\footnote{If $Q(y|\star)=0$ for some $y\in \cal Y $ we have $\alpha(Q)=\infty$, and there is nothing to prove.} straightforward algebra shows that the decoder outputs the time $t_i$, $i\in [1,2,\ldots,r]$, that maximizes $$f(y^{(t_i)})= \frac{Q(y^{(t_i)}|s^N)}{Q(y^{(t_i)}|\star)}\;.$$ Note that $f(y^{(t)})$ depends only on the type of the sequence $y^{(t)}$ since $s^N$ is the repetition of a single symbol. For conciseness, from now on we adopt the notation $Q_s(y^{(t)})$ instead of $Q(y^{(t)}|s^N)$ and $Q_\star(y^{(t)})$ instead of $Q(y^{(t)}|\star)$. Let $Q_s\pm \varepsilon_0$ denote the set of types (induced by sequences $y^N$) that are within $\varepsilon_0>0$ of $Q_s$ in $L_1$ norm, and let $E_1$ denote the event that the type of the $\nu$\/th block (corresponding to the sync transmission period) is not in $Q_s\pm \varepsilon_0$. It follows that \begin{align}\label{e1} {\mathbb{P}}(E_1) \leq e^{-N\varepsilon} \end{align} for some $\varepsilon=\varepsilon(\varepsilon_0)>0$.\footnote{Here we implicitly assume that $N$ is large enough so that the set of types $Q_s\pm \varepsilon_0$ is nonempty. } Let $\bar{Q}_s = \arg \max_{P\in Q_s\pm \varepsilon_0} f(P)$,\footnote{Note that $\bar{Q}_s$ may not be equal to $Q_s$.} where with a slight abuse of notation $f(P)$ is used to denote $f(y^N)$ for any sequence $y^N$ having type $P$.
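Since $f$ factorizes over symbols, the block parsing and likelihood-ratio maximization just described admit a direct sketch (ours; names and the dictionary representation of $Q$ are illustrative):

```python
from math import log

def log_f(block, s, Q, star):
    """log f(y) = sum_i [log Q(y_i|s) - log Q(y_i|star)]; for a
    repeated-symbol pattern this depends only on the type of the block."""
    return sum(log(Q[s][y]) - log(Q[star][y]) for y in block)

def ml_block_decoder(y, N, s, Q, star):
    """Parse y into consecutive length-N blocks and return the start
    time (1-indexed) of the block maximizing the likelihood ratio f."""
    starts = [i * N + 1 for i in range(len(y) // N)]
    return max(starts,
               key=lambda t: log_f(y[t - 1:t - 1 + N], s, Q, star))
```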
Now consider the event $E_2$ where the number of blocks generated by $Q_\star$ that have type $\bar{Q}_s$ is smaller than $$\frac{1}{2(N+1)^{|{\cal{X}}|}}e^{-N(D(\bar{Q}_s||Q_\star)-\alpha)}\;.$$ Using \cite[Lemma 2.6, p.32]{CK}, the expected number of blocks generated by $Q_\star$ that have type $\bar{Q}_s$ is lower bounded as \begin{align*} {\mathbb{E}}\left(\mbox{number of type $\bar{Q}_s$ blocks generated from $Q_\star$}\right)&\geq \frac{1}{(N+1)^{|{\cal{X}}|}}e^{-N D(\bar{Q}_s||Q_\star)}(r-1)\\ &\geq \text{poly}(N)e^{-N (D(\bar{Q}_s||Q_\star)-\alpha)}\;, \end{align*} and using Chebyshev's inequality we get \begin{align}\label{e2} {\mathbb{P}}\big(E_2\big)\leq \text{poly}(N)e^{-N(\alpha-D(\bar{Q}_s||Q_\star))} \end{align} where $\text{poly}(N)$ denotes a term that increases or decreases no faster than polynomially in $N$. Finally consider the event $E_3$ defined as the complement of $E_1\cup E_2$. Given that $E_3$ happens, the decoder sees at least $$\frac{1}{2(N+1)^{|{\cal{X}}|}}e^{-N(D(\bar{Q}_s||Q_\star)-\alpha)}$$ time slots that are at least as probable as the correct $\nu$\/th block. Hence, the probability of correct detection given that the event $E_3$ happens is upper bounded as \begin{align}\label{e3} {\mathbb{P}}(\text{corr. dec.}|E_3)\leq \text{poly}(N)e^{-N(\alpha -D(\bar{Q}_s||Q_\star))}\;. \end{align} We deduce from \eqref{e1}, \eqref{e2}, and \eqref{e3} that the probability of correct decoding is upper bounded as \begin{align*} {\mathbb{P}}\left(\text{corr. dec.}\right)&\leq {\mathbb{P}}(E_1)+{\mathbb{P}}(E_2)+{\mathbb{P}}(\text{corr. dec.}|E_3)\\ &\leq \left(e^{-N\varepsilon}+e^{-N(\alpha -D(\bar{Q}_s||Q_\star))}\right)\text{poly}(N)\;. \end{align*} Therefore if $$\alpha> D(\bar{Q}_s||Q_\star)\;,$$ the probability of correct detection goes to zero as $N$ tends to infinity.
Since $D(\bar{Q}_s||Q_\star)$ tends to $D({Q_s}||Q_\star)$ as $\varepsilon_0\downarrow 0$ by continuity of $D(\cdot||Q_\star)$,\footnote{We may assume $D(\cdot||Q_\star)$ is continuous because otherwise $\alpha(Q) = \infty$ and there is nothing to prove.} the result follows by maximizing $D(Q_s||Q_\star)$ over $s\in {\cal{X}}$. \end{proof} \bibliographystyle{amsplain}
\section*{Acknowledgements} This project was supported by the Department of Cardiology at the Hospital Universitario de Salamanca-IBSAL, the Data Science for Social Good Foundation, and the Gandhi Centre for Inclusive Innovation at Imperial College, London. \section{Conclusion} Deep learning has the potential to automate echocardiogram analysis, the most frequently used imaging technique for early detection of heart disease. While existing studies focus on matching or surpassing experts' ability to predict disease, this paper suggests that predicting normal heart function instead aligns data quality with the prediction objective and significantly reduces cardiologists' time investment in patients that do not need their expertise. This can pave the way for large-scale, low-cost preventative heart screenings, while reducing the time burden on skilled experts. \section{Design Process} Human-centred design offers a practical approach to arriving at innovative design solutions. Methods from IDEO.org's Design Kit \cite{ideo} were used as practical tools to analyze the potential impact of automating echocardiogram analysis with deep learning. As a first step, five guiding questions were formulated during a brainstorming session: \begin{enumerate} \setlength\itemsep{-0.2em} \item What is the context for echocardiography at HUSA? \item How is an echocardiogram study performed? \item What is the role of the cardiologist in this process? \item How are study results used for decision making? \item Who is impacted by automating the analysis process? \end{enumerate} Stakeholder mapping, testing assumptions with \textit{``5 Whys''}, mapping the journey of a cardiologist and performing a conceptual analysis of deep learning for echocardiogram analysis within a clinical setting were selected as appropriate tools for arriving at answers to the guiding questions.
Qualitative information to answer these questions was gathered over a series of video calls and a one-week site visit to HUSA, which included participation in a routine clinic visit to outlying rural areas, observing the process of taking echocardiogram scans of healthy, recovering and sick patients, and participating in specialists' morning meetings where upcoming medical procedures and decisions are discussed. \section{Design Discoveries} \paragraph{Context} The cardiology imaging unit at HUSA serves over 40 patients per day and performs over 10,000 echocardiogram studies every year. 40\% of the patients examined at HUSA have normal heart function. Cardiologists also perform echocardiogram studies during weekly visits to rural Spanish health centers to reduce the public transit burden of patients that come for routine checkups or on referral from their physician. 80\% of these patients require no further intervention. If the study indicates abnormal heart function, patients are referred to HUSA for further screenings. \paragraph{Echocardiogram Procedure} During a study, the cardiologist captures different views of the heart from specific vantage points for several heartbeats on video. A typical study takes 10 -- 20 minutes. After performing the study, the cardiologist analyses the video scans to calculate measurements that indicate heart condition. To calculate LVEF, the cardiologist selects the appropriate scan, identifies image frames corresponding to one cardiac cycle's end-systole and end-diastole, and manually traces the heart chamber in both frames. A software programme subsequently calculates the LVEF. The process is repeated and averaged over 5 heartbeats. A skilled specialist can annotate a study in 8 -- 10 minutes, accumulating to 4 -- 5 hours of annotations per day. \paragraph{Decision Making} At HUSA, an LVEF of less than 40\% is considered abnormal, 40\% to 60\% grey zone, and above 60\% normal.
For patients with a normal LVEF and no medical preconditions that put them at risk, deviation in the LVEF is tolerable. Measurement deviations become a concern when they are in the grey zone, especially when the patient's medical history gives reason to suspect heart disease. Patients at risk of heart disease will receive regular checkups, while serious cases will go for follow-up assessments with more expensive equipment. Treatment decisions are typically made by a multidisciplinary team of experts who consider the health condition of the patient holistically. \paragraph{Data Quality and Variability} Echocardiogram studies are guided by a standardized medical protocol. Nonetheless, variability exists in studies, making automated analysis more challenging than for other imaging modalities. The specialist's skill affects the scan quality. Occasionally a scan is retaken due to quality issues, resulting in multiple scans for the same view. The anatomy and health status of the patient further affect data quality. The scans of older and unhealthy patients, who are at greater risk of heart disease, are frequently of poorer quality due to rib protrusions, obesity or movement during the procedure. Image quality also varies across scanner models. HUSA employs multiple different models simultaneously. Outlying clinics have older models, which produce lower quality scans. \paragraph{Implications of Discoveries} Predictive results are adversely affected by poor data quality. Variability in data quality biases predictive accuracy against patients that are more likely to have a heart condition and that cannot access newer scanner models. This insight can be leveraged: by using automated analysis to filter out patients with normal heart function, cardiologists can better allocate their time and expertise. This could lead to positive downstream effects, such as focusing more attention on sick patients.
In addition, this could enable the public health system to increase its capacity for routine checkups for early diagnosis. \section{Introduction} Cardiovascular diseases take the lives of 17.9 million people every year, accounting for 31\% of all global deaths \cite{who2017cvd}. Many consequences of heart disease could be mitigated with early diagnosis. Echocardiograms are ultrasound video scans of the heart that are non-invasive, side-effect free, cheaper and faster to perform than other imaging techniques. They are the most frequently used cardiac imaging test for the screening, diagnosis, management, and follow-up of patients with suspected or known heart diseases. The left ventricle ejection fraction (LVEF) measures the percentage of blood leaving the heart each time it contracts. It can be calculated from echocardiograms and is used as a prognostic indicator to identify and follow the progression of heart disease \cite{Drazner2011}. Automating the analysis of echocardiogram scans is considered an important task for increasing access to health care \cite{loh2017deep}, particularly for early disease detection in remote and low resource areas \cite{zhang2018fully}. Further benefits lie in improving the efficiency of medical care and for establishing effective public health programmes \cite{MeleroAlegriae2019salmanticor}. However, the motivation for the use of deep learning in echocardiogram analysis is typically the potential to reduce human error and outperform human experts. Recent research has indeed shown that convolutional neural network models can surpass human-level performance for estimating the LVEF \cite{ouyang2020video}. \citet{bizopoulos2019survey} found that studies using deep learning for echocardiography tended to be highly variable in terms of the research questions that were asked, and inconsistent in the metrics used to report results. 
The goal of this study was to perform a qualitative analysis of the impact of automating echocardiogram analysis at the Department of Cardiology at the Hospital Universitario de Salamanca-IBSAL (HUSA) in Northern Spain. A secondary objective was to build a pipeline for automated, end-to-end echocardiogram analysis with deep learning for HUSA. This paper discusses the design process and discoveries, their impact on global health and on defining the prediction goal for the pipeline, and the results achieved. \section{Deep Learning Pipeline and Results} HUSA's echocardiogram dataset consists of 25,000 annotated studies collected over 10 years. Each study contains video data in the DICOM format. Segmentation masks and measurements were manually recorded by cardiologists and have been used to derive labels. Anonymized patient metadata is available in a supplementary database. Data cleaning, pre-processing and filtering was done to ensure consistent data representation and sampling. The deep learning pipeline\footnote{\url{https://github.com/dssg/usal_echo_public}} replicates a cardiologist's process of analyzing an echocardiogram in three steps: (1)~\textit{classify views}: parasternal long-axis, apical 2-chamber, and apical 4-chamber; (2)~\textit{segment chambers}: left ventricle and left atrium; (3)~\textit{estimate measurement}: LVEF. The pipeline deploys the models of \citet{zhang2018fully}, using a VGGNet \cite{simonyan2014very} for classification and U-net \cite{ronneberger2015u} for segmentation. The LVEF is then calculated from the segmentation masks. Based on the design discoveries, the objective of the deep learning pipeline is to identify patients with normal heart function. Thus, when executed in sequence, the pipeline predicts heart health to be normal, grey zone or abnormal. Classification results had an accuracy of 98\%. The segmentation model DICE scores ranged from $0.83$ to $0.88$ depending on view and heart chamber.
After estimating the LVEF, the pipeline predicts normal heart condition with 80\% precision and 30\% sensitivity. High precision is necessary to ensure that patients at risk receive care. Sensitivity indicates the potential time saving for cardiologists.
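As a minimal illustration of the decision rule implied by HUSA's thresholds (our sketch; the function name `triage` and the returned labels are not taken from the project code):

```python
def triage(lvef):
    """Map an estimated LVEF (percent) to HUSA's categories:
    below 40% abnormal, 40-60% grey zone, above 60% normal."""
    if lvef < 40.0:
        return 'abnormal'
    if lvef <= 60.0:
        return 'grey zone'
    return 'normal'
```

In the proposed workflow, only the grey-zone and abnormal outputs would be routed to a cardiologist for manual review.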
\section{Introduction} The LHC discovery of a scalar with mass of $125$ GeV~\cite{Aad:2012tfa,Chatrchyan:2012xdj} completed the Standard Model particle content. The fact that precision measurements of the properties of this particle~\cite{Aad:2015zhl,Khachatryan:2016vau} indicate that it behaves very much in a Standard Model (SM)-like manner is a further confirmation of the validity and effectiveness of that model. Nonetheless, the SM leaves a lot to be explained, and many extensions of the theory have been proposed to attempt to explain such diverse phenomena as the existence of dark matter, the observed universal matter-antimatter asymmetry and others. In particular, numerous SM extensions consist of enlarged scalar sectors, with singlets, both real and complex, being added to the SM Higgs doublet \cite{McDonald:1993ex,Burgess:2000yq,O'Connell:2006wi, BahatTreidel:2006kx,Barger:2007im,He:2007tt,Davoudiasl:2004be,Basso:2013nza,Fischer:2016rsh}; or doublets, the simplest example of which is the two-Higgs doublet model (2HDM)~\cite{Lee:1973iz,Branco:2011iw}. Certain versions of singlet-doublet models provide dark matter candidates, as does the Inert version of the 2HDM (IDM)~\cite{Deshpande:1977rw,Barbieri:2006dq,LopezHonorez:2006gr,Dolle:2009fn, Honorez:2010re,Gustafsson:2012aj,Swiezewska:2012ej,Swiezewska:2012eh,Arhrib:2013ela,Klasen:2013btp,Abe:2014gua, Krawczyk:2013jta,Goudelis:2013uca,Chakrabarty:2015yia,Ilnicka:2015jba}. Famously, the 2HDM was introduced in 1973 by Lee to allow for the possibility of spontaneous CP violation. But models with dark matter candidates {\em and} extra sources of CP violation (other than the SM mechanism of CKM-matrix generated CP violation) are rare. Even rarer are models for which a ``dark" sector exists, providing viable dark matter candidates, and where the extra CP violation originates exclusively in the ``dark" sector. 
To the best of our knowledge, the only model with scalar CP violation in the dark sector is the recent work of Refs.~\cite{Cordero-Cid:2016krd,Sokolowska:2017adz}, for which a three-doublet model was considered. The main purpose of Refs.~\cite{Cordero-Cid:2016krd,Sokolowska:2017adz} was to describe the dark matter properties of the model. In Ref.~\cite{TALK} an argument was presented to prove that the model is actually CP violating, adapting the methods of Refs.~\cite{Grzadkowski:2016lpv,Belusca-Maito:2017iob} for the complex 2HDM (C2HDM). In the current paper we will propose a model, simpler than the one in~\cite{Cordero-Cid:2016krd}, but which boasts the same interesting properties, to wit: (a) an SM-like Higgs boson, $h$, ``naturally" aligned due to the vacuum of the model preserving a discrete symmetry; (b) a viable dark matter candidate, the stability of which is guaranteed by the aforementioned vacuum and whose mass and couplings satisfy all existing dark matter search constraints; and (c) extra sources of CP violation exist in the scalar sector of the model, but {\em only} in the ``dark" sector. This {\em hidden} CP violation will mean that the SM-like scalar, $h$, behaves almost exactly like the SM Higgs boson, and in particular (unless contributions from a high number of loops are considered) $h$ has couplings to gauge bosons and fermions which are exactly those of a CP-even scalar. This is all the more remarkable since the CP violation of the proposed model is {\em explicit}. The extra particle content of the model, as advertised, is simpler than the model of~\cite{Cordero-Cid:2016krd}, consisting of two Higgs doublets (both of hypercharge $Y = 1$) and a real singlet ($Y = 0$). This is sometimes known as the Next-to-2HDM (N2HDM), and was the subject of a thorough study in~\cite{Muhlleitner:2016mzt}.
The N2HDM version considered in this paper uses a different discrete symmetry than the symmetries considered in~\cite{Muhlleitner:2016mzt}, designed, as will be shown, to produce both dark matter and dark CP violation. The paper is organised as follows: in section~\ref{sec:pot} we will introduce the model, explaining in detail its construction and symmetries, as well as the details of spontaneous symmetry breaking that occurs when one of the fields acquires a vacuum expectation value (vev). In section~\ref{sec:sca} we will present the results of a parameter space scan of the model, where all existing constraints -- both theoretical and experimental (from colliders and dark matter searches) -- are taken into account; deviations from the SM behaviour of $h$ in the diphoton channel, stemming from the existence of a charged scalar, will be discussed, as will the contributions of the model to dark matter observables; in section~\ref{sec:dcp} we will discuss how CP violation arises in the dark sector, and how it might have a measurable impact in future colliders. Finally, we conclude in section~\ref{sec:conc}. \section{The scalar potential and possible vacua} \label{sec:pot} For our purposes, the N2HDM considered is very similar to that discussed in Ref.~\cite{Muhlleitner:2016mzt}, in that the fermionic and gauge sectors are identical to the SM and the scalar sector is extended to include an extra doublet and also a singlet scalar field -- thus the model boasts two scalar doublets, $\Phi_1$ and $\Phi_2$, and a real singlet $\Phi_S$. As in the 2HDM, we will require that the Lagrangian be invariant under a sign flip of some scalar fields, so that the number of free parameters of the model is reduced and no tree-level flavour-changing neutral currents (FCNC) occur~\cite{Glashow:1976nt,Paschos:1976ay}. 
The difference between the current work and that of~\cite{Muhlleitner:2016mzt} consists in the discrete symmetry applied to the Lagrangian -- here, we will consider a single $Z_2$ symmetry of the form \begin{equation} \Phi_1 \,\rightarrow \,\Phi_1\;\;\;,\;\;\; \Phi_2 \,\rightarrow \,-\Phi_2\;\;\;,\;\;\; \Phi_S \,\rightarrow \, -\Phi_S\,. \label{eq:z2z2} \end{equation} With these requirements, the most general scalar potential invariant under $SU(2)\times U(1)$ is given by \begin{eqnarray} V &=& m_{11}^2 |\Phi_1|^2 \,+\, m_{22}^2 |\Phi_2|^2 \,+\, \frac{1}{2} m^2_S \Phi_S^2 \, +\, \left(A \Phi_1^\dagger\Phi_2 \Phi_S \,+\,h.c.\right) \nonumber \\ & & \,+\, \frac{1}{2} \lambda_1 |\Phi_1|^4 \,+\, \frac{1}{2} \lambda_2 |\Phi_2|^4 \,+\, \lambda_3 |\Phi_1|^2 |\Phi_2|^2 \,+\, \lambda_4 |\Phi_1^\dagger\Phi_2|^2\,+\, \frac{1}{2} \lambda_5 \left[\left( \Phi_1^\dagger\Phi_2 \right)^2 + h.c. \right] \nonumber \\ & & \,+\,\frac{1}{4} \lambda_6 \Phi_S^4 \,+\, \frac{1}{2}\lambda_7 |\Phi_1|^2 \Phi_S^2 \,+\, \frac{1}{2} \lambda_8 |\Phi_2|^2 \Phi_S^2 \,, \label{eq:pot} \end{eqnarray} where, with the exception of $A$, all parameters in the potential are real. As for the Yukawa sector, we consider all fermion fields {\em neutral} under this symmetry. As such, only the doublet $\Phi_1$ couples to fermions, and the Yukawa Lagrangian is therefore \begin{equation} -{\cal L}_Y\,=\, \lambda_t \bar{Q}_L \tilde{\Phi}_1 t_R\,+\,\lambda_b \bar{Q}_L \Phi_1 b_R \,+\,\lambda_\tau \bar{L}_L \Phi_1 \tau_R\,+\,\dots \label{eq:yuk} \end{equation} where we have only written the terms corresponding to the third generation of fermions, with the Yukawa terms for the remaining generations taking an analogous form. The left-handed doublets for quarks and leptons are denoted by $Q_L$ and $L_L$, respectively; $t_R$, $b_R$ and $\tau_R$ are the right-handed top, bottom and $\tau$ fields; and $\tilde{\Phi}_1$ is the charge conjugate of the doublet $\Phi_1$. 
Notice that since the two doublets have the same quantum numbers and are not physical (only the mass eigenstates of the model will be physical particles), the potential must be invariant under basis changes on the doublets. This is a well-known property of 2HDMs, which the N2HDM inherits: any unitary transformation of these fields, $\Phi_i^\prime = U_{ij}\Phi_j$ with a $2\times 2$ unitary matrix $U$, is an equally valid description of the theory. Though the theory is invariant under such transformations, its parameters are not and undergo transformations dependent on $U$. A few observations are immediately in order: \begin{itemize} \item Since only $\Phi_1$ has Yukawa interactions it must have a vev to give mass to all charged fermions\footnote{And neutrinos as well, if one wishes to consider Dirac mass terms for them.}. In fact the Yukawa sector of this model is identical to the one of the SM, and a CKM matrix arises there, as in the SM. \item The fact that all fermions couple to a single doublet, $\Phi_1$, automatically ensures that no scalar-mediated tree-level FCNC occur, as in the 2HDM with a $Z_2$ symmetry~\cite{Glashow:1976nt,Paschos:1976ay}. \item The $Z_2$ symmetry considered eliminates many possible terms in the potential, but does {\em not} force all of the remaining ones to be real -- in particular, both the quartic coupling $\lambda_5$ and the cubic one, $A$, can be {\em a priori} complex. However, using the basis freedom to redefine doublets, we can absorb one of those complex phases into, for instance, $\Phi_2$. We choose, without loss of generality, to render $\lambda_5$ real. \end{itemize} A complex phase on $A$ renders the model explicitly CP violating. 
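Explicitly, under a rephasing of the second doublet the two complex couplings transform as
\begin{equation*}
\Phi_2 \,\rightarrow\, e^{i\theta}\,\Phi_2 \;\;\Longrightarrow\;\; \lambda_5 \,\rightarrow\, e^{2i\theta}\lambda_5\;\;\;,\;\;\; A \,\rightarrow\, e^{i\theta}A\,,
\end{equation*}
so that the choice $\theta = -\arg(\lambda_5)/2$ renders $\lambda_5$ real, while $A$ in general retains a complex phase.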
Considering, for instance, the CP transformation of the scalar fields given by \begin{equation} \Phi_1^{CP}(t,\vec{r})\,=\,\Phi_1^*(t,-\vec{r})\;\;\;,\;\;\; \Phi_2^{CP}(t,\vec{r})\,=\,\Phi_2^*(t,-\vec{r})\;\;\;,\;\;\; \Phi_S^{CP}(t,\vec{r})\,=\,\Phi_S(t,-\vec{r})\;\;\;, \label{eq:cp} \end{equation} we see that such a CP transformation, to be a symmetry of the potential, would require all of its parameters to be real. Notice that the CP transformation of the singlet trivially does not involve complex conjugation as $\Phi_S$ is real. In fact, this is a well-known CP property of singlet fields \cite{Branco:1999fs}. One point of caution is in order: the complex phase of $A$ is not invariant under the specific CP transformation of Eq.~\eqref{eq:cp}, but by itself that does not prove that the model is explicitly CP violating. In fact, one could consider some form of generalized CP (GCP) transformation involving, other than complex conjugation of the fields, also doublet redefinitions: $\Phi_i^{GCP}(t,\vec{r}) \,=\,X_{ij}\Phi_j^*(t,-\vec{r})$. The model can only be said to be explicitly CP violating if there does not exist any CP transformation under which it is invariant. So, conceivably, though the model breaks the CP symmetry defined by the transformation of Eq.~\eqref{eq:cp}, it might be invariant under some other one. The point is moot, however: As we will see ahead, the vacuum of the model we will be considering is invariant under the CP transformation of Eq.~\eqref{eq:cp} (and the $Z_2$ symmetry of Eq.~\eqref{eq:z2z2}), but the theory has CP violation. Thus the CP symmetry was broken to begin with, and hence the model is explicitly CP violating. Let us consider now the possibility of spontaneous symmetry breaking in which only the $\Phi_1$ doublet acquires a neutral non-zero vev: $\langle \Phi_1 \rangle = (0,v/\sqrt{2})^T$. 
Given the structure of the potential in Eq.~\eqref{eq:pot}, the minimisation conditions imply that this is a possible solution, with all scalar components of the doublets (except the real, neutral one of $\Phi_1$) and the singlet equal to zero, provided that the following condition is obeyed: \begin{equation} m^2_{11}\,+\,\frac{1}{2}\lambda_1\,v^2\,=\,0\,. \label{eq:min} \end{equation} Since all fermion and gauge boson masses are therefore generated by $\Phi_1$, it is mandatory that $v = 246$ GeV as usual. At this vacuum, then, it is convenient to rewrite the doublets in terms of their component fields as \begin{equation} \Phi_1\,=\,\left( \begin{array}{c} G^+ \\ \frac{1}{\sqrt{2}} (v + h \,+\, \mbox{i} G^0)\end{array}\right) \;\;\; , \;\;\; \Phi_2\,=\, \left( \begin{array}{c} H^+ \\ \frac{1}{\sqrt{2}}(\rho \,+\, \mbox{i} \eta)\end{array}\right) \, , \label{eq:doub} \end{equation} where $h$ is the SM-like Higgs boson, with interaction vertices with fermions and gauge bosons identical to those expected in the SM (the diphoton decay of $h$, however, will differ from its SM counterpart). The mass of the $h$ field is found to be given by \begin{equation} m^2_h \,=\,\lambda_1\,v^2\,, \label{eq:mh} \end{equation} and since $m_h = 125$ GeV, this fixes the value of one of the quartic couplings, $\lambda_1 \simeq 0.258$. The neutral and charged Goldstone bosons $G^0$ and $G^+$, respectively, are found to be massless as expected, and the squared mass of the charged scalar $H^+$ is given by \begin{equation} m^2_{H^+}\,=\,m^2_{22}\,+\,\frac{\lambda_3}{2}\,v^2 \;. 
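The two mass relations above are simple enough to check numerically; the sketch below reproduces the quoted value $\lambda_1 \simeq 0.258$ from Eq.~\eqref{eq:mh} and evaluates the charged-scalar mass formula for sample inputs of ours (the values of $m_{22}^2$ and $\lambda_3$ used in the example are illustrative, not taken from the scan):

```python
import math

v = 246.0    # GeV, electroweak vev
mh = 125.0   # GeV, SM-like Higgs mass

# m_h^2 = lambda_1 v^2  =>  lambda_1 = (m_h / v)^2  ~ 0.258, as quoted
lam1 = (mh / v) ** 2

def mHp(m22sq, lam3):
    """Charged scalar mass in GeV: m_{H+}^2 = m_22^2 + (lambda_3 / 2) v^2.

    m22sq in GeV^2, lam3 dimensionless; both are free inputs here.
    """
    return math.sqrt(m22sq + 0.5 * lam3 * v ** 2)
```

For instance, `mHp(1e5, 1.0)` gives a charged scalar of roughly 361 GeV, within the mass range later used in the parameter scan.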
\end{equation} Finally, the two neutral components of the doublet $\Phi_2$, $\rho$ and $\eta$, mix with the singlet component $\Phi_S \equiv s$ yielding a $3\times 3$ mass matrix, \begin{equation} \left[M^2_N\right]\,=\,\left( \begin{array}{ccc} m^2_{22}\,+\,\frac{1}{2}\bar{\lambda}_{345}\,v^2 & 0 & -\mbox{Im}(A)\,v \\ 0 & m^2_{22}\,+\,\frac{1}{2}\lambda_{345}\,v^2 & \mbox{Re}(A)\,v \\ -\mbox{Im}(A)\,v & \mbox{Re}(A)\,v & m^2_S\,+\,\frac{1}{2}\lambda_7\,v^2 \end{array}\right)\,, \label{eq:mn} \end{equation} with $\bar{\lambda}_{345} = \lambda_3 + \lambda_4 - \lambda_5$ and $\lambda_{345} = \lambda_3 + \lambda_4 + \lambda_5$. There are therefore three neutral scalars other than $h$, which we call $h_1$, $h_2$ and $h_3$, in increasing order of their masses. This mass matrix can then be diagonalized by an orthogonal rotation matrix $R$, such that \begin{equation} R\,M^2_N\,R^T\;=\; \mbox{diag}\left(m^2_{h_1}\,,\,m^2_{h_2}\,,\,m^2_{h_3}\right) \end{equation} and the connection between the original fields and the mass eigenstates is given by \begin{equation} \left(\begin{array}{c} h_1 \\ h_2 \\ h_3 \end{array} \right)\;=\; R\, \left(\begin{array}{c} \rho \\ \eta \\ s \end{array} \right)\,. \end{equation} The rotation matrix $R$ can be parametrized in terms of three angles, $\alpha_1$, $\alpha_2$ and $\alpha_3$, such that \begin{equation} R\,=\,\left( \begin{array}{ccc} c_{\alpha_1} c_{\alpha_2} & s_{\alpha_1} c_{\alpha_2} & s_{\alpha_2}\\ -(c_{\alpha_1} s_{\alpha_2} s_{\alpha_3} + s_{\alpha_1} c_{\alpha_3}) & c_{\alpha_1} c_{\alpha_3} - s_{\alpha_1} s_{\alpha_2} s_{\alpha_3} & c_{\alpha_2} s_{\alpha_3} \\ - c_{\alpha_1} s_{\alpha_2} c_{\alpha_3} + s_{\alpha_1} s_{\alpha_3} & -(c_{\alpha_1} s_{\alpha_3} + s_{\alpha_1} s_{\alpha_2} c_{\alpha_3}) & c_{\alpha_2} c_{\alpha_3} \end{array} \right)\,, \label{eq:matR} \end{equation} where for convenience we use the notation $c_i = \cos \alpha_i$, $s_j = \sin \alpha_j$. 
Without loss of generality, we may take the angles $\alpha_i$ in the interval $[-\pi/2\,,\,\pi/2]$. In the following we discuss several phenomenological properties of this model. The vacuum preserves the $Z_2$ symmetry. As a result, the physical eigenstates emerging from $\Phi_2$ and $\Phi_S$, {\em i.e.} the charged scalar $H^\pm$ and the neutral ones $h_1$, $h_2$ and $h_3$, carry a quantum number -- a ``dark charge'' equal to $-1$ -- which is preserved in all interactions, to all orders of perturbation theory. In the following we refer to these four eigenstates as ``dark particles''. On the other hand, the SM-like particles ($h$, the gauge bosons and all fermions) have ``dark charge'' equal to 1. The preservation of this quantum number means that dark particles must always be produced in pairs, and their decays must always include at least one dark particle. Therefore, the lightest of these dark particles -- which we will choose in our parameter scans to be the lightest neutral state, $h_1$ -- is {\em stable}. Thus the model provides a dark matter candidate. The model indeed shares many features with the Inert version of the 2HDM, wherein all particles from the ``dark doublet'' are charged under a discrete symmetry and the lightest of them is stable. The main difference with the current model is the mixing that occurs between the two neutral components of the doublet and the singlet due to the cubic coupling $A$, which can be appreciated from the mass matrix of Eq.~\eqref{eq:mn}. Where the charged scalar is concerned, though, most of the phenomenology of this model coincides with that of the Inert 2HDM. \section{Parameter scan, the diphoton signal and dark matter observables} \label{sec:sca} With the model specified, we can set about exploring its available parameter space, taking into account all of the existing theoretical and experimental constraints. 
We performed an extensive scan over the parameter space of the model ($100\,000$ different combinations of the parameters of the potential of Eq.~\eqref{eq:pot}), requiring that: \begin{itemize} \item The correct electroweak symmetry breaking occurs, and the correct value for the mass of the observed Higgs boson is obtained; as already explained, this is achieved by requiring that $v = 246$ GeV in Eq.~\eqref{eq:doub} while at the same time the parameters of the model are such that Eqs.~\eqref{eq:min} and~\eqref{eq:mh} are satisfied. \item By construction, all tree-level interactions and vertices of the Higgs particle $h$ are identical to those of the SM Higgs boson. As a consequence, all LHC production cross sections for $h$ are identical to the values expected in the SM. Additionally, all decay widths of $h$, apart from the diphoton case to be treated explicitly below, are identical to their SM values up to electroweak corrections. This statement holds as we require the $h_1$ mass to be larger than roughly 70 GeV, to eliminate the possibility of the decay $h\rightarrow h_1 h_1$ (when this decay channel is open it tends to affect the branching ratios of $h$, making it difficult to have $h$ be SM-like). \item The quartic couplings of the potential cannot be arbitrary. In particular, they must be such that the theoretical requirements of boundedness from below (BFB) -- that the potential always tends to $+\infty$ along any direction where the scalar fields take arbitrarily large values -- and perturbative unitarity -- that the model remains both perturbative and unitary in all $2\rightarrow 2$ scalar scattering processes -- are satisfied. The model considered in the current paper differs from the N2HDM discussed in Ref.~\cite{Muhlleitner:2016mzt} only via the cubic coefficient $A$, so the tree-level BFB and perturbative unitarity constraints described there (in sections 3.1 and 3.2) are exactly the ones we should use here. 
\item The constraints on the scalar sector arising from the Peskin-Takeuchi electroweak precision parameters $S$, $T$ and $U$~\cite{Peskin:1990zt,Peskin:1991sw,Maksymyk:1993zm} are required to be satisfied in the model. Not much of the parameter space is eliminated due to this constraint, but it is still considered in full. \item Since the charged scalar $H^\pm$ does not couple to fermions, all $B$-physics bounds usually constraining its interactions are automatically satisfied. The direct LEP bound of $m_{H^\pm} > 90$ GeV assumed a 100\% branching ratio of $H^\pm$ to fermions, so this constraint also need not be considered here. \item The dark matter observables were calculated using \texttt{MicrOMEGAs}~\cite{Belanger:2006is,Belanger:2013oya} and compared to the results from Planck~\cite{Aghanim:2018eyx} and XENON1T~\cite{Aprile:2018dbl}. \item Since none of the scalars apart from $h$ couples to fermions, no electric dipole moment constraints need be considered, this despite the fact that CP violation occurs in the model. \end{itemize} With these restrictions, the scan over the parameters of the model was such that: \begin{itemize} \item The masses of the neutral dark scalars $h_1$ and $h_2$ and the charged one, $H^\pm$, were chosen to vary between 70 and 1000 GeV. The last neutral mass, that of $h_3$, is obtained from the remaining parameters of the model as explained in~\cite{Muhlleitner:2016mzt}. \item The mixing angles of the neutral mass matrix, Eq.~\eqref{eq:matR}, were chosen at random in the interval $[-\pi/2\,,\,\pi/2]$. \item The quartic couplings $\lambda_2$ and $\lambda_6$ are constrained, from BFB constraints, to be positive, and were chosen at random in the intervals $[0\,,\,9]$ and $[0\,,\,17]$, respectively. $\lambda_8$ is chosen in the interval $[-26\,,\,26]$. \item The quadratic parameters $m^2_{22}$ and $m^2_S$ were taken between 0 and $10^6$ GeV$^2$. 
\end{itemize} All other parameters of the model can be obtained from these using the expressions for the masses of the scalars and the definition of the matrix $R$. The scan ranges for the quartic couplings are chosen larger than the maximally allowed ranges after imposing unitarity and BFB. Therefore, all of the possible values for these parameters are sampled. We have used the implementation of the model, and all of its theoretical constraints, in {\tt ScannerS}~\cite{Coimbra:2013qq}. {\tt N2HDECAY}~\cite{Engeln:2018mbg}, a code based on {\tt HDECAY}~\cite{Djouadi:1997yw,Djouadi:2018xqq}, was used for the calculation of scalar branching ratios and total widths, as in~\cite{Muhlleitner:2016mzt}. As we already explained, the tree-level interactions of $h$ are identical to the ones of a SM Higgs boson of the same mass. The presence of the charged scalar $H^\pm$, however, changes the diphoton decay width of $h$, since a new loop, along with those of the $W$ gauge boson and charged fermions, contributes to that width. This is identical to what occurs in the Inert model, and we may use the formulae of, for instance, Ref.~\cite{Swiezewska:2012ej}. Thus we find that the diphoton decay width in our model is given by \begin{equation} \Gamma(h\rightarrow \gamma\gamma)\,=\,\frac{G_F \alpha^2 m^3_h}{128\sqrt{2} \pi^3}\, \left|\sum_f N_{c,f} Q^2_f A_{1/2}\left(\frac{4 m^2_f}{m^2_h}\right) \,+\,A_1 \left(\frac{4 m^2_W}{m^2_h}\right) \,+\,\frac{\lambda_3 v^2}{2 m^2_{H^\pm}}\,A_0 \left(\frac{4 m^2_{H^\pm}}{m^2_h}\right) \right|^2\,, \label{eq:ampli} \end{equation} where the sum runs over all fermions (of electric charge $Q_f$ and number of colour degrees of freedom $N_{c,f}$) and $A_0$, $A_{1/2}$ and $A_1$ are the well-known form factors for spin 0, 1/2 and 1 particles (see for instance Refs.~\cite{Spira:1997dg,Spira:2016ztx}). 
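The loop form factors entering Eq.~\eqref{eq:ampli} can be evaluated directly. The sketch below is our own and uses one common convention for $A_0$, $A_{1/2}$ and $A_1$ with argument $\tau = 4m^2/m_h^2 \geq 1$ (loop particles above the diphoton threshold, as is the case here); it reproduces the familiar heavy-mass limits $A_{1/2}\to 4/3$, $A_1\to -7$, $A_0\to 1/3$ and the standard top and $W$ values. The sample masses are ours:

```python
import math

def f(tau):
    """Triangle function for tau = 4 m^2 / m_h^2 >= 1 (above threshold)."""
    return math.asin(1.0 / math.sqrt(tau)) ** 2

def A0(tau):
    """Spin-0 form factor (charged scalar in the loop); A0 -> 1/3 as tau -> inf."""
    return -tau * (1.0 - tau * f(tau))

def A12(tau):
    """Spin-1/2 form factor (fermion loops); A12 -> 4/3 as tau -> inf."""
    return 2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

def A1(tau):
    """Spin-1 form factor (W loop); A1 -> -7 as tau -> inf."""
    return -(2.0 + 3.0 * tau + 3.0 * tau * (2.0 - tau) * f(tau))

mh, mt, mW = 125.0, 173.0, 80.4          # GeV, sample masses (ours)
top = A12(4.0 * mt ** 2 / mh ** 2)       # dominant fermion loop, ~ 1.4
W = A1(4.0 * mW ** 2 / mh ** 2)          # dominant contribution, ~ -8.3
```

Since $A_1$ dominates and is negative, a negative $\lambda_3$ (which makes the scalar term in Eq.~\eqref{eq:ampli} add constructively to the $W$ loop) enhances the diphoton rate, as discussed below.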
The charged Higgs contribution to the diphoton amplitude in Eq.~\eqref{eq:ampli} changes this decay width, and therefore the total decay width, hence all branching ratios, of $h$ with respect to the SM expectation. However, the diphoton decay width being so small compared to the main decay channels for $h$ ($b\bar{b}$, $ZZ$ and $WW$), the overall changes of the total $h$ width are minimal. In fact, numerical checks for our allowed parameter points have shown that the branching ratios of $h$ to $b\bar{b}$, $\tau\bar{\tau}$, $ZZ$ and $WW$ change by less than 0.05\% compared to the corresponding SM quantities -- therefore, all current LHC constraints for the observed signal rates of $h$ in those channels are satisfied at the 1$\sigma$ level. As for the branching ratio into two photons, it can and does change by larger amounts, as can be appreciated from Fig.~\ref{fig:diph}. In that figure we plot the ratio of the branching ratio of $h$ into two photons to its SM value as a function of the charged Higgs mass. \begin{figure}[t] \centering \includegraphics[height=8cm,angle=0]{diphoton_charged.jpeg} \caption{Ratio of the branching ratio of $h$ into two photons to the SM value {\em versus} the value of the charged scalar mass for all the allowed points in the model. } \label{fig:diph} \end{figure} Comparing these results to the recent measurements of the $h\rightarrow \gamma\gamma$ signal rates\footnote{Notice that since $h$ in this model has exactly the same production cross sections as the SM Higgs boson, the ratio of branching ratios presented in Fig.~\ref{fig:diph} corresponds exactly to the measured signal rate, which involves the ratio of the product of production cross sections and decay branching ratios, between observed and SM theoretical values.} $\mu_{\gamma\gamma}$ from Ref.~\cite{Sirunyan:2018ouh}, we see that our model can accommodate values well within the 2$\sigma$ interval. 
The lower bound visible in Fig.~\ref{fig:diph} emerges from the present experimental lower limit from~\cite{Sirunyan:2018ouh} at $2\sigma$. The experimental upper limit, however, is larger than the maximum value of $\sim 1.2$ possible in our model. The latter results from the combination of BFB and unitarity bounds which constrain the allowed values of the coupling $\lambda_3$. The lowest allowed value for $\lambda_3$, which governs the coupling of $hH^+H^-$, is about $-1.03$, and its maximum one roughly $8.89$. Since the value of $\mu_{\gamma\gamma}$ grows for negative $\lambda_3$, the lower bound on $\lambda_3$ induces an upper bound of $\mu_{\gamma\gamma}\lesssim 1.2$. Thus we see that the model under study in this paper is perfectly capable of reproducing the current LHC data on the Higgs boson. Specific predictions for the diphoton signal rate are also possible in this model -- values of $\mu_{\gamma\gamma}$ larger or smaller than unity are easily accommodated, though they are constrained to the interval $0.917 \lesssim \mu_{\gamma\gamma} \lesssim 1.206$. As the parameter scan was made taking into account all data from dark matter searches, we are confident that the dark particles satisfy all the phenomenological constraints in that sector. Let us now study how the model behaves in terms of dark matter observables. \begin{figure}[h!] \begin{tabular}{ccc} \includegraphics[height=7cm,angle=0]{RelicDensity.png}& \includegraphics[height=7cm,angle=0]{DirectDetection.png} \end{tabular} \caption{Points that survive all experimental and theoretical constraints. Left: relic density abundance versus dark matter mass where the grey line represents the measured DM relic abundance; points either saturate the relic abundance constraints within $+1\sigma$ and $-5\sigma$ around the central value (pink points) or are below the measured central value (violet points). 
Right: spin-independent nucleon dark matter scattering cross section as a function of the dark matter mass where the grey line represents the latest XENON1T~\cite{Aprile:2017iyp,Aprile:2018dbl} results; the colour code is the same and pink points are superimposed on violet points. } \label{fig:dm} \end{figure} Several experimental results put constraints on the mass of the dark matter (DM) candidate, and on its couplings to SM particles. The most stringent bound comes from the latest measurement of the cosmological DM relic abundance by the Planck Collaboration~\cite{Aghanim:2018eyx}, $({\Omega}h^2)^{\rm obs}_{\rm DM} = 0.120 \pm 0.001$. The DM relic abundance for our model was calculated with \texttt{MicrOMEGAs}~\cite{Belanger:2013oya}. In our scan we accepted all points that do not exceed the value measured by Planck by more than $1\sigma$. This way, we consider not only the points that are in agreement with the DM relic abundance experimental values but also the points that are underabundant and would need further dark matter candidates to saturate the measured experimental value. Another important constraint comes from direct detection experiments, in which the elastic scattering of DM off nuclear targets induces nucleon recoils measured by several experimental groups. Using the expression for the spin-independent DM-nucleon cross section given by \texttt{MicrOMEGAs}, we impose the most restrictive upper bound on this cross section, which is the one from XENON1T~\cite{Aprile:2017iyp,Aprile:2018dbl}. In the left panel of Fig.~\ref{fig:dm} we use the parameter scan previously described to compute dark matter observables. We show the points that passed all experimental and theoretical constraints in the relic abundance versus dark matter mass plane. 
We present in pink the points that saturate the relic abundance, that is the points that are in the interval between $+1\sigma$ and $-5\sigma$ around the central value, and in violet the points for which the relic abundance is below the measured value. It is clear that there are points in the chosen dark matter mass range that saturate the relic density. In the right panel we present the spin-independent nucleon dark matter scattering cross section as a function of the dark matter mass. The upper bound (the grey line) represents the latest XENON1T~\cite{Aprile:2017iyp,Aprile:2018dbl} results. The pink points in the right plot show that even if the direct bound improves by a few orders of magnitude there will still be points for the entire mass range where the relic density is saturated. Thus we see that the model under study in this paper can fit, without need for fine tuning, the existing dark matter constraints. Next we study how CP violation arises in the dark sector. \section{CP violation in the dark sector} \label{sec:dcp} As we explained in section~\ref{sec:pot}, the model explicitly breaks the CP symmetry defined in Eq.~\eqref{eq:cp}. Notice that the vacuum of the model which we are studying -- wherein only $\Phi_1$ acquires a vev -- preserves that symmetry. Therefore, if there is CP violation (CPV) in the interactions of the physical particles of the model, it did not arise from any spontaneous CPV, but rather from the explicit CP breaking mentioned above\footnote{Again, because this is a subtlety of CP symmetries, let us repeat the argument: The fact that the model explicitly violates one CP symmetry -- that defined in Eq.~\eqref{eq:cp} -- does not necessarily mean there is CPV, since the Lagrangian could be invariant under a different CP symmetry. If, however, we prove that there is CPV after spontaneous symmetry breaking with a vacuum that preserves the CP symmetry of Eq.~\eqref{eq:cp}, then that CPV is explicit.}. 
There are several possible experimental observables in which one could conceivably observe CPV. For instance, a trivial calculation shows that all vertices of the form $Z h_i h_j$, with $i\neq j$, are possible. These vertices arise from the kinetic terms for $\Phi_2$ where from Eq.~\eqref{eq:doub} we obtain, in terms of the neutral components of the second doublet, \begin{equation} |D_\mu \Phi_2|^2\,=\, \dots \,+\,\frac{g}{\cos\theta_W}\,Z_\mu\left(\eta\partial^\mu \rho \,-\, \rho\partial^\mu \eta\right)\; , \end{equation} where $g$ is the $SU(2)_L$ coupling constant and $\theta_W$ is the Weinberg angle. With the rotation matrix between field components and neutral eigenstates defined in Eq.~\eqref{eq:matR}, we easily obtain ($i,j=1,2,3$) \begin{equation} |D_\mu \Phi_2|^2\,=\, \dots \,+\, \frac{g}{\cos\theta_W}\,\left(R_{i2} R_{j1} - R_{i1} R_{j2}\right)\,Z_\mu\left(h_i\partial^\mu h_j \,-\, h_j\partial^\mu h_i\right) \, . \label{eq:zhihj} \end{equation} Thus decays or production mechanisms of the form $h_j \rightarrow Z\,h_i$, $Z\rightarrow h_j\,h_i$, for any $h_{i\neq j}$ dark neutral scalars, are {\em simultaneously} possible (with the $Z$ boson possibly off-shell) which would clearly not be possible if the $h_i$ had definite CP quantum numbers -- in fact, due to CP violation, the three dark scalars are neither CP-even nor CP-odd, but rather states with mixed CP quantum numbers. The {\em simultaneous} existence of all $Z h_j\,h_i$ vertices, with $i\neq j$, is a clear signal of CPV in the model, in clear contrast to what occurs, for instance, in the CP-conserving 2HDM -- in that model $Z\rightarrow A\,h$ or $Z\rightarrow A\,H$ are possible because $A$ is CP-odd and $h$, $H$ are CP-even, but $Z\rightarrow H\,h$ or $Z\rightarrow A\,A$ are forbidden. Since in our model all vertices $Z h_j\,h_i$ with $i\neq j$ occur, the neutral scalars $h_i$ cannot have definite CP quantum numbers. CP violation in the dark sector of the model is thus established. 
Notice that no vertices of the form $Z h h_i$ are possible. This is not due to any CP properties, however, but rather to the conservation of the $Z_2$ quantum number. Thus observation of such decays or production mechanisms (all three possibilities for $Z\rightarrow h_j\,h_i$, $i\neq j$, would have to be confirmed) could serve as confirmation of CPV in the model, though the non-observability of the dark scalars would mean they would only contribute to missing energy signatures. Both at the LHC and at future colliders, hints of the existence of dark matter can appear in mono-$Z$ or mono-Higgs searches. The current model predicts cascade processes such as $q \bar{q} \, (e^+ e^-) \to Z^* \to h_1 h_2 \to h_1 h_1 Z$ and $q \bar {q} \, (e^+ e^-) \to Z^* \to h_1 h_2 \to h_1 h_1 h_{125}$, leading to mono-Z and mono-Higgs events, respectively. This type of final state occurs in many dark matter models, regardless of the CP-nature of the particles involved. Therefore, these are not good processes to probe CP violation in the dark sector. However, though CPV occurs in the dark sector of the theory, it can have an observable impact on the phenomenology of the SM particles. A sign of CPV in the model -- possibly the only sign of CPV which might be observable -- can be gleaned from the interesting work of Ref.~\cite{Grzadkowski:2016lpv} (see also Ref.~\cite{Belusca-Maito:2017iob}), wherein 2HDM contributions to the triple gauge boson vertices $ZZZ$ and $ZW^+W^-$ were considered. A Lorentz structure analysis of the $ZZZ$ vertex, for instance~\cite{Hagiwara:1986vm,Gounaris:1999kf,Gounaris:2000dn,Baur:2000ae}, reveals that there are 14 distinct structures, which can be reduced to just two form factors on the assumption of two on-shell $Z$ bosons and massless fermions, the off-shell $Z$ being produced by $e^+e^-$ collisions. 
Under these simplifying assumptions, the $ZZZ$ vertex function becomes ($e$ being the unit electric charge) \begin{equation} e\Gamma^{\alpha\beta\mu}_{ZZZ} \,=\,i\,e\,\frac{p_1^2 - m^2_Z}{m^2_Z} \left[f^Z_4 \left(p_1^\alpha g^{\mu\beta} + p_1^\beta g^{\mu\alpha}\right)\,+\,f^Z_5 \epsilon^{\mu\alpha\beta\rho} \left(p_2-p_3\right)_\rho\right] \;, \label{eq:vert} \end{equation} where $p_1$ is the 4-momentum of the off-shell $Z$ boson, $p_2$ and $p_3$ those of the remaining (on-shell) $Z$ bosons. The dimensionless $f_4^Z$ form factor is CP violating, but the $f_5^Z$ coefficient preserves CP. In our model there is only one one-loop diagram contributing to this form factor, shown in Fig.~\ref{fig:diag}. As can be inferred from the diagram, there are three different neutral scalars \begin{figure}[t] \centering \includegraphics[height=4cm,angle=0]{zzz1.pdf} \caption{Feynman diagram contributing to the CP violating form factor $f^Z_4$. } \label{fig:diag} \end{figure} circulating in the loop -- in fact, the authors of Ref.~\cite{Grzadkowski:2016lpv} showed that in the 2HDM with explicit CPV (the C2HDM) the existence of at least three neutral scalars with different CP quantum numbers that mix among themselves is a necessary condition for non-zero values of $f^Z_4$. Notice that in the C2HDM there are {\em three} diagrams contributing to $f^Z_4$ -- other than the diagram shown in Fig.~\ref{fig:diag}, the C2HDM calculation involves an additional diagram with an internal $Z$ boson line in the loop, and another, with a neutral Goldstone boson $G^0$ line in the loop. In our model, however, the discrete $Z_2$ symmetry we imposed forbids the vertices $Z Z h_j$ and $Z G^0 h_i$ (these vertices do occur in the C2HDM, being allowed by that model's symmetries), and therefore those two additional diagrams are identically zero. 
In~\cite{Grzadkowski:2016lpv} an expression for $f^Z_4$ in the C2HDM was found, which can easily be adapted to our model, by keeping only the contributions corresponding to the diagram of Fig.~\ref{fig:diag}. This results in \begin{equation} f^Z_4(p_1^2) \,=\, -\,\frac{2\alpha}{\pi s^3_{2\theta_W}}\,\frac{m^2_Z}{p_1^2 - m^2_Z}\,f_{123}\; \sum_{i,j,k}\,\epsilon_{ijk}\,C_{001}(p_1^2,m^2_Z,m^2_Z,m^2_i,m^2_j,m^2_k) \, , \label{eq:f4Z} \end{equation} where $\alpha$ is the electromagnetic coupling constant and the {\tt LoopTools}~\cite{Hahn:1998yk} function $C_{001}$ is used. The $f_{123}$ factor denotes the product of the couplings from three different vertices, given in Ref.~\cite{Grzadkowski:2016lpv} by \begin{equation} f_{123}\,=\,\frac{e_1 e_2 e_3}{v^3}\,, \end{equation} where the $e_{i,j,k}$ ($i,j,k=1,2,3$) factors, shown in Fig.~\ref{fig:diag}, are related to the coupling coefficients that appear in the vertices $Z h_i h_j$ (in the C2HDM they also concern the $ZG^0 h_i$ and $ZZ h_i$ vertices, {\it cf.}~\cite{Belusca-Maito:2017iob}). With the conventions of the current paper, we can extract these couplings from Eq.~\eqref{eq:zhihj} and it is easy to show that \begin{eqnarray} f_{123} & = & \left(R_{12} R_{21} - R_{11}R_{22}\right)\, \left(R_{12} R_{31} - R_{11} R_{32}\right) \, \left(R_{22} R_{31} - R_{21} R_{32}\right) \nonumber \\ & = & R_{13} R_{23} R_{33}\,, \label{eq:f123} \end{eqnarray} where the simplification that led to the last line originates from the orthogonality of the $R$ matrix. We observe that the maximum value that $|f_{123}|$ can assume is $(1/\sqrt{3})^3$, corresponding to the {\em maximum mixing} of the three neutral components, $\rho$, $\eta$ and $\Phi_S \equiv s$. 
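The orthogonality simplification in Eq.~\eqref{eq:f123} is easy to verify numerically: for an orthogonal matrix with unit determinant, each $2\times 2$ minor equals the complementary cofactor, so the product of the three coupling factors collapses to $R_{13}R_{23}R_{33}$ for any mixing angles. A cross-check of ours, using the parametrization of Eq.~\eqref{eq:matR}:

```python
import math
import random

def rotation_matrix(a1, a2, a3):
    """Mixing matrix R of Eq. (matR)."""
    c1, c2, c3 = math.cos(a1), math.cos(a2), math.cos(a3)
    s1, s2, s3 = math.sin(a1), math.sin(a2), math.sin(a3)
    return [
        [c1 * c2,                    s1 * c2,                    s2],
        [-(c1 * s2 * s3 + s1 * c3),  c1 * c3 - s1 * s2 * s3,     c2 * s3],
        [-c1 * s2 * c3 + s1 * s3,    -(c1 * s3 + s1 * s2 * c3),  c2 * c3],
    ]

def f123(R):
    """Product of the three Z h_i h_j coupling factors, first line of Eq. (f123)."""
    c12 = R[0][1] * R[1][0] - R[0][0] * R[1][1]
    c13 = R[0][1] * R[2][0] - R[0][0] * R[2][1]
    c23 = R[1][1] * R[2][0] - R[1][0] * R[2][1]
    return c12 * c13 * c23

random.seed(1)
ok = True
for _ in range(1000):
    a = [random.uniform(-math.pi / 2, math.pi / 2) for _ in range(3)]
    R = rotation_matrix(*a)
    # identity: f123 = R_13 R_23 R_33, for any angles
    if abs(f123(R) - R[0][2] * R[1][2] * R[2][2]) > 1e-12:
        ok = False

# |f123| is maximal when |R_13| = |R_23| = |R_33| = 1/sqrt(3):
fmax = (1.0 / math.sqrt(3.0)) ** 3
```

The bound on $|f_{123}|$ follows from the last column of $R$ being a unit vector: $R_{13}^2 + R_{23}^2 + R_{33}^2 = 1$, so the product of the three entries is maximized in magnitude when all are equal to $1/\sqrt{3}$.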
This is quite different from what one expects to happen in the C2HDM, for instance -- there one of the mixed neutral states is the observed 125 GeV scalar, and its properties are necessarily very SM-like, which implies that the $3\times 3$ matrix $R$ should approximately have the form of one diagonal element with value close to 1, the corresponding row and column with elements very small and a $2\times 2$ matrix mixing the other eigenstates\footnote{Meaning, a neutral scalar mixing very similar to the CP-conserving 2HDM, where $h$ and $H$ mix via a $2\times 2$ matrix but $A$ does not mix with the CP-even states.}. Within our model, however, the three neutral dark fields can mix arbitrarily, from very little to maximally. In Fig.~\ref{fig:f4} we show, for a random combination of dark scalar masses ($m_{h_1} \simeq 80.5$ GeV, $m_{h_2} \simeq 162.9$ GeV and $m_{h_3} \simeq 256.9$ GeV) the evolution of $f_4^Z$ normalized to $f_{123}$~\footnote{For this specific parameter space point, we have $f_{123} \simeq -0.1835$.} with $p_1^2$, the squared 4-momentum of the off-shell $Z$ boson. This can be compared with Fig.~2 of Ref.~\cite{Grzadkowski:2016lpv}, where \begin{figure}[t] \centering \includegraphics[height=8cm,angle=0]{f4Z.pdf} \caption{The CP-violating $f_4^Z(p_1^2)$ form factor, normalized to $f_{123}$, for $m_{h_1}= 80.5$ GeV, $m_{h_2}=162.9$ GeV and $m_{h_3}=256.9$ GeV, as a function of the squared off-shell $Z$ boson 4-momentum $p_1^2$, normalized to $m_Z^2$. } \label{fig:f4} \end{figure} we see similar (if a bit larger) magnitudes for the real and imaginary parts of $f_4^Z$, despite the differences in masses for the three neutral scalars in both situations (in that figure, the masses taken for $h_1$ and $h_3$ were, respectively, 125 and 400 GeV, and several values for the $h_2$ mass were considered). As can be inferred from Fig.~\ref{fig:f4}, $f_4^Z$ is at most of the order of $10^{-5}$. 
For the parameter scan described in the previous section, we obtain, for the imaginary part of $f_4^Z$, the values shown in Fig.~\ref{fig:imf4}. We considered two values of $p_1^2$ (corresponding to two possible collision energies for a future linear collider). The imaginary part of $f_4^Z$ (which, as we will see, contributes directly to CP-violating observables such as asymmetries) is presented as a function of the overall coupling $f_{123}$ defined in Eq.~\eqref{eq:f123}. We in fact present results as a function of $f_{123}/(1/\sqrt{3})^3$, to \begin{figure}[t] \begin{tabular}{cc} \includegraphics[height=7cm,angle=0]{f4_350.jpg}& \includegraphics[height=7cm,angle=0]{f4_450.jpg}\\ (a) & (b) \end{tabular} \caption{Scatter plots for the imaginary part of $f_4^Z$ as a function of the combined $Z$-scalars coupling $f_{123}$ of Eq.~\eqref{eq:f123}, divided by its maximum possible value of $(1/\sqrt{3})^3$. In (a) results for $p_1^2$ = $($350 GeV$)^2$; in (b), $p_1^2$ = $($450 GeV$)^2$. In red, points for which the masses of all the dark scalars are smaller than 200~GeV, $m_{h_i}<200$ GeV ($i = 1,2,3$). } \label{fig:imf4} \end{figure} illustrate that indeed the model perfectly allows maximum mixing between the neutral, dark scalars. Fig.~\ref{fig:imf4} shows that the maximum values for $|$Im$(f^Z_4)|$ are reached for the maximum mixing scenarios. We also highlight in red the points for which the dark neutral scalars $h_i$ have masses smaller than 200 GeV. The loop functions in the definition of $f_4^Z$, Eq.~\eqref{eq:f4Z}, have a complicated dependence on masses (and external momentum $p_1$) so that an analytical demonstration is not possible, but the plots of Fig.~\ref{fig:imf4} strongly imply that choosing all dark scalar masses small yields smaller values for $|$Im$(f^Z_4)|$. Larger masses, and larger mass splittings, seem to be required for larger $|$Im$(f^Z_4)|$. 
A reduction of the maximum values of $|$Im$(f^Z_4)|$ (and $|$Re$(f^Z_4)|$) with increasing external momentum is observed (though that variation is not linear, as can be appreciated from Fig.~\ref{fig:f4}), and a further reduction occurs when the external momentum tends to infinity. The smaller values for $|$Im$(f^Z_4)|$ for the red points can be understood in analogy with the 2HDM. The authors of Ref.~\cite{Grzadkowski:2016lpv} argue that the occurrence of CPV in the model implies a non-zero value for the basis-invariant quantities introduced in Refs.~\cite{Lavoura:1994fv,Botella:1994cs}, in particular for the imaginary part of the $J_2$ quantity introduced therein. Since Im$(J_2)$ is proportional to the product of the differences in mass squared of all neutral scalars, having all those scalars with lower masses and lower mass splittings reduces Im$(J_2)$ and therefore the amount of CPV in the model. Now, in our model the CPV basis invariants will certainly be different from those of the 2HDM, but we can adapt the argument to understand the behaviour of the red points in Fig.~\ref{fig:imf4}: those red points correspond to three dark neutral scalars with masses lower than 200 GeV, and therefore their mass splittings will be small (compared to the remaining parameter space of the model). In the limiting case of three degenerate dark scalars, the mass matrix of Eq.~\eqref{eq:mn} would be proportional to the identity matrix and therefore no mixing between different CP states would occur. With this analogy, we can understand how regions of parameter space with larger mass splittings between the dark neutral scalars tend to produce larger values of $|$Im$(f^Z_4)|$. 
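The degenerate-mass argument can be made concrete: the $\epsilon_{ijk}$-weighted sum in Eq.~\eqref{eq:f4Z} vanishes identically when all three neutral masses coincide, since every term then multiplies the same loop-function value and $\sum_{ijk}\epsilon_{ijk}=0$. The toy sketch below uses a stand-in function in place of the {\tt LoopTools} $C_{001}$ (whose actual evaluation requires the loop library); the cancellation relies only on the antisymmetry of $\epsilon_{ijk}$, not on the detailed form of the integrand:

```python
# Toy illustration (ours): the epsilon_{ijk} sum over neutral-scalar masses
# vanishes for degenerate masses. `loop` is a STAND-IN for the LoopTools
# C_001 function, not the real three-point integral.
from itertools import permutations

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}: +1 even, -1 odd, 0 repeated."""
    return (j - i) * (k - i) * (k - j) // 2

def cpv_sum(masses, loop):
    """Sum_{ijk} eps_{ijk} * loop(m_i, m_j, m_k) over all index orderings."""
    return sum(eps(i, j, k) * loop(masses[i], masses[j], masses[k])
               for i, j, k in permutations(range(3)))

loop = lambda a, b, c: a * a * b   # arbitrary non-symmetric stand-in integrand

degenerate = cpv_sum([2.0, 2.0, 2.0], loop)   # all terms equal -> exact zero
split = cpv_sum([1.0, 2.0, 3.0], loop)        # nonzero once masses are split
```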
Experimental collaborations have been probing double-$Z$ production to look for anomalous couplings such as those responsible for a $ZZZ$ vertex~\cite{Aaltonen:2008mv,Aaltonen:2009fd, Abazov:2011td,Abazov:2012cj,Aad:2011xj,Chatrchyan:2012sga,Aad:2012awa,CMS:2014xja,Khachatryan:2015pba}. The search for anomalous couplings in those works uses the effective Lagrangian for triple neutral vertices proposed in Ref.~\cite{Hagiwara:1986vm}, parametrised as \begin{equation} \mathcal{L}_{\mathrm{V}ZZ} = -\frac{e}{m_Z^2} \left\{ \left[f_4^\gamma\left(\partial_\mu F^{\mu\alpha}\right) +f_4^Z\left(\partial_\mu Z^{\mu\alpha}\right)\right] Z_\beta\left(\partial^\beta Z_\alpha\right) -\left[f_5^\gamma\left(\partial^\mu F_{\mu\alpha}\right) +f_5^Z\left(\partial^\mu Z_{\mu\alpha}\right)\right] \tilde{Z}^{\alpha\beta}Z_\beta \right\}, \label{eq:effZZZ} \end{equation} where $\gamma ZZ$ vertices were also considered. In this equation, $F_{\mu\nu}$ is the electromagnetic tensor, $Z_{\mu\nu} = \partial_\mu Z_\nu - \partial_\nu Z_\mu$ and $\tilde{Z}_{\mu\nu} = \epsilon_{\mu\nu\rho\sigma} Z^{\rho\sigma}/2$. The $f_4^Z$ coupling above is taken to be a constant, and as such it represents at most an approximation to the $f_4^Z(p_1^2)$ of Eq.~\eqref{eq:f4Z}. Further, the analyses of the experimental collaborations mentioned above take this coupling to be real, whereas the imaginary part of $f_4^Z(p_1^2)$ is the quantity entering many observables of interest. With all of that in mind, the latest LHC results~\cite{Khachatryan:2015pba} already probe the $f_4^Z$ coupling of Eq.~\eqref{eq:effZZZ} at the $\sim 10^{-3}$ level, whereas the typical magnitude of $f_4^Z(p_1^2)$ (both real and imaginary parts) is $\sim 10^{-5}$. We stress, however, that the two quantities cannot be directly compared, as they represent very different approaches to the $ZZZ$ vertex. 
A thorough study of the experimental results of~\cite{Khachatryan:2015pba} using the full expression for the $ZZZ$ vertex of Eq.~\eqref{eq:vert} and the full momentum (and scalar masses) dependence of the form factors is clearly necessary, but beyond the scope of the current work. The crucial aspect to address here, and the point we wish to make with the present section, is that $f_4^Z(p_1^2)$ is non-zero in the model under study in this paper. Thus, even though the neutral scalars contributing to the form factor are all dark particles, CP violation is present in the model and can indeed be ``visible'' to us, having consequences in the non-dark sector. We also analysed other vertices, such as $ZW^+W^-$ -- CPV form factors arise there as well, also denoted ``$f_4^Z$'', and for our parameter scan we computed them by once again adapting the results of Ref.~\cite{Grzadkowski:2016lpv} to our model. In the C2HDM three Feynman diagrams contribute to this CP-violating form factor (see Fig. 17 in~\cite{Grzadkowski:2016lpv}) but in our model the $Z_2$ symmetry eliminates the vertices $h_i W^+ W^-$ and $h_i G^+ W^-$, so only one diagram involving the charged scalar survives. From Eq. (4.4) of Ref.~\cite{Grzadkowski:2016lpv}, we can read off the expression of the CP-violating form factor $f_4^Z$ for the $ZW^+W^-$ vertex, obtaining \begin{equation} f^Z_4(p_1^2) \,=\, \frac{\alpha}{\pi s^2_{2\theta_W}}\,f_{123}\; \sum_{i,j,k}\,\epsilon_{ijk}\,C_{001}(p_1^2,m^2_W,m^2_W,m^2_i,m^2_j,m^2_{H^+}) \, . \label{eq:f4ZW} \end{equation} Interestingly, this form factor is larger, by roughly a factor of ten, than the corresponding quantity in the $ZZZ$ vertex (though still smaller than the corresponding C2HDM typical values). 
This is illustrated in Fig.~\ref{fig:imf4W}, where we plot the imaginary part of $f^Z_4$ as given by Eq.~\eqref{eq:f4ZW} for $p_1^2 = ($450 GeV$)^2$, \begin{figure}[t] \centering \includegraphics[height=7cm,angle=0]{f4Z_W.jpg} \caption{Scatter plot for the imaginary part of $f_4^Z$ for the $ZW^+W^-$ vertex from Eq.~\eqref{eq:f4ZW}, as a function of the combined $Z$-scalars coupling $f_{123}$, divided by its maximum possible value of $(1/\sqrt{3})^3$. The external $Z$ boson 4-momentum is $p_1^2$ = $($450 GeV$)^2$. In red, points for which the masses of all the dark neutral scalars are smaller than 200~GeV, $m_{h_i}<200$ GeV ($i = 1,2,3$). } \label{fig:imf4W} \end{figure} obtaining non-zero values. Therefore CPV also occurs in the $ZW^+W^-$ interactions in this model, though presumably it would be no easier to establish experimentally than for the $ZZZ$ vertex. The point we wished to make does not change, however -- if even a single non-zero CPV quantity is found, then CP violation occurs in the model. As an example of a possible experimental observable to which the form factors $f_4^Z$ for the $ZZZ$ interactions might contribute, let us take one of the asymmetries considered in Ref.~\cite{Grzadkowski:2016lpv}, using the techniques of Ref.~\cite{Chang:1994cs}. 
Considering a future linear collider and the process $e^+e^-\rightarrow ZZ$, taking cross sections for unpolarized beams $\sigma_{\lambda,\bar{\lambda}}$ for the production of two $Z$ bosons of helicities $\lambda$ and $\bar{\lambda}$ (assuming the helicity of the $Z$ bosons can be determined), the asymmetry $A_1^{ZZ}$ is defined as \begin{equation} A_1^{ZZ}\,=\,\frac{\sigma_{+,0} - \sigma_{0,-}}{\sigma_{+,0} + \sigma_{0,-}}\,=\, -4\beta \gamma^4 \left[(1 + \beta^2)^2 - 4 \beta^2\cos^2\theta\right]\, {\cal F}_1(\beta,\theta)\,\mbox{Im} \left(f_4^Z(p_1^2)\right)\,, \label{eq:A1ZZ} \end{equation} with $\theta$ the angle between the electron beam and the closest $Z$ boson with positive helicity, $\beta = \sqrt{1 - 4 m^2_Z/p_1^2}$ denoting the velocity of the produced $Z$ bosons and the function ${\cal F}_1(\beta,\theta)$ is given in appendix D of Ref.~\cite{Grzadkowski:2016lpv}. Choosing the two points in our parameter scan with largest (positive) and smallest (negative) values of $\mbox{Im} \left(f_4^Z(p_1^2)\right)$ for $p_1^2 = (450$ GeV$)^2$, we obtain the two curves shown in Fig.~\ref{fig:asym}. \begin{figure}[t] \centering \includegraphics[height=8cm,angle=0]{A1ZZ.pdf} \caption{The $A_1^{ZZ}$ asymmetry of Eq.~\eqref{eq:A1ZZ} as a function of the angle $\theta$. The blue (full) curve corresponds to the largest positive value of $\mbox{Im}\left(f_4^Z(p_1^2)\right)$ in our parameter scan, the red (dashed) one to the smallest negative value for the same quantity. In both cases, $p_1^2 = (450$ GeV$)^2$. } \label{fig:asym} \end{figure} Clearly, the smallness ($\sim 10^{-5}$) of the $f_4^Z$ form factor renders the value of this asymmetry quite small, which makes its measurement challenging. This raises the possibility that asymmetries involving the $ZW^+W^-$ vertex might be easier to measure than those pertaining to the $ZZZ$ anomalous interactions, since we have shown that $f^Z_4$ is typically larger by a factor of ten in the former vertex compared to the latter one. 
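To get a feel for the numbers involved, the kinematic prefactor multiplying ${\cal F}_1(\beta,\theta)\,\mbox{Im}\left(f_4^Z(p_1^2)\right)$ in Eq.~\eqref{eq:A1ZZ} can be evaluated directly. The Python sketch below is illustrative only: it assumes $\gamma = (1-\beta^2)^{-1/2}$ for the boost factor of the produced $Z$ bosons (not fixed by the text above), omits ${\cal F}_1(\beta,\theta)$ (given in appendix D of Ref.~\cite{Grzadkowski:2016lpv}), and uses Im$(f_4^Z)\sim 10^{-5}$, the typical scan magnitude quoted earlier.

```python
import math

def a1zz_prefactor(sqrt_s, m_z=91.1876, cos_theta=0.0):
    """Kinematic factor -4*beta*gamma^4*[(1 + beta^2)^2 - 4*beta^2*cos^2(theta)]
    multiplying F_1(beta, theta) * Im(f_4^Z) in the A_1^{ZZ} asymmetry."""
    s = sqrt_s ** 2
    beta = math.sqrt(1.0 - 4.0 * m_z ** 2 / s)   # velocity of each produced Z
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)     # assumed boost factor of the Z bosons
    return -4.0 * beta * gamma ** 4 * ((1.0 + beta ** 2) ** 2
                                       - 4.0 * beta ** 2 * cos_theta ** 2)

# Illustrative size of the asymmetry (F_1 omitted), with Im(f_4^Z) ~ 1e-5:
for sqrt_s in (350.0, 450.0):
    print(sqrt_s, a1zz_prefactor(sqrt_s) * 1e-5)
```

Even with an order-one ${\cal F}_1$, the $\sim 10^{-5}$ form factor suppresses the asymmetry to roughly the per-mille level, consistent with the discussion that follows.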
To investigate this possibility, we compared $A_1^{ZZ}$, considered above, with the $A_1^{WW}$ asymmetry defined in Eq. (5.21) of Ref.~\cite{Grzadkowski:2016lpv}. A direct comparison of the maximum values of $A_1^{WW}$ and $A_1^{ZZ}$ shows that for some regions of parameter space the former quantity can indeed be one order of magnitude larger than the latter one; but that is by no means a generic feature, since for other choices of model parameters both asymmetries can also be of the same order. Notice that both asymmetries show a quite different $\sqrt{s}$ dependence. \section{Conclusions} \label{sec:conc} We presented a model whose scalar sector includes two Higgs doublets and a real singlet. A specific region of its parameter space yields a vacuum which preserves the discrete symmetry imposed on the model -- thus a charged scalar and three neutral ones have a ``dark'' quantum number preserved in all interactions and have no interactions with fermions. The lightest of them, chosen to be a neutral particle, is therefore stable and a good dark matter candidate. The first doublet yields the necessary Goldstone bosons and a neutral scalar which automatically has a behaviour almost indistinguishable from the SM Higgs boson. A parameter scan of the model, imposing all necessary theoretical and experimental constraints (including bounds due to relic density and dark matter searches, both direct and indirect) shows that the SM-like scalar state indeed complies with all known LHC data for the Higgs boson -- some deviations may occur in the diphoton signal rate due to the extra contribution of a charged scalar to the involved decay width, but we have shown such deviations are at most roughly 20\% of the expected SM result when all other constraints are satisfied, and this is still well within current experimental uncertainties. 
The most striking feature of the model presented in this paper is the occurrence of explicit CP violation exclusively within the dark matter sector. A complex phase allowed in the potential forces the neutral components of the second (dark) doublet to mix with the real singlet to yield three neutral eigenstates, none of which possesses definite CP quantum numbers. Signals of this CP violation would not be observed in the fermion sector (which, by the way, we assume is identical to that of the SM, and therefore has the usual CKM-type source of CP violation) nor in the interactions of the SM-like scalar -- protected as it is by the unbroken $Z_2$ symmetry, and by the mass ranges chosen for the dark scalars, $h$ will behave like a purely CP-even SM-like scalar, even though the CP symmetry of the model is explicitly broken in the scalar sector as well! Can the model then be said to be CP violating at all? The answer is yes, as an analysis of the contributions from the dark sector to the $ZZZ$ vertex demonstrates. Even though the dark particles have no direct fermion interactions and could elude detection, their presence could be felt through the emergence of anomalous triple gauge boson vertices. Though we concentrated mainly on $ZZZ$ vertices, we also studied $ZW^+W^-$ interactions, but our main purpose was to show that CPV is indeed occurring. Direct measurements of experimental observables probing this CPV are challenging: we have considered a specific asymmetry, $A_1^{ZZ}$, built with $ZZ$ production cross sections, but the magnitude of the CPV form factor $f_4^Z$ yields extremely small values for that asymmetry, or indeed for other such variables we might construct. Direct measurements of $ZZ$ production cross sections could in principle be used to constrain anomalous $ZZZ$ vertex form factors -- and indeed several experimental collaborations, from LEP, Tevatron and LHC, have tried that. 
But the experimentalists' approach is based on constant and real form factors, whereas model-specific expressions for $f_4^Z$ such as those considered in our work yield quantities highly dependent on the external momenta, which boast sizeable imaginary parts as well. Thus a direct comparison with current experimental analyses is not conclusive. The other remarkable fact is the amount of ``damage'' the mere inclusion of a real singlet can do to the model with two doublets. As repeatedly emphasised in the text, the model we considered is very similar to the Inert 2HDM -- it is indeed simply the IDM with an added real singlet and a tweaked discrete symmetry, extended so that the singlet carries a ``dark charge'' as well. But whereas CP violation -- explicit or spontaneous -- is entirely impossible within the scalar sector of the IDM, the presence of the extra singlet produces a completely different situation. That one obtains a model with explicit CPV is all the more remarkable when one considers that the field we are adding to the IDM is a {\em real} singlet, not even a complex one. Notice that within the IDM it is even impossible to tell which of the dark neutral scalars is CP-even and which is CP-odd -- all that can be said is that those two eigenstates have opposite CP quantum numbers. The addition of a real singlet completely changes the CP picture. The occurrence of CP violation in the dark matter sector may be simply a matter of curiosity, but one should not underestimate the possibility that something novel might arise from it. If the current picture of the matter to dark matter abundance is indeed correct and the observed matter is only 5\% of the total content of the universe, then one can speculate how CP violation occurring in the interactions of the remaining matter might have affected the cosmological evolution of the universe. We reserve such studies for a follow-up work. 
\section*{Acknowledgements} We acknowledge the contribution of the research training group GRK1694 `Elementary particle physics at highest energy and highest precision'. PF and RS are supported in part by the National Science Centre, Poland, via the HARMONIA project under contract UMO-2015/18/M/ST2/00518. JW gratefully acknowledges funding from the PIER Helmholtz Graduate School.
\section{Introduction and main results} \label{Abschnitt mit Hauptergebnissen} When a self-adjoint operator $ T $ is perturbed by a bounded self-adjoint operator $ S $, it is important to investigate the (spectral) properties of the difference \begin{equation*} f(T+S) - f(T), \end{equation*} where $ f $ is a real-valued Borel function on $ \mathds{R} $. It is also of interest to predict the smoothness of the mapping $ S \mapsto f(T+S) - f(T) $ with respect to the smoothness of $ f $. There is a vast amount of literature dedicated to these problems, see, e.\,g., Kre{\u\i}n, Farforovskaja, Peller, Birman, Solomyak, Pushnitski, Yafaev \cite{Krein, Krein_II, Farforovskaja, Peller_I, Peller_II, Birman_Solomyak, Pushnitski_I, Pushnitski_II, Pushnitski_III, Pushnitski_Yafaev}, and the references therein. It is well known (see Kre{\u\i}n \cite{Krein}; see also Peller \cite{Peller_II}) that if $ f $ is an infinitely differentiable function with compact support and $ S $ is trace class, then $ f(T+S) - f(T) $ is a trace class operator. On the other hand, if $ f = \mathds{1}_{(-\infty, \lambda)} $ is the characteristic function of the interval $ (-\infty, \lambda) $ with $ \lambda $ in the essential spectrum of $ T $, then it may occur that \begin{equation*} f(T+S) - f(T) \end{equation*} is not compact, see Kre{\u\i}n's example \cite{Krein,Kostrykin}. In the latter example, $ S $ is a rank one operator, and the difference $ \mathds{1}_{(-\infty, \lambda)}(T+S) - \mathds{1}_{(-\infty, \lambda)}(T) $ is a bounded self-adjoint Hankel integral operator on $ L^{2}(0,\infty) $ that can be computed explicitly for all $ 0 < \lambda < 1 $. Formally, a bounded Hankel integral operator $ \Gamma $ on $ L^{2}(0,\infty) $ is a bounded integral operator such that the kernel function $ k $ of $ \Gamma $ depends only on the sum of the variables: \begin{equation*} (\Gamma g)(x) = \int_{0}^{\infty} k(x+y) g(y) \mathrm{d}y, \quad g \in L^{2}(0,\infty). 
\end{equation*} For an introduction to the theory of Hankel operators, we refer to Peller's book \cite{Peller}. Inspired by Kre{\u\i}n's example, we may pose the following question. \begin{question} \label{The main question} Let $ \lambda \in \mathds{R} $. Is it true that \begin{equation*} D(\lambda) = E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T), \end{equation*} the difference of the spectral projections, is unitarily equivalent to a bounded self-adjoint Hankel operator, provided that $ T $ is semibounded and $ S $ is of rank one? \end{question} Pushnitski \cite{Pushnitski_I, Pushnitski_II, Pushnitski_III, Pushnitski_Yafaev} and Yafaev \cite{Pushnitski_Yafaev} have studied the spectral properties of the operator $ D(\lambda) $ in connection with scattering theory. If the absolutely continuous spectrum of $ T $ contains an open interval, then, under some smoothness assumptions, the results of Pushnitski and Yafaev are applicable, cf.\,Section \ref{The section with finite rank perturbation} below. In this case, the essential spectrum of $ D(\lambda) $ is a symmetric interval around zero. Here and for the rest of this paper, we consider a semibounded self-adjoint operator $ T $ acting on a complex separable Hilbert space $ \mathfrak{H} $ of infinite dimension. We denote the spectrum and the essential spectrum of $ T $ by $ \sigma(T) $ and $ \sigma_{\mathrm{ess}}(T) $, respectively. Furthermore, we denote by $ \mathrm{span} \{ x_{i} \in \mathfrak{H} : i \in \mathcal{I} \} $ the linear span generated by the vectors $ x_{i} $, $ i \in \mathcal{I} $, where $ \mathcal{I} $ is some index set. If there exists a vector $ x \in \mathfrak{H} $ such that \begin{equation*} \overline{\mathrm{span}} \left\{ E_{\Omega}(T) x : \Omega \in \mathcal{B}(\mathds{R}) \right\} := \overline{\mathrm{span} \left\{ E_{\Omega}(T) x : \Omega \in \mathcal{B}(\mathds{R}) \right\}} = \mathfrak{H}, \end{equation*} then $ x $ is called \emph{cyclic} for $ T $. 
Here $ \mathcal{B}(\mathds{R}) $ denotes the sigma-algebra of Borel sets of $ \mathds{R} $. The following theorem is the main result of this paper. \begin{introtheorem} \label{new Main result} Let $ T $ and $ S $ be a semibounded self-adjoint operator and a self-adjoint operator of rank one acting on $ \mathfrak{H} $, respectively. Then there exists a number $ k $ in $ \mathds{N} \cup \{ 0 \} $ such that for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $, the operator $ D(\lambda) $ is unitarily equivalent to a block diagonal operator $ \Gamma_{\lambda} \oplus 0 $ on $ L^{2}(0, \infty) \oplus \mathds{C}^{k} $, where $ \Gamma_{\lambda} $ is a bounded self-adjoint Hankel integral operator on $ L^{2}(0, \infty) $. \end{introtheorem} The theory of bounded self-adjoint Hankel operators has been studied intensively by Rosenblum, Howland, Megretski{\u\i}, Peller, Treil, and others, see \cite{Rosenblum_I, Rosenblum_II, Howland, Megretskii_et_al}. In their 1995 paper \cite{Megretskii_et_al}, Megretski{\u\i}, Peller, and Treil have shown that every bounded self-adjoint Hankel operator can be characterized by three properties concerning the spectrum and the multiplicity in the spectrum, see \cite[Theorem 1]{Megretskii_et_al}. We present a version of \cite[Theorem 1]{Megretskii_et_al} for differences of two orthogonal projections in Section \ref{Formulierung der Hilfsmittel}, see Theorem \ref{Charakterisierung modifiziert} below. Denote by $ \ell^{2}(\mathds{N}_{0}) $ the space of all complex square summable one-sided sequences $ x=(x_{0}, x_{1}, ...) $. A bounded operator $ H $ on $ \ell^{2}(\mathds{N}_{0}) $ is called \textit{essentially Hankel} if $ A^{\ast} H - H A $ is compact, where $ A : (x_{0}, x_{1}, ...) \mapsto (0, x_{0}, x_{1}, ...) $ denotes the forward shift on $ \ell^{2}(\mathds{N}_{0}) $. The set of essentially Hankel operators was introduced in \cite{Martinez} by Mart{\'{\i}}nez-Avenda{\~n}o. 
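The Hankel structure interacts with the shift in a simple way that can be checked on finite sections: for a Hankel matrix $ H_{jk} = a_{j+k} $ one has $ (A^{\ast}H)_{jk} = (HA)_{jk} = a_{j+k+1} $, so $ A^{\ast} H - H A = 0 $, and in particular every bounded Hankel operator is trivially essentially Hankel. A minimal numerical sketch (the sequence $ a_{j} $ and the truncation size are arbitrary illustrative choices; on an $ n \times n $ truncation the identity holds up to boundary artifacts in the last row and column):

```python
import numpy as np

n = 8
a = 1.0 / (np.arange(2 * n - 1) + 1.0)        # illustrative sequence a_0, a_1, ...
H = np.array([[a[j + k] for k in range(n)]    # Hankel matrix: H[j, k] = a[j + k]
              for j in range(n)])

# Forward shift A on l^2(N_0): (Ax)_0 = 0, (Ax)_j = x_{j-1}, i.e. A[j, k] = delta_{j, k+1}
A = np.eye(n, k=-1)

C = A.T @ H - H @ A
print(np.allclose(C[:n - 1, :n - 1], 0.0))    # interior of A*H - HA vanishes
```

Only the last row and column of `C` are non-zero, reflecting the truncation rather than the operators themselves.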
Clearly, every operator of the form `Hankel plus compact' is essentially Hankel, but the converse is not true (see \cite[Theorem 3.8]{Martinez}). For compact perturbations $ S $, we will prove the following version of Theorem \ref{new Main result}. \newpage \begin{introtheorem} \label{new Main result II} Let $ T $ and $ S $ be a semibounded self-adjoint operator and a compact self-adjoint operator acting on $ \mathfrak{H} $, respectively. Let $ 1/4 > a_{1} > a_{2} > ... > 0 $ be an arbitrary decreasing null sequence of real numbers. Then for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $, there exist a bounded self-adjoint Hankel operator $ \Gamma_{\lambda} $ and a compact self-adjoint operator $ K_{\lambda} $ acting on $ \ell^{2}(\mathds{N}_{0}) $ with the following properties: \begin{enumerate} \item $ D(\lambda) $ is unitarily equivalent to $ \Gamma_{\lambda} + K_{\lambda} $. \item either $ K_{\lambda} $ is a finite rank operator or $ \nu_{j}(\lambda) / a_{j} \rightarrow 0 $ as $ j \rightarrow \infty $, where $ \nu_{1}(\lambda), \nu_{2}(\lambda), ... $ denote the nonzero eigenvalues of $ K_{\lambda} $ ordered by decreasing modulus (with multiplicity taken into account). \end{enumerate} In particular, $ \Gamma_{\lambda} + K_{\lambda} $ is essentially Hankel. \end{introtheorem} Moreover, the operator $ K_{\lambda} $ in Theorem \ref{new Main result II} can always be chosen of finite rank if $ S $ is of finite rank. In Sections \ref{The second section}--\ref{The section with finite rank perturbation}, the operator $ T $ is supposed to be bounded. The case when $ T $ is semibounded (but not bounded) will be reduced to the bounded case by means of resolvents, see Subsection \ref{Zum halbbeschraenkten Fall} and the remark in Subsection \ref{Subsection with two auxiliary results}. 
In Section \ref{The second section}, we will show that the dimensions of $ \mathrm{Ker} (D(\lambda) \pm I) $ differ by at most $ N \in \mathds{N} $ if $ S $ is of rank $ N $, where $ I $ denotes the identity operator. We write this as \begin{equation} \label{Ungleichung zur Symmetrie des Spektrums} \big| \mathrm{dim} ~ \mathrm{Ker} \big( D(\lambda) - I \big) - \mathrm{dim} ~ \mathrm{Ker} \big( D(\lambda) + I \big) \big| \leq \mathrm{rank} ~ S, \quad \lambda \in \mathds{R}. \end{equation} Furthermore, an example is given where equality is attained. However, there may exist $ \lambda \in \mathds{R} $ such that \begin{equation*} \mathrm{dim} ~ \mathrm{Ker} \big( D(\lambda) - I \big) = \infty \quad \text{and} \quad \mathrm{Ker} \big( D(\lambda) + I \big) = \{ 0 \} \end{equation*} if $ S $ is a compact operator with infinite dimensional range. Section \ref{The third section} provides a list of sufficient conditions so that Question \ref{The main question} has a positive answer, see Proposition \ref{Einige hinreichende Bedingungen}. Moreover, if $ S = \langle \cdot, \varphi \rangle \varphi $ is a rank one operator and the vector $ \varphi $ is cyclic for $ T $, then we will show in Theorem \ref{Main theorem II} that the kernel of $ D(\lambda) $ is trivial for all $ \lambda $ in the \linebreak interval $ ( \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) ) $ and infinite dimensional for all $ \lambda $ in \linebreak $ \mathds{R} \setminus [ \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) ] $. In the case when $ \varphi $ is not cyclic for $ T $, Example \ref{via Krein} shows that Question \ref{The main question} may have to be answered negatively. 
In this situation, we need to consider the block operator representation of $ D(\lambda) $ with respect to the orthogonal subspaces $ \overline{\mathrm{span}} \{ T^{j} \varphi : j \in \mathds{N}_{0} \} $ and $ \mathfrak{H} \ominus \overline{\mathrm{span}} \{ T^{j} \varphi : j \in \mathds{N}_{0} \} $ of $ \mathfrak{H} $, see Subsection \ref{Reduktion zum zyklischen Fall}. In Section \ref{The section with finite rank perturbation}, we will show that the operator $ D(\lambda) $ is non-invertible for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $, see Theorem \ref{Main theorem III}. Section \ref{Beweis des Hauptresultates} completes the proofs of Theorems \ref{new Main result} and \ref{new Main result II}. In particular, it is shown that $ D(\lambda) $ is unitarily equivalent to a self-adjoint Hankel operator of finite rank for \emph{all} $ \lambda \in \mathds{R} $ if $ T $ has a purely discrete spectrum and $ S $ is a rank one operator (see Proposition \ref{Keine Ausnahmepunkte im Fall von purely discrete spectrum} and p.\,\pageref{Ergaenzung Proposition}). Some examples, including the almost Mathieu operator, are discussed in Section \ref{Beispiele und Anwendungen} below. The results of this paper will be part of the author's Ph.D. thesis at Johannes Gutenberg University Mainz. \section{The main tools} \label{Formulierung der Hilfsmittel} In this section, we present the main tools for the proofs of Theorems \ref{new Main result} and \ref{new Main result II}. First, we state a lemma which follows immediately from \cite[Theorem 6.1]{Davis}. \begin{lemma} \label{Satz bei Chandler Davis} Let $ \Gamma $ be the difference of two orthogonal projections. \, Then $ \sigma(\Gamma) \subset [-1,1] $. Moreover, the restricted operators $ \left. \Gamma \right|_{\mathfrak{H}_{0}} $ and $ \left. 
(-\Gamma) \right|_{\mathfrak{H}_{0}} $ are unitarily equivalent, where the closed subspace $ \mathfrak{H}_{0} := \left[ \mathrm{Ker} ~ (\Gamma - I) \oplus \mathrm{Ker} ~ (\Gamma + I) \right]^{\bot} $ of $ \mathfrak{H} $ is reducing for $ \Gamma $. \end{lemma} In \cite{Megretskii_et_al}, Megretski{\u\i}, Peller, and Treil solved the inverse spectral problem for self-adjoint Hankel operators. In our situation, \cite[Theorem 1]{Megretskii_et_al} reads as follows: \begin{theorem} \label{Charakterisierung modifiziert} The difference $ \Gamma $ of two orthogonal projections is unitarily equivalent to a bounded self-adjoint Hankel operator if and only if the following three conditions hold: \begin{itemize} \item[\emph{(C1)}] either $ \mathrm{Ker} ~ \Gamma = \{ 0 \} $ or $ \mathrm{dim} ~ \mathrm{Ker} ~ \Gamma = \infty $; \item[\emph{(C2)}] $ \Gamma $ is non-invertible; \item[\emph{(C3)}] $ | \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) - \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) | \leq 1 $. \end{itemize} \end{theorem} If $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) = \infty $ or $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) = \infty $, then (C3) has to be understood as $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) = \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) = \infty $ (cf.\,\cite[p.\,249]{Megretskii_et_al}). \begin{proof}[Proof of Theorem \ref{Charakterisierung modifiziert}] Combine Lemma \ref{Satz bei Chandler Davis} and \cite[Theorem 1]{Megretskii_et_al}. \end{proof} As will be shown in Section \ref{The second section}, the operator $ D(\lambda) $ satisfies condition (C3) for all $ \lambda \in \mathds{R} $ if $ S $ is a rank one operator. 
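Both Lemma \ref{Satz bei Chandler Davis} and condition (C3) can be observed in finite dimensions. The following sketch (an illustration only, not part of any proof; the matrix size, seed, and cut-off are arbitrary choices) builds $ D(\lambda) $ for a random symmetric $ T $ and a rank-one $ S $, and checks that $ \sigma(D(\lambda)) \subset [-1,1] $ and that the eigenvalues away from $ \{ -1, 0, 1 \} $ occur in pairs $ \pm\mu $:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 12, 0.3
B = rng.standard_normal((n, n))
T = (B + B.T) / 2                      # random symmetric "T"
v = rng.standard_normal(n)
S = np.outer(v, v)                     # rank-one self-adjoint perturbation "S"

def proj_below(M, lam):
    """Spectral projection E_(-inf, lam)(M) of a symmetric matrix M."""
    w, U = np.linalg.eigh(M)
    V = U[:, w < lam]
    return V @ V.T

D = proj_below(T + S, lam) - proj_below(T, lam)
ev = np.linalg.eigvalsh(D)
mus = ev[(np.abs(ev) > 1e-8) & (np.abs(np.abs(ev) - 1.0) > 1e-8)]
print(ev.min() >= -1 - 1e-10, ev.max() <= 1 + 1e-10)   # sigma(D) in [-1, 1]
print(np.allclose(np.sort(mus), np.sort(-mus)))        # +/- pairing away from {-1, 0, 1}
```

The pairing is exactly the symmetry $ \left. \Gamma \right|_{\mathfrak{H}_{0}} \cong \left. (-\Gamma) \right|_{\mathfrak{H}_{0}} $ of the lemma, restricted to a finite-dimensional space.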
Therefore, a sufficient condition for $ D(\lambda) $ to be unitarily equivalent to a bounded self-adjoint Hankel operator is given by: \emph{the kernel of $ D(\lambda) $ is infinite dimensional.} In Proposition \ref{Einige hinreichende Bedingungen} below, we present a list of sufficient conditions such that the kernel of $ D(\lambda) $ is infinite dimensional. More generally, a self-adjoint \textit{block-Hankel operator of order $ N $} is a block-Hankel matrix $ (a_{j+k})_{j,k \in \mathds{N}_{0}} $, where $ a_{j} $ is an $ N \times N $ matrix for every $ j $, see \cite[p.\,247]{Megretskii_et_al}. We will need the following version of Theorem \ref{Charakterisierung modifiziert}: \begin{theorem} \label{Charakterisierung modifiziert allgemeinere Version} The difference $ \Gamma $ of two orthogonal projections is unitarily equivalent to a bounded self-adjoint block-Hankel operator of order $ N $ if and only if the following three conditions hold: \begin{itemize} \item[\emph{(C1)}] \quad either $ \mathrm{Ker} ~ \Gamma = \{ 0 \} $ or $ \mathrm{dim} ~ \mathrm{Ker} ~ \Gamma = \infty $; \item[\emph{(C2)}] \quad $ \Gamma $ is non-invertible; \item[\emph{(C3)}] \hspace{-1ex}$ _{N} $ \, $ | \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) - \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) | \leq N $. \end{itemize} \end{theorem} Again, if $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) = \infty $ or $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) = \infty $, then (C3)$ _{N} $ has to be understood as $ \mathrm{dim} ~ \mathrm{Ker} (\Gamma - I) = \mathrm{dim} ~ \mathrm{Ker} (\Gamma + I) = \infty $. \begin{proof}[Proof of Theorem \ref{Charakterisierung modifiziert allgemeinere Version}] Combine Lemma \ref{Satz bei Chandler Davis} and \cite[Theorem 2]{Megretskii_et_al}. \end{proof} \section{On the dimension of $ \mathrm{Ker} \big( D(\lambda) \pm I \big) $} \label{The second section} In this section, the self-adjoint operator $ T $ is assumed to be bounded. 
The main purpose of this section is to show that the dimensions of $ \mathrm{Ker} \big( D(\lambda) \pm I \big) $ do not exceed the rank of the perturbation $ S $, see Lemma \ref{Lemma zur Bedingung C3} below. In particular, condition (C3)$ _{N} $ in Theorem \ref{Charakterisierung modifiziert allgemeinere Version} is fulfilled for all $ \lambda \in \mathds{R} $ if the rank of $ S $ is equal to $ N \in \mathds{N} $. \begin{lemma} \label{Lemma zur Bedingung C3} Let $ T $ and $ S $ be a bounded self-adjoint operator and a self-adjoint operator of finite rank $ N $ acting on $ \mathfrak{H} $, respectively. Then for all $ \lambda $ in $ \mathds{R} $, one has \begin{equation*} \mathrm{dim} ~ \mathrm{Ker} \big( D(\lambda) \pm I \big) \leq N. \end{equation*} \end{lemma} \begin{proof} Let us write $ P_{\lambda} := E_{(-\infty, \lambda)}(T+S) $ and $ Q_{\lambda} := E_{(-\infty, \lambda)}(T) $. We will only show that $ \mathrm{dim} ~ \mathrm{Ker} ( P_{\lambda} - Q_{\lambda} - I ) \leq N $; the other inequality is proved analogously. Assume for contradiction that there exists an orthonormal system $ x_{1}, ..., x_{N+1} $ in $ \mathrm{Ker} ( P_{\lambda} - Q_{\lambda} - I ) $. Choose a normed vector $ \tilde{x} $ in \begin{equation*} \mathrm{span} \{ x_{1}, ..., x_{N+1} \} \cap (\mathrm{Ran} ~ S)^{\bot} \neq \{ 0 \}. \end{equation*} Hence $ P_{\lambda} \tilde{x} = \tilde{x} $ and $ Q_{\lambda} \tilde{x} = 0 $ and this implies \begin{align*} \langle (T+S) \tilde{x}, \tilde{x} \rangle < \lambda \quad \text{and} \quad \langle T \tilde{x}, \tilde{x} \rangle \geq \lambda \end{align*} so that \begin{align*} \lambda > \langle (T+S) \tilde{x}, \tilde{x} \rangle = \langle T \tilde{x}, \tilde{x} \rangle \geq \lambda, \end{align*} which is a contradiction. \end{proof} \begin{remark} If we consider an unbounded self-adjoint operator $ T $, then the proof of Lemma \ref{Lemma zur Bedingung C3} does not work, because $ \tilde{x} $ might not belong to the domain of $ T $. 
\end{remark} The following example shows that Inequality (\ref{Ungleichung zur Symmetrie des Spektrums}) above is optimal. \begin{example} \begin{enumerate} \item Consider the bounded self-adjoint diagonal operator \begin{align*} T = \mathrm{diag} \! \left( -1, -1/2, -1/3, -1/4, ... \right) : \ell^{2}(\mathds{N}_{0}) \rightarrow \ell^{2}(\mathds{N}_{0}) \end{align*} and, for $ N \in \mathds{N} $, the self-adjoint diagonal operator \begin{align*} S = \mathrm{diag} ( \underbrace{-1, ..., -1}_{N ~ \mathrm{times}}, 0, ... ) : \ell^{2}(\mathds{N}_{0}) \rightarrow \ell^{2}(\mathds{N}_{0}). \end{align*} Then $ S $ is of rank $ N $, and we see that \begin{align*} &\mathrm{dim} ~ \mathrm{Ker} \! \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) - I \right) = N \\ & \text{and} \quad \mathrm{Ker} \! \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) + I \right) = \{ 0 \} \end{align*} for all $ \lambda \in (-1-1/N, -1) $. \item Let $ a_{0} := -1 $, $ a_{1} := -1/2 $, $ a_{2} := -1/3 $. Consider the bounded self-adjoint diagonal operator \begin{align*} T = \mathrm{diag} \! \left( a_{0}, a_{0} + \frac{1/2}{4}, a_{1}, a_{1} + \frac{1/6}{4}, a_{0} + \frac{1/2}{5}, a_{1} + \frac{1/6}{5}, a_{0} + \frac{1/2}{6}, ... \right) \end{align*} on $ \ell^{2}(\mathds{N}_{0}) $. Since $ |a_{0}-a_{1}| = 1/2 $ and $ |a_{1}-a_{2}| = 1/6 $, it follows that the compact self-adjoint diagonal operator \begin{align*} S = -2 \cdot \mathrm{diag} \! \left( 0, \frac{1/2}{4}, 0, \frac{1/6}{4}, \frac{1/2}{5}, \frac{1/6}{5}, \frac{1/2}{6}, ... \right) : \ell^{2}(\mathds{N}_{0}) \rightarrow \ell^{2}(\mathds{N}_{0}) \end{align*} is such that \begin{align*} (+) \begin{cases} &\mathrm{dim} ~ \mathrm{Ker} \! \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) - I \right) = \infty \\ & \text{and} \quad \mathrm{Ker} \! \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) + I \right) = \{ 0 \} \end{cases} \end{align*} for $ \lambda \in \{ -1, -1/2 \} $. 
Clearly, this example can be extended such that $ (+) $ holds for all $ \lambda $ in $ \{ -1, -1/2, -1/3, ... \} $. \end{enumerate} \end{example} \section{On the dimension of $ \mathrm{Ker} ~ D(\lambda) $} \label{The third section} In this section, we deal with the question of whether the operator $ D(\lambda) $ fulfills condition (C1) in Theorem \ref{Charakterisierung modifiziert allgemeinere Version}. Throughout this section, $ T $ is a bounded self-adjoint operator and $ S $ is a self-adjoint operator of finite rank. We will provide a list of sufficient conditions ensuring that the kernel of $ D(\lambda) $ is infinite dimensional for all $ \lambda $ in $ \mathds{R} $. Furthermore, we will prove that the kernel of $ D(\lambda) $ is trivial for all $ \lambda $ in the interval $ ( \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) ) $ and infinite dimensional for all $ \lambda $ in $ \mathds{R} \setminus [ \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) ] $, provided that $ S = \langle \cdot, \varphi \rangle \varphi $ is a rank one operator and the vector $ \varphi $ is cyclic for $ T $; see Theorem \ref{Main theorem II} below. \subsection{Sufficient conditions such that $ \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) = \infty $.} Let $ \lambda \in \mathds{R} $. If the kernel of $ D(\lambda) = E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) $ is infinite dimensional, then $ D(\lambda) $ fulfills conditions (C1) and (C2) in Theorem \ref{Charakterisierung modifiziert allgemeinere Version}. Let $ N \in \mathds{N} $ be the rank of $ S $. The following proposition provides a list of sufficient conditions under which this is the case. 
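Since all operators in the first diagonal example of the previous section are diagonal, finite truncation reproduces the relevant spectral projections exactly, and the claimed kernel dimensions can be checked numerically. The following sketch is our illustration, not part of the paper; it assumes NumPy is available and uses $ N = 3 $, $ \lambda = -1.1 \in (-4/3, -1) $:

```python
import numpy as np

def proj_below(diag_vals, lam):
    """Spectral projection E_{(-inf, lam)} of a diagonal operator, as a 0/1 diagonal matrix."""
    return np.diag((diag_vals < lam).astype(float))

n, N = 50, 3                                       # truncation dimension, rank of S
t = np.array([-1.0 / (k + 1) for k in range(n)])   # T = diag(-1, -1/2, -1/3, ...)
s = np.zeros(n)
s[:N] = -1.0                                       # S = diag(-1, ..., -1, 0, 0, ...)
lam = -1.1                                         # lies in (-1 - 1/N, -1)

D = proj_below(t + s, lam) - proj_below(t, lam)
eigs = np.linalg.eigvalsh(D)

dim_ker_minus = int(np.sum(np.abs(eigs - 1.0) < 1e-12))   # dim Ker(D - I)
dim_ker_plus = int(np.sum(np.abs(eigs + 1.0) < 1e-12))    # dim Ker(D + I)
```

In accordance with the example and with the bound $ \mathrm{dim} ~ \mathrm{Ker}(D(\lambda) \pm I) \leq N $ of the lemma above, one finds `dim_ker_minus == 3` and `dim_ker_plus == 0`. With this in mind, we turn to the proposition announced above.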
\begin{proposition} \label{Einige hinreichende Bedingungen} If at least one of the following three cases occurs for $ X = T $ or for $ X = T + S $, then the operator $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint block-Hankel operator of order $ N $ with infinite dimensional kernel for all $ \lambda \in \mathds{R} $. \begin{enumerate} \item \label{EW mit unendlicher Vielfachheit} The spectrum of $ X $ contains an eigenvalue of infinite multiplicity. In particular, this pertains to the case when the range of $ X $ is finite dimensional. \item The spectrum of $ X $ contains infinitely many eigenvalues with multiplicity at least $ N+1 $. \item \label{Vielfachheit groesser eins im stetigen Spektrum} The spectrum of the restricted operator $ \! \left. X \right|_{\mathfrak{E}^{\bot}} $ has multiplicity at least $ N+1 $ (not necessarily uniform), where $ \mathfrak{E} := \left\{ x \in \mathfrak{H} : x \text{ is an eigenvector of } X \right\} $. \end{enumerate} \end{proposition} \begin{proof} By Lemma \ref{Lemma zur Bedingung C3}, we know that condition (C3)$ _{N} $ in Theorem \ref{Charakterisierung modifiziert allgemeinere Version} holds true for all $ \lambda \in \mathds{R} $. It remains to show that $ \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) = \infty $ for all $ \lambda \in \mathds{R} $. First, suppose that there exists an eigenvalue $ \lambda_{0} $ of $ X=T $ with multiplicity $ m \geq N+1 $, i.\,e. $ m \in \{ N+1,N+2, ... \} \cup \{ \infty \} $. Define \begin{equation*} \mathfrak{M} := \left( \mathrm{Ran} ~ E_{ \{ \lambda_{0} \} }(T) \right) \cap (\mathrm{Ran} ~ S)^{\bot} \neq \{ 0 \}. \end{equation*} It is easy to show that $ \mathfrak{M} $ is a closed subspace of $ \mathfrak{H} $ such that \begin{itemize} \item $ \mathrm{dim} ~ \mathfrak{M} \geq m-N $, \item $ \left. T \right|_{\mathfrak{M}} = \left. 
\left( T + S \right) \right|_{\mathfrak{M}} $, \item $ T(\mathfrak{M}) \subset \mathfrak{M} \quad \text{and} \quad T(\mathfrak{M}^{\bot}) \subset \mathfrak{M}^{\bot} $, \item $ \left( T + S \right)(\mathfrak{M}) \subset \mathfrak{M} \quad \text{and} \quad \left( T + S \right)(\mathfrak{M}^{\bot}) \subset \mathfrak{M}^{\bot} $. \end{itemize} Since $ T $ and $ T+S $ coincide on the reducing subspace $ \mathfrak{M} $, their spectral projections agree on $ \mathfrak{M} $; therefore, $ \mathfrak{M} $ is contained in the kernel of $ D(\lambda) $ for all $ \lambda \in \mathds{R} $. It follows that the kernel of $ D(\lambda) $ is infinite dimensional for all $ \lambda \in \mathds{R} $ whenever case (1) or case (2) occurs for the operator $ X=T $ (in case (2), the subspaces $ \mathfrak{M} $ associated with infinitely many distinct eigenvalues are mutually orthogonal, so their contributions add up); in the case when $ X = T + S $ the proof runs analogously. Now suppose that case (3) occurs for $ X = T $. Write \begin{equation*} S = \sum_{k=1}^{N} \alpha_{k} \langle \cdot, \varphi_{k} \rangle \varphi_{k} : \mathfrak{H} \rightarrow \mathfrak{H}, \end{equation*} where $ \varphi_{1}, ..., \varphi_{N} $ form an orthonormal system in $ \mathfrak{H} $ and $ \alpha_{1}, ..., \alpha_{N} $ are nonzero real numbers. Define the closed subspace $ \mathfrak{N} := \overline{\mathrm{span}} \left\{ T^{j} \varphi_{k} : j \in \mathds{N}_{0}, k = 1,...,N \right\} $ of $ \mathfrak{H} $. It is well known that \begin{itemize} \item $ \left. T \right|_{\mathfrak{N}^{\bot}} = \left. \left( T + S \right) \right|_{\mathfrak{N}^{\bot}} $, \item $ T(\mathfrak{N}) \subset \mathfrak{N} \quad \text{and} \quad T(\mathfrak{N}^{\bot}) \subset \mathfrak{N}^{\bot} $, \item $ \left( T + S \right)(\mathfrak{N}) \subset \mathfrak{N} \quad \text{and} \quad \left( T + S \right)(\mathfrak{N}^{\bot}) \subset \mathfrak{N}^{\bot} $. \end{itemize} Therefore, $ \mathfrak{N}^{\bot} $ is contained in the kernel of $ D(\lambda) $ for all $ \lambda \in \mathds{R} $. A standard proof using the theory of direct integrals (see \cite[Chapter 7]{Birman_Solomyak_II}, in particular \cite[Theorem 1, p.\,177]{Birman_Solomyak_II}) shows that $ \mathfrak{N}^{\bot} $ is infinite dimensional. 
If $ X = T + S $, then the proof runs analogously. Now the proof is complete. \end{proof} \subsection{The case when $ S $ is a rank one operator.} \label{Reduktion zum zyklischen Fall} For the rest of this section, let us assume that $ S = \langle \cdot, \varphi \rangle \varphi $ is a rank one operator. The following example illustrates that $ \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) $ may attain every value in $ \mathds{N} $, provided that $ \varphi $ is not cyclic for $ T $. Recall that when $ \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) $ is neither zero nor infinity, Theorem \ref{Charakterisierung modifiziert} shows that Question \ref{The main question} has to be answered negatively. \begin{example} \label{via Krein} Essentially, this is an application of Kre{\u\i}n's example from \cite[pp.\,622--624]{Krein}. \newline Let $ 0 < \lambda < 1 $. Consider the bounded self-adjoint integral operators $ A_{j} $, $ j = 0,1 $, with kernel functions \begin{equation*} a_{0}(x,y) = \begin{cases} \sinh(x) \mathrm{e}^{-y} & \text{if } x \leq y \\ \sinh(y) \mathrm{e}^{-x} & \text{if } x \geq y \end{cases} ~~ \text{and} ~~ a_{1}(x,y) = \begin{cases} \cosh(x) \mathrm{e}^{-y} & \text{if } x \leq y \\ \cosh(y) \mathrm{e}^{-x} & \text{if } x \geq y \end{cases} \end{equation*} on the Hilbert space $ L^{2}(0,\infty) $. By \cite[pp.\,622--624]{Krein}, we know that $ A_{0} - A_{1} $ is of rank one and that the difference $ E_{(-\infty, \lambda)}(A_{0}) - E_{(-\infty, \lambda)}(A_{1}) $ is a Hankel operator. Furthermore, it was shown in \cite[Theorem 1]{Kostrykin} that $ E_{(-\infty, \lambda)}(A_{0}) - E_{(-\infty, \lambda)}(A_{1}) $ has a simple purely absolutely continuous spectrum filling in the interval $ [-1,1] $. In particular, the kernel of \begin{equation*} E_{(-\infty, \lambda)}(A_{0}) - E_{(-\infty, \lambda)}(A_{1}) \end{equation*} is trivial. Let $ k \in \mathds{N} $. 
Now consider block diagonal operators \begin{align*} \widetilde{A}_{j} := A_{j} \oplus M : L^{2}(0,\infty) \oplus \mathds{C}^{k} \rightarrow L^{2}(0,\infty) \oplus \mathds{C}^{k}, \quad j = 0,1, \end{align*} where $ M \in \mathds{C}^{k \times k} $ is an arbitrary fixed self-adjoint matrix. Then one has \begin{align*} \mathrm{dim} ~ \mathrm{Ker} \left( E_{(-\infty, \lambda)}(\widetilde{A}_{0}) - E_{(-\infty, \lambda)}(\widetilde{A}_{1}) \right) = k. \end{align*} \end{example} The following consideration shows that (up to at most countably many $ \lambda $ in the essential spectrum of $ T $) this is the only type of counterexample to Question \ref{The main question} above. The closed subspace $ \mathfrak{N}^{\bot} $ of $ \mathfrak{H} $ might be trivial, finite dimensional, or infinite dimensional, where $ \mathfrak{N} := \overline{\mathrm{span}} \{ T^{j} \varphi : j \in \mathds{N}_{0} \} $. \begin{itemize} \item[\textbf{Case 1.}] If $ \mathfrak{N}^{\bot} $ is trivial, then $ \varphi $ is cyclic for $ T $ and Proposition \ref{Main result} below implies that $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint Hankel operator for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. \item[\textbf{Case 2.}] Suppose that $ \mathrm{dim} ~ (\mathfrak{N}^{\bot}) =: k \in \mathds{N} $. We will reduce this situation to the first case. Let us identify $ \mathfrak{N}^{\bot} $ with $ \mathds{C}^{k} $. The restricted operators $ \left. T \right|_{\mathfrak{N}^{\bot}} $ and $ \left. (T+S) \right|_{\mathfrak{N}^{\bot}} $ coincide on $ \mathfrak{N}^{\bot} $, and since $ \mathfrak{N} $ reduces both $ T $ and $ T+S $ there exists a self-adjoint matrix $ M $ in $ \mathds{C}^{k \times k} $ such that $ T $ and $ T+S $ can be identified with the block diagonal operators $ \left. T \right|_{\mathfrak{N}} \oplus M $ and $ \left. 
(T+S) \right|_{\mathfrak{N}} \oplus M $ acting on $ \mathfrak{N} \oplus \mathds{C}^{k} $, respectively. Therefore, \begin{equation*} D(\lambda) = \Big( E_{(-\infty, \lambda)} \big( \! \left. T \right|_{\mathfrak{N}} + \left. S \right|_{\mathfrak{N}} \big) - E_{(-\infty, \lambda)} \big( \! \left. T \right|_{\mathfrak{N}} \big) \Big) \oplus 0, \quad \lambda \in \mathds{R}, \end{equation*} and $ \varphi $ is cyclic for $ \left. T \right|_{\mathfrak{N}} $. \item[\textbf{Case 3.}] Since $ \mathfrak{N}^{\bot} \subset \mathrm{Ker} ~ D(\lambda) $ for all $ \lambda $ in $ \mathds{R} $, it follows from Lemma \ref{Lemma zur Bedingung C3} and Theorem \ref{Charakterisierung modifiziert} that $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint Hankel operator for all $ \lambda $ in $ \mathds{R} $ if $ \mathfrak{N}^{\bot} $ is infinite dimensional. \end{itemize} \subsection{The case when $ \varphi $ is cyclic for $ T $.} \label{The first subsection} This subsection is devoted to the proof of the following theorem: \begin{theorem} \label{Main theorem II} Suppose that $ T $ is a bounded self-adjoint operator, that $ S = \langle \cdot, \varphi \rangle \varphi $ is of rank one, and that the vector $ \varphi $ is cyclic for $ T $. Let $ \lambda \in \mathds{R} \setminus \{ \min \sigma_{\mathrm{ess}}(T), ~ \max \sigma_{\mathrm{ess}}(T) \} $. Then the kernel of $ D(\lambda) $ is \begin{enumerate} \item infinite dimensional if and only if $ \lambda \in \mathds{R} \setminus \left[ \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) \right] $. \item trivial if and only if $ \lambda \in \left( \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) \right) $. \end{enumerate} In particular, one has \begin{align*} \text{either} \quad \mathrm{Ker} ~ D(\lambda) = \{ 0 \} \quad \text{or} \quad \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) = \infty. 
\end{align*} \end{theorem} The proof is based on a result by Liaw and Treil \cite{Treil_und_Liaw} and some harmonic analysis. Theorem \ref{Main theorem II} will be an important ingredient in the proof of Proposition \ref{Main result} below; it is also of independent interest. Note that, according to Theorem \ref{Main theorem II}, the kernel of $ D(\lambda) $ is trivial for \emph{all} $ \lambda $ between $ \min \sigma_{\mathrm{ess}}(T) $ and $ \max \sigma_{\mathrm{ess}}(T) $, regardless of whether the interval $ (\min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T)) $ contains points from the resolvent set of $ T $, isolated eigenvalues of $ T $, etc. It will be useful to write $ S = S_{\alpha} = \alpha \langle \cdot, \varphi \rangle \varphi $, where $ \alpha \neq 0 $ is a real number and $ \| \varphi \| = 1 $. Let $ \lambda \in \mathds{R} $. Again, we write \begin{equation*} P_{\lambda} = E_{(-\infty, \lambda)}(T+S_{\alpha}) \quad \text{and} \quad Q_{\lambda} = E_{(-\infty, \lambda)}(T). \end{equation*} Observe that, as for any pair of orthogonal projections, the kernel of $ P_{\lambda} - Q_{\lambda} $ is equal to the orthogonal sum of $ \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) $ and $ \left( \mathrm{Ker} ~ P_{\lambda} \right) \cap \left( \mathrm{Ker} ~ Q_{\lambda} \right) $. Therefore, we will investigate the dimensions of $ \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) $ and $ \left( \mathrm{Ker} ~ P_{\lambda} \right) \cap \left( \mathrm{Ker} ~ Q_{\lambda} \right) $ separately. Now we follow \cite[pp.\,1948--1949]{Treil_und_Liaw} in order to represent the operators $ T $ and $ T+S_{\alpha} $ in a form to which \cite[Theorem 2.1]{Treil_und_Liaw} is applicable. 
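The decomposition of $ \mathrm{Ker} ( P_{\lambda} - Q_{\lambda} ) $ just stated can be illustrated in finite dimensions. The following sketch is our illustration, not part of the paper; it assumes NumPy, and the two subspaces (in $ \mathds{R}^{8} $, with prescribed nontrivial intersections of ranges and of kernels) are chosen ad hoc:

```python
import numpy as np

def proj_onto(cols):
    """Orthogonal projection onto the column span of `cols` (orthonormalized via QR)."""
    q, _ = np.linalg.qr(cols)
    return q @ q.T

e = np.eye(8)
U = e[:, [0, 1, 2, 4]]                               # Ran P = span{e1, e2, e3, e5}
v = (e[:, 4] + e[:, 5]) / np.sqrt(2)
V = np.column_stack([e[:, 0], e[:, 1], e[:, 3], v])  # Ran Q = span{e1, e2, e4, (e5+e6)/sqrt 2}
P, Q = proj_onto(U), proj_onto(V)

def dim_intersection(A, B):
    """dim(span A  ∩  span B) = dim span A + dim span B - dim(span A + span B)."""
    ra, rb = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
    return int(ra + rb - np.linalg.matrix_rank(np.hstack([A, B])))

# Ker P = Ran(I - P), since P is an orthogonal projection; likewise for Q.
ran_ran = dim_intersection(U, V)
ker_ker = dim_intersection(np.eye(8) - P, np.eye(8) - Q)
dim_ker_D = int(np.sum(np.abs(np.linalg.eigvalsh(P - Q)) < 1e-10))
```

Here `ran_ran == 2` (from $ e_{1}, e_{2} $), `ker_ker == 2` (from $ e_{7}, e_{8} $), and indeed `dim_ker_D == ran_ran + ker_ker`, while the pair $ e_{5} $, $ (e_{5}+e_{6})/\sqrt{2} $ at a nonzero angle contributes no kernel. We now return to the representation of $ T $ and $ T+S_{\alpha} $.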
Define Borel probability measures $ \bbmu $ and $ \bbmu_{\alpha} $ on $ \mathds{R} $ by \begin{align*} \bbmu(\Omega) := \langle E_{\Omega}(T) \varphi, \varphi \rangle \quad \text{and} \quad \bbmu_{\alpha}(\Omega) := \langle E_{\Omega}(T+S_{\alpha}) \varphi, \varphi \rangle, \quad \Omega \in \mathcal{B}(\mathds{R}), \end{align*} respectively. According to \cite[Proposition 5.18]{Schmuedgen}, there exist unitary operators $ U : \mathfrak{H} \rightarrow L^{2}(\bbmu) $ and $ U_{\alpha} : \mathfrak{H} \rightarrow L^{2}(\bbmu_{\alpha}) $ such that $ U T U^{\ast} = M_{t} $ is the multiplication operator by the independent variable on $ L^{2}(\bbmu) $, $ U_{\alpha} (T+S_{\alpha}) U_{\alpha}^{\ast} = M_{s} $ is the multiplication operator by the independent variable on $ L^{2}(\bbmu_{\alpha}) $, and one has both $ (U \varphi)(t) = 1 $ on $ \mathds{R} $ and $ (U_{\alpha} \varphi)(s) = 1 $ on $ \mathds{R} $. Clearly, the operators $ U $ and $ U_{\alpha} $ are uniquely determined by these properties. By \cite[Theorem 2.1]{Treil_und_Liaw}, the unitary operator $ V_{\alpha} := U_{\alpha} U^{\ast} : L^{2}(\bbmu) \rightarrow L^{2}(\bbmu_{\alpha}) $ is given by \begin{align} \label{Der unitaere Operator von Treil_und_Liaw} \left( V_{\alpha} f \right)(x) = f(x) - \alpha \int \frac{f(x) - f(t)}{x - t} \mathrm{d} \bbmu(t) \end{align} for all continuously differentiable functions $ f : \mathds{R} \rightarrow \mathds{C} $ with compact support. For the rest of this subsection, we suppose that $ V_{\alpha} $ satisfies (\ref{Der unitaere Operator von Treil_und_Liaw}). Without loss of generality, we may further assume that $ T $ is already the multiplication operator by the independent variable on $ L^{2}(\bbmu) $, i.\,e., we identify $ \mathfrak{H} $ with $ L^{2}(\bbmu) $, $ T $ with $ U T U^{\ast} $, and $ T+S_{\alpha} $ with $ U (T+S_{\alpha}) U^{\ast} $. In order to prove Theorem \ref{Main theorem II}, we need the following lemma. 
\begin{lemma} \label{Main theorem II part I} Let $ \lambda \in \mathds{R} \setminus \{ \max \sigma_{\mathrm{ess}}(T) \} $. Then one has that the dimension of $ \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) $ is \begin{enumerate} \item infinite if and only if $ \lambda > \max \sigma_{\mathrm{ess}}(T) $. \item zero if and only if $ \lambda < \max \sigma_{\mathrm{ess}}(T) $. \end{enumerate} \end{lemma} \begin{proof} The idea of this proof is essentially due to the author's supervisor, Vadim Kostrykin. The well-known fact (see, e.\,g., \cite[Example 5.4]{Schmuedgen}) that $ \mathrm{supp} ~ \bbmu_{\alpha} = \sigma(T + S_{\alpha}) $ implies that the cardinality of $ (\lambda, \infty) \cap \mathrm{supp} ~ \bbmu_{\alpha} $ is infinite [resp.\,finite] if and only if $ \lambda < \max \sigma_{\mathrm{ess}}(T) $ [resp.\,$ \lambda > \max \sigma_{\mathrm{ess}}(T) $]. \newline \textbf{Case 1.} The cardinality of $ (\lambda, \infty) \cap \mathrm{supp} ~ \bbmu_{\alpha} $ is finite. \newline Since $ \lambda > \max \sigma_{\mathrm{ess}}(T) $, it follows that \begin{align*} \mathrm{dim} ~ \mathrm{Ran} ~ E_{[\lambda, \infty)}(T+S_{\alpha}) < \infty \quad \text{and} \quad \mathrm{dim} ~ \mathrm{Ran} ~ E_{[\lambda, \infty)}(T) < \infty. \end{align*} Therefore, $ \mathrm{Ran} ~ E_{(-\infty,\lambda)}(T+S_{\alpha}) \cap \mathrm{Ran} ~ E_{(-\infty,\lambda)}(T) $ is infinite dimensional. \newline \textbf{Case 2.} The cardinality of $ (\lambda, \infty) \cap \mathrm{supp} ~ \bbmu_{\alpha} $ is infinite. \newline If $ \lambda \leq \min \sigma(T) $ or $ \lambda \leq \min \sigma(T + S_{\alpha}) $, then $ \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) = \{ 0 \} $, as claimed. Now suppose that $ \lambda > \min \sigma(T) $ and $ \lambda > \min \sigma(T + S_{\alpha}) $. Let $ f \in \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) $. 
Then one has \begin{align*} f(x) = 0 ~ \text{for } \bbmu \text{-almost all } x \geq \lambda \quad \text{and} \quad \left( V_{\alpha}f \right)(x) = 0 ~ \text{for } \bbmu_{\alpha} \text{-almost all } x \geq \lambda. \end{align*} Choose a representative $ \tilde{f} $ in the equivalence class of $ f $ such that $ \tilde{f}(x) = 0 $ for \textit{all} $ x \geq \lambda $. Let $ r \in \Big( 0, \frac{ \max \sigma_{\mathrm{ess}}(T) - \lambda }{ 3 } \Big) $. According to \cite[Corollary 6.4 (a)]{Knapp} and the fact that $ \bbmu $ is a finite Borel measure on $ \mathds{R} $, we know that the set of continuously differentiable scalar-valued functions on $ \mathds{R} $ with compact support is dense in $ L^{2}(\bbmu) $ with respect to $ \| \cdot \|_{L^{2}(\bbmu)} $. Thus, a standard mollifier argument shows that we can choose continuously differentiable functions $ \tilde{f}_{n} : \mathds{R} \rightarrow \mathds{C} $ with compact support such that \begin{align*} \big\| \tilde{f}_{n} - \tilde{f} \big\|_{L^{2}(\bbmu)} < 1/n \quad \text{and} \quad \tilde{f}_{n}(x) = 0 \text{ for all } x \geq \lambda + r, \quad n \in \mathds{N}. \end{align*} In particular, we may insert $ \tilde{f}_{n} $ into Formula (\ref{Der unitaere Operator von Treil_und_Liaw}) and obtain \begin{equation*} \left( V_{\alpha} \tilde{f}_{n} \right)(x) = \alpha \int_{(-\infty, \lambda + r)} \frac{\tilde{f}_{n}(t)}{x - t} \mathrm{d} \bbmu(t) \quad \text{for all } x \geq \lambda + 2r. \end{equation*} It is readily seen that \begin{equation*} \left( B g \right)(x) := \int_{(-\infty, \lambda + r)} \frac{g(t)}{x - t} \mathrm{d} \bbmu(t), \quad x \geq \lambda + 2r, \end{equation*} defines a bounded operator $ B : L^{2} \! \left( \mathds{1}_{(-\infty, \lambda + r)} \mathrm{d} \bbmu \right) \rightarrow L^{2} \! \left( \mathds{1}_{ \left[ \! \right. \left. \lambda + 2r, \infty \right) } \mathrm{d} \bbmu_{\alpha} \right) $ with operator norm $ \leq 1/r $. 
It is now easy to show that \begin{equation*} \tag{$ \ast $} \int_{(-\infty, \lambda]} \frac{\tilde{f}(t)}{x - t} \mathrm{d} \bbmu(t) = 0 \quad \text{for } \bbmu_{\alpha} \text{-almost all } x \geq \lambda + 2r. \end{equation*} As $ r \in \Big( 0, \frac{ \max \sigma_{\mathrm{ess}}(T) - \lambda }{ 3 } \Big) $ in ($ \ast $) was arbitrary, we get that \begin{equation*} \int_{(-\infty, \lambda]} \frac{\tilde{f}(t)}{x - t} \mathrm{d} \bbmu(t) = 0 \quad \text{for } \bbmu_{\alpha} \text{-almost all } x > \lambda. \end{equation*} From now on, we may assume without loss of generality that $ \tilde{f} $ is real-valued. Consider the holomorphic function from $ \mathds{C} \setminus (-\infty, \lambda] $ to $ \mathds{C} $ defined by \begin{equation*} z \mapsto \int_{(-\infty, \lambda]} \frac{\tilde{f}(t)}{z - t} \mathrm{d} \bbmu(t). \end{equation*} Since $ \lambda < \max \sigma_{\mathrm{ess}}(T) $, the identity theorem for holomorphic functions implies that \begin{equation*} \int_{(-\infty, \lambda]} \frac{\tilde{f}(t)}{z - t} \mathrm{d} \bbmu(t) = 0 \quad \text{for all } z \in \mathds{C} \setminus (-\infty, \lambda]. \end{equation*} This yields \begin{equation*} \tag{$ \ast \ast $} \int_{(-\infty, \lambda]} \frac{\tilde{f}(t)}{(x - t)^{2} + y^{2}} \mathrm{d} \bbmu(t) = 0 \quad \text{for all } x \in \mathds{R}, ~ y > 0. \end{equation*} Consider the positive finite Borel measure $ \nu_{1} : \mathcal{B}(\mathds{R}) \rightarrow [0, \infty) $ and the finite signed Borel measure $ \nu_{2} : \mathcal{B}(\mathds{R}) \rightarrow \mathds{R} $ defined by \begin{align*} \nu_{1}(\Omega) := \int_{\Omega \cap (-\infty, \lambda]} \mathrm{d} \bbmu(t), \quad \nu_{2}(\Omega) := \int_{\Omega \cap (-\infty, \lambda]} \tilde{f}(t) \mathrm{d} \bbmu(t); \end{align*} note that $ \tilde{f} $ belongs to $ L^{1}(\bbmu) $. 
Denote by $ p_{\nu_{j}} : \{ x + \mathrm{i} y \in \mathds{C} : x \in \mathds{R} ,y > 0 \} \rightarrow \mathds{R} $ the Poisson transform of $ \nu_{j} $, \begin{align*} p_{\nu_{j}}(x + \mathrm{i} y) := y \int_{\mathds{R}} \frac{\mathrm{d} \nu_{j}(t)}{(x - t)^{2} + y^{2}}, \quad x \in \mathds{R}, ~ y > 0, ~ j=1,2. \end{align*} It follows from ($ \ast \ast $) that \begin{equation*} p_{\nu_{2}}(x + \mathrm{i} y) = 0 \quad \text{for all } x \in \mathds{R}, ~ y > 0. \end{equation*} Furthermore, since $ \nu_{1} $ is not the trivial measure, one has \begin{equation*} p_{\nu_{1}}(x + \mathrm{i} y) > 0 \quad \text{for all } x \in \mathds{R}, ~ y > 0. \end{equation*} Now \cite[Proposition 2.2]{Jaksic_Last} implies that \begin{equation*} 0 = \lim_{y \searrow 0} \frac{p_{\nu_{2}}(x + \mathrm{i} y)}{p_{\nu_{1}}(x + \mathrm{i} y)} = \tilde{f}(x) \quad \text{for } \bbmu \text{-almost all } x \leq \lambda. \end{equation*} Hence $ \tilde{f}(x) = 0 $ for $ \bbmu $-almost all $ x \in \mathds{R} $. We conclude that $ \left( \mathrm{Ran} ~ P_{\lambda} \right) \cap \left( \mathrm{Ran} ~ Q_{\lambda} \right) $ is trivial. This finishes the proof. \end{proof} Analogously, one shows that the following lemma holds true. \newpage \begin{lemma} \label{Main theorem II part II} Let $ \lambda \in \mathds{R} \setminus \{ \min \sigma_{\mathrm{ess}}(T) \} $. Then one has that the dimension of $ \left( \mathrm{Ker} ~ P_{\lambda} \right) \cap \left( \mathrm{Ker} ~ Q_{\lambda} \right) $ is \begin{enumerate} \item infinite if and only if $ \lambda < \min \sigma_{\mathrm{ess}}(T) $. \item zero if and only if $ \lambda > \min \sigma_{\mathrm{ess}}(T) $. \end{enumerate} \end{lemma} \begin{remark} The proof of Lemma \ref{Main theorem II part I} does not work if $ T $ is unbounded. To see this, consider the case where the essential spectrum of $ T $ is bounded from above and $ T $ has infinitely many isolated eigenvalues greater than $ \max \sigma_{\mathrm{ess}}(T) $. 
\end{remark} \begin{proof}[Proof of Theorem \ref{Main theorem II}] Taken together, Lemmas \ref{Main theorem II part I} and \ref{Main theorem II part II} imply Theorem \ref{Main theorem II}. \end{proof} \section{On non-invertibility of $ D(\lambda) $} \label{The section with finite rank perturbation} In this section, the self-adjoint operator $ T $ is assumed to be bounded. The main purpose of this section is to establish the following theorem. \begin{theorem} \label{Main theorem III} Let $ S : \mathfrak{H} \rightarrow \mathfrak{H} $ be a compact self-adjoint operator. Then the following assertions hold true. \begin{enumerate} \item If $ \lambda \in \mathds{R} \setminus \sigma_{\mathrm{ess}}(T) $, then $ D(\lambda) $ is a compact operator. In particular, zero belongs to the essential spectrum of $ D(\lambda) $. \item Zero belongs to the essential spectrum of $ D(\lambda) $ for all but at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. \label{part two} \end{enumerate} \end{theorem} Note that we cannot exclude the case that the exceptional set is dense in $ \sigma_{\mathrm{ess}}(T) $. \begin{remark} Mart{\'{\i}}nez-Avenda{\~n}o and Treil have shown ``that given any compact subset of the complex plane containing zero, there exists a Hankel operator having this set as its spectrum'' (see \cite[p.\,83]{Treil_Martinez}). Thus, Theorem \ref{Main theorem III} and \cite[Theorem 1.1]{Treil_Martinez} lead to the following result: \textit{for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $, there exists a Hankel operator $ \Gamma_{\lambda} $ such that $ \sigma(\Gamma_{\lambda}) = \sigma \big( D(\lambda) \big) $.} \end{remark} First, we will prove Theorem \ref{Main theorem III} in the case when the range of $ S $ is finite dimensional. If $ S $ is compact and the range of $ S $ is infinite dimensional, then the proof has to be modified. 
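The finite-rank analysis below relies on the eigenvalue-counting principle, quoted from Arazy and Zelenko in the next subsection, that a self-adjoint perturbation of rank $ N $ places at most $ N $ eigenvalues (counting multiplicities) into an interval contained in the resolvent set of the unperturbed operator. A quick finite-dimensional sanity check of this bound; the matrices and the random construction are our illustration, not the paper's, and NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 40, 2                                   # space dimension, rank of the perturbation

# A has spectrum {0, 2}; the interval (0.5, 1.5) lies in its resolvent set.
A = np.diag(np.repeat([0.0, 2.0], n // 2))

counts = []
for _ in range(100):
    # Random self-adjoint perturbation B of rank at most N.
    W = rng.standard_normal((n, N))
    alphas = 3.0 * rng.standard_normal(N)
    B = sum(a * np.outer(w, w) for a, w in zip(alphas, W.T))
    eigs = np.linalg.eigvalsh(A + B)
    counts.append(int(np.sum((eigs > 0.5) & (eigs < 1.5))))

max_in_gap = max(counts)                       # never exceeds N, by the quoted lemma
```

Over all trials, `max_in_gap` stays bounded by $ N = 2 $, in agreement with the lemma.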
\subsection{The case when the range of $ S $ is finite dimensional} Throughout this subsection, we consider a self-adjoint finite rank operator \begin{equation*} S = \sum_{j=1}^{N} \alpha_{j} \langle \cdot, \varphi_{j} \rangle \varphi_{j} : \mathfrak{H} \rightarrow \mathfrak{H}, \quad N \in \mathds{N}, \end{equation*} where $ \varphi_{1}, ..., \varphi_{N} $ form an orthonormal system in $ \mathfrak{H} $ and $ \alpha_{1}, ..., \alpha_{N} $ are nonzero real numbers. Note that if there exists $ \lambda_{0} $ in $ \mathds{R} $ such that \begin{equation*} \mathrm{dim} ~ \mathrm{Ran} ~ E_{ \{ \lambda_{0} \} }(T) = \infty \quad \text{or} \quad \mathrm{dim} ~ \mathrm{Ran} ~ E_{ \{ \lambda_{0} \} }(T+S) = \infty, \end{equation*} then $ \mathrm{dim} ~ \mathrm{Ker} ~ D(\lambda) = \infty $ (see the proof of Proposition \ref{Einige hinreichende Bedingungen} (\ref{EW mit unendlicher Vielfachheit}) above) and hence $ 0 \in \sigma_{\mathrm{ess}} \big( D(\lambda) \big) $ for all $ \lambda \in \mathds{R} $. Define the sets $ \mathcal{M}(X) $, $ \mathcal{M}_{-}(X) $, and $ \mathcal{M}_{+}(X) $ by \begin{align*} &\mathcal{M}(X) := \{ \lambda \in \sigma_{\mathrm{ess}}(X) : \text{there exist } \lambda_{k}^{\pm} \text{ in } \sigma(X) \text{ such that } \lambda_{k}^{-} \nearrow \lambda, ~ \lambda_{k}^{+} \searrow \lambda \}, \\ &\mathcal{M}_{-}(X) := \{ \lambda \in \sigma_{\mathrm{ess}}(X) : \text{there exist } \lambda_{k}^{-} \text{ in } \sigma(X) \text{ such that } \lambda_{k}^{-} \nearrow \lambda \} \setminus \mathcal{M}(X), \\ &\mathcal{M}_{+}(X) := \{ \lambda \in \sigma_{\mathrm{ess}}(X) : \text{there exist } \lambda_{k}^{+} \text{ in } \sigma(X) \text{ such that } \lambda_{k}^{+} \searrow \lambda \} \setminus \mathcal{M}(X), \end{align*} where $ X = T $ or $ X = T+S $. The following well-known result shows that these sets do not depend on whether $ X = T $ or $ X = T + S $. 
\begin{lemma}[see {\cite[Proposition 2.1]{Arazy_Zelenko}}; see also {\cite[p.\,83]{Behncke}}] Let $ A $ and $ B $ be bounded self-adjoint operators acting on $ \mathfrak{H} $. If $ N := \mathrm{dim} ~ \mathrm{Ran} ~ B $ is in $ \mathds{N} $ and $ \mathcal{I} \subset \mathds{R} $ is a nonempty interval contained in the resolvent set of $ A $, then $ \mathcal{I} $ contains no more than $ N $ eigenvalues of the operator $ A + B $ (counting multiplicities). \end{lemma} In view of this lemma and the fact that the essential spectrum is invariant under compact perturbations, we will write $ \mathcal{M} $ instead of $ \mathcal{M}(X) $, $ \mathcal{M}_{+} $ instead of $ \mathcal{M}_{+}(X) $, and $ \mathcal{M}_{-} $ instead of $ \mathcal{M}_{-}(X) $, where $ X = T $ or $ X = T+S $. \begin{lemma} \label{Trace Class Ergebnis} Let $ \lambda \in \mathds{R} \setminus ( \mathcal{M} \cup \mathcal{M}_{-} ) $. Then $ D(\lambda) $ is a trace class operator. \end{lemma} \begin{proof} Since $ \lambda \notin \mathcal{M} \cup \mathcal{M}_{-} $, the set $ \sigma(T) \cup \sigma(T+S) $ does not accumulate at $ \lambda $ from the left, so there exists $ \varepsilon > 0 $ such that $ (\lambda - \varepsilon, \lambda) \cap \big( \sigma(T) \cup \sigma(T+S) \big) = \emptyset $. Hence there exists an infinitely differentiable function $ \psi : \mathds{R} \rightarrow \mathds{R} $ with compact support such that $ \psi = 1 $ on $ \big[ \min \big( \sigma(T) \cup \sigma(T+S) \big), \lambda - \varepsilon \big] $ and $ \psi = 0 $ on $ [\lambda - \varepsilon/2, \infty) $; any such $ \psi $ satisfies \begin{equation*} E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) = \psi (T+S) - \psi (T). \end{equation*} Combining \cite[p.\,156, Equation (8.3)]{Birman_Solomyak} with \cite[p.\,532]{Peller_II} and \cite[Theorem 2]{Peller_II}, it follows that $ D(\lambda) $ is a trace class operator. \end{proof} An analogous proof shows that $ D(\lambda) $ is a trace class operator for $ \lambda $ in $ \mathcal{M}_{-} $, provided that $ E_{\{ \lambda \}}(T+S) - E_{\{ \lambda \}}(T) $ is of trace class. \begin{proposition} \label{Null im Spektrum bis auf abz} One has $ 0 \in \sigma_{\mathrm{ess}} \big( D(\lambda) \big) $ for all but at most countably many $ \lambda \in \mathds{R} $. \end{proposition} In the proof of Proposition \ref{Null im Spektrum bis auf abz}, we will use the notion of weak convergence for sequences of probability measures. 
\begin{definition} Let $ \mathcal{E} $ be a metric space. A sequence $ \bbnu_{1}, \bbnu_{2}, ... $ of Borel probability measures on $ \mathcal{E} $ is said to converge \emph{weakly} to a Borel probability measure $ \bbnu $ on $ \mathcal{E} $ if \begin{align*} \lim_{n \rightarrow \infty} \int f \mathrm{d} \bbnu_{n} = \int f \mathrm{d} \bbnu \quad \text{for every bounded continuous function } f : \mathcal{E} \rightarrow \mathds{R}. \end{align*} If $ \bbnu_{1}, \bbnu_{2}, ... $ converges weakly to $ \bbnu $, then we shall write $ \bbnu_{n} \stackrel{w}{\rightarrow} \bbnu $, $ n \rightarrow \infty $. \end{definition} \begin{proof}[Proof of Proposition \ref{Null im Spektrum bis auf abz}] First, we note that if $ \lambda < \min \big( \sigma(T) \cup \sigma(T+S) \big) $ or $ \lambda > \max \big( \sigma(T) \cup \sigma(T+S) \big) $, then $ D(\lambda) $ is the zero operator, and there is nothing to show. So let us henceforth assume that $ \lambda \geq \min \big( \sigma(T) \cup \sigma(T+S) \big) $ and $ \lambda \leq \max \big( \sigma(T) \cup \sigma(T+S) \big) $. The idea of the proof is to apply Weyl's criterion (see, e.\,g., \cite[Proposition 8.11]{Schmuedgen}) to a suitable sequence of unit vectors. In this proof, we denote by $ \| g \|_{\infty, \, \mathcal{K}} $ the supremum norm of a function $ g : \mathcal{K} \rightarrow \mathds{R} $, where $ \mathcal{K} $ is a compact subset of $ \mathds{R} $, and by $ \| A \|_{\mathrm{op}} $ the usual operator norm of an operator $ A : \mathfrak{H} \rightarrow \mathfrak{H} $. Choose a sequence $ (x_{n})_{n \in \mathds{N}} $ of unit vectors in $ \mathfrak{H} $ such that \begin{align*} &x_{1} ~ \bot ~ \left\{ \varphi_{k} : k = 1, ..., N \right\}, ~ x_{2} ~ \bot ~ \left\{ x_{1}, \varphi_{k}, T \varphi_{k} : k = 1, ..., N \right\}, ~ ..., \\ &x_{n} ~ \bot ~ \left\{ x_{1}, ..., x_{n-1}, T^{j} \varphi_{k} : j \in \mathds{N}_{0}, ~ j \leq n-1, ~ k = 1, ..., N \right\}, ~ ... 
\end{align*} Consider sequences of Borel probability measures $ (\bbnu_{n})_{n \in \mathds{N}} $ and $ (\tilde{\bbnu}_{n})_{n \in \mathds{N}} $ that are defined as follows: \begin{align*} \bbnu_{n}(\Omega) := \langle E_{\Omega}(T) x_{n}, x_{n} \rangle, \quad \tilde{\bbnu}_{n}(\Omega) := \langle E_{\Omega}(T+S) x_{n}, x_{n} \rangle, \quad \Omega \in \mathcal{B}(\mathds{R}). \end{align*} It is easy to see that by Prohorov's theorem (see, e.\,g., \cite[Proposition 7.2.3]{Parthasarathy}), there exist a subsequence of a subsequence of $ (x_{n})_{n \in \mathds{N}} $ and Borel probability measures $ \bbnu $ and $ \tilde{\bbnu} $ with support contained in $ \sigma(T) $ and $ \sigma(T+S) $, respectively, such that \begin{equation*} \bbnu_{n_{k}} \stackrel{w}{\rightarrow} \bbnu ~ \text{ as } ~ k \rightarrow \infty \quad \text{and} \quad \tilde{\bbnu}_{n_{k_{\ell}}} \stackrel{w}{\rightarrow} \tilde{\bbnu} ~ \text{ as } ~ \ell \rightarrow \infty. \end{equation*} Due to this observation, we consider the sequences $ \big( x_{n_{k_{\ell}}} \big)_{\ell \in \mathds{N}} $, $ \big( \bbnu_{n_{k_{\ell}}} \big)_{\ell \in \mathds{N}} $, and $ \big( \tilde{\bbnu}_{n_{k_{\ell}}} \big)_{\ell \in \mathds{N}} $ which will be denoted again by $ (x_{n})_{n \in \mathds{N}} $, $ (\bbnu_{n})_{n \in \mathds{N}} $, and $ (\tilde{\bbnu}_{n})_{n \in \mathds{N}} $. Put $ \mathcal{N}_{T} := \{ \mu \in \mathds{R} : \bbnu( \{ \mu \} ) > 0 \} $ and $ \mathcal{N}_{T+S} := \{ \mu \in \mathds{R} : \tilde{\bbnu}( \{ \mu \} ) > 0 \} $. Then the set $ \mathcal{N}_{T} \cup \mathcal{N}_{T+S} $ is at most countable. Consider the case where $ \lambda $ does not belong to $ \mathcal{N}_{T} \cup \mathcal{N}_{T+S} $. Define $ \xi := \min \left\{ \min \sigma(T), ~ \min \sigma(T+S) \right\} - 1 $. 
Consider the continuous functions $ f_{m} : \mathds{R} \rightarrow \mathds{R} $, $ m \in \mathds{N} $, that are defined by \begin{equation*} f_{m}(t) := \big( 1 + m (t - \xi) \big) \cdot \mathds{1}_{\left[ \xi - 1/m, ~ \xi \right]}(t) + \mathds{1}_{\left( \xi, ~ \lambda \right)} (t) + \big( 1 - m (t - \lambda) \big) \cdot \mathds{1}_{\left[ \lambda, \lambda + 1/m \right]}(t). \end{equation*} The figure below shows (qualitatively) the graph of $ f_{m} $. \begin{figure}[ht] \begin{center} \begin{picture}(200,50) \put(0,0){\line(1,0){60}} \put(60,0){\line(1,4){10}} \put(70,40){\line(1,0){60}} \put(130,40){\line(1,-4){10}} \put(140,0){\line(1,0){60}} \end{picture} \caption{The graph of $ f_{m} $.} \end{center} \end{figure} For all $ m \in \mathds{N} $, choose polynomials $ p_{m, k} $ such that \begin{align} \label{Approximiere stetige Funktion durch Polynome} \| f_{m} - p_{m, k} \|_{\infty, \, \mathcal{K}} \rightarrow 0 \quad \text{as } k \rightarrow \infty, \end{align} where $ \mathcal{K} := \big[ \! \min \big( \sigma(T) \cup \sigma(T+S) \big) - 10, \, \max \big( \sigma(T) \cup \sigma(T+S) \big) + 10 \big] $. By construction of $ (x_{n})_{n \in \mathds{N}} $, one has \begin{align} \label{Gleichheit Polynome} p_{m,k}(T+S) x_{n} = p_{m, k}(T) x_{n} \quad \text{for all } n > \text{degree of } p_{m, k }. \end{align} For all $ m \in \mathds{N} $, the function $ |\mathds{1}_{(-\infty, \lambda)} - f_{m}|^{2} $ is bounded, measurable, and continuous except for a set of both $ \bbnu $-measure zero and $ \tilde{\bbnu} $-measure zero. 
Now (\ref{Gleichheit Polynome}) and the Portmanteau theorem (see, e.\,g., \cite[Theorem 13.16 (i) and (iii)]{Klenke}) imply \begin{align*} &\limsup_{n \rightarrow \infty} \left\| \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \right) x_{n} \right\| \\ &\leq \left( \int_{\mathds{R}} | \mathds{1}_{(-\infty, \lambda)}(t) - f_{m}(t) |^{2} \mathrm{d}\bbnu(t) \right)^{1/2} \\ & \quad + \left( \int_{\mathds{R}} | \mathds{1}_{(-\infty, \lambda)}(s) - f_{m}(s) |^{2} \mathrm{d} \tilde{\bbnu}(s) \right)^{1/2} \\ & \quad + \left\| f_{m}(T) - p_{m, k}(T) \right\|_{\mathrm{op}} \\ & \quad + \left\| f_{m}(T+S) - p_{m, k}(T+S) \right\|_{\mathrm{op}} \end{align*} for all $ m \in \mathds{N} $ and all $ k \in \mathds{N} $. First, we send $ k \rightarrow \infty $ and then we take the limit $ m \rightarrow \infty $. As $ m \rightarrow \infty $, the sequence $ | \mathds{1}_{(-\infty, \lambda)} - f_{m} |^{2} $ converges to zero pointwise almost everywhere with respect to both $ \bbnu $ and $ \tilde{\bbnu} $. Now (\ref{Approximiere stetige Funktion durch Polynome}) and the dominated convergence theorem imply \begin{equation*} \lim_{n \rightarrow \infty} \left\| \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \right) x_{n} \right\| = 0. \end{equation*} Recall that $ (x_{n})_{n \in \mathds{N}} $ is an orthonormal sequence. Thus, an application of Weyl's criterion (see, e.\,g., \cite[Proposition 8.11]{Schmuedgen}) concludes the proof. \end{proof} \begin{remark} If $ T $ is unbounded, then the spectrum of $ T $ is unbounded, so that the proof of Proposition \ref{Null im Spektrum bis auf abz} does not work. For instance, we used the compactness of the spectrum in order to uniformly approximate $ f_{m} $ by polynomials. Moreover, it is unclear whether an orthonormal sequence $ (x_{n})_{n \in \mathds{N}} $ as in the proof of Proposition \ref{Null im Spektrum bis auf abz} can be found in the domain of $ T $. 
\end{remark} \subsection{The case when the range of $ S $ is infinite dimensional} In this subsection, we suppose that $ S $ is compact and the range of $ S $ is infinite dimensional. The following lemma is easily shown. \begin{lemma} \label{Compact Operator Ergebnis} Let $ \lambda \in \mathds{R} \setminus \sigma_{\mathrm{ess}}(T) $. Then $ D(\lambda) $ is compact. \end{lemma} Furthermore, Proposition \ref{Null im Spektrum bis auf abz} still holds when $ S $ is compact and the range of $ S $ is infinite dimensional. To see this, we need to modify two steps of the proof of Proposition \ref{Null im Spektrum bis auf abz}. Let us write $ S = \sum_{j=1}^{\infty} \alpha_{j} \langle \cdot, \varphi_{j} \rangle \varphi_{j} $, where $ \varphi_{1}, \varphi_{2}, ... $ is an orthonormal system in $ \mathfrak{H} $ and $ \alpha_{1}, \alpha_{2}, ... $ are nonzero real numbers. (1) In contrast to the proof of Proposition \ref{Null im Spektrum bis auf abz} above, we choose an orthonormal sequence $ x_{1}, x_{2}, ... $ in $ \mathfrak{H} $ as follows: \begin{align*} &x_{1} ~ \bot ~ \varphi_{1}, ~ x_{2} ~ \bot ~ \{ x_{1}, \varphi_{1}, \varphi_{2}, T \varphi_{1}, T \varphi_{2} \}, ~ ..., \\ &x_{n} ~ \bot ~ \{ x_{1}, ..., x_{n-1}, \varphi_{k}, T \varphi_{k}, ..., T^{n-1} \varphi_{k} : k = 1, ..., n \}, ~ ... \end{align*} By construction, one has \begin{equation*} p(T + F_{\ell}) x_{n} = p(T) x_{n} \quad \text{ for all } n > \max (\ell, ~ \text{degree of } p), \end{equation*} where $ p $ is a polynomial, $ \ell \in \mathds{N} $, and $ F_{\ell} := \sum_{j=1}^{\ell} \alpha_{j} \langle \cdot, \varphi_{j} \rangle \varphi_{j} $. 
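The mechanism behind the identity $ p(T + F_{\ell}) x_{n} = p(T) x_{n} $ can be tested numerically: if $ x $ is orthogonal to the Krylov vectors $ \varphi, T\varphi, \dots, T^{d-1}\varphi $, then each application of the rank-one part annihilates $ x, Tx, \dots, T^{d-1}x $, so $ (T+S)^{d} x = T^{d} x $. A finite-dimensional Python sketch (random matrices as stand-ins; the dimension and degree are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 12, 4                            # dimension and polynomial degree
A = rng.standard_normal((n, n))
T = (A + A.T) / 2
phi = rng.standard_normal(n)
S = np.outer(phi, phi)                  # rank-one perturbation <., phi> phi

# Krylov vectors phi, T phi, ..., T^{d-1} phi
K = np.column_stack([np.linalg.matrix_power(T, j) @ phi for j in range(d)])

# pick x in the orthogonal complement of their span
Q, _ = np.linalg.qr(K, mode='complete')
x = Q[:, d]                             # orthogonal to all columns of K

# then (T+S)^d x = T^d x, i.e. p(T+S) x = p(T) x for deg p <= d
lhs = np.linalg.matrix_power(T + S, d) @ x
rhs = np.linalg.matrix_power(T, d) @ x
assert np.allclose(lhs, rhs)
```

The orthogonality conditions imposed on $ (x_{n})_{n \in \mathds{N}} $ in step (1) are the infinite-dimensional version of this construction, carried out for all $ \varphi_{1}, \dots, \varphi_{n} $ simultaneously.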
(2) We continue as in the proof of Proposition \ref{Null im Spektrum bis auf abz} above and estimate as follows: \begin{align*} &\limsup_{n \rightarrow \infty} \left\| \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \right) x_{n} \right\| \\ &\leq \left( \int_{\mathds{R}} | \mathds{1}_{(-\infty, \lambda)}(t) - f_{m}(t) |^{2} \mathrm{d}\bbnu(t) \right)^{1/2} \\ & \quad + \left( \int_{\mathds{R}} | \mathds{1}_{(-\infty, \lambda)}(s) - f_{m}(s) |^{2} \mathrm{d} \tilde{\bbnu}(s) \right)^{1/2} \\ & \quad + \left\| f_{m}(T) - p_{m, k}(T) \right\|_{\mathrm{op}} \\ & \quad + \left\| f_{m}(T+S) - p_{m, k}(T+S) \right\|_{\mathrm{op}} \\ & \quad + \left\| p_{m,k}(T+S) - p_{m, k}(T + F_{\ell}) \right\|_{\mathrm{op}} \end{align*} for all $ k, \ell, m \in \mathds{N} $, where $ \| \cdot \|_{\mathrm{op}} $ denotes the operator norm. It is well known that the operators $ F_{\ell} $ uniformly approximate the operator $ S $ as $ \ell $ tends to infinity. Therefore, $ \left\| p_{m,k}(T+S) - p_{m, k}(T + F_{\ell}) \right\|_{\mathrm{op}} \rightarrow 0 $ as $ \ell \rightarrow \infty $. Analogously to the proof of Proposition \ref{Null im Spektrum bis auf abz} above, it follows that \begin{equation*} \lim_{n \rightarrow \infty} \left\| \left( E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \right) x_{n} \right\| = 0. \end{equation*} Hence, we have shown that zero belongs to the essential spectrum of $ D(\lambda) $ for all but at most countably many $ \lambda \in \mathds{R} $. \subsection{Proof of Theorem \ref{Main theorem III}} Taken together, Lemma \ref{Trace Class Ergebnis} and Proposition \ref{Null im Spektrum bis auf abz} show that Theorem \ref{Main theorem III} holds whenever the range of $ S $ is finite dimensional. In the preceding subsection, we have shown that Theorem \ref{Main theorem III} also holds when $ S $ is compact and the range of $ S $ is infinite dimensional. Now the proof is complete. 
\qed \subsection{The smooth situation} In order to apply a result of Pushnitski \cite{Pushnitski_I} to $ D(\lambda) $, we check the corresponding assumptions stated in \cite[p.\,228]{Pushnitski_I}. First, define the compact self-adjoint operator $ G := |S|^{\frac{1}{2}} : \mathfrak{H} \rightarrow \mathfrak{H} $ and the bounded self-adjoint operator $ S_{0} := \mathrm{sign}(S) : \mathfrak{H} \rightarrow \mathfrak{H} $. Obviously, one has $ S = G^{\ast} S_{0} G $. Define the operator-valued functions $ h_{0} $ and $ h $ on $ \mathds{R} $ by \begin{align*} h_{0}(\lambda) = G E_{(-\infty, \lambda)}(T) G^{\ast}, \quad h(\lambda) = G E_{(-\infty, \lambda)}(T+S) G^{\ast}, \quad \lambda \in \mathds{R}. \end{align*} In order to fulfill \cite[Hypothesis 1.1]{Pushnitski_I}, we need the following assumptions. \begin{hypothesis} Suppose that there exists an open interval $ \delta $ contained in the absolutely continuous spectrum of $ T $. Next, we assume that the derivatives \begin{align*} \dot{h}_{0}(\lambda) = \frac{\mathrm{d}}{\mathrm{d} \lambda} h_{0}(\lambda) \quad \text{and} \quad \dot{h}(\lambda) = \frac{\mathrm{d}}{\mathrm{d} \lambda} h(\lambda) \end{align*} exist in operator norm for all $ \lambda \in \delta $, and that the maps $ \delta \ni \lambda \mapsto \dot{h}_{0}(\lambda) $ and \linebreak $ \delta \ni \lambda \mapsto \dot{h}(\lambda) $ are H\"older continuous (with some positive exponent) in the operator norm. \end{hypothesis} Now \cite[Theorem 1.1]{Pushnitski_I} yields that for all $ \lambda \in \delta $, there exists a nonnegative real number $ a $ such that \begin{equation*} \sigma_{\mathrm{ess}} \big( D(\lambda) \big) = \left[ -a, a \right]. \end{equation*} The number $ a $ depends on $ \lambda $ and can be expressed in terms of the scattering matrix for the pair $ T $, $ T + S $, see \cite[Formula (1.3)]{Pushnitski_I}. \begin{example} \label{Kreinsches Beispiel zum Zweiten} Again, consider Kre{\u\i}n's example \cite[pp.\,622--624]{Krein}. 
That is, $ \mathfrak{H} = L^{2}(0,\infty) $, the initial operator $ T = A_{0} $ is the integral operator from Example \ref{via Krein}, and $ S = \langle \cdot, \varphi \rangle \varphi $ with $ \varphi(x) = \mathrm{e}^{-x} $. Put $ \delta = (0,1) $. ~Then Pushnitski has shown in \cite[Subsection 1.3]{Pushnitski_I} that, by \cite[Theorem 1.1]{Pushnitski_I}, one has $ \sigma_{\mathrm{ess}} \big( D(\lambda) \big) = [-1, 1] $ for all $ 0 < \lambda < 1 $. In particular, the operator $ D(\lambda) $ fulfills condition (C2) in Theorem \ref{Charakterisierung modifiziert} for all $ 0 < \lambda < 1 $. \end{example} \section{Proof of the main results} \label{Beweis des Hauptresultates} This section is devoted to the proof of Theorem \ref{new Main result} and Theorem \ref{new Main result II}. First, we need to show two auxiliary results. \subsection{Two auxiliary results} \label{Subsection with two auxiliary results} \begin{proposition} \label{Main result} Let $ T $ and $ S = \langle \cdot, \varphi \rangle \varphi $ be a bounded self-adjoint operator and a self-adjoint operator of rank one acting on $ \mathfrak{H} $, respectively. \begin{enumerate} \item The operator $ D(\lambda) $ is unitarily equivalent to a self-adjoint Hankel operator of finite rank for all $ \lambda $ in $ \mathds{R} \setminus \left[ \min \sigma_{\mathrm{ess}}(T), \max \sigma_{\mathrm{ess}}(T) \right] $. \item \label{The second item of the main theorem} Suppose that $ \varphi $ is cyclic for $ T $. Then $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint Hankel operator for all $ \lambda $ in $ \mathds{R} \setminus \sigma_{\mathrm{ess}}(T) $ and for all but at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. 
\end{enumerate} \end{proposition} \begin{proof} (1) follows easily from Lemma \ref{Lemma zur Bedingung C3} and Theorem \ref{Charakterisierung modifiziert}, because \begin{align*} D(\lambda) &= E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \\ &= E_{[ \lambda, \infty )}(T) - E_{[ \lambda, \infty )}(T+S) \end{align*} is a finite rank operator for all $ \lambda $ in $ \left( -\infty, \min \sigma_{\mathrm{ess}}(T) \right) \cup \left( \max \sigma_{\mathrm{ess}}(T), \infty \right) $. (2) is a direct consequence of Lemma \ref{Lemma zur Bedingung C3}, Theorem \ref{Main theorem II}, Theorem \ref{Main theorem III}, and Theorem \ref{Charakterisierung modifiziert}. This concludes the proof. \end{proof} \begin{lemma} \label{Vorbereitung zu new Main result II} The statements of Theorem \ref{new Main result II} hold if $ T $ is additionally assumed to be bounded. \end{lemma} \begin{proof} Let $ \lambda \in \mathds{R} $. It follows from Halmos' decomposition (see \cite{Halmos}) of $ \mathfrak{H} $ with respect to the orthogonal projections $ E_{(-\infty, \lambda)}(T+S) $ and $ E_{(-\infty, \lambda)}(T) $ that we obtain the following orthogonal decomposition of $ \mathfrak{H} $ with respect to $ D(\lambda) $: \begin{align*} \mathfrak{H} = \Big( \mathrm{Ker} ~ D(\lambda) \Big) \oplus \Big( \mathrm{Ran} ~ E_{\{ 1 \}} \big( D(\lambda) \big) \Big) \oplus \Big( \mathrm{Ran} ~ E_{\{ -1 \}} \big( D(\lambda) \big) \Big) \oplus \mathfrak{H}_{g}^{(\lambda)}. \end{align*} Here $ \mathfrak{H}_{g}^{(\lambda)} $ is the orthogonal complement of \begin{equation*} \widetilde{\mathfrak{H}}^{(\lambda)} := \Big( \mathrm{Ker} ~ D(\lambda) \Big) \oplus \Big( \mathrm{Ran} ~ E_{\{ 1 \}} \big( D(\lambda) \big) \Big) \oplus \Big( \mathrm{Ran} ~ E_{\{ -1 \}} \big( D(\lambda) \big) \Big) \end{equation*} in $ \mathfrak{H} $. Clearly, $ \mathfrak{H}_{g}^{(\lambda)} $ is reducing for the operator $ D(\lambda) $. It follows from Lemma \ref{Satz bei Chandler Davis} that $ \left. 
D(\lambda) \right|_{\mathfrak{H}_{g}^{(\lambda)}} $ is unitarily equivalent to $ - \! \left. D(\lambda) \right|_{\mathfrak{H}_{g}^{(\lambda)}} $. It is elementary to show that there exists a compact self-adjoint block diagonal operator $ \widetilde{K}_{\lambda} \oplus 0 $ on $ \widetilde{\mathfrak{H}}^{(\lambda)} \oplus \mathfrak{H}_{g}^{(\lambda)} $ with the following properties: \begin{itemize} \item $ \widetilde{K}_{\lambda} \oplus 0 $ fulfills assertion (2) in Theorem \ref{new Main result II}. \item the range of $ \widetilde{K}_{\lambda} \oplus 0 $ is infinite dimensional if and only if one of the closed subspaces $ \mathrm{Ran} ~ E_{\{ 1 \}} \big( D(\lambda) \big), ~ \mathrm{Ran} ~ E_{\{ -1 \}} \big( D(\lambda) \big) $ is finite dimensional and the other one is infinite dimensional. \item the kernel of $ D(\lambda) - \big( \widetilde{K}_{\lambda} \oplus 0 \big) $ is either trivial or infinite dimensional. \item if $ \widetilde{\mathfrak{H}}^{(\lambda)} \neq \{ 0 \} $, then the spectrum of $ \left. D(\lambda) \right|_{\widetilde{\mathfrak{H}}^{(\lambda)}} - \widetilde{K}_{\lambda} $ is contained in the interval $ [-1,1] $ and consists only of eigenvalues. Moreover, the dimensions of $ \mathrm{Ran} ~ E_{\{ t \}} \big( \! \! \left. D(\lambda) \right|_{\widetilde{\mathfrak{H}}^{(\lambda)}} - \widetilde{K}_{\lambda} \big) $ and $ \mathrm{Ran} ~ E_{\{ -t \}} \big( \! \! \left. D(\lambda) \right|_{\widetilde{\mathfrak{H}}^{(\lambda)}} - \widetilde{K}_{\lambda} \big) $ differ by at most one, for all $ 0 < t \leq 1 $. \end{itemize} The block diagonal operator $ \widetilde{K}_{\lambda} \oplus 0 $ serves as a correction term for $ D(\lambda) $. In particular, no correction term is needed if $ \widetilde{\mathfrak{H}}^{(\lambda)} = \{ 0 \} $. 
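The symmetry exploited here, namely that away from the eigenvalues $ 0 $ and $ \pm 1 $ the spectrum of a difference of two orthogonal projections is symmetric about the origin, is easy to observe numerically for random finite-rank projections (a toy sketch; the matrices below are arbitrary stand-ins, not the operators of the lemma):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 4
# two random rank-k orthogonal projections P and Q
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
W, _ = np.linalg.qr(rng.standard_normal((n, k)))
P, Q = U @ U.T, W @ W.T

evals = np.linalg.eigvalsh(P - Q)

# the spectrum of P - Q lies in [-1, 1] ...
assert evals.min() >= -1 - 1e-10 and evals.max() <= 1 + 1e-10
# ... and is symmetric about 0 (eigenvalues come in +/- pairs, plus a kernel)
assert np.allclose(np.sort(evals), np.sort(-evals))
```

For generic subspaces of equal dimension, the nonzero eigenvalues are $ \pm \sin \theta_{i} $ for the principal angles $ \theta_{i} $ between the two ranges, which makes the symmetry visible directly.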
Theorem \ref{Main theorem III} and the invariance of the essential spectrum under compact perturbations imply that zero belongs to the essential spectrum of $ D(\lambda) - \big( \widetilde{K}_{\lambda} \oplus 0 \big) $ for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. Therefore, an application of \cite[Theorem 1]{Megretskii_et_al} yields that $ D(\lambda) - \big( \widetilde{K}_{\lambda} \oplus 0 \big) $ is unitarily equivalent to a bounded self-adjoint Hankel operator $ \Gamma_{\lambda} $ on $ \ell^{2}(\mathds{N}_{0}) $ for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. Thus, by the properties of $ \widetilde{K}_{\lambda} \oplus 0 $ listed above, the claim follows. \end{proof} \begin{remark} If we consider $ E_{(-\infty, \lambda]}(T) - E_{(-\infty, \lambda]}(T + S) $, the difference of the spectral projections associated with the closed interval $ (-\infty, \lambda] $ instead of the open interval $ (-\infty, \lambda) $, then all assertions in Lemma \ref{Lemma zur Bedingung C3}, Proposition \ref{Einige hinreichende Bedingungen}, Theorem \ref{Main theorem II}, Theorem \ref{Main theorem III}, Proposition \ref{Main result}, and Lemma \ref{Vorbereitung zu new Main result II} remain true. All proofs can easily be modified. \end{remark} \subsection{The case when $ T $ is semibounded} \label{Zum halbbeschraenkten Fall} In this subsection, which is based on Kre{\u\i}n's approach in \cite[pp.\,622--623]{Krein}, we deal with the case when the self-adjoint operator $ T $ is semibounded \emph{but not bounded}. As before, we write \begin{equation*} D(\lambda) = E_{(-\infty, \lambda)}(T+S) - E_{(-\infty, \lambda)}(T) \end{equation*} if $ S $ is a compact self-adjoint operator and $ \lambda \in \mathds{R} $. First, consider the case when $ T $ is bounded from below. 
Choose $ c \in \mathds{R} $ such that \begin{align} \label{Der nach unten beschraenkte Fall} T + cI \geq 0 \quad \text{and} \quad T + S + cI \geq 0. \end{align} It suffices to consider $ D(\lambda) $ for $ \lambda \geq -c $. Compute \begin{align*} D(\lambda) &= E_{[\lambda, \infty)}(T) - E_{[\lambda, \infty)}(T+S) \\ &= E_{(-\infty, \mu ]} \big( (T + (1+c)I)^{-1} \big) - E_{(-\infty, \mu ]} \big( (T + S + (1+c)I)^{-1} \big), \end{align*} where $ \mu = \frac{1}{\lambda + 1 + c} $. By the second resolvent equation, one has \begin{equation*} \big( T + S + (1+c)I \big)^{-1} = \big( T + (1+c)I \big)^{-1} - \big( T + S + (1+c)I \big)^{-1} S \big( T + (1+c)I \big)^{-1}. \end{equation*} The operator \begin{equation} \label{Festlegung von S strich} S^{\prime} := - \big( T + S + (1+c)I \big)^{-1} S \big( T + (1+c)I \big)^{-1} \end{equation} is compact and self-adjoint. One can easily show that $ \mathrm{rank} ~ S^{\prime} = \mathrm{rank} ~ S $. In particular, if $ S = \langle \cdot, \varphi \rangle \varphi $ has rank one and $ \varphi^{\prime} := \frac{(T + (1+c)I)^{-1} \varphi}{\| (T + (1+c)I)^{-1} \varphi \|} $, then there exists a number $ \alpha^{\prime} \in \mathds{R} $ such that $ S^{\prime} = \alpha^{\prime} \langle \cdot, \varphi^{\prime} \rangle \varphi^{\prime} $. We have shown: \begin{lemma} \label{The bounded case again} Let $ T $ be a self-adjoint operator which is bounded from below but not bounded, let $ S $ be a compact self-adjoint operator, and let $ c $ be such that (\ref{Der nach unten beschraenkte Fall}) holds. ~Then $ D(\lambda) = 0 $ for all $ \lambda < -c $ and \begin{align*} D(\lambda) = E_{(-\infty, \mu]} \big( T^{\prime} \big) - E_{(-\infty, \mu]} \big( T^{\prime} + S^{\prime} \big) \quad \text{for all } \lambda \geq -c. \end{align*} Here $ \mu = \frac{1}{\lambda + 1 + c} $\,, $ T^{\prime} = \big( T + (1+c)I \big)^{-1} $, and $ S^{\prime} $ is defined as in (\ref{Festlegung von S strich}). 
\end{lemma} The case when $ T $ is bounded from below can now be pulled back to the bounded case, see the remark in Subsection \ref{Subsection with two auxiliary results} above. \begin{proposition} \label{Hauptergebnis im nach unten beschraenkten Fall} Suppose that $ S = \langle \cdot, \varphi \rangle \varphi $ is a self-adjoint operator of rank one and that $ T $ is a self-adjoint operator which is bounded from below but not bounded. Assume further that the spectrum of $ T $ is not purely discrete and that the vector $ \varphi $ is cyclic for $ T $. ~Then the kernel of $ D(\lambda) $ is trivial for all $ \lambda > \min \sigma_{\mathrm{ess}}(T) $. Furthermore, $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint Hankel operator for all $ \lambda $ in $ \mathds{R} \setminus \sigma_{\mathrm{ess}}(T) $ and for all but at most countably many $ \lambda $ in $ \sigma_{\mathrm{ess}}(T) $. \end{proposition} \begin{proof} Let $ c $ be such that (\ref{Der nach unten beschraenkte Fall}) holds. It is easy to show that $ (T+(1+c)I)^{-1} \varphi $ is cyclic for $ (T+(1+c)I)^{-1} $ if $ \varphi $ is cyclic for $ T $. Furthermore, it is easy to show that the function $ x \mapsto \frac{1}{x+1+c} $ is one-to-one from $ \sigma_{\mathrm{ess}}(T) $ onto $ \sigma_{\mathrm{ess}} \big( (T + (1+c)I)^{-1} \big) \setminus \{ 0 \} $. One has that $ \min \sigma_{\mathrm{ess}} \big( (T + (1+c)I)^{-1} \big) = 0 $ and, since the spectrum of $ T $ is not purely discrete, $ \max \sigma_{\mathrm{ess}} \big( (T + (1+c)I)^{-1} \big) = \frac{1}{\lambda_{0}+1+c} $, where $ \lambda_{0} := \min \sigma_{\mathrm{ess}}(T) $. Therefore, $ \mu = \frac{1}{\lambda+1+c} $ belongs to the open interval $ \Big( 0, \frac{1}{\lambda_{0}+1+c} \Big) $ if and only if $ \lambda > \lambda_{0} $. In view of Lemma \ref{The bounded case again} and the remark in Subsection \ref{Subsection with two auxiliary results} above, the claims follow. 
\end{proof} Moreover, standard computations show: \begin{corollary} \label{Same list of sufficient conditions} Suppose that $ S $ is a self-adjoint operator of finite rank $ N \in \mathds{N} $ and that $ T $ is a self-adjoint operator which is bounded from below but not bounded. We obtain the same list of sufficient conditions for $ D(\lambda) $ to be unitarily equivalent to a bounded self-adjoint block-Hankel operator of order $ N $ with infinite dimensional kernel for all $ \lambda \in \mathds{R} $ as in Proposition \ref{Einige hinreichende Bedingungen} above. \end{corollary} \begin{proof} Let $ X = T $ or $ X = T+S $ and let $ c $ be such that (\ref{Der nach unten beschraenkte Fall}) holds. One has: \begin{itemize} [leftmargin=2.5em] \item The real number $ \lambda $ is an eigenvalue of $ X $ with multiplicity $ k \in \mathds{N} \cup \{ \infty \} $ if and only if $ \frac{1}{\lambda+1+c} $ is an eigenvalue of $ (X+(1+c)I)^{-1} $ with the same multiplicity $ k $. \item The spectrum of the restricted operator $ \left. X \right|_{\mathfrak{E}^{\bot}} $ has multiplicity at least $ N+1 $ if and only if the spectrum of the restricted operator $ \left. (X+(1+c)I)^{-1} \right|_{\mathfrak{E}^{\bot}} $ has multiplicity at least $ N+1 $, where $ \mathfrak{E} := \left\{ x \in \mathfrak{H} : x \text{ is an eigenvector of } X \right\} $. \end{itemize} In view of Lemma \ref{The bounded case again} and the remark in Subsection \ref{Subsection with two auxiliary results} above, the claim follows. \end{proof} In Proposition \ref{Hauptergebnis im nach unten beschraenkten Fall}, we assumed that the spectrum of $ T $ is not purely discrete. Now consider the case when $ T $ has a purely discrete spectrum. By the invariance of the essential spectrum under compact perturbations, it is clear that the operator $ T + S $ also has a purely discrete spectrum, for all compact self-adjoint operators $ S $. 
Moreover, since $ T $ is bounded from below, we know that $ T + S $ is bounded from below as well. Therefore, the range of $ D(\lambda) $ is finite dimensional for all $ \lambda \in \mathds{R} $, and in particular conditions (C1) and (C2) in Theorem \ref{Charakterisierung modifiziert allgemeinere Version} are fulfilled. Combining this with Lemma \ref{Lemma zur Bedingung C3} and the remark in Subsection \ref{Subsection with two auxiliary results} above, we have shown: \begin{proposition} \label{Keine Ausnahmepunkte im Fall von purely discrete spectrum} Suppose that $ S $ is a self-adjoint operator of finite rank $ N \in \mathds{N} $ and that $ T $ is a self-adjoint operator which is bounded from below and has a purely discrete spectrum. Then $ D(\lambda) $ is unitarily equivalent to a finite rank self-adjoint block-Hankel operator of order $ N $ for \emph{all} $ \lambda \in \mathds{R} $. \end{proposition} This proposition supports the idea of a structural connection between the operator $ D(\lambda) $ and block-Hankel operators. Now, consider the case when $ T $ is bounded from above. Choose $ c \in \mathds{R} $ such that \begin{align*} T - cI \leq 0 \quad \text{and} \quad T + S - cI \leq 0. \end{align*} It suffices to consider $ D(\lambda) $ for $ \lambda \leq c $. Compute \begin{align*} D(\lambda) &= E_{(\mu, \infty)} \big( (T+S-(1+c)I )^{-1} \big) - E_{(\mu, \infty)} \big( (T-(1+c)I)^{-1} \big) \\ &= E_{(-\infty, \mu]} \big( (T-(1+c)I)^{-1} \big) - E_{(-\infty, \mu]} \big( (T+S-(1+c)I )^{-1} \big), \end{align*} where $ \mu = \frac{1}{\lambda - (1 + c)} $. By the second resolvent equation, one has \begin{equation*} \big( T + S - (1+c)I \big)^{-1} = \big( T - (1+c)I \big)^{-1} - \big( T + S - (1+c)I \big)^{-1} S \big( T - (1+c)I \big)^{-1}. \end{equation*} The operator $ S^{\prime \prime} := - \big( T + S - (1+c)I \big)^{-1} S \big( T - (1+c)I \big)^{-1} $ is compact and self-adjoint with $ \mathrm{rank} ~ S^{\prime \prime} = \mathrm{rank} ~ S $.
In particular, if $ S = \langle \cdot, \varphi \rangle \varphi $ has rank one and $ \varphi^{\prime \prime} := \frac{(T - (1+c)I)^{-1} \varphi}{\| (T - (1+c)I)^{-1} \varphi \|} $, then there exists a number $ \alpha^{\prime \prime} \in \mathds{R} $ such that $ S^{\prime \prime} = \alpha^{\prime \prime} \langle \cdot, \varphi^{\prime \prime} \rangle \varphi^{\prime \prime} $. Now proceed analogously to the case when $ T $ is bounded from below. It follows that Proposition \ref{Hauptergebnis im nach unten beschraenkten Fall} holds in the case when $ T $ is bounded from above but not bounded if we replace $ \lambda > \min \sigma_{\mathrm{ess}}(T) $ by $ \lambda < \max \sigma_{\mathrm{ess}}(T) $. Furthermore, Corollary \ref{Same list of sufficient conditions} still holds if $ T $ is bounded from above but not \linebreak bounded. Obviously, Proposition \ref{Keine Ausnahmepunkte im Fall von purely discrete spectrum} holds in the case when $ T $ is bounded from above. \label{Ergaenzung Proposition} \subsection{Proof of Theorem \ref{new Main result} and Theorem \ref{new Main result II}} Let us first complete the proof of Theorem \ref{new Main result}. \begin{proof}[Proof of Theorem \ref{new Main result}] If the operator $ T $ is bounded, then the statement of Theorem \ref{new Main result} follows from Proposition \ref{Main result} and the discussion of Case 1 -- Case 3 in Subsection \ref{Reduktion zum zyklischen Fall} above. Now suppose that $ T $ is bounded from below but not bounded and let $ c $ be such that (\ref{Der nach unten beschraenkte Fall}) holds. First, assume that the spectrum of $ T $ is not purely discrete. ~If $ \varphi $ is cyclic for $ T $, then the claim follows from Proposition \ref{Hauptergebnis im nach unten beschraenkten Fall}. ~In the case when $ \varphi $ is not cyclic for $ T $, we consider the bounded operator $ T^{\prime} $ and the rank one operator $ S^{\prime} $ defined as in Lemma \ref{The bounded case again} above. 
As we have noted in the proof of Proposition \ref{Hauptergebnis im nach unten beschraenkten Fall}, it is easy to show that the function $ x \mapsto \frac{1}{x+1+c} $ is one-to-one from $ \sigma_{\mathrm{ess}}(T) $ onto $ \sigma_{\mathrm{ess}} \big( T^{\prime} \big) \setminus \{ 0 \} $. Now the statement of Theorem \ref{new Main result} follows from the remark in Subsection \ref{Subsection with two auxiliary results}, Proposition \ref{Main result}, and the discussion of Case 1 -- Case 3 in Subsection \ref{Reduktion zum zyklischen Fall} above. If $ T $ has a purely discrete spectrum, then Proposition \ref{Keine Ausnahmepunkte im Fall von purely discrete spectrum} shows that $ D(\lambda) $ is unitarily equivalent to a finite rank self-adjoint Hankel operator for all $ \lambda \in \mathds{R} $. If $ T $ is bounded from above but not bounded, then the proof runs analogously. This finishes the proof. \end{proof} Now let us prove Theorem \ref{new Main result II}. \begin{proof}[Proof of Theorem \ref{new Main result II}] In view of Lemma \ref{Vorbereitung zu new Main result II}, it suffices to consider the case when $ T $ is semibounded but not bounded. First, let $ T $ be bounded from below but not bounded and let $ c $ be such that (\ref{Der nach unten beschraenkte Fall}) holds. Again, recall that the function $ x \mapsto \frac{1}{x+1+c} $ is one-to-one from $ \sigma_{\mathrm{ess}}(T) $ onto $ \sigma_{\mathrm{ess}} \big( (T + (1+c)I)^{-1} \big) \setminus \{ 0 \} $. Now the statements of Theorem \ref{new Main result II} follow from Lemma \ref{The bounded case again}, the remark in Subsection \ref{Subsection with two auxiliary results} above, and Lemma \ref{Vorbereitung zu new Main result II}. If $ T $ is bounded from above but not bounded, then the proof runs analogously. 
\end{proof} \section{Some examples} \label{Beispiele und Anwendungen} In this section, we apply the above theory in the context of operators that are of particular interest in various fields of (applied) mathematics, such as Schr\"odinger operators. In each of the following examples, the operator $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint (block-) Hankel operator for all $ \lambda $ in $ \mathds{R} $. First, we consider the case when $ T $ has a purely discrete spectrum. \newpage \begin{example} Let $ \mathfrak{H} = L^{2}(\mathds{R}^{n}) $ and suppose that $ V \geq 0 $ is in $ L_{\mathrm{loc}}^{1}(\mathds{R}^{n}) $ such that the Lebesgue measure of $ \{ x \in \mathds{R}^{n} : 0 \leq V(x) < M \} $ is finite for all $ M > 0 $. Then the self-adjoint Schr\"odinger operator $ T \geq 0 $ defined by the form sum of $ - \Delta $ and $ V $ has a purely discrete spectrum, see \cite[Example 4.1]{Wang-Wu}; see also \cite[Theorem 1]{Simon_B_II}. Therefore, if $ S $ is any self-adjoint operator of finite rank $ N $, then Proposition \ref{Keine Ausnahmepunkte im Fall von purely discrete spectrum} implies that $ D(\lambda) $ is unitarily equivalent to a finite rank self-adjoint block-Hankel operator of order $ N $ for all $ \lambda \in \mathds{R} $. \end{example} Next, consider the case when $ S = \langle \cdot, \varphi \rangle \varphi $ is of rank one and $ \varphi $ is cyclic for $ T $. \begin{example} \label{Rekonstruktion Kreinsches Beispiel} Once again, consider Kre{\u\i}n's example \cite[pp.\,622--624]{Krein}. The operators $ T = A_{0} $ and $ T + \langle \cdot, \varphi \rangle \varphi = A_{1} $, where $ \varphi(x) = \mathrm{e}^{-x} $, from Example \ref{via Krein} both have a simple purely absolutely continuous spectrum filling in the interval $ [0,1] $. Therefore, $ D(\lambda) $ is the zero operator for all $ \lambda \in \mathds{R} \setminus (0,1) $. \emph{(}$ \ast $\emph{)} \hspace{2ex} The function $ \varphi $ is cyclic for $ T $.
\newline Hence, Theorem \ref{Main theorem II} implies that the kernel of $ D(\lambda) $ is trivial for all $ 0 < \lambda < 1 $. Furthermore, an application of Proposition \ref{Main result} yields that $ D(\lambda) $ is unitarily equivalent to a bounded self-adjoint Hankel operator for all $ \lambda $ in $ \mathds{R} $ except for at most countably many $ \lambda $ in $ [0,1] $. Note that, in this example, explicit computations show that there are no exceptional points (see \cite{Krein}). \end{example} \begin{proof}[Proof of \emph{(}$ \ast $\emph{)}] Let $ k $ be in $ \mathds{N}_{0} $. Define the $ k $th Laguerre polynomial $ L_{k} $ on $ (0,\infty) $ by $ L_{k}(x) := \frac{\mathrm{e}^{x}}{k!} ~ \frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}} (x^{k} \mathrm{e}^{-x}) $. Furthermore, define $ \psi_{k} $ on $ (0,\infty) $ by $ \psi_{k}(x) := x^{k} \mathrm{e}^{-x} $. A straightforward computation shows that \begin{align*} \big( A_{0} \psi_{k} \big) (x) = \frac{1}{2} \, \mathrm{e}^{-x} \bigg\{ \frac{x^{k+1}}{k+1} + \frac{1}{2^{k+1}} \sum_{\ell=0}^{k-1} (2x)^{k-\ell} ~ \frac{k!}{(k-\ell)!} \bigg\}. \end{align*} By induction on $ n \in \mathds{N}_{0} $, it easily follows that $ p \cdot \varphi $ belongs to the linear span of $ A_{0}^{\ell} \varphi $, $ \ell \in \mathds{N}_{0} $, $ \ell \leq n $, for all polynomials $ p $ of degree $ \leq n $. In particular, the functions $ \phi_{j} $ defined on $ (0, \infty) $ by $ \phi_{j}(x) := \sqrt{2} ~ L_{j}(2x) \mathrm{e}^{-x} $ are elements of $ \mathrm{span} \big\{ A_{0}^{\ell} \varphi : \ell \in \mathds{N}_{0}, \ell \leq n \big\} $ for all $ j \in \mathds{N}_{0} $ with $ j \leq n $. Since $ (\phi_{j})_{j \in \mathds{N}_{0}} $ is an orthonormal basis of $ L^{2}(0,\infty) $, it follows that $ \varphi $ is cyclic for $ T $. \end{proof} Example \ref{Rekonstruktion Kreinsches Beispiel} suggests the conjecture that Proposition \ref{Main result} (\ref{The second item of the main theorem}) can be strengthened to hold up to a finite exceptional set.
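The orthonormality underlying the last step of the proof can be checked numerically: after the substitution $ u = 2x $, one has $ \langle \phi_{i}, \phi_{j} \rangle_{L^{2}(0,\infty)} = \int_{0}^{\infty} L_{i}(u) L_{j}(u) \, \mathrm{e}^{-u} \, \mathrm{d}u $, and Gauss--Laguerre quadrature evaluates such polynomial integrands exactly. A sketch using NumPy's Laguerre module:

```python
import numpy as np
from numpy.polynomial import laguerre

m = 5                                    # check phi_0, ..., phi_4
nodes, wts = laguerre.laggauss(20)       # quadrature for int_0^inf e^{-u} f(u) du

# L_j evaluated at the quadrature nodes (unit coefficient vector selects L_j)
L = np.column_stack([laguerre.lagval(nodes, np.eye(m)[j]) for j in range(m)])

# Gram matrix <phi_i, phi_j> = int_0^inf L_i(u) L_j(u) e^{-u} du  (after u = 2x)
G = L.T @ (wts[:, None] * L)
assert np.allclose(G, np.eye(m))         # the phi_j are orthonormal
```

The quadrature rule with $ 20 $ nodes is exact for polynomials of degree up to $ 39 $, well beyond the degree $ 8 $ integrands occurring here.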
Last, we consider different examples where the multiplicity in the spectrum of $ T $ is such that we can apply Proposition \ref{Einige hinreichende Bedingungen}. \begin{example} \begin{enumerate} \item Let $ T $ be an arbitrary orthogonal projection on $ \mathfrak{H} $, and let $ S $ be a self-adjoint operator of finite rank. Then zero or one is an eigenvalue of $ T $ with infinite multiplicity, and we can apply Proposition \ref{Einige hinreichende Bedingungen}. \item Put $ \mathfrak{H} = L^{2}(0,\infty) $ and let $ T $ be the Carleman operator, i.\,e., the bounded Hankel operator such that \begin{align*} (Tg)(x) = \int_{0}^{\infty} \frac{g(y)}{x+y} \mathrm{d}y \end{align*} for all continuous functions $ g : (0,\infty) \rightarrow \mathds{C} $ with compact support. \newpage It is well known (see, e.\,g., \cite[Chapter 10, Theorem 2.3]{Peller}) that the Carleman operator has a purely absolutely continuous spectrum of uniform multiplicity two filling in the interval $ [0, \uppi] $. Therefore, if $ S $ is any self-adjoint operator of rank one, Proposition \ref{Einige hinreichende Bedingungen} can be applied. \end{enumerate} \end{example} \subsection*{Jacobi operators} Consider a bounded self-adjoint Jacobi operator $ H $ acting on the Hilbert space $ \ell^{2}(\mathds{Z}) $ of complex square summable two-sided sequences. \linebreak More precisely, suppose that there exist bounded real-valued sequences $ a = (a_{n})_{n} $ and $ b = (b_{n})_{n} $ with $ a_{n} > 0 $ for all $ n \in \mathds{Z} $ such that \begin{align*} (H x)_{n} = a_{n} x_{n+1} + a_{n-1} x_{n-1} + b_{n} x_{n}, \quad n \in \mathds{Z}, \end{align*} cf.\,\cite[Theorem 1.5 and Lemma 1.6]{Teschl}. The following result is well known. \begin{proposition}[see \cite{Teschl}, Lemma 3.6] Let $ H $ be a bounded self-adjoint Jacobi operator on $ \ell^{2}(\mathds{Z}) $. Then the singular spectrum of $ H $ has spectral multiplicity one, and the absolutely continuous spectrum of $ H $ has multiplicity at most two. 
\end{proposition} In the case where $ H $ has a simple spectrum, there exists a cyclic vector $ \varphi $ for $ H $, and we can apply Proposition \ref{Main result} to $ H $ with the rank one perturbation $ S = \langle \cdot, \varphi \rangle \varphi $. Otherwise, $ H $ fulfills condition (\ref{Vielfachheit groesser eins im stetigen Spektrum}) with $ N = 1 $ in Proposition \ref{Einige hinreichende Bedingungen}. Let us discuss some examples in the latter case with $ T = H $. Since $ S $ can be an arbitrary self-adjoint operator of rank one, we do not mention it in the following. \begin{example} Consider the discrete Schr\"odinger operator $ H = H_{V} $ on $ \ell^{2}(\mathds{Z}) $ with bounded potential $ V : \mathds{Z} \rightarrow \mathds{R} $, \begin{align*} \big( H_{V}x \big)_{n} = x_{n+1} + x_{n-1} + V_{n} x_{n}, \quad n \in \mathds{Z}. \end{align*} If the spectrum of $ H_{V} $ contains only finitely many points outside of the interval $ [-2,2] $, then \cite[Theorem 2]{Damanik_et_al} implies that $ H_{V} $ has a purely absolutely continuous spectrum of multiplicity two on $ [-2,2] $. It is well known that the free Jacobi operator $ H_{0} $ with $ V = 0 $ has a purely absolutely continuous spectrum of multiplicity two filling in the interval $ [-2,2] $. \end{example} Let us consider the almost Mathieu operator $ H = H_{\kappa, \beta, \theta} : \ell^{2}(\mathds{Z}) \rightarrow \ell^{2}(\mathds{Z}) $ defined by \begin{align*} (H x)_{n} = x_{n+1} + x_{n-1} + 2 \kappa \cos \! \big( 2 \uppi (\theta + n \beta) \big) x_{n}, \quad n \in \mathds{Z}, \end{align*} where $ \kappa \in \mathds{R} \setminus \{ 0 \} $ and $ \beta, \theta \in \mathds{R} $. In fact, it suffices to consider $ \beta, \theta \in \mathds{R} / \mathds{Z} $. The almost Mathieu operator plays an important role in physics, see, for instance, the review \cite{Last_Y} and the references therein. 
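For the free operator ($ V = 0 $) the multiplicity-two statement can be made plausible by a one-line computation: the plane waves $ x_{n} = \mathrm{e}^{\pm \mathrm{i} \theta n} $ are generalized (non-$\ell^{2}$) eigenfunctions with eigenvalue $ 2 \cos \theta $, which sweeps out $ [-2,2] $; the two signs of $ \theta $ account for the multiplicity. A minimal numerical check (plain Python; the sampled values of $ \theta $ and $ n $ are arbitrary choices):

```python
import cmath
import math

def free_apply(x, n):
    # (H_0 x)_n = x_{n+1} + x_{n-1}  (free discrete Schroedinger operator)
    return x(n + 1) + x(n - 1)

def eigen_defect(theta, n):
    # plane wave x_n = exp(i*theta*n); defect of (H_0 x)_n = 2 cos(theta) x_n
    x = lambda m: cmath.exp(1j * theta * m)
    return abs(free_apply(x, n) - 2.0 * math.cos(theta) * x(n))

defects = [eigen_defect(theta, n)
           for theta in (0.3, 1.1, 2.9)
           for n in (-5, 0, 7)]
```

The defect vanishes identically since $ \mathrm{e}^{\mathrm{i}\theta(n+1)} + \mathrm{e}^{\mathrm{i}\theta(n-1)} = (\mathrm{e}^{\mathrm{i}\theta} + \mathrm{e}^{-\mathrm{i}\theta}) \mathrm{e}^{\mathrm{i}\theta n} $.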
Here, we are interested in cases where Proposition \ref{Einige hinreichende Bedingungen} can be applied to the almost Mathieu operator with an arbitrary self-adjoint rank one perturbation. Sufficient conditions for this purpose are provided in the following lemma. \begin{lemma} \label{Example almost Mathieu operator} \begin{enumerate} \item If $ \beta $ is rational, then for all $ \kappa $ and $ \theta $ the almost Mathieu operator $ H_{\kappa, \beta, \theta} $ is periodic and has a purely absolutely continuous spectrum of uniform multiplicity two. \item If $ \beta $ is irrational and $ | \kappa | < 1 $, then for all $ \theta $ the almost Mathieu operator $ H_{\kappa, \beta, \theta} $ has a purely absolutely continuous spectrum of uniform multiplicity two. \end{enumerate} \end{lemma} \begin{proof} (1) If $ \beta $ is rational, then $ H_{\kappa, \beta, \theta} $ is a periodic Jacobi operator. Hence, it is well known (see, e.\,g., \cite[p.\,122]{Teschl}) that the spectrum of $ H_{\kappa, \beta, \theta} $ is purely absolutely continuous. According to \cite[Theorem 9.1]{Deift_Simon}, we know that the absolutely continuous spectrum of $ H_{\kappa, \beta, \theta} $ is uniformly of multiplicity two. This proves (1). (2) Suppose that $ \beta $ is irrational. Avila has shown (see \cite[Main Theorem]{Avila}) that the almost Mathieu operator $ H_{\kappa, \beta, \theta} $ has a purely absolutely continuous spectrum if and only if $ | \kappa | < 1 $. Again, \cite[Theorem 9.1]{Deift_Simon} implies that the absolutely continuous spectrum of $ H_{\kappa, \beta, \theta} $ is uniformly of multiplicity two. This finishes the proof. \end{proof} Problems 4--6 of Simon's list \cite{Simon_B} are concerned with the almost Mathieu operator. Avila's result \cite[Main Theorem]{Avila}, which we used in the above proof, is a solution for Problem 6 in \cite{Simon_B}. \section*{Acknowledgments} The author would like to thank his Ph.D.
supervisor, Vadim Kostrykin, for many fruitful discussions. The author would like to thank Julian Gro{\ss}mann, Stephan Schmitz, and Albrecht Seelmann for reading the manuscript and Konstantin A. Makarov for editorial advice. Furthermore, the author would like to thank Alexander Pushnitski for his invitation to give a talk at the King's College Analysis Seminar and for helpful discussions on differences of functions of operators.
\section{Introduction} RNA is believed to be central to the understanding of evolution. It acts as a genotypic legislature in the form of viruses and viroids and as a phenotypic executive in the form of ribozymes, which are capable of catalytic activity, cleaving other RNA molecules. This dualism gives rise to the hypothesis that RNA may have preceded DNA and proteins, therefore playing a key role in prebiotic evolution. In light of growing support for an {\it RNA world} \cite{Schuster:02} and even RNA-based metabolisms or the prospect of self-replicating RNA \cite{Poole:06a}, it is the phenotypic aspect of RNA that still lacks deeper understanding. Despite the fact that pseudoknot RNA structures are known to be of central importance \cite{Science:05a}, little is known from a theoretical point of view; for instance, their generating function has been obtained only recently \cite{Reidys:07pseu}. Let us first provide some background on RNA sequences and structures which allows us to put our results into context. The primary sequence of an RNA molecule is the sequence of nucleotides {\bf A}, {\bf G}, {\bf U} and {\bf C} together with the Watson-Crick ({\bf A-U}, {\bf G-C}) and ({\bf U-G}) base pairing rules specifying which pairs of nucleotides can potentially form bonds. Single-stranded RNA molecules form helical structures whose bonds satisfy the above base pairing rules and which, in many cases, determine or are even tantamount to their function. Three decades ago Waterman {\it et al.} pioneered the concept of RNA secondary structures \cite{Penner:93c,Waterman:79a,Waterman:78a,Waterman:94a,Waterman:80}, a concept best understood by considering a structure as a diagram, drawing the primary sequence of nucleotides horizontally and ignoring all chemical bonds of its backbone.
Then one draws all bonds, i.e.~nucleotide interactions satisfying the Watson-Crick base pairing rules (and {\bf G}-{\bf U} pairs), as arcs in the upper half-plane, thereby identifying a structure with its set of arcs. In a secondary structure there are no two arcs $(i_1,j_1)$, $(i_2,j_2)$, where $i_1<j_1$ and $i_2<j_2$, with the property $i_1<i_2<j_1<j_2$, and all arcs have length at least $2$. While the concept of secondary structure is of fundamental importance, it is well known that there exist additional types of nucleotide interactions \cite{Science:05a}. These bonds are called pseudoknots \cite{Westhof:92a}; they occur in functional RNA (RNAseP \cite{Loria:96a}) and ribosomal RNA \cite{Konings:95a} and are conserved in the catalytic core of group I introns. In plant viral RNAs pseudoknots mimic tRNA structure, and {\it in vitro} RNA evolution experiments \cite{Tuerk:92} have produced families of RNA structures with pseudoknot motifs when binding HIV-1 reverse transcriptase. Leaving the paradigm of RNA secondary structures, i.e.~studying RNA structures with crossing bonds, poses challenging problems for computational biology. Prediction algorithms for RNA pseudoknot structures are much harder to derive since there exists no {\it a priori} tree structure and the subadditivity of local solutions is not guaranteed. In fact pseudoknot RNA structures are genuinely non-inductive and seem to be best described by the mathematical language of vacillating tableaux \cite{Reidys:07pseu,Chen:07a}. One approach to categorizing RNA pseudoknot structures consists in considering the maximal size of sets of mutually crossing bonds, leading to the notion of $k$-noncrossing RNA structures \cite{Reidys:07pseu}. \begin{figure}[ht] \centerline{% \epsfig{file=f1.eps,width=0.7\textwidth}\hskip15pt } \caption{\small $k$-noncrossing RNA structures.
(a) secondary structure (with isolated labels $3,7,8,10$), (b) non-planar $3$-noncrossing structure } \label{F:4} \end{figure} This concept is very intuitive: a $k$-noncrossing RNA structure has at most $k-1$ mutually crossing arcs and a minimum bond-length of $2$, i.e.~for any $i$, the nucleotides $i$ and $i+1$ cannot form a bond. In this paper we will consider $k$-noncrossing RNA structures with arc-length $\ge 3$. Their analysis is based on the generating function, whose coefficients are alternating sums. This fact makes even the computation of the exponential growth factor a difficult task. To make things worse, in case of arc-length $\ge 3$ there exists no explicit formula for the coefficients, which can only be computed via a recursion. \subsection{Organization and main results} Let $\mathcal{S}_{k,3}(n)$ ($k\ge 3)$ denote the set of $k$-noncrossing RNA structures with arc-length $\ge 3$ and let ${\sf S}_{k,3}(n)= \vert \mathcal{S}_{k,3}(n)\vert $. In Section~\ref{S:pre} we provide the necessary background on $k$-noncrossing RNA structures with arc-length $\ge 3$ and their generating function $\sum_{n\ge 0}{\sf S}_{k,3}(n)z^n$. In Section~\ref{S:exp} we compute the exponential factor of ${\sf S}_{k,3}(n)$. To make it easily accessible to a broad readership we give an elementary proof based on real analysis and transformations of the generating function. Central to our proof is the functional identity of Lemma~\ref{L:func} and its generalized version in Lemma~\ref{L:ana}. In Section~\ref{S:sub} we present the asymptotic analysis of $\mathcal{S}_{3,3}(n)$, using Flajolet {\it et al.}'s singular expansions and transfer theorems \cite{Flajolet:05,Flajolet:99,Flajolet:94,Popken:53,Odlyzko:92}. This analysis is similar to \cite{Reidys:07pseu} but involves solving a quartic instead of a quadratic polynomial in order to localize the singularities.
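The crossing condition can be made concrete with a brute-force count. The sketch below (plain Python; the helper names are our own) enumerates all perfect matchings on $\{1,\dots,2n\}$, discards those containing three mutually crossing arcs, and compares the count with the closed formula $f_3(2n,0)=C_{n+2}C_n-C_{n+1}^2$ recalled in Section~\ref{S:pre}:

```python
from itertools import combinations
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def f3_closed(n):
    # f_3(2n,0) = C_{n+2} C_n - C_{n+1}^2, eq. (E:2-3) with no isolated points
    return catalan(n + 2) * catalan(n) - catalan(n + 1) ** 2

def matchings(points):
    # all perfect matchings of a list of labels
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        partner = rest[i]
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def is_3_noncrossing(arcs):
    # no triple (i1,j1), (i2,j2), (i3,j3) with i1 < i2 < i3 < j1 < j2 < j3
    for (i1, j1), (i2, j2), (i3, j3) in combinations(sorted(arcs), 3):
        if i1 < i2 < i3 < j1 < j2 < j3:
            return False
    return True

brute = [sum(1 for m in matchings(list(range(1, 2 * n + 1)))
             if is_3_noncrossing(m)) for n in range(1, 5)]
closed = [f3_closed(n) for n in range(1, 5)]
```

Both computations produce $1, 3, 14, 84$ for $n=1,\dots,4$; for instance, of the $15$ perfect matchings on six points only the fully crossing one $\{(1,4),(2,5),(3,6)\}$ is excluded.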
The main result of the paper is {\bf Theorem.}$\,$ {\it The number of $3$-noncrossing RNA structures with arc length $\ge 3$ is asymptotically given by \begin{eqnarray} \label{E:konk3} {\sf S}_{3,3}(n) & \sim & \frac{6.11170\cdot 4!}{n(n-1)\dots(n-4)}\, 4.54920^n \ , \end{eqnarray} where ${\sf s}_{3,3}^{}(n)=\frac{6.11170\cdot 4!}{n(n-1)\dots(n-4)}= 146.6807\left[\frac{1}{ n^5}-\frac{35}{4n^6}+\frac{1525}{32 n^7}+{O}(n^{-8})\right]$. } In the table below we display the quality of our approximation by listing the subexponential factors, i.e.~we compare for $k=3$ the quantities ${\sf S}_{k,3}(n)/(4.54920)^n$ obtained from the generating function (Theorem~\ref{T:cool2}), which are the exact values and the asymptotic expressions ${\sf s}_{3,3}^{}(n)$, respectively. \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline \multicolumn{6}{c}{\textbf{The sub exponential factor}}\\ \hline $n$ & ${\sf S}_{3,3}(n)/(4.54920)^n$ & ${\sf s}_{3,3}^{}(n)$ &$n$ & ${\sf S}_{3,3}(n)/(4.54920)^n$ & ${\sf s}_{3,3}^{}(n)$\\ \hline \small 10 & \small $3.016\times 10^{-4}$ & \small$4.851\times 10^{-3}$ & \small 60 & \small $3.457 \times 10^{-7}$ & \small$2.238\times 10^{-7}$\\ \small 20 & \small $2.017 \times 10^{-5}$ & \small$7.884\times 10^{-5}$ & \small 70 & \small $1.476\times 10^{-7}$ & \small$1.010\times 10^{-7}$\\ \small 30 & \small $3.513 \times 10^{-6}$ & \small$8.577\times 10^{-6}$ & \small 80 & \small $3.783\times 10^{-8}$ & \small$5.085\times 10^{-8}$\\ \small 40 & \small $9.646\times 10^{-7}$ & \small$1.858\times 10^{-6}$ & \small 90 & \small $2.154\times 10^{-8}$ & \small$2.781\times 10^{-8}$\\ \small 50 & \small $5.627\times 10^{-7}$ & \small$5.769\times 10^{-7}$ & \small 100 & \small $1.299\times 10^{-8}$ & \small$1.624\times 10^{-8}$\\ \end{tabular} \end{center} \section{$k$-noncrossing RNA structures with arc-length $\ge 3$}\label{S:pre} Suppose we are given the primary RNA sequence $$ {\bf A}{\bf C}{\bf U}{\bf C}{\bf A}{\bf G}{\bf U}{\bf U}{\bf A} {\bf G}{\bf A}{\bf A}{\bf 
U}{\bf A}{\bf G}{\bf C}{\bf C}{\bf G}{\bf G} {\bf U}{\bf C} \ . $$ We then identify an RNA structure with the set of all bonds different from the backbone-bonds of its primary sequence, i.e.~the arcs $(i,i+1)$ for $1\le i\le n-1$. Accordingly an RNA structure is a combinatorial graph over the labels of the nucleotides of the primary sequence. These graphs can be represented in several ways. In Figure~\ref{F:2} we represent a structure with loop-loop interactions in two ways. \begin{figure}[ht]\label{F:2} \centerline{ \epsfig{file=f2.eps,width=0.8\textwidth}\hskip15pt } \caption{\small A $3$-noncrossing RNA structure with arc-length $\ge 3$, as a planar graph (top) and as a diagram (bottom)} \end{figure} In the following we will consider structures as diagram representations of digraphs. A digraph $D_n$ is a pair of sets $V_{D_n},E_{D_n}$, where $V_{D_n}= \{1,\dots,n\}$ and $E_{D_n}\subset \{(i,j)\mid 1\le i< j\le n\}$. $V_{D_n}$ and $E_{D_n}$ are called vertex and arc set, respectively. A $k$-noncrossing digraph is a digraph in which all vertices have degree $\le 1$ and which does not contain a $k$-set of arcs that are mutually intersecting, or equivalently \begin{eqnarray} \ \not\exists\, (i_{r_1},j_{r_1}),(i_{r_2},j_{r_2}),\dots,(i_{r_k},j_{r_k});\quad & & i_{r_1}<i_{r_2}<\dots<i_{r_k}<j_{r_1}<j_{r_2}<\dots<j_{r_k} \ . \end{eqnarray} We will represent digraphs as diagrams (Figure~\ref{F:2}) by representing the vertices as integers on a line and connecting any two adjacent vertices by an arc in the upper half-plane. The direction of the arcs is implicit in the linear ordering of the vertices and accordingly omitted. \begin{definition}\label{D:rna} A $k$-noncrossing RNA structure with arc-length $\ge 3$ is a digraph in which all vertices have degree $\le 1$, having at most a $(k-1)$-set of mutually intersecting arcs and no arcs of length $\le 2$, i.e.~no arcs of the form $(i,i+1)$ or $(i,i+2)$, respectively.
Let ${\sf S}_{k,3}(n)$ and ${\sf S}_{k,3}(n,\ell)$ be the numbers of $k$-noncrossing RNA structures with arc-length $\ge 3$ and those with $\ell$ isolated vertices, respectively. \end{definition} Let $f_{k}(n,\ell)$ denote the number of $k$-noncrossing digraphs with $\ell$ isolated points. We have shown in \cite{Reidys:07pseu} that \begin{align}\label{E:ww0} f_{k}(n,\ell)& ={n \choose \ell} f_{k}(n-\ell,0) \\ \label{E:ww1} \det[I_{i-j}(2x)-I_{i+j}(2x)]|_{i,j=1}^{k-1} &= \sum_{n\ge 1} f_{k}(n,0)\,\frac{x^{n}}{n!} \\ \label{E:ww2} e^{x}\det[I_{i-j}(2x)-I_{i+j}(2x)]|_{i,j=1}^{k-1} &=(\sum_{\ell \ge 0}\frac{x^{\ell}}{\ell!})(\sum_{n \ge 1}f_{k}(n,0)\frac{x^{n}}{n!})=\sum_{n\ge 1} \left\{\sum_{\ell=0}^nf_{k}(n,\ell)\right\}\,\frac{x^{n}}{n!} \ , \end{align} where $I_{r}(2x)=\sum_{j \ge 0}\frac{x^{2j+r}}{{j!(r+j)!}}$ is the hyperbolic Bessel function of the first kind of order $r$. In particular we have for $k=3$ \begin{equation}\label{E:2-3} f_{3}(n,\ell)= {n \choose \ell}\left[C_{\frac{n-\ell}{2}+2}C_{\frac{n-\ell}{2}}- C_{\frac{n-\ell}{2}+1}^{2}\right] \ , \end{equation} where $C_m=\frac{1}{m+1}\binom{2m}{m}$ is the $m$th Catalan number. The derivation of the generating function of $k$-noncrossing RNA structures, given in Theorem~\ref{T:cool2} below, uses advanced methods and novel constructions of enumerative combinatorics due to Chen~{\it et al.} \cite{Chen:07a,Gessel:92a} and Stanley's mapping between matchings and oscillating tableaux, i.e.~families of Young diagrams in which any two consecutive shapes differ by exactly one square. The enumeration is obtained using the reflection principle due to Gessel and Zeilberger \cite{Gessel:92a} and Lindstr\"om \cite{Lindstroem:73a} in accord with an inclusion-exclusion argument in order to eliminate the arcs of length $\le 2$ \cite{Reidys:07pseu}. \begin{theorem}\label{T:cool2} Let $k\in\mathbb{N}$, $k>2$.
Then the numbers of $k$-noncrossing RNA structures ${\sf S}_{k,3}(n,\ell)$ and ${\sf S}_{k,3}(n)$ are given by \begin{eqnarray}\label{E:da2} {\sf S}_{k,3}(n,\ell) & = & \sum_{b\ge 0}(-1)^{b} \lambda(n,b)f_{k}(n-2b,\ell) \\ \label{E:da3} {\sf S}_{k,3}(n) & = & \sum_{b=0}^{\lfloor n/2\rfloor} (-1)^{b} \lambda(n,b) \left\{\sum_{\ell=0}^{n-2b}f_{k}(n-2b,\ell)\right\} \ , \end{eqnarray} where $\lambda(n,b)$ satisfies the recursion \begin{equation}\label{E:hh} \lambda(n,b)=\lambda(n-1,b)+\lambda(n-2,b-1)+ \lambda(n-3,b-1)+\lambda(n-4,b-2) \end{equation} and the initial conditions for eq.~{\rm (\ref{E:hh})} are $\lambda(n,0)=1$ and $\lambda(n,1)=2n-3$. \end{theorem} \section{The exponential factor}\label{S:exp} In this section we obtain the exponential growth factor of the coefficients ${\sf S}_{k,3}(n)$. Let us begin by considering the generating function $\sum_{n\ge 0}{\sf S}_{k,3}(n)x^n$ as a power series over $\mathbb{R}$. Since $\sum_{n\ge 0}{\sf S}_{k,3}(n)x^n$ has monotonically increasing coefficients, $\lim_{n\to\infty}{\sf S}_{k,3}(n)^{\frac{1}{n}}$ exists and determines via Hadamard's formula its radius of convergence. Due to the inclusion-exclusion form of the terms ${\sf S}_{k,3}(n)$, it is, however, not obvious how to compute this radius. Our strategy consists in first showing that ${\sf S}_{k,3}(n)$ is closely related to $f_k(2n,0)$ via a functional relation of generating functions. \begin{lemma}\label{L:laplace} Let $x$ be an indeterminate over $\mathbb{R}$ and $T_{k}(n)$ be the number of $k$-noncrossing partial matchings over $[n]$, i.e. $T_{k}(n)=\sum_{m \le \lfloor \frac{n}{2}\rfloor}\binom{n}{2m}f_k(2m,0)$. Let furthermore $\rho_k$ denote the radius of convergence of $\sum_{n \ge 0}T_{k}(n)\, x^n$. Then we have \begin{equation}\label{E:laplace} \forall \, \vert x\vert <\rho_k;\quad \sum_{n \ge 0}T_{k}(n)\, x^n = \frac{1}{1-x}\, \sum_{n \ge 0}f_k(2n,0)\,\left(\frac{x}{1-x}\right)^{2n} \ .
\end{equation} \end{lemma} \begin{proof} We will relate the ordinary generating functions of $T_{k}(n)$ \begin{equation}\label{E:ll} \sum_{n \ge 0}T_{k}(n)\frac{x^n}{n!}= \sum_{n \ge 0}\sum_{m\le\frac{n}{2}}{n \choose 2m}f_k(2m,0) \frac{x^n}{n!}=e^x\cdot {\sf det}[I_{i-j}(2x)-I_{i+j}(2x)]_{i,j=1}^{k-1}\ \end{equation} (see eq.~(\ref{E:ww2})) and $f_k(2n,0)$ via Laplace transforms as follows: writing $\sum_{n \ge 0} T_{k}(n)x^n = \sum_{n \ge 0}T_{k}(n)\frac{x^n}{n!}\,n!$ and noting that $\sum_{n \ge 0}T_{k}(n)\frac{x^n}{n!}$ is convergent for any $x\in\mathbb{R}$, we derive, using the Laplace transformation $n!=\int_{0}^{\infty}e^{-t}t^n dt$ and interchanging integration and summation, \begin{eqnarray*} \sum_{n \ge 0}T_{k}(n)x^n = \sum_{n \ge 0}T_{k}(n)\frac{x^n}{n!} \int_{0}^{\infty}e^{-t}t^n dt = \int_0^{\infty}\sum_{n \ge 0}T_{k}(n)\frac{(xt)^n}{n!}e^{-t}dt \ . \end{eqnarray*} We next interpret the rhs via eq.~(\ref{E:ll}) and obtain \begin{eqnarray*} \sum_{n \ge 0}T_{k}(n)x^n &=& \int_0^{\infty}e^{xt}{\sf det}[I_{i-j}(2xt)-I_{i+j}(2xt)]_{i,j=1}^{k-1} e^{-t}dt \ . \end{eqnarray*} Interchanging integration and summation yields \begin{eqnarray*} \sum_{n \ge 0}T_{k}(n)x^n &=& \int_0^{\infty}e^{(x-1)t}\sum_{n \ge 0}f_k(2n,0)\frac{(xt)^{2n}}{(2n)!}dt\\ &=& \sum_{n \ge 0}f_k(2n,0)\cdot\frac{1}{(2n)!} \int_0^{\infty}e^{(x-1)t}(xt)^{2n}dt\\ &=& \sum_{n \ge 0}f_k(2n,0)\frac{1}{(2n)!}\int_0^{\infty} e^{-(1-x)t}((1-x)t)^{2n}(\frac{x}{1-x})^{2n}dt\\ &=&\sum_{n \ge 0}f_k(2n,0)\frac{1}{(2n)!}\cdot(2n)!(\frac{x}{1-x})^{2n}\cdot\frac{1}{1-x}\\ &=&\frac{1}{1-x}\sum_{n \ge 0}f_k(2n,0)\,(\frac{x}{1-x})^{2n} \ , \end{eqnarray*} and the proof of the lemma is complete. \end{proof} \begin{lemma}\label{L:func} Let $x$ be an indeterminate over $\mathbb{R}$ and ${\sf S}_{k,3}(n)$ be the number of $k$-noncrossing RNA structures with arc-length $\ge 3$.
Then we have the functional equation \begin{equation}\label{E:laplace} \sum_{n \ge 0}{\sf S}_{k,3}(n)\, x^n = \frac{1}{1-x+x^2+x^3-x^4}\sum_{n \ge 0}f_k(2n,0)\left(\frac{x-x^3}{1-x+x^2+x^3-x^4}\right)^{2n} \end{equation} \end{lemma} \begin{proof} According to Theorem~\ref{T:cool2} we have \begin{eqnarray*} {\sf S}_{k,3}(n) & = & \sum_{b \le \lfloor \frac{n}{2}\rfloor} (-1)^b\lambda(n,b)\sum_{\ell=0}^{n-2b}f_{k}(n-2b,\ell) \\ & = & \sum_{b \le \lfloor \frac{n}{2}\rfloor} (-1)^b\lambda(n,b)\sum_{m=2b}^{n} {n-2b \choose m-2b}f_{k}(m-2b,0) \end{eqnarray*} Our goal is now to relate $\sum_{n \ge 0}{\sf S}_{k,3}(n)x^n$ to the terms $T_k(n)$. For this purpose we derive \begin{align*} \sum_{n \ge 0}{\sf S}_{k,3}(n)x^n&=\sum_{n \ge 0}\sum_{2b\le n} (-1)^b\lambda(n,b)\sum_{m=2b}^{n}{n-2b \choose m-2b}f_k(m-2b,0)\, x^n\\ &=\sum_{b \ge 0}(-1)^b x^{2b}\sum_{n \ge 2b}\lambda(n,b)T_{k}(n-2b)x^{n-2b}\\ &=\sum_{b \ge 0}(-1)^b x^{2b}\sum_{n \ge 0}\lambda(n+2b,b)T_{k}(n)\,x^n \ .\\ \end{align*} Interchanging the summations w.r.t.~$b$ and $n$ we consequently arrive at \begin{equation}\label{E:21} \sum_{n \ge 0}{\sf S}_{k,3}(n)x^n = \sum_{n \ge 0}\left[\sum_{b\ge 0}(-1)^b x^{2b}\lambda(n+2b,b)\right]T_{k}(n)\, x^n \ . \end{equation} We set $\varphi_{n}(x)=\sum_{b \ge 0}\lambda(n+2b,b)x^b$. According to Theorem~\ref{T:cool2} we have the recursion formula $$ \lambda(n+2b,b)=\lambda(n+2b-1,b)+\lambda(n+2b-2,b-1)+ \lambda(n+2b-3,b-1)+\lambda(n+2b-4,b-2) $$ Multiplying with $x^b$ and taking the summation over all $b$ ranging from $0$ to $\lfloor n/2\rfloor$ implies the following functional equation for $\varphi_{n}(x)$, $n=1,2\ldots$ \begin{equation}\label{E:22} \varphi_{n}(x)=\varphi_{n-1}(x)+x\cdot \varphi_{n}(x)+x \cdot \varphi_{n-1}(x)+ x^2\varphi_{n}(x) \ . 
\end{equation} Eq.~(\ref{E:22}) is equivalent to \begin{equation}\label{E:23} \frac{\varphi_{n}(x)}{\varphi_{n-1}(x)}=\frac{1+x}{1-x-x^2} \quad \text{\rm and }\quad \varphi_{0}(x)=\sum_{b \ge 0}\lambda(2b,b)x^b=\frac{1}{1-x-x^2} \end{equation} since $\lambda_{b}=\lambda(2b,b)$ satisfies the recursion formula $\lambda_b=\lambda_{b-1}+\lambda_{b-2}$ and the initial condition $\lambda_{0}=\lambda_{1}=1$. We can conclude from this that $\lambda(2b,b)$ is the $b$-th Fibonacci number. As a result we obtain the formula \begin{equation}\label{E:24} \varphi_{n}(x)=\varphi_{0}(x)\left(\frac{1+x}{1-x-x^2}\right)^n= \frac{1}{1-x-x^2} \left(\frac{1+x}{1-x-x^2}\right)^n \ . \end{equation} Substituting eq.~(\ref{E:24}) into eq.~(\ref{E:21}) we can compute \begin{eqnarray*} \sum_{n \ge 0}{\sf S}_{k,3}(n)x^n &= & \sum_{n \ge 0}\varphi_{n}(-x^2)T_{k}(n)x^n \\ & = &\sum_{n \ge 0}\frac{1}{1+x^2-x^4}\left(\frac{1-x^2}{1+x^2-x^4}\right)^n T_{k}(n)\ x^n\\ & = & \frac{1}{1+x^2-x^4} \sum_{n \ge 0} T_{k}(n)\, \left(\frac{x-x^3}{1+x^2-x^4}\right)^n \ . \end{eqnarray*} Via Lemma~\ref{L:laplace} we have the following interpretation of $\sum_{n \ge 0}T_{k}(n) \, x^n$: $$ \sum_{n \ge 0}T_{k}(n) \, x^n = \frac{1}{1-x}\sum_{n \ge 0} f_k(2n,0)\,(\frac{x}{1-x})^{2n}\ . $$ Therefore, setting $x'=\frac{x-x^3}{1+x^2-x^4}$, whence $\frac{1}{1-x'}= \frac{1+x^2-x^4}{1-x+x^2+x^3-x^4}$ and $\frac{x'}{1-x'}=\frac{x-x^3}{1-x+x^2+x^3-x^4}$, we obtain \begin{eqnarray*} \sum_{n \ge 0}{\sf S}_{k,3}(n)x^n &= & \frac{1}{1-x+x^2+x^3-x^4}\sum_{n \ge 0}f_k(2n,0)\left(\frac{x-x^3}{1-x+x^2+x^3-x^4}\right)^{2n} \ , \end{eqnarray*} whence the lemma.
\end{proof} Using complex analysis we can extend Lemma~\ref{L:func} to arbitrary $z\in \mathbb{C}$ with $\vert z\vert < \rho_k$. \begin{lemma}\label{L:ana} Let $k>2$ be an integer, then we have for arbitrary $z\in\mathbb{C}$ with the property $\vert z\vert <\rho_k$ the equality \begin{equation}\label{E:rr2} \sum_{n \ge 0}{\sf S}_{k,3}(n)\, z^n = \frac{1}{1-z+z^2+z^3-z^4} \sum_{n \ge 0}f_k(2n,0)\left(\frac{z-z^3}{1-z+z^2+z^3-z^4}\right)^{2n} \ . \end{equation} \end{lemma} \begin{proof} The power series $\sum_{n\ge 0} {\sf S}_{k,3}(n) z^{n}$ and $\frac{1}{1-z+z^2+z^3-z^4} \sum_{n\ge 0} f_k(2n,0) \left(\frac{z-z^3}{1-z+z^2+z^3-z^4}\right)^{2n}$ are analytic in a disc of radius $\epsilon$ for any $0<\epsilon<\rho_k$ and according to Lemma~\ref{L:func} coincide on the interval $]-\epsilon,\epsilon [$. In particular both functions agree on the sequence $(\frac{1}{n})_{n\in\mathbb{N}}$, which converges to $0$, and standard results of complex analysis (zeros of nontrivial analytic functions are isolated) imply that eq.~(\ref{E:rr2}) holds for any $z\in\mathbb{C}$ with $\vert z\vert<\rho_k$, whence the lemma. \end{proof} Lemma~\ref{L:ana} is the key to computing the exponential growth rates for any $k>2$. In the proof of the following theorem we invoke Pringsheim's Theorem \cite{Titmarsh:39}, which asserts that a power series $\sum_{n\ge 0}a_nz^n$ with $a_n\ge 0$ has its radius of convergence as a dominant (but not necessarily unique) singularity. In particular there exists a dominant real-valued singularity. \begin{theorem}\label{T:asy1} Let $k\ge 3$ be an integer and $r_k$ be the radius of convergence of the power series $\sum_{n\ge 0}f_k(2n,0)z^{2n}$ and \begin{equation}\label{E:theta} \vartheta\colon [0, \frac{\sqrt{2}}{2}]\longrightarrow [0,\frac{5-\sqrt{2}}{4}], \quad z\mapsto \frac{z(1-z)(1+z)}{-(z^2-\frac{1}{2})^2+z(z^2-\frac{1}{2})- \frac{z}{2}+\frac{5}{4}} \ .
\end{equation} Then the power series $\sum_{n\ge 0}{\sf S}_{k,3}(n)z^n$ has the real-valued dominant singularity $\rho_k$, which is the unique real solution of $\vartheta(x)=r_k$, and for the number of $k$-noncrossing RNA structures with arc-length $\ge 3$ we have \begin{equation}\label{E:rel} {\sf S}_{k,3}(n)\sim \left(\frac{1}{\rho_k}\right)^n \ . \end{equation} \end{theorem} In Section~\ref{S:sub} we will in particular prove that $\rho_3\approx 0.21982$. \begin{proof} Suppose we are given $r_k$; then $r_k\le \frac{1}{2}$ (this follows immediately from $C_n\sim 2^{2n}$ via Stirling's formula). The functional identity of Lemma~\ref{L:func} allows us to derive the radius of convergence of $\sum_{n\ge 0}{\sf S}_{k,3}(n)z^n$. According to Lemma~\ref{L:ana} we have \begin{equation}\label{E:25} \sum_{n \ge 0}{\sf S}_{k,3}(n)\, z^n = \frac{1}{1-z+z^2+z^3-z^4} \sum_{n \ge 0}f_k(2n,0)\left(\frac{z-z^3}{1-z+z^2+z^3-z^4}\right)^{2n} \ . \end{equation} $f_k(2n,0)$ is monotone, whence the limit $\lim_{n\to \infty}f_k(2n,0)^{ \frac{1}{2n}}$ exists, and Hadamard's formula yields $\lim_{n\to \infty}f_k(2n,0)^{\frac{1}{2n}}=\frac{1}{{r_{k}}}$. For $z\in \mathbb{R}$, we proceed by computing the roots of $$ \left|\frac{z-z^3}{1-z+z^2+z^3-z^4}\right|={r_{k}} \ , $$ which for $r_k\le \frac{1}{2}$ has the minimal root $\rho_k$. We next show that $\rho_k$ is indeed the radius of convergence of $\sum_{n\ge 0} {\sf S}_{k,3}(n) z^n$. For this purpose we observe that the map \begin{equation}\label{E:w=1} \vartheta\colon [0, \frac{\sqrt{2}}{2}]\longrightarrow [0,\frac{5-\sqrt{2}}{4}], \quad z\mapsto \frac{z(1-z)(1+z)}{-(z^2-\frac{1}{2})^2+z(z^2-\frac{1}{2})- \frac{z}{2}+\frac{5}{4}} , \qquad \text{\rm where} \quad\vartheta(\rho_k)={r_k} \end{equation} \begin{figure}[ht] \centerline{ \epsfig{file=f7.eps,width=0.6\textwidth}\hskip15pt } \caption{\small We display the 4 poles (the corresponding 4 peaks) of $\frac{z-z^3}{1-z+z^2+z^3-z^4}$ over $\mathbb{C}$.
The picture illustrates that $\vartheta$ is a bijection on the interval $[0, \frac{\sqrt{2}}{2}]$, which allows us to obtain the dominant singularity $\rho_k$.} \label{F:6} \end{figure} is continuous and strictly increasing on the relevant subinterval $[0,\rho_k]$. Continuity and strict monotonicity of $\vartheta$ guarantee in view of eq.~(\ref{E:w=1}) that $\rho_k$ is indeed the radius of convergence of the power series $\sum_{n\ge 0} {\sf S}_{k,3}(n) z^n$. In order to show that $\rho_k$ is a dominant singularity we consider $\sum_{n\ge 0}{\sf S}_{k,3}(n)z^n$ as a power series over $\mathbb{C}$. Since ${\sf S}_{k,3}(n)\ge 0$, the theorem of Pringsheim~\cite{Titmarsh:39} guarantees that $\rho_k$ itself is a singularity. By construction $\rho_k$ has minimal absolute value and is accordingly dominant. Since ${\sf S}_{k,3}(n)$ is monotone, $\lim_{n\to\infty}{\sf S}_{k,3}(n)^{\frac{1}{n}}$ exists and we obtain using Hadamard's formula \begin{equation} \lim_{n\to\infty}{\sf S}_{k,3}(n)^{\frac{1}{n}}=\frac{1}{\rho_k},\quad \text{\rm or equivalently}\quad {\sf S}_{k,3}(n)\sim \left(\frac{1}{\rho_k} \right)^n \, , \end{equation} from which eq.~(\ref{E:rel}) follows and the proof of the theorem is complete. \end{proof} \section{Asymptotic Analysis}\label{S:sub} In this section we provide the asymptotic number of $3$-noncrossing RNA structures with arc-length $\ge 3$. In the course of our analysis we derive the analytic continuation of the power series $\sum_{n\ge 0}{\sf S}_{3,3}(n)z^n$. The analysis will in particular provide an independent proof of the exponential factor computed in Theorem~\ref{T:asy1}. The derivation of the subexponential factors is based on singular expansions \cite{Flajolet:05} in combination with transfer theorems. The key ingredient for the coefficient extraction is the Hankel contour, see Figure~\ref{F:7}. Let us begin by specifying a suitable domain for our Hankel contours tailored for Theorem~\ref{T:transfer1}.
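The dominant singularity $\rho_3$ of Theorem~\ref{T:asy1} can also be approximated numerically. The sketch below (plain Python) assumes $r_3=\frac{1}{4}$, i.e.~that the radius of convergence of $\sum_{n\ge 0}f_3(2n,0)z^{2n}$ comes from a singularity of its generating function at $\frac{1}{16}$, and uses that $\vartheta$ increases through $r_3$ near the root; the bracketing interval $[0,0.4]$ is an ad-hoc choice:

```python
def theta(z):
    # theta(z) = z(1-z)(1+z) / (1 - z + z^2 + z^3 - z^4), cf. eq. (E:theta);
    # the denominator equals -(z^2 - 1/2)^2 + z(z^2 - 1/2) - z/2 + 5/4
    return z * (1.0 - z) * (1.0 + z) / (1.0 - z + z**2 + z**3 - z**4)

def rho_3(r3=0.25, tol=1e-12):
    # bisection for theta(x) = r3 on [0, 0.4], where theta(0) = 0 < r3
    # and theta(0.4) > r3, with theta increasing in between
    lo, hi = 0.0, 0.4
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theta(mid) < r3:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = rho_3()        # ~ 0.21982
growth = 1.0 / rho   # ~ 4.54920, the exponential growth rate
```

The computed values match $\rho_3\approx 0.21982$ and the growth rate $4.54920$ quoted in the main theorem.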
\begin{definition}\label{D:delta} Given two numbers $\phi,R$, where $R>1$ and $0<\phi<\frac{\pi}{2}$, and $\rho\in\mathbb{R}$, the open domain $\Delta_\rho(\phi,R)$ is defined as \begin{equation} \Delta_\rho(\phi,R)=\{ z\mid \vert z\vert < R, z\neq \rho,\, \vert {\rm Arg}(z-\rho)\vert >\phi\} \ . \end{equation} A domain is a $\Delta_\rho$-domain if it is of the form $\Delta_\rho(\phi,R)$ for some $R$ and $\phi$. A function is $\Delta_\rho$-analytic if it is analytic in some $\Delta_\rho$-domain. \end{definition} \begin{figure}[ht] \centerline{% \epsfig{file=f8.eps,width=0.5\textwidth}\hskip15pt } \caption{\small $\Delta_1$-domain enclosing a Hankel contour. We assume $z=1$ to be the unique dominant singularity. The coefficients are obtained via Cauchy's integral formula and the integral path is decomposed into $4$ segments. Segment $1$ becomes asymptotically irrelevant since by construction the function involved is bounded on this segment. Relevant are the rectilinear segments $2$ and $4$ and the inner circle $3$. The only contributions to the contour integral are made here, which shows why the singular expansion approximates the coefficients so well.} \label{F:7} \end{figure} Since the Taylor coefficients have the property \begin{equation}\label{E:scaling} \forall \,\gamma\in\mathbb{C}\setminus 0;\quad [z^n]f(z)=\gamma^n [z^n]f(\frac{z}{\gamma}) \ , \end{equation} we can, w.l.o.g., reduce our analysis to the case where $1$ is the dominant singularity. We use $U(a,r)=\{z\in \mathbb{C}|\vert z-a\vert<r\}$ to denote the open neighborhood of $a$ in $\mathbb{C}$. We use the notation \begin{equation}\label{E:genau} \left(f(z)=O\left(g(z)\right) \ \text{\rm as $z\rightarrow \rho$}\right)\quad \Longleftrightarrow \quad \left(f(z)/g(z) \ \text{\rm is bounded as $z\rightarrow \rho$}\right) \end{equation} and if we write $f(z)=O(g(z))$ it is implicitly assumed that $z$ tends to a (unique) singularity.
$[z^n]\,f(z)$ denotes the coefficient of $z^n$ in the power series expansion of $f(z)$ around $0$. \begin{theorem}\label{T:transfer1}\cite{Flajolet:05} Let $f(z)$ be a $\Delta_1$-analytic function. Suppose $r\in\mathbb{Z}_{\ge 0}$, and $f(z)=O((1-z)^{r}\ln(\frac{1}{1-z}))$ in the intersection of a neighborhood of $1$ and the $\Delta_1$-domain, then we have \begin{equation} [z^n]f(z)\sim K\, (-1)^r\frac{r!}{n(n-1)\dots(n-r)} \quad \text{\it for some $K>0$}\ . \end{equation} \end{theorem} We are now prepared to compute an explicit formula for the numbers of $3$-noncrossing RNA structures with arc-length $\ge 3$. \begin{theorem}\label{T:asy3} The number of $3$-noncrossing RNA structures with arc-length $\ge 3$ is asymptotically given by \begin{eqnarray*} \label{E:konk3} {\sf S}_{3,3}(n) & \sim & \frac{6.11170\cdot 4!}{n(n-1)\dots(n-4)}\, 4.54920^n \ .\\ \end{eqnarray*} \end{theorem} \begin{proof} {\it Claim $1$.} The dominant singularity $\rho_3$ of the power series $\sum_{n\ge 0} {\sf S}_{3,3}(n) z^n$ is unique.\\ In order to prove Claim $1$ we use Lemma~\ref{L:ana}, according to which the analytic function $\Xi_3(z)$, given below, is the analytic continuation of the power series $\sum_{n\ge 0} {\sf S}_{3,3}(n) z^n$. We proceed by showing that $\Xi_3(z)$ has exactly $12$ singularities in $\mathbb{C}$ and the dominant singularity is unique. The first four singularities are the roots of the quartic polynomial $P(z)=1-z+z^2+z^3-z^4$. Next we observe in analogy to our proof in \cite{Reidys:07asy} that the power series $\sum_{n\ge 0} f_3(2n,0) y^{n}$ has the analytic continuation $\Psi(y)$ (obtained by MAPLE sumtools) given by \begin{equation}\label{E:psi} \Psi(y)= \frac{-(1-16y)^{\frac{3}{2}} P_{3/2}^{-1}(-\frac{16y+1}{16y-1})} {16\, {y}^{\frac{5}{2}}} \ , \end{equation} where $P_{\nu}^{m}(x)$ denotes the Legendre function of the first kind with the parameters $\nu=\frac{3}{2}$ and $m=-1$.
$\Psi(y)$ has one dominant singularity at $y=\frac{1}{16}$, which in view of $\vartheta(z)=(\frac{z-z^3}{1+z^2-z^4-z+z^3})^2$ induces exactly $8$ singularities of $\Xi_3(z)=\frac{1}{1+z^2-z^4-z+z^3}\, \Psi\left(\left(\frac{z-z^3}{1+z^2-z^4-z+z^3}\right)^2\right) $. Indeed, $\Psi(y^2)$ has the two singularities in $\mathbb{C}$, $\beta_1=\frac{1}{4}$ and $\beta_2=-\frac{1}{4}$, which produce for $\Xi_3(z)$ (solving the quartic equation) the $8$ singularities $\rho_3\approx 0.21982$, $\zeta_2\approx 5.00829$, $\zeta_3\approx -1.07392$, $\zeta_4\approx 0.84581$, $\zeta_5\approx -0.53243+0.11951i$, $\zeta_6 \approx -0.53243-0.11951i$, $\zeta_7 \approx 1.10477$ and $\zeta_8 \approx -3.03992$. The above values have error terms (for details on solving the general quartic equation see Section~\ref{S:app}) of the order $10^{-5}$, which allows us to conclude that the dominant singularity $\rho_3$ is unique, and Claim $1$ follows. \\ {\it Claim $2$.} \cite{Reidys:07asy} $\Psi(z)$ is $\Delta_{\frac{1}{16}}(\phi,R)$-analytic and has the singular expansion $(1-16z)^4\ln\left(\frac{1}{1-16z}\right)$: \begin{equation} \forall\, z\in\Delta_{\frac{1}{16}}(\phi,R)\cap U(\frac{1}{16},\epsilon);\quad \Psi(z)={O}\left((1-16z)^4\ln\left(\frac{1}{1-16z}\right)\right) \ . \end{equation} First, $\Delta_{\frac{1}{16}}(\phi,R)$-analyticity of the function $\Psi(z)$ is obvious. We proceed by proving that $(1-16z)^4\ln\left(\frac{1}{1-16z}\right)$ is its singular expansion in the intersection of a neighborhood of $\frac{1}{16}$ and the $\Delta$-domain $\Delta_{\frac{1}{16}}(\phi,R)$. Using the notation of falling factorials $(n-1)_4=(n-1)(n-2)(n-3)(n-4)$ we observe $$ f_3(2n,0)=C_{n+2}C_{n}-C_{n+1}^2= \frac{1}{(n-1)_4} \frac{12(n-1)_4(2n+1)}{(n+3)(n+1)^2(n+2)^2}\, \binom{2n}{n}^2 \ .
$$ With this expression for $f_3(2n,0)$ we arrive at the formal identity \begin{eqnarray*} \sum_{n\ge 5}16^{-n}f_3(2n,0)z^n & = & O(\sum_{n\ge 5} \left[16^{-n}\,\frac{1}{(n-1)_4} \frac{12(n-1)_4(2n+1)}{(n+3)(n+1)^2(n+2)^2}\, \binom{2n}{n}^2-\frac{4!}{(n-1)_4}\frac{1}{\pi}\frac{1}{n}\right]z^n \\ & & + \sum_{n\ge 5}\frac{4!}{(n-1)_4}\frac{1}{\pi}\frac{1}{n}z^n) \ , \end{eqnarray*} where $f(z)=O(g(z))$ denotes that $f(z)/g(z)$ is bounded as $z\rightarrow 1$, cf.~eq.~(\ref{E:genau}). It is clear that \begin{eqnarray*} & & \lim_{z\to 1}(\sum_{n\ge 5}\left[16^{-n}\,\frac{1}{(n-1)_4} \frac{12(n-1)_4(2n+1)}{(n+3)(n+1)^2(n+2)^2}\, \binom{2n}{n}^2-\frac{4!}{(n-1)_4}\frac{1}{\pi}\frac{1}{n}\right]z^n) \\ &= & \sum_{n\ge 5} \left[16^{-n}\,\frac{1}{(n-1)_4} \frac{12(n-1)_4(2n+1)}{(n+3)(n+1)^2(n+2)^2}\, \binom{2n}{n}^2-\frac{4!}{(n-1)_4}\frac{1}{\pi}\frac{1}{n}\right] <\kappa \end{eqnarray*} for some $\kappa< 0.0784$. Therefore we can conclude \begin{equation} \sum_{n\ge 5}16^{-n}f_3(2n,0)z^n= O(\sum_{n\ge 5}\frac{4!}{(n-1)_4}\frac{1}{\pi}\frac{1}{n}z^n) \ . \end{equation} We proceed by interpreting the power series on the rhs, observing \begin{equation} \forall\, n\ge 5\, ; \qquad [z^n]\left((1-z)^4\,\ln\frac{1}{1-z}\right)= \frac{4!}{(n-1)\dots (n-4)}\frac{1}{n} \, , \end{equation} whence $\frac{1}{\pi}\left((1-z)^4\,\ln\frac{1}{1-z}\right)$ is, up to a polynomial, the analytic continuation of $\sum_{n\ge 5}\frac{4!}{(n-1)_4} \frac{1}{\pi}\frac{1}{n}z^n$. Using the scaling property of Taylor coefficients $[z^n]f(z)=\gamma^n [z^n]f(\frac{z}{\gamma})$ we obtain \begin{equation}\label{E:isses} \forall\, z\in\Delta_{\frac{1}{16}}(\phi,R)\cap U(\frac{1}{16},\epsilon);\quad \Psi(z) =O\left((1-16z)^4\ln\left(\frac{1}{1-16z}\right)\right) \ . \end{equation} Therefore we have proved that $(1-16z)^{4}\ln(\frac{1}{1-16z})$ is the singular expansion of $\Psi(z)$ at $z=\frac{1}{16}$, whence Claim $2$.
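The coefficient identity used above is a special case ($r=4$) of the exact evaluation $[z^n](1-z)^r\ln\frac{1}{1-z}=(-1)^r\,r!/(n(n-1)\cdots(n-r))$ for $n>r$, which also underlies Theorem~\ref{T:transfer1}. A short exact-arithmetic check (a sketch with our own function names, not code from the paper):

```python
from fractions import Fraction
from math import comb, factorial, prod

def coeff(n, r):
    # [z^n] (1-z)^r * ln(1/(1-z)) via the Cauchy product of
    # (1-z)^r = sum_j C(r,j)(-1)^j z^j and ln(1/(1-z)) = sum_{k>=1} z^k/k.
    return sum(Fraction((-1) ** j * comb(r, j), n - j) for j in range(r + 1))

def closed_form(n, r):
    # (-1)^r * r! / (n (n-1) ... (n-r)), valid for n > r.
    return Fraction((-1) ** r * factorial(r), prod(range(n - r, n + 1)))

for r in range(6):
    for n in range(r + 1, 30):
        assert coeff(n, r) == closed_form(n, r)
```

For $r=4$ and $n\ge 5$ this reproduces the displayed value $\frac{4!}{(n-1)\cdots(n-4)}\frac{1}{n}$.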
Our last step consists in verifying that the type of the singularity does not change when passing from $\Psi(z)$ to $\Xi_3(z)= \frac{1}{1-z+z^2+z^3-z^4} \Psi((\frac{z-z^3}{1-z+z^2+z^3-z^4})^2)$. \\ {\it Claim $3$.} For $z\in \Delta_{\rho_3}(\phi,R)\cap U(\rho_3,\epsilon)$ we have $\Xi_3(z) ={O}\left((1-\frac{z}{\rho_3})^4\ln(\frac{1}{1-\frac{z}{\rho_3}}) \right)$.\\ To prove the claim set $u(z)=1-z+z^2+z^3-z^4$. We first observe that Claim $2$ and Lemma~\ref{L:ana} imply \begin{align*} \Xi_3(z) &=O\left( \frac{1}{u(z)}\, \left[\left(1-16(\frac{z-z^3}{u(z)})^2\right)^4 \ln\frac{1}{\left(1-16(\frac{z-z^3}{u(z)})^2\right)}\right]\right) \ . \end{align*} The Taylor expansion of $q(z)=1-16(\frac{z-z^3}{u(z)})^2$ at $\rho_3$ is given by $q(z)=\alpha(\rho_3-z)+{O}((z-\rho_3)^2)$, where $\alpha\approx -1.15861$, and we compute \begin{align*} \frac{1}{u(z)}\, \left[q(z)^4\ln\frac{1} {q(z)}\right] &= \frac{(\alpha(\rho_3-z)+{O}((z-\rho_3)^2))^4\ln\frac{1}{\alpha(\rho_3-z)+{O} ((z-\rho_3)^2)}}{0.83679-0.45789(z-\rho_3)+O((z-\rho_3)^2)}\\ &= \frac{\left([\alpha+O(z-\rho_3)](\rho_3-z)^4 \ln\frac{1}{[\alpha+O(z-\rho_3)](\rho_3-z)} \right)}{0.83679+O(z-\rho_3)} \\ &={O}((\rho_3-z)^4\ln\frac{1}{\rho_3-z}) \ , \end{align*} whence Claim $3$. Now we are in a position to employ Theorem~\ref{T:transfer1}, and obtain for ${\sf S}_{3,3}(n)$ \begin{align*} {\sf S}_{3,3}(n)&\sim K'\, [z^n]\left((\rho_3-z)^4\ln\frac{1}{\rho_3-z} \right) \sim K'\, \frac{4!} {n(n-1)\dots(n-4)}\left(\frac{1}{\rho_3}\right)^n \ . \end{align*} Theorem~\ref{T:cool2} allows us to compute $K'=6.11170$, and the proof of the Theorem is complete. \end{proof} \section{Appendix}\label{S:app} Let us first introduce some basic definitions for quartic equations used in the following. The equation $Ax^4+Bx^3+Cx^2+Dx+E=0$ is a quartic if $A \ne 0$; a depressed quartic is a quartic with $B=0$. Similarly, $Ax^3+Bx^2+Cx+D=0$ is a cubic equation if $A \ne 0$, and a cubic without $x^2$ term is called a depressed cubic.
\\ The first step in solving the quartic consists in transforming it into a depressed quartic, i.e.~in eliminating the $x^3$ term. Dividing the equation by $A$ gives $x^4+\frac{B}{A}x^3+\frac{C}{A}x^2+\frac{D}{A}x+\frac{E}{A}=0$, and substituting $x=u-\frac{B}{4A}$ and simplifying yields \begin{equation}\label{A:depqua} u^4+\alpha u^2+\beta u +\gamma=0 \end{equation} where $\alpha=\frac{-3B^2}{8A^2}+\frac{C}{A}$, $\beta=\frac{B^3}{8A^3}-\frac{BC}{2A^2}+\frac{D}{A}$ and $\gamma=\frac{-3B^4}{256A^4}+\frac{CB^2}{16A^3}-\frac{BD}{4A^2} +\frac{E}{A}$; eq.~(\ref{A:depqua}) is a depressed quartic, which is tantamount to \begin{equation}\label{A:preu} (u^2+\alpha)^2+\beta u+\gamma=\alpha u^2+\alpha^2 \ . \end{equation} The next step is to insert a variable $y$ into the perfect square on the left side of eq.~(\ref{A:preu}). Adding both \begin{align*} (u^2+\alpha+y)^2-(u^2+\alpha)^2=2yu^2+2y\alpha+y^2\\ 0=(\alpha+2y)u^2-2yu^2-\alpha u^2 \end{align*} to eq.~(\ref{A:preu}) yields \begin{equation}\label{A:insert-y} (u^2+\alpha+y)^2=(\alpha+2y)u^2-\beta u+(y^2+2y\alpha+\alpha^2-\gamma) \ . \end{equation} The next step consists in choosing $y$ such that the right side of eq.~(\ref{A:insert-y}) becomes a square. Observe that $(su+t)^2=(s^2)u^2+(2st)u+(t^2)$ holds for any $s$ and $t$, and the relation between the coefficients of the rhs is $(2st)^2=4(s^2)(t^2)$. Therefore, to make the rhs of eq.~(\ref{A:insert-y}) a perfect square, the following equation must hold: \begin{equation}{\label{A:solve-y}} 2y^3+5 \alpha y^2+(4 \alpha^2-2 \gamma)y+(\alpha^3-\alpha \gamma-\frac{\beta^2}{4})=0 \ . \end{equation} Similarly, we transform eq.~(\ref{A:solve-y}) into a depressed cubic equation by substituting $y=v-\frac{5}{6}\alpha$: \begin{equation}\label{A:depcub} v^3+(-\frac{\alpha^2}{12}-\gamma)v+(-\frac{\alpha^3}{108}+\frac{\alpha \gamma}{3}-\frac{\beta^2}{8})=0 \ .
\end{equation} Set $P=-\frac{\alpha^2}{12}-\gamma$ and $Q=-\frac{\alpha^3}{108}+\frac{\alpha \gamma}{3}-\frac{\beta^2}{8}$. Select any solution of eq.~(\ref{A:depcub}): it is of the form $v=\frac{P}{3U}-U$, where $U=\sqrt[3]{\frac{Q}{2}\pm\sqrt{\frac{Q^2}{4}+\frac{P^3}{27}}}$, provided $U \ne 0$; otherwise $v=0$. In view of $y=v-\frac{5}{6}\alpha$ this solution yields for eq.~(\ref{A:solve-y}) $y=-\frac{5}{6}\alpha+\frac{P}{3U}-U$ for $U \ne 0$ and $y=-\frac{5}{6}\alpha$ for $U=0$. Now the rhs of eq.~(\ref{A:insert-y}) becomes \begin{align*} (\alpha+2y)u^2+(-\beta)u+(y^2+2y\alpha+\alpha^2-\gamma)=\left( \left(\sqrt{\alpha+2y}\right)u+\frac{-\beta}{2\sqrt{\alpha+2y}}\right)^2 \ . \end{align*} Combined with eq.~(\ref{A:insert-y}) this yields the four solutions for $u$: $$ u=\frac{\pm\sqrt{\alpha+2y}\pm\sqrt{-\left(3\alpha+2y\pm\frac{2\beta} {\sqrt{\alpha+2y}}\right)}}{2} $$ where the first and the third $\pm$ must have the same sign. This allows us to obtain the solutions for $x$: $$ x=-\frac{B}{4A}+\frac{\pm\sqrt{\alpha+2y}\pm\sqrt{-\left(3\alpha+ 2y\pm\frac{2\beta}{\sqrt{\alpha+2y}}\right)}}{2} \ . $$ In particular, for $x^4-5x^3-x^2+5x-1=0$ we have $A=1$, $B=-5$, $C=-1$, $D=5$, $E=-1$, and hence $\alpha=-\frac{83}{8}$, $\beta=-\frac{105}{8}$, $\gamma=-\frac{931}{256}$, $P=-\frac{16}{3}$ and $Q=\frac{299}{216}$, as well as $U \approx 1.21481-0.54955i \ne 0$ and therefore $y \approx 6.21621$; the solutions are $\rho_3\approx 0.21982$, $\zeta_2\approx 5.00829$, $\zeta_3\approx -1.07392$ and $\zeta_4\approx 0.84581$. As for the equation $x^4+3x^3-x^2-3x-1=0$, the corresponding solutions are $\zeta_5\approx -0.53243+0.11951i$, $\zeta_6 \approx -0.53243-0.11951i$, $\zeta_7 \approx 1.10477$ and $\zeta_8 \approx -3.03992$. {\bf Acknowledgments.} This work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China.
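As a numerical sanity check, the procedure of this appendix (depress the quartic, solve the resolvent cubic for $y$, recover the four roots) can be coded directly. This is a sketch restricted to the generic case $U \ne 0$ and $\alpha+2y \ne 0$, using the principal complex cube root; the function name is ours:

```python
import cmath

def quartic_roots(A, B, C, D, E):
    """Roots of A x^4 + B x^3 + C x^2 + D x + E = 0 via the
    depressed-quartic / resolvent-cubic method (generic case only)."""
    a = -3*B**2 / (8*A**2) + C/A
    b = B**3 / (8*A**3) - B*C / (2*A**2) + D/A
    g = -3*B**4 / (256*A**4) + C*B**2 / (16*A**3) - B*D / (4*A**2) + E/A
    # Depressed resolvent cubic v^3 + P v + Q = 0.
    P = -a**2/12 - g
    Q = -a**3/108 + a*g/3 - b**2/8
    U = (Q/2 + cmath.sqrt(Q**2/4 + P**3/27)) ** (1/3)   # assumes U != 0
    y = -5*a/6 + P/(3*U) - U
    w = cmath.sqrt(a + 2*y)                              # assumes a + 2y != 0
    roots = []
    for s1 in (1, -1):            # first and third +- share this sign
        inner = cmath.sqrt(-(3*a + 2*y + s1 * 2*b / w))
        for s2 in (1, -1):        # the middle +-
            roots.append(-B/(4*A) + (s1*w + s2*inner) / 2)
    return roots

# The worked example: roots should be near 0.21982, 5.00829, -1.07392, 0.84581.
for x in quartic_roots(1, -5, -1, 5, -1):
    assert abs(x**4 - 5*x**3 - x**2 + 5*x - 1) < 1e-6
```

Any cube-root branch for $U$ yields a valid resolvent root; the residual test makes the check branch-independent.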
\section{Introduction} \quad Consider the following two sequences of random subsets of the set $[n]:= \{1, \ldots, n \}$, generated by listing the cycles of a uniform random permutation $\pi_n$ of $[n]$ in two different orders: for a permutation $\pi_n$ with $K_n$ cycles, \begin{itemize} \item let $C_{1:n}, C_{2:n}, \ldots$ be the cycles of $\pi_n$ in {\em order of least elements}, so $C_{1:n}$ is the cycle of $\pi_n$ containing $1$, if $C_{1:n} \ne [n]$ then $C_{2:n}$ is the cycle of $\pi_n$ containing the least $j \in [n]$ with $j \notin C_{1:n}$, and so on, with $C_{k:n} = \emptyset$ if $k > K_n$; \item let $R_{1:n}, R_{2:n}, \ldots, R_{K_n:n}$ be the same cycles in {\em order of greatest elements}, so $R_{1:n}$ is the cycle of $\pi_n$ containing $n$, if $R_{1:n} \ne [n]$ then $R_{2:n}$ is the cycle of $\pi_n$ containing the greatest $j \in [n]$ with $j \notin R_{1:n}$, and so on, with $R_{k:n} = \emptyset$ if $k > K_n$. \end{itemize} For $1 \le k \le n$ define the probability \begin{equation} \label{eq:ukn} u_{k:n} := \P \left( \cup_{i=1}^k C_{i:n} = \cup _{i = 1}^k R_{i:n} \right), \end{equation} the probability that the same collection of $k$ cycles appears as the first $k$ cycles in both orders. It is elementary that the number of elements of $C_{1:n}$ has a discrete uniform distribution on $[n]$, and the same is true for $R_{1:n}$. Since $u_{1:n}$ is the probability that both $1$ and $n$ fall in the same cycle of $\pi_n$, it follows easily that $u_{1:n} = 1/2$ for every $n \ge 2$. For $k \ge 2$ it is easy to give multiple summation formulas for $u_{k:n}$. Such formulas show that $u_{k:n}$ has some dependence on $n$ for $k \ge 2$, with limits \begin{equation} \label{uklim} \lim_{n\rightarrow \infty } u_{k:n} = u_k \mbox{ for each } k = 1,2, \ldots, \end{equation} which may be described as follows. It is known \cite[p. 
25]{abtlogcs} that the asymptotic structure of sizes of cycles of $\pi_n$, when listed in either order, and normalized by $n$, is that of the sequence of lengths of subintervals of $[0,1]$ defined by the {\em uniform stick-breaking scheme}: \begin{equation} \label{wk:intro} P_1:= W_1, \qquad P_2:= (1-W_1) W_2, \qquad P_3:= (1-W_1) ( 1 - W_2) W_3 , \cdots \end{equation} where the $W_i$ are i.i.d. uniform $[0,1]$ variables. In more detail, \begin{equation} P_k:= |I_k| \mbox{ where } I_k := [R_{k-1}, R_k) \mbox{ with } R_k = \sum_{i=1}^k P_i = 1 - \prod_{i=1}^k ( 1 - W_i ), \end{equation} with the convention $R_0=0$. The distribution of lengths $(P_1,P_2, \ldots)$ so obtained, with $\sum_{i} P_i = 1$ almost surely, is known as the GEM$(1)$ model, after Griffiths, Engen and McCloskey. The limit probabilities $u_k$ are easily evaluated directly in terms of the limit model, as follows. Let $(U_1, U_2, \ldots)$ be an i.i.d.\ uniform $[0,1]$ sequence of {\em sample points} independent of $(P_1, P_2, \ldots)$. Say that an interval $I_k$ has been discovered by time $i$ if it contains at least one of the sample points $U_1, U_2, \ldots, U_i$. The sampling process imposes a new order of discovery on the intervals $I_k$, which describes the large $n$ limit structure of the random reordering of the cycles of $\pi_n$ described above. The limit $u_k$ in \eqref{uklim} is $u_k:= \P(E_k)$, the probability of the event $E_k$ in the limit model that the union of the first $k$ intervals to be discovered in the sampling process equals $\cup_{i=1}^k I_i = [0,R_k)$ for $R_k = P_1 + \cdots + P_k$ as above. There are several different ways to express this event $E_k$. The most convenient for present purposes is to consider the stopping time $n(k,1)$ when the sampling process first discovers a point not in $[0,R_k)$. If at that time $n = n(k,1)$ there is, for each $1 \le i \le k$, at least one sample point $U_j \in I_i$ with $1 \le j < n$, then the event $E_k$ has occurred; otherwise it has not.
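The elementary facts recorded above — that $|C_{1:n}|$ is uniform on $[n]$ and that $u_{1:n}=1/2$ for every $n\ge 2$ — are easy to confirm by simulation; the sample sizes and tolerances below are illustrative choices of ours:

```python
import random

def cycle_of(perm, start):
    # Return the cycle of `perm` (a list mapping i -> perm[i] on
    # {0, ..., n-1}) that contains `start`.
    cyc, j = {start}, perm[start]
    while j != start:
        cyc.add(j)
        j = perm[j]
    return cyc

random.seed(1)
n, trials, same_cycle = 8, 50_000, 0
size_counts = [0] * n
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    cyc = cycle_of(perm, 0)             # cycle containing the element 1
    size_counts[len(cyc) - 1] += 1
    same_cycle += (n - 1) in cyc        # do 1 and n share a cycle?

assert abs(same_cycle / trials - 1/2) < 0.02                    # u_{1:n} = 1/2
assert all(abs(c / trials - 1/n) < 0.02 for c in size_counts)   # uniform size
```

The same loop, with cycles replaced by the stick-breaking intervals, estimates the limit probabilities $u_k$.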
Thus for $k = 1,2, \ldots$ \begin{equation}\label{eq:ukintro} u_k = \P\left( \cap_{i=1}^k ( U_j \in I_i \mbox{ for some } 1 \le j < n(k,1) ) \right) \end{equation} and we adopt the convention that $u_0:=1$. This sequence $(u_k, k\geq 0)$ is a renewal sequence appearing in the study of \emph{regenerative permutations} in \cite{PT17}. In that context it is easily shown that the limit $u_\infty := \lim_{k\rightarrow \infty} u_k$ exists, but it is difficult to evaluate $u_k$ for general $k$. However, computation of $u_k$ for the first few $k=1,2,3,\ldots$ by symbolic integration suggested a general formula for $u_k$ as a rational linear combination of the real numbers $1, \zeta(2), \ldots, \zeta(k)$, where $\zeta$ is the Riemann zeta function, \begin{equation*} \zeta(s): = \sum_{n = 1}^{\infty} \frac{1}{n^s} \qquad \mbox{for } \re(s) > 1. \end{equation*} \quad This article establishes the following result, whose proof leads to some probabilistic interpretations of multiple zeta values and harmonic sums. \begin{proposition} \label{prop:conju} The renewal sequence $(u_k,~k \ge 0)$ defined above by \eqref{eq:ukintro} in terms of uniform stick-breaking is characterized by any one of the following equivalent conditions: \begin{enumerate}[(i).] \item The sequence $(u_k,~ k \ge 0)$ is defined recursively by \begin{equation} \label{rec} 2 u_{k} + 3 u_{k-1} + u_{k-2} = 2 \zeta(k) \quad \mbox{for } k \ge 2, \mbox{ with } u_0 = 1, \, u_1 = 1/2. \end{equation} \item For all $k \ge 0$, \begin{equation} \label{rzs} u_{k} = (-1)^{k-1} \left(2 - \frac{3}{2^k} \right) + \sum_{j=2}^{k} (-1)^{k-j} \left(2 -\frac{1}{2^{k-j}} \right) \zeta(j). \end{equation} \item For all $k \ge 0$, \begin{equation} \label{positive} u_{k} = \sum_{j = 1}^{\infty} \frac{2}{j^k(j+1)(j+2)}.
\end{equation} \item The generating function of $(u_{k},~ k \ge 0)$ is \begin{equation} \label{Uz} U(z) : = \sum_{k=0}^{\infty}u_k z^k = \frac{2}{(1+z)(2+z)} \Bigg[ 1 + \Bigg(2 - \gamma - \psi(1-z)\Bigg) z\Bigg], \end{equation} for $|z| < 1$, where $\gamma: = \lim_{n \rightarrow \infty} (\sum_{k=1}^n 1/k - \ln n) \approx 0.577$ is the Euler constant, and $\psi(z): = \Gamma'(z)/\Gamma(z)$ with $\Gamma(z): = \int_0^\infty t^{z-1} e^{-t} dt$, the digamma function. \end{enumerate} \end{proposition} The proof is given in Section \ref{sec:prop_proof}. We are also interested in $(u_k)$ when the partition $(I_k)$ follows a more general stick-breaking scheme, where the $(W_k)$ in \eqref{wk:intro} are i.i.d.\ with an arbitrary distribution on $(0,1)$. We will develop in particular the case, for $\theta>0$, where $W_k$ follows a beta$(1,\theta)$ distribution -- with density $\theta(1-x)^{\theta-1}$ on $[0,1]$. The sequence $u_k$ so defined is the limit \eqref{uklim} if the distribution of the random permutation $\pi_n$ is changed from the uniform distribution on permutations of $[n]$ to the {\em Ewens $(\theta)$ distribution} on permutations of $[n]$, in which the probability of any particular permutation of $[n]$ with $k$ cycles is $\theta^k/(\theta)_n$ instead of $1/(1)_n$, where $(\theta)_n:= \theta ( \theta + 1) \cdots ( \theta + n-1)$ is a rising factorial. In that case the limit distribution of interval lengths $(P_1, P_2, \ldots)$ is known as the GEM$(\theta)$ distribution \cite[\S 5.4]{abtlogcs}. Our expressions for $u_k$ in this case are less explicit. In the following, the notation $\stackrel{\theta}{=}$ indicates evaluations for the GEM$(\theta)$ model. For instance, it was proved in \cite[(7.16)]{PT17} that \begin{equation} \label{PT17formula} u_{\infty} \stackrel{\theta}{=} \frac{\Gamma(\theta+2) \Gamma(\theta+1)}{\Gamma(2\theta+2)} \stackrel{1}{=} \frac{1}{3}. 
\end{equation} A consequence of Proposition \ref{prop:conju} is that for each $k\geq 1$, the right-hand side of \eqref{rzs} is positive -- in fact strictly greater than $u_{\infty} \stackrel{1}{=} 1/3$. Likewise, the probability $f_k$ of a first renewal at time $k$, which is determined by $u_1, \ldots, u_k$ through a well-known recursion recalled later in \eqref{ufrec}, is strictly positive. These inequalities seem not at all obvious without the probabilistic interpretations offered here. The inequalities are reminiscent of Li's criterion \cite{Li} for the Riemann hypothesis, which has some probabilistic interpretations indicated in \cite[Section 2.3]{BPY}. The GEM$(1)$ model also arises from the asymptotics of prime factorizations \cite{DG}, but the results for sampling from GEM$(1)$ described here do not seem easy to interpret in that setting. \quad The interpretation of $u_k$ sketched above and detailed in \cite{PT17}, that $u_k$ is the probability that the random order of discovery of intervals maps $[k]$ to $[k]$, yields the following corollary. \begin{corollary} For ${\bf w}: = (w_1, w_2, \ldots) \in (0,1)^{\mathbb{N}_{+}}$, let $p_i({\bf w}) : = (1-w_1) \cdots (1-w_{i-1}) w_i$. Then for each $k \ge 1$, the expression \begin{equation} \label{identity} \sum_{\pi \in \mathfrak{S}_k} \int_{(0,1)^k} p_{\pi(1)}({\bf w}) \prod_{i=2}^{k} \frac{p_{\pi(i)}({\bf w})}{1-\sum_{j=1}^{i-1} p_{\pi(j)}({\bf w})} dw_1 \cdots dw_k \end{equation} is equal to \eqref{rzs} and to \eqref{positive}, where $\mathfrak{S}_k$ is the set of permutations of the finite set $\{1,\cdots,k\}$. \end{corollary} The expression \eqref{rzs} gives a {\em rational zeta series expansion} of the multiple integral \eqref{identity}. Similar expansions also appeared in Beukers' proof \cite{Beukers} of the irrationality of $\zeta(3)$. The expression \eqref{identity} is a sum of $k!$ positive terms, while \eqref{rzs} is a linear combination of $1,\zeta(2),\cdots,\zeta(k)$ with alternating signs.
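The three characterizations in Proposition \ref{prop:conju} — the recursion \eqref{rec}, the rational zeta series \eqref{rzs}, and the positive series \eqref{positive} — are straightforward to confirm numerically; the truncation levels below are our illustrative choices:

```python
def zeta(s, N=1000):
    # Riemann zeta via a truncated sum plus Euler-Maclaurin tail correction.
    return (sum(n**-s for n in range(1, N + 1))
            + N**(1 - s) / (s - 1) - N**-s / 2 + s * N**(-s - 1) / 12)

def u_series(k, N=100_000):
    # (iii): u_k = sum_j 2 / (j^k (j+1)(j+2)).
    return sum(2 / (j**k * (j + 1) * (j + 2)) for j in range(1, N + 1))

def u_zeta(k):
    # (ii): rational zeta series with alternating signs.
    return ((-1)**(k - 1) * (2 - 3 / 2**k)
            + sum((-1)**(k - j) * (2 - 1 / 2**(k - j)) * zeta(j)
                  for j in range(2, k + 1)))

u = {0: 1.0, 1: 0.5}
for k in range(2, 8):
    u[k] = zeta(k) - 1.5 * u[k - 1] - 0.5 * u[k - 2]   # recursion (i)
    assert abs(u[k] - u_series(k)) < 1e-6
    assert abs(u[k] - u_zeta(k)) < 1e-6
    assert u[k] > 1/3    # strictly greater than u_infinity = 1/3
```

The equivalence of (i) and (iii) is transparent here: multiplying the summand of \eqref{positive} by $2+3j+j^2=(j+1)(j+2)$ telescopes to $2\zeta(k)$.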
By symbolic integration, we can identify each term of the sum in \eqref{identity} for $k=2,3$, but some terms become difficult to evaluate for $k \ge 4$: we have no general formula for these terms, nor a direct algebraic explanation of why the terms in \eqref{identity} should sum to a rational zeta series. \quad The Riemann zeta function plays an important role in analytic number theory \cite{Edwards,Bombieri}, and has applications in geometry \cite{Witten,Ev} and mathematical physics \cite{Berry,Kirsten}. Connections between the Riemann zeta function and probability theory have also been explored, for example: \begin{itemize} \item For each $s > 1$, the normalized terms of the Riemann zeta series define a discrete probability distribution of a random variable $Z_s$ with values in $\{1, 2, \cdots\}$, such that $\log Z_s$ has a compound Poisson distribution \cite{ABR,LH,Gut}. \item The values $\zeta(2)$ and $\zeta(3)$ emerge in the limit of large random objects \cite{Frieze,AldouS}. \item The values $1/\zeta(n)$ for $n = 2,3,\ldots$ arise as the limit proportion of $n$-free numbers, that is, numbers not divisible by the $n$-th power of any prime, see \cite{EL, AN}. \item The values $\zeta(1/2-n)$ for $n\geq 0$ appear in the expected first ladder height of Gaussian random walks \cite{chang_ladder_1997}. \item The Riemann zeta function appears in the Mellin transforms of functionals of Brownian motion and Bessel processes \cite{Williams,BPY}. \item Conjectured bounds for the zeta function on the critical line $\Re(s) = 1/2$ can be related to branching random walks \cite{arguin_maxima_2017}. \item There are striking parallels between the behavior of zeros of the Riemann zeta function on the line $\Re(s) = 1/2$ and the structure of eigenvalues in random matrix theory \cite{Montgomery,KSarnak,Odly}.
\end{itemize} In the early $1990$s, Hoffman \cite{Hoffman} and Zagier \cite{Zagier} introduced the {\em multiple zeta value} \begin{equation} \label{multzeta} \zeta(s_1,\cdots,s_k): = \sum_{0< n_1 < \cdots < n_k } \frac{1}{n_1^{s_1} \cdots n_k^{s_k}}, \end{equation} and the {\em multiple zeta-star value} \begin{equation} \label{multzetastar} \zeta^{*}(s_1,\cdots,s_k): = \sum_{0< n_1 \leq \cdots \leq n_k } \frac{1}{n_1^{s_1} \cdots n_k^{s_k}}, \end{equation} for each $k>0$ and $s_i \in \mathbb{N}_{+}: = \{1,2,\cdots\}$ with $s_k > 1$ to ensure convergence. Note that the multiple zeta-star value \eqref{multzetastar} can be written as a sum of multiple zeta values: \begin{equation*} \zeta^*(s_1,\cdots,s_k) = \sum_{{\bf s}^*} \zeta({\bf s}^*), \end{equation*} where the sum is over all ${\bf s}^* = (s_1 \square \cdots \square s_k)$, with each $\square$ filled by either a comma or a plus. To illustrate, \begin{align*} & \zeta^*(s_1,s_2) = \zeta(s_1,s_2) + \zeta(s_1+s_2), \\ & \zeta^*(s_1,s_2,s_3) = \zeta(s_1,s_2,s_3) + \zeta(s_1,s_2+s_3) + \zeta(s_1+s_2,s_3) + \zeta(s_1+s_2+ s_3). \end{align*} See \cite{BBBL,Hoffman05,AKO} for the algebraic structure, and some evaluations of multiple zeta values. It was proved in \cite{AET,Zhao} that the multiple zeta functions \eqref{multzeta}-\eqref{multzetastar} can also be continued meromorphically to the whole space $\mathbb{C}^k$. \quad These multiple zeta values appear in various contexts including algebraic geometry, knot theory, and quantum field theory, see \cite{GF17}. But we are not aware of any previous probabilistic interpretation of these numbers. In this article we show how the zeta values $\zeta(2),\zeta(3),\ldots$ and \eqref{multzeta}-\eqref{multzetastar} arise in the renewal sequence $(u_k)$ associated with the discovery of intervals for a GEM$(1)$ partition of $[0,1]$.
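The comma-or-plus decomposition of $\zeta^*$ into multiple zeta values holds term by term, so it survives any common truncation of the sums exactly; a quick depth-two check for $\zeta^*(2,3)=\zeta(2,3)+\zeta(5)$ (our truncation level, for illustration):

```python
def double_zeta_trunc(s1, s2, N, strict):
    # Truncated double zeta sum over 0 < n1 < n2 <= N (strict) or
    # 0 < n1 <= n2 <= N (the star value, weak inequality).
    total = 0.0
    for n2 in range(1, N + 1):
        top = n2 - 1 if strict else n2
        total += sum(n1**-s1 for n1 in range(1, top + 1)) * n2**-s2
    return total

N = 400
lhs = double_zeta_trunc(2, 3, N, strict=False)        # zeta*(2,3), truncated
rhs = (double_zeta_trunc(2, 3, N, strict=True)        # zeta(2,3), truncated
       + sum(n**-5 for n in range(1, N + 1)))         # + zeta(5), truncated
assert abs(lhs - rhs) < 1e-12
```

Splitting the weak inequality $n_1\le n_2$ into $n_1<n_2$ and the diagonal $n_1=n_2$ (which contributes $\zeta(s_1+s_2)$) is exactly the $\square$-decomposition at depth two.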
Equivalently the same sequence $(u_k)$ can be expressed in terms of a GEM$(1)$-biased permutation of $\mathbb{N}_{+}$ \cite{PT17}, or of the {\em Bernoulli sieve} \cite{Gnedinsieve} driven by the GEM$(1)$ distribution. \bigskip {\bf Organization of the paper:} The rest of the paper is organized as follows. \begin{itemize} \item In Section \ref{sec:gem_theta_records} we introduce the main tool of our analysis, a Markov chain $(\widehat{Q}_k)$ derived from the discovery process of subintervals in the GEM$(\theta)$ stick-breaking model, and show its equality in distribution with a weak record chain. \item In Section \ref{sec:prop_proof} we give the proof of Proposition \ref{prop:conju} for $\theta = 1$, and provide some partial results for general $\theta$. \item In Section \ref{sec:renewal_seq} we define a number of renewal sequences satisfying a recursion involving the Riemann zeta function; among those we identify the sequence $(u_k)$ defined by \eqref{eq:ukintro} in the GEM$(1)$ model. \item In Section \ref{sec:gem_one} we specialize again to $\theta=1$ and examine further the distribution of the Markov chain $(\widehat{Q}_k)$, deriving expressions involving iterated harmonic sums and zeta values. \item In Section \ref{sec:u2} we derive a formula for $u_{2:n}$ associated with random permutations, which provides an evaluation of $u_2$ for general $\theta$ as the limit. \end{itemize} \section{One-parameter Markov chains and record processes} \label{sec:gem_theta_records} \quad Recall the definition \eqref{wk:intro} of the length $P_k = |I_k|$ of the $k$-th interval in a stick-breaking partition and the uniform sequence $(U_i)$ of points that we use to discover intervals. Now define a random sequence of positive integers $(X_i)$ by setting \begin{equation}\label{eq:def_xi} X_i := k \iff U_i \in I_k . \end{equation} So $X_i$ is the rank of the interval in which the $i$-th sample point $U_i$ falls.
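The defining event \eqref{eq:ukintro} lends itself to direct Monte Carlo estimation in the GEM$(1)$ model: break the stick $k$ times, then sample uniform points until one lands outside $[0,R_k)$ and check whether all $k$ intervals were discovered. A sketch (our function name; trial counts are illustrative):

```python
import random

def estimate_u_k(k, trials, seed=0):
    """Monte Carlo estimate of u_k for uniform (GEM(1)) stick-breaking."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # stick-breaking endpoints R_1 < ... < R_k
        R, x, rem = [], 0.0, 1.0
        for _ in range(k):
            w = rng.random()
            x += rem * w
            rem *= 1.0 - w
            R.append(x)
        seen = set()
        while True:
            u = rng.random()
            if u >= R[-1]:                 # first point outside [0, R_k)
                break
            # index of the interval I_j = [R_{j-1}, R_j) containing u
            seen.add(next(j for j in range(k) if u < R[j]))
        hits += len(seen) == k
    return hits / trials

print(estimate_u_k(1, 20_000))  # ~ 1/2
print(estimate_u_k(2, 20_000))  # ~ zeta(2) - 5/4 = 0.3949...
```

The value $u_2=\zeta(2)-5/4$ quoted in the comment is the $k=2$ case of \eqref{rzs}.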
Conditionally given the sequence of interval lengths $(P_1,P_2, \ldots)$, the $X_i$ are i.i.d.\ on $\mathbb{N}_{+} := \{1,2, \ldots \}$ with $\mathbb{P}(X_i = k \mid P_1, P_2, \ldots) = P_k$. Formula \eqref{eq:ukintro} can be recast as \begin{equation}\label{eq:uk_redef} u_k = \P\left (\{X_1, X_2, \ldots, X_{n(k,1)-1}\}=\{1,2,\ldots,k\}\right ), \end{equation} where $n(k,1) = \inf\{i \geq 1, \, X_i \geq k+1\}$. \quad The key to our analysis is the Markov chain $(\widehat{Q}_k)$ given by the following lemma from \cite[Lemma 7.1]{PT17}. This lemma is suggested by work of Gnedin and coauthors on the Bernoulli sieve \cite{Gsmall,GIM}, and subsequent work on extremes and gaps in sampling from a residual allocation model (RAM) by Pitman and Yakubovich \cite{PY17, P17}. \begin{lemma} \label{lemma:PT} Let $X_1, X_2, \ldots$ be as in \eqref{eq:def_xi} for a stick-breaking partition with i.i.d.\ factors $W_i \stackrel{(d)}{=} W$ as in \eqref{wk:intro} for some distribution of $W$ on $(0,1)$. For $n \in \mathbb{N}_{+}$ and $k = 0,1, \ldots$ let \begin{equation} Q_n^*(k):= \sum_{i=1}^n 1 (X_i > k ) = \sum_{i=1}^n 1 (U_i \ge R_k ) \end{equation} represent the number of the first $n$ sample points which land outside the union $[0, R_k)$ of the first $k$ intervals. For $m = 1,2, \ldots$ let $n(k,m):= \min \{n : Q_n^*(k) = m \}$ be the first time $n$ that there are $m$ sample points outside the first $k$ intervals. Then: \begin{enumerate}[(i).] \item For each $k$ and $m$ there is the equality of joint distributions \begin{equation} \left( Q_{n(k,m)} ^* (k-j), 0 \le j \le k \right) \stackrel{(d)}{=} \left( \widehat{Q}_j, 0 \le j \le k \,\middle|\, \widehat{Q}_0 = m \right) \end{equation} where $(\widehat{Q}_0, \widehat{Q}_1, \ldots)$ with $1 \le \widehat{Q}_0 \le \widehat{Q}_1 \le \cdots$ is a Markov chain with state space $\mathbb{N}_{+}$ and stationary transition probability function \begin{equation} \label{hatqdef} \widehat{q}(m,n) := \binom{n-1}{m-1}\mathbb{E} W^{n-m} (1-W)^m \quad \mbox{for}~m \le n.
\end{equation} \item For each $k \ge 1$ the renewal probability $u_k$ defined by \eqref{eq:uk_redef} is given by \begin{equation} \label{ukform} u_k = \mathbb{P}( \widehat{Q}_0 < \widehat{Q}_1 < \cdots < \widehat{Q}_k \,|\, \widehat{Q}_0 = 1 ). \end{equation} \item The sequence $u_k$ is strictly decreasing, with limit $u_\infty \ge 0$ which is given by \begin{equation} \label{uinfform} u_\infty = \mathbb{P}( \widehat{Q}_0 < \widehat{Q}_1 < \cdots \,|\, \widehat{Q}_0 = 1 ). \end{equation} \end{enumerate} \end{lemma} \quad Here we study the Markov chain $(\widehat{Q}_k)$ for the GEM$(\theta)$ partition and show that it has an interpretation as a \emph{weak record chain}. Let $X_1,X_2, \ldots$ be a random sample from the GEM$(\theta)$ model with i.i.d.\ stick-breaking factors $W_i \stackrel{(d)}{=} W$ for $W$ following a beta$(1,\theta)$ distribution. Consider \begin{equation} \label{C1} C^{\ell, \theta}_{k}: = \sum_{j=1}^k 1\{Q^{*}_{n(k,\ell)}(j) = Q^{*}_{n(k,\ell)}(j-1)\} \quad \mbox{for } k \ge 1, \end{equation} the number of empty intervals among the first $k$ intervals at the first time $n(k,\ell)$ there are $\ell$ points outside the first $k$ intervals. To study the random variables $C^{\ell, \theta}_{k}$, we introduce a family of one-parameter Markov chains $(\widehat{Q}_j^{\ell,\theta}, ~j \ge 0)$ with \begin{itemize} \item the initial value $\widehat{Q}_0^{\ell, \theta} = \ell \in \mathbb{N}_{+}$, \item the transition probability function $\widehat{q}^{\theta}(m,n)$ given by \eqref{hatqdef} for $W$ the beta$(1,\theta)$ distribution. \end{itemize} For $W$ the beta$(1,\theta)$ distribution, \begin{equation*} \mathbb{E}W^{n-m}(1-W)^m = \frac{(1)_{n-m} \, \theta }{(\theta+m)_{n-m+1}} \quad \mbox{for } m \le n, \end{equation*} where \begin{equation*} (x)_j: = x(x+1) \cdots (x+j-1) = \frac{\Gamma(x+j)}{\Gamma(x)}.
\end{equation*} So the transition probability $\widehat{q}^{\theta}$ of the $\widehat{Q}^{\ell,\theta}$ chain is given by \begin{equation} \label{transitheta} \widehat{q}^{\theta}(m,n) = \frac{(m)_{n-m} \, \theta }{(\theta+m)_{n-m+1}} \quad \mbox{for } m \le n. \end{equation} Let \begin{equation} \label{occup} G^{\ell,\theta}_{i}(k): = \sum_{j = 1}^k 1\{\widehat{Q}^{\ell,\theta}_{j} = i\} \quad \mbox{for } i \geq \ell, \end{equation} be the occupation count of state $i$ for the Markov chain $(\widehat{Q}^{\ell,\theta}_j,~1 \le j \le k)$. According to Lemma \ref{lemma:PT} $(i)$, for each $k \ge 1$, \begin{align} C^{\ell, \theta}_{k} \stackrel{(d)}{=} \widehat{C}^{\ell, \theta}_k &: = \sum_{j=1}^k 1 \{\widehat{Q}^{\ell,\theta}_j = \widehat{Q}^{\ell,\theta}_{j-1}\} \label{CQ} \\ & = \sum_{j=1}^k 1 \{\widehat{Q}^{\ell,\theta}_j = \widehat{Q}^{\ell,\theta}_{j-1} = \ell\} + \sum_{i = \ell+1}^{\infty} \sum_{j=1}^k1 \{\widehat{Q}^{\ell,\theta}_j = \widehat{Q}^{\ell,\theta}_{j-1} = i\} \notag\\ &= G^{\ell,\theta}_{\ell}(k) + \sum_{i=\ell+1}^\infty (G^{\ell,\theta}_{i}(k) - 1)^+, \label{C2} \end{align} where the last equality follows from the fact that the process $(\widehat{Q}_j^{\ell,\theta}, ~j \ge 0)$ is weakly increasing starting at $\ell$. \quad Now we establish a connection between the one-parameter chain $\widehat{Q}^{\ell,\theta}$ and a record process. Fix $\ell \in \mathbb{N}_{+}$. For $X_1, X_2, \ldots$ i.i.d.\ with support $\{\ell, \ell+1, \ldots\}$, let $(R_j, ~j \ge 0)$ be the {\em weak ascending record process} of $(X_j, ~j \ge 1)$. That is, \begin{equation*} R_0: = \ell \quad \mbox{and} \quad R_j : = X_{L_j} \mbox{ for } j \ge 1, \end{equation*} where $L_j$ is defined recursively by \begin{equation*} L_1: = 1 \quad \mbox{and} \quad L_{j+1}: = \min\{i> L_j: X_i \ge X_{L_j}\} \mbox{ for } j \ge 1. 
\end{equation*} The sequence $(R_j,~ j \ge 0)$ was first considered by Vervaat \cite{Vervaat}, see also \cite[Section 2.8]{ABN}, and \cite[Lecture 15]{Nevzorov} for further discussion on records of discrete distributions. It is known that $(R_j,~j \ge 0)$ is a Markov chain with the transition probability function $r(m,n)$ given by \begin{equation} \label{rmn} r(m,n) = \frac{\mathbb{P}(X_1 = n)}{\mathbb{P}(X_1 \ge m)} \quad \mbox{for } m \le n. \end{equation} \begin{proposition} \label{record} Let $(R_j,~ j \ge 0)$ be the weak ascending record process of the i.i.d.\ sequence $(X_j,~ j \ge 1)$ with $X_j \stackrel{(d)}{=} \widehat{Q}_1^{\ell,\theta}$; that is, \begin{equation*} \mathbb{P}(X_j = n) = \widehat{q}^{\theta}(\ell, n) \quad \mbox{for } n \ge \ell, \end{equation*} where $\widehat{q}^{\theta}$ is defined by \eqref{transitheta}. Then there is the equality in joint distributions \begin{equation} (\widehat{Q}_j^{\ell,\theta}, j \ge 0) \stackrel{(d)}{=} (R_j,~ j \ge 0). \end{equation} \end{proposition} \begin{proof} Observe that for $\ell \le m \le n$, \begin{equation*} \widehat{q}^{\theta}(\ell,n) = \frac{(\ell)_{n-\ell} \, \theta }{(\theta+\ell)_{n - \ell + 1}} = \frac{(\ell)_{m-\ell}}{(\theta+\ell)_{m-\ell}} \widehat{q}^{\theta}(m,n). \end{equation*} Sum this identity over $n \ge m$ to see that $\mathbb{P}(X_j \ge m) = (\ell)_{m-\ell}/(\theta+\ell)_{m-\ell}$, hence that $\widehat{q}^{\theta}(m, \cdot)$ is the conditional distribution of $X_j$ given $X_j \ge m$, as required. \end{proof} \quad It is known \cite[Theorem 1.1]{PY17} that the counts $G^{\ell,\theta}_i(\infty)$ of records at each possible value $i = \ell, \ell+1, \ldots$ are independent and geometrically distributed on $\mathbb{N}_0: = \{0\} \cup \mathbb{N}_{+}$ with parameter $i/(i + \theta)$. Combined with Proposition \ref{record}, we get the following result which is a variant of \cite[Proposition 5.1]{GINR}. 
\begin{corollary} \label{gfC} Let $C^{\ell,\theta}_k$ and $\widehat{C}^{\ell, \theta}_k$ be defined by \eqref{C1} and \eqref{CQ}. Then there is the increasing and almost sure convergence \begin{equation*} C_k^{\ell, \theta} \stackrel{(d)}{=} \widehat{C}^{\ell, \theta}_k \uparrow C_{\infty}^{\ell,\theta}, \end{equation*} along with convergence of all positive moments, where the probability generating function of $C^{\ell,\theta}_{\infty}$ is given by \begin{equation} \label{pgfC} F_{\ell,\theta}(z) :=\mathbb{E} z^{C^{\ell,\theta}_{\infty}} = \frac{\Gamma(\ell+1+\theta) \Gamma(\ell+\theta - \theta z)}{\Gamma(\ell) \Gamma(\ell + 1 + 2 \theta - \theta z)}. \end{equation} Consequently, the random variable $C^{\ell,\theta}_{\infty}$ has the mixed Poisson distribution with random parameter $-\theta \log H$, where $H$ has the beta$(\ell,\theta+1)$ distribution. \end{corollary} This result, combined with Lemma \ref{lemma:PT}$(iii)$, leads to the formula \eqref{PT17formula}: \[ u_\infty \overset{\theta}{=} \P(C^{1,\theta}_\infty = 0) = F_{1,\theta}(0) = \frac{\Gamma(\theta+2) \Gamma(\theta+1)}{\Gamma(2 \theta+2)}. \]Also note that the random variable $C_{\infty}^{\ell,\theta}$ has a simple representation for $\theta \in \mathbb{N}_{+}$: \begin{equation} \label{sumgeo} C^{\ell,\theta}_{\infty} \stackrel{(d)}{=} \sum_{j = 0}^{\theta} \mathcal{G}_j^{\ell,\theta}, \end{equation} where $\mathcal{G}_j^{\ell,\theta}$, $0 \le j \le \theta$ are independent and geometrically distributed on $\mathbb{N}_0$ with parameter $(\ell+j)/(\ell+j+\theta)$. \begin{proof} The identity \eqref{C2} shows that \begin{equation*} \widehat{C}^{\ell, \theta}_k \uparrow C^{\ell,\theta}_{\infty}: = G^{\ell,\theta}_{\ell}(\infty) + \sum_{i=\ell+1}^\infty (G^{\ell,\theta}_{i}(\infty) - 1)^+ \quad a.s. \end{equation*} where $G^{\ell,\theta}_{i}(\infty)$, $i \ge \ell$ are independent and geometrically distributed on $\mathbb{N}_0$ with parameter $p_{i,\theta}: = i/(i + \theta)$. 
For $G$ geometrically distributed on $\mathbb{N}_0$ with parameter $p$, \begin{equation*} \mathbb{E}z^{G} = \frac{p}{1-(1-p)z} \quad \mbox{and} \quad \mathbb{E}z^{(G-1)^{+}} = p + \frac{(1-p)p}{1-(1-p)z}. \end{equation*} As a result, \begin{align*} \mathbb{E} z^{C^{\ell,\theta}_{\infty}} &= \frac{p_{\ell,\theta}}{1-(1-p_{\ell,\theta})z} \prod_{i = \ell + 1}^{\infty} \left(p_{i,\theta} + \frac{(1-p_{i,\theta})p_{i,\theta}}{1-(1-p_{i,\theta})z} \right) \\ &= \frac{\ell}{\ell+ \theta- \theta z} \prod_{i = \ell + 1}^{\infty} \frac{i(i+2 \theta - \theta z)}{(i+ \theta)(i+\theta-\theta z)} \\ & = \frac{\ell}{\ell+ \theta- \theta z} \cdot \frac{\Gamma(\ell+1+\theta) \Gamma(\ell+1+\theta - \theta z)}{\Gamma(\ell+1) \Gamma(\ell + 1 + 2 \theta - \theta z)}, \end{align*} which leads to the formula \eqref{pgfC}. Recall that the generating function of the Poisson$(u)$ distribution is $e^{-u(1-z)}$, and that the Mellin transform of the beta$(p,q)$ variable $H_{p,q}$ is \begin{equation*} \mathbb{E}H_{p,q}^\nu = \frac{\Gamma(\nu+p)\Gamma(p+q)}{\Gamma(p) \Gamma(\nu+p+q)} \quad \mbox{for } \nu > -p. \end{equation*} By taking $\nu = \theta(1-z)$, $p = \ell$ and $q = \theta+1$, we identify the distribution of $C^{\ell,\theta}_{\infty}$ with the stated mixed Poisson distribution. \end{proof} \quad Let $\psi(x): = \Gamma'(x)/ \Gamma(x)$ be the digamma function, and $\psi^{(k)}(x)$ be the $k^{th}$ derivative of $\psi(x)$. For $k \ge 1$, define \begin{equation} \label{Deltakz} \Delta_{k,\ell,\theta}(z) := \psi^{(k-1)}(\ell+\theta-\theta z) - \psi^{(k-1)}(\ell+1+2\theta-\theta z). \end{equation} A simple calculation shows that $F'_{\ell,\theta}(z) = -\theta F_{\ell,\theta}(z) \Delta_{1,\ell,\theta}(z)$ and $\Delta'_{k,\ell,\theta}(z) = -\theta \Delta_{k+1,\ell,\theta}(z)$. 
By induction, the derivatives of $F_{\ell,\theta}$ can be written as \begin{equation} F_{\ell,\theta}^{(k)}(z) = (-\theta)^k F_{\ell,\theta}(z) P_k(\Delta_{1,\ell,\theta}(z), \cdots, \Delta_{k,\ell,\theta}(z)), \end{equation} where $P_k(x_1, \cdots, x_k)$ is the {\em $k^{th}$ complete Bell polynomial} \cite[Section 3.3]{Comtet74}. To illustrate, \begin{align*} P_1(x_1) &= x_1,\\ P_2(x_1,x_2) &= x_1^2 + x_2,\\ P_3(x_1,x_2,x_3) &= x_1^3 + 3 x_1 x_2 + x_3,\\ P_4(x_1,x_2,x_3,x_4) &= x_1^4 + 6 x_1^2 x_2 + 4 x_1 x_3 + 3 x_2^2 +x_4,\\ P_5(x_1,x_2,x_3,x_4,x_5) &= x_1^5 + 10 x_1^3 x_2 + 10 x_1^2 x_3 + 15 x_1 x_2^2 + 5 x_1 x_4 + 10 x_2 x_3 + x_5, \end{align*} and so on. Now by expanding $F_{\ell,\theta}$ into power series at $z=0$ and $z=1$, we get \begin{equation} \label{Claw} \mathbb{P}(C^{\ell,\theta}_\infty = k) = \frac{(-\theta)^k}{k !} \frac{\Gamma(\ell+\theta) \Gamma(\ell+ \theta+1)}{\Gamma(\ell) \Gamma(\ell+ 2 \theta + 1)} P_k(\Delta_{1,\ell,\theta}(0), \cdots, \Delta_{k,\ell,\theta}(0)), \end{equation} and \begin{equation} \label{binom} \mathbb{E}\binom{C_\infty^{\ell,\theta}}{k} = \frac{(-\theta)^k}{k !} \, P_k(\Delta_{1,\ell,\theta}(1), \cdots, \Delta_{k,\ell,\theta}(1)), \end{equation} where $\Delta_{k,\ell,\theta}(\cdot)$ is defined by \eqref{Deltakz}. By taking $\theta = 1$ and $k=1$ in \eqref{binom}, we get \begin{equation} \mathbb{E}C_{\infty}^{\ell,1} = \psi(\ell+2) - \psi(\ell) = \frac{1+ 2 \ell}{\ell(\ell+1)}, \end{equation} since $\psi(\ell) = \sum_{j=1}^{\ell-1} 1/j - \gamma$, with $\gamma$ the Euler constant. \section{Proof of Proposition \ref{prop:conju}} \label{sec:prop_proof} \quad In this section we apply the results of Section \ref{sec:gem_theta_records} to evaluate the renewal sequence $(u_k)$ in the GEM$(1)$ case, and extend to the general GEM$(\theta)$ case. The computation boils down to the study of the Markov chain $(\widehat{Q}^{\ell,\theta}_k,~k \ge 0)$ with $\ell = 1$.
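Before starting, here is a numerical spot-check of the closed forms of the previous section in the case $\ell = \theta = 1$, where \eqref{sumgeo} makes the law of $C^{1,1}_\infty$ a convolution of two geometric laws with parameters $1/2$ and $2/3$. The Python sketch below (the helper \texttt{bell\_complete} is ours and implements the standard recurrence for complete Bell polynomials) verifies \eqref{pgfC} and \eqref{Claw}; the closed form for $\Delta_{k,1,1}(0)$ used in the code follows from $\psi^{(j)}(x+1) = \psi^{(j)}(x) + (-1)^j j!/x^{j+1}$.

```python
from math import comb, factorial, gamma

def F(ell, theta, z):
    # probability generating function (pgfC)
    return (gamma(ell + 1 + theta) * gamma(ell + theta - theta * z)
            / (gamma(ell) * gamma(ell + 1 + 2 * theta - theta * z)))

# (sumgeo) for ell = theta = 1: C is a sum of independent geometrics with
# parameters 1/2 and 2/3, so its pgf is (1/(2-z)) * (2/(3-z))
for z in (0.0, 0.3, 0.99):
    assert abs(F(1, 1, z) - 2 / ((2 - z) * (3 - z))) < 1e-12
assert abs(F(1, 1, 0.0) - 1 / 3) < 1e-12   # u_infty at theta = 1

def bell_complete(xs):
    # complete Bell polynomials P_0..P_n via the recurrence
    # P_m = sum_{i=1}^m binom(m-1, i-1) x_i P_{m-i}
    B = [1.0] + [0.0] * len(xs)
    for m in range(1, len(xs) + 1):
        B[m] = sum(comb(m - 1, i - 1) * xs[i - 1] * B[m - i] for i in range(1, m + 1))
    return B

# Delta_{k,1,1}(0) = psi^{(k-1)}(2) - psi^{(k-1)}(4) = (-1)^k (k-1)! (2^{-k} + 3^{-k})
K = 8
deltas = [(-1) ** k * factorial(k - 1) * (2.0 ** -k + 3.0 ** -k) for k in range(1, K + 1)]
B = bell_complete(deltas)

for k in range(K + 1):
    claw = (-1.0) ** k / factorial(k) * F(1, 1, 0.0) * B[k]   # formula (Claw)
    # exact law of the sum of the two geometrics
    exact = sum((1 / 2) ** (i + 1) * (2 / 3) * (1 / 3) ** (k - i) for i in range(k + 1))
    assert abs(claw - exact) < 1e-9
```

For instance $k = 1$ gives $\Delta_{1,1,1}(0) = -5/6$ and $\mathbb{P}(C^{1,1}_\infty = 1) = 5/18$ on both sides.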
We start by proving Proposition \ref{prop:conju}, corresponding to the case where $\ell = 1$ and $\theta = 1$. To this end, we need the following duality formula due to Hoffman \cite[Theorem 4.4]{Hoffman} and Zagier \cite[Section 9]{Zagier}. \begin{lemma} Let $\zeta(s_1, \cdots, s_k)$ be the multiple zeta value defined by \eqref{multzeta}. Then \begin{equation} \zeta(\underbrace{1,\ldots,1}_{k-1}, h+1) = \zeta(\underbrace{1,\ldots,1}_{h-1}, k+1) \quad \mbox{for all } h, k \in \mathbb{N}_{+}. \end{equation} In particular, \begin{equation} \label{keyzeta} \zeta(\underbrace{1,\ldots,1}_{k-2}, 2) = \zeta(k) \quad \mbox{for all } k \geq 2. \end{equation} \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:conju}] By Lemma \ref{lemma:PT}$(ii)$, for $k \ge 2$, \begin{align*} u_k & \stackrel{1}{=} \sum_{1<n_1< \cdots < n_k} \mathbb{P}(\widehat{Q}^{1,1}_1 = n_1, \cdots, \widehat{Q}^{1,1}_k = n_k) \\ & = \sum_{1<n_1< \cdots < n_k} \widehat{q}^{1}(1,n_1) \, \widehat{q}^{1}(n_1,n_2) \cdots \widehat{q}^{1}(n_{k-1},n_k) \\ & = \sum_{1<n_1< \cdots < n_{k-1}} \frac{1}{(n_1+1) \cdots(n_{k-2}+1)(n_{k-1}+1)^2} \\ & = \sum_{0 <n_1< \cdots < n_{k-1}} \frac{1}{(n_1+2) \cdots(n_{k-2}+2)(n_{k-1}+2)^2}. \end{align*} For $k \ge 2$ and $x \ge 0$, let \begin{equation} \label{Hzeta} \zeta(\nu_1, \ldots, \nu_{k-1}; x) : = \sum_{0 <n_1< \cdots < n_{k-1}} \frac{1}{(n_1+x)^{\nu_1} \cdots(n_{k-2}+x)^{\nu_{k-2}}(n_{k-1}+x)^{\nu_{k-1}}}, \end{equation} be the {\em multiple Hurwitz zeta function} \cite{Olivier}, and $h_k(x): = \zeta(\underbrace{1,\ldots,1}_{k-2},2;x)$. Therefore \begin{equation} \label{ukhk} u_k \stackrel{1}{=} h_k(2) \mbox{ for } k \ge 2. 
\end{equation} We claim that for $k \ge 3$, \begin{equation} \label{Hzetarel} \zeta(\nu_1, \ldots, \nu_{k-1}; x-1) = \zeta(\nu_1, \ldots, \nu_{k-1}; x) + x^{-\nu_{1}} \zeta(\nu_2, \ldots, \nu_{k-1};x). \end{equation} In fact, \begin{align*} \zeta(\nu_1, \ldots, \nu_{k-1}; x-1) & = \sum_{0 <n_1< \cdots < n_{k-1}} \frac{1}{(n_1+x-1)^{\nu_1} \cdots(n_{k-2}+x-1)^{\nu_{k-2}}(n_{k-1}+x-1)^{\nu_{k-1}}} \\ & = \sum_{0 \leq n_1< \cdots < n_{k-1}} \frac{1}{(n_1+x)^{\nu_1} \cdots(n_{k-2}+x)^{\nu_{k-2}}(n_{k-1}+x)^{\nu_{k-1}}}, \end{align*} and writing this expression as two sums over the distinct sets $\{0 = n_1 < n_2 < \cdots < n_{k-1}\}$ and $\{0 < n_1 < n_2 < \cdots < n_{k-1}\}$ yields the formula \eqref{Hzetarel}. Consequently, \begin{equation} \label{recur} h_k(x-1) = h_k(x) + x^{-1} h_{k-1}(x) \quad \mbox{for } k \ge 3 . \end{equation} By taking $x = 2$ and $x = 1$ in \eqref{recur}, we get for $k \ge 3$, \begin{equation*} h_k(1) = h_k(2) + \frac{1}{2} h_{k-1}(2) \quad \mbox{and} \quad h_k(0) = h_k(1) + h_{k-1}(1), \end{equation*} which implies that for $k \ge 4$, \begin{equation} \label{recur2} h_k(0) = h_k(2) + \frac{3}{2} h_{k-1}(2) + \frac{1}{2} h_{k-2}(2). \end{equation} According to the formula \eqref{keyzeta}, \begin{equation} \label{multzetafor} h_k(0) = \zeta(\underbrace{1,\ldots,1}_{k-2}, 2) = \zeta(k). \end{equation} By \eqref{ukhk}, \eqref{recur2} and \eqref{multzetafor}, we derive the recursion \eqref{rec} for $k \ge 4$. Recall that by definition, we have $u_0=1$ and it is easy to check that $u_1\overset{1}{=}1/2$. By symbolic integration, we get: \begin{equation*} u_2 \overset{1}{=} -\frac{5}{4} + \zeta(2) \quad \mbox{and} \quad u_3 \overset{1}{=} \frac{13}{8} - \frac{3}{2} \zeta(2) + \zeta(3), \end{equation*} which satisfy the recursion for $k = 2,3$. So part $(i)$ of the proposition is proved.
The equivalences $(i) \Leftrightarrow (ii) \Leftrightarrow (iv)$ are straightforward, and $(ii) \Leftrightarrow (iii)$ follows by partial fraction decomposition. We will see in Section \ref{sec:renewal_seq} that parts $(i)$ and $(iv)$ in Proposition \ref{prop:conju} are valid for general recursions of the form $au_{k-2}+bu_{k-1}+cu_k = \zeta(k)$. \end{proof} \quad In the sequel, we aim to extend the above calculation to general $\theta>0$. It is easily seen that \begin{align*} u_k & \stackrel{\theta}{=} \sum_{1<n_1< \cdots < n_k} \mathbb{P}(\widehat{Q}^{1,\theta}_1 = n_1, \cdots, \widehat{Q}^{1,\theta}_k = n_k) \\ & = \sum_{1<n_1< \cdots < n_k} \frac{\theta^k \, (n_k-1)!}{(\theta+ n_1) \cdots (\theta+n_{k-1}) (\theta + 1)_{n_k}} \\ & = \theta^k \, \sum_{0 <n_1< \cdots < n_{k-1}} \frac{1}{(\theta+ n_1+1) \cdots (\theta+n_{k-1}+1)} \sum_{n_k > n_{k-1}} \frac{n_k!}{(\theta+1)_{n_k+1}}. \end{align*} Note that for all $k \ge 0$, \begin{align*} \sum_{n \ge k} \frac{n!}{(\theta+1)_{n+1}} &= \sum_{n \ge k} \frac{1}{\theta} \frac{(\theta + n+1)n!-(n+1)!}{(\theta+1)_{n+1}} \\ &= \sum_{n \ge k} \frac{1}{\theta}\left (\frac{n!}{(\theta+1)_{n}} - \frac{(n+1)!}{(\theta+1)_{n+1}} \right ) \\ &= \frac{k!}{\theta\,(\theta+1)_{k}} = \frac{\Gamma(\theta) \, \Gamma(k+1)}{\Gamma(\theta+k+1)}. \end{align*} Therefore, \begin{equation} \label{eq:uktheta} u_k \stackrel{\theta}{=} \theta^k \Gamma(\theta) \sum_{0 <n_1< \cdots < n_{k-1}} \frac{1}{(\theta+n_1+1) \cdots (\theta+n_{k-1}+1)}\, \frac{\Gamma(n_{k-1}+2)}{\Gamma(n_{k-1}+\theta+2)}. \end{equation} It seems to be difficult to simplify the expression \eqref{eq:uktheta} for general $\theta$. We focus on the case where $\theta \in \mathbb{N}_{+}$. Let \begin{equation*} h_{k,\theta}(x): = \theta^k \Gamma(\theta)\sum_{0 <n_1< \cdots < n_{k-1}} \frac{\Gamma(n_{k-1}+1-\theta + x)}{(n_1+x) \cdots(n_{k-1}+x) \Gamma(n_{k-1}+1+x)}, \end{equation*} so $u_k \stackrel{\theta}{=} h_{k,\theta}(\theta+1)$.
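Both the telescoping identity and the reduction \eqref{eq:uktheta} can be confirmed numerically. The Python sketch below (array names are ours; $\theta = 2.5$ and the truncation level $M$ are arbitrary choices) compares \eqref{eq:uktheta} for $k = 2$ against a direct double sum over chain paths $1 < n_1 < n_2$, exploiting the fact that $\widehat{q}^{\theta}(n_1, n_2)$ factors into a function of $n_1$ times a function of $n_2$.

```python
from math import lgamma, exp, gamma

theta, M = 2.5, 20000

lg  = [0.0] * (M + 3)   # lg[n]  = lgamma(n), valid for n >= 1
lgt = [0.0] * (M + 3)   # lgt[n] = lgamma(theta + n)
for n in range(1, M + 3):
    lg[n]  = lgamma(n)
    lgt[n] = lgamma(theta + n)

# telescoping identity: sum_{n >= k} n!/(theta+1)_{n+1} = Gamma(theta) k!/Gamma(theta+k+1),
# with n!/(theta+1)_{n+1} = Gamma(theta+1) exp(lg[n+1] - lgt[n+2])
for k in range(5):
    s = gamma(theta + 1) * sum(exp(lg[n + 1] - lgt[n + 2]) for n in range(k, M))
    claimed = gamma(theta) * gamma(k + 1) / gamma(theta + k + 1)
    assert abs(s - claimed) < 1e-8

def qhat(m, n):
    # theta Gamma(n) Gamma(theta+m) / (Gamma(m) Gamma(theta+n+1))
    return theta * exp(lg[n] - lg[m] + lgt[m] - lgt[n + 1])

# u_2: direct double sum over paths 1 < n1 < n2, using the separable factor in n2
w_suffix = [0.0] * (M + 2)
for n in range(M, 0, -1):
    w_suffix[n] = w_suffix[n + 1] + exp(lg[n] - lgt[n + 1])
u2_direct = sum(qhat(1, n1) * theta * exp(lgt[n1] - lg[n1]) * w_suffix[n1 + 1]
                for n1 in range(2, M))

# the reduction (eq:uktheta) with k = 2
u2_formula = theta ** 2 * gamma(theta) * sum(
    exp(lg[n + 2] - lgt[n + 2]) / (theta + n + 1) for n in range(1, M))

assert abs(u2_direct - u2_formula) < 1e-6
```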
Again it is elementary to show that \begin{equation*} h_{k,\theta}(x-1) = h_{k,\theta}(x) + \frac{\theta}{x} h_{k-1,\theta}(x). \end{equation*} Consequently, the sequence $(u_k,~ k \ge 0)$ satisfies a recursion of order $\theta+1$: \begin{equation} u_k + a_{1,\theta} u_{k-1} + \cdots + a_{\theta+1,\theta} u_{k-\theta-1} = h_{k,\theta}(0), \end{equation} where \begin{equation*} a_{i,\theta}: = \sum_{0<n_1 < \cdots < n_i \le \theta+1} \frac{\theta^i}{n_1 \cdots n_i} \quad \mbox{for } 1 \le i \le \theta+1, \end{equation*} and $h_{k,\theta}(0)$ is a variant of the multiple Hurwitz zeta function. \section{Renewal sequences derived from the zeta function} \label{sec:renewal_seq} \quad Look at the sequence \begin{equation} \label{ukdef} u_k := \sum_{n = 1}^\infty \frac{n^{-k}}{q(n)} \quad ( k = 0,1, 2, \ldots) \end{equation} where \begin{equation} \label{qdef} q(n) := a n^2 + b n + c \end{equation} is a generic quadratic function of $n$. We are interested in conditions on $q$ which allow the sequence $(u_k, k = 0,1, \ldots)$ to be interpreted as a renewal sequence \cite{Feller}. Basic requirements are that $q(n) > 0 $ for all $n = 1,2, \ldots$, so at least $a > 0$, and that $u_0 = 1$, which is a matter of normalization of coefficients of $q$. The sequence $1/q(n)$, $n = 1,2, \ldots$ then defines a probability distribution on the positive integers. If $X$ denotes a random variable with this distribution, so $\P(X= n) = 1/q(n)$, $n = 1,2, \ldots$, then \eqref{ukdef} becomes \begin{equation} \label{ukmom} u_k = \E (1/X)^k \quad ( k = 0,1, 2, \ldots). \end{equation} That is to say, $u_k$ is the $k^{th}$ moment of the probability distribution of $1/X$ on $[0,1]$. Obviously, $0 \le u_k \le 1$, and by the Cauchy-Schwarz inequality applied to $$(1/X)^k = (1/X)^{(k-1)/2} (1/X)^{(k+ 1)/2},$$ \begin{equation} \label{kaluza} u_k ^2 \le u_{k-1} u_{k+1} \qquad ( k = 1,2, \ldots).
\end{equation} A sequence $(u_k)$ bounded between $0$ and $1$ with $u_0 = 1$ and subject to \eqref{kaluza} is called a {\em Kaluza sequence} \cite{Kaluza}. By a classical theorem of Kaluza, every such sequence is a {\em renewal sequence} \cite{Kaluza}. See \cite{PT17} for an elementary proof and further references. In view of Proposition $1.2$, we are motivated to study such renewal sequences $(u_k)$ and the associated distribution $(f_k)$ of the time until first renewal, whose generating functions \begin{equation} \label{gfs} U(z):= \sum_{k=0}^\infty u_k z^k \mbox{ and } F(z):= \sum_{k=1}^\infty f_k z^k \qquad ( | z | < 1 ) \end{equation} are known \cite{Feller} to be related by \begin{equation} \label{gfrel} U(z)= (1- F(z))^{-1} \mbox{ and } F(z) = 1 - U(z)^{-1} . \end{equation} This identity of generating functions corresponds to the basic relation \begin{equation} \label{ufrec} u_k = f_k + f_{k-1} u_1 + \cdots + f_1 u_{k-1} \qquad ( k = 1, 2, \ldots) \end{equation} which allows either of the sequences $(u_k)$ and $(f_k)$ to be derived from the other. Observe that the definition $q(n) = a n^2 + b n + c$ gives \begin{equation} \label{basicid} \frac{c n^{-k}}{q(n)} + \frac{b n^{-(k-1)}}{q(n)} + \frac{a n^{-(k-2)}}{q(n)} = n^{-k} \end{equation} and hence, for $k \ge 2$, \begin{equation} \label{urec} c u_k + b u_{k-1} + a u_{k-2} = \sum_{n=1}^\infty n^{-k} = \zeta(k). 
\end{equation} It follows that $U(z)$ and hence $F(z)$ can always be expressed in terms of the well known (see \cite[formula 6.3.14]{abramowitz1964handbook}) generating function of $\zeta$ values \begin{equation} G(z):= \sum_{n=2}^\infty \zeta(n) z^n = - z ( \gamma + \psi(1-z) ) , \qquad ( | z | < 1 ) \end{equation} where $\gamma$ is Euler's constant and $\psi(x):= \Gamma'(x)/\Gamma(x)$ is the digamma function, as $$ c( U(z) - u_0 - u_1 z ) + b z ( U(z) - u_0 ) + a z^2 U(z) = G(z) $$ or $$ q(z) U(z) - c ( u_0 + u_1 z ) - u_0 b z = G(z) $$ which rearranges as \begin{equation} \label{ufromG} U(z) = \frac{ c u_0 + ( b u_0 + c u_1 ) z + G(z) }{ q(z) }. \end{equation} Defining $r_1,r_2\in \mathbb{C}$ as the two roots of $q$, we have \begin{gather*} q(z) = a(z-r_1)(z-r_2)\\ b = -a(r_1+r_2) \text{ and } c = a r_1 r_2. \end{gather*} Note that our assumption that $q(n)>0$ for all $n=1,2,\ldots$ implies that the roots $r_1$ and $r_2$ are not positive integers. A straightforward computation shows that the condition $u_0=1$ implies \[ a = \begin{cases} \dfrac{\psi(1-r_2)-\psi(1-r_1)}{r_1 - r_2} &\quad \text{if } r_1 \neq r_2\\ \psi'(1-r_1) &\quad \text{if }r_1 = r_2, \end{cases} \] and that we have \[ u_1= \frac{1}{2c}\left (-b+2\gamma+\psi(1-r_1)+\psi(1-r_2)\right ). \] Finally, obtaining $F(z)$ from \eqref{gfrel} and \eqref{ufromG} and taking derivatives gives us \begin{align} &F'(1)=q(1),\\ &\begin{aligned} F''(1)&+q(1)(1-q(1)) \\ &= (4-q(1))a-q(-1)+q(1)(c(1+2u_1)+b)\\ &= (4-q(1))a-q(-1)+q(1)(c+2\gamma+\psi(1-r_1)+\psi(1-r_2)). \end{aligned} \end{align} To summarize, and combine with some standard renewal theory: \begin{proposition} \label{prop:ukfromq} Let $q(n)$ be any quadratic function of $n = 1,2, \ldots$ with $q(n) > 0$ for all $n$, normalized so that $ u_0:= \sum_{n = 1}^\infty 1/q(n) = 1, $ and let $u_k:= \sum_{n = 1}^\infty n^{-k}/ q(n)$ for $k \ge 1$.
Then $(u_k)$ is a decreasing, positive recurrent renewal sequence, with $$ \lim_{k \rightarrow \infty} u_k = 1/q(1) . $$ The corresponding i.i.d.\ sequence of positive integer valued random variables $Y_1, Y_2, \ldots$, with $\P(Y_1 + \ldots + Y_m = k \mbox{ for some } m ) = u_k$, has common distribution with mean and variance \begin{gather*} \E (Y_1) = q(1)\\ \textrm{Var}(Y_1) = (4-q(1))a-q(-1)+q(1)(c+2\gamma+\psi(1-r_1)+\psi(1-r_2)) \end{gather*} and probability generating function $F(z):= \E (z^{Y_1} )$ given by \eqref{gfrel} for $U(z)$ as in \eqref{ufromG}. \end{proposition} {\bf Example.} Take $a = 1/2, b = 3/2, c = 1$. Then $q(n) = (n+1)(n+2)/2$ makes $u_0 = 1$, $1/u_\infty = \E(Y_1) = q(1) = 3$, and $\textrm{Var}(Y_1) = 11$. From Proposition \ref{prop:conju}, we know that this sequence $(u_k)$ is the renewal sequence associated with a GEM$(1)$ random partition of $[0,1]$. Equivalently in the terminology of random permutations \cite{PT17}, $(u_k)$ is the renewal sequence of the \emph{splitting times} of a GEM$(1)$-biased permutation. Therefore $Y_1$ is distributed as $T_1$, the first splitting time of a GEM$(1)$-biased permutation $\Pi$. In particular, we have $\E(T_1) = 3$, $\textrm{Var}(T_1) = 11$. \section{Development of the GEM\texorpdfstring{$(1)$}{(1)} case} \label{sec:gem_one} \quad The Markov chain $(\widehat{Q}_k)$ described in Lemma \ref{lemma:PT}, with uniform stick-breaking factors -- i.e.\ in the GEM$(1)$ case -- was first studied by Erdős, Rényi and Szüsz \cite{ERS}, where it appears as Engel's series derived from a uniform random variable $U$ on $(0,1)$. More precisely, if $2 \leq q_1 \leq q_2 \leq \ldots$ is the unique random sequence of integers such that \[ U = \frac{1}{q_1}+\frac{1}{q_1 q_2} +\cdots+\frac{1}{q_1 q_2 \cdots q_n} +\cdots, \] then we have \[ (q_i - 1,\, i\geq 1) \stackrel{(d)}{=} (\widehat{Q}_i,\, i\geq 1), \] where we condition on $\widehat{Q}_0 = 1$.
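The identification with Engel's series is easy to corroborate by simulation. In the Python sketch below (sample size and seed are arbitrary choices), exact rational arithmetic is used for the Engel recursion $q_j = \lceil 1/u_{j-1} \rceil$, $u_j = q_j u_{j-1} - 1$; the empirical law of $q_1$ is compared with $\mathbb{P}(\widehat{Q}_1 = n-1) = \widehat{q}(1, n-1) = 1/(n(n-1))$, and the event $q_1 = q_2 = 2$ with $\widehat{q}(1,1)^2 = 1/4$.

```python
import random
from fractions import Fraction
from math import ceil

random.seed(7)
N = 100000
count_q1 = {}
count_rep = 0  # event q_1 = q_2 = 2, i.e. the chain stays at 1 for two steps

for _ in range(N):
    u = Fraction(random.getrandbits(40) + 1, 2 ** 40 + 2)  # rational u in (0,1)
    q1 = ceil(1 / u)          # first Engel digit
    u = q1 * u - 1
    q2 = ceil(1 / u) if u > 0 else None
    count_q1[q1] = count_q1.get(q1, 0) + 1
    if q1 == 2 and q2 == 2:
        count_rep += 1

# P(q_1 = n) = 1/(n(n-1)), matching P(Qhat_1 = n-1) given Qhat_0 = 1
for n in (2, 3, 4):
    assert abs(count_q1.get(n, 0) / N - 1 / (n * (n - 1))) < 0.01
# P(q_1 = q_2 = 2) = qhat(1,1)^2 = 1/4
assert abs(count_rep / N - 0.25) < 0.01
```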
\quad Here we give explicit formulas for the distribution of $\widehat{Q}_k$ in terms of iterated harmonic sums and the Riemann zeta function. The transition probabilities $\widehat{q}(m,n) := \P(\widehat{Q}_{k+1} = n \mid \widehat{Q}_k = m)$ are given by \[ \widehat{q}(m,n) = \frac{m}{n(n+1)}. \] Then the joint probability distribution of $\widehat{Q}_1, \ldots , \widehat{Q}_k$ is given by the formula \begin{align} \P( \widehat{Q}_{1} = n_1, \ldots, \widehat{Q}_{k-1} = n_{k-1}, \widehat{Q}_k = n_k) = \frac {1(1\le n_1 \le \ldots \le n_{k-1} \le n_k)}{(n_1 +1)\cdots (n_{k-1}+1) (n_k+1) n_k } . \label{vist} \end{align} It follows that for $k = 1,2, \ldots$ \begin{align} \P(\widehat{Q}_k = n) &= \frac{1}{n(n+1)} \sum_{1\leq n_1\leq \ldots \leq n_{k-1}\leq n} \frac{1}{(n_1+1)\cdots (n_{k-1}+1)}\notag \\ & =\frac{H^*_{k-1}(n+1) - H^*_{k-2}(n+1)}{n(n+1)}. \label{vkdist} \end{align} where $H^*_{-1}(n) = 0, H^*_{0}(n) = 1$, and $H^*_k(n)$ for $k = 1, 2, \ldots$ is the $k$th {\em iterated harmonic sum} defined by \begin{align} \label{SH} H^*_k(n):= \sum_{m=1}^n \frac{ H^*_{k-1}(m) }{m} = \sum_{1 \le n_1 \le n_2 \le \cdots \le n_k \le n} \frac{ 1 }{ n_1 n_2 \cdots n_k } . \end{align} In particular, $H^*_1(n) = H(n)$ is the $n$-th harmonic number, and $H^*_2(n) = \sum_{m=1}^n H(m)/m$. Such {\em iterated} or {\em multiple} harmonic sums have attracted the attention of a number of authors \cite{BBG,AK,AKO}. Since \eqref{vkdist} describes a probability distribution over $n \in \{1,2,3,\ldots\}$, we deduce by induction that \begin{align} \sum_{n=1}^ \infty \frac{ H^*_k(n+1)}{n(n+1)} = k + 1 \quad (k \ge 0 ). 
\label{hsumk} \end{align} This identity has the probabilistic interpretation that for each $k = 1,2, \ldots$, $n = 1,2, \ldots$ \begin{align} \label{SHinterp} \E \left[ \sum_{j=1}^k 1( \widehat{Q}_j = n) \right] = \sum_{j=1}^k \P( \widehat{Q}_j = n) = \frac{ H^*_{k-1} (n+1) } { n (n+1) }, \end{align} where $\sum_{j=1}^k 1( \widehat{Q}_j = n)$ is the number of times $j$ with $1 \le j \le k$ that $\widehat{Q}_j$ has value $n$. It is easily seen that for each fixed $n$ the sequence $H^*_{k-1} (n)$ is increasing with limit $n$ as $k \rightarrow \infty$. So the limit version of \eqref{SHinterp} is for $n = 1,2, \ldots$ \begin{align} \label{liminterp} \E \left[ \sum_{j=1}^\infty 1( \widehat{Q}_j = n) \right] = \sum_{j=1}^\infty \P( \widehat{Q}_j = n) = \frac{1}{n}. \end{align} As observed by R\'enyi \cite{renyi}, the random variables $S_1, S_2, \ldots$, with $S_n:= \sum_{j=1}^\infty 1( \widehat{Q}_j = n) $, are independent with the geometric distribution on $\{0,1,2, \ldots\}$ with parameter $1 - 1/(n+1)$: \begin{align} \label{geom} \P(S_n = s) = \frac{1}{(n+1)^s} \left( 1 - \frac{1}{n+1} \right) \quad (n = 1,2, \ldots,\, s = 0,1, \ldots) \end{align} which also implies \eqref{liminterp}. \quad Some known results about the iterated harmonic sums $H^*_k(n)$ can now be interpreted as features of the distributions of $\widehat{Q}_k$. Arakawa and Kaneko \cite{AK} defined the function \begin{align} \xi_m(s) = \frac{1}{\Gamma(s)} \int_0^\infty t^{s-1} e^{-t} \frac{ \textrm{Li}_m(1-e^{-t})}{ (1 - e^{-t} ) } dt , \label{arakan} \end{align} where $\textrm{Li}_m$ is the polylogarithm function \[ \textrm{Li}_m(z):= \sum_{n=1}^\infty z^n/n^m. \] The integral converges for $\Re(s) >0 $ and the function $\xi_m(s)$ continues to an entire function of $s$. 
They showed that the values of $\xi_m(s)$ at positive integer arguments $s$ can be expressed in terms of multiple zeta values, and observed in particular that $$ \xi_1(s) = s \zeta(s+1), $$ which can readily be derived from \eqref{arakan} using the identity $ \textrm{Li}_1(1-e^{-t}) = t$. Ohno \cite{Ohno} then showed that for positive integer $m$ and $k$: \begin{align} \xi_m(k) = \sum_{1 \le n_1 \le \cdots \le n_k} \frac{ 1 }{ n_1 n_2 \cdots n_{k-1} n_k^{m+1} } = \sum_{n=1}^\infty \frac{ H^*_{k-1}(n) }{n^{m+1}}. \label{ohno1} \end{align} That is, with the replacement $k \rightarrow k+1$ and taking $ m=1 $, \begin{align} \sum_{n=1}^\infty \frac{H^*_k(n)}{n^2 } = (k+1) \zeta(k+2) \quad (k= 0,1,2, \ldots). \label{ohno3} \end{align} Subtracting $1$ from both sides of this identity gives a corresponding formula with summation from $n=2$ to $\infty$ on the left. Comparing with the more elementary formula \eqref{hsumk}, and using $$ \frac{1}{n} - \frac{1}{n+1} = \frac{1}{n(n+1)}, $$ it follows that \begin{align} \sum_{n=1}^\infty \frac{H^*_k(n+1)}{n (n+1)^2} = k + 2 - (k+1) \zeta(k+2) \quad (k = 0,1,2, \ldots). \label{ohno4} \end{align} Plugged into formula \eqref{vkdist} for the distribution of $\widehat{Q}_k$, this gives a formula for the first inverse moment of $\widehat{Q}_k+1$: \begin{align} \E\left(\frac{1}{\widehat{Q}_k+1}\right) = 1 - k \zeta(k+1) + (k-1) \zeta(k). \label{ohno5} \end{align} \quad By this stage, we have reached some identities for multiple zeta values which cannot be easily verified symbolically using Mathematica, though they are readily checked for modest values of $k$ to limits of numerical precision. The case $k = 1$ of \eqref{ohno3} reduces to $$ \sum_{n=1}^\infty \frac{H(n)}{n^2} = 2 \zeta(3), $$ which Borwein et al.\ \cite{BBG} attribute to Euler, and which can be confirmed symbolically on Mathematica.
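All of the identities in this circle are also easy to check by truncated summation, with the iterated harmonic sums generated by the recursion \eqref{SH}. The Python sketch below (truncation levels are arbitrary, so only modest tolerances are claimed) checks \eqref{hsumk}, the convergence $H^*_k(n) \uparrow n$, Euler's identity above, \eqref{ohno3} for $k = 2$, and \eqref{ohno5} for $k = 2$.

```python
N, K = 100000, 3

# Hs[k][n] = iterated harmonic sum H*_k(n), built by the recursion (SH)
Hs = [[0.0] * (N + 2) for _ in range(K + 1)]
for n in range(1, N + 2):
    Hs[0][n] = 1.0
for k in range(1, K + 1):
    run = 0.0
    for n in range(1, N + 2):
        run += Hs[k - 1][n] / n
        Hs[k][n] = run

# (hsumk): sum_n H*_k(n+1)/(n(n+1)) = k+1, so (vkdist) is a probability distribution
for k in range(K + 1):
    s = sum(Hs[k][n + 1] / (n * (n + 1)) for n in range(1, N + 1))
    assert abs(s - (k + 1)) < 0.05

# for fixed n, H*_k(n) increases to n as k -> infinity (here n = 5)
h = [1.0] * 6
for _ in range(60):
    run, new = 0.0, [0.0] * 6
    for n in range(1, 6):
        run += h[n] / n
        new[n] = run
    h = new
assert abs(h[5] - 5.0) < 1e-6

zeta2 = sum(1.0 / n ** 2 for n in range(1, N + 1))
zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1))
zeta4 = sum(1.0 / n ** 4 for n in range(1, N + 1))

# Euler: sum_n H(n)/n^2 = 2 zeta(3); (ohno3) with k = 2: sum_n H*_2(n)/n^2 = 3 zeta(4)
s_euler = sum(Hs[1][n] / n ** 2 for n in range(1, N + 1))
s_ohno = sum(Hs[2][n] / n ** 2 for n in range(1, N + 1))
assert abs(s_euler - 2 * zeta3) < 1e-3
assert abs(s_ohno - 3 * zeta4) < 1e-2

# (ohno5) with k = 2: E[1/(Qhat_2 + 1)] = 1 - 2 zeta(3) + zeta(2)
s_inv = sum((Hs[1][n + 1] - 1.0) / (n * (n + 1) ** 2) for n in range(1, N + 1))
assert abs(s_inv - (1 - 2 * zeta3 + zeta2)) < 1e-3
```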
The case $k = 2$ of \eqref{ohno3} expands to \begin{equation} \label{2sum} \sum_{n=1}^\infty \frac{H^*_2(n)}{n^2} = \frac{1}{2} \sum_{n=1}^\infty \frac{H^2(n)}{n^2} + \frac{1}{2} \sum_{n=1}^\infty \frac{H_2(n)}{n^2} = 3 \zeta(4), \end{equation} with $H_k(n) = \sum_{m=1}^{n}1/m^k$, which can also be confirmed symbolically on Mathematica. The decomposition into power sums is found to be \begin{align} \frac{1}{2} \sum_{n=1}^\infty \frac{H^2(n)}{n^2} &= \frac{17}{8} \zeta(4) \label{11}, \\ \frac{1}{2} \sum_{n=1}^\infty \frac{H_2(n)}{n^2} &= \frac{7}{8} \zeta(4) \label{2}, \end{align} and summing these two identities yields \eqref{2sum} as required. According to Borwein and Borwein \cite{borwein_intriguing_1995}, formula \eqref{11} was first discovered numerically by Enrico Au-Yeung, and provided impetus to the surge of effort at simplification of multiple Euler sums by Borwein and coauthors \cite{bailey_experimental_1994}. Borwein and Borwein's first rigorous proof of \eqref{11} was based on the integral identity $$ \frac{1} {\pi} \int_0^\pi \theta^2 \log^2 ( 2 \cos (\theta/2)) d \theta = \frac{11}{2} \zeta(4), $$ which they derived with Fourier analysis using Parseval's formula. Later \cite{BBG}, they gave a systematic account of evaluations of multiple harmonic sums, including \eqref{11} as an exemplar case. In particular, they made a systematic study of the Euler sums \begin{align} s_h(k,s):= \sum_{n=1}^\infty \frac{ H^k(n) }{ (n+1)^s }, \end{align} and \begin{align} \sigma_h(k,s):= \sum_{n=1}^\infty \frac{ H_k(n) }{ (n+1)^s }, \end{align} and a number of other similar sums. They proved a number of exact reductions of such sums to evaluations of the zeta function at integer arguments and established other such reductions beyond a reasonable doubt by numerical computation.
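The two power-sum evaluations \eqref{11} and \eqref{2} can likewise be checked by truncated summation; the tails are of order $\log^2 N/N$, so only modest tolerances are claimed in the sketch below.

```python
N = 200000
zeta4 = sum(1.0 / n ** 4 for n in range(1, N + 1))

H1 = H2 = 0.0        # running harmonic numbers H(n) and H_2(n)
s11 = s2 = 0.0
for n in range(1, N + 1):
    H1 += 1.0 / n
    H2 += 1.0 / n ** 2
    s11 += H1 ** 2 / n ** 2
    s2 += H2 / n ** 2

assert abs(0.5 * s11 - 17 / 8 * zeta4) < 5e-3    # Au-Yeung's identity (11)
assert abs(0.5 * s2 - 7 / 8 * zeta4) < 5e-3      # identity (2)
assert abs(0.5 * (s11 + s2) - 3 * zeta4) < 5e-3  # their sum, (2sum)
```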
\quad With this development, we can prove the following result for $C_k^{1,1}$, the number of empty intervals among the first $k$ intervals at the stopping time $n(k,1)$ when the first sample point falls outside the union $[0,R_k)$ of these intervals. \begin{proposition} Let $C_k^{1,1}$ be defined by \eqref{C1} for $\ell = 1$ and $\theta = 1$. Then \begin{equation} \label{meanempty} \mathbb{E}C_{k}^{1,1} =\left\{ \begin{array}{ccl} 1/2 + k - (k-1) \zeta(k) & \mbox{for } k \ge 2, \\ 1/2 & \mbox{for } k = 1. \end{array}\right. \end{equation} \end{proposition} \begin{proof} According to \eqref{CQ}, \begin{equation*} \mathbb{E}C_k^{1,1} = \sum_{j=1}^k \mathbb{P}(\widehat{Q}_j^{1,1} = \widehat{Q}_{j-1}^{1,1}), \end{equation*} where for $j \ge 2$ we use \eqref{vkdist} to evaluate $\mathbb{P}(\widehat{Q}_j^{1,1} = n)$. So for $j \ge 2$, \begin{align*} \mathbb{P}(\widehat{Q}_j^{1,1} = \widehat{Q}_{j-1}^{1,1}) & = \sum_{n \ge 1} \mathbb{P}(\widehat{Q}_j^{1,1} = \widehat{Q}_{j-1}^{1,1} = n) \\ & = \sum_{n \ge 1} \mathbb{P}(\widehat{Q}^{1,1}_{j-1} = n ) \widehat{q}^{1}(n,n) \\ & = \sum_{n \ge 1}\frac{H_{j-2}^{*}(n+1) - H_{j-3}^{*}(n+1)}{n(n+1)^2} \\ & = \left\{ \begin{array}{ccl} 1 - (j-1) \zeta(j) + (j-2) \zeta(j-1) & \mbox{for } j \ge 3, \\ 2 - \zeta(2) & \mbox{for } j = 2, \end{array}\right. \end{align*} where the last equality follows from \eqref{ohno4}. Also note that \begin{equation*} \mathbb{P}(\widehat{Q}_1^{1,1} = \widehat{Q}_{0}^{1,1}) = \widehat{q}^1(1,1) = 1/2. \end{equation*} The formula \eqref{meanempty} follows from the above computations. \end{proof} \quad Recall the interpretation of the binomial moments $\mathbb{E}\binom{C^{1,1}_k}{j}$ from \cite[(6.2)]{PT17}. The case $j = 1$ has been evaluated in \eqref{meanempty}.
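The mean formula \eqref{meanempty} is easy to corroborate by simulation. From state $m$ the chain satisfies $\mathbb{P}(\widehat{Q}_{j} \ge x \mid \widehat{Q}_{j-1} = m) = m/x$ for integers $x \ge m$, so a transition may be sampled as $\lfloor m/U \rfloor$ with $U$ uniform on $(0,1]$. The Monte Carlo sketch below (sample size, seed, and $k = 3$ are arbitrary choices) compares the empirical mean of $C^{1,1}_3$ with $1/2 + 3 - 2\zeta(3)$.

```python
import random
from math import floor

random.seed(42)
trials, k = 400000, 3
total = 0
for _ in range(trials):
    m, c = 1, 0
    for _ in range(k):
        u = 1.0 - random.random()   # uniform on (0, 1]
        n = floor(m / u)            # P(n >= x) = m/x for integers x >= m
        if n == m:
            c += 1                  # a repeat, i.e. an empty interval
        m = n
    total += c

zeta3 = sum(1.0 / i ** 3 for i in range(1, 100001))
mean_hat = total / trials
# (meanempty) with k = 3: E C_3 = 1/2 + 3 - 2 zeta(3) ~ 1.0959
assert abs(mean_hat - (0.5 + k - (k - 1) * zeta3)) < 0.02
```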
For $j = k$, \begin{equation} \mathbb{E} \binom{C_k^{1,1}}{k} = \mathbb{P}(C_k^{1,1} = k) = \frac{1}{2^k}, \end{equation} and for $j = k-1$, $$\mathbb{E} \binom{C_k^{1,1}}{k-1} = \frac{k}{2^k} + \mathbb{P}(C_k^{1,1} = k-1).$$ Note that \begin{align*} \mathbb{P}(C_k^{1,1} = k-1) &= \sum_{m=0}^{k-1} \sum_{n > 1} \widehat{q}^{1}(1,1)^m \widehat{q}^{1}(1,n) \widehat{q}^{1}(n,n)^{k-1-m} \\ & = \frac{1}{2^{k-1}} - 2 \sum_{n \ge 1} \frac{1}{n(n+1)(n+2)^k}. \end{align*} By partial fraction decomposition, $$ \frac{1}{n(n+1)(n+2)^k} = \frac{1}{2^k n} - \frac{1}{n+1} + \frac{2^k-1}{2^k(n+2)} + \sum_{j = 2}^k \frac{2^{k+1-j} - 1}{2^{k+1-j}(n+2)^j},$$ which leads to $$ \sum_{n \ge 1} \frac{1}{n(n+1)(n+2)^k} = \frac{k}{2^{k+1}} - k + 1 + \sum_{j=2}^k \frac{2^{k+1-j} - 1}{2^{k+1-j}} \zeta(j). $$ Therefore, \begin{equation} \label{k1mom} \mathbb{E} \binom{C_k^{1,1}}{k-1} = 2k - 2 + \frac{1}{2^{k-1}} - \sum_{j=2}^k \frac{2^{k+1-j} - 1}{2^{k-j}} \zeta(j). \end{equation} We have proved in Corollary \ref{gfC} that the random variables $C^{1,1}_k$ converge in distribution as $k \rightarrow \infty$. Consequently, $$\mathbb{E} \binom{C_k^{1,1}}{k-1} \longrightarrow 0 \quad \mbox{as } k \rightarrow \infty.$$ It is easily seen from the expression \eqref{k1mom} that this is equivalent to the well known formula $$\sum_{n = 2}^\infty ( \zeta(n) - 1 ) = 1.$$ But the formulas for other binomial moments seem to be difficult, even for $j = 2$. Generally, we are interested in the exact distribution of $C^{1,1}_k$ on $\{0,1,\ldots,k\}$ for $k = 1,2, \ldots$. Simple formulas are found for $k = 1,2,3,4$ as displayed in the following table. \\ Table of $\mathbb{P}(C^{1,1}_k = j)$ with $0 \le j \le k$ for $k = 1,2,3,4$.
\begin{center} \small \begin{tabular}{ c | c c c c c c c} \multicolumn{1}{l}{$k$} &&&&&&&\\\cline{1-1} &&&&& \\ 1 &$\frac{1}{2}$& $\frac{1}{2}$ & & & & \\ &&&&& \\ 2 &$ - \frac{5}{4} + \zeta(2)$&$2 -\zeta(2)$& $\frac{1}{4}$& && \\ &&&&& \\ 3 &$\frac{13}{8} - \frac{3}{2} \zeta(2)+\zeta(3) $&$-\frac{37}{8} + 3 \zeta(2)$&$\frac{31}{8} - \frac{3}{2}\zeta(2) - \zeta(3)$&$\frac{1}{8}$&&\\ &&&&& \\ 4 &$-\frac{29}{16} + \frac{7}{4} \zeta(2) - \frac{3}{2} \zeta(3) + \zeta(4)$ &$\frac{57}{8} -\frac{21}{4} \zeta(2) + \frac{3}{2} \zeta(3)$&$-\frac{41}{4} + \frac{21}{4} \zeta(2) + \frac{3}{2} \zeta(3)$& $\frac{47}{8} - \frac{7}{4} \zeta(2) - \frac{3}{2}\zeta(3) -\zeta(4) $&$\frac{1}{16}$& \\ &&&&& \\ \hline \multicolumn{1}{l}{} &0&1&2&3&4& ~$j$\\ \end{tabular} \end{center} The details are left to the reader. Observe that up to $k = 4$, all of the point probabilities in the distribution of $C_k^{1,1}$ are rational linear combinations of zeta values. This is also true with $j = 0, k-1, k$ for all $k$. We leave open the problem of finding an explicit formula for $\P( C_k^{1,1} = j)$ for general $j$ and $k$, but make the following conjecture: \begin{conj} For each $k \ge 1$ and $0 \le j \le k$, $$\mathbb{P}(C^{1,1}_k = j) = q_{k,j,1} + \sum_{i = 2}^k q_{k,j,i} \zeta(i), $$ with $q_{k,j,i}$ rational numbers. \end{conj} \section{Evaluation of $u_{2:n}$ and its limit} \label{sec:u2} \quad In this section we derive an explicit formula for $u_2$ in the general GEM$(\theta)$ case, which is based on evaluation of the combinatorial expressions of $u_{2:n}$ given later in \eqref{5kinds}. In principle, the analysis of $u_{2:n}$ can be extended to $u_{k:n}$ for $k \ge 3$, but there will be an annoying proliferation of cases. Already for $k = 2$, it requires considerable care not to overcount or undercount the cases.
\quad Let $\Pi_n$ be the partition of $[n]$ generated by a random permutation of $[n]$, or, more generally, by any consistent sequence of exchangeable random partitions of $[n]$, with {\em exchangeable partition probability function (EPPF)} $p$. See \cite{pitmanbook} for background. The function $p$ is a function of compositions $(n_1, \ldots, n_k)$ of $n$: for every $m \ge n$ and each particular listing $(x_1, \ldots, x_n)$ of the elements of $[n]$ by a permutation, $p(n_1, \ldots, n_k)$ is the probability that the first $n_1$ elements $\{x_1, \ldots, x_{n_1} \}$ fall in one block of $\Pi_m$, and if $n_1 < n$ the next $n_2$ elements $\{x_{n_1 + 1}, \ldots, x_{n_1 + n_2} \}$ fall in another block of $\Pi_m$, and if $n_1 + n_2 < n$ the next $n_3$ elements $\{x_{n_1 + n_2 + 1}, \ldots, x_{n_1 + n_2 + n_3} \}$ fall in a third block of $\Pi_m$, and so on. In other words, $p(n_1, \ldots, n_k)$ is the common probability, for every $m \ge n$, that the restriction of $\Pi_m$ to $[n]$ equals any particular partition of $[n]$ whose blocks are of sizes $n_1, \ldots , n_k$. For the Ewens $(\theta)$ model, there is the well known formula $$ p_\theta( n_1, \ldots, n_k ) = \frac{ \theta ^{k-1} } { ( 1 + \theta ) _{n-1} } \prod_{i=1}^k (1)_{n_i - 1}, \mbox{ where } n = n_1 + \cdots + n_k. $$ This formula for $\theta = 1$ $$ p_1( n_1, \ldots, n_k ) = \frac{ \prod_{i=1}^k (n_i - 1)! } {n! }, \mbox{ where } n = n_1 + \cdots + n_k, $$ corresponds to the case when $\Pi_n$ is the partition of $[n]$ generated by the cycles of a uniformly distributed random permutation of $[n]$. Then the denominator $n!$ is the number of permutations of $[n]$, while the product in the numerator is the obvious enumeration of the number of permutations of $[n]$ in which $[n_1]$ forms one cycle, and $[n_1 + n_2 ] \setminus [n_1]$ forms a second cycle, and so on.
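The $\theta = 1$ formula and its cycle interpretation can be checked by brute force over all permutations of a small set. The Python sketch below (helpers \texttt{cycles} and \texttt{p1} are ours; elements are labelled $0,\ldots,3$ rather than $1,\ldots,4$) verifies $p_1(2,2) = 1/24$ for a particular two-cycle pattern of $[4]$, and the value $p_1(1,1) = 1/2$ for the event that two given elements lie in different cycles.

```python
from itertools import permutations
from math import factorial

def cycles(perm):
    # cycle partition of {0, ..., n-1} for a permutation given as a tuple
    n, seen, parts = len(perm), set(), []
    for s in range(n):
        if s in seen:
            continue
        c, j = set(), s
        while j not in seen:
            seen.add(j)
            c.add(j)
            j = perm[j]
        parts.append(frozenset(c))
    return parts

def p1(*blocks):
    # Ewens theta = 1 EPPF: prod_i (n_i - 1)! / n!
    out = 1
    for b in blocks:
        out *= factorial(b - 1)
    return out / factorial(sum(blocks))

perms = list(permutations(range(4)))

# {0,1} is exactly one cycle and {2,3} exactly another: probability p_1(2,2) = 1/24
hits = sum(1 for p in perms
           if frozenset({0, 1}) in cycles(p) and frozenset({2, 3}) in cycles(p))
assert abs(hits / len(perms) - p1(2, 2)) < 1e-12

# consistency: 0 and 1 fall in different cycles with probability p_1(1,1) = 1/2
split = sum(1 for p in perms if not any({0, 1} <= c for c in cycles(p)))
assert abs(split / len(perms) - p1(1, 1)) < 1e-12
```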
Essential for the following arguments is the less obvious {\em consistency property} of uniform random permutations: $p_1(n_1, \ldots, n_k)$ is also, for every $m \ge n$, the probability that $[n_1]$ is the restriction to $[n]$ of one cycle of $\Pi_m$, and $[n_1 + n_2 ] \setminus [n_1]$ the restriction to $[n]$ of a second cycle of $\Pi_m$, and so on. This basic consistency property of random permutations allows the sequence of random partitions $\Pi_n$ of $[n]$ to be constructed according to the {\em Chinese Restaurant Process}, so that the restriction of $\Pi_m$ to $[n]$ is $\Pi_n$ for every $n < m$. \quad Consider first for $n \ge 2$ the probability that the same block of $\Pi_n$ is discovered first in examining elements of $[n]$ from left to right as in examining elements of $[n]$ from right to left. This is the probability that $1$ and $n$ fall in the same block of $\Pi_n$. By exchangeability, this is the same as the probability that $1$ and $2$ fall in the same block, that is \begin{equation} u_{1:n}:= \P ( \mbox{$1$ and $n$ in the same block} ) = p(2) \stackrel{\theta}{=} p_\theta(2) = \frac{ 1 } { 1 + \theta }. \end{equation} Next, consider for $n \ge 3$ the probability $u_{2:n}$ that the union of the first two blocks found in sampling left to right equals the union of the first two blocks found in sampling right to left. \begin{proposition} For each $n \ge 3$, and each exchangeable random partition $\Pi_n$ of $[n]$ with EPPF $p$, \begin{equation} \label{5kinds} \begin{aligned} u_{2:n} &= p(n) + (n-2) p(n-1,1) + \sum_{1 < j < k < n } p( j-1 + n - k , 2 ) \\ &\quad + \sum_{j=1}^{n-1} p(j,n-j) + \sum_{1 < j < k < n } p(j, n-k+1) . \end{aligned}\end{equation} \end{proposition} \begin{proof} The terms in \eqref{5kinds} are accounted for as follows: \begin{itemize} \item $p(n)$ is the probability that there is only one block, which then contains both $1$ and $n$.
\item $(n-2) p(n-1,1)$ is the probability that there are only two blocks, with both $1$ and $n$ in the first block, while the second block is a singleton, which may be $\{j\}$ for any one of the $n-2$ elements $j \in \{ 2, \ldots, n-1 \}$. \item $\sum_{1 < j < k < n } p( j-1 + n - k , 2 ) $ is the sum of the probabilities that there are two or more blocks, with both $1$ and $n$ in the first block, with $j$ the first element and $k$ the last element of some second block, which is both the second block to appear from left to right, and the second block to appear from right to left. The probability of this event determined by $1 < j < k < n$ is the probability that the set $[j] \cup ([n] \setminus [k-1])$ of $j + 1 + n - k $ elements is split by the partition into the two particular subsets $[j-1] \cup ([n] \setminus [k])$ and $\{j\} \cup \{k\}$ of sizes $j-1 + n - k $ and $2$ respectively. Hence the $p( j-1 + n - k , 2 )$, by the exchangeability and consistency properties of the random partitions of various subsets of $[n]$. \item $\sum_{j=1}^{n-1} p(j,n-j)$ is the probability of the event that there are exactly two blocks $[j]$ and $[n] \setminus [j]$ for some $1 \le j < n$. \item $\sum_{1 < j < k < n } p(j, n-k+1)$ is the sum of the probabilities that there are two or more blocks, with $1$ and $n$ in different blocks, with $j$ the first element of the block containing $n$, which is the second block to appear from left to right, and $k$ the last element of the block containing $1$, which is the second block to appear from right to left. The probability of this event determined by $1 < j < k < n$ is the probability that the set $[j] \cup ([n] \setminus [k-1])$ of $j + 1 + n - k $ elements is split by the partition into the two particular subsets $[j-1] \cup \{k\}$ and $\{j \} \cup ([n] \setminus [k])$ of sizes $j$ and $n-k+1$ respectively. Hence the $p(j,n-k+1)$, again by the exchangeability and consistency properties of the random partitions of various subsets of $[n]$.
\end{itemize} \end{proof} \quad In this classification of five kinds of terms contributing to the probability $u_{2:n}$, the first three kinds account for all cases in which $1$ and $n$ fall in the same block, while the last two kinds account for all cases in which $1$ and $n$ fall in different blocks. The double sum for the third kind of term is $0$ unless $n \ge 4$, in which case it always simplifies to a single sum by grouping terms according to the value $h$ of $j-1 + n - k$: \begin{align} \label{prob1} \sum_{1 < j < k < n } p( j-1 + n - k , 2 ) &= \sum_{h=2}^{n-2} (h-1) p( h,2 ) \\ \label{prob1th} & \stackrel{\theta}{=} \left[ p_\theta(2) - p_\theta(n) - (n-2) p_\theta(n-1,1) \right] \, p_\theta (2) \end{align} with further simplification as indicated by $ \stackrel{\theta}{=} $ for the Ewens $(\theta)$ model. For the Ewens $(\theta)$ model, the $p(h,2)$ in \eqref{prob1} becomes $$ p_\theta( h,2 ) = \frac{ \theta (1)_{h-1} }{ (1 + \theta )_{h+1} } $$ while the expression in \eqref{prob1th} features $$ p_\theta(2) = \frac{ 1 } { 1 + \theta }; \qquad p_\theta(n) = \frac{ (1)_{n-1} } { (1 + \theta )_{n-1}} ; \qquad p_\theta(n-1,1) = \frac{ \theta \, (1)_{n-2} } { (1 + \theta )_{n-1} }. $$ The evaluation $ \stackrel{\theta}{=} $ in \eqref{prob1th} is easily checked algebraically, by first checking it for $n = 4$, then checking the equality of differences as $n$ is incremented. This evaluation \eqref{prob1th} is an expression of the well known characteristic property of {\em non-interference} in the Ewens $(\theta)$ model, according to which, given that $1$ and $n$ fall in the same block of some size $b$ with $2 \le b < n$, the remaining $n-b$ elements are partitioned according to the Ewens $(\theta)$ model for $n-b$ elements.
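For $\theta = 1$ the five-term formula can also be checked by brute force: $\Pi_n$ is then the cycle partition of a uniform permutation of $[n]$, so one can enumerate all $n!$ permutations, test directly whether the union of the first two blocks found left to right equals that found right to left, and compare with \eqref{5kinds} evaluated at $p_1(n_1,\ldots,n_k) = \prod_i (n_i-1)!/n!$. A sketch (function names are ours):

```python
# Brute-force verification of the five-term formula for u_{2:n} at theta = 1.
from fractions import Fraction
from itertools import permutations
from math import factorial

def p1(*parts):
    """EPPF of the cycle partition of a uniform permutation."""
    num = 1
    for ni in parts:
        num *= factorial(ni - 1)
    return Fraction(num, factorial(sum(parts)))

def u2_formula(n):
    u = p1(n) + (n - 2) * p1(n - 1, 1)
    u += sum(p1(j - 1 + n - k, 2) for j in range(2, n) for k in range(j + 1, n))
    u += sum(p1(j, n - j) for j in range(1, n))
    u += sum(p1(j, n - k + 1) for j in range(2, n) for k in range(j + 1, n))
    return u

def first_two_union(order, block_of):
    seen, blocks = set(), []
    for x in order:                    # blocks in order of discovery
        b = block_of[x]
        if b not in seen:
            seen.add(b)
            blocks.append(b)
    return frozenset().union(*blocks[:2])

def u2_bruteforce(n):
    hits = 0
    for perm in permutations(range(1, n + 1)):
        # cycle partition of the permutation x -> perm[x-1]
        block_of, done = {}, set()
        for start in range(1, n + 1):
            if start in done:
                continue
            cyc, x = [], start
            while x not in done:
                done.add(x)
                cyc.append(x)
                x = perm[x - 1]
            b = frozenset(cyc)
            for y in cyc:
                block_of[y] = b
        lr = first_two_union(range(1, n + 1), block_of)
        rl = first_two_union(range(n, 0, -1), block_of)
        hits += lr == rl
    return Fraction(hits, factorial(n))

for n in (3, 4, 5):
    assert u2_bruteforce(n) == u2_formula(n)
```

For instance, $u_{2:3} = 5/6$: among the six permutations of $[3]$ only the identity fails the event.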
The sums in \eqref{prob1} evaluate the probability that $1$ and $n$ fall in the same block, whose size $b$ is at most $n-2$, and that the partition of the remaining $n-b \ge 2$ elements puts the least of these elements in the same block as the greatest of these elements. For a general EPPF the probability that $1$ and $n$ fall in the same block, whose size $b$ is at most $n-2$, is $p(2) - p(n) - (n-2) p(n-1,1)$, as in the first factor of \eqref{prob1th} for $p = p_\theta$. For the Ewens $(\theta)$ model, given this event and the size $b \le n-2$ of the block containing $1$ and $n$, the probability that the remaining $n-b \ge 2$ elements have their least and greatest elements in the same block is just $p_\theta(2)$, regardless of the value of $b$. Hence the factorization in the expression of \eqref{prob1th} for the Ewens $(\theta)$ model. \quad While the sum of the first three kinds of terms in \eqref{5kinds} can be simplified as above in the Ewens $(\theta)$ model, even for $\theta = 1$ there is no comparable simplification for the sum of the last two kinds of terms in \eqref{5kinds}, representing the probability of the event that $1$ and $n$ fall in different blocks, while the same union of the first two blocks is found by examining elements from left to right as by examining elements from right to left. Asymptotics as $n \rightarrow \infty$ are easy for the sum of the first three kinds of terms in \eqref{5kinds}. The limit of the contribution of these three terms is $p_\theta(2)^2 = (1 + \theta)^{-2} \stackrel{1}{=} 1/4$. As for the remaining two kinds of terms, it is obvious that $\sum_{j=1}^{n-1} p(j,n-j) \rightarrow 0$ for any partition structure, since this is the probability that there are only two classes, and all elements of one class appear in a sample of size $n$ before all members of the other class.
So for the Ewens $(\theta)$ model this gives \begin{align} \lim_{n \rightarrow \infty} u_{2:n} &\overset{\theta}{=} (1 + \theta)^{-2} + \lim_{n\rightarrow \infty} \sum_{1 < j < k < n } \frac{ \theta (1)_{j-1} (1 )_{n-k} }{ ( 1 + \theta )_{j + n - k } } \notag \\ &= (1 + \theta)^{-2} + \sum_{j=2}^{\infty} \frac{\theta(1)_{j-1}}{(\theta+j-1)(\theta+1)_{j}} \end{align} where the limit can also be evaluated as an integral with respect to the joint distribution of $P_1 = W_1$ and $P_2 = (1-W_1)W_2$ for $W_i$ independent beta $(1,\theta)$ variables. We can write \begin{equation} u_2 \overset{\theta}{=} \frac{1}{(\theta+1)^2} + \frac{1}{\theta+1}\left (\thrftwo{1}{1}{\theta}{\theta+1}{\theta+2}{1}-1\right ), \end{equation} in terms of the generalized hypergeometric function $_3F_2$. Lima \cite[Lemma 1]{lima2012rapidly} gives the following formula for {\em Catalan's constant} $G$: \begin{equation}\label{eq:lima} \frac{1}{2}\thrftwo{1}{1}{\frac{1}{2}}{\frac{3}{2}}{\frac{3}{2}}{1} = G = \beta(2), \end{equation} where $\beta(s) := \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)^s}$ for $s>0$. By manipulating the hypergeometric $_3 F_2$ function, one can see that for $\theta = n+1/2$ where $n$ is an integer, $u_2$ is of the form $q+rG$, where $q$ and $r$ are rational numbers. Lima's articles \cite{MR3357692, MR2968884} contain many related formulas, and references to zeta and beta values. \bigskip {\bf Acknowledgement:} We thank David Aldous for various pointers to the literature. \bibliographystyle{plain}
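At $\theta = 1$ the general term of the series above reduces to $(j-1)!/\bigl(j\,(j+1)!\bigr) = 1/\bigl(j^2(j+1)\bigr)$, and partial fractions give $\sum_{j\ge 2} 1/(j^2(j+1)) = \zeta(2) - 3/2$, so that $u_2 \stackrel{1}{=} 1/4 + \zeta(2) - 3/2 = \zeta(2) - 5/4 \approx 0.3949$. A quick numerical confirmation:

```python
# Numeric check of the theta = 1 evaluation of the series above.
import math

# general term at theta = 1: (j-1)! / (j * (j+1)!) = 1/(j^2 (j+1))
for j in range(2, 12):
    term = math.factorial(j - 1) / (j * math.factorial(j + 1))
    assert abs(term - 1.0 / (j * j * (j + 1))) < 1e-15

s = sum(1.0 / (j * j * (j + 1)) for j in range(2, 200001))
zeta2 = math.pi ** 2 / 6
assert abs(s - (zeta2 - 1.5)) < 1e-9      # series = zeta(2) - 3/2
u2_theta1 = 0.25 + s                      # u_2 at theta = 1
assert abs(u2_theta1 - (zeta2 - 1.25)) < 1e-9
```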
\section{Introduction} Einstein manifolds are related to many questions in geometry and physics, for instance: Riemannian functionals and their critical points, Yang-Mills theory, self-dual manifolds of dimension four, and exact solutions of the Einstein field equations. Today we already have in our hands many examples of Einstein manifolds, even Ricci-flat ones (see \cite{besse,Oneil,LeandroPina,Romildo}). However, finding new examples of Einstein metrics is not an easy task. A common tool to construct new examples of Einstein spaces is to consider warped product metrics (see \cite{LeandroPina,Romildo}). In \cite{besse}, a question was posed about Einstein warped products: \begin{eqnarray}\label{question} \mbox{``Does there exist a compact Einstein warped}\nonumber\\ \mbox{product with nonconstant warping function?"} \end{eqnarray} Inspired by the problem (\ref{question}), several authors explored this subject in an attempt to get examples of such manifolds. Kim and Kim \cite{kimkim} considered a compact Riemannian Einstein warped product with nonpositive scalar curvature. They proved that such a manifold is just a product manifold. Moreover, in \cite{BRS,Case1}, the authors considered (\ref{question}) without the compactness assumption. Barros, Batista and Ribeiro Jr \cite{BarrosBatistaRibeiro} also studied (\ref{question}) when the Einstein product manifold is complete and noncompact with nonpositive scalar curvature. It is worth noting that Case, Shu and Wei \cite{Case} proved that a shrinking quasi-Einstein metric has positive scalar curvature. Further, Sousa and Pina \cite{Romildo} were able to classify some structures of Einstein warped products on semi-Riemannian manifolds; they considered, for instance, the case in which the base and the fiber are Ricci-flat semi-Riemannian manifolds.
Furthermore, they provided a classification for a noncompact Ricci-flat warped product semi-Riemannian manifold with $1$-dimensional fiber, in which the base is not necessarily a Ricci-flat manifold. More recently, Leandro and Pina \cite{LeandroPina} classified the static solutions for the vacuum Einstein field equations with cosmological constant not necessarily identically zero, when the base is invariant under the action of a translation group. In particular, they provided a necessary condition for the integrability of the system of differential equations given by the invariance of the base for the static metric. When the base of an Einstein warped product is a compact Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold, we get a partial answer to (\ref{question}). Furthermore, when the base is not compact, we obtain new examples of Einstein warped products. Now, we state our main results. \begin{theorem}\label{teo1} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (non Ricci-flat), where $M$ is a compact Riemannian manifold and $N$ is a Ricci-flat semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial. \end{theorem} It is very natural to consider the next case (see Section \ref{SB}). \begin{theorem}\label{teo2} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$; $\lambda\neq0$), where $M$ is a compact Riemannian manifold with scalar curvature $R\leq\lambda(n-m)$, and $N$ is a semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial. Moreover, if the equality holds, then $N$ is Ricci-flat. \end{theorem} Now, we consider that the base is a noncompact Riemannian manifold.
The next result was inspired, mainly, by Theorem \ref{teo2} and \cite{LeandroPina}, and gives the relationship between the Ricci tensor $\widehat{R}ic$ of the warped metric $\hat{g}$ and the Ricci tensor $Ric$ of the base metric $g$. \begin{theorem}\label{teo3b} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$), where $M$ is a noncompact Riemannian manifold with constant scalar curvature $\lambda=\frac{R}{n-1}$, and $N$ is a semi-Riemannian manifold. Then $M$ is Ricci-flat if and only if the scalar curvature $R$ is zero. \end{theorem} Considering a conformal structure for the base of an Einstein warped product semi-Riemannian manifold, we have the next results. Furthermore, the following theorem is very technical. We consider that the base of such an Einstein warped product manifold is conformal to a pseudo-Euclidean space which is invariant under the action of an $(n-1)$-dimensional translation group, and that the fiber is a Ricci-flat space. In order for the reader to have a more detailed view of the next results, we recommend a previous reading of Section \ref{CFSI}. \begin{theorem}\label{teo3a} Let $(\widehat{M}^{n+m}, \hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be a warped product semi-Riemannian manifold such that $N$ is a Ricci-flat semi-Riemannian manifold. Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with coordinates $x =(x_{1},\ldots , x_{n})$ and $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider smooth functions $\varphi(\xi)$ and $f(\xi)$, where $\xi=\displaystyle\sum_{k=1}^{n}\alpha_{k}x_{k}$, $\alpha_{k}\in\mathbb{R}$, and $\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\kappa$.
Then $(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, where $\bar{g}=\frac{1}{\varphi^{2}}g$, is an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$) such that $f$ and $\varphi$ are given by: \begin{eqnarray}\label{system2} \left\{ \begin{array}{lcc} (n-2)\varphi\varphi''-m\left(G\varphi\right)'=mG^{2}\\\\ \varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\ nG\varphi'-(G\varphi)'-mG^{2}=\kappa\lambda, \end{array} \right. \end{eqnarray} and \begin{eqnarray}\label{sera3} f=\Theta\exp\left(\int\frac{G}{\varphi}d\xi\right), \end{eqnarray} where $\Theta\in\mathbb{R}_{+}\backslash\{0\}$, $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Here $\bar{R}$ is the scalar curvature of $\bar{g}$. \end{theorem} The next result is a consequence of Theorem \ref{teo3a}. \begin{theorem}\label{teo3} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold, where $M$ is conformal to a pseudo-Euclidean space invariant under the action of an $(n-1)$-dimensional translation group with constant scalar curvature (possibly zero), and $N$ is a Ricci-flat semi-Riemannian manifold. Then, $\widehat{M}$ is either \begin{enumerate} \item[(1)] a Ricci-flat semi-Riemannian manifold $(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$, such that $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space with warping function $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants, or\\ \item[(2)] conformal to $(\mathbb{R}^{n},g)\times(N^{m},\tilde{g})$, where $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space. The conformal function $\varphi$ is given by \begin{eqnarray*} \varphi(\xi)= \frac{1}{(-G\xi+C)^{2}};\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}. \end{eqnarray*} \end{enumerate} Moreover, the conformal function is defined for $\xi\neq\frac{C}{G}$.
\end{theorem} It is worth mentioning that the first item of Theorem \ref{teo3} was not considered in \cite{Romildo}. From Theorem \ref{teo3} we can construct examples of complete Einstein warped product Riemannian manifolds. \begin{corollary}\label{coro1} Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants. Therefore, $(\mathbb{R}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Ricci-flat warped product Riemannian manifold. \end{corollary} \begin{corollary}\label{coro2} Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(x)= \frac{1}{x_{n}}$ with $x_{n}>0$. Therefore, $(\widehat{M},\hat{g})=(\mathbb{H}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Riemannian Einstein warped product such that $$\widehat{R}ic=-\frac{m+n-1}{n(n-1)}\hat{g}.$$ \end{corollary} The paper is organized as follows. Section \ref{SB} is divided into two subsections, namely, {\it General formulas} and {\it A conformal structure for the warped product with Ricci-flat fiber}, where the preliminary results will be provided. Further, in Section \ref{provas}, we will prove our main results. \section{Preliminaries}\label{SB} Consider semi-Riemannian manifolds $(M^{n}, g)$ and $(N^{m},\tilde{g})$, with $n\geq3$ and $m\geq2$, and let $f:M^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $M\times N$ with metric \begin{eqnarray*} \hat{g}=g+f^{2}\tilde{g}. \end{eqnarray*} From Corollary 43 in \cite{Oneil}, we have that (see also \cite{kimkim}) \begin{eqnarray}\label{test1} \widehat{R}ic=\lambda\hat{g}\Longleftrightarrow\left\{ \begin{array}{lcc} Ric-\frac{m}{f}\nabla^{2}f=\lambda g\\ \widetilde{R}ic=\mu\tilde{g}\\ f\Delta f+(m-1)|\nabla f|^{2}+\lambda f^{2}=\mu \end{array} ,\right. \end{eqnarray} where $\lambda$ and $\mu$ are constants.
This means that $\widehat{M}$ is an Einstein warped product if and only if (\ref{test1}) is satisfied. Here $\widehat{R}ic$, $\widetilde{R}ic$ and $Ric$ are, respectively, the Ricci tensors of $\hat{g}$, $\tilde{g}$ and $g$. Moreover, $\nabla^{2}f$, $\Delta f$ and $\nabla f$ are, respectively, the Hessian, the Laplacian and the gradient of $f$ with respect to $g$. \subsection{General formulas}\label{GF} We derive some useful formulae from system (\ref{test1}). Contracting the first equation of (\ref{test1}) we get \begin{eqnarray}\label{01} Rf^{2}-mf\Delta f=nf^{2}\lambda, \end{eqnarray} where $R$ is the scalar curvature of $g$. From the third equation in (\ref{test1}) we have \begin{eqnarray}\label{02} mf\Delta f+m(m-1)|\nabla f|^{2}+m\lambda f^{2}=m\mu. \end{eqnarray} Then, from (\ref{01}) and (\ref{02}) we obtain \begin{eqnarray}\label{oi} |\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=\frac{\mu}{(m-1)}. \end{eqnarray} When the base is a Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold (i.e., $\mu=0$), from (\ref{oi}) we obtain \begin{eqnarray}\label{eqtop} |\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=0. \end{eqnarray} Then, either \begin{eqnarray*} R\leq\lambda(n-m) \end{eqnarray*} or $f$ is trivial, i.e., $\widehat{M}$ is a product manifold. \subsection{A conformal structure for the warped product with Ricci-flat fiber}\label{CFSI} In what follows, consider semi-Riemannian manifolds $(\mathbb{R}^{n}, g)$ and $(N^{m},\tilde{g})$, and let $f:\mathbb{R}^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $\mathbb{R}^{n}\times N$ with metric \begin{eqnarray*} \hat{g}=g+f^{2}\tilde{g}.
\end{eqnarray*} Let $(\mathbb{R}^{n}, g)$, $n\geq3$, be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1},\ldots,x_{n})$ with $g_{ij}=\delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$ a warped product, where $\varphi:\mathbb{R}^{n}\rightarrow\mathbb{R}\backslash\{0\}$ is a smooth function such that $\bar{g}=\frac{g}{\varphi^{2}}$. Furthermore, we consider that $\widehat{M}$ is an Einstein semi-Riemannian manifold, i.e., $$\widehat{R}ic=\lambda\hat{g},$$ where $\widehat{R}ic$ is the Ricci tensor for the metric $\hat{g}$ and $\lambda\in\mathbb{R}$. We use invariants for the group action (or subgroup) to reduce a partial differential equation into a system of ordinary differential equations \cite{olver}. To be clearer, we consider that $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^n,\bar{g})\times_{f}(N^{m},\tilde{g})$ is such that the base is invariant under the action of an $(n-1)$-dimensional translation group (\cite{BarbosaPinaKeti,olver,LeandroPina,Romildo,Tenenblat}). More precisely, let $(\mathbb{R}^{n}, g)$ be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1}, \cdots, x_{n})$, with $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i, j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Let $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, $\alpha_{i}\in\mathbb{R}$, be a basic invariant for an $(n-1)$-dimensional translation group where $\alpha=\displaystyle\sum_{i}\alpha_{i}\frac{\partial}{\partial x_{i}}$ is a timelike, lightlike or spacelike vector, i.e., $\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}=-1,0,$ or $1$, respectively.
Then we consider $\varphi(\xi)$ and $f(\xi)$ non-trivial differentiable functions such that \begin{eqnarray*} \varphi_{x_{i}}=\varphi'\alpha_{i}\quad\mbox{and}\quad f_{x_{i}}=f'\alpha_{i}. \end{eqnarray*} Moreover, it is well known (see \cite{BarbosaPinaKeti,LeandroPina,Romildo}) that if $\bar{g}=\frac{1}{\varphi^{2}}g$, then the Ricci tensor $\bar{R}ic$ for $\bar{g}$ is given by $$\bar{R}ic=\frac{1}{\varphi^{2}}\{(n-2)\varphi\nabla^{2}\varphi + [\varphi\Delta\varphi - (n-1)|\nabla\varphi|^{2}]g\},$$ where $\nabla^{2}\varphi$, $\Delta\varphi$ and $\nabla\varphi$ are, respectively, the Hessian, the Laplacian and the gradient of $\varphi$ for the metric $g$. Hence, the scalar curvature of $\bar{g}$ is given by \begin{eqnarray}\label{scalarcurvature} \bar{R}&=&\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\varphi^{2}\left(\bar{R}ic\right)_{kk}=(n-1)(2\varphi\Delta\varphi - n|\nabla\varphi|^{2})\nonumber\\ &=&(n-1)[2\varphi\varphi''-n(\varphi')^{2}]\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}. \end{eqnarray} In what follows, we denote $\kappa=\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}$. When the fiber $N$ is a Ricci-flat semi-Riemannian manifold, we already know from Theorem 1.2 in \cite{Romildo} that $\varphi(\xi)$ and $f(\xi)$ satisfy the following system of differential equations \begin{eqnarray}\label{system} \left\{ \begin{array}{lcc} (n-2)f\varphi''-mf''\varphi-2m\varphi'f'=0;\\\\ f\varphi\varphi''-(n-1)f(\varphi')^{2}+m\varphi\varphi'f'=\kappa\lambda f;\\\\ (n-2)f\varphi\varphi'f'-(m-1)\varphi^{2}(f')^{2}-ff''\varphi^{2}=\kappa\lambda f^{2}. \end{array} \right. \end{eqnarray} Note that the case where $\kappa=0$ was proved in \cite{Romildo}. Therefore, we only consider the case $\kappa=\pm1$.
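As a numerical sanity check (not part of the proofs), one can verify that the functions obtained later in the proof of Theorem \ref{teo3}, namely $\varphi(\xi)=-G\xi+C$ and $f(\xi)=\Theta/(-G\xi+C)$, solve the system (\ref{system}) once $\kappa\lambda=-(m+n-1)G^{2}$, which is a rearrangement of the relation (\ref{htres}) derived there. A sketch with sample values of the parameters:

```python
# Illustrative residual check: phi(xi) = -G*xi + C and
# f(xi) = Theta/(-G*xi + C) satisfy the three ODEs of the system
# whenever kappa*lambda = -(m + n - 1) * G^2.
n, m = 4, 3
G, C, Theta, kappa = 1.0, 5.0, 2.0, 1
lam = -(m + n - 1) * G ** 2 / kappa

def phi(x): return -G * x + C          # conformal factor
def dphi(x): return -G                 # phi'
def ddphi(x): return 0.0               # phi''
def f(x): return Theta / phi(x)        # warping function
def df(x): return Theta * G / phi(x) ** 2
def ddf(x): return 2 * Theta * G ** 2 / phi(x) ** 3

for x in (0.0, 1.0, 2.5):
    e1 = (n - 2) * f(x) * ddphi(x) - m * ddf(x) * phi(x) - 2 * m * dphi(x) * df(x)
    e2 = (f(x) * phi(x) * ddphi(x) - (n - 1) * f(x) * dphi(x) ** 2
          + m * phi(x) * dphi(x) * df(x) - kappa * lam * f(x))
    e3 = ((n - 2) * f(x) * phi(x) * dphi(x) * df(x)
          - (m - 1) * phi(x) ** 2 * df(x) ** 2
          - f(x) * ddf(x) * phi(x) ** 2 - kappa * lam * f(x) ** 2)
    assert max(abs(e1), abs(e2), abs(e3)) < 1e-9
```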
\ \section{Proof of the main results}\label{provas} \ \noindent {\bf Proof of Theorem \ref{teo1}:} In fact, from the third equation of the system (\ref{test1}) we get that \begin{eqnarray}\label{kimkimeq} div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=\mu. \end{eqnarray} Moreover, if $N$ is Ricci-flat, from (\ref{kimkimeq}) we obtain \begin{eqnarray}\label{kimkimeq1} div\left(f\nabla f\right)+\lambda f^{2}\leq div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=0. \end{eqnarray} Considering $M$ a compact Riemannian manifold, integrating (\ref{kimkimeq1}) we have \begin{eqnarray}\label{kimkimeq2} \int_{M}\lambda f^{2}dv=\int_{M}\left(div\left(f\nabla f\right)+\lambda f^{2}\right)dv\leq 0. \end{eqnarray} Therefore, from (\ref{kimkimeq2}) we can infer that \begin{eqnarray}\label{kimkimeq3} \lambda\int_{M}f^{2}dv\leq 0. \end{eqnarray} This implies that either $\lambda\leq0$ or $f$ is trivial. It is worth pointing out that quasi-Einstein metrics on compact manifolds with $\lambda\leq0$ are trivial (see Remark 6 in \cite{kimkim}). \hfill $\Box$ \ \noindent {\bf Proof of Theorem \ref{teo2}:} Let $p$ be a maximum point of $f$ on $M$. Therefore, $f(p)>0$, $(\nabla f)(p)=0$ and $(\Delta f)(p)\geq0$. By hypothesis $R+\lambda(m-n)\leq0$, then from (\ref{oi}) we get \begin{eqnarray*} |\nabla f|^{2}\geq\frac{\mu}{m-1}. \end{eqnarray*} Whence, at $p\in M$ we obtain \begin{eqnarray*} 0=|\nabla f|^{2}(p)\geq\frac{\mu}{m-1}. \end{eqnarray*} Since $\mu$ is constant, we have that $\mu\leq0$. Moreover, from the third equation in (\ref{test1}) we have \begin{eqnarray*} \lambda f^{2}(p)\leq (f\Delta f)(p)+(m-1)|\nabla f|^{2}(p)+\lambda f^{2}(p)=\mu\leq0. \end{eqnarray*} This implies that $\lambda\leq0$. Then, from \cite{kimkim} the result follows. Now, if $R+\lambda(m-n)=0$, from (\ref{oi}) we have that \begin{eqnarray*} |\nabla f|^{2}=\frac{\mu}{m-1}. \end{eqnarray*} Then, at $p\in M$ we obtain \begin{eqnarray*} 0=|\nabla f|^{2}(p)=\frac{\mu}{m-1}.
\end{eqnarray*} Therefore, since $\mu$ is a constant, we get that $\mu=0$, i.e., $N$ is Ricci-flat. \hfill $\Box$ It is worth noting that if $M$ is a compact Riemannian manifold and the scalar curvature $R$ is constant, then $f$ is trivial (see \cite{Case}). \ \noindent {\bf Proof of Theorem \ref{teo3b}:} Considering $\lambda=\frac{R}{n-1}$ in equation (\ref{oi}) we obtain \begin{eqnarray}\label{ooi} |\nabla f|^{2}+\frac{R}{m(n-1)}f^{2}=\frac{\mu}{m-1}. \end{eqnarray} Then, taking the Laplacian we get \begin{eqnarray}\label{3b1} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}\left(|\nabla f|^{2}+f\Delta f\right)=0. \end{eqnarray} Moreover, when we consider that $\lambda=\frac{R}{n-1}$ in (\ref{test1}), and contracting the first equation of the system we have that \begin{eqnarray}\label{3b2} -\Delta f=\frac{Rf}{m(n-1)}. \end{eqnarray} From (\ref{3b2}), (\ref{3b1}) becomes \begin{eqnarray}\label{3b3} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}|\nabla f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}. \end{eqnarray} The first equation of (\ref{test1}) and (\ref{ooi}) allow us to infer that \begin{eqnarray*} \frac{2f}{m}Ric(\nabla f)&=&\frac{2Rf}{m(n-1)}\nabla f+2\nabla^{2}f(\nabla f)\nonumber\\ &=&\nabla\left(|\nabla f|^{2}+\frac{Rf^{2}}{m(n-1)}\right)=\nabla\left(\frac{\mu}{m-1}\right)=0. \end{eqnarray*} Since $f>0$, we get \begin{eqnarray}\label{3b4} Ric(\nabla f, \nabla f)=0. \end{eqnarray} Recall the Bochner formula \begin{eqnarray}\label{bochner} \frac{1}{2}\Delta|\nabla f|^{2}=|\nabla^{2}f|^{2}+Ric(\nabla f,\nabla f)+g(\nabla f,\nabla\Delta f). \end{eqnarray} Whence, from (\ref{3b2}), (\ref{3b4}) and (\ref{bochner}) we obtain \begin{eqnarray}\label{bochner1} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}{|\nabla f|}^{2}=|\nabla^{2}f|^{2}. \end{eqnarray} Substituting (\ref{3b3}) in (\ref{bochner1}) we get \begin{eqnarray}\label{hessiannorm} |\nabla^{2}f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}.
\end{eqnarray} From the first equation of (\ref{test1}), a straightforward computation gives us \begin{eqnarray}\label{ricnorm} |Ric|^{2}=\frac{m^{2}}{f^{2}}|\nabla^{2}f|^{2}+\frac{2mR\Delta f}{(n-1)f}+\frac{nR^{2}}{(n-1)^{2}}. \end{eqnarray} Finally, from (\ref{hessiannorm}), (\ref{3b2}) and (\ref{ricnorm}) we have that \begin{eqnarray*} |Ric|^{2}=\frac{R^{2}}{n-1}. \end{eqnarray*} Then, we get the result. \hfill $\Box$ \ In what follows, we consider the conformal structure given in Section \ref{CFSI} to prove Theorem \ref{teo3a} and Theorem \ref{teo3}. \ \noindent {\bf Proof of Theorem \ref{teo3a}:} From the definition, \begin{eqnarray}\label{grad} |\bar{\nabla}f|^{2}=\displaystyle\sum_{i,j}\varphi^{2}\varepsilon_{i}\delta_{ij}f_{x_{i}}f_{x_{j}}=\left(\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}\right)\varphi^{2}(f')^{2}=\kappa\varphi^{2}(f')^{2}, \end{eqnarray} where $\bar{\nabla}f$ is the gradient of $f$ for $\bar{g}$, and $\kappa\neq0$. Then, from (\ref{eqtop}) and (\ref{grad}) we have \begin{eqnarray}\label{sera} \kappa\varphi^{2}(f')^{2}+\left[\frac{\lambda(m-n)+\bar{R}}{m(m-1)}\right]f^{2}=\frac{\mu}{m-1}. \end{eqnarray} Considering that $N$ is a Ricci-flat semi-Riemannian manifold, i.e., $\mu=0$, from (\ref{sera}) we get \begin{eqnarray}\label{sera1} \frac{f'}{f}=\frac{G}{\varphi}, \end{eqnarray} where $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$. This gives us (\ref{sera3}). Now, from (\ref{sera1}) we have \begin{eqnarray}\label{sera2} \frac{f''}{f}=\left(\frac{G}{\varphi}\right)'+\left(\frac{G}{\varphi}\right)^{2}=\left(\frac{G}{\varphi}\right)^{2}+\frac{G'}{\varphi}-\frac{G\varphi'}{\varphi^{2}}. \end{eqnarray} Therefore, from (\ref{system}), (\ref{sera1}) and (\ref{sera2}) we get (\ref{system2}).
\hfill $\Box$ \ \noindent {\bf Proof of Theorem \ref{teo3}:} Considering that $\bar{R}$ is constant, from (\ref{system2}) we obtain \begin{eqnarray}\label{system12} \left\{ \begin{array}{lcc} (n-2)\varphi\varphi''-mG\varphi'=mG^{2}\\\\ \varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\ (n-1)G\varphi'-mG^{2}=\kappa\lambda \end{array} .\right. \end{eqnarray} The third equation in (\ref{system12}) gives us that $\varphi$ is an affine function. Moreover, since \begin{eqnarray}\label{hum} \varphi'(\xi)=\frac{\kappa\lambda+mG^{2}}{(n-1)G}, \end{eqnarray} we get $\varphi''=0$. Then, from the first and second equations in (\ref{system12}) we have, respectively, \begin{eqnarray*} -mG\varphi'=mG^{2}\quad\mbox{and}\quad -(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda. \end{eqnarray*} This implies that \begin{eqnarray}\label{hdois} -(\varphi')^{2}=\frac{\kappa\lambda+mG^{2}}{(n-1)}. \end{eqnarray} Then, from (\ref{hum}) and (\ref{hdois}) we get \begin{eqnarray*} (\varphi')^{2}+G\varphi'=0. \end{eqnarray*} That is, $\varphi'=0$ or $\varphi'=-G$. First consider that $\varphi'=0$. From (\ref{scalarcurvature}) and (\ref{system12}), it is easy to see that $\lambda=\bar{R}=0$. Then, we get the first item of the theorem since, as mentioned, the case $\varphi' = 0$ was not considered in \cite{Romildo}. Now, we take $\varphi'=-G$. Integrating over $\xi$ we have \begin{eqnarray}\label{phii} \varphi(\xi)=-G\xi+C;\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}. \end{eqnarray} Then, from (\ref{hum}) we obtain \begin{eqnarray}\label{htres} \frac{\kappa\lambda+mG^{2}}{(n-1)G}=-G. \end{eqnarray} Since $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$, from (\ref{htres}) we obtain \begin{eqnarray}\label{scalarcurvature1} \bar{R}=\frac{n(n-1)\lambda}{(m+n-1)}. \end{eqnarray} Considering that $\lambda\neq0$, we can see that $\bar{R}$ is a non-null constant.
On the other hand, since $\varphi'=-G$, from (\ref{scalarcurvature}) we get \begin{eqnarray}\label{anem} \bar{R}=-n(n-1)\kappa G^{2}, \end{eqnarray} where $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$. Observe that (\ref{scalarcurvature1}) and (\ref{anem}) are equivalent. Furthermore, from (\ref{sera3}) and (\ref{phii}) we get \begin{eqnarray*} f(\xi)=\frac{\Theta}{-G\xi+C}. \end{eqnarray*} This completes the proof. \hfill $\Box$ \ \noindent {\bf Proof of Corollary \ref{coro1}:} It is a direct consequence of Theorem \ref{teo3}-(1). \hfill $\Box$ \ \noindent {\bf Proof of Corollary \ref{coro2}:} Remember that $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, where $\alpha_{i}\in\mathbb{R}$ (cf. Section \ref{CFSI}). Consider in Theorem \ref{teo3}-(2) that $\alpha_{n}=\frac{1}{G}$ and $\alpha_{i}=0$ for all $i\neq n$. Moreover, taking $C=0$ we get \begin{eqnarray} f(\xi)=\frac{1}{x_{n}^{2}}. \end{eqnarray} Moreover, take $\mathbb{R}^{n^{\ast}}_{+}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}; x_{n}>0\}$. Then, $\left(\mathbb{R}^{n^{\ast}}_{+},g_{can}=\frac{\delta_{ij}}{x_{n}^{2}}\right)=(\mathbb{H}^{n},g_{can})$ is the hyperbolic space. We point out that $\mathbb{H}^{n}$ with this metric has constant sectional curvature equal to $-1$. Then, from (\ref{scalarcurvature1}) we obtain $\lambda=-\frac{m+n-1}{n(n-1)}$, and the result follows. \hfill $\Box$ \iffalse \noindent {\bf Proof of Theorem \ref{teo4}:} It is a straightforward computation from (\ref{system2}) that \begin{eqnarray*}\label{eqseggrau} m(m-1)G^{2}-\left[2m(n-2)\varphi'\right]G+[\lambda(m+n-2)+(n-2)(n-1)(\varphi')^{2}]=0, \end{eqnarray*} where $G=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Therefore, from the second order equation we have \begin{eqnarray*}\label{G} G=\frac{m(n-2)\varphi'\pm \sqrt{\Delta}}{m(m-1)}, \end{eqnarray*} where $\Delta=[m^{2}(n-2)^{2}-m(m-1)(n-2)(n-1)](\varphi')^{2}-\lambda m(m-1)(m+n-2)$.
Observe that, since $m=n-1$ by hypothesis, $\Delta=-\lambda m(m-1)(m+n-2)$. Hence, (\ref{G}) becomes \begin{eqnarray*} G=\varphi'\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}. \end{eqnarray*} This implies that $\lambda<0$. Then, taking $\beta=\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}$, we get \begin{eqnarray}\label{hquatro} G^{2}=(\varphi')^{2}+2\varphi'\beta+\beta^{2}. \end{eqnarray} Since $G^{2}=\frac{\kappa(\lambda-\bar{R})}{(n-1)(n-2)}$ and $\bar{R}=\kappa(n-1)[2\varphi\varphi''-n(\varphi')^{2}]$, from (\ref{hquatro}) we get \begin{eqnarray}\label{edoboa} \varphi\varphi''-(\varphi')^{2}+\tilde{\beta}\varphi'+\theta=0, \end{eqnarray} where $\tilde{\beta}=\pm\sqrt{\frac{-\lambda(2n-3)(n-2)}{(n-1)}}$ and $\theta=-\lambda\frac{2n+\kappa-3}{2(n-1)}$. Then, from (\ref{edoboa}) we obtain \begin{eqnarray*} \varphi(\xi) = \frac{1}{2}\xi(\sqrt{\tilde{\beta}^{2}+4\theta}+\tilde{\beta})+\ln\left(\frac{\sqrt{\tilde{\beta}^{2}+4\theta}}{\theta_{1}\exp(\xi\sqrt{\tilde{\beta}^{2}+4\theta})-\theta_{2}}\right), \end{eqnarray*} where $\tilde{\beta}^{2}+4\theta=-\lambda\frac{(2n-3)(n-4)+2\kappa}{(n-1)}$ and $\theta_{1}\neq0$. Observe that if $n=4$, then $\kappa=1$. \hfill $\Box$ \fi \ \begin{acknowledgement} The authors would like to express their deep thanks to Professor Ernani Ribeiro Jr.\ for valuable suggestions. \end{acknowledgement}
\section{Approach} \vspace{-0.1in} Our goal is to find a discriminative set of parts or patches that co-occur in many of the positively labeled images in the same configuration. We address this goal in two steps. First, we find a set of patches that are discriminative, i.e., they tend to occur mainly in positive images. Second, we use an efficient approach to find co-occurring configurations of pairs of such patches. Our approach easily extends beyond pairs; for simplicity and to retain configurations that occur frequently enough, we here restrict ourselves to pairs. \textbf{Discriminative candidate patches.} For identifying discriminative patches, we begin with a construction similar to that of \citet{song-icml2014}. Let $\mathcal{P}$ be the set of positively labeled images. Each image $I$ contains candidate boxes $\{b_{I,1}, \ldots, b_{I,m}\}$ found via selective search \cite{selectivesearch}. For each $b_{I,i}$, we find its closest matching neighbor $b_{I',j}$ in each other image $I'$ (regardless of the image label). The $K$ closest of those neighbors form the neighborhood $\mathcal{N}(b_{I,i})$; the remaining ones are discarded. Discriminative patches will have neighborhoods mainly within images in $\mathcal{P}$, i.e., if $\mathcal{B}(\mathcal{P})$ is the set of all patches from images in $\mathcal{P}$, then $|\mathcal{N}(b) \cap \mathcal{B}(\mathcal{P})| \approx K$. To identify a small, diverse and representative set of such patches, like \cite{song-icml2014}, we construct a bipartite graph $\mathcal{G} = (\mathcal{U},\mathcal{V},\mathcal{E})$, where both $\mathcal{U}$ and $\mathcal{V}$ contain copies of $\mathcal{B}(\mathcal{P})$. Each patch $b \in \mathcal{V}$ is connected to the copies of its nearest neighbors in $\mathcal{U}$ -- these will be $K$ or fewer, depending on whether the $K$ nearest neighbors of $b$ occur in $\mathcal{B}(\mathcal{P})$ or in negatively labeled images.
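The neighborhood construction above can be sketched as follows. This is a toy illustration, not the authors' implementation: the tuple layout for candidate boxes, the Euclidean distance on abstract feature vectors, and the names `knn_neighborhood` and `discriminativeness` are all assumptions made here for concreteness.

```python
# Toy sketch of the K-nearest-neighborhood construction (hypothetical data
# layout). Each candidate box is a tuple (box_id, image_id, label, feature),
# where label is 1 for a positively labeled image and 0 otherwise.

def euclidean(u, v):
    """Plain Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def knn_neighborhood(box_id, all_boxes, K):
    """Return the K closest boxes drawn from *other* images only,
    mirroring the cross-image matching described in the text."""
    info = {bid: (img, feat) for bid, img, _, feat in all_boxes}
    own_img, own_feat = info[box_id]
    cands = sorted((euclidean(own_feat, feat), bid)
                   for bid, img, _, feat in all_boxes if img != own_img)
    return [bid for _, bid in cands[:K]]

def discriminativeness(neighborhood, all_boxes):
    """Fraction of neighbors lying in positively labeled images; a value
    near 1 corresponds to |N(b) ∩ B(P)| ≈ K in the text."""
    labels = {bid: lab for bid, _, lab, _ in all_boxes}
    return sum(labels[bid] for bid in neighborhood) / max(len(neighborhood), 1)
```

A discriminative patch is then one whose `discriminativeness` score is close to 1, i.e., almost all of its matches come from positive images.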
The most representative patches maximize the covering function \begin{equation} \label{eq:cover} F(S) = | \Gamma(S) |, \end{equation} where $\Gamma(S) = \{u \in \mathcal{U} \mid (b,u) \in \mathcal{E} \text{ for some } b \in S\}$ is the neighborhood of $S \subseteq \mathcal{V}$ in the bipartite graph. Figure~\ref{fig:g_c} shows a cartoon illustration. The function $F$ is monotone and submodular, and the $C$ maximizing elements (for a given $C$) can be selected greedily~\cite{nemhauser1978}. However, if we aim to find part configurations, we must select multiple, jointly informative patches per image. Patches selected to merely maximize coverage can still be redundant, since the most frequently occurring ones are often highly overlapping. A straightforward modification would be to treat highly overlapping patches as identical. This identification would still admit a submodular covering model as in Equation~\eqref{eq:cover}. But, in our case, the candidate patches are very densely packed in the image, and, by transitivity, we would have to make all of them identical. In consequence, this would completely rule out the selection of more than one patch in an image and thereby prohibit the discovery of any co-occurring configurations. \begin{figure} \centering \begin{tabular}{l@{\hspace{35pt}}c} \multirow{3}{*}{ \includegraphics[width=0.55\textwidth]{figures/figure1/clusters.pdf}} & \vspace{1pt} \\ & \includegraphics[width=0.25\textwidth]{figures/figure1/g_c2.pdf} \\ & \vspace{2pt} \end{tabular} \caption{Left: bipartite graph $\mathcal{G}$ that defines the utility function $F$ and identifies discriminativeness; right: graph $\mathcal{G}_C$ that defines the diversifying independence constraints $\mathcal{M}$. We may pick $C_1$ (yellow) and $C_3$ (green) together, but not $C_2$ (red) with any of those, since it is redundant. 
If we identify overlapping patches in $\mathcal{G}$ and thus the covering $F$, then we would only ever pick one of $C_1$, $C_2$ and $C_3$, and no characteristic configurations could be identified.} \label{fig:g_c} \end{figure} \vspace{-0.1cm} Therefore, we take a different approach from \cite{song-icml2014}, whose goal is to identify single patches rather than part-based configurations. We constrain our selection such that no two patches $b, b' \in \mathcal{V}$ can be picked whose neighborhoods overlap by more than a fraction of $\theta$. By overlap, we mean that the patches in the neighborhoods of $b, b'$ overlap significantly (they need not be identical). This notion of diversity is reminiscent of non-maximum suppression (NMS) and similar to that in~\cite{doersch-siggraph2012}, but we here phrase and analyze it as a constrained submodular optimization problem. Our constraint can be expressed in terms of a different graph $\mathcal{G}_C = (\mathcal{V}, \mathcal{E}_C)$ with nodes $\mathcal{V}$. In $\mathcal{G}_C$, there is an edge between $b$ and $b'$ if their neighborhoods overlap prohibitively, as illustrated in Figure~\ref{fig:g_c}. Our family of feasible solutions is \begin{equation} \label{eq:constrainedcov} \mathcal{M} = \{S \subseteq \mathcal{V} \mid \forall\, b,b' \in S \text{ there is no edge } (b,b') \in \mathcal{E}_C\}. \end{equation} In other words, $\mathcal{M}$ is the family of all independent sets in $\mathcal{G}_C$. We aim to maximize \begin{align} \max\nolimits_{S\subseteq \mathcal{V}} F(S) \quad \text{s.t. } S \in \mathcal{M}. \end{align} This problem is NP-hard. We solve it approximately via the following greedy algorithm. Begin with $S^0 = \emptyset$, and, in iteration $t$, add $b \in \argmax_{b \in \mathcal{V} \setminus S^{t-1}} |\Gamma(b) \setminus \Gamma(S^{t-1})|$. As we add $b$, we delete all of $b$'s neighbors in $\mathcal{G}_C$ from $\mathcal{V}$. We continue until $\mathcal{V} = \emptyset$.
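The constrained greedy procedure above can be written out in a few lines. This is a minimal sketch under assumed data structures (a dict `Gamma` mapping each patch to the set of elements of $\mathcal{U}$ it covers, and an edge list for $\mathcal{G}_C$); ties are broken lexicographically here only to make the sketch deterministic.

```python
# Sketch of the constrained greedy selection: maximize coverage while
# enforcing independence in the conflict graph G_C by deleting the chosen
# patch's conflict-neighbors after each pick.

def greedy_constrained_cover(Gamma, conflict_edges):
    conflicts = {b: set() for b in Gamma}
    for b, bp in conflict_edges:
        conflicts[b].add(bp)
        conflicts[bp].add(b)
    available = set(Gamma)
    covered, selected = set(), []
    while available:
        # pick the patch with the largest marginal coverage gain
        # (sorted() only for deterministic tie-breaking)
        best = max(sorted(available), key=lambda b: len(Gamma[b] - covered))
        selected.append(best)
        covered |= Gamma[best]
        # deleting best and its G_C-neighbors enforces the independence
        # constraint of Equation (eq:constrainedcov)
        available -= conflicts[best] | {best}
    return selected, len(covered)
```

On the cartoon of Figure~\ref{fig:g_c}, with $C_2$ conflicting with both $C_1$ and $C_3$, this sketch selects $C_1$ and $C_3$ and skips the redundant $C_2$.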
If the neighborhoods $\Gamma(b)$ are disjoint, then this algorithm amounts to the following simplified scheme: we first sort all $b \in \mathcal{V}$ in non-increasing order by their degree $|\Gamma(b)|$, i.e., their number of neighbors in $\mathcal{B}(\mathcal{P})$, and visit them in this order. We always add the currently highest $b$ in the list to $S$, then delete it from the list, and with it all its immediate (overlapping) neighbors. The following lemma states an approximation factor for the greedy algorithm, where $\Delta$ is the maximum degree of any node in $\mathcal{G}_C$. \begin{lemma}\label{lem:bound} The solution $S_g$ returned by the greedy algorithm is a $1/(\Delta+2)$-approximation for Problem~\eqref{eq:constrainedcov}: $F(S_g) \geq \tfrac{1}{\Delta+2}F(S^*)$. If $\Gamma(b) \cap \Gamma(b') = \emptyset$ for all $b,b' \in \mathcal{V}$, then the worst-case approximation factor is $1/(\Delta+1)$. \end{lemma} The proof relies on phrasing $\mathcal{M}$ as an intersection of matroids. \begin{definition}[Matroid] A matroid $(\mathcal{V}, \mathcal{I}_k)$ consists of a ground set $\mathcal{V}$ and a family $\mathcal{I}_k \subseteq 2^{\mathcal{V}}$ of ``independent sets'' that satisfy three axioms: (1) $\emptyset \in \mathcal{I}_k$; (2) downward closedness: if $S \in \mathcal{I}_k$ then $T \in \mathcal{I}_k$ for all $T \subseteq S$; and (3) the exchange property: if $S, T \in \mathcal{I}_k$ and $|S| < |T|$, then there is an element $v \in T\setminus S$ such that $S \cup \{v\} \in \mathcal{I}_k$. \end{definition} \begin{proof}\emph{(Lemma~\ref{lem:bound})} We will argue that Problem~\eqref{eq:constrainedcov} is the problem of maximizing a monotone submodular function subject to the constraint that the solution lies in the intersection of $\Delta+1$ matroids.
With this insight, the approximation factor of the greedy algorithm for submodular $F$ follows from \cite{fisher78} and that for non-intersecting $\Gamma(b)$ from \cite{jenkyns76}, since in the latter case the problem is that of finding a maximum weight vector in the intersection of $\Delta+1$ matroids. It remains to argue that $\mathcal{M}$ is an intersection of matroids. Our matroids will be partition matroids (over the ground set $\mathcal{V}$) whose independent sets are of the form $\mathcal{M}_k = \{S \mid |S \cap e| \leq 1, \text{ for all } e \in E_k\}$. To define those, we partition the edges in $\mathcal{G}_C$ into disjoint sets $E_k$, i.e., no two edges in $E_k$ share a common node. The $E_k$ can be found by an edge coloring -- one $E_k$ and $\mathcal{M}_k$ for each color $k$. By Vizing's theorem \cite{vizing64}, we need at most $\Delta+1$ colors. The matroid $\mathcal{M}_k$ demands that for each edge $e \in E_k$, we may only select one of its adjacent nodes. All matroids together say that for any edge $e \in \mathcal{E}$, we may only select one of the adjacent nodes, and that is the constraint in Equation~\eqref{eq:constrainedcov}, i.e. $\mathcal{M} = \bigcap_{k=1}^{\Delta+1} \mathcal{M}_k$. We do not ever need to explicitly compute $E_k$ and $\mathcal{M}_k$; all we need to do is check membership in the intersection, and this is equivalent to checking whether a set $S$ is an independent set in $\mathcal{G}_C$, which is done by the deletions in the algorithm. \vspace{-0.1cm} \end{proof} From the constrained greedy algorithm, we obtain a set $S \subset \mathcal{V}$ of discriminative patches. Together with its neighborhood $\Gamma(b)$, each patch $b \in \mathcal{V}$ forms a representative cluster. Figure~\ref{fig:clusters} shows some example patches derived from the labels ``aeroplane'' and ``motorbike''. The discovered patches intuitively look like ``parts'' of the objects, and are frequent but sufficiently different. 
\begin{figure*}[t] \centering \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_1_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_1_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_2_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_2_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_3_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_3_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_4_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/aeroplane/c_4_1.png}\\ \vspace{0.15cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_1_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_1_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_2_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_2_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_3_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_3_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_4_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/motorbike/c_4_1.png}\\ \vspace{0.15cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_1_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_1_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, 
height=1.2cm]{figures/clusters/cat/c_2_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_2_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_3_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_3_1.png}\hspace{0.03cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_4_0.png}\hspace{-0.12cm} \includegraphics[width=0.12\textwidth, height=1.2cm]{figures/clusters/cat/c_4_1.png}\\ \vspace{-0.05in} \caption{Examples of discovered patch ``clusters'' for aeroplane, motorbike, and cat. The discovered patches intuitively look like object parts, and are frequent but sufficiently different.} \label{fig:clusters} \vspace{-0.1in} \end{figure*} \textbf{Finding frequent configurations.} The next step is to find frequent configurations of co-occurring clusters, e.g., the head patch of a person on top of the torso patch, or a bicycle with visible wheels. A ``configuration'' consists of patches from two clusters $C_i, C_j$, their relative location, and their viewpoint and scale. In practice, we give preference to pairs that by themselves are very relevant and maximize a weighted combination of co-occurrence count and coverage $\max\{\Gamma (C_i), \Gamma (C_j)\}$. All possible configurations of all pairs of patches amount to too many to explicitly write down and count. Instead, we follow an efficient procedure for finding frequent configurations. Our approach is inspired by \cite{li-cvpr2012}, but does not require any supervision. We first find configurations that occur in at least two images. To do so, we consider each pair of images $I_1$, $I_2$ that have at least two co-occurring clusters. For each correspondence of cluster patches across the images, we find a corresponding transform operation (translation, rescale, viewpoint change). 
This results in a point in a 4D transform space, for each cluster correspondence. We quantize this space into $B$ bins. Our candidate configurations will be pairs of cluster correspondences $((b_{I_1,1}, b_{I_2,1}),(b_{I_1,2},b_{I_2,2})) \in (C_i \times C_i) \times (C_j \times C_j)$ that fall in the same bin, i.e., share the same transform, and have the same relative location. Between a given pair of images, there can be multiple such pairs of correspondences. We keep track of those via a multi-graph $\mathcal{G}_P = (\mathcal{P}, \mathcal{E}_P)$ that has a node for each image $I \in \mathcal{P}$. For each correspondence $((b_{I_1,1}, b_{I_2,1}),(b_{I_1,2},b_{I_2,2})) \in (C_i \times C_i) \times (C_j \times C_j)$, we draw an edge $(I_1,I_2)$ and label it by the clusters $C_i, C_j$ and the common relative position. As a result, there can be multiple edges $(I_1,I_2)$ in $\mathcal{G}_P$ with different edge labels. The most frequently occurring configuration can now be read out by finding the largest connected component in $\mathcal{G}_P$ induced by retaining only edges with the same label. We use the largest component(s) as the characteristic configurations for a given image label (object class). If the component is very small, then there is not enough information to determine co-occurrences, and we simply use the most frequent single cluster. (This may also be determined by a statistical test.) The final single ``correct'' localization will be the smallest bounding box that contains the full configuration. \section{Conclusion} \vspace{-0.1in} We presented a novel weakly-supervised object detection method that discovers frequent configurations of discriminative visual patterns. We showed that the discovered configurations provide more accurate spatial coverage of the full object and provide a way to generate useful hard negatives. Together, these lead to state-of-the-art weakly-supervised detection results on the challenging PASCAL VOC dataset.
\section{Experiments} \vspace{-0.1in} In this section, we analyze (1) detection performance of the models trained with the discovered configurations, and (2) impact of the discovered hard negatives on detection performance. \textbf{Implementation details.} We employ a recent region-based detection framework \cite{rcnnTR, song-icml2014} and use the same fc7 features from the CNN model \cite{decafTR} on region proposals \cite{selectivesearch} throughout the experiments. For discriminative patch discovery, we use $K=|\mathcal{P}|/2, \theta =K/20$. For correspondence detection, we discretize the 4D transform space of \{$x$: relative horizontal shift, $y$: relative vertical shift, $s$: relative scale, $p$: relative aspect ratio\} with $\Delta x = 30~px, \Delta y = 30~px, \Delta s = 1~px/px, \Delta p = 1~px/px$. We choose this binning scheme by visually examining a few qualitative examples, so that the scale and aspect-ratio agreement between the two paired instances is strict, while their translation agreement is looser in order to handle deformable objects. More details regarding the transform-space binning can be found in \cite{parikh-cvpr08}. \textbf{Discovered configurations.} Figure \ref{fig:cluster_visualization_figure} qualitatively illustrates discovered configurations (green and yellow boxes) and foreground estimates (magenta boxes) that have high degree in graph $\mathcal{G}_P$ for all classes in the PASCAL dataset. Our method consistently finds meaningful combinations such as a wheel and body of bicycles, face and torso of people, locomotive basement and upper body parts of trains/buses, window and body frame of cars, etc. Some failures include cases where the algorithm latches onto different objects co-occurring in consistent configurations such as the lamp and sofa combination (right column, second row from the bottom in Figure \ref{fig:cluster_visualization_figure}).
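The transform-space quantization and edge counting described above can be sketched as follows. This is an illustrative sketch only: the correspondence tuple layout, the box parameterization $(x, y, s, p)$, and the function names are assumptions, while the bin widths ($\Delta x = \Delta y = 30$, $\Delta s = \Delta p = 1$) are the values reported in the implementation details.

```python
# Sketch of 4D transform-space binning: each cluster correspondence between
# two images yields a (dx, dy, ds, dp) transform; correspondences of two
# distinct clusters that land in the same bin for the same image pair
# contribute one labeled edge of the multi-graph G_P.
from collections import Counter, defaultdict

DX, DY, DS, DP = 30.0, 30.0, 1.0, 1.0  # bin widths from the paper

def transform_bin(b1, b2):
    """Quantize the transform taking box b1 = (x, y, s, p) to box b2."""
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    ds, dp = b2[2] / b1[2], b2[3] / b1[3]
    return (int(dx // DX), int(dy // DY), int(ds // DS), int(dp // DP))

def frequent_configurations(correspondences):
    """correspondences: list of (cluster, image_pair, b1, b2).
    Returns a Counter over cluster-pair labels, one count per pair of
    correspondences sharing an image pair and a transform bin."""
    by_pair = defaultdict(list)
    for cluster, images, b1, b2 in correspondences:
        by_pair[images].append((cluster, transform_bin(b1, b2)))
    edge_labels = Counter()
    for items in by_pair.values():
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                (ci, ti), (cj, tj) = items[i], items[j]
                if ti == tj and ci != cj:  # same bin, distinct clusters
                    edge_labels[tuple(sorted((ci, cj)))] += 1
    return edge_labels
```

The most frequent configuration then corresponds to the label with the highest count (in the full method, to the largest connected component of $\mathcal{G}_P$ induced by that label).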
\textbf{Weakly-supervised object detection.} Following the evaluation protocol of the PASCAL VOC dataset, we report detection results on the PASCAL \emph{test} set using detection average precision. For a direct comparison with the state-of-the-art weakly-supervised object detection method \cite{song-icml2014}, we do not use the extra instance-level annotations such as \emph{pose, difficult, truncated} and restrict the supervision to image-level object-presence annotations. Table \ref{tab:detection-full} compares our detection results against two baseline methods \cite{siva1,song-icml2014} which report the result on the full dataset. As shown in Table \ref{tab:detection-full}, our method improves detection performance on the majority of the classes (consistent improvement on rigid man-made object classes). It is worth noting that our method shows significant improvement on the person class (arguably the most important category in the PASCAL dataset). Figure \ref{fig:detection_images} shows some example high scoring detection results on the test set.
\begin{figure*}[htbp] \centering \includegraphics[width=0.25\textwidth, height=2cm]{figures/detection_images/aeroplane_008950.pdf}\hspace{0.1cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/bicycle_009866.pdf}\hspace{0.1cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/car_009243.pdf}\hspace{0.1cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/motorbike_009361.pdf}\hspace{0.1cm} \includegraphics[width=0.245\textwidth, height=2cm]{figures/detection_images/car_009675.pdf}\hspace{0.15cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/person_009612.pdf}\hspace{0.1cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/person_009583.pdf}\hspace{0.1cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/detection_images/bicycle_009198.pdf}\hspace{0.1cm} \caption{Example detections on test set. Green: our method, Red: \cite{song-icml2014}} \label{fig:detection_images} \end{figure*} \vspace{-0.2cm} \textbf{Impact of discovered hard negatives.} To analyze the effect of our discovered hard negatives, we compare to two baseline cases: (1) not adding any negative examples from positive images, and (2) adding image regions around the foreground estimate as conventionally implemented in fully supervised object detection algorithms \cite{pedro2008, rcnnTR}. We use the criterion from \cite{rcnnTR}, where all image regions in positive images with overlap score (intersection area over union area with respect to foreground regions) less than $0.3$ are used as ``neighboring'' negative image regions on positive images. Table \ref{tab:hard-negatives} shows the effect of our hard negative examples in terms of detection average precision, for all classes (mAP).
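The overlap criterion above for the ``neighboring negative'' baseline can be sketched directly. This is a minimal illustration under the usual corner-based box convention $(x_1, y_1, x_2, y_2)$ (an assumption here, as is the function naming); the $0.3$ threshold is the value taken from \cite{rcnnTR}.

```python
# Sketch of intersection-over-union (IoU) and the "neighboring negative"
# selection rule: regions in a positive image whose overlap with the
# estimated foreground box is below 0.3 become negative training regions.

def iou(a, b):
    """Intersection area over union area of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def neighboring_negatives(regions, foreground, threshold=0.3):
    """Keep regions whose overlap with the foreground estimate is < threshold."""
    return [r for r in regions if iou(r, foreground) < threshold]
```

The discovered hard negatives of our method replace `foreground` by the intersecting part regions of a configuration, but the same overlap test applies.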
The experiment shows that adding ``neighboring negative regions'' does not lead to noticeable improvement over not adding any negative regions from positive images, while adding our automatically discovered hard negative regions improves the detection performance more substantially. \begin{figure*}[h!] \centering \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/aeroplane_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/aeroplane_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bicycle_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bicycle_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bird_rank_00004.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bird_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/boat_rank_00003.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/boat_rank_00004.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bottle_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bottle_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bus_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/bus_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/car_rank_00003.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/car_rank_00004.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/cat_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/cat_rank_00002.pdf} \includegraphics[width=0.23\textwidth, 
height=2cm]{figures/results/chair_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/chair_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/cow_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/cow_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/diningtable_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/diningtable_rank_00003.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/dog_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/dog_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/horse_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/horse_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/motorbike_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/motorbike_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/person_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/person_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/pottedplant_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/pottedplant_rank_00002.pdf} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/sheep_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/sheep_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/sofa_rank_00003.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/sofa_rank_00002.pdf} 
\includegraphics[width=0.23\textwidth, height=2cm]{figures/results/train_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/train_rank_00002.pdf}\hspace{0.2cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/tvmonitor_rank_00001.pdf}\hspace{-0.12cm} \includegraphics[width=0.23\textwidth, height=2cm]{figures/results/tvmonitor_rank_00002.pdf} \caption{Example configurations that have high degree in graph $\mathcal{G}_P$. The green and yellow boxes show the discovered discriminative visual parts, and the magenta box shows the bounding box that tightly fits their configuration.} \label{fig:cluster_visualization_figure} \vspace{-0.05in} \end{figure*} \newcommand{\fontseries{b}\selectfont}{\fontseries{b}\selectfont} \begin{table*}[t] \footnotesize \centering \renewcommand{\arraystretch}{1.0} \renewcommand{\tabcolsep}{0.35mm} \begin{tabular}{l *{21}{c}} \toprule & ~aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & pson & plant & sheep & sofa & train & tv & ~mAP\\ \midrule \midrule \cite{siva1} & ~13.4 & 44.0 & 3.1 & 3.1 & 0.0 & 31.2 & 43.9 & 7.1 & 0.1 & 9.3 & 9.9 & 1.5 & \fontseries{b}\selectfont 29.4 & 38.3 & 4.6 & 0.1 & 0.4 & 3.8 & 34.2 & 0.0 & ~ 13.9\\ \midrule \cite{song-icml2014} & ~27.6 & 41.9 & 19.7 & 9.1 & 10.4 & 35.8 & 39.1 & \fontseries{b}\selectfont 33.6 & 0.6 & 20.9 & 10.0 & \fontseries{b}\selectfont 27.7 & \fontseries{b}\selectfont 29.4 & 39.2 & 9.1 & \fontseries{b}\selectfont 19.3 & 20.5 & \fontseries{b}\selectfont 17.1 & 35.6 & 7.1 & ~22.7\\ \midrule \midrule ours + SVM & ~31.9 & 47.0 & 21.9 & 8.7 & 4.9 & 34.4 & 41.8 & 25.6 & 0.3 & 19.5 & \fontseries{b}\selectfont 14.2 & 23.0 & 27.8 & 38.7 & \fontseries{b}\selectfont 21.2 & 17.6 & \fontseries{b}\selectfont 26.9 & 12.8 & \fontseries{b}\selectfont 40.1 & 9.2 & ~23.4\\ \midrule ours + LSVM & ~\fontseries{b}\selectfont 36.3 & \fontseries{b}\selectfont 47.6 & \fontseries{b}\selectfont 23.3 & 
\fontseries{b}\selectfont 12.3 & \fontseries{b}\selectfont 11.1 & \fontseries{b}\selectfont 36.0 & \fontseries{b}\selectfont 46.6 & 25.4 & \fontseries{b}\selectfont 0.7 & \fontseries{b}\selectfont 23.5 & 12.5 & 23.5 & 27.9 & \fontseries{b}\selectfont 40.9 & 14.8 & 19.2 & 24.2 & \fontseries{b}\selectfont 17.1 & 37.7 & \fontseries{b}\selectfont 11.6 & ~\fontseries{b}\selectfont 24.6\\ \bottomrule \end{tabular} \caption{Detection average precision (\%) on full PASCAL VOC 2007 test set.} \label{tab:detection-full} \vspace{-0.1in} \end{table*} \begin{table}[t] \footnotesize \centering \begin{tabular}{l *{4}{c}} \toprule & ~~w/o hard negatives & ~~neighboring hard negatives & ~~discovered hard negatives\\ \midrule \midrule ours + SVM & 22.5 & 22.2 & \fontseries{b}\selectfont 23.4\\ \midrule ours + LSVM & 23.7 & 23.9 & \fontseries{b}\selectfont 24.6\\ \bottomrule \end{tabular} \caption{Effect of our hard negative examples on full PASCAL VOC 2007 test set.} \label{tab:hard-negatives} \vspace{-0.05in} \end{table} \section{Introduction} \vspace{-0.1in} The growing amount of sparsely and noisily labeled image data promotes the need for robustly learning detection methods that can cope with a minimal amount of supervision. A prominent example of this scenario is the abundant availability of labels at the image level (i.e., whether a certain object is present in the image or not), while detailed annotations of the exact location of the object are tedious and expensive and, consequently, scarce. Learning methods that handle image-level labels circumvent the need for such detailed annotations and therefore have the potential to effectively utilize the massive and ever-growing textually annotated visual data available on the Web. In addition, such weakly supervised methods can be more robust than fully supervised ones if the detailed annotations are noisy or ill-defined. 
Motivated by these developments, recent work has explored learning methods that decreasingly rely on strong supervision. Early ideas for weakly supervised detection~\cite{weber,fergus03} paved the way by successfully learning part-based object models, albeit on simple object-centric datasets (e.g., Caltech-101). Since then, a number of approaches \cite{pandey,siva2012defence,song-icml2014} have attempted to learn models on more realistic and challenging data sets that feature large intra-category appearance variations and background clutter. To cope with those difficulties, these methods typically generate multiple candidate regions and retain the ones that occur most frequently in the positively labeled images. However, due to intra-category variations and deformations, the identified (single) patches often correspond to only a part of the object, such as a human face instead of the entire body. Such mislocalizations are a frequent problem for weakly supervised detection methods. Figure~\ref{fig:detection_images} illustrates some examples. Mislocalization and too large or too small bounding boxes are problematic in two respects. First, they obviously affect the accuracy of the detection method. Detection is commonly phrased as multiple instance learning and addressed with non-convex optimization methods that alternatingly guess the location of the objects as positive examples (since the true location is unknown) and train a detector based on those guesses. Such methods are therefore heavily influenced by good initial localizations. Second, a common approach is to train the detector in stages, while adding informative ``hard'' negative examples to the training data. If we are not given accurate true object localizations in the training data, these hard examples must be derived from the detections identified in earlier rounds, and these initial detections may only use image-level annotations. 
The higher the accuracy of the initial localizations, the more informative is the augmented training data -- and this is key to the accuracy of the final learned model. In this work, we address the issue of mislocalizations by identifying characteristic, discriminative \emph{configurations} of multiple patches (rather than a single one). This part-based approach is motivated by the observation that automatically discovered single ``discriminative'' patches often correspond to object parts. In addition, wrong background patches (e.g., patches of water or sky) occur throughout the positive images, but do not re-occur in typical configurations. In particular, we propose an effective method that takes as input a set of images with labels of the form ``the object is present''/``not present'', and automatically identifies characteristic part configurations of the given object. To identify such co-occurrences, we use two main criteria. First, useful patches are \emph{discriminative}, i.e., they occur in many positively labeled images, but not in the negatively labeled ones. To identify such patches, we use a discriminative covering formulation similar to \cite{song-icml2014}. However, our end goal is to discover multiple patches in each image that represent different parts, i.e., they may be close but may not be overlapping too much. In covering formulations, one may discourage overlaps by saying that for two overlapping regions, one ``covers'' the other, i.e., they are treated as identical. But this is a transitive relation, and the density of possible regions in detection would imply that all regions are identical, strongly discouraging the selection of more than one part per image. Partial covers face the challenge of scale invariance. Hence, we take a different approach and formulate an independence constraint. This second criterion ensures that we select regions that may be close but non-redundant and not fully overlapping. 
We show that this constrained selection problem corresponds to maximizing a submodular function subject to a matroid intersection constraint, which leads to approximation algorithms with theoretical worst-case bounds. Given candidate parts identified by those two criteria, we effectively find frequently co-occurring configurations that take into account relative position, scale and viewpoint. We demonstrate multiple benefits of the discovered configurations. First, we observe that combinations of patches can produce more accurate spatial coverage of the full object, especially when the most discriminative pattern corresponds to an object part. Second, any overlapping region between the co-occurring visual patterns is likely to cover a part (but not the whole) of the object of interest (see intersecting regions between green and yellow boxes in Figure~\ref{fig:cluster_visualization_figure}); thus, such regions can be used to generate very informative hard negatives for training, and those can reduce localization errors at test time. In short, our main contribution is a novel weakly-supervised object detection method that automatically discovers frequent configurations of discriminative visual patterns and exploits them for training more robust object detectors. In our experiments on the challenging PASCAL VOC dataset, we find the inclusion of our discriminative, automatically detected configurations to outperform all state-of-the-art methods. \subsection{Related work} \vspace{-0.05in} \textbf{Weakly-supervised object detection.} Training object detectors is usually done in a fully-supervised fashion using tight bounding box annotations that cover the object of interest (e.g.,~\cite{pedro-dpm}). To reduce laborious bounding box annotation costs, recent weakly-supervised approaches~\cite{weber,fergus03,pandey,siva2012defence,song-icml2014} train detectors using binary object-presence labels without any object-location information. 
Early efforts~\cite{weber,fergus03} focused on simple datasets that have a single prominent object in each image (e.g., Caltech-101). More recent approaches~\cite{pandey,siva2012defence,song-icml2014} focus on the more challenging PASCAL dataset, which contains multiple objects in each image and large intra-category appearance variations. Of these, \citet{song-icml2014} achieve state-of-the-art results by finding discriminative image patches that occur frequently in the positive images but rarely in the negative images using deep Convolutional Neural Network (CNN) features~\cite{krizhevsky-nips2012} and a submodular cover formulation. We use a similar approach to identify discriminative patches. But, contrary to \cite{song-icml2014} who assume patches to contain entire objects, we allow patches to contain full objects or merely object \emph{parts}. Thus, we aim to automatically piece together those patches to produce better full-object estimates. To this end, we augment the covering formulation and identify patches that are both representative and explicitly mutually different. We will see that this leads to more robust object estimates and further allows our system to intelligently select ``hard negatives'' (mislocalized objects), both of which improve detection performance. \textbf{Visual data mining.} Existing approaches discover high-level object categories~\cite{sivic-iccv05,grauman-cvpr06,faktor-eccv2012}, mid-level patches~\cite{singh-eccv2012,doersch-siggraph2012,juneja-cvpr2013}, or low-level foreground features~\cite{ff-ijcv} by grouping similar visual patterns (i.e., images, patches, or contours) according to their texture, color, shape, etc. Recent methods~\cite{doersch-siggraph2012,juneja-cvpr2013} use weakly-supervised labels to discover discriminative visual patterns. We use related ideas, but formulate the problem as a submodular optimization over matroids, which leads to approximation algorithms with theoretical worst-case guarantees. 
Covering formulations have also been used in \cite{barinova12,chen14}, but after running a trained object detector. An alternative discriminative approach, but less scalable than covering, uses spectral methods \cite{zou13}. \textbf{Modeling co-occurring visual patterns.} Modeling the spatial and geometric relationship between co-occurring visual patterns (objects or object-parts) often improves visual recognition performance~\cite{weber,fergus03,sivic-cvpr2004,quack-iccv2007,zhang-cvpr2009,ff-ijcv,pedro-dpm,farhadi-cvpr2011,singh-eccv2012,li-cvpr2012}. Co-occurring patterns are usually represented as doublets~\cite{singh-eccv2012}, higher-order constellations~\cite{weber,fergus03} or star-shaped models~\cite{pedro-dpm}. Among these, our work is most inspired by~\cite{weber,fergus03}, which learn part-based models using only weak supervision. However, we use more informative features and a different formulation, and show results on more difficult datasets. Our work is also related to~\cite{li-cvpr2012}, which discovers high-level object compositions (``visual phrases''~\cite{farhadi-cvpr2011}) using ground-truth bounding box annotations. In contrast to~\cite{li-cvpr2012}, we aim to discover part compositions to represent full objects and do so without any bounding box annotations.
\section{Introduction} \object{HD 150136} is the brightest member of the \object{NGC 6193} cluster in the \object{Ara OB1} association and a known non-thermal radio-emitter \citep{ben2006}, i.e.\ an object where particles are accelerated to relativistic energies \citep[see][and references therein]{debrev}. This distinctive feature was the reason for a decade-long observational effort to clarify the nature and properties of HD~150136. Recently, \citet[][hereafter \citetalias{MGS12}]{MGS12} showed that HD~150136 is a triple hierarchical system consisting of an inner binary, with an O3V((f$^{\star}$))-O3.5V((f$^+$)) primary star and an O5.5-6V((f)) secondary star on a $P_\mathrm{in} \approx 2.67$~d orbit, and a third physically bound O6.5-7V((f)) companion on a $P_\mathrm{out} \approx 8$- to 15-year orbit. With a total mass estimated at around 130~$M_{\odot}$, HD~150136 is one of the most massive multiple O-star systems known. It is also the closest such system to Earth harbouring an O3 star. Given the range of possible orbital periods for the outer system, \citet{MGS12} estimated a probable separation on the plane of the sky between the inner pair and the third component of roughly 10 to 40~milli-arcsec (mas). Here, we report on extended spectroscopic monitoring that allows us to obtain the first orbital solution of the outer system. We also report on the very first interferometric detection of the wide system using the Very Large Telescope Interferometer (VLTI). 
\begin{table} \caption{Journal of the new and archival spectroscopic observations.} \label{tab: spectro} \centering \begin{tabular}{c r r r r} \hline \hline HJD & $v_1$ \hspace*{3mm} & $v_2$ \hspace*{3mm} & $v_3$ \hspace*{3mm} \\ $-$2\,450\,000 & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (km\,s$^{-1}$) \\ \hline \vspace*{-1mm}\\ \multicolumn{4}{c}{2000 UVES observation}\\ \vspace*{-1mm}\\ 1726.5548 & 173.5 & $-$368.8 & 29.4 \\ \vspace*{-1mm}\\ \multicolumn{4}{c}{2008 FEROS observation}\\ \vspace*{-1mm}\\ 4658.5184 & $-$107.5 & 128.2 & 27.0 \\ \vspace*{-1mm}\\ \multicolumn{4}{c}{2011 FEROS observations}\\ \vspace*{-1mm}\\ 5642.916 & $-$182.9& 268.4& $-$6.9\\ 5696.790 & $-$208.8& 298.5& $-$4.0\\ 5696.907 & $-$191.8& 269.4& $-$7.7\\ 5697.897 & 186.5& $-$337.0& $-$6.4\\ 5699.583 & $-$198.9& 272.9& $-$6.4\\ 5699.588 & $-$191.7& 268.9& $-$6.7\\ \vspace*{-1mm}\\ \multicolumn{4}{c}{2012 FEROS observations}\\ \vspace*{-1mm}\\ 6048.661 & 148.6& $-$274.7& $-$23.7\\ 6048.758 & 120.7& $-$221.2& $-$27.4\\ 6048.899 & 62.0& $-$150.9& $-$29.5\\ 6049.597 & $-$215.8& 306.1& $-$18.0\\ 6049.715 & $-$221.1& 314.8& $-$18.0\\ 6049.924 & $-$208.2& 280.1& $-$22.3\\ 6050.666 & 119.1& $-$223.4& $-$26.2\\ 6050.843 & 167.7& $-$310.1& $-$25.7\\ 6051.641 & 33.6& $-$114.2& $-$22.1\\ 6051.864 & $-$107.5& 128.2& $-$21.0\\ 6052.646 & $-$192.7& 268.9& $-$20.4\\ 6052.781 & $-$152.4& 219.6& $-$24.9\\ 6052.909 & $-$109.9& 107.7& $-$27.6\\ 6053.660 & 188.6& $-$344.1& $-$27.6\\ 6053.778 & 187.8& $-$350.7& $-$28.3\\ 6053.937 & 168.4& $-$308.4& $-$26.1\\ 6054.649 & $-$152.4& 190.2& $-$26.5\\ 6054.758 & $-$188.9& 240.8& $-$23.4\\ \hline \end{tabular} \end{table} \section{Observations and data reduction} \subsection{Spectroscopy} We used the FEROS spectrograph mounted at the MPG/ESO 2.2m telescope at La Silla (Chile) to obtain new high-resolution optical spectra of HD~150136 that supplement the FEROS data analysed in \citetalias{MGS12}. 
Eighteen FEROS spectra were obtained during an eight-night run in May 2012 (PI: Mahy), providing a continuous wavelength coverage from 3700 to 9200\AA\ at a resolving power of 48\,000. The data were processed as described in \citetalias{MGS12}. In addition, we searched the ESO archives for complementary data. We retrieved six FEROS spectra from 2011 (PI: Barb\'a; 087.D-0946(A)), one FEROS spectrum from 2008 (PI: Barb\'a; 079.D-0564(B)) and one UVES spectrum from July 2000 (PI: Roueff; 065.I-0526(A)). The latter two spectra provide additional constraints on the tertiary component, but not on the systemic velocity of the inner pair, and were therefore used only to calculate the orbital solution of the wider pair. The disentangling procedure described in \citetalias{MGS12} for triple systems was applied to the 2011 and 2012 sets of spectra. We also reprocessed the complete data set, using our cross-correlation technique on all data from 1999 to 2012 to consistently measure the radial velocities (RVs) of the HD~150136 components. The RV values corresponding to the new observational epochs are given in Table~\ref{tab: spectro} along with the journal of the 2011 and 2012 observations. \subsection{Long baseline interferometry} \begin{figure*} \centering \includegraphics[width=18cm]{interfero.pdf} \caption{Calibrated visibilities (left panel) and closure phases (middle panel) from PIONIER in August 2012, overlaid with the best-fit binary model (red solid lines). The right panel shows the $\chi^2$ map in the vicinity of the best-fit solution. } \label{fig:interfero} \end{figure*} Interferometric data were obtained in June and August 2012 with the PIONIER combiner \citep[][]{Le-Bouquin:2011} and the Auxiliary Telescopes (ATs) at the VLTI \citep{Haguenauer:2010}. Fringes were dispersed over three spectral channels across the H-band (1.50 - 1.80\,$\mu$m). 
The ATs were located in configuration A1-G1-K0-I1, providing six projected baselines ranging from 30 to 120~m and a maximum angular resolution of 2~mas in the H-band. Data were reduced and calibrated with the \texttt{pndrs} package \citep[][]{Le-Bouquin:2011}. Calibration stars were chosen from the JMMC Stellar Diameters Catalog \citep{Lafrasse:2010uq}. The closure phases and visibilities were modelled with two unresolved sources because the expected diameters of the individual components ($<0.1$\ mas) as well as the separation of the inner pair ($<0.2$\ mas) are largely unresolved by the longest VLTI baselines. The data show no evidence that these assumptions may be wrong ($\chi^2_\mathrm{red}\approx1.2$). We used the \texttt{LITpro}\footnote{\tiny\url{www.jmmc.fr/litpro}} software \citep{Tallon-Bosc:2008} to extract the best-fit binary parameters, namely the flux ratio (assumed to be constant across the H-band) and the astrometric separation. The closure phases were also analysed independently using the method presented in \citet{Absil:2011}, and the results were in excellent agreement. HD\,150136 was clearly resolved on both dates, with a slight decrease in separation between June and August (Table~\ref{tab: interfero}). Figure~\ref{fig:interfero} shows the data obtained in August 2012 overlaid with the best-fit binary model. The final accuracy is dominated by a 2\%\ uncertainty in the wavelength calibration of PIONIER. Adopting a distance of $1.32 \pm 0.12$~kpc \citep{HeH77}, the measured angular separations translate into projected distances of $12.2\pm 1.1$ and $11.4\pm 1.1$~AU for the June and August observations. The 1$\sigma$\ error-bars are dominated by the 10\%\ uncertainty on the distance. 
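The conversion from measured angular separation to projected distance used above is the standard small-angle relation: a separation of $\theta$ arcseconds seen at a distance of $d$ parsecs spans $\theta \times d$ astronomical units. A minimal sketch with the values quoted in the text:

```python
# Small-angle relation: separation [AU] = angle [arcsec] * distance [pc].

def projected_separation_au(theta_mas, distance_pc):
    """Projected separation in AU from an angular separation in mas."""
    return (theta_mas / 1000.0) * distance_pc  # mas -> arcsec, then AU

# June and August 2012 PIONIER separations at the adopted 1.32 kpc:
r_june = projected_separation_au(9.26, 1320.0)  # ~12.2 AU
r_aug = projected_separation_au(8.66, 1320.0)   # ~11.4 AU
```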
\begin{table} \caption{Interferometric best-fit measurements and 1$\sigma$\ error-bars.} \label{tab: interfero} \centering \begin{tabular}{l l l l } \hline\hline & & \multicolumn{2}{c}{Observing date} \\ Parameter & Unit & 2012-06-10 & 2012-08-15 \\ \hline \vspace*{-3mm}\\ HJD &($-$2\,450\,000) & \hspace*{4mm} 6088.595 &\hspace*{4mm} 6154.573 \\ $(f_3/f_{1+2})_{1.65\mu}$ & & \hspace*{2.5mm}$0.24 \pm 0.02$ & \hspace*{2.5mm}$0.24 \pm 0.02$ \\ $\delta$E & (mas) & $-7.96 \pm 0.16$ & $-6.98 \pm 0.14$ \\ $\delta$N & (mas) & $-4.73 \pm 0.09$ & $-5.13 \pm 0.10$ \\ $r$ & (mas) & \hspace*{2.5mm}$9.26 \pm 0.19$ & \hspace*{2.5mm}$8.66\pm0.17$ \\ $\theta$ & (\degr) & \hspace*{0.7mm}$239.2 \pm 1.0$ & \hspace*{0.7mm}$233.7\pm 1.1$ \\ $\chi^2_\mathrm{red}$ & & \hspace*{8mm} 1.2 & \hspace*{8mm} 1.4 \\ \vspace*{-3mm}\\ \hline \end{tabular} \end{table} \section{Orbital properties} \begin{table} \caption{Best-fit circular orbital solutions of the inner pair for the 2011 and 2012 campaigns.} \label{tab: RV12} \centering \begin{tabular}{l l l l } \hline\hline Parameter & Unit & Primary & Secondary \\ \hline \vspace*{-1mm}\\ \multicolumn{4}{c}{ 2011 SB2 solution }\\ \vspace*{-1mm}\\ $P_\mathrm{in}$ & (d) & \multicolumn{2}{c}{ 2.67454 (fixed) } \\ $T_0$ &(HJD$-$2\,450\,000) & \multicolumn{2}{c}{ 1327.136 $\pm$ 0.006} \\ $M_2/M_1$ & & \multicolumn{2}{c}{ \hspace*{5.5mm}0.623 $\pm$ 0.008} \\ $\gamma$ &(km\,s$^{-1}$) & $-15.7 \pm 2.4$ & $-12.1 \pm 3.2$ \\ $K$ &(km\,s$^{-1}$) & \hspace*{.7mm}$208.3 \pm 2.3$ & \hspace*{.7mm}$334.5 \pm 3.6$ \\ r.m.s. 
&(km\,s$^{-1}$) & \multicolumn{2}{c}{\hspace*{3mm} 6.2} \\ \vspace*{-1mm}\\ \multicolumn{4}{c}{ 2012 SB2 solution} \\ \vspace*{-1mm}\\ $P_\mathrm{in}$ &(d) & \multicolumn{2}{c}{ 2.67454 (fixed) } \\ $T_0$ &(HJD$-$2\,450\,000) & \multicolumn{2}{c}{1327.151 $\pm$ 0.005 } \\ $M_2/M_1$ & & \multicolumn{2}{c}{ \hspace*{5.5mm}0.630 $\pm$ 0.007 } \\ $\gamma$& (km\,s$^{-1}$) & $-20.4 \pm 2.0$ & $-12.1 \pm 2.8 $ \\ $K$ &(km\,s$^{-1}$) & \hspace*{.7mm}$214.4 \pm 2.5$ & \hspace*{.7mm}$340.4 \pm 3.9 $ \\ r.m.s.& (km\,s$^{-1}$) & \multicolumn{2}{c}{ \hspace*{3mm}13.1} \\ \hline \end{tabular} \end{table} \subsection{The close pair} We used the Li{\`e}ge Orbital Solution Package\footnote{LOSP is developed and maintained by H. Sana and is available at http://www.science.uva.nl/$\sim$hsana/losp.html. The algorithm is based on the generalization of the SB1 method of \citet{wol67} to the SB2 case along the lines described in \citet{rau00} and \citet{san06a}.} (LOSP) to compute the orbital solutions of the inner pair during the 2011 and 2012 campaigns (Table~\ref{tab: RV12}). This provided a measurement of the systemic velocity of the inner pair at these epochs. We fixed the orbital period to the same value as that of \citetalias{MGS12}, i.e.\ 2.67454~days. For both campaigns, the mass ratio, the semi-amplitudes, and the projected semi-major axis are similar to those found in Paper~I. The difference between the systemic velocities of the primary and of the secondary components is larger for the 2012 solution but, globally, both orbital solutions of Table~\ref{tab: RV12} agree with each other and with the solution determined in \citetalias{MGS12}. \subsection{The third companion} \subsubsection{Spectroscopic orbital solution} We determined the systemic velocities of the inner system for each short-term campaign since 1999 that comprised more than one spectrum. These values are put in perspective with the RVs of the tertiary component in Fig.~\ref{fig: RV3}. 
There is a clear, anti-correlated periodic motion of the systemic velocity of the close binary system and of the tertiary component. We performed a Fourier analysis on the tertiary RVs using the Heck-Manfroid-Mersch method \citep[][refined by \citealt{gos01}]{hec85}. The highest peak in the periodogram indicates a period of about $3335 \pm 260$~days. We adopted this value as our initial guess and used LOSP to refine the period estimate. We computed an SB1 orbital solution from the RVs of the tertiary alone and an SB2 solution using the tertiary RVs and the systemic velocities of the inner pair ($\gamma_{12}$). The two solutions are in excellent agreement and the SB2 RV solution is given in Table~\ref{tab: spectro2}. The reliability of the RV-only solution, especially its eccentricity and the semi-amplitudes of the RV curves, is unfortunately limited by a lack of sampling around periastron. \begin{table} \caption{Best-fit LOSP RV solution of the wide system.} \label{tab: spectro2} \centering \begin{tabular}{l l l l} \hline\hline & & \multicolumn{2}{c}{LOSP RV solution} \\ Parameter & Unit & Inner pair & Tertiary \\ \hline \vspace*{-3mm}\\ $P_\mathrm{out}$ &(d) & \multicolumn{2}{c}{ $2980 \pm 71$ } \\ $e$ & & \multicolumn{2}{c}{ \hspace*{3.5mm}$0.60 \pm 0.14$} \\ $\omega$ &(\degr) & \multicolumn{2}{c}{ $259.2\pm 5.8$} \\ $T$ &(HJD$-$2\,450\,000) & \multicolumn{2}{c}{ \hspace*{1.7mm}$1193 \pm 104$} \\ $M_3/M_{1+2}$ & & \multicolumn{2}{c}{ \hspace*{3.5mm}$0.31 \pm 0.07$} \\ $\gamma$& (km\,s$^{-1}$) & $-21.6 \pm 2.7$ & $-16.1 \pm 4.9$ \\ $K$ &(km\,s$^{-1}$) & \hspace*{2.2mm}$16.7 \pm 4.5$ & \hspace*{2.2mm}$53.6 \pm 14.4$ \\ r.m.s.& (km\,s$^{-1}$) & \multicolumn{2}{c}{ \hspace*{3.mm}5.2} \\ \hline \end{tabular} \end{table} \subsubsection{Simultaneous RV and astrometric orbital solution} The separations measured by PIONIER in June and August 2012 are relatively small and indicate an eccentricity at the upper end of the confidence interval of the RV-only solution of 
Table~\ref{tab: spectro2}. We therefore proceeded to simultaneously adjust the RV and astrometric measurements of the outer pair. We minimized the squared differences between the measurements and the model \begin{eqnarray} \chi^2& = & \sum \left( \frac{\gamma_{12} - \gamma_{12}^\mathrm{mod}}{\sigma_{12} } \right)^2 + \sum \left( \frac{ v_{3} - v_{3}^\mathrm{mod}}{\sigma_{3} } \right)^2 \nonumber \\ & + & \sum \left( \frac{ \delta E - \delta E^\mathrm{mod}}{\sigma_{\delta E} } \right)^2 + \sum \left( \frac{ \delta N - \delta N^\mathrm{mod}}{\sigma_{\delta N}} \right)^2 \end{eqnarray} using a Levenberg-Marquardt method, adopting the RV-only solution of Table~\ref{tab: spectro2} as a starting point. Based on the residuals of the tertiary and the systemic RVs around the best-fit RV curves, we adopted $\sigma_3=3.4$ and $\sigma_{12}=6.8$~km\,s$^{-1}$\ as typical uncertainties on $v_3$ and $\gamma_{12}$. The three-dimensional orbital solution converges towards a higher eccentricity (although still within error-bars), hence towards larger RV curve semi-amplitudes, than the RV-only solution. All other parameters remain unchanged. The best-fit combined solution is given in Table~\ref{tab: final} and is shown in Fig.~\ref{fig: RV3}. To estimate the uncertainties, we performed Monte-Carlo (MC) simulations. We randomly drew input RVs and astrometric positions around the best-fit orbital solution at epochs corresponding to our observing dates. We used normal distributions with standard deviations corresponding to the $1\sigma$ measurement uncertainties. We also included the uncertainty on the distance by drawing the distance from a normal distribution centered on 1320~pc and with a standard deviation of 120~pc. We recomputed the best-fit orbital solution and inclination using 1000 simulated data sets and we constructed the distributions of the output parameters. The medians of the distributions match the best-fit values of Table~\ref{tab: final} very well. 
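The Monte-Carlo uncertainty procedure just described can be sketched generically: redraw synthetic data around the best-fit model at the observed epochs, refit each draw, and read the parameter scatter off the resulting distributions. The linear toy model below stands in for the full RV-plus-astrometry orbit fit (an illustrative assumption, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the orbit fit: straight-line model y = a*t + b.
t = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y_obs = 2.0 * t + 1.0 + rng.normal(0.0, sigma, t.size)

def best_fit(y):
    # Least-squares fit; the paper uses Levenberg-Marquardt on the full
    # RV + astrometric chi^2, but a linear fit suffices to illustrate.
    return np.polyfit(t, y, 1)

a_hat, b_hat = best_fit(y_obs)

# Monte-Carlo: redraw data around the best-fit model at the observed
# epochs, refit, and collect the parameter distributions.
draws = np.array([best_fit(a_hat * t + b_hat + rng.normal(0.0, sigma, t.size))
                  for _ in range(1000)])

med = np.median(draws, axis=0)
lo = med - np.percentile(draws, 16, axis=0)  # distance to 0.16 quantile
hi = np.percentile(draws, 84, axis=0) - med  # distance to 0.84 quantile
err = 0.5 * (lo + hi)                        # averaged error-bar, as in the text
```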
We used the distances between the median and the 0.16 and 0.84 percentiles as uncertainty estimates. The upper and lower error-bars agree within 10\%\ for all parameters; Table~\ref{tab: final} provides the average of both values. \begin{table} \caption{Best-fit simultaneous RV and astrometric orbital solution of the wide system. The corresponding RV curves and projected orbit on the plane of sky are displayed in Fig.~\ref{fig: RV3}. } \label{tab: final} \centering \begin{tabular}{llll} \hline\hline & & \multicolumn{2}{c}{Combined solution}\\ Parameter & Unit & Inner pair & Tertiary \\ \hline \vspace*{-3mm}\\ $P_\mathrm{out}$ &(d) & \multicolumn{2}{c}{ $3008 \pm 54$ } \\ $e$ & & \multicolumn{2}{c}{\hspace*{3.5mm}$0.73\pm0.05$ } \\ $\omega$ &(\degr) & \multicolumn{2}{c}{ $250.7\pm2.9$} \\ $T$ &(HJD$-$2\,450\,000) & \multicolumn{2}{c}{ $1241\pm38$} \\ $M_3/M_{1+2}$ & & \multicolumn{2}{c}{ \hspace*{3.5mm}$0.32\pm0.08$} \\ $i_\mathrm{out}$ &(\degr) & \multicolumn{2}{c}{ $107.9\pm3.0$} \\ $\Omega$ &(\degr) & \multicolumn{2}{c}{ \hspace*{1.5mm}$-60\pm10$} \\ \\ $\gamma$ &(km\,s$^{-1}$) & $-19.9 \pm 2.2$ & $-21.5 \pm 2.0$ \\ $K$ &(km\,s$^{-1}$) & \hspace*{2.3mm}$25.8 \pm 7.6$ & \hspace*{2.3mm}$79.8 \pm 11$ \\ $a$ &(A.U.) & \hspace*{4mm}$5.1 \pm 1.2$ & \hspace*{2.3mm}$15.8 \pm 0.6$ \\ $M$ &($M_{\odot}$) & \hspace*{3.2mm}$102 \pm 16$ & \hspace*{5mm}$ 33 \pm 12$ \\ \\ $\chi^2_\mathrm{red}$ & & \multicolumn{2}{c}{\hspace*{3mm} 0.94} \\ \vspace*{-3mm}\\ \hline \end{tabular} \end{table} \section{Discussion} \subsection{Physical properties} Using the best-fit three-dimensional orbit, absolute mass estimates are $M_3=33\pm12$~$M_{\odot}$\ and $M_{1+2}=102\pm16$~$M_{\odot}$\ for the third companion and the total mass of the inner pair, respectively. Combining the total absolute mass of the inner pair with the spectroscopic minimum masses of \citetalias{MGS12} (table 4), we constrain the inclination of the inner binary to values of $49.6\pm3.6$\degr. 
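The absolute masses follow from the fitted semi-major axes and the period through Kepler's third law. A quick consistency check on the tabulated values (the check itself is ours, not from the paper):

```python
# Kepler's third law in solar units: M_total [Msun] = a^3 [AU] / P^2 [yr],
# applied to the best-fit outer orbit of Table 5 as a consistency check.

P_yr = 3008.0 / 365.25        # outer period, days -> years
a_tot = 5.1 + 15.8            # a_12 + a_3 in AU (combined solution)

M_total = a_tot**3 / P_yr**2  # ~135 Msun, vs. 102 + 33 from the table

# Mass ratio from the semi-major axes: M3/M_{1+2} = a_12/a_3 ~ 0.32,
# matching the tabulated value.
q = 5.1 / 15.8
```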
This value is compatible with the absence of eclipses in the inner system. The best mass estimates for the primary and secondary are thus $M_1=62.6\pm10.0$ and $M_2=39.5\pm6.3$~$M_{\odot}$. The agreement with the expected masses given the estimated spectral types is remarkably good \citep{MSH05}. The dynamical masses also agree within errors with the evolutionary masses obtained in \citetalias{MGS12}: $70.6_{-9.1}^{+11.4}$, $36.2_{-1.6}^{+5.0}$ and $27.0_{-3.5}^{+3.0}$~$M_{\odot}$\ for components 1, 2, and 3, respectively. The measured flux-ratio in the H-band, $(f_3/f_{1+2})_{1.65\mu}= 0.24$, also agrees well with the expected H-band flux-ratio of 0.26 given the component spectral types \citep{MaP06}, which confirms the main-sequence nature of the tertiary object \citepalias[see discussion in][]{MGS12}. \subsection{Non-thermal emission} \label{sect: nt} The present study allows us to clarify, to some extent, the origin of the non-thermal radio emission. As argued in Paper\,I, the synchrotron radio emission probably comes from the colliding winds in the wide orbit. The radius of the photosphere ($\tau = 1$) for the primary (whose wind dominates the free-free opacity in the system) is expected to be at most 850\,$R_{\odot}$\ (at $\lambda=20$\,cm), strongly suggesting that the stagnation point of the collision zone must be located farther away (see Paper\,I). For the tertiary star, we assume two typical values for the mass-loss rate: $\mathrm{\dot M}_{cl}$\,=\,10$^{-7}$\,$M_{\odot}$\,yr$^{-1}$ \citep[{\it classical} value,][]{muijres2012} and $\mathrm{\dot M}_{ww}$\,=\,10$^{-9}$\,$M_{\odot}$\,yr$^{-1}$ (more representative of the {\it weak-wind} case). We also adopt a terminal velocity of 2500\,km\,s$^{-1}$, corresponding to $2.6 \times v_\mathrm{esc}$ with $v_\mathrm{esc} \approx 970$~km\,s$^{-1}$\ for O6.5-7~V stars \citep{muijres2012}. 
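Stagnation-point estimates of this kind follow from wind momentum-flux (ram-pressure) balance along the line of centres: the collision zone sits where $\dot{M}_1 v_1 / r_1^2 = \dot{M}_3 v_3 / r_3^2$. The sketch below assumes illustrative values for the combined wind of the inner pair (its mass-loss rate and terminal velocity are not given here), so its absolute numbers should not be compared directly with the estimates quoted in the text:

```python
import math

# Ram-pressure balance for colliding winds: the stagnation point lies at
# r_weak = D * sqrt(eta) / (1 + sqrt(eta)) from the weaker-wind star,
# with eta = (Mdot_weak * v_weak) / (Mdot_strong * v_strong).

def stagnation_distance(D, mdot_w, v_w, mdot_s, v_s):
    """Distance of the stagnation point from the weaker-wind star
    for two stars separated by D (units cancel inside eta)."""
    eta = (mdot_w * v_w) / (mdot_s * v_s)
    return D * math.sqrt(eta) / (1.0 + math.sqrt(eta))

AU_RSUN = 215.0                 # approximate solar radii per AU
a, e = 20.9, 0.73               # outer orbit (combined solution)
D_peri = a * (1 - e) * AU_RSUN  # separation at periastron, R_sun
D_apo = a * (1 + e) * AU_RSUN   # separation at apastron, R_sun

# 'Classical' tertiary wind from the text; primary-wind values below
# (1e-5 Msun/yr, 3000 km/s) are assumed for illustration only.
r_peri = stagnation_distance(D_peri, 1e-7, 2500.0, 1e-5, 3000.0)
r_apo = stagnation_distance(D_apo, 1e-7, 2500.0, 1e-5, 3000.0)
```

Because the balance is scale-free, the periastron-to-apastron ratio of the stagnation distances equals $(1+e)/(1-e)$ regardless of the assumed wind parameters.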
On these bases, we estimate that, in the classical-wind case, the stagnation point is located at about 260 and 1630\,$R_{\odot}$\ from the tertiary at periastron and at apastron. In the weak-wind case, these distances reduce to about 30 and 200\,$R_{\odot}$. Using the same approach as described in Paper\,I, we find that the extension of the radio photosphere of the tertiary star is shorter than the distance between the tertiary component and the stagnation point of the colliding winds, whatever the wavelength and whatever the assumption about the nature of the stellar wind (classical or weak). In the classical-wind case, however, the stagnation point very close to periastron could be located below the radio photosphere at longer wavelengths (13 and 20\,cm) where HD\,150136 is significantly detected, but the observations reported by \citet{ben2006} were performed in December 2003, which is far from periastron passage according to our ephemeris. In the weak-wind case, the stagnation point is always located significantly outside the photosphere of the dense primary wind, even at periastron, and at all wavelengths. These facts are in agreement with the rather high flux densities measured between 3 and 20~cm, as reported by \citet{ben2006}. \begin{figure} \centering \includegraphics[width=\columnwidth]{rv4.pdf} \caption{Main panel: Evolution of the third component RVs (triangles) and of the systemic velocity of the O+O inner system (squares) as a function of time. The best-fit RV curves are overlaid (Table~\ref{tab: final}). Insert: PIONIER astrometric points and projection of the best-fit relative orbit on the plane of the sky. The dashed line shows the line of nodes. } \label{fig: RV3} \end{figure} As discussed in \citet{deb2012} for a similar system, the variability in the synchrotron radio flux from a colliding-wind binary most likely comes from two changing factors: stellar separation and free-free absorption. 
Following the discussion above, absorption is unlikely to dominate. Any temporal change of the HD\,150136 flux densities would therefore be mainly attributed to the varying stellar separation along the eccentric wide orbit. One may therefore expect the radio flux density to reach its maximum very close to periastron, with a minimum at apastron. Radio monitoring of the wide orbit of HD\,150136 (so far non-existent) is required to validate this scenario and to achieve a more detailed description of the relevant non-thermal processes. Given the expected location of the stagnation point, the wind interaction zone may be resolvable from the stars by very large baseline radio interferometric facilities. Imaging the wind-wind collision at several epochs would help to distinguish between a weak wind, as possibly suggested by the CMFGEN analysis in \citetalias{MGS12}, and a normal wind for the tertiary star. \section{Summary} We reported the very first interferometric detection of the outer companion in the hierarchical triple system HD~150136. Combining the interferometric measurements and the new and archival spectroscopy with data from \citetalias{MGS12}, we obtained the first three-dimensional orbital solution of the wider system. The best-fit solution indicates an 8.2-yr period and a high eccentricity ($e \mathrel{\hbox{\rlap{\hbox{\lower3pt\hbox{$\sim$}}}\hbox{\raise2pt\hbox{$>$}}}} 0.7$). The accuracy of the PIONIER interferometric measurements allowed us to constrain the inclination of the outer orbit, and, from there, of the inner pair to within a few degrees only. We constrained the masses of the three stars of the system to 63, 40, and 33~$M_{\odot}$\ for the O3-3.5~V, O5.5-6~V, and O6.5-7~V components. In particular, this is the first direct measurement of the mass of an early main-sequence O star. We showed that the obtained dynamical masses agree within errors with the evolutionary masses estimated in \citetalias{MGS12}. 
Although relative error-bars remain at the 15\%-level for the masses of the inner pair components and at the 30\%-level for the tertiary mass, spectroscopic observations around the next periastron passage in 2015 and further interferometric monitoring over a significant fraction of the orbit may provide direct mass measurements with uncertainties of only a couple of per cent. They will also allow for an accurate independent measurement of the distance to the system. This is a necessary step to accurately test high-mass star evolutionary models and to investigate the reality and origin of the mass discrepancy in more detail \citep[e.g.\ ][]{HKV92, WeV10}. As for other non-thermal emitters (\object{HD 93250}, \citealt{SLBDB11}; \object{HD 167971}, \citealt{deb2012}), the present study also shows that long baseline interferometry is ideally suited to resolve the components of O-type non-thermal radio emitters. The significant advances in the three-dimensional description of the orbits of HD\,150136 form a solid basis for future investigations to understand the non-thermal emission and particle acceleration processes at work in massive multiple systems. \begin{acknowledgements} PIONIER is funded by the Universit\'e Joseph Fourier (UJF, Grenoble) through its Poles TUNES and SMING and the vice-president of research, the Institut de Plan\'etologie et d'Astrophysique de Grenoble, the ``Agence Nationale pour la Recherche'' with the program ANR EXOZODI, and the Institut National des Sciences de l'Univers (INSU) with the programs ``Programme National de Physique Stellaire'' and ``Programme National de Plan\'etologie''. The integrated optics beam combiner is the result of a collaboration between IPAG and CEA-LETI based on CNES R\&T funding. The authors thank all the people involved in the VLTI project. The use of Yassine Damerdji's orbital code is also warmly acknowledged. 
This study made use of the Smithsonian/NASA Astrophysics Data System (ADS), of the Centre de Donn\'ees astronomiques de Strasbourg (CDS) and of the Jean-Marie Mariotti Center (JMMC). Some calculations and graphics were performed with the freeware \texttt{Yorick}. \end{acknowledgements}
\section{Introduction} Rogue (freak) waves can be described as high-amplitude waves whose height exceeds $2-2.2$ times the significant wave height in a wavefield. They have been studied extensively in recent years \cite{Akhmediev2009b, Akhmediev2009a, bayindir2016, Akhmediev2011}. The research emerged from the investigation of one of the simplest nonlinear models, the nonlinear Schr\"{o}dinger equation (NLSE) \cite{Akhmediev2009b}. Discovery of the unexpected rogue wave solutions of the NLSE resulted in seminal studies of rogue wave dynamics, such as in Ref.\cite{Akhmediev2009b}. Their existence is not restricted to optical media \cite{FirstOpticalRW}; they can also be observed in hydrodynamics, Bose-Einstein condensates, acoustics and finance, just to name a few \cite{Akhmediev2009b, bayindir2016}. It is natural to expect that rogue waves can also emerge in a medium whose dynamics are described by the NLSE and NLSE-like equations. In this study we consider optical rogue waves, for which analyzing the dynamics, shapes and statistics of rogue wavy optical fields is crucially important to satisfy certain power and communication constraints. On the other hand, quantum Zeno dynamics \cite{MisraSudarshan,Facchi2008JPA}, which is the inhibition of the evolution of an unstable quantum state by appropriately frequent observations during a time interval, has attracted intense attention in quantum science, usually for protecting the quantum system from decaying due to inevitable interactions with its environment. It emerged that observation alters the evolution of an atomic particle and can even stop it \cite{HarochePRL2010, HarochePRA2012, HarocheNP2014}. Duan and Guo showed that the dissipation of two particles can be prevented \cite{Guo1998PRL,Guo1998PRA}, Viola and Lloyd proposed a dynamical suppression of decoherence of qubit systems \cite{Lloyd1998PRA}, Maniscalco et al. 
proposed a strategy to fight against the decoherence of the entanglement of two atoms in a lossy resonator \cite{Maniscalco2008PRL}, Nourmandipour et al. studied Zeno and anti-Zeno effects on the entanglement dynamics of dissipative qubits coupled to a common bath \cite{Nourmandipour2016JOSAB}, and Bernu et al. froze the coherent field growth in a cavity \cite{Bernu2008PRL}. Very recently, Facchi et al. studied the large-time limit of the quantum Zeno effect \cite{Facchi17JMP}. Quantum Zeno dynamics can also be used for realizing controlled operations and creating entanglement. Creation of entanglement is a major issue in quantum information science, requiring controlled operations such as CNOT gates between qubits, which is usually a demanding task. As the number of qubits exceeds two, multipartite entanglement emerges in inequivalent classes such as GHZ, W and cluster states, which cannot be transformed into each other via local operations and classical communications. The preparation of multipartite entangled states (especially W states) requires not only even more controlled operations but also novel methods \cite{OzaydinW1, OzaydinW2, OzaydinW3, OzaydinW4, OzaydinW5}. Wang et al. proposed a collective threshold measurement scheme for creating bipartite entanglement, avoiding the difficulty of applying CNOT gates or performing Bell measurements \cite{Wang2008PRA}, which can be extended to multipartite entangled states. Chen et al. proposed to use Zeno dynamics for the generation of W states robust against decoherence and photon loss \cite{Chen2016OptComm}, and Barontini et al. experimentally demonstrated the deterministic generation of W states by quantum Zeno dynamics \cite{Barontini2015Science}. Nakazoto et al. further showed that purifying quantum systems is possible via Zeno-like measurements \cite{Nakazoto20013PRL}. The optical analogue of the quantum Zeno effect has been receiving increasing attention. Yamane et al. 
reported the Zeno effect in optical fibers \cite{Yamane2001OptComm}. Longhi proposed an optical lattice model including tunneling-coupled waveguides for the observation of the optical Zeno effect \cite{Longhi2006PRL}. Leung and Ralph proposed a distillation method for improving the fidelity of optical Zeno gates \cite{Leung2006PRA}. Biagioni et al. experimentally demonstrated the optical Zeno effect by scanning tunneling optical microscopy \cite{Biagioni2008OptExp}. Abdullaev et al. showed that it is possible to observe the optical analogue of not only linear but also nonlinear quantum Zeno effects in a simple coupler, and they further proposed a setup for the experimental demonstration of these effects \cite{Abdullaev2011PRA}. McCusker et al. utilized the quantum Zeno effect for the experimental demonstration of interaction-free all-optical switching \cite{Kumar2013PRL}. Thapliyal et al. studied quantum Zeno and anti-Zeno effects in nonlinear optical couplers \cite{Thapliyal2016PRA}. In this paper we numerically investigate the optical analogue of the quantum Zeno dynamics of rogue waves. With this motivation, in the second section of this paper we review the NLSE and the split-step Fourier method for its numerical solution. We also review a procedure applied to the wavefunction to model the Zeno dynamics of an observed system. In the third section of this paper, we analyze the Zeno dynamics of the Akhmediev breathers and the Peregrine and Akhmediev-Peregrine soliton solutions of the NLSE, which are used as models to describe rogue waves. We show that frequent measurements of the wave inhibit its motion in the observation domain for each of these types of rogue waves. We also analyze the spectra of the rogue waves under Zeno dynamics and discuss the effect of the observation frequency on the rogue wave profile and on the probability of the wave lingering in the observation domain.
In the last section we conclude our work and summarize future research tasks. \section{Nonlinear Schr\"{o}dinger Equation and Zeno Effect} It was shown that all the features of linear quantum mechanics can be reproduced by the NLSE \cite{Richardson2014PRA}, and the quantum NLSE can accurately describe quantum optical solitons in photonic waveguides with Kerr nonlinearity \cite{Carter1987PRL,Drummond1987JOSAB,Lai1989PRA844,Lai1989PRA854}. The bosonic matter wave field for weakly interacting ultracold atoms in a Bose-Einstein condensate evolves according to the quantum NLSE \cite{LeggettRMP2001,MorschRMP2006}. Many nonlinear phenomena observed in fiber optics are generally studied in the frame of the NLSE \cite{Akhmediev2009b}. Optical rogue waves are one of those phenomena, and rational rogue wave soliton solutions of the NLSE are accepted as accurate optical rogue wave models \cite{Akhmediev2009b}. In order to analyze the Zeno dynamics of rogue waves, we consider the nondimensional NLSE given as \begin{equation} i\psi_t + \frac{1}{2} \psi_{xx} + \left|\psi \right|^2 \psi =0, \label{eq01} \end{equation} where $x$ and $t$ are the spatial and temporal variables, respectively, $i$ is the imaginary unit, and $\psi$ is the complex amplitude. It is known that the NLSE given by Eq.(\ref{eq01}) admits many different types of analytical solutions. Some of these solutions are reviewed in the next section of this paper. For arbitrary wave profiles, where the analytical solution is unknown, the NLSE can be solved numerically by a split-step Fourier method (SSFM), which is one of the most commonly used spectral methods. As in other spectral methods, the spatial derivatives in the SSFM are calculated using spectral techniques.
Some applications of the spectral techniques can be seen in Refs.\cite{bay2009, Bay_arxNoisyTun, BayTWMS2016, bay_cssfm, Agrawal, Bay_arxNoisyTunKEE, demiray, Bay_arxEarlyDetectCS, Karjadi2010, Karjadi2012, Bay_cssfmarx, bayindir2016nature, Bay_arxChaotCurNLS, BayPRE1, BayPRE2, Bay_CSRM}, and a more comprehensive analysis can be found in Refs.\cite{canuto, trefethen}. The temporal derivatives in the governing equations are usually calculated using time integration schemes such as Adams-Bashforth, Runge-Kutta, etc. \cite{canuto, trefethen, demiray}; the SSFM, however, uses an exponential time stepping function for this purpose. The SSFM is based on the idea of splitting the equation into two parts, namely the linear and the nonlinear parts. Time stepping is then performed starting from the initial conditions. In a possible splitting we take the first part of the NLSE as \begin{equation} i\psi_t= -\left| \psi \right|^2\psi \label{eq02} \end{equation} which can be solved exactly as \begin{equation} \tilde{\psi}(x,t_0+\Delta t)=e^{i \left| \psi(x,t_0)\right|^2 \Delta t}\ \psi_0, \label{eq03} \end{equation} where $\Delta t$ is the time step and $\psi_0=\psi(x,t_0)$ is the initial condition. The second part of the NLSE can be written as \begin{equation} i\psi_t=- \frac{1}{2} \psi_{xx}. \label{eq04} \end{equation} Using a Fourier series expansion we obtain \begin{equation} \psi(x,t_0+\Delta t)=F^{-1} \left[e^{-i k^2 \Delta t/2}F[\tilde{\psi}(x,t_0+\Delta t) ] \right], \label{eq05} \end{equation} where $k$ is the wavenumber \cite{bay_cssfm, Agrawal}. Substituting Eq.(\ref{eq03}) into Eq.(\ref{eq05}), the final form of the SSFM becomes \begin{equation} \psi(x,t_0+\Delta t)=F^{-1} \left[e^{-i k^2 \Delta t/2}F[ e^{i \left| \psi(x,t_0)\right|^2 \Delta t}\ \psi_0 ] \right]. \label{eq06} \end{equation} Starting from the initial conditions, the time integration of the NLSE can be carried out by the SSFM. Two fast Fourier transform (FFT) operations per time step are needed for this form of the SSFM.
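For concreteness, one SSFM time step of Eq.(\ref{eq06}) can be sketched in a few lines of Python/NumPy; the function name, grid spacing and step size below are our own illustrative choices, not parameters of the simulations reported here:

```python
import numpy as np

def ssfm_step(psi, dt, dx):
    """Advance the NLSE solution by one split step, cf. Eq. (6):
    first the exactly solvable nonlinear part, then the dispersive
    part handled by multiplication in Fourier space."""
    # Nonlinear part, Eq. (3): psi -> exp(i |psi|^2 dt) psi
    psi = np.exp(1j * np.abs(psi) ** 2 * dt) * psi
    # Linear part, Eq. (5): multiply the spectrum by exp(-i k^2 dt / 2)
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    return np.fft.ifft(np.exp(-1j * k ** 2 * dt / 2) * np.fft.fft(psi))
```

As a quick sanity check of the splitting, for the constant background $\psi=1$ a single step reproduces the exact phase rotation $e^{i\Delta t}$ of the plane-wave solution.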
The time step is selected as $\Delta t=10^{-3}$, which does not cause a stability problem. The number of spectral components is taken as $M=2048$ in order to use the FFT routines efficiently. Although it is known that the decay of an atomic particle can be inhibited by Zeno dynamics, it remains an open question whether rogue waves in quantized optical fields in the frame of the NLSE can be stopped by Zeno dynamics. In this paper we analyze the Zeno dynamics of such rogue waves by using the SSFM reviewed above. Although the analytical solutions of the NLSE are known and are used as initial conditions in the SSFM time stepping, after a positive Zeno measurement the wavefunction becomes complicated, and thus a numerical solution is needed. Recently, a theoretical wavefunction formulation of quantum Zeno dynamics was proposed in Ref.\cite{PorrasFreeZeno}, used in Refs.\cite{Porras_Zenotunnelmimick, Porras_diffractionspread} and experimentally tested in Ref.\cite{Porras_zeno_clss_opt}. In this formulation, after a positive measurement the particle is found in the observation domain $[-L,L]$ with a wavefunction $\psi_T(x,t)=\psi(x,t)\textnormal{rect}(x/L)/\sqrt{P}$, where $P=\int_{-L}^L \left| \psi \left(x,t \right) \right|^2 dx$, and $\textnormal{rect}(x/L)=1$ for $-L \leq x \leq L$ and $0$ elsewhere \cite{PorrasFreeZeno}. Between two successive positive measurements, the wave evolves according to the NLSE. This cycle can be summarized as \begin{equation} \begin{split} \psi_T & \left(x,\frac{(n-1)t}{N} \right) \stackrel{evolve}{\rightarrow} \psi \left(x,\frac{nt}{N} \right) \stackrel{measure}{\rightarrow} \\ & \psi_T \left(x,\frac{nt}{N} \right)= \psi \left(x,\frac{nt}{N} \right) \frac{\textnormal{rect}(x/L)}{\sqrt{P_N^{n}}} \label{eq07} \end{split} \end{equation} where $n$ is the observation index, $N$ is the number of observations \cite{PorrasFreeZeno}, and \begin{equation} P_N^{n}=\int_{-L}^L \left| \psi \left(x,\frac{nt}{N} \right) \right|^2 dx.
\label{eq08} \end{equation} The cumulative probability of finding the wave in the observation domain becomes \cite{PorrasFreeZeno} \begin{equation} P_N=\prod_{n=1}^N P_N^{n}. \label{eq09} \end{equation} Using the momentum representation of the linear Schr\"{o}dinger equation and the analogy with the optical wave dynamics of a Fabry-Perot resonator, an analytical expression for the lingering probability of an atomic particle in the interval $[-L,L]$ after the $n$-th measurement is given as \begin{equation} P_N^{n}\approx 1-0.12 \left(\frac{4}{\pi} \right)^2 \left(\frac{2 \pi t}{N} \right)^{3/2} \label{eq10} \end{equation} in \cite{PorrasFreeZeno}. After $N$ measurements the cumulative probability of finding the particle in the observation domain becomes \begin{equation} P_N \approx \left(1-0.12 \left(\frac{4}{\pi} \right)^2 \left(\frac{2 \pi t}{N} \right)^{3/2} \right)^N \label{eq11} \end{equation} which can be simplified further using Newton's binomial theorem \cite{PorrasFreeZeno}. The reader is referred to Ref.\cite{PorrasFreeZeno} for the details of the derivation of these relations. We compare the analytical relations given in Eqs.(\ref{eq10})-(\ref{eq11}) with the numerical probability calculations in the next section of this paper. \section{Results and Discussion} \subsection{Freezing Akhmediev Breathers} In order to study the Zeno dynamics of rogue waves we first consider the Akhmediev breather (AB) solution of the NLSE given in Eq.(\ref{eq01}). It is known that the NLSE admits a solution in the form of \begin{equation} \psi_{AB}=\left[1+\frac{2(1-2a) \cosh{(bt)}+ i b \sinh{(bt)}}{\sqrt{2a} \cos{(\lambda x)}-\cosh{(bt)} } \right] \exp{[it]} \label{eq12} \end{equation} where $a$ is a free parameter, $\lambda=2 \sqrt{1-2a}$ and $b=\sqrt{8a(1-2a)}$ \ \cite{AkhmedievBreather, AkhmedievBreatherExp}. This solution is known as the AB and plays an essential role in describing the modulation instability mechanism and rogue wave generation.
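The measurement step of Eq.(\ref{eq07}) and the survival probability of Eq.(\ref{eq08}) are straightforward to implement on a spatial grid. A minimal sketch follows (the function name is ours, and the integral is approximated by a Riemann sum):

```python
import numpy as np

def zeno_measure(psi, x, L, dx):
    """Truncate psi to the observation window [-L, L] and renormalize,
    as in Eq. (7); returns the projected wavefunction and the survival
    probability P = int_{-L}^{L} |psi|^2 dx of Eq. (8)."""
    rect = np.where(np.abs(x) <= L, 1.0, 0.0)
    P = np.sum(np.abs(rect * psi) ** 2) * dx
    return rect * psi / np.sqrt(P), P
```

Interleaving `zeno_measure` with free NLSE evolution realizes the cycle of Eq.(\ref{eq07}), and the product of the returned probabilities gives the cumulative probability of Eq.(\ref{eq09}).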
Experiments have demonstrated that ABs can exist in optical fibers \cite{AkhmedievBreatherExp}. The triangularization of the Fourier spectrum, i.e.\ the triangular supercontinuum generation, plays a key role in the generation and early detection mechanisms of rogue waves. Thus, we also examine the spectra of ABs under the Zeno effect. In Fig.~\ref{fig1}, 3D plots of the AB for $a=0.45$ and its Fourier spectrum are depicted in the first row. This AB evolves freely during the time interval of $t=[-5,5]$. In the second row of Fig.~\ref{fig1}, we present the AB under the Zeno effect and its Fourier spectrum. This AB evolved freely in the time interval of $t=[-5,0]$ and was continuously subjected to Zeno observations in the time interval of $t=[0,5]$ within $L=[-7.5,7.5]$. For a better visualization of the effect of the Zeno observations, the wave profile is not normalized in this figure. \begin{figure}[ht!] \begin{center} \includegraphics[width=3.4in]{fig1.pdf} \end{center} \caption{\small Zeno dynamics of an Akhmediev breather with $a=0.45$: a) freely evolving wave, b) its Fourier spectrum, c) continuously observed Akhmediev breather in $[-7.5,7.5]$ during $t=[0,5]$, d) its Fourier spectrum.} \label{fig1} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=3.4in]{fig2.pdf} \end{center} \caption{\small Zeno dynamics of an Akhmediev breather with $a=0.45$: a) freely evolving wave, b) its Fourier spectrum, c) continuously observed Akhmediev breather in $[-5,5]$ during $t=[0,5]$, d) its Fourier spectrum.} \label{fig2} \end{figure} Next we apply a similar procedure to the AB, but with a narrower observation region, selected as $L=[-5,5]$, to illustrate the effects of narrowing the observation domain, as depicted in Fig.~\ref{fig2}. Due to the Fourier duality principle, the AB frozen in this fashion has a wider spectrum compared to its counterpart presented in Fig.~\ref{fig1}.
Depending on the width of the Zeno observation domain, it is possible to freeze the whole AB, and thus its triangular spectrum, during the observation time. \subsection{Freezing the Peregrine Soliton} Rogue waves of the NLSE are considered to be in the form of rational soliton solutions \cite{Akhmediev2009b}. The simplest rational soliton solution of the NLSE is the Peregrine soliton \cite{Peregrine, Kibler}. It is given by \begin{equation} \psi_1=\left[1-4\frac{1+2it}{1+4x^2+4t^2} \right] \exp{[it]} \label{eq13} \end{equation} where $t$ and $x$ denote the time and space variables, respectively \cite{Akhmediev2009b}. This solution can be recovered as the limiting case of the AB when the period of the solution tends to infinity. It has been shown that the Peregrine soliton is only the first-order rational soliton solution of the NLSE \cite{Akhmediev2009b}. Higher-order rational soliton solutions of the NLSE, and a hierarchy for obtaining them based on Darboux transformations, are given in \cite{Akhmediev2009b}. Many simulations \cite{Akhmediev2009b, Akhmediev2009a, Akhmediev2011} and some experiments \cite{Kibler} have confirmed that rogue waves can be in the form of the first-order (Peregrine) and higher-order rational soliton solutions of the NLSE. \begin{figure}[htb!] \begin{center} \includegraphics[width=3.4in]{fig3.pdf} \end{center} \caption{\small Zeno dynamics of a Peregrine soliton: a) freely evolving wave, b) its Fourier spectrum, c) continuously observed Peregrine soliton in the interval of $[-7.5,7.5]$ during $t=[0,5]$, d) its Fourier spectrum.} \label{fig3} \end{figure} Similar to the AB case, in order to analyze the Zeno dynamics of the Peregrine soliton we apply the procedure described by Eqs.(\ref{eq07})-(\ref{eq09}). In Fig.~\ref{fig3}, plots of the Peregrine soliton and its Fourier spectrum are shown in the first row. This Peregrine soliton evolved freely during the time interval of $t=[-5,5]$.
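As a sanity check of Eq.(\ref{eq13}), the Peregrine profile is easy to evaluate numerically (the helper name below is ours); its peak amplitude at the origin is $|\psi_1(0,0)|=3$, i.e.\ three times the unit background:

```python
import numpy as np

def peregrine(x, t):
    """Peregrine soliton of Eq. (13) on the unit background."""
    return (1 - 4 * (1 + 2j * t) / (1 + 4 * x ** 2 + 4 * t ** 2)) * np.exp(1j * t)
```

Far from the origin the amplitude decays back to the background level of unity, consistent with the soliton being a localized excitation on a plane wave.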
In the second row of Fig.~\ref{fig3}, we present the Peregrine soliton inhibited by Zeno observations and its corresponding Fourier spectrum. In this plot the Peregrine soliton evolved freely during the temporal interval of $t=[-5,0]$ and was continuously subjected to Zeno observations in the time interval of $t=[0,5]$ within $L=[-7.5,7.5]$. As before, for a better visualization of the effect of the Zeno observations, the wave profile is not normalized in this figure. Again, depending on the width of the Zeno observation domain, it is possible to freeze the Peregrine soliton wholly or partially, and thus its triangular spectrum, during the observation time. \begin{figure}[htb!] \begin{center} \includegraphics[width=3.4in]{fig4.pdf} \end{center} \caption{\small Probability density (unnormalized) after $N$ intermediate measurements of the Peregrine soliton in the interval $[-0.8,0.8]$ for a time $t=n/N=2$.} \label{fig4} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=3.4in]{fig5.pdf} \end{center} \caption{\small The probability of finding the Peregrine soliton in the interval $[-0.8,0.8]$ for a time of $t=n/N=2$ after $N$ intermediate measurements.} \label{fig5} \end{figure} In Fig.~\ref{fig4}, we present the unnormalized wave profile for different observation numbers ($N$). For a larger number of intermediate measurements, the decay of the Peregrine soliton is inhibited for longer times compared to the case of fewer Zeno observations. This is due to the fact that for a smaller number of observations the evolution time between two successive observations increases; thus the wave profile diffuses in this relatively large temporal interval. The constant background level of unity diminishes since the probability vanishes in the unobserved regions of the evolution domain after a positive measurement. In Fig.~\ref{fig5}, we present the normalized probabilities of finding the wave in the observation domain for the time interval $t=[0, 2]$.
The dashed lines show the probabilities obtained analytically by Eqs.(\ref{eq10})-(\ref{eq11}), whereas the continuous lines represent the numerical results. Since the numerical results are obtained using the NLSE, while the analytical distributions given by Eqs.(\ref{eq10})-(\ref{eq11}) rely on the assumption that the particle's motion is governed by the linear Schr\"{o}dinger equation, some discrepancies appear between the two results; as expected, the discrepancies are smaller for more frequent observations. \subsection{Freezing the Akhmediev-Peregrine Soliton} The second-order rational soliton solution of the NLSE is the Akhmediev-Peregrine soliton \cite{Akhmediev2009b}, which is considered to be a model for rogue waves with higher amplitude than the Peregrine soliton. The formula of the Akhmediev-Peregrine soliton is given as \begin{equation} \psi_2=\left[1+\frac{G_2+it H_2}{D_2} \right] \exp{[it]} \label{eq14} \end{equation} where \begin{equation} G_2=\frac{3}{8}-3x^2-2x^4-9t^2-10t^4-12x^2t^2 \label{eq15} \end{equation} \begin{equation} H_2=\frac{15}{4}+6x^2-4x^4-2t^2-4t^4-8x^2t^2 \label{eq16} \end{equation} and \begin{equation} \begin{split} D_2=\frac{1}{8} & [ \frac{3}{4}+9x^2+4x^4+\frac{16}{3}x^6+33t^2+36t^4 \\ & +\frac{16}{3}t^6-24x^2t^2+16x^4t^2+16x^2t^4 ] \label{eq17} \end{split} \end{equation} where $t$ is the time and $x$ is the space variable \cite{Akhmediev2009b}. Using the Darboux transformation formalism, this soliton can be obtained using the Peregrine soliton as the seed solution \cite{Akhmediev2009b}. Many numerical simulations also confirm that rogue waves in the NLSE framework can be in the form of the Akhmediev-Peregrine soliton \cite{Akhmediev2011, Akhmediev2009b, Akhmediev2009a}. However, to the best of our knowledge, an experimental verification of this soliton does not yet exist. \begin{figure}[htb!]
\begin{center} \includegraphics[width=3.4in]{fig6.pdf} \end{center} \caption{\small Zeno dynamics of an Akhmediev-Peregrine soliton: a) freely evolving wave, b) its Fourier spectrum, c) continuously observed Akhmediev-Peregrine soliton in the interval of $[-7.5,7.5]$ during $t=[0,5]$, d) its Fourier spectrum.} \label{fig6} \end{figure} As in the AB and Peregrine soliton cases, in order to analyze the Zeno dynamics of the Akhmediev-Peregrine soliton we apply the procedure described by Eqs.(\ref{eq07})-(\ref{eq09}). Similarly, in Fig.~\ref{fig6}, the Akhmediev-Peregrine soliton, which evolved freely during the time interval of $t=[-5,5]$, and its Fourier spectrum are plotted in the first row. In the second row of Fig.~\ref{fig6}, the Akhmediev-Peregrine soliton inhibited by Zeno observations and its corresponding Fourier spectrum are depicted. In this plot the Akhmediev-Peregrine soliton evolved freely during the temporal interval of $t=[-5,0]$, and continuous Zeno observations took place within $L=[-7.5,7.5]$ during the time interval of $t=[0,5]$. \begin{figure}[htb!] \begin{center} \includegraphics[width=3.4in]{fig7.pdf} \end{center} \caption{\small Probability density after $N$ intermediate measurements of the Akhmediev-Peregrine soliton in the interval $[-1.7,1.7]$ for a time $t=n/N=2$.} \label{fig7} \end{figure} Again, the unnormalized wave profile is depicted in this figure for a better visualization of the effect of the Zeno observations on the Akhmediev-Peregrine soliton. Depending on the width of the Zeno observation domain, it is possible to freeze the Akhmediev-Peregrine soliton wholly or partially, and thus its triangular spectrum can be preserved during the observation time. In Fig.~\ref{fig7}, the unnormalized wave profiles for different observation numbers ($N$) are shown.
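For reference, Eqs.(\ref{eq14})-(\ref{eq17}) can be evaluated directly (the helper name is ours); at the origin the amplitude is $|\psi_2(0,0)| = 1 + (3/8)/(3/32) = 5$, five times the background, as expected for the second-order rational soliton:

```python
import numpy as np

def akhmediev_peregrine(x, t):
    """Akhmediev-Peregrine (second-order rational) soliton, Eqs. (14)-(17)."""
    G2 = 3 / 8 - 3 * x**2 - 2 * x**4 - 9 * t**2 - 10 * t**4 - 12 * x**2 * t**2
    H2 = 15 / 4 + 6 * x**2 - 4 * x**4 - 2 * t**2 - 4 * t**4 - 8 * x**2 * t**2
    D2 = (3 / 4 + 9 * x**2 + 4 * x**4 + 16 / 3 * x**6 + 33 * t**2 + 36 * t**4
          + 16 / 3 * t**6 - 24 * x**2 * t**2 + 16 * x**4 * t**2
          + 16 * x**2 * t**4) / 8
    return (1 + (G2 + 1j * t * H2) / D2) * np.exp(1j * t)
```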
Similar to the previous cases, for a larger number of intermediate measurements the decay of the Akhmediev-Peregrine soliton is inhibited for longer times compared to the case of fewer Zeno observations, due to the shorter diffusion time between two positive Zeno observations. Additionally, the peak as well as the two dips of the Akhmediev-Peregrine soliton can be preserved during the measurements, depending on the length of the observation domain. Similarly, since the probability vanishes in the unobserved regions of the evolution domain after a positive measurement, the constant background level of unity diminishes. \begin{figure}[h!] \begin{center} \includegraphics[width=3.4in]{fig8.pdf} \end{center} \caption{\small The probability of finding the Akhmediev-Peregrine soliton in the interval $[-0.8,0.8]$ for a time of $t=n/N=2$ after $N$ intermediate measurements.} \label{fig8} \end{figure} In Fig.~\ref{fig8}, we present the normalized probabilities of finding the wave in the observation domain for the time interval $t=[0, 2]$. The dashed lines show the probabilities obtained analytically by Eqs.(\ref{eq10})-(\ref{eq11}), whereas the continuous lines represent the numerical results. As before, since the numerical results are obtained using the NLSE, while the analytical distributions given by Eqs.(\ref{eq10})-(\ref{eq11}) rely on the assumption that the particle's motion is governed by the linear Schr\"{o}dinger equation, some discrepancies appear between the two results; as expected, the discrepancies are smaller for more frequent observations. However, the discrepancies are slightly higher compared to the Zeno dynamics of the Peregrine soliton depicted in Fig.~\ref{fig5}, due to the increased steepness of the Akhmediev-Peregrine soliton. \section{Conclusion} In this paper we have numerically investigated the Zeno dynamics of optical rogue waves in the frame of the standard NLSE.
In particular, we have analyzed the Zeno dynamics of the Akhmediev breathers and the Peregrine and Akhmediev-Peregrine soliton solutions of the NLSE, which are considered accurate rogue wave models. We have shown that frequent measurements of a rogue wave inhibit its motion in the observation domain for each of these solutions. We have analyzed the spectra of the rogue waves to observe the supercontinuum generation under Zeno observations. The Fourier as well as the wavelet spectra of rogue waves under Zeno dynamics may give some clue about the application time (position) of the Zeno observations needed to freeze the emergence or decay of rogue waves. This would be especially important for the Zeno dynamics of stochastic wavefields which produce rogue waves. We have also analyzed the effect of the observation frequency on the rogue wave profile and on the probability of freezing the wave in the observation domain. We have shown that the rogue wave shape can be preserved for longer times and that the probability of freezing the rogue wave increases for more frequent observations, for all three types of rogue waves considered. The revival dynamics of rogue waves, that is, the dynamics after the Zeno observations are ceased, will be a part of future work. The analysis of the statistical distributions and the shapes of rogue waves in stochastic fields after Zeno observation also remains a problem which needs further attention. We believe that the results presented herein may provide new insights into modelling the dynamics of standing and propagating optical rogue waves. More specifically, the procedure analyzed in this paper in the frame of the standard NLSE can be used to advance the fields of optical communications and vibrations, with applications which include but are not limited to freezing and steering rogue and other wave types, avoiding their breaking, and imposing a time delay via the Zeno effect.
The procedure analyzed in this paper in the frame of the standard NLSE can also be extended to model the Zeno dynamics of many other fascinating nonlinear phenomena, and can thus be used to advance optical science and technology. \section*{Acknowledgment} FO is funded by the Isik University Scientific Research Funding Agency under Grant Number BAP-15B103.
\section{Introduction} Despite the numerous empirical successes of deep learning, much of the underlying theory remains poorly understood. One promising direction toward an interpretable account of deep learning is the study of the relationship between deep neural networks and kernel machines. Several studies in recent years have shown that gradient flow on infinitely wide neural networks with a certain parameterization gives rise to linearized dynamics in parameter space \citep{Lee2019WideNN, liu_belkin} and consequently a kernel regression solution in function space with a kernel known as the neural tangent kernel (NTK) \citep{Jacot2018NeuralTK, Arora2019OnEC}. Kernel machines enjoy firmer theoretical footing than deep neural networks, which allows one to accurately study their training and generalization \citep{RasmussenW06, Scholkopf2002}. Moreover, they share many of the phenomena that overparameterized neural networks exhibit, such as interpolating the training data \citep{zhang_rethinking, liang_rahklin_interpolate, belkin_deep_kernel_understand}. However, the exact equivalence between neural networks and kernel machines breaks down for finite-width networks. Further, the regime with an approximately static kernel, also referred to as the lazy training regime \citep{Chizat2019OnLT}, cannot account for the ability of deep networks to adapt their internal representations to the structure of the data, a phenomenon widely believed to be crucial to their success. In the present study, we pursue an alternative perspective on the NTK, and ask whether a neural network with an NTK that changes significantly during training can ever be a kernel machine for a {\it data-dependent} kernel: i.e., does there exist a kernel function $K$ for which the final neural network function $f$ is $f(\bm x) \approx \sum_{\mu=1}^P \alpha^\mu K(\bm x,\bm x^\mu)$ with coefficients $\alpha^\mu$ that depend only on the training data?
We answer in the affirmative: a large class of neural networks at small initialization trained on approximately whitened data is accurately approximated by kernel regression solutions with their final, data-dependent NTKs, up to an error dependent on the initialization scale. Hence, our results provide a further concrete link between kernel machines and deep learning which, unlike the infinite-width limit, allows the kernel to be shaped by the data. The phenomenon we study consists of two training phases. In the first phase, the kernel starts off small in overall scale and quickly aligns its eigenvectors toward task-relevant directions. In the second phase, the kernel increases in overall scale, causing the network to learn a kernel regression solution with the final NTK. We call this phenomenon the \textit{silent alignment effect} because the feature learning happens before the loss appreciably decreases. Our contributions are the following: \begin{enumerate}[leftmargin=*] \item In Section \ref{sec:silent_align_theory}, we demonstrate the silent alignment effect by considering a simplified model where the kernel evolves while small and then subsequently increases only in scale. We show theoretically that if these conditions are met, the final neural network is a kernel machine that uses the final, data-dependent NTK. A proof is provided in Appendix \ref{app:kernel_ev_scale}. \item In Section \ref{sec:two_layer_linear}, we provide an analysis of the NTK evolution of two-layer linear MLPs with a scalar target function and small initialization. If the input training data is whitened, the kernel aligns its eigenvectors toward the direction of the optimal linear function early on during training, while the loss does not decrease appreciably. After this, the kernel changes in scale only, showing that this setup satisfies the requirements for silent alignment discussed in Section \ref{sec:silent_align_theory}.
\item In Section \ref{sec:deep_linear}, we extend our analysis to deep MLPs by showing that the time required for alignment scales with initialization in the same way as the time for the loss to decrease appreciably. Still, these time scales can be sufficiently separated to lead to the silent alignment effect, for which we provide empirical evidence. We further present an explicit formula for the final kernel in linear networks of any depth and width when trained from small initialization, showing that the final NTK aligns to task-relevant directions. \item In Section \ref{sec:nonlinear_anisotropy}, we show empirically that the silent alignment phenomenon carries over to nonlinear networks trained with ReLU and Tanh activations on isotropic data, as well as to linear and nonlinear networks with multiple output classes. For anisotropic data, we show that the NTK must necessarily change its eigenvectors while the loss is significantly decreasing, destroying the silent alignment phenomenon. In these cases, the final neural network output deviates from a kernel machine that uses the final NTK. \end{enumerate} \subsection{Related Works} \cite{Jacot2018NeuralTK} demonstrated that infinitely wide neural networks initialized with an appropriate parameterization and trained on mean square error loss evolve their predictions as a linear dynamical system with the NTK at initialization. A limitation of this kernel regime is that the neural network's internal representations and the kernel function do not evolve during training. Conditions under which such lazy training can happen are studied further in \citep{Chizat2019OnLT,liu_belkin}. \citet{domingos2020model} recently showed that every model, including neural networks, trained with gradient descent leads to a kernel model with a path kernel and coefficients $\alpha^\mu$ that depend on the test point $\bm x$.
This dependence on $\bm x$ makes the construction not a kernel method in the traditional sense that we pursue here (see Remark 1 in \citep{domingos2020model}). Phenomenological studies and models of kernel evolution have been recently invoked to gain insight into the difference between lazy and feature learning regimes of neural networks. These include analysis of NTK dynamics which revealed that the NTK in the feature learning regime aligns its eigenvectors to the labels throughout training, causing non-linear prediction dynamics \citep{fort2020deep, Baratin2021ImplicitRV, shan2021rapid, woodworth, chen2020labelaware, GEIGER20211, bai2020taylorized}. Experiments have shown that lazy learning can be faster but less robust than feature learning \citep{Flesch} and that the generalization advantage that feature learning provides to the final predictor is heavily task and architecture dependent \citep{lee2020finite}. \cite{fort2020deep} found that networks can undergo a rapid change of kernel early on in training after which the network's output function is well-approximated by a kernel method with a data-dependent NTK. Our findings are consistent with these results. \cite{stoger2021small} recently obtained a similar multiple-phase training dynamics involving an early alignment phase followed by spectral learning and refinement phases in the setting of low-rank matrix recovery. Their results share qualitative similarities with our analysis of deep linear networks. The second phase after alignment, where the kernel's eigenspectrum grows, was studied in linear networks in \citep{jacot2021deep}, where it is referred to as the saddle-to-saddle regime. Unlike prior works \citep{Dyer2020Asymptotics, Aitken2020OnTA, Andreassen2020AsymptoticsOW}, our results do not rely on perturbative expansions in network width. 
Also unlike the work of \cite{Saxe14exactsolutions}, our solutions for the evolution of the kernel do not depend on choosing a specific set of initial conditions, but rather follow only from assumptions of small initialization and whitened data. \section{The Silent Alignment Effect and Approximate Kernel Solution}\label{sec:silent_align_theory} Neural networks in the overparameterized regime can find many interpolators: the precise function that the network converges to is controlled by the time evolution of the NTK. As a concrete example, we will consider learning a scalar target function with mean square error loss through gradient flow. Let $\bm x \in \mathbb{R}^D$ represent an arbitrary input to the network $f(\bm x)$ and let $\{ \bm x^\mu, y^\mu\}_{\mu=1}^P$ be a supervised learning training set. Under gradient flow the parameters $\bm \theta$ of the neural network will evolve, so the output function is time-dependent and we write this as $f(\bm x,t)$. The evolution for the predictions of the network on a test point can be written in terms of the NTK $K(\bm x,\bm x',t) = \frac{\partial f(\bm x,t)}{\partial \bm \theta} \cdot \frac{\partial f(\bm x',t)}{\partial \bm\theta}$ as \begin{align} \frac{d}{dt} f(\bm x,t) = \eta \sum_{\mu} K(\bm x,\bm x^\mu, t) (y^\mu - f(\bm x^\mu,t)), \end{align} where $\eta$ is the learning rate. If one had access to the dynamics of $K(\bm x,\bm x^\mu,t)$ throughout all $t$, one could solve for the final learned function $f^*$ with integrating factors under conditions discussed in Appendix \ref{app:integrating_factor} \begin{align}\label{eq:integ_factor} f^*(\bm x) = f_0(\bm x) + \sum_{\mu \nu} \int_{0}^{\infty} dt \ \bm k_t(\bm x)^\mu \left[\exp\left( - \eta \int_{0}^{t} \bm K_{t'}\, dt' \right) \right]_{\mu \nu} \left( y^\nu - f_0(\bm x^\nu) \right). 
\end{align} Here, $\bm k_t(\bm x)^{\mu} = K(\bm x,\bm x^\mu,t)$, $[\bm K_t]_{\mu\nu} = K(\bm x^\mu,\bm x^{\nu},t)$, and $y^\mu - f_0(\bm x^\mu)$ is the initial error on point $\bm x^\mu$. We see that the final function has contributions from the full training interval $t \in (0,\infty)$. The seminal work of \cite{Jacot2018NeuralTK} considers an infinite-width limit of neural networks in which the kernel function $K_t(\bm x,\bm x')$ stays constant throughout training. When the kernel is constant and $f_0(\bm x^\mu) \approx 0$, we obtain a true kernel regression solution $f(\bm x) = \sum_{\mu, \nu} \bm k(\bm x)^\mu \bm K^{-1}_{\mu \nu} y^\nu$ for a kernel $K(\bm x,\bm x')$ which does not depend on the training data. Much less is known about what happens in the rich, feature learning regime of neural networks, where the kernel evolves significantly during training in a data-dependent manner. In this paper, we consider a setting where the initial kernel is small in scale, aligns its eigenfunctions early on during gradient descent, and subsequently increases only in scale, monotonically. As a concrete phenomenological model, consider depth-$L$ networks with homogeneous activation functions whose weights are initialized with variance $\sigma^2$. At initialization, $K_0(\bm x,\bm x') \sim O(\sigma^{2L-2})$ and $f_0(\bm x) \sim O(\sigma^L)$ (see Appendix \ref{app:kernel_ev_scale}). We further assume that after time $\tau$, the kernel only evolves in scale in a constant direction \begin{align} K(\bm x,\bm x',t) = \begin{cases} \sigma^{2L-2} \tilde K(\bm x,\bm x',t) & t \leq \tau \\ g(t) K_{\infty}(\bm x,\bm x') & t > \tau \end{cases}, \end{align} where $\tilde K(\bm x,\bm x',t)$ evolves from an initial kernel at time $t=0$ to $K_{\infty}(\bm x,\bm x')$ by $t = \tau$ and $g(t)$ increases monotonically from $\sigma^{2L-2}$ to $1$.
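To make this model concrete, here is a minimal numerical sketch (synthetic data; the sizes, seed, learning rate, and monotone schedule standing in for $g(t)$ are all our choices): when the train-set NTK evolves only by an overall monotone scale factor, $K(t) = g(t)\,K_\infty$, discretized gradient flow from near-zero initial predictions converges to kernel regression with the \emph{final} kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
P, D = 10, 30
X = rng.standard_normal((P, D))          # training inputs
x_test = rng.standard_normal(D)          # a test point
y = rng.standard_normal(P)               # training targets

K_inf = X @ X.T                          # final kernel Gram matrix (P < D: invertible)
k_inf = X @ x_test                       # k_inf(x)^mu = K_inf(x, x^mu)

f_train = np.zeros(P)                    # f_0 ~ 0 (small initialization)
f_test = 0.0
eta, steps = 1e-3, 20000
for step in range(steps):
    g = min(1.0, 2 * (step + 1) / steps) # stand-in monotone scale factor g(t)
    err = y - f_train
    f_train += eta * g * K_inf @ err     # discretized gradient flow on train points
    f_test  += eta * g * k_inf @ err     # same flow for the test prediction

f_kernel = k_inf @ np.linalg.solve(K_inf, y)   # kernel regression with final NTK
print(abs(f_test - f_kernel) < 1e-6)     # True
```

Because $K(t)$ shares its eigenvectors with $K_\infty$ at all times, the integrating-factor solution telescopes and the residual dependence on $g(t)$ vanishes as the training error decays, leaving exactly the final-kernel regression predictor.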
In this model, one also obtains a kernel regression solution in the limit where $\sigma \to 0$ with the final, rather than the initial kernel: $f(\bm x) = \bm k_{\infty}(\bm x) \cdot \bm K_{\infty}^{-1} \bm y + O(\sigma^{L})$. We provide a proof of this in the Appendix \ref{app:kernel_ev_scale}. The assumption that the kernel evolves early on in gradient descent before increasing only in scale may seem overly strict as a model of kernel evolution. However, we analytically show in Sections \ref{sec:two_layer_linear} and \ref{sec:deep_linear} that this can happen in deep linear networks initialized with small weights, and consequently that the final learned function is a kernel regression with the final NTK. Moreover, we show that for a linear network with small weight initialization, the final NTK depends on the training data in a universal and predictable way. We show empirically that our results carry over to nonlinear networks with ReLU and tanh activations under the condition that the data is whitened. For example, see Figure \ref{fig:relu_demo_silent_alignment}, where we show the silent alignment effect on ReLU networks with whitened MNIST and CIFAR-10 images. We define alignment as the overlap between the kernel and the target function $\frac{\bm y^\top \bm K \bm y}{\|\bm K\|_F |\bm y|^2}$, where $\bm y \in \mathbb{R}^P$ is a vector of the target values, quantifying the projection of the labels onto the kernel, as discussed in \citep{cortes2012algorithms}. This quantity increases early in training but quickly stabilizes around its asymptotic value before the loss decreases. Though \Eqref{eq:integ_factor} was derived under assumption of gradient flow with constant learning rate, the underlying conclusions can hold in more realistic settings as well. 
In Figure \ref{fig:relu_demo_silent_alignment} (d) and (e) we show learning dynamics and network predictions for Wide-ResNet \citep{zagoruyko2017wide} on whitened CIFAR-10 trained with the Adam optimizer \citep{kingma2014adam} with learning rate $10^{-5}$, which exhibits silent alignment and strong correlation with the final NTK predictor. In the unwhitened setting, this effect is partially degraded, as we discuss in Section \ref{sec:nonlinear_anisotropy} and Appendix \ref{app:res_net_experiment}. Our results suggest that the final NTK may be useful for analyzing generalization and transfer as we discuss for the linear case in Appendix \ref{app:transfer_theory}. \begin{figure}[t] \centering \subfigure[Whitened Data MLP Dynamics]{\includegraphics[width=0.32\linewidth]{figures/loss_kernel_whitened_relu2.pdf}} \subfigure[Prediction MNIST]{\includegraphics[width=0.32\linewidth]{figures/prediction_comp_whitened_relu.pdf}} \subfigure[Prediction CIFAR-10]{\includegraphics[width=0.32\linewidth]{figures/cifar_prediction_comp_whitened_relu.pdf}} \subfigure[Wide Res-Net Dynamics]{\includegraphics[width=0.32\linewidth]{figures/resnet_loss_alignment_curve_whitened.pdf}} \subfigure[Prediction Res-Net]{\includegraphics[width=0.32\linewidth]{figures/cifar_prediction_comp_whitened_resnet_more_data.pdf}} \caption{A demonstration of the Silent Alignment effect. (a) We trained a 2-layer ReLU MLP on $P = 1000$ MNIST images of handwritten $0$'s and $1$'s which were whitened. Early in training, around $t \approx 50$, the NTK aligns to the target function and stays fixed (green). The kernel's overall scale (orange) and the loss (blue) begin to move at around $t=300$. The analytic solution for the maximal final alignment value in linear networks is overlaid (dashed green), see Appendix \ref{app:NTK_final_deep_linear}. (b) We compare the predictions of the NTK and the trained network on MNIST test points.
Due to silent alignment, the final learned function is well described as a kernel regression solution with the final NTK $K_{\infty}$. However, regression with the initial NTK is not a good model of the network's predictions. (c) The same experiment on $P=1000$ whitened CIFAR-10 images from the first two classes. Here we use MSE loss on a width $100$ network with initialization scale $\sigma=0.1$. (d) Wide-ResNet with width multiplier $k=4$ and block size $b=1$ trained with $P=100$ training points from the first two classes of CIFAR-10. The dashed orange line marks when the kernel starts growing significantly, by which point the alignment has already finished. (e) Predictions of the final NTK are strongly correlated with the final NN function. } \label{fig:relu_demo_silent_alignment} \end{figure} \section{Kernel Evolution in 2 Layer Linear Networks}\label{sec:two_layer_linear} We will first study shallow linear networks trained with small initialization before providing analysis for deeper networks in Section \ref{sec:deep_linear}. We focus our discussion in this section on the scalar output case and provide similar analysis for the multiple output channel case in a subsequent section. We stress that, although we have systematically tracked the error incurred at each step, we have favored transparency over rigor in the following derivation. We demonstrate that our analytic solutions match empirical simulations in Appendix \ref{subsec:solns_to_full_training}. We assume the $P$ data points $\bm x^\mu \in \mathbb{R}^{D}, \mu = 1, \dots, P$ have zero mean, with correlation matrix $\bm\Sigma = \frac1P \sum_{\mu=1}^P \bm x^\mu {\bm x^\mu}^\top$. Further, we assume that the target values are generated by a linear teacher function $y^\mu = s \bm\beta_T \cdot \bm x^{\mu}$ for a unit vector $\bm\beta_T$. The scalar $s$ merely quantifies the size of the supervised learning signal: $\frac{1}{P}|\bm y|^2 = s^2 \bm\beta^\top_T \bm\Sigma \bm \beta_T$.
We define the two-layer linear neural network with $N$ hidden units as $f(\bm x) = \bm a^\top \bm W \bm x$. Concretely, we initialize the weights with standard parameterization $a_i \sim\mathcal{N}(0,\sigma^2/N), W_{ij} \sim\mathcal{N}(0,\sigma^2/D)$. Understanding the role of $\sigma$ in the dynamics will be crucial to our study. We analyze gradient flow dynamics on MSE cost $L = \frac{1}{2 P} \sum_{\mu} \left( f(\bm x^\mu) - y^\mu \right)^2$. Under gradient flow with learning rate $\eta = 1$, the weight matrices in each layer evolve as \begin{align} \frac{d}{dt} \bm a = - \frac{\partial L}{\partial \bm a} = \bm W \bm \Sigma \left( s \bm\beta_T - \bm W^\top \bm a \right), \quad \frac{d}{dt} \bm W = - \frac{\partial L}{\partial \bm W} = \bm a \left( s \bm \beta_T - \bm W^\top \bm a \right)^\top \bm \Sigma. \end{align} The NTK takes the following form throughout training. \begin{align}\label{eq:kernel_evolution} K(\bm x, \bm x'; t) = \bm x^\top \bm W^\top \bm W \bm x' + |\bm a|^2 \bm x^\top \bm x'. \end{align} Note that while the second term, a simple isotropic linear kernel, does not reflect the nature of the learning task, the first term $\bm x^\top \bm W^\top \bm W \bm x'$ can evolve to yield an anisotropic kernel that has learned a representation from the data. \subsection{Phases of Training in Two Layer Linear Network} We next show that there are essentially two phases of training when training a two-layer linear network from small initialization on whitened-input data. \begin{itemize}[leftmargin=*] \item Phase I: An alignment phase which occurs for $t \sim \frac{1}{s}$. In this phase the weights align to their low rank structure and the kernel picks up a rank-one term of the form $\bm x^\top \bm \beta \bm \beta^\top \bm x'$. In this setting, since the network is initialized near $\bm W, \bm a = \bm 0$, which is a saddle point of the loss function, the gradient of the loss is small. Consequently, the magnitudes of the weights and kernel evolve slowly. 
\item Phase II: A data fitting phase which begins around $t \sim \frac{1}{s} \log(s \sigma^{-2})$. In this phase, the system escapes the initial saddle point $\bm W, \bm a = 0$ and the loss decreases to zero. In this setting both the kernel's overall scale and the scale of the function $f(\bm x,t)$ increase substantially. \end{itemize} If Phase I and Phase II are well separated in time, which can be guaranteed by making $\sigma$ small, then the final function solves a kernel interpolation problem for the NTK which is only sensitive to the geometry of gradients in the final basin of attraction. In fact, in the linear case, kernel interpolation at every point along the gradient descent trajectory would give the final solution, as we show in Appendix \ref{app:all_ntk_learn_same}. A visual summary of these phases is provided in Figure \ref{fig:phase_one_two_visual}. \begin{figure} \centering \subfigure[Initialization]{\includegraphics[width=0.23\linewidth]{figures/kernel_contour_init.pdf}} \subfigure[Phase 1]{\includegraphics[width=0.23\linewidth]{figures/kernel_contour_transition.pdf}} \subfigure[Phase 2]{\includegraphics[width=0.23\linewidth]{figures/kernel_contour_final.pdf}} \caption{The evolution of the kernel's eigenfunctions happens during the early alignment phase for $t_1 \approx \frac{1}{s}$, but significant evolution in the network predictions happens for $t > t_2 = \frac{1}{2}\log(s \sigma^{-2})$. (a) Contour plot of the kernel's norm for linear functions $f(\bm x) = \bm\beta \cdot \bm x$. The black line represents the space of weights which interpolate the training set, i.e., $\bm X^\top \bm\beta = \bm y$. At initialization, the kernel is isotropic, resulting in spherically symmetric level sets of RKHS norm. The network function is represented as a blue dot. (b) During Phase I, the kernel's eigenfunctions have evolved, enhancing power in the direction of the min-norm interpolator, but the network function has not moved far from the origin.
(c) In Phase II, the network function $\bm W^\top \bm a$ moves from the origin to the final solution.} \label{fig:phase_one_two_visual} \end{figure} \subsubsection{Phase I: Early Alignment for Small Initialization} In this section we show how the kernel aligns to the correct eigenspace early in training. We focus on the whitened setting, where the data matrix $\bm X$ has all of its nonzero singular values equal. We let $\bm\beta$ represent the normalized component of $\bm\beta_T$ in the span of the training data $\{ \bm x^\mu \}$. We will discuss general $\bm\Sigma$ in Section \ref{sec:unwhite_two_layer}. We approximate the dynamics early in training by recognizing that the network output is small due to the small initialization. Early on, the dynamics are given by: \begin{align} \frac{d}{dt} \bm a = s \bm W \bm\beta + O(\sigma^3) \ , \quad \frac{d}{dt} \bm W = s \bm a \bm \beta^\top + O(\sigma^3). \end{align} Truncating terms of order $\sigma^3$ and higher, we can solve for the kernel's dynamics early on in training \begin{align}\label{eq:approx_align} K(\bm x,\bm x';t) = q_0 \cosh(2\eta st) \ \bm x^\top \left[ \bm\beta \bm\beta^\top + \bm I \right] \bm x' + O(\sigma^2), \quad t \ll s^{-1} \log(s/\sigma^2), \end{align} where $q_0$ is an initialization-dependent quantity, see Appendix \ref{app:two_layer_phase_one}. The bound on the error is obtained in Appendix \ref{sec:phase_one_error}. We see that the kernel picks up a rank-one correction $\bm\beta \bm\beta^\top$ which points in the direction of the task vector $\bm \beta$, indicating that the kernel evolves in a direction sensitive to the target function $y = s \bm\beta_T \cdot \bm x$. This term grows exponentially during the early stages of training, and overwhelms the original kernel $K_0$ on a timescale of $1/s$.
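The linearized Phase I dynamics above are easy to integrate numerically. The following sketch (synthetic data; the sizes, seed, and Euler step are our choices) evolves $\dot{\bm a} = s \bm W \bm\beta$, $\dot{\bm W} = s \bm a \bm\beta^\top$ from a small random initialization and checks that the NTK's top eigenvector aligns with the task direction $\bm\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, s, sigma, dt = 50, 20, 1.0, 1e-3, 1e-2
beta = np.zeros(D); beta[0] = 1.0                  # task direction (unit vector)
a = sigma * rng.standard_normal(N) / np.sqrt(N)    # readout weights, O(sigma)
W = sigma * rng.standard_normal((N, D)) / np.sqrt(D)

for _ in range(1000):                              # Euler-integrate to t = 10 >> 1/s
    a, W = a + dt * s * (W @ beta), W + dt * s * np.outer(a, beta)

K = W.T @ W + (a @ a) * np.eye(D)                  # input-space NTK: W^T W + |a|^2 I
top = np.linalg.eigh(K)[1][:, -1]                  # top eigenvector (eigh is ascending)
print(abs(top @ beta))                             # close to 1: aligned with beta
```

Only the $\bm\beta$-column of $\bm W$ and the coupled mode of $\bm a$ grow (hyperbolically, as in the $\cosh$ solution above), so the rank-one $\bm\beta\bm\beta^\top$ spike rapidly dominates the random initial kernel.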
Though the neural network has not yet achieved low loss in this phase, the alignment of the kernel and learned representation has consequences for the transfer ability of the network on correlated tasks as we show in Appendix \ref{app:transfer_theory}. \subsubsection{Phase II: Spectral Learning} We now assume that the weights have approached their low-rank structure, as predicted from the previous analysis of Phase I dynamics, and study the subsequent NTK evolution. We will show that, under the assumption of whitening, the kernel only evolves in overall scale. First, following \citep{fukumizu1998effect, arora_cohen_linear_acc,Du2018AlgorithmicRI}, we note the following conservation law, $\frac{d}{dt} \left[ \bm a(t) \bm a(t)^\top - \bm W(t) \bm W(t)^\top \right] = 0$, which holds for all time. If we assume small initial weight variance $\sigma^2$, then $\bm a \bm a^\top - \bm W \bm W^\top = O(\sigma^2) \approx 0$ at initialization, and it stays that way during training due to the conservation law. This condition is surprisingly informative, since it indicates that $\bm W$ is rank-one up to $O(\sigma)$ corrections. From the analysis of the alignment phase, we also have that $\bm W^\top \bm W \propto \bm \beta \bm \beta^\top$. These two observations uniquely determine the rank-one structure of $\bm W$ to be $\bm a \bm \beta^\top+O(\sigma)$. Thus, from \eqref{eq:kernel_evolution} it follows that in Phase II, the kernel evolution takes the form \begin{align} K(\bm x,\bm x';t) = u(t)^2 \bm x^\top \left[ \bm\beta \bm\beta^\top + \bm I \right] \bm x' + O(\sigma), \end{align} where $u(t)^2 = |\bm a|^2$. This demonstrates that the kernel only changes in overall scale during Phase II. Once the weights are aligned with this scheme, we can obtain an analytic expression for the evolution of $u(t)^2$, namely $u(t)^2 = s e^{2st} (e^{2st} - 1 + s / u_0^2)^{-1}$, using the results of \citep{fukumizu1998effect,Saxe14exactsolutions} as we discuss in Appendix \ref{subsec:phase2}.
This is a sigmoidal curve which starts at $u_0^2$ and approaches $s$. The transition time at which active learning begins occurs when $e^{st} \approx s/u_0^2$, i.e., $t \approx s^{-1} \log( s/\sigma^2)$. This analysis demonstrates that the kernel only evolves in scale during this second phase of training, from the small initial value $u_0^2 \sim O(\sigma^2)$ to its asymptote. Hence, kernel evolution in this scenario is equivalent to the assumptions discussed in Section \ref{sec:silent_align_theory}, with $g(t) = u(t)^2$, showing that the final solution is well approximated by kernel regression with the final NTK. We stress that the timescale for the first phase, $t_1 \sim 1/s$, where eigenvectors evolve, is independent of the scale of the initialization $\sigma^2$, whereas the second phase occurs around $t_2 \approx t_1 \log(s/\sigma^2)$. This separation of timescales $t_1 \ll t_2$ for small $\sigma$ guarantees the silent alignment effect. We illustrate these learning curves for varying $\sigma$ in Figure \ref{fig:two_layer_theory_expt}. \subsection{Unwhitened Data}\label{sec:unwhite_two_layer} When data is unwhitened, the right singular vector of $\bm W$ aligns with $\bm\Sigma \bm \beta$ early in training, as we show in Appendix \ref{app:two_layer_unwhitened}. This happens since, early on, the dynamics for the first layer are $\frac{d}{dt} \bm W \sim \bm a(t) \bm\beta^\top \bm\Sigma$. Thus the early-time kernel will have a rank-one spike in the $\bm \Sigma \bm \beta$ direction. However, this configuration is not stable as the network outputs grow. In fact, at late times $\bm W$ must realign to converge to $\bm W \propto \bm a \bm \beta^\top$ since the network function converges to the optimum and $f = \bm a^\top \bm W \bm x = s \bm\beta \cdot \bm x$, which is the minimum $\ell_2$ norm solution (Appendix \ref{app:inductive_deep_linear}).
Thus, the final kernel will always look like $K_{\infty}(\bm x,\bm x') = s \bm x^\top \left[\bm \beta \bm\beta^\top + \bm I \right] \bm x'$. However, since the realignment of $\bm W$'s singular vectors happens \textit{during the Phase II spectral learning}, the kernel is not constant up to overall scale, violating the conditions for silent alignment. We note that the learned function is still a kernel regression solution of the final NTK, which is a peculiarity of the linear network case, but this is not achieved through the silent alignment phenomenon as we explain in Appendix \ref{app:two_layer_unwhitened}. \section{Extension to Deep Linear Networks}\label{sec:deep_linear} We next consider scalar target functions approximated by deep linear neural networks and show that many of the insights from the two-layer network carry over. The neural network function $f : \mathbb{R}^{D} \to \mathbb{R}$ takes the form $f(\bm x) = {\bm w^{L}}^\top \bm W^{L-1} ... \bm W^{1} \bm x$. The gradient flow dynamics under mean squared error (MSE) loss become \begin{align} \frac{d}{dt} \bm W^{\ell} = - \eta \frac{\partial L}{\partial \bm W^{\ell}} = \eta \left( \prod_{\ell' > \ell} \bm W^{\ell'} \right)^{\top} \left( s \bm \beta - \bm{\tilde{w}} \right)^\top \bm \Sigma \left( \prod_{\ell' < \ell} \bm W^{\ell' } \right)^\top, \end{align} where $\bm{\tilde{w}} = \bm W^{1 \top} \bm W^{2 \top} ... \bm w^{L } \in \mathbb{R}^{D}$ is shorthand for the effective one-layer linear network weights. Inspired by observations made in prior works \citep{fukumizu1998effect,arora_cohen_linear_acc, Du2018AlgorithmicRI}, we again note that the following set of conservation laws holds during gradient descent: $\frac{d}{dt} \left[ \bm W^{\ell} \bm W^{\ell \top} - \bm W^{\ell+1 \top} \bm W^{\ell + 1} \right] = 0$. This condition indicates a balance in the size of weight updates in adjacent layers and simplifies the analysis of linear networks.
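The conservation law just stated is easy to verify numerically. A small sketch for a three-layer linear net $f(\bm x) = \bm w^{3\top}\bm W^2\bm W^1\bm x$ trained by small-step gradient descent on MSE (synthetic data; all sizes, the seed, and the step size are our choices), using explicit, hand-derived gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, P = 8, 12, 40
X = rng.standard_normal((P, D))
y = X @ rng.standard_normal(D)           # targets from a linear teacher

W1 = 0.1 * rng.standard_normal((N, D))
W2 = 0.1 * rng.standard_normal((N, N))
w3 = 0.1 * rng.standard_normal(N)

Q0 = W1 @ W1.T - W2.T @ W2               # conserved quantity at t = 0
dt = 1e-3
for _ in range(1000):
    err = (X @ W1.T @ W2.T @ w3 - y) / P # residuals (f(x^mu) - y^mu) / P
    g1 = W2.T @ np.outer(w3, err @ X)    # dL/dW1 = (W2^T w3)(X^T err)^T
    g2 = np.outer(w3, err @ X @ W1.T)    # dL/dW2 = w3 (W1 X^T err)^T
    g3 = W2 @ W1 @ X.T @ err             # dL/dw3
    W1, W2, w3 = W1 - dt * g1, W2 - dt * g2, w3 - dt * g3

drift = np.linalg.norm(W1 @ W1.T - W2.T @ W2 - Q0)
print(drift < 1e-2)                      # conserved up to O(dt^2) integration error
```

The first-order update terms cancel exactly between adjacent layers, so the residual drift comes only from the finite step size.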
This balancing condition between weights of adjacent layers is not specific to MSE loss, but holds for any loss function; see Appendix \ref{app:other_loss_fns}. We will use this condition to characterize the NTK's evolution. \subsection{NTK Under Small Initialization}\label{sec:small_init} We now consider the effects of small initialization. When the initial weight variance $\sigma^2$ is sufficiently small, $\bm W^{\ell} \bm W^{\ell \top} - \bm W^{\ell+1 \top} \bm W^{\ell + 1} = O(\sigma^2 ) \approx 0$ at initialization.\footnote{Though we focus on neglecting the $O(\sigma^2)$ initial weight matrices in the main text, an approximate analysis for wide networks at finite $\sigma^2$ and large width is provided in Appendix \ref{app:refined_balance}, which reveals additional dependence on relative layer widths.} This conservation law implies that these matrices remain approximately equal throughout training. Performing an SVD on each matrix and inductively using the above formula from the last layer to the first, we find that all matrices will be approximately rank-one: $\bm w^{L} = u(t) \bm r_{L}(t) \ , \ \bm W^{\ell} = u(t) \bm r_{\ell+1}(t) \bm r_{\ell}(t)^\top$, where $\bm r_{\ell}(t)$ are unit vectors. Using only this balancing condition and expanding to leading order in $\sigma$, we find that the NTK's dynamics look like \begin{align} K(\bm x, \bm x', t) &= u(t)^{2(L-1)} \bm x^\top \left[ (L-1) \bm r_1(t) \bm r_1(t)^\top + \bm I \right] \bm x' + O(\sigma). \end{align} We derive this formula in Appendix \ref{app:ntk_formula_deep_linear}. We observe that the NTK consists of a rank-one correction to the isotropic linear kernel $\bm x \cdot \bm x'$ with the rank-one spike pointing along the $\bm r_{1}(t)$ direction. This is true dynamically throughout training under the assumption of small $\sigma$. At convergence $\bm r_1(t) \to \bm\beta$, which is the unique fixed point reachable through gradient descent. We discuss the evolution of $u(t)$ below.
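The rank-one NTK formula above can be checked directly: construct exactly balanced rank-one weights and compare a finite-difference NTK against $u^{2(L-1)}\bm x^\top[(L-1)\bm r_1\bm r_1^\top + \bm I]\bm x'$. A sketch for $L=3$ (sizes, seed, and $u$ are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
D, N, L, u = 4, 5, 3, 0.7

def unit(v):
    return v / np.linalg.norm(v)

r1 = unit(rng.standard_normal(D))
r2, r3 = unit(rng.standard_normal(N)), unit(rng.standard_normal(N))
W1 = u * np.outer(r2, r1)                # W^1 = u r_2 r_1^T  (N x D)
W2 = u * np.outer(r3, r2)                # W^2 = u r_3 r_2^T  (N x N)
w3 = u * r3                              # w^3 = u r_3        (N,)

def f(params, x):
    A, B, c = params
    return c @ B @ A @ x

def grad(x, eps=1e-6):
    """Flattened central finite-difference gradient of f w.r.t. all parameters."""
    params = [W1.copy(), W2.copy(), w3.copy()]
    g = []
    for p in params:
        flat = p.ravel()                 # view onto the copy
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + eps; fp = f(params, x)
            flat[i] = old - eps; fm = f(params, x)
            flat[i] = old
            g.append((fp - fm) / (2 * eps))
    return np.array(g)

x, xp = rng.standard_normal(D), rng.standard_normal(D)
K_num = grad(x) @ grad(xp)               # NTK from parameter gradients
K_formula = u ** (2 * (L - 1)) * (x @ xp + (L - 1) * (r1 @ x) * (r1 @ xp))
print(np.isclose(K_num, K_formula, atol=1e-5))  # True
```

Since $f$ is linear in each individual parameter entry, the central difference is exact up to round-off, so the agreement is essentially to machine precision.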
The alignment of the NTK with the direction $\bm\beta$ increases with depth $L$. \subsubsection{Whitened Data vs Anisotropic Data}\label{sec:white_vs_unwhite} We now argue that in the case where the input data is whitened, the trained network function is again a kernel machine that uses the final NTK. The unit vector $\bm r_1(t)$ quickly aligns to $\bm\beta$ since the first layer weight matrix evolves in the rank-one direction $\frac{d}{dt} \bm W^{1} = \bm v(t) \bm \beta^\top$ throughout training for a time-dependent vector function $\bm v(t)$. As a consequence, early in training the top eigenvector of the NTK aligns to $\bm\beta$. Due to gradient descent dynamics, $\bm W^{1 \top} \bm W^1$ grows only in the $\bm\beta \bm\beta^\top$ direction. Since $\bm r_1$ quickly aligns to $\bm\beta$ as $\bm W^1$ grows only along the $\bm\beta$ direction, the global scalar function $c(t) = u(t)^L$ satisfies the dynamics $\dot{c}(t)= c(t)^{2-2/L} \left[ s- c(t) \right]$ in the whitened data case, which is consistent with the dynamics obtained when starting from the orthogonal initialization scheme of \citet{Saxe14exactsolutions}. We show in Appendix \ref{app:deep_linear_dynamics_ct} that spectral learning occurs over a timescale on the order of $t_{1/2} \approx \frac{L}{s (L-2)} \sigma^{-L+2}$, where $t_{1/2}$ is the time required to reach half the value of the initial loss. We discuss this scaling in detail in Figure \ref{fig:time_to_align_learn_deep}, showing that although the timescale of alignment shares the same scaling with $\sigma$ for $L > 2$, empirically alignment in deep networks occurs faster than spectral learning. Hence, the silent alignment conditions of Section \ref{sec:silent_align_theory} are satisfied. In the case where the data is unwhitened, the $\bm r_1(t)$ vector aligns with $\bm\Sigma \bm \beta$ early in training.
This happens since, early on, the dynamics for the first layer are $\frac{d}{dt} \bm W^1 \sim \bm v(t) \bm\beta^\top \bm\Sigma$ for a time-dependent vector $\bm v(t)$. However, for the same reasons we discussed in Section \ref{sec:unwhite_two_layer}, the kernel must realign at late times, violating the conditions for silent alignment. \begin{figure} \centering \subfigure[ODE Time to Learn]{\includegraphics[width=0.29\linewidth]{figures/deep_time_to_half_loss.pdf}} \subfigure[$L=3$ Dynamics]{\includegraphics[width=0.32\linewidth]{figures/depth_3_time_to_align.pdf}} \subfigure[Time To Learn $L=3$]{\includegraphics[width=0.32\linewidth]{figures/depth_3_learn_time_scaling.pdf}} \caption{Time scales are not strongly separated for deep networks ($L \geq 3$), but alignment is consistently achieved before loss decreases. (a) Time to half loss scales in a power law with $\sigma$ for networks with $L \geq 3$: $t_{1/2} \sim \frac{L}{(L-2)} \sigma^{-L+2}$ (black dashed) is compared with numerically integrating the dynamics $\dot{c}(t) = c^{2-2/L}(s-c)$ (solid). The power law scaling of $t_{1/2}$ with $\sigma$ is qualitatively different from what happens for $L =2$, where we identified logarithmic scaling $t_{1/2} \sim \log(\sigma^{-2})$. (b) Linear networks with $D=30$ inputs and $N=50$ hidden units trained on synthetic whitened data with $|\bm\beta|=1$. We show for an $L=3$ linear network the cosine similarity of $\bm W^{1 \top} \bm W^1$ with $\bm \beta \bm \beta^\top$ (dashed) and the loss (solid) for different initialization scales. (c) The time to get to $1/2$ the initial loss and the time for the cosine similarity of $\bm W^{1\top} \bm W^1$ with $\bm \beta \bm\beta^\top$ to reach $1/2$ both scale as $\sigma^{-L+2}$, however one can see that alignment occurs before half loss is achieved. } \label{fig:time_to_align_learn_deep} \end{figure} \subsection{Multiple Output Channels} We next discuss the case where the network has $C$ output channels.
We denote each network output as $f_c(\bm x)$, resulting in $C^2$ kernel sub-blocks $K_{c,c'}(\bm x,\bm x') = \nabla f_c(\bm x) \cdot \nabla f_{c'}(\bm x')$. In this context, the balancing condition $\bm W^{\ell} \bm W^{\ell \top} \approx \bm W^{\ell +1 \top} \bm W^{\ell+1}$ implies that each of the weight matrices is rank-$C$, yielding a rank-$C$ kernel. We give an explicit formula for this kernel in Appendix \ref{app:multi-class}. For concreteness, consider whitened input data $\bm\Sigma = \bm I$ and a teacher with weights $\bm\beta \in \mathbb{R}^{C \times D}$. The singular value decomposition of the teacher weights $\bm\beta = \sum_{\alpha} s_{\alpha} \bm z_{\alpha} \bm v_{\alpha}^\top$ determines the evolution of each mode \citep{Saxe14exactsolutions}. Each singular mode begins to be learned at $t_{\alpha} = \frac{1}{s_\alpha} \log\left( s_{\alpha} u_0^{-2} \right)$. To guarantee silent alignment, we need all of the Phase I time constants to be smaller than all of the Phase II time constants. In the case of a two-layer network, this is equivalent to the condition $\frac{1}{s_{min}} \ll \frac{1}{s_{max}} \log\left( s_{max} u_0^{-2} \right)$ so that the kernel alignment timescales are well separated from the timescales of spectral learning. We see that alignment precedes learning in Figure \ref{fig:multi-class} (a). For deeper networks, as discussed in Section \ref{sec:white_vs_unwhite}, alignment scales in the same way as the time for learning. \section{Silent Alignment on Real Data and ReLU Nets}\label{sec:nonlinear_anisotropy} In this section, we empirically demonstrate that many of the phenomena described in the previous sections carry over to nonlinear homogeneous networks with small initialization, provided that the data is not highly anisotropic. A similar separation in timescales is expected in the nonlinear $L$-homogeneous case since, early in training, the kernel evolves more quickly than the network predictions.
This argument is based on a phenomenon discussed by \citet{Chizat2019OnLT}. Consider an initial scaling of the parameters by $\sigma$. We find that the relative change in the features compared to the relative change in the loss has the form $\frac{|\frac{d}{dt} \nabla f| }{|\nabla f|} \frac{\mathcal L}{|\frac{d}{dt} \mathcal L|} \approx O(\sigma^{-L})$, which becomes very large for small initialization $\sigma$ as we show in Appendix \ref{app:laziness}. This indicates that, from small initialization, the parameter gradients and NTK evolve much more quickly than the loss. This is a necessary, but not sufficient, condition for the silent alignment effect. To guarantee silent alignment, the gradients must have finished evolving, except for overall scale, by the time the loss appreciably decreases. However, we showed in Figure \ref{fig:relu_demo_silent_alignment} that, for whitened data, nonlinear ReLU networks do in fact enjoy the separation of timescales necessary for the silent alignment effect. In even more realistic settings, like ResNet in Figure \ref{fig:relu_demo_silent_alignment} (d), we also see signatures of the silent alignment effect since the kernel does not grow in magnitude until the alignment has stabilized. We now explore how anisotropic data can interfere with silent alignment. We consider the partial whitening transformation: let the singular value decomposition of the data matrix be $\bm X = \bm U \bm S \bm V^\top$ and construct a new partially whitened dataset $\bm X_{\gamma} = \bm U \bm S^{\gamma} \bm V^\top$, where $\gamma \in (0,1)$. As $\gamma \to 0$ the dataset becomes closer to perfectly whitened. We compute loss and kernel alignment for depth-2 ReLU MLPs on a subset of CIFAR-10 and show results in Figure \ref{fig:cifar_anisotropy_partial_whiten}.
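The partial whitening map $\bm X_\gamma = \bm U \bm S^\gamma \bm V^\top$ is a one-liner via the SVD; a small self-contained sketch (random anisotropic data stands in for the CIFAR-10 images, and the helper name is ours):

```python
import numpy as np

def partial_whiten(X, gamma):
    """X_gamma = U S^gamma V^T; gamma -> 0 flattens all singular values to 1."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # economy SVD
    return U @ np.diag(S ** gamma) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50)) @ np.diag(np.linspace(3, 0.1, 50))  # anisotropic
Xw = partial_whiten(X, 0.0)                 # fully whitened limit
S_w = np.linalg.svd(Xw, compute_uv=False)
print(np.allclose(S_w, 1.0))                # True: all singular values equal
```

Intermediate $\gamma$ interpolates between the raw spectrum ($\gamma = 1$) and the whitened one ($\gamma \to 0$), which is exactly the knob varied in Figure \ref{fig:cifar_anisotropy_partial_whiten}.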
As $\gamma \to 0$ the agreement between the final NTK and the learned neural network function becomes much closer, since the kernel alignment curve is stable after a smaller number of training steps. As the data becomes more anisotropic, the kernel's dynamics become less trivial at later times: rather than evolving only in scale, the alignment with the target function varies in a non-trivial way while the loss is decreasing. As a consequence, the NN function deviates from a kernel machine with the final NTK. \begin{figure}[ht] \centering \subfigure[Input Spectra]{\includegraphics[width=0.3\linewidth]{figures/cifar_singularvals_vary_gamma.pdf}} \subfigure[Train Loss]{\includegraphics[width=0.3\linewidth]{figures/cifar_loss_vary_gamma.pdf}} \subfigure[Test Error]{\includegraphics[width=0.3\linewidth]{figures/cifar_test_loss_vary_gamma.pdf}} \subfigure[Kernel Norm]{\includegraphics[width=0.3\linewidth]{figures/cifar_kernel_norm_vary_gamma.pdf}} \subfigure[Phase I alignment]{\includegraphics[width=0.3\linewidth]{figures/cifar_kernel_alignment_vary_gamma.pdf}} \subfigure[Predictor Comparison]{\includegraphics[width=0.3\linewidth]{figures/cifar_estimation_NTK_vary_gamma.pdf}} \caption{Anisotropy in the data introduces multiple timescales which can interfere with the silent alignment effect in a ReLU network. Here we train an MLP to do two-class regression using Adam at learning rate $5\times 10^{-3}$. (a) We consider the partial whitening transformation on the 1000 CIFAR-10 images $\lambda_k \to \lambda_k^{\gamma}$ for $\gamma \in (0,1)$ for covariance eigenvalues $\bm\Sigma \bm v_k = \lambda_k \bm v_k$. (b) The loss dynamics for unwhitened data have a multitude of timescales rather than a single sigmoidal learning curve. As a consequence, kernel alignment does not happen all at once before the loss decreases and the final solution is not a kernel machine with the final NTK. (c) Test error on the classification task.
(d) Anisotropic data gives a slower evolution in the kernel's Frobenius norm. (e) The kernel alignment very rapidly approaches an asymptote for whitened data but exhibits a longer timescale for the anisotropic data. (f) The final NTK is a better predictor of the neural network function when the data is whitened, but still substantially outperforms the initial kernel even in the anisotropic case. } \label{fig:cifar_anisotropy_partial_whiten} \end{figure} \section{Conclusion} We provided an example of a case where neural networks can learn a kernel regression solution while in the rich regime. Our silent alignment phenomenon requires an early phase in which the NTK's eigenfunctions and relative eigenvalues evolve, followed by a separate phase in which the NTK grows only in scale. We demonstrate that, if these conditions are satisfied, then the final neural network function satisfies a representer theorem for the final NTK. We show analytically that these assumptions are realized in linear neural networks with small initialization trained on approximately whitened data and observe that the results hold for nonlinear networks and networks with multiple outputs. We demonstrate that silent alignment is highly sensitive to anisotropy in the input data. Our results demonstrate that representation learning is not necessarily at odds with the learned neural network function being a kernel regression solution, i.e., a superposition of kernel functions centered on the training data. While we provide one mechanism for a richly trained neural network to learn a kernel regression solution through the silent alignment effect, perhaps other temporal dynamics of the NTK could also give rise to the neural network learning a kernel machine for a data-dependent kernel.
Further, by asking whether neural networks behave as kernel machines for some data-dependent kernel function, one can hopefully shed light on their generalization properties and transfer learning capabilities \citep{bordelon_icml_learning_curve, Canatar2021SpectralBA, loureiro_lenka_feature_maps, GEIGER20211}; see also Appendix \ref{app:transfer_theory}. \subsubsection*{Acknowledgments} CP acknowledges support from the Harvard Data Science Initiative. AA acknowledges support from an NDSEG Fellowship and a Hertz Fellowship. BB acknowledges the support of the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard (award \#1764269) and the Harvard Q-Bio Initiative. We thank Jacob Zavatone-Veth and Abdul Canatar for helpful discussions and feedback.
\section{Introduction} The ongoing research on on-shell techniques has in recent years gone beyond its original domain of scattering amplitudes to the computation of form factors. The form factor, sometimes described as a bridge linking on-shell amplitudes and off-shell correlation functions, is a quantity containing both on-shell states (ingredients for amplitudes) and gauge-invariant operators (ingredients for correlation functions). Its computation can be traced back to the pioneering paper \cite{vanNeerven:1985ja} nearly 30 years ago, where the Sudakov form factor of the bilinear scalar operator $\mathop{\rm Tr}(\phi^2)$ was investigated up to two loops. At present, many revolutionary insights originally designed for the computation of amplitudes\footnote{See reviews, e.g., \cite{Bern:2007dw,Elvang:2013cua,Henn:2014yza}.}, such as the MHV vertex expansion \cite{Cachazo:2004kj}, the BCFW recursion relation \cite{Britto:2004ap,Britto:2005fq}, color-kinematic duality \cite{Bern:2008qj,Bern:2010ue}, the unitarity cut method \cite{Bern:1994zx,Bern:1994cg} (and its generalization to $D$ dimensions \cite{Anastasiou:2006jv,Anastasiou:2006gt}), generalized unitarity \cite{Britto:2004nc,Britto:2005ha}, etc., have found new roles in evaluating form factors. This progress has been achieved in a series of papers. In \cite{Brandhuber:2010ad}, the BCFW recursion relation appears for the first time in the recursive computation of tree-level form factors, mainly for the bilinear scalar operator. As a consequence, the recursion relation for the split-helicity form factor was solved in \cite{Brandhuber:2011tv}. An intensive discussion of the recursion relation for form factors was provided later in \cite{Bork:2014eqa}. A generalization to the form factor of the full stress-tensor multiplet is discussed in \cite{Brandhuber:2011tv} and \cite{Bork:2010wf}; in the former, the supersymmetric version of the BCFW recursion relation is pointed out to be applicable to super form factors.
Shortly after, the color-kinematic duality was implemented in the context of form factors \cite{Boels:2012ew}, both at tree and loop level, to generate the integrand of the form factor. Most recently, the elegant formulation of amplitudes based on the Grassmannian prescription \cite{ArkaniHamed:2012nw} was also extended to tree-level form factors \cite{Frassek:2015rka}. At loop level, form factors are generally computed by the unitarity cut method. The generic Maximal-Helicity-Violating (MHV) super form factor, as well as some Next-to-MHV (NMHV) form factors at one loop, are computed in \cite{Brandhuber:2011tv,Bork:2011cj,Bork:2012tt,Engelund:2012re} with compact results. The Sudakov form factor is computed to three loops in \cite{Gehrmann:2006wg,Baikov:2009bg,Gehrmann:2011xn}. The three-point two-loop form factor of the half-BPS operator is obtained in \cite{Brandhuber:2012vm}, and the general $n$-point form factor as well as the remainder functions in \cite{Brandhuber:2014ica}. The scalar operator with an arbitrary number of scalars is discussed in \cite{Bork:2010wf,Penante:2014sza,Brandhuber:2014ica}. Beyond the half-BPS operators, form factors of non-protected operators, such as the dilatation operator \cite{Wilhelm:2014qua}, the Konishi operator \cite{Nandan:2014oga}, and operators in the $SU(2)$ sectors \cite{Loebbert:2015ova}, are also under investigation. Furthermore, the soft theorems for the form factors of half-BPS and Konishi operators have been studied at tree and one-loop level \cite{Bork:2015fla}, showing similarity to the amplitude case. Carrying on the integrand result of \cite{Boels:2012ew}, the master integrals for the four-loop Sudakov form factor are determined in \cite{Boels:2015yna}. An alternative discussion of the master integrals for form factors in massless QCD can be found in \cite{vonManteuffel:2015gxa}. Similar unitarity-based studies of the Sudakov form factor in three-dimensional ABJM theories have also been explored \cite{Brandhuber:2013gda,Young:2013hda,Bianchi:2013pfa}.
The above-mentioned achievements encode the belief that the state-of-the-art on-shell techniques for amplitudes should also be applicable to form factors. Recently, advances in the computation of boundary contributions have revealed another connection between form factors and amplitudes. When applying the BCFW recursion relation to an amplitude, the boundary contribution is generally assumed to be absent. However, this assumption is not always true; for example, it fails in theories involving only scalars and fermions, or under a ``bad'' momentum deformation. Many solutions have been proposed (introducing auxiliary fields \cite{Benincasa:2007xk,Boels:2010mj}, analyzing Feynman diagrams \cite{Feng:2009ei,Feng:2010ku,Feng:2011twa}, studying the zeros \cite{Benincasa:2011kn,Benincasa:2011pg,Feng:2011jxa}, the factorization limits \cite{Zhou:2014yaa}, or using other deformations \cite{Cheung:2015cba,Cheung:2015ota,Luo:2015tat}) to deal with the boundary contribution in various situations. Most recently, a new multi-step BCFW recursion relation algorithm \cite{Feng:2014pia,Jin:2014qya,Feng:2015qna} was proposed to detect the boundary contribution through certain poles step by step. In particular, in \cite{Jin:2014qya} it is pointed out that the boundary contribution satisfies a BCFW-like recursion relation similar to that of amplitudes, and can be computed recursively from lower-point boundary contributions. Based on this idea, in the later paper \cite{Jin:2015pua} the boundary contribution is further interpreted as the form factor of a certain composite operator named the {\sl boundary operator}, where the boundary operator can be extracted from the operator product expansion (OPE) of the deformed fields. The idea of the boundary operator motivates us to connect the computation of form factors to the boundary contributions of amplitudes.
Since a given boundary contribution of an amplitude can be identified as the form factor of a certain boundary operator, we can also interpret a given form factor as the boundary contribution of a certain amplitude. In \cite{Jin:2015pua}, the authors showed how to construct the boundary operator starting from a known Lagrangian. We can reverse the logic and ask: for a given operator, how can we construct a Lagrangian whose boundary operator under a certain momentum deformation is exactly the operator of interest? In this paper, we try to answer this question by constructing the Lagrangian for a class of so-called composite operators. Once the Lagrangian is ready, we can compute the corresponding amplitude, take an appropriate momentum shift and extract the boundary contribution, which is identical (or proportional) to the form factor of that operator. In this way, the computation of a form factor can be recast as the problem of computing the amplitude of a certain theory. This paper is structured as follows. In \S \ref{secReview}, we briefly review the BCFW recursion relation and the boundary operator. We also list the composite operators of interest, and illustrate how to construct the Lagrangian that generates the desired boundary operators. In \S \ref{secSudakov}, using the Sudakov form factor as an example, we explain how to compute the form factor through the boundary contribution of an amplitude, and demonstrate the computation via the recursion relations of the form factor, the amplitude and the boundary contribution. We show that these three ways of understanding lead to the same result. In \S \ref{secComposite}, we compute the form factors of composite operators by constructing the corresponding Lagrangians and working out the amplitudes of double trace structure.
Conclusion and discussion can be found in \S \ref{secConclusion}, while in the appendix, the construction of the boundary operator starting from a Lagrangian is briefly reviewed for the reader's convenience, and the large $z$ behavior is discussed. \section{From boundary contribution to form factor} \label{secReview} The BCFW recursion relation \cite{Britto:2004ap,Britto:2005fq} provides a new way of studying scattering amplitudes in the S-matrix framework. Using a suitable momentum shift, for example, \begin{eqnarray} \widehat{p}_i=p_i-z q~~,~~\widehat{p}_j=p_j+zq~~\mbox{while}~~q^2=p_i\cdot q=p_j\cdot q=0~,~~~\label{bcfw-shifting}\end{eqnarray} one can treat the amplitude as an analytic function $A(z)$ of a single complex variable, with poles at finite locations and a possible non-vanishing boundary term, while the physical amplitude sits at the point $z=0$. Assuming that under a certain momentum shift, $A(z)$ has no boundary contribution in the contour integration ${1\over 2\pi i}\oint {dz\over z}A(z)$, i.e., $A(z)\to 0$ when $z\to \infty$, the physical amplitude $A(z=0)$ can be determined purely by the residues of $A(z)$ at the finite poles. However, if $A(z)$ does not vanish at infinity, for example when taking a ``bad'' momentum shift or in theories such as $\lambda\phi^4$, the boundary contribution also appears as a part of the physical amplitude. One usually tries to avoid dealing with such theories as well as ``bad'' momentum shifts, since the evaluation of the boundary contribution is much more complicated than taking the residues of $A(z)$.
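The split between residue and boundary pieces can be made explicit by evaluating the contour integral above. Deforming the small contour around the origin out to infinity gives the standard rearrangement (here $z_k$ denotes the finite poles of $A(z)$ and $B$ the contribution of the contour at infinity):

```latex
A(0)=\frac{1}{2\pi i}\oint_{|z|=\epsilon} \frac{dz}{z}\,A(z)
    =-\sum_{z_k\neq 0}\mathop{\rm Res}_{z=z_k}\frac{A(z)}{z}+B\,.
```

When $A(z)\to 0$ as $z\to\infty$, the term $B$ vanishes and the physical amplitude is fixed entirely by the finite residues; otherwise $B$ is precisely the boundary contribution discussed above.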
Although it is usually unfavored in the direct computation of amplitudes, the authors of \cite{Jin:2015pua} found that the boundary contribution is in fact {\sl a form factor involving the boundary operator and the unshifted particles}, \begin{eqnarray} B^{\spab{1|2}}=\spaa{\Phi(p_3)\cdots \Phi(p_{n})|\mathcal{O}^{\spab{1|2}}(0)|0}~,~~~\label{boundaryFrom}\end{eqnarray} where $\Phi(p_i)$ denotes arbitrary on-shell fields, and the momenta of $\Phi(p_1),\Phi(p_2)$ have been shifted according to eqn.(\ref{bcfw-shifting}). The momentum $q$ carried by the boundary operator is $q=-p_1-p_2=\sum_{i=3}^n p_i$. Eqn.~(\ref{boundaryFrom}) is identical to an $(n-2)$-point form factor generated by the operator $\mathcal{O}^{\spab{1|2}}$ with off-shell momentum $q^2\neq 0$. The observation (\ref{boundaryFrom}) provides a new way of computing form factors: \begin{enumerate} \item Construct the Lagrangian, and compute the corresponding amplitude, \item Take the appropriate momentum shift, and pick up the boundary contribution, \item Read out the form factor from the boundary contribution after taking LSZ reduction into account. \end{enumerate} In \cite{Jin:2015pua}, the authors illustrated how to work out the boundary operator $\mathcal{O}^{\spab{\Phi_i|\Phi_j}}$ from the Lagrangian of a given theory under a momentum shift of two selected external fields. Starting from a Lagrangian, one can eventually obtain a boundary operator.
For example, a real massless scalar theory with $\phi^m$ interaction \begin{eqnarray} L=-{1\over 2} (\partial \phi)^2+{\kappa\over m!}\phi^m~,~~~\end{eqnarray} under a momentum shift of two scalars (say $\phi_1$ and $\phi_2$) will produce a boundary operator \begin{eqnarray} \mathcal{O}^{\spab{\phi_1|\phi_2}}={\kappa\over (m-2)!}\phi^{m-2}~.~~~\end{eqnarray} Hence the boundary contribution of an $n$-point amplitude $A_n(\phi_1,\ldots, \phi_n)$ in this $\kappa\phi^m$ theory under the $\spab{\phi_1|\phi_2}$-shift is identical to the $(n-2)$-point form factor \begin{eqnarray} \mathcal{F}_{\mathcal{O}^{\spab{\phi_1|\phi_2}},n-2}(\phi_3,\ldots,\phi_{n};q)\equiv{\kappa\over (m-2)!}\spaa{\phi_3\cdots\phi_{n}|\phi^{m-2}(0)|0}~.~~~\end{eqnarray} However, this form factor is not particularly interesting. We are interested in certain kinds of operators, such as the bilinear half-BPS scalar operator $\mathop{\rm Tr}(\phi^{AB}\phi^{AB})$ or the chiral stress-tensor operator $\mathop{\rm Tr}(W^{++}W^{++})$ in $\mathcal{N}=4$ super-Yang-Mills (SYM) theory, where $W^{++}$ is a particular projection of the chiral vector multiplet superfield $W^{AB}(x,\theta)$ in SYM. What we want is to compute the form factor of a given operator, not of operators generated from an arbitrary Lagrangian. More explicitly, if we want to compute the form factor of an operator $\mathcal{O}$, we should first construct a Lagrangian whose boundary operator is identical (or proportional) to $\mathcal{O}$. With such a Lagrangian in hand, we can then compute the corresponding amplitude, take the momentum shift and pick up the boundary contribution. So the problem is how to construct the corresponding Lagrangian. \subsection{The operators of interest} \label{secOperator} It is obvious that the construction of the Lagrangian depends on the operators we want to produce.
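For the $\kappa\phi^m$ example above, the origin of the $1/(m-2)!$ can be made explicit; the following is a sketch of the counting. Splitting the field into a soft and a hard (large-$z$) part, $\phi\to\phi+\phi^{\Lambda}$, the piece of the interaction quadratic in the hard field is

```latex
\frac{\kappa}{m!}\,(\phi+\phi^{\Lambda})^{m}\;\supset\;
\frac{\kappa}{m!}\binom{m}{2}\,\phi^{m-2}\,(\phi^{\Lambda})^{2}
=\frac{\kappa}{2\,(m-2)!}\,\phi^{m-2}\,(\phi^{\Lambda})^{2}\,,
```

and the symmetrization over the two shifted legs supplies the remaining factor of $2$, reproducing $\mathcal{O}^{\spab{\phi_1|\phi_2}}=\kappa\,\phi^{m-2}/(m-2)!$.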
In this paper, we will study so-called gauge-invariant local composite operators, which are built as traces of products of gauge-covariant fields at a common spacetime point. These fields are taken to be the component fields of the $\mathcal{N}=4$ superfield $\Phi^{\mathcal{N}=4}$ \cite{Nair:1988bq}, given by six real scalars $\phi^I,I=1,\ldots, 6$ (or 3 complex scalars $\phi^{AB}$), four fermions $\psi^A_{\alpha}=\epsilon^{ABCD}\psi_{BCD\alpha}$, four anti-fermions $\bar{\psi}_{A\dot{\alpha}}$ and the field strength $F_{\mu\nu}$, where $\alpha,\beta,\dot{\alpha},\dot{\beta}=1,2$ are spinor indices, $A,B,C,D=1,2,3,4$ are $SU(4)$ R-symmetry indices, and $\mu,\nu=0,1,2,3$ are spacetime indices. The field strength can be further split into self-dual and anti-self-dual parts $F_{\alpha\beta},\bar{F}_{\dot{\alpha}\dot{\beta}}$: \begin{eqnarray} F_{\alpha\beta\dot{\alpha}\dot{\beta}}=F_{\mu\nu}(\sigma^{\mu})_{\alpha\dot{\alpha}}(\sigma^{\nu})_{\beta\dot{\beta}} =\sqrt{2}\epsilon_{\dot{\alpha}\dot{\beta}}F_{\alpha\beta}+\sqrt{2}\epsilon_{\alpha\beta}\bar{F}_{\dot{\alpha}\dot{\beta}}~,~~~\end{eqnarray} corresponding to positive-helicity and negative-helicity gluons respectively. The number of fields inside the trace is called the length of the operator, and the simplest non-trivial ones are the length-two operators. There is no limit on the length of an operator: for example, the bilinear half-BPS scalar operator $\mathop{\rm Tr}(\phi^I\phi^J)$ has length two, while we could also have the length-$L$ scalar operator $\mathop{\rm Tr}(\phi^{I_1}\cdots \phi^{I_L})$. The operators can also carry spinor indices, such as $\mathcal{O}^{\alpha\beta\dot{\alpha}\dot{\beta}}=\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B\beta}\bar{F}^{\dot{\alpha}\dot{\beta}})$ in the $(1,1)$ representation of the Lorentz group $SU(2)\times SU(2)$. We will mainly focus on the length-two operators. These operators can be classified by their spins and labeled by their representations under the $SU(2)\times SU(2)$ group.
For spin-0 operators in $(0,0)$-representation, we have \begin{eqnarray} &&\mathcal{O}^{[0]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\phi^{I}\phi^{J})~~,~~\mathcal{O}^{[0]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B}_{\alpha})~~,~~ \mathcal{O}^{[0]}_{\tiny \mbox{III}}=\mathop{\rm Tr}(F^{\alpha\beta}F_{\alpha\beta})~,~~~\nonumber\\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~\bar{\mathcal{O}}^{[0]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\bar{\psi}^{\dot{\alpha}}_{A}\bar{\psi}_{B\dot{\alpha}})~~,~~ \bar{\mathcal{O}}^{[0]}_{\tiny \mbox{III}}=\mathop{\rm Tr}(\bar{F}^{\dot{\alpha}\dot{\beta}}\bar{F}_{\dot{\alpha}\dot{\beta}})~.~~~\label{spin0-operator}\end{eqnarray} For spin-${1\over 2}$ operators in $({1\over 2},0)$ or $(0,{1\over 2})$-representation, we have \begin{eqnarray} &&\mathcal{O}^{[1/2]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\phi^{I}\psi^{A\alpha})~~,~~\mathcal{O}^{[1/2]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\psi^{A}_{\beta}F^{\beta\alpha})~,~~~\nonumber\\ &&\bar{\mathcal{O}}^{[1/2]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\phi^{I}\bar{\psi}_{A}^{\dot{\alpha}})~~,~~\bar{\mathcal{O}}^{[1/2]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\bar{\psi}_{A\dot{\beta}}\bar{F}^{\dot{\beta}\dot{\alpha}})~.~~~\label{spin12-operator}\end{eqnarray} For spin-1 operators in $(1,0)$ or $(0,1)$-representation, we have \begin{eqnarray} &&\mathcal{O}^{[1]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B\beta}+\psi^{A\beta}\psi^{B\alpha})~~,~~\mathcal{O}^{[1]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\phi^{I} F^{\alpha\beta})~,~~~\nonumber\\ &&\bar{\mathcal{O}}^{[1]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\bar{\psi}_A^{~~\dot{\alpha}}\bar{\psi}_B^{~~\dot{\beta}}+\bar{\psi}_A^{~~\dot{\beta}}\bar{\psi}_B^{~~\dot{\alpha}})~~,~~\bar{\mathcal{O}}^{[1]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\phi^{I} \bar{F}^{\dot{\alpha}\dot{\beta}})~,~~~\label{spin1-operator1}\end{eqnarray} and in $({1\over 2},{1\over 2})$-representation, \begin{eqnarray} \mathcal{O}^{[1]}_{\tiny \mbox{III}}=\mathop{\rm 
Tr}(\psi^{A\alpha}\bar{\psi}_B^{\dot{\alpha}})~.~~~\label{spin1-operator2}\end{eqnarray} For spin-${3\over 2}$ operators in $(1,{1\over 2})$ or $({1\over 2},1)$-representation, we have \begin{eqnarray} \mathcal{O}^{[3/2]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\bar{\psi}_A^{\dot{\alpha}}F^{\alpha\beta})~~,~~\bar{\mathcal{O}}^{[3/2]}_{\tiny \mbox{I}}=\mathop{\rm Tr}({\psi}^{A{\alpha}}\bar{F}^{\dot{\alpha}\dot{\beta}})~,~~~\label{spin32-operator1}\end{eqnarray} and in $({3\over 2},0)$ or $(0,{3\over 2})$-representation, \begin{eqnarray} \mathcal{O}^{[3/2]}_{\tiny \mbox{II}}=\mathop{\rm Tr}({\psi}^{A\gamma}{F}^{{\alpha}{\beta}})~~,~~\bar{\mathcal{O}}^{[3/2]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\bar{\psi}_A^{\dot{\gamma}}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\label{spin32-operator2}\end{eqnarray} For spin-2 operators in $(1,1)$-representation, we have \begin{eqnarray} \mathcal{O}^{[2]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(F^{\alpha\beta}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\label{spin2-operator}\end{eqnarray} For operators of the same class, we can apply a similar procedure to construct the Lagrangian. Operators with length larger than two can be written down similarly, and classified according to their spins and representations. For those whose spins are no larger than 2, we can apply the same procedure as for length-two operators, while if their spins are larger than 2, we need multiple shifts. Some of the above operators are in fact part of the chiral stress-tensor multiplet operator in $\mathcal{N}=4$ SYM \cite{Eden:2011yp,Eden:2011ku}, and their form factors are components of the $\mathcal{N}=4$ super form factor. However, since we keep all indices of these gauge-covariant fields general, the above operators are not limited to the chiral part; they are quite general. \subsection{Constructing the Lagrangian} One important property shared by the above operators is that they are all traces of fields.
Tree-level amplitudes of an ordinary gauge theory possess only single trace structures. From the shift of two external fields, one cannot generate boundary operators with trace structures, as can be seen in \cite{Jin:2015pua}. The solution is to intentionally add a double trace term to the standard Lagrangian. The added term should be gauge-invariant, and should generate the corresponding operator under the selected momentum shift. For a given operator $\mathcal{O}$ of interest, let us add a double trace term $\Delta L$ to the $\mathcal{N}=4$ Lagrangian $L_{{\tiny\mbox{SYM}}}$, \begin{eqnarray} L_{\mathcal{O}}=L_{{\tiny\mbox{SYM}}}+ \frac{\kappa}{N}\mathop{\rm Tr}(\Phi^{\alpha'_1}\Phi^{\alpha'_2})\mathcal{O} +\frac{\bar{\kappa}}{N}\mathop{\rm Tr}(\Phi^{\dagger}_{\alpha'_1}\Phi^{\dagger}_{\alpha'_2})\bar{\mathcal{O}}~,~~~\label{lagrangianO} \end{eqnarray} where the gauge group is assumed to be $SU(N)$, $\kappa,\bar{\kappa}$ are coupling constants for the double trace interactions (which can be rescaled to fit the overall factor of the final result) and $\Phi^{\alpha'}$, $\Phi^{\dagger}_{\alpha'}$ denote\footnote{The definition of $\Phi, \Phi^{\dagger}$ can be found in (\ref{defPhi}); note that the index of $\Phi,\Phi^{\dagger}$ here is not a spinor index but the index of their components, which specifies whether $\Phi$ is a scalar, fermion or gluon.} any type of field among $\phi^I,\psi^{A\alpha},\bar{\psi}_A^{\dot{\alpha}},F^{\alpha\beta},\bar{F}^{\dot{\alpha}\dot{\beta}}$. The spinor indices of $\Phi, \Phi^{\dagger}$ are not written explicitly; note, however, that they should be contracted with the spinor indices of the operator, so that the added Lagrangian terms are Lorentz invariant. We will show that in the large $N$ limit, a momentum shift of two fields in $\Delta L$ indeed generates the boundary operator $\mathcal{O}$.
\def{\tiny \mbox{full}}{{\tiny \mbox{full}}} The tree-level amplitudes defined by the Lagrangian $L_{\mathcal{O}}$ can have single trace pieces or multiple trace pieces. A full $(n+2)$-point amplitude $$A_{n+2}^{{\tiny \mbox{full}}}(\Phi^{\alpha_1a_1},\ldots, \Phi^{\alpha_na_n},\Phi^{\alpha_{n+1}a},\Phi^{\alpha_{n+2}b})$$ can thus be decomposed into color-ordered partial amplitudes $A$ as \begin{eqnarray} A_{n+2}^{{\tiny \mbox{full}}}&=&A_{n+2}(1,2,\ldots,n+2)\mathop{\rm Tr}(t^{a_1}\cdots t^{a_n}t^at^b)+\cdots\label{aoriginal}\\ &&+\frac{1}{N}A_{k;n+2-k}(1,\ldots,k;k+1,\ldots , n+2)\mathop{\rm Tr}(t^{a_1}\cdots t^{a_{k}})\mathop{\rm Tr}(t^{a_{k+1}}\cdots t^at^b)+\cdots\nonumber\end{eqnarray} where $A_{n}$ denotes the $n$-point single trace amplitude, and $A_{k;n-k}$ denotes the $n$-point double trace amplitude. We use $i$ to abbreviate $\Phi_i$, and $\cdots$ stands for all possible permutation terms and other higher-order multiple trace pieces. Since the operator $\mathcal{O}$ we want to generate is single trace, the higher multiple trace terms in $\cdots$ are irrelevant for our discussion, and moreover they can be ignored at large $N$.
Now let us contract the color indices $a,b$, which gives\footnote{Recall the identity $(t^a)^{~\bar{\jmath}_1}_{i_1}(t^a)^{~\bar{\jmath}_2}_{i_2}=\delta^{~\bar{\jmath}_2}_{i_1}\delta^{~\bar{\jmath}_1}_{i_2}-{1\over N}\delta^{~\bar{\jmath}_1}_{i_1}\delta^{~\bar{\jmath}_2}_{i_2}$.} \begin{eqnarray} A_{n+2}^{{\tiny \mbox{full}}}&=&{N^2-1\over N}A_{n+2}(1,2,\ldots,n+2)\mathop{\rm Tr}(t^{a_1}\cdots t^{a_n})+\cdots\label{acontract}\\ &&+{N^2-1\over N^2}A_{k;n+2-k}(1,\ldots,k;k+1,\ldots, n+2)\mathop{\rm Tr}(t^{a_1}\cdots t^{a_k})\mathop{\rm Tr}(t^{a_{k+1}}\cdots t^{a_n})+\cdots\nonumber\end{eqnarray} In this case, the $O(N)$ order terms in (\ref{acontract}) come from two places: one is the single trace part in (\ref{aoriginal}) when $t^a$ and $t^b$ are adjacent, the other is the double trace part in (\ref{aoriginal}) whose color factor has the form $\mathop{\rm Tr}(\cdots )\mathop{\rm Tr}( t^at^b)$. So when the color indices $a,b$ are contracted, the leading contribution of the full $(n+2)$-point amplitude is \begin{eqnarray} A_{n+2}^{{\tiny \mbox{full}}}=N\mathop{\rm Tr}(t^{a_1}\cdots t^{a_n})\mathcal{K}(1,2,\ldots, n)+\mbox{possible~permutation}\{1,2,\ldots,n\}~,~~~\end{eqnarray} where \begin{eqnarray} \mathcal{K}(1,\ldots, n)&\equiv&A_{n+2}(1,\ldots, n,n+1,n+2)+A_{n+2}(1,\ldots, n,n+2,n+1)\nonumber\\ &&+A_{n;2}(1,\ldots, n;n+1,n+2)~.~~~\label{aconN}\end{eqnarray} The first two terms in $\mathcal{K}$ are the same as the corresponding color-ordered single trace amplitudes, since the other double trace terms in the Lagrangian do not contribute at $O(N)$ at tree level. The third term in $\mathcal{K}$ is the double trace amplitude with trace form $\mathop{\rm Tr}(\cdots)\mathop{\rm Tr}(t^at^b)$, and the Feynman diagrams contributing to this amplitude are those in which $\Phi_{n+1}$ and $\Phi_{n+2}$ are attached to the same double trace vertex, while the color indices of $\Phi_{n+1},\Phi_{n+2}$ are separated from the others.
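The $(N^2-1)/N$ factors above follow from the $SU(N)$ completeness relation quoted in the footnote. As a sanity check, the relation (with the normalization $\mathop{\rm Tr}(t^at^b)=\delta^{ab}$ implicit in it) can be verified numerically with an explicit generator basis; the following is a small NumPy sketch, not part of the original computation:

```python
import numpy as np

def sun_generators(N):
    """Hermitian traceless generators of su(N), normalized so Tr(t^a t^b) = delta^{ab}."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            S = np.zeros((N, N), dtype=complex); S[i, j] = S[j, i] = 1 / np.sqrt(2)
            A = np.zeros((N, N), dtype=complex); A[i, j] = -1j / np.sqrt(2); A[j, i] = 1j / np.sqrt(2)
            gens += [S, A]
    for k in range(1, N):  # diagonal (Cartan) generators
        D = np.zeros((N, N), dtype=complex)
        D[np.arange(k), np.arange(k)] = 1.0
        D[k, k] = -k
        gens.append(D / np.sqrt(k * (k + 1)))
    return gens  # N^2 - 1 matrices

def completeness_lhs(N):
    """sum_a (t^a)_i^j (t^a)_k^l as a rank-4 array indexed [i, j, k, l]."""
    return sum(np.einsum("ij,kl->ijkl", t, t) for t in sun_generators(N))

def completeness_rhs(N):
    """delta_i^l delta_k^j - (1/N) delta_i^j delta_k^l."""
    d = np.eye(N)
    return np.einsum("il,kj->ijkl", d, d) - np.einsum("ij,kl->ijkl", d, d) / N
```

Contracting both sides with a matrix $X$ reproduces $\sum_a\mathop{\rm Tr}(X t^a t^a) = \frac{N^2-1}{N}\mathop{\rm Tr}(X)$, which is exactly the factor appearing when $t^a$ and $t^b$ are adjacent in the trace.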
Now let us examine the large $z$ behavior of the amplitude under the momentum shift $\gb{\Phi_{n+1}^{\alpha_{n+1}}|\Phi_{n+2}^{\alpha_{n+2}}}$. Since the color indices of the two shifted legs are contracted, it is equivalent to consider the large $z$ behavior of $\mathcal{K}(1,2,\ldots, n)$ under such a shift. Following \cite{Jin:2015pua}, we find that in the large $N$ limit, the leading interaction part $V$ is given by \begin{eqnarray} V^{\alpha \beta }&=&V^{\alpha \beta}_{{\tiny\mbox{SYM}}} +N\bar{\kappa}(\delta^{\alpha}_{\alpha'_{1}}\delta^{\beta}_{\alpha'_{2}} +\delta^{\alpha}_{\alpha'_{2}}\delta^{\beta}_{\alpha'_{1}}) \bar{\mathcal{O}}+N{\kappa}(T^{\alpha'_{1}\alpha}T^{\alpha'_{2}\beta} +T^{\alpha'_{2}\alpha}T^{\alpha'_{1}\beta}){\mathcal{O}}~,~~~\label{vnewl}\end{eqnarray} where $T^{\alpha\beta}$ is defined through $\Phi^{\alpha}=T^{\alpha\beta}\Phi^{\dagger}_{\beta}$, and $\alpha'_1=\alpha_{n+1}, \alpha'_2=\alpha_{n+2}$, indicating that the shifted fields $\Phi_{n+1},\Phi_{n+2}$ are the two fields of $\mathop{\rm Tr}(\Phi^{\alpha'_1}\Phi^{\alpha'_2})$ in (\ref{lagrangianO}) with the specified field types. In general, the OPE of the shifted fields has the form \cite{Jin:2015pua} \begin{eqnarray} \mathcal{Z}(z)=\epsilon^{n+1}_{\alpha}\epsilon^{n+2}_{\beta}\Bigl[V^{\alpha\beta}-V^{\alpha\beta_1}(D_0^{-1})_{\beta_1\beta_2}V^{\beta_2\beta}+\cdots \Bigr]~,~~~\label{exp-exp-1} \end{eqnarray} where $\epsilon_\alpha^{n+1},\epsilon_\beta^{n+2}$ are the external wave functions of $\Phi_{n+1},\Phi_{n+2}$. The terms with $(D_0^{-1})^{k}$ correspond to Feynman diagrams with $k$ hard propagators. The $\mathcal{Z}(z)$ for $L_{\mathcal{O}}$ contains two parts, one from the single trace and the other from the double trace. The single trace amplitudes in $\mathcal{K}$ originate from Feynman diagrams with vertices of the $\mathcal{N}=4$ Lagrangian, thus their $\mathcal{Z}(z)$ can be obtained directly by replacing $V^{\alpha \beta}$ with $V^{\alpha \beta}_{{\tiny\mbox{SYM}}}$.
The double trace amplitudes in $\mathcal{K}$ originate from Feynman diagrams with double trace vertices. Because the two shifted fields $\Phi_{n+1},\Phi_{n+2}$ must be attached to the same double trace vertex, the hard propagator does not appear in the corresponding Feynman diagrams. Thus for this part, we only need to keep the first term in (\ref{exp-exp-1}) (more explicitly, the terms with a single $\mathcal{O}$ or $\bar{\mathcal{O}}$ in (\ref{vnewl})). Combined together, we have \begin{eqnarray} \mathcal{Z}(z)&=&\mathcal{Z}_{{\tiny\mbox{SYM}}}(z)+\epsilon^{n+1}_{\alpha}\epsilon^{n+2}_{\beta}N\bar{\kappa}(\delta^{\alpha}_{\alpha'_{1}}\delta^{\beta}_{\alpha'_{2}} +\delta^{\alpha}_{\alpha'_{2}}\delta^{\beta}_{\alpha'_{1}})\bar{\mathcal{O}}\nonumber\\ &&~~~~~~~~~~~~~~~+\epsilon^{n+1}_{\alpha}\epsilon^{n+2}_{\beta}N{\kappa}(T^{\alpha'_{1}\alpha}T^{\alpha'_{2}\beta} +T^{\alpha'_{2}\alpha}T^{\alpha'_{1}\beta}){\mathcal{O}}~.~~~\label{zzzz2}\end{eqnarray} The summation over $\alpha,\beta$ runs over all types of fields. For a given momentum shift $\alpha'_1=\alpha_{n+1}$, $\alpha'_2=\alpha_{n+2}$, we can choose the wave functions such that $\epsilon^{n+1}_{\alpha_{n+1}}\epsilon^{n+2}_{\alpha_{n+2}}\neq 0$ but all other types of contractions vanish. In this case, the second line of (\ref{zzzz2}) contains a factor $(T^{\alpha_{n+1}\alpha_{n+1}}T^{\alpha_{n+2}\alpha_{n+2}} +T^{\alpha_{n+1}\alpha_{n+2}}T^{\alpha_{n+2}\alpha_{n+1}})$. From the definition of $T^{\alpha\beta}$ in (\ref{Phi2DaggerPhi}), it is clear that this factor is zero when the two shifted fields are not complex conjugate to each other.
So we have \begin{eqnarray} \mathcal{Z}(z)=\mathcal{Z}_{{\tiny\mbox{SYM}}}(z) +N\bar{\kappa}\epsilon^{n+1}_{\alpha_{n+1}}\epsilon^{n+2}_{\alpha_{n+2}}\bar{\mathcal{O}}~.~~~\label{zzzz3}\end{eqnarray} However, if the two shifted fields are complex conjugate to each other, then in the definition of the Lagrangian (\ref{lagrangianO}), $\bar{\mathcal{O}}$ is in fact identical to $\mathcal{O}$. This means that there is only one term in $\Delta L$, not two, and consequently only the first line of (\ref{zzzz2}) is present. After the choice of wave functions, we again obtain (\ref{zzzz3}). From eqn.(\ref{zzzz3}), we see that the large $z$ behavior of $L_{\mathcal{O}}$ under the $\spab{\Phi|\Phi}$-shift depends on the large $z$ behavior of the $\mathcal{N}=4$ SYM theory as well as on the double trace term $\Delta L$. In fact (see Appendix \ref{largeZN4} for a detailed discussion), for all the shifts we use in this paper\footnote{Including $\langle \phi^I|\phi^J]$, $\langle \psi^{A\alpha}|\phi^J]$, $\langle \psi^{A\alpha}|\bar{\psi}_{\dot{\alpha}}]$, $\langle \psi^{A\alpha}|\psi^{B\beta}]$, $\langle \psi^{A\alpha}|F^{\beta\gamma}]$, $\langle \bar{\psi}_{A\dot{\alpha}}|F^{\beta\gamma}]$ and $\langle \bar{F}^{\dot{\alpha}\dot{\beta}}|F^{\gamma\rho}]$.}, $\mathcal{Z}_{{\tiny\mbox{SYM}}}(z)$ has lower power in $z$ at large $z$ than the second term in \eqref{zzzz3}. This means that the boundary operator (i.e., the operator defined by the leading $z$ order) is always determined by the second term in (\ref{zzzz3}), \begin{eqnarray} \mathcal{Z}(z)\sim&N\bar{\kappa}\epsilon^{n+1}_{\alpha_{n+1}}\epsilon^{n+2}_{\alpha_{n+2}}\bar{\mathcal{O}}~.~~~ \label{zzzz5}\end{eqnarray} So it produces the desired operator $\bar{\mathcal{O}}$, up to a possible prefactor from the external wave functions.
\section{Sudakov form factor and more} \label{secSudakov} In this section, we take the bilinear half-BPS scalar operator $\mathcal{O}_2\equiv \mathcal{O}_{\tiny \mbox{I}}^{[0]}=\mathop{\rm Tr}(\phi^I\phi^J)$ as an example to illustrate the idea of computing form factors from boundary contributions. The form factor is defined as \begin{eqnarray} \mathcal{F}_{\mathcal{O}_2,n}({s};q)=\int d^4x e^{-iqx}\vev{s|\mathop{\rm Tr}(\phi^{I}\phi^{J})(x)|0}=\delta^{(4)}(q-\sum_{i=1}^{n}p_i)\vev{s|\mathop{\rm Tr}(\phi^{I}\phi^{J})(0)|0}~.~~~\label{FormO2}\end{eqnarray} Here $|s\rangle$ is an $n$-particle on-shell state, and each particle in $|s\rangle$ is on-shell with momentum satisfying $p_i^2=0$, while the operator, carrying momentum $q=\sum_{i=1}^np_i$, is off-shell. The simplest example is given by taking $|s\rangle=|\phi^{I}(p_1)\phi^{J}(p_2)\rangle$, i.e., the Sudakov form factor, which is simply\footnote{With the coupling constant and the delta function of momentum conservation stripped off here and from now on for simplicity.} \begin{eqnarray} \vev{\phi^{I}(p_1)\phi^{J}(p_2)|\mathop{\rm Tr}(\phi^I\phi^J)(0)|0}=1~.~~~\nonumber\end{eqnarray} A more complicated example is given by taking the on-shell state to consist of two scalars and $(n-2)$ gluons. Depending on the helicities of the gluons, it defines the MHV form factor, the NMHV form factor and so on. In order to compute the form factor (\ref{FormO2}) as the boundary contribution of a certain amplitude under a BCFW shift, we need to relate the operator $\mathcal{O}_2$ to a certain boundary operator. This can be done by constructing a new Lagrangian $L_{\mathcal{O}_2}$, adding an extra double trace term $\Delta L$ to the $\mathcal{N}=4$ Lagrangian: \begin{eqnarray} L_{\mathcal{O}_2}=L_{{\tiny\mbox{SYM}}}-{\kappa\over 4N}\mathop{\rm Tr}(\phi^{I}\phi^{J})\mathop{\rm Tr}(\phi^{K}\phi^{L})~,~~~\end{eqnarray} where $\kappa$ is the coupling constant. Since we are dealing with real scalars, there is no need to add the corresponding complex conjugate term.
This new term provides a four-scalar vertex, which equals $i\kappa$, as shown in Figure (\ref{FourScalar}). \begin{figure} \centering \includegraphics[width=6in]{FourScalar}\\ \caption{(a) The four-scalar vertex of the ${\kappa\over 4N}\mathop{\rm Tr}(\phi^{I}\phi^{J})\mathop{\rm Tr}(\phi^{K}\phi^{L})$ term. (b) The double-line notation of the four-scalar vertex, showing the possible trace structures.}\label{FourScalar} \end{figure} If we split the two scalars into an ordinary part and a hard part, $\phi^{Ia}\to \phi^{Ia}+\phi^{\Lambda Ia}$ and $\phi^{Jb}\to \phi^{Jb}+\phi^{\Lambda Jb}$ (the hard part $\phi^{\Lambda}$ corresponds to the large $z$ part), then the quadratic term $\phi^{\Lambda Ia}\phi^{\Lambda Jb}$ of the $L_{{\tiny\mbox{SYM}}}$ part can be read off from the result in Appendix B of \cite{Jin:2015pua} by setting $A=(A_{\mu},\phi^I)$, which gives \begin{eqnarray} 2g^2N\delta^{IJ}\mathop{\rm Tr}(A\cdot A+\phi\cdot\phi)~.~~~\label{variationSYM}\end{eqnarray} The quadratic term $\phi^{\Lambda Ia}\phi^{\Lambda Jb}$ of the $\Delta L$ part is simply (at leading order in $N$) \begin{eqnarray} {N\over 2}\kappa\mathop{\rm Tr}(\phi^K\phi^L)~.~~~\end{eqnarray} Thus the boundary operator under the two-scalar shift is \begin{eqnarray} \mathcal{O}^{\langle\phi^{Ia}|\phi^{Jb}]}=2g^2N\delta^{IJ}\mathop{\rm Tr}(A\cdot A+\phi\cdot\phi)+{N\over 2}\kappa\mathop{\rm Tr}(\phi^K\phi^L)~.~~~\label{O2boundaryO}\end{eqnarray} Notice that the traceless part (when $I\neq J$) of the boundary operator (\ref{O2boundaryO}) is proportional to the operator $\mathcal{O}_2$. This means that if the two shifted scalars are not of the same type, i.e., $I\neq J$, the corresponding boundary contribution $B^{\langle\phi^{Ia}|\phi^{Jb}]}$ of the amplitude defined by the Lagrangian $L_{\mathcal{O}_2}$ is identical to the form factor of $\mathcal{O}_2=\mathop{\rm Tr}(\phi^K\phi^L)$, up to an overall factor which can be fixed by hand.
More explicitly, let us consider the color-ordered form factor $\vev{1,2,\ldots, n|\mathcal{O}_2|0}$, where $i$ denotes an arbitrary field. It is dressed with a single trace structure $\mathop{\rm Tr}(t^1t^2\cdots t^n)\mathcal{O}_2$. On the amplitude side, $\mathcal{O}_2$ is generated from the double trace term $\Delta L$, and the corresponding trace structure of the color-ordered amplitude is $\mathop{\rm Tr}(t^{1}t^{2}\cdots t^{n})\mathop{\rm Tr}(t^{n+1}t^{n+2})$. We denote the amplitude of double trace structure as $A_{n;2}(1,2,\ldots, n; \phi_{n+1},\phi_{n+2})$. It only gets contributions from the Feynman diagrams where $\phi_{n+1},\phi_{n+2}$ are attached to the sole four-scalar vertex of $\Delta L$. Then the form factor $\vev{1,2,\ldots, n|\mathcal{O}_2|0}$ is just the boundary contribution of $A_{n;2}(1,2,\ldots, n; \phi_{n+1},\phi_{n+2})$ under the BCFW shifting of the two scalars $\phi_{n+1},\phi_{n+2}$! As a simple illustration, let us consider the four-point scalar amplitude $A_{2;2}(\phi_1^K,\phi^L_2;\phi^I_3,\phi^J_4)$. In this case, the only possible contributing diagram is the four-scalar vertex defined by $\Delta L$, and we can directly work it out as $A_{2;2}(\phi_1,\phi_2;\phi_3,\phi_4)=i\kappa$. After appropriate normalization, it can be set to 1. Since it has no dependence on any external momenta, after the momentum shifting \begin{eqnarray} | 3\rangle\to |3\rangle-z|4\rangle~~~,~~~|4]\to |4]+z|3]~,~~~\end{eqnarray} the amplitude still remains the same, while the boundary operator is $\mathop{\rm Tr}(\phi^K\phi^L)$. There are no pole terms in $z$, and the zeroth-order term in $z$ is $B^{\langle \phi^I_3|\phi^J_4]}(\phi^K_1,\phi^L_2;\phi^I_{\widehat{3}},\phi^J_{\widehat{4}})=1$. Thus we confirm the tree-level Sudakov form factor \begin{eqnarray} \vev{\phi_1^K,\phi_2^L|\mathop{\rm Tr}(\phi^K\phi^L)|0}=B^{\langle 3|4]}(\phi_1^K,\phi_2^L;\phi^I_{\widehat{3}},\phi^J_{\widehat{4}})=1~.~~~\end{eqnarray} Now we have three different ways of studying form factors.
First, as stated in \cite{Brandhuber:2010ad}, the form factor obeys a BCFW recursion relation similar to that of the amplitude. This enables us to compute a form factor recursively from lower-point ones. Second, we can compute the corresponding amplitude. Once it is obtained, we can take the BCFW shifting $\langle \phi_{n+1}|\phi_{n+2}]$ and extract the boundary contribution $B^{\spab{\phi_{n+1}|\phi_{n+2}}}$, which equals the corresponding form factor after identification. Third, as stated in \cite{Jin:2014qya}, the boundary contribution also obeys a BCFW-like recursion relation similar to that of the amplitude. We can compute the boundary contribution recursively from lower-point boundary contributions, and once it is obtained, we can work out the form factor after identification. In the following subsection, we will take the MHV form factor of the operator $\mathcal{O}_2$ as an example, to illustrate these three approaches. \subsection{MHV case} The $n$-point color-ordered MHV form factor of the operator $\mathcal{O}_2$ is given by \begin{eqnarray} \mathcal{F}^{{\tiny \mbox{MHV}}}_{\mathcal{O}_2,n}(\{g^+\},\phi_i,\phi_j;q)=-{\vev{i~j}^2\over \vev{1~2}\vev{2~3}\cdots\vev{n~1}}~,~~~\label{MHVformO2}\end{eqnarray} where $\mathcal{F}^{{\tiny \mbox{MHV}}}_{\mathcal{O}_2,n}(\{g^+\},\phi_i,\phi_j;q)$ denotes $$\mathcal{F}_{\mathcal{O}_2,n}^{{\tiny \mbox{MHV}}}(g_1^+,\ldots,g_{i-1}^+,\phi_i,g_{i+1}^+,\ldots, g_{j-1}^+,\phi_j,g_{j+1}^+,\ldots, g_n^+;q)~.$$ \subsubsection*{BCFW recursion relation of form factor} The result (\ref{MHVformO2}) has been proven in paper \cite{Brandhuber:2010ad}\footnote{Note that we have introduced an overall minus sign in the expression (\ref{MHVformO2}), so that the Sudakov form factor is defined to be $\mathcal{F}_{\mathcal{O}_2,2}(\phi_1,\phi_2;q)=1$.} by the BCFW recursion relation of the form factor.
As stated therein, after taking the BCFW shifting of two momenta $p_{i_1},p_{i_2}$, the form factor can be computed as a sum of products of lower-point form factors and lower-point amplitudes, as long as the large-$z$ behavior $\mathcal{F}(z)|_{z\to \infty}\to 0$ holds under such a deformation. The $n$ external legs are split into two parts, with $\widehat{p}_{i_1}, \widehat{p}_{i_2}$ in each part separately. The operator, since it is a color-singlet, can be inserted into either part. So it is possible to build up an $n$-point form factor recursively from three-point amplitudes and three-point form factors. Since this method has already been described in \cite{Brandhuber:2010ad}, we will not repeat it here. \subsubsection*{BCFW recursion relation of amplitude} Instead of computing the form factor directly, we can first compute the corresponding $(n+2)$-point amplitude \begin{eqnarray} A_{n;2}(g_1^+,\ldots, g_{i-1}^+,\phi_i,g_{i+1}^+,\ldots,g_{j-1}^+, \phi_j,g_{j+1}^+,\ldots, g_n^+;\phi_{n+1},\phi_{n+2})~.~~~\label{An2}\end{eqnarray} This amplitude can be computed via the BCFW recursion relation. If we choose one shifted momentum to be that of a gluon, $A_{n;2}(z)$ vanishes as $z\to \infty$, i.e., there is no boundary contribution. So we can take a $\spab{g^+|\phi}$-shifting in the computation. The four-point amplitude is trivially $A_{2;2}(\phi_1,\phi_2;\phi_3,\phi_4)=1$. To compute the five-point amplitude $A_{3;2}(\phi_1,\phi_2,g_3^+;\phi_4,\phi_5)$, we can take the $\gb{g^+_3|\phi_1}$-shifting. There is only one contributing term as shown in Figure (\ref{An2proof}.a), which is given by \begin{figure} \centering \includegraphics[width=6in]{An2proof}\\ \caption{(a) is the contributing diagram for $A_{3;2}(\phi_1,\phi_2,g_3^+;\phi_4,\phi_5)$.
(b)(c) are the contributing diagrams for general $A_{n;2}$ when $j\neq i+2$, while (b)(d) are the contributing diagrams for $A_{n;2}$ when $j=i+2$.}\label{An2proof} \end{figure} \begin{eqnarray} A_{3;2}(\phi_1,\phi_2,g_3^+;\phi_4,\phi_5)&=&A_{2;2}({\phi}_{\widehat{1}},{\phi}_{\widehat{P}};\phi_4,\phi_5){1\over P_{23}^2}A_3({\phi}_{-\widehat{P}},\phi_2,g^+_{\widehat{3}})\nonumber\\ &=&-1\times{1\over P_{23}^2}\times {\bvev{2~3}[3~\widehat{P}]\over [\widehat{P}~2]}=-{\spaa{1~2}^2\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~,~~~\end{eqnarray} where $\widehat{P}=p_2+p_3-z|1\rangle|3]$. Similarly, for the general amplitude $A_{n;2}$, we can take the $\spab{g^+_{i+1}|\phi_{i}}$-shifting\footnote{Because of cyclic invariance, we can always do this.}. If $j\neq (i+2)$, we need to consider the two contributing terms shown in Figure (\ref{An2proof}.b) and (\ref{An2proof}.c), while if $j=(i+2)$, we need to consider the two contributing terms shown in Figure (\ref{An2proof}.b) and (\ref{An2proof}.d). In either case, the contribution of diagram (\ref{An2proof}.b) vanishes under the $\gb{g^+_{i+1}|\phi_i}$-shifting. So we only need to compute the contribution of diagram (\ref{An2proof}.c) or (\ref{An2proof}.d).
Taking $j\neq (i+2)$ as an example, we have \begin{eqnarray} &&A_{n;2}(g_1^+,\ldots, \phi_i,\ldots, \phi_j,\ldots, g_n^+;\phi_{n+1},\phi_{n+2})\\ &=&A_{n-1;2}(g^+_{i+3},\ldots, \phi_j,\ldots, g^+_n,g_1^+,\ldots, \phi_{\widehat{i}}, g^+_{\widehat{P}};\phi_{n+1},\phi_{n+2}){1\over P^2_{i+1,i+2}}A_3(g^-_{-\widehat{P}},g^+_{\widehat{i+1}},g^+_{i+2})~.~~~~\nonumber \end{eqnarray} Assuming that \begin{eqnarray} A_{n;2}(\{g^+\},\phi_i,\phi_j;\phi_{n+1},\phi_{n+2})=-{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~~~~\label{an2result}\end{eqnarray} holds for $A_{n-1;2}$, then \begin{eqnarray} &&A_{n;2}(g_1^+,\ldots, \phi_i,\ldots, \phi_j,\ldots, g_n^+;\phi_{n+1},\phi_{n+2})\\ &=&-{\spaa{i~j}^2\over \spaa{1~2}\cdots \spaa{i-1,i}\spaa{i~\widehat{P}}\spaa{\widehat{P},i+3}\spaa{i+3,i+4}\cdots \spaa{n~1}}{1\over P^2_{i+1,i+2}}{\spbb{i+1,i+2}^3\over \spbb{\widehat{P},i+1}\spbb{i+2,\widehat{P}}}\nonumber\\ &=&-{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\nonumber \end{eqnarray} where \begin{eqnarray} \widehat{P}=p_{i+1}+p_{i+2}-z_{i+1,i+2}|i\rangle |i+1]~~,~~z_{i+1,i+2}={\spaa{i+1,i+2}\over \spaa{i,i+2}}~.~~~\end{eqnarray} A similar computation shows that (\ref{an2result}) also holds in the $j=i+2$ case, hence (\ref{an2result}) is true for all $n$. Thus we have proven the result (\ref{an2result}) by the BCFW recursion relation of the amplitude. As discussed, the $\spab{\phi_{n+1}|\phi_{n+2}}$-shifting generates the boundary operator $\mathcal{O}_2$, and the corresponding boundary contribution is identical to the form factor of the operator $\mathcal{O}_2$.
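As a small check of the recursion step above, the shift value $z_{i+1,i+2}$ is fixed by the on-shell condition $\widehat{P}^2=0$. Since the shift term drops out of $\langle i|\widehat{P}$ (because $\spaa{i~i}=0$), \begin{eqnarray} \widehat{P}^2=(p_{i+1}+p_{i+2})^2-z\,\spaa{i,i+2}\spbb{i+2,i+1}=\spaa{i+1,i+2}\spbb{i+2,i+1}-z\,\spaa{i,i+2}\spbb{i+2,i+1}~,~~~\nonumber\end{eqnarray} which vanishes precisely at $z=z_{i+1,i+2}=\spaa{i+1,i+2}/\spaa{i,i+2}$, as quoted above.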
Here, $A_{n;2}$ does not depend on the momenta $p_{n+1},p_{n+2}$, thus \begin{eqnarray} B^{\spab{\phi_{n+1}|\phi_{n+2}}}(\{g^+\},\phi_i,\phi_j;\phi_{\widehat{n+1}},\phi_{\widehat{n+2}})=-{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} and correspondingly \begin{eqnarray} \mathcal{F}^{{\tiny \mbox{MHV}}}_{\mathcal{O}_2,n}(\{g^+\},\phi_i,\phi_j;q)=B^{\spab{\phi_{n+1}|\phi_{n+2}}}=-{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} which agrees with the result given by the BCFW recursion relation of the form factor. \subsubsection*{Recursion relation of boundary contribution} We can also compute the boundary contribution directly by a BCFW-like recursion relation without knowing the explicit expression of the amplitude, as shown in paper \cite{Jin:2014qya}. \begin{figure} \centering \includegraphics[width=5.5in]{Bn2proof}\\ \caption{(a) Feynman diagram for the boundary contribution $B_{2;2}^{\spab{\phi_3|\phi_4}}(\phi_1,\phi_2;\phi_{\widehat{3}},\phi_{\widehat{4}})$. (b) Feynman diagrams for the boundary contribution $B_{3;2}^{\spab{\phi_4|\phi_5}}(\phi_1,\phi_2,g_3^+;\phi_{\widehat{4}},\phi_{\widehat{5}})$.}\label{Bn2proof} \end{figure} The boundary contributions of the four- and five-point amplitudes can be computed directly from Feynman diagrams. For the four-point case, there is only one diagram, i.e., the four-scalar vertex, as shown in Figure (\ref{Bn2proof}.a), and $B^{\spab{\phi_3|\phi_4}}_{2;2}(\phi_1,\phi_2;\phi_{\widehat{3}},\phi_{\widehat{4}})=1$. For the five-point case, under the $\spab{\phi_4|\phi_5}$-shifting, only those Feynman diagrams in which $\widehat{p}_{4},\widehat{p}_{5}$ are attached to the same four-scalar vertex contribute to the boundary contribution.
There are in total two diagrams as shown in Figure (\ref{Bn2proof}.b), which give \begin{eqnarray} B_{3;2}^{\spab{\phi_4|\phi_5}}(\phi_1,\phi_2,g^+_3;\phi_{\widehat{4}},\phi_{\widehat{5}})&=&-{(p_2-P_{23})^{\mu}\epsilon_{\mu}^{+}(p_3)\over P_{23}^2}+ {(p_1-P_{13})^{\mu}\epsilon_{\mu}^{+}(p_3)\over P_{13}^2}\nonumber\\ &=&-{\spaa{1~2}^2\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~,~~~\end{eqnarray} where the polarization vector $\epsilon_{\mu}^{\pm}(p)$ is defined to be \begin{eqnarray} \epsilon_{\mu}^+(p)={\spab{r|\gamma_\mu|p}\over \sqrt{2}\spaa{r~p}}~~,~~\epsilon_{\mu}^-(p)={\spab{p|\gamma_\mu|r}\over \sqrt{2}\spbb{p~r}}~,~~~\end{eqnarray} with $r$ an arbitrary reference spinor. From these lower-point results, it is not hard to guess that \begin{eqnarray} B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}(\{g^+\},\phi_i,\phi_j;\phi_{\widehat{n+1}},\phi_{\widehat{n+2}})=-{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~\label{bn2result}\end{eqnarray} This result can be proven recursively by taking another shifting $\spab{i_1|\phi_{n+2}}$ on $B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}$, where $p_{i_1}$ is a momentum other than $p_{n+1},p_{n+2}$. If there is no additional boundary contribution under this second shifting, then $B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}$ can be fully determined by the pole terms under the $\spab{i_{1}|\phi_{n+2}}$-shifting. Otherwise we should take a third momentum shifting and so on, until we have detected the complete boundary contribution. Fortunately, if $p_{i_1}$ is the momentum of a gluon, a second shifting $\spab{g_{i_1}^+|\phi_{n+2}}$ is sufficient to detect all the contributions \cite{Jin:2014qya}. For a general boundary contribution $B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n;2}$, we can take the $\spab{g_1^+|\phi_{n+2}}$-shifting. It splits the boundary contribution into a sub-amplitude times a lower-point boundary contribution, and only those terms with three-point amplitudes are non-vanishing.
Depending on the location of $\phi_i,\phi_j$, the contributing terms are different. Assuming that (\ref{bn2result}) is true for $B_{n-1;2}$, if $i,j\neq 2,n$, we have \begin{eqnarray} &&B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}(\{g^+\},\phi_i,\phi_j;\phi_{\widehat{n+1}},\phi_{\widehat{n+2}})\\ &=&A_3(g^+_n,g^+_{\widehat{\widehat{1}}},g^-_{\widehat{\widehat{P}}}){1\over P_{1n}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(g^+_{-\widehat{\widehat{P}}},g^+_2,\ldots, \phi_i,\ldots, \phi_j,\ldots, g^+_{n-1};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})\nonumber\\ &&+A_3(g^+_{\widehat{\widehat{1}}},g^+_2,g^-_{\widehat{\widehat{P}}}){1\over P_{12}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(g^+_{-\widehat{\widehat{P}}},g^+_3,\ldots, \phi_i,\ldots, \phi_j,\ldots, g^+_{n};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})~,~~~\nonumber\end{eqnarray} while if $i=2,j\neq n$, we have \begin{eqnarray} &&B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}(\{g^+\},\phi_2,\phi_j;\phi_{\widehat{n+1}},\phi_{\widehat{n+2}})\\ &=&A_3(g^+_n,g^+_{\widehat{\widehat{1}}},g^-_{\widehat{\widehat{P}}}){1\over P_{1n}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(g^+_{-\widehat{\widehat{P}}},\phi_2,g^+_3,\ldots, \phi_j,\ldots, g^+_{n-1};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})\nonumber\\ &&~~~~~~+A_3(g^+_{\widehat{\widehat{1}}},\phi_2,\phi_{\widehat{\widehat{P}}}){1\over P_{12}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(\phi_{-\widehat{\widehat{P}}},g^+_3,\ldots, \phi_j,\ldots, g^+_{n};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})~,~~~\nonumber\end{eqnarray} and if $i=2,j=n$, we have \begin{eqnarray} &&B_{n;2}^{\spab{\phi_{n+1}|\phi_{n+2}}}(\{g^+\},\phi_2,\phi_n;\phi_{\widehat{n+1}},\phi_{\widehat{n+2}})\\ &=&A_3(\phi_n,g^+_{\widehat{\widehat{1}}},\phi_{\widehat{\widehat{P}}}){1\over P_{1n}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(\phi_{-\widehat{\widehat{P}}},\phi_2,g^+_3,\ldots, g^+_{n-1};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})\nonumber\\
&&~~~~~~+A_3(g^+_{\widehat{\widehat{1}}},\phi_2,\phi_{\widehat{\widehat{P}}}){1\over P_{12}^2}B^{\spab{\phi_{n+1}|\phi_{n+2}}}_{n-1;2}(\phi_{-\widehat{\widehat{P}}},g^+_3,\ldots, g^+_{n-1}, \phi_{n};\phi_{\widehat{n+1}},\phi_{\widehat{\widehat{n+2}}})~.~~~\nonumber\end{eqnarray} All of them lead to the result (\ref{bn2result}), which completes the proof. Again, with the result of the boundary contribution, we can work out the corresponding form factor directly. We have shown that the BCFW recursion relations of the form factor, the amplitude and the boundary contribution all lead to the same conclusion. This is not limited to the MHV case, since the connection between the form factor and the boundary contribution of the amplitude is universal and does not depend on the external states. In fact, for any form factor with an $n$-particle on-shell state $|s\rangle$, we can instead compute the corresponding amplitude $A_{n;2}(s;\phi_{n+1},\phi_{n+2})$ defined by the Lagrangian $L_{\mathcal{O}_2}$, and extract the boundary contribution under the $\spab{\phi_{n+1}|\phi_{n+2}}$-shifting. This boundary contribution is identical to the form factor of $\mathcal{O}_2$. For example, in \cite{Brandhuber:2011tv}, the authors showed that the split-helicity form factor shares a ``zigzag diagram'' construction similar to that of the split-helicity amplitude given in \cite{Britto:2005dg}. It is now easy to understand this, since the form factor is equivalent to the boundary contribution of the amplitude, and it naturally inherits the ``zigzag'' construction with minor modification. The tree amplitude $A_{n;2}(1,\ldots, n;n+1,n+2)$ associated with the double trace structure is cyclically invariant inside the legs $\{1,2,\ldots, n\}$ and $\{n+1,n+2\}$, so, not surprisingly, the color-ordered form factor is also cyclically invariant in its $n$ legs.
The trace structure $\mathop{\rm Tr}(t^{n+1}t^{n+2})$ is completely isolated from the other color structure, and the latter is constructed only from the structure constants $f^{abc}$. Thus for the amplitudes $A_{n;2}$, we also have the Kleiss-Kuijf (KK) relation \cite{Kleiss:1988ne} among permutations of the legs $\{1,2,\ldots, n\}$, \begin{eqnarray} A_{n;2}(1,\{\alpha\},n,\{\beta\};\phi_{n+1},\phi_{n+2})=(-)^{n_{\beta}}\sum_{\sigma\in OP\{\alpha\}\cup\{\beta^{T}\}}A_{n;2}(1,\sigma,n;\phi_{n+1},\phi_{n+2})~,~~~\end{eqnarray} where $n_{\beta}$ is the length of the set $\beta$, $\beta^{T}$ is the reverse of the set $\beta$, and $OP$ is the ordered permutation, containing all the possible permutations between the two sets while keeping each set ordered. This relation can be similarly extended to form factors. In particular, for the operator $\mathcal{O}_2$, we can relate all form factors to those with two adjacent scalars, \begin{eqnarray} \mathcal{F}_{\mathcal{O}_2,n}(\phi_1,\{\alpha\},\phi_n,\{\beta\};q)=(-)^{n_{\beta}}\sum_{\sigma\in OP\{\alpha\}\cup\{\beta^{T}\}}\mathcal{F}_{\mathcal{O}_2,n}(\phi_n,\phi_1,\sigma;q)~.~~~\end{eqnarray} \subsection{Form factor of operator $\mathcal{O}_k\equiv\mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_k})$} Let us further consider a more general operator $\mathcal{O}_k\equiv \mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_k})$ and the form factor $\mathcal{F}_{\mathcal{O}_k,n}(s;q)=\spaa{s|\mathcal{O}_k(0)|0}$. In order to generate the operator $\mathcal{O}_k$ under a certain BCFW shifting, we need to add an additional Lagrangian term \begin{eqnarray} \Delta L={\kappa\over (2k) N}\mathop{\rm Tr}(\phi^I\phi^J)\mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_{k}})~~~~\label{FormOk}\end{eqnarray} to construct a new Lagrangian $L_{\mathcal{O}_{k}}=L_{{\tiny\mbox{SYM}}}+\Delta L$.
Then the boundary contribution of the corresponding amplitude $A_{n;2}(s;\phi_{n+1},\phi_{n+2})$ under the $\spab{\phi_{n+1}|\phi_{n+2}}$-shifting is identical to the form factor $\mathcal{F}_{\mathcal{O}_{k},n}(s;q)$. To see that the boundary operator $\mathcal{O}^{\spab{\phi^{I_a}|\phi^{J_b}}}$ is indeed the operator $\mathcal{O}_k$, we first compute the variation of the Lagrangian $L_{\mathcal{O}_k}$ from the left with respect to $\phi^{I_a}$, and then the variation of ${\delta L_{\mathcal{O}_k}\over \delta \phi^{I_a}}$ from the right with respect to $\phi^{J_b}$, which we shall denote as ${\overleftarrow{\delta}\over \delta \phi^{J_b}}$ to avoid ambiguity. The variation of the $L_{{\tiny\mbox{SYM}}}$ part is given in (\ref{variationSYM}), while for the $\Delta L$ part, we have \begin{eqnarray} {\delta \Delta L\over\delta \phi^{I_a} }&=&{\kappa\over kN}\mathop{\rm Tr}(\phi^Jt^a)\mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_{k}})\nonumber\\ &&+{\kappa\over 2N}\mathop{\rm Tr}(\phi^{N_1}\phi^{N_2})\mathop{\rm Tr}(t^a \phi^{M_1}\phi^{M_2}\cdots \phi^{M_{k-1}})~,~~~\end{eqnarray} and \begin{eqnarray} {\overleftarrow{\delta}\over \delta \phi^{J_b}}\left({\delta \Delta L\over\delta \phi^{I_a} }\right)&=&{N^2-1\over 2kN}\kappa\mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_k})+{\kappa\over 2N}\phi^{Ma}\mathop{\rm Tr}(t^a\phi^{M_1}\phi^{M_2}\cdots \phi^{M_{k-1}})\nonumber\\ &&+\sum_{i}{\kappa\over 2N}\mathop{\rm Tr}(\phi^{N_1}\phi^{N_2})\mathop{\rm Tr}(t^a\phi^{M_1}\cdots \phi^{M_i}t^a\phi^{M_{i+1}}\cdots\phi^{M_{k-2}})~.~~~\nonumber\end{eqnarray} The first term is of order $O(N)$, with a single trace proportional to $\mathop{\rm Tr}(\phi^k)$; the second term is of order $O({1\over N})$; and the third term is also of order $O({1\over N})$, with moreover a triple trace structure.
Thus at the leading $N$ order, the boundary operator of $L_{\mathcal{O}_k}$ is \begin{eqnarray} \mathcal{O}^{\langle\phi^{Ia}|\phi^{Jb}]}=2g^2N\delta^{IJ}\mathop{\rm Tr}(A\cdot A+\phi^K\phi^K)+{N\over 2k}\kappa\mathop{\rm Tr}(\phi^{M_1}\phi^{M_2}\cdots \phi^{M_k})~.~~~\label{OkboundaryO}\end{eqnarray} Similar to the $\mathcal{O}_2$ case, the traceless part of (\ref{OkboundaryO}) is proportional to the operator $\mathcal{O}_k$. The $\Delta L$ term introduces a $(k+2)$-scalar vertex, but apart from this the computation is the same as in the $\mathcal{O}_2$ case. We can compute the amplitude $A_{n;2}(s;\phi_{n+1},\phi_{n+2})$, take the $\spab{\phi_{n+1}|\phi_{n+2}}$-shifting and extract the boundary contribution. Then transforming it to the form factor is almost trivial. For instance, $A_{k;2}(\phi_1,\ldots,\phi_k;\phi_{k+1},\phi_{k+2})=1$, thus $\mathcal{F}_{\mathcal{O}_k,k}(\phi_1,\ldots,\phi_k;q)=1$. It is also easy to conclude that, since the Feynman diagrams of the amplitude $$A_{n;2}(\phi_1,\cdots,\phi_{k},g_{k+1}^+,\ldots, g_{n}^+;\phi_{n+1},\phi_{n+2})$$ defined by $L_{\mathcal{O}_k}$ are in one-to-one correspondence with the Feynman diagrams of the amplitude $$A_{n-(k-2);2}(\phi_1,\phi_{k},g_{k+1}^+,\ldots, g_n^+;\phi_{n+1},\phi_{n+2})$$ defined by $L_{\mathcal{O}_2}$, obtained by just replacing the $(k+2)$-scalar vertex with the four-scalar vertex, we have \begin{eqnarray} A^{\mathcal{O}_k}_{n;2}(\phi_1,\ldots,\phi_{k},g_{k+1}^+,\ldots,g_n^+;\phi_{n+1},\phi_{n+2})&=&A^{\mathcal{O}_2}_{n-(k-2);2}(\phi_1,\phi_{k},g_{k+1}^+,\ldots,g_n^+;\phi_{n+1},\phi_{n+2})\nonumber\\ &=&-{\spaa{1~k}\over\spaa{k,k+1}\spaa{k+1,k+2}\cdots\spaa{n~1}}~.~~~\end{eqnarray} Thus we get \begin{eqnarray} \mathcal{F}_{\mathcal{O}_k,n}(\phi_1,\ldots,\phi_{k},g_{k+1}^+,\ldots,g_n^+;q)&=&-{\spaa{1~k}\over\spaa{k,k+1}\spaa{k+1,k+2}\cdots\spaa{n~1}}~.~~~\end{eqnarray} \section{Form factor of composite operators} \label{secComposite} Now we move to the computation of form factors for the composite operators introduced in \S \ref{secOperator}.
For convenience we will use the complex scalars $\phi^{AB},\bar{\phi}_{AB}$ instead of the real scalars $\phi^I$ in this section. We will explain the construction of the Lagrangian which generates the corresponding operators, and compute the MHV form factors through amplitudes of double trace structure. \subsection{The spin-0 operators} There are three operators \begin{eqnarray} &&\mathcal{O}^{[0]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\phi^{AB}\phi^{CD})~~~,~~~\mathcal{O}^{[0]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\psi^{A\gamma}\psi^B_{\gamma})~~~,~~~\mathcal{O}^{[0]}_{\tiny \mbox{III}}=\mathop{\rm Tr}(F^{\alpha\beta}F_{\alpha\beta})~,~~~\end{eqnarray} together with their complex conjugate partners $\bar{\mathcal{O}}_{\tiny \mbox{I}}^{[0]}$, $\bar{\mathcal{O}}_{\tiny \mbox{II}}^{[0]}$ and $\bar{\mathcal{O}}_{\tiny \mbox{III}}^{[0]}$. For these operators, in order to construct Lorentz invariant double trace Lagrangian terms $\Delta L$, we need to multiply them by another spin-0 trace term. Since shifting a gluon is always more complicated than shifting a fermion, and shifting a fermion is more complicated than shifting a scalar, we would like to choose the spin-0 trace term to be a trace of two scalars, as already done in the $\mathcal{O}_2$ case.
For the operator $\mathcal{O}_{\tiny \mbox{II}}^{[0]}$, we could construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}^{[0]}_{\tiny \mbox{II}}}=L_{{\tiny\mbox{SYM}}}+{\kappa \over N}\mathop{\rm Tr}(\phi^{A'B'}\phi^{C'D'})\mathop{\rm Tr}(\psi^{A\gamma}\psi^B_{\gamma})+{\bar{\kappa}\over N} \mathop{\rm Tr}(\bar{\phi}_{A'B'}\bar{\phi}_{C'D'})\mathop{\rm Tr}(\bar{\psi}^{\dot{\gamma}}_A\bar{\psi}_{B\dot{\gamma}})~.~~~\end{eqnarray} The momentum shifting of the two scalars $\phi_{n+1},\phi_{n+2}$ will generate the boundary operator $\mathcal{O}^{\spab{\phi_{n+1}|\phi_{n+2}}}=\mathop{\rm Tr}(\bar{\psi}^{\dot{\gamma}}_A\bar{\psi}_{B\dot{\gamma}})$, while the shifting of the two scalars $\bar{\phi}_{n+1},\bar{\phi}_{n+2}$ will generate the boundary operator $\mathcal{O}^{\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}}=\mathop{\rm Tr}(\psi^{A\gamma}\psi^B_{\gamma})$. Thus the form factor $$\mathcal{F}_{\mathcal{O}^{[0]}_{\tiny \mbox{II}},n}(s;q)=\spaa{s|\mathcal{O}^{[0]}_{\tiny \mbox{II}}|0}$$ is identical to the boundary contribution of the amplitude $A_{n;2}(s;\bar{\phi}_{n+1},\bar{\phi}_{n+2})$ defined by $L_{\mathcal{O}^{[0]}_{\tiny \mbox{II}}}$ under the $\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}$-shifting. This amplitude can be computed by Feynman diagrams or by the BCFW recursion relation method. The $\Delta L$ Lagrangian term introduces $\phi$-$\phi$-$\psi$-$\psi$ and $\bar{\phi}$-$\bar{\phi}$-$\bar{\psi}$-$\bar{\psi}$ vertices in the Feynman diagrams, and it defines the four-point amplitude $A_{2;2}(\bar{\psi}_1,\bar{\psi}_2;\bar{\phi}_3,\bar{\phi}_4)=\spaa{1~2}$ as well as $A_{2;2}(\psi_1,\psi_2;\phi_3,\phi_4)=\spbb{1~2}$. Thus we immediately know that the boundary contribution is $B^{\spab{\bar{\phi}_3|\bar{\phi}_4}}(\bar{\psi}_1,\bar{\psi}_2;\bar{\phi}_{\widehat{3}},\bar{\phi}_{\widehat{4}})=\spaa{1~2}$, and the form factor is $\mathcal{F}_{\mathcal{O}^{[0]}_{\tiny \mbox{II}},2}(\bar{\psi}_1,\bar{\psi}_2;q)=\spaa{1~2}$.
We can also compute the five-point amplitude $A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^+;\bar{\phi}_4,\bar{\phi}_5)$; the contributing Feynman diagrams are similar to Figure (\ref{Bn2proof}.b), but now with $\bar{\psi}_1,\bar{\psi}_2$ instead of $\bar{\phi}_1,\bar{\phi}_2$. It is given by \begin{eqnarray} A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^+;\bar{\phi}_4,\bar{\phi}_5)&=&{\spaa{1|P_{23}|\gamma^{\mu}|2}\over s_{23}}\epsilon^{+}_{\mu}(p_3)+{\spaa{2|P_{13}|\gamma^{\mu}|1}\over s_{13}}\epsilon^{+}_{\mu}(p_3)=-{\spaa{1~2}^2\over \spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} Generalizing this result to the $(n+2)$-point double trace amplitude, we have \begin{eqnarray} A_{n;2}(\{g^+\},\bar{\psi}_i,\bar{\psi}_j;\bar{\phi}_{n+1},\bar{\phi}_{n+2})=-{\spaa{i~j}^3\over \spaa{1~2}\spaa{2~3}\spaa{3~4}\cdots \spaa{n~1}}~.~~~\label{fermionMHVForm}\end{eqnarray} It is easy to verify the above result by the BCFW recursion relation of the amplitude, for example, by taking the $\spab{g_1^+|\bar{\psi}_i}$-shifting. Similar to the $\mathcal{O}_2$ case, only those terms with three-point sub-amplitudes have non-vanishing contributions, and after substituting the explicit results for $A_3$ and $A_{n-1;2}$, we arrive at the result (\ref{fermionMHVForm}). The boundary contribution of the amplitude (\ref{fermionMHVForm}) under the $\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}$-shifting is the same as $A_{n;2}$ itself, so we get the form factor \begin{eqnarray} \boxed{\mathcal{F}_{\mathcal{O}^{[0]}_{\tiny \mbox{II}},n}(\{g^+\},\bar{\psi}_i,\bar{\psi}_j;q)=-{\spaa{i~j}^3\over \spaa{1~2}\spaa{2~3}\spaa{3~4}\cdots \spaa{n~1}}~.~~~}\end{eqnarray} It is also interesting to consider another special set of $n$-point external states, i.e., two fermions and $(n-2)$ gluons of negative helicity.
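As a quick consistency check of the boxed expression, setting $n=2$ (no gluons), with the cyclic product in the denominator understood as $\spaa{1~2}\spaa{2~1}$ and using $\spaa{2~1}=-\spaa{1~2}$, gives \begin{eqnarray} \mathcal{F}_{\mathcal{O}^{[0]}_{\tiny \mbox{II}},2}(\bar{\psi}_1,\bar{\psi}_2;q)=-{\spaa{1~2}^3\over \spaa{1~2}\spaa{2~1}}=\spaa{1~2}~,~~~\nonumber\end{eqnarray} in agreement with the four-point amplitude $A_{2;2}(\bar{\psi}_1,\bar{\psi}_2;\bar{\phi}_3,\bar{\phi}_4)=\spaa{1~2}$ found above.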
For the five-point amplitude $A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^-;\bar{\phi}_4,\bar{\phi}_5)$, the contributing Feynman diagrams can be obtained by replacing $g_3^+$ with $g_3^-$ in the amplitude $A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^+;\bar{\phi}_4,\bar{\phi}_5)$, so we have \begin{eqnarray} A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^-;\bar{\phi}_4,\bar{\phi}_5)&=&{\spaa{1|P_{23}|\gamma^{\mu}|2}\over s_{23}}\epsilon^{-}_{\mu}(p_3)+{\spaa{2|P_{13}|\gamma^{\mu}|1}\over s_{13}}\epsilon^{-}_{\mu}(p_3)\nonumber\\ &=&{(p_4+p_5)^2\spbb{1~2}\over \spbb{1~2}\spbb{2~3}\spbb{3~1}}~.~~~\end{eqnarray} More generally, we have \begin{eqnarray} A_{n;2}(\{g^-\},\bar{\psi}_i,\bar{\psi}_j;\bar{\phi}_{n+1},\bar{\phi}_{n+2})={(p_{n+1}+p_{n+2})^2\spbb{i~j}\over \spbb{1~2}\spbb{2~3}\cdots \spbb{n~1}}~.~~~\label{fermionAllMinus}\end{eqnarray} This result can be proven recursively by the BCFW recursion relation. Assuming eqn. (\ref{fermionAllMinus}) is valid for $A_{n-1;2}$, then taking the $\spab{\bar{\phi}_{n+2}|g_n}$-shifting, we get two contributing terms\footnote{We assumed that $i,j\neq 1,n-1$, otherwise the two contributing terms are slightly different. However they lead to the same conclusion.} for $A_{n;2}$.
The first term is \begin{eqnarray} &&A_3(g^-_{\widehat{n}},g^-_1,g^+_{\widehat{P}_{1n}}){1\over P_{1n}^2}A_{n-1;2}(g^-_{-\widehat{P}_{1n}},g^-_2,\ldots, \bar{\psi}_i,\ldots,\bar{\psi}_j,\ldots,g^-_{n-1};\bar{\phi}_{n+1},\bar{\phi}_{\widehat{n+2}})\nonumber\\ &=&{\spbb{i~j}(p_{n+1}+p_{n+2})^2\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n-1,n}\spbb{n~1}}{\spbb{n+2,1}\spbb{n,n-1}\over \spbb{n-1,1}\spbb{n+2,n}}\nonumber\\ &&~~~~+{\spbb{i~j}\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n-1,n}\spbb{n~1}}{\spaa{n+1,n}\spbb{n+2,n+1}\over \spbb{n-1,1}\spbb{n+2,n}}\spbb{n~1}\spbb{n,n-1}~,~~~\end{eqnarray} while the second term is \begin{eqnarray} &&A_3(g_{n-1}^-,g^-_{\widehat{n}},g^{+}_{\widehat{P}_{n-1,n}}){1\over P_{n-1,n}^2}A_{n-1;2}(g^{-}_{-\widehat{P}_{n-1,n}},g^-_1,\ldots,\bar{\psi}_i,\ldots,\bar{\psi}_j,\ldots,g_{n-2}^-;\bar{\phi}_{n+1},\bar{\phi}_{\widehat{n+2}})\nonumber\\ &=&{\spbb{i~j}(p_{n+1}+p_{n+2})^2\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n-1,n}\spbb{n~1}}{\spbb{n-1,n+2}\spbb{n,1}\over \spbb{n-1,1}\spbb{n+2,n}}\nonumber\\ &&~~~~+{\spbb{i~j}\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n-1,n}\spbb{n~1}}{\spaa{n+1,n}\spbb{n+2,n+1}\over \spbb{n-1,1}\spbb{n+2,n}}\spbb{n~1}\spbb{n-1,n}~.~~~\end{eqnarray} Summing the above two contributions, we obtain the desired eqn. (\ref{fermionAllMinus}). Note that $q=-p_{n+1}-p_{n+2}$ shows up in the result (\ref{fermionAllMinus}), which is the momentum carried by the operator in the form factor.
The $\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}$-shifting ensures that $\widehat{p}_{n+1}+\widehat{p}_{n+2}=p_{n+1}+p_{n+2}$, thus we get the form factor \begin{eqnarray} \boxed{\mathcal{F}_{\mathcal{O}_{\tiny \mbox{II}}^{[0]},n}(\{g^-\},\bar{\psi}_i,\bar{\psi}_j;q)={q^2\spbb{i~j}\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n~1}}~.~~~}\end{eqnarray} For the operator $\mathcal{O}_{\tiny \mbox{III}}^{[0]}$, we can similarly construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}^{[0]}_{\tiny \mbox{III}}}=L_{{\tiny\mbox{SYM}}}+{\kappa \over N}\mathop{\rm Tr}(\phi^{A'B'}\phi^{C'D'})\mathop{\rm Tr}(F^{\alpha\beta}F_{\alpha\beta})+{\bar{\kappa}\over N} \mathop{\rm Tr}(\bar{\phi}_{A'B'}\bar{\phi}_{C'D'})\mathop{\rm Tr}(\bar{F}^{\dot{\alpha}\dot{\beta}}\bar{F}_{\dot{\alpha}\dot{\beta}})~.~~~\end{eqnarray} As usual, the $\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}$-shifting generates the boundary operator $\mathcal{O}^{\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}}=\mathop{\rm Tr}(F^{\alpha\beta}F_{\alpha\beta})$, while the $\Delta L$ double trace Lagrangian term introduces four-, five- and six-point vertices in the Feynman diagrams.
For computational convenience, let us take the following definition of the self-dual $F^{+}_{\mu\nu}$ and anti-self-dual $F^{-}_{\mu\nu}$ field strengths \begin{eqnarray} F_{\mu\nu}^{\pm}={1\over 2}F_{\mu\nu}\pm{1\over 4i}\epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}~~\mbox{and}~~{1\over 2}\epsilon_{\mu\nu\rho\sigma}F^{\pm\rho\sigma}=\pm F^{\pm}_{\mu\nu}~,~~~\end{eqnarray} and rewrite the Lagrangian as \begin{eqnarray} L_{\mathcal{O}^{[0]}_{\tiny \mbox{III}}}=L_{{\tiny\mbox{SYM}}}+\kappa \mathop{\rm Tr}(\phi^{A'B'}\phi^{C'D'})\mathop{\rm Tr}(F^{+\mu\nu}F^{+}_{\mu\nu})+\bar{\kappa} \mathop{\rm Tr}(\bar{\phi}_{A'B'}\bar{\phi}_{C'D'})\mathop{\rm Tr}(F^{-\mu\nu}F^{-}_{\mu\nu})~.~~~\nonumber\end{eqnarray} The off-shell Feynman rules for the four-point vertices defined by the corresponding terms inside $\mathop{\rm Tr}(\phi\phi)\mathop{\rm Tr}(F^{+}F^{+})$ or $\mathop{\rm Tr}(\bar{\phi}\bar{\phi})\mathop{\rm Tr}(F^{-}F^{-})$ of $\Delta L$ are given by \begin{eqnarray} M_{\mu\nu}^{\pm}=(p_{i_1}\cdot p_{i_2})\eta_{\mu\nu}-p_{i_1\nu}p_{i_2\mu}\pm{1\over i}\epsilon_{\mu\nu\rho\sigma}p_{i_1}^{\rho}p_{i_2}^{\sigma}~,~~~\end{eqnarray} where $p_{i_1},p_{i_2}$ are the momenta of the two gluons.
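For the contractions below, it is convenient to record the standard polarization-vector products that follow from the definitions given earlier, using the Fierz identity $\spab{a|\gamma^{\mu}|b}\spab{c|\gamma_{\mu}|d}=2\spaa{a~c}\spbb{d~b}$ (conventions assumed here; overall signs depend on them): \begin{eqnarray} \epsilon_i^+\cdot\epsilon_j^+={\spaa{r_i~r_j}\spbb{j~i}\over \spaa{r_i~i}\spaa{r_j~j}}~~,~~\epsilon_i^-\cdot\epsilon_j^-={\spaa{i~j}\spbb{r_j~r_i}\over \spbb{i~r_i}\spbb{j~r_j}}~~,~~\epsilon_i^+\cdot\epsilon_j^-={\spaa{r_i~j}\spbb{r_j~i}\over \spaa{r_i~i}\spbb{j~r_j}}~~,~~\epsilon_i^+\cdot p_j={\spaa{r_i~j}\spbb{j~i}\over \sqrt{2}\spaa{r_i~i}}~,~~~\nonumber\end{eqnarray} where $r_i$ denotes the reference spinor of leg $i$.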
In fact, $M^{+}_{\mu\nu}$ can only attach to gluons with positive helicity while $M^{-}_{\mu\nu}$ can only attach to gluons with negative helicity, since \begin{eqnarray} \epsilon_1^{+\mu}M_{\mu\nu}^+={\spbb{1|\gamma_{\nu}|p_2|1}\over \sqrt{2}}~~,~~\epsilon^{-\mu}_1M_{\mu\nu}^+=0~~\mbox{and}~~\epsilon_1^{+\mu}M_{\mu\nu}^-=0~~,~~\epsilon_1^{-\mu}M_{\mu\nu}^-={\spaa{1|\gamma_\nu|p_2|1}\over \sqrt{2}}~.~~~\nonumber\end{eqnarray} The four-point amplitudes defined by these vertices are given by \begin{eqnarray} &&A_{2;2}(g_1^-,g_2^-;\bar{\phi}_3,\bar{\phi}_4)=\epsilon_1^{-\mu}M_{\mu\nu}^-\epsilon_2^{-\nu}=\spaa{1~2}^2~~,~~A_{2;2}(g_1^+,g_2^+;\phi_3,\phi_4)=\epsilon_1^{+\mu}M_{\mu\nu}^+\epsilon_2^{+\nu}=\spbb{1~2}^2~.~~~\nonumber\end{eqnarray} In order to compute the five-point amplitude $A_{3;2}(g_1^-,g_2^-,g_3^+;\bar{\phi}_4,\bar{\phi}_5)$, we also need the Feynman rule for the five-point vertex defined by the corresponding terms inside $\mathop{\rm Tr}(\phi\phi)\mathop{\rm Tr}(F^{+}F^{+})$ or $\mathop{\rm Tr}(\bar{\phi}\bar{\phi})\mathop{\rm Tr}(F^{-}F^{-})$, which is given by \begin{eqnarray} &&V^{abc}_{\mu\nu\rho}\label{fivepointV}\\ &&={ig\over 2}f^{abc}\Big((p_1-p_2)_{\rho}\eta_{\mu\nu}+(p_2-p_3)_{\mu}\eta_{\nu\rho}+ (p_3-p_1)_\nu\eta_{\rho\mu}+i\kappa(p_1+p_2+p_3)^{\sigma}\epsilon_{\mu\nu\sigma\rho}\Big)~.~~~\nonumber\end{eqnarray} There are in total three contributing Feynman diagrams, as shown in Figure (\ref{ssgmgmgp}), and we need to sum up all three contributions. \begin{figure} \centering \includegraphics[width=6in]{ssgmgmgp}\\ \caption{Feynman diagrams for $A_{3;2}(g_1^-,g_2^-,g_3^+;\bar{\phi}_4,\bar{\phi}_5)$ defined by $L_{\mathcal{O}^{[0]}_{\tiny \mbox{III}}}$.
All external particles are out-going.}\label{ssgmgmgp} \end{figure} The first diagram gives \begin{eqnarray} (a)&=&{\spaa{1|P_{23}|\gamma_{\mu}|1}\over P_{23}^2}\Big((\epsilon_3^+\cdot\epsilon_2^-)p_2^{\mu}-(P_{23}\cdot \epsilon_3^+)\epsilon_2^{-\mu}+(p_3\cdot \epsilon_2^-)\epsilon_3^{+\mu}\Big)\nonumber\\ &=&{\spaa{r_3~2}\spaa{1~2}^2\over \spaa{2~3}\spaa{3~r_3}}+{\spaa{1~2}\spaa{r_3~1}\spbb{r_2~3}\over \spaa{r_3~3}\spbb{2~r_2}}~,~~~\end{eqnarray} where $r_1,r_2,r_3$ are the reference momenta of $\epsilon_{\mu}^-(p_1),\epsilon_{\mu}^-(p_2),\epsilon_{\mu}^+(p_3)$ (abbreviated as $\epsilon_1^-$, $\epsilon_2^-$, $\epsilon_3^+$) respectively. The second diagram gives \begin{eqnarray} (b)&=&{\spaa{2|P_{13}|\gamma_{\mu}|2}\over P_{13}^2}\Big(-(P_{13}\cdot\epsilon_1^-)\epsilon_3^{+\mu}+(p_1\cdot\epsilon_3^+)\epsilon_1^{-\mu}+(\epsilon_1^-\cdot\epsilon_3^+)p_3^\mu\Big)\nonumber\\ &=&{\spaa{1~r_3}\spaa{1~2}^2\over\spaa{3~1}\spaa{r_3~3}}+{\spaa{1~2}\spaa{r_3~2}\spbb{r_1~3}\over \spaa{r_3~3}\spbb{1~r_1}}~.~~~\end{eqnarray} The third diagram (\ref{ssgmgmgp}.c) is defined by the five-point vertex (\ref{fivepointV}); the contribution of the first three terms in the bracket of (\ref{fivepointV}) is \begin{eqnarray} (c.1)&=&{1\over 2}\Big(((p_2-p_1)\cdot \epsilon_3^+)(\epsilon_1^-\cdot\epsilon_2^-)+((p_1-p_3)\cdot \epsilon_2^-)(\epsilon_3^+\cdot\epsilon_1^-)+((p_3-p_2)\cdot \epsilon_1^-)(\epsilon_2^-\cdot\epsilon_3^+)\Big)\nonumber\\ &=&{1\over 2}\Big({\spbb{r_1~3}\spaa{1~2}\spaa{2~r_3}\over\spaa{r_3~3}\spbb{1~r_1}}-{\spbb{3~r_2}\spaa{1~2}\spaa{1~r_3}\over\spaa{r_3~3}\spbb{2~r_2}}+{\spbb{r_1~3}\spbb{r_2~3}\spaa{2~1}\over\spbb{1~r_1}\spbb{2~r_2}}\Big)~.~~~\end{eqnarray} Using \begin{eqnarray} i\epsilon_{\mu\nu\rho\sigma}p_1^{\mu}p_2^{\nu}p_3^{\rho}p_4^{\sigma}=\spaa{1~2}\spbb{2~3}\spaa{3~4}\spbb{4~1}-\spbb{1~2}\spaa{2~3}\spbb{3~4}\spaa{4~1}~,~~~\nonumber\end{eqnarray} the last term in the bracket of (\ref{fivepointV}) can be computed as \begin{eqnarray} (c.2)&=&{1\over
2}i\epsilon_{\mu\nu\sigma\rho}\epsilon_1^{-\mu}\epsilon_2^{-\nu}(p_1+p_2+p_3)^\sigma\epsilon_3^{+\rho}\nonumber\\ &=&{1\over 2}\Big({\spaa{1~2}\spaa{2~r_3}\spbb{r_1~3}\over \spbb{1~r_1}\spaa{r_3~3}}-{\spaa{1~2}\spaa{1~r_3}\spbb{3~r_2}\over \spbb{2~r_2}\spaa{r_3~3}}+{\spaa{1~2}\spbb{r_2~3}\spbb{r_1~3}\over \spbb{1~r_1}\spbb{2~r_2}}\Big)~.~~~\end{eqnarray} Summing the above contributions, we get \begin{eqnarray} A_{3;2}(g_1^-,g_2^-,g_3^+;\bar{\phi}_4,\bar{\phi}_5) =-{\spaa{1~2}^4\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} More generally, we have \begin{eqnarray} A_{n;2}(\{g^+\}, g_i^-,g_j^-;\bar{\phi}_{n+1},\bar{\phi}_{n+2})=-{\spaa{i~j}^4\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} which can be proven straightforwardly by the BCFW recursion relation. This expression takes exactly the same form as the pure-gluon $n$-point MHV amplitude of Yang-Mills theory. By taking the $\spab{\bar{\phi}_{n+1}|\bar{\phi}_{n+2}}$-shifting, we can get the form factor as \begin{eqnarray} \boxed{\mathcal{F}_{\mathcal{O}^{[0]}_{\tiny \mbox{III}},n}(\{g^+\},g_i^-,g_j^-;q)=-{\spaa{i~j}^4\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~}\end{eqnarray} Let us now consider another configuration of external states, i.e., $n$ gluons with negative helicities and two scalars. The computation of $A_{3;2}(g_1^-,g_2^-,g_3^-;\bar{\phi}_4,\bar{\phi}_5)$ is almost the same as that of $A_{3;2}(g_1^-,g_2^-,g_3^+;\bar{\phi}_4,\bar{\phi}_5)$; we only need to replace $\epsilon_3^{+}$ by $\epsilon_3^{-}$.
Direct computation shows that the contributions of the three diagrams sum to \begin{eqnarray} {s_{12}^2+s_{13}^2+s_{23}^2+2s_{12}s_{13}+2s_{12}s_{23}+2s_{13}s_{23}\over\spbb{1~2}\spbb{2~3}\spbb{3~1}}={((p_4+p_5)^2)^2\over \spbb{1~2}\spbb{2~3}\spbb{3~1}}~.~~~\end{eqnarray} This result can be generalized to $A_{n;2}$ as \begin{eqnarray} A_{n;2}(g_1^-,g_2^-,\ldots, g_n^-;\bar{\phi}_{n+1},\bar{\phi}_{n+2})={((p_{n+1}+p_{n+2})^2)^2\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n~1}}~,~~~\label{gluonAllminus}\end{eqnarray} and can be proven recursively by the BCFW recursion relation. In fact, assuming eqn. (\ref{gluonAllminus}) is true for $A_{n-1;2}$ and taking the $\spab{g_n^-|g_1^-}$-shifting, there is only one non-vanishing term in the BCFW expansion, which gives \begin{eqnarray} &&A_3(g_{\widehat{1}}^-,g_2^-,g_{\widehat{P}_{12}}^+){1\over P_{12}^2}A_{n-1;2}(g_{-\widehat{P}_{12}}^-,g_3^-,\ldots, g_{\widehat{n}}^-;\bar{\phi}_{n+1},\bar{\phi}_{n+2})={((p_{n+1}+p_{n+2})^2)^2\over \spbb{1~2}\spbb{2~3}\cdots\spbb{n~1}}~.~~~\end{eqnarray} So the corresponding form factor is \begin{eqnarray} \boxed{\mathcal{F}_{\mathcal{O}_{\tiny \mbox{III}}^{[0]},n}(g_1^-,g_2^-,\ldots, g_n^-;q)={(q^2)^2\over \spbb{1~2}\spbb{2~3}\cdots \spbb{n~1}}~.~~~}\end{eqnarray} \subsection{The spin-${1\over 2}$ operators} For the operators \begin{eqnarray} \mathcal{O}^{[{1/2}]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\phi^{AB}\psi^{C\alpha})~~~~~~~~,~~~~~~~~\mathcal{O}^{[{1/2}]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\psi^{A}_{\beta}F^{\beta\alpha})~,~~~\end{eqnarray} and their complex conjugates $\bar{\mathcal{O}}^{[1/2]}_{\tiny \mbox{I}}, \bar{\mathcal{O}}^{[1/2]}_{\tiny \mbox{II}}$, we need to multiply them by another spin-${1\over 2}$ trace term, which can be chosen as the trace of a product of a scalar and a fermion.
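As a cross-check of the all-minus-gluon result above: the numerator rewriting $s_{12}^2+s_{13}^2+s_{23}^2+2s_{12}s_{13}+2s_{12}s_{23}+2s_{13}s_{23}=((p_4+p_5)^2)^2$ follows purely from masslessness and momentum conservation, since $s_{12}+s_{13}+s_{23}=(p_1+p_2+p_3)^2=(p_4+p_5)^2$ for lightlike $p_1,p_2,p_3$. A minimal numerical sketch (the metric signature $(+,-,-,-)$ used here is an assumption of the check, not fixed by the text):

```python
import numpy as np

rng = np.random.default_rng(7)

def mdot(p, q):
    # Minkowski product with signature (+,-,-,-)
    return p[0] * q[0] - p[1:] @ q[1:]

def random_massless(rng):
    # null momentum: energy equals the length of the three-momentum
    v = rng.normal(size=3)
    return np.concatenate(([np.linalg.norm(v)], v))

p1, p2, p3 = (random_massless(rng) for _ in range(3))
# momentum conservation with all momenta outgoing: p4 + p5 = -(p1 + p2 + p3)
p45 = -(p1 + p2 + p3)

s = lambda a, b: mdot(a + b, a + b)  # Mandelstam invariant s_ij
s12, s13, s23 = s(p1, p2), s(p1, p3), s(p2, p3)

numerator = (s12**2 + s13**2 + s23**2
             + 2*s12*s13 + 2*s12*s23 + 2*s13*s23)
assert np.isclose(numerator, mdot(p45, p45)**2)
```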
For the operator ${\mathcal{O}}^{[1/2]}_{\tiny \mbox{I}}$, we can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{I}}^{[{1/2}]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(\phi^{A'B'}\psi^{C'\alpha})\mathop{\rm Tr}(\phi^{AB}\psi^{C}_{\alpha}) +{\bar{\kappa}\over N}\mathop{\rm Tr}(\bar{\phi}_{A'B'}\bar{\psi}^{\dot{\alpha}}_{C'})\mathop{\rm Tr}(\bar{\phi}_{AB}\bar{\psi}_{C\dot{\alpha}})~.~~~\end{eqnarray} In order to generate the operator ${\mathcal{O}}^{[1/2]}_{\tiny \mbox{I}}$, we should shift $\bar{\phi}_{n+1},\bar{\psi}_{n+2}$. However, there are two ways of shifting, and their large $z$ behaviors are different. If we consider the $\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}$-shifting, the leading term in $z$ is of order $O(z^0)$, and the boundary operator after considering the LSZ reduction is \begin{eqnarray} \mathcal{O}^{\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}}=\lambda_{n+2,\alpha}\mathop{\rm Tr}(\phi\psi^{\alpha})~,~~~\end{eqnarray} hence it differs from $\mathcal{O}_{\tiny \mbox{I}}^{[1/2]}$ by a factor of $\lambda_{n+2,\alpha}$. If we consider the $\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}$-shifting, the leading term in $z$ is of order $O(z)$. The boundary operator associated with the $O(z^0)$ term is quite complicated, but at order $O(z)$ we have \begin{eqnarray} \mathcal{O}_z^{\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}}=-\lambda_{n+1,\alpha}\mathop{\rm Tr}(\phi\psi^{\alpha})~.~~~\end{eqnarray} These two ways of shifting give the same result for the form factor of $\mathcal{O}_{\tiny \mbox{I}}^{[1/2]}$. However, it is better to take the shifting for which the leading $z$ term has lower order, preferably $O(z^0)$, since the computation is simpler. The $\Delta L$ term introduces $\phi$-$\psi$-$\phi$-$\psi$ and $\bar{\phi}$-$\bar{\psi}$-$\bar{\phi}$-$\bar{\psi}$ vertices in the Feynman diagrams.
A direct Feynman-diagram computation shows that $A_{2;2}(\bar{\phi}_1,\bar{\psi}_2;\bar{\phi}_3,\bar{\psi}_4)=\spaa{4~2}$, and \begin{eqnarray} A_{3;2}(\bar{\phi}_1,\bar{\psi}_2,g_3^+;\bar{\phi}_4,\bar{\psi}_5)&=&{\spaa{5|P_{23}|\gamma_\mu|2}\over s_{23}}\epsilon_{3}^{+\mu}-{\spaa{5~2}\over s_{13}}(p_1-P_{13})_{\mu}\epsilon_{3}^{+\mu}\nonumber\\ &=&{\spaa{1~2}^2\spaa{2~5}\over\spaa{1~2}\spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} This result can be generalized to \begin{eqnarray} A_{n;2}(\{g^+\},\bar{\phi}_i,\bar{\psi}_{j};\bar{\phi}_{n+1},\bar{\psi}_{n+2})={\spaa{i~j}^2\spaa{j,n+2}\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} and can similarly be proven by the BCFW recursion relation. Note that this amplitude depends on $p_{n+2}$ (more precisely, on $\lambda_{n+2}^{\alpha}$) but not on $p_{n+1}$; hence, if we take the $\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}$-shifting, the boundary contribution equals the amplitude itself. Thus, subtracting the factor\footnote{We take the convention that $\spaa{i~j}=\epsilon_{\alpha\beta}\lambda_i^\alpha\lambda_j^\beta=\lambda_i^\alpha\lambda_{j\alpha}$, $\spbb{i~j}=\epsilon^{\dot{\alpha}\dot{\beta}}\widetilde{\lambda}_{i\dot{\alpha}}\widetilde{\lambda}_{j\dot{\beta}}=\widetilde{\lambda}_{i\dot{\alpha}}\widetilde{\lambda}_{j}^{\dot{\alpha}}$.} $\lambda_{n+2,\alpha}$, we obtain the form factor of the operator $\mathcal{O}_{\tiny \mbox{I}}^{[1/2]}$ as \begin{eqnarray} \boxed{\mathcal{F}^{\alpha}_{\mathcal{O}^{[1/2]}_{\tiny \mbox{I}},n}(\{g^+\},\bar{\phi}_i,\bar{\psi}_j;q)={\spaa{i~j}^2\lambda_j^\alpha\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~}\label{formfactor12I}\end{eqnarray} If we instead take the $\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}$-shifting, the boundary contribution of the amplitude $A_{n;2}$ is \begin{eqnarray}
B^{\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}}_{n;2}(\{g^+\},\bar{\phi}_i,\bar{\psi}_{j};\bar{\phi}_{n+1},\bar{\psi}_{n+2})={\spaa{i~j}^2(\spaa{j,n+2}-z\spaa{j,n+1})\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~\end{eqnarray} The coefficient of $z$ in the above result is identical to the form factor of $\mathcal{O}_z^{\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}}$, and in order to get the form factor of $\mathcal{O}_{\tiny \mbox{I}}^{[1/2]}$, we should subtract $-\lambda_{n+1,\alpha}$. The final result is again (\ref{formfactor12I}). \begin{figure} \centering \includegraphics[width=6in]{spsipsigg}\\ \caption{Feynman diagrams for $A_{3;2}(\bar{\psi}_1,g_2^-,g_3^+;\bar{\phi}_4,\bar{\psi}_5)$ defined by $L_{\mathcal{O}^{[1/2]}_{\tiny \mbox{II}}}$. All external particles are out-going.}\label{spsipsigg} \end{figure} For the operator $\mathcal{O}_{\tiny \mbox{II}}^{[1/2]}$, we can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{II}}^{[1/2]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(\phi^{A'B'}\psi^{C'}_{\alpha})\mathop{\rm Tr}(\psi^A_{\beta}F^{\beta\alpha}) +{\bar{\kappa}\over N}\mathop{\rm Tr}(\bar{\phi}_{AB}\bar{\psi}_{C\dot{\alpha}})\mathop{\rm Tr}(\bar{\psi}_{A\dot{\beta}}\bar{F}^{\dot{\beta}\dot{\alpha}})~.~~~\end{eqnarray} Here we choose the $\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}$-shifting so that the leading term in $z$ is of order $O(z^0)$. The corresponding boundary operator is \begin{eqnarray} \mathcal{O}^{\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}}=\lambda_{n+2,\alpha}\mathop{\rm Tr}(\psi_{\beta}F^{\beta\alpha})~.~~~\end{eqnarray} The $\Delta L$ term introduces four-point (scalar-fermion-fermion-gluon) and five-point (scalar-fermion-fermion-gluon-gluon) vertices.
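To make the subtraction step for $\mathcal{O}_{\tiny \mbox{I}}^{[1/2]}$ above completely explicit: with the bracket convention $\spaa{i~j}=\lambda_i^{\alpha}\lambda_{j\alpha}$ and writing $C\equiv\spaa{i~j}^2/(\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1})$, the coefficient of $z$ in the boundary contribution factorizes as \begin{eqnarray} -C\spaa{j,n+1}=-C\,\lambda_j^{\alpha}\lambda_{n+1,\alpha}=\left(-\lambda_{n+1,\alpha}\right)\left(C\lambda_j^{\alpha}\right)~,~~~\nonumber\end{eqnarray} so stripping off the prefactor $-\lambda_{n+1,\alpha}$ of the boundary operator $\mathcal{O}_z^{\spab{\bar{\psi}_{n+2}|\bar{\phi}_{n+1}}}$ leaves $\mathcal{F}^{\alpha}=C\lambda_j^{\alpha}$, which is exactly (\ref{formfactor12I}).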
The four-point amplitude defined by the four-point vertex is given by \begin{eqnarray} A_{2;2}(\bar{\psi}_1,g_2^-;\bar{\phi}_3,\bar{\psi}_4)&=&{\spaa{1|2|\gamma_{\mu}|4}+\spaa{4|2|\gamma_{\mu}|1}\over 2}\epsilon_2^{-\mu}=\spaa{1~2}\spaa{4~2}~.~~~\end{eqnarray} The five-point amplitude $A_{3;2}(\bar{\psi}_1,g_2^-,g_3^+;\bar{\phi}_4,\bar{\psi}_5)$ can be computed from the three Feynman diagrams shown in Figure (\ref{spsipsigg}). The first diagram gives \begin{eqnarray} (a)&=&{1\over 2}\Big(-{\spaa{1|P_{23}|\gamma_{\mu}|5}\over s_{23}}-{\spaa{5|P_{23}|\gamma_{\mu}|1}\over s_{23}}\Big)\Big(-(P_{23}\cdot\epsilon_3^+)\epsilon_2^{-\mu}+(p_3\cdot\epsilon_2^{-})\epsilon_3^{+\mu}+(\epsilon_3^+\cdot\epsilon_2^-)p_2^{\mu}\Big)\nonumber\\ &=&{\spaa{2~r_3}\spaa{1~2}\spaa{2~5}\over \spaa{2~3}\spaa{r_3~3}}-{1\over 2}{\spaa{1~2}\spbb{r_2~3}\spaa{r_3~5}\over \spaa{r_3~3}\spbb{2~r_2}}-{1\over 2}{\spaa{5~2}\spbb{r_2~3}\spaa{r_3~1}\over \spaa{r_3~3}\spbb{2~r_2}}~,~~~\end{eqnarray} and the second diagram gives \begin{eqnarray} (b)&=&-{\spaa{5~2}\spaa{2|P_{13}|\gamma_\mu|1}\over s_{13}}\epsilon_{3}^{+\mu}={\spaa{2~5}\spaa{1~2}\spaa{r_3~1}\over \spaa{1~3}\spaa{r_3~3}}~,~~~\end{eqnarray} while the third diagram gives \begin{eqnarray} (c)&=&{\spaa{1|\gamma_\mu\gamma_\nu|5}+\spaa{5|\gamma_\mu\gamma_\nu|1}\over 2}\epsilon_2^{-\mu}\epsilon_3^{+\nu} ={1\over 2}{\spaa{1~2}\spbb{r_2~3}\spaa{r_3~5}\over \spaa{r_3~3}\spbb{2~r_2}}+{1\over 2}{\spaa{5~2}\spbb{r_2~3}\spaa{r_3~1}\over \spaa{r_3~3}\spbb{2~r_2}}~.~~~\end{eqnarray} Summing the above contributions, we get \begin{eqnarray} A_{3;2}(\bar{\psi}_1,g_2^-,g_3^+;\bar{\phi}_4,\bar{\psi}_5)={\spaa{1~2}^3\spaa{2~5}\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} This is readily generalized to \begin{eqnarray} A_{n;2}(\{g^+\},\bar{\psi}_i,g_j^-;\bar{\phi}_{n+1},\bar{\psi}_{n+2})={\spaa{i~j}^3\spaa{j,n+2}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} which can be proven by the BCFW recursion relation.
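The spinor products in these expressions can be cross-checked numerically: for real positive-energy massless momenta one has $|\spaa{i~j}|^2=2\,p_i\cdot p_j$. A minimal sketch, where the explicit light-cone parametrization of the holomorphic spinor (using $p^{+}=p^0+p^3$ and $p_{\perp}=p^1+ip^2$) is a common choice assumed here rather than a convention fixed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def mdot(p, q):
    # Minkowski product, signature (+,-,-,-)
    return p[0] * q[0] - p[1:] @ q[1:]

def holomorphic_spinor(p):
    # lambda_p = (sqrt(p^+), p_perp / sqrt(p^+)), valid for p^+ > 0
    p_plus = p[0] + p[3]
    p_perp = p[1] + 1j * p[2]
    return np.array([np.sqrt(p_plus), p_perp / np.sqrt(p_plus)])

def angle(la, lb):
    # antisymmetric angle bracket <a b>
    return la[0] * lb[1] - la[1] * lb[0]

def random_massless(rng):
    v = rng.normal(size=3)
    return np.concatenate(([np.linalg.norm(v)], v))

pi, pj = random_massless(rng), random_massless(rng)
aij = angle(holomorphic_spinor(pi), holomorphic_spinor(pj))
# |<i j>|^2 reproduces the invariant 2 p_i . p_j
assert np.isclose(abs(aij)**2, 2 * mdot(pi, pj))
```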
Taking the $\spab{\bar{\phi}_{n+1}|\bar{\psi}_{n+2}}$-shifting and subtracting $\lambda_{n+2,\alpha}$, we get the form factor \begin{eqnarray} \boxed{\mathcal{F}^{\alpha}_{\mathcal{O}^{[1/2]}_{\tiny \mbox{II}},n}(\{g^+\},\bar{\psi}_i,g_j^-;q)={\spaa{i~j}^3\lambda_j^{\alpha}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~}\end{eqnarray} \subsection{The spin-1 operators} There are three spin-1 operators \begin{eqnarray} \mathcal{O}^{[1]}_{\tiny \mbox{I}}=\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B\beta}+\psi^{A\beta}\psi^{B\alpha})~~~,~~~\mathcal{O}^{[1]}_{\tiny \mbox{II}}=\mathop{\rm Tr}(\phi^{AB}F^{\alpha\beta}) ~~~,~~~\mathcal{O}^{[1]}_{\tiny \mbox{III}}=\mathop{\rm Tr}(\psi^{A\alpha}\bar{\psi}_B^{\dot{\alpha}})~,~~~\end{eqnarray} and their complex conjugates. In order to construct the Lagrangian, we need to multiply them by a spin-1 trace term. Since a computation involving $F^{\alpha\beta}$ is always harder than one involving fermions and scalars, it is better to choose the trace of two fermions. For the operator $\mathcal{O}_{\tiny \mbox{I}}^{[1]}$, we can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{I}}^{[1]}}=L_{{\tiny\mbox{SYM}}}+({\kappa \over N}\mathop{\rm Tr}(\psi^{A'}_{\alpha}\psi^{B'}_{\beta}+\psi^{A'}_{\beta}\psi^{B'}_{\alpha})\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B\beta}+\psi^{A\beta}\psi^{B\alpha})+c.c.)~.~~~\end{eqnarray} In order to generate the operator $\mathcal{O}_{\tiny \mbox{I}}^{[1]}$, we should shift the two fermions $\bar{\psi}_{n+1},\bar{\psi}_{n+2}$.
Taking the $\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}$-shifting and considering the LSZ reduction, we find that the leading term in $z$ is of order $O(z)$, and the corresponding boundary operator is \begin{eqnarray} \mathcal{O}_{z}^{\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}}=-2\lambda_{n+2,\alpha}\lambda_{n+2,\beta}\mathop{\rm Tr}(\psi^{A\alpha}\psi^{B\beta}+\psi^{A\beta}\psi^{B\alpha})~.~~~\end{eqnarray} Thus we also need to take the $O(z)$ term in the boundary contribution of the amplitude $A_{n;2}$ under the $\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}$-shifting. The $\Delta L$ Lagrangian term introduces a four-fermion vertex, which defines the four-point amplitude $A_{2;2}(\bar{\psi}_1,\bar{\psi}_2;\bar{\psi}_3,\bar{\psi}_4)=\spaa{3~1}\spaa{2~4}+\spaa{4~1}\spaa{2~3}$. For the five-point amplitude $A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^+;\bar{\psi}_4,\bar{\psi}_5)$, there are two contributing Feynman diagrams; the first diagram gives \begin{eqnarray} (a)&=&-\spaa{5~2}{\spaa{1|\gamma_\mu|P_{13}|4}\over s_{13}}\epsilon_3^{+\mu}-\spaa{4~2}{\spaa{1|\gamma_\mu|P_{13}|5}\over s_{13}}\epsilon_3^{+\mu}\nonumber\\ &=&-{\spaa{5~2}\spaa{4~1}\spaa{1~r_3}\over \spaa{3~1}\spaa{r_3~3}}-{\spaa{4~2}\spaa{5~1}\spaa{1~r_3}\over \spaa{3~1}\spaa{r_3~3}}~,~~~\end{eqnarray} while the second gives \begin{eqnarray} (b)&=&\spaa{5~1}{\spaa{2|\gamma_{\mu}|P_{23}|4}\over s_{23}}\epsilon_3^{+\mu}+\spaa{4~1}{\spaa{2|\gamma_{\mu}|P_{23}|5}\over s_{23}}\epsilon_3^{+\mu}\nonumber\\ &=&{\spaa{5~1}\spaa{4~2}\spaa{2~r_3}\over \spaa{3~2}\spaa{r_3~3}}+{\spaa{4~1}\spaa{5~2}\spaa{2~r_3}\over \spaa{3~2}\spaa{r_3~3}}~.~~~\end{eqnarray} Thus \begin{eqnarray} A_{3;2}(\bar{\psi}_1,\bar{\psi}_2,g_3^+;\bar{\psi}_4,\bar{\psi}_5)&=&{\spaa{1~2}^2\over\spaa{1~2}\spaa{2~3}\spaa{3~1}}(\spaa{4~1}\spaa{2~5}+\spaa{5~1}\spaa{2~4})~.~~~\end{eqnarray} By the BCFW recursion relation, we also have \begin{eqnarray} &&A_{n;2}(\{g^+\},\bar{\psi}_i,\bar{\psi}_j;\bar{\psi}_{n+1},\bar{\psi}_{n+2})\nonumber\\ &&={\spaa{i~j}^2\over
\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}(\spaa{n+1,i}\spaa{j,n+2}+\spaa{n+2,i}\spaa{j,n+1})~.~~~\end{eqnarray} Notice that this amplitude depends on both $\lambda_{n+1}^{\alpha}$ and $\lambda_{n+2}^{\alpha}$; thus the $O(z)$ term is unavoidable when shifting the two fermions. The boundary contribution under the $\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}$-shifting is \begin{eqnarray} &&B_{n;2}^{\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}}(\{g^+\},\bar{\psi}_i,\bar{\psi}_j;\bar{\psi}_{\widehat{n+1}},\bar{\psi}_{\widehat{n+2}})\nonumber\\ &&=-2z{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\spaa{n+2,i}\spaa{j,n+2}\nonumber\\ &&~~~~~~~+{\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}(\spaa{n+1,i}\spaa{j,n+2}+\spaa{n+2,i}\spaa{j,n+1})~.~~~\end{eqnarray} Taking the $O(z)$ contribution and subtracting the factor $-2\lambda_{n+2,\alpha}\lambda_{n+2,\beta}$, we get the form factor \begin{eqnarray} \boxed{\mathcal{F}^{\alpha\beta}_{\mathcal{O}_{\tiny \mbox{I}}^{[1]},n}(\{g^+\},\bar{\psi}_i,\bar{\psi}_j;q)={\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\left({\lambda_i^{\alpha}\lambda_j^{\beta}+\lambda_j^{\alpha}\lambda_i^{\beta}\over 2}\right)~,~~~}\end{eqnarray} where we have symmetrized the indices $\alpha,\beta$.
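The $O(z)$ extraction above can also be checked symbolically. The holomorphic shift convention $\lambda_{\widehat{n+1}}=\lambda_{n+1}-z\lambda_{n+2}$ used below is inferred from the quoted boundary contribution and is an assumption of the check:

```python
import sympy as sp

z = sp.symbols('z')
# generic two-component holomorphic spinors for legs i, j, n+1, n+2
li  = sp.Matrix(sp.symbols('li1 li2'))
lj  = sp.Matrix(sp.symbols('lj1 lj2'))
ln1 = sp.Matrix(sp.symbols('lm1 lm2'))   # lambda_{n+1}
ln2 = sp.Matrix(sp.symbols('lp1 lp2'))   # lambda_{n+2}

def ang(a, b):
    # antisymmetric angle bracket <a b>
    return a[0] * b[1] - a[1] * b[0]

# assumed shift: lambda_{n+1} -> lambda_{n+1} - z * lambda_{n+2}
ln1_hat = ln1 - z * ln2

# spinor structure of A_{n;2} with leg n+1 shifted
shifted = ang(ln1_hat, li) * ang(lj, ln2) + ang(ln2, li) * ang(lj, ln1_hat)

coeff_z = sp.expand(shifted).coeff(z, 1)
expected = sp.expand(-2 * ang(ln2, li) * ang(lj, ln2))
assert sp.simplify(coeff_z - expected) == 0
```

The $z^1$ coefficient comes out as $-2\spaa{n+2,i}\spaa{j,n+2}$, matching the first line of the boundary contribution above.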
A similar construction applies to the operator $\mathcal{O}_{\tiny \mbox{II}}^{[1]}$, for which we have \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{II}}^{[1]}}=L_{{\tiny\mbox{SYM}}}+({\kappa\over N}\mathop{\rm Tr}(\psi^{A'}_{\alpha}\psi^{B'}_{\beta}+\psi^{A'}_{\beta}\psi^{B'}_{\alpha})\mathop{\rm Tr}(\phi^{AB}F^{\alpha\beta})+c.c.)~.~~~\end{eqnarray} The leading term in $z$ under the $\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}$-shifting is of order $O(z)$, and the boundary operator is \begin{eqnarray} \mathcal{O}_z^{\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}}=-\lambda_{n+2,\alpha}\lambda_{n+2,\beta}\mathop{\rm Tr}(\phi^{AB}F^{\alpha\beta})~.~~~ \end{eqnarray} The $\Delta L$ Lagrangian term introduces a four-point (fermion-fermion-scalar-gluon) vertex and a five-point (fermion-fermion-scalar-gluon-gluon) vertex. The four-point vertex defines the four-point amplitude $A_{2;2}(\bar{\phi}_1,g_2^-;\bar{\psi}_3,\bar{\psi}_4)=-{1\over 2}(\spaa{3|2|\gamma_{\mu}|4}+\spaa{4|2|\gamma_\mu|3})\epsilon_2^{-\mu}=\spaa{2~3}\spaa{2~4}$, while for the five-point amplitude $A_{3;2}(\bar{\phi}_1,g_2^-,g_3^+;\bar{\psi}_4,\bar{\psi}_5)$, we need to consider the three Feynman diagrams shown in Figure (\ref{psipsisgg}). \begin{figure} \centering \includegraphics[width=6in]{psipsisgg}\\ \caption{Feynman diagrams for $A_{3;2}(\bar{\phi}_1,g_2^-,g_3^+;\bar{\psi}_4,\bar{\psi}_5)$ defined by $L_{\mathcal{O}^{[1]}_{\tiny \mbox{II}}}$.
All external particles are out-going.}\label{psipsisgg} \end{figure} The first diagram gives \begin{eqnarray} (a)&=&{\spaa{2~4}\spaa{2~5}\over s_{13}}(p_1+P_{13})_{\mu}\epsilon_{3}^{+\mu}={\spaa{2~4}\spaa{2~5}\spaa{r_3~1}\over \spaa{3~1}\spaa{r_3~3}}~,~~~\end{eqnarray} the second diagram gives \begin{eqnarray} (b)&=&{1\over 2}\Big(-{\spaa{4|P_{23}|\gamma_{\mu}|5}\over s_{23}}-{\spaa{5|P_{23}|\gamma_{\mu}|4}\over s_{23}}\Big)\Big(-(P_{23}\cdot\epsilon_3^+)\epsilon_2^{-\mu}+(p_3\cdot\epsilon_2^-)\epsilon_3^{+\mu}+(\epsilon_3^+\cdot\epsilon_2^{-})p_2^{\mu}\Big)\nonumber\\ &=&{\spaa{r_3~2}\spaa{2~4}\spaa{2~5}\over \spaa{2~3}\spaa{r_3~3}}+{1\over 2}{\spbb{r_2~3}\spaa{r_3~4}\spaa{2~5}\over \spaa{r_3~3}\spbb{2~r_2}}+{1\over 2}{\spbb{r_2~3}\spaa{r_3~5}\spaa{2~4}\over \spaa{r_3~3}\spbb{2~r_2}}~,~~~\end{eqnarray} and the third diagram gives \begin{eqnarray} (c)&=&{\spaa{4|\gamma_{\mu}\gamma_{\nu}|5}+\spaa{5|\gamma_{\mu}\gamma_{\nu}|4}\over 2}\epsilon_2^{-\mu}\epsilon_3^{+\nu}={1\over 2}{\spaa{4~2}\spbb{r_2~3}\spaa{r_3~5}\over \spaa{r_3~3}\spbb{2~r_2}}+{1\over 2}{\spaa{5~2}\spbb{r_2~3}\spaa{r_3~4}\over \spaa{r_3~3}\spbb{2~r_2}}~.~~~\end{eqnarray} Summing the above contributions, we get \begin{eqnarray} A_{3;2}(\bar{\phi}_1,g_2^-,g_3^+;\bar{\psi}_4,\bar{\psi}_5)&=&{\spaa{1~2}^2\over\spaa{1~2}\spaa{2~3}\spaa{3~1}}\spaa{4~2}\spaa{2~5}~.~~~\end{eqnarray} Generalizing the above result to the $(n+2)$-point amplitude, we have \begin{eqnarray} A_{n;2}(\{g^+\},\bar{\phi}_i,g_j^-;\bar{\psi}_{n+1},\bar{\psi}_{n+2})={\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\spaa{n+1,j}\spaa{j,n+2}~,~~~\end{eqnarray} which can be proven straightforwardly by the BCFW recursion relation.
We are only interested in the $O(z)$ term of the boundary contribution under the $\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}$-shifting, which is \begin{eqnarray} B_{n;2}^{\spab{\bar{\psi}_{n+1}|\bar{\psi}_{n+2}}}(\{g^+\},\bar{\phi}_i,g_j^-;\bar{\psi}_{\widehat{n+1}},\bar{\psi}_{\widehat{n+2}})=-z{\spaa{i~j}^2\spaa{n+2,j}\spaa{j,n+2}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}+O(z^0)~.~~~\end{eqnarray} After subtracting the factor $-\lambda_{n+2,\alpha}\lambda_{n+2,\beta}$, we get \begin{eqnarray} \boxed{\mathcal{F}^{\alpha\beta}_{\mathcal{O}_{\tiny \mbox{II}}^{[1]},n}(\{g^+\},\bar{\phi}_i,g_j^-;q)={\spaa{i~j}^2\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}(-\lambda_j^{\alpha}\lambda_j^{\beta})~.~~~}\end{eqnarray} Now let us turn to the operator $\mathcal{O}_{\tiny \mbox{III}}^{[1]}$ and construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{III}}^{[1]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(\psi^{A'}_{\alpha}\bar{\psi}_{B'\dot{\alpha}})\mathop{\rm Tr}(\psi^{A\alpha}\bar{\psi}_{B}^{\dot{\alpha}})~.~~~\end{eqnarray} The leading term in $z$ under the $\spab{\bar{\psi}_{n+2}|\psi_{n+1}}$-shifting is of order $O(z^2)$, while the leading term in $z$ under the $\spab{\psi_{n+1}|\bar{\psi}_{n+2}}$-shifting is of order $O(z^0)$.
In the latter case, the boundary operator is \begin{eqnarray} \mathcal{O}^{\spab{\psi_{n+1}|\bar{\psi}_{n+2}}}=\widetilde{\lambda}_{n+1,\dot{\alpha}}\lambda_{n+2,\alpha}\mathop{\rm Tr}(\psi^{A\alpha}\bar{\psi}^{\dot{\alpha}}_B)~.~~~\end{eqnarray} The four-point amplitude is $A_{2;2}(\psi_1,\bar{\psi}_2;\psi_3,\bar{\psi}_4)=\spbb{1~3}\spaa{2~4}$, while the five-point amplitude is \begin{eqnarray} A_{3;2}(\psi_1,\bar{\psi}_2,g_3^+;\psi_4,\bar{\psi}_5)&=&\spaa{2~5}{\spbb{1|\gamma_\mu|P_{13}|4}\over s_{13}}\epsilon_3^{+\mu}-\spbb{1~4}{\spaa{2|\gamma_\mu|P_{23}|5}\over s_{23}}\epsilon_3^{+\mu}\nonumber\\ &=&{\spaa{1~2}\spaa{2~5}\spab{2|1+3|4}\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} Noting that $\spab{2|1+3|4}=\spab{2|1+2+3|4}=\spab{2|q|4}$, where $q=-p_4-p_5$, we can generalize the above result to the $(n+2)$-point case as \begin{eqnarray} A_{n;2}(\{g^+\},\psi_i,\bar{\psi}_j;\psi_{n+1},\bar{\psi}_{n+2})={\spaa{i~j}\spaa{j,n+2}\spab{j|q|n+1}\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\label{formfactor1III}\end{eqnarray} where $q=-p_{n+1}-p_{n+2}$. Let us verify eqn. (\ref{formfactor1III}) by induction. Assuming eqn. (\ref{formfactor1III}) is valid for $A_{n-1;2}$ and taking the $\spab{g_1^+|g_n^+}$-shifting, we get two contributing terms\footnote{We have assumed that $i,j\neq 2,n-1$; otherwise the contributing terms are slightly different, but the conclusion is the same.} from the BCFW expansion. One is \begin{eqnarray} A_{n-1;2}(g^+_{\widehat{1}},\ldots, \psi_i,\ldots, \bar{\psi}_j,\ldots, g^+_{n-2},g^+_{\widehat{P}_{n-1,n}};\psi_{n+1},\bar{\psi}_{n+2}){1\over P^2_{n-1,n}}A_3(g^-_{-\widehat{P}_{n-1,n}},g^+_{n-1},g^+_{\widehat{n}})~.~~~\nonumber\end{eqnarray} Since $\widehat{P}_{n-1,n}^2=\spaa{n-1,n}\spbb{\widehat{n},n-1}=0$, we have $A_3(g^-_{\widehat{P}_{n-1,n}},g^+_{n-1},g^+_{\widehat{n}})\sim \spbb{n-1,\widehat{n}}^3\to 0$, and this term vanishes.
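The vanishing can be made slightly more explicit. Using the three-point anti-MHV amplitude (quoted here up to phase conventions) \begin{eqnarray} A_3(g^-_{-\widehat{P}_{n-1,n}},g^+_{n-1},g^+_{\widehat{n}})\propto {\spbb{n-1,\widehat{n}}^3\over \spbb{\widehat{n},-\widehat{P}_{n-1,n}}\spbb{-\widehat{P}_{n-1,n},n-1}}~,~~~\nonumber\end{eqnarray} we note that at the on-shell point the condition $\spbb{\widehat{n},n-1}=0$ forces $\widetilde{\lambda}_{\widehat{n}}\propto\widetilde{\lambda}_{n-1}$, and momentum conservation at the three-point vertex then makes all three anti-holomorphic spinors proportional. Hence all three square brackets vanish at the same rate, $A_3\sim\spbb{n-1,\widehat{n}}^{3-2}\to 0$, while the propagator $1/P^2_{n-1,n}$ is evaluated at unshifted momenta and stays finite; the whole term therefore vanishes.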
The other contributing term is \begin{eqnarray} &&A_{3}(g^+_{\widehat{1}},g^+_2,g^-_{\widehat{P}_{12}}){1\over P_{12}^2}A_{n-1;2}(g^+_{-\widehat{P}_{12}},g^+_3,\ldots,\psi_i,\ldots, \bar{\psi}_j,\ldots, g^+_{\widehat{n}};\psi_{n+1},\bar{\psi}_{n+2})~.~~~\end{eqnarray} By inserting the explicit expressions of $A_3$ and $A_{n-1;2}$, we arrive at eqn. (\ref{formfactor1III}). Under the $\spab{\psi_{n+1}|\bar{\psi}_{n+2}}$-shifting, the boundary contribution is \begin{eqnarray} B_{n;2}^{\spab{\psi_{n+1}|\bar{\psi}_{n+2}}}(\{g^+\},\psi_i,\bar{\psi}_j;\psi_{\widehat{n+1}},\bar{\psi}_{\widehat{n+2}})={\spaa{i~j}\spaa{j,n+2}\spab{j|q|n+1}\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~\end{eqnarray} Subtracting the factor $\lambda_{n+2,\alpha}\widetilde{\lambda}_{n+1,\dot{\alpha}}$, we thus get the form factor \begin{eqnarray} \boxed{\mathcal{F}^{\alpha\dot{\alpha}}_{\mathcal{O}_{\tiny \mbox{III}}^{[1]},n}(\{g^+\},\psi_i,\bar{\psi}_j;q)={\spaa{i~j}\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\lambda_j^{\alpha}(\lambda_{j\beta}q^{\beta\dot{\alpha}})~.~~~}\end{eqnarray} \subsection{The spin-${3\over 2}$ operators} There are two operators \begin{eqnarray} \mathcal{O}_{\tiny \mbox{I}}^{[3/2]}=\mathop{\rm Tr}(\bar{\psi}^{\dot{\alpha}}F^{\alpha\beta})~~~~,~~~~\mathcal{O}_{\tiny \mbox{II}}^{[3/2]}=\mathop{\rm Tr}(\psi^{\gamma}F^{\alpha\beta})~~~~\end{eqnarray} together with their complex conjugate partners. We need to multiply them by a spin-${3\over 2}$ trace term to construct $\Delta L$.
For the operator $\mathcal{O}_{\tiny \mbox{I}}^{[3/2]}$, we can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{I}}^{[3/2]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(\bar{\psi}_{\dot{\alpha}}F_{\alpha\beta})\mathop{\rm Tr}(\bar{\psi}^{\dot{\alpha}}F^{\alpha\beta})+{\bar{\kappa}\over N}\mathop{\rm Tr}(\psi_{\alpha}\bar{F}_{\dot{\alpha}\dot{\beta}})\mathop{\rm Tr}(\psi^{\alpha}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\end{eqnarray} It introduces new four-point vertices $\bar{\psi}$-$g^+$-$\bar{\psi}$-$g^+$ and $\psi$-$g^-$-$\psi$-$g^-$, as well as five- and six-point vertices. From Feynman diagrams, we can directly compute $A_{2;2}(\psi_1,g_2^-;\psi_3,g_4^-)=\spaa{2~4}^2\spbb{1~3}$, while for the five-point amplitude $A_{3;2}(\psi_1,g_2^-,g_3^+;\psi_4,g_5^-)$, we need to compute three Feynman diagrams, which give \begin{eqnarray} (a)&=&\spaa{2~5}^2{\spbb{1|\gamma_\mu|P_{13}|4}\over s_{13}}\epsilon_3^{+\mu}={\spaa{2~5}^2\spbb{3~4}\over \spaa{3~1}}+{\spaa{2~5}^2\spbb{1~4}\spaa{r_3~1}\over \spaa{3~1}\spaa{r_3~3}}~,~~~\end{eqnarray} \begin{eqnarray} (b)&=&-\spbb{1~4}{\spaa{5|P_{23}|\gamma_\mu|5}\over s_{23}}\Big(-(P_{23}\cdot\epsilon_3^+)\epsilon_2^{-\mu}+(p_3\cdot\epsilon_2^-)\epsilon_3^{+\mu}+(\epsilon_3^+\cdot\epsilon_2^{-})p_2^{\mu}\Big)\nonumber\\ &=&{\spbb{1~4}\spaa{2~5}^2\spaa{r_3~2}\over \spaa{2~3}\spaa{r_3~3}}+{\spbb{1~4}\spaa{2~5}\spbb{r_2~3}\spaa{r_3~5}\over \spaa{r_3~3}\spbb{2~r_2}}~,~~~\end{eqnarray} and \begin{eqnarray} (c)=\spbb{1~4}\spaa{5|\gamma_\mu\gamma_\nu|5}\epsilon_2^{-\mu}\epsilon_3^{+\nu}={\spbb{1~4}\spaa{5~2}\spbb{r_2~3}\spaa{r_3~5}\over\spaa{r_3~3}\spbb{2~r_2}}~.~~~\end{eqnarray} So the final result is \begin{eqnarray} A_{3;2}(\psi_1,g_2^-,g_3^+;\psi_4,g_5^-)={\spaa{1~2}\spaa{2~5}^2\spab{2|q|4}\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~,~~~\end{eqnarray} where $q=-p_4-p_5$.
This result can be generalized to \begin{eqnarray} A_{n;2}(\{g^+\},\psi_i,g_j^-;\psi_{n+1},g^-_{n+2})={\spaa{i~j}\spaa{j,n+2}^2\spab{j|q|n+1}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\end{eqnarray} where $q=-p_{n+1}-p_{n+2}$, and can be proven by the BCFW recursion relation as in the $\mathcal{O}_{\tiny \mbox{III}}^{[1]}$ case. If we take the $\spab{g_{n+2}^-|\psi_{n+1}}$-shifting, the leading $z$ term in the boundary operator would be of order $O(z^3)$. We can however choose the $\spab{\psi_{n+1}|g_{n+2}^-}$-shifting, under which there is only an $O(z^0)$ term in the boundary operator, \begin{eqnarray} \mathcal{O}^{\spab{\psi_{n+1}|g_{n+2}^-}}=\widetilde{\lambda}_{n+1,\dot{\alpha}}\lambda_{n+2,\alpha}\lambda_{n+2,\beta}\mathop{\rm Tr}(\bar{\psi}^{\dot{\alpha}}F^{\alpha\beta})~.~~~\end{eqnarray} The boundary contribution of the amplitude $A_{n;2}$ under the $\spab{\psi_{n+1}|g_{n+2}^-}$-shifting equals $A_{n;2}$ itself; thus, after subtracting the factor $\widetilde{\lambda}_{n+1,\dot{\alpha}}\lambda_{n+2,\alpha}\lambda_{n+2,\beta}$, we get the form factor \begin{eqnarray} \boxed{\mathcal{F}^{\dot{\alpha}~\alpha\beta}_{\mathcal{O}_{\tiny \mbox{I}}^{[3/2]},n}(\{g^+\},\psi_i,g_j^-;q)={\spaa{i~j}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\lambda_j^{\alpha}\lambda_j^{\beta}(\lambda_{j\gamma}q^{\gamma\dot{\alpha}})~.~~~}\end{eqnarray} The discussion of the operator $\mathcal{O}^{[3/2]}_{\tiny \mbox{II}}$ is almost the same as that of $\mathcal{O}^{[3/2]}_{\tiny \mbox{I}}$; we only need to change $\psi\to\bar{\psi}$.
We can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{II}}^{[3/2]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(\psi_{\gamma}F_{\alpha\beta})\mathop{\rm Tr}(\psi^{\gamma}F^{\alpha\beta})+{\bar{\kappa}\over N}\mathop{\rm Tr}(\bar{\psi}_{\dot{\gamma}}\bar{F}_{\dot{\alpha}\dot{\beta}})\mathop{\rm Tr}(\bar{\psi}^{\dot{\gamma}}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\end{eqnarray} In order to generate the operator $\mathop{\rm Tr}(\psi^{\gamma}F^{\alpha\beta})$, we need to shift $\bar{\psi}_{n+1}, g^-_{n+2}$. Under the $\spab{g_{n+2}^-|\bar{\psi}_{n+1}}$-shifting, the leading term in $z$ is of order $O(z^2)$, and the corresponding boundary operator is \begin{eqnarray} \mathcal{O}^{\spab{g_{n+2}^-|\bar{\psi}_{n+1}}}_{z^2}=\lambda_{n+1,\alpha}\lambda_{n+1,\beta}\lambda_{n+1,\gamma}\mathop{\rm Tr}(\psi^{\gamma}F^{\alpha\beta})~.~~~\end{eqnarray} We can also take the $\spab{\bar{\psi}_{n+1}|g_{n+2}^-}$-shifting, for which the corresponding boundary operator appears at order $O(z)$, \begin{eqnarray} \mathcal{O}_{z}^{\spab{\bar{\psi}_{n+1}|g_{n+2}^-}}=\lambda_{n+2,\gamma}\lambda_{n+2,\alpha}\lambda_{n+2,\beta}\mathop{\rm Tr}(\psi^{\gamma}F^{\alpha\beta})~.~~~\end{eqnarray} The computation of the double-trace amplitudes defined by $L_{\mathcal{O}_{\tiny \mbox{II}}^{[3/2]}}$ is similar to that for $L_{\mathcal{O}_{\tiny \mbox{I}}^{[3/2]}}$, and we immediately get $A_{2;2}(\bar{\psi}_1,g_2^-;\bar{\psi}_3,g_4^-)=\spaa{2~4}^2\spaa{3~1}$, and \begin{eqnarray} A_{3;2}(\bar{\psi}_1,g_2^-,g_3^+;\bar{\psi}_4,g_5^-)={\spaa{1~2}^2\spaa{2~5}^2\spaa{1~4}\over \spaa{1~2}\spaa{2~3}\spaa{3~1}}~.~~~\end{eqnarray} For the general $(n+2)$-point amplitude, we have \begin{eqnarray} A_{n;2}(\{g^+\},\bar{\psi}_i,g_j^-;\bar{\psi}_{n+1},g_{n+2}^-)={\spaa{i~j}^2\spaa{j,n+2}^2\spaa{i,n+1}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~\end{eqnarray} We can either take the $\spab{g_{n+2}^-|\bar{\psi}_{n+1}}$-shifting or the $\spab{\bar{\psi}_{n+1}|g_{n+2}^-}$-shifting to compute the form factor of
$\mathcal{O}_{\tiny \mbox{II}}^{[3/2]}$. For example, under the $\spab{g_{n+2}^-|\bar{\psi}_{n+1}}$-shifting, we pick up the $O(z^2)$ term of the boundary contribution, which is $$z^2{\spaa{i~j}^2\spaa{j,n+1}^2\spaa{i,n+1}\over \spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,$$ subtract the factor $\lambda_{n+1,\alpha}\lambda_{n+1,\beta}\lambda_{n+1,\gamma}$, and finally get the form factor, \begin{eqnarray} \boxed{\mathcal{F}^{\alpha\beta\gamma}_{\mathcal{O}_{\tiny \mbox{II}}^{[3/2]},n}(\{g^+\},\bar{\psi}_i,g_j^-;q)={\spaa{i~j}^2\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}\lambda_{j}^{\alpha}\lambda_{j}^{\beta}\lambda_{i}^{\gamma}~.~~~}\end{eqnarray} \subsection{The spin-2 operator} \label{sectionspin2} For the spin-2 operator \begin{eqnarray} \mathcal{O}_{\tiny \mbox{I}}^{[2]}=\mathop{\rm Tr}(F^{\alpha\beta}\bar{F}^{\dot{\alpha}\dot{\beta}})~,~~~\end{eqnarray} we can construct the Lagrangian as \begin{eqnarray} L_{\mathcal{O}_{\tiny \mbox{I}}^{[2]}}=L_{{\tiny\mbox{SYM}}}+{\kappa\over N}\mathop{\rm Tr}(F_{\alpha\beta}\bar{F}_{\dot{\alpha}\dot{\beta}})\mathop{\rm Tr}(F^{\alpha\beta}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\end{eqnarray} The $\Delta L$ Lagrangian term introduces four- to eight-point gluon vertices in the Feynman diagrams. It is easy to see that the four-point amplitude is $A_{2;2}(g_1^-,g_2^+;g_3^-,g_4^+)=\spaa{1~3}^2\spbb{2~4}^2$. The general $(n+2)$-point amplitude is given by \begin{eqnarray} A_{n;2}(\{g^+\},g_i^-;g_{n+1}^-,g_{n+2}^+)=-{\spab{i|q|n+2}^2\spaa{i,n+1}^2\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~,~~~\label{formfactor2I}\end{eqnarray} where $q=-p_{n+1}-p_{n+2}$. Let us verify this result by the BCFW recursion relation. Assuming eqn. (\ref{formfactor2I}) is valid for $A_{n-1;2}$ and taking the $\spab{g_{n-1}^+|g_{n}^+}$-shifting, we get two contributing terms\footnote{We have assumed $i\neq 1,n-2$, which can always be arranged by cyclic invariance of the external legs.} in the BCFW expansion.
The first term is \begin{eqnarray} A_{n-1;2}(g_2^+,\ldots, g_{i-1}^+,g_i^-,g_{i+1}^+,\ldots, g_{\widehat{n-1}}^+,g_{\widehat{P}_{1n}}^+;g_{n+1}^-,g_{n+2}^+){1\over P^2_{1n}}A_{3}(g_{-\widehat{P}_{1n}}^-,g_{\widehat{n}}^+,g_1^+)~,~~~\end{eqnarray} which vanishes, since the on-shell condition of the propagator, $\widehat{P}_{1n}^2=\spaa{1~n}\spbb{\widehat{n}~1}=0$, implies $A_{3}(g^-_{-\widehat{P}_{1n}},g_{\widehat{n}}^+,g_1^+)\sim \spbb{\widehat{n}~1}^3\to 0$. The other term is \begin{eqnarray} &&A_3(g_{n-2}^+,g_{\widehat{n-1}}^+,g_{\widehat{P}}^-){1\over P_{n-2,n-1}^2}A_{n-1;2}(g^+_{-\widehat{P}},g_{\widehat{n}}^+,g_1^+,\ldots,g_{i-1}^+,g_i^-,g_{i+1}^+,\ldots,g_{n-3}^+;g_{n+1}^-,g_{n+2}^+)~.~~~\nonumber\end{eqnarray} After inserting the explicit expressions for $A_{3}$ and $A_{n-1;2}$, we arrive at the result (\ref{formfactor2I}). The leading term in $z$ of the boundary operator under the $\spab{g_{n+1}^-|g_{n+2}^+}$-shifting is of order $O(z^4)$. Instead, we take the $\spab{g_{n+2}^+|g_{n+1}^-}$-shifting, under which the boundary operator is of order $O(z^0)$.
After considering LSZ reduction, we have \begin{eqnarray} \mathcal{O}^{\spab{g_{n+2}^+|g_{n+1}^-}}=\widetilde{\lambda}_{n+2,\dot{\alpha}}\widetilde{\lambda}_{n+2,\dot{\beta}}\lambda_{n+1,\alpha}\lambda_{n+1,\beta}\mathop{\rm Tr}(F^{\alpha\beta}\bar{F}^{\dot{\alpha}\dot{\beta}})~.~~~\end{eqnarray} Hence, by picking up the boundary contribution of the amplitude $A_{n;2}$ under the $\spab{g_{n+2}^+|g_{n+1}^-}$-shifting and subtracting the factor $\widetilde{\lambda}_{n+2,\dot{\alpha}}\widetilde{\lambda}_{n+2,\dot{\beta}}\lambda_{n+1,\alpha}\lambda_{n+1,\beta}$, we get the form factor \begin{eqnarray} \boxed{\mathcal{F}_{\mathcal{O}_{\tiny \mbox{I}}^{[2]},n}^{\dot{\alpha}\dot{\beta}~\alpha\beta}(\{g^+\},g_i^-;q)=-{(\lambda_{i\gamma_1}q^{\gamma_1\dot{\alpha}})(\lambda_{i\gamma_2}q^{\gamma_2\dot{\beta}})\lambda^{\alpha}_{i}\lambda^{\beta}_{i}\over\spaa{1~2}\spaa{2~3}\cdots\spaa{n~1}}~.~~~}\end{eqnarray} \section{Summary and discussion} \label{secConclusion} The boundary operator was originally introduced in \cite{Jin:2015pua} as a formal technique for studying the boundary contribution of an amplitude in the BCFW recursion relation. It defines a form factor, but in practice this off-shell quantity is difficult to compute. In this paper, we take the reverse route and study the form factor through the boundary contribution of the amplitude of a suitably chosen theory. We show that, with an appropriate construction of the Lagrangian, it is possible to generate boundary operators which are identical (or proportional) to the given operators of interest. This means that the form factor of a given operator can be extracted from the boundary contribution of the corresponding amplitude defined by that Lagrangian. We demonstrate this procedure for a class of composite operators by computing amplitudes with double trace structure and reading out the form factors from the corresponding boundary contributions. The computation of a form factor thus becomes a problem of computing a scattering amplitude.
We have considered a class of composite operators, namely traces of products of two component fields of $\mathcal{N}=4$ SYM, where the sum of the spins of the two fields is no larger than two. In fact, the construction of the Lagrangian is no different for operators with length (the number of fields inside the trace) larger than two, provided the sum of their spins is no larger than two. This is because we can always multiply them with a length-two trace term to form a Lorentz-invariant Lagrangian term, and deform the two fields in the extra trace term to produce the required boundary operators. However, if the operator has spin larger than two, then in order to form a Lorentz-invariant Lagrangian term, the length of the extra trace term must be larger than two. Deforming two fields in the extra trace term is then not sufficient to produce the desired boundary operators, and a multi-step deformation is needed. It would be interesting to investigate how this multi-step deformation works out. It would also be interesting to find out how to apply this construction to other kinds of operators, such as the stress-tensor multiplet, or to amplitudes with off-shell currents. Note that all the discussions in this paper are at tree level. While it is argued in \cite{Jin:2015pua} that the boundary operator generalizes to loop level, since the OPE can be defined there, it remains to be seen whether a similar connection between form factors and amplitudes also exists at loop level. For this purpose, it would be better to study the loop corrections to the boundary operators, which is under investigation. \acknowledgments This work is supported by the Qiu-Shi Fund and the National Natural Science Foundation of China (Chinese NSF) with Grant No.~11135006, No.~11125523, No.~11575156. RH would also like to acknowledge the support from the Chinese Postdoctoral Administrative Committee.
\section{Introduction} \label{sec:intro} Collisions between similar-size planetary bodies (``giant impacts'') dominate the final stage of planet formation \citep{1985ScienceWetherill,2010ChEGAsphaug}. These events generally result in the formation of transient magma oceans on the resulting bodies \citep{1993TonksMelosh,2016PEPSdeVries}. During the last decade, a series of studies combined core-mantle differentiation with accretion modeling to put constraints on how terrestrial planets' cores formed in the solar system and showed that core formation of terrestrial planets does not occur in a single stage, but rather that it is the result of a multistage process, i.e., a series of metal-silicate equilibrations \citep[e.g.,][]{2011Rubie,Rubie2015,2015IcarDwyer,2015IcarBonsor,Carter2015apj,2016Rubie,2017EPSLFischer,2019EPSLZube}. In particular, the core formation model from \citet{Rubie2015} uses rigorous chemical mass balance with metal-silicate element partitioning data and requires assumptions regarding the bulk compositions of all starting embryos and planetesimals as a function of heliocentric distance. The differentiation of terrestrial planets is modeled as the separation of the iron-rich metal from silicate material using metal-silicate partitioning of elements to determine the evolving chemistry of the two reservoirs. New insights into terrestrial planet formation have been enabled by this equilibration model. For example, \citet{Rubie2015} demonstrated that Earth likely accreted from heterogeneous reservoirs of early solar system materials, \citet{2016Rubie} demonstrated that iron sulfide segregation was responsible for stripping the highly siderophile elements from Earth’s mantle, and \citet{2017JacobsonEPSL} proposed that Venus does not have an active planetary dynamo because it never experienced late Moon-forming giant impacts. 
These previous studies of planet differentiation interpreted the results of \textit{N}-body simulations of terrestrial planet formation where collisions were treated as perfectly inelastic, which computer models of giant impacts have shown is an oversimplification of the complex collision process \citep[e.g.,][]{2004ApJAgnor,2012ApJ...751...32S}. Perfectly inelastic collisions, or ``perfect merging'', assume that the projectile mass ($M_\mathrm{P}$) merges with the target mass ($M_\mathrm{T}$) to form a body with mass $M_\mathrm{T}+ M_\mathrm{P}$. However, in nearly all giant impacts, escaping debris is produced, and the projectile's core does not simply descend through the magma ocean and merge with the target's metal core. Instead, half of all collisions are `hit-and-run', where the projectile escapes accretion \citep[][]{2004ApJAgnor,2010ApJKokubo} and may never re-impact the target again \citep{2013IcarusChambers,EmsenhuberApj2020}. In these events, partial accretion may occur between the metal and silicate reservoirs of the target body and the projectile (or `runner'). To accurately model the geochemical evolution of the mantles and cores of growing planetary bodies, it is thus necessary to account for the range of accretionary (or non-accretionary) outcomes of giant impacts. \subsection{Beyond perfect merging} High-resolution hydrocode computer simulations of collisions provide a description of the outcomes of giant impacts, which can then be incorporated into planet formation models to produce higher-fidelity predictions. But each giant impact simulation requires a long computational time to complete (on the order of hours to days depending on the resolution and computing resources).
Since a large number of collisions may occur during late-stage terrestrial planet formation \citep[up to order of \num{e3}, e.g.,][]{EmsenhuberApj2020}, it is impractical to model each impact ``on-the-fly'' by running a full hydrocode simulation at a resolution that is sufficient to make meaningful predictions. Several previous studies focused on overcoming both the assumption of perfect merging and the aforementioned computational bottleneck. These works employed various techniques to resolve different aspects of giant impacts. Commonly, scaling laws and other algebraic relationships \citep[e.g.][]{1986MmSAI..57...65H,HOUSEN1990226,HOLSAPPLE19941067,2010ApJKokubo,2012ApJLeinhardt,2017IcarusGenda} are utilized during an \textit{N}-body (orbital dynamical) planet formation simulation to predict the collision outcome for the masses of post-impact remnants \citep[e.g.][]{2013IcarusChambers,2016ApJQuintana,2019AJClementA}. The post-impact information is then fed back into the \textit{N}-body code for further dynamical evolution. Other studies `handed off' each collision scenario to a hydrodynamic simulation in order to model the exact impact scenario explicitly, and fed post-impact information back to the \textit{N}-body code \citep{2017IcarusGenda,2020A&ABurger}. The latter methodology is the most rigorous, but also the most computationally demanding, as it requires running a hydrodynamic calculation for every one of the hundreds of collisions in an \textit{N}-body simulation, each of which requires days of computer time when using modest resolution. Alternatively, \citet[termed \citetalias{2019ApJCambioni}{} hereafter]{2019ApJCambioni} proposed a fully data-driven approach, in which machine-learning algorithms were trained on a data set of pre-existing hydrocode simulations \citep{2011PhDReufer,2020ApJGabriel}.
The machine-learned functions (``surrogate models'') predict the outcome of a collision within a known level of accuracy with respect to the hydrocode simulations in an independent testing set. This process is fully data-driven and does not introduce model assumptions in the fitting, which is in contrast to scaling laws composed of a set of algebraic functions based on physical arguments \citep[e.g.,][]{2012ApJLeinhardt,2020ApJGabriel}. The surrogate models are fast predictors, and \citet[][termed \citetalias{EmsenhuberApj2020}~hereafter]{EmsenhuberApj2020} implemented them in a code library named \texttt{collresolve}\footnote{\url{https://github.com/aemsenhuber/collresolve}} \citep{2019SoftwareEmsenhuberCambioni} to realistically treat collisions on-the-fly during terrestrial planet formation studies. When \texttt{collresolve} is used to treat collisions in \textit{N}-body studies, the final planets feature a wider range of masses and degrees of mixing across feeding zones in the disk compared with those predicted by assuming perfect merging. Although \citetalias{EmsenhuberApj2020}~ignored debris re-accretion (and we use these dynamical simulations for the study herein), their results suggest that composition diversity increases in collision remnants. This is something that cannot be predicted by models that assume perfect merging. \subsection{This work} In this paper, we compare the collision outcome obtained assuming perfect merging with that predicted by the more realistic machine-learned giant impact model of \citetalias{2019ApJCambioni}~and \citetalias{EmsenhuberApj2020}{}. In the former case, debris is not produced by definition and in the latter case, debris is produced but not re-accreted.
In this respect, our goal is not to reproduce the solar system terrestrial planets, but to investigate whether or not the two collision models produce different predictions in terms of terrestrial planets' core-mantle differentiation at the end of the planetary system's dynamical evolution. \citetalias{2019ApJCambioni}~and~\citetalias{EmsenhuberApj2020}~developed models for the mass and orbits of the largest post-impact remnants; here we go a step further and develop a model for the preferential erosion of mantle silicates and core materials. To do so, we train two new neural networks to predict the core mass fraction of the resulting bodies of a giant impact. We describe the data-driven model of inefficient accretion by \citetalias{2019ApJCambioni}~and~\citetalias{EmsenhuberApj2020}{} in Section \ref{sec:ML} and its implementation in the core-mantle differentiation model by \citet{Rubie2015} in Section \ref{sec:equi_method}. We compare the perfect merging and inefficient-accretion models in two ways: (1) by studying the case of a single collision between two planetary embryos (Section \ref{sec:map_elements}); and (2) by interpreting the effect of multiple giant impacts in the \textit{N}-body simulations of accretion presented in \citetalias{EmsenhuberApj2020}~(Section \ref{sec:summary_N_body}). During the accretion of planets through giant impacts between planetary embryos, we focus on the evolution of those variables that control planetary differentiation: mass, core mass fraction, as well as metal-silicate equilibration pressure, temperature, and oxygen fugacity. Other factors that may alter composition and thermodynamical evolution indirectly, e.g., atmospheric escape and radiative effects, are not covered in these models. 
\section{Inefficient-accretion model} \label{sec:ML} The data-driven inefficient-accretion model by \citetalias{2019ApJCambioni}{} and \citetalias{EmsenhuberApj2020}{} consists of applying machine learning to the prediction of giant impacts' outcomes based on the pre-existing set of collision simulations described below in Section \ref{sec:SPH_data}. By training on a large data set of simulations of giant impacts, this approach makes it possible to produce response functions (surrogate models) that accurately and quickly predict the key outcomes of giant impacts needed to introduce realistic collision outcomes ``on-the-fly'' in an \textit{N}-body code. \subsection{Data set of giant impact simulations} \label{sec:SPH_data} The data set used in \citetalias{2019ApJCambioni}~and~\citetalias{EmsenhuberApj2020}~and in this work is composed of nearly 800 simulations of planetary collisions performed using the Smoothed-Particle Hydrodynamics (SPH) technique \citep[see, e.g.,][for reviews]{1992ARA&AMonaghan,2009NARRosswog} obtained by \citet{2011PhDReufer} and further described in \citet{2020ApJGabriel}. They have a resolution of $\sim$ \num{2e5} SPH particles. All bodies are differentiated with a bulk composition of 70 wt\% silicate and 30 wt\% metallic iron, where the equation of state is ANEOS for iron \citep{ANEOS} and M-ANEOS for SiO\textsubscript{2} \citep{2007M&PSMelosh}.
The data set spans target masses $M_\mathrm{T}$ from \num{e-2} to \SI{1}{\mearth}, projectile-to-target mass ratios $\gamma=M_\mathrm{P}/M_\mathrm{T}$ between \num{0.2} and \num{0.7}, all impact angles $\theta_{coll}$, and impact velocities $v_\mathrm{coll}$ between 1 and 4 times the mutual escape velocity $v_\mathrm{esc}$, where \begin{equation} \label{V_esc} v_\mathrm{esc}=\sqrt{\frac{2G(M_\mathrm{T}+M_\mathrm{P})}{R_\mathrm{T}+R_\mathrm{P}}}, \end{equation} \noindent which represents the entire range of expected impact velocities between major bodies from \textit{N}-body models \citep[e.g.][]{2013IcarusChambers,2016ApJQuintana}. In Equation \ref{V_esc}, $G$ is the gravitational constant, and $R_\mathrm{T}$ and $R_\mathrm{P}$ are the bodies' radii. We refer to \citetalias{2019ApJCambioni}, \citetalias{EmsenhuberApj2020}, and \citet{2020ApJGabriel} for more information about the data set. An excerpt of the data set is reported in Table \ref{tab:data}. The data set is provided in its entirety in the machine-readable format. \begin{table*} \centering \caption{Excerpt of the data from the collision simulation analysis.} \begin{tabular}{cccc|c|cccc} \hline Target mass & Mass ratio & Angle & Velocity & Type & Acc. L. R. & Acc. S. R. & CMF L. R. & CMF S. R. \\ $M_\mathrm{T}\ [\si{\mearth}]$ & $\gamma=M_\mathrm{P}/M_\mathrm{T}$ & $\theta_{coll}$ [deg] & $v_\mathrm{coll}/v_\mathrm{esc}$ & & $\xi_\mathrm{L}$ & $\xi_\mathrm{S}$ & $Z_\mathrm{L}$ & $Z_\mathrm{S}$ \\ \hline \hline \num{1} & 0.70 & 52.5 & 1.15 & 1 & 0.02 & -0.03 & 0.30 & 0.31 \\ \num{1} & 0.70 & 22.5 & 3.00 & 1 & -0.58 & -0.62 & 0.50 & 0.62 \\ \num{1} & 0.70 & 45.0 & 1.30 & 1 & 0.02 & -0.04 & 0.30 & 0.31\\ \num{e-1} & 0.70 & 15.0 & 1.40 & 0 & 0.90 & -1.00 & 0.31 & ... \\ \num{e-1} & 0.20 & 15.0 & 3.50 & -1 & -1.51 & -1.00 & 0.43 & ... \\ \num{e-1} & 0.35 & 15.0 & 3.50 & -1 & -1.25 & -1.00 & 0.50 & ... 
\\ \num{e-2} & 0.70 & 60.0 & 1.70 & 1 & 0.00 & -0.02 & 0.30 & 0.30 \\ \hline \end{tabular} \tablecomments{This table is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content.} \tablecomments{The elements in the first four columns are the predictors (collision properties): $M_T \in [\num{e-2},~1]~\si{\mearth}$; $\gamma=M_\mathrm{P}/M_\mathrm{T} \in [0.2,~0.7]$; $\theta_{coll} \in [0,~\ang{90}]$; $v_{coll}/v_{esc} \in [1,~4]$. The elements in the other columns are the responses describing the collision outcome. The fifth column (Type) is the automated classification (for the classifier), with hit-and-run collisions coded as a 1, accretion as a 0, and erosion as a -1. The last four columns are the responses for the neural networks: $\xi_\mathrm{L}$ is computed according to Equation~(\ref{eq:acclr}), $\xi_\mathrm{S}$ to Equation~(\ref{eq:accsr}), and the post-collision Core Mass Fractions (CMF) of the largest remnant and second remnant, $Z_\mathrm{L}$ and $Z_\mathrm{S}$ respectively, are defined in Section \ref{sec:regressor_CMF}.} \label{tab:data} \end{table*} Designing the surrogate models described in the following Sections requires running hundreds of SPH simulations and training the machine-learning functions. The computational cost of this procedure, however, is low when compared against the computational resources necessary to solve each collision in an \textit{N}-body study with a full SPH simulation \added{at the same particle resolution as the simulations used in \citetalias{2019ApJCambioni}{} and \citetalias{EmsenhuberApj2020}} instead of using the surrogate models. Each giant impact simulation requires a long computational time to complete, on the order of hours to days depending on the resolution and computing resources, while the surrogate models, once constructed, provide an answer in a fraction of a second (\citetalias{2019ApJCambioni};~\citetalias{EmsenhuberApj2020}).
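As a quick numerical illustration of the mutual escape velocity of Equation (\ref{V_esc}), the following sketch evaluates it in SI units; the masses and radii used below are hypothetical Earth-like values, not entries of the data set:

```python
import math

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
M_EARTH = 5.972e24  # Earth mass [kg]
R_EARTH = 6.371e6   # Earth radius [m]

def mutual_escape_velocity(m_target, m_projectile, r_target, r_projectile):
    """Mutual escape velocity of Equation (1), in m/s."""
    return math.sqrt(2.0 * G * (m_target + m_projectile)
                     / (r_target + r_projectile))

# Illustrative pairing only: an Earth-mass target hit by a
# gamma = 0.7 projectile with a somewhat smaller radius.
v = mutual_escape_velocity(M_EARTH, 0.7 * M_EARTH, R_EARTH, 0.9 * R_EARTH)
# v is roughly 10 km/s; the data set then samples v_coll between 1 and 4
# times this value.
```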
\subsection{Surrogate model of accretion efficiencies} \label{sec:regressor_accs} In order to assess the accretion efficiency of the target across simulations of varying masses of targets and projectiles, \citetalias{EmsenhuberApj2020}~normalize the change in mass of the largest remnant, assumed to be the post-impact target, by the projectile mass \citep{2010ChEGAsphaug}: \begin{equation} \xi_\mathrm{L}=\frac{M_\mathrm{L}-M_\mathrm{T}}{M_\mathrm{P}} < 1, \label{eq:acclr} \end{equation} \noindent where $M_\mathrm{L}$ is the mass of the largest single gravitationally bound remnant. Accretion onto the target causes $\xi_\mathrm{L}>0$, while negative values indicate erosion. This accretion efficiency is heavily dependent on the impact velocity relative to the mutual escape velocity $v_\mathrm{esc}$, especially in the critical range $\sim1$--$1.4~v_\mathrm{esc}$, which encompasses $\sim$90\% of the probability distribution of impact velocities between major remnants \added{in the $N$-body simulations by \citet{2013IcarusChambers} which include collision fragmentation \citep{2020ApJGabriel}.} Similarly, for the second largest remnant with mass $M_\mathrm{S}$, assuming it is the post-impact projectile, i.e., the runner in a hit-and-run collision \citep{2006NatureAsphaug}, \citetalias{EmsenhuberApj2020}~define a non-dimensional accretion efficiency again normalized by the projectile mass: \begin{equation} \xi_\mathrm{S}=\frac{M_\mathrm{S} - M_\mathrm{P}}{M_\mathrm{P}}. \label{eq:accsr} \end{equation} The value $\xi_\mathrm{S}$ is almost always negative, as mass transfer from the projectile onto the target also occurs in the case of projectile survival \citep{2006NatureAsphaug,2019ApJEmsenhuberA}, and loss to debris can occur. The mass of the debris $M_\mathrm{D}$ is computed from mass conservation. If the debris creation efficiency is defined as $\xi_\mathrm{D}=M_\mathrm{D}/M_\mathrm{P}$, then $\xi_\mathrm{L}+\xi_\mathrm{S}+\xi_\mathrm{D}=0$.
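The bookkeeping of Equations (\ref{eq:acclr}) and (\ref{eq:accsr}) and the mass-conservation relation $\xi_\mathrm{L}+\xi_\mathrm{S}+\xi_\mathrm{D}=0$ amount to the following sketch (the remnant masses used in the example are hypothetical, in units of the target mass):

```python
def accretion_efficiencies(m_target, m_projectile, m_largest, m_second):
    """Accretion efficiencies of Equations (2) and (3), both normalized
    by the projectile mass, plus the debris creation efficiency obtained
    from mass conservation."""
    xi_l = (m_largest - m_target) / m_projectile      # Equation (2)
    xi_s = (m_second - m_projectile) / m_projectile   # Equation (3)
    xi_d = -(xi_l + xi_s)  # so that xi_l + xi_s + xi_d = 0
    return xi_l, xi_s, xi_d

# Hypothetical hit-and-run: the target is slightly eroded and the
# runner loses some of its mass to debris.
xi_l, xi_s, xi_d = accretion_efficiencies(1.0, 0.5, 0.98, 0.45)
# xi_l = -0.04 (erosion of the target), xi_s = -0.10, xi_d = 0.14
```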
In \citetalias{EmsenhuberApj2020}, the quantities of Equations \ref{eq:acclr} and \ref{eq:accsr} were used to train a surrogate model of accretion efficiencies. This is a neural network, that is, a parametric function trained to mimic the ``parent'' SPH calculation as an input-output function, in order to predict real-variable outputs given the four impact parameters (predictors): mass of the target, projectile-to-target mass ratio, impact angle, and impact velocity. The data set entries are of the type: \begin{equation} \{(M_\mathrm{T},~\gamma,~\theta_\mathrm{coll},~v_\mathrm{coll}/v_\mathrm{esc}) ; (\xi_\mathrm{L}, \xi_\mathrm{S})\}. \end{equation} The surrogate model is assessed in its training success and predictive capabilities by means of the mean squared error \begin{equation} \label{eq:MSE} \frac{1}{N} \sum_{i=1}^N (Q_\mathrm{NN}-Q_\mathrm{SPH})^2, \end{equation} \noindent and correlation coefficient \begin{equation} \label{eq:R-value} \frac{\mathrm{cov}(Q_\mathrm{NN},Q_\mathrm{SPH})}{\sigma_\mathrm{NN} \sigma_\mathrm{SPH}} \end{equation} for each quantity $Q$, where $Q_\mathrm{NN}$ and $Q_\mathrm{SPH}$ indicate the predictions of the neural networks and the corresponding outcomes of the SPH simulations, with standard deviations $\sigma_\mathrm{NN}$ and $\sigma_\mathrm{SPH}$, respectively. The goal is to achieve, on a testing data set comprising data that were not used for training, a mean squared error as close to zero and a correlation coefficient as close to 100\% as possible. The surrogate model of accretion efficiency is able to predict the mass of the largest and second largest remnants with a mean squared error at testing equal to 0.03 and a correlation coefficient greater than 96\%. Importantly, although the surrogate model has a high global accuracy, inaccurate predictions can still occur locally in the parameter space (\citetalias{2019ApJCambioni}).
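The two evaluation metrics of Equations (\ref{eq:MSE}) and (\ref{eq:R-value}) can be sketched directly in NumPy (variable names are ours, not from the paper's code):

```python
import numpy as np

def mse(q_nn, q_sph):
    """Mean squared error, Equation (5)."""
    q_nn, q_sph = np.asarray(q_nn, float), np.asarray(q_sph, float)
    return float(np.mean((q_nn - q_sph) ** 2))

def correlation(q_nn, q_sph):
    """Correlation coefficient, Equation (6): population covariance
    divided by the product of the standard deviations."""
    q_nn, q_sph = np.asarray(q_nn, float), np.asarray(q_sph, float)
    cov = np.mean((q_nn - q_nn.mean()) * (q_sph - q_sph.mean()))
    return float(cov / (q_nn.std() * q_sph.std()))

# A perfect surrogate would give mse = 0 and correlation = 1 (100%).
targets = [0.1, 0.2, 0.4, 0.3]
predictions = [0.1, 0.25, 0.35, 0.3]
err = mse(predictions, targets)
```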
\added{These inaccurate predictions, however, are not systematic; the distributions of the residuals $\Delta \xi$ between accretion efficiency predictions $Q_{NN}$ and target values $Q_{SPH}$ for the largest remnant and second remnant are well fit by Gaussian distributions $\mathcal{N} (\mu, \sigma) = \mathcal{N} (0.0,0.1)$ and $\mathcal{N} (0.02,0.09)$, respectively, where $\mu$ is the mean of the residuals and $\sigma$ is the standard deviation of the residuals, so that $\mathcal{N} (0, 0)$ is the distribution associated with a noiseless surrogate model. The noise level is comparable to the numerical precision of the SPH simulations ($\sim$ 0.1--0.15 in units of accretion efficiency). High-inaccuracy predictions (i.e., $|\Delta \xi| > 0.5$) account for just 3.7\% and 2.6\% of the overall set for the largest and second remnants, respectively, and tend to cluster near the boundaries between different regimes, as previously discussed in \citetalias{2019ApJCambioni}{}.} \subsection{Classifier of collision types} \label{sec:class_reg} In \citetalias{EmsenhuberApj2020}, the data set of SPH simulations of Section \ref{sec:SPH_data} was also used to train a classifier which predicts the type of collision (classes, or responses) based on the following mass criterion: accretion ($M_\mathrm{L}>M_\mathrm{T}$ and $M_\mathrm{S}<0.1 M_\mathrm{P}$), erosion ($M_\mathrm{L}<M_\mathrm{T}$ and $M_\mathrm{S}<0.1 M_\mathrm{P}$), and hit-and-run collision ($M_\mathrm{S}>0.1 M_\mathrm{P}$). The data set entries are of the type: \begin{equation} \{(M_\mathrm{T},~\gamma,~\theta_\mathrm{coll},~v_\mathrm{coll}/v_\mathrm{esc}) ; \mathrm{type}\} \end{equation} \added{As opposed to the prediction by the surrogate model of accretion efficiency described in Section \ref{sec:regressor_accs}, the prediction of the classifier is categorical in type and its accuracy is computed as the mean value of the correct classifications over the whole population at testing. 
This accuracy is equal to 95\% globally, 83.3\% for the ``erosion'' type, 91.7\% for the ``accretion'' type, and 98.0\% for the hit-and-run collision type.} \subsection{Surrogate models of core mass fraction} \label{sec:regressor_CMF} \begin{figure*} \centering \includegraphics[width=1\linewidth]{Updated_Fig1.png} \caption{As the training proceeds, the mean squared errors (Equation \ref{eq:MSE}) of the surrogate models of core mass fraction on the training (blue curves), validation (green curves) and testing (red curves) data sets decrease at each training epoch until convergence is reached. The performances of the surrogate models are quantified in terms of mean squared error (Equation \ref{eq:MSE}) and correlation coefficient (Equation \ref{eq:R-value}) at testing: \{0.02\%; $R>96\%$\} and \{0.08\%; $R>93\%$\} for the largest and second largest remnants, respectively. The two inset plots show the correlation between predictions by the neural networks and the corresponding SPH values in the testing data sets (open dots) and the 1:1 correlation line.} \label{fig:reg_perf} \end{figure*} We use the same SPH data set as \citetalias{2019ApJCambioni}~and \citetalias{EmsenhuberApj2020}{}~to train two new surrogate models to predict the core mass fractions of the largest and second largest remnants. Each remnant's core mass fraction is obtained by accounting for all material in the SPH simulations. This includes \emph{all} gravitationally-bound material, such as potential silicate vapour resulting from energetic collisions involving larger bodies or higher impact velocities. The core mass fractions of the target and projectile are termed $Z_\mathrm{T}$ and $Z_\mathrm{P}$, respectively. Their initial values are always equal to 30\% \added{in the dataset of SPH simulations used here}.
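For reference, the mass criterion underlying the classifier of Section \ref{sec:class_reg} can be sketched as a simple rule (illustrative only; the classifier actually used is a trained neural network, and the integer codes follow the ``Type'' column of Table \ref{tab:data}):

```python
def collision_type(m_target, m_projectile, m_largest, m_second):
    """Mass criterion of Section 2.3, with the Table 1 coding:
    hit-and-run = 1, accretion = 0, erosion = -1."""
    if m_second > 0.1 * m_projectile:
        return 1    # hit-and-run: the runner survives
    if m_largest > m_target:
        return 0    # accretion: the target gains mass
    return -1       # erosion: the target loses mass
```

Applying this rule to the hypothetical remnant masses $(M_\mathrm{T}, M_\mathrm{P}, M_\mathrm{L}, M_\mathrm{S}) = (1.0, 0.5, 0.98, 0.45)$ labels the event a hit-and-run, since the second remnant keeps well over 10\% of the projectile mass.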
For the first new surrogate model, we train, validate, and test a neural network using a data set with entries: \begin{equation} \{(M_\mathrm{T},~\gamma,~\theta_\mathrm{coll},~v_\mathrm{coll}/v_\mathrm{esc}) ; Z_\mathrm{L}\}, \end{equation} where $Z_\mathrm{L}$ is the largest remnant's core mass fraction. In the hit-and-run regime only, we train a second neural network to predict the core mass fraction of the second largest remnant. For this surrogate model, the data set has entries: \begin{equation} \{(M_\mathrm{T},~\gamma,~\theta_\mathrm{coll},~v_\mathrm{coll}/v_\mathrm{esc}) ; Z_\mathrm{S}\}, \end{equation} \noindent where $Z_\mathrm{S}$ is the post-collision core mass fraction of the second largest remnant. Following the approach described in \citetalias{2019ApJCambioni}{} and \citetalias{EmsenhuberApj2020}{}, the training of the networks is performed on 70\% of the overall data set. The rest of the data are split between a validation set (15\%) and a testing set (15\%) via random sampling without replacement. \added{Each neural network architecture consists of an input layer with 4 nodes (as many as the impact properties), one or more hidden layers, and an output node. The number of hidden layers, the number of neurons in each hidden layer, and the neurons' activation functions (that is, the functions that define the output of the neurons given a set of inputs) are among the ``hyperparameters'' of the network. The optimal hyperparameters are not learned during training, but found through minimization of the mean squared error on the validation set (Equation \ref{eq:MSE}). 
Additional hyperparameters include the choice of the training algorithm, the intensity of the regularization of the training cost function, which is aimed at avoiding overly ``complex'' models \citep[e.g., ][]{girosi1995regularization}, and the strategy of data normalization before training.} The optimal neural network architectures have 10 neurons in the hidden layer with a hyperbolic tangent sigmoid activation function (Equation 13 in \citetalias{2019ApJCambioni}). The inputs and targets are normalized in the range [-1, 1]. The regularization process \citep[e.g., ][]{girosi1995regularization} has a strength equal to $5.48\times10^{-6}$ and $3.80\times10^{-5}$ for the $Z_\mathrm{L}$ and $Z_\mathrm{S}$ networks, respectively. Each network is trained using the Levenberg-Marquardt algorithm described in \citet{demuth2014neural}. The learning dynamics (i.e., the evolution of the mean squared error for training, validation, and testing at different epochs of the training procedure) are plotted in Figure~\ref{fig:reg_perf} for the largest and second largest remnants (left and right panels, respectively). At every training epoch, the weights of the networks are updated such that the mean squared error on the training data set gets progressively smaller. The mean squared errors converge in about 6 epochs for the surrogate model of the largest remnant and 4 for that of the second remnant. Once trained, the predictive performance of the networks is quantified by the mean squared error (red curves in Figure \ref{fig:reg_perf}, Equation \ref{eq:MSE}) and the correlation coefficient (box plots, Equation \ref{eq:R-value}) on the testing dataset. The mean squared error at convergence is equal to about $2\times10^{-4}$ with a correlation coefficient above 96\% for the largest remnant, and $8\times10^{-4}$ with a correlation coefficient above 93\% for the second largest remnant.
\added{In Figure \ref{fig:reg_perf}, the differences between training, validation, and testing mean squared errors at convergence are smaller than 0.01 in units of $M_{core}/M_{planet}$. The mean squared errors on the testing set are smaller than or equal to those on the training set, indicating that the trained algorithms generalize well to new cases not seen during training. The validation mean squared errors are also smaller than the training errors, which indicates that the trained algorithms are not overfitting the training datasets. These (small) differences between mean squared errors may also reflect differences in variance of the datasets, which we attempted to mitigate by populating the sets using random sampling without replacement.} \added{Although the mean squared errors are globally low, inaccurate predictions may still occur locally in the parameter space. The distributions of the residuals in core mass fraction between $Q_{NN}$ and target values $Q_{SPH}$ for the largest remnant and second remnant, however, are well fit by Normal distributions $\mathcal{N} (0\%,2\%)$ and $\mathcal{N} (0\%,1\%)$, respectively. Despite the paucity of data at large core mass fractions (i.e., $Z>40\%$), the surrogate model is found to be accurate in this regime too: the residuals are well fit with $\mathcal{N} (3\%, 4\%)$ and $\mathcal{N} (-3\%, 6\%)$ for the largest and second largest remnants, respectively. This noise level is comparable to the numerical uncertainty of the SPH simulations in predicting the core mass fraction of terrestrial planets, which we estimate to be 2--5\% for accretionary and erosive events, and on the order of 10\% for erosive hit-and-run events. Predictions with residual $> 10\%$ account for just 3.5\% and 7.8\% of the overall set for the largest and second remnants, respectively. 
As expected, we find that these inaccurate predictions tend to cluster near the boundary between hit-and-run and erosive collisions \citep[i.e., disruptive hit-and-runs,][]{2014NatGeoAsphaug}, in which substantial debris is produced in the erosion of both the target's and the projectile's mantles, and where the SPH simulator is also expected to be the most noisy.} \subsection{Accretion efficiencies of the core and the mantle} \label{sec:acc_effs_cm} In order to track how the core mass fraction of a growing planet evolves through the giant impact phase of accretion, we split the change in mass from the target to the largest remnant into a core and a mantle component: \begin{equation} \label{eq:change_L} M_\mathrm{L} - M_\mathrm{T} = \Delta M_\mathrm{L}^\mathrm{c} + \Delta M_\mathrm{L}^\mathrm{m}, \end{equation} \noindent where $\Delta M_\mathrm{L}^\mathrm{c} = M_\mathrm{L}^\mathrm{c} - M_\mathrm{T}^\mathrm{c}$ and $\Delta M_\mathrm{L}^\mathrm{m} = M_\mathrm{L}^\mathrm{m} - M_\mathrm{T}^\mathrm{m}$ are the changes in mass of the core and mantle of the largest remnant from the target, as indicated by the superscripts ``c'' and ``m'', respectively. Similarly, in the hit-and-run collision regime, we do the same for the change in mass from the projectile to the second largest remnant: \begin{equation} \label{eq:change_S} M_\mathrm{S} - M_\mathrm{P} = \Delta M_\mathrm{S}^\mathrm{c} + \Delta M_\mathrm{S}^\mathrm{m}, \end{equation} where $\Delta M_\mathrm{S}^\mathrm{c} = M_\mathrm{S}^\mathrm{c} - M_\mathrm{P}^\mathrm{c}$ and $\Delta M_\mathrm{S}^\mathrm{m} = M_\mathrm{S}^\mathrm{m} - M_\mathrm{P}^\mathrm{m}$ are defined analogously to the largest remnant case. The remnant bodies after a giant impact may have core mass fractions that differ from those of the pre-impact bodies (i.e., target and projectile) or from each other.
For instance, a projectile may erode mantle material from the target in a hit-and-run collision, but during the same impact the target may accrete core material from the projectile. In order to quantify and study these possibilities, we define distinct accretion efficiencies for each remnant's core ($\xi_\mathrm{L}^\mathrm{c}$ and $\xi_\mathrm{S}^\mathrm{c}$) and mantle ($\xi_\mathrm{L}^\mathrm{m}$ and $\xi_\mathrm{S}^\mathrm{m}$), so that, after dividing through by the projectile mass for normalization, the above expressions are transformed into: \begin{align} \label{eq:ML_segments} \xi_\mathrm{L} &= Z_\mathrm{P} \xi_\mathrm{L}^\mathrm{c} + (1 - Z_\mathrm{P}) \xi_\mathrm{L}^\mathrm{m}, \\ \label{eq:MS_segments} \xi_\mathrm{S} &= Z_\mathrm{P} \xi_\mathrm{S}^\mathrm{c} + (1 - Z_\mathrm{P}) \xi_\mathrm{S}^\mathrm{m}, \end{align} where we define the core and mantle component accretion efficiencies in the same manner as the overall accretion efficiencies of the remnant bodies: \begin{align} \label{eq:core_acc_L} \xi_\mathrm{L}^\mathrm{c} &= \frac{M_\mathrm{L}^\mathrm{c}-M_\mathrm{T}^\mathrm{c}}{M_\mathrm{P}^\mathrm{c}} = \frac{Z_\mathrm{L}}{Z_\mathrm{P}}\xi_\mathrm{L} + \frac{Z_\mathrm{L}-Z_\mathrm{T}}{Z_\mathrm{P}}\frac{1}{\gamma}, \\ \label{eq:mantle_acc_L} \xi_\mathrm{L}^\mathrm{m} &= \frac{M_\mathrm{L}^\mathrm{m}-M_\mathrm{T}^\mathrm{m}}{M_\mathrm{P}^\mathrm{m}} = \frac{1-Z_\mathrm{L}}{1-Z_\mathrm{P}}\xi_\mathrm{L} - \frac{Z_\mathrm{L}-Z_\mathrm{T}}{1-Z_\mathrm{P}}\frac{1}{\gamma}, \\ \label{eq:core_acc_S} \xi_\mathrm{S}^\mathrm{c} &= \frac{M_\mathrm{S}^\mathrm{c}-M_\mathrm{P}^\mathrm{c}}{M_\mathrm{P}^\mathrm{c}} = \frac{Z_\mathrm{S}}{Z_\mathrm{P}}(1+\xi_\mathrm{S}) - 1, \\ \label{eq:mantle_acc_S} \xi_\mathrm{S}^\mathrm{m} &= \frac{M_\mathrm{S}^\mathrm{m}-M_\mathrm{P}^\mathrm{m}}{M_\mathrm{P}^\mathrm{m}} = \frac{1-Z_\mathrm{S}}{1-Z_\mathrm{P}}(1+\xi_\mathrm{S}) - 1, \end{align} where on the right-hand sides of Equations \ref{eq:core_acc_L}--\ref{eq:mantle_acc_S}, we express
the core and mantle accretion efficiencies in terms found in Table~\ref{tab:data}: the initial projectile-to-target mass ratio ($\gamma$), the overall accretion efficiencies of the remnants ($\xi_\mathrm{L}$ and $\xi_\mathrm{S}$), and the core mass fractions of the final bodies ($Z_\mathrm{L}$ and $Z_\mathrm{S}$). \added{In the SPH simulations discussed in Section \ref{sec:SPH_data}, the core mass fractions of the initial bodies are always $Z_\mathrm{T}=Z_\mathrm{P}=30\%$. The approach adopted for collisions between bodies with core mass fractions different from $30\%$ --- which are expected to occur in planet formation studies --- is discussed in Section \ref{subsec:collision}.} \section{Planetary differentiation model} \label{sec:equi_method} After a giant impact, the surviving mantle of a remnant body is assumed to equilibrate with any accreted material in a magma ocean produced by the energetic impact. This equilibration establishes the composition of the cooling magma ocean and also potentially involves dense Fe-rich metallic liquids that segregate to the core due to density differences. In this Section, we describe how the inefficient-accretion model of Section \ref{sec:ML} is implemented into the planetary accretion and differentiation model published by \citet{2011Rubie,Rubie2015,2016Rubie}. We direct the reader to the manuscripts by Rubie et al.\ for a more detailed description of the metal-silicate equilibration approach itself. \subsection{Identification of silicate and metallic reservoirs} \label{subsec:identification} \added{Regardless of the impact conditions, in the perfect-merging model the metallic core of the projectile plunges into the target's magma ocean, turbulently entraining silicate liquid in a descending plume \citep{2011Deguen, 2003Rubie}.
In the inefficient-accretion model, if the collision is a hit-and-run or erosive, the metallic core of the projectile does not plunge into the target's magma ocean, but some of the collision energy may still be delivered to the core-mantle boundaries of the colliding pair, potentially inducing some mixing of the metal and silicate reservoirs.} \added{As of today, however, there are no clear recipes for quantifying the degree of mixing at the core-mantle boundary and the re-equilibration of such a mixture in the case of hit-and-run and erosive collisions. Here, we explore two end-member scenarios: (1) no mixing between the reservoirs and, hence, no silicate-metal re-equilibration; and (2) the metal and silicate reservoirs are fully mixed and the mixture re-equilibrates. It is important to note that these two end-member scenarios are both unlikely, especially the second one, and that the reality of the degree of mixing and equilibration lies in between these two end-members. In particular, previous studies do not give much support for the idea of full re-equilibration \citep[e.g.,][]{2016LPINakajima,Carter2015apj}, as this would require the delivery of a large input of energy at the core-mantle boundary. We therefore use these two end-members to bound the problem for the sake of studying the effect of switching on/off re-equilibration on planetary differentiation.} In the flowchart of Figure \ref{fig:flowchart}, we outline the steps for identifying equilibrating silicate and metal reservoirs from surviving target and accreted projectile material in the remnants' mantle. These steps are: \begin{enumerate} \item The classifier of collision type (Section \ref{sec:class_reg}) is used to determine the number of resulting bodies of a collision. If there is only a single remnant (accretion or erosion regimes), the core of the projectile may be either accreted or obliterated.
This is determined by looking at the sign of the largest remnant's core accretion efficiency $\xi_\mathrm{L}^\mathrm{c}$. \item[2a.] If $\xi_\mathrm{L}^\mathrm{c}$ is positive and the event is not a hit-and-run collision, the metallic core of the projectile plunges into the target's magma ocean, turbulently entraining silicate liquid in a descending plume \citep{2011Deguen, 2003Rubie}. The plume's silicate content increases as the plume expands with increasing depth. This determines the volume fraction of metal $\phi_\mathrm{met}$ \citep{Rubie2015}: \begin{equation} \label{eq:x_sil} \phi_\mathrm{met} = \bigg(1+\frac{\alpha z}{r_0}\bigg)^{-3} \end{equation} \noindent where $\alpha = 0.25$ \citep{2011Deguen}, $r_0$ is the initial radius of the projectile's core and $z$ is depth in the magma ocean. Equation~\ref{eq:x_sil} allows estimating the mass fraction of the embryo's mantle material that is entrained as silicate liquid, since the volume of the descending projectile core material is known. Following chemical equilibration during descent, any resulting metal is added to the proto-core and the equilibrated silicate is mixed with the fraction of the mantle that did not equilibrate to produce a compositionally homogeneous mantle \citep{Rubie2015}, under the assumption of vigorous mixing due to mantle convection. \item[2b.] If $\xi_\mathrm{L}^\mathrm{c}$ is negative, the target's core and mantle are eroded and the projectile is obliterated. For this case, \added{we explore the two assumptions of no re-equilibration and full re-equilibration between the silicate and metal reservoirs of the largest remnant.} \item[3.] For hit-and-run collisions, the projectile survives accretion and becomes the second remnant of the collision.
There may be mass transfer between the colliding bodies as predicted using the surrogate models of Section \ref{sec:regressor_accs} and Section \ref{sec:regressor_CMF}. \added{For this case, we explore the two assumptions of no re-equilibration and full re-equilibration between the silicate and metal reservoirs of the two resulting bodies.} \end{enumerate} \begin{figure} \centering \includegraphics[width=0.85\linewidth]{flowchart_v2.png} \caption{In the planetary differentiation model, we first identify the interacting reservoirs of the resulting bodies of a collision (the top three rows of this flowchart) using the surrogate model described in Section \ref{sec:ML}. The information is then passed to the metal-silicate equilibration model described in Section \ref{sec:equi_method} (bottom row).} \label{fig:flowchart} \end{figure} \subsection{Mass balance} \label{subsec:mass_balance} After each \added{accretionary giant impact, and in the case of full re-equilibration for hit-and-run and erosive collisions}, there is re-equilibration between any interacting silicate-rich and metallic phases in the remaining bodies, which ultimately determines the compositions of the mantle and core. The volume of interacting material is determined as described in Section~\ref{subsec:identification} and shown schematically in Figure~\ref{fig:flowchart}. The planetary differentiation model tracks the partitioning of elements in the post-impact magma ocean between a silicate phase, which is modeled as a silicate-rich phase mainly composed of SiO$_2$, Al$_2$O$_3$, MgO, CaO, FeO, and NiO, and a metallic phase, which is modeled as a metal reservoir mainly composed of Fe, Si, Ni, and O. Both the silicate-rich and metal phases include the minor and trace elements: Na, Co, Nb, Ta, V, Cr, Pt, Pd, Ru, Ir, W, Mo, S, C, and H.
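The reservoir-identification steps of Section \ref{subsec:identification} can be sketched as a small decision routine together with the descending-plume metal fraction of Equation \ref{eq:x_sil}; the function names and string labels are illustrative choices, not part of the model code:

```python
def metal_volume_fraction(z, r0, alpha=0.25):
    """Volume fraction of metal in the descending plume at depth z in the
    magma ocean, for a projectile core of initial radius r0 (Equation eq:x_sil)."""
    return (1.0 + alpha * z / r0) ** -3

def equilibration_branch(is_hit_and_run, xi_L_core):
    """Steps 1, 2a, 2b, and 3 of the flowchart (labels are illustrative)."""
    if is_hit_and_run:
        return "two remnants: no mixing or full re-equilibration (end-members)"
    if xi_L_core >= 0.0:
        return "projectile core plunges and equilibrates in the plume"
    return "target eroded, projectile obliterated: end-member treatment"

# At the top of the magma ocean the plume is pure metal; it dilutes with depth.
top = metal_volume_fraction(0.0, r0=1.0e5)
deep = metal_volume_fraction(3.0e5, r0=1.0e5)   # z = 3 r0
```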
The identified silicate-rich and metal phases equilibrate as described by the following mass balance equation: \begin{equation} \begin{split} \label{Eq:thermo_system} \mathrm{Silicate~Liquid~}(1) + \mathrm{Metal~Liquid~}(1) \\ \Longrightarrow \mathrm{Silicate~Liquid~}(2) + \mathrm{Metal~Liquid~}(2) \end{split} \end{equation} where the label ``1'' denotes the system before equilibration and ``2'' the system equilibrated under the new thermodynamic conditions of the resulting bodies of a collision. Thus, all equilibrating atoms are conserved and the oxygen fugacity of the reaction is set implicitly. \added{The mass balance equations for the major elements (Si, Al, Mg, Ca, Fe, Ni, and O) are iteratively solved by combining them with experimentally derived expressions for their metal-silicate partitioning behavior. We use the predictive parameterizations by \citet{2009GeCoAMann} for Si and most other siderophile elements, \citet{2008EPSLKegler} for Ni and Co, and \citet{2010Frost} for O. For a detailed description of this coupled metal-silicate partitioning and mass balance approach, see \citet{2011Rubie}.} In the chemical equilibrium of Equation \ref{Eq:thermo_system}, the partitioning of the elements into the core and mantle is controlled by the three parameters of the model: pressure $P_e$, temperature $T_e$, and oxygen fugacity $f_{O_2}$ of metal-silicate equilibration. These are in turn a function of the type of collision and its accretion efficiencies (Section \ref{sec:acc_effs_cm}), which control how the reservoirs of the target and projectile interact. In the following, the treatment of these thermodynamic properties in the context of inefficient accretion and equilibration is described.
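As a consistency check, the closed-form expressions for the component accretion efficiencies (Equations \ref{eq:core_acc_L}--\ref{eq:mantle_acc_S}) can be verified numerically against their mass-based definitions; the collision values below are made up for the check and are not SPH results:

```python
# Arbitrary test collision (masses in arbitrary units, not SPH data).
M_T, M_P = 0.10, 0.07          # target and projectile masses
Z_T, Z_P = 0.30, 0.30          # pre-impact core mass fractions
M_L, Z_L = 0.12, 0.34          # largest remnant mass and core fraction
M_S, Z_S = 0.03, 0.40          # second remnant mass and core fraction

gamma = M_P / M_T
xi_L = (M_L - M_T) / M_P
xi_S = (M_S - M_P) / M_P

# Closed forms (right-hand sides of Equations core_acc_L -- mantle_acc_S).
xi_L_c = (Z_L / Z_P) * xi_L + (Z_L - Z_T) / Z_P / gamma
xi_L_m = (1 - Z_L) / (1 - Z_P) * xi_L - (Z_L - Z_T) / (1 - Z_P) / gamma
xi_S_c = (Z_S / Z_P) * (1 + xi_S) - 1
xi_S_m = (1 - Z_S) / (1 - Z_P) * (1 + xi_S) - 1

# Mass-based definitions (middle members of the same equations).
xi_L_c_def = (Z_L * M_L - Z_T * M_T) / (Z_P * M_P)
xi_L_m_def = ((1 - Z_L) * M_L - (1 - Z_T) * M_T) / ((1 - Z_P) * M_P)
xi_S_c_def = (Z_S * M_S - Z_P * M_P) / (Z_P * M_P)
xi_S_m_def = ((1 - Z_S) * M_S - (1 - Z_P) * M_P) / ((1 - Z_P) * M_P)
```

The core/mantle decompositions of Equations \ref{eq:ML_segments} and \ref{eq:MS_segments} also hold identically for these values.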
\added{For the case of no re-equilibration, $P_e$, $T_e$ and $f_{O_2}$ cannot be defined, as there is no chemical reaction or equilibrium between co-existing metal and silicate of known compositions.} \subsubsection{Equilibration pressure and temperature} \label{sec:equi_TP} Following each event \added{involving metal-silicate equilibration}, the silicate-rich and the iron-rich reservoirs are assumed to equilibrate at a pressure $P_\mathrm{e}$ which is a constant fraction $f_\mathrm{P}$ of the embryos' evolving core-mantle boundary pressure $P_\mathrm{CMB}$, \begin{equation} \label{eq:P_cmb} P_\mathrm{e} = f_\mathrm{P} P_\mathrm{CMB}, \end{equation} \noindent and the equilibration temperature $T_\mathrm{e}$ is forced to lie midway between the peridotite liquidus and solidus at the equilibration pressure $P_\mathrm{e}$ \citep[e.g.,][]{Rubie2015}. The pressure $P_\mathrm{e}$ defined in Equation \ref{eq:P_cmb} is a simplified empirical parameter which averages the equilibration pressures for different types of impact events, and the constant $f_\mathrm{P}$ is a proxy for the average depth of impact-induced magma oceans. Here, we adopt a value $f_\mathrm{P} = 0.7$ for all the accretion events, consistent with the findings of \citet{2016PEPSdeVries}, who studied the pressure and temperature conditions of metal-silicate equilibration after each impact as Earth-like planets accrete. \added{In the case of full re-equilibration following hit-and-run and erosive collisions, we assume $f_\mathrm{P} = 1$ (that is, $P_\mathrm{e} = P_\mathrm{CMB}$) because the metal and silicate reservoirs are in a mixed state and re-equilibrate at a pressure equal to that of the core-mantle boundary.} For each of the resulting bodies, we determine the pressure $P_\mathrm{CMB}$ by using Equation 2.73 in \citet{2002geobook}.
The radial position of the core-mantle boundary is computed by using the approximation that an embryo is a simple two-layer sphere of radius $R$ consisting of a core of density $\rho_\mathrm{c}$ and radius $R_\mathrm{c}$ surrounded by a mantle of thickness ($R-R_\mathrm{c}$): \begin{equation} \label{eq:core_radius} R_\mathrm{c} = \left(\frac{\rho -\rho_\mathrm{m}}{\rho_\mathrm{c}-\rho_\mathrm{m}}\right)^{\frac{1}{3}} R \end{equation} where the embryo's mean density $\rho$ is provided by the density-mass relationship introduced in \citetalias{EmsenhuberApj2020}{} \added{normalized to predict the density of Earth (\SI{5510}{kg/m^3}) for a planetary mass of 1$M_\oplus$}. Assuming a mantle density $\rho_\mathrm{m} = 0.4~\rho_\mathrm{c}$,\footnote{The approximation that $\rho_\mathrm{m} = 0.4\rho_\mathrm{c}$ follows from the ratio between the densities of uncompressed peridotite and iron: $\rho_\mathrm{peridotite}/\rho_\mathrm{iron} = \SI{3100}{kg/m^3} / \SI{7874}{kg/m^3} \sim 0.4$.} the core density $\rho_\mathrm{c}$ is equal to \begin{equation} \label{eq:dens_core} \rho_\mathrm{c} = \frac{5}{2} \rho \left(1-\frac{3}{5}Z\right) \end{equation} where $Z$ is the embryo's core mass fraction as predicted by the surrogate models in Section \ref{sec:regressor_CMF}. \added{For an Earth-mass planet with core mass fraction of 30\%, the assumption of two constant density layers as presented above provides $P_\mathrm{CMB}$ = \SI{130}{GPa}, which is just 4.4\% lower than $P_\mathrm{CMB}$ of the modern Earth \citep[\SI{136}{GPa},][]{2002geobook}.
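The quoted $P_\mathrm{CMB}\approx\SI{130}{GPa}$ can be reproduced from the two-layer assumptions above (Equations \ref{eq:core_radius} and \ref{eq:dens_core}) by numerically integrating the hydrostatic equation through the mantle; this is an illustrative check, not the closed-form Equation 2.73 of \citet{2002geobook} used in the model:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
PI = 3.141592653589793
rho = 5510.0           # mean density normalized to Earth's, kg m^-3
Z = 0.30               # core mass fraction
R = 6.371e6            # planetary radius, m (Earth)

rho_c = 2.5 * rho * (1.0 - 0.6 * Z)        # Equation eq:dens_core
rho_m = 0.4 * rho_c                        # uncompressed peridotite/iron ratio
R_c = ((rho - rho_m) / (rho_c - rho_m)) ** (1.0 / 3.0) * R   # eq:core_radius

# Hydrostatic pressure at the core-mantle boundary: dP/dr = -rho_m g(r),
# with g(r) set by the mass enclosed within radius r (midpoint rule).
M_core = 4.0 / 3.0 * PI * rho_c * R_c ** 3
n = 100000
dr = (R - R_c) / n
P = 0.0
for i in range(n):
    r = R - (i + 0.5) * dr                 # integrate downward from the surface
    M_enc = M_core + 4.0 / 3.0 * PI * rho_m * (r ** 3 - R_c ** 3)
    P += rho_m * G * M_enc / r ** 2 * dr

P_CMB_GPa = P / 1e9    # ~130 GPa, a few percent below the modern Earth value
```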
In the immediate aftermath of giant impacts, the internal pressure may be lower than the modern value due to the heat released in the impact and the higher rotation rates of the planet; the pressure is nevertheless expected to increase to its steady-state value due to subsequent de-spinning, heat loss into space and vapor deposition \citep{2019SciLockStewart}.} \subsubsection{Oxygen fugacity} \label{sec:ox_fug} The oxygen fugacity $f_\mathrm{O_2}$ determines the redox conditions for geologic chemical reactions. It is a measure of the effective availability of oxygen for redox reactions, and it dictates the oxidation states of cations like iron that have multiple possible valence states. For oxygen-poor compositions, oxygen fugacity is a strong function of \added{the equilibration} temperature because the concentration of Si in the metal strongly increases with temperature, which in turn increases the concentration of FeO in the silicate. For more oxidized compositions, both Si and O dissolve in the metal, and oxygen fugacity is a much weaker function of temperature than in the case of more reduced bulk compositions. The major benefit of the mass balance approach to modeling metal-silicate equilibration as described in Section \ref{subsec:mass_balance} is that the oxygen fugacity does not need to be assumed (as is done in most core formation models), but is instead determined directly from the compositions of equilibrated metal and silicate (Equation \ref{Eq:thermo_system}). The oxygen fugacity is defined as the partition coefficient of iron between metal and silicate computed relative to the iron-w\"{u}stite buffer (IW, the oxygen fugacity defined by the equilibrium 2Fe + O\textsubscript{2} = 2 FeO): \begin{equation} \label{eq:ox_fug} \log{f_\mathrm{O_2}} (\Delta \mathrm{IW}) = 2 \log{\left( \frac{X_\mathrm{FeO}^\mathrm{Mw}}{X_\mathrm{Fe}^\mathrm{met}} \right)} \end{equation} \noindent \added{where $X$ represents the mole fractions of components in metal or silicate liquids.
$X_\mathrm{FeO}^\mathrm{Mw}$ is related to the silicate liquid composition \citep{2011Rubie} and $X_\mathrm{Fe}^\mathrm{met}$ is the fraction of total iron in the bulk composition that is present as metal, as opposed to oxide (i.e., FeO in the silicate). In Equation \ref{eq:ox_fug}, the activity coefficients are assumed to be unity because of the high temperatures, as is normally done when calculating $f_\mathrm{O_2}$ in studies of core formation, and because their values are very poorly known at high pressures and temperatures. This assumption is discussed in detail by \citet{1999GeCoAGessmann}.} \section{Inefficient accretion versus perfect merging: single impact events} \label{sec:map_elements} \begin{figure*} \centering \includegraphics[width=\linewidth]{CMF_v2.png} \caption{Relative difference between the core mass fractions of the largest remnant (left panel) and second remnant (right panel) and those of the parent bodies (the target's and projectile's, respectively) for a collision with $M_\mathrm{T}=\SI{0.1}{\mearth}$ and projectile-to-target mass ratio $\gamma = 0.7$ occurring at different impact angles $\theta_{coll}$ and impact velocities $v_{coll}$ (in units of escape velocity $v_{esc}$). \added{The values are predicted using the surrogate model of core mass fraction of Section \ref{sec:regressor_CMF}.} Each point represents a different collision scenario in which the two bodies collide at different impact angles and velocities. The black curves define the different accretion regimes predicted using the classifier of Section \ref{sec:class_reg}.
For comparison, the perfect-merging model predicts that the outcome of the collision is a single larger embryo with core mass fraction equal to that of the target (i.e.\ relative difference equal to zero) and no second remnant exists.} \label{fig:delta_CMF} \end{figure*} Here, we compare the predictions of the perfect-merging model and the inefficient-accretion model of Section \ref{sec:ML} for the case study of a single giant impact between a target of mass $M_\mathrm{T}=\SI{0.1}{\mearth}$ and a projectile of mass $M_\mathrm{P}=0.7 M_\mathrm{T}$. For the two models, we analyze the core mass fraction (Section \ref{subsec:single_CMF}), the accretion efficiencies of the cores and the mantles (Section \ref{subsec:single_accs}), and the pressure, temperature, and oxygen fugacity of metal-silicate equilibration of the resulting bodies (Section \ref{sec:PTox}). \added{In the case of composition, we assume that the two embryos differentiated from early solar system materials that were chemically reduced. A high value of the fraction of total Fe in the system that is present in metal ($X_\mathrm{Fe}^\mathrm{met} = 0.99$) is justified in \citet{Rubie2015} as a condition to achieve reducing conditions so that elements such as Si and Cr partition sufficiently into the core. The refractory siderophile elements are assumed to be in solar-system relative proportions \citep[i.e. CI chondritic,][]{2003PalmeNeill}.
This yields a core mass fraction of approximately 30\% for both target and projectile, which matches that of the colliding bodies in the SPH data of Section \ref{sec:SPH_data}.} \subsection{Core mass fraction} \label{subsec:single_CMF} \added{We show in Figure~\ref{fig:delta_CMF} that, in a collision between a target of mass $M_\mathrm{T}=\SI{0.1}{\mearth}$ and a projectile of mass $M_\mathrm{P}=0.7 M_\mathrm{T}$, the inefficient-accretion model predicts that the collision may result in two remnants and that the resulting bodies' core mass fractions may be significantly different from those of their parent bodies, depending on the characteristics of the collision (namely, impact angle and velocity). In contrast, the perfect-merging model predicts that the outcome is a single, larger embryo of mass $M_\mathrm{T}+M_\mathrm{P}$ whose core mass fraction is equal to that of the target, i.e., its bulk composition is unchanged.} \added{Cases of erosive collisions and disruptive hit-and-run events result in a net increase in the core mass fraction of the largest remnant due to massive amounts of mantle loss, which is predicted by our inefficient-accretion model trained on these events. In hit-and-run collisions, the projectile's core mass fraction is also larger than the pre-impact value, exacerbating the erroneous approximations of the perfect-merging model, as the latter does not produce two diverse objects as a result of a single (hit-and-run) collision.} \added{In two regions of the parameter space (white areas in Figure \ref{fig:delta_CMF}), the core mass fraction of the collision remnants is predicted to be at most 4\% and 3\% less than the pre-impact values.
This corresponds to a variation of $\sim$ \SI{-1}{\%} in core mass fraction absolute value, which is within the noise floor of the surrogate models of core mass fraction (Figure \ref{fig:reg_perf}).} \subsection{Accretion efficiency} \label{subsec:single_accs} \begin{figure*} \centering \includegraphics[width = 0.7\linewidth]{acc_eff_regimes_v2.png} \caption{Accretion efficiency of the core (first row) and mantle (second row) for the largest remnant (left panels) and second largest remnant (right panels) and corresponding collision regimes as predicted by the inefficient-accretion model (third row) for a collision with $M_\mathrm{T}=\SI{0.1}{\mearth}$ and projectile-to-target mass ratio $\gamma = 0.7$, for different impact angles $\theta_{coll}$ and impact velocities $v_{coll}$ (in units of escape velocity $v_{esc}$).} \label{fig:acc_core_mantle_LR} \end{figure*} \added{For any collision, geochemical modellers who use the perfect-merging assumption tend to approximate that the projectile's mantle and core accrete into the target's mantle and core, and undergo equilibration separately} \citep{Rubie2015}. For a collision between a target of mass $M_\mathrm{T}=\SI{0.1}{\mearth}$ and a projectile of mass $M_\mathrm{P}=0.7 M_\mathrm{T}$, the perfect-merging model predicts that the projectile's core plunges into the target's mantle and that the projectile's entire mantle is accreted ($\xi_\mathrm{L}^\mathrm{m} = \xi_\mathrm{L}^\mathrm{c} = 1$), for every combination of impact angle and velocity. \added{In contrast, our results show that there is a much larger diversity of core accretion efficiencies as a function of impact parameters, as shown in Figure~\ref{fig:acc_core_mantle_LR}, which demonstrates a critical inaccuracy of the perfect merging and equilibration assumptions.} In the parameter space of impact angle and impact velocity, we identify five collision regimes according to the core and mantle accretion efficiencies of the largest remnant.
These are described in the bottom-left panel of Figure \ref{fig:acc_core_mantle_LR} and defined as: \begin{enumerate}[label=\Alph*:] \item Core and mantle accretion occurs when $\xi_\mathrm{L}^\mathrm{c}\geq$ 0 and $\xi_\mathrm{L}^\mathrm{m}\geq$ 0. \added{No second remnant is present because} the projectile plunges into the target and gets accreted. \item Core accretion with loss of mantle material occurs when $\xi_\mathrm{L}^\mathrm{c}\geq$ 0 and $\xi_\mathrm{L}^\mathrm{m}<$ 0. \added{No second remnant is present because} the projectile's core plunges into the target and gets accreted. \item Core erosion occurs when $\xi_\mathrm{L}^\mathrm{c}<0$. The target's mantle is catastrophically disrupted and core erosion may also occur. The largest remnant has a larger core mass fraction than the target (Figure \ref{fig:delta_CMF}). \added{No second remnant is present because} the projectile is obliterated. \item Mild hit-and-run collisions occur when $\xi_\mathrm{L}^\mathrm{c}\in[-0.1,~0.1]$ and $\xi_\mathrm{L}^\mathrm{m} \in[-0.1,~0.1]$. The target's core does not gain or lose substantial mass, while the target's mantle may lose some mass depending on the impact velocity. The bulk projectile escapes accretion and becomes the second remnant. Substantial debris production may occur. \item \added{Disruptive hit-and-run collisions occur in the rest of the parameter space. The target loses part of its mantle and the largest remnant has a larger core mass fraction than the target (Figure \ref{fig:delta_CMF}). The bulk projectile escapes accretion and becomes the second remnant.} \end{enumerate} For the second remnant, we identify four collision regimes which are described in the bottom-right panel of Figure \ref{fig:acc_core_mantle_LR}: \begin{enumerate}[label=\Alph*:] \setcounter{enumi}{5} \item Mild mantle erosion occurs when $\xi_\mathrm{S}^\mathrm{c}\in[-0.1,~0.1]$ and $\xi_\mathrm{S}^\mathrm{m}\in[-0.1,~0.1]$.
At high impact angle, the geometry of the impact prevents almost any exchange of mass between the target and the projectile. \item Severe mantle erosion occurs when $\xi_\mathrm{S}^\mathrm{c}\in[-0.1,~0.1]$ and $\xi_\mathrm{S}^\mathrm{m}<-0.1$. The second remnant has a less massive mantle compared to the projectile, while it retains its core mostly intact. \item Core erosion occurs when $\xi_\mathrm{S}^\mathrm{c}<-0.1$. The second remnant's core mass fraction is strongly enhanced with respect to that of the projectile. In disruptive hit-and-run collisions, the energy of the impact may be high enough to erode some core material. \item Projectile obliteration occurs when $\xi_\mathrm{S} = \xi_\mathrm{S}^\mathrm{c} = \xi_\mathrm{S}^\mathrm{m} = -1$. No second remnant exists, as the projectile is either accreted or completely disrupted. \end{enumerate} \smallskip \added{In Figure \ref{fig:acc_core_mantle_LR}, we also observe that the surrogate model can produce nonphysical or inconsistent predictions for certain combinations of parameters. For example, at high angle and low velocity the surrogate model predicts that the core is eroded with accretion efficiency $\sim -0.1$ while the mantle has an accretion efficiency near 0. This region is expected to be an artifact, since core erosion is expected to be accompanied by substantial mantle loss. This discrepancy, however, is within the noise floor of the surrogate model, which is estimated to be $\sim 0.15$ in units of accretion efficiency after error propagation.} \subsection{Pressure, temperature and oxygen fugacity} \label{sec:PTox} The equilibration conditions of planets resulting from inefficient accretion (due to the stripping of mantle and core materials) will differ from those produced by the perfect-merging model.
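The largest-remnant regime definitions of Section \ref{subsec:single_accs} can be condensed into a small classifier over $(\xi_\mathrm{L}^\mathrm{c}, \xi_\mathrm{L}^\mathrm{m})$ and the collision-type flag; the precedence among overlapping conditions and the string labels are our own illustrative choices:

```python
def largest_remnant_regime(is_hit_and_run, xi_c, xi_m):
    """Regimes A-E for the largest remnant (precedence is illustrative)."""
    if is_hit_and_run:
        # Regimes D and E require the classifier to flag a hit-and-run first.
        mild = -0.1 <= xi_c <= 0.1 and -0.1 <= xi_m <= 0.1
        return "D: mild hit-and-run" if mild else "E: disruptive hit-and-run"
    if xi_c >= 0.0:
        if xi_m >= 0.0:
            return "A: core and mantle accretion"
        return "B: core accretion with mantle loss"
    return "C: core erosion"
```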
In Figure \ref{fig:LR_vs_PM} we show the difference in equilibration conditions for a single impact scenario (the same impact scenario as in the previous sections); \added{hit-and-run and erosive collisions are treated assuming full re-equilibration, for which the equilibration conditions are well defined}. The relative differences in the equilibration results are computed with respect to the perfect-merging values as \begin{equation} \label{eq:delta_X_PM} \delta X = \frac{X_\mathrm{inefficient ~accretion}-X_\mathrm{perfect~ merging}}{X_\mathrm{perfect~merging}} \times \SI{100}{\percent}, \end{equation} \noindent where $X$ indicates one of the thermodynamic metal-silicate equilibration parameters: pressure, temperature, or oxygen fugacity of metal-silicate equilibration for an embryo with initial $X_\mathrm{Fe}^\mathrm{met} = 0.99$. Positive values of $\delta[\log{f_\mathrm{O_2}}]$ indicate that the metal-silicate equilibration in the resulting body occurs at more chemically reduced conditions than in the planet from the perfect-merging model because of lower equilibration temperatures, while negative values of $\delta[\log{f_\mathrm{O_2}}]$ indicate less chemically reduced conditions. \added{As a result of mass loss in the target, the inefficient-accretion predictions deviate substantially from the perfect-merging predictions in case of erosive collisions (regimes B, C, D in Section \ref{subsec:single_accs}) and disruptive hit-and-run collisions (regime E). Conversely, when the collision is accretionary (regime A), the equilibration conditions predicted by the inefficient-accretion model are similar to those obtained with the perfect-merging model ($\delta P_e \approx \delta T_e \sim \SI{0}{\percent}$ and $\delta[\log{f_\mathrm{O_2}}] \sim \SI{0}{\percent}$).
Importantly, since the equilibration temperature is defined as a simple one-to-one function with respect to the equilibration pressure, as described in Section~\ref{sec:equi_TP}, the contours for temperature and pressure are very similar.} \begin{figure} \centering \includegraphics[width=\linewidth]{LR_vs_PM_1_v2.png} \includegraphics[width=\linewidth]{LR_vs_PM_2_v2.png} \includegraphics[width=\linewidth]{LR_vs_PM_3_v2.png} \caption{Relative difference (Equation \ref{eq:delta_X_PM}) between the predicted values of pressure, temperature, and oxygen fugacity of metal-silicate equilibration of the largest remnant from the inefficient-accretion model and those of the resulting body from the perfect-merging model, for the same collision of Figures \ref{fig:delta_CMF} and \ref{fig:acc_core_mantle_LR}. The colliding embryos have initial molar fraction of iron in the metal phase $X_\mathrm{Fe}^\mathrm{met} = 0.99$. \added{We assume the full-equilibration scenario in case of hit-and-run and erosive collisions (Section \ref{subsec:identification})}.} \label{fig:LR_vs_PM} \end{figure} \section{Inefficient accretion versus perfect merging: \textit{N}-body simulations} \label{sec:summary_N_body} In this Section, we investigate how planetary differentiation is affected by inefficient accretion during the end-stage of terrestrial planet formation. In contrast to Section \ref{sec:map_elements}, in which we investigate the case study of a single giant impact, here we use the core-mantle differentiation model to interpret the results of the \textit{N}-body simulations of accretion by \citetalias{EmsenhuberApj2020}, which model the evolution of hundreds of Moon-to-Mars-sized embryos as they orbit the Sun and collide to form the terrestrial planets.
The goal of our analysis, however, is not to reproduce the solar system terrestrial planets, but to investigate whether or not the perfect-merging and inefficient-accretion models produce significantly different predictions for the terrestrial planets' final properties \added{after a series of collisions under a fiducial \textit{N}-body setup}. \subsection{Initial mass and composition of the embryos} \label{subsec:initial} We use the data set of \textit{N}-body runs performed in \citetalias{EmsenhuberApj2020}{} and test the effects of the two collision models under a single equilibration model. The dataset consists of 16 simulations that use the more realistic treatment of collisions (inefficient-accretion model) and, in addition, 16 control simulations where collisions are taken to be fully accretionary (perfect-merging model). All the simulations were obtained with the \texttt{mercury6} \textit{N}-body code \citep{1999MNRASChambers}. For the inefficient-accretion model, \citetalias{EmsenhuberApj2020}~use the code library \texttt{collresolve}\footnote{\url{https://github.com/aemsenhuber/collresolve}} \citep{2019SoftwareEmsenhuberCambioni}. Each \textit{N}-body simulation begins with 153--158 planetary embryos moving in a disk with surface density similar to that for solids in a minimum mass solar nebula \citep{1977ApSSWeidenschilling}. As in \citet{2001IcarusChambers}, two initial mass distributions are examined: approximately uniform masses, and a bimodal distribution with a few large (i.e., Mars-sized) and many small (i.e., Moon-sized) bodies. The embryos in the simulations 01--04 all have the same initial mass \added{($\num{\sim1.67e-2} M_\oplus$)}. In simulations 11--14, the embryos have their initial mass proportional to the local surface density of solids\added{; the minimum mass is $\num{1.59e-3} M_\oplus$}. 
In simulations 21--24, the initial mass distribution is bimodal and the two populations of embryos are characterized by bodies with the same mass; \added{the minimum masses in the two populations are $\num{1.79e-3} M_\oplus$ and $\num{7.92e-2} M_\oplus$}. In simulations 31--34, the initial mass distribution is also bimodal but the bodies have mass proportional to the local surface density\added{; the minimum embryo mass is $\num{2.14e-3} M_\oplus$}. The compositions of the initial embryos are set by the initial oxygen fugacity conditions of the early solar system materials, which are defined as a function of heliocentric distance. Among the models of early solar system materials that have heritage in the literature, we adopt the model by \citet{Rubie2015}, whose parameters were refined through least squares minimization to obtain an Earth-like planet with mantle composition close to that of the Bulk Silicate Earth. Accretion happens from two distinct reservoirs of planet-forming materials: one of reduced material in the inner solar system (interior to \SI{0.95}{\au}), with the fraction of iron dissolved in metal equal to \num{0.99} and the fraction of available silicon dissolved in the metal equal to \num{0.20}. Exterior to that, no dissolved silicon is present in the metal and the proportion of Fe in metal decreases linearly until a value of \num{0.11} is reached at \SI{2.82}{\au}. Beyond \SI{2.82}{\au}, the iron metal fraction linearly decreases and reaches about zero at \SI{6.8}{\au} \added{\citep[see Figure 6 in][]{Rubie2015}.} \added{Within each group of \textit{N}-body simulations from \citetalias{EmsenhuberApj2020}, which vary different aspects of the initial conditions, we compare results from the perfect merging and inefficient accretion scenarios.} In sets 01--04 and 21--24, the initial mass \added{of every embryo is identical}, so the number density of embryos scales with the surface density.
In sets 11--14 and 31--34, the initial embryos' masses scale with the local surface density; the spacing between embryos is independent of the surface density, but the heliocentric distance between them gets smaller as distance increases, hence the number density of the embryos increases with heliocentric distance. This means that the simulations 11--14 and 31--34 are initialized with most of the embryos forming farther from the Sun than those in simulations 01--04 and 21--24. Following the model by \citet{Rubie2015}, this implies that most of the embryos in simulations 11--14 and 31--34 form with initial core mass fractions \added{smaller than 30\%}. \subsection{Working assumptions} \label{subsec:collision} \begin{enumerate} \item The \textit{N}-body simulations of \citetalias{EmsenhuberApj2020}{} are based on the assumption that the bodies are not spinning prior to each collision and are not spinning afterwards. We acknowledge that this approximation violates the conservation of angular momentum \added{in off-axis collisions} and that collisions between spinning bodies would alter accretion behavior \citep[e.g.,][]{1999Agnor}. \item In the \textit{N}-body simulations with inefficient accretion by \citetalias{EmsenhuberApj2020}, only the remnants whose mass is larger than \SI{1e-3}{\mearth} are considered. \added{Removing small bodies from the \textit{N}-body simulations avoids uncertainties that may arise from querying predictions from the machine learning model in regimes on which it was not trained (the SPH dataset extends down to collisions with a total mass of \SI{2e-3}{\mearth}). If the surrogate model predicts a mass smaller than the threshold, then this body is unconditionally treated as debris and does not dynamically interact with the embryos.
\citetalias{EmsenhuberApj2020}{} nevertheless tracked the evolution of the overall debris mass budget.} \item As the embryos evolve during accretion through collisions, their core mass fractions can evolve to be different from 30\%, which is the core mass fraction of the SPH colliding bodies that were used to train the surrogate models in Section~\ref{sec:regressor_CMF}. For this reason, in the analysis of the \textit{N}-body simulations we make the approximation that the core mass fraction of each collision remnant \added{\textit{prior} to metal-silicate equilibration} is equal to \begin{equation} \label{eq:core_approx} Z \approx (Z^* - 30\%) + Z_0 \end{equation} \noindent where $Z^*$ is the core mass fraction of the remnant as predicted by the surrogate model of Section~\ref{sec:regressor_CMF}, and $Z_0$ is the metal fraction of the parent body as computed by the core-mantle differentiation model. \added{To guarantee the conservation of siderophile material, the prediction by Equation \ref{eq:core_approx} is bounded to lie between 0 and 100\%.} \end{enumerate} \subsection{Results: Core mass fraction} \label{subsec:N_body_res} To compare the effect of the two accretion model assumptions (inefficient accretion and perfect merging) for planets in the \textit{N}-body simulations, we examine the final core mass fractions. \added{We chose not to bin the data as we found the results to be highly sensitive to choices in bin width (low-\textit{N} statistical issues).} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{Z_vs_M_v2.png} \caption{\textit{N}-body results by the inefficient-accretion model (dots) and the perfect-merging model (diamonds) in terms of the final planets' core mass fraction as function of the final planetary mass and different initial mass distributions for the embryos. 
\added{The core mass fractions of the inefficient-accretion planets are the arithmetic mean of the values obtained using the two assumptions of no re-equilibration and full re-equilibration of the metal and silicate reservoirs after hit-and-run and erosive collisions (Section \ref{subsec:identification})}. The red stars show the core mass fractions of the solar system terrestrial planets.} \label{fig:delta_ZZ} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{correlation.png} \caption{Left panel: core mass fraction of the terrestrial planets obtained in all the \textit{N}-body simulations from \citetalias{EmsenhuberApj2020}~assuming that full mixing and re-equilibration occur at the core-mantle boundary (vertical axis) or no re-equilibration occurs (horizontal axis) in hit-and-run and erosive collisions. Right panel: same as the left panel, but for the concentration of Si and O in the core of the terrestrial planets. The data points are color coded in terms of the planets' masses (logarithmic scale).} \label{fig:correlation} \end{figure*} Figure~\ref{fig:delta_ZZ} is a plot of the core mass fraction of the final planets as a function of planetary mass as determined by the perfect-merging model (open diamonds) and the inefficient-accretion model (dots). Each of the four subpanels of Figure~\ref{fig:delta_ZZ} plots one of the four groups of \textit{N}-body simulations (01--04, 11--14, 21--24, 31--34) described in Section \ref{subsec:initial}. Each subpanel shows the results from all four \textit{N}-body simulations of the corresponding group. The inefficient-accretion results are color-coded in terms of their $h$-number, which measures how many hit-and-run collisions an embryo experienced during the accretion simulation \added{(\citealp{2014NatGeoAsphaug}; \citetalias{EmsenhuberApj2020})}. If the collision event is a merger, the largest remnant's $h$-number is equal to the mass average of the target's and projectile's $h$-numbers.
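This $h$-number bookkeeping for mergers and hit-and-run collisions can be sketched compactly as follows (illustrative only; the function and argument names are our own, not from the \citetalias{EmsenhuberApj2020}{} code):

```python
def h_after_collision(regime, h_target, h_projectile, m_target, m_projectile):
    """h-number update for a single collision event:
    - merger: the largest remnant inherits the mass-weighted average h-number
    - hit-and-run: the runner's h-number increases by 1; the target's is unchanged
    Returns (h_largest_remnant, h_runner); h_runner is None for mergers."""
    if regime == "merger":
        h_lr = (m_target * h_target + m_projectile * h_projectile) / (
            m_target + m_projectile
        )
        return h_lr, None
    elif regime == "hit-and-run":
        return h_target, h_projectile + 1
    raise ValueError(f"unknown collision regime: {regime}")
```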
If the event is a hit-and-run collision, the second remnant's $h$-number is increased by 1, \added{while the $h$-number of the largest remnant does not change}. We also plot the estimated values for the core mass fractions of the inner solar system terrestrial bodies as red stars: $Z_\mathrm{Earth} = \SI{33}{\percent}$, $Z_\mathrm{Venus} = \SI{31}{\percent}$, $Z_\mathrm{Mars} = \SI{24}{\percent}$, $Z_\mathrm{Mercury} = \SI{69}{\percent}$ \citep[respectively]{2013EarthCMF,RUBIE201543,2017HelffrichMarsCore,2013Hauck}. \added{Perfect merging simulations generally produce planets with core mass fractions close to 30\% regardless of the initial embryo distribution. The most massive planets produced by inefficient accretion also have core mass fractions near 30\%, but the degree of spread in core mass fraction increases substantially among the less massive bodies. Furthermore,} remnants with core mass fractions above 40\% are generally found to have relatively high $h$-numbers, meaning that they survived multiple hit-and-run collisions. As discussed in \citetalias{EmsenhuberApj2020}, the initial mass of the embryos influences the dynamical environment and thus imparts a change in the degree of mixing between feeding zones in the planetary disk. This is found to affect the spread in core mass fraction of the smaller embryos. The simulations that produced the results in the top panels of Figure \ref{fig:delta_ZZ} were initialized with embryos of similar mass. As a result, these simulations are characterized by a predominance of collisions between similar-mass bodies, which tend to result in more hit-and-run collisions \citep{2010ChEGAsphaug,2020ApJGabriel}. \added{By analyzing the evolution of the debris mass using Equations \ref{eq:core_acc_L}--\ref{eq:mantle_acc_S}, we find that the debris is composed mainly of the mantle material of the embryos, with the core material contributing up to 15\% when collisions are predominantly hit-and-runs in nature.
This suggests that} many of the less massive planets are the ``runners'' from hit-and-run collisions (blue and magenta colored points), which managed to escape accretion onto the larger bodies but lost mantle material in the process, an effect predicted in \citet{2014NatGeoAsphaug}. This also explains why remnants with high $h$-numbers also tend to have higher core mass fractions. \added{Conversely, in \textit{N}-body simulations that are initialized with embryos of dissimilar mass (bottom panels of Figure \ref{fig:delta_ZZ}), the most massive bodies establish themselves dynamically early on, accreting smaller bodies in non-disruptive collisions. This leads to final bodies with relatively similar core mass fractions.} \added{Small planets that have a low \textit{h}-number (cyan colored dots) are either giant-impact-free embryos (i.e., small planets with core mass fraction $\sim 30\%$) or the largest remnants of disruptive collisions (i.e., small planets with core mass fraction $> 30\%$). In reality, the latter would sweep up some of their debris (an effect not included in the \textit{N}-body simulations by \citetalias{EmsenhuberApj2020}); this would lower the value of their core mass fraction, as mantle material is preferentially eroded in collisions.} We also observe that the spread in core mass fraction in small bodies also depends on the heliocentric distance at which they form. Adopting the model for early solar system materials by \citet{Rubie2015}, the embryos that form farther than \SI{0.95}{\au} have an initial core mass fraction lower than 30\% due to oxidation of metallic iron by water. This explains the presence of a few small embryos with core mass fraction lower than 30\%. \added{Finally, we find that the distribution of core mass fraction as a function of mass is robust against the assumption of the degree of mixing and re-equilibration of the metal and silicate reservoirs in hit-and-run and erosive collisions.
This is shown in Figure \ref{fig:correlation}, left panel, which is a plot of the core mass fraction values of the inefficient-accretion planets predicted using the assumption of full mixing and re-equilibration (vertical axis) versus the results assuming no re-equilibration (horizontal axis). The predictions are well-fit by the 1:1 line with coefficient of determination $R^2 = 0.98$. This means that the diversity in core mass fraction observed in Figure \ref{fig:delta_ZZ} is primarily set by the erosive effect of giant impacts, and that subsequent re-equilibration of mixed reservoirs (or lack thereof) does not affect the prediction of core mass fraction. In contrast, the chemical composition of the metal and silicate reservoirs still depends upon the adopted assumption. This is shown in Figure \ref{fig:correlation}, right panel, which is a plot of the concentration of Si and O in the core of the inefficient-accretion planets predicted using the assumption of full mixing and re-equilibration (vertical axis) versus the results assuming no re-equilibration (horizontal axis). As opposed to the core mass fractions (left panel), the data points in the right panel of Figure \ref{fig:correlation} are not well-fit by the 1:1 line. This warrants future studies on the nature and extent of mixing and re-equilibration at the core-mantle boundaries, as further discussed in Section \ref{subsec:fut_work}}. \added{\section{Discussion}} \subsection{Statistics of planetary diversity} \begin{table*} \centering \caption{Average value and range of the values of core mass fractions for the two populations of Inefficient-Accretion (IA) planets with M$<$0.1 M$_\oplus$ and M$>$0.1 M$_\oplus$. The symbols $\bar{Z}$ and $\Delta Z$ indicate the average core mass fraction of the population and its range, respectively.
For reference, the Perfect-Merging (PM) planets have $\bar{Z}_{PM}~\approx$ \SI{33}{\percent} and $\Delta Z_{PM}~\approx$ \SI{8}{\percent}.} \label{tab:comparison_01} \begin{tabular}{ccccc} \hline Sims & $\bar{Z}_{IA}$ (M $<$ 0.1 M$_\oplus$) & $\Delta Z_{IA}$ (M $<$ 0.1 M$_\oplus$) & $\bar{Z}_{IA}$ (M $>$ 0.1 M$_\oplus$) & $\Delta Z_{IA}$ (M $>$ 0.1 M$_\oplus$) \\ \hline 01--04 & 51\% & 88\% & 40\% & 16\%\\ 11--14 & 57\% & 78\% & 41\% & 16\%\\ 21--24 & 38\% & 41\% & 37\% & 17\% \\ 31--34 & 15\% & 6\% & 38\% & 18\% \\ \hline \\ \end{tabular} \end{table*} \added{In order to quantify the observed diversity in smaller bodies, we compute the mean core mass fraction and the range in core mass fraction (i.e., maximum minus minimum values) of the two sub-populations of inefficient-accretion planets that have mass larger and smaller than $0.1 M_\oplus$ (``more massive'' and ``less massive'' planets, respectively, Table~\ref{tab:comparison_01}), and compare them to those of the perfect-merging planets. We use a cut-off value of $0.1 M_\oplus$ because the perfect-merging simulations lack final planets smaller than $0.1 M_\oplus$ (with the exception of one planet in the simulations 01--04), and because $0.1 M_\oplus$ roughly corresponds to the maximum mass of the initial embryos of the \textit{N}-body simulations from \citetalias{EmsenhuberApj2020}{}.} \added{The statistics in Table \ref{tab:comparison_01} show that the more massive inefficient-accretion planets tend to be fairly similar to one another in terms of core mass fraction, while there is a higher degree of diversity among the less massive planets. Furthermore, the more massive planets have core mass fraction statistics similar to those of the perfect-merging planets; this similarity is expected to be further enhanced if debris re-accretion is modelled, because debris tends to be preferentially accreted by the larger planets.
Conversely, the average core mass fraction and its spread for the less massive planets tend to be higher than those of the perfect-merging planets because the former are preferentially eroded by giant impacts \citep{2010ChEGAsphaug}. This is especially true when collisions are predominantly hit-and-runs in nature (e.g., results in the top panels of Figure \ref{fig:delta_ZZ}).} \subsection{Effect of debris re-accretion} \label{sec:debris} \added{In the \textit{N}-body simulations of \citetalias{EmsenhuberApj2020}{}, debris produced in the aftermath of a giant impact that was not bound to the target or the runner was simply removed from the system. The mass of debris produced in a given giant impact is usually a relatively minor fraction of the colliding mass, compared to the major bodies, but cumulatively this simplification leads to significant mass removal from the system: up to 80\% of the mass in the initial embryos (\citetalias{EmsenhuberApj2020}{}). Here we consider how this may affect our predictions of compositional diversity with size.} \added{Other studies of terrestrial planet formation have treated giant impact debris explicitly} \citep[e.g.,][]{2009ApJStewartLeinhardt,2010ApJKokubo,2013IcarusChambers,2016ApJQuintana}\added{, although there are substantial differences from study to study in the treatment of debris production, the manner of its release into orbit, and the manner in which debris interacts gravitationally with itself and other bodies. In addition, there are differences in the calculation of the accretion efficiency of giant impacts. \citet{2013IcarusChambers}, whose initial conditions were replicated in \citetalias{EmsenhuberApj2020}{}, remains the most useful point of comparison, as it provides end-member cases of perfect merging, as well as simulations where debris is produced according to a scaling law \citep{2012ApJLeinhardt}.
The orbital release of debris in the simulations from \citet{2013IcarusChambers} is represented as a few particles radiating isotropically from the collision point at 1.05 times the target escape velocity, although it is unclear whether the debris fragments subsequently interact with each other or only with the major bodies.} \added{With these approximations, \citet{2013IcarusChambers} found that planets took longer to reach their final masses, due to the sweep-up of simulated debris particles, than in the cases with perfect merging. The mean time required for Earth analogues to reach their final mass is \SI{\sim159}{Myr}, substantially longer than \SI{101}{Myr} under perfect merging. The simulations in \citetalias{EmsenhuberApj2020}{} neglect debris but include realistic models of inefficient accretion, which results in a protracted tail of final accretion events lasting to \SI{\sim200}{Myr}. The longer timescale in this case is due to the realistic inclusion of hit-and-run collisions and the more accurate (stronger) orbital deflections for the runner in those simulations. Hit-and-run impactors in \citetalias{EmsenhuberApj2020}{} appear to have served a similar role as the debris particles in \citet{2013IcarusChambers}.} \added{Of greater importance for the present study are the masses and compositions of the finished planets. Compared to perfect merging, \citet{2013IcarusChambers} found that more terrestrial planets (4--5) are produced, with a wider distribution of resulting masses, when collisional fragmentation is approximated as described. Simulations in \citetalias{EmsenhuberApj2020}{} had a median number of 3 and 4 final planets under the perfect merging and realistic-accretion assumptions, respectively.
The standard deviation in the final number of planets in \citetalias{EmsenhuberApj2020}{} is 0.7 for the perfect-merging simulations, and 2 when applying the inefficient-accretion model.} \added{The evolution and influence of debris can also have important dynamical effects on the accreting planets. Similar to the effects noted in \citet{2006IcarusOBrien}, the simulated debris particles in \citet{2013IcarusChambers} appear to have applied a dynamical friction to the orbits of the major bodies, leading to slightly lower eccentricities than those of the perfect-merging simulations. In light of this, neglecting debris in \citetalias{EmsenhuberApj2020}{} is likely to have allowed a greater dynamical excitation of the growing planets, which subsequently would have collided at higher impact velocities. Half of the collisions in \citetalias{EmsenhuberApj2020}{} had $V_\mathrm{coll} > 1.6\,V_\mathrm{esc}$, and a quarter had $V_\mathrm{coll} > 2.12\,V_\mathrm{esc}$, whereas 95\% of the collisions in \citet{2013IcarusChambers} had velocities lower than $1.6\,V_\mathrm{esc}$\footnote{The impact velocity distribution is provided in the Supplementary Material of \cite{2020ApJGabriel}}. Further comparison of dynamical excitation and planet formation is difficult, because \citet{2013IcarusChambers} applied a hard cut-off to discern between accretion/disruption with debris production and hit-and-run collisions. In this simplified approach, regardless of impact velocity, a giant impact at impact angle greater than \ang{30} is treated as a hit and run, and there is no gravitational deflection of the runner. In \citetalias{EmsenhuberApj2020}{}, by contrast, the close approaches and resulting angular momentum distribution between the colliding bodies are explicitly resolved.
At relatively high velocity, most of the collisions at low impact angle predicted to be accretionary by \citet{2013IcarusChambers} are actually hit-and-run, as discussed in \citetalias{2019ApJCambioni}{}; conversely, at low velocity, many of the collisions predicted by \citet{2013IcarusChambers} to be hit-and-run are actually accretions.} \added{Of paramount interest to the present work is how the approach of ignoring the debris influences our predictions for planetary diversity. Debris that are produced in a giant impact are on heliocentric orbits that can intersect the target, and can intersect the runner in the case of hit-and-run collisions. As argued by \citet{2014NatGeoAsphaug}, accretion occurs preferentially onto the most massive body, the post-impact target, which has the substantially larger collisional cross-section and sweeps up most of the debris. Re-impacts with the runner are less frequent, and furthermore are more likely to be erosive. Therefore, the inclusion of a proper treatment of debris-embryo interaction is expected to strengthen our conclusion that planetary diversity increases at smaller scales.} \subsection{Future work} \label{subsec:fut_work} \added{In future studies, we will develop a recipe for the degree of re-equilibration, in-between the two end members studied here, as a function of impact conditions for giant impacts, building upon what has already been achieved in previous studies \citep[e.g.,][]{2010EPSLDahlStevenson,2016LPINakajima,raskin2019material}. This includes: (1) studying at which depth in the planet equilibration occurs, i.e., determining the value of $f_\mathrm{P}$ (Equation \ref{eq:P_cmb}) for different collision regimes; and (2) a more-advanced prediction for the equilibration temperature. In the planetary differentiation model, the latter is set at the midpoint between the solidus and liquidus temperatures of mantle peridotite.
In this way, we have enforced that the equilibration temperature tracks the equilibration pressure. This assumption is probably accurate for the case of a projectile's core sinking through a mantle magma ocean, but proving its validity for hit-and-run or erosive collisions requires further study.} \added{Future work will also aim to further improve the realism of the surrogate model of giant impacts. Specifically, we plan to train neural networks on SPH simulations of collisions between embryos with core mass fraction different from 30\% and refine the treatment of the debris field in \textit{N}-body simulations with respect to that used in \citetalias{EmsenhuberApj2020}{}. As discussed in Section \ref{sec:debris}, in addition to increasing the mass of the surviving embryos, debris are expected to reduce the eccentricities and inclinations of the embryos via dynamical friction \citep{2006IcarusRaymond,2006IcarusOBrien,2019ApJKobayashi}, thus increasing the accretion efficiency but reducing the interactions across feeding zones.} \section{Conclusions} \label{sec:conclusion} In this work, neural networks trained on giant impact simulations have been implemented in a core-mantle differentiation model coupled with \textit{N}-body orbital dynamical evolution simulations to study the effect of inefficient accretion on planetary differentiation and evolution. We make a comparison between the results of the neural-network model (``inefficient accretion'') and those obtained by treating all collisions unconditionally as mergers with no production of debris (``perfect merging''). For a single collision scenario between two planetary embryos, we find that the assumption of perfect merging overestimates the resulting bodies' mass and thus their equilibration pressure and temperature.
Assuming that the colliding bodies have oxygen-poor bulk compositions, the inefficient-accretion model produces a wider range of oxidation states that depends intimately on the impact velocity and angle; mass loss due to inefficient accretion leads to more reduced oxygen fugacities of metal-silicate equilibration because of the strong temperature dependence at low oxygen fugacities. To investigate the cumulative effect of giant impacts on planetary differentiation, we use a core-mantle differentiation model to post-process the results of \textit{N}-body simulations obtained in \citetalias{EmsenhuberApj2020}, where terrestrial planet formation was modeled with both perfect merging and inefficient accretion. \added{The inefficient-accretion model suggests that planets less massive than \SI{0.1}{\mearth} are compositionally diverse in terms of core mass fraction. This is driven by the effect of mantle erosion that is included in the machine-learning model}. In contrast, both models provide similar predictions for planets more massive than \SI{\approx0.1}{\mearth}, \added{e.g., a tight clustering near the 30--40\% value of core mass fraction.} This is consistent with previous studies that successfully reproduced the Bulk Silicate Earth composition using the results from \textit{N}-body simulations with perfect merging \citep{Rubie2015,2016Rubie}. We therefore suggest that an inefficient-accretion model is necessary to accurately track compositional evolution in terrestrial planet formation\added{, particularly when it comes to modeling the history and composition of less massive bodies.
Our results improve upon previous studies that reached similar conclusions but did not find planets with core mass fraction comparable to that of Mercury \citep[e.g.,][]{2015IcarDwyer} or whose simulations (specifically the simplification of close approaches) likely underestimated mantle erosion \citep[e.g.,][]{Carter2015apj}.} Finally, the value of oxygen fugacity of metal-silicate equilibration is known to influence the post-accretion evolution of rocky planets' atmospheres \citep[e.g.,][]{2007SSRZahnle,2019SciArmstrong,2020arXivZahnle}. Incorporating realistic collision models into planet formation simulations therefore becomes crucial not only for understanding how terrestrial embryos accrete, but also for making testable predictions of how some of them may evolve from magma-ocean planets to potentially habitable worlds. \acknowledgments S.C., A.E., E.A., and S.R.S. acknowledge support from NASA under grant 80NSSC19K0817. We thank the anonymous reviewers for the comments and edits that improved this manuscript. The authors also thank A. Morbidelli and M. Nakajima for discussions on the nature of terrestrial planet formation and metal-silicate equilibration.
\section{Introduction} Recent findings on the superconductivity in LaFeAsO$_{1-x}$F$_x$\cite{kamihara} and related materials have triggered a great deal of interest in iron compounds because of the possible connection between the superconductivity and magnetism,\cite{mazin} which undergoes a phase transition from antiferromagnetic to superconducting ground states (and vice versa) tuned by external pressure\cite{takahashi} or chemical doping.\cite{kamihara} In particular, PbO-type FeSe, which is one of the iron-based superconductors discovered a long time ago,\cite{hsu} has attracted attention as a key material for elucidating the superconducting mechanism, because of its extremely simple structure (composed only of the superconducting FeSe$_4$ layer) and its excellent response to external pressure.\cite{mizuguchi} Among all similar materials, FeSe shows the greatest enhancement of its $T\rm_c$ at high pressure:\cite{margadonna} $T\rm_c$ varies from 9 K (at ambient pressure) to 37 K (at 6 GPa), indicating a growth rate as high as 4.5 K/GPa; as a result, using FeSe, it is possible to demonstrate the strong correlation between the structural parameter and $T\rm_c$. The maximum $T\rm_c$ value of iron pnictides is apparently attained when the Fe$X_4$ ($X$: anion) tetrahedron assumes a regular shape;\cite{lee} however, this rule is not applicable to FeSe, because the FeSe$_4$ tetrahedron is distorted from the regular shape,\cite{margadonna} while $T\rm_c$ increases significantly with application of pressure. Although several studies have investigated FeSe subjected to high pressure,\cite{medvedev,garbarino} the pressure dependence of $T\rm_c$, particularly above 6 GPa, is controversial because of difficulties in measurements under high pressure conditions. 
For example, in one of these studies, superconductivity appeared to exist above 20 GPa, even though the phase transition from tetragonal to hexagonal (non-superconductive) was completed at 12.4 GPa.\cite{braithwaite} This discrepancy is attributable mainly to the following two reasons: the ambiguous definition of $T\rm_c$ and the large anisotropic compressibility of the layered structure.\cite{margadonna} FeSe does not show Meissner diamagnetism at $T\rm_c^{onset}$, which denotes the beginning of the resistivity drop; therefore, there is no guarantee that a kink in the resistivity immediately represents a signature of superconductivity. Therefore, $T\rm_c$ should be determined from the zero-resistance temperature. Moreover, the hydrostaticity of pressure is essential to obtain the precise pressure dependence of $T\rm_c$, because FeSe has an inhomogeneous compressibility,\cite{margadonna} which stems from the layered structure stacked loosely by a van der Waals interaction (see upper inset of Fig. 1). To overcome all these problems, we used a cubic-anvil-type high-pressure apparatus\cite{mori} that ensures quasihydrostaticity by the isotropic movement of anvil tops, even after the liquid pressure-transmitting medium solidifies at low temperature and high pressure; using this apparatus, we reconfirmed the $T\rm_c$-$P$ (pressure) phase diagram of FeSe. With this background, in this study, we measured the electrical resistivity of a high-quality FeSe polycrystal at pressures ranging from 0 GPa to 16 GPa and evaluated the variation in $T\rm_c$ and the electronic state, both of which are closely related to the anion position.
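Determining $T\rm_c$ from the zero-resistance criterion can be made operational as in the following sketch (illustrative only; the helper name and the noise threshold `eps` are our own choices, not the authors' analysis code):

```python
import numpy as np

def tc_offset(temperature, resistivity, eps=1e-10):
    """Zero-resistance transition temperature: the highest temperature at
    which the measured resistivity has dropped below the noise floor `eps`.
    `temperature` and `resistivity` are matching 1-D arrays; returns None
    if the sample never reaches zero resistance in the measured range."""
    t = np.asarray(temperature, dtype=float)
    r = np.asarray(resistivity, dtype=float)
    zero = t[r <= eps]  # temperatures where resistivity is (numerically) zero
    return float(zero.max()) if zero.size else None
```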
A precise evaluation of the zero-resistivity temperature shows that the pressure dependence of $T\rm_c$ follows a slightly distorted dome-shaped curve with a maximum $T\rm_c$ of 30 K in the range $0 < P < 11.5$~GPa, and that the temperature dependence of the resistivity above $T\rm_c$ changes dramatically at around 2 GPa, suggesting the existence of different types of superconductivity at high pressure. A direct comparison with a previous structural report\cite{margadonna} reveals a striking correlation between $T\rm_c$ and the anion (selenium) height: $T\rm_c$ varies with the anion height. Moreover, this relation is broadly applicable to other ferropnictides, indicating that high-temperature superconductivity in these materials appears only around the optimum anion height ($\sim$1.38$\rm\AA$). We suggest that the anion height should be considered a key determining factor of $T\rm_c$ in iron-based superconductors containing various anions. \section{Method} FeSe has a simple tetragonal structure composed only of edge-shared FeSe$_4$ tetrahedral layers; nevertheless, it is difficult to fabricate a good-quality superconducting FeSe sample, because the superconductivity is extremely sensitive to the amount of excess iron\cite{mcqueen} and extreme caution is required to prevent the formation of magnetic impurities from easily oxidizable iron. Polycrystalline samples of FeSe used in this study were prepared by a solid-state reaction using Fe (99.9$\%$, Kojundo-Kagaku) and Se (99.999$\%$, Kojundo-Kagaku) powders. The powders were mixed in a molar ratio of 100:99 (nominal composition FeSe$_{0.99}$) in an argon-filled glove box and sealed in an evacuated quartz tube. The mixture was then sintered at 1343 K for 3 days, annealed at 693 K for 2 days, and finally quenched in liquid nitrogen. Further details of the sample preparation are described in Ref. 12.
The quality of the obtained sample was verified by powder X-ray diffraction using an X-ray diffractometer with a graphite monochromator (MultiFlex, Rigaku); the results confirmed that the sample quality was comparable to that of previously reported high-quality samples.\cite{mcqueen} The electrical resistivity and magnetic susceptibility of the sample were measured using a physical property measurement system (PPMS, Quantum Design) and a magnetic property measurement system (MPMS, Quantum Design), respectively. As shown in Fig. 1, zero resistivity and the Meissner effect were observed simultaneously at 7 K at ambient pressure in our sample. To evaluate the precise pressure dependence of $T\rm_c$, we defined both $T\rm_c^{offset}$ (determined from the zero-resistivity temperature) and $T\rm_c^{onset}$ (determined from the crossing point of two lines extrapolated from the resistivity data around $T\rm_c$). Electrical resistivity measurements were performed in the cubic-anvil-type apparatus\cite{mori} with Daphne 7474 oil\cite{murata} as the liquid pressure-transmitting medium, which ensured precise measurements up to 16 GPa under nearly hydrostatic conditions. Pressure was calibrated using a calibration curve obtained previously from observations of several fixed pressure points (Bi, Te, Sn, ZnS) at room temperature. The resistivity measurements were performed by a conventional dc four-probe method, as shown in the lower inset of Fig. 1, with an excitation current of 1 mA. The samples used in these experiments had dimensions of 1.0$\times$0.4$\times$0.2 mm$^3$. \begin{figure} \includegraphics[width=1.0\linewidth,keepaspectratio]{fig1s.eps} \caption{\label{fig1}(Color online) Temperature dependence of magnetic susceptibility (top main panel) and electrical resistivity (bottom main panel) of polycrystalline FeSe at ambient pressure.
The top and bottom insets show the crystal structure of FeSe and the setting of the sample in the high-pressure apparatus (see text for details), respectively.} \end{figure} \section{RESULTS AND DISCUSSION} The left panel of Fig. 2 shows the temperature dependence of the electrical resistivity under external pressures ranging from 0 GPa (ambient pressure) to 16 GPa. With the application of pressure, the room-temperature resistivity decreases by a factor of more than 3; it reaches a minimum at 10 GPa and subsequently increases between 10 GPa and 16 GPa. In the pressure range from 0 GPa to 6 GPa, $T\rm_c$ (both $T\rm_c^{onset}$ and $T\rm_c^{offset}$) increases rapidly but not monotonically; furthermore, the resistivity curves gradually change shape relative to the ambient-pressure curve (see top right panel of Fig. 2). Meanwhile, as shown in the bottom right panel of Fig. 2, $T\rm_c^{offset}$ suddenly vanishes at 11.5 GPa; this disappearance is attributed to a rapid enhancement of the resistivity between 11 GPa and 11.5 GPa. Although $T\rm_c^{onset}$ persists above 11.5 GPa, it disappears completely at 16 GPa. Figure 3 shows the pressure dependence of $T\rm_c^{onset}$, $T\rm_c^{offset}$, and the width of the superconducting transition, $\Delta T\rm_c$ (= $T\rm_c^{onset}$ - $T\rm_c^{offset}$). A beautiful but slightly distorted dome-shaped curve is observed, as in cuprates\cite{palee} and heavy fermions.\cite{sidorov1} However, the pressure dependence of $\Delta T\rm_c$ shows a complicated trend. At low pressures up to 2 GPa, $\Delta T\rm_c$ increases exponentially, indicating a salient broadening of the transition width, whereas $T\rm_c^{offset}$ increases only gradually. Thereafter, $\Delta T\rm_c$ decreases moderately but increases again above 9 GPa, resulting in a dome-shaped $T\rm_c$ curve.
In the following paragraphs, we examine the details of the abovementioned behaviors, in comparison with those reported in previous studies, to elucidate the nature of iron-based superconductors. \begin{figure} \includegraphics[width=1.0\linewidth]{fig2s.eps} \caption{\label{fig2}(Color online) Temperature dependence of resistivity at ambient and several other pressures (left panel: 0$\sim$16 GPa, top right panel: 0$\sim$8 GPa, and bottom right panel: 9$\sim$16 GPa) for the polycrystalline sample of FeSe.} \end{figure} The most striking feature in the low-pressure region ($<$ 2 GPa) is that $T\rm_c^{offset}$ exhibits a relatively flat plateau; that is, the increase in $T\rm_c$ almost levels off between 1 and 1.5 GPa. A similar behavior was also observed in measurements of the DC magnetization\cite{miyoshi} and electrical resistivity\cite{masaki} of FeSe using high-pressure piston-cylinder units; it is therefore probably an intrinsic characteristic of FeSe. A previous $^{77}$Se-NMR measurement\cite{imai} showed that antiferromagnetic spin fluctuations are significantly enhanced in the plateau region and suggested the possibility of a magnetic phase transition or spin freezing. The superconductivity in iron-based compounds is thought to be closely related to a neighboring antiferromagnetically ordered phase; however, tetragonal FeSe exhibits superconductivity without any elemental substitution, and its antiferromagnetic ground state remains undiscovered. In the case of FeSe, the nesting of the Fermi surface probably improves temporarily up to $\sim$2 GPa under external pressure, enhancing antiferromagnetic instabilities that would constrain the enhancement of $T\rm_c$. Upon further pressurization, the nesting condition would worsen, and ``high-temperature'' superconductivity would appear.
The appearance of pressure-induced superconductivity adjacent to a magnetically ordered phase is a characteristic feature of exotic superconductors such as CeRh$_2$Si$_2$,\cite{movshovich} CeNi$_2$Ge$_2$,\cite{steglich} and CeIn$_3$,\cite{mathur} in which superconductivity appears around a quantum critical point. \begin{figure} \includegraphics[width=1.0\linewidth]{fig3s.eps} \caption{\label{fig3}(Color online) Pressure dependence of $T\rm_c^{onset}$ (open circle), $T\rm_c^{offset}$ (closed circle, top main panel) and width of superconducting transition $\Delta T\rm_c$ (= $T\rm_c^{onset}$ - $T\rm_c^{offset}$) (bottom main panel). The solid lines are obtained by connecting the data points.} \end{figure} Figure 4 shows an enlarged view of the resistivity around $T\rm_c$ between 1 and 6 GPa. A gradual change in the shape of the resistivity curves can be distinguished: with increasing pressure, the temperature dependence of the resistivity changes from nearly quadratic to linear. In particular, the change between 2 GPa and 3 GPa is drastic, implying a phase transition between different superconducting states. The possibility of two kinds of superconducting phases has also been reported by Sidorov \textit{et al.},\cite{sidorov2} indicated by a jump in $d\rho/dT$. A linear temperature dependence of the electrical resistivity is commonly observed in cuprate superconductors\cite{fiory,nakamura,mackenzie} and is considered one of the primary indicators of non-Fermi-liquid behavior; incoherent scattering of fermionic quasiparticles via magnetic interactions leads to a resistivity of the form $\rho (T) = \rho_0 + AT^\alpha$, where $\rho_0$, $A$, and $\alpha$ are fitting constants, whereas no linear term is expected in conventional Fermi-liquid theory. It should be noted that in our study, non-Fermi-liquid behavior was observed even in the plateau region ($\alpha$ = 1.6$\sim$1.2 between 1 and 2 GPa), indicating the development of spin fluctuations.
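The power-law analysis described above can be sketched numerically. The following is an illustrative Python example (not part of the paper's analysis): it fits the functional form $\rho(T) = \rho_0 + AT^\alpha$ to synthetic resistivity data and recovers the exponent $\alpha$; all numerical values are made up for the demonstration.

```python
# Sketch: extracting the resistivity exponent alpha from rho(T) = rho0 + A*T^alpha.
# Synthetic data only; the true parameters below are arbitrary illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def rho(T, rho0, A, alpha):
    """Power-law resistivity model used in the text."""
    return rho0 + A * T**alpha

# Fake "measured" curve with alpha = 1.5 (between the Fermi-liquid value 2
# and the linear non-Fermi-liquid value 1) plus a small amount of noise.
rng = np.random.default_rng(0)
T = np.linspace(30.0, 300.0, 100)                       # K, above Tc
data = rho(T, 0.1, 2e-3, 1.5) + rng.normal(0.0, 1e-3, T.size)

popt, _ = curve_fit(rho, T, data, p0=[0.0, 1e-3, 2.0])  # start from alpha = 2
rho0_fit, A_fit, alpha_fit = popt
print(f"alpha = {alpha_fit:.2f}")                       # close to the input 1.5
```

In practice one would fit each isobar separately and track how the fitted $\alpha$ drifts from $\sim$2 toward 1 with increasing pressure.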
The temperature dependence of the resistivity in the high-temperature superconducting phase ($>$3 GPa) of FeSe is highly reminiscent of the linear temperature dependence observed in high-$T\rm_c$ cuprates, interpreted as a ``strange metal'' phase,\cite{anderson} where it is ascribed to antiferromagnetic spin fluctuations. Similar behaviors have also been reported for other ferropnictides, e.g., Ba(Fe,Co)$_2$As$_2$,\cite{ahilan} implying that antiferromagnetic spin fluctuations and superconductivity are closely related to each other in iron-based compounds, as discussed in the context of heavy-fermion and cuprate superconductors. \begin{figure} \includegraphics[width=1.0\linewidth]{fig4s.eps} \caption{\label{fig4}(Color online) Enlarged view of resistivity around $T\rm_c$ between 1 and 6 GPa. The dotted lines are guides to the eye, showing $T$ and $T^2$ dependences. For simplicity, we have not shown the data at 1.5 GPa.} \end{figure} For applied pressures greater than 3 GPa, $T\rm_c$ traces the dome-shaped curve, with a maximum $T\rm_c$ of 30.02 K at 6 GPa, whereas between 3 GPa and 9 GPa, $\Delta T\rm_c$ continues to decline steadily. As noted above, the shape of the Fe$X_4$ tetrahedron is closely related to the value of $T\rm_c$; in the case of iron pnictides, $T\rm_c$ appears to attain its maximum values when the As-Fe-As bond angles come close to 109.47$^\circ$,\cite{lee} which corresponds to a regular tetrahedron. However, this rule is not applicable to FeSe.\cite{margadonna} Therefore, we focus on the relationship of $T\rm_c$ with the ``Se height,'' i.e., the distance of the anion from the nearest iron layer. Figure 5 shows the pressure dependence of $T\rm_c^{offset}$ and the Se height (inversely scaled), obtained from Ref. 6. Astonishingly, $T\rm_c^{offset}$ varies linearly with the Se height, even in the plateau in the low-pressure region.
Although there is a subtle shift in the pressure dependence, which may be due to the difference in the way pressure is applied (cubic or diamond anvil), there is a clear correlation between the two parameters. Furthermore, $T\rm_c^{offset}$ is inversely proportional to the magnitude of the Se height, as can be observed from the inset of Fig. 5, indicating that the smaller the Se height, the more enhanced is $T\rm_c$. However, this seems to contradict the behavior observed in other pnictides, where $T\rm_c$ is higher when the pnictogen is located at a greater height in the crystal structure;\cite{lee} this behavior is also supported by theory.\cite{kuroki} In any case, FeSe is a suitable material for demonstrating the importance of the anion position, as discussed below, which is inherently linked to the mechanism of superconductivity in iron-based compounds. \begin{figure} \includegraphics[width=1.0\linewidth]{fig5s.eps} \caption{\label{fig5}(Color online) Pressure dependence of $T\rm_c^{offset}$ and Se height $h\rm_{Se}$ (inversely scaled), as obtained from Ref. 6. The inset shows $T\rm_c^{offset}$ as a function of the Se height. The dotted line is a guide to the eye.} \end{figure} On application of further pressure up to 9 GPa, $T\rm_c^{offset}$ decreases monotonically and disappears completely above 11.5 GPa; the superconducting transition also becomes less sharp, as indicated by the broadening of the transition width $\Delta T\rm_c$. After the disappearance of $T\rm_c^{offset}$, the resistivity over the entire temperature range increases greatly with increasing pressure, indicating the occurrence of a metal-semiconductor transition.
At $\sim$9 GPa, FeSe starts to transform from the tetragonal structure to a hexagonal (NiAs-type) structure, which is accompanied by a transition from the metallic superconducting state to a semiconducting state.\cite{margadonna} A recent synchrotron X-ray study of FeSe at various pressures\cite{braithwaite} revealed that the structural transition to the hexagonal phase is completed at around 12.4 GPa, which is consistent with the fact that all traces of superconductivity (see bottom right panel of Fig. 2) vanish completely by 16 GPa, without any anomalous decrease in resistivity. Therefore, the remarkable increase in the transition width $\Delta T\rm_c$ above 9 GPa corresponds to the transition to the hexagonal phase and to the closure of the superconducting dome. The onset of $T\rm_c$ observed above 11.5 GPa indicates a small fraction of the superconducting phase, which may no longer manifest Meissner diamagnetism. \begin{figure} \includegraphics[width=1.0\linewidth]{fig6s.eps} \caption{\label{fig6}(Color online) $T\rm_c$ as a function of anion height ($h\rm_{anion}$) for various iron (and nickel)-based superconductors, as obtained from Ref. 29 (triangle: FeSe, circle: other pnictides). Lanthanides ($Ln$) indicate $Ln$FeAsO (1111 system). 111, 122, and 42226 represent LiFeAs, Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$, and Sr$_4$Sc$_2$Fe$_2$P$_2$O$_6$,\cite{ogino} respectively. The yellow line shows the result of fitting with a Lorentz function.} \end{figure} We now consider, in a more universal sense, the nature of iron-based superconductivity in FeSe with respect to the pressure tuning of $T\rm_c$, which is the focus of this study. Figure 6 shows the maximum $T\rm_c$ as a function of anion height ($h\rm_{anion}$) for various iron-based superconductors.\cite{horikin,ogino} In this study, we successfully derived the $T\rm_c$-$h\rm_{anion}$ diagram of iron (partially nickel)-based superconductors.
The clear correlation between $T\rm_c$ and $h\rm_{anion}$ is a strong indicator of the importance of the anion position in these iron-based superconductors. As shown in Fig. 6, the anion height dependence of $T\rm_c$ is well described by a Lorentz curve (yellow line). As the anion height increases, $T\rm_c$ of the iron-based superconductors increases dramatically up to $\sim$55 K at a height of 1.38$\rm\AA$, which corresponds to the optimum value of the 1111 system. Above the optimum anion height (1.38$\rm\AA$), however, $T\rm_c$ decreases rapidly with increasing $h\rm_{anion}$, passing through our measured FeSe region (1.42$\sim$1.45$\rm\AA$); finally, $h\rm_{anion}$ reaches the value of non-superconducting FeTe (1.77$\rm\AA$).\cite{bao} It should be noted that superconductors with direct substitution in the Fe$X_4$ tetrahedral layer or with a large deviation from the divalent state (Fe$^{2+}$), e.g., alkali-metal- or Co-doped samples of the 122 system or the chalcogen-substituted 11 system, do not follow this trend well. This is probably due to (1) the considerable disorder in the Fe layers; (2) a large gap among the anion heights of different anions: for example, in FeSe$_{1-x}$Te$_x$, $T\rm_c$ appears to be dominated only by the Fe-Se distance ($T\rm_c$ $\sim$ 14 K at $h\rm_{anion}$ = 1.478$\rm\AA$, consistent with the Lorentz curve)\cite{lehman,tegel}; or (3) the coexistence of strong magnetic fluctuations and superconductivity.\cite{lumsden1,lumsden2,parker} We thus conclude that the appearance of ``high-temperature'' superconductivity in iron compounds is confined to a specific region around the optimum anion height (1.38$\rm\AA$), which corresponds to the radius of arsenic at ambient pressure.
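A fit of the kind shown in Fig. 6 can be sketched as follows. This is a hypothetical Python illustration, not the actual fit of the paper: the $(h_{\rm anion}, T_{\rm c})$ points below are rough values chosen in the spirit of the figure, and the model is a Lorentzian peaked near the optimum height of $\sim$1.38 \AA.

```python
# Sketch: Lorentzian description of Tc versus anion height h (in Angstrom).
# The data points are illustrative stand-ins, NOT the published data set.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(h, tc_max, h0, gamma):
    """Tc peaked at h0 with half-width gamma, vanishing far from the optimum."""
    return tc_max * gamma**2 / ((h - h0)**2 + gamma**2)

# (anion height [A], Tc [K]): phosphide side, 1111 optimum, FeSe region, FeTe.
h = np.array([1.20, 1.32, 1.38, 1.42, 1.45, 1.55, 1.77])
tc = np.array([5.0, 38.0, 55.0, 37.0, 30.0, 10.0, 0.0])

popt, _ = curve_fit(lorentzian, h, tc, p0=[55.0, 1.38, 0.1])
tc_max, h_opt, gamma = popt
print(f"optimum anion height ~ {h_opt:.2f} A, max Tc ~ {tc_max:.0f} K")
```

The fitted peak position lands near the optimum height quoted in the text, while the width parameter quantifies how quickly $T\rm_c$ collapses away from it.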
It has been proposed,\cite{kuroki} on the basis of solutions of the Eliashberg equations, that the critical temperature of iron pnictides is inherently linked to their structural parameters, particularly the pnictogen height and the $a$-axis lattice parameter. Our result that the pressure evolution of $T\rm_c$ varies with the anion height, as shown in Fig. 5, is in good agreement with this theoretical prediction; on the other hand, the length of the $a$-axis of FeSe decreases monotonically with increasing pressure,\cite{margadonna} which suppresses the enhancement of $T\rm_c$. An interesting aspect of FeSe, as observed from Fig. 6, is that $T\rm_c$ does not exhibit this trend above 1.43$\rm\AA$ (corresponding to the pressure range of 0$\sim$2 GPa), which clearly indicates that the system attains a different electronic state below the characteristic pressure ($\sim$2 GPa). This could occur concomitantly with a reconstruction of the Fermi surface; the shapes of the resistivity curves above $T\rm_c$ change clearly at around 2 GPa, as pointed out above (see Fig. 4), which implies a significant transformation to the non-Fermi-liquid state. It has previously been suggested that the superconducting gap symmetries of arsenide and phosphide compounds differ:\cite{fletcher} a full-gap strong-coupling $s$-wave for high-$T\rm_c$ arsenide compounds and a nodal gap for low-$T\rm_c$ phosphide compounds, a view widely shared in many studies. A theoretical approach\cite{kuroki} has suggested that the pairing symmetry of iron pnictides is determined by the pnictogen height: a high-$T\rm_c$ nodeless gap for high $h\rm_{anion}$ or a low-$T\rm_c$ nodal gap for low $h\rm_{anion}$, corresponding to the left-hand side of the Lorentz curve shown in Fig. 6.
Although FeSe is located on the right-hand side, i.e., in a region of extremely high $h\rm_{anion}$, it is highly probable that FeSe shows two or more different types of superconductivity under external pressure, as is the case with the pnictides. The extremely soft crystal structure of FeSe enables control of $h\rm_{anion}$ over a wide range, and the superconducting mechanism can be switched by the application of modest pressure. It would be interesting to explore the gap symmetry of FeSe at high pressure ($\sim$6 GPa) by NMR or muon spin rotation and to examine whether it differs from those of other iron-based superconductors. \section{SUMMARY} In this study, the precise pressure dependence of the electrical resistivity of FeSe was measured in the pressure range of 0-16.0 GPa at temperatures of 4-300 K by using a cubic-anvil-type high-pressure apparatus. $T\rm_c$ estimated from the zero-resistivity temperature shows a slightly distorted dome-shaped curve, with a maximum $T\rm_c$ of 30 K in the range $0 < P < 11.5$ GPa, which is lower than the values reported in previous studies. The temperature dependence of the resistivity above $T\rm_c$ changes dramatically at around 2$\sim$3 GPa; the resistivity curves become linear, which is one of the primary indicators of non-Fermi-liquid behavior and strongly suggests a phase transition between different superconducting states. A striking correlation is found between $T\rm_c$ and the anion (selenium) height: the lower the Se height, the higher is $T\rm_c$. Moreover, this relation is broadly applicable to other iron pnictides, indicating that high-temperature superconductivity in these materials appears only around the optimum anion height ($\sim$1.38$\rm\AA$). On the basis of these results, we suggest that the anion height should be considered a key determining factor of $T\rm_c$ in iron-based superconductors containing various anions.
\vspace{5mm} We would like to acknowledge Dr. K. Kuroki (The Univ. of Electro-Communications) for helpful discussions. This work was supported by the ``High-Tech Research Center'' Project for Private Universities from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), a Grant-in-Aid for Young Scientists (B) (No. 20740202), a Grant-in-Aid for Specially Promoted Research from MEXT, and the Asahi Glass Foundation.
\section{Introduction} Semileptonic tau decays represent a clean benchmark for studying the hadronization properties of QCD, since half of the process is purely electroweak and, therefore, free of uncertainties at the required precision \cite{Braaten:1991qm, Braaten:1988hc, Braaten:1988ea, Braaten:1990ef, Narison:1988ni, Pich:1989pq, Davier:2005xq, Pich:2013??}. At the (semi-)inclusive level this allows one to extract fundamental parameters of the Standard Model, most importantly the strong coupling $\alpha_S$ \cite{Davier:2008sk, Beneke:2008ad, Pich:2011bb, Boito:2012cr}. Tau decays containing kaons have been split into Cabibbo-allowed and Cabibbo-suppressed decays \cite{Barate:1999hj, Abbiendi:2004xa}, making possible determinations of the quark-mixing matrix element $|V_{us}|$ \cite{Maltman:2008ib, Antonelli:2013usa} and of the mass of the strange quark \cite{Chetyrkin:1998ej, Pich:1999hc, Korner:2000wd, Kambor:2000dj, Chen:2001qf, Gamiz:2002nu, Gamiz:2004ar, Baikov:2004tk, Gamiz:2007qs} at high precision.\\ At the exclusive level, the largest contribution to the strange spectral function comes from the $\tau^-\to (K\pi)^-\nu_\tau$ decays ($\sim 42\%$). The corresponding differential decay width was measured by the ALEPH \cite{Barate:1999hj} and OPAL \cite{Abbiendi:2004xa} collaborations, and recently the B-factories BaBar \cite{Aubert:2007jh} and Belle \cite{Epifanov:2007rf} have published measurements of increased accuracy. These high-quality data have motivated several refined studies of the related observables \cite{Jamin:2006tk, Moussallam:2007qc, Jamin:2008qg, Boito:2008fq, Boito:2010me}, allowing for precise determinations of the $K^\star(892)$ pole parameters, because this resonance gives most of the contribution to the dominant vector form factor.
The pole parameters were also determined for the $K^\star(1410)$ resonance, and the relative interference of both states was characterized, although with much less precision than in the case of the $K^\star(892)$ mass and width. In order to improve the knowledge of the strange spectral function, the $\tau^-\to \left(K\pi\pi(\pi)\right)^-\nu_\tau$ decays have to be better understood (they add up to one third of the strange decay width), the $\tau^-\to K^- \eta \nu_\tau$ and $\tau^-\to (K\pi)^- \eta \nu_\tau$ decays also being important for that purpose. The $K^-\eta$ mode is very sensitive to the $K^\star(1410)$ resonance contribution and may be competitive with the $\tau^-\to (K\pi)^-\nu_\tau$ decays in the extraction of its parameters. This is one of the motivations for our study of the $\tau^-\to K^- \eta^{(\prime)} \nu_\tau$ decays in this article. We will tackle the analysis of the $\tau^-\to \left(K\pi\right)^-\pi/\eta\,\nu_\tau$ decays along the lines employed in other three-meson \cite{GomezDumm:2003ku, Dumm:2009kj, Dumm:2009va, Dumm:2012vb} and one-meson radiative tau decays \cite{Guo:2010dv, Roig:2013??} elsewhere. The $\tau^-\to K^- \eta \nu_\tau$ decays were first measured by CLEO \cite{Bartelt:1996iv} and ALEPH \cite{Buskulic:1996qs} in the '90s. Only very recently have Belle \cite{Inami:2008ar} and BaBar \cite{delAmoSanchez:2010pc} managed to improve these measurements, reducing the branching fraction to essentially half of the CLEO and ALEPH results and decreasing the error by one order of magnitude. Belle \cite{Inami:2008ar} measured a branching ratio of $(1.58\pm0.05\pm0.09)\cdot10^{-4}$ and BaBar \cite{delAmoSanchez:2010pc} $(1.42\pm0.11\pm0.07)\cdot10^{-4}$, which combine to give the PDG average $(1.52\pm0.08)\cdot10^{-4}$ \cite{Beringer:1900zz}. The related decay $\tau^-\to K^- \eta^\prime \nu_\tau$ has not been detected yet, although an upper limit at the $90\%$ confidence level was placed by BaBar \cite{Lees:2012ks}.
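As a cross-check of the numbers quoted above, the PDG average can be reproduced by a standard inverse-variance weighted combination of the Belle and BaBar branching ratios, with statistical and systematic errors added in quadrature. The following Python snippet is an illustrative sketch, not the PDG's actual averaging procedure (which may include scale factors):

```python
# Sketch: inverse-variance weighted average of two measurements,
# each given as (value, statistical error, systematic error).
import math

def combine(measurements):
    """Weighted mean with stat and syst errors added in quadrature."""
    weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in measurements]
    mean = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# B(tau- -> K- eta nu_tau) in units of 1e-4, as quoted in the text.
belle = (1.58, 0.05, 0.09)
babar = (1.42, 0.11, 0.07)
mean, err = combine([belle, babar])
print(f"({mean:.2f} +- {err:.2f}) x 1e-4")  # reproduces (1.52 +- 0.08) x 1e-4
```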
Belle's paper \cite{Inami:2008ar} cites the few existing calculations of the $\tau^-\to K^- \eta \nu_\tau$ decays based on Chiral Lagrangians \cite{Pich:1987qq, Braaten:1989zn, Li:1996md, Aubrecht:1981cr} and concludes that `further detailed studies of the physical dynamics in $\tau$ decays with $\eta$ mesons are required' (see also, e.g. Ref.~\cite{Actis:2010gg})\footnote{Very recently, the $\tau^-\to K \pi/\eta \nu_\tau$ decays have been studied \cite{Kimura:2012}. However, no satisfactory description of the data can be achieved in both decay channels simultaneously.}. Our aim is to provide a more elaborate analysis which takes into account the advances in this field since the publication of the quoted references more than fifteen years ago. The considered $\tau^-\to K^- \eta^{(\prime)} \nu_\tau$ decays are currently modeled in TAUOLA \cite{Jadach:1990mz, Jadach:1993hs}, the standard Monte Carlo generator for tau lepton decays, relying on phase space. We would like to provide the library with Resonance Chiral Lagrangian-based currents \cite{Shekhovtsova:2012ra, Nugent:2013hxa} that describe these decays well, for their analyses and for the characterization of the backgrounds they constitute to searches for rarer tau decays and new physics processes. Our paper is organized as follows: the hadronic matrix element and the participating vector and scalar form factors are defined in section \ref{M.e. and decay width}, where the differential decay distribution in terms of the latter is also given. These form factors are derived within Chiral Perturbation Theory ($\chi PT$) \cite{Weinberg:1978kz, Gasser:1983yg, Gasser:1984gg} including resonances ($R\chi T$) \cite{Ecker:1988te, Ecker:1989yg} in section \ref{FFs}. Three different options according to the treatment of final-state interactions in these form factors are discussed in section \ref{FSI} and will be used in the remainder of the paper.
In section \ref{Pred Keta}, the $\tau^-\to K^-\eta\nu_\tau$ decay observables are predicted based on the knowledge of the $\tau^-\to (K\pi)^-\nu_\tau$ decays. These results are then improved in section \ref{Fit Keta} by fitting the BaBar and Belle $\tau^-\to K^-\eta\nu_\tau$ data. We provide our predictions on the $\tau^-\to K^-\eta^\prime\nu_\tau$ decays in section \ref{Pred Ketap} and present our conclusions in section \ref{Concl}. \section{Matrix elements and decay width}\label{M.e. and decay width} We fix our conventions from the general parametrization of the scalar and vector $K^+\eta^{(\prime)}$ matrix elements \cite{Gasser:1984ux}: \begin{equation}\label{general definition} \left\langle\eta^{(\prime)} \Big|\bar{s}\gamma^\mu u\Big| K^+\right\rangle=c^V_{K\eta^{(\prime)}}\left[\left(p_{\eta^{(\prime)}}+p_K\right)^\mu f_+^{K^+ \eta^{(\prime)}}(t) +(p_K-p_{\eta^{(\prime)}})^\mu f_-^{K ^+\eta^{(\prime)}}(t)\right]\,, \end{equation} where $t=(p_K-p_{\eta^{(\prime)}})^2$. From eq. (\ref{general definition}) one has \begin{equation}\label{m.e. fmas fmenos} \left\langle K^-\eta^{(\prime)} \Big|\bar{s}\gamma^\mu u\Big| 0\right\rangle=c^V_{K\eta^{(\prime)}}\left[\left(p_{\eta^{(\prime)}}-p_K\right)^\mu f_+^{K^- \eta^{(\prime)}}(s) -q^\mu f_-^{K ^-\eta^{(\prime)}}(s)\right]\,, \end{equation} with $q^\mu=\left(p_{\eta^{(\prime)}}+p_K\right)^\mu$, $s=q^2$ and $c^V_{K\eta^{(\prime)}}=-\sqrt{\frac{3}{2}}$. Instead of $f_-^{K^- \eta^{(\prime)}}(s)$ one can use $f_0^{K^- \eta^{(\prime)}}(s)$ defined through \begin{equation}\label{definition f0} \left\langle 0 \Big|\partial_\mu(\bar{s}\gamma^\mu u)\Big| K^-\eta^{(\prime)}\right\rangle=i(m_s-m_u)\left\langle 0 \Big|\bar{s}u\Big| K^-\eta^{(\prime)}\right\rangle\equiv i\Delta_{K\pi}c^S_{K^-\eta^{(\prime)}}f_0^{K ^-\eta^{(\prime)}}(s)\,, \end{equation} with \begin{equation} c^S_{K^-\eta}= -\frac{1}{\sqrt{6}}\,,\quad c^S_{K^-\eta^\prime}= \frac{2}{\sqrt{3}}\,,\quad \Delta_{PQ}=m_P^2-m_Q^2\,. 
\end{equation} The mass renormalization $m_s-\bar{m}$ in $\chi PT$ (or $R\chi T$) needs to be taken into account to define $f_0^{K ^-\eta^{(\prime)}}(s)$ and $\bar{m}=(m_d+m_u)/2$ has been introduced. We will take $\Delta_{K\pi}\Big|^{QCD}=\Delta_{K\pi}$, which is an excellent approximation. From eqs.~(\ref{m.e. fmas fmenos}) and (\ref{definition f0}) one gets \begin{equation}\label{Had m.e.} \left\langle K^-\eta^{(\prime)} \Big|\bar{s}\gamma^\mu u\Big| 0\right\rangle=\left[\left(p_{\eta^{(\prime)}}-p_K\right)^\mu +\frac{\Delta_{K \eta^{(\prime)}}}{s}q^\mu\right]c^V_{K^-\eta^{(\prime)}}f_+^{K^-\eta^{(\prime)}}(s)+ \frac{\Delta_{K \pi}}{s}q^\mu c^S_{K^-\eta^{(\prime)}} f_0^{K^-\eta^{(\prime)}}(s)\,, \end{equation} and the normalization condition \begin{equation}\label{condition origin} f_+^{K^-\eta^{(\prime)}}(0)=-\frac{c^S_{K^-\eta^{(\prime)}}}{c^V_{K^-\eta^{(\prime)}}}\frac{\Delta_{K\pi}}{\Delta_{K\eta^{(\prime)}}}f_0^{K^-\eta^{(\prime)}}(0)\,, \end{equation} which is obtained from \begin{equation} f_-^{K ^-\eta^{(\prime)}}(s)=-\frac{\Delta_{K\eta^{(\prime)}}}{s}\left[\frac{c^S_{K^-\eta^{(\prime)}}}{c^V_{K^-\eta^{(\prime)}}}\frac{\Delta_{K\pi}}{\Delta_{K\eta^{(\prime)}}}f_0^{K^-\eta^{(\prime)}}(s)+f_+^{K^-\eta^{(\prime)}}(s)\right]\,. 
\end{equation} In terms of these form factors, the differential decay width reads \begin{eqnarray} \label{spectral function} & & \frac{d\Gamma\left(\tau^-\to K^-\eta^{(\prime)}\nu_\tau\right)}{d\sqrt{s}} = \frac{G_F^2M_\tau^3}{32\pi^3s}S_{EW}\Big|V_{us}f_+^{K^-\eta^{(\prime)}}(0)\Big|^2 \left(1-\frac{s}{M_\tau^2}\right)^2\\ & & \left\lbrace\left(1+\frac{2s}{M_\tau^2}\right)q_{K\eta^{(\prime)}}^3(s)\Big|\widetilde{f}_+^{K^-\eta^{(\prime)}}(s)\Big|^2+\frac{3\Delta_{K\eta^{(\prime)}}^2}{4s}q_{K\eta^{(\prime)}}(s)\Big|\widetilde{f}_0^{K^-\eta^{(\prime)}}(s)\Big|^2\right\rbrace\,,\nonumber \end{eqnarray} where \begin{eqnarray}\label{definitions} & & q_{PQ}(s)=\frac{\sqrt{s^2-2s\Sigma_{PQ}+\Delta_{PQ}^2}}{2\sqrt{s}}\,,\quad \sigma_{PQ}(s)=\frac{2q_{PQ}(s)}{\sqrt{s}}\theta\left(s-(m_P+m_Q)^2\right)\,,\nonumber\\ & & \Sigma_{PQ}=m_P^2+m_Q^2\ ,\quad \widetilde{f}_{+,0}^{K^-\eta^{(\prime)}}(s)=\frac{f_{+,0}^{K^-\eta^{(\prime)}}(s)}{f_{+,0}^{K^-\eta^{(\prime)}}(0)}\,, \end{eqnarray} and $S_{EW} = 1.0201$ \cite{Erler:2002mv} represents an electroweak correction factor. We have considered the $\eta-\eta^\prime$ mixing up to next-to-leading order in the combined expansion in $p^2$, $m_q$ and $1/N_C$ \cite{Kaiser:1998ds, Kaiser:2000gs} (see the next section for the introduction of the large-$N_C$ limit of QCD \cite{'tHooft:1973jz, 'tHooft:1974hx, Witten:1979kh} applied to the light-flavoured mesons). In this way one finds $\Big|V_{us}f_+^{K^-\eta}(0)\Big|=\Big|V_{us}f_+^{K^-\pi^0}(0)\mathrm{cos}\theta_P\Big|$ and $\Big|V_{us}f_+^{K^-\eta^\prime}(0)\Big|=\Big|V_{us}f_+^{K^-\pi^0}(0)\mathrm{sin}\theta_P\Big|$, where $\theta_P=(-13.3\pm1.0)^\circ$ \cite{Ambrosino:2006gk}. The best access to $\Big|V_{us}f_+^{K^-\pi^0}(0)\Big|$ is through semileptonic kaon decay data. We will use the value $0.21664\pm0.00048$ \cite{Beringer:1900zz, Antonelli:2010yf}.
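The kinematic functions $q_{PQ}(s)$ and $\sigma_{PQ}(s)$ defined above are straightforward to evaluate numerically. The following sketch implements them for the $K\eta$ channel, using approximate PDG-like meson masses (an assumption for illustration, not values taken from this paper):

```python
# Sketch: the two-body momentum q_PQ(s) and phase-space factor sigma_PQ(s)
# defined in the text, evaluated for the K-eta channel.
import math

M_K, M_ETA = 0.4937, 0.5479  # GeV; assumed approximate charged-kaon and eta masses

def q_PQ(s, mP, mQ):
    """Momentum of either meson in the PQ rest frame."""
    sigma = mP**2 + mQ**2            # Sigma_PQ
    delta = mP**2 - mQ**2            # Delta_PQ
    # max(0, ...) guards against tiny negative round-off exactly at threshold
    return math.sqrt(max(0.0, s * s - 2.0 * s * sigma + delta * delta)) / (2.0 * math.sqrt(s))

def sigma_PQ(s, mP, mQ):
    """Phase-space factor, vanishing below threshold s = (mP + mQ)^2."""
    if s <= (mP + mQ)**2:
        return 0.0
    return 2.0 * q_PQ(s, mP, mQ) / math.sqrt(s)

s_thr = (M_K + M_ETA)**2
print(q_PQ(s_thr, M_K, M_ETA))    # ~0: the momentum vanishes at threshold
print(sigma_PQ(1.5, M_K, M_ETA))  # positive above threshold
```

One can check analytically that $s^2 - 2s\Sigma_{PQ} + \Delta_{PQ}^2$ vanishes at $s = (m_P + m_Q)^2$, so the momentum closes the phase space at threshold, as the code confirms.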
Eq.~(\ref{spectral function}) makes manifest that the unknown strong-interaction dynamics is encoded in the tilded form factors, $\widetilde{f}_{+,0}^{K^-\eta^{(\prime)}}(s)$, which will be the subject of our analysis in the following section. We will see in particular that the use of $\widetilde{f}_{+,0}^{K^-\eta^{(\prime)}}(s)$ instead of the untilded form factors yields more compact expressions that are symmetric under the exchange $\eta\leftrightarrow\eta^\prime$, see eqs.~(\ref{RChT VFFs}) and (\ref{RChT SFFs def}). \section{Scalar and vector form factors in $\boldsymbol{\chi PT}$ with resonances}\label{FFs} Although there is no analytic method to derive the $\widetilde{f}_{+,0}^{K^-\eta^{(\prime)}}(s)$ form factors directly from the QCD Lagrangian, its symmetries are nevertheless useful to reduce the model dependence to a minimum and keep as many properties of the fundamental theory as possible. $\chi PT$ \cite{Weinberg:1978kz, Gasser:1983yg, Gasser:1984gg}, the effective field theory of QCD at low energies, is built as an expansion in even powers of the ratio of the momenta or masses of the lightest pseudoscalar mesons over the chiral symmetry breaking scale, which is of the order of one GeV. As one approaches the energy region where new degrees of freedom -the lightest meson resonances- become active, $\chi PT$ ceases to provide a good description of the physics (even including higher-order corrections \cite{Bijnens:1999sh, Bijnens:1999hw, Bijnens:2001bb}) and these resonances must be incorporated into the action of the theory. This is done without any ad hoc dynamical assumption by $R\chi T$ in the convenient antisymmetric tensor formalism, which avoids the introduction of local $\chi PT$ terms at next-to-leading order in the chiral expansion since their contribution is recovered upon integrating the resonances out \cite{Ecker:1988te, Ecker:1989yg}.
The building of the Resonance Chiral Lagrangians is driven by the spontaneous symmetry breakdown of QCD realized in the meson sector, the discrete symmetries of the strong interaction and unitary symmetry for the resonance multiplets. The expansion parameter of the theory is the inverse of the number of colours of the gauge group, $1/N_C$. Despite $N_C$ not being small in the real world, the fact that phenomenology supports this approach to QCD \cite{Manohar:1998xv, Pich:2002xy} hints that the associated coefficients of the expansion are small enough to warrant a meaningful perturbative approach based on it. At leading order in this expansion there is an infinite number of radial excitations for each resonance with otherwise the same quantum numbers that are strictly stable and interact through local effective vertices only at tree level. The relevant effective Lagrangian for the lightest resonance nonets reads~\footnote{We comment on its extension to the infinite spectrum predicted in the $N_C\to\infty$ limit in the paragraph below eq.~(\ref{JOP FFs}).}: \begin{eqnarray} \label{eq:ret} {\cal L}_{\rm R\chi T} & \doteq & {\cal L}_{\rm kin}^{\rm V,S}\, + \, \frac{F^2}{4}\langle u_{\mu} u^{\mu} + \chi _+\rangle \, + \, \frac{F_V}{2\sqrt{2}} \langle V_{\mu\nu} f_+^{\mu\nu}\rangle\,+\,i \,\frac{G_V}{\sqrt{2}} \langle V_{\mu\nu} u^\mu u^\nu\rangle \,+\, c_d \langle S u_{\mu} u^{\mu}\rangle \,+\, c_m \langle S \chi_ +\rangle\,,\nonumber\\ \label{lagrangian} \end{eqnarray} where all coupling constants are real, $F$ is the pion decay constant and we follow the conventions of Ref.~\cite{Ecker:1988te}. 
Accordingly, $\langle \rangle$ stands for trace in flavour space, and $u^\mu$, $\chi_+$ and $f_+^{\mu\nu}$ are defined by \begin{eqnarray} u^\mu & = & i\,u^\dagger\, D^\mu U\, u^\dagger \,,\nonumber \\ \chi_\pm & = & u^\dagger\, \chi \, u^\dagger\, \pm u\,\chi^\dagger\, u\,, \nonumber \\ f_\pm^{\mu\nu} & = & u^\dagger\, F_L^{\mu\nu}\, u^\dagger\, \pm u\,F_R^{\mu\nu}\, u \ , \end{eqnarray} where $u$ ($U=u^2$), $\chi$ and $F_{L,R}^{\mu\nu}$ are $3\times 3$ matrices that contain the light pseudoscalar fields, the current quark masses and the external left and right currents, respectively. The matrix $V^{\mu\nu}$ ($S$) includes the lightest vector (scalar) meson multiplet \footnote{In the $N_C\to\infty$ limit of QCD the lightest scalar meson multiplet does not correspond to the one including the $f_0(600)$ (or $\sigma$ meson) \cite{Cirigliano:2003yq}, but rather to the one including the $f_0(1370)$ resonance.}, and ${\cal L}_{\rm kin}^{\rm V,S}$ stands for the kinetic terms of these resonances. We note that resonances with other quantum numbers, such as the axial-vector and pseudoscalar resonances, which have the wrong parity, do not contribute to the considered processes. The computation of the vector form factors yields \begin{equation} \label{RChT VFFs} \tilde{f}_+^{K^-\eta}(s)=\frac{f_+^{K^-\eta}(s)}{f_+^{K^-\eta}(0)}=1+\frac{F_V G_V}{F^2}\frac{s}{M_{K^\star}^2-s}\,=\frac{f_+^{K^-\eta^\prime}(s)}{f_+^{K^-\eta^\prime}(0)}=\, \tilde{f}_+^{K^-\eta^\prime}(s)\,, \end{equation} because $f_+^{K^-\eta}(0)=\cos\theta_{P}$ and $f_+^{K^-\eta^\prime}(0)=\sin\theta_{P}$. We recall that the normalization of the $K\pi$ vector form factor, $f_+^{K^-\pi}(0)$, was pre-factored in eq.~(\ref{spectral function}) together with $|V_{us}|$. The strangeness-changing scalar form factors and the associated S-wave scattering within $R\chi T$ have been investigated in a series of papers by Jamin, Oller and Pich \cite{Jamin:2000wn, Jamin:2001zq, Jamin:2006tk,Jamin:2006tj} (see also Ref.~\cite{Bernard:1990kw}).
The computation of the scalar form factors gives: \begin{eqnarray}\label{RChT SFFs} && \tilde{f}_0^{K^-\eta}(s)=\frac{f_0^{K^-\eta}(s)}{f_0^{K^-\eta}(0)}=\frac{1}{f_0^{K^-\eta}(0)}\left[\cos\theta_{P}f_0^{K^-\eta_8}(s)\Big|_{\eta_8\to\eta}+2\sqrt{2}\sin\theta_Pf_0^{K^-\eta_1}(s)\Big|_{\eta_1\to\eta}\right]\,,\;\;\;\;\;\;\;\;\\ && \tilde{f}_0^{K^-\eta^\prime}(s)=\frac{f_0^{K^-\eta^\prime}(s)}{f_0^{K^-\eta^\prime}(0)}=\frac{1}{f_0^{K^-\eta^\prime}(0)}\left[\cos\theta_Pf_0^{K^-\eta_1}(s)\Big|_{\eta_1\to\eta^\prime}-\frac{1}{2\sqrt{2}}\sin\theta_Pf_0^{K^-\eta_8}(s)\Big|_{\eta_8\to\eta^\prime}\right]\,,\nonumber \end{eqnarray} and can be written in terms of the $f_0^{K^-\eta_8}(s)$, $f_0^{K^-\eta_1}(s)$ form factors computed in Ref.~\cite{Jamin:2001zq}: \begin{eqnarray}\label{JOP FFs} f_0^{K^-\eta_8}(s) & = & 1+\frac{4 c_m}{F^2(M_S^2-s)}\left[c_d(s-m_K^2-p_{\eta_8}^2)+c_m(5m_K^2-3m_\pi^2)\right]+\frac{4c_m(c_m-c_d)}{F^2M_S^2}(3m_K^2-5m_\pi^2)\,,\nonumber\\ f_0^{K^-\eta_1}(s) & = & 1+\frac{4c_m}{F^2(M_S^2-s)}\left[c_d(s-m_K^2-p_{\eta_1}^2)+c_m2m_K^2\right]-\frac{4c_m(c_m-c_d)}{F^2 M_S^2}2m_\pi^2\,, \end{eqnarray} where, for the considered flavour indices, $S$ should correspond to the $K^\star_0(1430)$ resonance. Besides $f_0^{K^-\pi}(0)=f_+^{K^-\pi}(0)$ (see the comment below equation~(\ref{RChT VFFs})), it has also been used that \begin{eqnarray} f_0^{K^-\eta}(0) & = & \cos\theta_{P}\left(1+\frac{\Delta_{K\eta}+3\Delta_{K\pi}}{M_S^2}\right)+2\sqrt{2}\sin\theta_P\left(1+\frac{\Delta_{K\eta}}{M_S^2}\right)\,,\nonumber\\ f_0^{K^-\eta^\prime}(0) & = & \cos\theta_{P}\left(1+\frac{\Delta_{K\eta^\prime}}{M_S^2}\right)-\frac{\sin\theta_P}{2\sqrt{2}}\left(1+\frac{\Delta_{K\eta^\prime}+3\Delta_{K\pi}}{M_S^2}\right)\,.
\end{eqnarray} Indeed, using our conventions, the tilded scalar form factors become simply \begin{equation}\label{RChT SFFs def} \tilde{f}_0^{K^-\eta}(s)=\frac{f_0^{K^-\eta}(s)}{f_0^{K^-\eta}(0)}=1+\frac{4c_d c_m}{F^2}\frac{s}{M_S^2-s}\,=\frac{f_0^{K^-\eta^\prime}(s)}{f_0^{K^-\eta^\prime}(0)}=\, \tilde{f}_0^{K^-\eta^\prime}(s)\,, \end{equation} which is more compact than eqs.~(\ref{RChT SFFs}), (\ref{JOP FFs}) and displays the same $\eta\leftrightarrow\eta^\prime$ symmetry as the vector form factors in eq.~(\ref{RChT VFFs}). The computation of the leading order amplitudes in the large-$N_C$ limit within $R\chi T$ demands, however, the inclusion of an infinite tower of resonances per set of quantum numbers \footnote{We point out that there is no limitation in the $R\chi T$ Lagrangians in this respect. In particular, a second multiplet of resonances has been introduced in the literature \cite{SanzCillero:2002bs, Mateu:2007tr} and bi- and tri-linear operators in resonance fields have been used \cite{GomezDumm:2003ku, RuizFemenia:2003hm, Cirigliano:2004ue, Cirigliano:2005xn, Cirigliano:2006hb, Kampf:2011ty}.}. Although the masses of the large-$N_C$ states depart slightly from those of the actually measured particles \cite{Masjuan:2007ay}, only the second vector state, i.e.~the $K^\star(1410)$ resonance, will have some impact on the considered decays. Accordingly, we will replace the vector form factor in eq.~(\ref{RChT VFFs}) by \begin{equation} \label{RChT VFFs2Res} \tilde{f}_+^{K^-\eta^{(\prime)}}(s)=1+\frac{F_V G_V}{F^2}\frac{s}{M_{K^\star}^2-s}+\frac{F_V^\prime G_V^\prime}{F^2}\frac{s}{M_{K^\star\prime}^2-s}\,, \end{equation} where the operators with couplings $F_V^\prime$ and $G_V^\prime$ are defined in analogy with the corresponding unprimed couplings in eq.~(\ref{lagrangian}).
If we require that the $f_+^{K^-\eta^{(\prime)}}(s)$ and $f_0^{K^-\eta^{(\prime)}}(s)$ form factors vanish for $s\to\infty$ at least as $1/s$ \cite{Lepage:1979zb, Lepage:1980fj}, we obtain the short-distance constraints \begin{equation}\label{shortdistance} F_V G_V + F_V^\prime G_V^\prime= F^2\,,\quad 4 c_d c_m = F^2\,,\quad c_d-c_m=0\,, \end{equation} which yield the form factors \begin{eqnarray} \label{FFs with short distance constraints} & & \tilde{f}_+^{K^-\eta}(s)=\frac{M_{K^\star}^2+\gamma s}{M_{K^\star}^2-s}-\frac{\gamma s}{M_{K^\star\prime}^2-s}= \tilde{f}_+^{K^-\eta^\prime}(s)\,,\\ & & \tilde{f}_0^{K^-\eta}(s) = \frac{M_S^2}{M_S^2-s}\,=\,\tilde{f}_0^{K^-\eta^\prime}(s)\,,\nonumber \end{eqnarray} where $\gamma=-\frac{F_V^\prime G_V^\prime}{F^2}=\frac{F_VG_V}{F^2}-1$ \cite{Jamin:2006tk, Jamin:2008qg, Boito:2008fq, Boito:2010me}. We note that we are disregarding the modifications introduced by the heavier resonance states to the relation (\ref{shortdistance}) and to the definition of $\gamma$. \section{Different form factors according to the treatment of final-state interactions}\label{FSI} The form factors in eqs.~(\ref{FFs with short distance constraints}) diverge when the exchanged resonance is on mass shell and, consequently, cannot represent the underlying dynamics, which may peak in the resonance region but certainly does not show a singular behaviour. This is solved by considering a next-to-leading order effect in the large-$N_C$ counting, namely a non-vanishing resonance width \footnote{Other corrections at this order are neglected. Phenomenology seems to support that this is the predominant contribution.}. Moreover, since the participating resonances are not narrow, an energy-dependent width needs to be considered. A precise formalism-independent definition of the off-shell vector resonance width within $R\chi T$ has been given in Ref.~\cite{GomezDumm:2000fz} and employed successfully in a variety of phenomenological studies.
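As a quick cross-check of the constrained vector form factor, the following minimal Python sketch (ours; the parameter values are illustrative, with $\gamma$ of the size quoted later in the text) verifies numerically that eq.~(\ref{FFs with short distance constraints}) coincides with eq.~(\ref{RChT VFFs2Res}) once $F_VG_V/F^2=1+\gamma$ and $F_V^\prime G_V^\prime/F^2=-\gamma$, and that the normalization $\tilde{f}_+(0)=1$ holds:

```python
def f_plus_tilde(s, gamma, MV, MVp):
    """Vector form factor of eq. (FFs with short distance constraints)."""
    return (MV**2 + gamma * s) / (MV**2 - s) - gamma * s / (MVp**2 - s)

def f_plus_two_res(s, FVGV_over_F2, FVpGVp_over_F2, MV, MVp):
    """Eq. (RChT VFFs2Res), before imposing the short-distance constraints."""
    return 1.0 + FVGV_over_F2 * s / (MV**2 - s) + FVpGVp_over_F2 * s / (MVp**2 - s)

MV, MVp, gamma = 0.892, 1.28, -0.03   # GeV; illustrative values
for s in (0.2, 0.5, 0.7):
    a = f_plus_tilde(s, gamma, MV, MVp)
    # F_V G_V/F^2 = 1 + gamma and F_V' G_V'/F^2 = -gamma:
    b = f_plus_two_res(s, 1.0 + gamma, -gamma, MV, MVp)
    assert abs(a - b) < 1e-9
print(f_plus_tilde(0.0, gamma, MV, MVp))   # -> 1.0: correct normalization at s = 0
```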
The application of this width definition to the $K^*(892)$ resonance gives \begin{eqnarray}\label{K^* width predicted} \Gamma_{K^*}(s) & = & \frac{G_V^2 M_{K^*} s}{64 \pi F^4} \bigg[\sigma_{K\pi}^3(s)+ \cos^2\theta_P \sigma_{K\eta}^3(s) + \sin^2\theta_P\sigma_{K\eta^\prime}^3(s)\bigg] \,, \end{eqnarray} where $\sigma_{PQ}(s)$ was defined in eq.~(\ref{definitions}). Several analyses of the $\pi\pi$ \cite{SanzCillero:2002bs, Pich:2001pj, Dumm:2013zh} and $K\pi$ \cite{Jamin:2008qg, Boito:2008fq, Boito:2010me} form factors, where the $\rho(770)$ and $K^\star(892)$ prevail respectively, have probed the energy-dependent width of these resonances with precision. Although the predicted width \cite{Guerrero:1997ku} turns out to be quite accurate, it is not optimal for achieving a very precise description of the data; instead, it is better to allow (as we will do in the remainder of the paper) the on-shell width to be a free parameter and write \begin{eqnarray}\label{K^* width} \Gamma_{K^*}(s) & = & \Gamma_{K^*}\frac{s}{M_{K^*}^2}\frac{\sigma_{K\pi}^3(s)+\cos^2\theta_P \sigma_{K\eta}^3(s) + \sin^2\theta_P\sigma_{K\eta^\prime}^3(s)}{\sigma_{K\pi}^3(M_{K^*}^2)}\,, \end{eqnarray} where it has been taken into account that at the $M_{K^*}$ scale the only absorptive cut is given by the elastic contribution. In the case of the $K^\star(1410)$ resonance there is no guarantee that the $KP$ ($P=\pi$, $\eta$, $\eta^\prime$) cuts contribute in the proportion given in eqs.~(\ref{K^* width predicted}) and (\ref{K^* width}). We will assume that the lightest $K\pi$ cut dominates and use throughout that \begin{equation}\label{Kstarprimewidth} \Gamma_{K^{\star\prime}}(s)\,=\,\Gamma_{K^{\star\prime}}\frac{s}{M_{K^{\star\prime}}^2}\frac{\sigma_{K\pi}^3(s)}{\sigma_{K\pi}^3(M_{K^{\star\prime}}^2)}\,. \end{equation} The scalar resonance width can also be computed similarly in $R\chi T$ \cite{Ecker:1988te, GomezDumm:2000fz}.
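The energy dependence of eq.~(\ref{K^* width}) can be sketched numerically as follows (a toy implementation with rounded, illustrative masses; not the analysis code). Below the $K\eta$ threshold only the elastic $K\pi$ cut is open, so the width reduces to its on-shell value at $s=M_{K^*}^2$:

```python
from math import sqrt, cos, sin, radians

def sigma_cut(s, mP, mQ):
    """2 q_PQ(s)/sqrt(s) above the PQ threshold, zero below (cf. eq. (definitions))."""
    if s <= (mP + mQ)**2:
        return 0.0
    return sqrt(s**2 - 2.0 * s * (mP**2 + mQ**2) + (mP**2 - mQ**2)**2) / s

def Gamma_Kstar(s, M, Gamma_on_shell, theta_P_deg=-13.3):
    """Energy-dependent K*(892) width of eq. (K^* width)."""
    mpi, mK, meta, metap = 0.1350, 0.4937, 0.5479, 0.9578   # GeV, illustrative
    tp = radians(theta_P_deg)
    num = (sigma_cut(s, mK, mpi)**3
           + cos(tp)**2 * sigma_cut(s, mK, meta)**3
           + sin(tp)**2 * sigma_cut(s, mK, metap)**3)
    return Gamma_on_shell * (s / M**2) * num / sigma_cut(M**2, mK, mpi)**3

M, G0 = 0.892, 0.0462   # GeV, illustrative on-shell values
print(Gamma_Kstar(M**2, M, G0))   # -> 0.0462: only the elastic cut is open at s = M^2
```

Well above the inelastic thresholds the $K\eta$ and $K\eta^\prime$ cuts switch on through the theta functions in $\sigma_{PQ}(s)$.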
For the $K^\star_0(1430)$, this scalar width reads \begin{equation}\label{Gamma S computed} \Gamma_{S}(s)\,=\,\Gamma_{S_0}\left(M_S^2\right)\left(\frac{s}{M_S^2}\right)^{3/2}\frac{g(s)}{g\left(M_S^2\right)}\,, \end{equation} with \begin{eqnarray}\label{g(s)} g(s) & = & \frac{3}{2}\sigma_{K\pi}(s)+\frac{1}{6}\sigma_{K\eta}(s)\left[\cos\theta_P\left(1+\frac{3\Delta_{K\pi}+\Delta_{K\eta}}{s}\right)+2\sqrt{2}\sin\theta_P\left(1+\frac{\Delta_{K\eta}}{s}\right)\right]^2\nonumber\\ & & +\frac{4}{3}\sigma_{K\eta^\prime}(s)\left[\cos\theta_P\left(1+\frac{\Delta_{K\eta^\prime}}{s}\right)-\frac{\sin\theta_P}{2\sqrt{2}}\left(1+\frac{3\Delta_{K\pi}+\Delta_{K\eta^\prime}}{s}\right)\right]^2\,. \end{eqnarray} At this point, different options for the inclusion of the resonance widths arise. The simplest prescription is to replace $M_R^2-s$ by $M_R^2-s-iM_R\Gamma_R(s)$ in eqs.~(\ref{FFs with short distance constraints}). We shall call this option the `dipole model', or simply the `Breit-Wigner (BW) model'. One should pay attention to the fact that the analyticity of a quantum field theory imposes certain relations between the real and imaginary parts of the amplitudes. In particular, there is one between the real and imaginary parts of the relevant two-point function. At the one-loop level its imaginary part is proportional to the meson width, but the real part (which is neglected in this model) is non-vanishing. As a result, the Breit-Wigner treatment breaks analyticity at the leading non-trivial order. Instead, one can try to devise a mechanism that keeps the complete complex two-point function. Ref.~\cite{Guerrero:1997ku} used an Omn\`es resummation of final-state interactions in the vector form factor that was consistent with analyticity at next-to-leading order. The associated violations were small and consequently neglected in their study of the $\pi\pi$ observables.
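The dipole ('Breit-Wigner') replacement just described can be illustrated with a one-line sketch (toy values; not the paper's code): the complex denominator keeps the modulus of the propagator finite at the resonance peak, where it equals $M/\Gamma$:

```python
def bw_denominator(s, M, Gamma_of_s):
    """Dipole-model replacement M_R^2 - s  ->  M_R^2 - s - i M_R Gamma_R(s)."""
    return complex(M**2 - s, -M * Gamma_of_s(s))

M, G = 0.892, 0.0462   # GeV, illustrative values
f_peak = abs(M**2 / bw_denominator(M**2, M, lambda s: G))
print(f_peak)   # finite (= M/Gamma, about 19.3) instead of the pole of the width-less form
```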
This Omn\`es-resummation strategy was also exported to the $K\pi$ decays of the $\tau$ in Refs.~\cite{Jamin:2006tk, Jamin:2008qg}, where it yielded remarkable agreement with the data. We will call this approach to the vector form factor `the exponential parametrization' (since it exponentiates the real part of the relevant loop function) and refer to it by the initials of the authors who studied the $K\pi$ system along these lines, `JPP'. A decade later, a construction that ensures the analyticity of the vector form factor exactly was put forward in Ref.~\cite{Boito:2008fq} and applied successfully to the study of the $K\pi$ tau decays. It is a dispersive representation of the form factor where the input phase shift, which resums the whole loop function in the denominator of eq.~(\ref{FFs with short distance constraints}), is proportional to the ratio of the imaginary and real parts of this form factor. This method also succeeded in its application to the di-pion system \cite{Dumm:2013zh}, where it was rephrased in a way which makes chiral symmetry manifest at next-to-leading order. We will name this method the `dispersive representation' or `BEJ', after the authors who pioneered it in the $K\pi$ system. We would like to stress that the Breit-Wigner model is consistent with $\chi PT$ only at leading order, while the exponential parametrization (JPP) and the dispersive representation (BEJ) reproduce the chiral limit results up to next-to-leading order and include the dominant contributions at the next order \cite{Guerrero:1998hd}. In the dispersive approach to the study of the di-pion and kaon-pion systems it was possible to achieve a unitary description in the elastic region that could be extended up to $s_{inel}=4m_K^2$ (the $4\pi$ cut, which is phase-space and large-$N_C$ suppressed, is safely neglected) and $s_{inel}=(m_K+m_\eta)^2$, respectively.
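A schematic, purely illustrative implementation of a three-times subtracted dispersive form factor in the spirit of BEJ (toy subtraction constants $a_1$, $a_2$, a hypothetical input phase and a simple trapezoidal quadrature; the actual construction used in this work is given in appendix \ref{app}) could look like:

```python
from math import pi, exp

def dispersive_ff(s, delta, s_thr, s_cut, a1, a2, n=2000):
    """Schematic 3-subtracted dispersive form factor:
    f(s) = exp[a1 s + a2 s^2/2 + (s^3/pi) Int_{s_thr}^{s_cut} delta(s')/(s'^3 (s'-s)) ds'].
    As written it is valid for s below s_thr, where no i*epsilon prescription is needed."""
    h = (s_cut - s_thr) / n
    integral = 0.0
    for i in range(n + 1):                 # trapezoidal rule
        sp = s_thr + i * h
        w = 0.5 if i in (0, n) else 1.0
        integral += w * delta(sp) / (sp**3 * (sp - s))
    integral *= h
    return exp(a1 * s + a2 * s**2 / 2.0 + s**3 / pi * integral)

# A hypothetical input phase rising smoothly towards pi (purely illustrative):
delta_toy = lambda sp: pi * sp / (sp + 1.0)
print(dispersive_ff(0.2, delta_toy, 0.4, 4.0, 1.5, 0.5))
```

The three subtractions make the integrand fall off fast, so the result depends only mildly on the cutoff $s_{cut}$; by construction $f(0)=1$ and, for vanishing phase, $f$ reduces to the pure polynomial exponential.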
Most devoted studies of these form factors neglect -in one way or another- inelasticities and coupled-channel effects beyond $s_{inel}$ \footnote{See, however, Ref.~\cite{Moussallam:2007qc}, which includes coupled channels for the $K\pi$ vector form factor.}, an approximation that seems to be supported by the impressive agreement with the data that is achieved. However, this simplification seems questionable in the case of the $\tau^-\to K^-\eta^{(\prime)}\nu_\tau$ decays, where we are concerned with the first (second) inelastic cuts. An advisable solution may come from the technology developed for the scalar form factors, which were analyzed in a coupled-channel approach in Refs.~\cite{Jamin:2001zq, Jamin:2001zr, Jamin:2006tj} (for the strangeness-changing form factors) \footnote{We will use these unitarized scalar form factors instead of the one in eq.~(\ref{FFs with short distance constraints}) in the JPP and BEJ treatments (see above).} and \cite{Guo:2012ym, Guo:2012yt} (for the strangeness-conserving ones), unitarizing $SU(3)$ and $U(3)$ (respectively) $\chi PT$ with explicit exchange of resonances \cite{Guo:2011pa}. However, given the large errors of the $\tau^-\to K^-\eta\nu_\tau$ decay spectra measured by the BaBar \cite{delAmoSanchez:2010pc} and Belle \cite{Inami:2008ar} Collaborations and the absence of data on the $K^-\eta^{\prime}$ channel, we consider that it is not timely to perform such a cumbersome numerical analysis in the absence of enough experimental guidance \footnote{One could complement this poorly known sector with the information from meson-meson scattering on the relevant channels \cite{GomezNicola:2001as}. Our research at next-to-leading order in the $1/N_C$ expansion treating consistently the $\eta-\eta^\prime$ mixing \cite{Kaiser:1998ds, Kaiser:2000gs, Escribano:2010wt} is in progress.}.
For this reason we have attempted to obviate the inherent inelasticity of the $K\eta^{(\prime)}$ channels and tried an elastic description, where the form factor that defines the input phaseshift is given by eq.~(\ref{FFs with short distance constraints}) with $\Gamma_{K^\star}(s)$ defined analogously to $\Gamma_{K^{\star\prime}}(s)$, i.e., neglecting the inelastic cuts. We anticipate that the accord with data supports this procedure until more precise measurements demand a better approximation. Let us recapitulate the different alternatives for the treatment of final-state interactions that will be employed in sections \ref{Pred Keta}-\ref{Pred Ketap} to study the $\tau^-\to K^-\eta^{(\prime)}\nu_\tau$ decays. The relevant form factors will be obtained from eqs.(\ref{FFs with short distance constraints}) in each case by: \begin{itemize} \item Dipole model (Breit-Wigner): $M_R^2-s$ will be replaced by $M_R^2-s-iM_R\Gamma_R(s)$ with $\Gamma_{K^\star}(s)$ and $\Gamma_S(s)$ given by eqs. (\ref{K^* width}) and (\ref{Gamma S computed}). \item Exponential parametrization (JPP): The Breit-Wigner vector form factor described above is multiplied by the exponential of the real part of the loop function. The unitarized scalar form factor \cite{Jamin:2001zq} will be employed. The relevant formulae can be found in appendix \ref{app}. \item Dispersive representation (BEJ): A three-times subtracted dispersion relation will be used for the vector form factor. The input phaseshift will be defined using the vector form factor in eq.~(\ref{FFs with short distance constraints}) with $\Gamma_{K^\star}(s)$ including only the $K\pi$ cut and resumming also the real part of the loop function in the denominator. The unitarized scalar form factor will be used \cite{Jamin:2001zq}. More details can be found in appendix \ref{app}. 
\end{itemize} \section{Predictions for the $\boldsymbol{\tau^-\to K^-\eta\nu_\tau}$ decays}\label{Pred Keta} We note that eqs.~(\ref{FFs with short distance constraints}) also hold for the $\tilde{f}_{+,0}^{K^-\pi}(s)$ form factors (see eq.~(\ref{spectral function}) and comments below, as well). Therefore, in principle the knowledge of these form factors in the $K\pi$ system can be transferred to the $K\eta^{(\prime)}$ systems immediately, thus taking advantage of the larger statistics accumulated in the former and their sensitivity to the $K^\star(892)$ properties. This is certainly true in the case of the vector form factor in its assorted versions and in the scalar Breit-Wigner form factor. However, for the BEJ and JPP scalar form factors one has to bear in mind (see appendix \ref{app:both}) that the $KP$ ($P=\pi^0,\,\eta,\,\eta^\prime$) scalar form factors are obtained by solving the coupled-channel problem, which breaks the universality of the $\tilde{f}_0^{K^-P}(s)$ form factors as a result of the unitarization procedure. As a consequence, our application of the $\tilde{f}_0^{K^-\eta^{(\prime)}}(s)$ form factors to the $\tau^-\to K^-\eta^{(\prime)}\nu_\tau$ decays will provide a test of the unitarized results. Taking into account the explanations in Ref.~\cite{Jamin:2001zq} about the difficult convergence of the three-channel problem (mainly because of the smallness of the $K\eta$ contribution and its correlation with the $K\eta^\prime$ channel), this verification is by no means trivial, especially regarding the $K\eta^\prime$ channel, where the scalar contribution is expected to dominate the decay width. In this way, we have predicted the $\tau^-\to K^-\eta\nu_\tau$ branching ratio and differential decay width using the knowledge acquired in the $\tau^-\to (K\pi)^-\nu_\tau$ decays.
Explicitly: \begin{itemize} \item In the dipole model, we have taken the $K^\star(892)$, $K^\star(1410)$ and $K_0^\star(1430)$ masses and widths from the PDG \cite{Beringer:1900zz} -since this compilation employs Breit-Wigner parametrizations to determine these parameters- and estimated their relative weight using $\gamma=\frac{F_VG_V}{F^2}-1$ (see the discussion at the end of section \ref{FFs}) \cite{Ecker:1988te}. In this way, we have found $\gamma=-0.021\pm0.031$. \item In the JPP parametrization, we have used the best fit results of Ref.~\cite{Jamin:2008qg} for the vector form factor. The scalar form factor has been obtained from the solutions (6.10) and (6.11) of Ref.~\cite{Jamin:2001zq} \footnote{The relevant $f_0^{K^-\eta^{(\prime)}}(s)$ unitarized scalar form factors have been coded using tables kindly provided by Matthias Jamin.}. The scalar form factors have been treated in the same way in the BEJ approach. \item In the BEJ representation, one would use the best fit results of Ref.~\cite{Boito:2010me} to obtain our vector form factor. However, we have noticed the strong dependence of the slope form factor parameters, $\lambda_+^\prime$ and $\lambda_+^{\prime\prime}$, on the actual particle masses. Ref.~\cite{Boito:2010me} used the physical masses in their study of $\tau^-\to K_S\pi^-\nu_\tau$ data. On the other hand, we focus on the $\tau^-\to K^-P\nu_\tau$ decays. Consequently, the masses should now correspond to $K^-\pi^0$ instead of to $K_S\pi^-$. Notably, both the $K^-$ and the $\pi^0$ are lighter than the $K_S$ and $\pi^-$, and the corresponding small mass differences, given by isospin breaking, are large enough to demand a corresponding change in the $\lambda_+^{\prime(\prime)}$ parameters. Accepting this, the ideal way to proceed would be to fit the BaBar data on $\tau^-\to K^-\pi^0\nu_\tau$ decays \cite{Aubert:2007jh}. Unfortunately, these data are not publicly available yet.
For this reason, we have decided to fit Belle data on the $\tau^-\to K_S\pi^-\nu_\tau$ decay using the $K^-$ and $\pi^0$ masses throughout. The results can be found in table \ref{Tab:Fake fit}, where they are confronted to the best fit results of Ref.~\cite{Boito:2008fq}~\footnote{We display the results of this reference instead of those in Ref.~\cite{Boito:2010me} because we are not using information from $K_{\ell3}$ decays in this exercise. Differences are, nonetheless, tiny.}, both of them yield $\chi^2/dof=1.0$ and are given for $s_{cut}=4$ GeV$^2$, although the systematic error due to the choice of this energy scale is included in the error estimation. We will use the results in the central column of table \ref{Tab:Fake fit} to give our predictions of the $\tau^-\to K^-\eta\nu_\tau$ decays based on the $K\pi$ results. \begin{table*}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline Parameter& Best fit with fake masses & Best fit \cite{Boito:2008fq}\\ \hline $\lambda_+^\prime\times 10^{3}$&$22.2\pm0.9$& $24.7\pm0.8$\\ $\lambda_+^{\prime\prime}\times 10^{4}$&$10.3\pm0.2$& $12.0\pm0.2$\\ $M_{K^\star}$ (MeV)&$892.1\pm0.6$& $892.0\pm0.9$\\ $\Gamma_{K^\star}$ (MeV)&$46.2\pm0.5$& $46.2\pm0.4$\\ $M_{K^{\star\prime}}$ (GeV)&$1.28\pm0.07$& $1.28\pm0.07$\\ $\Gamma_{K^{\star\prime}}$ (GeV)&$0.16^{+0.10}_{-0.07}$& $0.20^{+0.06}_{-0.09}$\\ $\gamma$&$-0.03\pm0.02$& $-0.04\pm0.02$\\ \hline \end{tabular} \caption{\small{Results for the fit to Belle $\tau^-\to K_S\pi^-\nu_\tau$ data \cite{Epifanov:2007rf} with a three-times subtracted dispersion relation including two vector resonances in $\widetilde{f}_+^{K\pi}(s)$, according to eq.~(\ref{FFs with short distance constraints}) and resumming the loop function in the denominator (see appendix \ref{app:BEJ}), as well as the scalar form factor \cite{Jamin:2001zq}. 
The middle column is obtained using the masses of the $K^-$ and $\pi^0$ mesons and the last column using the $K_S$ and $\pi^-$ masses actually corresponding to the data.}}\label{Tab:Fake fit} \end{center} \end{table*} \end{itemize} Proceeding this way, we find the differential decay distributions for the three different approaches considered using eq.~(\ref{spectral function}). This is, in turn, related to the experimental data by \begin{equation}\label{theory_to_experiment} \frac{dN_{events}}{dE}\,=\,\frac{d\Gamma}{dE}\frac{N_{events}}{\Gamma_\tau BR(\tau^-\to K^-\eta\nu_\tau)}\Delta E_{bin}\,. \end{equation} We thank the Belle Collaboration for providing us with their data \cite{Inami:2008ar}. This was not possible in the case of the BaBar Collaboration \cite{delAmoSanchez:2010pc} because the person in charge of the analysis left the field and the data file was lost. We have, however, read the data points from the paper's figures and included this effect in the errors. The numbers of events after background subtraction are $611$ (BaBar) and $1365$ (Belle), and the corresponding bin widths are $80$ and $25$ MeV, respectively. In Fig.~\ref{fig:Pred_Keta} we show our predictions based on the $K\pi$ system according to BW, JPP and BEJ. In this figure we have normalized the BaBar data to Belle's using eq.~(\ref{theory_to_experiment}). A look at the data shows some tension between both measurements, and we notice a couple of strong oscillations of isolated Belle data points which do not seem to correspond to any dynamics but rather to an experimental issue or to an underestimation of the systematic errors \footnote{We have also realized that the first two Belle data points with non-vanishing entries are below threshold, a fact which may indicate some problem in the calibration of the hadronic system energy or point to an underestimation of the background.}.
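Eq.~(\ref{theory_to_experiment}) is a simple proportionality between the theoretical distribution and the binned event counts; a minimal sketch (with purely illustrative numbers) reads:

```python
def events_per_bin(dGamma_dE, N_events, Gamma_tau, branching_ratio, bin_width):
    """Eq. (theory_to_experiment): expected events in a bin of width bin_width."""
    return dGamma_dE * N_events / (Gamma_tau * branching_ratio) * bin_width

# Purely illustrative numbers in natural units (GeV); Gamma_tau ~ hbar / tau_lifetime:
print(events_per_bin(1.0e-15, 1365, 2.27e-12, 1.52e-4, 0.025))
```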
Fig.~\ref{fig:Pred_Keta} also shows the corresponding one-sigma bands, obtained neglecting correlations between the resonance parameters and with respect to other sources of uncertainty, namely $|V_{us}f_+^{K^-\pi^0}(0)|$ and $\theta_P$, whose errors are also accounted for. The corresponding branching ratios are displayed in table \ref{Tab:Pred_Keta}, where the $\chi^2/dof$ is also shown. We note that the error correlations corresponding to the fit results shown in table \ref{Tab:Fake fit} have been taken into account in BEJ's branching ratio of table \ref{Tab:Pred_Keta}. It can be seen that the BW model gives too low a decay width and that the shape of the distribution is not followed by this prediction, as indicated by the high value of the $\chi^2/dof$ that is obtained. On the contrary, the JPP and BEJ predictions yield curves that already compare quite well with the data. Moreover, the corresponding branching fractions are in accord with the PDG value within errors. Altogether, this explains the goodness of the $\chi^2/dof$, which is $1.5\leftrightarrow 1.9$. Besides, we notice that the error bands are wider in the dispersive representation than in the exponential parametrization, which may be explained by the larger number of parameters entering the former and the more complicated correlations between them, which were neglected in obtaining Fig.~\ref{fig:Pred_Keta} and the JPP result in table \ref{Tab:Pred_Keta}. From these results we conclude that quite likely the BW model is too rough an approach to the problem, unless our reference values for $\gamma$ and the $K^\star(1410)$ resonance parameters were a bad approximation. We will check this in the next section. On the contrary, the predictions discussed above hint that JPP and BEJ are appropriate for the analysis of $\tau^-\to K^-\eta\nu_\tau$ data, which we will pursue next. \begin{figure}[h!]
\begin{center} \vspace*{1.25cm} \includegraphics[scale=0.75]{Bands.pdf} \caption{\label{fig:Pred_Keta} \small{BaBar (blue) \cite{delAmoSanchez:2010pc} and Belle (red) \cite{Inami:2008ar} data for the $\tau^-\to K^-\eta\nu_\tau$ decays are confronted with the predictions obtained in the BW (dotted), JPP (solid) and BEJ (dashed) approaches (see the main text for details), which are shown together with the corresponding one-sigma error bands in yellow, light blue and light green, respectively.}} \end{center} \end{figure} \begin{table*}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline Source& Branching ratio & $\chi^2/dof$\\ \hline Dipole Model (BW)&$\left(0.78^{+0.17}_{-0.10}\right)\cdot 10^{-4}$& $8.3$\\ JPP&$\left(1.47^{+0.14}_{-0.08}\right)\cdot 10^{-4}$& $1.9$\\ BEJ&$\left(1.49\pm0.05\right)\cdot 10^{-4}$& $1.5$\\ Experimental value&$\left(1.52\pm0.08\right)\cdot 10^{-4}$& -\\ \hline \end{tabular} \caption{\small{Predicted branching ratio of the $\tau^-\to K^-\eta\nu_\tau$ decays according to the different approaches used (see the items above eq.~(\ref{theory_to_experiment}) for details). The corresponding $\chi^2/dof$ values are also given, and the PDG branching fraction is quoted for reference.}}\label{Tab:Pred_Keta} \end{center} \end{table*} \section{Fits to the $\boldsymbol{\tau^-\to K^-\eta\nu_\tau}$ BaBar and Belle data}\label{Fit Keta} We have considered different fits to the $\tau^-\to K^-\eta\nu_\tau$ data. Quite generally, we have found that the data are sensitive neither to the low-energy region nor to the $K^\star(892)$ peak region. This is not surprising, since the threshold for $K^-\eta$ production opens around $1041$ MeV, which is some $100$ MeV larger than $M_{K^\star}+\Gamma_{K^\star}$, a characteristic energy scale for the $K^\star(892)$ region of dominance.
This implies, first, that the fits are unstable under floating $M_{K^\star}$ and $\Gamma_{K^\star}$ (which affects all three approaches) and, second, that the slopes of the vector form factor, which encode the physics immediately above threshold, cannot be fitted with $\tau^-\to K^-\eta\nu_\tau$ data (this only concerns BEJ). We have consequently considered fits varying only the $K^\star(1410)$ mass and width and $\gamma$, sticking to the reference values discussed in the previous section for the remaining parameters in every approach. Our best fit results for the branching ratios are written in table \ref{Tab:Fit_Keta}, where the corresponding $\chi^2/dof$ can also be read. These are obtained with the best fit parameter values shown in table \ref{Tab:Fit_results}, which can be compared to the reference values, used to obtain the predictions in the previous section, that are recalled in table \ref{Tab:Fit_Reference}. The corresponding decay distributions with one-sigma error bands attached are plotted in Fig.~\ref{fig:Fit_Keta}. These results show that the BW model does not really provide a good approximation to the underlying physics for any value of its parameters and should be discarded. Oppositely, JPP and BEJ are able to yield quite good fits to the data, with values of the $\chi^2/dof$ around one. This suggests that the simplified treatment of final-state interactions in BW, which misses the real part of the two-meson rescatterings and violates analyticity by construction, is responsible for the failure. A closer look at the fit results using JPP and BEJ in tables \ref{Tab:Fit_Keta} and \ref{Tab:Fit_results} shows that: \begin{itemize} \item Fitting $\gamma$ alone improves the quality of both approaches by $15\leftrightarrow 20\%$. The fitted values are consistent with the reference ones (see table \ref{Tab:Fit_Reference}): at one sigma in the case of BEJ, with the differences in JPP only slightly larger.
This is satisfactory because both the $\tau^-\to (K\pi)^-\nu_\tau$ and the $\tau^-\to K^-\eta\nu_\tau$ decays are sensitive to the interplay between the first two vector resonances, and contradictory results would have cast some doubt on self-consistency. \item When the $K^\star(1410)$ parameters are also fitted the results improve by $\sim13\%$ in JPP and by $\sim33\%$ in BEJ. This represents a reduction of the $\chi^2/dof$ by $\sim26\%$ in JPP and by $\sim50\%$ in BEJ. It should be noted that the three-parameter fits do not yield physical results in BW. Specifically, the $K^\star(1410)$ mass and width tend to the $K^\star(892)$ values and $|\gamma|$ happens to be one order of magnitude larger than the determinations in the literature. Therefore we discard this result. We also notice that although the branching ratios of both JPP and BEJ (which have been obtained taking into account the parameter fit correlations) are in agreement with the PDG value, the JPP branching ratios tend to be closer to its lower limit while BEJ is nearer to the upper one. It can be observed that the deviations of the three-parameter best fit values with respect to the default ones lie within errors in BEJ, as also happens with $\Gamma_{K^{\star\prime}}$ in JPP. However, there are small tensions between the reference and best fit values of $M_{K^{\star\prime}}$ and $\gamma$ in JPP. \end{itemize} These results are plotted in Fig.~\ref{fig:Fit_Keta}. Although the BW curve has improved with respect to Fig.~\ref{fig:Pred_Keta} and seems to agree well with the data in the higher-energy half of the spectrum, it fails completely at lower energies. On the contrary, JPP and BEJ provide good quality fits to data which are satisfactory along the whole phase space. We note that JPP goes slightly below BEJ and its error band is again narrower, possibly due to having fewer parameters.
BEJ errors include the systematics associated with changes in $s_{cut}$, which are slightly enhanced with respect to the $K\pi$ case. Despite the vector form factor giving the dominant contribution to the decay width, the scalar form factor is not negligible and gives $\sim(3\leftrightarrow4)\%$ of the branching fraction in the JPP and BEJ cases. In the BW model this contribution is $\sim7\%$. \begin{table*}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline Source & Branching ratio & $\chi^2/dof$\\ \hline Dipole Model (BW) (Fit $\gamma$)&$\left(0.96^{+0.21}_{-0.15}\right)\cdot10^{-4}$& $5.0$\\ Dipole Model (BW) (Fit $\gamma$, $M_{K^{\star\prime}}$, $\Gamma_{K^{\star\prime}})$ &Unphysical result& -\\ JPP (Fit $\gamma$)&$\left(1.50^{+0.19}_{-0.11}\right)\cdot 10^{-4}$& $1.6$\\ JPP (Fit $\gamma$, $M_{K^{\star\prime}}$, $\Gamma_{K^{\star\prime}})$&$\left(1.42\pm0.04\right)\cdot 10^{-4}$& $1.4$\\ BEJ (Fit $\gamma$)&$\left(1.59^{+0.22}_{-0.16}\right)\cdot 10^{-4}$& $1.2$\\ BEJ (Fit $\gamma$, $M_{K^{\star\prime}}$, $\Gamma_{K^{\star\prime}})$ &$\left(1.55\pm0.08\right)\cdot 10^{-4}$& $0.8$\\ Experimental value&$\left(1.52\pm0.08\right)\cdot 10^{-4}$& -\\ \hline \end{tabular} \caption{\label{Tab:Fit_Keta} \small{The branching ratios and $\chi^2/dof$ obtained in BW, JPP and BEJ, fitting $\gamma$ only and also the $K^\star(1410)$ parameters, are displayed. Other parameters were fixed to the reference values used in section \ref{Pred Keta}.
The PDG branching fraction is also given for reference.}} \end{center} \end{table*} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{Fitted value}{Approach}&Dipole Model (BW)&JPP&BEJ\cr \hline $\gamma$&$-0.174\pm0.007$&$-0.063\pm0.007$&$-0.041\pm0.021$\cr \hline $\gamma$&Unphysical&$-0.078^{+0.012}_{-0.014}$&$-0.051^{+0.012}_{-0.036}$\cr $M_{K^{\star'}}$ (MeV) &best fit&$1356\pm11$&$1327^{+30}_{-38}$\cr $\Gamma_{K^{\star'}}$ (MeV) &parameters&$232^{+30}_{-28}$&$213^{+72}_{-118}$\cr \hline \end{tabular} \caption{\label{Tab:Fit_results} \small{The best fit parameter values corresponding to the different alternatives considered in table \ref{Tab:Fit_Keta} are given. These can be compared to the reference values, which are given in table \ref{Tab:Fit_Reference}. BEJ results for the mass and width of the $K^{\star}(1410)$ correspond to pole values, while JPP figures are given for the model parameter as in the original literature.}} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{Reference value}{Approach}&Dipole Model (BW)&JPP&BEJ\cr \hline $\gamma$&$-0.021\pm0.031$&$-0.043\pm0.010$&$-0.029\pm0.017$\cr $M_{K^{\star'}}$ (MeV) & $1414\pm15$ &$1307\pm17$&$1283\pm65$\cr $\Gamma_{K^{\star'}}$ (MeV) & $232\pm21$ &$206\pm49$&$163\pm68$\cr \hline \end{tabular} \caption{\label{Tab:Fit_Reference} \small{Reference values (used in section \ref{Pred Keta}) corresponding to the best fit parameters appearing in table \ref{Tab:Fit_results}. Again BEJ results are pole values and JPP ones are model parameters. The latter are converted to resonance pole values in section \ref{Concl}, where the determination of the $K^{\star}(1410)$ pole parameters is given.}} \end{center} \end{table} The JPP model values appearing in tables \ref{Tab:Fit_results} and \ref{Tab:Fit_Reference} can be translated to pole values along the lines discussed in Ref.~\cite{Escribano:2002iv}.
This yields $M_{K^{\star\prime}}=1332^{+16}_{-18}\,,\,\Gamma_{K^{\star\prime}}=220^{+26}_{-24}$ for the best fit values and $M_{K^{\star\prime}}=1286^{+26}_{-28}\,,\,\Gamma_{K^{\star\prime}}=197^{+41}_{-45}$ for the reference values, where all quantities are given in MeV. Remarkable agreement is found between our best fit values in the JPP and BEJ cases, since the latter yields $M_{K^{\star\prime}}=1327^{+30}_{-38}\,, \,\Gamma_{K^{\star\prime}}=213^{+72}_{-118}$. From the detailed study of the $\pi\pi$, $K\pi$ (in the quoted literature) and $K\eta$ systems (in this paper) within JPP and BEJ, one can generally conclude that the dispersive form factors allow a better description of the data, while the exponential parametrizations lead to determinations of the resonance pole values with smaller errors. Both features seem to be due to the inclusion of the subtraction constants as extra parameters in the fits within the dispersive representations. \begin{figure}[h!] \begin{center} \vspace*{1.25cm} \includegraphics[scale=0.75]{BandsFit.pdf} \caption{\label{fig:Fit_Keta} \small{BaBar (blue) \cite{delAmoSanchez:2010pc} and Belle (red) \cite{Inami:2008ar} data for the $\tau^-\to K^-\eta\nu_\tau$ decays are confronted with the best fit results obtained in the BW (dotted), JPP (solid) and BEJ (dashed) approaches (see the main text for details), which are shown together with the corresponding one-sigma error bands in light green, pink and orange, respectively.
The BW curve corresponds to the one-parameter fit while the JPP and BEJ ones correspond to three-parameter fits.}} \end{center} \end{figure} \section{Predictions for the $\boldsymbol{\tau^-\to K^-\eta^\prime\nu_\tau}$ decays}\label{Pred Ketap} We can finally profit from our satisfactory description of the $\tau^-\to K^-\eta\nu_\tau$ decays and predict the $\tau^-\to K^-\eta^\prime\nu_\tau$ decay observables, for which only an upper limit exists, set at ninety percent confidence level by the BaBar Collaboration \cite{Lees:2012ks}, $BR<4.2\cdot10^{-6}$. We have done this for our best fit results in the BW (one-parameter fit), JPP and BEJ (three-parameter fits) cases. The corresponding results are plotted in Fig.~\ref{fig:Ketap} and the branching ratios can be read from table \ref{Tab:Pred_Ketap}. In the figure we can see that the decay width is indeed dominated by the scalar contribution \footnote{In principle, both the scalar and vector $K\eta^\prime$ form factors are suppressed since they are proportional to $\sin\theta_P$. However, the unitarization procedure of the scalar form factor enhances it sizeably \cite{Jamin:2001zq} due to the effect of the coupled inelastic channels.} \footnote{The suppression of the vector contribution means that the predicted values using information from the $K\pi$ system and the one-parameter fits with JPP and BEJ are very similar to the results in table \ref{Tab:Pred_Ketap}. For this reason we do not show them.}. In fact, the vector form factor contributes in the range $(9\leftrightarrow15)\%$ to the corresponding branching ratio. Although we keep the BW prediction for reference, we do not draw the associated (large) error band, for the sake of clarity in the figure, given its poor description of the $K\eta$ system shown in the previous section.
As the scalar form factor dominates the decay width and we are using the same one in JPP and BEJ, the differences between them are tiny (and the errors, of order one third, are the same in table \ref{Tab:Pred_Ketap}). As expected from the results in the $\tau^-\to K^-\eta\nu_\tau$ decays, BEJ gives the upper part of the error band while JPP provides the lower one. We are looking forward to the discovery of this decay mode to verify our predictions. A priori one may forecast some departure from this prediction because of the effect of the poorly known elastic and $K\eta$ channels in meson-meson scattering, which affects the solution of the coupled system of integral equations and especially the value of the $K^-\eta^\prime$ scalar form factor, which is anyway suppressed to some extent. \begin{figure}[h!] \begin{center} \vspace*{1.25cm} \includegraphics[scale=0.75]{SpectrumKetap.pdf} \caption[]{\label{fig:Ketap} \small{The predicted $\tau^-\to K^-\eta^\prime\nu_\tau$ decay width according to BW (green, its big uncertainty is not shown for clarity of the figure), JPP (blue with lower band in red) and BEJ (blue with upper part in pink) is shown. In these last two the scalar form factor corresponds to Ref.~\cite{Jamin:2006tj}, which is represented by the authors' initials, JOP, in the figure's legend. The corresponding vector form factor contributions, which are subleading, are plotted in orange (solid), blue (dashed) and purple (dotted).}} \end{center} \end{figure} \begin{table*}[h!] \begin{center} \begin{tabular}{|c|c|} \hline Source & Branching ratio\\ \hline Dipole Model (BW) (Fit)&$(1.45^{+3.80}_{-0.87})\cdot10^{-6}$\\ JPP (Fit)&$(1.00^{+0.37}_{-0.29})\cdot10^{-6}$\\ BEJ (Fit)&$(1.03^{+0.37}_{-0.29})\cdot10^{-6}$\\ Experimental bound&<$4.2\cdot10^{-6}$ at $90\%$ C.L.\\ \hline \end{tabular} \caption{\label{Tab:Pred_Ketap} \small{Predicted branching ratios for the $\tau^-\to K^-\eta^\prime\nu_\tau$ decays.
The BaBar upper limit is also shown \cite{Lees:2012ks}.}} \end{center} \end{table*} In Fig.~\ref{fig:Correlation} we also plot the correlation between the $\tau^-\to K^-\eta\nu_\tau$ and $\tau^-\to K^-\eta^\prime\nu_\tau$ branching ratios according to the best fit JPP result at one sigma. The correlations between the parameters are neglected. Since the vector (scalar) form factor dominates the former (latter) decays and their parameters are independent, the plot does not show any sizeable correlation between both measurements, as expected. As a result, if new data on the $\tau^-\to K^-\eta^\prime\nu_\tau$ decays demand a more careful determination of the $f_0^{K^-\eta^\prime}(s)$ unitarized form factor, this will leave the results obtained for the $\tau^-\to K^-\eta\nu_\tau$ channel almost unaffected. \begin{figure}[!h] \begin{center} \vspace*{1.25cm} \includegraphics[scale=1.25]{Correlation.pdf} \caption[]{\label{fig:Correlation} \small{The correlation between the $\tau^-\to K^-\eta\nu_\tau$ and $\tau^-\to K^-\eta^\prime\nu_\tau$ branching ratios is plotted according to the best fit JPP result at one sigma. Correlations between the parameters are neglected. According to expectations, no sizeable correlation between both decay modes is observed.}} \end{center} \end{figure} \section{Conclusions}\label{Concl} Hadronic tau decays are an ideal scenario to learn about the non-perturbative character of the strong interactions in rather clean conditions. In this work, we have studied the $\tau^-\to K^-\eta^{(\prime)}\nu_\tau$ decays motivated by the recent measurements performed by the BaBar \cite{delAmoSanchez:2010pc, Lees:2012ks} and Belle Collaborations \cite{Inami:2008ar}. These decays allow the application of the knowledge acquired in the study of $\tau^-\to (K\pi)^-\nu_\tau$ decays.
In particular, the $K\eta$ decay is sensitive to the parameters of the $K^\star(1410)$ resonance and to its interplay with the $K^\star(892)$ meson, while the $K\eta^\prime$ decay is an appropriate place to test the unitarization of the strangeness-changing scalar form factors in the three-coupled-channel case. We have defined in detail the (tilded) scalar and vector form factors, gone through the steps of their calculation within Chiral Perturbation Theory including the lightest resonances as explicit degrees of freedom, and shown that the results are written in a more compact way using the tilded form factors. Then we have discussed different options according to the treatment of final-state interactions. Specifically, there is the dipole Breit-Wigner (BW) model, which neglects the real part of the two-meson loop function, violating analyticity at next-to-leading order; there is the exponential parametrization (JPP), where this real part of the loop is resummed through an Omn\`es exponentiation, which violates analyticity at the next order; and there is the dispersive representation (BEJ), which resums the whole loop function in the denominators, where analyticity holds exactly. In our case, an additional difficulty is that the elastic approach is not valid in any region of the phase space, since the $K\pi$ channel is open well below the $K\eta^{(\prime)}$ channels. In JPP this is not an issue, since one simply adds the corresponding contribution of these channels to the width and real part of the loop function. However, in BEJ it prevents an approach which does not include inelasticities and the effect of coupled channels. Being conscious of this, we have nevertheless attempted a dispersive representation of the $K\eta^{(\prime)}$ vector form factors where the input phase shift is obtained using the elastic approximation and, to our surprise, it has done an excellent job in its confrontation with the $K\eta$ data.
In the light of more accurate measurements it may become necessary to improve this treatment in the future. Very good agreement has also been found using JPP, but BW has failed in this comparison. In the JPP and BEJ fits to the $K\eta$ channel the scalar form factor was obtained by solving dispersion relations for the three-body problem. We have checked that the $K\eta^{(\prime)}$ channels are not sensitive either to the $K^\star(892)$ parameters or to the slopes of the form factor, $\lambda_+^{\prime(\prime)}$ (BEJ). We have borrowed this information from the $K\pi$ system. This task was straightforward in BW and JPP, although in BEJ we noticed that the $\lambda_+^{\prime(\prime)}$ parameters were sensitive to isospin breaking effects that we had to account for. Once this was done we could fit the $K^\star(1410)$ resonance pole parameters and its relative weight with respect to the $K^\star(892)$ meson, $\gamma$. Our results for these, with masses and widths in MeV, are \begin{equation} M_{K^{\star\prime}}\,=\,1327^{+30}_{-38},\quad\Gamma_{K^{\star\prime}}\,=\,213^{+72}_{-118},\quad \gamma\,=\,-0.051^{+0.012}_{-0.036}\,, \end{equation} in the dispersive representation (BEJ) and \begin{equation} M_{K^{\star\prime}}\,=\,1332^{+16}_{-18},\quad\Gamma_{K^{\star\prime}}\,=\,220^{+26}_{-24},\quad \gamma\,=\,-0.078^{+0.012}_{-0.014}\,, \end{equation} for the exponential parametrization (JPP). Our determination of these parameters has proven competitive with its extraction from the $\tau^-\to (K\pi)^-\nu_\tau$ decays.
To illustrate this point, we average the JPP and BEJ determinations from the $K\pi$ \cite{Jamin:2008qg, Boito:2010me} and $K\eta$ systems, respectively, to find \begin{equation} M_{K^{\star\prime}}\,=\,1277^{+35}_{-41},\quad\Gamma_{K^{\star\prime}}\,=\,218^{+95}_{-66},\quad \gamma\,=\,-0.049^{+0.019}_{-0.016}\,, \end{equation} from $K\pi$ and \begin{equation} M_{K^{\star\prime}}\,=\,1330^{+27}_{-41},\quad\Gamma_{K^{\star\prime}}\,=\,217^{+68}_{-122},\quad \gamma\,=\,-0.065^{+0.025}_{-0.050}\,, \end{equation} from $K\eta$. We have thus opened an alternative way of determining these parameters. New, more precise data on the $\tau^-\to (K\pi)^-\nu_\tau$ and $\tau^-\to K^-\eta\nu_\tau$ decays will make possible a more accurate determination of these parameters. Finally, we have benefited from this study of the $\tau^-\to K^-\eta\nu_\tau$ decays and applied it to the $\tau^-\to K^-\eta^{\prime}\nu_\tau$ decays, where our predictions respect the upper limit found by BaBar and hint at the possible discovery of this decay mode in the near future. In this way we consider that we are in a position to provide TAUOLA with theory-based currents that can describe well the $\tau^-\to K^-\eta^{(\prime)}\nu_\tau$ decays, based on the exponential parametrization developed by JPP and the dispersive representation constructed by BEJ. To conclude, differential distributions of hadronic tau decays provide important information for testing diverse form factors and extracting the corresponding parameters, increasing our knowledge of hadronization in the low-energy non-perturbative regime of QCD. It will be interesting to see if our predictions for the $\tau^-\to K^-\eta^{\prime}\nu_\tau$ decays are corroborated and if more precise data on the $\tau^-\to K^-\eta\nu_\tau$ decays demand a more refined treatment.
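As a rough cross-check of the averaging arithmetic above, the $K\eta$ central values quoted in the text are consistent with simple unweighted means of the JPP (pole-converted) and BEJ best-fit determinations. The Python sketch below is only illustrative, with our own variable names; the asymmetric uncertainties quoted in the text are combined more carefully and are not reproduced by any such naive rule.

```python
# Unweighted means of the JPP and BEJ K-eta determinations (values from the text).
# Illustrative only: the quoted asymmetric errors are NOT obtained this way.
jpp = {"M": 1332.0, "Gamma": 220.0, "gamma": -0.078}  # JPP best fit (pole-converted)
bej = {"M": 1327.0, "Gamma": 213.0, "gamma": -0.051}  # BEJ best fit (pole values)

avg = {key: 0.5 * (jpp[key] + bej[key]) for key in jpp}

# Quoted K-eta averages: M = 1330 MeV, Gamma = 217 MeV, gamma = -0.065
assert abs(avg["M"] - 1330) < 0.6
assert abs(avg["Gamma"] - 217) < 0.6
assert abs(avg["gamma"] - (-0.065)) < 6e-4
```

The agreement of the central values within rounding supports the combination quoted in the text.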
Finally, we emphasize the need to give pole resonance parameters irrespective of the approach employed, be it in a theory article or in a publication by an experimental collaboration.
\section{Introduction and Statement of Results} Our point of departure is recent work of Bettin and Conrey \cite{bettinconreyreciprocity,bettinconreyperiodfunctions} on the period functions of Eisenstein series. Their initial motivation was the derivation of an exact formula for the second moments of the Riemann zeta function, but their work naturally gave rise to a family of finite arithmetic sums of the form \[ c_{a}\left(\frac{h}{k}\right) \ = \ k^{a}\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\zeta\left(-a,\frac{m}{k}\right), \] where $a \in \CC$, $h$ and $k$ are positive coprime integers, and $\zeta(a,x)$ denotes the \emph{Hurwitz zeta function} \[ \zeta(a,x) \ = \ \sum_{n=0}^{\infty}\frac{1}{(n+x)^{a}} \, , \] initially defined for $\Re(a)>1$ and meromorphically continued to the $a$-plane. We call $c_{a}(\frac{h}{k})$ and its natural generalizations appearing below \emph{Bettin--Conrey sums}. There are two major motivations to study these sums. The first is that $c_{0}(\frac{h}{k})$ is essentially a \emph{Vasyunin sum}, which in turn makes a critical appearance in the Nyman--Beurling--B\'aez-Duarte approach to the Riemann hypothesis through the twisted mean-square of the Riemann zeta function on the critical line (see, e.g., \cite{baez,vasyunin}). Bettin--Conrey's work, for $a=0$, implies that there is a hidden symmetry of this mean-square. The second motivation, and the central theme of our paper, is that the Bettin--Conrey sums satisfy a \emph{reciprocity theorem}: \[ c_{a}\left(\frac{h}{k}\right)-\left(\frac{k}{h}\right)^{1+a}c_{a}\left(\frac{-k}{h}\right)+\frac{k^{a}a \, \zeta(1-a)}{\pi h} \] extends from its initial domain $\QQ$ to an (explicit) analytic function on $\CC \setminus \RR_{ \le 0 }$, making $c_a$ nearly an example of a \emph{quantum modular form} in the sense of Zagier \cite{zagierquantummodular}.
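Reciprocity laws of this kind are easy to test numerically. As a warm-up for what follows, the Python sketch below (our own naming, not from the literature) evaluates the classical Dedekind sum $s(h,k)=\frac{1}{4k}\sum_{m=1}^{k-1}\cot(\frac{\pi mh}{k})\cot(\frac{\pi m}{k})$ directly and checks Dedekind's reciprocity law $s(h,k)+s(k,h)=-\frac14+\frac1{12}(\frac hk+\frac1{hk}+\frac kh)$, recalled below, for a few coprime pairs:

```python
from math import pi, tan

def dedekind_sum(h, k):
    # s(h, k) = (1 / 4k) * sum_{m=1}^{k-1} cot(pi m h / k) cot(pi m / k);
    # gcd(h, k) = 1 guarantees no cotangent is evaluated at a pole
    return sum(1.0 / (tan(pi * m * h / k) * tan(pi * m / k))
               for m in range(1, k)) / (4 * k)

def reciprocity_rhs(h, k):
    # -1/4 + (h/k + 1/(hk) + k/h) / 12
    return -0.25 + (h / k + 1.0 / (h * k) + k / h) / 12.0

for h, k in [(5, 7), (3, 8), (11, 4)]:
    assert abs(dedekind_sum(h, k) + dedekind_sum(k, h)
               - reciprocity_rhs(h, k)) < 1e-12
```

The same direct-summation strategy applies verbatim to the Bettin--Conrey sums once the Hurwitz zeta factor is computed.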
In fact, Zagier's ``Example 0'' is the \emph{Dedekind sum} \[ s(h,k) \ = \ \frac{1}{4k}\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\cot\left(\frac{\pi m}{k}\right) , \] which is, up to a trivial factor, $c_{ -1 }(\frac h k)$. Dedekind sums first appeared in the transformation properties of the Dedekind eta function and satisfy the reciprocity theorem \cite{dedekind,grosswald} \[ s(h,k) + s(k,h) \ = \ - \frac 1 4 + \frac 1 {12} \left( \frac h k + \frac 1 {hk} + \frac k h \right) . \] We now recall the precise form of Bettin--Conrey's reciprocity theorem. \begin{thm}[Bettin--Conrey \cite{bettinconreyperiodfunctions}]\label{thm:bettinconrey} If $h$ and $k$ are positive coprime integers then \[ c_{a}\left(\frac{h}{k}\right)-\left(\frac{k}{h}\right)^{1+a}c_{a}\left(\frac{-k}{h}\right)+\frac{k^{a}a \, \zeta(1-a)}{\pi h} \ = \ -i \, \zeta(-a)\, \psi_{a}\left(\frac{h}{k}\right) \] where \[ \psi_{a}(z) \ = \ \frac{i}{\pi z}\frac{\zeta(1-a)}{\zeta(-a)}-\frac{i}{z^{1+a}}\cot\frac{\pi a}{2}+i\frac{g_{a}(z)}{\zeta(-a)} \] and \begin{align*} g_{a}(z) \ = \ &-2\sum_{1\leq n\leq M}(-1)^{n}\frac{B_{2n}}{(2n)!}\, \zeta(1-2n-a)(2\pi z)^{2n-1} \\ &\qquad {}+\frac{1}{\pi i}\int_{(-\frac{1}{2}-2M)}\zeta(s) \, \zeta(s-a) \, \Gamma(s)\frac{\cos\frac{\pi a}{2}}{\sin\pi \frac{s-a} 2}(2\pi z)^{-s} \, ds \, . \end{align*} Here $B_k$ denotes the $k\th$ Bernoulli number, $M$ is any integer $\ge -\frac{1}{2}\min(0,\Re(a))$, and the integral notation indicates that our integration path is over the vertical line $\Re(s) = -\frac{1}{2}-2M$. 
\end{thm} We note that Bettin and Conrey initially defined $\psi_a(z)$ through \[ \psi_a(z) \ = \ E_{ a+1 } (z) - \frac{ 1 }{ z^{ a+1 } } \, E_{ a+1 } \left( - \frac 1 z \right) , \] in other words, $\psi_a(z)$ is the \emph{period function} of the \emph{Eisenstein series} of weight $a+1$, \[ E_{ a+1 } (z) \ = \ 1 + \frac{ 2 }{ \zeta(-a) } \sum_{ n \ge 1 } \sigma_a(n) \, e^{ 2 \pi i n z } , \] where $\sigma_a(n) = \sum_{ d|n } d^a$, and then showed that $\psi_a(z)$ satisfies the properties of Theorem~\ref{thm:bettinconrey}. We have several goals. We start by showing that the right-hand side of Theorem~\ref{thm:bettinconrey} can be simplified by employing an integration technique for Dedekind-like sums that goes back to Rademacher \cite{grosswald}. This yields our first main result: \begin{thm}\label{thm:GeneralReciprocity} Let $\Re(a)>1$ and suppose $h$ and $k$ are positive coprime integers. Then for any $0<\epsilon<\min\left\{ \frac{1}{h},\frac{1}{k}\right\}$, \[ h^{1-a} \, c_{-a}\left(\frac{h}{k}\right)+k^{1-a} \, c_{-a}\left(\frac{k}{h}\right) \ = \ \frac{a \, \zeta(a+1)}{\pi(hk)^{a}}-\frac{(hk)^{1-a}}{2i}\int_{(\epsilon)}\frac{\cot(\pi hz)\cot(\pi kz)}{z^{a}} \, dz \, . \] \end{thm} Theorem~\ref{thm:GeneralReciprocity} implies that the function \[ F(a) \ = \ \int_{(\epsilon)}\frac{\cot(\pi hz)\cot(\pi kz)}{z^{a}} \, dz \] has a holomorphic continuation to the whole complex plane. In particular, in this sense Theorem~\ref{thm:GeneralReciprocity} can be extended to all complex~$a$. Second, we employ Theorem~\ref{thm:GeneralReciprocity} to show that in the case that $a$ is an odd negative integer, the right-hand side of the reciprocity theorem can be explicitly given in terms of Bernoulli numbers. \begin{thm}\label{thm:nReciprocity} Let $n>1$ be an odd integer and suppose $h$ and $k$ are positive coprime integers. 
Then \begin{align*} &h^{1-n} \, c_{-n}\left(\frac{h}{k}\right)+k^{1-n} \, c_{-n}\left(\frac{k}{h}\right) \ = \\ &\qquad \left(\frac{2\pi i}{hk}\right)^{n}\frac{1}{i(n+1)!}\left(n \, B_{n+1}+\sum_{m=0}^{n+1}{n+1 \choose m}B_{m} \, B_{n+1-m} \, h^{m}k^{n+1-m}\right) . \end{align*} \end{thm} Our third main result is, in turn, a consequence of Theorem~\ref{thm:nReciprocity}: in conjunction with Theorem~\ref{thm:bettinconrey}, it implies the following explicit formulas for $\psi_a(z)$ and $g_a(z)$ when $a$ is an odd negative integer. \begin{thm}\label{thm:nPsiG} If $n>1$ is an odd integer then for all $z\in\CC \setminus \RR_{\le 0}$ \[ \psi_{-n}(z) \ = \ \frac{(2\pi i)^{n}}{\zeta(n)(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m} \, B_{n+1-m} \, z^{m-1} \] and \[ g_{-n}(z) \ = \ \frac{(2\pi i)^{n}}{i(n+1)!}\sum_{m=0}^{n}{n+1 \choose m+1} B_{m+1} \, B_{n-m} \, z^{m} . \] \end{thm} In \cite[Theorem~2]{bettinconreyperiodfunctions}, Bettin and Conrey computed the Taylor series of $g_a(z)$ and remarked that, if $a$ is a negative integer, $\pi \, g_a^{ (m) } (1)$ is a rational polynomial in $\pi^2$. Theorem~\ref{thm:nPsiG} generalizes this remark. We will prove Theorems~\ref{thm:GeneralReciprocity}--\ref{thm:nPsiG} in Section~\ref{sec:mainproofs}. We note that both Theorem~\ref{thm:nReciprocity} and~\ref{thm:nPsiG} can also be derived directly from Theorem~\ref{thm:bettinconrey}. Our next goal is to study natural generalizations of $c_a(\frac h k)$. 
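Theorem~\ref{thm:nReciprocity} can be confirmed numerically. The Python sketch below (function names are ours) evaluates $c_{-n}(\frac hk) = k^{-n}\sum_{m=1}^{k-1}\cot(\frac{\pi mh}{k})\,\zeta(n,\frac mk)$ by direct summation, with the Hurwitz zeta computed via a truncated series plus an Euler--Maclaurin tail, and compares both sides of the reciprocity for a few odd $n$ and coprime pairs; note that for odd $n$ the prefactor $(2\pi i)^n/i = (2\pi)^n(-1)^{(n-1)/2}$ is real.

```python
from fractions import Fraction
from math import comb, factorial, pi, tan

def bernoulli_list(N):
    # B_0, ..., B_N from the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, m >= 1.
    # (The B_1 sign convention is immaterial here: B_1 only meets B_n = 0, n odd > 1.)
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1))
    return B

def hurwitz_zeta(n, x, N=1000):
    # zeta(n, x) for integer n >= 2: truncated series plus Euler--Maclaurin tail
    t = N + x
    return (sum((j + x) ** (-n) for j in range(N))
            + t ** (1 - n) / (n - 1) + 0.5 * t ** (-n) + n * t ** (-n - 1) / 12.0)

def c_minus(n, h, k):
    # c_{-n}(h/k) = k^{-n} sum_{m=1}^{k-1} cot(pi m h / k) zeta(n, m/k)
    return k ** (-n) * sum(hurwitz_zeta(n, m / k) / tan(pi * m * h / k)
                           for m in range(1, k))

def bernoulli_side(n, h, k):
    # Bernoulli-number side of Theorem nReciprocity (n odd)
    B = bernoulli_list(n + 1)
    inner = n * B[n + 1] + sum(Fraction(comb(n + 1, m)) * B[m] * B[n + 1 - m]
                               * h ** m * k ** (n + 1 - m) for m in range(n + 2))
    return ((2 * pi / (h * k)) ** n * (-1) ** ((n - 1) // 2)
            * float(inner) / factorial(n + 1))

for n, h, k in [(3, 2, 3), (3, 4, 5), (5, 2, 3)]:
    lhs = h ** (1 - n) * c_minus(n, h, k) + k ** (1 - n) * c_minus(n, k, h)
    assert abs(lhs - bernoulli_side(n, h, k)) < 1e-8
```

Exact rational arithmetic (`fractions.Fraction`) keeps the Bernoulli side free of rounding error; only the Hurwitz zeta evaluation is approximate.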
Taking a leaf from Zagier's generalization of $s(h,k)$ to \emph{higher-dimensional Dedekind sums} \cite{zagier} and its variation involving cotangent derivatives \cite{beckcot}, let $k_{0},k_{1},\dots,k_{n}$ be positive integers such that $(k_{0},k_{j})=1$ for $j=1,2,\dots,n$, let $m_{0},m_{1},\dots,m_{n}$ be nonnegative integers, $a \ne -1$ a complex number, and define the \textit{generalized Bettin--Conrey sum} \[ c_{a}\left(\begin{array}{c|ccc} k_{0} & k_{1} & \cdots & k_{n}\\ m_{0} & m_{1} & \cdots & m_{n} \end{array}\right) \ = \ k_{0}^{a}\sum_{l=1}^{k_{0}-1}\zeta^{(m_{0})}\left(-a,\frac{l}{k_{0}}\right)\prod_{j=1}^{n}\cot^{(m_{j})}\left(\frac{\pi k_{j}l}{k_{0}}\right) . \] Here $\zeta^{(m_{0})}(a,z)$ denotes the $m_{0}\th$ derivative of the Hurwitz zeta function with respect to~$z$. This notation mimics that of Dedekind cotangent sums; note that \[ c_{s}\left(\frac{h}{k}\right) \ = \ c_{s}\left(\begin{array}{c|c} k & h\\ 0 & 0 \end{array}\right). \] In Section~\ref{subsec:GeneralizationBettinConreySums}, we will prove reciprocity theorems for generalized Bettin--Conrey sums, paralleling Theorems~\ref{thm:GeneralReciprocity} and~\ref{thm:nReciprocity}, as well as more special cases that give, we think, interesting identities. Our final goal is to relate the particular generalized Bettin--Conrey sum \[ \sum_{m=1}^{q-1}\cot^{(k)}(\pi mx)\, \zeta \left(-a, \tfrac m q \right) \] with evaluations of the \emph{Estermann zeta function} $ \sum_{n\geq 1}\sigma_a(n) \, \frac{ e^{ 2 \pi i n x } }{ n^s } $ at integers~$s$; see Section~\ref{sec:estermann}. \section{Proofs of Main Results}\label{sec:mainproofs} In order to prove Theorem~\ref{thm:GeneralReciprocity}, we need two lemmas. \begin{lem} \label{lem:AsympCot}Let $m$ be a nonnegative integer. 
Then \[ \lim_{y\rightarrow\infty}\cot^{(m)}\pi(x\pm iy) \ = \ \begin{cases} \mp i & \textnormal{if \ensuremath{m=0},}\\ 0 & \textnormal{if \ensuremath{m>0}.} \end{cases} \] Furthermore, this convergence is uniform with respect to $x$ in a fixed bounded interval.\end{lem} \begin{proof} Since $\cot z=\frac{i(e^{iz}+e^{-iz})}{e^{iz}-e^{-iz}}$, we may estimate \[ \left|i+\cot\pi(x+iy)\right| \ = \ \frac{2}{\left|e^{i(2\pi x)}-e^{2\pi y}\right|} \ \leq \ \frac{2}{\left|\left|e^{i(2\pi x)}\right|-\left|e^{2\pi y}\right|\right|} \ = \ \frac{2}{\left|1-e^{2\pi y}\right|} \, . \] Given that the rightmost term in this inequality vanishes as $y\rightarrow\infty$, we see that \[\lim_{y\rightarrow\infty}\cot\pi(x+iy) \ = \ -i \, . \] Similarly, the inequality \[ \left|-i+\cot\pi(x-iy)\right| \ = \ \frac{2}{\left|e^{i(2\pi x)}e^{2\pi y}-1\right|} \ \leq \ \frac{2}{\left|\left|e^{i(2\pi x)}e^{2\pi y}\right|-1\right|} \ = \ \frac{2}{\left|e^{2\pi y}-1\right|} \] implies that $\lim_{y\rightarrow\infty}\cot\pi(x-iy)=i$. Since \[ \left|\csc\pi(x+iy)\right| \ = \ \frac{2e^{\pi y}}{\left|e^{i\pi x}-e^{-i\pi x}e^{2\pi y}\right|} \ \leq \ \frac{2e^{\pi y}}{\left|\left|e^{i\pi x}\right|-\left|e^{-i\pi x}e^{2\pi y}\right|\right|} \ = \ \frac{2e^{\pi y}}{\left|1-e^{2\pi y}\right|} \, , \] it follows that $\lim_{y\rightarrow\infty}\csc\pi(x+iy)=0$. Similarly, \[ \left|\csc\pi(x-iy)\right| \ = \ \frac{2e^{\pi y}}{\left|e^{i\pi x}e^{2\pi y}-e^{-i\pi x}\right|} \ \leq \ \frac{2e^{\pi y}}{\left|\left|e^{i\pi x}e^{2\pi y}\right|-\left|e^{-i\pi x}\right|\right|} \ = \ \frac{2e^{\pi y}}{\left|e^{2\pi y}-1\right|} \] implies that $\lim_{y\rightarrow\infty}\csc\pi(x-iy)=0$. 
We remark that $\frac{d}{dz}(\cot z)=-\csc^{2}z$ and \[\frac{d}{dz}(\csc z) \ = \ -\csc z\cot z \, , \] so all the derivatives of $\cot z$ have a $\csc z$ factor, and therefore, \[ \lim_{y\rightarrow\infty}\cot^{(m)}\pi(x\pm iy) \ = \ \begin{cases} \mp i & \textrm{if \ensuremath{m=0,}}\\ 0 & \textrm{if \ensuremath{m>0}.} \end{cases} \] Since the convergence above is independent of $x$, the limit is uniform with respect to $x$ in a fixed bounded interval. \end{proof} Lemma \ref{lem:AsympCot} implies that \[ \lim_{y\rightarrow\infty}\cot^{(m)}\pi h(x\pm iy) \ = \ \lim_{y\rightarrow\infty}\cot^{(m)}\pi k(x\pm iy) \ = \ \begin{cases} \mp i & \textrm{if \ensuremath{m=0,}}\\ 0 & \textrm{if \ensuremath{m>0},} \end{cases} \] uniformly with respect to $x$ in a fixed bounded interval. The proof of the following lemma is hinted at by Apostol~\cite{MR0046379}. \begin{lem} \label{lem:AsympHurwitz} If $\Re(a)>1$ and $R>0$, then $\zeta(a,x+iy)$ vanishes uniformly with respect to $x\in[0,R]$ as $y\rightarrow\pm\infty$.\end{lem} \begin{proof} We begin by showing that $\zeta(a,z)$ vanishes as $\Im(z)\rightarrow\pm\infty$ if $\Re(z)>0$. 
Since $\Re(a)>1$ and $\Re(z)>0$, we have the integral representation \cite[eq.~25.11.25]{MR2723248} \[ \zeta(a,z) \ = \ \frac{1}{\Gamma(a)}\int_{0}^{\infty}\frac{t^{a-1}e^{-z\, t}}{1-e^{-t}}dt, \] which may be written as \begin{equation} \zeta(a,z) \ = \ \frac{1}{\Gamma(a)}\int_{0}^{\infty}\frac{t^{a-1}e^{-t\,\Re(z)}}{1-e^{-t}}e^{-it\Im(z)}dt.\label{eq:hurwitzintegral-1} \end{equation} Note that for fixed $\Re(z)$, \[ \int_{0}^{\infty}\frac{t^{a-1}e^{-t\Re(z)}}{1-e^{-t}}dt \ = \ \zeta \left(a,\Re(z)\right) \Gamma(a) \] and \[ \int_{0}^{\infty}\left|\frac{t^{a-1}e^{-t\,\Re(z)}}{1-e^{-t}}\right|dt \ = \ \int_{0}^{\infty}\frac{t^{\Re(a)-1}e^{-t\,\Re(z)}}{1-e^{-t}}dt \ = \ \zeta \left( \Re(a),\Re(z) \right) \Gamma(\Re(a)) \, , \] so the Riemann--Lebesgue lemma (see, for example, \cite[Theorem~16]{lighthill1958introduction}) implies that \[\int_{0}^{\infty}\frac{t^{a-1}e^{-t\Re(z)}}{1-e^{-t}}e^{-it\Im(z)}dt\] vanishes as $\Im(z)\rightarrow\pm\infty$. By (\ref{eq:hurwitzintegral-1}), this means that for $\Re(z)$ fixed, $\zeta(a,z)$ vanishes as $\Im(z)\rightarrow\pm\infty$. In other words, $\zeta(a,x+iy)\rightarrow0$ pointwise with respect to $x>0$ as $y\rightarrow\pm\infty$. Moreover, the vanishing of $\zeta(a,x+iy)$ as $y\rightarrow\pm\infty$ is uniform with respect to $x\in[0,R]$. Indeed, denoting $g(t)=\frac{t^{a-1}e^{-tR}}{1-e^{-t}}$, equation (\ref{eq:hurwitzintegral-1}) implies that \[ \int_{0}^{\infty}g(t)dt \ = \ \Gamma(a)\,\zeta(a,R) \] and \[ \int_{0}^{\infty}\left|g(t)\right|dt \ = \ \Gamma(\Re(a))\,\zeta(\Re(a),R) \, . \] It then follows from the Riemann--Lebesgue lemma that $\lim_{\left|z\right|\rightarrow\infty}\int_{0}^{\infty}g(t)e^{-itz}dt=0$. If $x\in(0,R]$, we may write \[ \Gamma(a)\,\zeta(a,x\pm iy) \ = \ \int_{0}^{\infty}\frac{t^{a-1}e^{-tx}}{1-e^{-t}}\,e^{\mp ity}dt \ = \ \int_{0}^{\infty}g(t)\,e^{-it(\pm y-i(x-R))}dt. \] Since $g(t)$ does not depend on $x$, the speed at which $\zeta(a,x\pm iy)$ vanishes depends on $R$ and $y^{2}+(x-R)^{2}$.
However, we know that $0\leq\left|x-R\right|<R$, so the speed of the vanishing depends only on $R$. Finally, note that \[ \zeta(a,iy) \ = \ \sum_{n=0}^{\infty}\frac{1}{(iy+n)^{a}} \ = \ \sum_{n=0}^{\infty}\frac{1}{(1+iy+n)^{a}}+\frac{1}{(iy)^{a}} \ = \ \zeta(a,1+iy)+\frac{1}{(iy)^{a}} \, , \] so $\zeta(a,iy)\rightarrow0$ as $y\rightarrow\pm\infty$, and the speed at which $\zeta(a,iy)$ vanishes depends on that of $\zeta(a,1+iy)$. Thus, $\zeta(a,x+iy)\rightarrow0$ uniformly as $y\rightarrow\pm\infty$, as long as $x\in[0,R]$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:GeneralReciprocity}] The idea is to use Cauchy's residue theorem to integrate the function \[ f(z) \ = \ \cot(\pi hz)\cot(\pi kz)\,\zeta(a,z) \] along $C(M,\epsilon)$ as $M\rightarrow\infty$, where $C(M,\epsilon)$ denotes the positively oriented rectangle with vertices $1+\epsilon+iM$, $\epsilon+iM$, $\epsilon-iM$ and $1+\epsilon-iM$, for $M>0$ and $0<\epsilon<\min\left\{ \frac{1}{h},\frac{1}{k}\right\} $ (see Figure~\ref{fig:IntegrationPath}). \begin{figure}[htb] \noindent \begin{centering} \includegraphics[scale=0.5]{IntegrationPath} \par\end{centering} \protect\caption{The closed contour $C(M,\epsilon)$.\label{fig:IntegrationPath}} \end{figure} Henceforth, $a\in\mathbb{C}$ is such that $\Re(a)>1$, $(h,k)$ is a pair of coprime positive integers, and $f(z)$ and $C(M,\epsilon)$ are as above, unless otherwise stated. Since $\zeta(a,z)$ is analytic inside $C(M,\epsilon)$, the only poles of $f(z)$ are those of the cotangent factors. Thus, the fact that $h$ and $k$ are coprime implies that a complete list of the possible poles of $f(z)$ inside $C(M,\epsilon)$ is \[ E=\left\{ \frac{1}{h},\dots,\frac{h-1}{h},\frac{1}{k},\dots,\frac{k-1}{k},1\right\} \] and each of these poles is (at most) simple, with the exception of 1, which is (at most) double.
For $m\in\{1,2,\dots,h-1\}$, \begin{align*} \mathop{\text{Res}}_{z=\frac{m}{h}}f(z) \ &= \ \cot\left(\frac{\pi km}{h}\right)\cos(\pi m)\,\zeta\left(a,\frac{m}{h}\right)\mathop{\text{Res}}_{z=\frac{m}{h}}\frac{1}{\sin(\pi hz)} \\ \ &= \ \frac{1}{\pi h}\cot\left(\frac{\pi km}{h}\right)\zeta\left(a,\frac{m}{h}\right). \end{align*} Of course, an analogous result is true for $\mathop{\text{Res}}_{z=\frac{m}{k}}f(z)$ for all $m\in\{1,2,\dots,k-1\}$, and therefore \begin{align*} &\sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z) \ = \\ &\qquad \mathop{\text{Res}}_{z=1}f(z)+\frac{1}{\pi h}\sum_{m=1}^{h-1}\cot\left(\frac{\pi km}{h}\right)\zeta\left(a,\frac{m}{h}\right) +\frac{1}{\pi k}\sum_{m=1}^{k-1}\cot\left(\frac{\pi hm}{k}\right)\zeta\left(a,\frac{m}{k}\right) \end{align*} or, equivalently, \begin{equation} h^{1-a}c_{-a}\left(\frac{h}{k}\right)+k^{1-a}c_{-a}\left(\frac{k}{h}\right) \ = \ \pi(hk)^{1-a}\left(\left(\sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z)\right)-\mathop{\text{Res}}_{z=1}f(z)\right).\label{eq:RecipResidue} \end{equation} We now determine $\mathop{\text{Res}}_{z=1}f(z)$. The Laurent series of the cotangent function about 0 is given by \[ \cot z \ = \ \frac{1}{z}-\frac{1}{3}z-\frac{1}{45}z^{3}-\frac{2}{945}z^{5}+\cdots, \] so, by the periodicity of $\cot z$, for $z\neq1$ in a small neighborhood of $z=1$, \[ \cot(\pi kz) \ = \ \left(\frac{1}{\pi k}\right)\frac{1}{z-1}-\frac{\pi k}{3}(z-1)-\frac{(\pi k)^{3}}{45}(z-1)^{3}-\frac{2(\pi k)^{5}}{945}(z-1)^{5}+\cdots \] and, similarly, \[ \cot(\pi hz) \ = \ \left(\frac{1}{\pi h}\right)\frac{1}{z-1}-\frac{\pi h}{3}(z-1)-\frac{(\pi h)^{3}}{45}(z-1)^{3}-\frac{2(\pi h)^{5}}{945}(z-1)^{5}+\cdots. \] Since $\zeta(a,z)$ is analytic in a small neighborhood of $1$, Taylor's theorem implies that \[\zeta(a,z)=\sum_{n=0}^{\infty}b_{n}(z-1)^{n}, \] where $b_{n}=\frac{\zeta^{(n)}(a,1)}{n!}$ for $n=0,1,2,\dots$ (derivatives relative to $z$). 
Thus, the expansion of $f(z)$ about 1 is of the form \[ \frac{b_{0}}{\pi^{2}hk}\left(\frac{1}{z-1}\right)^{2}+\left(\frac{b_{1}}{\pi^{2}hk}\right)\frac{1}{z-1}+(\textrm{analytic part}). \] Given that $a\neq0,1$, we know that $\frac{\partial}{\partial z}\zeta(a,z)=-a\,\zeta(a+1,z)$ \cite[eq.~25.11.17]{MR2723248}, so $b_{1}=-a\,\zeta(a+1,1)=-a\,\zeta(a+1)$. We conclude that $\mathop{\text{Res}}_{z=1}f(z)=-\frac{a\,\zeta(a+1)}{\pi^{2}hk}$ and it then follows from (\ref{eq:RecipResidue}) that \begin{equation} h^{1-a}c_{-a}\left(\frac{h}{k}\right)+k^{1-a}c_{-a}\left(\frac{k}{h}\right) \ = \ \frac{a\,\zeta(a+1)}{\pi(hk)^{a}}+\frac{\pi}{(hk)^{a-1}}\sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z).\label{eq:ResidueSide} \end{equation} We now turn to the computation of $\displaystyle\sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z)$ via Cauchy's residue theorem, which together with (\ref{eq:ResidueSide}) will provide the reciprocity we are after. Note that the function $f(z)$ is analytic on any two closed contours $C(M_{1},\epsilon)$ and $C(M_{2},\epsilon)$ and since the poles inside these two contours are the same, we may apply Cauchy's residue theorem to both contours and deduce that \[ \int_{C(M_{1},\epsilon)}f(z) \, dz \ = \ \int_{C(M_{2},\epsilon)}f(z) \, dz \, . \] In particular, this implies that \begin{equation} \lim_{M\rightarrow\infty}\int_{C(M,\epsilon)}f(z) \, dz \ = \ 2\pi i\sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z) \, . \label{eq:CauchyLim} \end{equation} Let $\gamma_{1}$ be the path along $C(M,\epsilon)$ from $1+\epsilon+iM$ to $\epsilon+iM$. Similarly, define $\gamma_{2}$ from $\epsilon-iM$ to $1+\epsilon-iM$, $\gamma_{3}$ from $\epsilon+iM$ to $\epsilon-iM$, and $\gamma_{4}$ from $1+\epsilon-iM$ to $1+\epsilon+iM$ (see Figure \ref{fig:IntegrationPath}). 
Since $\Re(a)>1$, \[ \zeta(a,z+1) \ = \ \sum_{n=0}^{\infty}\frac{1}{(n+z+1)^{a}} \ = \ \sum_{n=1}^{\infty}\frac{1}{(n+z)^{a}} \ = \ \zeta(a,z)-\frac{1}{z^{a}} \, , \] and so the periodicity of $\cot z$ implies that \[ \int_{\gamma_{4}}f(z) \, dz \ = \ -\int_{\gamma_{3}}f(z) \, dz+\int_{\gamma_{3}}\frac{\cot(\pi hz)\cot(\pi kz)}{z^{a}} \, dz \, . \] Lemmas \ref{lem:AsympCot} and \ref{lem:AsympHurwitz} imply that $f(z)$ vanishes uniformly as $M\rightarrow\infty$ (uniformity with respect to $\Re(z)\in[\epsilon,1+\epsilon]$), so \[ \lim_{M\rightarrow\infty}\int_{\gamma_{1}}f(z) \, dz \ = \ 0 \ = \ \lim_{M\rightarrow\infty}\int_{\gamma_{2}}f(z) \, dz \, . \] This means that \[ \lim_{M\rightarrow\infty}\int_{C(M,\epsilon)}f(z) \, dz \ = \ \lim_{M\rightarrow\infty}\left(\int_{\gamma_{3}}f(z) \, dz+\int_{\gamma_{4}}f(z) \, dz\right) \] and it follows from (\ref{eq:ResidueSide}) and (\ref{eq:CauchyLim}) that \begin{align*} &h^{1-a}c_{-a}\left(\frac{h}{k}\right)+k^{1-a}c_{-a}\left(\frac{k}{h}\right) \ = \\ &\qquad \frac{a\,\zeta(a+1)}{\pi(hk)^{a}}+\frac{(hk)^{1-a}}{2i}\int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\cot(\pi hz)\cot(\pi kz)}{z^{a}} \, dz \, . \end{align*} This completes the proof of Theorem~\ref{thm:GeneralReciprocity}. \end{proof} To prove Theorem \ref{thm:nReciprocity}, we now turn to the particular case in which $a=n>1$ is an odd integer and study Bettin--Conrey sums of the form $c_{-n}$. Let $\Psi^{(n)}(z)$ denote the $(n+2)$-th polygamma function (see, for example, \cite[Sec.~5.15]{MR2723248}). It is well known that for $n$ a positive integer, \[ \zeta(n+1,z) \ = \ \frac{(-1)^{n+1}\Psi^{(n)}(z)}{n!} \] whenever $\Re(z)>0$ (see, for instance, \cite[eq.~25.11.12]{MR2723248}), so for $n>1$, we may write \[ c_{-n}\left(\frac{h}{k}\right) \ = \ \frac{(-1)^{n}}{k^{n}(n-1)!}\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\Psi^{(n-1)}\left(\frac{m}{k}\right).
\] By the reflection formula for the polygamma functions \cite[eq.~5.15.6]{MR2723248}, \[ \Psi^{(n)}(1-z)+(-1)^{n+1}\Psi^{(n)}(z) \ = \ (-1)^{n}\pi\cot^{(n)}(\pi z), \] we know that if $n$ is odd, then \[ \Psi^{(n-1)}\left(1-\frac{m}{k}\right)-\Psi^{(n-1)}\left(\frac{m}{k}\right) \ = \ \pi\cot^{(n-1)}\left(\frac{\pi m}{k}\right) \] for each $m\in\{1,2,\dots,k-1\}$. Therefore, \begin{align*} 2\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\Psi^{(n-1)}\left(\frac{m}{k}\right) \ &= \ \sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\Psi^{(n-1)}\left(\frac{m}{k}\right)\\ &\qquad +\sum_{m=1}^{k-1}\cot\left(\frac{\pi(k-m)h}{k}\right)\Psi^{(n-1)}\left(1-\frac{m}{k}\right), \end{align*} which implies that \begin{align*} &2\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\Psi^{(n-1)}\left(\frac{m}{k}\right) \\ &\qquad = \ \sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\left(\Psi^{(n-1)}\left(\frac{m}{k}\right)-\Psi^{(n-1)}\left(1-\frac{m}{k}\right)\right)\\ &\qquad = \ -\pi\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\cot^{(n-1)}\left(\frac{\pi m}{k}\right). \end{align*} This means that for $n>1$ odd, $c_{-n}$ is essentially a Dedekind cotangent sum. Indeed, using the notation in~\cite{beckcot}, \begin{align*} c_{-n}\left(\frac{h}{k}\right) \ &= \ \frac{\pi}{2k^{n}(n-1)!}\sum_{m=1}^{k-1}\cot\left(\frac{\pi mh}{k}\right)\cot^{(n-1)}\left(\frac{\pi m}{k}\right)\\ &= \ \frac{\pi}{2(n-1)!}\,\mathfrak{c}\left(\begin{array}{c|cc} k & h & 1\\ n-1 & 0 & n-1\\ 0 & 0 & 0\\ \end{array}\right). \end{align*} Thus Theorem~\ref{thm:nReciprocity} is an instance of Theorem~\ref{thm:GeneralReciprocity}. Its significance lies in giving an explicit reciprocity law for Bettin--Conrey sums of the form $c_{-n}$ in terms of Bernoulli numbers; for this reason we give the details of its proof.
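Both the cotangent-derivative representation above and the Bernoulli-number reciprocity of Theorem~\ref{thm:nReciprocity} can be spot-checked numerically for $n=3$. The Python sketch below is ours, not the paper's: it computes $c_{-3}$ from the Hurwitz-zeta definition (via a naive summation routine) and reads $\cot^{(j)}(\pi x)$ as the $j$-th derivative of the map $x\mapsto\cot(\pi x)$ (so that it carries a factor $\pi^{j}$), which is the convention forced by the reflection formula used above.

```python
import math

def hurwitz_zeta(s, z, N=3000):
    # Naive Hurwitz zeta for real s > 1, z > 0: direct sum plus
    # a three-term Euler-Maclaurin tail.
    head = sum((j + z) ** (-s) for j in range(N))
    w = N + z
    return head + w ** (1 - s) / (s - 1) + 0.5 * w ** (-s) + s * w ** (-s - 1) / 12

def cot(u):
    return math.cos(u) / math.sin(u)

def c_minus_n(h, k, n):
    # c_{-n}(h/k) = k^{-n} sum_{m=1}^{k-1} cot(pi m h / k) zeta(n, m/k).
    return k ** (-n) * sum(cot(math.pi * m * h / k) * hurwitz_zeta(n, m / k)
                           for m in range(1, k))

h, k, n = 2, 3, 3

# (1) Dedekind-cotangent representation for n = 3; cot^{(2)} is read as the
# second derivative of x -> cot(pi x), hence the factor pi^2.
def cot2(x):
    c = cot(math.pi * x)
    return 2 * math.pi ** 2 * c * (1 + c * c)

lhs1 = c_minus_n(h, k, n)
rhs1 = math.pi / (2 * k ** n * math.factorial(n - 1)) * sum(
    cot(math.pi * m * h / k) * cot2(m / k) for m in range(1, k))
assert abs(lhs1 - rhs1) < 1e-9

# (2) Reciprocity with Bernoulli numbers (convention B_1 := 0): for odd n,
#   h^{1-n} c_{-n}(h/k) + k^{1-n} c_{-n}(k/h)
#     = (-1)^{(n-1)/2} (2 pi / (hk))^n (n B_{n+1} + S) / (n+1)!,
#   S = sum_m binom(n+1, m) B_m B_{n+1-m} h^m k^{n+1-m};
# the prefactor is (2 pi i/(hk))^n / (i (n+1)!) written in real form.
B = [1.0]
for m in range(1, n + 2):
    B.append(-sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
B[1] = 0.0
S = n * B[n + 1] + sum(math.comb(n + 1, m) * B[m] * B[n + 1 - m]
                       * h ** m * k ** (n + 1 - m) for m in range(n + 2))
lhs2 = h ** (1 - n) * c_minus_n(h, k, n) + k ** (1 - n) * c_minus_n(k, h, n)
rhs2 = (-1) ** ((n - 1) // 2) * (2 * math.pi / (h * k)) ** n * S / math.factorial(n + 1)
assert abs(lhs2 - rhs2) < 1e-8
```

For instance, for $h/k=2/3$ the run above returns $c_{-3}(2/3)=-4\pi^{3}/243$ to within numerical precision, and both sides of the reciprocity equal $-\pi^{3}/243$.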
\begin{proof}[Proof of Theorem \ref{thm:nReciprocity}] We consider the closed contour $\widetilde{C}(M,\epsilon)$ defined as the positively oriented rectangle with vertices $1+iM$, $iM$, $-iM$ and $1-iM$, with indentations (to the right) of radius $0<\epsilon<\min\left\{ \frac{1}{h},\frac{1}{k}\right\} $ around 0 and 1 (see Figure~\ref{fig:IndentedPath}). \begin{figure}[htb] \noindent \begin{centering} \includegraphics[scale=0.5]{IndentedPath} \par\end{centering} \protect\caption{The closed contour $\widetilde{C}(M,\epsilon).$\label{fig:IndentedPath}} \end{figure} Since $\widetilde{C}(M,\epsilon)$ contains the same poles of $f(z) = \cot(\pi hz)\cot(\pi kz)\,\zeta(n,z)$ as the closed contour $C(M,\epsilon)$ in Figure \ref{fig:IntegrationPath} used to prove Theorem \ref{thm:GeneralReciprocity}, we may apply Cauchy's residue theorem, letting $M\rightarrow\infty$, and we only need to determine $\lim_{M\rightarrow\infty}\int_{\widetilde{C}(M,\epsilon)}f(z) \, dz$ in order to deduce a reciprocity law for the sums $c_{-n}$. As in the case of $C(M,\epsilon)$, the integrals along the horizontal paths vanish, so using the periodicity of the cotangent to add integrals along parallel paths, as we did when considering $C(M,\epsilon)$, we obtain \begin{equation} \lim_{M\rightarrow\infty}\int_{\widetilde{C}(M,\epsilon)}f(z) \, dz \ = \ \lim_{M\rightarrow\infty}\left(\int_{iM}^{i\epsilon}g(z) \, dz+\int_{-i\epsilon}^{-iM}g(z) \, dz\right)+\int_{\gamma_3}g(z) \, dz \, ,\label{eq:verticIntegralCancel} \end{equation} where $\gamma_3$ denotes the indented path around 0 and \[ g(z) \ = \ \frac{\cot(\pi kz)\cot(\pi hz)}{z^{n}} \, . \] Given that $g(z)$ is an odd function, the vertical integrals cancel and we may apply Cauchy's residue theorem to integrate $g(z)$ along the positively oriented circle of radius $\epsilon$ and centered at 0, to deduce that \[ \lim_{M\rightarrow\infty}\int_{\widetilde{C}(M,\epsilon)}f(z) \, dz \ = \ -\pi i\mathop{\text{Res}}_{z=0}g(z) \, . 
\] This is the main reason to use the contour $\widetilde{C}(M,\epsilon)$ instead of $C(M,\epsilon)$. Indeed, integration along $\widetilde{C}(M,\epsilon)$ exploits the parity of the function $g(z)$, allowing us to cancel the vertical integrals in (\ref{eq:verticIntegralCancel}). The expansion of the cotangent function is \[\pi z\cot(\pi z) \ = \ \sum_{m=0}^{\infty}\frac{(2\pi i)^{m}B_{m}}{m!}z^{m},\] with the convention that $B_{1}$ must be redefined to be zero. Thus, we have the expansion \[ \cot(\pi kz) \ = \ \sum_{m=-1}^{\infty}\frac{(2i)(2\pi ik)^{m}B_{m+1}}{(m+1)!}z^{m} \] and of course, an analogous result holds for $h$. Hence, \begin{equation} \mathop{\text{Res}}_{z=0}g(z) \ = \ \frac{(2i)(2\pi i)^{n}}{\pi hk(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m}h^{m}k^{n+1-m},\label{eq:Resg(z)} \end{equation} and given that $\zeta(n+1)=-\frac{(2\pi i)^{n+1}}{2(n+1)!}B_{n+1}$ \cite[eq.~25.6.2]{MR2723248}, the Cauchy residue theorem and (\ref{eq:ResidueSide}) yield \begin{align} &\left(\frac{2\pi i}{hk}\right)^{n}\frac{1}{i(n+1)!}\left(nB_{n+1}+\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m}h^{m}k^{n+1-m}\right) \label{eq:RecipProofOdd} \\ &\qquad = \ h^{1-n}c_{-n}\left(\frac{h}{k}\right)+k^{1-n}c_{-n}\left(\frac{k}{h}\right). \nonumber \end{align} Finally, note that the convention $B_{1}:=0$ is irrelevant in (\ref{eq:RecipProofOdd}), since $B_{1}$ in this sum is always multiplied by a Bernoulli number with odd index larger than~1. \end{proof} Note that Theorem \ref{thm:nReciprocity} is essentially the same as the reciprocity deduced by Apostol for Dedekind--Apostol sums \cite{MR0034781}. This is a consequence of the fact that for $n>1$ an odd integer, $c_{-n}\left(\frac{h}{k}\right)$ is a multiple of the Dedekind--Apostol sum $s_{n}(h,k)$. Indeed, for such $n$ \cite[Theorem ~1]{MR0046379} \[ s_{n}(h,k) \ = \ i \, n! \, (2\pi i)^{-n} \, c_{-n}\left(\frac{h}{k}\right). 
\] It is worth mentioning that although the Dedekind--Apostol sum $s_{n}(h,k)$ is trivial for $n$ even \cite[eq.~(4.13)]{MR0034781}, in the sense that $s_{n}(h,k)$ is independent of $h$, the Bettin--Conrey sum $c_{-n}\left(\frac{h}{k}\right)$ is not. The following corollary is an immediate consequence of Theorems~\ref{thm:GeneralReciprocity} and~\ref{thm:nReciprocity}. \begin{cor} \label{cor:nIntegral}Let $n>1$ be an odd integer and suppose $h$ and $k$ are positive coprime integers, then for any $0<\epsilon<\min\left\{ \frac{1}{h},\frac{1}{k}\right\}$, \[ \int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\cot(\pi hz)\cot(\pi kz)}{z^{n}} \, dz \ = \ \frac{2(2\pi i)^{n}}{hk(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m}h^{m}k^{n+1-m}. \] \end{cor} \begin{proof}[Proof of Theorem \ref{thm:nPsiG}] Given that $c_{-n}\left(\frac{-k}{h}\right)=-c_{-n}\left(\frac{k}{h}\right)$, it follows from Theorems~\ref{thm:bettinconrey} and~\ref{thm:nReciprocity} that \begin{equation} \psi_{-n}\left(\frac{h}{k}\right) \ = \ \frac{(2\pi i)^{n}}{\zeta(n)(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m}\left(\frac{h}{k}\right)^{m-1}.\label{eq:Psihk} \end{equation} The function \[ \phi_{-n}(z) \ = \ \frac{(2\pi i)^{n}}{\zeta(n)(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m} \, z^{m-1} \] is analytic on $\mathbb{C}\backslash\mathbb{R}_{\leq0}$ and, by \cite[Theorem~1]{bettinconreyperiodfunctions}, so is $\psi_{-n}$. Let \[ S_{n} \ = \ \left\{ z\in\mathbb{C}\backslash\mathbb{R}_{\leq0}\mid\psi_{-n}(z)=\phi_{-n}(z)\right\}. \] Since all positive rationals can be written in reduced form, it follows from (\ref{eq:Psihk}) that $\mathbb{Q}_{>0}\subseteq S_{n}$. Thus, $S_{n}$ is not a discrete set and given that both $\psi_{-n}$ and $\phi_{-n}$ are analytic on the connected open set $\mathbb{C}\backslash\mathbb{R}_{\leq0}$, Theorem 1.2($ii$) in \cite[p.~90]{MR1659317} implies that $\psi_{-n}=\phi_{-n}$ on $\mathbb{C}\backslash\mathbb{R}_{\leq0}$. 
That is, \[ \psi_{-n}(z) \ = \ \frac{(2\pi i)^{n}}{\zeta(n)(n+1)!}\sum_{m=0}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m} \, z^{m-1} \] for all $z\in\mathbb{C}\backslash\mathbb{R}_{\leq0}$. Now \[ \psi_{-n}(z) \ = \ \frac{i}{\pi z}\frac{\zeta(1+n)}{\zeta(n)}-iz^{n-1}\cot\left(\frac{-\pi n}{2}\right)+i\frac{g_{-n}(z)}{\zeta(n)}, \] and since $n$ is odd, $\cot\left(\frac{-\pi n}{2}\right)=0$ and $\zeta(n+1)=-\frac{(2\pi i)^{n+1}}{2(n+1)!}B_{n+1}$ \cite[eq.~25.6.2]{MR2723248}, so \begin{align*} g_{-n}(z) \ &= \ \frac{i(2\pi i)^{n}B_{n+1}}{(n+1)!}\left(\frac{1}{z}\right)-i\,\zeta(n)\,\psi_{-n}(z)\\ &= \ \frac{-i(2\pi i)^{n}}{(n+1)!}\sum_{m=1}^{n+1}{n+1 \choose m}B_{m}B_{n+1-m} \, z^{m-1}\\ &= \ \frac{-i(2\pi i)^{n}}{(n+1)!}\sum_{m=0}^{n}{n+1 \choose m+1}B_{m+1}B_{n-m} \, z^{m}\, . \qedhere \end{align*} \end{proof} Clearly, Theorem \ref{thm:nPsiG} is a particular case of Bettin--Conrey's \cite[Theorem ~3]{bettinconreyperiodfunctions}. However, the proofs are independent, so Theorem~\ref{thm:nPsiG} is stronger (in the particular case $a=-n$, with $n>1$ an odd integer), because it completely determines $g_{-n}$ and shows that $g_{-n}$ is a polynomial. In particular, it becomes obvious that if $a\in\mathbb{Z}_{\leq1}$ is odd and $(a,m)\neq(0,0)$, then $\pi g_{a}^{(m)}(1)$ is a rational polynomial in~$\pi^{2}$. \section{Generalizations of Bettin--Conrey Sums}\label{subsec:GeneralizationBettinConreySums} Now we study generalized Bettin--Conrey sums \[ c_{a}\left(\begin{array}{c|ccc} k_{0} & k_{1} & \cdots & k_{n}\\ m_{0} & m_{1} & \cdots & m_{n} \end{array}\right) \ = \ k_{0}^{a}\sum_{l=1}^{k_{0}-1}\zeta^{(m_{0})}\left(-a,\frac{l}{k_{0}}\right)\prod_{j=1}^{n}\cot^{(m_{j})}\left(\frac{\pi k_{j}l}{k_{0}}\right) . \] Henceforth, $B_n$ denotes the $n$-th Bernoulli number with the convention $B_{1}:=0$. The following reciprocity theorem generalizes Theorem~\ref{thm:GeneralReciprocity}.
\begin{thm} \label{thm:GeneralRecip}Let $d\geq2$ and suppose that $k_{1},\dots,k_{d}$ is a list of pairwise coprime positive integers and $m_{0},m_{1},\dots,m_{d}$ are nonnegative integers. If $\Re(a)>1$ and $0<\epsilon<\min_{1\leq j\leq d}\left\{ \frac{1}{k_{j}}\right\} $, then \begin{multline*} \sum_{j=1}^{d}\frac{(-1)^{m_{j}}}{\pi}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}{m_{j} \choose l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}}\left(\prod_{{t=1\atop t\neq j}}^{d}(\pi k_{t})^{l_{t}}\right)k_{j}^{a-1}\,c_{-a}(j) \ = \\ -\left(\sum_{l_{0}=0}^{m_{1}+\cdots+m_{d}+d-1}\sum_{l_{1}+\cdots+l_{d}=-l_{0}-1}\prod_{j=0}^{d}a_{l_{j}}\right)+\frac{(-1)^{m_{0}}a^{(m_{0})}}{2\pi i}\int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{a+m_{0}}}dz \, , \end{multline*} where $x^{(n)}=\prod_{l=0}^{n-1}(x+l)$ is the rising factorial, \[ c_{-a}(j) \ = \ c_{-a}\left(\begin{array}{c|ccccc} k_{j} & k_{1} & \cdots & \widehat{k_{j}} & \cdots & k_{d}\\ m_{0}+l_{0} & m_{1}+l_{1} & \cdots & \widehat{m_{j}+l_{j}} & \cdots & m_{d}+l_{d} \end{array}\right) , \] and for $j=0,1,\ldots,d$, we define \[ a_{l_{j}}=\begin{cases} \frac{(-1)^{m_{0}+l_{0}}a^{(m_{0}+l_{0})}\zeta(a+m_{0}+l_{0})}{l_{0}!} & \textnormal{if }j=0\textnormal{ and }l_{0}\geq0,\\ \frac{(2i)^{l_{j}+m_{j}+1}B_{l_{j}+m_{j}+1}(\pi k_{j})^{l_{j}+m_{j}}(l_{j}+1)^{(m_{j})}}{(l_{j}+m_{j}+1)!} & \textnormal{if }j\neq0\textnormal{ and }l_{j}\geq0,\\ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}} & \textnormal{if }j\neq0\textnormal{ and }l_{j}=-(m_{j}+1),\\ 0 & \textnormal{otherwise.} \end{cases} \] \end{thm} The proof of this theorem is analogous to that of Theorem \ref{thm:GeneralReciprocity}. Henceforth, $\Re(a)>1$, $k_{1},\dots,k_{d}$ is a list of pairwise coprime positive integers and $0<\epsilon<\min_{1\leq j\leq d}\left\{ \frac{1}{k_{j}}\right\} $.
In addition, $m_{0},m_{1},\dots,m_{d}$ is a list of nonnegative integers, \[ f(z) \ = \ \zeta^{(m_{0})}(a,z)\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z) \] and, as before, $C(M,\epsilon)$ denotes the positively oriented rectangle with vertices $1+\epsilon+iM$, $\epsilon+iM$, $\epsilon-iM$ and $1+\epsilon-iM$, where $M>0$ (see Figure \ref{fig:IntegrationPath}). \begin{proof} [Proof of Theorem \ref{thm:GeneralRecip}]For each $j$, we know that $\cot\left(\pi k_{j}z\right)$ is analytic on and inside $C(M,\epsilon)$, with the exception of the poles $\frac{1}{k_{j}},\dots,\frac{k_{j}-1}{k_{j}}$. This means that except for the aforementioned poles, $\cot^{(m_{j})}(\pi k_{j}z)$ is analytic on and inside $C(M,\epsilon)$, so the analyticity of $\zeta^{(m_{0})}(a,z)$ on and inside $C(M,\epsilon)$ implies that a complete list of (possible) poles of $f$ is \[ E \ = \ \left\{ \frac{1}{k_{1}},\dots,\frac{k_{1}-1}{k_{1}},\dots,\frac{1}{k_{d}},\dots,\frac{k_{d}-1}{k_{d}},1\right\} . \] Let $j\in\{1,2,\dots,d\}$ and $q\in\{1,2,\dots,k_{j}-1\}$; then the Laurent series of $\cot(\pi k_{j}z)$ about $\frac{q}{k_{j}}$ is of the form $\left(\frac{1}{\pi k_{j}}\right)\frac{1}{z-\frac{q}{k_{j}}}+(\textrm{analytic part})$, so near $\frac{q}{k_{j}}$, \[ \cot^{(m_{j})}(\pi k_{j}z) \ = \ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}}\left(z-\frac{q}{k_{j}}\right)^{-(m_{j}+1)}+\textrm{ (analytic part) } . \] Since $(k_{j},k_{t})=1$ for $t\neq j$, it follows from Taylor's theorem that for $t\neq j$ the expansion \[ \cot^{(m_{t})}(\pi k_{t}z) \ = \ \sum_{l_{t}=0}^{\infty}\frac{(\pi k_{t})^{l_{t}}}{l_{t}!}\cot^{(m_{t}+l_{t})}\left(\frac{\pi k_{t}q}{k_{j}}\right)\left(z-\frac{q}{k_{j}}\right)^{l_{t}} \] is valid near $\frac{q}{k_{j}}$. Taylor's theorem also yields that \[ \zeta^{(m_{0})}(a,z) \ = \ \sum_{l_{0}=0}^{\infty}\frac{\zeta^{(m_{0}+l_{0})}\left(a,\frac{q}{k_{j}}\right)}{l_{0}!}\left(z-\frac{q}{k_{j}}\right)^{l_{0}} \] near $\frac{q}{k_{j}}$.
Hence, we may write $\mathop{\text{Res}}_{z=\frac{q}{k_{j}}}f(z)$ as \[ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}\frac{\zeta^{(m_{0}+l_{0})}\left(a,\frac{q}{k_{j}}\right)}{l_{0}!}\prod_{{t=1\atop t\neq j}}^{d}\frac{(\pi k_{t})^{l_{t}}}{l_{t}!}\cot^{(m_{t}+l_{t})}\left(\frac{\pi k_{t}q}{k_{j}}\right). \] Therefore $\sum_{q=1}^{k_{j}-1}\mathop{\text{Res}}_{z=\frac{q}{k_{j}}}f(z)$ is given by \begin{multline*} \frac{(-1)^{m_{j}}}{\pi}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}{m_{j} \choose l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}}\left(\prod_{{t=1\atop t\neq j}}^{d}(\pi k_{t})^{l_{t}}\right)\\ \times\frac{1}{k_j}\sum_{q=1}^{k_{j}-1}\zeta^{(m_{0}+l_{0})}\left(a,\frac{q}{k_{j}}\right)\prod_{{t=1\atop t\neq j}}^{d}\cot^{(m_{t}+l_{t})}\left(\frac{\pi k_{t}q}{k_{j}}\right)\\ =\frac{(-1)^{m_{j}}}{\pi}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}{m_{j} \choose l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}}\left(\prod_{{t=1\atop t\neq j}}^{d}(\pi k_{t})^{l_{t}}\right)k_{j}^{a-1}c_{-a}(j) \, . \end{multline*} Given that this holds for all $j$, we conclude that \begin{align} &\sum_{z_{0}\in E\backslash\{1\}}\mathop{\text{Res}}_{z=z_{0}}f(z) \ = \label{eq:sumResRecip} \\ &\qquad \sum_{j=1}^{d}\frac{(-1)^{m_{j}}}{\pi}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}{m_{j} \choose l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}}\left(\prod_{{t=1\atop t\neq j}}^{d}(\pi k_{t})^{l_{t}}\right)k_{j}^{a-1}c_{-a}(j) \, . \nonumber \end{align} We now compute $\mathop{\text{Res}}_{z=1}f(z)$.
For each $j\in\{1,2,\dots,d\}$, we know that $\cot(\pi k_{j}z)$ has an expansion about 1 of the form \[ \cot(\pi k_{j}z)=\frac{1}{\pi k_{j}(z-1)}+\sum_{n=0}^{\infty}\frac{(2i)^{n+1}B_{n+1}(\pi k_{j})^{n}}{(n+1)!}(z-1)^{n}, \] so the Laurent expansion of $\cot^{(m_{j})}(\pi k_{j}z)$ about 1 is given by \begin{align*} \cot^{(m_{j})}(\pi k_{j}z) & =\frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}(z-1)^{m_{j}+1}}+\sum_{l_{j}=0}^{\infty}\frac{(2i)^{l_{j}+m_{j}+1}B_{l_{j}+m_{j}+1}(\pi k_{j})^{l_{j}+m_{j}}(l_{j}+1)^{(m_{j})}}{(l_{j}+m_{j}+1)!}(z-1)^{l_{j}}\\ & =\sum_{l_{j}=-\infty}^{\infty}a_{l_{j}}(z-1)^{l_{j}}, \end{align*} where \[ a_{l_{j}}=\begin{cases} \frac{(2i)^{l_{j}+m_{j}+1}B_{l_{j}+m_{j}+1}(\pi k_{j})^{l_{j}+m_{j}}(l_{j}+1)^{(m_{j})}}{(l_{j}+m_{j}+1)!} & \textnormal{if }l_{j}\geq0,\\ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}} & \textnormal{if }l_{j}=-(m_{j}+1),\\ 0 & \textnormal{otherwise}. \end{cases} \] Now, Taylor's theorem implies that the expansion of $\zeta^{(m_{0})}(a,z)$ about 1 is of the form \[ \zeta^{(m_{0})}(a,z)=\sum_{l_{0}=0}^{\infty}a_{l_{0}}(z-1)^{l_{0}}, \] where \[ a_{l_{0}}=\frac{\zeta^{(m_{0}+l_{0})}(a,1)}{l_{0}!}=\frac{(-1)^{m_{0}+l_{0}}a^{(m_{0}+l_{0})}\zeta(a+m_{0}+l_{0})}{l_{0}!}. \] Therefore, \[ \mathop{\text{Res}}_{z=1}f(z)=\sum_{l_{0}=0}^{m_{1}+\cdots+m_{d}+d-1}\sum_{l_{1}+\cdots+l_{d}=-l_{0}-1}\prod_{j=0}^{d}a_{l_{j}}. \] Since $\frac{\partial}{\partial z}\,\zeta(a,z)=-a\,\zeta(a+1,z)$, \begin{align*} \zeta^{(m_{0})}(a,z+1) \ &= \ (-1)^{m_{0}}\zeta(a+m_{0},z+1)a^{(m_{0})} \ = \ (-1)^{m_{0}}a^{(m_{0})}\sum_{n=0}^{\infty}\frac{1}{(n+z+1)^{a+m_{0}}}\\ &= \ (-1)^{m_{0}}a^{(m_{0})}\left(\zeta(a+m_{0},z)-\frac{1}{z^{a+m_{0}}}\right) \ = \ \zeta^{(m_{0})}(a,z)-\frac{(-1)^{m_{0}}a^{(m_{0})}}{z^{a+m_{0}}} \, . 
\end{align*} This means that \[ \int_{1+\epsilon-iM}^{1+\epsilon+iM}f(z) \, dz \, +\int_{\epsilon+iM}^{\epsilon-iM}f(z) \, dz \ = \ (-1)^{m_{0}}a^{(m_{0})}\int_{\epsilon+iM}^{\epsilon-iM}\frac{\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{a+m_{0}}} \, dz \, . \] As in the proof of Theorem \ref{thm:GeneralReciprocity}, it follows from Lemmas \ref{lem:AsympCot} and \ref{lem:AsympHurwitz} that the integrals along the horizontal segments of $C(M,\epsilon)$ vanish, so the Cauchy residue theorem implies that \[ \sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z) \ = \ \frac{(-1)^{m_{0}}a^{(m_{0})}}{2\pi i}\int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{a+m_{0}}} \, dz \, . \] The result then follows from (\ref{eq:sumResRecip}) and the computation of $\mathop{\text{Res}}_{z=1}f(z)$. \end{proof} An analogue of Theorem \ref{thm:nReciprocity} is valid for generalized Bettin--Conrey sums. \begin{thm} \label{thm:PartRecip}Let $d\geq2$ and suppose that $k_{1},\dots,k_{d}$ is a list of pairwise coprime positive integers and $m_{0},m_{1},\dots,m_{d}$ are nonnegative integers.
If $n>1$ is an integer and $m_{0}+n+d+\sum_{j=1}^{d}m_{j}$ is odd, then \begin{align*} &\sum_{j=1}^{d}\frac{(-1)^{m_{j}}}{\pi}\sum_{{l_{0}+\cdots+\widehat{l_{j}}+\cdots+l_{d}=m_{j}\atop l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}\geq0}}{m_{j} \choose l_{0},\ldots,\widehat{l_{j}},\ldots,l_{d}}\left(\prod_{{t=1\atop t\neq j}}^{d}(\pi k_{t})^{l_{t}}\right)k_{j}^{n-1}c_{-n}(j) \ = \\ &-\left(\sum_{l_{0}=0}^{m_{1}+\cdots+m_{d}+d-1}\sum_{l_{1}+\cdots+l_{d}=-l_{0}-1}\prod_{j=0}^{d}a_{l_{j}}\right)+\frac{(-1)^{m_{0}+1}n^{(m_{0})}}{2}\sum_{l_{1}+\cdots+l_{d}=n+m_{0}-1}\prod_{j=1}^{d}a_{l_{j}} \end{align*} where \[ c_{-n}(j) \ = \ c_{-n}\left(\begin{array}{c|ccccc} k_{j} & k_{1} & \cdots & \widehat{k_{j}} & \cdots & k_{d}\\ m_{0}+l_{0} & m_{1}+l_{1} & \cdots & \widehat{m_{j}+l_{j}} & \cdots & m_{d}+l_{d} \end{array}\right) \] and \[ a_{l_{j}}=\begin{cases} \frac{(-1)^{m_{0}+l_{0}}n^{(m_{0}+l_{0})}\zeta(n+m_{0}+l_{0})}{l_{0}!} & \textnormal{if }j=0\textnormal{ and }l_{0}\geq0,\\ \frac{(2i)^{l_{j}+m_{j}+1}B_{l_{j}+m_{j}+1}(\pi k_{j})^{l_{j}+m_{j}}(l_{j}+1)^{(m_{j})}}{(l_{j}+m_{j}+1)!} & \textnormal{if }j\neq0\textnormal{ and }l_{j}\geq0,\\ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}} & \textnormal{if }j\neq0\textnormal{ and }l_{j}=-(m_{j}+1),\\ 0 & \textnormal{otherwise.} \end{cases} \] \end{thm} \begin{proof} As in the proof of Theorem \ref{thm:nReciprocity}, the contour $\widetilde{C}(M,\epsilon)$ is defined as the positively oriented rectangle with vertices $1+iM$, $iM$, $-iM$ and $1-iM$, with indentations (to the right) of radius $0<\epsilon<\min_{1\leq j\leq d}\left\{ \frac{1}{k_{j}}\right\} $ around 0 and 1 (see Figure~\ref{fig:IndentedPath}). 
Since this closed contour contains the same poles of $f$ as $C(M,\epsilon)$, we may apply Cauchy's residue theorem, letting $M\rightarrow\infty$, and we only need to determine $\lim_{M\rightarrow\infty}\int_{\widetilde{C}(M,\epsilon)}f(z) \, dz$ in order to deduce a reciprocity law for the generalized Bettin--Conrey sums of the form~$c_{-n}$. Given that $m_{0}+n+d+\sum_{j=1}^{d}m_{j}$ is odd, the function \[ g(z) \ = \ \frac{(-1)^{m_{0}}n^{(m_{0})}\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{n+m_{0}}} \] is odd: \[ g(-z) \ = \ \frac{(-1)^{m_{0}+d+\sum_{j=1}^{d}m_{j}}n^{(m_{0})}}{(-1)^{n+m_{0}}z^{n+m_{0}}}\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z) \ = \ \frac{(-1)^{d+\sum_{j=1}^{d}m_{j}}}{(-1)^{n+m_{0}}}g(z) \ = \ -g(z) \, . \] Let $\gamma(M,\epsilon)$ be the indentation around zero along $\widetilde{C}(M,\epsilon)$, then \begin{align*} \lim_{M\rightarrow\infty}\int_{\widetilde{C}(M,\epsilon)}f(z) \, dz \ &= \ \lim_{M\rightarrow\infty}\left(\int_{iM}^{\epsilon i}g(z) \, dz+\int_{-\epsilon i}^{-iM}g(z) \, dz\right)+\int_{\gamma(M,\epsilon)}g(z) \, dz\\ &= \ \int_{\gamma(M,\epsilon)}g(z) \, dz \, . \end{align*} Given that $g$ is odd, the Cauchy residue theorem implies that \[ \int_{\gamma(M,\epsilon)}g(z) \, dz \ = \ -\pi i\mathop{\text{Res}}_{z=0}g(z) \] and it follows that \[ \sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z) \ = \ -\frac{1}{2}\mathop{\text{Res}}_{z=0}g(z) \, . \] For each $j\in\{1,2,\dots,d\}$ we have an expansion of the form \[ \cot^{(m_{j})}(\pi k_{j}z)=\sum_{l_{j}=-\infty}^{\infty}a_{l_{j}}z^{l_{j}}, \] where \[ a_{l_{j}}=\begin{cases} \frac{(2i)^{l_{j}+m_{j}+1}B_{l_{j}+m_{j}+1}(\pi k_{j})^{l_{j}+m_{j}}(l_{j}+1)^{(m_{j})}}{(l_{j}+m_{j}+1)!} & \textnormal{if }l_{j}\geq0,\\ \frac{(-1)^{m_{j}}m_{j}!}{\pi k_{j}} & \textnormal{if }l_{j}=-(m_{j}+1),\\ 0 & \textnormal{otherwise}. \end{cases} \] Thus $\mathop{\text{Res}}_{z=0}g(z)$ is given by \[ (-1)^{m_{0}}n^{(m_{0})}\sum_{l_{1}+\cdots+l_{d}=n+m_{0}-1}\prod_{j=1}^{d}a_{l_{j}} \, . 
\] Therefore, \[ \sum_{z_{0}\in E}\mathop{\text{Res}}_{z=z_{0}}f(z) \ = \ \frac{(-1)^{m_{0}+1}n^{(m_{0})}}{2}\sum_{l_{1}+\cdots+l_{d}=n+m_{0}-1}\prod_{j=1}^{d}a_{l_{j}} \, , \] which concludes our proof. \end{proof} From Theorems \ref{thm:GeneralRecip} and \ref{thm:PartRecip}, we deduce a computation of the integral \[ \int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{n+m_{0}}} \, dz \] in terms of the sequences $\{a_{l_{j}}\}$, whenever $n\in\mathbb{Z}_{>1}$ and $m_{0}+n+d+\sum_{j=1}^{d}m_{j}$ is odd, which generalizes Corollary~\ref{cor:nIntegral}: \begin{cor} \label{cor:CollapsInt}Let $d\geq2$ and suppose that $k_{1},\dots,k_{d}$ is a list of pairwise coprime positive integers and $m_{0},m_{1},\dots,m_{d}$ are nonnegative integers. If $n>1$ is an integer and $ m_{0}+n+d+\sum_{j=1}^{d}m_{j} $ is odd, then for all $0<\epsilon<\min_{1\leq j\leq d}\left\{ \frac{1}{k_{j}}\right\} $, \[ \int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\prod_{j=1}^{d}\cot^{(m_{j})}(\pi k_{j}z)}{z^{n+m_{0}}}dz=-\pi i\sum_{l_{1}+\cdots+l_{d}=n+m_{0}-1}\prod_{j=1}^{d}a_{l_{j}} \, , \] where the sequences $\{a_{l_{j}}\}$ are as in Theorem \ref{thm:PartRecip}, for $j=1,2,\ldots,d$. \end{cor} The consideration of the case $m_{0}=m_{1}=\cdots=m_{n}=0$ leads to the definition of \textit{higher\--dimensional Bettin--Conrey sums}, \[ c_{a}(k_{0};k_{1},\dots,k_{n}) \ = \ c_{a}\left(\begin{array}{c|ccc} k_{0} & k_{1} & \cdots & k_{n}\\ 0 & 0 & \cdots & 0 \end{array}\right) \ = \ k_{0}^{a}\sum_{m=1}^{k_{0}-1}\zeta\left(-a,\frac{m}{k_{0}}\right)\prod_{l=1}^{n}\cot\left(\frac{\pi k_{l}m}{k_{0}}\right), \] for $a\neq-1$ complex and $k_{0},k_{1},\dots,k_{n}$ a list of positive integers such that $(k_{0},k_{j})=1$ for each $j\neq0$. Of course, higher\--dimensional Bettin--Conrey sums satisfy Theorems \ref{thm:GeneralRecip} and \ref{thm:PartRecip}.
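The Laurent coefficients $a_{l_{j}}$ that drive Theorems~\ref{thm:GeneralRecip} and \ref{thm:PartRecip} can be checked numerically against honest derivatives of $z\mapsto\cot(\pi kz)$. The Python sketch below is ours (it again reads $\cot^{(m)}(\pi kz)$ as the $m$-th $z$-derivative of $\cot(\pi kz)$): it sums the claimed expansion about $0$ and compares it with closed forms for $m=0,1,2$.

```python
import math

def bernoulli(N):
    # B_0..B_N via the standard recurrence; B_1 is set to 0 afterwards,
    # matching the convention used in the theorems above.
    B = [1.0]
    for m in range(1, N + 1):
        B.append(-sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    B[1] = 0.0
    return B

def rising(x, m):
    # Rising factorial x^{(m)} = x (x+1) ... (x+m-1).
    out = 1
    for i in range(m):
        out *= x + i
    return out

def cot_deriv_series(m, k, z, L=30):
    # Claimed Laurent expansion of the m-th z-derivative of cot(pi k z)
    # about 0: principal part a_{-(m+1)} z^{-(m+1)} plus the a_l z^l terms.
    B = bernoulli(L + m + 2)
    val = (-1) ** m * math.factorial(m) / (math.pi * k) * z ** (-(m + 1))
    for l in range(L + 1):
        a = ((2j) ** (l + m + 1) * B[l + m + 1] * (math.pi * k) ** (l + m)
             * rising(l + 1, m) / math.factorial(l + m + 1))
        val += (a * z ** l).real  # the nonzero terms are real
    return val

k, z = 3, 0.02
w = math.pi * k * z
cotw = math.cos(w) / math.sin(w)
# m = 0: the series reproduces cot(pi k z) itself.
assert abs(cot_deriv_series(0, k, z) - cotw) < 1e-10
# m = 1: d/dz cot(pi k z) = -pi k / sin^2(pi k z).
assert abs(cot_deriv_series(1, k, z) + math.pi * k / math.sin(w) ** 2) < 1e-8
# m = 2: d^2/dz^2 cot(pi k z) = 2 (pi k)^2 cot(pi k z) / sin^2(pi k z).
assert abs(cot_deriv_series(2, k, z)
           - 2 * (math.pi * k) ** 2 * cotw / math.sin(w) ** 2) < 1e-4
```

With the same coefficients, $l_{j}=-(m_{j}+1)$ contributes exactly the principal part in the case distinctions above, which is what the parity argument in the proof exploits.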
In particular, if $0<\epsilon<\min_{1\leq l\leq d}\left\{ \frac{1}{k_{l}}\right\} $, $\Re(a)>1$ and $k_{1},\dots,k_{d}$ is a list of pairwise coprime positive integers, then \begin{align*} &\sum_{j=1}^{d}k_{j}^{a-1}c_{-a}(k_{j};k_{1},\ldots,\widehat{k_{j}},\ldots,k_{d}) \ = \\ &\qquad -\pi\sum_{l_{0}=0}^{d-1}\sum_{l_{1}+\cdots+l_{d}=-l_{0}-1} a_{ l_0 } a_{ l_1 } \cdots a_{ l_d } + \frac{1}{2i}\int_{\epsilon+i\infty}^{\epsilon-i\infty}\frac{\prod_{j=1}^{d}\cot(\pi k_{j}z)}{z^{a}} \, dz \, , \end{align*} where \[ a_{l_{j}}=\begin{cases} \frac{(-1)^{l_{0}}a^{(l_{0})}\zeta(a+l_{0})}{l_{0}!} & \textnormal{if }j=0\textnormal{ and }l_{0}\geq0,\\ \frac{(2i)^{l_{j}+1}B_{l_{j}+1}(\pi k_{j})^{l_{j}}}{(l_{j}+1)!} & \textnormal{if }j\neq0\textnormal{ and }l_{j}\geq0,\\ \frac{1}{\pi k_{j}} & \textnormal{if }j\neq0\textnormal{ and }l_{j}=-1,\\ 0 & \textnormal{otherwise}. \end{cases} \] \section{Derivative Cotangent Sums and Critical Values of Estermann Zeta}\label{sec:estermann} As usual, for $a, x\in\mathbb{C}$, let $\sigma_a(n)=\sum_{d|n}d^a$ and $e(x)=e^{2\pi i x}$. For a given rational number $x$, the \emph{Estermann zeta function} is defined through the Dirichlet series \begin{eqnarray}\label{Estermann} E(s,x,a) \ = \ \sum_{n\geq 1}\sigma_a(n) \, e(nx) \, n^{-s}, \end{eqnarray} initially defined for $\Re(s)>\max\left(1,\Re(a)+1\right)$ and analytically continued to the whole $s$-plane with possible poles at $s=1,\ a+1$. For $x= \frac p q$ with $(p,q)=1$ and $q>1$, \[E(s,x,a)-q^{1+a-2s}\zeta(s-a)\zeta(s) \] is an entire function of $s$. By use of the Hurwitz zeta function we observe that \begin{eqnarray}\label{Hurwitz} E(s,x,a) \ = \ q^{a-2s}\sum_{m,n=1}^q e(mnx) \, \zeta \left( s-a, \tfrac m q \right) \zeta \left( s, \tfrac n q \right) .
\end{eqnarray} We consider the sums \begin{eqnarray}\label{Derivative-Cotangent} C(a,s,x) \ = \ q^a\sum_{m=1}^{q-1} e(mx) \, \Phi(-s,1,e(mx))\, \zeta \left(-a, \tfrac m q \right) , \end{eqnarray} where $\Phi(s,z,\lambda)=\sum_{n\geq 0}\frac{\lambda^n}{(z+n)^s}$ is \emph{Lerch's transcendent function}, defined for $z\ne 0,-1,-2,\dots$ when $|\lambda|<1$, and for $\Re(s)>1$ when $|\lambda|=1$, and analytically continued in $\lambda$.\\ The purpose of this section is to establish relationships between $C(a,s,x)$ and values of the Estermann zeta function at integers~$s$. We start with some preliminary results. \begin{lem}\label{difference} Let $k$ be a nonnegative integer. Then \begin{align} \lambda\Phi(s,z+1,\lambda) \ &= \ \Phi(s,z,\lambda)-z^{-s} \label{formula1}\\ \Phi(-k,z,\lambda) \ &= \ -\frac{B_{k+1}(z;\lambda)}{k+1} \label{formula2}\\ B_{k}(0;e(x)) \ &= \ \begin{cases} \frac{1}{2i}\cot(\pi x)-\frac{1}2& \textnormal{if \ensuremath{k=1},}\\ \frac{k}{(2i)^{k}}\cot^{(k-1)}(\pi x)& \textnormal{if \ensuremath{k>1}.} \end{cases}\label{formula3} \end{align} \end{lem} \begin{proof} Equation \eqref{formula1} follows from the special case $m=1$ in \cite[(25.14.4)]{MR2723248}. Equation \eqref{formula2} can be found in \cite[p.164]{Apostol3}. Equation \eqref{formula3} follows from \cite[Lemma 2.1]{Adamchik} and \cite[Theorem~4]{BC}. \end{proof} Lemma \ref{difference} implies that for a positive integer $s=k$, the sum $C(a,k,x)$ defined in \eqref{Derivative-Cotangent} is, up to a constant factor, the $k\th$-derivative cotangent sum \[ C(a,k,x) \ = \ -\frac{1}{(2i)^{k+1}}\ q^a\sum_{m=1}^{q-1}\cot^{(k)}(\pi mx)\, \zeta \left(-a, \tfrac m q \right) . \] \begin{lem}\label{distribution} Let $p, q$ be coprime positive integers and $x=\frac{p}{q}$. For any $n \in \ZZ$ and $z \in \CC$ with $\Re(z)>0$, \begin{eqnarray} \sum_{m=0}^{q-1}e(mnx) \, \zeta \left( s, z + \frac m q \right) \ = \ q^s \, \Phi(s,qz,e(nx)) \, .
\label{distribution1} \end{eqnarray} \end{lem} \begin{proof} Writing $m=kq+j$ with $j=0,\dots, q-1$, we have \[ q^s \, \Phi(s,qz,e(nx)) \ = \ q^s\sum_{m=0}^{\infty}\frac{e(nmx)}{(m+qz)^s} \ = \ \sum_{j=0}^{q-1}e(njx)\sum_{k=0}^{\infty}\frac1{(k+z+ \frac j q)^s} \, . \qedhere \] \end{proof} \begin{prop}\label{duals} Let $p, q$ be coprime positive integers and $x= \frac{p}{q}$. Then \begin{align*} E(-s,x,a-s) \ &= \ q^{a}\sum_{m=1}^{q-1} \, e(mx) \, \zeta \left( -a, \tfrac m q \right) \Phi(-s,1,e(mx))+q^{a}\zeta(-s) \, \zeta(-a) \\ E(-s,x,a-s) \ &= \ q^{s}\sum_{n=1}^{q-1}e(nx) \, \zeta \left( -s, \tfrac n q \right) \Phi(-a,1,e(nx))+q^{s}\zeta(-a) \, \zeta(-s) \, . \end{align*} \end{prop} \begin{thm} Let $a,k$ be nonnegative integers. Then \begin{align*} E\left(-k,x, a-k\right) \ &= \ C(a,k,x)+q^{a} \zeta(-k)\, \zeta(-a) \qquad \textrm{ if } k\geq 1, \\ E\left(-k,x, a-k\right) \ &= \ C(k,a,x)+q^{k} \zeta(-k)\, \zeta(-a) \qquad \textrm{ if } a\geq 1, \end{align*} and \begin{align*} E\left(0,x, a\right) \ &= \ C(a,0,x)-\tfrac12 \zeta(-a) \\ E\left(0,x, a\right) \ &= \ C(0,a,x)-\tfrac12 \zeta(-a)\ \textrm{ with $a\geq 1$.} \end{align*} \end{thm} \begin{cor} Let $a,k$ be nonnegative integers. For any rational number $x\neq 0$, \[ C(a,k,x)-C(k,a,x) \ = \ \begin{cases} 0 & \textnormal{if \ensuremath{k=0} or \ensuremath{a=0},}\\ \left(q^a-q^k\right)\zeta(-k)\, \zeta(-a) & \textnormal{otherwise}. \end{cases} \] \end{cor} \subsection*{Acknowledgements} We thank Sandro Bettin and an anonymous referee for valuable comments. Abdelmejid Bayad was partially supported by the FDIR of the Universit\'e d'Evry Val d'Essonne; Matthias Beck was partially supported by the US National Science Foundation (DMS-1162638).
\section{Introduction} The ability to measure the intrinsic spectrum of quasars (QSOs) plays an important role in astrophysics. Resolving individual emission line profiles can enable the detailed study of both the central supermassive black hole (SMBH) and the internal structure within the SMBH powered accretion disk (through reverberation mapping e.g.\ \citealt{Blandford:1982,Peterson:1993} or the virial method e.g.\ \citealt{Kaspi:2000,Peterson:2004,Vestergaard:2006}). On cosmological scales, the relative strength (or lack thereof) of the emission profiles and spectral slopes of the QSO continuum can be used to yield insights into the thermal and ionisation state of the intergalactic medium (IGM) through studies of the QSO proximity zone \citep[e.g.][]{Mesinger:2004p3625,Wyithe:2004,Fan:2006p4005,Bolton:2007p3623,Mesinger:2007p855,Carilli:2010p1,Calverley:2011,Wyithe:2011,Schroeder:2013p919}. The key ingredient for using QSOs to explore the IGM is observing the intrinsic ultraviolet (UV) emission. Thermal emission from the accretion disk peaks in UV, which then interacts with surrounding neutral hydrogen and recombines two thirds of the time into \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} photons (rest frame $\lambda_{\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}}=1215.67$\AA) to produce a prominent \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission profile. However, owing to the large scattering cross-section of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} photons, neutral hydrogen column densities of $N_{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi{}} > 10^{18} {\rm cm}^{-2}$ are sufficient for \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} to enter into the strong absorption regime. In the case of the IGM, even minute traces ($\sim$~few per cent) of intervening neutral hydrogen along the line-of-sight are capable of scattering \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} photons away from the observer. 
At $z\lesssim3$, the IGM is on average very highly ionised except in dense self shielded clumps \citep[e.g.][]{Fan:2006p4005,McGreer:2015p3668,Collaboration:2015p4320,Collaboration:2016,Collaboration:2016p5913}. Typically, at these redshifts the resonant scattering and absorption of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} photons occurs only due to diffuse amounts of neutral hydrogen which appear blueward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} ($\lambda<\lambda_{\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}}$) as a series of discrete narrow absorption features, known as the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} forest \citep{Rauch:1998}. More problematic for measuring the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line are larger neutral column density absorbers such as Lyman-limit systems and damped \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} absorbers (DLAs). In the case of DLAs, with column densities of $N_{\ifmmode\mathrm{H\,{\scriptscriptstyle I}}\else{}H\,{\scriptsize I}\fi{}} > 10^{20} {\rm cm}^{-2}$ not only do these systems lead to completely saturated absorption, but they are also sufficiently dense to allow \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} absorption within the Lorentzian wings. If these DLAs are located sufficiently close to the source QSO, absorption in the wings can significantly affect the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission profile. Furthermore, associated strong absorbers intrinsic to the host QSO environment itself can additionally impact the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line emission \citep{Shen:2012}. At $z>6$, the IGM becomes increasingly neutral, and the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} forest gives way to completely dark (absorbed) patches \citep[e.g.][]{Barkana:2002,Gallerani:2008,Mesinger:2010p6068,McGreer:2015p3668}. 
Once the IGM itself obtains a sufficiently large column density, it too begins to absorb \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} in the Lorentzian wings of the scattering cross-section, referred to as IGM damping wing absorption \citep{MiraldaEscude:1998p1041}. In principle, through the detection of the IGM damping wing imprint from the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission spectra of high-$z$ QSOs one is able to provide a direct measurement of the neutral fraction of the IGM during the reionisation epoch. Importantly, this approach requires an estimate of the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line in order to determine the contribution from the smooth component absorption of the IGM damping wing. Using this technique, limits on the IGM neutral fraction at $z>6$ have been obtained \citep[e.g.][]{Mesinger:2007p855,Bolton:2011p1063,Schroeder:2013p919}. In the absence of a viable method to recover the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission profile, one can attempt to use an emission template constructed from a QSO composite spectrum. Numerous composite spectra exist, constructed by averaging over a vastly different number of QSOs \citep[e.g.][]{Francis:1991p5112,Brotherton:2001p1,VandenBerk:2001p3887,Telfer:2002p5713,Shull:2012p5716,Stevans:2014p5726,Harris:2016p5028}. However, by construction, these composite spectra only describe the average properties of QSOs, not the intrinsic variations of individual QSOs. Not accounting for these intrinsic properties can bias estimates of the IGM damping wing \citep{Mortlock:2011p1049,Bosman:2015p5005}. An alternative method is to reconstruct a template using a principal component analysis (PCA) \citep[e.g.][]{Boroson:1992p4641,Francis:1992p5021,Suzuki:2005p5157,Suzuki:2006p4770,Lee:2011p1738,Paris:2011p4774}. 
An improvement over the use of a composite spectrum, this approach aims to use the minimal subset of eigenvectors to characterise the QSO emission profile. For example, \citet{Francis:1992p5021} find that 3 eigenvectors are sufficient to describe 75 per cent of the observed profile variation, and 95 per cent if 10 eigenvectors are used. Beyond 10, the eigenvectors contain little information and become rather noisy \citep{Suzuki:2005p5157}. However, these eigenvectors are extracted from a fit to the original QSO spectrum. Therefore, given that the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line cannot be directly recovered at high redshifts, in order to obtain an estimate of the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile some form of reconstruction/extrapolation of the relevant eigenvectors that describe \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} would be required. Conveniently, studies of these QSO composites and PCA approaches have revealed a wealth of information regarding correlations amongst various emission lines and other observable properties of the source QSO. For example, the first eigenvector of \citet{Boroson:1992p4641} showed that in the H$\beta$ region a strong anti-correlation existed between [O\,{\scriptsize{III}}] and Fe\,{\scriptsize{II}}. Furthermore, \citet{Hewett:2010p6194} performed an in-depth exploration to establish relationships between emission line profiles and the source systemic redshift to reduce the scatter and biases in redshift determination. More directly relevant for this work, both \citet{Shang:2007p4862} and \citet{Kramer:2009p920} observe a strong correlation between the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} peak blue shifts. 
Motivated by the existence of correlations amongst the various emission line properties, and the lack of either a robust physical model of quasar emission regions \citep[e.g.][]{Baldwin:1995} or a method to recover the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile within high-$z$ or heavily \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured QSOs, we propose a new method to reconstruct the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile for any high-$z$ or \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured QSO. In this work, we develop our reconstruction method using QSOs selected from the Baryon Oscillation Spectroscopic Survey (BOSS; \citealt{Dawson:2013p5160}), a component of SDSS-III \citep{Eisenstein:2011p5159}. In summary, our \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction method is as follows: \begin{itemize} \item Perform a Markov Chain Monte Carlo (MCMC) fit to a subset of emission lines for each of our selected QSOs. \item Construct a covariance matrix which describes all correlations amongst the QSO emission lines. \item Assume an $N$-dimensional Gaussian likelihood function to describe the covariance matrix. \item MCMC fit a high-$z$ or \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured QSO characterising all lines except \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}. \item Use the recovered emission line information from the QSO to statistically characterise the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line. \end{itemize} The remainder of this paper is organised as follows. In Section~\ref{sec:Data}, we discuss the observational data used within this work, and the selection criteria we apply to construct our sample of QSOs. In Section~\ref{sec:Fitting} we describe our MCMC fitting procedure, and how we model the QSO continuum, emission line features and other components to aid the estimation of the observed QSO flux. 
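The final step of the summary above amounts to conditioning the assumed multivariate Gaussian on the measured line parameters. A minimal, self-contained sketch of that conditioning step follows; the means, covariance and observed values are made-up illustrative numbers, not quantities derived from our QSO sample:

```python
# Conditioning a joint Gaussian: predict a Lya parameter from observed lines.
# All numbers below are made-up illustrations, not fitted values.
mu = [2.0, 1.0, 5.0]            # mean of [Lya amp, CIV amp, CIV width]
cov = [[0.50, 0.30, 0.10],      # joint covariance matrix
       [0.30, 0.40, 0.05],
       [0.10, 0.05, 0.60]]
x_obs = [1.4, 4.8]              # measured CIV amplitude and width

# partition: target index 0 (Lya), observed indices 1 and 2
s11 = cov[0][0]
s12 = [cov[0][1], cov[0][2]]
s22 = [[cov[1][1], cov[1][2]],
       [cov[2][1], cov[2][2]]]

# invert the 2x2 observed block by hand
det = s22[0][0] * s22[1][1] - s22[0][1] * s22[1][0]
inv22 = [[s22[1][1] / det, -s22[0][1] / det],
         [-s22[1][0] / det, s22[0][0] / det]]

# conditional mean: mu_1 + S12 S22^-1 (x_obs - mu_2)
d = [x_obs[0] - mu[1], x_obs[1] - mu[2]]
w = [inv22[0][0] * d[0] + inv22[0][1] * d[1],
     inv22[1][0] * d[0] + inv22[1][1] * d[1]]
cond_mean = mu[0] + s12[0] * w[0] + s12[1] * w[1]

# conditional variance: S11 - S12 S22^-1 S12^T
t = [inv22[0][0] * s12[0] + inv22[0][1] * s12[1],
     inv22[1][0] * s12[0] + inv22[1][1] * s12[1]]
cond_var = s11 - (s12[0] * t[0] + s12[1] * t[1])
```

Conditioning always shrinks the variance relative to the prior, which is precisely how the measured lines inform the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} estimate.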
With all the data obtained from fitting our entire QSO sample, in Section~\ref{sec:Covariance} we construct our covariance matrix, and discuss the major correlations and recovered features. In Section~\ref{sec:Recon} we outline our \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction method, and highlight the performance of this approach. Following this, in Section~\ref{sec:Discussions} we provide a discussion of the potential applications for both the MCMC fitting algorithm and the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction pipeline. Finally, in Section~\ref{sec:Conclusions} we finish with our closing remarks. \section{Data} \label{sec:Data} In this work, we select our QSOs from Data Release 12 (DR12) \citep{Alam:2015p5162} of the large-scale SDSS-III observational programme BOSS \citep{Dawson:2013p5160}. The full details of the SDSS telescope are available in \citet{Gunn:2006p1} and the details of the upgraded SDSS/BOSS spectrographs may be obtained in \citet{Smee:2013p1}. For reference, the wavelength coverage of the BOSS spectrograph is $3,\!600{\rm \AA} < \lambda < 10,\!400{\rm \AA}$ in the observed frame, with a resolution of $R\sim2000-2500$ corresponding to pixel resolution of $\sim120-150$~km/s. The QSO target selection for BOSS consisted of a variety of schemes, including colour and variability selection, counterparts to radio and X-ray sources, and previously known QSOs \citep{Bovy:2011p1,Kirkpatrick:2011p1,Ross:2012p1}. The candidate QSO spectra were then visually inspected following the procedure outlined in \citet{Paris:2016p1}, with updated lists of confirmed QSOs for DR12 available online\footnote{http://www.sdss.org/dr12/algorithms/boss-dr12-quasar-catalog/}. Furthermore, we use the publicly available flux calibration model of \citet{Margala:2015p1} to perform the spectrophotometric corrections of the DR12 QSO fluxes. 
In total, DR12 contains 294,512 uniquely identified QSOs, 158,917 of which are observed within the redshift range $2.15 < z < 3.5$. The principal science goal of the BOSS observational programme was the detection of the baryon acoustic oscillation scale from the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} forest. To perform this, only relatively low to moderate S/N QSOs are required \citep[e.g.][]{White:2003p578,McDonald:2007p6752}. However, within this work, we aim to statistically characterise the correlations between numerous QSO emission lines, which requires much higher S/N. To construct the QSO sample used within this work, we preferentially selected QSOs with a median S/N across all filters ($ugriz$) of S/N~$>$~15 (`snMedian~$>15$'). This choice is arbitrary, and in principle we could go lower; however, for decreasing S/N the weaker emission lines become more difficult to differentiate from the noise, reducing potential correlations. We selected QSOs containing broad-line emission using the `BROADLINE' flag and removed all sources visually confirmed to contain broad absorption lines (BALs) by using the `QSO' selection flag. Furthermore, only QSOs with `ZWARNING' set to zero are retained, where the redshifts were recovered with high confidence from the BOSS pipeline. 
Finally, we restrict our QSO redshift range to $2.08 < z < 2.5$\footnote{This choice of redshift range was a trade off between the wavelength coverage of the spectrograph and requiring that we were sufficiently blueward of rest-frame \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} to characterise the line profile (arbitrarily chosen to be 1180\AA, corresponding to $\sim3600$\AA~at $z=2.08$) while also including the \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{} emission line ($\lambda=2798.75$\AA, corresponding to $\lambda = 10,000$\AA~at $z=2.5$).}, selecting QSOs by their BOSS pipeline redshift\footnote{Note that the r.m.s of the quasar redshift error distribution for the BOSS pipeline redshift from DR9 was determined to be $550$~km/s with a mean offset of the quasar redshift (bias) of $\sim-150$~km/s \citep{Font-Ribera:2013}.} (e.g.\ \citealt{Bolton:2012SDSS}; see Appendix~\ref{sec:systemic_redshift} for a more in-depth discussion on our adopted choice of BOSS redshift). Following these selection cuts, we recover a total QSO sample of 3,862\footnote{We computed the absolute AB magnitude from the QSO continuum at 1450\AA\,($M_{1450}$) for this sample of QSOs and recovered a median of $M_{1450} = -26.1$ with an interquartile range of 0.7.}. Though the total QSO number of 3,862 appears small, it should be more than sufficient to elucidate any statistically significant correlations amongst the emission line parameters\footnote{The errors and biases when attempting to accurately measure the covariance matrix of a set of parameters ($N_{\rm par}$) from a total number of sources ($N_{\rm src}$) scales approximately as $N_{\rm par}$/$N_{\rm src}$ \citep[e.g][]{Dodelson:2013,Taylor:2014,Petri:2016}. Within this work, we ultimately construct an 18 parameter covariance matrix from a subsample of 1673 QSOs, corresponding to an expected error of $\sim1$~per cent.}. 
Furthermore, to properly assess the MCMC fitting to be discussed in Section~\ref{sec:Fitting}, a detailed visual inspection will be required; it is therefore preferable to restrict the sample size. In selecting our redshift range of $2.08 < z < 2.5$, we are assuming there is no strong variation in the emission line profiles of QSOs across the age of the Universe. That is, the covariance matrix recovered from this QSO sample will always be representative of the underlying QSO population. In \citet{Becker:2013p1008}, the authors constructed 26 QSO composite spectra between rest-frame $1040 < \lambda < 1550$\AA~ across the redshift range of $2 < z < 5$. They found no significant variation with redshift for \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and the emission lines redward of it, lending confidence to our assumption. Note, throughout this work, we do not deredden our QSOs to account for interstellar dust extinction within the Milky Way \citep[e.g.][]{Fitzpatrick:1999p5034}. The dust extinction curve varies strongly in the UV and ultimately will impact the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line more than the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} line. However, this will not greatly impact our results as we are attempting to reconstruct the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line using a covariance matrix of correlations from other emission lines. Applying the extinction curve will only slightly impact the peak of the emission lines, not the other characteristics of the line profile. When highlighting the performance of the reconstruction process in Section~\ref{sec:Recon}, we see that this slight overestimate should be well within the errors of the reconstructed profile. \section{MCMC QSO Fitting} \label{sec:Fitting} We now introduce our MCMC fitting procedure. In Section~\ref{sec:lines}, we outline and justify the selection of the emission lines considered. 
In Section~\ref{sec:template} we then detail the construction of the QSO template. In order to improve the ability to recover the intrinsic emission line profile and QSO continuum we outline the treatment of `absorption' features in Section~\ref{sec:absorb}. In Section~\ref{sec:MCMC} the iterative MCMC fitting procedure is discussed and an example is presented. Finally, in Section~\ref{sec:QA} we perform a visual quality assessment of our entire sample of fit QSOs to remove contaminants that could impact the observed emission line correlations. \subsection{Emission line selection} \label{sec:lines} Owing to the limited wavelength coverage of BOSS, the choice of available emission lines is restricted. Due to the complexity in observing high-$z$ ($z\gtrsim6$) QSOs, where \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} is redshifted into the near infrared (IR), the emission lines we use from the BOSS sample need to be consistent with what is detectable with near-IR instruments such as Keck/MOSFIRE \citep{McLean:2010,McLean:2012} and VLT/X-Shooter \citep{Vernet:2011}. The strongest emission lines, especially \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{}, are known to contain a broad and narrow component \citep[e.g.][]{Wills:1993,Baldwin:1996,VandenBerk:2001p3887,Shang:2007p4862,Kramer:2009p920,Shen:2011p4583}. These different components are thought to arise from moving clouds of gas above and below the accretion disk known as the broad and narrow line regions. Note that, for our purposes, the physical origins of these components are irrelevant; a double Gaussian merely provides us with a flexible basis set in which to characterise the line profile. Throughout this work, we will approximate all emission lines to have a Gaussian profile. 
While emission line profiles are known to be Lorentzian \citep[e.g.][]{Peebles1993}, the fitting of a Lorentzian profile is complicated by the requisite identification of the broader wings relative to the uncertainty in the QSO continuum\footnote{In practice, thermal (or other) broadening will make the Lorentzian wings irrelevant in the case of AGN emission lines. For even modest thermal broadening of $\sim10$~km/s only the Gaussian core can be detected \citep[see e.g.\ figure~4 of][]{Dijkstra:2014p1}.}. Ultimately, this is a subtle difference as we are only interested in characterising the total line profile. In order to decide which strong emission line profiles should be fit with a single or double component Gaussian\footnote{In principle, we could allow for $N>2$ components to our line fitting procedure. For example, several authors have fitted three Gaussian components to the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} emission line profile \citep[e.g.][]{Dietrich:2002p5912,Shen:2008p5975,Shen:2011p4583}. However, in this work, we attempt to avoid adding too much complexity to the fitting method.}, we perform a simple test analysing the Bayesian information criterion (BIC; \citealt{Schwarz:1978p1,Liddle:2004p5730}) of each of the individual line profiles in Appendix~\ref{sec:line_component}. Below we summarise our findings: \begin{itemize} \item \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}: The strongest observed UV emission line. In order to better characterise the fit to the line profile, a two component Gaussian is preferred. This is well known and consistent with other works. 
\item \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}: This line complex consists of both the \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}}\else{}Si\,{\scriptsize IV}\fi{} ($\lambda$1396.76\AA) and \ifmmode\mathrm{O\,{\scriptscriptstyle IV]}}\else{}O\,{\scriptsize IV]}\fi{} ($\lambda$1402.06\AA) line centres. Unfortunately, these individual line profiles are intrinsically broad, preventing our ability to distinguish between them. Therefore these two lines appear as a single, blended line. In Appendix~\ref{sec:line_component} we find a single Gaussian component is sufficient to characterise the line profile. \item \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}: This line profile is a doublet ($\lambda$$\lambda$1548.20, 1550.78\AA). Again, both components being intrinsically broad prevents individual detection, therefore the line profile is observed as a single, strong line. For the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} line, we find a two component Gaussian to be preferable to characterise the line. \item \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}: For the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line ($\lambda$1908.73\AA) we find a single component Gaussian to be sufficient to characterise the line. For almost all of the BOSS QSOs in our sample, the nearby, weak Si\,{\scriptsize{III]}} line ($\lambda$1892.03\AA), which would appear on top of the much broader \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line is not resolvable\footnote{ In principle, by ignoring the Si\,{\scriptsize{III]}} feature, the centroid of the measured \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line can be shifted bluer than its true value. 
Quantitatively, we determine the extent of this blueshift by fitting all QSOs within our sample that have a clear, discernible Si\,{\scriptsize{III]}} feature. We find that by excluding Si\,{\scriptsize{III]}} the average recovered \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} emission line centre is blue shifted by $\sim310$~km/s (with the associated line width broadened by $\sim315$~km/s). This sample, however, only contains 57 (out of 1673) QSOs, corresponding to only $\sim3.4$ per cent of the full `good' sample affected by this blueshift. For the vast majority of QSOs in our sample the S/N around \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} is insufficient to tease out the individual emission component of the Si\,{\scriptsize{III]}} feature. Therefore, owing to the small resultant blueshift in the line centre estimate (compared to the average dispersion of $\sim550$~km/s in the redshift estimate of the chosen BOSS pipeline redshift) and the corresponding low number of sources potentially affected by this systematic shift, we deem the exclusion of Si\,{\scriptsize{III]}} a valid simplification. As a final point, owing to the weak total flux expected in the Si\,{\scriptsize{III]}} emission line, including this in the MCMC fitting for the full QSO sample (where Si\,{\scriptsize{III]}} is not clearly present) would cause a strong degeneracy with the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line unless extremely strong priors on the line shape are imposed.}. 
Furthermore, the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line may also be contaminated by a continuum of low ionisation \ifmmode\mathrm{Fe}\else{}Fe\fi{} lines, however, unlike \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{} (see next) the relative impact on the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line is minor. \item \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{}: While the \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{} line ($\lambda$2798.75\AA) is observable within the wavelength range of our BOSS QSO sample, we choose not to fit this emission profile. This is because of the contamination from the \ifmmode\mathrm{Fe}\else{}Fe\fi{} pseudo-continuum. Though \ifmmode\mathrm{Fe}\else{}Fe\fi{} emission templates exist \citep[e.g.][]{Vestergaard:2001p3921}, the fitting procedure is complicated. One must simultaneously fit several spectral regions in order to calibrate the true flux level of the \ifmmode\mathrm{Fe}\else{}Fe\fi{} pseudo-continuum in order to remove it and obtain an estimate of the \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{} emission line. Since we aim to simultaneously fit the full QSO spectrum, this approach would add degeneracies to the model (e.g.\ between the true continuum, and the \ifmmode\mathrm{Fe}\else{}Fe\fi{} pseudo-continuum). While this can be overcome, it requires first fitting the QSO continuum blueward of \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{}, before fitting the \ifmmode\mathrm{Fe}\else{}Fe\fi{} pseudo-continuum, which reduces the flexibility of our model. \end{itemize} In addition to these lines above, we also consider several other lines all modelled by a single Gaussian. 
These include: (i) \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} ($\lambda$$\lambda$1238.8, 1242.8\AA), which is an important line for characterising the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile as it can be degenerate with the broad \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line component, (ii) \ifmmode\mathrm{Si\,{\scriptscriptstyle II}}\else{}Si\,{\scriptsize II}\fi{} ($\lambda$1262.59\AA), (iii) the O\,{\scriptsize I}/Si\,{\scriptsize II} blended complex ($\lambda$1304.35, $\lambda$1306.82\AA), (iv) C\,{\scriptsize II} ($\lambda$1335.30\AA), (v) \ifmmode\mathrm{He\,{\scriptscriptstyle II}}\else{}He\,{\scriptsize II}\fi{} ($\lambda$1640.42\AA), (vi) O\,{\scriptsize III} ($\lambda$1663.48\AA) and (vii) Al\,{\scriptsize III} ($\lambda$1857.40\AA). \subsection{Continuum and emission line template} \label{sec:template} In this work, we fit the rest-frame wavelength range ($1180{\rm \AA} < \lambda < 2300{\rm \AA}$) with a single power-law for the QSO continuum, \begin{eqnarray} \label{eq:continuum} f_{\lambda} = f_{1450}\left(\frac{\lambda}{1450{\rm \AA}}\right)^{\alpha_{\lambda}} {\rm erg\,cm^{-2}\,s^{-1}\,\AA^{-1}}, \end{eqnarray} where $\alpha_{\lambda}$ describes the spectral slope of the continuum and $f_{1450}$ is the normalisation of the QSO flux which we choose to measure at 1450\AA. While we normalise the QSO flux at 1450\AA, we allow this quantity to vary within our MCMC fitting algorithm (by adding a small, variable perturbation). While this quantity does not depart greatly from the original normalised value, it allows us to compensate for situations where the nearby region around 1450\AA\, might be impacted by line absorption or a noise feature from the spectrograph. Secondary (broken) power-law continua have been fit to QSO spectra at $\lambda > 4500$\AA~for an independent red continuum slope \citep[e.g.][]{VandenBerk:2001p3887,Shang:2007p4862}; however, this is beyond our QSO sample wavelength coverage. 
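As an illustration, the single power-law continuum of Equation~(\ref{eq:continuum}) is straightforward to evaluate; the function name here is ours, not part of any released pipeline:

```python
def continuum(wl, f1450, alpha):
    """Single power-law QSO continuum, normalised at rest-frame 1450 A.

    wl    : rest-frame wavelength in Angstrom
    f1450 : flux normalisation at 1450 A (erg cm^-2 s^-1 A^-1)
    alpha : spectral slope alpha_lambda
    """
    return f1450 * (wl / 1450.0) ** alpha
```

By construction, `continuum(1450.0, f1450, alpha)` returns `f1450` for any slope, which is what makes 1450\AA\ a convenient normalisation point.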
Typically, a broken power-law continuum is also adopted blueward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, visible in low-$z$ HST spectra \citep[e.g][]{Telfer:2002p5713,Shull:2012p5716}. In this work, we do not consider a different slope near \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} as we only fit down to 1180\AA, therefore there is insufficient information to include a secondary component. By only considering a single power-law through the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile, we may bias our results slightly on fitting the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} broad component. However, provided we are consistent in our usage of this QSO continuum between this fitting approach and the reconstruction method, this should not impact our results. As mentioned in the previous section, we model each emission line component with a Gaussian profile. Following \citet{Kramer:2009p920} the total flux for each component can be defined as, \begin{eqnarray} \label{eq:flux} F_{i} = a_{i}\,{\rm exp}\left[ - \frac{(\lambda - \mu_{i})^{2}}{2\sigma^{2}_{i}}\right], \end{eqnarray} where $a_{i}$ describes the amplitude of the line peak, $\mu_{i}$ is the location of the line centre in \AA, $\sigma_{i}$ is the width of the line in \AA~and the subscript `$i$' denotes the specific line species (e.g. \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}). Note that within this work, the peak amplitude is always normalised by the continuum flux at $1450$\AA, $f_{1450}$, therefore it is always a dimensionless quantity. 
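A minimal sketch of the Gaussian component of Equation~(\ref{eq:flux}); a multi-component line (e.g.\ broad plus narrow) is simply the sum of such Gaussians. The helper names are illustrative, not part of our pipeline:

```python
import math

def gaussian_line(wl, amp, mu, sigma):
    # one Gaussian emission-line component; amp is the peak amplitude
    # (normalised by f_1450, hence dimensionless), mu and sigma in Angstrom
    return amp * math.exp(-(wl - mu) ** 2 / (2.0 * sigma ** 2))

def line_profile(wl, components):
    # total line flux as a sum of (amp, mu, sigma) Gaussian components,
    # e.g. the broad and narrow components of Lya or CIV
    return sum(gaussian_line(wl, a, m, s) for a, m, s in components)
```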
More intuitively, the line centre location, $\mu_{i}$, can be written in terms of a velocity offset relative to the systemic line centre, \begin{eqnarray} v_{{\rm shift},i} = c\,\frac{\mu_{i} - \lambda_{i}}{\lambda_{i}} \,{\rm km/s}, \end{eqnarray} and the line width can be expressed as \begin{eqnarray} \label{eq:width} \sigma_{i} = \lambda_{i}\left(\frac{{\rm LW}}{c}\right)\,{\rm \AA}, \end{eqnarray} where LW is the line width measured in km/s. Throughout Equations~\ref{eq:flux}-\ref{eq:width}, both $\lambda$ and $\lambda_{i}$ are measured in the rest frame. Each Gaussian line component can therefore be fully described by its three component parameters: the line width, peak amplitude and velocity offset. In total, we fit each QSO with two continuum parameters, two double component Gaussians (\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}), and 9 single component Gaussian profiles, resulting in a total of 41 continuum and emission line parameters. \subsection{Identifying absorption features} \label{sec:absorb} Intervening diffuse neutral hydrogen or larger column density absorbers (smaller than DLAs) along the line of sight can produce narrow absorption features. Furthermore, metal pollution in stronger absorption systems can additionally result in narrow absorption features appearing in the observed profiles of the emission lines. These narrow features, if not measured and accounted for, can artificially bias the shape and peak amplitude of the emission line components. Therefore, in this section we outline our approach for identifying these features in a clean and automated manner: \begin{itemize} \item Identify all flux pixels that are local minima within a 2\AA~region surrounding the central pixel in question. This choice of 2\AA~is arbitrary, but is selected to be sufficiently broad to ignore features that might arise from noise fluctuations. 
\item Construct a horizontal line of constant flux that begins from the global flux minimum. \item Incrementally increase this line of constant flux, recording the depth and width of each absorption feature enclosed by the line of constant flux. \item If the depth becomes larger than 3$\sigma$ (5$\sigma$ in the vicinity of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}) of the observed error in the flux at that pixel, and the candidate absorption line has remained isolated (i.e. not overlapped with a nearby feature), it is then classified as an absorption line. \end{itemize} Our adopted choice of 3$\sigma$ (5$\sigma$ near \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}) was settled upon after rigorously testing this pipeline against visually identified absorption features, until the vast majority of the absorption lines could be robustly located by this procedure. Across the full QSO sample, we found a broad range in the number of features identified within each individual QSO, from only a few up to $\sim40$. For each identified absorption feature, a single Gaussian (described by its own three parameters) is assigned and is simultaneously fit with the continuum and emission line profiles outlined previously. Therefore, the total number of parameters to be fit per QSO varies. \subsection{MCMC sampling the QSO template} \label{sec:MCMC} We fit each QSO within a Bayesian MCMC framework, using the $\chi^{2}$ likelihood function to determine the maximum likelihood fit to the QSO spectrum. This choice enables us to fully characterise any potential model degeneracies between our model parameters, while also providing the individual probability distribution functions (PDFs) for each model parameter. In this work, we utilise the publicly available MCMC python code \textsc{CosmoHammer} \citep{Akeret:2012p842} built upon \textsc{EMCEE} \citep{ForemanMackey:2013p823}, which is based on the affine invariant MCMC sampler \citep{Goodman:2010p843}.
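Returning to the absorption-feature search of Section~\ref{sec:absorb}, the procedure can be sketched in a few lines of Python. This is a deliberately simplified illustration, not our actual pipeline: only the 2\AA~minima window and the $3\sigma$ depth test are taken from the text above, while the local continuum level is approximated here by a running median rather than the incrementally raised line of constant flux.

```python
import numpy as np

def find_absorption_lines(wav, flux, err, window=2.0, nsigma=3.0):
    """Simplified absorption-feature search.

    1. Flag pixels that are local flux minima within a `window`-Angstrom
       region surrounding the central pixel;
    2. estimate each feature's depth against a crude local continuum
       (a median, standing in for the raised line of constant flux);
    3. keep the pixel if the depth exceeds `nsigma` times the flux error.
    """
    candidates = []
    for i in range(len(wav)):
        sel = np.abs(wav - wav[i]) <= window / 2.0
        if flux[i] > flux[sel].min():
            continue  # not the minimum of its local window
        depth = np.median(flux[sel]) - flux[i]
        if depth > nsigma * err[i]:
            candidates.append(i)
    return candidates
```

In practice the isolation test (rejecting features that merge with a neighbour as the flux line rises) adds a further bookkeeping step that is omitted here for brevity.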
\begin{figure*} \begin{center} \includegraphics[trim = 0.25cm 0.75cm 0cm 0.7cm, scale = 0.495]{Plots/SpecExampleFull} \end{center} \caption[]{An example of the MCMC QSO template fitting of a BOSS (SDSS-III) spectrum (`spec-4185-55469-0076', a $z=2.478$ QSO), shown with zoomed-in panels. For all spectra we translate the flux into arbitrary units (1 A. U. = $10^{-17}\,{\rm erg\,cm^{-2}\,s^{-1}\,\AA^{-1}}$). \textit{Top:} The full QSO spectrum used within our fitting procedure as a function of the rest frame wavelength, $\lambda$. The red dashed curve corresponds to the two parameter continuum (see Equation~\ref{eq:continuum}). \textit{Middle left:} The two component fit to the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission profile highlighted by the broad (cyan) and narrow (magenta) components, and single Gaussian profiles for N\,{\scriptsize V} (yellow) and Si\,{\scriptsize II} (black). A series of `absorption' features were identified and fit to the spectrum (shown below the continuum level). \textit{Middle centre:} Two low ionisation lines, O\,{\scriptsize I} (cyan) and C\,{\scriptsize II} (magenta). \textit{Middle right:} Single component fit to the \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{} doublet. \textit{Bottom left:} Double component fit to \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} highlighted by the broad (cyan) and narrow (magenta) components. \textit{Bottom centre:} Low ionisation lines, He\,{\scriptsize II} (cyan) and O\,{\scriptsize III} (magenta). \textit{Bottom right:} Single component fit to \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} (magenta) and single Gaussian Al\,{\scriptsize III} component (cyan).
The maximum likelihood fit to this spectrum was $\chi^{2} = 3110$, for which we recovered a $\chi_{\rm red}^{2} = 1.1$ (with 2828 degrees of freedom, 2899 bins and 71 free parameters).} \label{fig:QSOexample} \end{figure*} Assuming flat priors across our $>50$ model parameters and fitting the full high-resolution BOSS spectrum within the wavelength range ($1180{\rm \AA} < \lambda < 2300{\rm \AA}$) simultaneously is computationally inefficient. Instead, we perform an iterative procedure to boost the computational efficiency which we outline below: \begin{itemize} \item Normalise the full QSO spectrum at 1450\AA~and then fit the two continuum parameters, $f_{1450}$ and $\alpha_{\lambda}$ within a set of selected wavelength ranges of the full spectrum which are minimally contaminated by emission lines. We choose to fit the QSO continuum within the regions [1275, 1295], [1315, 1330], [1351, 1362], [1452, 1520], [1680, 1735], [1786, 1834], [1970, 2040] and [2148, 2243]~\AA. \item Break the full QSO spectrum up into wavelength regions centred around the emission lines. We choose [1180, 1350]~\AA~centred on \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, [1350, 1450]~\AA~centred on \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}, [1450, 1700]~\AA~centred on \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and [1700, 1960]~\AA~centred on \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}. \item Within each of these four regions, we take the continuum estimated from above, and then fit all emission lines and any absorption features which fall within the respective wavelength ranges. For each of these regions we then perform an MCMC fit. \item From the individual PDFs for each parameter we construct a flat prior across a much narrower allowed range, driven by the width of the individual distributions from the fitting above. 
\item Finally, we fit the entire QSO spectrum using the entire model parameter set and recover a maximum likelihood model which describes the full spectrum. \end{itemize} In Figure~\ref{fig:QSOexample}, we provide an example of one of the BOSS QSOs from our full sample. In this figure we provide zoomed in panels of both the QSO continuum (red dashed curve, top panel) and the various emission lines which are simultaneously fit within our MCMC approach across all other panels. This figure highlights that any notable absorption features have been identified by our pipeline and accurately characterised (e.g. the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile in the left panel of the middle row). From Figure~\ref{fig:QSOexample} we note that this approach is able to fit the full spectrum. The maximum likelihood we obtained was $\chi^{2} = 3110$, for which we had 2899 bins (from the raw spectrum) and 71 free parameters, corresponding to a reduced $\chi^{2}$ of 1.1. For reference, this took $\sim$1 hour on a single processing core, which can be rapidly improved if a binned spectrum is used. \subsection{QSO fitting quality assessment} \label{sec:QA} After fitting the entire sample with our full MCMC QSO fitting pipeline, we are in a position to construct a higher fidelity sample of QSOs for constructing our covariance matrix and investigating the correlations amongst the emission line parameters. This is performed by visually inspecting our QSO sample and applying a simple set of selection criteria. Following the completion of this selection process, we produce two separate QSO samples, one classified as `good' and another classified as `conservative', the details of which we discuss below (in Appendix~\ref{sec:QSO_QA} we provide a few select examples to visually highlight this subjective process). The criteria are outlined as follows: \begin{itemize} \item We remove all QSOs with a poor characterisation of the continuum.
These include QSOs with a positive spectral index, or a clear departure from a single power-law continuum. Only a handful of QSOs exhibit this behaviour. See Figure~\ref{fig:QA_RemovedContinuum} for some examples. \item We further remove any QSOs which have: (i) missing sections of flux that overlap with any of the emission lines; (ii) a sufficient number of absorption features to cause a loss of confidence in the fitting of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile; (iii) either an intervening dense neutral absorber or sufficiently strong/broad absorption blueward of line centre which can impact the broad component of either the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} or the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} lines; or (iv) a \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line region that is not well fit or characterised by our double component Gaussian, which might arise from numerous absorption profiles or the lack of a prominent \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak. \item Following the removal of these contaminants from our sample, we are left with 2653 QSOs, which we call our `conservative' sample\footnote{Our naming convention of `good' and `conservative' refers to the tightness of constraints recovered from the reconstruction procedure. The `conservative' sample, with fewer assumptions on data quality, produces slightly broader errors relative to the `good' sample.}. These can still include QSOs which might have absorption features centred on the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line centre, or absorption line complexes which might contaminate large sections of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile (see Figure~\ref{fig:QA_Conservative} for examples). \item We then apply a secondary set of criteria, which preferentially select QSOs which contain (i) fewer absorption features and (ii) no absorption features on the line centre.
This results in a final sample of 1673 QSOs, which we refer to as our `good' sample (see Figure~\ref{fig:QA_Good} for examples). \end{itemize} The major difference between the `good' and `conservative' samples is that the `conservative' sample will contain QSO spectra for which we have less confidence in the identification and fitting of one or several of the emission lines, due to the presence of absorption features. Ultimately, if our claim that the correlations amongst the emission lines are a universal property of QSOs holds, then the covariance matrices of the two samples should be almost identical, with the `conservative' sample containing additional scatter (slightly weaker correlations). In the next section, we construct our covariance matrices and investigate this further. \section{Data Sample Covariance} \label{sec:Covariance} With the refined, quality assessed QSO spectra from the previous section, we now construct our covariance matrix to characterise all correlations amongst the various emission lines. \subsection{The covariance matrix} \label{sec:covariance} In performing the quality assessment of our QSO spectra, we note that a number of the weaker emission lines are not always well characterised or resolved. This becomes more prevalent for the QSOs nearer to our sample limit of S/N~$=15$, which correspond to the highest density of QSOs in our sample. It would be interesting to investigate correlations between the strong high ionisation lines and the weaker low ionisation lines, as well as correlations amongst these two classifications. However, since these lines are not always readily available in our QSOs we refrain from doing so. Regardless, by still attempting to fit these weaker lines we retain the flexibility of the MCMC approach, and more importantly this enables the QSO continuum to be estimated to a higher accuracy.
\begin{figure*} \begin{center} \includegraphics[trim = 0.15cm 0.3cm 0cm 0.5cm, scale = 0.94]{Plots/CorrCoeff_and_Matrix_NoContLog_normalised.pdf} \end{center} \caption[]{The correlation coefficient matrix (correlation coefficients listed in the lower half) constructed from the `good' sample of 1673 QSOs. This $18\times18$ matrix contains the double component Gaussians of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and the single component Gaussians for \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}. Each Gaussian component contains three parameters: its peak width, peak height and velocity offset from systemic (\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} + \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} + \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{} + \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} = ($2\times3$) + ($2\times3$) + 3 + 3 = 18 parameters).} \label{fig:CovarianceMatrix} \end{figure*} In light of this, we construct our covariance matrix from a subset of these emission lines: the most prominent emission lines that should always be resolvable in a lower S/N or lower resolution QSO spectrum (as in the near-IR for $z>6$ QSOs). The lines we identify are \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}.
Since for \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} we allow a broad and narrow component, we recover an 18$\times$18 covariance matrix. We choose to exclude the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line from our covariance matrix for two reasons: (i) a clear, identifiable \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line is not always present in our QSO spectra, therefore including it would artificially reduce any visible correlation and (ii) the ultimate goal of this work is the reconstruction of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line, whereby including the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line in the reconstruction process would only increase the complexity (the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line should in principle be recoverable even for high-$z$ or \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured QSOs). In Figure~\ref{fig:CovarianceMatrix}, we present the correlation coefficient matrix, which is obtained from the full covariance matrix from our `good' sample of 1673 QSOs. In constructing this, we assume the standard format for the covariance matrix, $\bmath{\Sigma}_{ij}$, given by, \begin{eqnarray} \bmath{\Sigma}_{ij} = \frac{1}{N-1}\sum^{N}_{n=1}(\textbfss{X}_{n,i} - \bmath{\mu}_{i})(\textbfss{X}_{n,j} - \bmath{\mu}_{j}), \end{eqnarray} where $\textbfss{X}_{n,i}$ is the value of the $i$th model parameter for the $n$th QSO in the sample of $N$ QSOs, and $\bmath{\mu}_{i}$ is the sample mean of the $i$th parameter. For each QSO, we use the parameter values which provide the maximum likelihood fit to that QSO.
The correlation coefficient matrix, $\textbfss{R}_{ij}$, is then defined in the standard way, \begin{eqnarray} \textbfss{R}_{ij} = \frac{\bmath{\Sigma}_{ij}}{\sqrt{\bmath{\Sigma}_{ii}\bmath{\Sigma}_{jj}}}, \end{eqnarray} where each entry, $\textbfss{R}_{ij}$, corresponds to the correlation coefficient between the $i$th and $j$th model parameters. Note, for the covariance matrix, we do not include the two continuum parameters. Firstly, the emission line peak amplitudes are already normalised by the continuum flux at 1450\AA, $f_{1450}$; if included, $f_{1450}$ would therefore be completely degenerate with the peak amplitudes. Some correlation with continuum spectral index is expected, given that external (to the broad-line region) reddening will simultaneously weaken the bluer UV lines and redden the continuum spectral index. However, mild reddening is seen in only $\sim10-20$~per cent of SDSS quasars \citep[e.g.][]{Richards:2003p4639,Hopkins:2004p4640}, so this has little effect on the correlations (we find very weak correlations on the order of 10 per cent). Therefore we do not report them, as these provide no additional information with respect to the individual line correlations. \subsection{Interpreting the covariance matrix} \label{sec:interpretation} In order to aid the interpretation of the correlation matrix, we divide Figure~\ref{fig:CovarianceMatrix} into the four emission line species (denoted by black curves), while the narrow and broad line species are further separated by black dashed lines. Positive correlations are represented by a decreasing (weakening) shading of red, with white representing no correlation, and an increasing blue scaling denotes a strengthening of the anti-correlation.
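The two matrix definitions above reduce to a few lines of \textsc{numpy}. The sketch below uses random stand-in data in place of our real table of $1673\times18$ maximum-likelihood parameters, and checks the explicit sums against the library built-ins:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the real data: N QSOs, each described by the 18
# maximum-likelihood emission-line parameters (rows = QSOs).
X = rng.normal(size=(1673, 18))

N = X.shape[0]
mu = X.mean(axis=0)

# Sample covariance matrix Sigma_ij, summed over the N QSOs.
Sigma = (X - mu).T @ (X - mu) / (N - 1)

# Correlation coefficient matrix R_ij = Sigma_ij / sqrt(Sigma_ii Sigma_jj).
R = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))

# Sanity checks against numpy's built-ins.
assert np.allclose(Sigma, np.cov(X, rowvar=False))
assert np.allclose(R, np.corrcoef(X, rowvar=False))
```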
Using the upper half of the correlation matrix enables a faster analysis of the correlation patterns amongst the emission line parameters, and the lower half reports the numerical value of the correlation (or anti-correlation). For the most part, each 3$\times$3 sub-matrix returns the same correlations and anti-correlations amongst the peaks, widths and velocity offsets, but with varying degrees of strength. \begin{figure} \begin{center} \includegraphics[trim = 0.5cm 0.7cm 0cm 0cm, scale = 0.58]{Plots/Correlations2D_peak-peak_log_normalised.pdf} \end{center} \caption[]{A 2D scatter plot of the correlation between the peak amplitudes of the narrow components of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} emission lines for our `good' sample of 1673 QSOs. We recover a correlation coefficient of $\rho=0.80$, indicative of a strong, positive correlation. The green solid and dashed curves correspond to the 68 and 95 per cent 2D marginalised joint likelihood contours, respectively. The histograms (black curves) correspond to the recovered PDFs for each of the two parameters, while we also provide Gaussian curves (blue) representative of the scatter in the fit parameters (not a direct fit). Note that for the peak amplitude, we normalise by $f_{1450}$, therefore these are dimensionless.} \label{fig:PeakCorrelation} \end{figure} Firstly, from this correlation matrix it is readily apparent that all pairs of peak height parameters are positively correlated. This is to be expected as the peak amplitude is positively correlated with the QSO luminosity. This correlation between the peak heights is the strongest of the trends. Most importantly for this work, the correlations amongst the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak height parameters are the strongest.
For example, we find the strongest correlation ($\rho = 0.8$) between the peak height of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} narrow component and the associated peak height of the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} narrow component. In Figure~\ref{fig:PeakCorrelation}, we provide the 2D scatter plot for this strong correlation (central panel) and the 1D marginalised PDFs for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} (top) and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} (right) peak amplitudes. The green solid and dashed contours in the central panel denote the 68 and 95 per cent 2D marginalised joint likelihood contours, which describe the relative scatter amongst our sample of QSOs. In the 1D marginalised PDFs, the solid black curves are the histograms of the sample of QSOs, while the blue solid curves are an approximated 1D Gaussian for that associated parameter. Note that in this figure, and in subsequent figures, these blue curves are not a direct fit to the raw data; rather, they are approximations of a Gaussian PDF with 1$\sigma$ scatter equivalent to that of the raw data. \begin{figure} \begin{center} \includegraphics[trim = 0.5cm 0.6cm 0cm 0cm, scale = 0.59]{Plots/Correlations2D_CIIIwidth-LyaPeak_log_normalised} \end{center} \caption[]{A 2D scatter plot of the correlation between the narrow component of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak amplitude and line width of the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} emission line for our `good' sample of 1673 QSOs. We recover a correlation coefficient of $\rho=-0.74$, indicative of a strong anti-correlation. Histograms, blue curves and the green contours are as described in Figure~\ref{fig:PeakCorrelation}.
As in Figure~\ref{fig:PeakCorrelation}, the peak amplitude is normalised by $f_{1450}$ and is therefore dimensionless.} \label{fig:PeakWidthCorrelation} \end{figure} Returning to the correlation matrix, we additionally recover a relatively strong trend (anti-correlation) between the peak height and the line width. In Figure~\ref{fig:PeakWidthCorrelation}, we provide the strongest of these anti-correlations ($\rho=-0.74$), which is between the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak amplitude of the narrow line component and the width of the single component \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line. We observe that, as the peak amplitude of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} narrow component increases, the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line width decreases. This behaviour could be inferred as a result of the Baldwin effect \citep{Baldwin:1977p5910}. For an increasing peak amplitude (i.e. QSO luminosity), we expect a weaker broad-line emission. Physically, a plausible scenario to describe this could be the over-ionisation of the inner broad-line region by the continuum in high luminosity QSOs, resulting in carbon being ionised into higher order species (i.e.\ no \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}/\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}). For these high luminosity QSOs, the high ionisation lines may then predominantly arise at larger radii where the velocity dispersion is lower, producing smaller line widths \citep[e.g.][]{Richards:2011p6031}. Alternatively, it could arise from systematics in our line fitting.
As a result of the decreasing equivalent widths with increasing QSO luminosity, it could be that the single component Gaussian for the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} does not characterise the line profile as accurately. With a less prominent broad-line component, the emission from the wings could be underestimated, producing a narrower \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line. We additionally recover a positive correlation between the widths of the various line species, though this trend is relatively weaker. However, for the broad and narrow components of the same line species, we find more moderate correlations ($\rho=0.54$ for \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and $\rho=0.62$ for \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}). It is difficult to interpret these correlations though, as these broad and narrow lines are simultaneously fit, and therefore in principle could be degenerate. \citet{Shang:2007p4862} investigated correlations amongst the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} lines for a significantly smaller sample of 22 QSOs. These authors recover moderate to strong correlations between a few of their emission line full widths at half maximum (FWHMs). They find correlations of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} ($\rho=0.81$) and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} ($\rho=0.53$).
If we equate this FWHM with the width of the narrow component of our emission line profiles, we find correlations for the same species of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} ($\rho=0.35$) and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} ($\rho=0.46$)\footnote{By converting our fitted emission line parameters (both broad and narrow) into a line profile FWHM, we find correlations similar in strength to those recovered purely from the narrow line component.}. While we recover an equivalent correlation for the \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} lines, we find a significant discrepancy for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} lines. This can potentially be explained as an artificially strong correlation owing to their small number of objects (22 compared to our 1673 QSOs)\footnote{Note that, if we adopt a systemic redshift measured purely from a single ionisation line (such as the \ifmmode\mathrm{Mg\,{\scriptscriptstyle II}}\else{}Mg\,{\scriptsize II}\fi{} redshift) rather than the pipeline redshift used throughout this work, we can recover a significantly stronger correlation of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} ($\rho=0.7$).}. Supporting this hypothesis, \citet{Corbin:1996p5640} find that, for a larger sample of 44 QSOs, the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} correlation is reduced to $\rho=0.68$.
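For reference, converting a Gaussian component's width into a FWHM, as done when comparing against the FWHM-based literature values above, uses the standard factor $2\sqrt{2\ln 2}\approx2.355$. A minimal sketch, with a hypothetical line width (this is the textbook Gaussian relation, not our exact conversion procedure):

```python
import numpy as np

def line_width_to_fwhm(lw_kms):
    """Convert a Gaussian line width (sigma, in km/s) to a FWHM in km/s."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * lw_kms

# Hypothetical narrow-component line width of 1500 km/s:
fwhm = line_width_to_fwhm(1500.0)  # approximately 3532 km/s
```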
\begin{figure} \begin{center} \includegraphics[trim = 0.5cm 0.6cm 0cm 0cm, scale = 0.58]{Plots/Correlations2D_CIV-Lya_velocity} \end{center} \caption[]{A 2D scatter plot of the correlation between the velocity offsets of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} emission lines for our `good' sample of 1673 QSOs. We recover a correlation coefficient of $\rho=0.35$, indicative of a moderate correlation, which is counter to the much stronger correlation hinted at by the smaller QSO samples of \citet{Shang:2007p4862} and \citet{Kramer:2009p920}. Histograms, blue curves and the green contours are as described in Figure~\ref{fig:PeakCorrelation}.} \label{fig:VelocityCorrelation} \end{figure} Additionally, \citet{Kramer:2009p920} recovered a moderate positive correlation ($\rho=0.68$) between the velocity offsets for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} components. For the same combination, \citet{Shang:2007p4862} found a stronger correlation of $\rho=0.81$. In Figure~\ref{fig:VelocityCorrelation}, we show our corresponding result for the narrow components of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} lines, for which we recover a moderate correlation of $\rho=0.35$. Once again, this lack of an equivalently strong correlation in our sample could arise purely from the fact that our sample contains two orders of magnitude more QSOs. \citet{Shang:2007p4862} equivalently quote correlations of $\rho=0.45$ (\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}) and $\rho=0.39$ (\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}).
Using the narrow lines for \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}, we find similarly moderate, but slightly stronger correlations of $\rho=0.54$ (\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}) and $\rho=0.48$ (\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}). In the construction of the covariance matrix and associated correlation matrix, we have only used the maximum likelihood values, and ignored the relative errors for each parameter from the marginalised 1D PDFs. Our reasoning for this is that the amplitude of the error on the individual parameters is small with respect to the scatter that arises for each parameter across the full QSO sample. In principle, however, we could perform an MCMC sampling of the full covariance matrix, allowing the means and all correlation coefficients to be free parameters. This would allow a more accurate characterisation of both the individual errors for each parameter and the total scatter across the full QSO sample, potentially tightening the correlations amongst the emission line parameters. However, this would require fitting $\frac{1}{2}N(N+1)$ parameters encompassing both the means of the individual parameters and all correlation coefficients (in our case, 171 free parameters). \subsection{Correlation of the QSO continuum parameters} Though the continuum parameters are not included in the covariance matrix in Figure~\ref{fig:CovarianceMatrix}, in Figure~\ref{fig:Continuum} we provide the 2D scatter between these two parameters. In our `good' sample of QSOs, we recover only a relatively weak anti-correlation ($\rho=-0.22$) between the QSO continuum spectral index, $\alpha_{\lambda}$, and the normalisation at 1450~\AA~($f_{1450}$).
This weak anti-correlation likely arises due to dust reddening, which causes a drop in the normalisation, $f_{1450}$, and a shallower spectral slope. While both of these QSO parameters are well characterised by a Gaussian, the respective scatter in each parameter is considerable. For our sample of `good' (`conservative') QSOs, we recover a median QSO spectral index of $\alpha_{\lambda} = -1.30\pm0.37$ ($ -1.28\pm0.38$). In contrast, \citet{Harris:2016p5028} construct a QSO composite spectrum with a wavelength coverage of ($800{\rm \AA} < \lambda < 3300{\rm \AA}$) from the same BOSS DR12 sample. Using $\sim100,000$ QSOs, these authors find a median spectral index for their QSO sample of $\alpha_{\lambda} = -1.46$, consistent with our results within the large scatter. However, these authors only fit the QSO continuum between 1440-1480\AA~and 2160-2230\AA. Not fitting to the same spectral region likely results in a different recovered spectral slope. Furthermore, our estimate of $\alpha_{\lambda} = -1.30$ ($\alpha_{\nu} = -0.70$) is also consistent with the lower redshift samples of \citet{Scott:2004p5709} ($\alpha_{\nu}=-0.56\substack{+0.38 \\ -0.28}$), \citet{Shull:2012p5716} ($\alpha_{\nu}=-0.68\pm0.14$) and \citet{Stevans:2014p5726} ($\alpha_{\nu}=-0.83\pm0.09$). Note that within all these works, considerable scatter in the spectral index is also prevalent. \begin{figure} \begin{center} \includegraphics[trim = 0.5cm 0.7cm 0cm 0cm, scale = 0.58]{Plots/Correlations2D_Continuum.pdf} \end{center} \caption[]{A 2D scatter plot highlighting the correlation between the two QSO continuum parameters: the continuum spectral index, $\alpha_{\lambda}$ and the continuum flux at 1450\AA, $f_{1450}$ (for which we define 1 A. U. = $10^{-17}\,{\rm erg\,cm^{-2}\,s^{-1}\,\AA^{-1}}$), for our sample of 1673 QSOs (the `good' sample). We recover a correlation coefficient of $\rho=-0.22$, indicative of a weak anti-correlation.
Histograms, blue curves and the green contours are as described in Figure~\ref{fig:PeakCorrelation}.} \label{fig:Continuum} \end{figure} \subsection{Potential sample bias} \label{sec:Bias} In Sections~\ref{sec:covariance} and~\ref{sec:interpretation}, we presented the correlation matrix and discussions on the relative trends between the emission lines and their relative strengths. However, these results were drawn from our refined, quality assessed `good' QSO sample. In order to guard against a potential bias which may have arisen from our specific selection process, we additionally construct the correlation matrix for our `conservative' QSO sample. This `conservative' sample contains $\sim1000$ additional QSOs whose fits are deemed less robust than required for our `good' sample. Therefore, this `conservative' sample should contain more scatter amongst the recovered parameters, which would notably degrade the strength of the correlations relative to the `good' sample if we had artificially biased our results. \begin{figure*} \begin{center} \includegraphics[trim = 0.15cm 0.3cm 0cm 0.5cm, scale = 0.94]{Plots/CorrCoeff_and_Matrix_Difference.pdf} \end{center} \caption[]{The amplitude of the difference of the correlation coefficients for the $18\times18$ covariance matrix between the `good' (1673) and `conservative' (2653) QSO datasets. Positive (red) differences indicate where the correlation in the `good' sample is greater than in the `conservative' sample, while negative (blue) differences indicate where the `conservative' sample correlation is stronger. Note that this does not differentiate between a positive and an anti-correlation, nor the strength of the original correlation, only a strengthening/weakening of the respective correlation. Dot-dashed squares indicate a swap between a positive and negative correlation (typically indicative of the correlation coefficients in either QSO sample being close to zero).
The colour bar has been renormalised relative to Figure~\ref{fig:CovarianceMatrix} to indicate the strength of the difference in amplitude.} \label{fig:CovMatDiff} \end{figure*} In Figure~\ref{fig:CovMatDiff} we provide the matrix of the relative difference in the correlation coefficients between the `good' and `conservative' QSO samples. Here, we show the amplitude of the change in correlation coefficient between the two QSO samples. A positive (red) difference is indicative of the `good' sample having a stronger correlation (either positive or anti-correlation), while a negative (blue) difference is indicative of the `conservative' sample having a stronger correlation. Note that squares marked with a dot-dashed cross indicate a change between a positive and anti-correlation, which arises when the correlations are close to zero in either sample. For the most part, the relative change in the correlation is minor, of the order of $|\Delta\rho| < 0.04$. More importantly, for any of the strong correlations and notable trends we discussed in the previous section, we observe no sizeable differences, with the `good' sample providing on average slightly stronger correlations (as indicated by the prevalence of red squares). For example, the correlation between the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}--\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} narrow-component peak heights from our `good' sample was $\rho=0.80$, whereas for the `conservative' sample it is $\rho=0.79$. Therefore, in the absence of any drastic differences in the strong and notable trends discussed previously between the two samples, it is clear that we have not biased our QSO covariance matrix in the construction of the `good' sample.
Given that the relative amplitude of these differences between the two QSO samples is small, and that the correlations tend to be only slightly weaker for the `conservative' sample, the reconstructed profile recovered from the covariance matrix should recover effectively the same best-fit \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile, with slightly broader errors owing to the increased scatter (reduced correlations). Equally, to be prudent, we considered several other ways to divide our QSO sample. First, we performed the same analysis comparing instead the `conservative' minus `good' sample (the $\sim1000$ QSOs from the `conservative' sample not classified as `good'); secondly, we constructed two equally sized random samples from the `conservative' sample. In both instances, we find the same strong correlations as in Figure~\ref{fig:CovarianceMatrix} with similar amplitude variations between the respective correlation matrices as shown in Figure~\ref{fig:CovMatDiff}. Finally, our results on line parameter correlations are sensitive to the choice of redshift estimate and any inherent biases in those estimates. We discuss this issue in detail in Appendix~\ref{sec:systemic_redshift}, but, in summary, we find that while different redshift estimates do result in slightly different covariance matrices, our general conclusions are robust and the redshift estimate we have chosen (the BOSS pipeline redshift, $z_{\rm pipeline}$) performs the best at the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile reconstruction (see the next section). \section{\ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} Reconstruction} \label{sec:Recon} We now use this covariance matrix to reconstruct the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. In this section we outline the reconstruction pipeline and associated assumptions, followed by an example of this approach.
\subsection{Reconstruction method} \label{sec:Reconstruction} Within our reconstruction approach, we approximate the distribution of the emission line parameters (across the entire data-sample) as a Gaussian\footnote{The choice in adopting a Gaussian covariance matrix is driven by the large computational burden required to perform a full end-to-end Bayesian approach folding in all modelling uncertainties.}. In Figures~\ref{fig:PeakCorrelation}-\ref{fig:VelocityCorrelation}, we observe that for the six parameters shown, this approximation is well motivated. Clearly, showing the 18 individual 1D PDFs for each parameter would be uninformative; however, we confirm by eye that this approximation is valid for all model parameters. Note that in some of these cases, a Gaussian approximation holds only after taking the logarithm, especially for the normalised peak amplitude (normalised by $f_{1450}$) and the emission line width. Importantly, if anything, these Gaussian approximations have a tendency to slightly overestimate the relative scatter within each parameter (see Figure~\ref{fig:VelocityCorrelation} for example); this assumption therefore yields a conservative estimate of the true scatter. The Gaussian nature of the scatter in Figures~\ref{fig:PeakCorrelation}-\ref{fig:VelocityCorrelation} and the conservative overestimation of the errors by our Gaussian approach lend confidence that our approach should not significantly underestimate the errors that one might recover from a fully Bayesian approach. In order to perform the reconstruction, we assume that the QSO can be fit following the same procedure outlined in Section~\ref{sec:Fitting}, except now we only fit the QSO redward of 1275 \AA.
Our choice of $\lambda > 1275$~\AA\,is conservatively selected to be as close to \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} as possible while suffering minimal to no contamination from emission line wings (namely \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, N\,{\scriptsize V} or Si\,{\scriptsize II}). Furthermore, given that this approach is best suited for recovering the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile from a \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured or high-$z$ QSO, it is best to be sufficiently far from any possible contamination of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line region. We can then define the $N$ dimensional parameter space (i.e.~our 18 emission line parameters outlined previously) as an $N$ dimensional likelihood distribution given by, \begin{eqnarray} \label{eq:ML} \mathcal{L} = \frac{1}{(2\pi)^{N/2}|\bmath{\Sigma}|^{1/2}}{\rm exp}\left[-\frac{1}{2}(\bmath{x}-\bmath{\mu})^{\mathsf{T}}\bmath{\Sigma}^{-1}(\bmath{x}-\bmath{\mu})\right]. \end{eqnarray} Here, $\bmath{\Sigma}$ is the recovered QSO covariance matrix (Section~\ref{sec:covariance}), $\bmath{\mu}$ is the data vector of the means obtained from the full QSO sample for each of the individual line profile parameters and $\bmath{x}$ is the data vector measured from our MCMC fitting algorithm for the individual obscured QSO spectrum. After the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured or high-$z$ QSO has been fit following our fitting procedure, the recovered best-fit values for the unobscured emission line parameters are folded into Equation~\ref{eq:ML}.
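Folding the measured red-side parameters into Equation~\ref{eq:ML} is the standard conditioning of a multivariate Gaussian via the Schur complement. A minimal sketch, where the index partitioning (0--5 for Ly$\alpha$, 6--17 for the measured lines) and the toy covariance are illustrative assumptions rather than our actual matrix:

```python
import numpy as np

def condition_gaussian(mu, cov, idx_free, idx_obs, x_obs):
    """Mean and covariance of the free components of a Gaussian,
    conditioned on observed values x_obs of the remaining components."""
    s11 = cov[np.ix_(idx_free, idx_free)]
    s12 = cov[np.ix_(idx_free, idx_obs)]
    s22 = cov[np.ix_(idx_obs, idx_obs)]
    gain = s12 @ np.linalg.inv(s22)
    mu_c = mu[idx_free] + gain @ (x_obs - mu[idx_obs])
    cov_c = s11 - gain @ s12.T  # Schur complement
    return mu_c, cov_c

# Toy 18-dim Gaussian: indices 0-5 stand for the Lya parameters,
# 6-17 for the line parameters measured redward of 1275 A.
rng = np.random.default_rng(0)
A = rng.normal(size=(18, 18))
cov = A @ A.T + 18 * np.eye(18)  # symmetric positive definite
mu = rng.normal(size=18)

idx_lya, idx_meas = np.arange(6), np.arange(6, 18)
x_meas = rng.normal(size=12)     # stand-in for the fitted red-side values
mu_c, cov_c = condition_gaussian(mu, cov, idx_lya, idx_meas, x_meas)
```

The returned `mu_c` and `cov_c` play the role of the collapsed six-dimensional likelihood described below.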
That is, we recover the best-fit estimates of the \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} emission lines and evaluate Equation~\ref{eq:ML} to collapse the 18 dimensional likelihood function into a simple, six dimensional likelihood function describing the six unknown \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters (two Gaussian components each defined by three parameters). The maximum likelihood of this six dimensional function then describes the best-fit reconstructed profile for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line, while the full six dimensional matrix contains the correlated uncertainty. \subsection{Reconstruction example} \label{sec:ReconExample} \begin{figure*} \begin{center} \includegraphics[trim = 0.5cm 0.6cm 0cm 0.4cm, scale = 0.49]{Plots/ReconstructedPDFs_spec-4185-55469-0076.pdf} \end{center} \caption[]{The recovered 1D marginalised PDFs for each of the six reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters obtained after applying the reconstruction method outlined in Section~\ref{sec:Reconstruction} to the example QSO in Figure~\ref{fig:QSOexample}. The vertical dashed lines correspond to the MCMC maximum likelihood fit to the full QSO spectrum, whereas the blue and red curves correspond to the recovered 1D PDFs obtained from using the covariance matrix constructed from the `conservative' and `good' QSO samples respectively. The yellow curve corresponds to reconstructing the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile parameters using the `good' sample while in addition applying a prior on the QSO flux within the range $1230 < \lambda < 1275$\AA~(see Section~\ref{sec:fluxprior} for further details). 
That is, we enforce our reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile to fit the observed spectrum within this region. Note that the peak amplitude is normalised by $f_{1450}$ and is therefore dimensionless. Importantly, we are interested in the joint probability, i.e.\ the full \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. As shown below, the reconstruction performs considerably better than can be inferred from these marginalised PDFs.} \label{fig:PDFs_Recon} \end{figure*} We now present an example to highlight the performance of our approach. In order to do this, we choose the same QSO we showed in Figure~\ref{fig:QSOexample}, fitting for $\lambda > 1275$~\AA\,and recovering the six dimensional estimate for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. In restricting our fitting algorithm to $\lambda > 1275$~\AA\, we are not accessing all the information that was used by the full fit to estimate the QSO continuum. While this does not affect the recovery of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak profile itself, the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile plus continuum could be affected. In Appendix~\ref{sec:continuum_comparison} we test this assumption, finding that the QSO continuum parameters can be recovered equivalently from these two approaches, with a small amount of scatter in the QSO spectral index. \begin{figure*} \begin{center} \includegraphics[trim = 0.4cm 1cm 0cm 0.5cm, scale = 0.492]{Plots/ReconExamples_Reduced_new} \end{center} \caption[]{A zoom in of the recovered \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile from our reconstruction procedure. The thin grey curves denote 100 \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profiles extracted from the reconstructed six-dimensional \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood function.
These curves are randomly selected to represent the full posterior distribution for the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profiles, highlighting the relative scale of the errors. The white curves represent the original full MCMC fit of the same QSO (see Figure~\ref{fig:QSOexample}). Black curves are the observed flux of the original QSO spectrum. Note, in these figures, we provide only the emission line component of the full fit (i.e. absorption features are identified and fit, but not shown in the figure). \textit{Top left:} The reconstructed profile from the `good' sample. \textit{Top right:} The reconstructed profile from the `conservative' sample. Note that in both the \textit{top left} and \textit{top right} panels the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} emission feature is not fit in the reconstruction procedure. \textit{Bottom:} The reconstructed profile from the `good' sample utilising the flux prior applied within the range $1230 < \lambda < 1275$\AA~(see Section~\ref{sec:fluxprior} for further details).} \label{fig:Profile_Recon} \end{figure*} Before providing the full reconstructed profile, we first recover the individual marginalised 1D PDFs for each of the six \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters to better visualise the relative size of the errors. In order to obtain the recovered 1D PDFs for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile parameters, we marginalise the six dimensional likelihood function over the remaining five \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameters. In Figure~\ref{fig:PDFs_Recon} we present these recovered 1D marginalised PDFs, showing in the top row the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} broad line component and in the bottom row, the narrow line component. 
The vertical black dashed lines represent the recovered values from fitting the full QSO in Figure~\ref{fig:QSOexample}, and the coloured curves represent the recovered 1D marginalised PDFs for each \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameter for the `good' sample (red) and `conservative' sample (blue). It is clear that both QSO samples recover almost identical best-fit values for each \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameter, highlighting that both samples are characterised by the same correlations within the covariance matrix. Furthermore, the benefit of constructing a `good' sample is evident here, as the `good' sample consistently provides marginally narrower constraints. For the most part, the six reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameters are recovered within the 68 percentile limits of the original fit to the QSO, with the exception being the velocity offset of the broad \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} component which is only slightly beyond this limit. Otherwise, the five remaining \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameters are effectively centred around the expected value from the full fit to the same QSO (Figure~\ref{fig:QSOexample}), which should enable a relatively robust recovery of the full \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. In Figure~\ref{fig:Profile_Recon} we provide the full reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. In the top left panel we show the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile obtained from the `good' QSO sample, whereas in the top right panel is the reconstructed profile from the `conservative' QSO sample. In all figures, we present 100 reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profiles, denoted by the thin grey curves, which are randomly drawn from the full posterior distribution.
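Drawing those thin grey curves amounts to sampling the six-dimensional Gaussian and evaluating the two-component line model; a sketch with placeholder numbers for the conditioned mean and covariance (the parameter ordering, values and units here are illustrative assumptions, not our fitted results):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s
LYA = 1215.67    # rest-frame Lya centre, Angstrom

def gauss_component(wave, amp, sigma, dv):
    """One Gaussian emission component with velocity offset dv (km/s)."""
    centre = LYA * (1.0 + dv / C_KMS)
    return amp * np.exp(-0.5 * ((wave - centre) / sigma) ** 2)

# Placeholder conditioned mean/covariance for the six Lya parameters,
# ordered (amp, sigma, dv) for the broad then the narrow component.
mu_c = np.array([2.0, 15.0, -300.0, 6.0, 4.0, 0.0])
cov_c = np.diag([0.2, 4.0, 1.0e4, 0.5, 0.4, 2.5e3])

rng = np.random.default_rng(1)
wave = np.linspace(1180.0, 1260.0, 400)
draws = rng.multivariate_normal(mu_c, cov_c, size=100)
profiles = np.array([
    gauss_component(wave, ab, sb, vb) + gauss_component(wave, an, sn, vn)
    for ab, sb, vb, an, sn, vn in draws
])  # 100 reconstructed Lya profiles, one per row
```

Each row of `profiles` is one grey curve; the spread across rows visualises the correlated uncertainty of the reconstruction.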
This small subset of reconstructed profiles highlights the relative scale of the variations in the total \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile peak height, width and position. The black curve is the raw data from the observed QSO whereas the white curve is the original fit to the full QSO spectrum as shown in Figure~\ref{fig:QSOexample}. In both, we find the total shape of the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile to match extremely well with the original full fit to the same QSO, highlighting the strength and utility of this covariance matrix reconstruction method. Both the `good' and `conservative' samples recover almost identical reconstructed profiles, though the `conservative' QSO sample provides slightly broader errors. Note that for both these QSO samples, the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile is systematically below the original fit to the same QSO, indicated by the original QSO fit (white curve) being above the highest density of line profiles\footnote{This is more prevalent when averaging over the full distribution of reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profiles.}. However, this systematic offset is only minor (less than 10 per cent in the normalised flux), well within the errors of the reconstruction. Referring back to Figure~\ref{fig:PDFs_Recon}, we can see that this underestimation appears due to the narrow component of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak amplitude (bottom, central panel). \subsection{Improving the reconstruction with priors} \label{sec:fluxprior} We presented in Figure~\ref{fig:Profile_Recon} our best-fit reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profiles to a representative QSO drawn from our full sample. However, note that in this reconstruction method we have only used information from the QSO spectrum above $\lambda > 1275$\AA.
In doing this, for the case of our example QSO we found our maximum likelihood estimates to slightly underestimate (by less than 10 per cent) the original \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. At the same time, the total 68 per cent marginalised likelihoods of the reconstructed profiles are relatively broad. Motivated by this, we investigate whether we can provide an additional prior on the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction profile to further improve the robustness of the recovered profile and to reduce the relative scatter. In the top panels of Figure~\ref{fig:Profile_Recon}, redward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} we see that the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile drops well below the observed flux of the original QSO. Given we are only reconstructing the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line, this is to be expected as we are not recovering or fitting the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} emission line. However, in both \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} obscured and high-$z$ QSOs, the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} emission line should be relatively unobscured, therefore, the observed flux within this region could be used as a relative prior on the overall \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} flux amplitude. 
We therefore include a flux prior\footnote{Note that this choice in phrasing does not imply a `prior' in the Bayesian statistical sense, rather it is used to highlight that we are including additional information into our reconstruction procedure compared to the reconstruction method discussed in Section~\ref{sec:Reconstruction}.} into our \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction by performing the following steps: \begin{itemize} \item As before, we fit the QSO at $\lambda > 1275$\AA, recovering the QSO continuum and all emission line profiles necessary for our covariance matrix approach. \item Using these estimates, we collapse the 18-dimensional covariance matrix into a six dimensional estimate of the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile. \item We then jointly MCMC sample the observed QSO spectrum within the range $1230 < \lambda < 1275$\AA. We fit the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} and \ifmmode\mathrm{Si\,{\scriptscriptstyle II}}\else{}Si\,{\scriptsize II}\fi{} lines at the same time sampling from our six dimensional reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood function obtained from the $\lambda > 1275$\AA\,fit. Fitting to the observed QSO flux, and using the observed noise in the spectrum, we obtain a maximum likelihood for the reconstructed profile. In other words, we require the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profiles to fit the observed spectrum over the range $1230 < \lambda < 1275$\AA. \end{itemize} Implementing this prior on the observed flux closer to $\lambda = 1230$\AA\footnote{This choice of 1230\AA\ is purely arbitrary, and is chosen based on the assumption that a damped absorption signal would not extend this far redward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}. 
However, this choice is flexible, and can be adjusted on a case-by-case basis if evidence for stronger attenuation beyond 1230\AA\ is present.}~accesses additional information on the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile that was not available through our original $\lambda > 1275$\AA~reconstruction method. Near $\lambda = 1230$\AA, there should be a contribution from the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} broad line component, which is somewhat degenerate with the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line (as can be seen in Figure~\ref{fig:QSOexample}). By simultaneously fitting the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{} line and the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood function, we use this additional information to place a prior on the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} broad line component, which should then reduce the overall scatter in the six dimensional \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood function. Referring back to Figure~\ref{fig:PDFs_Recon}, we provide an example of this prior applied to the same QSO fit and reconstructed previously. In Figure~\ref{fig:PDFs_Recon}, the yellow curves represent the recovered 1D marginalised PDFs for each of the six \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters. Immediately, it is clear that the application of this prior further reduces the relative error in the recovered PDFs. Furthermore, these PDFs remain centred on the originally recovered values, highlighting that we have not biased our reconstruction method.
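The joint sampling step behind this flux prior can be summarised as a single objective: a trial set of Ly$\alpha$ parameters is scored against both the six-dimensional reconstructed likelihood and the observed flux over 1230--1275\,\AA. A hedged sketch, where `model_fn` is a placeholder for our actual emission line plus continuum model:

```python
import numpy as np

def log_posterior(theta, mu_c, cov_c_inv, wave, flux, noise, model_fn):
    """Score a trial 6D Lya parameter vector against (i) the reconstructed
    6D Gaussian likelihood and (ii) the observed flux in 1230-1275 A."""
    d = theta - mu_c
    log_like_recon = -0.5 * d @ cov_c_inv @ d
    mask = (wave > 1230.0) & (wave < 1275.0)
    resid = (flux[mask] - model_fn(theta, wave[mask])) / noise[mask]
    log_like_flux = -0.5 * np.sum(resid ** 2)
    return log_like_recon + log_like_flux

# Toy usage: a flat spectrum, a flat model, unit noise.
wave = np.linspace(1230.0, 1300.0, 50)
flux = np.ones_like(wave)
noise = np.ones_like(wave)
mu_c = np.zeros(6)
cov_c_inv = np.eye(6)
lp = log_posterior(np.zeros(6), mu_c, cov_c_inv, wave, flux, noise,
                   lambda th, w: np.ones_like(w))
```

An MCMC sampler exploring `theta` under this objective then yields reconstructed profiles that are simultaneously consistent with the covariance matrix and with the unobscured flux just redward of Ly$\alpha$.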
\begin{figure*} \begin{center} \includegraphics[trim = 0.3cm 0.7cm 0cm 0.5cm, scale = 0.495]{Plots/ReconScatter} \end{center} \caption[]{A visual characterisation of the performance of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line reconstruction pipeline at recovering each of the six \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line parameters. We provide the 2D scatter between the full MCMC fits to the QSOs, compared to the reconstructed values from a fit to the same QSO masking the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} region (flux masked at $\lambda < 1275$\AA) and applying our additional prior on the QSO flux near $\lambda = 1230$\AA. Green solid and dashed contours enclose the 68 and 95 per cent scatter of the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameters relative to their expected (true) value, while the red dashed curves correspond to the one-to-one line on which all points would lie if the reconstruction procedure worked perfectly. Histograms (black curves) correspond to the 1D PDFs of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameters from the full MCMC fitting (top) and the recovered estimate from the reconstruction pipeline fit to the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} masked QSO (right). Blue curves represent the associated Gaussian distribution with equivalent scatter. Note that the peak amplitude is normalised by $f_{1450}$ and is therefore dimensionless.} \label{fig:Recon_All} \end{figure*} In the bottom panel of Figure~\ref{fig:Profile_Recon} we present the full reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile after the addition of this prior. The red curve indicates the fit to the QSO within $1230 < \lambda < 1275$\AA, which we have used as our prior to improve the reconstruction of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. 
At $\lambda < 1230$\AA, we then have the same 100 thin grey curves representing the full posterior distribution of the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood profiles. By applying this additional prior on the total observed flux, we reduce the overall scatter in the reconstructed profile. The maximum likelihood profiles (thin grey curves) now provide a more robust match to the observed QSO. \subsection{Statistical performance of the reconstruction pipeline} Thus far we have only applied our reconstruction pipeline to recover the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile from a single test example drawn from our `good' sample of 1673 QSOs. In order to statistically characterise the performance of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction pipeline across the QSO sample, in Figure~\ref{fig:Recon_All} we present 2D scatter plots of the maximum a posteriori (MAP) estimates\footnote{Note that throughout this work, the recovered MAP estimates from the full posterior distribution do not differ significantly from the peak of the associated marginalised PDFs.} of the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters compared to the originally obtained values from the full MCMC fit to the QSO (without masking out the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line). For each of the six \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line parameters, we highlight the 68 and 95 percentiles of the joint marginalised likelihoods for the distributions by the green solid and dashed contours, respectively. Additionally, the red dashed curve demarcates the one-to-one line, along which all QSOs would sit if the reconstruction procedure worked perfectly.
Across the six \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} panels, we find strong agreement ($\rho > 0.7$) amongst half of our \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line parameters, those being the peak amplitudes of both the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} broad and narrow components and the velocity offset for the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} narrow line component. For the remainder of the parameters we find moderate to weaker recovery of the original line parameters. However, note that in order to keep this figure as clear as possible we are only providing the MAP estimates. Within Figures~\ref{fig:PDFs_Recon} and~\ref{fig:Profile_Recon} we found that the relative scatter on the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile parameters was notable, and therefore the majority of the reconstructed parameters are within the 68 per cent marginalised errors. \begin{figure*} \begin{center} \includegraphics[trim = 0.3cm 0.7cm 0cm 0.5cm, scale = 0.58]{Plots/FluxDistributions} \end{center} \caption[]{A comparison of the maximum likelihood reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux (including the flux prior; Section~\ref{sec:fluxprior}) to the actual \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux obtained from the full fit at a specific wavelength for each QSO within our `good' sample (1673 QSOs; we define 1 A. U. = $10^{-17}\,{\rm erg\,cm^{-2}\,s^{-1}\,\AA^{-1}}$). \textit{Left panel:} blueward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} ($\lambda=1205$\AA). \textit{Right panel:} redward ($\lambda=1220$\AA). The red dashed curve corresponds to the one-to-one relation, and the grey shaded region encompasses 15 per cent scatter in the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} flux relative to the actual measured \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux. 
At $\lambda=1220$\AA~we find the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux to be within 15 per cent of the actual \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux in $\sim90$~per cent of all QSOs, decreasing to $\sim85$~per cent at $\lambda=1205$\AA.} \label{fig:FluxDistribution} \end{figure*} The reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile parameters highlighted here reflect the correlations recovered from the covariance matrix in Figure~\ref{fig:CovarianceMatrix}. We found strong correlations in the peak amplitudes of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profiles, and the narrow component velocity offset. The lack of a strong correlation for the width of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line in our covariance matrix translates to a weaker recovery of these parameters. In principle, these weaker correlations could be further strengthened by adding an appropriate prior on the line widths motivated by the statistical distributions recovered from the full sample, or other line properties such as correlations between the equivalent widths. In Figure~\ref{fig:Recon_All}, there is also slight evidence for a bias in the reconstructed parameters, as highlighted by the orientation of the green contours (68 and 95 percentiles of the reconstructed parameter distributions) relative to the reference one-to-one line. However, this could artificially arise as the increase/decrease in any one of these \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line parameters could be compensated for by respective changes in others (i.e.\ model degeneracies), whereas the full six dimensional \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} likelihood function takes these model degeneracies into account when estimating the full reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile.
In order to better illustrate the full reconstruction of the joint \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} parameter likelihoods, in Figure~\ref{fig:FluxDistribution} we condense the information from the six individual \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line parameters into a single, total measured \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux at two arbitrarily defined locations blueward (1205\AA) and redward (1220\AA) of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line centre. We compare the total reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux against the measured \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux from our full fit to the same QSO, providing the reference one-to-one relation as the red dashed curve; the grey shaded region encompasses the region in which the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux is within 15 per cent of the measured flux. Immediately obvious from this figure is that there are no apparent biases in the reconstruction process, i.e.\ we neither systematically over- nor underestimate the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. This figure highlights the strength of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction process. Redward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} (at 1220\AA), closer to our flux prior at 1230\AA, we find that $\sim90$ per cent of all reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profiles have a recovered flux within 15 per cent of the measured flux. As one would expect, the scatter increases blueward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} (at 1205\AA); however, we still find the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux to be within 15 per cent for $\sim85$ per cent of our sampled QSOs.
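Computing such success fractions is a one-liner once the reconstructed and measured line fluxes are tabulated; the arrays below are synthetic stand-ins for our actual measurements, with an assumed 8 per cent reconstruction scatter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for the measured and reconstructed Lya line flux
# at a fixed wavelength, one entry per QSO in the `good' sample.
measured = rng.lognormal(mean=1.0, sigma=0.5, size=1673)
reconstructed = measured * rng.normal(loc=1.0, scale=0.08, size=1673)

# Fraction of QSOs whose reconstructed flux lies within 15 per cent
# of the measured flux (the grey shaded band in the figure).
frac_within = np.mean(np.abs(reconstructed / measured - 1.0) <= 0.15)
```

Evaluating this statistic at 1205\,\AA\ and 1220\,\AA\ gives the per-wavelength success rates quoted above.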
This statistically highlights that the reconstruction process does an excellent job of recovering the full \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. \section{Potential Applications} \label{sec:Discussions} In this work, we have developed an MCMC fitting algorithm for the sole purpose of characterising the QSO continuum and the emission line profiles within the range $1180{\rm \AA} < \lambda < 2300{\rm \AA}$. Our goal was the construction of a covariance matrix to reconstruct the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. However, due to the flexibility of the MCMC approach, many other applications could benefit from such a pipeline. Firstly, we were only interested in correlations amongst the strongest, high ionisation emission line parameters for our covariance matrix. However, various properties of QSOs can be extracted from accurate recovery of the line widths and ratios. For example, the QSO metallicity has been estimated from measuring the \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{}/\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}, \ifmmode\mathrm{N\,{\scriptscriptstyle V}}\else{}N\,{\scriptsize V}\fi{}/He\,{\scriptsize II} and the \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}/\ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} line ratios in samples of QSOs from \citet{Nagao:2006p4776} ($2 < z < 4.5$) and \citet{Juarez:2009p4775} ($4 < z < 6.4$). Using the same line ratios, we could recover estimates of the metallicities of all QSOs within our measured sample.
Several other emission line ratios (e.g.\ the R23 parameter, [O\,{\scriptsize III}]/[O\,{\scriptsize II}], \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{}/He\,{\scriptsize II}, N-based ratios) can additionally be used as proxies for QSO metallicity \citep[e.g.][]{Nagao:2006,Matsuoka:2009,Batra:2014}. By extending our QSO MCMC framework, many other emission lines can be obtained and characterised to improve the QSO metallicity estimates. Crucially, this MCMC approach would enable a large dataset of QSOs to be rapidly explored. The extension to measuring the metallicities of the QSOs would enable quantities such as the SMBH mass to be recovered through existing correlations between the emission line FWHMs. In Section~\ref{sec:absorb} we have outlined our method to identify and fit absorption features (see e.g.~\citealt{Zhu:2013} for a more robust approach). While in this work these have been considered contaminants, the prevalence and measurement of these lines could be used to infer properties of the metallicities within intervening absorbers \citep[e.g.][]{RyanWeber:2006,Becker:2009,DOdorico:2013} and to reveal the presence of massive outflows of ionised gas from their nuclei \citep[e.g.][and references therein]{Crenshaw:2003}. For example, we discarded any QSOs with strong intervening absorption from systems such as DLAs; however, analysing these sources with our MCMC fitting algorithm could yield measurements of the internal properties of these absorption systems. Assuming the input quasars to our MCMC are representative of the quasar population as a whole, and further that the UV spectral properties of quasars do not evolve with redshift, our model can be used to predict the intrinsic distribution of quasar spectra at redshifts beyond those from which the model was calibrated ($z\sim2.5$).
A similar procedure has often been employed to characterise the colour selection efficiency of quasar surveys \citep[e.g.][]{Fan:1999}, although the correlations obtained through our MCMC approach provide a more detailed reconstruction of quasar emission features and hence more reliable colour models. Particularly since the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line plays such a key role in the selection of high-$z$ quasars, our model could be used to identify quasars missing from current surveys due to selection effects and provide more robust statistics for high-$z$ quasar luminosity functions. In addition, by recovering an estimate for the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile, one can investigate the QSO proximity effect. This approach requires an estimate of the intrinsic QSO luminosity, coupled with the modelling of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} forest. Within the sphere of influence of the QSO, the photoionisation background is higher than the mean background permeating the IGM. Modelling the transition between the mean IGM background and the drop-off in QSO luminosity has been used by several authors to recover estimates of the photoionisation background in the IGM \citep[e.g.][]{Bolton:2005p6088,Bolton:2007p3273,Calverley:2011}. At the redshifts where these studies have been performed (e.g.\ $2 < z < 6$), the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile is not obscured or attenuated by a neutral IGM. Therefore, one could in principle push the flux prior we used in this work much closer to the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line centre, to substantially reduce the errors on the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line profile blueward of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}.
At $z>6$, for an increasingly neutral IGM, the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} emission line can become increasingly attenuated by the Gunn-Peterson IGM damping wing. While existing methods have been developed to access information on the red side of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} \citep[e.g.][]{Kramer:2009p920}, for the $z=7.1$ QSO ULASJ1120+0641 \citep{Mortlock:2011p1049} evidence suggests that the IGM damping wing imprint extends further redward \citep[e.g.][]{Mortlock:2011p1049,Bolton:2011p1063}, limiting the effectiveness of these approaches. The approach developed in this work should be unaffected by this, as we do not fit the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line; our reconstruction method is therefore well suited to exploring the potential imprint of the IGM damping wing on the $z=7.1$ QSO \citep[e.g.][]{Greig:2016p1}, along with other future $z>6$ QSOs. \section{Conclusion} \label{sec:Conclusions} Characterising the continuum and emission line properties of QSOs provides a wealth of information on the internal properties of the AGN, such as the mass of the SMBH, the QSO metallicity, star formation rates of the host galaxy, and nuclear outflows and winds. Furthermore, the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line shape can be used to probe properties of the IGM, such as the mean photoionisation background and the abundance of neutral hydrogen in the IGM at $z>6$. Motivated by correlations amongst QSO emission lines \citep[e.g.][]{Boroson:1992p4641,Sulentic:2000,Shen:2011p4583}, in this work we developed a new reconstruction method to recover the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile. This method is based on the construction of a covariance matrix built from a large sample of moderate-$z$ ($2.0 < z < 2.5$), high S/N (S/N $> 15$) QSOs from the BOSS observational programme.
We use this moderate-$z$ sample to characterise the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile, where it should be relatively unaffected by intervening neutral hydrogen in the IGM. In order to characterise each QSO within our sample, we developed an MCMC fitting algorithm to jointly fit the QSO continuum, the emission lines and any absorption features that could contaminate or bias the fitting of the QSO. We modelled the QSO continuum as a single two-parameter power law ($\propto \lambda^{\alpha_{\lambda}}$) and each emission line as a Gaussian defined by three parameters: its width, peak amplitude and velocity offset from systemic. We constructed our covariance matrix from a refined sample of 1673 QSOs, using the high ionisation emission lines \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}, \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{}. Owing to the flexibility of the MCMC framework, we explored various combinations of single and double component Gaussians to characterise these lines. For \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} we settled on a double component Gaussian to model the presence of a broad and a narrow component. For the remaining two lines, we considered a single component. This resulted in an $18\times18$ covariance matrix, which we used to investigate new and existing correlations amongst the line profiles.
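The fitted spectral model summarised above (a power-law continuum plus one Gaussian per emission line component) can be sketched as follows. The function name, the parameter ordering, and the 1450\AA\ normalisation wavelength are our illustrative assumptions, not the actual interface of the paper's pipeline:

```python
import numpy as np

def qso_model(wave, norm, alpha_lam, lines):
    """Toy QSO model: power-law continuum norm*(wave/1450)^alpha_lam
    plus one Gaussian per line.  Each line is a tuple
    (amplitude, rest_centre_A, velocity_offset_kms, width_kms);
    names and normalisation wavelength are illustrative assumptions.
    """
    c_kms = 2.998e5
    flux = norm * (wave / 1450.0) ** alpha_lam
    for amp, centre, v_off, width_kms in lines:
        mu = centre * (1.0 + v_off / c_kms)   # velocity-shifted line centre
        sigma = centre * width_kms / c_kms    # width converted to Angstrom
        flux += amp * np.exp(-0.5 * ((wave - mu) / sigma) ** 2)
    return flux
```

A double-component line, as used for Ly$\alpha$ and C\,IV, is simply two entries in `lines` sharing the same rest centre.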
We identified several strong trends from our covariance matrix, most notably the strong positive correlation between the peak amplitudes of the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} narrow components ($\rho=0.8$) and the strong anti-correlation between the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} narrow component peak amplitude and the width of the \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} line ($\rho=-0.74$). These two are the strongest examples of a consistent pattern: positive correlations amongst the peak amplitudes across all the emission line species, and anti-correlations between the peak amplitudes and line widths. Using this covariance matrix, we constructed an $N$-dimensional Gaussian likelihood function from which we are able to recover our reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile. The reconstruction method works as follows: \begin{itemize} \item Fit a QSO with our MCMC pipeline within the range $1275{\rm \AA} < \lambda < 2300{\rm \AA}$, recovering the parameters defining the QSO continuum, and the \ifmmode\mathrm{Si\,{\scriptscriptstyle IV}\,\plus O\,{\scriptscriptstyle IV]}}\else{}Si\,{\scriptsize IV}\,+O\,{\scriptsize IV]}\fi{}, \ifmmode\mathrm{C\,{\scriptscriptstyle IV}}\else{}C\,{\scriptsize IV}\fi{} and \ifmmode\mathrm{C\,{\scriptscriptstyle III]}}\else{}C\,{\scriptsize III]}\fi{} lines. \item Obtain a six-dimensional estimate of the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile (modelled as a two-component Gaussian), which provides the best-fit profile and correlated uncertainties, by evaluating the $N$-dimensional likelihood function describing our full covariance matrix, including a prior on the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line using the observed QSO flux within the range $1230{\rm \AA} < \lambda < 1275{\rm \AA}$.
\end{itemize} To visually demonstrate the performance of this reconstruction method, we applied it to a randomly selected QSO from our full data set. Finally, we quantitatively assessed its performance by applying it to the full QSO sample, and compared the reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} profile parameters to those recovered from the original full MCMC fit of the same QSO. We found that estimates for both the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} peak amplitudes are recovered strongly, as is the velocity offset of the narrow line component. For both of the line widths and the broad component velocity offset we find moderate agreement. We additionally explored the total reconstructed \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} flux (rather than individual parameters) relative to the original full MCMC fit at two distinct wavelengths blueward (1205\AA) and redward (1220\AA) of \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{}. Our reconstruction method recovered the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line flux to within 15 per cent of the measured flux at 1205\AA~(1220\AA)~$\sim85$ ($\sim90$) per cent of the time. There are several potential applications for both the MCMC fitting method and the \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} reconstruction pipeline. The MCMC fitting could be easily modified to measure any emission or absorption feature within a QSO spectrum. With this, many properties of the source QSO could be extracted, for example QSO metallicities. The ability to reconstruct the intrinsic \ifmmode\mathrm{Ly}\alpha\else{}Ly$\alpha$\fi{} line profile could have important cosmological consequences such as improving estimates of the IGM photoionisation background or recovering estimates of the IGM neutral fraction. \section*{Acknowledgments} We thank the anonymous referee for their helpful suggestions. 
AM and BG acknowledge funding support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 638809 -- AIDA -- PI: AM). ZH is supported by NASA grant NNX15AB19G. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section{Introduction} \label{Introduction} Many biological and biomimetic structures use geometrically pronounced features to produce highly nonlinear behavior. These materials include seashells, hierarchical honeycombs, snail spirals, seahorse tails, fish scales, lobster exoskeletons, crab exoskeletons, butterfly wings, armadillo exoskeletons, sponge skeletons, etc. \cite{c07,c08,c09,c10,c11}. Among these structures, dermal scales have garnered special attention recently due to their complex mechanical behavior in bending and twisting \cite{c12,c13,c14,c15,c16,c17,c18}. Scales in nature are naturally multifunctional, durable and lightweight \cite{c19,c20,c21,c22,c23,c24,c25,c26,c27,c28}, and protective for the underlying substrate, which has been an inspiration for armor designs \cite{c17,c18,c29,c30} in which overlapping scales can resist penetration and provide additional stiffness \cite{c17,c18,c31,c32}. Fabrication methods such as synthetic mesh sewing and stretch-and-release have recently been developed to produce overlapping scale-covered structures in 2D and 1D configurations \cite{c33,c34}. These fabricated structures show almost ten times more puncture resistance than soft elastomers. However, in addition to these localized loads, global deformation modes such as bending and twisting can be important for a host of applications that require a structural mode of deformation, such as soft robotics, prosthetics or morphing structures. It is here that characterizing bending and twisting plays an important role in ascertaining the benefit of these structures. Prior research has shown that bending and twisting of a substrate exhibit small-strain reversible nonlinear stiffening and locking behavior due to the sliding kinematics of the scales in one-dimensional substrates \cite{c35,c36,c37,c38,c39,c40,c41,c42,c43,c44,c48}.
The universality of this behavior across bending with uniformly distributed scales, bending with functionally graded scales, and twisting with uniformly distributed scales is an important discovery. However, the role of friction, and its possibly universal character, has not been established in the literature. In other words, questions remain about whether the property modifications brought about by friction in bending carry over to twisting. For instance, Coulomb friction in the bending regime advances the locking envelopes but at the same time limits the range of operation \cite{c39}. In the dynamic regime, Coulomb friction can lead to damping behavior, which mimics viscous damping \cite{c44}. Clearly, friction between sliding scales can significantly alter the nature of the nonlinearity. However, in spite of these studies, the role of friction in influencing the twisting behavior has never been investigated before. In this paper, we investigate for the first time the role of friction in the twisting behavior of biomimetic scale-covered systems under pure torsion. To this end, we establish an analytical model aided by finite element (FE) computational investigations. We assume rigid scales, linear elastic behavior of the substrate and a Coulomb model of friction between the scales’ surfaces. We compare our results with the FE model to verify the proposed analytical model. \section{Materials and methods} \label{Materials and methods} \subsection{Materials and geometry} \label{Materials and geometry} We consider a rectangular deformable prismatic bar with a row of rigid rectangular plates embedded on the substrate’s top surface. For the sake of illustration, we fabricate prototypes of 3D-printed PLA scales ($E_{PLA}\sim3$ $GPa$), embedded onto a silicone substrate and adhered with silicone glue to prefabricated grooves on the molded slender Vinylpolysiloxane (VPS) substrate ($E_{VPS}\sim1.5$ $MPa$) as shown in Figure \ref{Fig1a}. The prototype is shown in the twisted configuration in Figure \ref{Fig1b}.
The rigidity assumption is valid in the limit of much higher stiffness of the scales, away from the locking state \cite{c18,c45}. \begin{figure}[htbp]% \centering \subfigure[][]{% \label{Fig1a}% \includegraphics[width=3.3in]{Fig1a.pdf}}% \hspace{8pt}% \subfigure[][]{% \label{Fig1b}% \includegraphics[width=3.3in]{Fig1b.pdf}} \\ \caption[]{The fabricated prototype made of 3D-printed PLA scales and molded slender Vinylpolysiloxane (VPS) substrate: \subref{Fig1a} untwisted configuration; and \subref{Fig1b} twisted configuration.}% \label{Fig1}% \end{figure} The pure twisting behavior allows us to assume periodicity, letting us isolate a fundamental representative volume element (RVE) for modeling the system, Figure \ref{Fig2a}. The scales are considered to be rectangular rigid plates with thickness $t_s$, width $2b$, and length $l_s$, oriented at angles $\theta$ and $\alpha$ as shown in Figure \ref{Fig2b} with respect to the rectangular prismatic substrate. $\theta$ is the scale inclination angle, defined as the dihedral angle between the substrate’s top surface and the scale’s bottom surface, and $\alpha$ is the angle between the substrate’s cross section and the scale’s width. The length of the exposed section of the scales is denoted $l$, and the length of the embedded section is $L$. Therefore, the total length of the scale is $l_s=L+l$. The spacing between the scales is constant and denoted by $d$, which is a geometrical parameter reciprocal to the density of scales. We assume that the scale’s thickness $t_s$ is negligible with respect to the scale length $l_s$ ($t_s \ll l_s$), and that the scale’s embedded length is also negligible with respect to the substrate’s thickness ($L \ll 2t$). This thin-plate idealization for the biomimetic scales is appropriate for this case and typically used in the literature for analogous systems \cite{c36,c39,c40,c41,c42,c43,c44}.
\begin{figure}[htbp]% \centering \subfigure[][]{% \label{Fig2a}% \includegraphics[width=3.3in]{Fig2a.pdf}}% \hspace{8pt}% \subfigure[][]{% \label{Fig2b}% \includegraphics[width=3.3in]{Fig2b.pdf}} \\ \caption[]{The schematic of the geometrical configuration of three consecutive scales: \subref{Fig2a} top view of the scale configuration; and \subref{Fig2b} dimetric view showing the scale orientation angles $\theta$ and $\alpha$, and the embedded part of each scale. Angle $\theta$ and thickness $t_s$ are exaggerated here.}% \label{Fig2}% \end{figure} \subsection{Kinematics} \label{Kinematics} For global deformation modes such as pure bending and twisting, the scale periodicity is a good approximation \cite{c36,c42}. The periodicity assumption allows us to consider just a three-consecutive-scale configuration at the RVE level. We call these scales the ``zeroth scale", ``1\textsuperscript{st} scale", and ``2\textsuperscript{nd} scale", respectively, from left to right. Without loss of generality, we consider the 1\textsuperscript{st} scale to be locally fixed with respect to the other scales. A twisting deformation with twist rate $\Upphi$ is applied to the rectangular prismatic substrate about the torsion axis, which passes through the center of the beam cross section. Due to this underlying deformation, the 2\textsuperscript{nd} scale rotates by a twist angle $\varphi=\Upphi d$, and the zeroth scale rotates in the reverse direction about the torsion axis by $-\varphi=-\Upphi d$, because the 1\textsuperscript{st} scale is assumed locally fixed. Continued twisting of the substrate advances the contact between each pair of consecutive scales simultaneously, due to periodicity, through the coincidence of lines $C_1B_1$ and $D_2C_2$, as well as lines $D_1C_1$ and $C_0B_0$. To find a contact criterion between the 1\textsuperscript{st} scale and the 2\textsuperscript{nd} scale, we establish the 3D equations of lines $C_1B_1$ and $D_2C_2$.
We place the coordinates $XYZ$ at the midpoint of the 1\textsuperscript{st} scale's width as shown in Figure \ref{Fig2a}. Then we place coordinates $xyz$ on the torsion axis at point $O=(0,-t,0)$ measured from the coordinates $XYZ$. Hereafter, the coordinates $xyz$ are our reference frame. We establish local coordinates on each scale, denoted the ``$i$\textsuperscript{th} scale", with origin located at the scale corner $D_i$. In these local coordinates, the unit vector of the $x$-axis ($\bi{{n}_{Xi}}$) is on the edge $D_iC_i$, the unit vector of the $y$-axis ($\bi{{n}_{Yi}}$) is on the edge $D_iA_i$, and the unit vector of the $z$-axis ($\bi{{n}_{Zi}}$) is out of plane and perpendicular to $\bi{{n}_{Xi}}$ and $\bi{{n}_{Yi}}$, Figure \ref{Fig2b}. On each scale, edges $D_iC_i$ and $A_iB_i$ are parallel and in the direction of $\bi{{n}_{Xi}}$, and edges $C_iB_i$ and $D_iA_i$ are parallel and in the direction of $\bi{{n}_{Yi}}$. Point $M_i$ is located in the middle of edge $C_iB_i$. Using these established coordinates, the symmetric equations of line $C_1B_1$ of the 1\textsuperscript{st} scale are as follows \cite{c46}: \vspace{0.7pc} \begin{equation} \label{Eq1} \frac{x-x_{M_1}}{x_{\bi{{n}_{Y1}}}}=\frac{y-y_{M_1}}{y_{\bi{{n}_{Y1}}}}=\frac{z-z_{M_1}}{z_{\bi{{n}_{Y1}}}}, \end{equation} \vspace{0.7pc} where $\bi{{n}_{Y1}}=(x_\bi{{n}_{Y1}},y_\bi{{n}_{Y1}},z_\bi{{n}_{Y1}})$. Setting (\ref{Eq1}) equal to $p$ and using the geometrical parameters in Figure \ref{Fig2}, we obtain the parametric form of the equation of line $C_1B_1$, where $p$ varies from $-b$ to $b$: \vspace{0.7pc} \numparts \begin{eqnarray} x(p) = p\cos \alpha - l\sin \alpha \cos \theta, \label{Eq2a} \\ y(p) = t + l\sin \theta, \label{Eq2b}\\ z(p) = p\sin \alpha + l\cos \alpha \cos \theta. \label{Eq2c} \end{eqnarray} \endnumparts \vspace{-0.7pc} Point $D_i$ is located at one end of the edge $D_iC_i$.
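The parametric equations (2a)-(2c) transcribe directly into code; the sketch below is a direct, illustrative transcription (function name and argument units are ours, with angles in radians):

```python
import numpy as np

def line_C1B1(p, l, t, theta, alpha):
    """Point on edge C1B1 of the fixed 1st scale, Eqs. (2a)-(2c).

    p is the line parameter (runs from -b to b), l the exposed scale
    length, t the half substrate thickness offset, theta the scale
    inclination angle and alpha the in-plane scale angle (radians).
    """
    x = p * np.cos(alpha) - l * np.sin(alpha) * np.cos(theta)   # Eq. (2a)
    y = t + l * np.sin(theta)                                   # Eq. (2b)
    z = p * np.sin(alpha) + l * np.cos(alpha) * np.cos(theta)   # Eq. (2c)
    return np.array([x, y, z])
```

For $\alpha=0$ the edge reduces to $x=p$, $z=l\cos\theta$, which is a quick sanity check of the transcription.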
The symmetric equations of line $D_2C_2$ of the 2\textsuperscript{nd} scale are as follows: \vspace{0.7pc} \begin{equation} \label{Eq3} \frac{x-x_{D_2}}{x_{\bi{{n}_{X2}}}}=\frac{y-y_{D_2}}{y_{\bi{{n}_{X2}}}}=\frac{z-z_{D_2}}{z_{\bi{{n}_{X2}}}}, \end{equation} \vspace{0.7pc} where $\bi{{n}_{X2}}=(x_\bi{{n}_{X2}},y_\bi{{n}_{X2}},z_\bi{{n}_{X2}})$. To find the parametric equation of line $D_2C_2$, which lies on the 2\textsuperscript{nd} scale rotating by angle $\varphi$ about the torsion axis, we first locate the corners of the 2\textsuperscript{nd} scale as shown in Figure \ref{Fig2a}, and then find their locations after rotation using a rotation matrix. Therefore, the rotated local coordinates on this scale and the unit vector in direction $D_2C_2$ ($\bi{{n}_{X2}}$) can be established. Using these geometrical parameters and setting (\ref{Eq3}) equal to $q$, we obtain the parametric form of the equation of line $D_2C_2$, where $q$ varies from $0$ to $l$: \vspace{0.7pc} \numparts \begin{eqnarray} x(q) = ( {\tan \theta \tan \varphi - \sin \alpha } )q + ( {t\sin \varphi - b\cos \alpha \cos \varphi }), \label{Eq4a} \\ y(q) = ( {\tan \theta + \sin \alpha \tan \varphi } )q + ( {t\cos \varphi + b\cos \alpha \sin \varphi }), \label{Eq4b} \\ z(q) = ( {\frac{{\cos \alpha }}{{\cos \varphi }}} )q + ( {d - b\sin \alpha } ). \label{Eq4c} \end{eqnarray} \endnumparts \vspace{-0.2pc} To find a contact point between these two lines, (2) and (4) must coincide in the $x$, $y$ and $z$ coordinates simultaneously. By setting (\ref{Eq2a}) equal to (\ref{Eq4a}) and (\ref{Eq2b}) equal to (\ref{Eq4b}) simultaneously, we arrive at the following system of equations: \vspace{0.7pc} \begin{equation} \label{Eq5} \left[\begin{array}{cccc} x_{\bi{{n}_{Y1}}} & -x_{\bi{{n}_{X2}}} \\ y_{\bi{{n}_{Y1}}}& -y_{\bi{{n}_{X2}}}\end{array} \right] \left[\begin{array}{cccc} p \\ q\end{array} \right] = \left[\begin{array}{cccc} x_{C_2}-x_{M_1} \\ y_{C_2}-y_{M_1}\end{array} \right].
\end{equation} \vspace{0.7pc} Solving (\ref{Eq5}) yields expressions for $p$ and $q$; substituting the derived expression for $p$ or $q$ into (\ref{Eq2c}) or (\ref{Eq4c}) yields an analytical relationship between $\varphi$ and $\theta$. To represent a general form for this relationship, we define dimensionless geometric parameters $\eta=l/d$, $\beta=b/d$, and $\lambda=t/d$ as the overlap ratio, dimensionless scale width, and dimensionless substrate thickness, respectively. The governing nonlinear relationship between the substrate twist angle $\varphi$ and the scale inclination angle $\theta$ can be written as: \vspace{0.7pc} \begin{eqnarray} \label{Eq6} \fl (\cos \varphi - 1) {\Big( \beta \sin 2\alpha \sin \theta + \eta {{\cos }^2}\alpha \sin 2\theta + 2\lambda \cos 2\alpha \cos \theta \Big)} - 2\cos \alpha \cos \varphi \sin \theta + \\ 2\sin \alpha \sin \varphi ( {\eta + \lambda \sin \theta } ) + 2\cos \alpha \sin \varphi \cos \theta ( {\beta - \sin \alpha } ) = 0. \nonumber \end{eqnarray} \vspace{-0.7pc} From the beginning of scale engagement, the relationship (\ref{Eq6}) holds between the substrate twist angle $\varphi$ and the scale inclination angle $\theta$. After engaging, the scales slide over each other and $\theta$ starts to increase from its initial value $\theta_0$ according to the nonlinear relationship (\ref{Eq6}). Scale engagement starts at a relatively small twist angle; therefore, to find an explicit relationship for the engagement twist angle $\varphi_e$, we linearize (\ref{Eq6}) in the small twist regime ($\varphi \ll 1$, $\theta \ll 1$), which leads to $\varphi_e={\theta_0}/(\eta \tan \alpha+ \beta- \sin \alpha)$. Using the kinematic relationship (\ref{Eq6}), we probe the existence of a singular point where locking can take place. This is the envelope defined by $\partial \varphi /\partial \theta=0$, beyond which no further sliding is possible without significant deformation of the scales.
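The nonlinear relationship (6) and the linearized engagement angle $\varphi_e$ can be cross-checked numerically; below is a sketch using simple bisection, with illustrative parameter values of our own choosing (angles in radians):

```python
import math

def residual(phi, theta, eta, beta, lam, alpha):
    """Left-hand side of the kinematic relationship (Eq. 6) between the
    substrate twist angle phi and the scale inclination angle theta."""
    return ((math.cos(phi) - 1.0)
            * (beta * math.sin(2 * alpha) * math.sin(theta)
               + eta * math.cos(alpha) ** 2 * math.sin(2 * theta)
               + 2 * lam * math.cos(2 * alpha) * math.cos(theta))
            - 2 * math.cos(alpha) * math.cos(phi) * math.sin(theta)
            + 2 * math.sin(alpha) * math.sin(phi) * (eta + lam * math.sin(theta))
            + 2 * math.cos(alpha) * math.sin(phi) * math.cos(theta)
            * (beta - math.sin(alpha)))

def engagement_twist(theta0, eta, beta, lam, alpha, hi=0.5):
    """Solve residual(phi, theta0) = 0 for phi by bisection: the twist
    angle at which the scales engage for initial inclination theta0.
    Assumes the bracket (0, hi) contains exactly one sign change."""
    lo = 1e-9
    f_lo = residual(lo, theta0, eta, beta, lam, alpha)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid, theta0, eta, beta, lam, alpha) * f_lo > 0:
            lo = mid
            f_lo = residual(mid, theta0, eta, beta, lam, alpha)
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For small $\theta_0$ the numerical root agrees with the linearized estimate $\varphi_e=\theta_0/(\eta\tan\alpha+\beta-\sin\alpha)$ to within a few per cent, which is a useful check on any transcription of Eq. (6).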
This point is called the ``kinematic locking" of the system \cite{c42}. Substituting the derived expression for $p$ or $q$ into (2) or (4) gives the location of point $P_{12}$, the intersection of lines $D_2C_2$ and $C_1B_1$. We can use the same procedure to establish the locations of the zeroth scale's corners and its local coordinates after rotation by angle $-\varphi$ about the torsion axis. We find the same nonlinear relationship between $\varphi$ and $\theta$ due to the periodicity of the system, and we can then find the location of point $P_{10}$, the intersection of lines $D_1C_1$ and $C_0B_0$, using the same method. \subsection{Mechanics} \label{Mechanics} To investigate the role of friction in the twisting behavior of a biomimetic scale-covered substrate, we examine the free body diagram of the RVE (here the 1\textsuperscript{st} scale) during engagement, as shown in Figure \ref{Fig3}. The forces on the 1\textsuperscript{st} scale are as follows. At the contact point $P_{10}$ between the zeroth and 1\textsuperscript{st} scales, there are two reaction forces: the friction force $\bi{f_{10}}$, acting in the plane of the 1\textsuperscript{st} scale at angle $\chi_{10}$ with respect to the unit vector $\bi{{n}_{X1}}$, and the normal force $\bi{N_{10}}$, acting perpendicular to this plane in direction $-\bi{{n}_{Z1}}$, as shown in Figure \ref{Fig3}. Similarly, at the contact point $P_{12}$ between the 1\textsuperscript{st} and 2\textsuperscript{nd} scales, two reaction forces act: the friction force $\bi{f_{12}}$, in the plane of the 2\textsuperscript{nd} scale at angle $\chi_{12}$ with respect to the unit vector $\bi{{n}_{X2}}$, and the normal force $\bi{N_{12}}$, perpendicular to the plane of the 2\textsuperscript{nd} scale in direction $\bi{{n}_{Z2}}$, as shown in Figure \ref{Fig3}.
\begin{figure}[htbp] \centering \includegraphics[width=3.3in]{Fig3.pdf} \caption{Free body diagram of each pair of scales representing their contact points, applied normal force $\bi{N}$, and friction force $\bi{f}_{fr}$ at the contact points.} \label{Fig3} \end{figure} Note that the directions of the friction forces depend on the direction of relative motion between each pair of scales. Due to the periodicity, the magnitudes of the friction forces are equal, $f_{fr}=f_{10}=f_{12}$, and the magnitudes of the normal forces are equal, $N=N_{10}=N_{12}$. According to the described free body diagram, the balance of moments at the base of the 1\textsuperscript{st} scale can be written in vectorial form as follows: \vspace{0.7pc} \begin{eqnarray} \label{Eq7} \fl K_ \theta (\theta - \theta_0)=\bigg( \bi{O_{1}P_{10}} \times \Big( -({f_{fr}} \cos \chi_{10})\bi{{n}_{X1}} - ({f_{fr}} \sin \chi_{10})\bi{{n}_{Y1}} - (N)\bi{{n}_{Z1}} \Big)+ \\ \bi{O_{1}P_{12}} \times \Big( ({f_{fr}} \cos \chi_{12})\bi{{n}_{X2}} + ({f_{fr}} \sin \chi_{12})\bi{{n}_{Y2}} + (N)\bi{{n}_{Z2}} \Big) \bigg).\bi{{n}_{Y1}}, \nonumber \end{eqnarray} \vspace{-0.5pc} \noindent where $\bi{O_{1}P_{10}}$ and $\bi{O_{1}P_{12}}$ are the position vectors of contact points $P_{10}$ and $P_{12}$ with respect to the base of the 1\textsuperscript{st} scale, respectively, as shown in Figure \ref{Fig3}. $K_{\theta}$ is the ``rotational spring constant" or the ``rigid scale--elastic substrate joint stiffness". As the scales engage, they tend to push each other and increase their inclination angle $\theta$, but the elastic substrate resists the scale rotation. This resistance is modeled as a linear torsional spring \cite{c35,c36}, and the absorbed energy due to the rotation of each scale is $U_{scale}=\frac{1}{2} {K}_\theta (\theta-\theta_0 )^2$; thus the local reaction moment is $M_{scale}=K_\theta (\theta-\theta_0)$.
According to the scaling expression developed in \cite{c42}, $K_{\theta}=3.62{E_B}{t_s}^2b( {{L}/{t_s}})^{1.55}$, where $E_B$ is the elastic modulus of the substrate. To describe the relative motion between the zeroth and 1\textsuperscript{st} scales, we need the relative motion of the contact point $P_{10}$ along edge $D_1C_1$ and edge $C_0B_0$. The motion of point $P_{10}$ along edge $D_1C_1$ can be described as the change in the length of vector $\bi{P_{10}C_1}$, which is always in the direction of $\bi{{n}_{X1}}$, and its motion along edge $C_0B_0$ as the change in the length of vector $\bi{P_{10}C_0}$, which is always in the direction of $\bi{{n}_{Y0}}$. By using the superposition principle, the total differential displacement of point $P_{10}$ can be described in vectorial form as $\mathrm{d}\bi{R_{10}}=\big(\mathrm{d}|\bi{P_{10}C_1}|\big)\bi{{n}_{X1}}+\big(\mathrm{d}|\bi{P_{10}C_0}|\big)\bi{{n}_{Y0}}$, Figure \ref{Fig3}. The unit vector $\bi{{n}_{Y0}}$ can be expressed in the local coordinates established on the 1\textsuperscript{st} scale as follows: \vspace{0.7pc} \begin{equation} \label{Eq8} \bi{{n}_{Y0}}=(\bi{{n}_{Y0}}.\bi{{n}_{X1}})\bi{{n}_{X1}}+(\bi{{n}_{Y0}}.\bi{{n}_{Y1}})\bi{{n}_{Y1}}+(\bi{{n}_{Y0}}.\bi{{n}_{Z1}})\bi{{n}_{Z1}}. \end{equation} \vspace{0.7pc} By projecting $\bi{{n}_{Y0}}$ onto the 1\textsuperscript{st} scale plane, we can describe the relative motion of the zeroth scale with respect to the 1\textsuperscript{st} scale as a planar relative displacement, as follows: \vspace{0.7pc} \begin{equation} \label{Eq9} \fl \mathrm{d}\bi{r}=\Big(\mathrm{d}|\bi{P_{10}C_1}|+\mathrm{d}|\bi{P_{10}C_0}|(\bi{{n}_{Y0}}.\bi{{n}_{X1}})\Big)\bi{{n}_{X1}}+ \Big(\mathrm{d}|\bi{P_{10}C_0}|(\bi{{n}_{Y0}}.\bi{{n}_{Y1}})\Big)\bi{{n}_{Y1}}.
\end{equation} \vspace{0.7pc} The magnitude of (\ref{Eq9}) gives the value of the relative differential displacement: \vspace{0.7pc} \begin{equation} \label{Eq10} \fl \mathrm{d}r=|\mathrm{d}\bi{r}|=\sqrt{\Big(\mathrm{d}|\bi{P_{10}C_1}|+\mathrm{d}|\bi{P_{10}C_0}|(\bi{{n}_{Y0}}.\bi{{n}_{X1}})\Big)^2+ \Big(\mathrm{d}|\bi{P_{10}C_0}|(\bi{{n}_{Y0}}.\bi{{n}_{Y1}})\Big)^2}. \end{equation} \vspace{0.7pc} To find the angle between the friction force $\bi{{f}_{fr}}$ acting in the plane of the 1\textsuperscript{st} scale and the unit vector $\bi{{n}_{X1}}$, we use (\ref{Eq9}) and (\ref{Eq10}) as the relative displacement vector and its magnitude; the angle $\chi_{10}$ is then derived as: \vspace{0.7pc} \begin{equation} \label{Eq11} \chi_{10} = \arccos \Big( \frac{1}{\mathrm{d}r} \big( \mathrm{d}|\bi{P_{10}C_1}|+\mathrm{d}|\bi{P_{10}C_0}|(\bi{{n}_{Y0}}.\bi{{n}_{X1}}) \big) \Big). \end{equation} \vspace{0.7pc} Repeating similar steps for the relative motion between the 1\textsuperscript{st} and 2\textsuperscript{nd} scales leads to a similar relationship for the angle between the friction force $\bi{{f}_{fr}}$ acting in the plane of the 2\textsuperscript{nd} scale and the unit vector $\bi{{n}_{X2}}$. Finally, by evaluating these relationships, we find that $\chi_{10} = \chi_{12}$, which we denote by ${\chi}$. This finding also confirms the periodicity of the system. According to Coulomb's law of friction, the scales do not slide while $f_{fr} \leq \mu N$, where $\mu$ and $N$ are the coefficient of friction and the normal force, respectively; the sliding regime is marked by the equality. Note that we use the same value for the static and kinetic coefficients of friction in this study, although the static coefficient is typically slightly higher.
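The computation of the in-plane sliding direction in (\ref{Eq9})--(\ref{Eq11}) can be sketched numerically as follows. This is a minimal illustration; the function and argument names are ours, not part of the model:

```python
import math

def sliding_angle(d_pc1, d_pc0, ny0_dot_nx1, ny0_dot_ny1):
    """Relative sliding step of P10 and the angle chi_10, following
    Eqs. (9)-(11): d_pc1 = d|P10 C1| (along n_X1), d_pc0 = d|P10 C0|
    (along n_Y0); the two dot products project n_Y0 onto the plane
    of the 1st scale."""
    dx = d_pc1 + d_pc0 * ny0_dot_nx1   # component along n_X1, Eq. (9)
    dy = d_pc0 * ny0_dot_ny1           # component along n_Y1, Eq. (9)
    dr = math.hypot(dx, dy)            # displacement magnitude, Eq. (10)
    chi = math.acos(dx / dr)           # sliding direction angle, Eq. (11)
    return dr, chi
```

When $\bi{{n}_{Y0}}$ is aligned with $\bi{{n}_{X1}}$ the two contributions add along $\bi{{n}_{X1}}$ and $\chi_{10}=0$, as expected.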
Using these considerations, and with respect to the free body diagram shown in Figure \ref{Fig3}, we can derive the following expression for the non-dimensionalized friction force $\overline{f}_0$: \vspace{0.7pc} \begin{eqnarray} \label{Eq12} \fl \overline{f}_0 = \frac{f_{fr} l}{K_\theta} \leq \\ \fl \frac{(\theta - \theta_0)l}{ \Big( \hspace{-2pt} \bi{O_{1}P_{12}} \hspace{-3pt} \times \hspace{-3pt} \big( \hspace{-2pt} \cos \chi \bi{{n}_{X2}} \hspace{-2pt} + \hspace{-2pt} \sin \chi \bi{{n}_{Y2}} \hspace{-2pt} + \hspace{-2pt} \frac{\bi{{n}_{Z2}}}{\mu} \hspace{-2pt} \big) \hspace{-2pt} - \hspace{-2pt} \bi{O_{1}P_{10}} \hspace{-3pt} \times \hspace{-3pt} \big( \hspace{-1pt} \cos \chi \bi{{n}_{X1}} \hspace{-2pt} + \hspace{-2pt} \sin\chi \bi{{n}_{Y1}} \hspace{-2pt} + \hspace{-2pt} \frac{\bi{{n}_{Z1}}}{\mu} \hspace{-1pt} \big) \hspace{-1pt} \Big).\bi{{n}_{Y1}}}. \nonumber \end{eqnarray} \vspace{0.1pc} Due to the nature and the geometrical configuration of the system, the magnitude of the friction force derived in (\ref{Eq12}) may exhibit a singularity at a certain twist rate. This rise in friction force may lead to a ``frictional locking" mechanism, similar to that observed in the bending case \cite{c39}. When predicted, frictional locking should happen at a lower twist rate than kinematic locking, because of the limiting nature of the friction force. We denote the twist rate at which locking happens by $\Upphi_{lock}$; the corresponding twist angle and scale inclination angle are $\varphi_{lock}=\Upphi_{lock}d$ and $\theta_{lock}$, respectively. The friction force computed above leads to dissipative work in the system during sliding. The non-dissipative component of the deformation is absorbed as the elastic energy of the biomimetic beam. This elastic energy is composed of the elastic energy of the beam and that of the scales' rotation.
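Since locking corresponds to the denominator of (\ref{Eq12}) passing through zero, $\Upphi_{lock}$ can be located numerically by bisection once the geometry is known. A minimal sketch under the assumption that a callable evaluating the denominator of (\ref{Eq12}) is supplied by the kinematic model (the name `denom` is ours):

```python
def locking_twist_rate(denom, phi_lo, phi_hi, tol=1e-10):
    """Bisection for Phi_lock: the twist rate at which the denominator
    of Eq. (12) changes sign, i.e. the friction force diverges.
    Assumes exactly one sign change on [phi_lo, phi_hi]."""
    f_lo = denom(phi_lo)
    while phi_hi - phi_lo > tol:
        mid = 0.5 * (phi_lo + phi_hi)
        if denom(mid) * f_lo > 0:   # root still to the right of mid
            phi_lo = mid
        else:                       # root in the left half
            phi_hi = mid
    return 0.5 * (phi_lo + phi_hi)
```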
To calculate the elastic energy of the beam, we consider a linear elastic behavior for the beam with a warping coefficient $C_w$ for a non-circular beam \cite{c42,c47}. Furthermore, due to the finite embedding of the scales, there is an intrinsic stiffening of the structure even before the scales engage. This stiffening can be accurately captured by using an inclusion correction factor $C_f$ \cite{c42}. $C_f$ is a function of the volume fraction of the rigid inclusion in the elastic substrate, and is postulated as $C_f=1+1.33({\zeta \beta}/{\lambda})$, where $\zeta=L/d$ for an analogous system \cite{c42}. With these considerations, the modified torque--twist relationship of the beam is $T=C_f C_w G_B I\Upphi$, and the elastic energy of the beam is $U_B=\frac{1}{2} C_f C_w G_B I\Upphi^2$. As mentioned earlier, the energy absorbed by the scales can be obtained by modeling each scale's resistance as a linear torsional spring, so that the energy absorbed due to the rotation of each scale is $U_{scale}=\frac{1}{2} K_\theta (\theta-\theta_0)^2$. Similarly, the dissipation is given by the product of the sliding friction force and the distance travelled by its point of application, per scale. We then use the work--energy balance to arrive at: \vspace{0.7pc} \begin{equation} \label{Eq13} \fl \int_{0}^{\Upphi} T( {\Upphi }')\mathrm{d}{\Upphi}' = {\frac{1}{2}}{C_f}{C_w}{G_B}I{{\Upphi}^2} + \bigg( {\frac{1}{2}} {\frac{1}{d}}{K_\theta}{( {\theta - {\theta _0}})^2} + {\frac{1}{d}} \int_{\Upphi_e}^{\Upphi} \hspace{-3pt} f_{fr} \mathrm{d}r \bigg) H(\mathrm{\Upphi} - {{\Upphi}_e} ), \end{equation} \vspace{0.7pc} where $\Upphi$, $\Upphi_e=\varphi_e/d$, $G_B$, and $I$ are the current twist rate, the engagement twist rate, the shear modulus of elasticity, and the cross section's moment of inertia of the beam, respectively. $H(\Upphi-\Upphi_e)$ is the Heaviside step function used to track the engagement of the scales.
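The right-hand side of the work--energy balance (\ref{Eq13}) can be evaluated per unit length as below. This is an illustrative sketch in which the kinematic relation for $\theta$ (`theta_of_Phi`) and the accumulated frictional work integral (`W_fric`) are assumed to be supplied by the rest of the model; all names are ours:

```python
def total_energy(Phi, Phi_e, Cf, Cw, GB, I, K_theta, d,
                 theta_of_Phi, theta0, W_fric):
    """Right-hand side of Eq. (13) per unit length: beam elastic energy
    plus, after engagement (Heaviside factor), the scale rotation
    energy and the frictional dissipation."""
    U_beam = 0.5 * Cf * Cw * GB * I * Phi**2
    if Phi <= Phi_e:                 # H(Phi - Phi_e) = 0: beam term only
        return U_beam
    theta = theta_of_Phi(Phi)
    U_scale = 0.5 * K_theta * (theta - theta0)**2 / d
    return U_beam + U_scale + W_fric / d
```

Before engagement only the beam term survives, matching the Heaviside factor in (\ref{Eq13}).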
Also, $C_f$, $C_w$, and $K_\theta$ are the inclusion correction factor, the warping coefficient, and the rotational spring constant of the scale--substrate joint, respectively. In (\ref{Eq13}), $f_{fr}$ represents the friction force between scales, and $\mathrm{d}r$ is the relative differential displacement described in (\ref{Eq10}). The torque--twist rate relationship for the substrate's unit length can be obtained by taking the derivative of (\ref{Eq13}) with respect to the twist rate $\Upphi$, while considering $\varphi=\Upphi d$, as follows: \vspace{0.7pc} \begin{equation} \label{Eq14} T( \Upphi)={C_f}{C_w}{G_B}I{\Upphi} + \bigg({K_\theta}(\theta - \theta _0)\frac{\partial\theta}{\partial\varphi} + f_{fr} \frac{\mathrm{d}r}{\mathrm{d}\varphi} \bigg) H(\mathrm{\Upphi} - {{\Upphi}_e}). \end{equation} \vspace{0.7pc} We also compute the maximum possible dissipation of the system by computing the frictional work done up to locking ($W_{fr}$) and comparing it with the total work done ($W_{sys}=U_{el}+W_{fr}$, where $U_{el}$ is the elastic energy of the system). These energies can be computed per unit length of the beam as: \vspace{0.7pc} \numparts \begin{eqnarray} U_{el}={\frac{1}{2}}\bigg({C_f}{C_w}{G_B}I{(\Upphi_{lock})^2} +{\frac{1}{d}}{K_\theta}{( {\theta_{lock} - {\theta _0}})^2 \bigg)}, \label{Eq15a} \\ W_{fr}={\frac{1}{d}} \int_{\Upphi_e}^{\Upphi_{lock}} \hspace{-3pt} f_{fr} \mathrm{d}r. \label{Eq15b} \end{eqnarray} \endnumparts \vspace{-0.2pc} We define the relative energy dissipation ($RED$) factor as the ratio of the frictional work per unit length, $W_{fr}$, to the total work done on the system per unit length, $W_{sys}$: \vspace{0.7pc} \begin{equation} \label{Eq16} RED=\frac{W_{fr}}{W_{sys}}. 
\end{equation} \vspace{0.7pc} Generally, $RED$ depends on the coefficient of friction $\mu$, the dimensionless geometric parameters of the system $\eta$, $\beta$, and $\lambda$, the scale spacing $d$, the scales' initial orientation angles $\alpha$ and $\theta_0$, the substrate elastic properties $G_B$, $I$, and $C_w$, and the scale--substrate joint parameters $K_{\theta}$ and $C_f$; the most important parameters are $\mu$, $\eta$, and $\alpha$. \section{Finite element simulations} \label{Finite element simulations} We developed an FE model to verify the analytical model of the biomimetic scale-covered system under twisting deformation. The FE simulations are carried out using the commercially available software ABAQUS/CAE 2017 (Dassault Syst\`emes). We considered 3D deformable solids for the scales and the substrate; however, a rigid body constraint was imposed on the scales. A sufficient length is considered for the rectangular prismatic substrate to satisfy periodicity. Then an assembly of the substrate with a row of 25 scales embedded in its top surface is created. The scales are oriented at angles $\theta_0$ and $\alpha$ as defined in the analytical model. Linear elastic material properties, including $E_B$ and $\nu$, are applied to the substrate part, which leads to a shear modulus of $G_B={\frac{E_B}{2(1+\nu)}}$. The simulation is set up as a static step with the nonlinear geometry option. The left side of the beam is fixed and the twisting load is applied to the other side of the beam. A frictional contact criterion with coefficient of friction $\mu$ is applied to the scale surfaces for the twisting simulation. The top layer of the substrate is meshed with tetrahedral quadratic elements (C3D10) due to the geometrical complexity around the scale inclusions. Quadratic hexahedral elements (C3D20) are used for the other regions of the model. A mesh convergence study is carried out to find a sufficient mesh density for the different regions of the model.
A total of almost 70,000 elements are employed in the FE model. \section{Results and discussion} \label{Results and discussion} To study the behavior of the frictional force in this system, we use (\ref{Eq12}) to plot the non-dimensionalized friction force $\overline{f}_0$ for different $\mu$ values at various non-dimensionalized twist rates $\Upphi/\Upphi_e$. This is shown in Figure \ref{Fig4} for a system with $\eta=3$, $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, and $\lambda=0.45$. From this figure, it is clear that increasing twist leads to a rapid increase in the friction force for any coefficient of friction. This load has a singular characteristic, shown with dashed lines for each $\mu$ in Figure \ref{Fig4}, which indicates a friction-based locking mechanism. This is in addition to the purely kinematic locking mechanism reported earlier in the literature for frictionless counterparts \cite{c42}. We call the twist rate at the locking point the locking twist rate $\Upphi_{lock}$. \begin{figure}[htbp] \centering \includegraphics[width=3in]{Fig4.pdf} \caption{Non-dimensionalized friction force vs non-dimensionalized twist rate ($\Upphi_e$ is the engagement twist rate) for various coefficients of friction with the given values of $\eta=3$, $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, and $\lambda=0.45$. The friction forces approach a singularity at a certain twist rate, the frictional locking configuration for each $\mu$.} \label{Fig4} \end{figure} Next, we investigate the scale rotation in response to the applied twist. This is achieved by plotting the scale rotation angle $\theta$ versus the twist angle $\varphi$. Using the nonlinear relationship (\ref{Eq6}), two plots spanned by $(\theta-\theta_0)/\pi$ and $\varphi/\pi$ are established, as shown in Figure \ref{Fig5} for different $\eta$ and $\alpha$, respectively.
In Figure \ref{Fig5a}, the given geometrical parameters are as follows: $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, and $\lambda=0.45$. For $\mu = 0$, the frictionless case, we obtain the purely kinematic locking points for each $\eta$ by using $\partial \varphi / \partial \theta=0$, which yields the rigidity envelope \cite{c42}. We juxtapose this with plots for rough interfaces ($\mu>0$), where the locking limits are found via the singularity point of the friction force described in (\ref{Eq12}). Clearly, friction advances the locking configuration. However, the locking line does not merely translate downwards as observed in the bending case \cite{c39}. This is an important distinction from the pure bending of rough biomimetic beams reported earlier \cite{c39}. As the coefficient of friction increases, the frictional locking envelope can intersect the horizontal axis. This is the instantaneous locking or ``static friction" lock case. In Figure \ref{Fig5b}, the effect of the scale orientation angle $\alpha$ is investigated. This angle serves as an important geometric tailorability parameter of the system \cite{c42}. In this plot, $\eta=3$, $\theta_0=10^\circ$, $\beta =1.25$, and $\lambda=0.45$. For higher angles $\alpha$, a quicker engagement occurs, with steeper nonlinear gains and earlier locking. Interestingly, by decreasing $\alpha$ sufficiently, the system would not reach kinematic locking. However, frictional locking is universal and will thus determine the locking behavior. In this aspect, this system again differs from the bending case, since friction can cause locking even when no kinematic locking is possible. This figure also shows the possibility of static friction locking for increasing $\mu$. Note, however, that as $\alpha$ increases, such a static friction lock becomes more difficult to achieve, requiring much higher coefficients of friction.
Overall, the frictional locking envelope is a highly nonlinear function admitting no closed-form solution, unlike the pure bending case \cite{c39}. \begin{figure}[htbp]% \centering \subfigure[][]{% \label{Fig5a}% \includegraphics[width=3in]{Fig5a.pdf}}% \hspace{8pt}% \subfigure[][]{% \label{Fig5b}% \includegraphics[width=3in]{Fig5b.pdf}} \\ \caption[]{Response of the biomimetic scale-covered beam under twisting, differentiated into three distinct regimes: linear (before scale engagement), kinematically determined nonlinear (during scale engagement), and a frictional locking boundary for various coefficients of friction: \subref{Fig5a} plot of the system for different $\eta$ with the given values of $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, and $\lambda=0.45$; and \subref{Fig5b} plot of the system for different $\alpha$ with the given values of $\eta=3$, $\theta_0=10^\circ$, $\beta=1.25$, and $\lambda=0.45$.}% \label{Fig5}% \end{figure} In order to understand the effect of the friction force on the mechanics of the system, we use (\ref{Eq14}) to plot the non-dimensionalized post-engagement torque--twist rate curves for various coefficients of friction, Figure \ref{Fig6a}. The dimensionless geometrical parameters for this case are $\eta=3$, $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, $\lambda=0.45$, $\zeta=0.35$, and $L/t_s=35$. To verify the analytical model, we have developed an FE model as described in section \ref{Finite element simulations}. We then ran FE simulations for different $\eta$ and $\mu$ values and extracted the torsional response of the structure, $T(\Upphi)/G_B I$, versus twist rate from the beginning of the simulation, as shown in Figure \ref{Fig6b}. The following dimensionless parameters are used for this model: $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=0.6$, $\lambda=0.32$, $\zeta=0.18$, and $L/t_s=45$.
Also, the following elastic properties are considered for the substrate: $E_B=25$~GPa, $\nu=0.25$, with a cross section of $32 \times 16$~mm. In this figure, the dotted lines represent the FE results. The plot highlights the remarkable agreement between the analytical and FE results for two different overlap ratios across different coefficients of friction. The small deviation between the results could be caused by edge effects and numerical issues. As shown in Figure \ref{Fig6a}, a higher coefficient of friction significantly increases the torsional stiffness of the structure. Therefore, the friction force has a dual contribution to the mechanical response of the biomimetic scale-covered system: it advances locking, thereby limiting the range of motion, while also increasing the torsional stiffness of the system. \begin{figure}[htbp]% \centering \subfigure[][]{% \label{Fig6a}% \includegraphics[width=3in]{Fig6a.pdf}}% \hspace{8pt}% \subfigure[][]{% \label{Fig6b}% \includegraphics[width=3in]{Fig6b.pdf}} \\ \caption[]{Torque--twist rate curves derived from (\ref{Eq14}) for different cases: \subref{Fig6a} non-dimensionalized post-engagement torque--twist rate curves for various coefficients of friction with the given values of $\eta=3$, $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=1.25$, $\lambda=0.45$, $\zeta=0.35$, and $L/t_s=35$, showing the perceptible effect of friction on the effective torsional stiffness of the biomimetic scale-covered structure; and \subref{Fig6b} verification of the analytical model using numerical results through the plot of ${T(\Upphi )}/{G_B}I$ versus twist rate ($\Upphi$) for various coefficients of friction and two different $\eta$ with the given values of $\theta_0=10^\circ$, $\alpha=45^\circ$, $\beta=0.6$, $\lambda=0.32$, $\zeta=0.18$, and $L/t_s=45$. 
Black dotted lines represent FE results.}% \label{Fig6}% \end{figure} In order to quantify the dual contribution of friction, we investigate the frictional work during twisting by using the relative energy dissipation ($RED$) factor described in (\ref{Eq16}). Fixing all parameters involved in $RED$ except $\mu$, $\eta$, and $\alpha$ for the current simulation leads to the contour plots shown in Figure \ref{Fig7}. In these contour plots, we have considered $\theta_0=10^\circ$, $\beta=1.25$, $\lambda=0.45$, $\zeta=0.35$, $L/t_s=35$, and the substrate's properties as follows: $E_B=25$~GPa, $\nu=0.25$, and a cross section of $32 \times 16$~mm. In Figure \ref{Fig7a}, we fix $\alpha=45^\circ$ to obtain an energy dissipation contour plot spanned by $\eta$ and $\mu$. This plot indicates that $RED$ increases for higher $\mu$, and also increases very slightly with $\eta$. This contour plot shows that $\eta$ does not have as strong an effect as the coefficient of friction on the frictional energy dissipation of the system. However, the frictional work quickly saturates at higher coefficients of friction for all $\eta$. To obtain Figure \ref{Fig7b}, we fix $\eta=3$ and span the $RED$ contour plot by $\alpha$ and $\mu$. This plot shows that, although the locking twist rate $\Upphi_{lock}$ increases with decreasing $\alpha$ according to Figure \ref{Fig5b}, the effect of friction is highest in the range $40^\circ<\alpha<60^\circ$, and the $RED$ passes through its maximum with increasing $\mu$ around this range of $\alpha$. Also, at lower $\alpha$, unilaterally increasing $\mu$ does not necessarily increase the frictional dissipation. The white region in this contour plot corresponds to instantaneous post-engagement frictional locking, which happens at lower $\alpha$ and higher $\mu$. In this condition, the system locks statically at the engagement point and the friction force does no work on the system.
\begin{figure}[htbp]% \centering \subfigure[][]{% \label{Fig7a}% \includegraphics[width=3in]{Fig7a.pdf}}% \hspace{8pt}% \subfigure[][]{% \label{Fig7b}% \includegraphics[width=3in]{Fig7b.pdf}} \\ \caption[]{Non-dimensional relative energy dissipation ($RED$) factor contour plot with given values of $\theta_0=10^\circ$, $\beta=1.25$, $\lambda=0.45$, $\zeta=0.35$, $L/t_s=35$, $E_B=25$~GPa, $\nu=0.25$, and a substrate cross section of $32 \times 16$~mm for two different cases: \subref{Fig7a} spanned by $\mu$ and $\eta$ with fixed $\alpha=45^\circ$; and \subref{Fig7b} spanned by $\mu$ and $\alpha$ with fixed $\eta=3$.}% \label{Fig7}% \end{figure} \section{Conclusion} \label{Conclusion} We investigate, for the first time, the effect of Coulomb friction on the twisting response of a biomimetic beam using a combination of analytical and FE models. We established the extent and limits of the universality of frictional behavior across bending and twisting regimes. The analytical model developed here helps obviate the need for full-scale FE simulations, which are complicated for large numbers of scales and for large deflections. We find that several aspects of the mechanical behavior show similarity to the rough bending case investigated earlier. At the same time, critical differences in response were observed, most notably the effect of the additional dihedral angle. This work shows the dual contribution of frictional forces in the biomimetic scale-covered system, which includes advancing the locking envelope while at the same time adding to the torsional stiffness. Interestingly, if the coefficient of friction is large enough for a given configuration, it can lead to instantaneous post-engagement frictional locking, known as static friction locking.
This investigation demonstrates that engineering scale surfaces to produce a wide range of coefficients of friction can play an important role in tailoring the deformation response of biomimetic scale systems for a variety of applications. \section*{References}
\section{Introduction} In the last few decades, the increasing amount of information to which users are exposed on a daily basis has driven the rise of recommendation systems. These models are able to identify and exploit patterns of user interest, with the goal of providing personalized recommendations and improving the final user experience. As a result, recommendation systems are now integrated in a large number of commercial applications, with some prominent examples being Amazon, Netflix, Spotify, Uber, and Airbnb. One of the most popular approaches used in the development of recommendation systems is \textit{Collaborative Filtering} (CF). In CF, past user behaviours are used to derive relationships between users and inter-dependencies among items, in order to identify new user-item associations \cite{Koren09matrixfactorization}. The main advantage of CF lies in its intrinsic domain-free nature and in its ability to model user-item interactions without requiring the creation of explicit profiles. In this work, we explore a wide spectrum of state-of-the-art approaches to Collaborative Filtering. In particular, we focus on both \textit{memory-based} approaches, where inference is based on calculations on past users' preference records (e.g. similarity models), and \textit{model-based} approaches, where the same past users' preferences are used to train a model which is then used at inference time to predict new recommendations (e.g. factorization and neural models) \cite{taxonomy}. Besides exploring existing models, we propose a novel stochastic extension of a similarity-based algorithm, SCSR. Finally, we empirically verify that blending factorization-based and similarity-based approaches can yield more accurate results, decreasing the validation RMSE by 9.4\% with respect to the difference between the baseline and the best-performing single model. The performance of these models is compared on the 2022 CIL CF dataset.
This dataset consists of a sparse matrix (only $11.77\%$ of the entries are observed) that defines the interactions (integer ratings between 1 and 5) between $10000$ users and $1000$ items. In the following sections, we theoretically introduce the investigated models (§\ref{sect:methods}) and explain our experiments in detail (§\ref{sect:exp}). Finally, we conclude with some final remarks on our work (§\ref{sect:conclusion}). \section{Methods} \label{sect:methods} \subsection{Matrix Factorization} As demonstrated by the Netflix Prize competition \cite{bennett2007netflix}, matrix factorization techniques are very effective in the context of CF. The main idea behind these algorithms consists of mapping both users and items to a joint (lower-dimensional) latent factor space, such that the original interactions between users and items can be reconstructed as inner products in that latent space. A direct implementation of this concept can be obtained by applying Singular Value Decomposition (SVD) \cite{clsvd}, a well-established technique for identifying latent semantic factors in information retrieval. SVD decomposes the original matrix into the product \begin{equation} A = U \Sigma V^T \end{equation} where $U\in \mathbb{R}^{m\,\times \,m}$ and $V\in \mathbb{R}^{n\,\times \, n}$ are orthogonal matrices and $\Sigma \in \mathbb{R}^{m\,\times \, n}$ is the diagonal matrix of singular values. In our context, the matrices $U$ and $V$ can be interpreted as the latent factors associated with users and items, respectively, while the matrix $\Sigma$ expresses the weight associated with each latent feature. Since the main goal is to generalize well to unobserved entries at inference time, and not to obtain a null reconstruction error, the singular value diagonal matrix $\Sigma$ is approximated using only the largest $k$ singular values. Hence, the latent features for users and items are embedded in a lower $k$-dimensional space, with $k \ll n,m$.
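The rank-$k$ truncation can be written in a few lines of NumPy. A minimal sketch (in practice the missing entries of $A$ must first be imputed, e.g. with per-item means, before the decomposition; the function name is ours):

```python
import numpy as np

def svd_rank_k(A, k):
    """Best rank-k approximation of a (dense, imputed) matrix A,
    keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```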
However, SVD requires large storage space and has a high computational complexity. In order to overcome these limitations, a simple and yet remarkably powerful method, FunkSVD \cite{funk}, was proposed as part of the Netflix Prize competition. The algorithm initializes the latent factor matrices $U$ and $V$ randomly, and then optimizes them using Stochastic Gradient Descent (SGD). The objective function optimized with SGD is: \begin{equation} \argmin_{U,V} \left\lVert \, A - \widetilde{A} \,\right\rVert_F + \alpha \left\lVert U \right\rVert + \beta \left\lVert V \right\rVert \end{equation} where $\left\lVert \, \cdot \, \right\rVert_F$ is the Frobenius norm, $A$ is the original matrix, and $\widetilde{A} = UV$ is the reconstructed matrix. The last two terms of the objective are regularization terms. Another well-known matrix factorization technique based on iterative optimization is Alternating Least Squares (ALS). The objective function optimized by this method is \begin{equation} \label{eq:als} \argmin_{U,V} \left\lVert \, A - \widetilde{A} \,\right\rVert_F^2 + \lambda \left( ||U||_F^2 + ||V||_F^2 \right) \end{equation} Since both $U$ and $V$ are unknown, (\ref{eq:als}) is not convex. However, if we fix one of the unknowns, the optimization problem becomes quadratic and can be solved in closed form. Thus, ALS iteratively improves $U$ and $V$ by solving a least-squares problem, with a loss that always decreases and, eventually, converges. The last investigated matrix factorization approach is Factorization Machines (FM) \cite{Rendle2010FactorizationM}, which allow for predictions over a sparse dataset. FMs have the same flexibility as Support Vector Machines (SVM) in that they accept any feature vector, yet work well over sparse data by factorizing the parameters modeling the relevant interactions.
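The SGD loop used by FunkSVD can be sketched as follows; this is a minimal illustration over the observed entries only, with hyper-parameter values and names that are ours, not those of our experiments:

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, k=8, lr=0.02, reg=0.02,
             epochs=100, seed=0):
    """FunkSVD-style SGD: each update moves U[u] and V[i] along the
    gradient of the squared error plus L2 regularization.
    `ratings` is a list of (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]              # prediction error
            du = lr * (err * V[i] - reg * U[u])
            dv = lr * (err * U[u] - reg * V[i])
            U[u] += du
            V[i] += dv
    return U, V
```

Only observed entries contribute to the gradient, which is what makes this approach practical for sparse rating matrices.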
The model equation of a FM of degree 2 for $n$ features is as follows: \footnotesize \begin{equation} \hat{y}(\boldsymbol{x}) := w_0 + \sum_{i=1}^n{w_i x_i} + \sum_{i=1}^n\sum_{j=i+1}^n{\langle \boldsymbol{v_i}, \boldsymbol{v_j} \rangle x_i x_j} \end{equation} \normalsize \noindent with learned parameters $w_0 \in \mathbb{R}, w \in \mathbb{R}^n$, and $V \in \mathbb{R}^{n \times k}$. Here $w_0$ is the global bias, $w$ models the weights of the individual features, $V$ is a factorization of the weights of the interactions between pairs of features, and $v_i \in \mathbb{R}^k$ is the $i$-th row of $V$. Adding Bayesian inference, as in Bayesian Factorization Machines (BFM) \cite{Freudenthaler2011BayesianFM}, is a considerable improvement: it avoids the manual tuning of multiple hyper-parameters, and performs better than SGD and ALS optimization. BFM uses Gibbs sampling, a Markov Chain Monte Carlo (MCMC) algorithm for posterior inference, with a complexity of $\mathcal{O}(k N_z)$ for each sampling step ($N_z$ being the number of non-zero elements in the feature matrix). One of the main advantages of FM, compared to the other matrix factorization models, is that we can easily choose how to engineer the feature vectors. The standard way is to simply concatenate a one-hot encoding of the user with a one-hot encoding of the item: $(u, i)$, to which a list of ratings is associated. These user-movie-rating combinations are also referred to as \textit{explicit feedback} \cite{Oard1998ImplicitFF}, i.e. information directly provided by the users themselves. However, \textit{implicit feedback}, i.e. information not directly provided by the users but collected through their usage of the service \cite{Oard1998ImplicitFF}, is also useful to the model. For example, the date of the rating (in days since the first rating of the dataset or since the movie release) has been used successfully for this task \cite{Koren2009CollaborativeFW}.
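The degree-2 model equation can be evaluated in $\mathcal{O}(kn)$ rather than $\mathcal{O}(kn^2)$ using the identity $\sum_{i<j}\langle v_i,v_j\rangle x_i x_j = \frac{1}{2}\sum_{f}\big[(\sum_i v_{if}x_i)^2 - \sum_i v_{if}^2 x_i^2\big]$ from \cite{Rendle2010FactorizationM}. A minimal sketch (function name is ours):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Degree-2 FM prediction: global bias + linear term + factorized
    pairwise interactions, via the O(kn) reformulation."""
    linear = w0 + w @ x
    Vx = V.T @ x                                   # (k,) vector
    pairwise = 0.5 * (Vx @ Vx - ((V**2).T @ (x**2)).sum())
    return linear + pairwise
```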
Even though our data does not include any implicit feedback, we can recreate some for the users by highlighting all the movies that each user has rated, resulting in the feature vector $(u, i, iu)$, as in the SVD++ model \cite{Koren2008FactorizationMT}, with $iu$ standing for implicit user (feedback). Similarly, we can also go in the other direction by showing which users have rated each movie, as in the Bayesian timeSVD++ flipped model \cite{onthediffic}, only without the time information: $(u, i, iu, ii)$, with $ii$ standing for implicit item (feedback). \subsection{Similarity Based} Similarity-based methods are designed to find neighborhoods of similar users/items using a similarity function on common ratings. Within these neighborhoods, it is possible to compute weighted averages of the observed ratings (with weights proportional to the degree of similarity), and use them to infer missing entries. In our work, we investigate three different similarity measures: cosine similarity, PCC, and SiGra \cite{wu2017sigra}.
The similarity between two users or items $u$ and $v$ is defined, respectively, as: \footnotesize \begin{equation*} \textrm{\normalsize cosine(u, v)} = \frac{\sum_{k \in I_u \cap I_v} r_{uk} \cdot r_{vk}}{\sqrt{\sum_{k \in I_u \cap I_v} r_{uk}^{2}} \cdot \sqrt{\sum_{k \in I_u \cap I_v} r_{vk}^{2}}} \end{equation*} \begin{equation*} \textrm{\normalsize PCC(u, v)} = \frac{\sum_{k \in I_u \cap I_v} (r_{uk}-\overline{\mu_u}) \cdot (r_{vk}-\overline{\mu_v})}{\sqrt{\sum_{k \in I_u \cap I_v} (r_{uk}-\overline{\mu_u})^{2}} \cdot \sqrt{\sum_{k \in I_u \cap I_v} (r_{vk}-\overline{\mu_v})^{2}}} \end{equation*} \begin{equation*} \textrm{\normalsize SiGra(u, v)} = \left(1+\exp{\left(-\frac{|I_u|+|I_v|}{2\cdot |I_u \cap I_v|}\right)}\right)^{-1} \cdot \frac{\sum_{k \in I_u \cap I_v}\frac{\min(r_{uk}, r_{vk})}{\max(r_{uk}, r_{vk})}}{|I_u \cap I_v|} \end{equation*} \vspace*{0.2cm} \normalsize \noindent where $k \in I_u \cap I_v$ runs over the indexes of all commonly rated items (respectively, users) and $r_{uk}$ is the rating given by user $u$ to item $k$. These three functions can be applied either between all items or between all users, and we experiment with both. It can be observed that PCC is closely related to cosine similarity, except that it first centers the ratings by removing the average, and hence tries to overcome the bias which occurs when some users give overall better ratings than others. Both PCC and cosine similarity tend to overestimate the similarity between users or items which have only a few commonly rated items. To overcome that, weighting functions can be added on top of the previous functions to penalize more strongly the similarity of users or items having only a few common ratings.
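The cosine and PCC similarities over the common support can be sketched as follows (a minimal illustration with missing ratings encoded as `np.nan`; following the formulas above, PCC centers each vector by that user's mean rating):

```python
import numpy as np

def cosine_sim(ru, rv):
    """Cosine similarity restricted to commonly rated items."""
    mask = ~np.isnan(ru) & ~np.isnan(rv)
    a, b = ru[mask], rv[mask]
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pcc_sim(ru, rv):
    """PCC: center each user's ratings by their mean before the
    cosine-like normalization."""
    mask = ~np.isnan(ru) & ~np.isnan(rv)
    a = ru[mask] - np.nanmean(ru)
    b = rv[mask] - np.nanmean(rv)
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```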
We experiment with normal, significance, and sigmoid weighting, defined as follows: \footnotesize \begin{equation*} {\text{\normalsize w\textsubscript{normal}(u, v)}} = \frac{2 \cdot |I_u \cap I_v|}{|I_u| + |I_v|} \qquad {\text{\normalsize w\textsubscript{significance}(u, v)}} = \frac{\min(|I_u \cap I_v|, \beta)}{\beta} \end{equation*} \begin{equation*} {\text{\normalsize w\textsubscript{sigmoid}(u, v)}} = \left(1+\exp{-\frac{|I_u \cap I_v|}{2}}\right)^{-1} \end{equation*} \vspace*{0.1cm} \normalsize \noindent SiGra already has such a weighting incorporated into its formula, hence we do not add any of them to it. Note that the significance weighting only penalizes the similarity of items or users which have fewer common ratings than $\beta$, which is a hyper-parameter. Once the similarity matrix has been computed, different methods exist to find the neighborhoods and compute the predictions for the missing values. In our work, we use a weighted average of the $k$ nearest neighbors. The prediction is therefore defined as follows: \small \begin{equation*} \hat{r}_{ui} = \overline{\mu_u} + \frac{\sum_{v\in N_k^{u,i}} \textrm{sim}(u, v) \cdot (r_{vi} - \overline{\mu_v})}{\sum_{v\in N_k^{u,i}} |\textrm{sim}(u, v)|} \end{equation*} \normalsize where $N_k^{u,i}$ represents the set of neighbors, i.e. the set of the $k$ most similar users having a rating for item $i$. We also experiment with a combination of both user and item similarity for the final prediction, taking into account the confidence of the rating. Finally, we re-implement the Comprehensive Similarity Reinforcement (CSR) algorithm \cite{hu2017mitigating}, which alternately optimizes the user similarity matrix using the item similarity matrix and vice versa. Note, however, that this method is extremely slow, as it runs in $\mathcal{O}(|I|^2 \cdot |U|^2 \cdot \textrm{max\_iter})$, and is hence not applicable to our problem.
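The weighted $k$-NN prediction above can be sketched as follows (user-based variant; `R` holds ratings with `np.nan` for missing entries and `sim` is a precomputed user-user similarity matrix; names are ours):

```python
import numpy as np

def predict_rating(u, i, R, sim, k=10):
    """Weighted average over the k most similar users that rated
    item i, on mean-centered ratings (the prediction formula above)."""
    mu_u = np.nanmean(R[u])
    cands = [v for v in range(R.shape[0])
             if v != u and not np.isnan(R[v, i])]
    cands.sort(key=lambda v: -sim[u, v])   # most similar first
    nbrs = cands[:k]
    if not nbrs:
        return mu_u                        # no neighbor rated item i
    num = sum(sim[u, v] * (R[v, i] - np.nanmean(R[v])) for v in nbrs)
    den = sum(abs(sim[u, v]) for v in nbrs)
    return mu_u + num / den if den > 0 else mu_u
```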
For this reason, we propose a novel implementation of the algorithm, Stochastic CSR (Algorithm \ref{alg:scsr}), which uses only a random sample of items and users at each iteration to update the other matrix. This implementation runs in $\mathcal{O}((|I|^2 + |U|^2) \cdot \textrm{sample\_size}^2 \cdot \textrm{max\_iter})$, which can be significantly smaller if sample\_size is small enough. \subsection{Neural Based} Traditional matrix factorization techniques are often interpreted as a form of dimensionality reduction, so some authors have investigated the use of autoencoders to tackle CF. In this case, the input $x$ is a sparse vector and the AE's output $y = f(x)$ is its dense correspondence, containing the rating predictions for all items in the corpus. The first notable AE CF models were I-AutoRec (item-based) and U-AutoRec (user-based) \cite{autorec}. DeepRec \cite{deeprec} is a fine-tuned version of the AutoRec model. It employs deeper networks, embraces SELU activations \cite{selu}, and applies heavy regularization to avoid overfitting. It also proposes a variation of the training algorithm to address the fixed-point constraint intrinsic to the AE CF objective \cite{deeprec}. DeepRec augments every optimization step with an iterative dense re-feeding step, i.e. it performs a classic weight update on the sparse-to-dense loss $L(x, f(x))$ and then treats $y = f(x)$ as a new artificial input, performing another weight update on the dense-to-dense loss $L(y,f(y))$. DeepRec requires no prior training of the layers. The model is optimized using a masked MSE loss. During inference, the model prediction is a single forward pass $\hat{y} = f(x)$. In the alternative Graph Convolutional Networks (GCN) \cite{gcn} paradigm, LightGCN (LGC) \cite{lightgcn} explores a simplified and optimized version of NGCF \cite{ngcf}, the former state-of-the-art GCN model for CF.
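The masked loss and dense re-feeding step used by DeepRec can be sketched as below (a simplified numpy sketch: `f` stands in for the trained autoencoder, zeros mark missing ratings, and the backprop weight updates that each loss would drive are omitted):

```python
import numpy as np

def masked_mse(x, y):
    """MSE computed only over the observed entries (non-zeros in x)."""
    mask = (x != 0).astype(float)
    return float(np.sum(mask * (y - x) ** 2) / max(np.sum(mask), 1.0))

def dense_refeeding_losses(f, x):
    """One DeepRec-style optimization step: first the sparse-to-dense loss
    L(x, f(x)); then the dense output y = f(x) is re-fed as an artificial
    input for the dense-to-dense loss L(y, f(y))."""
    y = f(x)
    sparse_loss = masked_mse(x, y)    # would drive the first weight update
    dense_loss = masked_mse(y, f(y))  # would drive the re-feeding update
    return sparse_loss, dense_loss
```

In the dense pass the mask is (almost) everywhere non-zero, so the second update constrains the model toward the fixed point $f(y) = y$.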
LGC gets rid of the feature transformation matrices and the non-linear activation function, two legacy layer-propagation operations that NGCF inherited from GCN and which were argued not to be helpful for CF purposes \cite{lightgcn}. Hence, the LGC propagation rule is simply defined as: \footnotesize \begin{align*} e_u^{(k+1)} = \sum_{i \in \mathcal{N}_u} \frac{1}{\sqrt{|\mathcal{N}_u||\mathcal{N}_i|}} e_i^{(k)} \qquad\quad e_i^{(k+1)} = \sum_{u \in \mathcal{N}_i} \frac{1}{\sqrt{|\mathcal{N}_i||\mathcal{N}_u|}} e_u^{(k)} \end{align*} \normalsize where $e_u^{(k)}$ and $e_i^{(k)}$ respectively denote the refined embeddings of user $u$ and item $i$ after $k$ propagation layers, and $\mathcal{N}_u$ denotes the set of items user $u$ has interacted with (analogously for $\mathcal{N}_i$). The embeddings are then combined with a weighted linear combination, assigning an importance to each of the $k$-th layer embeddings. The only other trainable parameters of LGC are the embeddings of the $0$-th layer. Originally, the model is optimized using the Bayesian Personalized Ranking (BPR) loss, but we adapt it to use a standard RMSE loss. During inference, the model prediction is the inner product of the user and item final representations $\hat{y}_{ui} = e_u^T e_i$. \subsection{Blending} Ensemble methods are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. These methods have proven effective in many fields, and CF is no exception. For example, the top two solutions of the Netflix Prize challenge both exploited blending \cite{blending1, blending2}. We propose an evaluation of different blending techniques, based on a wide range of regression approaches (linear, neural, and boosting). The underlying idea is to learn a transformation $\varphi:\mathbb{R}^{n\times m}\times ...
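The LGC propagation rule above can be sketched with dense numpy operations (a toy sketch, assuming a binary user-item interaction matrix `A`; real implementations use sparse graph operations, and we combine layers with uniform weights for simplicity):

```python
import numpy as np

def lgc_propagate(E_u, E_i, A):
    """One LightGCN layer: neighborhood aggregation with symmetric
    1/sqrt(|N_u||N_i|) normalization, no transforms, no non-linearity."""
    deg_u = A.sum(axis=1)  # |N_u|
    deg_i = A.sum(axis=0)  # |N_i|
    norm = 1.0 / np.sqrt(np.outer(np.maximum(deg_u, 1), np.maximum(deg_i, 1)))
    W = A * norm
    return W @ E_i, W.T @ E_u  # e_u^{(k+1)}, e_i^{(k+1)}

def lgc_embeddings(E_u0, E_i0, A, n_layers=3):
    """Final representations: (here uniform) linear combination of the
    0-th layer embeddings and all propagated layers."""
    eu, ei = E_u0, E_i0
    sum_u, sum_i = E_u0.copy(), E_i0.copy()
    for _ in range(n_layers):
        eu, ei = lgc_propagate(eu, ei, A)
        sum_u += eu
        sum_i += ei
    return sum_u / (n_layers + 1), sum_i / (n_layers + 1)
```

A prediction is then the inner product of the final user and item rows, $\hat{y}_{ui} = e_u^T e_i$.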
\times \mathbb{R}^{n\times m} \rightarrow \mathbb{R}^{n\times m}$ such that the final prediction $\hat{A} = \varphi(\hat{A}^1,\, \dots \,, \hat{A}^k)$ is obtained as a function of the predictions of $k$ different models. \section{Experiments} \label{sect:exp} \subsection{Matrix factorization} The first matrix factorization model to be implemented was ALS. Matrices $U$ and $V$ were initialized with the SVD decomposition of the normalized initial matrix (column-level) imputed with 0s. The results for this model are included in Table \ref{table:results}. A non-greedy tuning was performed, but none of the alternative parameters (rank, epoch, initialization, and imputation) improved the baseline score. The SVD-based factorization model was also implemented from scratch. The normalization and imputation steps were the same as for ALS. The optimal rank ($k=5$) was obtained using validation. A wide range of different parameters was tested, but with no improvement on the validation score. This model did not perform well, as shown by the validation score in Table \ref{table:results}. FunkSVD, on the other hand, was not developed from scratch; instead, we adapted an existing Python implementation \cite{bolmier} to our framework. FunkSVD achieved better results than SVD, but did not improve on the baseline score. We used the myFM Python library \cite{myFM} to implement the BFM. Firstly, we tried the most basic version of the BFM: only the one-hot encoded users and movies relevant to each rating are provided in the sparse feature matrix $(u, i)$. The only hyper-parameters to tune are the embedding dimension, i.e. the latent feature dimension $k$, and the number of iterations. Increasing the former decreases the validation error, reaching a plateau at around 50. Increasing the latter generally improves the model, but the validation RMSE stops decreasing beyond 500 iterations. Therefore, for the rest of our experimentation we set them to 50 and 500, respectively.
\begin{figure}[!b] \centering \includegraphics[width=.9\columnwidth]{Images/rank_analysis.pdf} \caption{Rank analysis for matrix factorization techniques.} \label{fig:rankanalysis} \end{figure} We decided to experiment with adding implicit features \cite{Oard1998ImplicitFF}, as shown in the myFM \cite{myFM} documentation. Each row has the format $(u, i, iu)$, where $iu$, a sparse vector whose size is the total number of movies, represents all the movies rated by that row's user. This is done by setting the values at the relevant indices of $iu$ to a normalized value $1/\sqrt{\left|N_u\right|}$, with $N_u$ the number of movies rated by user $u$, or to $1$ if $N_u = 0$ \cite{Koren2008FactorizationMT}. We also added implicit movie information in a similar fashion, listing for every movie all the users that have watched it, the complete feature matrix now being of the form $(u, i, iu, ii)$ \cite{onthediffic}. As can be observed in Table \ref{table:results}, for regression BFM the biggest improvement came from the addition of $iu$, bringing our validation score from 0.98034 to 0.97492. Adding $ii$ further improves our RMSE to 0.97268. Furthermore, myFM allows using ordered probit, a type of ordinal classification. This means that the model also learns four cutpoints, separating the real domain into five sections, one for each rating. We are then able to obtain the probability of a rating falling in each of these categories, and we obtain the final rating prediction by computing the expected value over the categories. Conceptually, this way of framing the problem seems more natural, since ratings are always given as round numbers. Here, the Gibbs sampler for ordered probit on the BFM uses a Metropolis-within-Gibbs scheme \cite{myFM}, since on its own, Gibbs sampling does not work for classification problems beyond binary classification.
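The construction of the implicit block $iu$ described above can be sketched as follows (an illustrative sketch only; myFM's own relation-data API handles the actual encoding, and the function name is ours):

```python
import numpy as np

def implicit_user_features(rated_items, n_items):
    """Build the implicit-feedback block iu for one user: the indices of all
    movies the user rated, normalized by 1/sqrt(|N_u|).  When the user rated
    nothing there are no indices to fill, so the block stays empty (the
    paper's `1 if N_u = 0` convention has nothing to scale in that case)."""
    iu = np.zeros(n_items)
    n_u = len(rated_items)
    if n_u == 0:
        return iu
    iu[list(rated_items)] = 1.0 / np.sqrt(n_u)
    return iu
```

The movie-side block $ii$ is built symmetrically from the set of users who watched the movie.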
The ordered probit implementation consistently outperformed the standard regression, decreasing the best RMSE from 0.97268 to 0.97032 (see Table \ref{table:results}). This may be explained by the fact that ratings are fundamentally discrete and not necessarily linear, so ordered probit may model them better. The most interesting analysis for comparing different matrix-factorization models is their performance as the number of latent features $k$ varies. We show the result of this investigation in Figure \ref{fig:rankanalysis}. While all the other factorization techniques reach their lowest RMSE at low ranks ($<10$), BFM continue to improve their validation score even for larger $k$. This behaviour was also observed in \cite{onthediffic}. \subsection{Similarity Based} For similarity-based models, we investigated different similarity functions, weighting functions, and a few different hyper-parameters. A complete grid search over all possible combinations of parameters was not feasible due to time constraints; we therefore fixed some of them and optimized over only the most promising. In our experiments, we combined the 3 possible uses of similarity (item-item, user-user, and both together) with the 3 similarity functions (PCC, cosine and SiGra), the 4 weightings (normal, significance, sigmoid, and no weighting) and different numbers of neighbors (1, 3, 6, 10, 30 and 10000), for a total of 162 trained models. The fixed parameters were the significance threshold $\beta$ (set to 7 for user similarity, 70 for item similarity, and 20 when both were used) and the prediction weights used when both similarity methods were combined (set to 0.5). Finally, we also tested the Stochastic CSR algorithm using the original similarity model (PCC similarity with normal weighting and all the neighbors). The model was trained for 15 iterations with a sample size of 15 items.
Table \ref{table:results} shows some of the best scoring models for each category obtained during this experiment. In general, models using item-item similarity achieved lower RMSE than those using user-user similarity. Since the loss of the user-based methods was on average higher, the ensemble using both methods also scored worse than the item-based methods with the initial user weight (0.5). However, by reducing this weight to 0.06, we managed to improve on the item-based methods, as expected, since more information is available to the model. The similarity function achieving the best results was PCC with normal weighting. SiGra performed worse than expected, and the significance and sigmoid weightings were not as effective as the normal one. Furthermore, the best number of neighbors turned out to be 60 for our best scoring model. Finally, our novel extension of the CSR algorithm, SCSR, greatly reduced the computational cost of the original algorithm, but also decreased its effectiveness, performing worse than the model used for its initialization. We speculate that this might be due to an excessively small sample size, which may not retain enough information to improve the initial solution. However, this algorithm was not sufficiently explored due to time constraints, and a more in-depth analysis of its performance could be carried out in future work. \subsection{Neural Based} Even with an extensive hyper-parameter search, we were not able to bring neural-based methods to competitive stand-alone results. DeepRec still showed decent performance (Table \ref{table:results}) with an unconstrained 3-layer AE architecture (512,128,1024,128,512) that could be blended in our ensemble techniques. LightGCN showed its strongest (yet poor) results with 4 layers and an embedding size of 64.
Overall, DeepRec is fast to train (depending on the layer architecture) and demonstrated decent results, while LightGCN takes much longer to train (1-60) and performs systematically worse. \subsection{Blending} Different blending techniques were investigated in our work. Our approach consisted of fitting different CF algorithms on an 80\% split of the data and predicting the remaining 20\%, which became the training dataset for the blending models. We experimented with different regression models, namely (regularized) linear regression, random forest, MLP, and XGBoost. Inspired by \cite{Freudenthaler_bayesianfactorization}, we included all the BFM models in our ensemble, and then added other explored CF algorithms based on repeated k-fold cross-validation results. Table \ref{table:results} shows the results obtained by the different combinations of CF and regression models. Linear regression proved to be the most effective regression model (for the others we report only the RMSE on the best feature set), and our final blending model was hence a linear combination of the different input predictions. The most effective combination of CF models consisted of a blend of 8 BFM and 5 PCC-based similarity models. Interestingly, none of the other factorization models was able to improve the ensemble score. We speculate that this is because BFM already extract the maximum information among factorization techniques. On the other hand, similarity-based approaches, despite their higher RMSE, had a positive impact on the ensemble. Blending factorization-based and similarity-based predictions allowed us to further decrease the validation error by an additional 9.4\% (relative to the difference between the best single model and the baseline score), proving once again the effectiveness of blending methods in CF and showing that BFM can benefit from being combined with similarity-based approaches.
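The linear blending used above can be sketched with ordinary least squares (a minimal sketch using numpy only; in our pipeline the held-out 20\% predictions of each model form the columns of `preds`):

```python
import numpy as np

def fit_blend(preds, target):
    """Learn linear blending weights phi on held-out predictions.
    preds: (n_samples, n_models) matrix of model predictions,
    target: the true ratings.  Returns weights with an intercept."""
    X = np.column_stack([np.ones(len(target)), preds])
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

def blend(preds, w):
    """Apply the learned linear combination to new predictions."""
    X = np.column_stack([np.ones(preds.shape[0]), preds])
    return X @ w
```

Regularized variants (lasso, ridge) and non-linear regressors (MLP, XGBoost) replace the least-squares fit while keeping the same interface.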
\section{Conclusion} \label{sect:conclusion} In this work, we presented the task of Collaborative Filtering and explored a wide range of models that can be used to tackle it. We showed that older matrix-factorization methods, in particular BFM, greatly outperform all the other investigated techniques, and that blending their predictions with similarity-based techniques can further improve their performance. Moreover, we proposed a stochastic variation of a similarity-based approach, SCSR, which substantially reduces the asymptotic complexity of the original algorithm. Our final submission to the Kaggle competition, predicted with an ensemble of BFM and similarity models, achieved a public RMSE of 0.96374, placing our team first on the public leaderboard among the 16 teams that joined the 2022 Collaborative Filtering competition. \clearpage \onecolumn \vspace*{\fill} \renewcommand{\arraystretch}{1.2} \begin{table*}[h] \centering \begin{tabular}{|c | c | c | l | c c|} \hline \textbf{Category} & \textbf{Method} & \textbf{Model} & \textbf{Parameters} & \textbf{Validation} & \textbf{Public Score} \\ \hline \multirow{10}{*}{Memory-based} & \multirow{9}{*}{Similarity} & Item (1) & PCC, normal, 30 nn. & 0.99012 & 0.98265\\ &&Item & PCC, normal, 60 nn. & 0.98858 & 0.98174\\ &&Item (2)& PCC, normal, all nn. & 0.98944 & 0.98388\\ &&Item (3)& PCC, None, 30 nn. & 0.99069 & 0.98279\\ &&Item (4)& PCC, None, all nn. & 0.99105 & 0.96454\\ &&Item (6) & SiGra, all nn. & 1.00258 & -\\ &&User & PCC, normal, all nn. & 1.00025 & -\\ &&Both (7) & Cosine, normal, 30 nn., 0.5 w. & 0.99568 & 0.99052\\ &&Both & PCC, normal, all nn., 0.5 w. & 0.99941 & - \\ &&Both (5) & PCC, normal, 30 nn., 0.06 w. & 0.98767 & 0.98009\\ &&Both & PCC, normal, 60 nn., 0.06 w. & 0.98755 & 0.98069\\ \cline{2-6} &\multirow{2}{*}{Iterative similarity} & \multirow{2}{*}{Stochastic CSR} & PCC, normal, all nn., 0.5 w.
& \multirow{2}{*}{1.00578}&\multirow{2}{*}{-}\\ &&& 15 samples, 15 iter, $\alpha=0.5$ && \\ \hline \multirow{22}{*}{Model-based} & \multirow{10}{*}{Matrix factorization} & ALS & rank 3, $\lambda=0.1$, 20 iter & 0.98865 & 0.98747 \\ && SVD & rank 5 & 1.01240 & - \\ && FunkSVD & rank 3, $\eta$ = 1e-3, $\lambda$=5e-3 & 0.99880 & 0.99892 \\ \cline{3-6} && BFM regression (8)& k = 50, 500 iters & 0.98034 & - \\ && BFM r. [u, i, iu] (9)& k = 50, 500 iters & 0.97492 & - \\ && BFM r. [u, i, ii] (10) & k = 50, 500 iters & 0.97773 & - \\ && BFM r. [u, i, iu, ii] (11)& k = 50, 500 iters & 0.97268 & - \\ && BFM ordered probit (12) & k = 50, 500 iters & 0.97668 & - \\ && BFM o.p. [u, i, iu] (13)& k = 50, 500 iters & 0.97191 & - \\ && BFM o.p. [u, i, ii] (14)& k = 50, 500 iters & 0.97527 & - \\ && BFM o.p. [u, i, iu, ii] (15) & k = 50, 500 iters & 0.97032 & 0.96543 \\ \cline{2-6} & \multirow{2}{*}{Neural} & DeepRec (16) & $\eta=1e-3$, batch 128, 300 epochs & 0.98777 & 0.98559\\ & & LightGCN & $\eta=1e-4$, batch 2048, 250 epochs & 0.99987 & 0.99929\\ \cline{2-6} &\multirow{12}{*}{Blending} & \multirow{7}{*}{Linear Regression} & (5) + (15) & 0.96968 & 0.96472\\ &&& (5) + (6) + (7) + (15) & 0.96956 & -\\ &&& (15) + (1)-(5) & 0.96939 & 0.96454\\ &&& (5) + (15) + (16) & 0.969691 & -\\ &&& (15) + (1)-(7) & 0.96940 & -\\ &&& (8)-(15) & 0.96981 & -\\ &&& (8)-(15) + (1)-(5) &\textbf{ 0.96870} & \textbf{0.96374}\\ \cline{3-6} & & Lasso & best, $\alpha=0.001$ & 0.96930 & - \\ & & Ridge & best, $\alpha=0.01$ & 0.96870 & - \\ & & MLP & best, hidden\_size=100, iter=1000 & 0.97937 & -\\ & & XGBoost &best, n\_est=100, depth=7 & 0.97032 & -\\ & & Random Forest &best, n\_est=100, depth=2 & 0.98367 & -\\ \hline \end{tabular} \vspace*{0.2cm} \caption{Validation and submissions results.} \label{table:results} \end{table*} \vspace*{\fill} \clearpage \twocolumn \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} The current understanding of the interstellar medium (ISM) is that it is a multi-phase environment, consisting of gas and dust, which is both magnetized and highly turbulent (Ferriere 2001; McKee \& Ostriker 2007). In particular, magnetohydrodynamic (MHD) turbulence is essential to many astrophysical phenomena such as star formation, cosmic ray dispersion, and many transport processes (see Elmegreen \& Scalo 2004; Ballesteros-Paredes et al. 2007 and references therein). Additionally, turbulence has the unique ability to transfer energy over scales ranging from kiloparsecs down to the proton gyroradius. This is critical for the ISM, as it explains how energy is distributed from large to small spatial scales in the Galaxy. Observationally, several techniques exist to study MHD turbulence in different ISM phases. Many of these techniques focus on emission measure fluctuations or rotation measure fluctuations (i.e.\ gradients of linear polarization maps) for the warm ionized media (Armstrong et al. 1995; Chepurnov \& Lazarian 2010; Gaensler et al. 2011; Burkhart, Lazarian \& Gaensler 2012), as well as spectroscopic data and column density maps for the neutral warm and cold media (Spangler \& Gwinn 1990; Padoan et al. 2003). For studies of turbulence, spectroscopic data has a clear advantage in that it contains information about the turbulent velocity field as well as the density fluctuations. However, density and velocity are entangled in PPV space, making the interpretation of this type of data difficult. For the separation of the density and velocity fluctuations, special techniques such as the Velocity Coordinate Spectrum (VCS) and the Velocity Channel Analysis (VCA) have been developed (Lazarian \& Pogosyan 2000, 2004, 2006, 2008). Most of the efforts to relate observations and simulations of magnetized turbulence are based on obtaining the spectral index (i.e.
the log-log slope of the power spectrum) of either the density and/or velocity (Lazarian \& Esquivel 2003; Esquivel \& Lazarian 2005; Ossenkopf et al. 2006). However, the power spectrum alone does not provide a full description of turbulence, as it only contains the Fourier amplitudes and neglects information on phases. This fact, combined with the complexity of astrophysical turbulence, with multiple injection scales occurring in a multiphase medium, suggests that researchers need additional ways of analyzing observational and numerical data in the context of turbulence. In particular, current efforts fall into two categories: \begin{itemize} \item Development: Test and develop techniques that will complement and build off the theoretical and practical picture of a turbulent ISM that the power spectrum presents. \item Synergy: Use several techniques simultaneously to obtain an accurate picture of the parameters of turbulence in the observations. \end{itemize} In regards to the first point, in the last decade there has been substantial progress in the development of techniques to study turbulence. Techniques for the study of turbulence can be tested empirically, using parameter studies of numerical simulations, or with the aid of analytical predictions (as was done in the case of the VCA). In the former, the parameters to be varied (see Burkhart \& Lazarian 2011) include the Reynolds number, sonic and Alfv\'enic Mach numbers, injection scale, and equation of state; for studies of molecular clouds, they should also include radiative transfer and self-gravity (see Ossenkopf 2002; Padoan et al. 2003; Goodman et al. 2009).
Some recently developed techniques include the application of probability distribution functions (PDFs), wavelets, the spectral correlation function (SCF),\footnote{The similarities between VCA and SCF are discussed in Lazarian (2009).} delta-variance, principal component analysis, higher order moments, Genus, Tsallis statistics, and the spectrum and bispectrum (Gill \& Henriksen 1990; Stutzki et al. 1998; Rosolowsky et al. 1999; Brunt \& Heyer 2002; Kowal, Lazarian \& Beresnyak 2007; Chepurnov et al. 2008; Burkhart et al. 2009; Esquivel \& Lazarian 2010; Tofflemire et al. 2011). Additionally, these techniques are being tested and applied to different wavelengths and types of data. For example, PDFs and their mathematical descriptors have been applied to \textit{observations} in the context of turbulence in numerous works using linear polarization data (see Gaensler et al. 2011; Burkhart, Lazarian, \& Gaensler 2012), HI column density of the SMC (Burkhart et al. 2010), molecular/dust extinction maps (Goodman, Pineda, \& Schnee 2009; Brunt 2010; Kainulainen et al. 2011), and emission measure and volume-averaged density in diffuse ionized gas (Hill et al. 2008; Berkhuijsen \& Fletcher 2008). The latter point, regarding the synergistic use of tools for ISM turbulence, is only recently being attempted, as many techniques are still in developmental stages. However, this approach was used in Burkhart et al. (2010), which applied the spectrum, bispectrum and higher order moments to the HI column density of the SMC. The consistency of results obtained with a variety of statistics, compared with more traditional observational methods, made this study of turbulence in the SMC a promising first step. The current paper falls under the category of ``technique development.'' In particular, we investigate the utility of dendrograms in studying the hierarchical structure of ISM clouds.
It has long been known that turbulence is able to create hierarchical structures in the ISM (Scalo 1985, 1990; Vazquez-Semadeni 1994; Stutzki 1998); however, many questions remain, such as what type of turbulence is behind the creation of this hierarchy and what the roles of self-gravity and magnetic fields are. Hierarchical structure in relation to these questions is particularly important for the star formation problem (Larson 1981; Elmegreen \& Elmegreen 1983; Feitzinger \& Galinski 1987; Elmegreen 1999; Elmegreen 2011). \begin{figure}[tbh] \centering \includegraphics[scale=1]{fig1.eps} \caption{The dendrogram for a hypothetical 1D emission profile showing three local maxima (leaves) and merger points (nodes). The dendrogram is shown in blue and can be altered by changing the threshold level $\delta$ to higher or lower values. In this example, increasing the value of $\delta$ will merge the smallest leaf into the larger structure. The local maxima (green dots) and merger points (i.e. nodes, red dot) are the values used to create the distribution $\xi$, discussed further in \S~\ref{results}. } \label{fig:alyssa} \end{figure} Early attempts to characterize ISM hierarchy utilized tree diagrams as a mechanism for reducing the data to hierarchical ``skeleton images'' (see Houlahan \& Scalo 1992). More recently, dendrograms have been used on ISM data in order to characterize self-gravitating structures in star-forming molecular clouds (Rosolowsky et al. 2008 and Goodman et al. 2009). A dendrogram (from the Greek \textit{dendron}, ``tree,'' and \textit{gramma}, ``drawing'') is a hierarchical tree diagram that has been used extensively in other fields, particularly in computational biology, and occasionally in galaxy evolution (see Sawlaw \& Haque-Copilah 1998 and Podani, Engloner, \& Major 2009, for examples). Rosolowsky et al. (2008) and Goodman et al.
(2009) used the dendrogram on spectral line data of L1448 to estimate key physical properties associated with isosurfaces of local emission maxima, such as radius, velocity dispersion, and luminosity. These works provided a new and promising way of characterizing self-gravitating structures and properties of molecular clouds through the application of the dendrogram to $^{13}$CO(J=1-0) PPV data. In the current paper we apply the dendrogram to synthetic observations (specifically PPV cubes) of isothermal MHD turbulence in order to investigate the physical mechanisms behind the gas hierarchy. Additionally, we are interested in the nature of the structures that are found in PPV data and how these structures are related to both the physics of the gas and the underlying density and velocity fluctuations generated by turbulence. Simulations provide an excellent testing ground for this problem, as one can identify which features in PPV space are density features and which are caused by velocity crowding. Furthermore, one can answer the question: under what conditions do the features in PPV relate back to the 3D density or PPP cube? In order to address these questions we perform a parameter study using the dendrogram. We focus on how changing the global parameters of the turbulence, such as the sonic and Alfv\'enic Mach numbers, affects the amount of hierarchy observed, the relationship between the density and velocity structures in PPV, and the number and statistical distribution of dominant emission structures. The paper is organized as follows. In \S~\ref{dendoalg} we describe the dendrogram algorithm; in \S~\ref{data} we discuss the simulations and provide a description of the MHD models. We investigate the physical mechanisms that create hierarchical structure in the dendrogram tree and characterize the tree diagrams via statistical moments in \S~\ref{results}. In \S~\ref{ppp} we compare the dendrograms of PPP and PPV.
In \S~\ref{sec:app} we discuss applications and investigate issues of resolution. Finally, in \S~\ref{disc} we discuss our results, followed by the conclusions in \S~\ref{con}. \section{Dendrogram Algorithm} \label{dendoalg} The dendrogram is a tree diagram that can be used in 1D, 2D or 3D spaces to characterize how and where local maxima merge as a function of a threshold parameter. Although the current paper uses the dendrogram in 3D PPV space to characterize the merger of local maxima of emission, it is more intuitive to understand the 1D and 2D applications first. A 1D example of the dendrogram algorithm for an emission profile is shown in Figure \ref{fig:alyssa}. In this case, the threshold value is called $\delta$, and is the minimum amplitude above a merger point that a local maximum must have before it is considered distinct. That is, if a merger point (or node) is given by $n$ and a local maximum is given by $m$, then in order for a given local maximum $m_{1}$ to be considered significant, $m_{1}-n_{1,2} > \delta$. If $m_{1}-n_{1,2} \le \delta$ then $m_{1}$ would merge into $m_{2}$ and no longer be considered distinct. For 2D data, a common analogy (see Houlahan \& Scalo 1992; Rosolowsky et al. 2008) is to think of the dendrogram technique as a descriptor of an underwater mountain chain. As the water level is lowered, first one would see the peaks of the mountains, then mountain valleys (saddle points), and as more water is drained, the peaks may merge together into larger objects. The dendrogram stores information about the peaks and merger levels of the mountain chain. \begin{figure}[tlh] \centering \includegraphics[scale=.73]{fig2.eps} \caption{List of the simulations and their properties. We define the subsonic regime as ${\cal M}_s < 1$ and the supersonic regime as ${\cal M}_s > 1$. Two Alfv\'enic regimes exist for each sonic Mach number: super-Alfv\'enic and sub-Alfv\'enic.
The same color along the ${\cal M}_s$ column indicates that the same initial sound speed was used. The same color along the ${\cal M}_A$ column indicates that the same initial mean field strength was used. We group the description of the $512^3$ simulations based on similar values of the initial sound speed. For example, Run 1 and Run 8 have the same initial sound speed and are both described as M0.5.} \label{descrp} \end{figure} For our purposes, we examine the dendrogram in 3D PPV space (see Rosolowsky et al. 2008; Goodman et al. 2009 for more information on the dendrogram algorithm applied in PPV). In the 3D case, it is useful to think of each point in the dendrogram as representing a 3D contour (isosurface) in the data cube at a given level. Our implementation of the dendrogram is similar to many other statistics that employ a user-defined threshold value in order to classify structure. By varying the threshold parameter for the definition of ``local maximum'' (which we call $\delta$, see Figure \ref{fig:alyssa}), different dendrogram tree diagrams and distributions of local maxima and merger points are created. An example of another statistic that utilizes a density/emission threshold value is the Genus statistic, which has proven useful for studying ISM topology (Lazarian, Pogosyan \& Esquivel 2002; Lazarian 2004; Kim \& Park 2007; Kowal, Lazarian \& Beresnyak 2007; Chepurnov et al. 2008). For the Genus technique, the variation of the threshold value is a critical point in understanding the topology of the data in question. \begin{figure*}[th] \centering \includegraphics[scale=.9]{fig3.eps} \caption{Different ways of viewing the dendrogram information used in this work. Here we show an example for supersonic sub-Alfv\'enic turbulence (Run 3 from Figure \ref{descrp}) for threshold values $\delta$=14, 16, 20 (left, center, right columns).
The top row represents the isosurfaces in the PPV data and the middle row is the corresponding dendrogram (the black line is a reference marker for $\delta$) with colors matching the isosurface structures. Note that there is no information on the $x$-axis of the tree diagram, as the branches are sorted not to cross. However, this still preserves all information about connectivity and hierarchy at the expense of positional information. The bottom row is the histogram of the resulting tree diagram, including the leaves, branches and nodes. The red line is a reference marker at intensity level 25. The units of intensity on the $y$-axis of the tree diagrams in the middle row could be in brightness temperature ($T_b$) for scaled simulations or observations.} \label{fig:hists} \end{figure*} As $\delta$ sets the definition for ``local maximum,'' setting it too high will produce a dendrogram that may miss important substructures, while setting it very low may produce a dendrogram that is dominated by noise. The issues of noise and the dendrogram were discussed extensively in Rosolowsky et al. (2008). While the dendrogram is designed to present only the essential features of the data, noise will mask the low-amplitude or high-spatial-frequency variation in the emission structures. In extreme cases, where the threshold value is not set high enough or the signal-to-noise is very low, noise can result in local maxima that do not correspond to real structure. As a result, the algorithm has a built-in noise-suppression criterion that only recognizes structures with 4 $\sigma_{rms}$ significance above $\delta$. Such a criterion has been previously used in data cube analysis, as noise fluctuations will typically produce 1 $\sigma_{rms}$ variations (Brunt et al. 2003; Rosolowsky \& Blitz 2005; Rosolowsky et al. 2008). The algorithm we use is extensively described in Goodman et al. (2009) in the Supplementary Methods section and in Rosolowsky et al.
(2008); however, we describe its main points here. To produce the dendrogram, we first identify a population of local maxima as the points which are larger than all surrounding voxels touching along the face (not along edges or corners). This large set of local maxima is then reduced by examining each maximum and searching for the lowest contour level that contains only that maximum. If this contour level is less than $\delta$ below the local maximum, that local maximum is removed from consideration in the leaf population. This difference in data values is the vertical length of the ``leaves'' of the dendrogram. Once the leaves (local maxima) of the dendrogram are established, we contour the data with a large number of levels (500 specifically, see Rosolowsky et al. 2008; Goodman et al. 2009). The dendrogram ``branches'' are graphically constructed by connecting the various sets of maxima at the contour levels where they are joined (see Figure \ref{fig:alyssa} for a 1D example). For graphical presentation, the leaves of the structure tree are shuffled until the branches do not cross when plotting. As a result, the $x$-axis of the dendrogram contains no information. \begin{figure}[th] \centering \includegraphics[scale=.74]{fig4a.eps} \includegraphics[scale=.478]{fig4b.eps} \caption{Top: total number of structures (leaves and nodes) vs.~$\delta$ for six different simulations. Error bars are created by running the analysis for multiple time snapshots of well-developed turbulence. Symbols with no connecting line have LOS taken perpendicular to the mean magnetic field. We show an example of a case with LOS parallel to the mean field for the M7 runs denoted with points connected by a solid line. Bottom: number of segments from root to leaf on the largest branch of the tree vs. $\delta$. We show an example of a case with LOS parallel to the mean field for the M7/M8 runs denoted with points connected by a solid line.
Hierarchical structure is created both by shocks (high sonic Mach number cases) and by a high Alfv\'enic Mach number. For both the top and bottom plots, the left panel shows higher magnetization (sub-Alfv\'enic) while the right shows lower magnetization (super-Alfv\'enic). Both panels have the $y$-axis set to the same range for ease of comparison and use a log-log scale. } \label{fig:maxvsdelta} \end{figure} Once the dendrogram is created, there are multiple ways of viewing the information it provides such as: \begin{itemize} \item A tree diagram (the dendrogram itself). \item 3D viewing of the isocontours and their connectivity in PPV space. \item A histogram of the dendrogram leaf and node values (i.e. intensities), which can then be further statistically analyzed. \end{itemize} We note that this third point is a novel interpretation of the dendrogram that we develop in this work. Here, the histogram will be composed of intensity values important to the hierarchical structure of the image. We define a distribution $\xi$ which includes the intensity values of the leaves (i.e. local maxima), denoted by $m$, and the intensity values of the nodes, denoted by $n$. This interpretation is visualized in Figure \ref{fig:hists} and is further described in Section \ref{results}. The purpose of this paper is to use dendrograms to characterize the observed hierarchy seen in the data. While turbulence has often been cited as the cause of the observed hierarchical structure in the ISM (Stutzki 1998), it is unclear to what extent magnetic fields, gas pressure, and gravity play roles in the creation of ISM hierarchy even though these parameters are known to drastically change the PDF and spectrum of both column density and PPV data (see Falgarone 1994; Kowal, Lazarian \& Beresnyak 2007; Tofflemire et al. 2011).
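To make the leaf-finding and pruning procedure described above concrete, the following is a minimal one-dimensional sketch. It is not the production algorithm of Rosolowsky et al. (2008), which works on 3D cubes with $\sim$500 contour levels; the function name and the 1D restriction are our own illustrative choices.

```python
def dendrogram_leaves(data, delta):
    """Return indices of dendrogram leaves in a 1D intensity profile.

    A leaf is a local maximum whose lowest enclosing contour containing
    only that maximum lies at least `delta` below the peak; weaker
    maxima are pruned, exactly as described for the 3D algorithm.
    """
    n = len(data)
    # Step 1: candidate local maxima (strictly above face-sharing neighbours).
    maxima = [i for i in range(n)
              if (i == 0 or data[i] > data[i - 1])
              and (i == n - 1 or data[i] > data[i + 1])]

    def merge_level(i):
        # Step 2: find the lowest contour around maximum i that first
        # touches a point at least as high -- walk outwards both ways,
        # recording the deepest valley crossed before meeting one.
        levels = []
        for step in (-1, 1):
            j, valley = i + step, data[i]
            while 0 <= j < n and data[j] < data[i]:
                valley = min(valley, data[j])
                j += step
            if 0 <= j < n:           # reached a point >= data[i]
                levels.append(valley)
        # A global maximum never merges; treat its contour as the floor.
        return max(levels) if levels else float("-inf")

    # Step 3: keep only maxima standing at least `delta` above their
    # merge level (the others are removed from the leaf population).
    return [i for i in maxima if data[i] - merge_level(i) >= delta]
```

For the toy profile `[0, 3, 1, 5, 0]`, a threshold of `delta=2` keeps both peaks, while `delta=3` prunes the weaker one, illustrating how raising $\delta$ merges substructure.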
\section{Data} \label{data} We generate a database of sixteen 3D numerical simulations of isothermal compressible magnetohydrodynamic (MHD) turbulence using the MHD code of Cho \& Lazarian (2003), varying the input values of the sonic and Alfv\'enic Mach numbers. We briefly outline the major points of the numerical setup. The code is a third-order-accurate hybrid essentially non-oscillatory (ENO) scheme which solves the ideal MHD equations in a periodic box: \begin{eqnarray} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho {\bf v}) = 0, \\ \frac{\partial \rho {\bf v}}{\partial t} + \nabla \cdot \left[ \rho {\bf v} {\bf v} + \left( p + \frac{B^2}{8 \pi} \right) {\bf I} - \frac{1}{4 \pi}{\bf B}{\bf B} \right] = {\bf f}, \\ \frac{\partial {\bf B}}{\partial t} - \nabla \times ({\bf v} \times{\bf B}) = 0, \end{eqnarray} with zero-divergence condition $\nabla \cdot {\bf B} = 0$, and an isothermal equation of state $p = C_s^2 \rho$, where $p$ is the gas pressure. On the right-hand side, the source term $\bf{f}$ is a random large-scale driving force. Although our simulations are ideal MHD, diffusion is still present in the form of numerical resistivity acting on small scales. The scale at which the dissipation starts to act is defined by the numerical diffusivity of the scheme.\footnote{ENO schemes, such as the one employed here, are generally considered to be low-diffusion schemes (see, e.g., Liu \& Osher 1998; Levy, Puppo \& Russo 1999).} However, the dissipation scales can be estimated approximately from the velocity spectra and, in this case, we estimated the dissipation scale to be $k_v \approx 30$ (see Kowal, Lazarian \& Beresnyak 2007). We drive turbulence solenoidally\footnote{The differences between solenoidal and compressive driving are discussed further in Federrath et al. (2008).
One can expect driving in the ISM to be a combination of solenoidal and compressive, however both types of driving will produce shocks on a range of scales, which is what we study here.} with energy injected on the large scales. The large eddy turnover time is given by $\approx L/\delta V$ where $\delta V$ is the RMS velocity (with fluctuations of around unity) and $L$ is the box size. The magnetic field consists of the uniform background field and a fluctuating field: ${\bf B}= {\bf B}_\mathrm{ext} + {\bf b}$. Initially ${\bf b}=0$. The average density is unity for all simulations. We stress that the simulations (without self-gravity) are scale-free and all units are related to the turnover time, density, and energy injection scale. We divide our models into two groups corresponding to sub-Alfv\'enic and super-Alfv\'enic turbulence. For each group we computed several models with different values of gas pressure (see Figure \ref{descrp}) falling into regimes of subsonic and supersonic. We ran one compressible MHD turbulent model with self-gravity at 512$^3$ resolution and a corresponding case for the same initial values of pressure and mean magnetic field without self-gravity (models 15 and 16). We solve for the gravitational potential ($\Phi$) using a Fourier method similar to that described in Ostriker et al. 1999. In this case, Equation 2 now has a $-\rho \nabla \Phi$ term on the right hand side. The gravitational kernel used to provide a discrete representation of the Poisson equation is: \begin{equation} \phi_k = 2\pi G \rho_k \left\{ \frac{1-\cos(k_x \Delta x)}{\Delta x^2} + \frac{1-\cos(k_y \Delta y)}{\Delta y^2} + \frac{1-\cos(k_z \Delta z)}{\Delta z^2} \right\}^{-1}. \end{equation} We can set the strength of self-gravity by changing the physical scaling of the simulations, i.e. by changing the size scaling, cloud mass, and crossing time, which effectively changes the value of $2\pi G$ in code units; we term this value $g$ in Figure \ref{descrp}.
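As an illustration of the kernel above, the Fourier-space solve can be sketched as follows. This is a minimal sketch, not the solver of Ostriker et al. (1999); the function name, grid handling, and the treatment of the $k=0$ mode are our own choices, and we follow the quoted kernel formula verbatim, so the overall sign convention for $\Phi$ may differ from the production code.

```python
import numpy as np

def gravitational_potential(rho, g=0.01, dx=1.0):
    """Potential on a periodic grid from the discrete kernel in the text:
    phi_k = g * rho_k / D(k), with D(k) = sum_i [1 - cos(k_i dx)] / dx^2
    and g = 2*pi*G in code units. Illustrative sketch only."""
    n = rho.shape[0]
    rho_k = np.fft.fftn(rho - rho.mean())          # zero-mean source term
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # wavenumbers per axis
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    D = ((1 - np.cos(kx * dx)) + (1 - np.cos(ky * dx))
         + (1 - np.cos(kz * dx))) / dx**2
    D[0, 0, 0] = 1.0            # dummy value; the k=0 mode is zeroed below
    phi_k = g * rho_k / D
    phi_k[0, 0, 0] = 0.0        # potential is defined only up to a constant
    return np.fft.ifftn(phi_k).real
```

Since the periodic second-difference Laplacian has Fourier symbol $-2D(k)$, the returned field satisfies $\nabla^2_{\rm disc}\Phi = -2g(\rho-\bar{\rho})$, i.e. the stated kernel up to normalization and sign convention.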
We set these values to give a global virial number of $\alpha \approx 90$, which is in the upper limit of the observed values found in GMCs (see Kainulainen et al. 2011). We choose a high virial value to investigate the minimum effect gravity might have on the hierarchical structure of clouds. More information about the scalings and their relation to the virial parameter can be found in Section \ref{sec:app}. The models are listed and described in Figure \ref{descrp}. Here we list the root-mean-squared values of the sonic and Alfv\'enic Mach numbers calculated in every cell and then averaged over the box. We use density and velocity perpendicular to the mean magnetic field in order to create fully optically thin synthetic PPV data cubes, although we also investigate dendrograms for other LOS orientations. The PPP and synthetic PPV cubes are all normalized by the mean value, i.e. $PPV_{final}=PPV_{original}/\langle PPV_{original}\rangle$, unless otherwise stated. Varying the optical depth will be done in a later work. We create cubes with a given velocity resolution of 0.07, which is approximately ten times smaller than the rms velocity of the simulation ($v_{rms}\approx$ unity). For reference, the sound speed of the simulations varies from $c_s=1.4$ for our most subsonic simulation to $c_s=0.07$ for our most supersonic one. PPV cubes are created by reorganizing density cubes into channel bins based on given velocity intervals. Additional discussion on comparing the simulations to observations is found in Section \ref{sec:app}. \section{Characterizing Hierarchy and Structures Created by Turbulence} \label{results} We applied the dendrogram algorithm to synthetic PPV cubes with various sonic and Alfv\'enic Mach numbers. An example of how the tree diagram output changes with threshold value $\delta$ is shown in Figure \ref{fig:hists}. The top row of Figure \ref{fig:hists} shows the isosurfaces with the colors relating back to the colors in the corresponding dendrogram shown in the middle row.
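For concreteness, the construction of the synthetic cubes described in Section \ref{data} (reorganizing a density cube into velocity-channel bins along the LOS, then normalizing by the mean) can be sketched as follows. The function name, axis convention, and velocity range are our own illustrative choices; the channel width of 0.07 is the value quoted in the text.

```python
import numpy as np

def make_ppv(density, v_los, v_min=-3.5, v_max=3.5, dv=0.07):
    """Build an optically thin PPV cube: each density voxel is assigned
    to a velocity channel of width dv by its LOS velocity, and the LOS
    axis (taken here as axis 0) is collapsed per channel. Minimal
    sketch; v_min/v_max are illustrative bounds."""
    n_chan = int(round((v_max - v_min) / dv))
    # channel index of every voxel, clipped to the valid range
    chan = np.clip(((v_los - v_min) / dv).astype(int), 0, n_chan - 1)
    ppv = np.zeros((n_chan,) + density.shape[1:])
    for k in range(n_chan):
        ppv[k] = np.where(chan == k, density, 0.0).sum(axis=0)
    # normalize by the mean value, as stated in the text
    return ppv / ppv.mean()
```

Note that all the density in a voxel is moved to the channel of its velocity, which is why, as discussed in Section \ref{ppp}, velocity fluctuations alone can create PPV structure even at constant density.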
As the threshold intensity value $\delta$ (which, shown here with a black line, sets the definition of the local maximum or ``leaves of the tree'') increases, structures in the dendrogram begin to merge with each other. The leaf and branch lengths and the number of structures provide information on the hierarchical nature of the PPV cube. The bottom row of Figure \ref{fig:hists} shows the histograms of the dendrogram distribution of intensities (leaves and nodes). The red line is a reference line at intensity level 25. This distribution also changes with changing threshold value, as leaves merge with one another and the hierarchy changes. \begin{figure*}[th] \centering \includegraphics[scale=.13]{fig5.eps} \caption{Illustration of supersonic clouds with different magnetic regimes and how this affects the observed clumps. Panel A shows a cloud with low magnetization, i.e. a high Alfv\'enic Mach number (similar to hydrodynamic turbulence). In this case, turbulence allows the creation of hierarchical structure with no limitation on the gas motion. Panels B and C show a different cloud, with higher magnetization (sub-Alfv\'enic), with views of the compression parallel (panel B) and perpendicular (panel C) to the field lines. In the sub-Alfv\'enic cloud, motions will be correlated due to the strong field. The magnetic field will restrict shock compression perpendicular to the field lines (panel C). For shocks parallel to the field (panel B), increased compression will occur which will enhance density clumps.} \label{fig:clump} \end{figure*} In the next subsections, we investigate the effects of the compressibility, magnetization, and self-gravity on the number of structures, amount of hierarchical structure, and moments of the dendrogram distribution. We define a hierarchical dendrogram as one which has many segments on its paths and hence many levels above the root.
\subsection{Sonic and Alfv\'enic Mach Numbers} \subsubsection{Leaf and Branch Counting} We computed the dendrogram for all synthetic non-self-gravitating PPV cubes with varying threshold values. The top panel of Figure \ref{fig:maxvsdelta} shows how the total number of structures (i.e., dominant emission contours including dendrogram ``leaves and branches'') changes as we change $\delta$. We plot the total number of structures vs.~$\delta$ on a logarithmic scale (i.e. $\log N$ vs.~$\log \delta$) for simulations with three differing values of sonic Mach number (the M7, M3, M0.7 models) and two values of Alfv\'enic Mach number. The left panel shows sub-Alfv\'enic models and the right shows super-Alfv\'enic models. Error bars are created by taking the standard deviation between different time snapshots. We note that power law tails can be seen at values of $\delta$ past the mean value (i.e. past $\log \delta=0$). We overplot the values of the slopes with solid black lines for reference. The symbols with no lines through them in Figure \ref{fig:maxvsdelta} are for PPV cubes with LOS taken perpendicular to the mean magnetic field. We tested our results for LOS taken parallel to the mean magnetic field and found similar results. We show examples of the results for cubes with LOS taken parallel to the mean magnetic field in Figure \ref{fig:maxvsdelta} for the M7 models using symbols connected with solid lines. The M7 models are highly supersonic and therefore will show the most deviation along different sight lines. When $\delta$ is at or slightly above the mean value of the data cubes, there is little difference in the number of structures between simulations of different sonic Mach number. This is surprising, since the structures seen in subsonic turbulence are very different from the supersonic case.
In the regime where $\delta$ is at the mean value, we are sampling most of the PPV cube emission and therefore are not sensitive to the differences seen at larger threshold values, which will merge low-intensity structures. Once we increase $\delta$ beyond the mean, however, the number of structures between the subsonic (black plus signs) and supersonic simulations (red stars and green diamonds) rapidly diverges. The larger number of structures in the supersonic case is a result of shocks creating higher intensity values in the PPV cubes. Additionally, the slopes in the subsonic simulations are much steeper as compared with the supersonic simulations since the number of structures the dendrogram considers significant at a given threshold value rapidly falls off to zero. Subsonic models have fewer significant emission contours since they do not have density enhancements created by shocks and therefore the density/velocity contrast between subsonic and supersonic turbulence becomes clear at higher threshold values. The higher the Mach number, the more small scale intensity enhancements we expect to see. As $\delta$ increases, differences between supersonic (M3) and very supersonic (M7) cases become more apparent, as the slopes for the M3 case are steeper. This is because interacting shocks in the M7 case are much stronger, and hence there is more contrast in the emission contours. Thus, as we increase $\delta$, the structures merge more rapidly for lower values of the sonic Mach number. Comparison between the left and right top panels of Figure \ref{fig:maxvsdelta} shows that the magnetic field also affects the number of structures and the trend with the threshold value. When $\delta$ is low, the super-Alfv\'enic cases (right panel) show slightly more structures than the sub-Alfv\'enic ones (left panel). However, as $\delta$ increases, the number of structures decreases more rapidly in the case of super-Alfv\'enic turbulence, i.e. the slopes are steeper.
Interestingly, when we take a sight-line parallel to the mean magnetic field (green symbols with the solid line drawn through) we still see the same trend with Alfv\'enic Mach number. We suspect that these results are generally independent of time evolution in driven turbulence, as clump lifetimes have been shown to be determined by turbulence motions on large scales, rather than limited by their crossing times (see Falceta-Gon\c{c}alves \& Lazarian 2011). \begin{figure}[th] \centering \includegraphics[scale=.7]{fig6.eps} \caption{Moments of the dendrogram tree (leaves + branches) vs. average ${\cal M}_s$ for twelve different simulations spanning a range of sonic Mach numbers from 0.5 to 8. Here, we have chosen $\delta$=4. Panels, from top to bottom, show mean ($\mu$), variance ($\nu$), skewness ($\gamma$) and kurtosis ($\beta$) of the distribution. Sub-Alfv\'enic is shown with black plus signs and super-Alfv\'enic with red asterisks.} \label{fig:norm_ms} \end{figure} The bottom plot of Figure \ref{fig:maxvsdelta} shows the number of segments from root to leaf on the largest branch vs. the threshold parameter $\delta$. A test of hierarchy is to count the number of segments along the largest branch, from leaf to root. Similar to what was shown in the top figure, the sonic Mach number has a strong relation to the amount of hierarchical structure created in the gas. Higher sonic Mach number yields more shocks, which in turn produce more high density clumps and more hierarchical structures in PPV space. Interestingly, the magnetic field seems to play an even stronger role in the hierarchical branching than shocks. Comparison between the $y$-axis values of the left and right plots reveals that a larger Alfv\'enic Mach number allows for more hierarchical structure in the PPV dendrogram.
In the case of super-Alfv\'enic turbulence, magnetization is low and hence the structures created are closer to that of hydrodynamic turbulence, which is well known to show fractal behavior and hierarchical eddies. As turbulence transitions to sub-Alfv\'enic, it becomes magnetically dominated with fewer degrees of freedom. We illustrate the results of Figure \ref{fig:maxvsdelta} as a cartoon shown in Figure \ref{fig:clump}. The cartoon shows various configurations of two ISM clouds. We label the first cloud as case A, which is a cloud with a global Alfv\'enic Mach number $\geq 1$. The second cloud, which is a cloud with a global Alfv\'enic Mach number $\leq 1$, is labeled as cases B and C, indicating shock compression parallel and perpendicular to the mean magnetic field, respectively. Both clouds are assumed to have the same supersonic value of the sonic Mach number. Case A shows hierarchical structure forming in clumps that are not affected strongly by the magnetic field. The clumping and hierarchy are due to compression via shocks and the shredding effect of hydrodynamic turbulence. The turbulent eddies for cloud A can evolve with a full 3D range of motion and have more degrees of freedom as compared with turbulence in the presence of a strong magnetic field. In light of this, when we consider strong magnetization, we must now investigate the effects of shock compressions oriented parallel and perpendicular to the mean magnetic field (cases B and C). For shock compression parallel to the field lines (case B), the clumps will be confined in the direction perpendicular to the field, and thus the compression will be able to squeeze the clumps, decrease the hierarchy in the gas, and create additional large density contrast. For shock compression perpendicular to the field lines, the magnetic pressure relative to the shock compression is higher, and the clumps will not feel as much of the compression.
Furthermore, the strong field creates anisotropy in the eddies, which are stretched along the direction of the mean field line. This limits the range of motion of the eddies, which in turn limits their ability to interact. Thus, comparing clouds B/C with cloud A, the contrast is higher while the hierarchical structure is reduced. \subsubsection{Statistics of the Dendrogram Distribution} \label{moments} A dendrogram is a useful representation of PPV data in part because there are multiple ways of exploring the information on the data hierarchy. In this section we investigate how the statistical moments of the distribution of the dendrogram tree (see bottom panels of Figure \ref{fig:hists} for example) change as we change the threshold parameter $\delta$ and how these changes depend on the compressibility and magnetization of turbulence. We consider a distribution $\xi$ containing all the intensity values of the leaves and merging intensity contour values in a given dendrogram. The question that forms the basis of our investigation in this section is: Do the moments of the distribution $\xi$ have any dependencies on the sonic and Alfv\'enic Mach numbers? The first and second order statistical moments (mean and variance) used here are defined as follows: $\mu_{\xi}=\frac{1}{N}\sum_{i=1}^N {\left( \xi_{i}\right)} $ and $ \nu_{\xi}= \frac{1}{N-1} \sum_{i=1}^N {\left( \xi_{i} - \mu_{\xi}\right)}^2$, respectively. The standard deviation is related to the variance as: $\sigma_{\xi}^2=\nu_{\xi}$. The third and fourth order moments (skewness and kurtosis) are defined as: \begin{equation} \gamma_{\xi} = \frac{1}{N} \sum_{i=1}^N{ \left( \frac{\xi_{i} - \mu_{\xi}}{\sigma_{\xi}} \right)}^3 \label{eq:skew} \end{equation} \begin{equation} \beta_{\xi}=\frac{1}{N}\sum_{i=1}^N \left(\frac{\xi_{i}-\mu_{\xi}}{\sigma_{\xi}}\right)^{4}-3 \label{eq:kurt} \end{equation} We calculate the moments of the dendrogram tree distribution while varying our simulation parameter space.
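For reference, the four moments exactly as defined above can be computed directly; this is a minimal sketch (the function name is ours), using the estimators quoted in the text: the variance normalized by $N-1$, and the skewness and excess kurtosis by $N$ with $\sigma_\xi = \sqrt{\nu_\xi}$.

```python
import math

def dendrogram_moments(xi):
    """Mean, variance, skewness and excess kurtosis of the dendrogram
    intensity distribution xi (leaves + merge levels), following
    Eqs. (skew) and (kurt) in the text."""
    n = len(xi)
    mu = sum(xi) / n                                     # mean
    nu = sum((x - mu) ** 2 for x in xi) / (n - 1)        # variance (1/(N-1))
    sigma = math.sqrt(nu)                                # standard deviation
    gamma = sum(((x - mu) / sigma) ** 3 for x in xi) / n # skewness
    beta = sum(((x - mu) / sigma) ** 4 for x in xi) / n - 3  # excess kurtosis
    return mu, nu, gamma, beta
```

A symmetric distribution gives zero skewness, so any nonzero $\gamma_\xi$ measured from the dendrogram reflects genuine asymmetry in the intensity hierarchy.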
In particular, we vary the sonic Mach number, the Alfv\'enic Mach number, and the threshold value. We find the moments vs. the threshold parameter $\delta$ to show linear behavior. As $\delta$ increases, the number of the intermediate intensity values that make up the branches and the hierarchical nesting (i.e. the intensity values between the high intensity local maximum and the low intensity values near the trunk) merge with each other. This effect can be seen visually in Figure \ref{fig:hists}. Thus, as $\delta$ increases, the mean and variance of the distribution $\xi$ (example shown in the bottom row of Figure \ref{fig:hists}) will increase. We plot the moments vs. ${\cal M}_s$ with $\delta=4$ in Figure \ref{fig:norm_ms}. This figure shows the full range of our simulations with sub- and super-Alfv\'enic combinations. Generally, as the sonic Mach number increases so do the moments. We found this trend to be consistent over a range of $\delta$ values, and hence only plot one case here. Error bars, created by taking the standard deviation of the value between different time snapshots of the simulation, generally increase with sonic number as the fluctuations become increasingly stochastic and shock dominated. The increase of the moments of $\xi$ is related to the compressibility of the model and more supersonic cases display more prominent clumpy features, which drive up both the average and the variation from average. The tails and peak of the distribution also become increasingly skewed and kurtotic towards higher values of intensity and the distribution becomes increasingly peaked around the mean value. It is interesting to note that a strong dependency on the magnetization of the model exists, particularly as the sonic number goes up. The sub-Alfv\'enic simulations show increased moments, which implies that they exhibit more contrast (mean value is higher) and more skewed/kurtotic distributions in their gas densities.
In the above analysis, the distribution $\xi$ included all leaves and branches of the dendrogram tree. We could further cut the tree into its respective branches and leaves and analyze the distributions separately, which provides additional constraints on the parameters. We investigated the statistical moments on the histograms of the branch lengths, leaf lengths, and leaf intensities and found the trends discussed above to be consistent with the results of Figure \ref{fig:norm_ms}, and hence do not include the plots. \subsection{Self-Gravity} \label{sg} \subsubsection{Leaf and Branch Counting} \label{sglb} The importance of including self-gravity in simulations used for comparison with the molecular medium has been raised by a number of authors (Padoan et al. 2001; Li et al. 2004; Goodman et al. 2009; Federrath et al. 2010). While self-gravity is known to be of great importance to accretion disk physics and protostellar collapse, its role in the outer regions of GMCs and in diffuse gas is less obvious. The dendrogram can potentially be used to explore the relative roles of turbulence and gravity regarding both the structure of the hierarchy and the distribution of dominant emission contours. Figure \ref{fig:sg1} shows tree diagrams at constant threshold value $\delta$=45 for sub-Alfv\'enic supersonic simulations with and without self-gravity (models 15 and 16 in Figure \ref{descrp}). A large value of $\delta$ is used in order not to overcrowd the dendrogram with branches. It is clear that the case with self-gravity (the tree diagram in the right panel of Figure \ref{fig:sg1}) has a dendrogram with more significant structure and hierarchical nesting. Both models are supersonic with approximately the same sonic Mach number. Visually, Figure \ref{fig:sg1} makes it clear that self-gravity, even with a virial number of 90, is an important factor for the creation of hierarchical structure.
\begin{figure*}[tbh] \centering \includegraphics[scale=.34]{fig7.eps} \caption{Effects of self-gravity on the dendrograms of sub-Alfv\'enic supersonic MHD turbulence with $\delta=45$. A high value of $\delta$ is used to keep the plots from being overcrowded with branches. The dendrogram of the self-gravitating simulation is on the right while the dendrogram of the simulation with no gravity is on the left.} \label{fig:sg1} \end{figure*} In order to quantify the number of structures and hierarchical nesting seen in Figure \ref{fig:sg1} we repeat the analysis performed in Figure \ref{fig:maxvsdelta}. We show the number of structures vs. $\delta$ (top panel) and the number of segments on the largest branch of the tree vs. $\delta$ (bottom panel) in Figure \ref{fig:sg5}. It is clear that the case with no self-gravity (black crosses) shows less overall structure (leaves and nodes, top plot) and less hierarchical structure (bottom plot) compared with the cases with self-gravity (red asterisks). Even with a high virial parameter, the supersonic self-gravitating simulation has significantly more nested structures and more contours considered to be areas of significant emission than the supersonic simulation without gravity. Interestingly, the power-law slopes seen in the top plot of Figure \ref{fig:sg5} are not significantly different between the self-gravitating and non self-gravitating cases. In fact, the simulations appear to be only shifted vertically and no substantial difference is seen regarding the slopes. Therefore it is possible that the change in the slope, which was observed in Figure \ref{fig:maxvsdelta}, can be attributed to the sonic and Alfv\'enic nature of the turbulence directly, and is not substantially affected by the inclusion of weak self-gravity. More work should be done in the future to determine how the dendrogram is influenced by the presence of strong self-gravitating flows.
We discuss further the use of the dendrogram for quantifying the relative importance of self-gravity and turbulence in the discussion section (Section \ref{disc}). \begin{figure}[tbh] \centering \includegraphics[scale=.5]{fig8a.eps} \includegraphics[scale=.5]{fig8b.eps} \caption{The effects of self-gravity on the number of structures and hierarchical nesting seen in the dendrogram. Top plot: total number of structures (branches and leaves) vs. $\delta$. Bottom plot: number of segments from root to leaf on the largest branch of the tree vs. $\delta$. All plots are shown with a log-log scale. Black plus signs indicate the simulation with no gravity while red asterisks indicate the simulation with self-gravity included. Both simulations are sub-Alfv\'enic and supersonic.} \label{fig:sg5} \end{figure} \subsubsection{Statistics of the Dendrogram Distribution} \label{sgstat} We show how self-gravity affects the moments of the dendrogram distribution as we vary $\delta$ in Figure \ref{fig:sg6} for models 15 and 16. Higher levels of self-gravity show increases in all four moments over a range of $\delta$ values. The presence of gravity increases the mean value and variance of emission contours as well as skews the distribution towards higher values due to the presence of attracting regions. The results of Section \ref{sg} can be applicable to molecular clouds where, given a relatively constant value of the sonic Mach number, regions under the influence of stronger self-gravity could possibly be identified with the use of the analysis presented in Figures \ref{fig:sg5} and \ref{fig:sg6}. This could be done by comparing the dendrogram moments and number of structures with different $\delta$ between various sections of a cloud. Areas that show increased moments and increased hierarchical clumping with no changes in the slopes of structures vs. $\delta$, may indicate changes in self-gravity.
Knowing the Mach number of these regions will greatly increase the reliability of such analysis and fortunately, other techniques exist to find these (see for example Burkhart et al. 2010; Esquivel \& Lazarian 2011; Kainulainen \& Tan 2012). We further discuss applying the dendrogram to observations in Section \ref{disc}. \section{Dendrograms of PPP vs. PPV} \label{ppp} The issue of interpreting structures seen in PPV space has vexed researchers for over a decade (see Pichardo et al. 2000). How the structures in PPV translate to PPP depends on many factors, most importantly the nature of the turbulent environment. The dendrogram presents a unique way of studying how the hierarchy of structures seen in density space (PPP) relate to PPV space via simulations. \begin{figure}[tbh] \centering \includegraphics[scale=.7]{fig9.eps} \caption{The effects of self-gravity on the moments of the dendrogram distribution vs. $\delta$. Self-gravity (red asterisks) shows increased mean ($\mu$), higher variance ($\nu$), and more skewed and peaked distributions, which are reflected in the skewness ($\gamma$) and kurtosis ($\beta$).} \label{fig:sg6} \end{figure} \begin{figure*}[tbh] \centering \includegraphics[scale=.5]{fig10.eps} \caption{Example of synthetic PPV data cubes with vertical axis being the velocity axis (left), and PPP data cube (right) for subsonic super-Alfv\'enic turbulence. Integrating along the velocity axis of PPV restores the column density map which can also be obtained from the 3D density cube. The bottom left PPV has PPP density equal to unity, and hence a constant column density. Structure in this PPV cube is due to \textit{pure velocity fluctuations}. This figure highlights the need to be cautious when interpreting the structures seen in PPV.
The quantitative relation between the fluctuations in PPV and underlying density and velocity fluctuations is provided in Lazarian \& Pogosyan (2000).} \label{fig:ppv} \end{figure*} For turbulent clouds, it is never the case that the structures in PPV have a one-to-one correspondence with the density PPP, although this assumption may be more appropriate for some environments than others. We show a simple example illustrating this in Figure \ref{fig:ppv}, which shows two synthetic subsonic super-Alfv\'enic PPV data cubes (left), which share the same velocity distribution but have different density distributions. The bottom left PPV cube has constant density/column density, while the top left PPV cube's corresponding turbulent density cube (PPP cube) is shown on the right. Interestingly, the bottom left PPV cube has a very similar level of structure as compared with the top PPV cube, despite the fact that the column density of the bottom cube is constant. This points out the well-known fact that there is no one-to-one correspondence between PPV and PPP space. In fact, in this example (a subsonic model) most of the structures seen in PPV are due to the velocity rather than the density. Figure \ref{fig:ppv} illustrates the dominance of velocity in the subsonic case in the bottom PPV cube. Fluctuations in PPV here are \emph{entirely driven by the turbulent velocity field}. To illuminate this point further, Figure \ref{fig:ppp-ppv} shows PPP and PPV dendrograms for supersonic turbulence (model 15, middle) and subsonic turbulence (model 1, bottom). We also show the corresponding isosurfaces for the supersonic case in the top row. Comparing PPV and PPP should be done with care as they are different spaces. We increased the value of $\delta$ until the PPP dendrogram becomes mostly leaves, i.e. it has little hierarchy. The leaves are reached at $\delta \approx 40$.
We took the corresponding optically thin PPV cube and applied the dendrogram with the same threshold value, $\delta=40$. If the dominant emission is due to \textit{density}, then the leaves should be similar for both PPV and PPP. All PPV and PPP cubes are normalized to have a mean value of unity. Interestingly, the supersonic density dendrogram looks very similar to the corresponding PPV dendrogram for the same $\delta$ at the level of the leaves. For the subsonic case, the dendrograms of density and PPV look nothing alike (same $\delta$); here the velocity field dominates PPV space, and hence we do not show the isosurfaces for the subsonic case. In supersonic turbulence, the highest density peaks correspond to the highest intensity fluctuations in the PPV. This implies that if one knows that the turbulence in question is supersonic, the structures in PPV space at the level of the leaves can generally be interpreted as 3D density structures. However, if the turbulence is subsonic in nature this assumption is not appropriate. \begin{figure}[th] \centering \includegraphics[scale=.43]{fig11.eps} \caption{Dendrograms of density (right column) and PPV (left column). Supersonic isosurfaces and their corresponding dendrograms are shown in the top and middle rows, respectively. Colors correspond between structures in the isosurface figures and the dendrogram. Subsonic dendrograms are shown in the bottom row. } \label{fig:ppp-ppv} \end{figure} \section{Application} \label{sec:app} The dendrogram shows dependencies on the parameters of turbulence that are important for studies of star-forming regions and the diffuse ISM. When analyzing a particular data set, one should keep in mind that comparisons between observational and scaled numerical data, or comparisons between different clouds or objects within the same data set, are the most useful means of extracting these parameters. 
Our simulations can be scaled to observations by specifying the physical size of the simulation volume, the isothermal sound speed of the gas, and the mass density. In particular, appropriate scalings must be made in the case of the self-gravitating simulation. We can set the strength of self-gravity by changing the physical scaling of the simulations, i.e. by changing the box size, cloud mass, and crossing time. In this case, the relevant scalings between physical and code units are the size scale factor ($x_0$), the velocity scale factor ($v_0$), the time scale factor ($t_0$), and the mass scale factor ($M_0$). These are given by: \begin{equation} x_0=\frac{L_{obs}}{L} \end{equation} where $L_{obs}$ is the physical length of the box, and $L$ is in code units and is equal to unity, \begin{equation} v_0=\frac{c_{s,obs}}{c_{s,code}} \end{equation} where $c_{s,obs}$ and $c_{s,code}$ are the observed and code sound speeds, respectively, \begin{equation} t_0=\frac{x_0}{v_0} \end{equation} and \begin{equation} M_0=\frac{M_{obs}}{\rho_{code}L^3} \end{equation} where $M_{obs}$ is the observed cloud mass and $\rho_{code}$ is the average density of the simulation (unity). Using these relations between code units and physical units we can define a relation between $2\pi G$ in code units and physical units, and then relate this to the free-fall time. In this case, \begin{equation} 2\pi G_{code}=g=2\pi G_{physical}\left({\frac{t_0}{x_0}}\right)^2\frac{M_0}{x_0} \end{equation} Here we use a value of $2\pi G_{code}=g=0.01$ (see Figure \ref{descrp}). In this case, our free-fall time (in code units, and therefore using $\rho$ of unity) is $t_{ff} \equiv \frac{1}{4}\sqrt{\frac{3\pi^2}{0.01\rho}}=13.6$. The dynamical time (in code units, with a simulation box size of unity and the rms velocity) of our self-gravitating simulation is $t_{dyn}\equiv L/v_{rms}=1.4$. 
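The timescale arithmetic above can be checked directly. A minimal sketch (our own illustration; the constants $g=0.01$, $\rho=1$, and $t_{dyn}=1.4$ are the code-unit values quoted in the text):

```python
import math

# Code-unit values quoted in the text
g     = 0.01   # 2*pi*G in code units
rho   = 1.0    # mean density (code units)
t_dyn = 1.4    # dynamical time L/v_rms (code units)

# Free-fall time in code units: t_ff = (1/4) * sqrt(3*pi^2 / (g*rho))
t_ff = 0.25 * math.sqrt(3.0 * math.pi**2 / (g * rho))
print(f"t_ff  = {t_ff:.1f}")   # ~13.6, as quoted above

# Squaring the ratio of the two timescales gives the global virial
# parameter alpha ~ 90 used for the self-gravitating run.
alpha = (t_ff / t_dyn) ** 2
print(f"alpha = {alpha:.0f}")  # ~90
```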
The ratio of the free-fall time to the dynamical time of the simulation with self-gravity gives the global virial parameter $\alpha\approx({\frac{t_{ff}}{t_{dyn}}})^2\approx 90$. Additional information on scaling simulations to observations can be found in Hill et al. 2008. We include the effects of changing the velocity resolution, thermal broadening, and smoothing in the following subsections. \subsection{Smoothing} \label{obs} We investigate how smoothing and data resolution affect the dendrogram. When dealing with observational data, one must always consider the effect that telescope beam smoothing will have on the measurement. Observations are rarely done with pencil beams, and the measured statistics change as the data are averaged. We expect the effect of smoothing to depend on a dimensionless number, namely the ratio of the turbulence injection scale to the smoothing scale. We apply the same technique as in the previous sections, i.e. exploring the number of structures and the moments of the dendrogram tree statistics, but now include a boxcar smoothing kernel (truncating the edges). We expect smoothing to affect supersonic turbulence and cases of high self-gravity the most, since shocks and small-scale gravitational clumps become smoothed out and more difficult for the algorithm to identify. In the subsonic or low-gravity cases, smoothing makes less of a difference, since the gas is already diffuse and less hierarchical. We show how the moments and number of structures change with smoothing size (in pixels) in Figure~\ref{fig:smoothmomnumb}. One could also discuss smoothing beam size in terms of the injection scale of the turbulence: for instance, 7 pixel smoothing represents a beam scale that is 30 times smaller than our injection scale of turbulence. 
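For concreteness, a minimal pure-Python sketch of boxcar smoothing with the kernel truncated at the edges (a 1D illustration of the scheme described above; the actual analysis smooths the data cubes, and the function name is ours):

```python
def boxcar_smooth(data, width):
    """1D boxcar (moving-average) smoothing with the kernel truncated
    at the edges rather than padded, as described in the text."""
    half = width // 2
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        window = data[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A sharp spike (a stand-in for a shock or small gravitational clump)
# is spread out and lowered by smoothing, making it harder for a
# threshold-based algorithm to identify.
signal = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]
print(boxcar_smooth(signal, 3))
```

The peak is reduced while the total (for an interior spike) is conserved, which is why smoothing lowers the dendrogram moments and structure counts without changing the overall trends.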
\begin{figure}[tbh] \centering \includegraphics[scale=.7]{fig12.eps} \caption{Moments of the dendrogram distribution (top four panels) and the number of structures (bottom panel) with smoothing vs. the threshold parameter $\delta$. The left panel is sub-Alfv\'enic and the right panel is super-Alfv\'enic, and the $y$-axis is the same for both columns for ease of comparison between the two. Both cases are for the M7 models.} \label{fig:smoothmomnumb} \end{figure} We found that, in general, subsonic and transonic turbulence are not as affected by smoothing as highly supersonic models. In light of this, we plot the moments and number of structures vs. $\delta$ for different smoothing degrees for a highly supersonic model (the M7 models) in Figure~\ref{fig:smoothmomnumb}. The two panels show different Alfv\'enic regimes with the $y$-axis the same for both for ease of comparison. Black lines indicate no smoothing, while red and blue indicate three and seven pixel smoothing, respectively. Error bars are produced by taking the standard deviation between different time snapshots of the simulations with well-developed turbulence. As smoothing increases for this supersonic model, the values of the moments as well as the total number of structures decrease. However, even out to seven pixel smoothing, the differences between the Alfv\'enic cases are evident in the mean and variance, even allowing for the error bars. Furthermore, the trends with the threshold parameter do not change when we introduce smoothing, which gives us further confidence that this technique can be applied to observational data. Other than the change in amplitude, the trends remain the same as those seen in Section \ref{results}. \subsection{Velocity Resolution and Thermal Broadening} In addition to smoothing, we must also consider the effects of velocity resolution. As the velocity resolution changes in PPV space, so do the structures observed. 
We investigated how the number of structures in the dendrogram distribution changes when we vary the velocity resolution. We find that the number of substructures drops as the velocity resolution decreases, from several hundred to several dozen when changing the velocity resolution from $v_{res}=0.07$ to $v_{res}=0.7$. This corresponds to the channel sampling dropping from $\approx$ 60 to 15 channels. This may yield too few structures in the dendrogram distribution to examine the moments; however, the general trends with the physical parameters stay consistent with Section \ref{results}. An additional observational consideration concerns thermal broadening. The bulk of this paper focuses on the effects of turbulence and magnetic fields in the creation of hierarchical structure in ISM clouds. However, for warm subsonic or transonic gas, thermal broadening effects should also be considered. Convolution with a thermal broadening profile (i.e. a Gaussian) will smooth out the velocity profiles in these cases. However, we expect the structures seen in supersonic gas to be unaffected, since turbulence dominates the line broadening. To demonstrate this, we convolve the line profiles of four of our simulations (models M0.7, M3, M7, M8) with Gaussian profiles to mimic the effects of thermal broadening. The thermal Gaussian has FWHM given by the ratio of the turbulent line width to the sonic Mach number. We repeat the analysis done in Figure \ref{fig:maxvsdelta}, using the models which now include thermal broadening, in Figure~\ref{fig:thermal}. We show the fitted slopes of the number of structures vs. $\delta$ from Figure \ref{fig:maxvsdelta} (top panel) as solid black lines accompanied by the numerical value of the slope. These black lines serve as a reference against which to compare the simulations that now include thermal broadening (colored symbols). 
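The thermal-broadening prescription above can be sketched in a few lines. This is our own 1D illustration (the profile, channel counts, and function names are hypothetical), but the kernel width follows the text: thermal FWHM = turbulent line width / ${\cal M}_s$, so subsonic gas gets a kernel broader than the turbulent width and its peak intensity drops far more than in the supersonic case.

```python
import math

def gaussian_kernel(fwhm, n=21):
    """Discrete Gaussian with the given FWHM (in channels), normalized."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM = 2.355*sigma
    half = n // 2
    k = [math.exp(-0.5 * ((i - half) / sigma) ** 2) for i in range(n)]
    s = sum(k)
    return [v / s for v in k]

def convolve(profile, kernel):
    """Direct convolution, truncated at the profile edges."""
    half = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(profile):
                acc += profile[idx] * kv
        out.append(acc)
    return out

# Thermal FWHM = turbulent line width / sonic Mach number (per the text).
turb_width = 5.0  # turbulent line width, in channels (illustrative)
profile = [1.0 if 8 <= i <= 12 else 0.0 for i in range(21)]
for Ms in (0.7, 7.0):
    smoothed = convolve(profile, gaussian_kernel(turb_width / Ms))
    print(f"M_s={Ms}: peak drops to {max(smoothed):.2f}")
```

For ${\cal M}_s=7$ the kernel is much narrower than the line and the profile is nearly untouched; for ${\cal M}_s=0.7$ the peak is strongly suppressed, which is why the subsonic $\delta$ values must be lowered.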
As expected, the supersonic dendrograms are mostly unaffected by the inclusion of thermal broadening, since turbulence dominates the line profiles. We find the number of structures vs. $\delta$ and the corresponding slopes (top plot) for the supersonic simulations to be very similar to those shown in Figure \ref{fig:maxvsdelta}, with only a slight shallowing of the slopes. Additionally, the amount of hierarchical branching (bottom plot) is also similar. However, the subsonic PPV cube intensities are lowered more drastically as a result of the convolution with a broader Gaussian, and thus the values of $\delta$ must be lowered as well. Additionally, the slopes seen for the subsonic simulations in the top plot of Figure~\ref{fig:thermal} are shallower than the slopes of the simulations with no thermal broadening included (compare with the solid lines referencing Figure \ref{fig:maxvsdelta}). Therefore, when examining the dendrograms of warm gas (such as HI and H$\alpha$), thermal broadening effects must also be taken into account. \begin{figure}[tbh] \centering \includegraphics[scale=.73]{fig13a.eps} \includegraphics[scale=.73]{fig13b.eps} \caption{Top: Total number of structures (leaves and branches) vs.~$\delta$. Bottom: Number of segments from root to leaf on the largest branch of the tree vs. $\delta$. Both plots are similar to Figure \ref{fig:maxvsdelta}, only here we include the effects of thermal broadening. The color and symbol scheme used to represent different sonic Mach numbers is the same as that of Figure \ref{fig:maxvsdelta} for both top and bottom plots.} \label{fig:thermal} \end{figure} \section{Discussion} \label{disc} Hierarchical tree diagrams are finding more applications in interstellar studies, not only to locate clumps and calculate their properties, but also to characterize the physics present in interstellar and molecular gas. 
We used dendrograms to analyze how turbulence, magnetic fields, and self-gravity shape the amount of structure and gas hierarchy in isothermal simulations. We examined the changes in the distribution of the dendrogram as we varied the threshold parameter $\delta$. This is analogous to changing the corresponding threshold parameter in other techniques that rely on contouring thresholds, e.g. in the Genus analysis (see Chepurnov et al. 2009). By varying $\delta$ we obtained a new outlook on the technique; in particular, we found that the dendrogram distribution and hierarchy have a strong dependency on the magnetization and compressibility of the gas and are sensitive to the amount of self-gravity. \subsection{The Hierarchical Nature of MHD Turbulence} The number of structures and the amount of hierarchy formed by MHD turbulence have interesting implications for the evolution of ISM clouds and for the star formation problem. In Section \ref{results} we found that more hierarchical structure and more overall structure were created in the presence of supersonic turbulence. We also found that the inclusion of self-gravity enhanced these trends. The magnetic field also had a strong influence on the creation of hierarchical nesting in PPV space. The relationship between the magnetization and the cloud dynamics is still not well understood, especially in regard to star formation. Star-forming clouds are known to be hierarchical in nature and magnetized, but their exact Alfv\'enic nature is less clear. The results from this work suggest that very hierarchical clouds might tend towards being super-Alfv\'enic. Several authors have presented a variety of evidence for molecular clouds being super-Alfv\'enic. This includes the agreement of simulations and observations of Zeeman-splitting measurements, $B$ vs. $\rho$ relations, ${\cal M}_A$ vs. $\rho$ relations, statistics of extinction measurements, etc. (Padoan \& Nordlund 1999; Lunttila et al. 2008; Burkhart et al. 
2009; Crutcher et al. 2009; Collins et al. 2012). Furthermore, a study by Burkhart et al. 2009 found that even in the presence of globally sub-Alfv\'enic turbulence, the highest density regions tend towards being locally super-Alfv\'enic. It is interesting that the dendrogram technique also points to super-Alfv\'enic turbulence as an avenue for hierarchical structure creation. This provides motivation for applying the dendrogram technique to observational data with varying threshold value $\delta$, in order to see how the nature of the hierarchical structure and the total number of structures change in the observations. \subsection{Characterizing Self-gravity and Obtaining the Sonic and Alfv\'enic Mach Numbers from the Observations} We provided a systematic study of the variations of the dendrogram thresholding parameter $\delta$ with the sonic and magnetic Mach numbers. We also included a simulation with weak self-gravity (global virial number of 90) in order to investigate the influence gravity has on the observed hierarchy. While real molecular clouds generally have virial numbers much lower than this value, we showed that the dendrogram is highly influenced by the inclusion of even weak self-gravity. The sonic and Alfv\'enic Mach numbers, as well as the virial parameter, are critical for understanding most processes in diffuse and molecular gas, including the process of star formation. Thus, the dendrogram, with its sensitivity to these parameters, provides a possible avenue for obtaining the characteristics of turbulence and the relative importance of self-gravity to turbulence in the ISM. For example, the dendrogram, coupled with a virial analysis, was already used to compare the relative importance of self-gravity within the L1148 GMC cloud using $^{13}$CO data in Rosolowsky et al. 2008 and Goodman et al. 2009. 
We view this work as a springboard for applying the technique to observational data, which is why we addressed the issues of smoothing and thermal broadening in Section \ref{sec:app}. It is clear that the relations between the dendrogram structures, their statistics, and the thresholding value $\delta$, as explored in this work (for examples see Figures \ref{fig:maxvsdelta} and \ref{fig:sg5}), do not yield universal numbers, i.e. they depend on observational characteristics such as the velocity resolution, beam smoothing, etc. Therefore, in order to apply the dendrogram to observations, we feel it is necessary to define a fiducial dendrogram for data where some information is known about the turbulence and, in the case of self-gravitating clouds, the virial parameter. The fiducial dendrogram can then be compared to other regions within the same data, which all share the same observational constraints. Similarly, for comparison of simulations and observations, the simulated observations must be tailored to the resolution of the observational data. In order to obtain a fiducial region for further dendrogram analysis and to increase the reliability of the parameters found via the dendrogram, it is advantageous to combine different techniques designed to investigate ISM turbulence. For instance, by applying the VCA and VCS techniques to PPV data (see Lazarian 2009 for a review), one can obtain the velocity and density spectra of turbulence. While these measures are known to depend on ${\cal M}_s$ and, to a lesser degree, on ${\cal M}_A$ (see Beresnyak, Lazarian \& Cho 2005; Kowal, Lazarian \& Beresnyak 2007; Burkhart et al. 2009), the utility of the spectra is not limited to measuring these quantities. Spectra provide a unique way to investigate how the energy cascades between different scales, and show whether comparing observations with simulations with a single scale of injection is reasonable. 
The analysis of the anisotropies of correlations using velocity centroids provides insight into media magnetization, i.e., it provides ${\cal M}_A$ (Lazarian et al. 2002; Esquivel \& Lazarian 2005), which is complementary to the dendrogram technique. Studies of the variance, skewness, and kurtosis of the PDFs (see Kowal, Lazarian \& Beresnyak 2007; Burkhart et al. 2009, 2010, 2012) provide measures of the sonic Mach number ${\cal M}_s$. Similarly, Tsallis statistics (Esquivel \& Lazarian 2010; Tofflemire et al. 2011) provide additional ways of estimating both ${\cal M}_s$ and ${\cal M}_A$. We feel that the approach to obtaining these parameters should be conducted with synergetic use of multiple tools, as was done in Burkhart et al. 2010 on the SMC. The dendrogram is a unique tool in that it can classify the hierarchical nature of the data, and it should be added to the standard set of statistical tools for studies of ISM data. All these techniques provide independent ways of evaluating parameters of turbulence, and therefore their application to the same data set provides a more reliable estimate of key parameters such as compressibility, magnetization, and degree of self-gravity. Dendrograms have an advantage over other statistics designed to recover turbulence parameters in that one can analyze the resulting tree diagram in many different ways, as highlighted here and in previous works. These include finding local maxima, calculating physical properties of the dominant emission, exploring how those clumps are connected in PPV, and varying the threshold while calculating moments and levels of hierarchy. Of course, one should keep in mind that the medium we investigate observationally is far from simple. Multiple energy injection sources, for example, are not excluded. Thus, obtaining a similar answer with different techniques should provide us with additional confidence in our results. 
Finally, we should stress that for studies of astrophysical objects, the dendrogram and other statistical measures can be applied locally to different parts of the media. For instance, Burkhart et al. (2010) did not characterize the entire SMC with one sonic Mach number. Instead, several measures were applied to parts of the SMC in order to obtain a distribution of the turbulence in the galaxy. A similar local scale selection was also applied to the SMC in Chepurnov et al. (2008) using the Genus technique. Correlating the variations of the turbulence properties with observed properties of the media, e.g. the star formation rate, should provide insight into how turbulence regulates many key astrophysical processes. \section{Summary} \label{con} We investigated dendrograms of isothermal MHD simulations with varying levels of gravity, compressibility, and magnetization using multiple values of the threshold parameter $\delta$. The dendrogram is a promising tool both for studying gas connectivity in the ISM and for characterizing turbulence. In particular: \begin{itemize} \item We propose using statistical descriptions of dendrograms as a means to quantify the degree of hierarchy present in a PPV data cube. \item Shocks, self-gravity, and super-Alfv\'enic turbulence create the most hierarchical structure in PPV space. \item The number of dendrogram structures depends primarily on the sonic Mach number and self-gravity, and secondarily on the global magnetization. \item The first four statistical moments of the distribution of dendrogram leaves and nodes have monotonic dependencies on the inclusion of self-gravity and on the sonic and Alfv\'en Mach numbers over a range of $\delta$. \item The dendrogram provides a convenient way of comparing PPP to PPV in simulations. Density structures are dominant in supersonic PPV but not in subsonic PPV. Thus, it is more justifiable to compare PPV to PPP when the gas is known to be supersonic. 
\end{itemize} \acknowledgments The authors thank Professor Diego Falceta-Gon\c{c}alves for the use of the self-gravitating simulation and useful discussions. B.B. also thanks Professor Jungyeon Cho for helpful discussions. B.B. acknowledges support from the NSF Graduate Research Fellowship and the NASA Wisconsin Space Grant Institution. B.B. is grateful to Chris Beaumont for valuable discussions and the use of his Dendrogui code. A.L. thanks NSF grant AST 1212096, and both A.L. and B.B. thank the Center for Magnetic Self-Organization in Astrophysical and Laboratory Plasmas for financial support. This work was completed during the stay of A.L. as Alexander-von-Humboldt-Preistr\"ager at the Ruhr-University Bochum. A.G. acknowledges support from NSF Grant No. AST-0908159. E.R. is supported by a Discovery Grant from NSERC of Canada.
\section{Introduction (continuous case)} Consider an elliptic operator \be L=a(x)\frac{\d^2}{\d x^2}+ b(x)\frac{\d}{\d x} \de (with $a>0$) on $E:=(-M, N)$\, $(M, N\le \infty)$. Define a function $C(x)$: $$C(x)=\!\int_{o}^x \frac b a,\qqd x\in E,$$ where $o\in E$ is a reference point. Here and in what follows, the Lebesgue measure $\d x$ is often omitted. It is convenient for us to define two measures $\mu$ and $\nu$: \be \mu(\d x)=\frac{e^{C(x)}}{a(x)}\d x,\qqd \nu(\d x)= e^{-C(x)}\d x. \de As usual, the norm on $L^2(\mu)$ is denoted by $\|\cdot\|$. Define $$\aligned {\scr A}(-M, N)&= \text{the set of absolutely continuous functions on $(-M, N)$},\\ {\scr A}_0(-M, N)&= \{f\in {\scr A}(-M, N): f\text{ has a compact support} \},\\ D(f)&=\int_{-M}^N {f'}^2 e^C,\qqd f\in {\scr A}(-M, N),\;\; M, N\le \infty. \endaligned $$ Here $D(f)$ is allowed to be $\infty$. We are interested in the following eigenvalues: \begin{align} \lz^{\text{\rm DD}}&=\inf\{D(f): f\in {\scr A}_0(-M, N), \; \|f\|=1\},\\ \lz^{\text{\rm NN}}&=\inf\{D(f): f\in {\scr A}(-M, N), \;\mu(f)=0,\; \|f\|=1\}, \end{align} where $\mu(f)=\int_E f \d \mu$. The basic estimates, of $\lz^{\text{\rm DD}}$ for instance, given in \ct{cmf10} are as follows: \be\big(4\,\kz^{\text{\rm DD}}\big)^{-1}\le \lz^{\text{\rm DD}}\le \big(\kz^{\text{\rm DD}}\big)^{-1},\lb{40}\de where \be\big(\kz^{\text{\rm DD}}\big)^{-1}\!\!= \inf_{x<y} \big[\nu (-M, x)^{-1}\! + \nu (y, N)^{-1}\big]\, \mu (x, y)^{-1},\;\; \mbox{$\mu(x, y)\!:=\!\!\int_x^y \d\mu$}.\lb{05}\de The proof for the upper estimate is already straightforward, simply using the classical variational formula for $\lz^{\text{\rm DD}}$ (cf. \rf{cmf10}{Proof (b) of Theorem 8.2}). However, the proof for the lower estimate is much harder and deeper, using capacity theory (cf. \rf{cmf10}{Sections 8, 10}). Even though the capacitary tool is suitable in a general setup (cf. 
\ct{fmut}, \rf{cmf05}{Theorems 7.1 and 7.2}, \rf{mav}{Chapter 2}), it is still expected to have a direct proof (avoiding capacity) in such a concrete situation. This is done at the beginning of the next section. Surprisingly, the simple proof also works in the ergodic case, for which the original proof is based on (\ref{40}) plus a use of duality and the coupling technique. The main body of the paper is devoted to an improvement of the basic lower estimate given in (\ref{40}), as stated in Corollary \ref{t11} below. The result can be regarded as a typical application of a recent variational formula (\rf{cmf11}{Theorem 4.2} or Theorem \ref{t21} below). This note is an addition to the recent papers \ct{cmf10, cmf11}, from which one can find the motivation of the study on this topic and further references. It is remarkable that the new result makes the whole analytic proof of the basic estimates more elementary. Here is our first main result, which is a refinement of \rf{cmf11}{Corollary 4.3}. \crl\lb{t11} {\cms \begin{itemize} \item [(1)] We have $$\lz^{\text{\rm DD}}\ge \big({\underline\kz}^{\text{\rm DD}}\big)^{-1}\ge \big(4\,\kz^{\text{\rm DD}}\big)^{-1},$$ where $\kz^{\text{\rm DD}}$ is given in (\ref{05}) and ${\underline\kz}^{\text{\rm DD}}$ is defined by (\ref{11}) below. \item [(2)] Let $\mu(-M, N)<\infty$. Then assertion (1) holds if the codes DD are replaced by NN \big(for instance, $\lz^{\text{\rm NN}}\ge \big({\underline\kz}^{\text{\rm NN}}\big)^{-1}$\big) and the measures $\mu$ and $\nu$ are exchanged. \end{itemize}} \decrl The remainder of the paper is organized as follows. In the next section, we present a short alternative proof of the estimates in (\ref{40}). This proof shows one of the main new ideas of the paper. Then we prove Corollary \ref{t11}. Two illustrating examples are also included in this section. The discrete analog of Corollary \ref{t11} is presented in the third section. 
\section{Proofs and Examples} \medskip \noindent {\bf Proof of (\ref{40})}. Let $\uz\in (-M, N)$ be a reference point. Define $$\dz_{\uz}^-\!=\!\!\sup_{z\in (-M,\, \uz)}\nu(-M,\, z)\, \mu(z,\, \uz),\qqd \dz_{\uz}^+\!=\!\!\sup_{z\in (\uz,\, N)} \mu(\uz,\, z)\,\nu(z,\, N).$$ As will be remarked in the next section, we may assume that $\dz_{\uz}^{\pm}<\infty$. Otherwise, the problem becomes either trivial or degenerate. Next, denote by $\lz_{\uz}^{\pm}$ the principal eigenvalues on $(-M, \uz)$ and $(\uz, N)$, respectively, with common reflecting (Neumann) boundary at $\uz$ and absorbing (Dirichlet) boundary at $-M$ (and $N$) provided $M< \infty$ ($N<\infty$). Actually, by an approximating procedure, one may assume that $M, N<\infty$ (cf. \rf{cmf10}{Proof of Corollary 7.9}). Next, by a splitting technique, one may choose $\uz=\bar\uz$ to be the unique solution to the equation $\lz_{\uz}^-=\lz_{\uz}^+$. Then they coincide with $\lz^{\text{\rm DD}}$, since by \rf{czz03}{Theorem 1.1} we have $$\lz_{\uz}^-\wedge \lz_{\uz}^+\le \lz^{\text{\rm DD}}\le \lz_{\uz}^-\vee\lz_{\uz}^+$$ for every $\uz\in (-M, N)$, where $x\wedge y =\min\{x, y\}$ and dually $x\vee y =\max\{x, y\}$. Alternatively, $\bar\uz$ is the root of the derivative of the eigenfunction of $\lz^{\text{\rm DD}}$, by \rf{czz03}{Proposition 1.3} and the monotonicity of the eigenfunctions of $\lz_{\bar\uz}^{\pm}$. From now on in this proof, we fix this $\bar\uz$. For given $\vz>0$, let $\bar x<\bar\uz$ and $\bar y>\bar\uz$ satisfy $$\nu(-M,\,\bar x)\,\mu(\bar x,\, \bar\uz)\ge \dz_{\bar\uz}^--\vz,\qqd \mu(\bar\uz,\, \bar y)\, \nu(\bar y,\, N) \ge \dz_{\bar\uz}^+-\vz,$$ respectively. 
As a continuous analog of \rf{cmf00}{Theorem 1.1}, we have $$\big[\big(\lz^{\text{\rm DD}}\big)^{-1} =\big] \qqd \big(\lz_{\bar\uz}^{+}\big)^{-1}\le 4\,\dz_{\bar\uz}^+\qqd \big[\le 4 \mu(\bar\uz,\, \bar y)\, \nu(\bar y,\, N)+ 4\vz\big].$$ Hence, $$\big[\big(\lz^{\text{\rm DD}}\big)^{-1}-4\vz\big]\nu(\bar y,\, N)^{-1}\le 4 \mu(\bar\uz,\, \bar y).$$ In parallel, we have $$\big[\big(\lz^{\text{\rm DD}}\big)^{-1}-4\vz\big]\nu(-M,\, \bar x)^{-1}\le 4 \mu(\bar x,\,\bar\uz).$$ Summing up the last two inequalities, it follows that $$\big[\big(\lz^{\text{\rm DD}}\big)^{-1}-4\vz\big]\big[ \nu(-M,\, \bar x)^{-1} +\nu(\bar y,\, N)^{-1}\big] \le 4 \mu(\bar x,\,\bar y).$$ That is, $$\big(\lz^{\text{\rm DD}}\big)^{-1}-4\vz \le 4 \big[ \nu(-M,\, \bar x)^{-1} +\nu(\bar y,\, N)^{-1}\big]^{-1} \mu(\bar x,\,\bar y).$$ In view of (\ref{05}), the right-hand side is bounded from above by $4 {\kz}^{\text{\rm DD}}.$ Since $\vz$ is arbitrary, we have proved the lower estimate in (\ref{40}). A direct proof for the upper one in (\ref{40}) is presented in \rf{cmf10}{Proof (b) of Theorem 8.2}. \deprf \medskip \noindent {\bf Proof of the dual of (\ref{40})}: $$\big(4\,\kz^{\text{\rm NN}}\big)^{-1}\le \lz^{\text{\rm NN}}\le \big(\kz^{\text{\rm NN}}\big)^{-1},$$ where $$\big(\kz^{\text{\rm NN}}\big)^{-1}\!\!= \inf_{x<y} \big[\mu (-M, x)^{-1}\! + \mu (y, N)^{-1}\big]\, \nu (x, y)^{-1}.$$ By exchanging ``Neumann'' and ``Dirichlet'', the splitting point $\uz=\bar\uz$ is now a common Dirichlet boundary, and $-M$ becomes a Neumann boundary if $M<\infty$ (and similarly for $N$). In other words, $\bar\uz$ is the unique root of the eigenfunction of $\lz^{\text{\rm NN}}$. Now, in the proof above, we need only use \rf{cmf00}{Theorem 3.3} instead of \rf{czz03}{Theorem 1.1} and exchange $\mu$ and $\nu$. We have thus returned to the rule mentioned in \ct{cmf11}: exchanging the boundary conditions ``Neumann'' and ``Dirichlet'' simultaneously leads to the exchange of the measures $\mu$ and $\nu$. 
Here is a direct proof for the upper estimate. Given $x, y\in (-M, N)$ with $x<y$, let $\bar\uz=\bar\uz (x, y)$ be the unique solution to the equation $$\aligned &\mu(-M, x)\nu(x, \uz)+\int_x^{\uz}\mu(\d z)\nu(z, \uz)\\ &\qd =\mu(y, N)\nu(\uz, y)+\int_{\uz}^y\mu(\d z)\nu(\uz, z),\qqd \uz\in (x, y).\endaligned$$ Next, define $$f(z)=-\mathbbold{1}_{\{z\le \bar\uz\}}\nu\big(x\vee z, \bar\uz\big) +\mathbbold{1}_{\{z>\bar\uz\}}\nu\big(\bar\uz, y\wedge z\big).$$ Then $\mu(f)=0$ by the definition of $\bar\uz$. We have $$\int_{-M}^N \big|f'\big|^2 e^C =\nu\big(x, \bar\uz\big)+\nu\big(\bar\uz, y\big) =\nu\big(x, y\big).$$ Moreover, $$\aligned \int_{-M}^N \big(f-\pi(f)\big)^2\d\mu&= \int_{-M}^N f^2\d\mu\\ &> \int_{-M}^x f^2\d\mu+\int_{y}^N f^2\d\mu\\ &=\mu(-M, x)\, \nu\big(x, \bar\uz\big)^2+\mu(y, N)\,\nu\big(\bar\uz, y\big)^2. \endaligned$$ Note that the function $$\gz(x)=\az x^2 +\bz (1-x)^2, \qqd x\in (0, 1),\; \az, \bz>0$$ achieves its minimum $\big(\az^{-1}+\bz^{-1}\big)^{-1}$ at $x^*=\bz/(\az+\bz)=(1+\az/\bz)^{-1}$ (set $\gz'(x)=2\az x-2\bz(1-x)=0$). As an application of this result with $$\az=\mu(-M, x),\qd \bz=\mu(y, N),\qd x=\nu\big(x, \bar\uz\big)/\nu\big(x, y\big),$$ we get $$\int_{-M}^N \big(f-\pi(f)\big)^2\d\mu \ge \frac{\nu(x, y)^2}{\mu(-M, x)^{-1}+\mu(y, N)^{-1}}.$$ Hence $$\frac{\int_{-M}^N \big(f-\pi(f)\big)^2\d\mu}{\int_{-M}^N \big|f'\big|^2 e^C} \ge \frac{\nu(x, y)}{\mu(-M, x)^{-1}+\mu(y, N)^{-1}}.$$ Taking the supremum with respect to $x<y$, we obtain the required $\kz^{\text{\rm NN}}$. \deprf It is remarkable that although the last proof is parallel to the previous one, it does not depend on (\ref{40}). This is rather lucky since in other cases, part (2) of Corollary \ref{t11} for instance, we do not have such a direct proof. From now on, unless otherwise stated, we restrict ourselves to the Dirichlet case. For fixed $\uz$, much is known about $\lz_{\uz}^{\pm}$ (variational formulas, an approximating procedure and so on; refer to \ct{cmf05, cmf10} for instance). 
Only a little of this is used in the proof above. For instance, by \rf{czz03}{Corollary 1.5}, we have $$\Big(\sup_{\uz}\big[\dz_{\uz}^-\wedge \dz_{\uz}^+\big]\Big)^{-1} \ge \lz^{\text{\rm DD}} \ge \Big(4\inf_{\uz}\big[\dz_{\uz}^-\vee \dz_{\uz}^+\big]\Big)^{-1}.$$ Thus, if we choose $\bar\uz$ to be the solution of the equation $\dz_{\uz}^-= \dz_{\uz}^+$, then we obtain $$\big(\dz_{\bar\uz}^-\big)^{-1} \ge \lz^{\text{\rm DD}} \ge \big(4\dz_{\bar\uz}^-\big)^{-1},$$ which is even more compact than (\ref{40}) in view of the comparison of $\kz^{\text{\rm DD}}$ and $\dz_{\uz}^{\pm}$. The problem is that $\bar\uz$, especially the one used in the first proof of this section, is usually not explicitly known, and so a large part of the known results for $\lz_{\bar\uz}^{\pm}$ is not practical. To overcome this difficulty, the first proof above uses two parameters $x$ and $y$ to get $\kz^{\text{\rm DD}}$ and then to obtain the explicit lower estimate in (\ref{40}). For our main result, Corollary {\ref{t11}}, the fixed point $\bar\uz$ used in the proof of (\ref{40}) is replaced by its mimic given in (\ref{08}) below for a suitable test function $f$. The difference is that equation (\ref{08}) is explicit, unlike the one for $\bar\uz$ used in the first proof above. \medskip \noindent {\bf Proof of Corollary \ref{t11}\,(1)}. By \ct{cmf10} or \ct{cmf11}, we know that part (2) of Corollary \ref{t11} is a dual of part (1). Hence in what follows, we need only study part (1). The first inequality in part (1) comes from \rf{cmf11}{Corollary 4.3}. Thus, it suffices to prove the last inequality in part (1). Even though it is not completely necessary, we assume that $M, N<\infty$ until the last paragraph of the proof. For a given $f\in {\scr C}_+:$ $${\scr C}_+\!=\!\{f\!\in\! {\scr C}(-M, N)\!: f\!>\!0 \;\text{on}\; (-M, N),\; f(-M+0)\!=\!0 \text{ and } f(N-0)\!=\!0\},$$ define \begin{align} h^-(z)&\!=\!h_f^-(z) \!=\!\!\int_{-M}^{z}\! 
e^{-C(u)}\d u \int_u^{\uz}\!\frac{e^C f}{a},\qqd z\le \uz, \\ h^+(z)&\!=\!h_f^+(z) \!=\!\!\int_{z}^{N}\! e^{-C(u)}\d u \int_{\uz}^u\!\frac{e^C f}{a},\qqd z>\uz, \lb{8} \end{align} where $\uz=\uz (f)\in (-M, N)$ is the unique root of the equation: \be h^-(\uz)= h^+(\uz)\lb{08}\de provided $h_f^{\pm}<\infty$. The uniqueness of $\uz$ should be clear since on $(-M, N)$, as a function of $\uz$, $h^-(\uz)$ is continuously increasing from zero to $h^-(N-0)>0$ and $h^+(\uz)$ is continuously decreasing from $h^+(-M+0)>0$ to zero. Next, define $$I\!I^{\pm}(f)= h^{\pm}/f.$$ Then we have the following variational formula. \thm\lb{t21}{\bf\rf{cmf11}{Theorem 4.2\,(1)}}\;\;{\cms Assume that $\nu(-M, N)<\infty$. Then \be\lz^{\text{\rm DD}}= \sup_{f\in {\scr C}_+}\Big\{\Big[\inf_{z\in (-M,\,\uz)}I\!I^-(f)(z)^{-1}\Big] \bigwedge \Big[\inf_{z\in (\uz, N)}I\!I^+(f)(z)^{-1}\Big]\Big\}.\lb{9}\de }\dethm We remark that in the original statement of \rf{cmf11}{Theorem 4.2\,(1)}, the boundary condition ``$f(-M+0)\!=\!0 \text{ and } f(N-0)\!=\!0$'' is omitted. The condition is added here for the use of the operators $I^{\pm}$ (different from $I\!I^{\pm}$) to be defined later. However, the conclusion (\ref{9}) remains true since the eigenfunction of $\lz^{\text{\rm DD}}$ does satisfy this condition. We now fix $x<y$ and let $f=f^{x, y}$: \be f^{x, y}(s)= \begin{cases} \sqrt{\fz^+(y)\fz^-(s\wedge x)/\fz^-(x)},\qd & s\le y\\ \sqrt{\fz^+(s)}, & s\ge y, \end{cases} \de where $$\fz^-(s)=\nu (-M, s)\qd\text{ and }\qd\fz^+(s)=\nu(s, N).$$ Certainly, here we assume that $\fz^{\pm}<\infty$ (which is automatic whenever $M, N< \infty$). Clearly, $f^{x, y}\in {\scr C}_+$. Here we are mainly interested in those pairs $\{x, y\}$ having the property $x< \uz< y$. 
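Theorem \ref{t21} and the root $\uz(f)$ of (\ref{08}) can be illustrated numerically. The following sketch is a hypothetical illustration only (not part of the proof): it takes the Laplacian on $(0, 1)$, i.e. $C\equiv 0$, $a\equiv 1$ and $\mu=\nu=$ Lebesgue measure (the example revisited at the end of this section), with the eigenfunction $f(s)=\sin(\pi s)$ as test function. By symmetry the root of (\ref{08}) is $\uz=1/2$, and $I\!I^-(f)=h^-/f$ is then constant, equal to $1/\lz^{\text{\rm DD}}=1/\pi^2$, so the supremum in (\ref{9}) is attained at $f$.

```python
import math

# Laplacian on (0, 1): C = 0, a = 1, so e^{+-C} = 1 and mu = nu = Lebesgue.
f = lambda s: math.sin(math.pi * s)   # the known eigenfunction of lambda^DD

def quad(g, lo, hi, n=100):
    # composite midpoint rule for the integral of g over (lo, hi)
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

def h_minus(z, theta):  # h^-(z) = int_0^z du int_u^theta f
    return quad(lambda u: quad(f, u, theta), 0.0, z, 200)

def h_plus(z, theta):   # h^+(z) = int_z^1 du int_theta^u f
    return quad(lambda u: quad(f, theta, u), z, 1.0, 200)

# Bisection for the root of h^-(theta) = h^+(theta); by symmetry it is 1/2.
lo, hi = 0.01, 0.99
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h_minus(mid, mid) < h_plus(mid, mid) else (lo, mid)
theta = 0.5 * (lo + hi)

# For the eigenfunction, II^-(f) = h^-/f is constant, equal to 1/pi^2.
ratios = [h_minus(z, theta) / f(z) for z in (0.1, 0.25, 0.4)]
print(theta, ratios)  # theta is close to 0.5; each ratio is close to 1/pi^2
```

The bisection relies on the monotonicity of $h^{\mp}(\uz)$ noted above.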
As proved in \ct{cmf11}, the quantity ${\underline\kz}^{\text{\rm DD}}$: \begin{align} {\underline\kz}^{\text{\rm DD}} &=\inf_{x<y}\Big[\sup_{z\in (-M,\, \uz)}I\!I^-(f^{x, y})(z)\Big] \bigvee \Big[\sup_{z\in (\uz, N)}I\!I^+(f^{x, y})(z)\Big]\lb{11} \end{align} used in Corollary \ref{t11}\,(1) has an explicit expression: \begin{align} &\inf_{x< y} \bigg\{ \sup_{z\in (-M,\, x)}\bigg[\frac{1}{\sqrt{\fz^-(z)}}\, \mu\Big((\fz^-)^{3/2} \mathbbold{1}_{(-M,\, z)}\Big) + \sqrt{{\fz^-(z)}}\, \mu\Big(\sqrt{\fz^-}\, \mathbbold{1}_{(z,\, x)}\Big)\nonumber\\ &\qqd\qqd\qqd\qd +\sqrt{\fz^-(z)\fz^-(x)}\,\mu(x,\, \uz)\bigg]\nonumber\\ &\qqd\qqd\bigvee \bigg[\frac{1}{\sqrt{\fz^-(x)}}\, \mu\Big((\fz^-)^{3/2} \mathbbold{1}_{(-M,\, x)}\Big) + \mu\Big(\fz^-\, \mathbbold{1}_{(x,\, \uz)}\Big)\bigg]\nonumber\\ &\qqd\qqd \bigvee\sup_{z\in (y,\, N)} \bigg[\frac{1}{\sqrt{\fz^+(z)}}\mu\Big((\fz^+)^{3/2} \mathbbold{1}_{(z,\, N)}\Big) + \sqrt{\fz^+(z)}\,\mu\Big(\sqrt{\fz^+}\, \mathbbold{1}_{(y,\, z)}\Big)\nonumber\\ &\qqd\qqd\qqd\qd\qqd +\sqrt{\fz^+(z)\fz^+(y)}\,\mu(\uz,\, y)\bigg] \bigg\}.\nonumber \end{align} We have thus sketched the original attempt (cf. \rf{cmf11}{Corollary 4.3}) to prove Corollary \ref{t11}\,(1). The study was stopped here since we were unable to compare this long expression with $4\,\kz^{\text{\rm DD}}$. Before moving further, let us make a remark on (\ref{08}). As proved in \rf{cmf11}{(31)}, for fixed $x$ and $y$, equation (\ref{08}) is equivalent to the following one. \begin{align} \!\!\!\!\frac{\mu\big((\fz^-)^{3/2} \mathbbold{1}_{(-M,\, x)}\big)}{\sqrt{\fz^-(x)}}\, \!+\! \mu\big(\fz^-\, \mathbbold{1}_{(x,\, \uz)}\big) \!=\!\frac{\mu\big((\fz^+)^{3/2} \mathbbold{1}_{(y,\, N)}\big)}{\sqrt{\fz^+(y)}}\, \!+\! \mu\big(\fz^+\, \mathbbold{1}_{(\uz,\, y)}\big)\lb{12}.\end{align} The quantity in (\ref{12}) is actually the ratio $$\frac{h^-(\uz)}{f^{x, y}(x)}=\frac{h^+(\uz)}{f^{x, y}(y)}$$ (cf. 
\rf{cmf11}{(34)}) noting that $f^{x, y}$ is a constant on $[x, y]$: $$f^{x, y}(x)=f^{x, y}(y)=\sqrt{\fz^+(y)}.$$ Next, note that the left-hand and the right-hand sides of (\ref{12}) are monotone, with respect to $x$ and $y$ respectively, since each of their derivatives does not change its sign: $$-\frac{e^{-C(x)}}{2(\fz^-)^{3/2}(x)} \mu\big((\fz^-)^{3/2}\mathbbold{1}_{(-M,\, x)}\big)\!\!<\!0\;\text{ and }\; \frac{e^{-C(y)}}{2 (\fz^+)^{3/2}(y)}\, \mu\Big((\fz^+)^{3/2} \mathbbold{1}_{(y,\, N)}\Big)\!\!>\!0.$$ The unique solution $\uz$ to (\ref{08}), or equivalently (\ref{12}), should satisfy \be{\begin{matrix} \displaystyle\lim_{x\to -M}\frac{\mu\big((\fz^-)^{3/2} \mathbbold{1}_{(-M,\, x)}\big)}{\sqrt{\fz^-(x)}}\, \!+\! \mu\big(\fz^-\, \mathbbold{1}_{(-M,\, \uz)}\big) \!\ge\! \frac{\mu\big((\fz^+)^{3/2} \mathbbold{1}_{(\uz,\, N)}\big)}{\sqrt{\fz^+(\uz)}}\, \qd\text{and}\\ \displaystyle\lim_{y\to N}\frac{\mu\big((\fz^+)^{3/2} \mathbbold{1}_{(y,\, N)}\big)}{\sqrt{\fz^+(y)}}\, + \mu\big(\fz^+\, \mathbbold{1}_{(\uz,\, N)}\big)\ge \frac{\mu\big((\fz^-)^{3/2} \mathbbold{1}_{(-M,\, \uz)}\big)}{\sqrt{\fz^-(\uz)}}. \end{matrix}}\lb{30}\de As just mentioned above (cf. \rf{cmf11}{(34)}), we also have \begin{align} \max_{z\in [x,\,\uz]} I\!I^-\big(f^{x, y}\big)(z) &= \max_{z\in [\uz, y]} I\!I^+\big(f^{x, y}\big)(z) =\frac{h^-(\uz)}{f^{x, y}(x)}=\frac{h^+(\uz)}{f^{x, y}(y)}\nonumber\\ &=\frac{1}{\sqrt{\fz^+(y)}}\, \mu\Big((\fz^+)^{3/2} \mathbbold{1}_{(y,\, N)}\Big) + \mu\big(\fz^+\, \mathbbold{1}_{(\uz,\, y)}\big).\lb{13} \end{align} Hence we have arrived at \begin{align} &\Big[\sup_{z\in (-M,\, \uz)}I\!I^-(f^{x, y})(z)\Big] \bigvee \Big[\sup_{z\in (\uz, N)}I\!I^+(f^{x, y})(z)\Big]\nonumber\\ &\qd =\Big[\sup_{z\in (-M,\, x)}I\!I^-(f^{x, y})(z)\Big] \bigvee \frac{h^+(\uz)}{\sqrt{\fz^+(y)}} \bigvee \Big[\sup_{z\in (y, N)}I\!I^+(f^{x, y})(z)\Big]\lb{14} \end{align} which is also known from \ct{cmf11}. 
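Identity (\ref{13}) can be sanity-checked numerically. The snippet below is again only a hypothetical illustration: for the Laplacian on $(0, 1)$, with $\uz=1/2$ and $y=1-x$ (so $\fz^+(s)=1-s$ and $\mu=$ Lebesgue), it evaluates the right-hand side of (\ref{13}) by quadrature and compares it with the closed form $1/8-x^2/10$ that appears for this example near the end of the section.

```python
import math

# RHS of (13) for the Laplacian on (0, 1), theta = 1/2, y = 1 - x:
#   mu((phi^+)^{3/2} 1_{(y,1)}) / sqrt(phi^+(y)) + mu(phi^+ 1_{(theta, y)}).
def rhs13(x, n=4000):
    y, theta = 1.0 - x, 0.5
    h1 = (1.0 - y) / n  # midpoint rule on (y, 1)
    t1 = h1 * sum((1.0 - (y + (i + 0.5) * h1)) ** 1.5 for i in range(n))
    h2 = (y - theta) / n  # midpoint rule on (theta, y)
    t2 = h2 * sum(1.0 - (theta + (j + 0.5) * h2) for j in range(n))
    return t1 / math.sqrt(1.0 - y) + t2

# Compare with the closed form 1/8 - x^2/10 for a few values of x:
vals = [(rhs13(x), 1 / 8 - x ** 2 / 10) for x in (0.1, 0.2, 0.3)]
print(vals)  # the two entries of each pair agree
```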
Define $$I^-(f)(x)=\frac{e^{-C(x)}}{f'(x)}\int_x^{\uz} \frac{e^C}{a}f,\qqd I^+(f)(x)=-\frac{e^{-C(x)}}{f'(x)}\int_{\uz}^x \frac{e^C}{a}f $$ and $$\dz_{x,\, \uz}^-\!=\!\!\sup_{z\in (-M,\, x)}\fz_z^-\,\mu (z, \uz),\qqd \dz_{y,\, \uz}^+\!=\!\!\sup_{z\in (y,\, N)}\fz_z^+\,\mu (\uz, z). $$ Then we have first by the mean value theorem (both $h^-$ and $f^{x, y}$ vanish at $-M$) that $$\sup_{z\in (-M,\, x)}I\!I^-(f^{x, y})(z)\le \sup_{z\in (-M,\, x)}I^-(f^{x, y})(z)$$ and then by \rf{cmf00}{Lemma 1.2} or \rf{cmf05}{page 97} that $$\sup_{z\in (-M,\, x)}I^-(f^{x, y})(z)\le 4\,\dz_{x,\, \uz}^-.$$ Here we remark that the supremum in the definition of $\dz_{x,\, \uz}^-$ is taken over $(-M,\, x)$ rather than $(-M,\, \uz)\supset (-M,\, x)$. Hence the original proof for the last estimate needs a slight modification using the fact that the function $f^{x, y}$ is a constant on $[x,\,\uz]$. In parallel, since $h^+$ and $f^{x, y}$ vanish at $N$, we have $$\sup_{z\in (y,\, N)}I\!I^+(f^{x, y})(z)\le \sup_{z\in (y,\, N)} I^+(f^{x, y})(z) \le 4\,\dz_{y,\, \uz}^+.$$ Therefore, we have arrived at \begin{align} {\underline{\kz}^{\rm DD}} &\le \inf_{x<y}\bigg\{\Big[\sup_{(-M,\, x)}I\!I^-(f^{x, y})\Big] \bigvee \frac{h^+(\uz)}{\sqrt{\fz^+(y)}} \bigvee \Big[\sup_{(y,\, N)}I\!I^+(f^{x, y})\Big]\bigg\}\nonumber\\ &\le \inf_{x<\uz<y}\bigg\{\Big[\sup_{(-M,\, x)}I\!I^-(f^{x, y})\Big] \bigvee \frac{h^+(\uz)}{\sqrt{\fz^+(y)}} \bigvee \Big[\sup_{(y,\, N)}I\!I^+(f^{x, y})\Big]\bigg\}\nonumber\\ &\le \inf_{x<\uz<y}\bigg\{\big[4\,\dz_{x,\, \uz}^-\big]\bigvee \frac{h^+(\uz)}{\sqrt{\fz^+(y)}} \bigvee\big[4\,\dz_{y,\, \uz}^+\big] \bigg\}\nonumber\\ &=: \inf_{x<\uz<y} R(x, y,\,\uz)\nonumber\\ &=:\az .\lb{15} \end{align} The restriction $\uz\in (x, y)$ is due to the fact that the eigenfunction of ${{\lz}^{\rm DD}}$ is unimodal and $\uz$ is a mimic of its maximum point. The use of $I\!I^{\pm}$, $I^{\pm}$ and $\dz^{\pm}$ is now standard (cf. \ct{cmf05}--\ct{cmf11}, for instance). 
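The quantity $\kz^{\text{\rm DD}}$ from (\ref{05}) entering the final bound is itself computable by brute force. As a small illustrative sketch (not part of the original proof), for the Laplacian on $(0, 1)$ one has $\fz^-(x)=x$, $\fz^+(y)=1-y$ and $\mu(x, y)=y-x$, and a grid search recovers $\kz^{\text{\rm DD}}=1/16$, matching the value $\big(\kz^{\text{\rm DD}}\big)^{-1}=16$ quoted in the first example below.

```python
# kappa^DD = sup_{x<y} mu(x, y) / (phi^-(x)^{-1} + phi^+(y)^{-1})   (cf. (05)).
# Laplacian on (0, 1): phi^-(x) = x, phi^+(y) = 1 - y, mu(x, y) = y - x.
n = 400
best = max(
    (j / n - i / n) / (n / i + n / (n - j))
    for i in range(1, n) for j in range(i + 1, n)
)
print(best)  # → 0.0625, i.e. 1/16, attained at (x, y) = (1/4, 3/4)
```

The optimizing pair lies on the grid, so the search is exact here; in general it only gives a lower approximation of the supremum.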
We now go to the essential new part of the proof. First, we claim that for each small $\vz$, there exist $\bar x\in (-M,\,\uz)$ and $\bar y\in (\uz, N)$ (may depend on $\vz$) such that \be\fz_{\bar x}^-\,\mu ({\bar x}, \uz)\! \ge\! \frac{R(x_0, y_0, \uz_0)}{4}\!-\!\vz,\qqd \fz_{\bar y}^+\,\mu (\uz, {\bar y})\!\ge\! \frac{R(x_0, y_0, \uz_0)}{4}\!-\!\vz \lb{16}\de for some point $(x_0, y_0, \uz_0)$. In the present continuous case, the conclusion is clear since the infimum $\az =R(x^*, y^*, \uz^*)$ is achieved at a point $(x^*, y^*, \uz^*)$ with $x^*\le\uz^*\le y^*$, at which we have not only $h^-(\uz^*)=h^+(\uz^*)$ but also \be 4\,\dz_{x^*\!,\, \uz^*}^-=4\,\dz_{y^*\!,\, \uz^*}^+= \frac{h^+(\uz^*)}{\sqrt{\fz^+(y^*)}}.\lb{17}\de To see this, suppose that at the point $(x, y,\,\uz)$ with $x<\uz<y$, we have \be \frac{h^+(\uz)}{\sqrt{\fz^+(y)}}> 4 \big[\dz_{x,\,\uz}^- \vee \dz_{y,\,\uz}^+\big].\lb{18}\de Without loss of generality, assume that $\dz_{x,\,\uz}^- \ge \dz_{y,\,\uz}^+$. We now fix $y$ and let $\tilde \uz \in (\uz, y]$. Then $\dz_{y,\,\uz}^+\ge \dz_{y, \, \tilde\uz}^+$ by definition. In view of (\ref{13}), we have $$ \frac{h^+(\uz)}{\sqrt{\fz^+(y)}}>\frac{h^+\big(\tilde\uz\big)}{\sqrt{\fz^+(y)}}.$$ Next, to keep $h^-\big(\tilde\uz\big)=h^+\big(\tilde\uz\big)$, one has a new $\tilde x> x$ by using (\ref{12}) (the left-hand side of (\ref{12}) is decreasing in $x$). Correspondingly, we have $\dz_{\tilde x,\, \tilde\uz}^-\ge \dz_{x,\,\uz}^-$. 
In particular, for $\tilde \uz$ close enough to $\uz$ such that $$\frac{h^+\big(\tilde\uz\big)}{\sqrt{\fz^+(y)}}\ge 4\,\dz_{\tilde x, \, \tilde\uz}^-,$$ we obtain $$\aligned \big[4\,\dz_{x,\,\uz}^-\big] \bigvee \frac{h^+(\uz)}{\sqrt{\fz^+(y)}} \bigvee \big[4\,\dz_{y,\,\uz}^+\big] &= \frac{h^+(\uz)}{\sqrt{\fz^+(y)}}\\ &>\frac{h^+\big(\tilde\uz\big)}{\sqrt{\fz^+(y)}}\\ &=\big[4\,\dz_{\tilde x,\, \tilde\uz}^-\big] \bigvee \frac{h^+\big(\tilde\uz\big)}{\sqrt{\fz^+(y)}} \bigvee \big[4\,\dz_{y,\, \tilde\uz}^+\big].\endaligned$$ Thus, once (\ref{18}) holds, we can find a new point $(\tilde x, y, \tilde\uz)$ such that $R(x, y, \uz)> R(\tilde x, y, \tilde\uz)$. In other words, if the infimum $\az $ is attained at $(x^*, y^*, \uz^*)$, we should have \be \frac{h^+(\uz^*)}{\sqrt{\fz^+(y^*)}}\le 4 \big[\dz_{x^*\!,\, \uz^*}^- \vee \dz_{y^*\!,\, \uz^*}^+\big].\lb{36}\de One may handle the other two cases similarly and finally arrive at (\ref{17}). Note that instead of (\ref{17}), the following weaker condition is still enough for our purpose. If at some point $(x, y,\,\uz)$, \be 4\,\dz_{x,\,\uz}^- =4\,\dz_{y,\,\uz}^+=: \az '\ge \frac{h^+(\uz)}{\sqrt{\fz^+(y)}},\lb{19} \de then we have not only $\az '\ge \az $ but also (\ref{16}) for suitable $\bar x\le x$ and $\bar y\ge y$. To check (\ref{19}), we first mention that the equation $\dz_{x,\,\uz}^- =\,\dz_{y,\,\uz}^+$ is solvable, at least in the case that $M, N<\infty$. Indeed, $\dz_{x,\, \uz}^-$ starts from zero at $x=-M$ and then increases as $x\uparrow$, while $\dz_{y,\,\uz}^+$ also starts from zero at $y=N$ and then increases as $y\downarrow$. Therefore, there are many pairs $(x, y)$ satisfying the required equation. Next, by (\ref{08}), we can regard $\uz$ as a function of $x$ and $y$. Then, determine $y$ in terms of $x$ by the equation $\dz_{y,\, \uz(x, y)}^+= \dz_{x,\, \uz(x, y)}^-$. Now there is only one free variable $x$. We claim that (\ref{19}) holds for some $x$ (and then for some $(x, y,\,\uz)$). 
Otherwise, the reverse inequality in (\ref{19}) would hold for all $x$, which contradicts (\ref{36}). What we actually need is not the pair $\{x, y\}$ satisfying (\ref{19}) but the pair $\{\bar x, \bar y\}$ satisfying (\ref{16}). From this point, the remainder of the proof is very much the same as the one given at the beginning of this section. First, we have $$(\az /4-\vz)\, \fz^- ({\bar x})^{-1}\le \mu ({\bar x}, \uz),\qqd (\az /4-\vz)\, \fz^+ ({\bar y})^{-1}\le \mu (\uz, {\bar y}). $$ Summing up these inequalities, we get $$(\az /4-\vz)\big[\fz^- ({\bar x})^{-1}+\fz^+ ({\bar y})^{-1}\big] \le \mu ({\bar x}, {\bar y}).$$ Therefore $$\aligned \az /4-\vz &\le \big[\fz^- ({\bar x})^{-1}+\fz^+ ({\bar y})^{-1}\big]^{-1}\mu ({\bar x}, {\bar y})\\ &\le \sup_{x<y}\big[\fz^- ({x})^{-1}+\fz^+ ({y})^{-1}\big]^{-1}\mu ({x}, {y})\\ &=\kz^{\text{\rm DD}}\qd(\text{by } (\ref{05})). \endaligned$$ Combining this fact with (\ref{15}), we obtain $$ {\underline{\kz}^{\rm DD}} \le \az \nonumber \le 4 \kz^{\text{\rm DD}}+ 4\,\vz. $$ Letting $\vz\downarrow 0$, we have thus proved that ${\underline{\kz}}^{\text{\rm DD}}\le 4 \kz^{\text{\rm DD}}$ as required. The main part of the proof is done since the first Dirichlet eigenvalue is based on compact sets. Finally, consider the general case that $M, N\le \infty$. First, we can rule out the degenerate situation that $\dz_{x, \uz}^-=\dz_{y, \uz}^+=\infty$. To see this, rewrite $\kz^{\text{\rm DD}}$ as follows $$\big(\kz^{\text{\rm DD}}\big)^{-1}= \inf_{x<y} \big[\big(\fz^- (x)\,\mu(x, y)\big)^{-1} + \big(\mu (x, y)\, \fz^+ (y)\big)^{-1}\big].$$ It is clear that $\big(\kz^{\text{\rm DD}}\big)^{-1}=0$ and hence $\lz^{\text{\rm DD}}=0$ by (\ref{40}). The corollary becomes trivial. 
Next, if one of $\dz_{x, \uz}^-$ or $\dz_{y, \uz}^+$ is $\infty$, say $\dz_{x, \uz}^-=\infty$ for instance, then $$\big(\kz^{\text{\rm DD}}\big)^{-1}= \Big(\sup_y \,\mu (-M, y) \,\fz^+ (y)\Big)^{-1},$$ i.e., $$\kz^{\text{\rm DD}}= \sup_y\, \mu (-M, y)\, \fz^+ (y).$$ This reduces to the essentially known one-sided Dirichlet problem. In the case that both $\dz_{x, \uz}^-$ and $\dz_{y, \uz}^+$ are finite, one may adopt an approximating procedure with finite $M$ and $N$. This was done in the discrete context; refer to \rf{cmf10}{Proof of Corollary 7.9 and Proof (c) of Theorem 7.10}. \deprf \medskip To illustrate what is going on in the proof above, as well as the computation/estimation of ${\underline{\kz}^{\rm DD}}$, we conclude this section with two examples. \xmp{\bf\rf{cmf11}{Example 5.2}}\qd{\rm Consider the simplest example, i.e. the Laplacian operator on $(0, 1)$. It was proved in \ct{cmf11} that $\lz^{\text{\rm DD}}=\pi^2$, $\big(\kz^{\text{\rm DD}}\big)^{-1}=16$ and $\big({\underline\kz}^{\text{\rm DD}}\big)^{-1}\approx 9.43693$. The eigenfunction of $\lz^{\text{\rm DD}}$ is $g(x)=\sin(\pi x)$ for which $g'(1/2)=0$ and so $\bar\uz=1/2$ is the root of the equation $\lz_{\uz}^-=\lz_{\uz}^+$. Because of the symmetry, we have $\uz^*=1/2$ and $y^*=1-x^*$. Since $\mu=\nu=\d x$, we have $\fz_z^-\, \mu(z, 1/2)=(1/2-z) z$. Thus, $$\dz_x^-=\sup_{z\in (0, x)}\fz_z^-\, \mu(z, 1/2)=\begin{cases} (1/2-x)x\qd &\text{ if } x\le 1/4\\ 1/16& \text{ if } x\in (1/4, 1/2). \end{cases}$$ By (\ref{13}) and (\ref{12}), we have $$\frac{h^-(1/2)}{f^{x,\, 1-x}(x)}=\frac 1 8 -\frac{x^2}{10}.$$ Therefore, each $$x^*\in \bigg[\frac{20-\sqrt{205}}{78},\; \frac{20+\sqrt{205}}{78}\,\bigg]$$ is a solution to the inequality $4\,\dz_x^-\ge {h^-(1/2)}/{f^{x,\, 1-x}(x)}$. Correspondingly, we have $$4\,\dz_{x^*}^-= \begin{cases} 2 (1-2 x^*)x^* \qd & \text{if } x^*\in \big[\big(20-\sqrt{205}\,\big)/78,\; 1/4\big]\\ 1/4 & \text{if } x^*\in \big[1/4,\; \big(20+\sqrt{205}\,\big)/78\big). 
\end{cases}$$ Using this, our conclusion that $${\underline{\kz}}^{\text{\rm DD}}\le 4\,\dz_{x^*}^-\le 4 \kz^{\text{\rm DD}}$$ can be refined as follows: $$\big(4\,\kz^{\text{\rm DD}}\big)^{-1}=4\le \big(4\,\dz_{x^*}^-\big)^{-1} \le \frac{35-\sqrt{41/5}}{4}\approx 8.034< 9.43693\approx\big({\underline\kz}^{\text{\rm DD}}\big)^{-1}.$$ It follows that there are many solutions $x^*$, and so we have a lot of freedom in choosing $(\uz^*, x^*, y^*)$ for (\ref{19}). However, the maximum of $\big(4\,\dz_{x^*}^-\big)^{-1}$ is attained only at the point $x^*$ which is the smaller root of the equation $4\,\dz_x^-= {h^-(1/2)}/{f^{x,\, 1-x}(x)}$. } \dexmp The next example is unusual in that the lower bound $\big(4\,\kz^{\text{\rm DD}}\big)^{-1}$ is sharp. Hence, there is no room for the improved estimate $\big({\underline\kz}^{\text{\rm DD}}\big)^{-1}$. The proof above seems rather risky for this example since at each step $$\aligned \big(\lz^{\text{\rm DD}}\big)^{-1}&\le {\underline\kz}^{\text{\rm DD}} \le \text{Est}(I^{\pm}(f))\le \text{Est}(\dz_{\cdot,\, \uz}^{\pm})\\ &\le 4\big(\ffz (\bar x, \bar y)+\vz\big) \le 4 \sup_{x<y}\big(\ffz (x, y)+\vz\big)=4\big( {\kz}^{\text{\rm DD}}+\vz\big) \endaligned$$ for some $\ffz$, where Est\,$(H)$ means the estimate using $H$, one may lose something. This also explains why ${\underline{\kz}^{\rm DD}}$ is often much better than $4\,{\kz}^{\rm DD}$, as shown in the last example. \xmp{\bf\rf{cmf11}{Example 5.3}}\qd{\rm Consider the operator $L=\d^2/\d x^2 +b \d/\d x$ with $b>0$ on $(0, \infty)$. It was checked in \ct{cmf11} that $\lz^{\rm DD}=b^2/4$, $\big(\kz^{\rm DD}\big)^{-1}=b^2$ and so the lower estimate $\big(4\,\kz^{\rm DD}\big)^{-1}$ is sharp. The eigenfunction of $\lz^{\text{\rm DD}}$ is $g(x)=x e^{-b x/2}$ for which $g'(2/b)=0$ and so $\bar\uz=2/b$ solves the equation $\lz_{\uz}^-=\lz_{\uz}^+$. 
We have $C(x)= bx$, $\mu (\d x)=e^{b x} \d x$, $$\fz^-(s)=\int_0^s e^{-b z}\d z=\frac{1}{b}\big(1- e^{-bs}\big)\qd\text{and}\qd \fz^+(s)=\int_s^\infty e^{-bz}\d z= \frac{1}{b}e^{-bs}.$$ We begin our study on the equation $\dz_{x,\,\uz}^-= \dz_{y,\,\uz}^+$ rather than Eq.(\ref{12}) since the former is simpler. Note that the function $$\fz_z^-\,\mu (z, \uz)=\frac 1 {b^2} \big(1 - e^{-b z}\big) \big(e^{b \uz}-e^{b z}\big),\qqd z\in (0, \uz]$$ achieves its maximum $b^{-2}\big(e^{b\uz/2}-1\big)^2$ at $z=\uz/2$ and the function $$\mu (\uz, z)\fz_z^+=\frac 1 {b^2} \big(1-e^{b(\uz - z)} \big),\qqd z\ge \uz $$ achieves its maximum $1/b^2$ at $\infty$. Hence $$\dz_{x,\,\uz}^-= \frac{1}{b^2}\big(e^{b\uz/2}-1\big)^2\qd \forall x\in [\uz/2, \uz]\qd\text{and}\qd \dz_{y,\,\uz}^+=\frac 1 {b^2}\qd\forall y\ge \uz.$$ Solving the equation $$\frac{1}{b^2}\big(e^{b\uz/2}-1\big)^2=\frac 1 {b^2},$$ we get $\uz^*= 2 b^{-1} \log 2.$ To study (\ref{30}), note that $$\aligned &\frac{1}{\sqrt{\fz^+(y)}} \mu\Big((\fz^+)^{3/2} \mathbbold{1}_{(y,\, \infty)}\Big) +\mu\big(\fz^+ \mathbbold{1}_{(\uz,\, y)}\big)=\frac{2}{b^2}+\frac 1 b (y-\uz)\\ &\frac{1}{\sqrt{\fz^-(x)}}\mu\Big(\!(\fz^-)^{3/2} \mathbbold{1}_{(0,\, x)}\!\Big) \!+\!\mu\big(\fz^- \mathbbold{1}_{(x,\,\uz)}\big)\\ &\qqd\qqd= \frac{1}{b^2}\bigg\{2-b\uz+e^{b\uz}+bx -\frac{3\big(b x +2\log\big(1+\sqrt{1-e^{-bx}}\,\big)\big)}{2 \sqrt{1-e^{-bx}}} \bigg\}. \endaligned$$ Then the second inequality in (\ref{30}) is trivial and the first one there becomes $$ \frac{1}{b^2}\big(e^{b \uz}-b\uz-1\big)\ge \frac{2}{b^2}.$$ It is now easy to check that $\uz^*=2 b^{-1} \log 2$ does not satisfy this inequality. In other words, there is no required solution $(x^*, y^*, \uz^*)$ under the restriction $x^*\in [\uz^*/2, \uz^*]$. Thus, unlike the last example, there is not much freedom in choosing $(x^*, y^*, \uz^*)$ for (\ref{19}). However, this does not finish the story since the solution $x^*$ may belong to $[0, \uz^*/2)$. 
We are now looking for a solution $x^*$ in the interval $[0, \uz^*/2)$. When $x\le \uz/2$, the maximum of the function $z\mapsto \fz_z^-\,\mu (z, \uz)$ over $[0, x]$ is achieved at $z=x$. Hence $$\dz_{x,\,\uz}^-= \frac 1 {b^2} \big(1 - e^{-b x}\big) \big(e^{b \uz}-e^{b x}\big)\qd \forall x\in (0, \, \uz/2]\qd\text{and}\qd \dz_{y,\,\uz}^+=\frac 1 {b^2}\qd\forall y\ge \uz.$$ Solving the equation $$\frac 1 {b^2} \big(1 - e^{-b x}\big) \big(e^{b \uz}-e^{b x}\big)=\frac 1 {b^2},$$ we obtain $\uz^*=x-b^{-1} \log\big(1-e^{-b x}\big).$ Besides, solving the equation $4\,\dz_{y^*\!, \, \uz^*}^+=h^+(\uz^*)\big/\!\sqrt{\fz^+(y^*)}\,$, we get $y^* = 2/b + \uz^*$. Inserting these into Eq.(\ref{12}), we obtain $$\frac{e^{2 b x}}{e^{b x}-1}+\log \left(1-e^{-b x}\right)=2+ \frac{3}{2 \sqrt{1-e^{-b x}}} \left(b x+2 \log \left(\sqrt{1-e^{-b x}}+1\right)\right).$$ From this, we obtain the required solution $x^*$ as shown by Figures 1 and 2 below, noting that the constraint $x^*\le \uz^*/2$ is equivalent to $x^*\le b^{-1}\log 2$. Having $x^*$ at hand, it is clear that the solution $\uz^*$ here is very different from $2/b$. \begin{center}{{{\includegraphics{tmp-1.eps}\hskip-4.8truecm} } \vskip-4.2truecm{\hskip-1.8truecm\includegraphics{tmp-2.eps}}\newline {\bf Figure 1--2}\qd\rm Solution of $x^*=x^*(b)$ when $b$ varies on $(0, 2]$ (the curve on right) and on $[2, 20]$ (the curve on left), respectively.}\end{center} To see that the solutions $(\bar x, \bar y)$ to (\ref{16}) may not be unique, keep $\uz^*$ the same as in the last paragraph but replace $y^*$ with the smaller $\bar y = b^{-1}+\uz^*$; then one can find a point $\bar x$ satisfying Eq.\,(\ref{12}). } \dexmp \section{Birth--death processes (discrete case)} This section deals with the discrete case, which parallels in principle the continuous one studied above; however, it is quite involved and so it is worthwhile to write down some details here. 
The state space is $$E=\{i\in {\mathbb Z}: -M-1< i< N+1\},\qqd M, N\le \infty.$$ The transition rates $Q=(q_{ij})$ are as follows: $b_i:=q_{i,i+1}>0$, $a_i:=q_{i, i-1}>0$, $q_{ii}=-(a_i+b_i)$ for $i\in E$, and $q_{ij}=0$ for all other $i\ne j$. Thus, we have $a_{-M}>0$ if $M<\infty$ and similarly for $b_N$. The operator of the process becomes $$\ooz f(i)=b_i\big(f_{i+1}-f_i\big)+ a_i\big(f_{i-1}-f_i\big), \qqd i\in E$$ with the convention $f_{-M-1}=0$ if $M<\infty$ and $f_{N+1}=0$ if $N<\infty$. Next, define the speed (or invariant, or symmetric) measure $\mu$ as follows. Fix a reference point $o\in E$ and set \begin{gather} \mu_{o+n}=\frac{a_{o -1} a_{o -2}\cdots a_{o +n+1}} {b_{o } b_{o-1 }\cdots b_{o +n}}, \qqd -M-1-o< n\le -2,\nonumber\\ \mu_{o-1 }=\frac{1}{b_{o } b_{o -1}},\qqd \mu_{o }=\frac{1}{a_{o } b_{o }},\qqd \mu_{o+1 }=\frac{1}{a_{o } a_{o +1}},\nonumber\\ \mu_{o+n }=\frac{b_{o +1} b_{o +2}\cdots b_{o +n-1}} {a_{o } a_{o+1 }\cdots a_{o +n}}, \qquad 2\le n<N+1-o.\nonumber \end{gather} A change of the reference point $o$ only multiplies the sequence $(\mu_i)$ by a constant factor and so has no influence on the results below. Corresponding to $\ooz$, the Dirichlet form is \begin{gather} D(f)=\sum_{-M-1<i\le o} \mu_i a_i (f_i-f_{i-1})^2 +\sum_{o\le i <N+1} \mu_i b_i (f_{i+1}-f_{i})^2,\nonumber\\ f\in {\scr K},\; f_{-M-1}=0\;\text{if }M<\infty \text{ and }f_{N+1}=0\;\text{if }N<\infty, \nonumber \end{gather} where ${\scr K}$ is the set of functions on $E$ with compact supports. Having these preparations at hand, one can define the eigenvalues $\lz^{\text{\rm DD}}$ and $\lz^{\text{\rm NN}}$ on $L^2(\mu)$ as in the first section. To state our main result in this context, we need more notation. Define $${\scr C}_+=\{f|_E>0: f_{-M-1}=0\;\text{if }M<\infty \text{ and }f_{N+1}=0\;\text{if }N<\infty\}.$$ Given $f\in {\scr C}_+$, define $h^{\pm}=h_f^{\pm}$ as follows. 
$$\aligned h_i^-&=\sum_{k=-M}^i \frac{1}{\mu_k a_k} \sum_{\ell=k}^{\uz} \mu_{\ell} f_{\ell} =\sum_{\ell=-M}^{\uz} \mu_{\ell} f_{\ell} \fz_{\ell\wedge i}^-, \qd i\le \uz,\\ h_i^+&=\sum_{k=i}^N \frac{1}{\mu_k b_k} \sum_{\ell=\uz}^{k} \mu_{\ell} f_{\ell} =\sum_{\ell=\uz}^{N} \mu_{\ell} f_{\ell} \fz_{\ell\vee i}^+, \qd i\ge \uz, \endaligned$$ where $$\fz_i^-=\sum_{k=-M}^i \frac{1}{\mu_k a_k},\qqd \fz_k^+=\sum_{\ell=k}^N \frac{1}{\mu_{\ell} b_{\ell}}$$ and $\uz\in (-M-1, N+1)$ will be specified soon. Applying $h_f^{\pm}$ to the test function $f=f^{m, n}\,(m, n\in E,\, m\le n)$: $$f_i^{m,n}= \begin{cases} \sqrt{\fz_n^+ \fz_{i\wedge m}^-/\fz_m^-}\qqd & i\le n\\ \sqrt{\fz_i^+} & i\ge n, \end{cases}$$ we obtain a condition for $\uz$ which is an analog of (\ref{30}): \be{\begin{matrix} \displaystyle\lim_{m\to -M}\frac{1}{\sqrt{\fz_m^-}}\! \sum_{k=-M}^{m-1}\! (\fz_k^-)^{3/2}\mu_k \!+\!\! \sum_{k=-M}^{\uz}\! \fz_k^-\mu_k \displaystyle\!\ge\! \frac{1}{\sqrt{\fz_{\uz}^+}}\! \sum_{k=\uz}^N (\fz_k^+)^{3/2}\mu_k\qd \text{and}\\ \displaystyle\lim_{n\to N}\frac{1}{\sqrt{\fz_n^+}}\! \sum_{k=n+1}^N\! (\fz_k^+)^{3/2}\mu_k + \sum_{k=\uz}^N \fz_k^+\mu_k \displaystyle\!\ge\! \frac{1}{\sqrt{\fz_{\uz}^-}}\! \sum_{k=-M}^{\uz} (\fz_k^-)^{3/2}\mu_k. \end{matrix}}\lb{32}\de However, in the discrete situation, one cannot expect (\ref{08}). This leads to a serious change. To explain the main idea, let us return to Theorem \ref{t21}. Since the derivative of the eigenfunction of $\lz^{\rm DD}$ has exactly one zero point, say $\uz$, we can split the interval $(-M, N)$ into two parts having the common boundary $\uz$. Thus, the original process is divided into two processes having a common reflecting boundary $\uz$. Theorem \ref{t21} says that the original $\lz^{\rm DD}$ can be represented by using the principal eigenvalues of these sub-processes. This idea is the starting point of \ct{czz03}, as already used in the first proof in Section 2. 
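Before continuing, the discrete $\lz^{\text{\rm DD}}$ can be seen concretely. The sketch below is only an illustration: it uses the rates of the example at the end of this section, where it is stated that $\lz^{\text{\rm DD}}=2-\vz$; on the two-point state space $E=\{1, 2\}$ with Dirichlet boundaries at $0$ and $3$, $\lz^{\text{\rm DD}}$ is simply the smaller eigenvalue of the $2\times 2$ matrix of $-\ooz$.

```python
import math

# Example at the end of this section: E = {1, 2}, b_1 = 1, b_2 = 2, a_2 = 1,
# a_1 = (2 - eps^2)/(1 + eps) for eps in [0, sqrt(2)); claim: lambda^DD = 2 - eps.
def lam_dd(eps):
    a1, b1, a2, b2 = (2 - eps ** 2) / (1 + eps), 1.0, 1.0, 2.0
    # Dirichlet restriction of -Omega to {1, 2}:
    #   [[a1 + b1, -b1],
    #    [-a2,     a2 + b2]]
    tr = (a1 + b1) + (a2 + b2)
    det = (a1 + b1) * (a2 + b2) - b1 * a2
    return (tr - math.sqrt(tr * tr - 4 * det)) / 2   # smaller eigenvalue

vals = [(lam_dd(e), 2 - e) for e in (0.0, 0.5, 1.0, 1.4)]
print(vals)  # each pair agrees
```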
Since the maximum point $\uz$ is unknown in advance, in the original formulation, $\uz$ is free and then there is an additional term $\sup_{\uz}$ in the expression of Theorem \ref{t21}. This term was removed in \ct{cmf11} by choosing $\uz$ as a mimic of the maximum point of the eigenfunction. Unfortunately, such a mimic still does not work in the discrete case: we may lose (\ref{08}) and, more seriously, the eigenfunction may be a simple echelon rather than unimodal (cf. \rf{cmf10}{Definition 7.13}). Therefore, more work is required. Again, the idea goes back to \ct{czz03}, except that here the choice of $\uz$ is based on (\ref{32}). The first key step of the method is constructing two birth--death processes on the left- and the right-hand sides, separately. As before, the two processes have Dirichlet boundaries at $-M-1$ and $N+1$ but they now have a common Neumann boundary at $\uz\in E$. Let us start from the birth--death process with rates $(a_i, b_i)$ and state space $E$. Fix a constant $\gz>1$. \begin{itemize} \item [(L)] The process on the left-hand side has state space $E^{\uz -}=\{i: -M-1<i\le \uz\}$, reflects at $\uz$ (and so $b_{\uz}=0$). Its transition structure is the same as the original one except that $a_{\uz}$ is replaced by $a_{\uz}^{-, \gz}:=\gz a_{\uz}$. Then for this process, the sequence $(\mu_i: i\in E^{\uz -})$ is the same as the original one except that the original $\mu_{\uz}$ is replaced by $\mu_{\uz}/\gz$. Hence, the sequence $(\mu_i a_i: i\in E^{\uz -})$ remains the same as the original. \item [(R)] The process on the right-hand side has state space $E^{\uz +}=\{i: \uz\le i <N+1\}$, reflects at $\uz$ (and then $a_{\uz}=0$). Its transition structure is again the same as the original one except that $b_{\uz}$ is replaced by $b_{\uz}^{+, \gz}:=\gz (\gz-1)^{-1} b_{\uz}$. Then for this process, the sequence $(\mu_i: i\in E^{\uz +})$ is the same as the original one except that the original $\mu_{\uz}$ is replaced by $(1-\gz^{-1})\mu_{\uz}$. 
Hence, the sequence $(\mu_i b_i: i\in E^{\uz +})$ remains the same as the original. \end{itemize} Noting that $a_{\uz}^{-, \gz} \downarrow a_{\uz}$ and $b_{\uz}^{+, \gz}\uparrow \infty$ as $\gz\downarrow 1$, $a_{\uz}^{-, \gz} \uparrow \infty$ and $b_{\uz}^{+, \gz}\downarrow b_{\uz}$ as $\gz\uparrow \infty$, the constant $\gz$ plays a balancing role for the principal eigenvalues of these processes. From here, following the first proof given in Section 2 and using \ct{czz03} and \rf{cmf10}{Theorem 7.10}, one can prove the basic estimate $\lz^{\text{\rm DD}}\ge \big(4\kz^{\text{\rm DD}}\big)^{-1}$ in the present context. Certainly, the parallel proof works also in the ergodic case. We now continue our study on the discrete analog of Corollary \ref{t11}\,(1). The quantity $\fz^{\pm}$ needs no change. But $h^{\pm}$ has to be modified as follows. $$\aligned h_i^{-, \gz}&=\sum_{k=-M}^i \frac{1}{\mu_k a_k} \bigg[\sum_{k\le \ell\le\uz-1} \mu_{\ell} f_{\ell}+\frac{1}{\gz}\mu_{\uz}f_{\uz}\bigg], \qqd i\le \uz,\\ h_i^{+, \gz}&=\sum_{k=i}^N \frac{1}{\mu_k b_k} \bigg[\frac{\gz-1}{\gz}\mu_{\uz}f_{\uz}+ \sum_{\uz+1\le\ell\le k} \mu_{\ell} f_{\ell}\bigg], \qqd i\ge \uz. \endaligned$$ Finally, define $I\!I^{\pm, \gz}(f)=h^{\pm, \gz}/f$. It is now more convenient to write the test functions on $E_{\uz}^{\pm}$ separately: $$ f_i^{-, m}=\sqrt{\fz_{i\wedge m}^-},\qd i\le \uz,\qqd f_i^{+, n}=\sqrt{\fz_{i\vee n}^+},\qd i\ge \uz. $$ Compared with the original $f^{m, n}$, here a factor acting on $f^{-, m}$ is omitted \big(the factor is needed in the original case to ensure $f_{\uz}^{-, m}=f_{\uz}^{+, n}$\big). \crl\lb{t31} {\cms We have \be \lz^{\rm DD}\ge \big({\underline\kz}^{\text{\rm DD}}\big)^{-1}\ge \big(4\,\kz^{\text{\rm DD}}\big)^{-1},\lb{20}\de where \begin{gather} \mbox{\hspace{-1em}}{\underline\kz}^{\text{\rm DD}}\!=\inf_{\uz: \text{\rm(\ref{32}) holds}}\,\inf_{m\le \uz\le n}\inf_{\gz>1}\!\Big\{\!\Big[\sup_{ E\owns i\le\uz} I\!I_i^{-, \gz}(f^{-, m})\Big] \!\bigvee\! 
\Big[\sup_{\uz\le i\in E} I\!I_i^{+, \gz}(f^{+, n})\Big]\!\Big\},\\ \mbox{\hspace{-2em}}\big(\kz^{\text{\rm DD}}\big)^{-1}\!=\inf_{m, n\in E:\; m \le n}\bigg[\bigg(\sum_{i=-M}^m\frac{1}{\mu_i a_i}\bigg)^{-1}\! \!\!+ \bigg(\sum_{i=n}^{N}\frac{1}{\mu_i b_i}\bigg)^{-1}\bigg]\bigg(\sum_{j=m}^n \mu_j\bigg)^{-1}\!\!\!. \end{gather} } \decrl \prf By using an approximating procedure, one may assume that $M, N<\infty$ (cf. \rf{cmf10}{Proof of Corollary 7.9 and Proof (c) of Theorem 7.10}). Fix $\uz\in [m, n]$ and define \begin{gather} I_i^{-, \gz}(f)=\frac{1}{\mu_i a_i (f_i-f_{i-1})}\bigg[\frac{1}{\gz}\mu_{\uz}+\sum_{i\le \ell\le \uz-1} \mu_{\ell}\bigg],\qqd i\le \uz\nonumber\\ I_i^{+, \gz}(f)=\frac{1}{\mu_i b_i (f_i-f_{i+1})}\bigg[\frac{\gz-1}{\gz}\mu_{\uz}+\sum_{\uz+1\le \ell \le i} \mu_{\ell}\bigg],\qqd i\ge \uz\nonumber\\ \dz_{m,\, \uz}^{-, \gz}=\sup_{i\le m}\fz_i^- \bigg[\frac{1}{\gz}\mu_{\uz}+\!\!\sum_{i\le \ell\le \uz-1} \mu_{\ell}\bigg], \qqd \dz_{n,\, \uz}^{+, \gz}=\sup_{i\ge n} \fz_i^+\bigg[\frac{\gz-1}{\gz}\mu_{\uz}+\!\!\sum_{\uz+1\le \ell \le i} \mu_{\ell}\bigg].\nonumber \end{gather} We have \begin{align} \frac{h_{\uz}^{-, \gz}}{f_m^{-, m}} &=\frac{1}{\sqrt{\fz_m^-}}\sum_{k=-M}^{m-1} (\fz_k^-)^{3/2}\mu_k + \sum_{k=m}^{\uz-1} \fz_k^- \mu_k +\frac{1}{\gz} \fz_{\uz}^- \mu_{\uz},\\ \frac{h_{\uz}^{+, \gz}}{f_n^{+, n}} &=\frac{1}{\sqrt{\fz_n^+}}\sum_{k=n+1}^N (\fz_k^+)^{3/2}\mu_k + \sum_{k=\uz+1}^{n} \fz_k^+\mu_k +\frac{\gz-1}{\gz} \fz_{\uz}^+ \mu_{\uz}.\end{align} For simplicity, let $$H(m, n,\,\uz, \gz)= \max\bigg\{\frac{h_{\uz}^{-, \gz}}{f_m^{-, m}}\mathbbold{1}_{\{m<\uz\}},\; \frac{h_{\uz}^{+, \gz}}{f_n^{+, n}}\mathbbold{1}_{\{\uz<n\}}\bigg\}.$$ By \rf{cmf10}{Theorem 7.10\,(1), Sections 4, 2 and 3}, we obtain \begin{align} \big({{\underline\kz}^{\rm DD}}\big)^{-1}\!\! &\le\! \inf_{\begin{subarray}{c}\gz>1 \\ [m, n]\owns \uz\\ \uz: \text{\rm(\ref{32}) holds}\end{subarray}}\!\bigg\{\!\bigg[\bigvee_{i\le m}\! I\!I_i^{-, \gz}(f^{-, m})\bigg] \!\bigvee\! 
H(m, n,\,\uz, \gz) \!\bigvee\! \bigg[\bigvee_{i\ge n}\! I\!I_i^{+, \gz}(f^{+, n})\bigg]\!\bigg\}\nonumber\\ &\le \inf_{\begin{subarray}{c}\gz>1 \\ [m, n]\owns \uz\\ \uz: \text{\rm(\ref{32}) holds}\end{subarray}}\bigg\{\!\bigg[\bigvee_{i\le m}\! I_i^{-, \gz}(f^{-, m})\bigg] \!\bigvee\!H(m, n,\,\uz, \gz) \!\bigvee\! \bigg[\bigvee_{i\ge n}\! I_i^{+, \gz}(f^{+, n})\bigg]\!\bigg\}\nonumber\\ &\le \inf_{\begin{subarray}{c}\gz>1 \\ [m, n]\owns \uz\\ \uz: \text{\rm(\ref{32}) holds}\end{subarray}}\bigg\{\Big[ 4 \dz_{m, \,\uz}^{-, \gz}\Big] \bigvee H(m, n,\,\uz, \gz) \bigvee \Big[4 \dz_{n,\, \uz}^{+, \gz}\Big]\bigg\}\nonumber\\ &=: \inf_{\uz: \text{\rm(\ref{32}) holds}}\,\inf_{[m, n]\owns \uz}\,\inf_{\gz>1}\, R(m, n,\,\uz, \gz)\nonumber\\ &=:\az. \lb{24} \end{align} The reason we need two terms in the expression of $H$, rather than only one as in the continuous case, is the loss of an analog of (\ref{08}): here we may not have $h_{\uz}^{-, \gz}=h_{\uz}^{+, \gz}$. We now choose a candidate of $\uz^*$ (independent of $m$, $n$) from (\ref{32}) and then choose $\{m^*, n^*\}$ with $m^*\le \uz^*\le n^*$ (which may not be unique) so that $(m^*, n^*, \uz^*)$ satisfies the following inequalities \be{\begin{matrix}\displaystyle \Big(\dz_{m,\, \uz}^{-, 1}=\!\Big)\;\sup_{i\le m}\fz_i^- \bigg[\mu_{\uz}+\!\!\sum_{i\le \ell\le \uz-1} \mu_{\ell}\bigg]\ge \sup_{i\ge n} \fz_i^+ \sum_{\uz+1\le \ell \le i} \mu_{\ell}\;\Big(\!= \dz_{n,\, \uz}^{+, 1}\Big)\qd\text{and}\\ \displaystyle\Big(\dz_{m,\, \uz}^{-, \infty}=\!\Big)\;\sup_{i\le m}\fz_i^- \sum_{i\le \ell\le \uz-1} \mu_{\ell} \le \sup_{i\ge n} \fz_i^+\bigg[\mu_{\uz}+\!\!\sum_{\uz+1\le \ell \le i} \mu_{\ell}\bigg] \;\Big(\!=\dz_{n,\, \uz}^{+, \infty}\Big).\end{matrix}} \lb{38}\de Roughly speaking, the condition (\ref{08}) in the continuous case is replaced by a much weaker one (\ref{32}) and the condition $\dz_{m, \,\uz}^{-}= \dz_{n,\, \uz}^{+}$ is replaced by (\ref{38}). 
Instead, let $\gz^*$ be the unique solution to the equation $$\dz_{m^*, \,\uz^*}^{-, \gz}= \dz_{n^*,\, \uz^*}^{+, \gz},\qqd \gz\in [1, \infty]$$ for each fixed pair $\{m^*, n^*\}$: $m^*\le\uz^*\le n^*$. This is the balancing role played by $\gz$ that was mentioned earlier. In analogy with the continuous case, we are interested in those $\{m^*, n^*\}$: $[m^*\!,\,n^*]\owns \uz^*$ having the property \be\dz_{m^*\!, \,\uz^*}^{-, \gz^*}= \dz_{n^*\!,\, \uz^*}^{+, \gz^*}\ge\frac{1}{4} \max\bigg\{\frac{h_{\uz^*}^{-, \gz^*}}{f_{m^*}^{-, m^*}}\mathbbold{1}_{\{m^*<\uz^*\}},\; \frac{h_{\uz^*}^{+, \gz^*}}{f_{n^*}^{+, n^*}}\mathbbold{1}_{\{\uz^*<n^*\}}\bigg\}.\lb{34}\de Unlike in the continuous case, here we may have to repeat the procedure of choosing $(m^*, n^*, \gz^*)$, since the $\uz^*$ suggested by (\ref{32}) may not be unique. Note that the right-hand side of (\ref{34}) is trivial in the particular case that $m^*=n^*=\uz^*$. Thus, for sufficiently small $\vz>0$, we may choose $(\bar m, \bar n)$ with $[\bar m, \bar n]\owns \uz^*$ and ${\bar\gz}\in (1, \infty)$ such that \begin{align} &\fz_{\bar m}^-\bigg[\frac{1}{{\bar\gz}}\mu_{\uz^*}+ \sum_{\ell=\bar m}^{\uz^*-1} \mu_{\ell}\bigg]\ge \frac{R(m^*, n^*, \uz^*, \gz^*)}{4}-\vz \qd\text{and}\nonumber\\ &\fz_{\bar n}^+\bigg[\frac{{\bar\gz}-1}{{\bar\gz}}\mu_{\uz^*}+\sum_{\ell=\uz^*+1}^{\bar n} \mu_{\ell} \bigg]\ge \frac{R(m^*, n^*, \uz^*, \gz^*)}{4}-\vz. \nonumber\end{align} Therefore, we have $$\bigg(\frac{\az}{4}-\vz\bigg)\big(\fz_{\bar m}^-\big)^{-1}\!\!\le \frac{1}{{\bar\gz}}\,\mu_{\uz^*}+\! \sum_{\ell=\bar m}^{\uz^*-1} \mu_{\ell},\qqd \bigg(\frac{\az}{4}-\vz\bigg)\big(\fz_{\bar n}^+\big)^{-1}\!\!\le \frac{{\bar\gz}-1}{{\bar\gz}}\,\mu_{\uz^*}+\!\sum_{\ell=\uz^*+1}^{\bar n}\! \mu_{\ell}. $$ Summing up these inequalities, it follows that $$\aligned \bigg(\frac{\az}{4}\!-\!\vz\bigg) \Big\{\big(\fz_{\bar m}^-\big)^{-1}\!\!+\! \big(\fz_{\bar n}^+\big)^{-1}\Big\} \!\le\! \sum_{\ell=\bar m}^{\uz^*-1}\! \mu_{\ell}+\frac{1}{{\bar\gz}}\mu_{\uz^*}\!+\!
\frac{{\bar\gz}-1}{{\bar\gz}}\mu_{\uz^*}\!+\!\!\sum_{\ell=\uz^*+1}^{\bar n}\! \mu_{\ell} \! =\!\sum_{\ell=\bar m}^{\bar n} \mu_{\ell}. \endaligned$$ The remainder of the proof is the same as in the continuous situation.\deprf The following example is nearly the simplest possible, but it is very helpful for understanding Corollary \ref{t31} and its proof. \xmp{\bf \rf{czz03}{Example 2.3} and \rf{cmf10}{Example 7.6\,(2)}}\qd{\rm Let $M=-1$, $N=2$, $b_1=1$, $b_2=2$, $$a_1=\frac{2-\vz^2}{1+\vz}, \qqd \vz\in \big[0, \sqrt{2}\,\big) \qqd\text{and\qqd $a_2=1$}.$$ Then ${{\lz}^{\rm DD}}=2-\vz$. It is known that $$\kz^{\rm DD}=\frac{1}{\lz_0} -\begin{cases} {\vz^2}(8 - 4\, \vz^2 + \vz^3)^{-1}\qd &\text{if } \vz\in \big[0,\; \big(\sqrt{13}-1\big)/{3}\big]\\ (8 + 2\, \vz - 3\, \vz^2)^{-1} &\text{if } \vz\in \big[\big(\sqrt{13}-1\big)/{3},\; \sqrt{2}\,\big). \end{cases}$$ We are now going to compare ${\underline\kz}^{\rm DD}$ with $4\,\kz^{\rm DD}$. First, we have $\mu_1=\mu_2=1$, $\mu_1 a_1=a_1$, $\mu_1 b_1=b_1$ and $\mu_2 b_2=b_2$. Next, (\ref{32}) holds only for a small range of $\vz$ when $\uz=1$, and never holds if $\uz=2$. Hence, we choose $\uz=1$. Then $m=1$ and $$\fz_1^-= \frac{1}{a_1}=\frac{1+\vz}{2-\vz^2},\qqd \fz_1^+=\frac{1}{b_1}+\frac{1}{b_2}=\frac 3 2, \qqd \fz_2^+=\frac{1}{b_2}=\frac 1 2.$$ Furthermore, $$\dz_{m,\,\uz}^{-, \gz}=\frac{1}{\gz a_1}=\frac{1+\vz}{\gz(2-\vz^2)}.$$ By (\ref{38}), we have $n=1$ or $2$. (1) When $n=1$, we have $$\dz_{n,\,\uz}^{+, \gz} =\bigg[\fz_1^+\bigg(1-\frac{1}{\gz}\bigg)\bigg]\bigvee \bigg[\fz_2^+\bigg(2-\frac{1}{\gz}\bigg)\bigg] =\begin{cases} \displaystyle\frac{3}{2}\bigg(1-\frac{1}{\gz}\bigg)\qd &\text{if } \gz\ge 2\\ \displaystyle 1-\frac{1}{2\gz} &\text{if } \gz\in (1, 2).
\end{cases}$$ Clearly, the equation $\dz_{m,\,\uz}^{-, \gz}=\dz_{n,\,\uz}^{+, \gz}$ has a unique solution $$\gz=\begin{cases} \displaystyle\frac{8 + 2 \vz - 3 \vz^2}{3 (2 - \vz^2)}\ge 2 \qd &\text{if $\displaystyle \vz\in\bigg[\frac{\sqrt{13}-1}{3},\, \sqrt{2}\,\bigg)$}\\ \displaystyle\frac{4 + 2 \vz - \vz^2}{2 (2 - \vz^2)}\in (1, 2) \qd &\text{if $\displaystyle \vz\in\bigg(0,\, \frac{\sqrt{13}-1}{3}\bigg)$}. \end{cases}$$ Correspondingly, with $m=n=\uz=1$, we have $$\dz_{m,\,\uz}^{-, \gz}=\dz_{n,\,\uz}^{+, \gz}= \begin{cases} \displaystyle\frac{3(1+\vz)}{8 + 2 \vz - 3 \vz^2}\qd &\text{if $\displaystyle \vz\in\bigg[\frac{\sqrt{13}-1}{3},\, \sqrt{2}\,\bigg)$}\\ \displaystyle\frac{2(1+\vz)}{4 + 2 \vz - \vz^2}\qd &\text{if $\displaystyle \vz\in\bigg(0,\, \frac{\sqrt{13}-1}{3}\bigg)$}. \end{cases}$$ It is interesting that the last quantity coincides with $4\,\kz^{\rm DD}$. We have thus arrived at (\ref{34}) since we are in the particular case: $m^*=n^*=\uz^*$. (2) When $n=2$, we have $$ \frac{h_{\uz}^{+, \gz}}{f_n^{+, n}} =\fz_2^+ +\frac{\gz-1}{\gz} \fz_1^+=2-\frac{3}{2\gz},\qqd \dz_{n,\,\uz}^{+, \gz} =1-\frac{1}{2\gz}.$$ Clearly, $$\frac{h_{\uz}^{+, \gz}}{f_n^{+, n}}\le 4\,\dz_{n,\,\uz}^{+, \gz}\qd\text{iff $\gz\ge 1/4$}.$$ As we have seen above, the solution to the equation $\dz_{m,\,\uz}^{-, \gz}=\dz_{n,\,\uz}^{+, \gz}$ is $$\gz=\frac{4 + 2 \vz - \vz^2}{2 (2 - \vz^2)}>1 \qd \text{on $\big(0, \sqrt{2}\,\big)$}.$$ Then $$\dz_{m,\,\uz}^{-, \gz}=\dz_{n,\,\uz}^{+, \gz}=\frac{2(1+\vz)}{4 + 2 \vz - \vz^2}>\frac{h_{\uz}^{+, \gz}}{4 f_n^{+, n}}\qd (m=\uz=1,\; n=2).$$ Hence (\ref{34}) holds. Combining this case with the last one (i.e., $n=1$), it follows that ${\underline\kz}^{\rm DD}<4\,\kz^{\rm DD}$ for $\vz\in \big(\big(\sqrt{13}-1\big)/3,\, \sqrt{2}\,\big) $. }\dexmp \xmp{\bf\rf{cmf10}{Examples 7.7\,(5)}}\qd {\rm Let $E=\{1, 2, \cdots\}$, $a_i=1/i$ and $b_i=1$ for all $i\ge 1$. 
Then $\lz^{\rm DD}=(3-\sqrt{5}\,)/2\approx 0.38$ and $\big(\lz^{\rm DD}\big)^{-1}\approx 2.618$. We have $\mu_i=i!$, $\mu_i a_i=(i-1)!$ and $\mu_i b_i=i!$ for all $i\ge 1$. Furthermore, we have $$\big(\kz^{\rm DD}\big)^{-1}\!\!=\!\!\bigg(\bigg[\sum_{k=1}^1 \frac 1 {(k\! -\! 1)!}\bigg]^{-1}\!\! + \bigg[\sum_{k=4}^{\infty} \frac 1 {k!} \bigg]^{-1}\bigg)\! \bigg[\sum_{\ell=1}^4 \ell !\bigg]^{-1}\!\!\! =\!\frac 1 {33}\bigg[1 + \frac{3}{3 e\!-\!8}\bigg]\!\approx 0.6174.$$ And so $\kz^{\rm DD}\approx 1.62$. With $$\fz_i^-=\sum_{k=1}^i \frac{1}{\mu_k a_k}=\sum_{k=1}^i \frac{1}{(k-1)!}\qd \text{and}\qd \fz_k^+=\sum_{\ell=k}^\infty \frac{1}{\mu_{\ell} b_{\ell}}=\sum_{\ell=k}^\infty \frac{1}{\ell !},$$ we have \begin{gather} \dz_{m,\, \uz}^{-, \gz}=\sup_{i\le m}\bigg[\frac{1}{\gz} \uz!+\!\!\sum_{i\le \ell\le \uz-1} \ell!\bigg] \fz_i^-, \qqd \dz_{n,\, \uz}^{+, \gz}=\sup_{i\ge n} \bigg[\frac{\gz-1}{\gz}\uz!+\!\!\sum_{\uz+1\le \ell \le i} \ell!\bigg] \fz_i^+.\nonumber\\ \frac{h_{\uz}^{-, \gz}}{f_m^{-, m}} =\frac{1}{\sqrt{\fz_m^-}}\sum_{k=1}^{m-1} (\fz_k^-)^{3/2} k! + \sum_{k=m}^{\uz-1} \fz_k^- k! +\frac{1}{\gz} \fz_{\uz}^- \uz !,\nonumber\\ \frac{h_{\uz}^{+, \gz}}{f_n^{+, n}} =\frac{1}{\sqrt{\fz_n^+}}\sum_{k=n+1}^\infty (\fz_k^+)^{3/2} k! + \sum_{k=\uz+1}^{n} \fz_k^+ k! +\frac{\gz-1}{\gz} \fz_{\uz}^+ \uz!.\nonumber \end{gather} For convenience, let $(L)$, $(R)$, $(M_-)$ and $(M_+)$ denote the last four quantities. The candidates given by (\ref{32}) are $\uz=2,\,3$. The case of $\uz=3$ is ruled out by (\ref{38}) and so we fix $\uz=2$. Then with $m=1,\,2$, $(m, n)$ satisfies (\ref{38}) for every $n$: $2\le n \le 17$. For the simplest choice $m=n=\uz$, $\dz_{m,\, \uz}^{-, \gz}$ is attained at $i=1$ once $\gz\ge 2$, $\dz_{n,\, \uz}^{+, \gz}$ is attained at $i=4$ whenever $\gz\ge 5/4$, and then the solution to the equation $\dz_{m,\, \uz}^{-, \gz}=\dz_{n,\, \uz}^{+, \gz}$ is $\gz\approx 3.2273$.
Therefore, $$\dz_{m,\, \uz}^{-, \gz}=\dz_{n,\, \uz}^{+, \gz}\approx 1.62.$$ A better choice is $(m, n)=(1, 5)$. Then $\gz\approx 3.944$ and $$4\times (R)=4\times (L)\approx 6.042,\qd (M_-)\approx 2.014, \qd (M_+)\approx 5.54.$$ This certainly implies (\ref{34}).} \dexmp \xmp{\bf\rf{cmf10}{Examples 7.7\,(8)}}\qd{\rm Let $E=\{1, 2, \cdots\}$, $a_1=1$, $a_i=(i-1)^2$ for $i\ge 2$ and $b_i=i^2$ for $i\ge 1$. Then $\lz^{\rm DD}=1/4=\big(4\,\kz^{\rm DD}\big)^{-1}$. Once again, this example is dangerous. Clearly, $\mu_i=1$, $\mu_i a_i= a_i$ and $\mu_i b_i=b_i$ for all $i\ge 1$. We have $$\fz_i^-=1+\sum_{k=1}^{i-1} \frac{1}{k^2},\qqd \fz_i^+ =\sum_{\ell=i}^\infty \frac{1}{\ell^2}$$ and \begin{gather} \dz_{m,\, \uz}^{-, \gz}=\sup_{i\le m}\bigg[\frac{1}{\gz}+ \uz-i \bigg]\fz_i^-, \qqd \dz_{n,\, \uz}^{+, \gz}=\sup_{i\ge n} \bigg[\frac{\gz-1}{\gz}+ i-\uz\bigg]\fz_i^+.\nonumber\\ \frac{h_{\uz}^{-, \gz}}{f_m^{-, m}} =\frac{1}{\sqrt{\fz_m^-}}\sum_{k=1}^{m-1} (\fz_k^-)^{3/2} + \sum_{k=m}^{\uz-1} \fz_k^- +\frac{1}{\gz} \fz_{\uz}^- ,\nonumber\\ \frac{h_{\uz}^{+, \gz}}{f_n^{+, n}} =\frac{1}{\sqrt{\fz_n^+}}\sum_{k=n+1}^\infty (\fz_k^+)^{3/2} + \sum_{k=\uz+1}^{n} \fz_k^+ +\frac{\gz-1}{\gz} \fz_{\uz}^+.\nonumber \end{gather} As in the last example, we use $(L)$, $(R)$, $(M_-)$ and $(M_+)$ to denote the last four quantities. The only candidate given by (\ref{32}) is $\uz=2$, which is fixed from now on. Then (\ref{38}) holds for all $m=1, 2$ and $n\ge 2$. The key point in this example is that $4\times (R)=4$, independent of $\uz$ and $\gz$. With $m=2$, the maximum of $(L)$ is achieved at $i=1$, and it tends to 1 as $\gz\to\infty$. Since $m=\uz$, the term $(M_-)$ is ignored. Besides, we have $(M_+)<4$ for all $n$: $2\le n\le 58$. Therefore, (\ref{34}) holds.} \dexmp To conclude the paper, we make a remark on the generalization of the results given here. \rmk {\rm By using a known technique (cf.
\rf{cmf05}{Section 6.7}), the variational formula and its corollaries for the lower estimate of ${\lz}^{\text{\rm DD}}$ can be extended to a more general setup (Poincar\'e-type inequalities). The upper estimate is easier and was given in \rf{cmf10}{the remark above Corollary 8.3}. } \dermk
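The closed form in Example 7.7\,(5) above is easy to reproduce numerically, using $\sum_{k\ge 4}1/k!=e-8/3$. The following lines (an illustrative Python sketch, not part of the paper) recover $(\kz^{\rm DD})^{-1}\approx 0.6174$ and $\kz^{\rm DD}\approx 1.62$:

```python
import math

# Check of Example 7.7(5): (kappa^DD)^{-1}
# = ( [sum_{k=1}^{1} 1/(k-1)!]^{-1} + [sum_{k>=4} 1/k!]^{-1} ) * [sum_{l=1}^{4} l!]^{-1}
inv_first = 1.0 / sum(1.0 / math.factorial(k - 1) for k in range(1, 2))  # = 1
tail = math.e - sum(1.0 / math.factorial(k) for k in range(0, 4))        # sum_{k>=4} 1/k! = e - 8/3
inv_kappa = (inv_first + 1.0 / tail) / sum(math.factorial(l) for l in range(1, 5))
print(round(inv_kappa, 4), round(1.0 / inv_kappa, 2))  # 0.6174 1.62
```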
\section{Introduction} Turing machines can be classified according to their numbers of states and symbols. It is known (see \cite{WN09} for a survey) that there are universal Turing machines in the following sets (number of states $\times$ number of symbols): $$2 \times 18,\ 3 \times 9,\ 4 \times 6,\ 5 \times 5,\ 6 \times 4,\ 9 \times 3,\ 18 \times 2.$$ On the other hand, all the Turing machines in the following sets are decidable: $$1 \times n,\ 2 \times 3,\ 3 \times 2,\ n \times 1.$$ In order to refine the classification of Turing machines between the universal and decidable classes, properties in connection with the $3x + 1$ function have been considered. Recall that the $3x + 1$ function $T$ is defined by $$T(x) =\left\{ \begin{array}{ll} x/2 & \mbox{ if $x$ is even}\\ (3x + 1)/2 & \mbox{ if $x$ is odd} \end{array}\right.$$ This can also be written $T(2n) = n$, $T(2n + 1) = 3n + 2$. When function $T$ is iterated on a positive integer, it seems that the loop $2 \mapsto 1 \mapsto 2$ is always reached, but this is unproven, and is a famous open problem in mathematics \cite{La10}. For ease of reference, we state \begin{quote} {\bf {\boldmath $3 x + 1$} Conjecture}: When function $T$ is iterated from positive integers, the loop $2 \mapsto 1 \mapsto 2$ is always reached. \end{quote} The $3x + 1$ function is also called the Collatz function, and \emph{Collatz-like} functions are functions on integers with a definition of the following form: there exist integers $d \ge 2$, $a_i$, $b_i$, $0 \le i \le d-1$, such that, for all integers $x$, $$f(x) = \frac{a_ix + b_i}{d}\quad \mbox{if}\quad x \equiv i\quad \mbox{(mod $d$)}.$$ With these definitions, we can state the following properties of Turing machines that have been used to refine the classification according to the numbers of states and symbols (see \cite{MM10} for a survey). \begin{itemize} \item Turing machines that simulate the iteration of the $3x + 1$ function and never halt.
It is known that there are such machines in the sets $$2 \times 8,\ 3 \times 5,\ 4 \times 4,\ 5 \times 3,\ 10 \times 2.$$ We improve these results by giving a $3 \times 4$ Turing machine. \item Turing machines that simulate the iteration of the $3x + 1$ function and halt when the loop $2 \mapsto 1 \mapsto 2$ is reached. It is known that there is such a machine in the set $6 \times 3$. In this article, we give four new Turing machines, in the classes $3 \times 10$, $4 \times 6$, $5 \times 4$ and $13 \times 2$. \item Turing machines that simulate the iteration of a Collatz-like function. It is known that there are such machines in the sets $$2 \times 4,\ 3 \times 3,\ 5 \times 2.$$ \end{itemize} \section{Preliminaries: Turing machines} The Turing machines we use have \begin{itemize} \item one tape, infinite on both sides, made of cells containing symbols, \item one reading and writing head, \item a set $Q = \{A, B, \ldots\}$ of states, plus a halting state $H$ (or $Z$), \item a set $\Sigma = \{b,0,1,\ldots\}$ of symbols, where $b$ is the blank symbol (or $\Sigma = \{0,1\}$, when 0 is the blank symbol), \item a next move function $$\delta : Q \times \Sigma \rightarrow \Sigma \times \{L, R\} \times(Q \cup\{H\}).$$ \end{itemize} If $\delta(p,a) = (b,D,q)$, then the Turing machine, reading symbol $a$ in state $p$, replaces $a$ by $b$, moves in the direction $D \in \{L, R\}$ ($L$ for Left, $R$ for Right), and comes into state $q$. On an input $x_k\ldots x_0 \in \Sigma^{k+1}$, the initial configuration is $^\omega b(Ax_k)\ldots x_0b^\omega$. This means that the word $x_k\ldots x_0$ is written on the tape between two infinite strings of blank symbols, and the machine is reading symbol $x_k$ in state $A$.
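All the machines considered below simulate the iteration of $T$ defined in the introduction. That iteration can be sketched in a few lines (illustrative Python, not part of the paper's constructions; the names are ours):

```python
def T(x):
    """One step of the 3x+1 function: T(2n) = n, T(2n+1) = 3n+2."""
    return x // 2 if x % 2 == 0 else (3 * x + 1) // 2

def orbit(x, limit=1000):
    """Iterate T from x until 1 is reached (entering the loop 2 -> 1 -> 2),
    or give up after `limit` steps (the conjecture says this never happens)."""
    seq = [x]
    while x != 1 and len(seq) < limit:
        x = T(x)
        seq.append(x)
    return seq

print(orbit(7))  # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```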
\begin{table} $$\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \mbox{symbols} & \multicolumn{13}{c}{}\\ \cline{1-3} 10 & Ma & \mbox{\bf Mi}_2& \multicolumn{11}{c}{}\\ \cline{1-3} 9 & & &\multicolumn{11}{c}{}\\ \cline{1-3} 8 & Ba & &\multicolumn{11}{c}{}\\ \cline{1-3} 7 & & &\multicolumn{11}{c}{}\\ \cline{1-4} 6 & & Ma & \mbox{\bf Mi}_2 & \multicolumn{10}{c}{}\\ \cline{1-4} 5 & & Ba & & \multicolumn{10}{c}{}\\ \cline{1-5} 4 & & Mi_2 & Ma &\mbox{\bf Mi}_2 & \multicolumn{9}{c}{}\\ \cline{1-6} 3 & & & & Ma &\mbox{\bf Mi}_1 & \multicolumn{8}{c}{}\\ \cline{1-13} 2 & & & & & & & & & Ba & Ma & & \mbox{\bf Mi}_2\\ \hline & 2 & 3 & 4 & 5 &\,6\, &\,7\,&\,8\,&\,9\,& 10 & 11 & 12 & 13 & \mbox{states} \end{array}$$ \caption{Turing machines simulating the $3x + 1$ function: $Ma=$ Margenstern \cite{Ma98,Ma00}, $Ba=$ Baiocchi \cite{Ba98}, $Mi_1=$ Michel \cite{Mi93}, $Mi_2=$ Michel (this paper). Boldface indicates halting machines.} \end{table} \section{The known Turing machines} Let us give some more details about the Turing machines that simulate the $3x + 1$ function. The following results are displayed in Table 1. Michel \cite{Mi93} gave a $6 \times 3$ Turing machine that halts when number 1 is reached. This machine works on numbers written in binary. Division by 2 of even integers is easy and multiplication by 3 is done by the usual multiplication algorithm. Margenstern \cite{Ma98,Ma00} gave never halting $5 \times 3$ and $11 \times 2$ Turing machines in binary, and never halting $2 \times 10$, $3 \times 6$, $4 \times 4$ Turing machines in unary, that is, working on numbers $n$ written as strings of $n$ 1s. Baiocchi \cite{Ba98} gave five never halting Turing machines in unary, including $2 \times 8$, $3 \times 5$ and $10 \times 2$ machines that improved Margenstern's results. In this article, we give a never halting $3 \times 4$ Turing machine that works on numbers written in base 3. Multiplication by 3 is easy and division by 2 is done by the usual division algorithm.
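The digit-by-digit base-3 division that our $3 \times 4$ machine performs can be sketched as follows (illustrative Python; the helper name is ours). The running remainder carried from digit to digit is exactly what the machine stores in its finite control:

```python
def div2_base3(digits):
    """Divide a base-3 number (most significant digit first) by 2.
    Returns (quotient digits, remainder); the running remainder r is
    the information the machine keeps in its states."""
    q, r = [], 0
    for d in digits:
        v = 3 * r + d
        q.append(v // 2)
        r = v % 2
    return q, r

q, r = div2_base3([2, 1])   # 7 in base 3
print(q, r)                 # [1, 0] 1 -- appending a 2 gives [1, 0, 2] = 11 = T(7)
```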
Note that Baiocchi and Margenstern \cite{BM01} already used numbers written in base 3 to define cellular automata that simulate the $3x + 1$ function. By adding two states to this $3 \times 4$ Turing machine, we derive a $5 \times 4$ Turing machine that halts when number 1 is reached. We also give three other Turing machines that halt when number 1 is reached: \begin{itemize} \item A $3 \times 10$ Turing machine obtained by adding one state to the $2 \times 10$ Turing machine of Margenstern \cite{Ma98,Ma00}. \item A $4 \times 6$ Turing machine obtained by adding one state to the $3 \times 6$ Turing machine of Margenstern \cite{Ma98,Ma00}. \item A $13 \times 2$ Turing machine obtained by adding two states to the $11 \times 2$ Turing machine of Margenstern \cite{Ma98,Ma00}. \end{itemize} \section{A never halting $3 \times 4$ Turing machine} This Turing machine $M_1$ is defined as follows: \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $M_1$ & $b$ & 0 & 1 & 2 \\ \hline $A$ & $b$L$C$ & 0R$A$ & 0R$B$ & 1R$A$ \\ \hline $B$ & 2L$C$ & 1R$B$ & 2R$A$ & 2R$B$ \\ \hline $C$ & $b$R$A$ & 0L$C$ & 1L$C$ & 2L$C$ \\ \hline \end{tabular} \end{center} The idea is simple. A positive integer is written on the tape, in base 3, in the usual order. Initially, in state $A$, the head reads the most significant digit, at the left end of the number. The initial configuration on input $x = \sum_{i=0}^k x_i3^i$ is $^\omega b(Ax_k)\ldots x_0b^\omega$. Then the machine performs the division by 2, using the usual division algorithm. Partial quotients are written on the tape. Partial remainders are stored in the states: 0 in state $A$, 1 in state $B$. When the head passes the right end of the number, reading a $b$, then \begin{itemize} \item if the remainder is 0, nothing is done: $2n \mapsto n$, \item if the remainder is 1, a 2 is concatenated to the quotient, so that $2n + 1 \mapsto n \mapsto 3n + 2$.
\end{itemize} Then the head comes back, in state $C$, to the left end of the number and is ready to perform a new division by 2. We have the following theorem. \begin{thm} The $3x + 1$ conjecture is true iff, for every positive integer $x = x_k\ldots x_0$ written in base 3, there exists an integer $n \ge 0$ such that, on input $x_k\ldots x_0$, the Turing machine $M_1$ eventually reaches the configuration ${^\omega}b0^n(A1)b^\omega$. \end{thm} \section{Turing machines that halt on the final loop} \subsection{A $3 \times 10$ Turing machine} Margenstern \cite[Fig.\ 11]{Ma00} gave the following never halting $2 \times 10$ Turing machine $M_2$. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $M_2$ & $b$ & 1 & $x$ & $r$ & $u$ & $v$ & $y$ & $z$ & $t$ & $k$ \\ \hline $A$ & $b$R$A$ & $x$R$B$ & 1L$A$ & $k$R$B$ & $x$R$A$ & $x$R$A$ & $r$L$A$ & $r$L$A$ & $y$R$A$ & \\ \hline $B$ & $z$L$B$ & $u$R$B$ & $x$R$B$ & $y$R$B$ & $v$L$B$ & $u$R$A$ & $t$L$B$ & 1L$A$ & $x$R$B$ & $b$R$B$ \\ \hline \end{tabular} \end{center} Turing machine $M_2$ works on numbers written in unary, so that the initial configuration on number $n \ge 1$ is $^\omega b(A1)1^{n-1}b^\omega$. By adding a new state $C$, we can detect the partial configuration $(A1)b$, and we obtain the following $3 \times 10$ Turing machine $M_3$. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $M_3$ & $b$ & 1 & $x$ & $r$ & $u$ & $v$ & $y$ & $z$ & $t$ & $k$ \\ \hline $A$ & $b$R$A$ & $x$R$C$ & 1L$A$ & $k$R$B$ & $x$R$A$ & $x$R$A$ & $r$L$A$ & $r$L$A$ & $y$R$A$ & \\ \hline $B$ & $z$L$B$ & $u$R$B$ & $x$R$B$ & $y$R$B$ & $v$L$B$ & $u$R$A$ & $t$L$B$ & 1L$A$ & $x$R$B$ & $b$R$B$ \\ \hline $C$ & $b$L$H$ & $u$R$B$ & & $y$R$B$ & & & & & & \\ \hline \end{tabular} \end{center} We have the following theorem. \begin{thm} The $3x + 1$ conjecture is true iff, for all positive integers $n$, Turing machine $M_3$ halts on the initial configuration $^\omega b(A1)1^{n-1}b^\omega$.
\end{thm} \subsection{A $4 \times 6$ Turing machine} Margenstern \cite[Fig.\ 10]{Ma00} gave the following never halting $3 \times 6$ Turing machine $M_4$ (note that transition $(1,z) \mapsto (x\mbox{R}2)$ in this figure should be $(1,z) \mapsto (r\mbox{R}2)$). \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $M_4$ & $b$ & 1 & $x$ & $a$ & $z$ & $r$ \\ \hline $A$ & $b$R$A$ & $x$R$B$ & 1L$A$ & 1L$A$ & $r$R$B$ & \\ \hline $B$ & 1L$B$ & $a$R$C$ & 1L$B$ & 1L$A$ & $x$R$B$ & $b$R$A$ \\ \hline $C$ & $z$L$C$ & $x$R$C$ & 1L$C$ & $a$R$A$ & $r$R$C$ & $z$L$C$ \\ \hline \end{tabular} \end{center} Turing machine $M_4$ works on numbers written in unary, with initial configuration ${^\omega}b(A1)1^{n-1}b^\omega$. By adding a new state $D$, we can detect the partial configuration $(A1)b$, and we obtain the following $4 \times 6$ Turing machine $M_5$. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $M_5$ & $b$ & 1 & $x$ & $a$ & $z$ & $r$ \\ \hline $A$ & $b$R$A$ & $x$R$D$ & 1L$A$ & 1L$A$ & $r$R$B$ & \\ \hline $B$ & 1L$B$ & $a$R$C$ & 1L$B$ & 1L$A$ & $x$R$B$ & $b$R$A$ \\ \hline $C$ & $z$L$C$ & $x$R$C$ & 1L$C$ & $a$R$A$ & $r$R$C$ & $z$L$C$ \\ \hline $D$ & $b$L$H$ & $a$R$C$ & & & $x$R$B$ & \\ \hline \end{tabular} \end{center} We have the following theorem. \begin{thm} The $3x + 1$ conjecture is true iff, for all positive integers $n$, Turing machine $M_5$ halts on the initial configuration ${^\omega}b(A1)1^{n-1}b^\omega$. \end{thm} \subsection{A $5 \times 4$ Turing machine} This Turing machine $M_6$ is defined as follows.
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $M_6$ & $b$ & 0 & 1 & 2 \\ \hline $A$ & $b$L$C$ & 0R$A$ & 0R$B$ & 1R$A$ \\ \hline $B$ & 2L$E$ & 1R$B$ & 2R$A$ & 2R$B$ \\ \hline $C$ & $b$R$D$ & 0L$C$ & 1L$C$ & 2L$C$ \\ \hline $D$ & & $b$R$A$ & $b$R$B$ & 1R$A$ \\ \hline $E$ & $b$R$H$ & 0L$C$ & 1L$C$ & 2L$C$ \\ \hline \end{tabular} \end{center} Turing machine $M_6$ is obtained from Turing machine $M_1$ by adding a state $D$ that wipes out the useless 0s, and a state $E$ that detects the partial configuration $b(Bb)$. We have the following theorem. \begin{thm} The $3x + 1$ conjecture is true iff Turing machine $M_6$ halts on all input $x = x_k\ldots x_0$ representing a positive integer written in base 3. \end{thm} \subsection{A $13 \times 2$ Turing machine} Margenstern \cite[Fig.\ 8]{Ma00} gave the following never halting $11 \times 2$ Turing machine $M_7$ (in this table, $H$ is \emph{not} a halting state). \begin{center} \begin{tabular}{|c|c|c|} \hline $M_7$ & 0 & 1 \\ \hline $A$ & 1R$I$ & 0R$B$ \\ \hline $B$ & 0R$A$ & 0R$G$ \\ \hline $C$ & 0R$A$ & 1R$D$ \\ \hline $D$ & 0R$C$ & 1R$E$ \\ \hline $E$ & 1R$I$ & 1R$F$ \\ \hline $F$ & 1R$C$ & 0R$G$ \\ \hline $G$ & 1R$C$ & 1R$H$ \\ \hline $H$ & 0R$E$ & 1R$G$ \\ \hline $I$ & 1L$J$ & \\ \hline $J$ & 0R$B$ & 1L$K$ \\ \hline $K$ & 0L$J$ & 1L$J$ \\ \hline \end{tabular} \end{center} This machine works on numbers written in binary, with the least significant bit at the left end of the number, and digits 0 and 1 coded by 10 and 11, so that the initial configuration on number $n = x_k\ldots x_0 = \sum_{i=0}^k x_i2^i$ is $^\omega 0(A1)x_01x_1\ldots 1x_k0^\omega$. Division by 2 of even integers is easy, and multiplication by 3 is done by the usual algorithm. By adding two new states $L$ and $M$, we can detect the partial configuration $(A1)10$, and we obtain the following $13 \times 2$ Turing machine $M_8$, where $Z$ is the halting state. 
\begin{center} \begin{tabular}{|c|c|c|} \hline $M_8$ & 0 & 1 \\ \hline $A$ & 1R$I$ & 0R$L$ \\ \hline $B$ & 0R$A$ & 0R$G$ \\ \hline $C$ & 0R$A$ & 1R$D$ \\ \hline $D$ & 0R$C$ & 1R$E$ \\ \hline $E$ & 1R$I$ & 1R$F$ \\ \hline $F$ & 1R$C$ & 0R$G$ \\ \hline $G$ & 1R$C$ & 1R$H$ \\ \hline $H$ & 0R$E$ & 1R$G$ \\ \hline $I$ & 1L$J$ & \\ \hline $J$ & 0R$B$ & 1L$K$ \\ \hline $K$ & 0L$J$ & 1L$J$ \\ \hline $L$ & 0R$A$ & 0R$M$ \\ \hline $M$ & 0L$Z$ & 1R$H$ \\ \hline \end{tabular} \end{center} We have the following theorem. \begin{thm} The $3x + 1$ conjecture is true iff, for every positive number $n = x_k\ldots x_0 = \sum_{i=0}^k x_i2^i$, Turing machine $M_8$ halts on the initial configuration $^\omega 0(A1)x_01x_1\ldots 1x_k0^\omega$. \end{thm} \section{Conclusion} We have given a new $3 \times 4$ never halting Turing machine that simulates the iteration of the $3x + 1$ function. It seems that it will be hard to improve the known results on never halting machines. On the other hand, for Turing machines that halt on the conjectured final loop of the $3x + 1$ function, more research remains to be done.
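As a check on the construction of $M_1$, the machine can be simulated directly. The sketch below (illustrative Python; the names are ours, the transition table is the one given above) runs $M_1$ and reads off the tape each time the head returns to the left end, recovering the orbit of $T$:

```python
# Transition table of M_1: (state, symbol) -> (write, move, next state)
M1 = {('A', 'b'): ('b', 'L', 'C'), ('A', '0'): ('0', 'R', 'A'), ('A', '1'): ('0', 'R', 'B'), ('A', '2'): ('1', 'R', 'A'),
      ('B', 'b'): ('2', 'L', 'C'), ('B', '0'): ('1', 'R', 'B'), ('B', '1'): ('2', 'R', 'A'), ('B', '2'): ('2', 'R', 'B'),
      ('C', 'b'): ('b', 'R', 'A'), ('C', '0'): ('0', 'L', 'C'), ('C', '1'): ('1', 'L', 'C'), ('C', '2'): ('2', 'L', 'C')}

def m1_orbit(x, passes):
    """Run M_1 on x written in base 3; each time a division pass ends
    (state C reading the blank left of the number), record the tape value."""
    digits = []
    while x:
        digits.append('012'[x % 3])
        x //= 3
    tape = dict(enumerate(reversed(digits)))  # most significant digit at cell 0
    pos, state, values = 0, 'A', []
    while len(values) < passes:
        sym = tape.get(pos, 'b')
        if state == 'C' and sym == 'b':       # head just left of the number
            v, i = 0, pos + 1
            while tape.get(i, 'b') != 'b':
                v = 3 * v + int(tape[i])
                i += 1
            values.append(v)
        w, move, state = M1[(state, sym)]
        tape[pos] = w
        pos += 1 if move == 'R' else -1
    return values

print(m1_orbit(7, 12))  # [11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1, 2] -- the T-orbit of 7
```

Each pass indeed maps the tape value $n$ to $T(n)$, in agreement with Theorem 1.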
\section{Introduction} Discrete approaches have proved useful in many areas of physics. Regge calculus \cite{Regge} is a discrete formulation of general relativity (GR) where spacetime is approximated by a triangulated manifold, and the fundamental variables used to describe the metric are the lengths of the edges of the triangulation. This approach has been applied with some success to classical gravity \cite{RuthRegge, Gentle}, and used as a starting point for a lattice quantization of GR \cite{RuthRegge, Rocek, Immirzi, Hamber}. As other non-perturbative approaches to quantum gravity, quantum Regge calculus suffers from the problem of defining a unique gauge-invariant measure in the path integral. The background-independent spinfoam approach \cite{carlo} suggests an original route based on the well-defined quantum measure of BF theory. The latter is a topological theory where area variables appear naturally, and whose action can be reduced to GR by means of so-called simplicity constraints, as discovered by Plebanski \cite{Plebanski}. This and other motivations have led Rovelli \cite{carloarea} to suggest that 4d quantum gravity should be related to a modification of Regge calculus where the fundamental variables are the areas of triangles rather than the edge lengths. Some effort was put in this line of research by Makela and Williams among others \cite{Makela,Barrett,Wainwright,RuthRegge}, but the problem has been open for more than ten years. The main difficulty lies in the fact that a generic triangulation has many more triangles than edges, thus area variables should be constrained. An explicit expression of these constraints is obscured by their non-local nature in the triangulation. In this paper, we introduce a description of discrete gravity that overcomes this difficulty. The key idea is to enlarge the set of variables from areas only, to areas and angles. 
In this way the constraints become local, are easy to write explicitly, and further they are related to the simplicity constraints of Plebanski's formulation of GR.\footnote{Similar ideas have been investigated by Reisenberger \cite{Mike} and Rovelli \cite{CarloUn}.} We approximate the spacetime manifold by a simplicial triangulation, where each 4-simplex\footnote{The 4-simplex, also known as pentachoron in the mathematical literature, is the convex hull of five points. A 4-simplex contains five tetrahedra, ten triangles and ten edges.} is flat and the curvature is described by deficit angles associated to the triangles. Regge calculus uses the fact that on each 4-simplex $\sigma$ the ten components of the (constant) metric tensor $g_{\mu\nu}(\sigma)$ can be straightforwardly expressed in terms of the ten edge lengths $\ell_e$. A further advantage of using the edge lengths as variables is that they endow each tetrahedron with six quantities which are sufficient to completely characterize the tetrahedron's geometry. Therefore the gluing of 4-simplices, obtained by identifying a shared tetrahedron, is trivial and causes no complications. On a single 4-simplex, there are also ten triangles, suggesting that areas can be equivalently taken as the metric variables. There are two difficulties with this idea. First, it is less straightforward to express $g_{\mu\nu}(\sigma)$ in terms of areas. For instance, the change of variables from edge lengths to areas on a 4-simplex is singular for orthogonal configurations \cite{Barrett}, that is, where right angles among the edges are present. Even the equal area configuration has a two-fold ambiguity where the same set of areas corresponds to two different sets of edge lengths. This is a more significant difficulty than it might seem at first sight, as such configurations are relevant in the case of a regular lattice, the simplest flat solution to Regge calculus. The second issue is even more serious.
Ten areas might be enough to describe the 4-geometry of the simplex, but how about its boundary 3-geometry? Take any of the five tetrahedra in the 4-simplex: its geometry is not uniquely defined by the areas of its four triangles (two more quantities are needed, corresponding for instance to (non-opposite) dihedral angles). So we need the geometry of the full 4-simplex to determine the individual geometry of any of its boundary tetrahedra. As a consequence, two adjacent 4-simplices in a triangulation will typically induce different geometries on the common tetrahedron, leading to discontinuities in the metric \cite{Wainwright}, or to non-local constraints involving the two 4-simplices \cite{Makela}. A solution to the problem can be achieved by adding to the areas enough variables so that the geometry of each of the five tetrahedra can be independently and completely determined. A natural choice is to add the tetrahedral dihedral angles. Of course, this pleonastic set of variables needs to be constrained in order to successfully reproduce the dynamics of GR. We now turn to the study of these constraints. \section{De natura pentachori} Let us study how to characterize the geometry of a 4-simplex and its five boundary tetrahedra, using areas and 3d dihedral angles. We use a notation which might seem counterintuitive at first, but that pays off well in terms of efficiency and extends to any dimension. We denote by $V$ the 4-volume of the simplex $\si$ (or the $n$-volume in general), $V(i)$ the 3-volume of the tetrahedron $\si(i)$ obtained by removing the vertex $i$ from the 4-simplex, $V(ij)$ the area of the triangle $\si(ij)$ obtained by removing the vertices $i$ and $j$, and so on.
For the dihedral angles, we use the following notation: $\theta_{ij}$ is the 4d dihedral angle between the tetrahedra $\si(i)$ and $\si(j)$, hinged at the triangle $\si(ij)$; $\phi_{ij,k}$ is the 3d dihedral angle between the two triangles $\si(ik)$ and $\si(jk)$, hinged at $\si(ijk)$ within the tetrahedron $\si(k)$; finally, $\alpha_{ij,kl}$ is the 2d dihedral angle between the edges $\si(ijk)$ and $\si(ijl)$ belonging to the triangle $\si(kl)$. All dihedral angles are \emph{internal}, thus for instance an equilateral $4$-simplex has $\cos\theta=1/4$. These various types of dihedral angles satisfy a number of relations in a closed 4-simplex, which we present together with their proofs in the Appendix. An important role in our construction is played by the following expression of the 2d $\alpha$'s in terms of the 3d $\phi$'s, \begin{eqnarray}\label{a} \cos\alpha_{ij,kl}=\frac{\cos\phi_{ij,k}+\cos\phi_{il,k} \cos\phi_{jl,k} }{\sin\phi_{il,k} \sin\phi_{jl,k}}. \end{eqnarray} In this formula the 2d angle, belonging to the triangle $kl$, is described in terms of three 3d angles all belonging to the \emph{same} tetrahedron $k$. In a closed 4-simplex, a triangle is shared by two tetrahedra, thus there are two possible choices. Consistency of the two choices, i.e. $\alpha_{ij,kl}=\alpha_{ij,lk}$ (see Fig.\ref{fig1}), gives \begin{eqnarray}\label{aa} {\cal C}_{kl,ij}(\phi) &\equiv& \frac{\cos\phi_{ij,k}+\cos\phi_{il,k} \cos\phi_{jl,k} }{\sin\phi_{il,k} \sin\phi_{jl,k}} \nonumber\\ &-& \frac{\cos\phi_{ij,l}+\cos\phi_{ik,l} \cos\phi_{jk,l} }{\sin\phi_{ik,l} \sin\phi_{jk,l}} = 0. 
\end{eqnarray} \begin{figure}[h] \includegraphics[width=2cm]{aa.eps} \caption{\label{fig1} The geometric meaning of equation \Ref{aa}: the 2d angle $\alpha_{ij,kl}$ belonging to the shaded triangle can be expressed in terms of 3d angles associated to the thick edges of the tetrahedron $k$, or equivalently of the tetrahedron $l$.} \end{figure} Thus a consistent gluing of the tetrahedra in a 4-simplex gives relations among the $\phi$'s. These are three relations per triangle, hence $30$ in total, of which only $20$ are independent. To see this, we linearized the equations (\ref{aa}) around generic non-degenerate configurations, including the potentially harmful orthogonal one, and used an algebraic manipulator to study the rank. The good behaviour of the orthogonal configuration can also be anticipated from the absence of cosines in the denominator of \Ref{a}. Of course, our construction would fail for degenerate configurations where one or more angles equal $0$ or $\pi$. These relations are important to characterize the geometry of a 4-simplex. Consider a generic 4-simplex. Its ten 4d angles $\theta_{ij}$ define the Gram matrix $G_{ij}(\theta) \equiv \cos\theta_{ij}$ (with the convention $\cos\theta_{ii} \equiv -1$). If the simplex is closed and flat, these ten angles cannot all be independent, but have to satisfy the condition of vanishing of the Gram determinant, $\det G=0$ (e.g. \cite{Barrett1} and \cite{Freidel}).\footnote{This condition is the origin of the well-known Schl\"afli identity.} The nine independent quantities parametrize the space of shapes of the 4-simplex (a scale factor being the tenth and last metric variable). We then expect that to characterize the geometry in terms of the thirty 3d angles $\phi_{ij,k}$, there must exist 21 relations among them. These can be found as follows.
First, we consider the Gram matrices $G^k{}_{ij}(\phi)$ associated with the five tetrahedra; imposing the vanishing of their determinants guarantees that the tetrahedra are closed. These are five independent conditions. Next, we use the 2d angle consistency relations \Ref{aa} to ensure a consistent gluing of the tetrahedra into a 4-simplex. The complete set \begin{equation}\label{C1} \det \,G^k(\phi) = 0, \quad {\cal C}_{kl,ij}(\phi) = 0 \end{equation} can be shown, again by linearization, to have rank 21. Notice that the first constraint is local on each tetrahedron, unlike the second, which involves two adjacent tetrahedra. Hence we have found a set of relations that is necessary and sufficient for the $\phi$'s to be the 3d dihedral angles of a 4-simplex. Other sets are possible (see Appendix B); the advantage of this one is the transparency of its geometric meaning. \section{Area-angle Regge calculus} With the understanding of the geometry of a 4-simplex gained above, we now come to the main point of this paper: describing the dynamics of general relativity on a discrete manifold, using areas and 3d angles as variables. For simplicity, we consider here the case of a Riemannian manifold with no boundaries. The extension to Lorentzian signature and to boundary terms will be discussed elsewhere. Using the standard notation ($t$ a triangle, $e=tt'$ an edge), the variables on the full triangulation are $A_t$ and $\phi_e^\tau$. The gluing conditions \Ref{aa} refer to each pair of edges in a triangle shared by two tetrahedra. In a triangulation there will be in general many tetrahedra around the same triangle, and \Ref{aa} has to hold for any choice of two. However, transitivity ensures that it is enough to impose \Ref{aa} on the pairs of tetrahedra belonging to the same 4-simplex.
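As an illustrative numerical check (ours, not part of the original analysis), both \Ref{a} and the Gram conditions of \Ref{C1} can be verified on the equilateral 4-simplex, where $\cos\theta_{ij}=1/4$ and $\cos\phi_{ij,k}=1/3$; a short Python sketch:

```python
import math

def det(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

# Gram matrices of the equilateral 4-simplex (cos theta = 1/4)
# and of one of its tetrahedra (cos phi = 1/3)
G4 = [[-1.0 if i == j else 0.25 for j in range(5)] for i in range(5)]
G3 = [[-1.0 if i == j else 1 / 3 for j in range(4)] for i in range(4)]
print(det(G4), det(G3))   # both vanish: closed flat 4-simplex, closed tetrahedra

# 2d angle from three 3d angles of a single tetrahedron, as in Eq. (a)
c = 1 / 3
s = math.sqrt(1.0 - c * c)
cos_alpha = (c + c * c) / (s * s)
print(round(cos_alpha, 12))   # 0.5, i.e. alpha = 60 degrees
```

Here $\cos\alpha=1/2$ reproduces the $60$-degree angles of an equilateral triangle, and ${\cal C}_{kl,ij}=0$ holds trivially by symmetry.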
We can then write these constraints as \begin{equation}\label{Cee} {\cal C}^\si_{ee'}(\phi_e^\tau)=0, \end{equation} where ${\cal C}^\si_{ee'}$ is given by \Ref{aa} for $e=\si(kli)$ and $e'=\si(klj)$ sharing a vertex in a 4-simplex $\si$, and zero otherwise. On a single 4-simplex we have ten areas and thirty 3d angles, thus we need thirty independent constraints to reduce the total number of variables to ten. The situation parallels the analysis we performed in the previous section. We can still take the triangle gluing conditions \Ref{Cee} involving only $\phi$ angles, and include the areas in the closure conditions for the five tetrahedra. Denoting by $n_t$ the normal to a triangle, we have by definition $|n_t|^2 = A_t^2$ and $n_t \cdot n_{t'} = - A_t A_{t'} \cos\phi_{tt'}^\tau$. The closure condition on a tetrahedron $\tau$ reads $N_\tau \equiv \sum_{t\in\tau} n_t = 0$. By sequentially taking the scalar product of $N_\tau$ with the four $n_t$ we obtain four constraints, \begin{equation}\label{closure} {\cal N}_t^\tau(A,\phi) = A_t - \sum_{t'\neq t} A_{t'}\, \cos\phi_{tt'}^\tau = 0. \end{equation} Considering the five tetrahedra of the whole 4-simplex, \Ref{closure} gives twenty constraints, to be added to the thirty constraints \Ref{Cee}. Again we studied the number of independent constraints by linearization, and found that the resulting system has rank 30 for a generic configuration and also for the orthogonal one. Consequently, only ten of the forty variables used are truly independent. This is consistent with the kinematical degrees of freedom of discrete general relativity. As shown explicitly in the Appendix, the forty variables $(A_t, \phi_e^\tau)$ satisfying these thirty independent relations determine completely the geometry of the 4-simplex \emph{and} of its five tetrahedra, thus each tetrahedron has a well-defined geometry, and gluing 4-simplices causes no problems.
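For instance (a trivial check with illustrative numbers, not from the paper), an equilateral tetrahedron with unit areas and $\cos\phi_{tt'}^\tau=1/3$ satisfies \Ref{closure} identically:

```python
# closure constraint: N_t = A_t - sum_{t' != t} A_t' cos(phi_tt') = 0,
# evaluated for an equilateral tetrahedron with unit areas
A = [1.0, 1.0, 1.0, 1.0]
cos_phi = 1.0 / 3.0
N0 = A[0] - sum(A[t] * cos_phi for t in range(1, 4))
print(round(N0, 12))   # 0.0: the four area normals sum to zero
```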
In other words, satisfying the constraints \Ref{Cee} and \Ref{closure} allows us to reconstruct uniquely a set of edge lengths from the variables $(A_t, \phi_e^\tau)$. We then consider the following action for general relativity, \begin{eqnarray}\label{action} && S[A_{t},\phi_e^\tau,\lambda_t^\tau, \mu^\si_{ee'}] = \sum_{t} A_{t} \, \eps_{t}(\phi) + \\\nonumber && \qquad + \sum_\tau \sum_{t\in\tau} \lambda_t^\tau \, {\cal N}_t^\tau(A, \phi) + \sum_\si \sum_{ee'\in \si} \mu^\si_{ee'} \, {\cal C}^\si_{ee'}(\phi). \end{eqnarray} The first term is just the Regge action with independent area-angle variables,\footnote{The deficit angles $\eps_t$ are given by the sum over 4d angles on the 4-simplices sharing the triangle $t$, $\eps_t = 2\pi -\sum_{\si} \theta_t^\si$. We describe in the Appendix how to express them in terms of the $\phi$'s, or in terms of edge lengths as is done in Regge calculus.} and the other terms are the constraints \Ref{closure} and \Ref{Cee} imposed by the Lagrange multipliers $\lambda_t^\tau$ and $\mu_{ee'}^\si$. As discussed above, they effectively reduce the set of variables $(A_t, \phi_e^\tau)$ to the edge lengths $\ell_e$, therefore \Ref{action} is equivalent to the conventional Regge action, $S_{\rm R}[\ell_e] = \sum_t A_t(\ell_e) \, \eps_t(\ell_e)$. Notice that our approach should not be seen as a first-order formulation of Regge calculus (see for instance \cite{Caselle, Barrett1}), because we are adding 3d dihedral angles (the $\phi$'s), not 4d dihedral angles (the $\theta$'s): only the latter encode the extrinsic curvature of a 3d slice and are thus conjugate to the areas. The reader might wonder at this point whether \Ref{action} is a discretization of a continuum action for GR, just as Regge's is a discretization of the Einstein-Hilbert action $\int \sqrt{g} \, R$. We argue that this is the case, the continuum avatar of \Ref{action} being Plebanski's action \cite{Plebanski}.
The latter is a modified BF action, which schematically reads $S = \int B \wedge F + \mu \, {\cal C}(B)$ (we refer the reader to the literature for more details \cite{Pleb}). The term ${\cal C}(B)$ is a set of constraints reducing topological BF theory to GR. We are naturally led towards the interpretation of the first two terms of \Ref{action} as a discretization of BF theory, with the closure constraint \Ref{closure} implementing the Gauss constraint of the continuum BF action, and the third term as the simplicity constraints. Recall that Plebanski's constraints state that the bi-normal $B$ to any triangle $\si(ij)$ must be simple, i.e. it must be the wedge product of (any) two edge vectors. In our notation, $B_{ij} = \pm e_{ijk} \wedge e_{ijl}$ with $k$ and $l$ different from $i$ and $j$. Then if the closure and simplicity constraints are satisfied, they imply \begin{eqnarray}\label{simpl1} B_{ij}\cdot B_{ij} &=& e_{ijk}{}^2 \, e_{ijl}{}^2 - (e_{ijk}\cdot e_{ijl})^2 = \nonumber\\ &=& V_{ijk}{}^2 V_{ijl}{}^2 \sin\alpha_{ijkl}^2 = \nonumber\\ &=& V_{ij}{}^2, \\\label{simpl2} B_{ij}\cdot B_{ik} &=& e_{ijk}{}^2 \, (e_{ijl} \cdot e_{ikl}) - (e_{ijk}\cdot e_{ijl}) \, (e_{ijk} \cdot e_{ikl}) \nonumber\\ && \hspace{-1.8cm} = V_{ijk}{}^2 \, V_{ijl} \, V_{ikl} \, \big( \cos\alpha_{il,jk}-\cos\alpha_{ij,kl} \cos\alpha_{ik,jl} \big) = \nonumber\\ &=& V_{ij}\, V_{ik} \, \cos\phi_{jk,i}, \end{eqnarray} thus using \Ref{simpl1} in \Ref{simpl2} gives \begin{equation}\label{ainv} \cos\phi_{jk,i} = \frac{\cos\alpha_{il,jk}-\cos\alpha_{ij,kl} \cos\alpha_{ik,jl} }{\sin\alpha_{ij,kl} \sin\alpha_{ik,jl}}. \end{equation} This relation can be inverted to give \Ref{a} with \Ref{aa} holding (see the Appendix). Conversely if \Ref{aa} and \Ref{closure} are satisfied, we can proceed backwards and define bi-normals satisfying the simplicity constraints.\footnote{Notice that the simplicity constraints have typically solutions in two sectors. 
Here we are imposing directly the solution in the geometric sector, so we do not comment on this ambiguity, which however plays an important role in quantum models \cite{newvertex, noi, loro}.} To further study this correspondence, a canonical analysis of the action \Ref{action} is in progress, and will appear elsewhere \cite{Bianca2}. \section{Conclusions} We introduced a modified Regge calculus where the fundamental variables are areas and 3d dihedral angles between triangles. The action, given in \Ref{action}, is the conventional Regge term with independent area-angle variables, plus two additional constraints. The first imposes the closure of each tetrahedron in the triangulation; the second guarantees a consistent gluing between adjacent tetrahedra, by requiring the (conformal) geometry of the common triangle to be the same. All the constraints are local in the triangulation. We expect our action to be related to a discretization of Plebanski's action. Our main result is to show that the full set of constraints guarantees that local variables determine completely the geometry of each 4-simplex \emph{and of each tetrahedron} in the triangulation. As 4-simplices are glued together by identifying a tetrahedron in their boundary, being able to determine the tetrahedra's geometry is crucial for a consistent propagation of the degrees of freedom in the triangulation. The crucial counting of the independent constraints was performed by linearizing the constraints around generic non-degenerate configurations (see Appendix B). In particular we checked that the potentially harmful orthogonal configuration is well behaved. Similarly, the ambiguity of two sets of lengths giving the same areas \cite{Barrett} is removed in our formalism simply because the two sets give different 3d angles. On the basis of our analysis, we cannot completely exclude the presence of pathological configurations.
However, the well-behaved orthogonal configuration reassures us that, at least for the regular lattice, our approach solves both difficulties with area Regge calculus described in the introduction. This opens the way, for instance, to perturbation theory on a flat background. While studying the constraints, we found a number of relations between the dihedral angles of various dimensions of a 4-simplex. We present them together with their derivation in the Appendix. We also provide an explicit algorithm to compute the edge lengths from area-angles in a tetrahedron and in a 4-simplex. We expect our result to have a number of applications, and before concluding, we would like to briefly point out a few potentially promising ones. At the classical level, the canonical analysis of \Ref{action} could shed light on the description of the Hamiltonian algebra of deformation of discrete manifolds \cite{Bianca2}. The applicability of this approach to numerical studies of lattice gravity has to be explored. It would, for instance, be interesting to study whether our approach keeps the good convergence properties of conventional Regge calculus in the continuum limit \cite{Brewin}. At the quantum level, there are possible links to the spinfoam formalism that are worth exploring. The formalism is expected to provide a well-defined measure for a regularized path integral for non-perturbative quantum gravity (however, see also \cite{Bojo}). Recently, a spinfoam model has been proposed \cite{newvertex} (see also \cite{NewImmirzi}), whose dynamical variables can be expressed as normals to triangles \cite{noi} (see also \cite{loro}). The scalar reduction of these quantities produces exactly the variables $(A_t, \phi_e^\tau)$ considered here. The matching of variables suggests that the discrete calculus introduced here is a candidate for the semiclassical limit of this new spinfoam model, mimicking what happens in the 3d case with Regge calculus \cite{Ponzano}.
Indeed, the recent advances \cite{Ashtekar} on calculating the graviton propagator from spinfoams \cite{grav} are based precisely on such a link. In the context of pure area Regge calculus on a single 4-simplex, this idea was investigated in \cite{Bianchi}. From this viewpoint, it would be useful to study the quantum theory defined on a regular lattice in perturbative expansion around flat spacetime, as done by Rocek and Williams \cite{Rocek}. \subsubsection*{Acknowledgements} We would like to thank Ruth M. Williams for her inspirational work and Carlo Rovelli for useful discussions. This research was supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation.
\section{Introduction} \label{sec:intro} Microwave resonators (for example, Chap.~7 of Ref.~\onlinecite{Poz90}) are one of the key components in a variety of circuits operated at GHz frequencies, and their new applications continue to emerge. A simple example is band-pass filters, which are based on the fact that the microwave transmission through resonators is frequency sensitive. The same idea is also used for more complex devices, such as oscillators, tuned amplifiers, and frequency meters. Indeed, having high-quality filters and oscillators is critical in mobile communications, where available bands keep getting overcrowded as demand grows rapidly. Another application of microwave resonators is radiation detection: such detectors often consist of a sensor head and a readout circuit, and resonators can be used in the readout circuit. When one would like to detect at the single-photon level, one needs detectors with sufficiently high energy resolution. In this respect, superconducting sensor heads\cite{Irw95,Pea96,Gol01} can be advantageous, and may be the only solution at present depending on the energy range of the object. Once one decides to use superconducting sensor heads, it makes sense to fabricate the readout circuit with superconducting materials as well. Superconducting microwave resonators allow one to obtain higher quality factors, which are favorable for frequency multiplexing. In addition, one type of photon detector is designed to probe the change in the kinetic inductance of a superconducting thin-film resonator due to the absorbed photons.\cite{Maz02} In this device concept, the resonator works as a sensor head rather than as part of the readout circuit.
Recently, superconducting resonators have also been used for the nondemolition readout of superconducting qubits.\cite{Wal04} Since the demonstration by Wallraff {\it et al.},\cite{Wal04} this type of readout scheme has been one of the main topics in the field of superconducting qubits, and we are also developing a similar readout technique.\cite{Ino09} In superconducting resonators, kinetic inductance, which originates in the inertia of the current carriers, plays an important role, especially when the superconducting film is thin. In our circuit,\cite{Ino09} for example, a Nb $\lambda/4$ coplanar-waveguide (CPW) resonator is terminated by an Al dc SQUID, and the total thickness of the Al layers is 0.04~$\mu$m. In order to avoid a discontinuity at the Al/Nb interface, we usually choose the Nb thickness to be 0.05~$\mu$m, which is much thinner than a typical thickness of $\geq$0.3~$\mu$m for superconducting integrated circuits fabricated by the standard photolithographic technology. Fabricating circuits with thinner films is actually important from the viewpoint of miniaturization as well. Therefore, for designing resonators, quantitative understanding of the kinetic inductance in the CPW is important. There have been a number of reports on kinetic inductance for a variety of materials.\cite{Mes69,Rau93,Kis93,Wat94,Gub05,Fru05} In general, however, kinetic inductance is indirectly measured by assuming a theoretical model, and as a result, the uncertainties are relatively large. Thus, although kinetic inductance is a well-established notion and the phenomenon is qualitatively understood, the quantitative information is not necessarily sufficient from the point of view of applications, especially at high frequencies, $>$10~GHz. When one would like to predict the resonant frequency precisely, the best solution would be to characterize the actual CPW in a simple circuit. Such characterization should also improve our knowledge of superconducting microwave circuits.
Very recently, G\"{o}ppl {\it et al.}\cite{Gop08} measured a series of Al CPW resonators with nominally the same film thickness of 0.2~$\mu$m, and investigated the relationship between the loaded quality factor at the base temperature of 0.02~K and the coupling capacitance. For this purpose, it is justified to neglect kinetic inductance because the kinetic inductance should be the same in their resonators and is estimated\cite{Gop08} to be about two orders of magnitude smaller than the usual magnetic inductance determined by the CPW geometry. In this work, on the other hand, we paid close attention to the resonant frequency as well, and characterized Nb CPW resonators as a function of film thickness rather than of coupling capacitance. We also looked at the temperature dependence in order to discuss kinetic inductance in detail. \section{Experiment} \label{sec:Ex} \begin{table} \caption {\label{tab:list} List of resonators. $t$ is the thickness of the Nb film; $f_r$ is the resonant frequency; $Q_L$ is the loaded quality factor; $C_c$ is the coupling capacitance; $v_p$ is the phase velocity, and its ratio to the speed of light $c$ is listed in percent. The values for $f_r$ and $Q_L$ are obtained at the base temperatures.
$C_c$ and $v_p$ are evaluated by least-squares fitting (see Fig.~\ref{fig:S21}) with $C=1.6\times10^{-10}$~F/m, and their uncertainties are determined by changing the value of $C$ by $\pm10\%$, where $C$ is the capacitance per unit length.} \begin{ruledtabular} \begin{tabular}{cccccc} Reso- &$t$&$f_r$&$Q_L$&$C_c$&$v_p/c$\\ nator &($\mu$m)&(GHz)&($\times$10$^3$)&(fF)&(\%)\\ \hline A1& 0.05 & 10.01 & 1.6& 7.0$\pm$0.4 & 39.31$\pm$0.03 \\ A2& 0.1\phantom{0} & 10.50 & 1.4& 7.3$\pm$0.4 & 41.28$\pm$0.04 \\ A3& 0.2\phantom{0} & 10.74 & 1.4& 7.2$\pm$0.4 & 42.21$\pm$0.04 \\ A4& 0.3\phantom{0} & 10.88 & 1.6& 6.6$\pm$0.4 & 42.71$\pm$0.03 \\ B1& 0.05 & 10.06 & 3.4& 4.6$\pm$0.3 & 39.32$\pm$0.02 \\ B2& 0.1\phantom{0} & 10.56 & 3.1& 4.8$\pm$0.3 & 41.26$\pm$0.02 \\ B3& 0.2\phantom{0} & 10.81 & 2.7& 5.0$\pm$0.3 & 42.27$\pm$0.03 \\ B4& 0.3\phantom{0} & 10.94 & 3.3& 4.5$\pm$0.3 & 42.72$\pm$0.02 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth,clip]{fig1_CPW_equ_lr.eps} \caption{\label{fig:CPW_equ} Schematic diagram of coplanar-waveguide (CPW) resonators. A CPW of length $l$ is coupled to the microwave lines through capacitors $C_c$. } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth,clip]{fig2_CPWshape3_lr.eps} \caption{\label{fig:CPWshape} (Color online) (a) Cross section of a coplanar waveguide. (b) Top view of coupling capacitors. } \end{center} \end{figure} We studied two series of Nb $\lambda/2$ CPW resonators listed in Table~\ref{tab:list}. Each resonator consists of a section of CPW and coupling capacitors, as shown schematically in Fig.~\ref{fig:CPW_equ}. The resonators were fabricated on a nominally undoped Si wafer whose surface had been thermally oxidized. On the SiO$_2$/Si substrate, a Nb film was deposited by sputtering and then patterned by photolithography and SF$_6$ reactive ion etching. Figure~\ref{fig:CPWshape}(a) represents the cross section of CPW. 
The center conductor has a width of $w=10$~$\mu$m and is separated from the ground planes by $s=5.8$~$\mu$m, so that the characteristic impedance becomes $\sim50$~$\Omega$. The thickness of Nb is $t=0.05$, 0.1, 0.2, or 0.3~$\mu$m (see Table~\ref{tab:list}), and that of the SiO$_2$/Si substrate is $h=300$~$\mu$m. The SiO$_2$ layer, whose thickness is 0.3~$\mu$m, is not drawn in Fig.~\ref{fig:CPWshape}(a). We employed interdigital coupling capacitors as shown in Fig.~\ref{fig:CPWshape}(b). The finger width is $w_f = 9$~$\mu$m, the space between the fingers is $s_f = 2$~$\mu$m, and the finger length is $l_f = 78$~$\mu$m for Resonators~A1--A4 and $l_f = 38$~$\mu$m for Resonators~B1--B4. Here, we have quoted the designed dimensions for the Nb structures. The actual dimensions differ by about 0.2~$\mu$m due to over-etching; for example, $w$ and $w_f$ are $\sim0.2$~$\mu$m smaller, whereas $s$ and $s_f$ are $\sim0.2$~$\mu$m larger. In this paper, we define the resonator length $l$ as the distance between the center of the fingers on one side and that on the other side, and $l=5.8$~mm for all resonators. Because our chip size is 2.5~mm by 5.0~mm, our CPWs meander as in Fig.~\ref{fig:Outline}. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth,clip]{fig3_Outline_lr.eps} \caption{\label{fig:Outline} Schematic diagram of a typical measurement setup. Boxes represent attenuators. } \end{center} \end{figure} The resonators were measured in a $^3$He-$^4$He dilution refrigerator at $T=0.02-5$~K. A typical measurement setup is shown schematically in Fig.~\ref{fig:Outline}. The boxes in the figure represent attenuators. The amount of attenuation was not the same because the microwave lines in our refrigerator had been designed for several different purposes. The attenuation was $x = 10$~dB for Resonators~A2, B3, and B4, and $x = 20$~dB for the others; $y = 10$~dB for all resonators except A1 and A3.
For Resonators~A1 and A3, we used a line with no attenuators ($y=0$~dB) but with an isolator and a cryogenic amplifier at 4.2~K. The gain of the cryogenic amplifier was 40~dB for Resonator~A1 and 34~dB for A3. We measured the transmission coefficient $S_{21}$ by connecting a vector network analyzer to the ``IN" and ``OUT" ports in Fig.~\ref{fig:Outline}. A typical incident power to the resonator was $-40$~dBm. For each resonator, we confirmed that the measurements were done in an appropriate power range in the sense that the results looked power independent. \section{Results} \label{sec:R} \subsection{ {\boldmath $S_{21}$} at the base temperatures} \label{subsec:baseT} \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{fig4_S21_lr.eps} \caption{\label{fig:S21} (Color online) Amplitude of the transmission coefficient $S_{21}$ as a function of frequency for (a) Resonators~A1--A4, and (b) Resonators~B1--B4. } \end{center} \end{figure} Figure~\ref{fig:S21} shows the amplitude of $S_{21}$ at the base temperatures, $T=0.02-0.03$~K, as a function of frequency $f$ for all resonators. The resonant frequency $f_r$ has a rather large film-thickness dependence. Our interpretation is that this is due to the kinetic inductance of the CPW center conductor. Before discussing the thickness dependence in detail, let us look at the quality factors. What we obtain by measuring $S_{21}$ as a function of $f$ is the loaded quality factor $Q_L$, which is related to the external quality factor $Q_e$ and the unloaded quality factor $Q$ by \begin{equation} \label{eq:Q} Q_L^{-1} = Q_e^{-1}+Q^{-1}. \end{equation} In general, $Q_e$ is determined mainly by $C_c$, whereas $Q$ is a measure of the internal loss, which arises not only from the dielectric but also from the superconductor in the high-frequency regime. Our resonators should be highly overcoupled to the input/output lines at the base temperatures, that is, $Q \gg Q_e$, and thus, $Q_L\sim Q_e$. 
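As a numerical illustration of Eq.~(\ref{eq:Q}), with hypothetical values in the overcoupled regime:

```python
# combining quality factors: 1/QL = 1/Qe + 1/Q (illustrative numbers)
Qe = 2.0e3          # external quality factor, set by the coupling capacitors
Q = 1.0e5           # internal (unloaded) quality factor, assumed >> Qe
QL = 1.0 / (1.0 / Qe + 1.0 / Q)
print(round(QL))    # 1961, within 2% of Qe: the coupling dominates
```

The smaller of the two quality factors dominates, which is why $Q_L\sim Q_e$ when $Q \gg Q_e$.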
As listed in Table~\ref{tab:list}, $Q_L$ of our resonators is on the order of 10$^3$. These values are not only reasonable for the designs of our finger-shaped coupling capacitors but also much smaller than typical values of $Q$ below 0.1~K for superconducting microwave resonators.\cite{Maz02,Fru05,Gop08} When $Q \gg Q_e$, the maximum $|S_{21}|$ is expected to be 0~dB. We have confirmed by taking into account attenuators, amplifiers, and cable losses, that our measurements are indeed consistent within the uncertainties of gain/loss calculations, 1--2~dB. Based on this confirmation, the experimental data in Fig.~\ref{fig:S21} are normalized so that the peak heights equal 0~dB. The solid curves in Fig.~\ref{fig:S21} are calculations based on the transmission ($ABCD$) matrix (for example, Sec.~5.5 of Ref.~\onlinecite{Poz90}), and they reproduce the experimental data well. The matrix for the resonators is given by \begin{equation} \label{eq:ABCD} \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) = T_{\rm cc}\,T_{\rm cpw}\,T_{\rm cc}\,, \end{equation} where \begin{equation} \label{eq:Tcc} T_{\rm cc} = \left( \begin{array}{cc} 1 & (j\omega C_c)^{-1} \\ 0 & 1 \end{array} \right), \end{equation} $j$ is the imaginary unit, \begin{equation} \label{eq:Tcpw} T_{\rm cpw} = \left( \begin{array}{cc} \cos\beta l & jZ_{\rm cpw}\sin\beta l\\ j(Z_{\rm cpw})^{-1}\sin\beta l & \cos\beta l \end{array} \right) \end{equation} for lossless CPWs, $\omega = 2\pi f$, $\beta = \omega/v_p$, \begin{equation} \label{eq:vp} v_p=1/\sqrt{LC} \end{equation} is the phase velocity, which is strongly related to $f_r$, \begin{equation} \label{eq:Zcpw} Z_{\rm cpw}=\sqrt{L/C} \end{equation} is the characteristic impedance, $L$ is the inductance per unit length, and $C$ is the capacitance per unit length. 
From these transmission-matrix elements, the scattering-matrix elements are calculated, and $S_{21}$ is given by \begin{equation} \label{eq:S21} S_{21} = 2/(A+B/Z_0+CZ_0+D), \end{equation} where $Z_0=50$~$\Omega$ is the characteristic impedance of the microwave lines connected to the resonator. Unit-length properties of a CPW are determined when two parameters out of $v_p$, $Z_{\rm cpw}$, $L$, and $C$ are specified. In the calculations for Fig.~\ref{fig:S21}, we employed $C=1.6\times10^{-10}$~F/m based on the considerations described in the following paragraph, and evaluated $C_c$ and $v_p$ by least-squares fitting. Wen\cite{Wen69} calculated CPW parameters using conformal mapping. Within the theory, $C$ does not depend on $t$, and it is given by \begin{equation} \label{eq:Wen_C} C = (\epsilon_r+1)\epsilon_0\,2K(k)/K(k'), \end{equation} where $\epsilon_r$ is the relative dielectric constant of the substrate, $\epsilon_0 = 8.85\times 10^{-12}$~F/m is the permittivity of free space, $K(k)$ is the complete elliptic integral of the first kind, the argument $k$ is given by \begin{equation} \label{eq:Wen_k} k = w/(w+2s)\,, \end{equation} and $k'= \sqrt{1-k^2}.$ For our CPWs, we obtain $C=1.6\times10^{-10}$~F/m when we employ $\epsilon_r=11.7$ for Si (p.~223 of Ref.~\onlinecite{Kit96}) neglecting the contribution from the SiO$_2$ layer, which is much thinner than $w$, $s$, or $h$. Circuit simulators [Microwave Office from AWR (\#1) and AppCAD from Agilent (\#2)] also predict similar values of $C$. The simulators calculate CPW parameters from the dimensions and the material used for the substrate. The predictions by the simulators have $t$ dependence, but in the relevant $t$ range, the variations are on the order of 1\% or smaller as summarized in Table~\ref{tab:LC}, and the values of $C$ are between $1.6\times10^{-10}$~F/m and $1.7\times10^{-10}$~F/m. Thus, partly for simplicity, we used $C=1.6\times10^{-10}$~F/m for all of our resonators.
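The calculation described in this section can be sketched in a few lines of Python (an illustrative reimplementation, not the code used for the actual fits; the complete elliptic integral is evaluated with the arithmetic-geometric mean, and the input numbers are those quoted for Resonator~A4):

```python
import cmath
import math

def ellip_k(k):
    """Complete elliptic integral of the first kind, K(k), via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

# Wen's formula, Eq. (8), with w = 10 um, s = 5.8 um, eps_r = 11.7
eps0, eps_r = 8.85e-12, 11.7
w, s = 10e-6, 5.8e-6
k = w / (w + 2 * s)
C_wen = (eps_r + 1) * eps0 * 2 * ellip_k(k) / ellip_k(math.sqrt(1 - k * k))
print(C_wen)   # ~1.7e-10 F/m, close to the value used in the fits

def s21_abs(f, vp, C, Cc, l, Z0=50.0):
    """|S21| of a capacitively coupled CPW resonator, Eqs. (2)-(7)."""
    om = 2.0 * math.pi * f
    bl = om * l / vp
    Zc = 1.0 / (vp * C)          # Z_cpw = sqrt(L/C) with L = 1/(vp**2 * C)
    A0, B0 = cmath.cos(bl), 1j * Zc * cmath.sin(bl)
    C0, D0 = 1j * cmath.sin(bl) / Zc, cmath.cos(bl)
    Xc = 1.0 / (1j * om * Cc)    # series impedance of one coupling capacitor
    # ABCD matrix of Tcc * Tcpw * Tcc, multiplied out by hand:
    A = A0 + Xc * C0
    B = (A0 + Xc * C0) * Xc + B0 + Xc * D0
    D = D0 + Xc * C0
    return abs(2.0 / (A + B / Z0 + C0 * Z0 + D))

# numbers quoted for Resonator A4: vp = 0.4271 c, Cc = 6.6 fF, l = 5.8 mm
vp, Cc, l, C = 0.4271 * 2.998e8, 6.6e-15, 5.8e-3, 1.6e-10
freqs = [10.0e9 + i * 1e6 for i in range(1500)]   # 10.0-11.5 GHz sweep
f_peak = max(freqs, key=lambda f: s21_abs(f, vp, C, Cc, l))
print(f_peak / 1e9)   # resonance close to the measured 10.88 GHz
```

The peak lies somewhat below the bare half-wave frequency $v_p/2l$, the downward shift coming from the loading by the coupling capacitors.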
\begin{table} \caption {\label{tab:LC} Dependence of coplanar-waveguide parameters on the film thickness $t$. For capacitance $C$ and inductance $L$ per unit length, the normalized variations $\Delta C(t)/C^*$ and $\Delta L(t)/L^*$ are listed in percent, where $\Delta C(t) = C(t) - C^*$, $C^* \sim 1.6 \times 10^{-10}$~F/m is the value at $t=0.3$~$\mu$m, and the definitions of $\Delta L(t)$ and $L^*\sim4\times10^{-7}$~H/m are similar. The predictions by circuit simulators \#1 and \#2 are compared. Regarding $L$, experimental values for ``A"=Resonators~A1--A4 and for ``B"=Resonators~B1--B4 are also given, and they are obtained from the values of $v_p$ in Table~\ref{tab:list} using Eq.~(\ref{eq:vp}) and by neglecting the $t$ dependence of $C$. } \begin{ruledtabular} \begin{tabular}{ccc|cccc} $t$ & \multicolumn{2}{c|}{$\Delta C(t)/C^*$ (\%)} & \multicolumn{4}{c}{$\Delta L(t)/L^*$ (\%)}\\ ($\mu$m) & \#1 & \#2 & \#1 & \#2 & A & B \\ \hline 0.05 & $-0.6$ & 1.7 & 3.9 & 2.9 & 18.0 & 18.0 \\ 0.1\phantom{0} & $-0.5$ & 1.4 & 3.0 & 2.2 & \phantom{1}7.1 & \phantom{1}7.2\\ 0.2\phantom{0} & $-0.2$ & 0.8 & 1.4 & 1.2 & \phantom{1}2.4 & \phantom{1}2.1\\ \end{tabular} \end{ruledtabular} \end{table} With $C=1.6\times10^{-10}$~F/m, the values of $v_p$ in Table~\ref{tab:list} correspond to $Z_{\rm cpw}=49-53$~$\Omega$, which agrees with our design of $\sim50$~$\Omega$. We have also done the same fitting with the value of $C$ changed by $\pm10\%$ in order to estimate the uncertainties, which are also listed in Table~\ref{tab:list}. Within the uncertainties, the values of $C_c$ from the same coupling-capacitor design agree, and $C_c\sim7$~fF for Resonators~A1--A4 with $l_f=78$~$\mu$m and $C_c\sim5$~fF for Resonators~B1--B4 with $l_f=38$~$\mu$m. The uncertainties for $v_p$ are much smaller, $<0.1\%$, and again within the uncertainties, the values of $v_p$ for the same $t$ agree. For the rest of this paper, let us assume that the $t$ dependence of $C$ is negligible.
This assumption is consistent with the fact that the experimental $C_c$ vs.\ $t$ in Table~\ref{tab:list} does not show any obvious trend. Moreover, according to the circuit simulators in Table~\ref{tab:LC}, the $t$ dependence of $C$ is smaller than that of $L$. Below, we mainly look at $L$ instead of $v_p$ or other CPW parameters so that we will be able to discuss the kinetic inductance. As long as we deal with a normalized inductance such as the ratio of $L(t)$ to $L^*\equiv L(\mbox{0.3~$\mu$m})$, what we choose for the value of $C$ does not matter very much because $v_p$ obtained from the fitting was not so sensitive to $C$. Hence, hereafter we analyze only the quantities obtained with $C=1.6\times10^{-10}$~F/m. In Table~\ref{tab:LC}, we list the variations of $L$ in our two series of resonators as well. For both series, the magnitude of the variations is much larger than the predictions by circuit simulators. We will discuss this large $t$ dependence in terms of kinetic inductance in Sec.~\ref{sec:D} after examining the temperature dependence in Sec.~\ref{subsec:Tdep}. \subsection{Temperature dependence of {\boldmath $S_{21}$}} \label{subsec:Tdep} \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth,clip]{fig5_S21_temp_lr.eps} \caption{\label{fig:S21_temp} (Color online) Amplitude of the transmission coefficient $S_{21}$ as a function of frequency for Resonator~A1 at different temperatures. } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth,clip]{fig6_Q2_lr.eps} \caption{\label{fig:Q2} (Color online) Normalized quality factors, (a) $Q/Q_e$ and (b) $Q_L/Q_e$, as functions of temperature for Resonators~A1--A4 (Nb thickness $t=0.05,$ 0.1, 0.2, and 0.3~$\mu$m), where $Q_L$, $Q_e$, and $Q$ are loaded, external, and unloaded quality factors, respectively, and $Q_e$ is assumed to be temperature independent. The markers are data points, whereas the curves are guides to the eyes.
} \end{center} \end{figure} We also measured $S_{21}$ vs.\ $f$ at various temperatures up to $T=4-5$~K for Resonators~A1--A4. We show the results for Resonator~A1 in Fig.~\ref{fig:S21_temp}. With increasing temperature, $f_r$, $Q_L$, and the peak height decrease. As in Sec.~\ref{subsec:baseT}, let us look at the quality factors first. In our resonators, $Q_e\sim Q_L$ at the base temperatures as we pointed out in Sec.~\ref{subsec:baseT}. Thus, when we assume that $Q_e$ is temperature independent, we can calculate $Q$ from measured $Q_L$ using Eq.~(\ref{eq:Q}). We plot $Q_L(T)/Q_e$ and $Q(T)/Q_e$ vs.\ $T$ in Fig.~\ref{fig:Q2} for all of the four resonators. With increasing temperature, $Q$ decreases in all resonators. A finite $Q^{-1}$ means that the resonator has a finite internal loss, which is consistent with a peak height smaller than unity in Fig.~\ref{fig:S21_temp}. The internal loss at high temperatures must be due to quasiparticles in the superconductor, as discussed in Ref.~\onlinecite{Maz02}. The reduction of quality factors becomes larger as the Nb thickness is decreased. At $T<1$~K, however, the reduction is negligibly small, and thus, in this sense, it should be fine to choose any thickness in the range of $t=0.05-0.3$~$\mu$m for the study of superconducting qubits that we mentioned in Sec.~\ref{sec:intro} because qubit operations are almost always done at the base temperatures. When CPWs are no longer lossless, $\beta$ in Eq.~(\ref{eq:Tcpw}) has to be replaced by $(\alpha+j\beta)/j$. This $\alpha$ characterizes the internal loss, and $\beta/(2\alpha)$ is equal to $Q$ (for example, Sec.~7.2 of Ref.~\onlinecite{Poz90}). From similar calculations to those in Sec.~\ref{subsec:baseT}, we evaluated $L$ at higher temperatures as well by neglecting the $T$ dependence of $C$ and $C_c$. 
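For reference, the base-temperature thickness variation of $L$ quoted in Table~\ref{tab:LC} follows directly from the fitted phase velocities in Table~\ref{tab:list} via Eq.~(\ref{eq:vp}); a quick check (our own, reproducing the series-A column to within rounding):

```python
# L(t)/L* from the fitted phase velocities of series A (Table I),
# with C = 1.6e-10 F/m assumed independent of t, so L = 1/(vp^2 C)
c_light = 2.998e8
C = 1.6e-10
vp = {0.05: 0.3931, 0.1: 0.4128, 0.2: 0.4221, 0.3: 0.4271}  # units of c
L = {t: 1.0 / ((v * c_light) ** 2 * C) for t, v in vp.items()}
print(L[0.3])   # ~3.8e-7 H/m, consistent with L* ~ 4e-7 H/m
for t in (0.05, 0.1, 0.2):
    print(t, round(100.0 * (L[t] / L[0.3] - 1.0), 1))   # 18.0, 7.0, 2.4 (%)
```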
Because we are interested in the temperature variation of $L$, we show $\Delta L(t,T)/L^*$ vs.\ $T$ in Fig.~\ref{fig:L_delta}, where $\Delta L(t,T) \equiv L(t,T) - L(t,T^*)$, $T^*$ is the base temperature, and $L^* \equiv L(\mbox{0.3~$\mu$m},T^*)$. The variation becomes larger as the Nb thickness is decreased. This trend also suggests that we should take into account the kinetic inductance. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth,clip]{fig7_L_delta_lr.eps} \caption{\label{fig:L_delta} (Color online) Temperature variations of inductance $L$ per unit length for Resonators~A1--A4, whose Nb thickness is $t=0.05,$ 0.1, 0.2, and 0.3~$\mu$m. The unit of the vertical axis is percent. See text for the definition of $\Delta L(t,T)/L^*$. } \end{center} \end{figure} \section{Discussion} \label{sec:D} The film-thickness and temperature dependence that we have examined in Sec.~\ref{sec:R} is explained by the model, \begin{equation} \label{eq:L} L(t,T) = L_g(t) + L_k(t,T), \end{equation} where $L_g$ is the usual magnetic inductance per unit length determined by the CPW geometry and $L_k$ is the kinetic inductance of the CPW center conductor per unit length. We neglect the contribution of the ground planes to $L_k$ because the ground planes are much wider than the center conductor in our resonators [see Eq.~(\ref{eq:Lk})]. We also assume that $L_g$ depends on $t$ only, whereas $L_k$ depends on both $t$ and $T$. This type of model has been employed in earlier works\cite{Rau93,Fru05,Gop08} as well. The $T$ dependence of $L_k$ arises from the fact that $L_k$ is determined not only by the geometry but also by the penetration depth $\lambda$, which varies with $T$. 
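The inductances analyzed here follow from the fitted phase velocity and the fixed capacitance through the standard lossless-line relation $v_p = 1/\sqrt{LC}$. A minimal sketch (the value of $v_p$ is illustrative, not one of our fitted values):

```python
import math

def inductance_per_length(v_p, C):
    """Lossless-line relation: v_p = 1/sqrt(L*C)  =>  L = 1/(v_p**2 * C)."""
    return 1.0 / (v_p ** 2 * C)

C = 1.6e-10   # F/m, the fixed capacitance used in the analysis
v_p = 1.25e8  # m/s, illustrative phase velocity
L = inductance_per_length(v_p, C)  # 4.0e-7 H/m with these numbers
Z0 = math.sqrt(L / C)              # characteristic impedance, 50 ohm here
```

The resulting $L$ is of the same order as the values in Table~\ref{tab:Lm}, and the impedance stays close to $50$~$\Omega$.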
Meservey and Tedrow\cite{Mes69} calculated $L_k$ of a superconducting strip, and when the strip has a rectangular cross section like our CPWs, $L_k$ is written as \begin{equation} \label{eq:Lk} L_k = \frac{\mu_0}{\,\pi^2\,}(\lambda/w)\ln(4w/t) \frac{\sinh(t/\lambda)}{\,\cosh(t/\lambda)-1\,}, \end{equation} where $\mu_0=4\pi\times10^{-7}$~H/m is the permeability of free space. The relationship between $L_k$ and $\lambda$ is expressed in a much simpler form in the thick- and thin-film limits; $L_k\propto\lambda$ for $t\gg\lambda$, and $L_k\propto\lambda^2$ for $t\ll\lambda$. When we assume Eqs.~(\ref{eq:L}) and (\ref{eq:Lk}), we obtain $\lambda(t,T)$ numerically, once $L_g(t)$ is given. Below, we discuss $\lambda(t,T)$ in our Nb films in order to confirm that the model represented by Eq.~(\ref{eq:L}) is indeed appropriate. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth,clip]{fig8_lambda_tdep_lr.eps} \caption{\label{fig:lambda_tdep} (Color online) (a) Temperature dependence of the penetration depth $\lambda$ for Resonators~A1--A4, whose Nb thickness is $t=0.05,$ 0.1, 0.2, and 0.3~$\mu$m. The solid and broken curves are theoretical predictions expressed by Eqs.~(\ref{eq:2fluid}) and (\ref{eq:Rau}), respectively. As in Ref.~\onlinecite{Tin96}, $[\lambda(t,0)/\lambda(t,T)]^2$ is plotted vs.\ $T/T_c(t)$. (b) Superconducting transition temperature $T_c(t)$ used in (a), and $\lambda(t,0)$. The curves are from Ref.~\onlinecite{Gub05}, that is, not fitted to our experimental data. Both in (a) and (b), $\lambda(T^*)\sim\lambda(0)$ is assumed, where $T^*$ is the base temperature. } \end{center} \end{figure} \begin{table} \caption {\label{tab:Lm} Inductance per unit length at the base temperatures in Resonators~A1--A4. 
$t$ is the thickness of Nb film; $\Delta L_g(t) = L_g(t) - L_g^*$, where $L_g$ is the usual magnetic inductance per unit length determined by the CPW geometry, and $L_g^*\equiv L_g(\mbox{0.3~$\mu$m}) =3.75\times10^{-7}$~H/m; $L_k$ is the kinetic inductance per unit length, and $L=L_g+L_k$. } \begin{ruledtabular} \begin{tabular}{ccc} $t$ ($\mu$m) & $\Delta L_g(t)/L_g^*$ (\%)&$L_k/L$ (\%)\\ \hline 0.05 & 4.3 & 13.1\\ 0.1\phantom{0} & 3.4 & \phantom{1}4.9\\ 0.2\phantom{0} & 1.7 & \phantom{1}2.2\\ 0.3\phantom{0} & -- & \phantom{1}1.6\\ \end{tabular} \end{ruledtabular} \end{table} In Fig.~\ref{fig:lambda_tdep}(a), we plot $[\lambda(t,T^*)/\lambda(t,T)]^2$ vs.\ $T/T_c(t)$ for Resonators~A1--A4, where $T_c(t)$ is the superconducting transition temperature, which is also assumed to be $t$ dependent in this paper. We have found that with a reasonable set of parameters, $L_g(t)$ and $T_c(t)$, the experimental data for all resonators are described by a single curve. This kind of scaling is expected theoretically in the limits of $\xi_0/\lambda_L\gg 1$ and $\xi_0/\lambda_L\ll 1$, where $\xi_0$ is the coherence length and $\lambda_L$ is the London penetration depth.\cite{Tin96} Although $\xi_0/\lambda_L\sim1$ in Nb (p.~353 of Ref.~\onlinecite{Kit96}) at temperatures well below $T_c$, it would still be reasonable to expect a scaling in our Nb resonators because at a given normalized temperature $T/T_c(t)$, the relevant quantities should be of the same order of magnitude in all resonators, and thus, two parameters, $\lambda(t,T^*)$ and $T_c(t)$, are probably enough for characterizing $\lambda(t,T)$ of our resonators. The values of $L_g(t)$ and $T_c(t)$ employed in Fig.~\ref{fig:lambda_tdep}(a) are summarized in Table~\ref{tab:Lm} and Fig.~\ref{fig:lambda_tdep}(b), respectively. The relative change of $L_g(t)$ in Table~\ref{tab:Lm} is similar to the predictions by circuit simulators in Table~\ref{tab:LC}, which do not take into account the kinetic inductance. 
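Extracting $\lambda(t,T)$ from Eqs.~(\ref{eq:L}) and (\ref{eq:Lk}) amounts to a one-dimensional root search, since $L_k$ increases monotonically with $\lambda$. A sketch of this inversion; the center-conductor width $w$ and the target $L_k$ below are illustrative placeholders, not the actual device parameters (which are not quoted in this excerpt):

```python
import math

MU0 = 4 * math.pi * 1e-7  # H/m, permeability of free space

def l_kinetic(lam, w, t):
    """Kinetic inductance per unit length of a rectangular strip, Eq. (Lk)."""
    x = t / lam
    return (MU0 / math.pi ** 2) * (lam / w) * math.log(4 * w / t) \
        * math.sinh(x) / (math.cosh(x) - 1.0)

def solve_lambda(l_k_target, w, t, lo=1e-9, hi=1e-5, n_iter=100):
    """Bisection for lambda: L_k(lambda) is monotonically increasing in lambda."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if l_kinetic(mid, w, t) < l_k_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative inputs:
w = 10e-6     # m, hypothetical center-conductor width
t = 0.1e-6    # m, one of the film thicknesses studied
l_k = 1.9e-8  # H/m, roughly 5% of a total L ~ 3.9e-7 H/m
lam = solve_lambda(l_k, w, t)  # ~ 0.11 um with these numbers
```

The thick- and thin-film limits quoted in the text ($L_k\propto\lambda$ for $t\gg\lambda$, $L_k\propto\lambda^2$ for $t\ll\lambda$) follow from the hyperbolic factor and make the monotonicity, and hence the bisection, safe.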
The magnitude of $L_g(t)$ is also reasonable because $\sqrt{L_g(t)/C}\sim49$~$\Omega$ for all thicknesses. In Table~\ref{tab:Lm}, we also list the ratio of the kinetic inductance $L_k$ to the total inductance $L$. With decreasing thickness, $L_k/L$ indeed increases rapidly. In Fig.~\ref{fig:lambda_tdep}(b), $T_c(t)$ and $\lambda(t,T^*)$ are plotted together with the theoretical curves in Figs.~1 and 6 of Ref.~\onlinecite{Gub05}, where Gubin {\it et al.}\cite{Gub05} determined some parameters of the curves by fitting to their experimental data. The values of $T_c(t)$ are reasonable, and $\lambda(t,T^*)$ is of the right order of magnitude. The solid curve in Fig.~\ref{fig:lambda_tdep}(a) is the theoretical $T$ dependence based on the two-fluid approximation,\cite{Tin96} \begin{equation} \label{eq:2fluid} [\lambda(0)/\lambda(T)]^2 = 1-(T/T_c)^4. \end{equation} This theoretical curve reproduces the experimental data at $T/T_c < 0.4$, when we assume that $\lambda(t,T^*)\sim\lambda(t,0)$ in Resonators~A1--A4. At $T/T_c \geq 0.4$, on the other hand, the experimental data deviate from Eq.~(\ref{eq:2fluid}); according to Ref.~\onlinecite{Tin96}, however, the expression for $\lambda$ vs.\ $T$ depends on the ratio $\xi_0/\lambda_L$, and thus, Eq.~(\ref{eq:2fluid}) cannot be expected to apply to all materials equally well. Indeed, although the temperature dependence of Eq.~(\ref{eq:2fluid}) has been observed in the classic pure superconductors,\cite{Tin96} such as Al with $\xi_0/\lambda_L \gg 1$ at temperatures well below $T_c$, it does not seem to be the case in the high-$T_c$ materials, whose typical $\xi_0/\lambda_L$ is in the opposite limit,\cite{Tin96} $\xi_0/\lambda_L \ll 1$. For example, for the high-$T_c$ material YBa$_2$Cu$_3$O$_{7-x}$, Rauch {\it et al.}\cite{Rau93} instead employed the empirical expression \begin{equation} \label{eq:Rau} [\lambda(0)/\lambda(T)]^2 = 1-0.1(T/T_c)-0.9(T/T_c)^2, \end{equation} which is shown as the broken curve in Fig.~\ref{fig:lambda_tdep}(a). 
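The two temperature dependences, Eqs.~(\ref{eq:2fluid}) and (\ref{eq:Rau}), can be compared numerically; already at $T/T_c=0.5$ they differ by more than 20\%:

```python
def two_fluid(t_red):
    """[lambda(0)/lambda(T)]^2 in the two-fluid approximation, Eq. (2fluid)."""
    return 1.0 - t_red ** 4

def rauch_empirical(t_red):
    """Empirical high-Tc expression of Rauch et al., Eq. (Rau)."""
    return 1.0 - 0.1 * t_red - 0.9 * t_red ** 2

a = two_fluid(0.5)        # 0.9375
b = rauch_empirical(0.5)  # 0.725
```

Both expressions satisfy the same endpoints ($1$ at $T=0$, $0$ at $T=T_c$) but interpolate differently, which is why the data at $T/T_c \geq 0.4$ can discriminate between them.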
Because $\xi_0/\lambda_L\sim1$ in Nb even at $T/T_c\ll 1$, and because the experimental data at $T/T_c \geq 0.4$ lie between Eqs.~(\ref{eq:2fluid}) and (\ref{eq:Rau}), we believe that the deviation from Eq.~(\ref{eq:2fluid}) at $T/T_c \geq 0.4$ is reasonable. From the discussion in this section, we conclude that the model represented by Eq.~(\ref{eq:L}) explains the film-thickness and temperature dependence of our resonators. \section{Conclusion} We investigated two series of Nb $\lambda/2$ CPW resonators with resonant frequencies in the range of $10-11$~GHz and with different Nb-film thicknesses, $0.05-0.3$~$\mu$m. We measured the transmission coefficient $S_{21}$ as a function of frequency at low temperatures, $T=0.02-5$~K. For each film thickness, we determined the phase velocity in the CPW with an accuracy better than 0.1\% by least-squares fitting of a theoretical $S_{21}$ curve based on the transmission matrix to the experimental data at the base temperatures. Not only the film-thickness dependence but also the temperature dependence of the resonators is explained by taking into account the kinetic inductance of the CPW center conductor. \section*{Acknowledgment} The authors would like to thank Y. Kitagawa for fabricating the resonators, and T. Miyazaki for fruitful discussion. T.\ Y., K.\ M., and J.-S.\ T.\ would like to thank CREST-JST, Japan for financial support.
\section{Introduction} Artificial Intelligence has made tremendous progress in recent years due to the development of deep neural networks. Its deployment at the edge, however, is currently limited by the high power consumption of the associated algorithms \cite{xu2018scaling}. Low precision neural networks are currently emerging as a solution, as they allow the development of low power consumption hardware specialized in deep learning inference \cite{hubara2017quantized}. The most extreme case of low precision neural networks, the Binarized Neural Network (BNN), also called XNOR-NET, is receiving particular attention as it is especially efficient for hardware implementation: both synaptic weights and neuronal activations assume only binary values \cite{courbariaux2016binarized,rastegari2016xnor}. Remarkably, this type of neural network can achieve high accuracy on vision tasks \cite{lin2017towards}. One particularly investigated lead is to fabricate hardware BNNs with emerging memories such as resistive RAM or memristors \cite{bocquet2018,yu2016binary,giacomin2019robust,sun2018xnor,zhou2018new,natsui2018design,tang2017binary, lee2019adaptive}. The low memory requirements of BNNs, as well as their reliance on simple arithmetic operations, make them indeed particularly adapted for ``in-memory'' or ``near-memory'' computing approaches, which achieve superior energy-efficiency by avoiding the von~Neumann bottleneck entirely. Ternary neural networks \cite{alemdar2017ternary} (TNN, also called Gated XNOR-NET, or GXNOR-NET \cite{deng2018gxnor}), which add the value $0$ to synaptic weights and activations, are also considered for hardware implementations \cite{ando2017brein,prost2017scalable,li2017design,pan2019skyrmion}. They are comparatively receiving less attention than binarized neural networks, however. In this work, we highlight that implementing TNNs does not necessarily imply considerable overhead with regards to BNNs. 
We introduce a two-transistor/two-resistor memory architecture for TNN implementation. The array uses a precharge sense amplifier for reading weights, and the ternary weight value can be extracted in a single sense operation, by exploiting the fact that the latency of the sense amplifier depends on the resistive states of the memory devices. This work extends a hardware architecture developed for the energy-efficient implementation of BNNs \cite{bocquet2018}, where the synaptic weights are implemented in a differential fashion. We therefore show that it can be extended to TNNs without overhead on the memory array. The contributions of this work are as follows. After presenting the background of the work (section~\ref{sec:background}): \begin{itemize} \item We demonstrate experimentally, on a fabricated 130~nm RRAM/CMOS hybrid chip, a strategy for implementing ternary weights using a precharge sense amplifier, which is particularly appropriate when the sense amplifier is operated at low supply voltage (section~\ref{sec:circuit}). \item We analyze the bit errors of this scheme experimentally and their dependence on the RRAM programming conditions (section~\ref{sec:programmability}). \item We verify the robustness of the approach to process, voltage, and temperature variations (section~\ref{sec:PVT}). \item We carry out simulations that show the superiority of TNNs over BNNs on the canonical CIFAR-10 vision task, and evidence the error resilience of hardware TNNs (section~\ref{sec:network}). \item We discuss the results, and compare our approach with the idea of storing three resistance levels per device. \end{itemize} Partial and preliminary results of this work have been presented at a conference \cite{laborieux2020low}. 
This journal version adds the experimental characterization of bit errors in our architecture, supported by a comprehensive analysis of the impact on process, voltage, and temperature variations, and their impact at the neural network level, together with a detailed analysis of the use of ternary networks over binarized ones. \section{Background} \label{sec:background} \begin{figure}[t] \centering \includegraphics[width=3.0in]{fig1.pdf} \caption{(a) Electron microscopy image of a hafnium oxide resistive memory cell (RRAM) integrated in the backend-of-line of a $130\nano\meter$ CMOS process. (b) Photograph and (c) simplified schematic of the hybrid CMOS/RRAM test chip characterized in this work. The white rectangle in (b) materializes a single PCSA.} \label{fig:testchip} \end{figure} The main equation in conventional neural networks is the computation of the neuronal activation $A_j = f \left( \sum_i W_{ji}X_i \right),$ where $A_j$, the synaptic weights $W_{ji}$, and input neuronal activations $X_i$ assume real values, and $f$ is a non-linear activation function. Binarized neural networks (BNNs) are a considerable simplification of conventional neural networks, in which all neuronal activations ($A_j$, $X_i$) and synaptic weights $W_{ji}$ can only take binary values meaning $+1$ and $-1$. Neuronal activation then becomes: \begin{equation} \label{eq:activ_BNN} A_j = \mathrm{sign} \left( \sum_i XNOR \left( W_{ji},X_i \right) -T_j \right), \end{equation} where $\mathrm{sign}$ is the sign function, $T_j$ is a threshold associated with the neuron, and the $XNOR$ operation is defined in Table~\ref{tab:gates}. Training BNNs is a relatively sophisticated operation, during which each synapse needs to be associated with a real value in addition to its binary value (see Appendix). Once training is finished, these real values can be discarded, and the neural network is entirely binarized. 
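The binarized activation of Eq.~(\ref{eq:activ_BNN}) can be sketched in a few lines; the $\mathrm{sign}(0)=+1$ convention is a choice made here for illustration:

```python
def xnor(w, x):
    """XNOR on +/-1 coded values: equal inputs give +1 (truth table in the text)."""
    return 1 if w == x else -1

def bnn_activation(weights, inputs, threshold):
    """A_j = sign(sum_i XNOR(W_ji, X_i) - T_j), taking sign(0) = +1."""
    s = sum(xnor(w, x) for w, x in zip(weights, inputs))
    return 1 if s - threshold >= 0 else -1

a = bnn_activation([1, -1, 1, 1], [1, 1, 1, -1], 0)  # XNOR terms: 1 - 1 + 1 - 1 = 0 -> +1
```

Because the XNOR sum is an integer popcount-style quantity, the whole inference reduces to bitwise operations and a comparison against $T_j$.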
Due to their reduced memory requirements, and reliance on simple arithmetic operations, BNNs are especially appropriate for in- or near-memory implementations. In particular, multiple groups investigate the implementation of BNN inference with resistive memory tightly integrated at the core of CMOS \cite{bocquet2018,yu2016binary,giacomin2019robust,sun2018xnor,zhou2018new,natsui2018design,tang2017binary, lee2019adaptive}. Usually, resistive memory stores the synaptic weights $W_{ji}$. However, this comes with a significant challenge: resistive memory is prone to bit errors, and in digital applications, is typically used with strong error-correcting codes (ECC). ECC, which requires large decoding circuits \cite{gregori2003chip}, goes against the principles of in- or near-memory computing. For this reason, \cite{bocquet2018} proposes a two-transistor/two-resistor (2T2R) structure, which reduces resistive memory bit errors, without the need for an ECC decoding circuit, by storing synaptic weights in a differential fashion. This architecture allows an extremely efficient implementation of BNNs, using the resistive memory devices in very favorable programming conditions (low energy, high endurance). It should be noted that systems using this architecture function with row-by-row read operations, and do not use the in-memory computing technique of using the Kirchhoff current law to perform the sum operation of neural networks while reading all devices at the same time \cite{prezioso2015training,ambrogio2018equivalent}. This choice limits the parallelism of such architectures, while at the same time avoiding the need for analog-to-digital conversion and analog circuits such as operational amplifiers, as discussed in detail in \cite{hirtzlin2019digital}. 
In this work, we show that the same architecture can be used for a generalization of BNNs -- ternary neural networks (TNNs)\footnote{ In the literature, the name ``Ternary Neural Networks'' is sometimes also used to refer to neural networks where the synaptic weights are ternarized, but the neuronal activations remain real or integer \cite{mellempudi2017ternary,nurvitadhi2017can}.}, where neuronal activations and synaptic weights $A_j$, $X_i$, and $W_{ji}$ can now assume three values: $+1$, $-1$, and $0$. Equation~\eqref{eq:activ_BNN} now becomes: \begin{equation} \label{eq:activ_TNN} A_j = \phi \left( \sum_i GXNOR \left( W_{ji},X_i \right) -T_j \right). \end{equation} $GXNOR$ is the ``gated'' XNOR operation that realizes the product between numbers with values $+1$, $-1$ and $0$ (Table~\ref{tab:gates}). $\phi$ is an activation function that outputs $+1$ if its input is greater than a threshold $\Delta$, $-1$ if the input is less than $-\Delta$, and $0$ otherwise. We show experimentally and by circuit simulation how the 2T2R BNN architecture can be extended to TNNs with practically no overhead (sec.~\ref{sec:circuit}), analyze its bit errors (sec.~\ref{sec:programmability}), and evaluate the corresponding benefits in terms of neural network accuracy (sec.~\ref{sec:network}). 
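The ternary counterpart, Eq.~(\ref{eq:activ_TNN}), differs only in the gated product and the dead-zone activation. A minimal sketch:

```python
def gxnor(w, x):
    """Gated XNOR: the product of ternary values in {-1, 0, +1}."""
    return w * x

def phi(v, delta):
    """Ternary activation: +1 above delta, -1 below -delta, 0 otherwise."""
    if v > delta:
        return 1
    if v < -delta:
        return -1
    return 0

def tnn_activation(weights, inputs, threshold, delta):
    """A_j = phi(sum_i GXNOR(W_ji, X_i) - T_j)."""
    s = sum(gxnor(w, x) for w, x in zip(weights, inputs))
    return phi(s - threshold, delta)

a = tnn_activation([1, 0, -1, 1], [1, 1, -1, 0], 0, 1)  # terms: 1 + 0 + 1 + 0 = 2 -> +1
b = tnn_activation([1, 0, -1], [1, 1, 1], 0, 1)         # terms: 1 + 0 - 1 = 0 -> 0
```

Note that whenever either operand is $0$ the term contributes nothing, exactly as in the GXNOR truth table.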
\begin{table}[tbp] \caption{Truth Tables of the XNOR and GXNOR Gates} \begin{center} \begin{tabular}{|c|c|c|} \hline $W_{ji}$ & $X_i$ & $XNOR$ \\ \hline $-1$ & $-1$ & $1$ \\ $-1$ & $1$ & $-1$ \\ $1$ & $-1$ & $-1$ \\ $1$ & $1$ & $1$ \\ \hline \end{tabular} \begin{tabular}{|c|c|c|} \hline $W_{ji}$ & $X_i$ & $GXNOR$ \\ \hline $-1$ & $-1$ & $1$ \\ $-1$ & $1$ & $-1$ \\ $1$ & $-1$ & $-1$ \\ $1$ & $1$ & $1$ \\ $0$ & $X$ & $0$ \\ $X$ & $0$ & $0$ \\ \hline \end{tabular} \label{tab:gates} \end{center} \end{table} \section{The Operation of A Precharge Sense Amplifier Can Provide Ternary Weights} \label{sec:circuit} \begin{figure}[htbp] \centering \includegraphics[width=3.4in]{fig2.pdf} \caption{Schematic of the precharge sense amplifier fabricated in the test chip.} \label{fig:PCSA} \vspace{0.5cm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{fig3.pdf} \caption{Circuit simulation of the precharge sense amplifier of Fig.~\ref{fig:PCSA} with a supply voltage of $1.2\volt$, using thick oxide transistors (nominal voltage of $5 \volt$), if the two devices are programmed in an (a) LRS/HRS ($5\kilo\ohm$/$350\kilo\ohm$) or (b) HRS/HRS ($320\kilo\ohm$/$350\kilo\ohm$) configuration.} \label{fig:SPICE} \end{figure} In this work, we use the architecture of \cite{bocquet2018}, where synaptic weights are stored in a differential fashion. Each bit is implemented using two devices programmed either in the low resistance state (LRS) / high resistance state (HRS) configuration to mean weight $+1$, or HRS/LRS to mean weight $-1$. Fig.~\ref{fig:testchip} presents the test chip used for the experiments. This chip co-integrates $130\nano\meter$ CMOS and resistive memory in the back-end-of-line, between levels four and five of metal. The resistive memory cells are based on $10\nano\meter$ thick hafnium oxide (Fig.~\ref{fig:testchip}(a)). All devices are integrated with a series NMOS transistor. 
After an initial forming step (consisting of the application of a voltage ramp from zero volts to $3.3\volt$ at a rate of $1000\volt\per\second$, with the current limited to a compliance of $200\micro\ampere$), the devices can switch between the high resistance state (HRS) and the low resistance state (LRS), through the dissolution or creation of conductive filaments of oxygen vacancies. Programming into the HRS is obtained by the application of a negative RESET voltage pulse (typically between $1.5\volt$ and $2.5\volt$ during $1\micro\second$). Programming into the LRS is obtained by the application of a positive SET pulse (also typically between $1.5\volt$ and $2.5\volt$ during $1 \micro\second$), with the current limited to a compliance current through the choice of the voltage applied to the transistor gate through the word line. This test chip is designed with highly conservative sizing, allowing the application of a wide range of voltages and electrical currents to the RRAM cells. The area of each bit cell is $6.6 \times 6.9 \micro\meter ^2$. More details on the RRAM technology are provided in \cite{hirtzlin2019digital}. Our experiments are based on a $2,048$-device array incorporating all sense and periphery circuitry, illustrated in Fig.~\ref{fig:testchip}(b-c). The ternary synaptic weights are read using on-chip precharge sense amplifiers (PCSA), presented in Fig.~\ref{fig:PCSA}, and initially proposed in \cite{zhao2009high} for reading spin-transfer magnetoresistive random access memory. Fig.~\ref{fig:SPICE}(a) shows an electrical simulation of this circuit to explain its working principle, using the Mentor Graphics Eldo simulator. These first simulations are presented in the commercial $130\nano\meter$ ultra-low leakage technology used in our test chip, with a low supply voltage of $1.2 \volt$ \cite{nearth}, with thick oxide transistors (the nominal voltage in this process for thick oxide transistors is $5 \volt$). 
Since the technology targets ultra-low leakage applications, the threshold voltages are relatively high (around $0.6 \volt$); thus, a supply voltage of $1.2 \volt$ significantly reduces the overdrive of the transistors ($V_{GS} - V_{TH}$). In the first phase (SEN=0), the outputs Q and Qb are precharged to the supply voltage $V_{DD}$. In the second phase (SEN=$V_{DD}$), each branch starts to discharge to the ground. The branch whose resistive memory (BL or BLb) has the lower electrical resistance discharges faster and causes its associated inverter to drive the output of the other inverter to the supply voltage. At the end of the process, the two outputs are therefore complementary and can be used to tell which resistive memory has the higher resistance, and therefore to determine the synaptic weight. We observed that the convergence speed of a PCSA depends heavily on the resistance states of the two resistive memories. This effect is particularly magnified when the PCSA is used with a reduced overdrive, as presented here: the operation of the sense amplifier is slowed down with regard to nominal voltage operation, and the convergence speed differences between resistance values become more apparent. Fig.~\ref{fig:SPICE}(b) shows a simulation where the two devices, BL and BLb, were programmed in the HRS. We see that the two outputs converge to complementary values in more than $200\nano\second$, whereas less than $50 \nano\second$ were necessary in Fig.~\ref{fig:SPICE}(a), where the devices are programmed in complementary LRS/HRS states. These first simulations suggest a technique for implementing ternary weights using the memory array of our test chip. Similarly to when this array is used to implement BNNs, we propose to program the devices in the LRS/HRS configuration to mean the synaptic weight $1$, and HRS/LRS to mean the synaptic weight $-1$. Additionally, we use the HRS/HRS configuration to mean synaptic weight $0$, while the LRS/LRS configuration is avoided. 
The sense operation is performed for a duration of $50\nano\second$. If at the end of this period, outputs Q and Qb have differentiated, causing the output of the XOR gate to be 1, output Q determines the synaptic weight ($1$ or $-1$). Otherwise, the output of the XOR gate is 0, and the weight is determined to be $0$. This type of coding is reminiscent of the one used by the 2T2R ternary content-addressable memory (TCAM) cell of \cite{yang2019ternary}, where the LRS/HRS combination is used for coding $0$, the HRS/LRS combination for coding $1$, and the HRS/HRS combination for coding ``don't care'' (or X). \begin{figure}[tbp] \centering \includegraphics[width=3.4in]{fig4.pdf} \caption{ Two devices have been programmed in four distinct programming conditions, presented in (a), and measured using an on-chip sense amplifier. (b) Proportion of read operations that have converged in $50\nano\second$, over 100 trials. } \label{fig:SenseVsR} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=2.8in]{fig5.pdf} \caption{For 109 device pairs programmed with multiple $R_{BL}/R_{BLb}$ configurations, value of the synaptic weight measured by the on-chip sense amplifier using the strategy described in the body text and $50\nano\second$ reading time. } \label{fig:multiniveau_2} \end{figure} Experimental measurements on our test chip confirm that the PCSA can be used in this fashion. We first focus on one synapse of the memory array. We program one of the two devices (BLb) to a resistance of $100\kilo\ohm$. We then program its complementary device BL to several resistance values, and for each of them perform 100 read operations of duration $50\nano\second$, using on-chip PCSAs. These PCSAs are fabricated using thick-oxide transistors, designed for a nominal supply voltage of $5\volt$, and here used with a supply voltage of $1.2\volt$, close to their threshold voltage ($0.6\volt$), to reduce their overdrive, and thus to exacerbate the PCSA delay variations. 
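The read-out rule described above, based on the XOR of the two sense-amplifier outputs at the end of the $50\nano\second$ window, maps directly onto a small decoding function. A software sketch of the logic (not the on-chip circuit itself):

```python
def decode_weight(q, qb):
    """Ternary weight from the PCSA outputs at the end of the sense window.

    q, qb: logic levels of outputs Q and Qb (True = VDD). XOR(Q, Qb) == 1
    means the outputs have differentiated, i.e. the read has converged.
    """
    converged = q != qb        # the XOR gate of the scheme
    if not converged:
        return 0               # HRS/HRS pair: outputs still racing -> weight 0
    return 1 if q else -1      # Q alone then gives the sign of the weight

w_pos = decode_weight(True, False)   # LRS/HRS pair -> 1
w_neg = decode_weight(False, True)   # HRS/LRS pair -> -1
w_zero = decode_weight(True, True)   # not yet differentiated -> 0
```

A single sense operation therefore yields all three weight values, which is the claimed zero-overhead extension from binary to ternary.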
In the test chip, they are sized conservatively, with a total area of $290 \micro\meter ^2$. The use of thick oxide transistors in this test chip allows us to investigate the behavior of the devices at high voltages, without the concern of damaging the CMOS periphery circuits. Fig.~\ref{fig:SenseVsR} plots the probability that the sense amplifier has converged during the read time. In $50\nano\second$, the read operation only converges if the resistance of the BL device is significantly lower than $100\kilo\ohm$. To evaluate this behavior in a broader range of programming conditions, we repeated the experiment on 109 device pairs of the memory array, each programmed 14 times with various resistance values, and performed a read operation in $50\nano\second$ with an on-chip PCSA after each programming step. The memory array of our test chip features one separate PCSA per column. Therefore, 32 different PCSAs are used in our results. Fig.~\ref{fig:multiniveau_2}(a) shows, for each couple of resistance values $R_{BL}$ and $R_{BLb}$, whether the read operation converged with $Q=V_{DD}$ (blue), meaning a weight of $1$, converged with $Q=0$ (red), meaning a weight of $-1$, or did not converge (grey), meaning a weight of $0$. The results confirm that the LRS/HRS or HRS/LRS configurations may be used to mean weights $1$ and $-1$, and HRS/HRS for weight $0$. When both devices are in the HRS (resistance higher than $100\kilo\ohm$), the PCSA never converges within $50\nano\second$ (weight of $0$). When one device is in the LRS (resistance lower than $10\kilo\ohm$), the PCSA always converges within $50\nano\second$ (weight of $\pm1$). The separation between the $1$ (or $-1$) and $0$ regions is not strict, and for intermediate resistance values, we see that the read operation may or may not converge in $50\nano\second$. Fig.~\ref{fig:multiniveau_2}(b) summarizes the different operation regimes of the PCSA. 
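The operation regimes of Fig.~\ref{fig:multiniveau_2}(b) can be summarized by a simple classifier over the two programmed resistances; the $10\kilo\ohm$ and $100\kilo\ohm$ boundaries are the experimentally observed ones, and intermediate values are deliberately left undetermined:

```python
R_LRS_MAX = 10e3   # ohm: below this, the PCSA always converged within 50 ns
R_HRS_MIN = 100e3  # ohm: above this (both devices), it never converged

def expected_weight(r_bl, r_blb):
    """Nominal weight region for a (R_BL, R_BLb) pair; None = indeterminate."""
    if r_bl >= R_HRS_MIN and r_blb >= R_HRS_MIN:
        return 0       # HRS/HRS: no convergence -> weight 0
    if r_bl <= R_LRS_MAX and r_blb >= R_HRS_MIN:
        return 1       # LRS/HRS
    if r_blb <= R_LRS_MAX and r_bl >= R_HRS_MIN:
        return -1      # HRS/LRS
    return None        # intermediate resistances: read may or may not converge
```

For example, the LRS/HRS pair of Fig.~\ref{fig:SPICE}(a) ($5\kilo\ohm$/$350\kilo\ohm$) falls in the $+1$ region, and the HRS/HRS pair of Fig.~\ref{fig:SPICE}(b) ($320\kilo\ohm$/$350\kilo\ohm$) in the $0$ region.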
\begin{table}[h] \begin{center} \vspace{0.5cm} \caption{Error Rates on Ternary Weights Measured Experimentally} \begin{tabular}{|l|c|c|c|} \hline Programming & Type 1 & Type 2 & Type 3 \\ Conditions& ($1\longleftrightarrow -1$)& ($\pm 1\rightarrow0$) & ($0 \rightarrow \pm 1$) \\ \hline Fig.~\ref{fig:distrib}(a) & $<10^{-6}$ & $<1\%$ & $6.5\%$ \\ Fig.~\ref{fig:distrib}(b) & $<10^{-6}$ & $<1\%$ & $18.5\%$ \\ \hline \end{tabular} \vspace{0.5cm} \label{table:exp_errors} \end{center} \end{table} \section{Impact of Process, Voltage, and Temperature Variations} \label{sec:PVT} \begin{figure*}[tbp] \centering \includegraphics[width=1.0\textwidth]{fig6.pdf} \caption{ Three Monte Carlo SPICE-based simulations of the experiments of Fig.~\ref{fig:multiniveau_2}, in three situations: (a) slow transistors ($0 \celsius$ temperature, $1.1\volt$ supply voltage), (b) experimental conditions ($27 \celsius$ temperature, $1.2\volt$ supply voltage), (c) fast transistors ($60 \celsius$ temperature, $1.3\volt$ supply voltage). The simulations include local and global process variations, as well as transistor mismatch, such that each point in the figure is obtained using different transistor parameters. All results are plotted in the same manner and with the same conventions as Fig.~\ref{fig:multiniveau_2}. } \label{fig:PVT} \end{figure*} We now verify the robustness of the proposed scheme to process, voltage, and temperature variations. For this purpose, we performed extensive circuit simulations of the operation of the sense amplifier, reproducing the conditions of the experiments of Fig.~\ref{fig:multiniveau_2}, using the same resistance values for the RRAM devices, and including process, voltage, and temperature variations. The results of the simulations are processed and plotted using the same format as the experimental results of Fig.~\ref{fig:multiniveau_2}, to ease comparison. 
These simulations are obtained using the Monte Carlo simulator provided by the Mentor Graphics Eldo tool, with parameters validated on silicon, provided by the design kit of our commercial CMOS process. Each point in the graphs of Fig.~\ref{fig:PVT} therefore features different transistor parameters. We included global and local process variations, as well as transistor mismatch, in order to capture the whole range of transistor variabilities observed in silicon. In order to assess the impact of voltage and temperature variations, these simulations are presented in three conditions: slow transistors ($0 \celsius$ temperature, and $1.1\volt$ supply voltage, Fig.~\ref{fig:PVT}(a)), experimental conditions ($27 \celsius$ temperature, and $1.2\volt$ supply voltage, Fig.~\ref{fig:PVT}(b)), and fast transistors ($60 \celsius$ temperature, and $1.3\volt$ supply voltage, Fig.~\ref{fig:PVT}(c)). The RRAM devices are modeled by resistors. Their process variations are naturally included through the use of different resistance values in Fig.~\ref{fig:PVT}. The impact of voltage variation on RRAM is naturally included through Ohm's law, and the impact of temperature variation, which is smaller than on transistors, is neglected. In all three conditions, the simulation results appear very similar to the experiments. Three clear regions are observed: non-convergence of the sense amplifier within $50\nano\second$ for devices in HRS/HRS, and convergence within this time to a $+1$ or $-1$ value for devices in LRS/HRS and HRS/LRS, respectively. However, the frontier between these regimes is much sharper in the simulations than in the experiments. As the different data points in Fig.~\ref{fig:PVT} differ by process and mismatch variations, this suggests that process variations do not cause the stochasticity observed in the experiments of Fig.~\ref{fig:multiniveau_2}, and that they have little impact in our scheme. 
We also see that the frontier between the different sense regimes in all three operating conditions remains firmly within the $10-100 k\Omega$ range, suggesting that even large variations of voltage ($\pm 0.1V$) and temperature ($\pm 30 ^\circ C$) do not endanger the functionality of our scheme. Logically, in the case of fast transistors, the frontier is shifted toward higher resistances, whereas in the case of slow transistors, it is shifted toward lower resistances. Independent simulations allowed us to verify that this change is mostly due to the voltage variations: the temperature variations have an almost negligible impact on the proposed scheme. We also observed that the impact of voltage variations increases significantly when the supply voltage is reduced. For example, with a supply voltage of $0.7V$ instead of the $1.2V$ value considered here, variations of the supply voltage of $\pm0.1V$ can change the mean switching delay of the PCSA by a factor of two. The thick oxide transistors used in this work have a nominal voltage of $5V$, and a typical threshold voltage of approximately $0.6V$. Therefore, although our scheme is especially appropriate for supply voltages far below the nominal voltage, it is not necessarily appropriate for voltages in the subthreshold regime, or very close to the threshold voltage. \section{Programmability of Ternary Weights} \label{sec:programmability} \begin{figure}[h] \centering \includegraphics[width=3.4in]{fig7.pdf} \caption{Distribution of the LRS and HRS states programmed with a SET compliance of $200\micro\ampere$, RESET voltage of $2.5\volt$ and programming pulses of (a) $100\micro\second$ and (b) $1\micro\second$. 
Measurements are performed on $2,048$ RRAM devices, separating bit line (full lines) and bit line bar (dashed lines) devices.} \label{fig:distrib} \end{figure} To ensure reliable functioning of the ternary sense operation, we have seen that devices in LRS should be programmed to an electrical resistance below $10\kilo\ohm$, and devices in HRS to resistances above $100\kilo\ohm$ (Fig.~\ref{fig:multiniveau_2}(b)). The electrical resistance of resistive memory devices depends considerably on their programming conditions \cite{hirtzlin2019digital,grossi2016fundamental}. Fig.~\ref{fig:distrib} shows the distributions of LRS and HRS resistances for two programming conditions, over the $2,048$ devices of the array, differentiating devices connected to bit lines and to bit lines bar. We see that in all cases, the LRS features a tight distribution. The SET process is indeed controlled by a compliance current that naturally stops the filament growth at a targeted resistance value \cite{bocquet2014robust}. An appropriate choice of the compliance current can ensure an LRS below $10\kilo\ohm$ in most situations. On the other hand, the HRS shows a broad statistical distribution. In the RESET process, the filament indeed breaks in a random process, making it extremely hard to control the final state \cite{bocquet2014robust,ly2018role}. The use of stronger programming conditions leads to higher values of the HRS. This asymmetry between the variability of LRS and HRS means that in our scheme, the different ternary weight values naturally feature different error rates. The ternary error rates in the two programming conditions of Fig.~\ref{fig:distrib} are listed in Table~\ref{table:exp_errors}. Errors of Type~1, where weight values of $1$ and $-1$ are inverted, are the least frequent. Errors of Type~2, where a weight value of $1$ or $-1$ is replaced by a weight value of $0$, are infrequent as well.
On the other hand, due to the large variability of the HRS, weight values $0$ have a significant probability of being measured as $1$ or $-1$ (Type~3 errors): $6.5\%$ in the conditions of Fig.~\ref{fig:distrib}(a), and $18.5\%$ in the conditions of Fig.~\ref{fig:distrib}(b). Some resistive memory technologies with large memory windows, such as specifically optimized conductive bridge memories \cite{vianello2014resistive}, would feature lower Type~3 error rates. Similarly, program-and-verify strategies \cite{lee2012multi,alibart2012high,xu2013understanding} may reduce this error rate. Nevertheless, the higher error rate for zeros than for $1$ and $-1$ weights is an inherent feature of our architecture. Therefore, in the next section, we assess the impact of these errors on the accuracy of neural networks. \begin{figure}[bp] \centering \includegraphics[width=3.4in]{fig8.pdf} \caption{Simulation of the maximum test accuracy reached during one training procedure, averaged over five trials, for BNNs and TNNs with various model sizes on the CIFAR-10 dataset. Error bars are one standard deviation.} \label{fig:CIFAR10} \end{figure} \section{Network-Level Implications} \label{sec:network} We first investigate the accuracy gain obtained when using ternarized instead of binarized networks. We trained BNN and TNN versions of networks with Visual Geometry Group (VGG) type architectures \cite{simonyan2014very} on the CIFAR-10 image recognition task, which consists of classifying $1,024$-pixel color images into ten classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck) \cite{krizhevsky2009learning}. Simulations are performed using PyTorch 1.1.0 \cite{paszke2017automatic} on a cluster of eight Nvidia GeForce RTX 2080 GPUs. The architecture of our networks consists of six convolutional layers with kernel size three. The number of filters at the first layer is called $N$ and is multiplied by two every two layers.
Maximum-value pooling with kernel size two is used every two layers and batch-normalization \cite{ioffe2015batch} every layer. The classifier consists of one hidden layer of 512 units. For the TNN, the activation function has a threshold $\Delta = 5\cdot10^{-2}$ (as defined in section~\ref{sec:background}). The training methods for both the BNN and the TNN are described in the Appendix. The training is performed using the AdamW optimizer \cite{kingma2014adam, loshchilov2017fixing}, with minibatch size $128$. The initial learning rate is set to $0.01$, and the learning rate schedule from \cite{loshchilov2017fixing, loshchilov2016sgdr} (cosine annealing with two restarts, for respectively $100$, $200$, and $400$ epochs) is used, resulting in a total of $700$ epochs. Training data is augmented using random horizontal flips, and a random choice between cropping after padding and small random rotations. No error is added during the training procedure, as our device is meant to be used for inference. The synaptic weights encoded by device pairs would be set after the model has been trained on a computer. Fig.~\ref{fig:CIFAR10} shows the maximum test accuracy resulting from these training simulations, for different sizes of the model. The error bars represent one standard deviation of the training accuracies. TNNs always outperform BNNs with the same model size (and, therefore, the same number of synapses). The most substantial difference is seen for smaller model sizes, but a significant gap remains even for large models. Moreover, the difference in the number of parameters required to reach a given accuracy for TNNs and BNNs increases with higher accuracies. There is, therefore, a definite advantage to using TNNs instead of BNNs. \begin{figure}[tbp] \centering \includegraphics[width=3.4in]{fig9.pdf} \caption{Simulation of the impact of bit error rate on the test accuracy at inference time for a model of size $N = 128$, TNN in (a) and BNN in (b). Type 1 errors are sign switches (e.g.
$+1$ mistaken for $-1$), Type 2 errors are $\pm 1$ mistaken for $0$, and Type 3 errors are $0$ mistaken for $\pm 1$, as described in the inset schematics. Errors are sampled at each mini-batch and the test accuracy is averaged over five passes through the test set. Error bars are one standard deviation. The bit error rate is given as an absolute rate.} \label{fig:BT_errors} \end{figure} Fig.~\ref{fig:CIFAR10} compared fully ternarized networks (ternary weights and activations) with fully binarized ones (binary weights and activations). Table~\ref{table:acc_gain} lists the impact of weight ternarization for different types of activations (binary, ternary, and real-valued activations). All results are reported for a model of size $N=128$, trained on CIFAR-10, and are averaged over five training procedures. We observe that for BNNs and TNNs with quantized activations, the accuracy gains provided by ternary weights over binary weights are $0.84$ and $0.86$ points, respectively, and are statistically significant given the standard deviations. This accuracy gain is larger than the gain provided by ternary activations over binary activations, which is about $0.3$ points. This larger impact of weight ternarization may come from ternary kernels having greater expressive power than binary kernels, which are often redundant in practical settings \cite{courbariaux2016binarized}. The gain of ternary weights drops to $0.26$ points if real-valued activations are allowed (using the rectified linear unit, or ReLU, as activation function, see Appendix), and is not statistically significant considering the standard deviations. Quantized activations are vastly more favorable in the context of hardware implementations, and in this situation, there is thus a statistically significant benefit provided by ternary weights over binary weights.
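As a minimal illustration, the ternary activation $\phi$ used for the TNNs (a sign function with a dead zone of half-width $\Delta$, as defined in section~\ref{sec:background}) can be sketched in plain Python. The threshold $\Delta = 5\cdot10^{-2}$ is the value quoted in the text; the function name is ours, not the paper's code:

```python
def ternary_activation(x, delta=0.05):
    """Ternary quantizer phi: maps x to -1, 0, or +1.

    Values inside the dead zone [-delta, +delta] are mapped to 0;
    delta = 5e-2 is the threshold quoted in the text.
    """
    if x > delta:
        return 1
    if x < -delta:
        return -1
    return 0
```

A binary network replaces this by the plain $\rm sign$ function, i.e. the dead zone collapses to a point.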
\begin{table}[!h] \begin{center} \vspace{0.5cm} \caption{Comparison of the gain in test accuracy for an $N=128$ model size on CIFAR-10 obtained by weight ternarization instead of binarization, for three types of activation quantization. } \begin{tabular}{|l|c|c|c|} \hline & \multicolumn{3}{c|}{\textbf{Activations}} \\ & Binary & Ternary & Full Precision \\ \hline \textbf{Weights} & & & \\ Binary & $91.19\pm0.08$ & $91.51 \pm 0.09$ & $93.87 \pm 0.19$ \\ Ternary & $92.03\pm 0.12$ & $92.35 \pm 0.05$ & $94.13 \pm 0.10$ \\ \textit{Gain of ternarization} & \textit{0.84 } & \textit{0.86} & \textit{0.26} \\ \hline \end{tabular} \vspace{0.5cm} \label{table:acc_gain} \end{center} \end{table} We finally investigate the impact of bit errors in BNNs and TNNs to see if the advantage provided by using TNNs in our approach remains constant when errors are taken into account. Consistent with the results reported in section~\ref{sec:programmability}, three types of errors are investigated: Type 1 errors are sign switches, e.g., $+1$ mistaken for $-1$; Type 2 errors are only defined for TNNs and correspond to $\pm1$ mistaken for $0$; and Type 3 errors are $0$ mistaken for $\pm 1$, as illustrated in the inset schematic of Fig.~\ref{fig:BT_errors}(a). Fig.~\ref{fig:BT_errors}(a) shows the impact of these errors on the test accuracy for different values of the error rate at inference time. These simulation results are presented on CIFAR-10 with a model size of $N=128$. Errors are randomly and artificially introduced in the weights of the neural network. Bit errors are included at the layer level and sampled at each mini-batch of the test set. Type 1 errors switch the sign of a synaptic weight with a probability equal to the rate of Type 1 errors. Type 2 errors set a non-zero synaptic weight to 0 with a probability equal to the Type 2 error rate.
Type 3 errors set a synaptic weight of 0 to $\pm1$ with a probability equal to the Type 3 error rate; the choice of the sign ($+1$ or $-1$) is made with probability 0.5. Fig.~\ref{fig:BT_errors} is obtained by averaging the test accuracy obtained over five passes through the test set for increasing bit error rates. Type~1 errors have the most impact on neural network accuracy. As seen in Fig.~\ref{fig:BT_errors}(b), the impact of these errors is similar to the impact of weight errors in a BNN. On the other hand, Type~3 errors have the least impact, with bit error rates as high as $20\%$ degrading the accuracy surprisingly little. This result is fortunate, as we have seen in section~\ref{sec:programmability} that Type~3 errors are the most frequent in our architecture. We also performed simulations considering all three types of error at the same time, with the error rates reported in Table~\ref{table:exp_errors} corresponding to the programming conditions of Fig.~\ref{fig:distrib}(a) and \ref{fig:distrib}(b). For Type~1 and Type~2 errors, we considered the upper limits listed in Table~\ref{table:exp_errors}. For the conditions of Fig.~\ref{fig:distrib}(a) (Type~3 error rate of 6.5\%), the test accuracy was degraded from 92.2\% to $92.05 \pm 0.14 \%$, and to $92.02 \pm 0.17 \%$ for the conditions of Fig.~\ref{fig:distrib}(b) (Type~3 error rate of 18.5\%), where the average and standard deviation are computed over 100 passes through the test set. We found that the slight degradation of the CIFAR-10 test accuracy was mostly due to the Type~2 errors, although Type~3 errors are much more frequent. The fact that mistaking a $0$ weight for a $\pm1$ weight (Type~3 error) has much less impact than mistaking a $\pm1$ weight for a $0$ weight (Type~2 error) can seem surprising. However, it is known, theoretically and practically, that in BNNs, some weights have little importance to the accuracy of the neural network \cite{laborieux2020synaptic}.
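The error-injection procedure described above can be sketched in plain Python as follows; the function and parameter names are ours, not the paper's simulation code:

```python
import random

def inject_errors(weights, p1=0.0, p2=0.0, p3=0.0, rng=random):
    """Apply Type 1/2/3 errors to a sequence of ternary weights (-1, 0, +1).

    Type 1: a +/-1 weight has its sign switched with probability p1.
    Type 2: a +/-1 weight is replaced by 0 with probability p2.
    Type 3: a 0 weight is read as +1 or -1 (equiprobably) with probability p3.
    """
    out = []
    for w in weights:
        if w != 0:
            if rng.random() < p1:
                w = -w                      # Type 1: sign switch
            elif rng.random() < p2:
                w = 0                       # Type 2: +/-1 mistaken for 0
        elif rng.random() < p3:
            w = rng.choice([1, -1])         # Type 3: 0 mistaken for +/-1
        out.append(w)
    return out
```

In the simulations described above, errors are resampled at each mini-batch; the sketch shows a single draw over a weight list.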
Such unimportant weights typically correspond to synapses that feature a $0$ weight in a TNN, whereas synapses with $\pm1$ weights in a TNN correspond to ``important'' synapses of a BNN. It is thus understandable that errors on such synapses have more impact on the final accuracy of the neural network. \section{Comparison with Three-Level Programming} \label{sec:three-level} An alternative approach to implementing ternary weights with resistive memory can be to program the individual devices into three separate levels. This idea is feasible, as the resistance level of the LRS can to a large extent be controlled through the choice of the compliance current during the SET operation in many resistive memory technologies \cite{bocquet2014robust,hirtzlin2019digital}. The obvious advantage of this approach is that it requires a single device per synapse. This idea also brings several challenges. First, the sense operation has to be more complex. The most natural technique is to perform two sense operations, comparing the resistance of a device under test to two different thresholds. Second, this technique is much more prone to bit errors than ours, as states are not programmed in a differential fashion \cite{hirtzlin2019digital}. Additionally, this approach does not feature the natural resilience to Type~1 and Type~2 errors, and Type~2 and Type~3 errors will typically feature similar rates. Finally, unlike ours, this approach is prone to resistance drift, which is inherent to some resistive memory technologies \cite{li2012resistance}. These comments suggest that the choice of a technique for storing ternary weights should be dictated by the technology. Our technique is especially appropriate for resistive memories that do not support single-device multilevel storage, or that feature high error rates or resistance drift. The three-levels-per-device approach would be most appropriate for devices with well-controlled analog storage properties.
\section{Conclusion} In this work, we revisited a differential memory architecture for BNNs. We showed experimentally on a hybrid CMOS/RRAM chip that its sense amplifier can differentiate not only the LRS/HRS and HRS/LRS states, but also the HRS/HRS state, in a single sense operation. This feature allows the architecture to store ternary weights, and to provide a building block for ternary neural networks. We showed by neural network simulation on the CIFAR-10 task the benefits of using ternary instead of binary networks, and the high resilience of TNNs to weight errors: the type of errors observed experimentally in our scheme is also the type of errors to which TNNs are the most immune. This resilience allows the use of our architecture without relying on any formal error correction. Our approach also appears resilient to process, voltage, and temperature variations, provided the supply voltage remains reasonably higher than the threshold voltage of the transistors. As this behavior of the sense amplifier is exacerbated at supply voltages below the nominal voltage, our approach especially targets extremely energy-conscious applications such as uses within wireless sensors or medical applications. This work opens the way to increasing edge intelligence in such contexts, and also highlights that the low-voltage operation of circuits may sometimes provide opportunities for new functionalities. \section*{Acknowledgments} The authors would like to thank M.~Ernoult and L.~Herrera Diez for fruitful discussions. \section*{Appendix: Training Algorithm of Binarized and Ternary Neural Networks} \begin{algorithm}[ht!]{\emph{Input}: $W^{\rm h}$, $\theta^{\rm BN} = (\gamma_l, \beta_l)$, $U_W$, $U_{\theta}$, $(X, y)$, $\eta$. \\ \emph{Output}: $W^{\rm h}$, $\theta^{\rm BN}$, $U_W$, $U_{\theta}$.} \caption{Training procedure for binary and ternary neural networks.
$W^{\rm h}$ are the hidden weights, $\theta^{\rm BN} = (\gamma_l, \beta_l)$ are Batch Normalization parameters, $U_W$ and $U_{\theta}$ are the parameter updates prescribed by the Adam algorithm \cite{kingma2014adam}, $(X, y)$ is a batch of labelled training data, and $\eta$ is the learning rate. ``cache'' denotes all the intermediate layer computations that need to be stored for the backward pass. ${\rm Quantize}$ is either $\phi$ or ${\rm sign}$ as defined in section \ref{sec:background}. ``~$\cdot$~'' denotes the element-wise product of two tensors with compatible shapes.}\label{algo} \begin{algorithmic}[1] \State $W^{\rm Q} \leftarrow {\rm Quantize} ( W^{\rm h})$ \Comment{Computing quantized weights} \State $A_0 \leftarrow X$ \Comment{Input is not quantized} \For{$l=1$ to $L$} \Comment{For loop over the layers} \State $z_l \leftarrow W^{\rm Q}_l A_{l-1}$ \Comment{Matrix multiplication} \State $A_l \leftarrow \gamma_l \cdot \frac{z_l - {\rm E}(z_l)}{\sqrt{{\rm Var}(z_l) + \epsilon}} + \beta_l$ \Comment{Batch~Normalization~\cite{ioffe2015batch}} \If{$l < L$} \Comment{If not the last layer} \State $A_l \leftarrow {\rm Quantize}(A_l)$ \Comment{Activation is quantized} \EndIf \EndFor \State $\hat{y} \leftarrow A_L$ \State $C \leftarrow {\rm Cost}(\hat{y}, y)$ \Comment{Compute mean loss over the batch} \State $(\partial_W C, \partial_{\theta} C) \leftarrow {\rm Backward}(C, \hat{y}, W^{\rm Q}, \theta^{\rm BN}, {\rm cache}) $ \Comment{Cost gradients} \State $ (U_W, U_{\theta}) \leftarrow {\rm Adam}(\partial_W C, \partial_{\theta} C, U_W, U_{\theta} )$ \State $W^{\rm h} \leftarrow W^{\rm h} - \eta U_W $ \State $\theta^{\rm BN} \leftarrow \theta^{\rm BN} - \eta U_{\theta}$ \\ \Return $W^{\rm h}$, $\theta^{\rm BN}$, $U_W$, $U_{\theta}$ \end{algorithmic} \end{algorithm} During the training of BNNs and TNNs, each quantized (binary or ternary) weight is associated with a real-valued hidden weight.
This approach to training quantized neural networks was introduced in \cite{courbariaux2016binarized} and is presented in Algorithm~\ref{algo}. The quantized weights are used for computing neuron values (equations~\eqref{eq:activ_BNN} and~\eqref{eq:activ_TNN}), as well as the gradient values in the backward pass. However, training steps are achieved by updating the real hidden weights. The quantized weight is then determined by applying to the real value the quantizing function ${\rm Quantize}$, which is $\phi$ for ternary or ${\rm sign}$ for binary weights, as defined in section \ref{sec:background}. The quantization of activations is done by applying the same function ${\rm Quantize}$, except for real-valued activations, which are obtained by applying a rectified linear unit (${\rm ReLU}(x) = {\rm max}(0, x)$). Quantized activation functions ($\phi$ or ${\rm sign}$) have zero derivatives almost everywhere, which is an issue for backpropagating the error gradients through the network. A way around this issue is the use of a straight-through estimator \cite{bengio2013estimating}, which consists of taking the derivative of another function instead of the almost-everywhere-zero derivative. Throughout this work, we take the derivative of $\rm Hardtanh$, which is 1 between $-1$ and $1$ and 0 elsewhere, both for binary and ternary activations. The simulation code used in this work is publicly available in the GitHub repository: \url{https://github.com/Laborieux-Axel/Quantized_VGG} \FloatBarrier
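The interplay of hidden weights and the straight-through estimator can be condensed into a single-weight sketch. Plain SGD stands in for the Adam update of Algorithm~\ref{algo}, and all names are ours:

```python
def sign(x):
    """Binary quantizer used in the forward pass."""
    return 1 if x >= 0 else -1

def ste_weight_step(w_hidden, grad_wrt_quantized, lr=0.01):
    """One update of a real-valued hidden weight behind a quantized weight.

    The forward pass uses sign(w_hidden); the backward pass substitutes the
    derivative of Hardtanh (1 on [-1, 1], 0 elsewhere) for the almost-
    everywhere-zero derivative of sign, and the update is applied to the
    hidden weight. Plain SGD replaces the Adam update for brevity.
    """
    ste_grad = grad_wrt_quantized if -1.0 <= w_hidden <= 1.0 else 0.0
    return w_hidden - lr * ste_grad
```

Clipping the estimator outside $[-1, 1]$ prevents hidden weights from drifting away once their quantized value can no longer change.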
\section{The Dataset} \subsection{The game} The dataset contains practically all actions of all players of the MMOG \verb|Pardus| since the game went online in 2004 \cite{pardus}. \verb|Pardus| is an open-ended online game with a worldwide player base of currently more than 370,000 people. Players live in a virtual, futuristic universe in which they interact with others in a multitude of ways to achieve their self-posed goals \cite{castra2005}. Most players engage in various economic activities, typically with the (self-posed) goal of accumulating wealth and status. Social and economic decisions of players are often strongly influenced and driven by social factors such as friendship, cooperation, and conflict. Conflictual relations may result in aggressive acts such as attacks, fights, punishment, or even the destruction of another player's means of production or transportation. The dataset contains longitudinal and relational data allowing for a complete and dynamical mapping of the multiplex relations of the entire virtual society over 1238 days. The behavioral data are free of `interviewer bias' or laboratory effects since users are not reminded of their actions being logged during playing. The longitudinal aspect of the data allows for the analysis of dynamical aspects such as the emergence and evolution of network structures. It is possible to extract multiple social relationships between a fixed set of humans \cite{PNAS}. \subsection{Human behavioral sequences} We consider eight different actions every player can execute at any time. These are communication (C), trade (T), setting a friendship link (F), removing an enemy link (forgiving) (X), attack (A), placing a bounty on another player (punishment) (B), removing a friendship link (D), and setting an enemy link (E). While C, T, F and X can be associated with {\em positive} (good) actions, A, B, D and E are hostile or {\em negative} (bad) actions.
We classify communication as positive because only a negligible part of communication takes place between enemies \cite{SN}. Segments of the action sequences of three players (146, 199 and 701) are shown in the first three lines of Fig.~\ref{fig:seq}~(a). \begin{table*}[b] \caption*{Table 1. First row: total number of actions by all players (with at least 1000 actions) in the Artemis universe of the Pardus game. Further rows: first 4 moments of $r_Y(d)$, the distribution of the log-increments of the $N_Y$ processes (see text). Approximate log-normality is indicated. The large values of kurtosis for $T$ and $A$ result from a few extreme outliers. } \begin{tabular}{ l|c c c c c c c c} & D & B & A & E & F & C & T & X \\ \hline $\sum_{d=1}^{1238} N_Y(d)$ & 26,471 & 9,914 & 558,905 & 64,444& 82,941 & 5,607,060& 393,250& 20,165\\ \hline mean & 0.002 & 0.001 & 0.004 & -0.002 & -0.002 & 0.000 & 0.003 & 0.002 \\ std & 1.13 & 0.79 & 0.54 & 0.64 & 0.35 & 0.12 & 0.28 & 0.94 \\ skew & 0.12 & 0.26 & 0.35 & 0.08 & 0.23 & 0.11 & 1.00 & -0.01\\ kurtosis & 3.35 & 3.84 & 6.23 & 3.67 & 3.41 & 3.76 & 13.89 & 3.72\\ \end{tabular} \label{tab1} \end{table*} We consider three types of sequences for any particular player. The first is the stream of $N$ consecutive actions $A^i=\{a_n | n=1, \cdots, N \}$ which player $i$ performs during his `life' in the game. The second is the (time-ordered) stream of actions that player $i$ receives from all the other players in the game, i.e. all the actions which are directed towards player $i$. We denote received-action sequences by $R^i=\{r_n | n=1, \cdots, M \}$. Finally, the third sequence is the time-ordered combination of player $i$'s actions and received-actions, i.e. the chronological sequence of the elements of $A^i$ and $R^i$ in the order of occurrence. This combined sequence we denote by $C^i$; its length is $M+N$, see also Fig.~\ref{fig:seq}~(a). The $n$th element of one of these series is denoted by $A^i(n)$, $R^i(n)$, or $C^i(n)$.
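As an illustration, the $\pm1$ encoding of an action sequence $A^i$ by the positive/negative classification above, together with its cumulative sum, can be sketched in plain Python (names are ours):

```python
POSITIVE = set("CTFX")   # communication, trade, friend, forgive
NEGATIVE = set("ABDE")   # attack, bounty, unfriend, enemy

def world_line(actions):
    """Cumulative sum of the +/-1 encoding of an action sequence.

    Positive actions contribute +1, negative actions -1; the returned
    list is the trajectory W(t) for t = 1 .. len(actions).
    """
    total, walk = 0, []
    for a in actions:
        total += 1 if a in POSITIVE else -1
        walk.append(total)
    return walk
```

For example, the sequence ``CCA'' (communicate, communicate, attack) yields the trajectory $1, 2, 1$.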
We do not consider the actual time between two consecutive actions, which can range from milliseconds to weeks; rather, we work in `action-time'. If we assign $+1$ to any positive action C, T, F or X, and $-1$ to the negative actions A, B, D and E, we can translate a sequence $A^i$ into a symbolic binary sequence $A_{\rm bin}^i $. From the cumulative sum of this binary sequence a `world line' or `random walk' for player $i$ can be generated, $W_{\rm good-bad}^i(t) = \sum_{n=1}^{t} A^i_{\rm bin} (n)$, see Fig.~\ref{fig:seq}~(b). Similarly, we define a binary sequence from the combined sequence $C^i$, where we assign $+1$ to an executed action and $-1$ to a received-action. This sequence we call $C^i_{\rm bin}$; its cumulative sum, $W_{\rm act-rec}^i(t) = \sum_{n=1}^{t} C^i_{\rm bin} (n)$, is the `action-receive' random walk or world line. Finally, we denote the number of actions of type $Y$ which occurred during day $d$ in the game by $N_Y(d)$, where $Y$ stands for one of the eight actions. \section{Results} The number of occurrences of the various actions of all players over the entire time period is summarized in Tab. \ref{tab1} (first line). Communication is the most dominant action, followed by attacks and trading, which are each about an order of magnitude less frequent. The daily numbers of all communications, trades and attacks, $N_C(d)$, $N_T(d)$ and $N_A(d)$, are shown in Fig. \ref{fig:actions} (a), (b) and (c), respectively. These processes revert around a mean, $R_Y$. All action processes show approximately Gaussian statistics of their log-increments, $r_Y(d)=\log \frac{N_Y(d)}{N_Y(d-1)}$. The first 4 moments of the $r_Y$ series are listed in Tab. \ref{tab1}. The relatively large kurtosis for $T$ and $A$ results from a few extreme outliers. The distributions of log-increments for the $N_C$, $N_T$ and $N_A$ timeseries are shown in Fig. \ref{fig:actions} (d). The lines are Gaussians for the respective mean and standard deviation from Tab.
\ref{tab1}. As perhaps the simplest mean-reverting model with approximately lognormal distributions, we propose \begin{equation} N_Y(d)= N_Y(d-1)^{\rho_Y} \, e^{\xi(d)} \, R_Y^{(1-\rho_Y)} \quad, \label{mod} \end{equation} where $\rho_Y$ is the mean reversion coefficient, $\xi(d)$ is a realization of a zero-mean Gaussian random number with standard deviation $\sigma_Y$, and $R_Y$ is the value to which the process $N_Y(d)$ reverts. $\sigma_Y$ is given in the third row of Tab. \ref{tab1}. \begin{figure}[t] \begin{center} \includegraphics{fig02.eps} \end{center} \caption{Timeseries of the daily number of (a) trades, (b) attacks, (c) communications in the first 1238 days in the game. A mean-reverting tendency of the three processes can clearly be seen. (d) Simulation of a model timeseries, Eq. (\ref{mod}), with $\rho=0.94$. We use the values from the $N_C$ timeseries, $R=4000$, and standard deviation $\sigma=0.12$; all other parameters are from Tab. \ref{tab1}. Compare with the actual $N_C$ in (c). The only free parameter in the model is $\rho$. Mean reversion and log-normality motivate the model presented in Eq. (\ref{mod}). (e) The distributions of log-increments $r_Y$ of the processes and the model. All follow approximately Gaussian distribution functions. \label{fig:actions} } \end{figure} \subsection{Transition probabilities} With $p( Y | Z )$ we denote the probability that an action of type $Y$ follows an action of type $Z$ in the behavioral sequence of a player. $Y$ and $Z$ stand for any of the eight actions, executed or received (received is indicated by a subscript $r$). In Fig.~\ref{fig:trans}~(a) the transition probability matrix $p\left( Y|Z \right)$ is shown. The $y$ axis of the matrix indicates the action (or received-action) happening at time $t$; the probabilities for the actions (or received-actions) that immediately follow are given in the corresponding row.
This transition matrix specifies to which extent an action or received action of a player is influenced by the action that was done or received at the previous time-step. In fact, if the behavioral sequences of players had no correlations, i.e. if the probability of an action, received or executed, were independent of the history of the player's actions, the transition probability $p\left( Y | Z \right)$ would simply be $p \left( Y \right)$, i.e. the probability that an action or received action $Y$ occurs in the sequence would be determined by its relative frequency only. Therefore, deviations of the ratio $\frac{p\left( Y | Z \right)}{p(Y)}$ from $1$ indicate correlations in the sequences. In Fig.~\ref{fig:trans}~(b) we report the values of $\frac{p\left( Y | Z \right)}{p(Y)}$ for actions and received actions (received actions are indicated with the subscript $r$) classified only according to their positive (+) or negative (-) connotation. In brackets we report the \emph{Z}-score with respect to the uncorrelated case. We find that the probability to perform a good action is significantly higher if at the previous time-step a positive action has been received. Similarly, it is more likely that a player is the target of a positive action if at the previous time-step he executed a positive action. Conversely, it is highly unlikely that after a good action, executed or received, a player acts negatively or is the target of a negative action. Instead, in the case a player acts negatively, it is most likely that he will perform another negative action at the following time-step, while it is highly improbable that the following action, executed or received, will be positive. Finally, in the case a negative action is received, it is likely that another negative action will be received at the following time-step, while all other possible actions and received actions are underrepresented.
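The ratio $\frac{p(Y|Z)}{p(Y)}$ can be estimated from a single symbol sequence by bigram counting; this is a minimal sketch with our own naming (in the paper the statistics run over all players' sequences, and significance is assessed against randomized data):

```python
from collections import Counter

def transition_ratios(seq):
    """Estimate p(Y|Z) / p(Y) for every observed transition Z -> Y.

    Ratios above 1 indicate that Y is overrepresented after Z relative to
    an uncorrelated sequence with the same symbol frequencies.
    """
    freq = Counter(seq)                   # overall counts, p(Y) = freq[Y] / n
    starts = Counter(seq[:-1])            # number of transitions leaving Z
    pairs = Counter(zip(seq, seq[1:]))    # number of Z -> Y transitions
    n = len(seq)
    return {(z, y): (c / starts[z]) / (freq[y] / n)
            for (z, y), c in pairs.items()}
```

For a perfectly alternating sequence such as ``CACACACA'', the ratio for $C \rightarrow A$ is $2$, i.e. A is twice as likely after C as its overall frequency would suggest.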
The high statistical significance of the cases $P(-|-)$ and $P(-_r|-_r)$ hints at a high persistence of negative actions in the players' behavior, see below. Another finding is obtained by considering only pairs of received actions followed by performed actions. This approach allows us to quantify the influence of received actions on the performed actions of players. For these pairs we measure a probability of $0.02$ of performing a negative action after a received positive action. This value is significantly lower than the probability of $0.10$ obtained for randomly reshuffled sequences. Similarly, we measure a probability of $0.27$ of performing a negative action after a received negative action. Note that this result is not in contrast with the values in Fig.~\ref{fig:trans}~(b), since only pairs made up of received actions and performed actions are taken into account. \begin{figure}[t] \begin{center} \includegraphics{fig03.eps} \end{center} \caption{(a) Transition probabilities $p\left( Y | Z \right)$ for actions (and received-actions) $Y$ at time $t+1$, given that a specific action $Z$ was executed or received in the previous time-step $t$. Received-actions are indicated by a subscript $r$. Normalization is such that rows add up to one. The large values in the diagonal signal that human actions are highly clustered or repetitive. Large values for $C \rightarrow C_r$ and $C_r \rightarrow C$ reveal that communication is a tendentially anti-persistent activity -- it is more likely to receive a message after one sent a message and vice versa, than to send or to receive two consecutive messages. (b) The ratio $\frac{p\left( Y | Z \right)}{p(Y)}$ shows the influence of an action $Z$ at a previous time-step $t$ on a following action $Y$ at time $t+1$, where $Y$ and $Z$ can be positive or negative actions, executed or received (received actions are indicated by the subscript $r$).
In brackets, we report the \emph{Z}-score (significance in number of standard deviations) with respect to a sample of 100 randomized versions of the dataset. The cases for which the transition probability is significantly higher (lower) than expected in uncorrelated sequences are highlighted in red (green). Receiving a positive action after performing a positive action is highly overrepresented, and vice versa. Performing (receiving) a negative action after performing (receiving) another negative one is also highly overrepresented. Performing a negative action has no influence on receiving a negative action next. All other combinations are strongly underrepresented; for example, after performing a negative action it is very unlikely to perform a positive action with respect to the uncorrelated case. \label{fig:trans} } \end{figure} \begin{figure*}[t] \begin{center} \includegraphics{fig04.eps} \end{center} \caption{ (a) World lines of good-bad action random walks of the 1,758 most active players, (b) distribution of their slopes $k$ and (c) of their scaling exponents $\alpha$. By definition, players who perform more good (bad) than bad (good) actions have the endpoints of their world lines above (below) 0 in (a) and only fall into the $k>0$ ($k<0$) category in (b). (d) World lines of action-received random walks, (e) distribution of their slopes $k$ and (f) of their scaling exponents $\alpha$. The inset in (d) shows only the world lines of bad players. These players are typically dominant, i.e. they perform significantly more actions than they receive. In total the players perform many more good than bad actions and are strongly persistent with good as well as with bad behavior, see (c), i.e. actions of the same type are likely to be repeated. \label{fig:world} } \end{figure*} \subsection{World lines} The world lines $W_{\rm good-bad}^i$ of good-bad action sequences are shown in Fig. \ref{fig:world} (a), the action-received world lines in Fig. \ref{fig:world} (d).
As a simple measure to characterize these world lines we define the slope $k$ of the line connecting the origin of the world line to its end point (last action of the player). A slope of $k=1$ ($-1$) in the good-bad world lines $W_{\rm good-bad}$ indicates that the player performed only positive (negative) actions. The slope $k^i$ is an approximate measure of `altruism' for player $i$. The histogram of the slopes for all players is shown in Fig.~\ref{fig:world}~(b), separated into good (blue) and bad (red) players, i.e. players who have performed more good than bad actions and vice versa. The mean and standard deviation of slopes of good, bad, and all players are $\bar k^{\rm good} = 0.81 \pm 0.19$, $\bar k^{\rm bad} = -0.40 \pm 0.28$, and $\bar k^{\rm all} = 0.76 \pm 0.31$, respectively. Simulated random walks with the same probability $0.90$ of performing a positive action yield a much lower variation, $\bar k^{\rm sim} = 0.81 \pm 0.01$, pointing at an inherent heterogeneity of human behavior. For the combined action--received-action world line $W_{\rm act-rec}$ the slope is a measure of how well a person is integrated in her social environment. If $k=1$ the person only acts and receives no input, she is `isolated' but dominant. If the slope is $k=-1$ the person is driven by the actions of others and never acts herself. The histogram of slopes for all players is shown in Fig.~\ref{fig:world}~(e). Most players are well within the $\pm45$ degree cone. Mean and standard deviation of slopes of good, bad, and all players are $\bar k^{\rm good} = 0.02 \pm 0.10$, $\bar k^{\rm bad} = 0.30 \pm 0.19$, and $\bar k^{\rm all} = 0.04 \pm 0.12$, respectively. Bad players are tendentially dominant, i.e. they perform significantly more actions than they receive. For simulated random walks with equal probabilities for up and down moves, over a sample with the same sequence lengths, we again find a much narrower distribution with slope $\bar k^{\rm sim} = 0.00 \pm 0.01$. 
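As an illustration (our own sketch, not the authors' code), the slope $k$ can be computed from an action sequence encoded as $+1$ (good) and $-1$ (bad) steps; the function names are hypothetical.

```python
# Sketch: world line and slope k of a good-bad action sequence.
# Each action is a step of +1 (good) or -1 (bad); the world line is
# the running sum, and k connects its origin to its end point.

def world_line(steps):
    """Cumulative sum of +1/-1 steps, starting at 0."""
    w, total = [0], 0
    for s in steps:
        total += s
        w.append(total)
    return w

def slope(steps):
    """k = endpoint / number of actions; k = 1 means only good actions."""
    return world_line(steps)[-1] / len(steps)
```

For a player with three good actions and one bad one, `slope([1, 1, -1, 1])` returns `0.5`, matching the definition of $k$ as endpoint over sequence length.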
As a second measure we use the mean square displacement of world lines to quantify the persistence of action sequences, \begin{equation} M^2(\tau) = \left\langle \left( \Delta W (\tau) - \langle \Delta W (\tau) \rangle \right)^2 \right\rangle \propto \tau^{2\alpha} \quad, \end{equation} (see Materials and Methods). The histogram of exponents $\alpha$ for the good-bad random walk, separated into good (blue) and bad (red) players, is shown in Fig.~\ref{fig:world}~(c), for the action--received-action world line in (f). In the first case strongly persistent behavior is obvious, in the second there is a slight tendency towards persistence. Mean and standard deviation for the good-bad world lines are $\alpha_{\rm good-bad}= 0.87\pm 0.06$, for the action--received-action world lines $\alpha_{\rm act-rec}=0.59 \pm 0.10$. Simulated sequences of random walks have -- as expected by definition -- an exponent of $\alpha_{\rm rnd} = 0.5$, again with a very small standard deviation of about $0.02$. Figure \ref{fig:world} (a) also indicates that the lifetime of players who use negative actions frequently is short. The average lifetime for players with a slope $k<0$ is $2528 \pm 1856$ actions, compared to players with a slope $k>0$ with $3909 \pm 4559$ actions. The average lifetime of the whole sample of players is $3849 \pm 4484$ actions. \subsection{Motifs, Entropy and Zipf law} By considering all the sequences of actions $A^i$ of all possible players $i$, we have an ensemble which allows us to perform a motif analysis \cite{sinatra_motifs}. We define an $n$-string as a subsequence of $n$ contiguous actions. An $n$-motif is an $n$-string which appears in the sequences with a probability higher than expected, after lower-order correlations have been properly removed (see Materials and Methods). We computed the observed and expected probabilities $p^{\rm obs}$ and $p^{\rm exp}$ for all $8^2=64$ 2-strings and for all $8^3=512$ 3-strings, focusing on those $n$-strings with the highest ratio $\frac{p^{\rm obs}}{p^{\rm exp}}$. 
Higher orders are statistically not feasible due to combinatorial explosion. We find that the $2$-motifs in the sequences of actions $A$ are clusters of same letters: BB, DD, XX, EE, FF, AA with ratios $\frac{p^{\rm obs}}{p^{\rm exp}} \approx 169$, $136$, $117$, $31$, $15$, $10$, respectively. This observation is consistent with the previous first-order observation that actions cluster. The most significant $3$-motifs, however, are (with two exceptions) palindromes: EAX, DAF, DCD, DAD, BGB, BFB, with ratios $\frac{p^{\rm obs}}{p^{\rm exp}} \approx 123$, $104$, $74$, $62$, $33$, $32$, respectively. The exceptions disappear when one considers actions executed on the same screen in the game as equivalent, i.e. setting or removing friends or enemies: F, D, E, X. This observation hints towards processes where single actions of one type tend to disrupt a flow of actions of another type. Finally, we partition the action sequences into $n$-strings (`words'). Fig.~\ref{fig:entropy} shows the rank distribution of word occurrences of different lengths $n$. The distribution shows an approximate Zipf law \cite{zipf} (slope of $\kappa=-1$) for ranks below 100. For ranks between 100 and 25,000 the scaling exponent approaches a smaller value of about $\kappa \sim -1.5$. The Shannon $n$-tuple redundancy (see e.g. \cite{dna, dna2, dna3}) for symbol sequences composed of 8 symbols (our action types) is defined as \begin{equation} R^{(n)} = 1+ \frac{1}{3n} \sum_{i=1}^{8^n} P_i^{(n)} \log_2 P_i^{(n)} \quad , \end{equation} where $P_i^{(n)}$ is the probability of finding a specific $n$-letter word. Uncorrelated sequences yield an equi-distribution, $P_i^{(n)}= 8^{-n}$, i.e. $R^{(n)} =0$. In the other extreme of only one letter being used, $R^{(n)} =1$. In Fig.~\ref{fig:entropy} (inset) $R^{(n)}$ is shown as a function of sequence length $n$. Shannon redundancy is not a constant but increases with $n$. 
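A minimal sketch of this redundancy measure (our own illustration, using overlapping $n$-words and a hypothetical function name; since $\log_2 8^n = 3n$, the normalization generalizes to any alphabet size):

```python
# Sketch: Shannon n-tuple redundancy R^(n) for a sequence over an
# 8-letter action alphabet. Word probabilities P_i are estimated from
# overlapping n-strings; R = 1 + (1/(3n)) * sum_i P_i log2 P_i.

from collections import Counter
from math import log2

def redundancy(seq, n, alphabet_size=8):
    words = Counter(seq[t:t + n] for t in range(len(seq) - n + 1))
    total = sum(words.values())
    h = sum((c / total) * log2(c / total) for c in words.values())
    bits = n * log2(alphabet_size)  # equals 3n for 8 symbols
    return 1.0 + h / bits
```

A sequence using a single letter gives $R^{(n)}=1$, while a sequence cycling uniformly through all 8 symbols gives $R^{(1)}=0$, matching the two limiting cases stated in the text.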
This indicates that Boltzmann-Gibbs entropy might not be an extensive quantity for action sequences \cite{hanel2011}. \section{Discussion} The analysis of human behavioral sequences as recorded in a massive multiplayer online game shows that communication is by far the most dominant activity, followed by aggression and trade. Communication events are about an order of magnitude more frequent than attacks and trading events, showing the importance of information exchange between humans. It is possible to understand the collective timeseries of human actions of a particular type ($N_Y$) with a simple mean-reverting log-normal model. On the individual level we are able to identify organizational patterns of the emergence of good overall behavior. Transition rates of actions of individuals show that positive actions strongly induce positive \emph{reactions}. Negative behavior on the other hand has a high tendency of being repeated instead of being reciprocated, showing the `propulsive' nature of negative actions. However, if we consider only reactions to negative actions, we find that negative reactions are highly overrepresented. The probability of acting out negative actions is about 10 times higher if a person received a negative action at the previous timestep than if she received a positive action. The action of communication is found to be of a highly reciprocal `back-and-forth' nature. The analysis of binary timeseries of players (good-bad) shows that the behavior of almost all players is `good' almost all the time. Negative actions are balanced to a large extent by good ones. Players with a high fraction of negative actions tend to have a significantly shorter life. This may be due to two reasons: first, because they are hunted down by others and give up playing; second, because they are unable to maintain a social life and quit the game because of loneliness or frustration. 
We interpret these findings as empirical evidence for self-organization towards reciprocal, good conduct within a human society. Note that the game allows bad behavior in the same way as good behavior, but the extent of punishment of bad behavior is freely decided by the players. \begin{figure}[t] \begin{center} \includegraphics{fig05.eps} \end{center} \caption{ Rank ordered probability distribution of 1 to 6 letter words. Slopes of $\kappa = -1$ and $\kappa = -1.5$ are indicated for reference. The inset shows the Shannon $n$-tuple redundancy as a function of word length $n$. \label{fig:entropy} } \end{figure} Behavior is highly persistent in terms of good and bad, as seen in the scaling exponent ($\alpha \sim 0.87$) of the mean square displacement of the good-bad world lines. This high persistence means that good and bad actions are carried out in clusters. Similarly high levels of persistence were found in a recent study of human behavior \cite{fan}. A smaller exponent ($\alpha \sim 0.59$) is found for the action--received-action timeseries. Finally we split behavioral sequences of individuals into subsequences (of length 1-6) and interpret these as behavioral `words'. In the ranking distribution of these words we find a Zipf law up to ranks of about 100. For less frequent words the exponent in the rank distribution approaches a somewhat smaller value of about $\kappa \sim -1.5$. From word occurrence probabilities we further compute the Shannon $n$-tuple redundancy, which yields relatively large values when compared, for example, to those of DNA sequences \cite{dna, dna2, dna3}. This reflects the dominance of communication over all the other actions. The $n$-tuple redundancy is clearly not a constant, reflecting again the non-trivial statistical structure of behavioral sequences. \section{Materials and Methods} The game \verb|Pardus| \cite{pardus} is sectioned into three independent `universes'. 
Here we focus on the `Artemis' universe, in which we recorded player actions over the first 1,238 consecutive days of the universe's existence. Communication between any two players can take place directly, by using a one-to-one, e-mail-like private messaging system, or indirectly, by meeting in built-in chat channels or online forums. For the player action sequences analyzed we focus on one-to-one interactions between players only, and discard indirect interactions such as e.g. participation in chats or forums. Players can express their sympathy (distrust) toward other players by establishing so-called friendship (enmity) links. These links are only seen by the player marking another as a friend (enemy) and the respective recipient of that link. For more details on the game, see \cite{SN,pardus}. From all sequences of all 34,055 Artemis players who performed or received an action at least once within 1,238 days, we removed players with a life history of less than 1000 actions, leading to the set of the most active 1,758 players who are considered throughout this work. All data used in this study is fully anonymized; the authors have the written consent to publish from the legal department of the Medical University of Vienna. \subsection{Mean square displacement} The mean square displacement $M^2$ of a world line $W$ is defined as $M^2(\tau) = \left\langle \left( \Delta W (\tau) - \langle \Delta W (\tau) \rangle \right)^2 \right\rangle, $ where $\Delta W(\tau) \equiv W(t +\tau)- W(t)$ and $\langle . \rangle$ is the average over all $t$. The asymptotic behavior of $M(\tau)$ yields information about the `persistence' of a world line. $M(\tau)\propto \tau^{\frac12}$ is the pure diffusion case, $M(\tau)\propto \tau^{\alpha}$ with scaling exponent $\alpha \neq \frac12$ indicates persistence for $\alpha>\frac12$, and anti-persistence for $\alpha < \frac12$. 
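The definition above can be sketched in a few lines (our own illustration; the log-log fit of $\alpha$ over the stated range of $\tau$ is omitted):

```python
# Sketch: mean square displacement M^2(tau) of a world line w, where
# dW = w[t+tau] - w[t] and the average runs over all start times t.

def msd(w, tau):
    """M^2(tau) = < (dW - <dW>)^2 > over all valid start times t."""
    d = [w[t + tau] - w[t] for t in range(len(w) - tau)]
    mean = sum(d) / len(d)
    return sum((x - mean) ** 2 for x in d) / len(d)
```

A strictly linear world line has zero displacement scatter at every lag, while an alternating up-down walk has $M^2(1)=1$; both limits follow directly from the definition.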
Persistence means that the probability of making an up (down) move at time $t+1$ is larger (smaller) than $p=1/2$, if the move at time $t$ was an up move. For calculating the exponents $\alpha$ we use a fit range of $\tau$ between 5 and 100. We checked from the mean square displacements of single world lines that this fit range is indeed reasonable. \subsection{Motifs} We define an $n$-string as a subsequence of $n$ contiguous actions. Across the entire ensemble, $8^n$ different $n$-strings can appear, each of them occurring with a different probability. The frequency, or observed probability, of each $n$-string can be compared to its expected probability of occurrence, which can be estimated on the basis of the observed probability of lower order strings, i.e. on the frequency of $(n-1)$-strings. For example, the expected probability of occurrence of a 2-string $(A_t, A_{t+1})$ is estimated as the product of the observed probability of the single actions $A_t$ and $A_{t+1}$, $p^{\rm exp}(A_{t},A_{t+1})=p^{\rm obs}(A_{t})p^{\rm obs}(A_{t+1})$. Similarly, the probability of a 3-string $(A_{t}, A_{t+1}, A_{t+2})$ to occur can be estimated as $p^{\rm exp}(A_{t}, A_{t+1}, A_{t+2})=p^{\rm obs}(A_{t}, A_{t+1})p^{\rm obs}(A_{t+2}|A_{t+1})$, where $p^{\rm obs}(A_{t+2}|A_{t+1})$ is the conditional probability to have action $A_{t+2}$ following action $A_{t+1}$. By definition of conditional probability, one has $p^{\rm obs}(A_{t+2}|A_{t+1})=\frac{p^{\rm obs}(A_{t+1},A_{t+2})}{p^{\rm obs}(A_{t+1})}$ (see \cite{sinatra_motifs} for details). An $n$-motif in the ensemble is then defined as an $n$-string whose observed probability of occurrence is significantly higher than its expected probability. \begin{acknowledgments} We thank Werner Bayer for compiling \verb|Pardus| data. Supported in part by the Austrian Science Fund, FWF P23378, and the European Cooperation in Science and Technology Action, COST MP0801. \end{acknowledgments}
\section{Introduction} Cold dark matter (CDM) simulations (both with and without the inclusion of baryonic physics) are a crucial tool and proving ground for understanding the physics of the universe on nonlinear scales. One of the most active aspects of research in this area concerns the form of the dark matter density profile. Key questions raised in recent years include: Is there a universal dark matter density profile that spans a wide range of halo masses? What is the form of this profile and how uniform is it from one halo to another? To what extent do baryons modify the dark matter distribution? Drawing on a suite of N-body simulations, \citet{NFW97} originally proposed that the dark matter density profiles in halos ranging in size from those hosting dwarf galaxies to those with galaxy clusters have a universal form. This 3-D density distribution, termed the NFW profile, follows $\rho_{DM}\propto r^{-1}$ within some scale radius, $r_{sc}$, and falls off as $\rho_{DM}\propto r^{-3}$ beyond. Subsequent simulations indicated that the inner density profile could be yet steeper -- $\rho_{DM}\propto r^{-1.5}$ \citep{M98,Ghigna00}. As computing power has increased and numerical techniques have improved, it has become unclear whether the inner dark matter distribution converges to a power law form rather than becoming progressively shallower in slope at smaller radii \citep{P03,Navarro04,Diemand04,Diemand05}. For comparisons with data, such simulations need to account for the presence of baryons. This is particularly the case in the cores of rich clusters. Although baryons represent only a small fraction of the overall cluster mass, they may be crucially important on scales comparable to the extent of typical brightest cluster galaxies. Much work is being done to understand the likely interactions between baryons and DM \citep{Gnedin04,Nagai05,Faltenbacher05}. These simulations will provide refined predictions of the relative distributions of baryons and DM. 
This paper is a further step in a series which aims to present an observational analog to the progress described above in the numerical simulations. At each stage it is desirable to confront numerical predictions with observations. Whereas some workers have made good progress in constraining the {\em total} density profile (e.g.~\citet{Broadhurst05b}), in order to address the relevance of the numerical simulations we consider it important to develop and test techniques capable of separating the distributions of dark and baryonic components (e.g.~\citet{Sand02,Zappacosta06,Biviano06,Mahdavi07}). This paper presents a refined version of the method first proposed by \citet{Sand02}, exploited more fully in \citet{Sand04} (hereafter S04). S04 sought to combine constraints from the velocity dispersion profile of a central brightest cluster galaxy (BCG) with a strong gravitational lensing analysis in six carefully selected galaxy clusters in order to separate the baryonic and dark matter distributions. The clusters were chosen to have simple, apparently `relaxed' gravitational potentials, broadly matching the `equilibrium' status of the simulated dark matter halos originally analyzed by \citet{NFW97} and subsequent simulators. For example, Abell 383 and MS2137-23 have almost circular BCGs ($b/a$=0.90 and 0.83 respectively), require a single cluster dark matter halo to fit the strong lensing constraints (in contrast to the more typical clusters that require a multi-modal dark matter morphology -- Smith et al. 2005), have previously published lens models with a relatively round dark matter halo ($b/a$=0.88 and 0.78 respectively -- Smith et al. 2001; Gavazzi 2005), and display no evidence for dynamical disturbance in the X-ray morphology of the clusters (Smith et al. 2005; Schmidt \& Allen 2006). 
The merit of the approach resides in combining two techniques that collectively probe scales from the inner $\sim$10 kpc (using the BCG kinematics) to the $\sim$100 kpc scales typical of strong lensing. Whereas three of the clusters contained tangential arcs, constraining the total enclosed mass within the Einstein radius, three contained both radial and tangential gravitational arcs, the former providing additional constraints on the derivative of the total mass profile. In their analysis, S04 found the gradient of the inner dark matter density distribution varied considerably from cluster to cluster, with a mean value substantially flatter than that predicted in the numerical simulations. S04 adopted a number of assumptions in their analysis whose effect on the derived mass profiles were discussed at the time. The most important of these included ignoring cluster substructure and adopting spherically-symmetric mass distributions centered on the BCG. The simplifying assumptions were considered sources of systematic uncertainties, of order 0.2 on the inner slope. Although the six clusters studied by S04 were carefully chosen to be smooth and round, several workers attributed the discrepancy between the final results and those of the simulations as likely to arise from these simplifying assumptions \citep{Bartelmann04,Dalal04b,Meneghetti05}. The goal of this paper is to refine the data analysis for two of the clusters (MS2137-23 and Abell 383) originally introduced by S04 using fully 2-D strong gravitational lensing models, thus avoiding any assumptions about substructure or spherical symmetry. The lensing models are based on an improved version of the LENSTOOL program (\citealt{Kneibphd,Kneib96}; see Appendix; http://www.oamp.fr/cosmology/lenstool/). A major development is the implementation, in the code, of a pseudo-elliptical parameterization for the NFW mass profile, i.e. 
a generalization of those seen in CDM simulations, viz: \begin{equation}\label{eqn:gnfw} \label{eq:gnfw} \rho_d(r)=\frac{\rho_{c} \delta_{c}}{(r/r_{sc})^{\beta}(1+(r/r_{sc}))^{3-\beta}} \end{equation} where the asymptotic DM inner slope is $\beta$. This formalism allows us to overcome an important limitation of previous work and takes into account the ellipticity of the DM halo and the presence of galaxy-scale subhalos. Furthermore the 2-D lensing model fully exploits the numerous multiply-imaged lensing constraints available for MS2137-23 and Abell 383. The combination of gravitational lensing and stellar dynamics is the most powerful way to separate baryons and dark matter in the inner regions of clusters. However, it is important to keep a few caveats in mind. Galaxy clusters are structurally heterogeneous objects that are possibly not well-represented by simple parameterized mass models. To gain a full picture of their mass distribution and the relative contribution of their major mass components will ultimately require a variety of measurements applied simultaneously across a range of radii. Steps in this direction are already being made with the combined use of strong and weak gravitational lensing (e.g.~\citet{Limousin06,Bradac06}), which may be able to benefit further from information provided by X-ray analyses (e.g.~\citet{Schmidt06}) and kinematic studies (e.g.~\citet{Lokas03}). A recent analysis has synthesized weak-lensing, X-ray and Sunyaev-Zeldovich observations in the cluster Abell 478 -- similar cross-disciplinary work will lend further insights into the mass distribution of clusters \citep{Mahdavi07}. Of equal importance are mass models with an appropriate amount of flexibility and sophistication. For instance, incorporating models that take into account the interaction of baryons and dark matter may shed light on the halo formation process and provide more accurate representations of dark matter structure. 
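For concreteness, the spherical limit of the profile in Eqn~\ref{eqn:gnfw} can be sketched as follows (our own illustration; the normalization $\rho_c\delta_c$ is folded into a single parameter and the function name is hypothetical):

```python
# Sketch: spherical generalized NFW (gNFW) density,
# rho(r) = rho_c*delta_c / ((r/r_sc)^beta * (1 + r/r_sc)^(3-beta)).
# beta = 1 recovers the NFW profile; beta controls the inner slope.

def gnfw_density(r, r_sc, beta, rho_c_delta_c=1.0):
    x = r / r_sc
    return rho_c_delta_c / (x ** beta * (1.0 + x) ** (3.0 - beta))
```

At $r=r_{sc}$ with $\beta=1$ the density equals $\rho_c\delta_c/4$, and at small radii a larger $\beta$ produces a steeper, denser cusp, as the text describes.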
Halo triaxiality, multiple structures along the line of sight and other geometric effects will also be important to characterize. At the moment, incorporating these complexities and securing good parameter estimates is computationally expensive even with sophisticated techniques such as the Markov-Chain Monte Carlo method. Numerical simulation results are often presented as the average profile found in the suite of calculations performed. Instead, the distribution of inner slopes would be a more useful quantity for comparison with individual cluster observations. Also, comparisons between simulations and observations would be simplified if {\it projected} density profiles of simulated halos along multiple lines of sight were to be made available. These issues should be resolvable as large samples of observed mass profiles are obtained. For the reasons above, comparing observational results with numerical simulations is nontrivial. The observational task should be regarded as one of developing mass modeling techniques of increasing sophistication that separate dark and baryonic matter, so as to provide the most stringent constraints to high resolution simulations which include baryons as they also increase in sophistication. The combination of stellar dynamics and strong lensing is the first crucial step in this process. Its diagnostic power will be further enhanced by including other major mass components (i.e.~the hot gas of the intracluster medium or the stellar contribution from galaxies) out to larger radii. A plan of the paper follows. In \S~\ref{sec:methods} we explain the methodology used to model the cluster density profile by combining strong lensing with the BCG velocity dispersion profile. In \S~\ref{sec:obsresults} we focus on translating observational measurements into strong lensing input parameters. This section includes the final strong lensing interpretation of MS2137-23 and Abell 383. 
In \S~\ref{sec:stronglens} we present the results of our combined lensing and dynamical analysis. In \S~\ref{sec:systematics} we discuss further systematic effects, limitations and degeneracies that our technique is susceptible to -- with an eye towards future refinements. Finally, in \S~\ref{sec:finale} we summarize and discuss our conclusions. Throughout this paper, we adopt $r$ as the radial coordinate in 3-D space and $R$ as the radial coordinate in 2-D projected space. When necessary, we assume $H_{0}$=65 km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{m}$=0.3, and $\Omega_{\Lambda}$=0.7. \section{Methods}\label{sec:methods} The intent of this work is to use the full 2D information provided by the deep Hubble Space Telescope (HST) imaging in two strong lensing clusters (MS2137-23 and Abell 383) in conjunction with the BCG stellar velocity dispersion profile in order to constrain the distribution of baryonic and dark matter. These two clusters were selected for further study from the larger sample presented by S04 because, of the three systems with both radial and tangential gravitational arcs, these two presented the shallowest DM inner slopes. \subsection{Lens Modeling}\label{sec:massmodel} We use the updated LENSTOOL ray-tracing code to construct models of the cluster mass distribution. Our implementation of the mass profiles is identical to that of \citet{Golse02}, with the exception that we have generalized their pseudo-elliptical parameterization to include ones with arbitrary inner logarithmic slopes. For the details, the reader is referred to both \citet{Golse02} and the Appendix. Here we briefly explain the lens modeling process and parameterization of the cluster mass model. Identifying mass model components and multiple-image candidates is an iterative process. 
Initially, multiple-image systems are spectroscopically confirmed, with counter images identified by visual inspection and with the aid of preliminary lens models, taking into account that gravitational lensing conserves surface brightness. Multiple images without spectroscopic confirmation were used in the case of Abell 383, since these additional constraints helped clarify the role that galaxy perturber \#1 (Table~\ref{tab:lensmodel}) played in the central regions of the cluster (see \S~\ref{sec:lensinterpa383}). If the location of a counter image is tentative, especially if there are several possibilities or an intervening cluster galaxy confuses the situation, the system is not included in deriving the mass model. \S~\ref{sec:lensinterpms2137} and \S~\ref{sec:lensinterpa383} present a detailed description of the final multiple image list adopted. Once the multiple images are determined, the cluster mass model is refined and perturber galaxy properties are fixed. In general, a lens mass model will have both cluster and galaxy scale mass components. The cluster scale mass component represents the DM associated with the cluster as a whole plus the hot gas in the intracluster medium. In the limit that the cluster DM halo is spherical (see Eqn~\ref{rho_ell}), its density profile has the form of Eqn~\ref{eqn:gnfw}. In the adopted parameterization, the DM halo also has a position angle ($\theta$) and an associated pseudo-ellipticity ($\epsilon$) (see Eqns~\ref{sigma_ell} \& \ref{rho_ell}). Galaxy scale mass components are necessary to account for perturbations to the cluster potential that seem plausible based on the HST imaging and are demanded by the observed multiple image positions. These components are described by pseudo-isothermal elliptical mass distributions (PIEMD; \citealt{Kassiola93}). 
Each PIEMD mass component is parametrized by its position ($\mathrel{x_{\rm c}}$, $\mathrel{y_{\rm c}}$), ellipticity ($e$), position angle ($\theta$), core radius ($\mathrel{r_{\rm core}}$), cut-off radius ($\mathrel{r_{\rm cut}}$) and central velocity dispersion ($\mathrel{\sigma_{\rm o}}$). The projected mass density, $\Sigma$, is given by: \begin{equation}\label{eq:piemd} \Sigma(x,y){=}\frac{\mathrel{\sigma_{\rm o}}^2}{2G}\,\frac{\mathrel{r_{\rm cut}}}{\mathrel{r_{\rm cut}}{-}\mathrel{r_{\rm core}}}\left(\frac{1}{(\mathrel{r_{\rm core}}^2{+}\rho^2)^{1/2}}{-}\frac{1}{(\mathrel{r_{\rm cut}}^2{+}\rho^2)^{1/2}}\right) \end{equation} \noindent where $\rho^2{=}[(x{-}\mathrel{x_{\rm c}})/(1{+}e)]^2{+}[(y{-}\mathrel{y_{\rm c}})/(1{-}e)]^2$ and the ellipticity of the lens is defined as $e{=}(a{-}b)/(a{+}b)$ \footnote{This quantity should not be confused with the quite different definition used for the pseudo-elliptical generalized NFW profile, see the Appendix.}. The total mass of the PIEMD is thus $(3/2)\pi \sigma_0^2 r_{\rm cut}/G$. In order to relate Equation~\ref{eq:piemd} to the observed surface brightness of the BCG in particular, we take $\Sigma=(M_*/L)I$, where $M_*/L$ is the stellar mass to light ratio and $I$ is the surface brightness, and find the following relation \begin{equation} \label{eq:mlpiemd} M_*/L = 1.50 \pi\mathrel{\sigma_{\rm o}}^{2}\mathrel{r_{\rm cut}} / (GL) \end{equation} \noindent where $L$ is the total luminosity of the BCG. The $M_*/L$ of the central BCG will be used as a free parameter in our mass modeling analysis. Further details and properties of the truncated PIEMD model can be found in \citet{Natarajan97} and \citet{Limousin05}. The relevant parameters of the perturber galaxies (position, ellipticity, core radius, cutoff radius and position angle) are assumed to be those provided via examination of the HST imaging (see \S~\ref{sec:galgeom} for details). 
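A direct sketch of Eqn~\ref{eq:piemd} (our own illustration, with $G$ set to 1 and a hypothetical function name) makes the profile easy to probe numerically; setting $\rho=0$ gives the closed-form central value $\Sigma(0,0)=\sigma_{\rm o}^2/(2G\,r_{\rm core})$, a convenient sanity check:

```python
# Sketch: PIEMD projected mass density Sigma(x, y), with the elliptical
# radial coordinate rho^2 = ((x-xc)/(1+e))^2 + ((y-yc)/(1-e))^2.

def piemd_sigma(x, y, sigma0, r_core, r_cut, e=0.0, xc=0.0, yc=0.0, G=1.0):
    rho2 = ((x - xc) / (1 + e)) ** 2 + ((y - yc) / (1 - e)) ** 2
    pref = sigma0 ** 2 / (2.0 * G) * r_cut / (r_cut - r_core)
    return pref * ((r_core ** 2 + rho2) ** -0.5 - (r_cut ** 2 + rho2) ** -0.5)
```

The profile is roughly isothermal ($\Sigma\propto\rho^{-1}$) between the core and cut-off radii, flattens inside $r_{\rm core}$, and is truncated beyond $r_{\rm cut}$, which is what makes the total mass finite.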
Only the central stellar velocity dispersion, $\mathrel{\sigma_{\rm o}}$, is determined by optimization. At a particular stage in the process, the predicted multiple image positions are compared with those observed, and a $\chi^{2}$ value calculated (see \S~\ref{sec:stats}). The iteration stops when a $\chi^{2}$ minimum is reached. The sole criterion for adding a perturber galaxy was whether or not it was necessary for the lens model to match the multiple image positions. If adding an additional perturber did not alter our interim minimum $\chi^{2}$, we did not include it in our subsequent analysis. Two special cases were encountered during the above procedure. First, one of the galaxy perturbers in Abell~383 required a larger cutoff radius ($r_{cut}$) than implied by the light distribution. As is described in \S~\ref{sec:lensinterpa383}, this concentration is necessary to account for several of the multiple image positions. For this mass concentration, not only do we determine the optimum $\sigma_{0}$ parameter, but also the cutoff radius ($r_{cut}$). The other special case concerns the BCG in both galaxy clusters. These are assumed to be coincident with the center of the cluster DM halo -- an assumption justified by the co-location of the BCG and central X-ray isophotes \citep{gps05,gavazzi03}. The BCG mass distribution, represented by a PIEMD model, comprises only the stellar mass. In this case the HST imaging is used to fix the BCG ellipticity and position angle, but since the measured stellar velocity dispersion is to be used as a constraint on the cluster mass profile, we leave the central velocity dispersion parameter (and hence the stellar $M_*/L$, Eqn~\ref{eq:mlpiemd}) free {\it in the lensing analysis}. As the Jaffe density profile is used for the BCG dynamical analysis (S04), the PIEMD core and cutoff radius which best match the Jaffe surface density are adopted. 
\S~\ref{sec:galgeom} discusses further the results of the surface brightness matching between the Jaffe and PIEMD models in the clusters. \subsection{Incorporating the Dynamical Constraints}\label{sec:dynconsts} Apart from the use of the pseudo-elliptical gNFW profile for the dark matter component, the observational data and analysis methods adopted here are identical to those used by S04. In that work, the observed velocity dispersion profile of the BCG was interpreted via the spherical Jeans equation (see Appendix of S04) to assist in the decomposition of the dark and baryonic mass components. This portion of the $\chi^{2}$ was calculated by comparing the velocity dispersion profile of the BCG expected for a given mass model (which depends on the mass enclosed at a given radius and the relative contribution of dark and luminous matter) with the observed velocity dispersion profile, taking into account the effects of seeing and the long-slit shape used for the observations. Ellipticity in the BCG and its dark matter halo can be ignored as its effect on the velocity dispersion profile will be small (e.g.~\citet{Kronawitter00}). The reader is referred to S04 for the observational details pertaining to the velocity dispersion profile, the surface brightness profile of the BCG and the subsequent dynamical modeling to constrain the cluster DM inner slope. \subsection{Statistical Methods}\label{sec:stats} A $\chi^{2}$-estimator is used to constrain the acceptable range of parameters compatible with the observational data. First we use the strong lens model to calculate the likelihood of the lensing constraints and then we combine it with a dynamical model to include the kinematic information in the likelihood. The first step is the strong lensing likelihood. 
Once the lensing interpretation is finalized and the perturbing galaxy parameters are fixed, the remaining free parameters are constrained by calculating a lensing $\chi^2$ over a hypercube which encompasses the full range of acceptable models, modulo a prior placed on the dark matter scale radius (see \S~\ref{sec:rscprior}). In Bayesian terms this corresponds to adopting a uniform prior. The lensing $\chi^2$ value is calculated in the source plane, identically to \citet{gps05}. For each multiply-imaged system, the source location for each noted image ($x_{model,i}$, $y_{model,i}$) is calculated using the lens equation. Since there should be only one source for each multiple image set (with $N$ images), the difference between the source positions should be minimized, hence: \begin{equation} \chi^2_{pos}=\sum_{i=1}^{N}\frac{(x_{model,i}-x_{model,i+1})^2+(y_{model,i}-y_{model,i+1})^2}{\sigma_{S}^2} \label{theory:chipos} \end{equation} \noindent $\sigma_{S}$ is calculated by scaling the positional error associated with a multiple image knot, $\sigma_{I}$, by the amplification of the source, $A$, so that $\sigma_{I}^{2}=A \sigma_{S}^{2}$. The following analysis will assume two different positional errors for the multiply imaged knots, using uncertainties of $\sigma_{I}=$0\farcs2 and 0\farcs5, referred to hereafter as the `fine' and `coarse' analyses, respectively. The case for each is justified below. The finer 0\farcs2 error bar corresponds to the uncertainty in the multiple image knot positions as defined by the resolution and pixel size of the HST/WFPC2 images. Excellent strong lensing fits are achieved with the finer 0\farcs2 error bar ($\chi^{2}/dof\sim 1$), so that technically there is no need for increased uncertainties. The uncertainty is dominated by the spatial extent of the multiple image knots employed and the ability to identify surface brightness peaks.
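The source-plane statistic of Eqn.~\ref{theory:chipos} can be sketched in a few lines of code. The snippet below is an illustrative sketch, not the {\sc lenstool} implementation: it interprets the sum as running over the $N-1$ consecutive pairs of back-traced source positions and, for simplicity, assumes a single amplification per image system; the function name and inputs are hypothetical.

```python
import numpy as np

def chi2_pos(src_xy, sigma_I, amplification):
    """Source-plane position chi^2 for one multiply imaged system.
    `src_xy` holds the model source positions obtained by ray-tracing
    each observed image back through the lens equation."""
    src_xy = np.asarray(src_xy, dtype=float)
    # Image-plane error scaled to the source plane:
    # sigma_I^2 = A * sigma_S^2  =>  sigma_S^2 = sigma_I^2 / A
    sigma_S2 = sigma_I ** 2 / amplification
    # Offsets between consecutive back-traced source positions;
    # for a single true source these should all vanish.
    d = np.diff(src_xy, axis=0)
    return float(np.sum(d[:, 0] ** 2 + d[:, 1] ** 2) / sigma_S2)
```

A perfectly focused system (all back-traced positions coincident) yields $\chi^2_{pos}=0$; scatter among the source positions is penalized in units of the demagnified positional error.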
In contrast to our ability to match the image positions down to the resolution of the HST/WFPC2 images, recent combined strong and weak lensing analyses of Abell 1689 have been unable to do so (e.g.~\citet{Broadhurst05,Halkola06,Limousin06}). Although Abell 1689 is a more complex cluster than those studied here, it has the most identified multiple images of any cluster to date, and so can probe the overall mass profile on smaller scales and with many more constraints. As discussed in \S~1, real galaxy clusters are complex systems that are likely not easily parameterized by simple mass models, and as the number and density of mass probes increase, the mass model must become correspondingly more refined and complete to match the data. In our case, where we have relatively few mass profile constraints (at least in comparison to Abell 1689), we adopt a coarser 0\farcs5 positional error, which allows us to account for complexities in the actual mass distribution of our clusters to which our small number of mass probes is insensitive. This, together with the fact that we carefully chose our perturbing galaxies such that a lensing $\chi^{2}/dof\sim1$ was found, should account for reasonable situations in which we have missed an interesting perturber galaxy. By adopting too small a multiple image position uncertainty, the region in parameter space explored may be overly confined (such that, for example, the observed BCG velocity dispersion profile cannot be reproduced). The strong lensing analysis is performed with 5 free parameters, analogous to those adopted in S04. These are the dark matter inner logarithmic slope ($\beta$), the pseudo-ellipticity of the potential ($\epsilon$), the amplitude of the DM halo ($\delta_{c}$), the dark matter scale radius ($r_{sc}$), and the mass-to-light ratio of the BCG ($M_*/L$).
We choose to place a uniform prior on the dark matter scale radius ($r_{sc}$), based on past mass profile analyses of these clusters and results from CDM simulations, in order to reduce computation time (see \S~\ref{sec:rscprior}). In practice, to evaluate the $\chi^2_{pos}$ at each point in the hypercube, the pseudo-ellipticity of the cluster dark matter halo is optimized while simply looping over the remaining free parameters. Once the strong lensing $\chi^{2}$ values are computed, attention is turned to the dynamical data. In contrast to the strong lensing model, the dynamical model is spherically symmetric and follows that presented by S04, with the $\chi^{2}$ value being: \begin{equation} \chi^{2}_{\sigma}=\sum_{i=1}^{N}\frac{(\sigma_{i,obs}-\sigma_{i,model})^{2}}{\Delta_{i}^{2}} \label{eqn:chisigma} \end{equation} \noindent where $\Delta_{i}$ is the uncertainty in the observed velocity dispersion measurements. The lensing and velocity dispersion $\chi^{2}$ values are summed, allowing for standard marginalization of nuisance parameters and the calculation of confidence regions. \subsection{Dark matter scale radius prior}\label{sec:rscprior} As mentioned in the previous sections, in order to limit computation time, a prior was placed on the dark matter scale radius $r_{sc}$. This is justified both by previous mass profile analyses of these clusters and by the results of CDM simulations. An array of CDM simulations has provided information not only on the inner dark matter density profile, but also on the expected value of the scale radius, $r_{sc}$, and its intrinsic scatter at the galaxy cluster scale (e.g.~\citealt{Bullocketal01,Tasitsiomi04,Dolag04}). For example, \citet{Bullocketal01} found that dark matter halos the size of small galaxy clusters have scale radii between 240 and 550 kpc (68\% CL). \citet{Tasitsiomi04}, using higher resolution simulations with fewer dark matter halos, found $r_{sc}$ of 450$\pm$300 kpc.
Finally, \citet{Dolag04} studied the DM concentrations of galaxy clusters in a $\Lambda$CDM cosmology and found a typical range of scale radii between $r_{sc}$ of 150 and 400 kpc. These results represent a selection of the extensive numerical work being done on the concentration of dark matter halos. Previous combined strong and weak lensing analyses of MS2137-23 have provided approximate values for the scale radius \citep{gavazzi03,Gavazzi05}. \citet{gavazzi03} found a best fitting scale radius of $\sim$130 kpc (and hints that the scale radius may be as low as $\sim$70 kpc from their weak lensing data) for their analysis of MS2137-23. A more recent analysis \citep{Gavazzi05} found a best fitting radius of $\sim$170 kpc. Similarly for Abell 383, a recent X-ray analysis found a best-fitting dark matter scale radius of $\sim130$ kpc (Zhang et al. 2007). Taking these observational studies into account, we chose a uniform scale radius prior between 100 and 200 kpc for MS2137-23 and Abell 383, both for simplicity and to bracket the extant observational results, which often have constraints at larger radii (and thus constrain the scale radius better) than the current work. It is worth noting that the extant observations of these two clusters indicate a scale radius which is on the low end of that predicted from CDM simulations. For a fixed virial mass, a smaller scale radius indicates a higher concentration, $c=r_{vir}/r_{sc}$. This could be due to the effect of baryonic cooling, which could increase the halo concentration (and perhaps the inner slope as well). It has been suggested that those halos with the highest concentration (again for a fixed mass) are the oldest and have the least substructure, providing further indirect evidence that we have chosen `relaxed' galaxy clusters to study \citep{Zentner05}. We briefly explore the consequences of changing our assumed scale radius prior range in \S~\ref{sec:highrad}.
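The overall statistical procedure of \S~\ref{sec:stats} with the uniform scale-radius prior amounts to summing the lensing and dynamical $\chi^2$ over a parameter grid whose $r_{sc}$ axis is restricted to 100-200 kpc. A minimal sketch follows, in which {\tt chi2\_lens} and {\tt sigma\_model} are placeholder stand-ins for the full {\sc lenstool} ray-tracing and Jeans-equation machinery (names, grid spacings, and the two-parameter loop are illustrative assumptions; the actual analysis loops over all five free parameters):

```python
import numpy as np

def chi2_sigma(sig_obs, sig_model, delta):
    """Dynamical chi^2 of Eqn. (chisigma): observed vs. model
    velocity dispersions weighted by the measurement errors."""
    sig_obs, sig_model, delta = (np.asarray(a, dtype=float)
                                 for a in (sig_obs, sig_model, delta))
    return float(np.sum((sig_obs - sig_model) ** 2 / delta ** 2))

def grid_search(chi2_lens, sigma_model, sig_obs, delta,
                betas=np.linspace(0.0, 1.5, 16),
                r_scs=np.linspace(100.0, 200.0, 11)):
    """Sum lensing + dynamical chi^2 on a grid; the uniform r_sc
    prior is implemented as the hard 100-200 kpc grid bounds."""
    best = (np.inf, None)
    for beta in betas:
        for r_sc in r_scs:
            total = chi2_lens(beta, r_sc) + chi2_sigma(
                sig_obs, sigma_model(beta, r_sc), delta)
            if total < best[0]:
                best = (total, (beta, r_sc))
    return best
```

Confidence regions then follow by marginalizing the resulting $\chi^2$ hypercube over the nuisance parameters, as described above.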
\section{Application to Data}\label{sec:obsresults} We now turn to the observational data for MS2137-23 and Abell 383 and describe our methods for analyzing these in the context of lensing input parameters. \subsection{BCG and Perturber Galaxy Parameters}\label{sec:galgeom} In order to fix the position angle and ellipticity of the perturber galaxies and BCG components, the IRAF task {\sc ellipse} is used to estimate the surface brightness profile, typically at the effective radius. The measured parameters are fixed in the lensing analysis. The galaxy position, core radius ($\mathrel{r_{\rm core}}$) and cutoff radius ($\mathrel{r_{\rm cut}}$) are each chosen to match those fitting the photometry (Table~\ref{tab:lensmodel}). For perturbing galaxies, this leaves only the PIEMD velocity dispersion parameter ($\mathrel{\sigma_{\rm o}}$), which must be adjusted to match the multiple imaging constraints, as explained in \S~\ref{sec:massmodel}. For the BCG, following S04, it is preferable to use the Jaffe stellar density profile for the dynamical analysis since this function provides an analytic solution to the spherical Jeans equation. However, the PIEMD model implemented in {\sc lenstool} offers numerous advantages for the lensing analysis. To use the most advantageous model in each application, a correspondence is established between the two by fitting the Jaffe surface brightness fit presented by S04 with a PIEMD model. An appropriate combination of the PIEMD $\mathrel{r_{\rm core}}$ and $\mathrel{r_{\rm cut}}$ model parameters matches the Jaffe profile found by S04 with no significant residuals (PSF smearing was also taken into account). Table~\ref{tab:lensfixed} lists the PIEMD parameters used for our lensing analysis, as well as the Jaffe profile parameters used by S04.
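The PIEMD-to-Jaffe matching described above can be illustrated with a toy calculation: numerically project a Jaffe density law along the line of sight and least-squares fit a circular PIEMD surface-density form to it. This is only a sketch under stated assumptions (arbitrary normalization, illustrative radii and bounds, no PSF convolution), not the exact S04 procedure:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def jaffe_sb(R, r_j):
    """Projected Jaffe profile, rho ~ r^-2 (r + r_j)^-2, via numerical
    line-of-sight integration (arbitrary normalization; only the
    profile shape matters for the matching)."""
    rho = lambda r: 1.0 / (r ** 2 * (r + r_j) ** 2)
    return np.array([2.0 * quad(lambda z: rho(np.hypot(Ri, z)),
                                0.0, np.inf)[0] for Ri in np.atleast_1d(R)])

def piemd_sb(R, amp, r_core, r_cut):
    """Circular PIEMD surface density (dual pseudo-isothermal form)."""
    return amp * (1.0 / np.sqrt(R ** 2 + r_core ** 2)
                  - 1.0 / np.sqrt(R ** 2 + r_cut ** 2))

# Least-squares match of the PIEMD shape parameters to the projected
# Jaffe profile, performed in log space (PSF smearing omitted here).
R = np.logspace(-1, 1.5, 40)          # toy radial sampling, kpc
sb = jaffe_sb(R, r_j=30.0)            # r_jaffe value chosen for illustration
popt, _ = curve_fit(lambda R, *p: np.log(piemd_sb(R, *p)),
                    R, np.log(sb), p0=[0.01, 0.1, 30.0],
                    bounds=([1e-8, 1e-3, 1.0], [1.0, 10.0, 300.0]))
```

Because both profiles fall as $R^{-1}$ at small radii and $R^{-3}$ at large radii, a suitable ($r_{\rm core}$, $r_{\rm cut}$) pair reproduces the Jaffe shape closely, mirroring the "no significant residuals" matching quoted above.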
\begin{table*} \begin{center} \caption{Fixed Parameters in Abell 383 and MS2137-23 Lens Model\label{tab:lensfixed}} \begin{tabular}{lcccccccc} \tableline\tableline Cluster&Parameters& $x_{c}$&$y_{c}$& b/a & $\theta$&$r_{core}$&$\sigma_{0}$& $r_{cut}/R_{e}$\\ &&(arcsec)&(arcsec)&&(deg)&(kpc)&(km~s$^{-1}$)&(kpc)\\ \tableline MS2137&Cluster-scale DM halo&0.0&0.0&Free&5.0 & -&-&-\\ &${\rm BCG_{PIEMD}}$&0.0&0.0&0.83&17.75&5$\times10^{-6}$&Free&22.23\\ &${\rm BCG_{Jaffe}}$&0.0&0.0&-&-&-&-&24.80\\ &Galaxy Perturber&16.2&-5.46&0.66&159.9&0.05&173.0&4.8\\ Abell 383&Cluster-scale DM halo&0.0&0.0&Free&104.3&-&-&-\\ &${\rm BCG_{PIEMD}}$&0.0&0.0&0.90&107.2&3$\times10^{-6}$&Free&25.96\\ &${\rm BCG_{Jaffe}}$&0.0&0.0&-&-&-&-&46.75\\ &Perturber 1&14.92&-16.78&0.804&-20.9&0.67&230.0&26.98\\ &Perturber 2&9.15&-1.92&0.708&10.3&0.51&140.0&10.79\\ &Perturber 3&0.17&-24.26&0.67&65.2&0.24&124.8&9.10\\ &Perturber 4&-4.10&-13.46&0.645&27.7&0.17&125.7&2.19\\ \tableline \end{tabular} \tablecomments{The position angle, $\theta$, is measured from North towards East. The DM halo is parameterized with the pseudo-gNFW profile. All other mass components are parameterized by a PIEMD model. Note that $R_{e}=0.76r_{jaffe}$ \citep{Jaffe83}.} \end{center} \end{table*} \subsection{MS2137-23 Lens Model}\label{sec:lensinterpms2137} The strong lensing properties of MS2137-23 have been studied extensively by many workers \citep{Mellier93,Miralda95,Hammer97,gavazzi03,Gavazzi05}. The most detailed model \citep{gavazzi03} used 26 multiply-imaged knots from two different background sources. The model adopted here is more conservative and based only on those multiple images confirmed via spectroscopy or suggested on the grounds of surface brightness or interim lens models. Despite having some multiple images in common with \citet{gavazzi03}, we have retained our own nomenclature. Following \citet{Sand02}, the tangential and radial arcs arise from separate sources, at $z=1.501$ and $z=1.502$, respectively.
The multiple image interpretation is detailed in Figure~\ref{fig:mulplot} and Table~\ref{tab:lensinterp}. There are two separate features (1 and 3) on the source giving rise to the tangential arc; these are multiply imaged four and three times, respectively. It has not been possible to confidently locate the fourth image of feature 3, since it is adjacent to the perturbing galaxy. Also, it is expected that a fifth, central image would be associated with the giant tangential arc. Although the position of this fifth central image has been tentatively reported \citep{gavazzi03}, we do not include it in our model because we were unable to clearly identify it due to BCG subtraction residuals. Two images of the source giving rise to the radial arc were also identified. The mirror image of feature 2a nearer the center of the BCG could not be recovered, most likely because of residuals arising from the subtraction of the BCG. \begin{figure*} \begin{center} \mbox{ \mbox{\epsfysize=6.2cm \epsfbox{f1a.eps}} \mbox{\epsfysize=6.2cm \epsfbox{f1b.eps}} } \mbox{ \mbox{\epsfysize=6.2cm \epsfbox{f1c.eps}} \mbox{\epsfysize=6.2cm \epsfbox{f1d.eps}} } \caption[Multiple image interpretation of the cluster MS2137.]{\label{fig:mulplot} Multiple image interpretation of the cluster MS2137-23. The exact positions used are shown in Table~\ref{tab:lensinterp}. Three sets of multiple images are identified, one with the radial arc system (2a \& 2b) and two with the tangential arc system (1abcd \& 3abc). The perturbing galaxy is the elliptical S0 next to the lensed feature 1b. } \end{center} \end{figure*} As in \citet{gavazzi03}, only one perturbing galaxy is included in the lens model (see Table~\ref{tab:lensfixed}). For this system, the best-fitting $\chi^{2}$ value occurred near $\mathrel{\sigma_{\rm o}}$=173 km~s$^{-1}$. In the initial modeling of MS2137-23, some experimentation was undertaken with different cluster DM ellipticities and position angles.
While some variation in ellipticity is permitted by the lensing interpretation, a robust offset of $\Delta \theta \sim$13 degrees was detected between the position angle of the BCG and that of the DM halo, in agreement with \citet{gavazzi03}. In the following, results are presented with the DM position angle fixed at $\theta$=5 degrees. This optimal position angle was determined during the initial lens modeling process by fixing all cluster mass parameters to values corresponding to a model with $\chi^{2}/dof\sim 1$ and letting the DM position angle vary until a $\chi^{2}$ minimum was reached. As a consistency check, we repeated our calculations with fixed DM position angles of 4.0 and 6.0 degrees. Varying the DM position angle had very little effect on our other parameter constraints, but resulted in a slightly larger overall $\chi^{2}$ value (lensing + velocity dispersion profile; $\Delta \chi^{2} <$1). For this reason, we only present results for a DM position angle of 5 degrees. \subsection{Abell 383 Lens Model}\label{sec:lensinterpa383} Detailed lens models for Abell 383 have been published in \citet{gps01,gps05}, which we largely adopt in this work. Multiple image sets 1 and 2 are based on the in-depth lensing interpretation of \citet{gps05}. The reader is referred to that work for a detailed description of these radial and tangential gravitational arcs. Multiple image sets 3, 4, 5 and 6 (for which there are no spectroscopic redshifts, but for which their distinctive morphology is reassuring) are included largely to constrain the properties of perturbing galaxies 1, 3, and 4 (see Fig~\ref{fig:mulplota383} and Table~\ref{tab:lensinterp}). Since these images have no spectroscopic confirmation, a redshift $z\sim$3 was assumed; the mass model is very insensitive to the exact choice.
\begin{figure*} \begin{center} \mbox{ \mbox{\epsfysize=6.2cm \epsfbox{f2a.eps}} \mbox{\epsfysize=6.2cm \epsfbox{f2b.eps}} } \mbox{ \mbox{\epsfysize=6.2cm \epsfbox{f2c.eps}} \mbox{\epsfysize=6.2cm \epsfbox{f2d.eps}} } \caption[Multiple image interpretation of the cluster Abell 383.]{\label{fig:mulplota383} Multiple image interpretation of the cluster Abell 383. The exact positions used are shown in Table~\ref{tab:lensinterp}. In all panels except the top left we have subtracted cluster galaxies in order to see the multiple image features more clearly. } \end{center} \end{figure*} The Abell 383 cluster mass model is more complex than that for MS2137-23, but only in the sense that more perturbing galaxies must be included in the mass model to match the image positions. The bright cluster elliptical southwest of the BCG (\#2 in \citet{gps05}; Perturber \#1 in this work) requires a DM halo more extended than the light, as mentioned in \S~\ref{sec:massmodel}. After some iteration, it was found that the parameters of this important perturber could be fixed to those listed in Table~\ref{tab:lensfixed}. Multiple image sets 3, 4 and 5 play a crucial role in constraining this perturber (see Table~\ref{tab:lensinterp}). Although other perturbing galaxies were added, none has a comparable effect on the lensing $\chi^2$. Table~\ref{tab:lensfixed} provides the full model parameter list. A slight ($\sim$3 degree) offset between the position angle of the BCG and the cluster DM halo was noted. We found the best-fitting DM position angle in the same way as for MS2137-23 (\S~\ref{sec:lensinterpms2137}). The position angle of the DM halo was kept fixed, but the ellipticity was left as a free parameter. As in MS2137-23, we also reran our analysis with DM position angles of 103.3 and 105.3 degrees, but only present the results for a DM PA of 104.3 degrees.
\begin{table*} \begin{center} \caption{Multiple Image Interpretation of MS2137 and Abell 383\label{tab:lensinterp}} \begin{tabular}{lcccc} \tableline\tableline Cluster&Multiple Image& $x_{c}$&$y_{c}$&Redshift\\ &ID&(arcsec)&(arcsec)\\ \tableline MS2137&1a&6.92&-13.40&1.501\\ &1b&12.40&-7.94&1.501\\ &1c&0.07&19.31&1.501\\ &1d&-11.57&-7.49&1.501\\ &2a&3.96&-5.51&1.502\\ &2b&-8.01&22.10&1.502\\ &3a&5.16&-14.68&1.501\\ &3b&0.11&18.91&1.501\\ &3c&-12.30&-6.74&1.501\\ Abell 383&1A&-1.74&2.56&1.0\\ &1B&-1.03&1.20&1.0\\ &1C&16.37&-4.03&1.0\\ &2A&7.00&-14.01&1.0\\ &2B&8.23&-13.20&1.0\\ &2C&14.11&-8.19&1.0\\ &3A&5.88&-22.02&3.0$^{\ast}$\\ &3B&14.69&-14.68&3.0$^{\ast}$\\ &3C&16.49&-14.39&3.0$^{\ast}$\\ &4A&8.35&-23.96&3.0$^{\ast}$\\ &4B&17.45&-17.28&3.0$^{\ast}$\\ &4C&17.92&-15.43&3.0$^{\ast}$\\ &5A&6.64&-21.75&3.0$^{\ast}$\\ &5B&16.98&-14.09&3.0$^{\ast}$\\ &6A&7.05&-21.75&3.0$^{\ast}$\\ &6B&17.27&-14.17&3.0$^{\ast}$\\ \tableline \end{tabular} \tablecomments{Multiple Image Interpretation. All image positions are with respect to the BCG center. Redshifts marked with an asterisk are not spectroscopically confirmed; $z=3$ is assumed.} \end{center} \end{table*} \section{Results}\label{sec:stronglens} We now analyze the refined 2D lens models of MS2137-23 and Abell 383 together with the velocity dispersion profiles presented in S04. We present our analysis with both a multiple image position uncertainty of 0\farcs2 and 0\farcs5, described as the fine and coarse fits in \S~\ref{sec:stats}. The results are summarized in Table~\ref{tab:lensmodel} and Figures~\ref{fig:ms2137rscprior} and \ref{fig:a383rscprior}. \subsection{MS2137-23} Figure~\ref{fig:ms2137rscprior} and the discussion below summarize the results for the fine and coarse positional cases. To quantify the goodness of fit, we must keep track of the number of degrees of freedom involved, i.e. the difference between the number of constraints and the number of free parameters.
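This degree-of-freedom bookkeeping can be made explicit. Assuming each system of $N$ multiple images contributes $2(N-1)$ source-plane constraints (two coordinates per image, minus the two unknown coordinates of the common source), the MS2137-23 counts follow directly; the function name is illustrative:

```python
def lensing_constraints(image_counts):
    """Each system of N multiple images of one source supplies
    2*(N-1) constraints: two coordinates per image, minus the
    two unknown source-plane coordinates."""
    return sum(2 * (n - 1) for n in image_counts)

# MS2137-23: image systems 1abcd, 2ab and 3abc
n_lens = lensing_constraints([4, 2, 3])   # 12 lensing constraints
n_dyn = 8                                 # velocity dispersion bins
n_free = 5                                # beta, eps, delta_c, r_sc, M*/L
dof = n_lens + n_dyn - n_free             # 15 degrees of freedom
```

The same bookkeeping applies to Abell 383, where the lack of spectroscopic redshifts for image sets 3-6 reduces the effective constraint count.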
The mass model has 5 free parameters, as detailed in \S~\ref{sec:lensinterpms2137}: the DM inner slope $\beta$, the DM pseudo-ellipticity $\epsilon$, the DM amplitude $\delta_{c}$, the BCG stellar $M_{*}/L$, and the dark matter scale radius $r_{sc}$ which is allowed to vary in the 100-200 kpc range. Considering Table~\ref{tab:lensinterp}, the multiple images provide 12 constraints, while the velocity dispersion data provides 8, giving a total of 20 data constraints. The resulting number of degrees of freedom is thus 15. \begin{figure*} \begin{center} \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f3a.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f3b.eps}} } \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f3c.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f3d.eps}} } \caption{The combined lensing+dynamics results for the cluster MS2137-23. The top row summarizes the results for the 0\farcs2 lensing position uncertainty scenario while the bottom row encapsulates the 0\farcs5 scenario. Top Left--Lensing+dynamics likelihood contours (68\%,95\%, and 99\%) in the $M/L-\beta$ plane after marginalizing over the other free parameters with the 0\farcs2 lensing multiple image uncertainty. Top Right-- Best fitting velocity dispersion profile from the combined lensing+dynamics analysis with the 0\farcs2 lensing multiple image uncertainty. No models could be found that fit both the lensing and observed velocity dispersion constraints. Bottom Left -- Lensing+dynamics likelihood contours (68\%,95\%, and 99\%) with the 0\farcs5 lensing multiple image uncertainty in the $M/L-\beta$ plane after marginalizing or optimizing over the other free parameters. Bottom Right -- Best fitting velocity dispersion profile from the combined lensing+dynamics analysis with the 0\farcs5 lensing multiple image uncertainty. 
While the best-fitting model velocity dispersion is a better fit to the data than in the 0\farcs2 lensing scenario, it still cannot reproduce the observed decline in the velocity dispersion profile in our highest radial bins -- suggesting a problem with our current mass model parameterization. \label{fig:ms2137rscprior}} \end{center} \end{figure*} \subsubsection{The fine positional accuracy lensing case} The two top panels of Figure~\ref{fig:ms2137rscprior} and the appropriate line in Table~\ref{tab:lensmodel} encapsulate the results of the fine positional analysis. DM inner slopes between $\beta = 0.65 - 1.0$ lie within the 68\% confidence limit (after marginalizing over all other free parameters), although the best fitting DM scale radius sits at the edge of the allowed prior ($r_{sc,best}$=200 kpc). A scale radius of 200 kpc is higher than that seen in previous lensing analyses of MS2137-23 that have used a similar mass parameterization to our own (e.g.~\citet{gavazzi03,Gavazzi05}), so there is no case to alter the prior. The parameter constraints are not particularly tight because the total $\chi^2\sim$54 is larger than expected given the number of degrees of freedom. Such a value may indicate that the form of the mass profile used in the fit is inappropriate. Indeed, the model velocity dispersion profile is a poor match to that observed (Figure~\ref{fig:ms2137rscprior}). In fact, if the BCG velocity dispersion results are ignored, acceptable lens models ($\chi^{2}/dof \lesssim 1$) can be recovered with a variety of inner DM slopes, scale radii and BCG stellar $M/L$, although these parameters have correlated values. We postpone discussion of the possible reasons for this mismatch until later. \subsubsection{The coarse positional accuracy lensing case} The two bottom panels of Figure~\ref{fig:ms2137rscprior}, along with Table~\ref{tab:lensmodel}, summarize our results for the coarse positional analysis.
DM inner slopes between $\beta =$0.4-0.75 lie within our 68\% CL, and we again find DM scale radii at the high end of our prior range, as expected. The shift in the BCG M/L vs. DM inner slope contours relative to the fine positional case indicates that the increased parameter space allowed in the lensing models has led to a slightly improved velocity dispersion profile (bottom right panel of Figure~\ref{fig:ms2137rscprior}). The overall $\chi^{2}\simeq$31 is improved, although the associated probability for 15 degrees of freedom is less than 1\%, assuming measurements are governed by Gaussian statistics. Thus the model remains a marginal fit to the data. \subsubsection{Comparison with Gavazzi 2005} We now briefly compare our results with those of \citet{Gavazzi05}. Gavazzi's analysis used a similar strong lensing model to our own, including what we consider to be somewhat less secure multiple images. However, he extended the analysis to larger scales by including weak lensing data and incorporated the BCG velocity dispersion profile presented by S04. Gavazzi adopted a strict NFW profile for the cluster DM halo and a Hernquist profile for the stellar component of the BCG. Despite these differences, Gavazzi's conclusions are very similar to those of the present paper. Models with NFW-like DM haloes (regardless of whether the inner slopes were varied) were uniformly poor fits to the observational data. In particular, the falling velocity dispersion profile observed at $R \gtrsim 5$~kpc cannot be reproduced, despite experimenting with the effect of anisotropic orbits in the stellar distribution. A major conclusion of Gavazzi's study is that halo triaxiality, an effect not typically included, may play an important role in the central regions of galaxy clusters. We will return to this topic in \S~\ref{sec:ac}.
\begin{table*} \begin{center} \caption{Acceptable Parameter Range\label{tab:lensmodel}} \begin{tabular}{lcccccccc} \tableline\tableline Prior Setup&Cluster& Inner DM Slope& $\epsilon$& $\delta_{c}$ & $r_{sc}$ & $M_*/L_{V}$ \\ && $\beta$ & & & (kpc) &\\ \tableline 100 - 200 kpc $r_{sc}$ Prior&\\ $\sigma_{lens}=$0\farcs2& MS2137 & $0.95^{+0.05}_{-0.30}$&$0.08^{+0.01}_{-0.01}$&$29420^{+98310}_{-1760}$&$200_{-42}$&$1.58^{+0.52}_{-0.635}$\\ & Abell 383 & $0.55^{+0.2}_{-0.05}$&$0.08^{+0.01}_{-0.02}$&$140000^{+8500}_{-60600}$&$100^{+28}$& $2.4^{+0.42}_{-0.42}$\\ $\sigma_{lens}=$0\farcs5& MS2137 & $0.6^{+0.15}_{-0.2}$&$0.06^{+0.01}_{-0.01}$&$44600^{+35500}_{-7175}$&$200_{-31}$&$2.45^{+0.75}_{-0.65}$\\ & Abell 383 & $0.45^{+0.2}_{-0.25}$&$0.06^{+0.03}_{-0.01}$&$156000^{+38500}_{-67150}$&$100^{+21}$& $2.34^{+1.02}_{-0.54}$\\ \tableline \end{tabular} \tablecomments{Best-fitting parameters and/or confidence limits for the different prior scenarios presented in this paper. } \end{center} \end{table*} \subsection{Abell 383} Our results for Abell 383 are shown in Figure~\ref{fig:a383rscprior} and Table~\ref{tab:lensmodel} for both the fine and coarse positional uncertainty cases. We discuss each separately below. Calculating the number of degrees of freedom in a similar way to that done for MS2137-23, we again have 5 free parameters in our mass model. Considering Table~\ref{tab:lensinterp}, multiple images provide 16 constraints, taking into account that those related to multiple image sets 3,4,5 \& 6 do not have known redshift information. The velocity dispersion data provide 3 additional constraints. Thus, the resulting number of degrees of freedom is 14. \subsubsection{The fine positional accuracy lensing case} The top two panels of Figure~\ref{fig:a383rscprior} and the appropriate line in Table~\ref{tab:lensmodel} summarize the results in this case.
DM inner slopes between $\beta = 0.5 - 0.7$ lie within the 68\% confidence limit of our analysis (after marginalizing over all other free parameters). The best fitting scale radius again sits at the edge of the allowed $r_{sc}$ prior range ($r_{sc}$=100 kpc). An X-ray analysis of Abell 383, which was able to probe to larger radius than the current analysis, indicates that the DM scale radius is well above 100 kpc (Zhang et al. 2007). For these reasons, and those discussed earlier, there is no case for adjusting the DM scale radius prior. The total $\chi^{2}$=40.4 is high given the 14 degrees of freedom in the analysis. \subsubsection{The coarse positional accuracy lensing case} The two bottom panels of Figure~\ref{fig:a383rscprior}, along with Table~\ref{tab:lensmodel}, summarize the results for the coarse positional accuracy case. DM inner slopes between $\beta =$0.2 - 0.65 lie within our 68\% CL, along with a best fitting DM scale radius of 100 kpc. Our parameter constraints encompass the values found in the fine accuracy case, with no shift in parameter space (unlike the case for MS2137-23). This suggests that although we should expect a lower $\chi^{2}$ due to the increased uncertainties allowed, no significant improvement to the best-fitting velocity dispersion profile should be expected. As we can see in the bottom right panel of Figure~\ref{fig:a383rscprior}, the best fitting velocity dispersion profile is very similar to that obtained in the fine case. The total $\chi^{2}$=22 is acceptable given 14 degrees of freedom. \begin{figure*} \begin{center} \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f4a.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f4b.eps}} } \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f4c.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f4d.eps}} } \caption{The combined lensing+dynamics results for the cluster Abell 383. The top row summarizes the results for the 0\farcs2 lensing position uncertainty scenario while the bottom row encapsulates the 0\farcs5 scenario.
Top Left--Lensing+dynamics likelihood contours (68\%,95\%, and 99\%) in the $M/L-\beta$ plane with the 0\farcs2 lensing multiple image uncertainty after marginalizing over the other free parameters. Top Right-- Best fitting velocity dispersion profile from the combined lensing+dynamics analysis with the 0\farcs2 lensing multiple image uncertainty. Bottom Left -- Lensing+dynamics contours (68\%,95\%, and 99\%) in the $M/L-\beta$ plane with the 0\farcs5 lensing multiple image uncertainty after marginalization over the other free parameters. Bottom Right -- Best fitting velocity dispersion profile from the combined lensing+dynamics analysis with the 0\farcs5 lensing multiple image uncertainty. The 0\farcs5 lensing multiple image scenario provides a better overall fit to the observations, although we are limited by the relatively poor quality of the observed Abell 383 velocity dispersion profile. \label{fig:a383rscprior}} \end{center} \end{figure*} \section{Discussion}\label{sec:systematics} In the previous section we have presented the results of our analysis, which showed that a mass model comprising a stellar component for the BCG following a Jaffe profile together with a generalized NFW DM cluster halo is able to adequately reproduce the observations for Abell 383 (albeit {\it only} for the coarse lensing positional accuracy scenario) but is unable to simultaneously reproduce the observed multiple image configuration and BCG velocity dispersion profile for MS2137-23. In the case of Abell 383, the inner DM profile is flatter than $\beta$=1, supporting the earlier work of S04. This indicates that at least some galaxy clusters have inner DM slopes which are shallower than those seen in numerical simulations -- but only if the mass parameterization used in the current work is reflective of reality. 
Further work in this interesting cluster using other observational probes will further refine the mass model, and determine if the generalized NFW DM form is a good fit to the cluster profile. In this section we discuss systematic uncertainties in our method and possible refinements that could be made to reconcile the mass model with the observations for MS2137-23. We hope that many of these suggestions will become important as cluster mass models improve and thus will present fruitful avenues of research. \subsection{Systematic Errors} We focus first on systematic errors associated particularly with the troublesome stellar velocity dispersion profile for MS2137-23. Errors could conceivably arise from (i) significant non-Gaussianity in the absorption lines (which are fit by Gaussians), (ii) uncertain measurement of the instrumental resolution used to calibrate the velocity dispersion scale, and (iii) template mismatch. Non-Gaussianity introduces an error that we consider too small to significantly alter the goodness of fit \citep{Gavazzi05}. The instrumental resolution of ESI (the Keck II instrument used to measure the velocity dispersion profile; \citet{Sheinis02}) is $\sim$30 km~s$^{-1}$; this is much smaller than the measured dispersion. Even if the instrumental resolution were in error by a factor of two, the systematic shift in $\sigma$ would only be 3 km~s$^{-1}$ (using Eq~3 in Treu et al.\ 1999). This would affect all measurements and would not reverse the trend with radius. Concerning template mismatch, S04 estimated a possible systematic shift of up to 15-20 km~s$^{-1}$. This could play a role especially as the signal to noise diminishes at large radii, where the discrepancy with the model profile is greatest. To test this hypothesis, we added 20 km~s$^{-1}$ in quadrature to only those velocity dispersion data points in MS2137-23 at $R > 4$ kpc and recalculated the best-fitting $\chi^{2}$ values.
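The quadrature test just described is straightforward to reproduce schematically. The sketch below inflates the velocity dispersion uncertainties beyond 4 kpc by a 20 km s$^{-1}$ systematic term; the function name and call signature are illustrative, not part of the actual pipeline:

```python
import numpy as np

def inflate_errors(delta, radii, sys_err=20.0, r_min=4.0):
    """Add a systematic term (km/s) in quadrature to the velocity
    dispersion uncertainties at radii beyond r_min (kpc), as in the
    template-mismatch test described in the text."""
    delta = np.asarray(delta, dtype=float)
    radii = np.asarray(radii, dtype=float)
    out = delta.copy()
    sel = radii > r_min
    # Quadrature sum: Delta' = sqrt(Delta^2 + sys_err^2)
    out[sel] = np.sqrt(delta[sel] ** 2 + sys_err ** 2)
    return out
```

Re-evaluating Eqn.~\ref{eqn:chisigma} with the inflated errors then yields the modest $\chi^2$ reduction quoted below.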
$\chi^{2}$ is reduced from 31 to 28.8, a modest reduction which fails to explain the poor fit. Although selectively increasing the error bars on those data points most discrepant with the model is somewhat contrived, our result does highlight the need for high S/N velocity dispersion measures out to large radii. A high quality velocity dispersion profile has been measured locally for Abell 2199 to $\sim$20 kpc \citep{Kelsonetal02}. Interestingly, these high S/N measures display similar trends to those for MS2137-23 in the overlap regime, i.e. a slightly decreasing profile at $R\lesssim 10$kpc. The dip witnessed in MS2137-23 is thus not a unique feature, although with deeper measurements we might expect to see a rise at larger radii as a result of the shallow DM profile. A final potential limitation in the dynamical analysis is the assumption of orbital isotropy. Both S04 and \citet{Gavazzi05} explored the consequences of mild orbital anisotropy, concluding a possible offset of $\Delta \beta \sim0.15$ might result. Even including orbital anisotropy into his analysis, \citet{Gavazzi05} was unable to fit the observed velocity dispersion profile. Since we determine our lensing $\chi^{2}$ values in the source plane, we checked to make sure that no extra images were seen after remapping our best-fit lensing + velocity dispersion models back to the image plane. No unexpected images were found, although several images that were explicitly not used as constraints were found, such as the mirror image of radial arc image 2a in MS2137 and the complex of multiple images associated with 3abc, 5ab, and 6ab in Abell 383 (see Figures~\ref{fig:mulplot} and \ref{fig:mulplota383}). As discussed in \S~\ref{sec:lensinterpms2137} and \ref{sec:lensinterpa383}, some of these multiple images were not used as constraints because we could not confidently identify their position either due to galaxy subtraction residuals or blending with other possible multiple image systems. 
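The error-inflation test described earlier in this subsection can be sketched as follows; all numbers in this sketch are illustrative placeholders, not the measured MS2137-23 profile or our model predictions.

```python
import numpy as np

# Sketch: add 20 km/s in quadrature to the velocity dispersion errors
# at R > 4 kpc only, then recompute the kinematic chi^2. The data and
# model values below are hypothetical, for illustration only.
R           = np.array([ 1.0,  2.5,  4.5,  7.0, 10.0])   # kpc
sigma_obs   = np.array([310., 305., 295., 280., 270.])   # km/s
sigma_err   = np.array([ 10.,  10.,  12.,  15.,  20.])   # km/s
sigma_model = np.array([308., 306., 300., 292., 290.])   # km/s

sys_term = np.where(R > 4.0, 20.0, 0.0)     # template-mismatch estimate
err_infl = np.hypot(sigma_err, sys_term)    # quadrature sum

chi2      = np.sum(((sigma_obs - sigma_model) / sigma_err) ** 2)
chi2_infl = np.sum(((sigma_obs - sigma_model) / err_infl) ** 2)
print(chi2, chi2_infl)   # inflating the errors can only lower chi^2
```

Because only the affected points are reweighted, the reduction in $\chi^2$ is bounded by how discrepant those points were to begin with, which is why the test above yields only a modest improvement.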
\begin{figure*} \begin{center} \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f5a.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f5b.eps}} } \mbox{ \mbox{\epsfysize=4.5cm \epsfbox{f5c.eps}} \mbox{\epsfysize=4.5cm \epsfbox{f5d.eps}} } \caption{Confidence contours (68\%, 95\%, and 99\%) when we fix the dark matter scale radius at values a factor of two beyond our observationally motivated prior. Top Row -- Contours when we fix the dark matter scale radius to $r_{sc}$=50 and 400 kpc in MS2137. Although the $r_{sc}$=400 kpc scenario provides a relatively good fit to the data ($\chi^{2}\sim$26), this value for the scale radius is much larger than that observed from weak lensing data. The $r_{sc}$=50 kpc scenario is a significantly worse fit to the data, with $\chi^{2}\sim$39. Note that the DM inner slope is $\beta < 1$ in both scenarios. Bottom Row -- Contours when we fix the dark matter scale radius to $r_{sc}$=50 and 400 kpc in A383. The large discrepancy in the inner slope values obtained emphasizes the need for a mass probe at larger radii. The best-fitting model for either fixed scale radius is significantly worse than the best-fitting $r_{sc}$=100 kpc result ($\chi^{2}\sim$26.5 and 31.3 for $r_{sc}$=50 and 400 kpc respectively). \label{fig:diffrsc}} \end{center} \end{figure*} We finally comment on the uncertainties assigned to the multiple image systems for our lens models. We have presented two sets of results in this work, with assigned image positional accuracies of $\sigma_{I}$=0\farcs2 and 0\farcs5. We find that a variety of lens models are compatible with the $\sigma_{I}$=0\farcs2 case, and only when the velocity dispersion data are included in the analysis does the model fail to reproduce the data. Certainly if we were to further increase the positional errors, at some point a good velocity dispersion fit could conceivably be obtained, but we will refrain from doing so in the present work.
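The dependence of the lensing term on the assumed positional accuracy can be made explicit with a schematic source-plane $\chi^2$; this is a sketch with invented residuals, not the actual {\sc LENSTOOL} implementation.

```python
import numpy as np

# Schematic source-plane lensing chi^2: residuals between remapped
# source positions, weighted by the assumed image positional
# uncertainty sigma_I. Residual values are illustrative only.
def chi2_lens(residuals_arcsec, sigma_i):
    r = np.asarray(residuals_arcsec)
    return np.sum((r / sigma_i) ** 2)

res = [0.15, 0.30, 0.10, 0.25]   # hypothetical residuals (arcsec)
c02 = chi2_lens(res, 0.2)        # sigma_I = 0."2
c05 = chi2_lens(res, 0.5)        # sigma_I = 0."5
# loosening sigma_I from 0."2 to 0."5 rescales chi^2 by (0.2/0.5)^2
print(c02, c05)
```

This $1/\sigma_I^2$ rescaling is why the coarser positional accuracy admits a much wider family of acceptable lens models.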
Increasing the positional uncertainties is only justified if there is evidence that there are significant missing components in the mass models. Further observations that can probe the mass distribution on fine scales to larger radii and higher quality models which can account for phenomena such as adiabatic contraction in the inner regions of galaxy clusters and triaxiality represent the best way to obtain a more precise picture of the cluster mass distribution. \subsection{Improving the Mass Model} We now turn our attention to possible inadequacies in the mass model. It is important to stress that the two diagnostics (lensing and dynamics) adopted in this study probe different scales. The lensing data tightly constrain the mass profile at and outside the radial arc ($\sim$20 kpc), while the velocity dispersion constrains the mass profile inside $R \lesssim 10$ kpc. Since multiple images are numerous and their positions can be more precisely measured than velocity dispersion\footnote{The error on the astrometry with respect to the relevant scale, the Einstein radius $\theta_{\rm E}$, is much smaller than the relative error on velocity dispersion, i.e. $\delta \theta / \theta_{\rm E} \ll \delta \sigma/\sigma$.}, they carry more weight in the $\chi^2$ statistic than the kinematic points, producing a best-fitting overall model (which is lensing dominated) that is a relatively poor fit to the kinematic data. To improve the model, one must admit that either one of the two components of the modeling is incorrect, or that the functional form of the mass profile chosen to extrapolate the lensing information to the scales relevant for dynamics is insufficient. In this section we discuss several areas where the mass model presented in this paper could be improved. \subsubsection{The Contribution of the Brightest Cluster Galaxy} We might query the assumption of a Jaffe density profile for the BCG.
This seems an unlikely avenue for improvement given that the Jaffe profile fits the observed BCG surface brightness profile remarkably well (see Figure 2 of S04). Moreover, \citet{Gavazzi05} utilized a Hernquist mass profile in his analysis of MS2137-23, which also matches the observations, and was likewise unable to reproduce the observed S04 velocity dispersion profile. We have additionally checked our assumptions by altering the PIEMD fit to the BCG surface brightness data so that it is matched not to the derived Jaffe profile fit to the BCG but directly to the HST surface brightness profile. With this setup, we found a $r_{cut}$ value of 23.70 kpc for MS2137 and 28.65 kpc for Abell 383 (compare this with the numbers in Table~\ref{tab:lensfixed}). Redoing our analysis for the best-fitting $r_{sc}$ scenario only, our constraints on $\beta$ for both Abell 383 and MS2137 did not change by more than 0.05, and so it is not likely that our method for constraining the BCG mass contribution is the root cause of our inability to fit the data to a BCG + gNFW cluster DM halo mass model. Conceivably the BCG may not be coincident with the center of the cluster DM halo, as has been assumed throughout this work. It is often the case that small subarcsecond offsets between BCGs and cluster DM halos are necessary to fit lensing constraints (e.g.~\citet{gps05}). There is strong evidence that the BCG is nearly coincident with the general cluster DM halo {\it in projection} from the strong lensing work presented here and by others \citep{gavazzi03,Gavazzi05}. However, an offset could be responsible for the flat to falling observed velocity dispersion profile if the BCG were actually in a less dark matter dominated portion of the cluster. Another possibility is that there are multiple massive structures along the line of sight, which would be probed by the strong lensing analysis, but not with the velocity dispersion profile of the BCG.
A comprehensive redshift survey of MS2137-23 could provide further information on structures along the line of sight. \subsubsection{The Advantage of a Mass Probe at Larger Radii}\label{sec:highrad} With our presented data set, we have seen that it is difficult to constrain the DM scale radius, $r_{sc}$, because both of our mass probes are only effective within the central $\sim$100 kpc of the clusters -- within the typical DM scale radius observed and seen in CDM simulations. For this reason, the inferred DM scale radius for both Abell 383 and MS2137-23 lies at the boundary of our assumed prior range. Future work will benefit from weak lensing data, along with galaxy kinematics and X-ray data of the hot ICM, which can each probe out to large clustercentric radii. Although not the focus of the current work, pinning down the correct DM scale radius will be crucial for constraining other DM mass parameters. For instance, there is a well-known degeneracy between $r_{sc}$ and the inner slope $\beta$ (e.g.~\citet{gavazzi03,Gavazzi05}). To briefly explore this, we have rerun our analysis (for the coarse lensing positional accuracy case) for both clusters with $r_{sc}$ fixed to 50 and 400 kpc -- factors of two beyond our chosen $r_{sc}$ prior. We show our confidence contours in Figure~\ref{fig:diffrsc}, which are noteworthy. For example, in the case of MS2137-23, if we fix $r_{sc}$=50 kpc, then the best-fitting $\beta = 0.05$. However, if $r_{sc}$=400 kpc then $\beta=0.7$, more in accordance with simulations. Interestingly, the $r_{sc}$=400 kpc scenario returns a better overall $\chi^{2}\sim26$ than any model with $r_{sc}$=100-200 kpc -- even though an $r_{sc}$ of 400 kpc is clearly ruled out by extant weak lensing observations. None of the other $r_{sc}$=50, 400 kpc scenarios produced $\chi^{2}$ values that were comparable to those seen with $r_{sc}$=100-200 kpc.
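The $r_{sc}$--$\beta$ degeneracy can be illustrated with a short numerical sketch. The normalizations below are hypothetical and are not our fitted models; the point is only that two gNFW haloes with very different inner slopes and scale radii enclose comparable mass over the radii probed here, and separate strongly only at the much larger radii accessible to weak lensing or X-ray data.

```python
import numpy as np
from scipy.integrate import quad

# gNFW density: rho(r) = rho0 / [ (r/r_sc)^beta (1 + r/r_sc)^(3-beta) ]
def m3d(r, rho0, r_sc, beta):
    """3D mass enclosed within radius r (kpc) for a gNFW halo."""
    rho = lambda rr: rho0 / ((rr/r_sc)**beta * (1 + rr/r_sc)**(3 - beta))
    return quad(lambda rr: 4*np.pi*rr**2 * rho(rr), 0, r)[0]

# Halo A: (beta, r_sc) = (0.05, 50 kpc); halo B: (0.7, 400 kpc),
# with B normalized to match A's enclosed mass at 40 kpc.
rho0_b = m3d(40, 1.0, 50, 0.05) / m3d(40, 1.0, 400, 0.7)
for r in (20, 40, 80, 800):
    ratio = m3d(r, rho0_b, 400, 0.7) / m3d(r, 1.0, 50, 0.05)
    print(f"r = {r:4d} kpc   M_B/M_A = {ratio:.2f}")
```

Inside $\sim$100 kpc the two enclosed-mass curves track each other to within tens of percent, which is why data confined to the cluster core cannot break the degeneracy.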
Any further knowledge of the DM scale radius would aid greatly in constraining $\beta$ and determining the overall goodness of fit of the generalized NFW DM profile to the cluster data. X-ray studies assuming hydrostatic equilibrium \citep{Allen01vir,Schmidt06} and a combined strong and weak lensing analysis \citep{Gavazzi05} have presented data on MS2137-23 to radii much larger than those probed in this study. To check that the mass model derived from data within $\sim$ 100 kpc does not lead to results at variance with published data at larger radii, we have taken the \citet{Gavazzi05} results and compared their derived mass at large radii with an extrapolation of our mass models. Examining Figure~3 from \citet{Gavazzi05}, we estimate from his weak lensing analysis a 2D projected enclosed mass of between $1.6 \times 10^{14}$ and $1.1 \times 10^{15} M_{\odot}$ at $\sim 1.08$ Mpc using the cosmology adopted in this paper. Correspondingly, if we take all of the $\Delta \chi^{2}<1.0$ models from our analysis method (the coarse positional accuracy case was used) and calculate the expected 2D projected mass enclosed at 1.08 Mpc, we find values between $6.9 \times 10^{14}$ and $8.4 \times 10^{14} M_{\odot}$, well within the expected range. Note that no attempt was made to extrapolate the mass {\it profiles} derived in our analysis to larger radii than the data in this paper allow, although we are acquiring weak lensing data for a large sample of galaxy clusters to perform a more extensive analysis. The purpose of this consistency check is only to ensure that the masses we derive at such large radii are not too discrepant with existing analyses. The consistency check is satisfied and lends some credence to the models. \subsubsection{Dark matter--baryon interactions and Triaxiality}\label{sec:ac} The central regions of DM halos can be strongly affected by the gravitational interaction with baryons during halo formation.
If stars form and condense much earlier than the DM, it is expected that the baryons will adiabatically compress the DM, resulting in a halo that is {\it steeper} than the original \citep{Blumenthal86,Gnedin04}. Alternatively, dark matter heating through dynamical friction with cluster galaxies can counteract adiabatic contraction, leading to a shallower DM profile \citep{Elzant04,Nipoti03}. The present work takes into account neither of the above scenarios, and if any baryon-DM interaction greatly changes the cluster density profile, our assumed parameterized gNFW profile may be inappropriate. Recently, \citet{Zappacosta06} have used X-ray mass measurements in the cluster Abell 2589 to conclude that processes in galaxy cluster formation serve to counteract adiabatic contraction in the cluster environment. Certainly, more observational work is needed to understand the interplay between baryons and DM in clusters, and extended velocity dispersion profiles of BCGs in conjunction with other mass tracers at larger radii could serve as the best testing ground for the interplay of dark and luminous matter. Not only is there likely significant interplay between baryons and DM in the central regions of clusters, but real galaxy clusters are certainly triaxial and, if ignored, this may lead to biased parameter estimations and discrepancies when combining mass measurement techniques that are a combination of two- and three-dimensional. Several recent studies have considered the effects of halo triaxiality on observations. Using an N-body hydrodynamical simulation of a disk galaxy and performing a ``long slit'' rotation curve observation, \citet{Hayashi04} found that orientation and triaxial effects can mistake a cuspy DM profile for one that has a constant density core.
At the galaxy cluster scale, \citet{Clowe04} performed mock weak lensing observations of simulated galaxy clusters and found that the NFW concentration parameter recovered was correlated with the 3D galaxy cluster orientation. In order to investigate the recent rash of galaxy clusters with observed high concentration parameters in seeming contradiction to the CDM paradigm \citep{Kneib03,Gavazzi05,Broadhurst05b}, \citet{Oguri05} used strong and weak lensing data in Abell 1689 along with a set of models that included halo triaxiality and projection effects. Again, it was seen that halo shape causes a bias in mass (and mass profile) determination, although it should be kept in mind that measurements of concentration are extremely difficult (e.g. Halkola et al.\ 2006), and the recent study of \citet{Limousin06} has seemed to clear up the concentration parameter controversy for at least Abell 1689. In terms of the current work, \citet{Gavazzi05} has pointed out that the inability of his lensing model to fit the MS2137-23 BCG velocity dispersion profile may be due to halo triaxiality or projected mass along the line of sight (which would increase the mass measured in the lensing analysis but would not be seen in the stellar velocity dispersion). \citet{Gavazzi05} showed that an idealized prolate halo with an axis ratio of $\sim$ 0.4 could explain the velocity dispersion profile in MS2137-23. Halo triaxiality could also explain the high concentration previously seen in this cluster. Again, the gap between simulations and observations may be bridged with respect to triaxiality if further steps were taken to compare the two directly. One step in this direction would be the publication of detailed density profiles for the simulations (in 3-D or along numerous projected sight-lines). 
The most recent DM-only simulations have indicated that the standard NFW profile representation of a DM profile (and its \citet{M99} counterpart with an inner slope $\beta \sim 1.5$) can be significantly improved by slightly altering the model to a profile with a slope that becomes systematically shallower at small radii (e.g.~\citet{Navarro04}, but see \citet{Diemand05}). While we have adopted the traditional generalized NFW profile in this study, future work with parameterized models should move towards the latest fitting functions along with an implementation of adiabatic contraction, as has already been attempted by \citet{Zappacosta06}. Note, however, that both \citet{Navarro04} and \citet{Diemand04} have stated that all fitting functions have their weaknesses when describing complicated N-body simulations and that, when possible, simulations and observations should be compared directly. \section{Summary \& Future Work}\label{sec:finale} We have performed a joint gravitational lensing and dynamical analysis in the inner regions of the galaxy clusters Abell 383 and MS2137-23 in order to separate luminous baryonic matter from dark matter in the cluster core. To achieve this, we implemented a new 2D pseudo-elliptical generalized NFW mass model in an updated version of the {\sc LENSTOOL} software package. This refinement is a natural progression from our earlier attempts to measure the dark matter density profile \citep{Sand02, Sand04}. For the study, we adopted an observationally motivated scale radius prior of $r_{sc}=100-200$ kpc. With strong lensing alone, we find that a range of mass parameters and DM inner slopes are compatible with the multiple image data, including those with $\beta > 1$ as seen in CDM simulations. However, once the BCG kinematic constraints are included for both systems, the acceptable parameter ranges shrink significantly.
We can summarize the results for the two clusters as follows: \begin{enumerate} \item For the cluster Abell 383 we have found satisfactory BCG + generalized NFW cluster DM models only for our coarse lensing positional accuracy scenario. Assuming that this is reflective of the underlying cluster DM distribution, the dark matter inner slope is found to be $\beta=0.45^{+0.2}_{-0.25}$, supporting our earlier contention that some clusters have inner DM profiles flatter than those predicted in numerical simulations. \item For MS2137-23 our model is unable to reproduce the observed BCG velocity dispersion profile, and the range of accepted inner slopes therefore depends sensitively on the adopted uncertainties in the mass model. This may suggest an unknown systematic uncertainty in our analysis or that we have adopted an inappropriate mass model. We explore the former in considerable detail, extending the quite extensive discussion of \citet{Sand04}. However, no obvious cause can be found. If, as we suspect, the cause lies with our adopted mass model, it points to the need for further work concerning the distribution of dark matter in the central regions of galaxy clusters. \end{enumerate} Future modeling efforts should include the effects of triaxiality and the influence of baryons on dark matter. It is also critical to obtain high S/N extended velocity dispersion measurements of more BCGs out to larger radii so that, in conjunction with other mass measurement techniques, the interplay of baryons and dark matter in cluster cores can be studied with a real sample. Some other future directions are straightforward.
For example, the deep multiband ACS imaging now being performed on galaxy clusters \citep{Broadhurst05} allows literally hundreds of multiple images to be found, significantly increasing the number of constraints and allowing for nonparametric mass modeling \citep{Diego04} -- a crucial addition in case the currently used parameterized models do not correspond to reality. We are eager to find ways to more directly compare simulations with observations so that clearer conclusions can be drawn over whether or not simulations and observations are compatible. This may involve measuring other properties of the dark matter halo rather than a sole emphasis on the inner slope, such as the concentration parameter, $c$. Simulated observations of numerical simulations, such as those presented recently by \citet{Meneghetti05}, offer a clear way forward in understanding the systematics involved in observational techniques and the kinds of observations required to test the current paradigm for structure formation. \acknowledgements We thank Raphael Gavazzi for numerous stimulating conversations. DJS acknowledges support provided by NASA through Chandra Postdoctoral Fellowship grant number PF5-60041. TT acknowledges support from the Sloan Foundation through a Sloan Research Fellowship. GPS acknowledges financial support from a Royal Society University Research Fellowship. Finally, the authors wish to recognize and acknowledge the cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} \label{sec:introduction} It has been proposed that the low energy dynamics of D-branes can be captured by the Dirac-Born-Infeld (DBI) effective action. For unstable D-branes, one or more tachyon fields are included in the DBI action \cite{Garousi:2000tr,Sen:2003bc,Sen:2004nf}. In type II superstring theory, BPS D-branes are supersymmetric and stable, while non-BPS D-branes are non-supersymmetric and unstable. For a single non-BPS D-brane, the DBI effective action contains a real tachyon field. The validity of this tachyon field action has been justified in string field theory \cite{Kutasov:2003er,Niarchos:2004rw}, where the tachyon DBI action is derived from a general effective action setup by adopting a time-dependent tachyon profile. This implies that there should be some relation between tachyon profiles and the DBI effective action. The equations of motion from the DBI action of a tachyon are usually complicated in the spacetime-dependent case and are typically studied by numerical simulations, even in the absence of other world-volume fields. However, complete numerical spacetime-dependent solutions cannot be obtained due to singularities near kinks and the vacuum \cite{Felder:2002sv,Felder:2004xu,Barnaby:2004dz,Cline:2003vc, Hindmarsh:2007dz}. In our previous work \cite{Li:2008ah}, we have shown that the equation of motion of a general scalar, a tachyon or an ordinary scalar, from the DBI action with well selected potentials has exact classical solutions after appropriate field redefinitions. For a tachyon, the solution is a tachyon profile of a single momentum mode. However, the tachyon profile solution in the spacetime-dependent case is complex and hence is invalid in the DBI theory of a real tachyon field. We can obtain real tachyon profile solutions only in the purely time-dependent case or the purely space-dependent case.
A special form of the potential that leads to the tachyon profile solution is the one derived from string field theory in \cite{Kutasov:2003er}. For massless and massive scalars, the spacetime-dependent solutions can be real and so are valid. In this paper, we apply the results obtained in \cite{Li:2008ah} to study the dynamics of a general scalar field on a stable or unstable D-brane in the presence of world-volume massless fields. We adopt the potentials and field redefinition relations that can lead to exact solutions to study the multi-field system in the DBI effective theory, in which we also find exact solutions for all fields. It is shown that the solution of a specific field in the DBI action of multi-fields is exactly the one obtained in the case with all other fields vanishing, except that there are some momentum coupling constraints between fields. From these solutions and momentum coupling conditions, we can study the dynamics of stable and unstable D-branes in the presence of these massless fields. The world-volume massless fields we consider include the massless scalars in the transverse directions to D-branes and the $U(1)$ gauge fields. As we know, the transverse massless scalars describe the fluctuation of the D-brane in the transverse directions. Hence, from the exact solutions obtained in the case of coupling to the transverse massless scalars we can learn the dynamics of D-branes that move or fluctuate in the bulk in different modes. From the solutions obtained in the case of coupling to gauge fields, we can learn the dynamics of D-branes that carry constant electromagnetic fields in arbitrary directions or propagating electromagnetic waves. This redefinition method provides a useful way to simplify the discussion and to extract exact information in the DBI effective theory, in particular in some complicated circumstances. This paper is organised as follows. 
In Sec.\ 2, we present the exact solution of a general scalar in the DBI effective theory in the absence of other world-volume fields obtained in \cite{Li:2008ah}. In Sec.\ 3, we apply the results in Sec.\ 2 to derive exact solutions in the DBI theory of the general scalar coupled to transverse massless scalars. In Sec.\ 4, we derive exact solutions in the case that the general scalar couples to gauge fields. Finally, we summarise the results in Sec.\ 5. \section{Exact solutions in the DBI effective theory} \label{sec:excsol} Generalising the DBI action of the tachyon field \cite{Garousi:2000tr,Bergshoeff:2000dq,Sen:2002an}, we can write the DBI action of a general scalar field $X$ on a D$p$-brane \begin{equation}\label{e:DBIA} S=-\int d^{p+1}x\mathcal{C}(X)\sqrt{1+\eta^{\mu\nu} \partial_\mu{X}\partial_\nu{X}}, \end{equation} where $\mathcal{C}(X)$ is the potential. In the following discussion, the field $X$ can be a tachyon $T$, a massless scalar $Y$ or a massive scalar $\Phi$. The equation of motion from this action is: \begin{equation}\label{e:Teom} (1+\partial{X}\cdot\partial{X})\left(\Box X-\frac {\mathcal{C}'}{\mathcal{C}}\right)=\partial^\mu{X} \partial^\nu{X}\partial_\mu\partial_\nu{X}, \end{equation} where $\Box=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$ and $\mathcal{C}'=\partial\mathcal{C}(X)/\partial X$. It is clear that any solutions satisfying $1+\partial X\cdot\partial X=0$ are always solutions to this equation, as we note that the right hand side of this equation can be recast into $(1/2)\partial{X}\cdot\partial(1+\partial X\cdot\partial X)$. For the tachyon field $X=T$, it is proved that the equation $1+\partial T\cdot\partial T=0$ can be achieved only near the vacuum at $T\rightarrow\pm\infty$, where it serves as an alternative to the equation of motion (\ref{e:Teom}) \cite{Cline:2003vc,Barnaby:2004dz,Hindmarsh:2007dz}.
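The identity used above, which shows that any configuration with $1+\partial X\cdot\partial X=0$ automatically solves the equation of motion, can be verified symbolically; the following is a sketch in $1{+}1$ dimensions with signature $(-,+)$, using sympy.

```python
import sympy as sp

# Check: the RHS of the EOM, d^mu X d^nu X d_mu d_nu X, equals
# (1/2) d^mu X d_mu (1 + dX.dX), so 1 + dX.dX = 0 implies the EOM.
t, x = sp.symbols('t x')
X = sp.Function('X')(t, x)
coords = [t, x]
eta = sp.diag(-1, 1)           # inverse Minkowski metric, used to raise indices

def d(f, m):                   # lower-index partial derivative
    return sp.diff(f, coords[m])

dX2 = sum(eta[m, m]*d(X, m)**2 for m in range(2))          # dX.dX
rhs = sum(eta[m, m]*eta[n, n]*d(X, m)*d(X, n)*d(d(X, m), n)
          for m in range(2) for n in range(2))
half = sp.Rational(1, 2)*sum(eta[m, m]*d(X, m)*sp.diff(1 + dX2, coords[m])
                             for m in range(2))
print(sp.expand(rhs - half))   # the two expressions agree identically
```

The check uses no property of $X$ beyond smoothness, so it holds for an arbitrary field configuration.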
Following the previous work \cite{Li:2008ah}, we rewrite the DBI action (\ref{e:DBIA}) by adopting the following potentials $\mathcal{C}(X)$ and field redefinition relations respectively for a tachyon, massless scalar and massive scalar: (i) For a tachyon field $X=T$, we adopt the potential \begin{equation}\label{e:hyppot} \mathcal{C}(T)=\frac{\mathcal{C}_m}{\cosh(\beta T)} \end{equation} and the field redefinition relation \begin{equation}\label{e:tacmapping} T=\frac{1}{\beta}\sinh^{-1}(\beta \phi), \end{equation} where $\mathcal{C}_m$ is the maximum value of the potential, located at $T=0$, and $\beta$ is a constant. The constant $\mathcal{C}_m$ is related to the tension of the non-BPS D-brane. (ii) For a massless scalar $X=Y$, the potential $\mathcal{C}(Y)=\mathcal{C}_m$ is constant and we set $Y=\phi$. (iii) For a massive scalar field $X=\Phi$, we adopt \begin{equation}\label{e:cospot} \mathcal{C}(\Phi)=\frac{\mathcal{C}_m}{\cos(\gamma\Phi)}, \textrm{ }\textrm{ }\textrm{ } -\frac{\pi}{2}\leq\gamma\Phi\leq\frac{\pi}{2}, \end{equation} and \begin{equation}\label{e:scamapping} \Phi=\frac{1}{\gamma}\sin^{-1}(\gamma\phi), \textrm{ }\textrm{ }\textrm{ } -1\leq\gamma\phi\leq 1, \end{equation} where $\mathcal{C}_m$ is the minimum value of the potential, located at $\Phi=0$, and $\gamma$ is a constant. The constant $\mathcal{C}_m$ here is related to the tension of the stable D-brane. With the above potentials and field redefinition relations, the action (\ref{e:DBIA}) can be changed to \begin{equation}\label{e:phidbi} \mathcal{L}=-V(\phi)\sqrt{U(\phi)+\partial_{\mu}{\phi} \partial^{\mu}{\phi}}, \end{equation} where $U(\phi)=\mathcal{C}_m/V(\phi)=1+\alpha\phi^2$. For a tachyon $\phi$, $\alpha=\beta^2$; for a massless scalar $\phi$, $\alpha=0$; for a massive scalar $\phi$, $\alpha=-\gamma^2$.
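The reduction to the form (\ref{e:phidbi}) in the tachyon case can be checked symbolically; the sketch below treats the Lorentz scalar $\partial\phi\cdot\partial\phi$ as a single symbol $s$ (assumed positive only to simplify square roots).

```python
import sympy as sp

# Verify that the potential (e:hyppot) with the redefinition
# (e:tacmapping) turns C(T) sqrt(1 + dT.dT) into V sqrt(U + dphi.dphi),
# with U = 1 + beta^2 phi^2 and V = C_m / U.
phi, beta, s, Cm = sp.symbols('phi beta s C_m', positive=True)
T = sp.asinh(beta*phi)/beta           # field redefinition
dT2 = sp.diff(T, phi)**2 * s          # dT.dT = (dT/dphi)^2 dphi.dphi
L_T = (Cm/sp.cosh(beta*T)) * sp.sqrt(1 + dT2)
U = 1 + beta**2*phi**2
L_phi = (Cm/U) * sp.sqrt(U + s)
# both densities are positive, so equality of their squares suffices
print(sp.simplify(L_T**2 - L_phi**2))
```

The same manipulation with $\cos(\gamma\Phi)$ and $\sin^{-1}$ reproduces the massive-scalar case with $\alpha=-\gamma^2$.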
The equation of motion from this Lagrangian is: \begin{equation}\label{e:phieq} U(\Box{\phi}+\alpha\phi)+[(\partial{\phi}\cdot \partial{\phi})\Box{\phi}-\partial_\mu\phi\partial_ \nu\phi\partial^\mu\partial^\nu{\phi}]=0. \end{equation} It is clear that this equation has the following spacetime-dependent solution \begin{eqnarray}\label{e:phisol1} \phi(x^\mu)=\phi_+e^{ik\cdot x}+\phi_-e^{-ik\cdot x}, & k^2=k^\mu k_\mu=\alpha, \end{eqnarray} where $\phi_\pm$ are constants and $k_\mu=(k_0,\vec{k})$ are the momenta of the field $\phi$. Note that this solution is only of a single momentum mode, while the solution to the equation $\Box{\phi}+\alpha\phi=0$ can be any combination of the solutions (\ref{e:phisol1}) of all possible momentum modes $\vec{k}$. Since we are considering the DBI theory (\ref{e:DBIA}) of a real scalar field, the solution $\phi$ should be real, which requires that the coefficients satisfy $\phi_+^*=\phi_-$ and $\phi_-^*=\phi_+$. Actually, the real solution (\ref{e:phisol1}) can be more explicitly expressed as \begin{equation}\label{e:phisol2} \phi=\phi_s\sin(k\cdot x)+\phi_c\cos(k\cdot x), \textrm{ }\textrm{ }\textrm{ } k^2=\alpha, \end{equation} where $\phi_{s,c}$ are constants. For the exact solution $\phi$ (\ref{e:phisol1}) or (\ref{e:phisol2}), one can prove that $c=\partial\phi\cdot\partial\phi+\alpha\phi^2$ is a constant: $c=4\alpha|\phi_+|^2=4\alpha|\phi_-|^2$ for (\ref{e:phisol1}) and $c=\alpha(\phi_s^2+\phi_c^2)$ for (\ref{e:phisol2}). With the constant $c$, we can derive the energy-momentum tensor from the exact solution (\ref{e:phisol1}) or (\ref{e:phisol2}), as will be discussed below. It can be proved that the energy-momentum tensor of the field $\phi$ is equal to that of $X$, i.e., $T_{\mu\nu}(\phi)=T_{\mu\nu}(X)$. The latter is given by \begin{equation}\label{e:enmo1} T_{\mu\nu}(X)=\mathcal{C}(X)\left[\frac{\partial_\mu X \partial_\nu X}{\sqrt{1+\partial X\cdot\partial X}}- \eta_{\mu\nu}\sqrt{1+\partial X\cdot\partial X}\right].
\end{equation} In what follows, we will discuss in detail the above exact solutions and their energy-momentum tensors respectively for a tachyon, massless scalar and massive scalar. \subsection{Tachyon} For a tachyon $\phi$, there are two situations for the solution (\ref{e:phisol1}) or (\ref{e:phisol2}) in terms of the condition $k^2=-k_0^2+\vec{k}^2=\beta^2$. (a) ${\vec{k}}^2\geq\beta^2$: All components of $k_\mu$ are real. The solution $\phi$ in this case describes an oscillating scalar field travelling faster than light with $\vec{k}^2=k_0^2+\beta^2$. (b) ${\vec{k}}^2<\beta^2$: The space-like components $\vec{k}$ are still real but the time-like component $k_0$ must be imaginary, i.e., $k_0=i\omega$ with $\omega$ real. Then the solution (\ref{e:phisol1}) can be expressed in the form: \begin{eqnarray}\label{e:ctachprof} \phi(x^\mu)=\phi_+e^{\omega t}e^{i\vec{k}\cdot\vec{x}}+ \phi_-e^{-\omega t}e^{-i\vec{k}\cdot\vec{x}}, & \omega^2+{\vec{k}}^2=\beta^2, \end{eqnarray} which describes an unstable process, like the decay of unstable D-branes. The field $\phi$ with this solution contains two modes: one shrinks to $0$ and the other grows to $\infty$ with time. However, the solution (\ref{e:ctachprof}) is complex. So it is not a valid solution in the DBI effective theory of a real tachyon field. But there exist real solutions in the purely time-dependent or space-dependent case. In the homogeneous case, Eq.\ (\ref{e:phieq}) for a tachyon $\phi$ reduces to $\ddot{\phi}=\beta^2\phi$ whose solution is a time-dependent tachyon profile: $\phi=\phi_+e^{\beta t}+\phi_-e^{-\beta t}$. In the static and $p=1$ case, Eq.\ (\ref{e:phieq}) for a tachyon reduces to $\phi''=-\beta^2\phi$ whose solution is a space-dependent tachyon profile: $\phi=\phi_s\sin(\beta x)+\phi_c\cos(\beta x)$. 
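As a consistency check, the single-mode ansatz (\ref{e:phisol2}) and the constancy of $c$ can be verified symbolically; the following sketch works in $1{+}1$ dimensions with signature $(-,+)$, imposing the on-shell condition $k^2=-k_0^2+k_1^2=\alpha$.

```python
import sympy as sp

# Verify that phi = phi_s sin(k.x) + phi_c cos(k.x) with k^2 = alpha
# solves the equation of motion (e:phieq), and that
# c = dphi.dphi + alpha*phi^2 is spacetime-independent.
t, x, k0, k1, ps, pc = sp.symbols('t x k_0 k_1 phi_s phi_c', real=True)
coords = [t, x]
eta = sp.diag(-1, 1)                  # inverse Minkowski metric
alpha = -k0**2 + k1**2                # on-shell condition k^2 = alpha
phi = ps*sp.sin(k0*t + k1*x) + pc*sp.cos(k0*t + k1*x)

def d(f, m):                          # lower-index partial derivative
    return sp.diff(f, coords[m])

box = sum(eta[m, m]*d(d(phi, m), m) for m in range(2))
dphi2 = sum(eta[m, m]*d(phi, m)**2 for m in range(2))
cross = sum(eta[m, m]*eta[n, n]*d(phi, m)*d(phi, n)*d(d(phi, m), n)
            for m in range(2) for n in range(2))
U = 1 + alpha*phi**2
eom = U*(box + alpha*phi) + (dphi2*box - cross)
print(sp.simplify(eom))                       # the EOM is satisfied
c = sp.simplify(dphi2 + alpha*phi**2)
print(sp.simplify(c - alpha*(ps**2 + pc**2)))  # c = alpha(phi_s^2+phi_c^2)
```

The same computation with $\alpha=\beta^2$ and imaginary $k_0$ reproduces the tachyon-profile case discussed above.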
Since the solution (\ref{e:ctachprof}) is valid in the purely space-dependent and purely time-dependent cases, we still give the energy-momentum tensor from this solution \begin{equation}\label{e:enmo2} T_{\mu\nu}=\frac{\mathcal{C}_m}{\sqrt{1+c}}\left[ \frac{1+c}{\cosh^2(\beta T)}\left(\frac{k_\mu k_\nu}{\beta^2} -\eta_{\mu\nu}\right)-\frac{k_\mu k_\nu}{\beta^2}\right]. \end{equation} It is easy to prove that the energy-momentum tensor obeys the conservation law: $\partial^\mu T_{\mu\nu}=0$. In the homogeneous case with $k_0^2=-\omega^2=-\beta^2$ and $\vec{k}^2=0$, the components of the energy-momentum tensor are: \begin{equation}\label{e:tachvacEnPr} T_{00}=\frac{\mathcal{C}_m}{\sqrt{1+c}}, \textrm{ }\textrm{ }\textrm{ } T_{ii}=0. \end{equation} This is a pressureless state with finite energy density. It is shown by numerical simulations \cite{Felder:2002sv,Felder:2004xu} that the spacetime-dependent tachyon field develops into a nearly homogeneous field towards the end of condensation. So the energy-momentum tensor (\ref{e:tachvacEnPr}) should exist as the tachyon approaches the vacuum. This is consistent with the previous analysis in conformal field theory \cite{Sen:2002in,Sen:2002an} and in the DBI effective theory \cite{Sen:2002an,Felder:2002sv,Felder:2004xu,Hindmarsh:2009hs}, where it was shown that the tachyon field approaching the vacuum evolved into a pressureless state with finite energy density. \subsection{Massless scalar} For a massless scalar $\phi$, $\alpha=0$. The equation of motion (\ref{e:phieq}) reduces to \begin{equation}\label{e:masslessscaeom} \Box{\phi}+[(\partial{\phi}\cdot\partial{\phi}) \Box{\phi}-\partial_\mu\phi\partial_\nu\phi\partial^\mu \partial^\nu{\phi}]=0. \end{equation} In this case, the field $Y=\phi$ is an appropriate massless scalar describing the fluctuation of a D-brane in one of its transverse directions. There are two kinds of solutions to the above massless scalar field equation.
(a) One possible solution is that of an oscillating mode (\ref{e:phisol1}), which can be written more explicitly as \begin{equation}\label{e:maslesscasol1} Y=\phi=\phi_+e^{ik\cdot x}+\phi_-e^{-ik\cdot x}, \textrm{ }\textrm{ }\textrm{ } k_0^2=\vec{k}^2. \end{equation} This solution describes the oscillation of a D-brane in one of its transverse directions. From this solution of a single oscillating mode, one can derive the energy-momentum tensor \begin{equation} T_{\mu\nu}=\mathcal{C}_m(\partial_\mu Y\partial_\nu Y-\eta_{\mu\nu}), \end{equation} where the first term comes from the oscillation of $Y$, while the second term comes from the negative contribution of the D-brane tension. It is easy to prove that this energy-momentum tensor is conserved: $\partial^\mu T_{\mu\nu}=0$. A feature indicated by the energy-momentum tensor is that the pressure of the D-brane of an oscillating mode is generically negative, whereas in the ordinary field theory of massless scalars the pressure is generically positive. (b) Besides the above solution, there is a special solution to the equation (\ref{e:maslessscaeom}) \begin{equation}\label{e:maslesscasol2} \partial_\mu Y=\partial_\mu\phi=a_\mu, \end{equation} with $a^2\geq -1$ due to the positivity of the kinetic part $1+\partial Y\cdot\partial Y$ in the DBI action. This solution describes a D-brane that is inclined at an angle determined by the parameters $a_i$ and moves with the velocity $a_0$. If the inclination angle is zero, i.e., $a_i=0$, the solution (\ref{e:maslesscasol2}) is simply $\dot{Y}=a_0$, which is the same as the zero mode solution of a string. For the solution (\ref{e:maslesscasol2}) describing the uniform motion of a D-brane, the energy-momentum tensor is \begin{equation} T_{\mu\nu}=\mathcal{C}_m\left[\frac{a_\mu a_\nu}{ \sqrt{1+a^2}}-\eta_{\mu\nu}\sqrt{1+a^2}\right], \end{equation} which is constant and hence obeys the conservation law.
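The claim that a single oscillating mode solves the full nonlinear equation (\ref{e:masslessscaeom}) can be sketched numerically in $1+1$ dimensions with the mostly-plus metric $\eta=\mathrm{diag}(-1,+1)$ used here (so $k_0=k_1$): the nonlinear bracket cancels identically for a plane wave. The wave number and sample point below are arbitrary test values.

```python
import math

k = 0.9             # wave number with k_0 = k_1 = k (arbitrary)
h = 1e-4            # finite-difference step

def phi(t, x):      # real oscillating mode
    return math.cos(k * (x - t))

def d(f, t, x, i):  # first derivative along t (i=0) or x (i=1)
    if i == 0:
        return (f(t + h, x) - f(t - h, x)) / (2 * h)
    return (f(t, x + h) - f(t, x - h)) / (2 * h)

def dd(f, t, x, i, j):  # second derivatives via central differences
    if i == j == 0:
        return (f(t + h, x) - 2 * f(t, x) + f(t - h, x)) / h**2
    if i == j == 1:
        return (f(t, x + h) - 2 * f(t, x) + f(t, x - h)) / h**2
    return (f(t + h, x + h) - f(t + h, x - h)
            - f(t - h, x + h) + f(t - h, x - h)) / (4 * h**2)

t0, x0 = 0.3, 0.7
pt, px = d(phi, t0, x0, 0), d(phi, t0, x0, 1)
ptt, pxx, ptx = dd(phi, t0, x0, 0, 0), dd(phi, t0, x0, 1, 1), dd(phi, t0, x0, 0, 1)

box = -ptt + pxx                     # Box phi with eta = diag(-1, +1)
dphi2 = -pt**2 + px**2               # (d phi . d phi)
cross = pt**2 * ptt - 2 * pt * px * ptx + px**2 * pxx   # d_mu phi d_nu phi d^mu d^nu phi
residual = box + (dphi2 * box - cross)
print(residual)
```

The residual is zero up to finite-difference error, since $\Box\phi$, $\partial\phi\cdot\partial\phi$ and the cross term all vanish on the light-like mode.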
For a D-brane with zero inclination angle or a D-particle, the energy density is $T_{00}=\mathcal{C}_m/\sqrt{1-a_0^2}$. This is similar to the expression for the effective mass $m$ of an ordinary particle in relativity: $m=m_0/\sqrt{1-v^2}$, with the rest mass $m_0=\mathcal{C}_m$ and the velocity $v=a_0$. But, at the speed $a_0=\pm1$, the pressure of D-particles $T_{ii}=-\mathcal{C}_m\sqrt{1-a_0^2}$ is zero, whereas for ordinary particles the pressure vanishes when they are at rest. \subsection{Massive scalar} For a massive scalar $\phi$, $\alpha=-\gamma^2$. The massive scalar in the DBI effective theory may describe confined fluctuations of the D-brane, in contrast to transverse massless scalars. The real solution of the massive scalar $\phi$ is (\ref{e:phisol1}) or (\ref{e:phisol2}). The energy-momentum tensor from this exact solution is \begin{equation} T_{\mu\nu}=\frac{\mathcal{C}_m}{\sqrt{1+c}}\left[ \frac{k_\mu k_\nu}{\gamma^2}-\frac{1+c}{\cos^2(\gamma\Phi)} \left(\frac{k_\mu k_\nu}{\gamma^2}+\eta_{\mu\nu}\right) \right]. \end{equation} It also obeys the conservation law $\partial^\mu T_{\mu\nu}=0$. Note that in this case $c<0$ and $1+c\leq\cos^2(\gamma\Phi)=1-\gamma^2\phi^2$ for the real solution (\ref{e:phisol2}). Thus, the energy density $T_{00}$ is positive. But the pressure is also generically negative due to the negative contribution from the tension of the stable D-brane. \section{Coupling to transverse massless scalars} \label{sec:coupmassless} In the rest of this paper, we shall adopt the method used in the previous section for deriving exact solutions to explore solutions in the DBI theory including world-volume massless fields.
As done for the action (\ref{e:DBIA}), we can generalise the DBI action of a tachyon field in the presence of world-volume massless fields \cite{Garousi:2000tr,Sen:2003bc,Sen:2004nf} to obtain that of a general scalar field $X$: \begin{equation}\label{e:simbacdbi} S=-\int d^{p+1}x\mathcal{C}(X)\sqrt{-\det(\eta_{\mu\nu} +\partial_\mu{X}\partial_\nu{X}+\partial_\mu{Y^I} \partial_\nu{Y^I}+F_{\mu\nu})}, \end{equation} where $Y^I$ ($I=p+1,\cdots,D-1$) are a set of massless scalar fields for each direction transverse to the D$p$-brane, with $D=26$ for bosonic strings and $D=10$ for superstrings, and $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the field strength of the $U(1)$ gauge field $A_\mu$. The $Y^I$ describe fluctuations of the D-brane in transverse directions. In this section, we first study the dynamics of the general scalar $X$ in the presence of only the transverse massless scalars $Y^I$. Working out the determinant in the DBI action (\ref{e:simbacdbi}) with the gauge fields vanishing, we get the Lagrangian \begin{equation}\label{e:dbicouscalar2} \mathcal{L}=-\mathcal{C}{(X)}\left[1+M+\partial_\mu{X} \partial^\mu{X}+ \frac{1}{2}H_{\mu\nu}^{IJ}H^{IJ\mu\nu}+\frac{1}{2} H_{\mu\nu}^I(X)H^{I\mu\nu}(X)\right]^{\frac{1}{2}}, \end{equation} where $M=\sum_I\partial_\mu{Y^I}\partial^\mu{Y^I}$ and \begin{equation} H_{\mu\nu}^I(X)=\partial_\mu{X}\partial_\nu{Y^I}- \partial_\nu{X}\partial_\mu{Y^I}, \end{equation} \begin{equation} H_{\mu\nu}^{IJ}=\partial_\mu{Y^I}\partial_\nu{Y^J}- \partial_\nu{Y^I}\partial_\mu{Y^J}.
\end{equation} Now we adopt the potentials and field redefinition relations given in the previous section, respectively for the tachyon, massless scalar and massive scalar, to rewrite this Lagrangian as \begin{equation}\label{e:phidbicoumassless} \mathcal{L}=-V(\phi)\left[\left(1+M+\frac{1}{2} H_{\mu\nu}^{IJ}H^{IJ\mu\nu}\right)U(\phi)+\partial_ \mu{\phi}\partial^\mu{\phi}+\frac{1}{2}H_{\mu\nu}^I H^{I\mu\nu}\right]^{\frac{1}{2}}, \end{equation} where $U(\phi)=\mathcal{C}_m/V(\phi)=1+\alpha\phi^2$ and \begin{equation} H_{\mu\nu}^I=\partial_\mu{\phi}\partial_\nu{Y^I}- \partial_\nu{\phi}\partial_\mu{Y^I}. \end{equation} The equations of motion of $Y^I$ and $\phi$ from this Lagrangian are respectively \begin{equation}\label{e:Yphieom} \partial^\mu\left(\frac{\partial_\mu{Y^I}+H_{\mu\nu}^{IJ} \partial^\nu Y^J-V(\phi)H_{\mu\nu}^I\partial^\nu{\phi}}{y} \right)=0, \end{equation} \begin{equation}\label{e:phiYeom} \partial^\mu\left(\frac{\partial_\mu{\phi}+H_{\mu\nu}^I \partial^\nu{Y^I}}{y}\right)+\left(1+M+\frac{1}{2}H^{IJ} \cdot H^{IJ}\right)\frac{\alpha\phi}{y}=0, \end{equation} where $y$ is the kinetic part of the DBI action (\ref{e:phidbicoumassless}): $\mathcal{L}=-Vy$. \subsection{Time evolution} Let us first consider the simple case, the homogeneous case, in which the coupling terms $H_{\mu\nu}^I$ and $H_{\mu\nu}^{IJ}$ automatically vanish. Denoting $M_0=\sum_I(\dot{Y}^I)^2$, the equation of motion of the massless scalar $Y^I$ (\ref{e:Yphieom}) can be expressed as \begin{equation} [(1-M_0)U-\dot{\phi}^2]\ddot{Y}^I+\dot{Y}^I[\dot{\phi}\ddot{\phi} -\alpha(1-M_0)\phi\dot{\phi}+\frac{1}{2}U\dot{M}_0]=0, \end{equation} where the dots denote time derivatives.
Multiplying both sides of the equation by $\dot{Y}^I$ and summing over all components $I$, we get \begin{equation}\label{e:homYphieq} \dot{M}_0(U-\dot{\phi}^2)+2M_0\dot{\phi}[\ddot{\phi} -\alpha(1-M_0)\phi]=0. \end{equation} On the other hand, the equation of motion of $\phi$ (\ref{e:phiYeom}) is \begin{equation}\label{e:homphiYeq} 2(1-M_0)[\ddot{\phi}-\alpha(1-M_0)\phi]+\dot{\phi}\dot{M}_0=0. \end{equation} For the massless scalars $Y^I$, we can have $\ddot{Y}^I=0$, which implies that the $\dot{Y}^I$ are constant: $\dot{Y}^I=a_0^I$, where $a_0^I$ are real constants. Denoting $a_0^2=\sum_I(a_0^I)^2$, in the special case $a_0^2=1$ the tachyon or massive scalar $\phi$ satisfies $\ddot{\phi}=0$, behaving as a massless scalar. When $a_0^2<1$, we have the solution of $\phi$ \begin{equation}\label{e:homotacphisol} \phi=\phi_+e^{\omega t}+\phi_-e^{-\omega t}, \textrm{ }\textrm{ }\textrm{ } \omega^2=(1-a_0^2)\beta^2 \end{equation} for a tachyon with $\alpha=\beta^2$, and \begin{equation}\label{e:homoscaphisol} \phi=\phi_s\sin{\left(\omega t\right)}+\phi_c \cos{\left(\omega t\right)}, \textrm{ }\textrm{ }\textrm{ } \omega^2=(1-a_0^2)\gamma^2 \end{equation} for a massive scalar with $\alpha=-\gamma^2$. Thus, the absolute values of the mass squared $|-k_0k^0|=\omega^2$ in both cases become smaller than the original values due to the presence of the transverse massless scalars $Y^I$. At the critical value $a_0=\pm1$, the field $\phi$ becomes a massless scalar. Moreover, how much the absolute values of the mass squared decrease depends on the velocity $a_0$ of the whole D-brane but not necessarily on its components $a_0^I$. To get the same mass squared $\omega^2$, we only need to keep the velocity $a_0$ of the whole D-brane at the same value, while the values of its components $a_0^I$ may vary. For a tachyon $\phi$, the decreasing $\omega^2$ leads to a smaller growth rate of the field, which at late time is $\dot{T}\simeq\sqrt{1-a_0^2}\leq1$.
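A minimal numerical sketch (arbitrary test parameters, not part of the original analysis) confirming that constant $\dot{Y}^I=a_0^I$ together with the profile (\ref{e:homotacphisol}) solves both Eqs.\ (\ref{e:homYphieq}) and (\ref{e:homphiYeq}) in the tachyon case $\alpha=\beta^2$:

```python
import math

beta = 0.8
a0 = [0.3, 0.4]                    # constant velocities a_0^I (assumed test values)
a0sq = sum(a**2 for a in a0)       # a_0^2 < 1
alpha = beta**2                    # tachyon case
omega = math.sqrt((1 - a0sq) * alpha)
pp, pm = 0.2, 0.5                  # phi_+, phi_- (arbitrary)
h = 1e-4

def phi(t):
    return pp * math.exp(omega * t) + pm * math.exp(-omega * t)

t0 = 0.6
phid = (phi(t0 + h) - phi(t0 - h)) / (2 * h)
phidd = (phi(t0 + h) - 2 * phi(t0) + phi(t0 - h)) / h**2
M0, M0dot = a0sq, 0.0              # M_0 = sum (dot Y^I)^2 is constant
U = 1 + alpha * phi(t0)**2

# Eq. (homYphieq): M0dot (U - phid^2) + 2 M0 phid [phidd - alpha (1 - M0) phi] = 0
res1 = M0dot * (U - phid**2) + 2 * M0 * phid * (phidd - alpha * (1 - M0) * phi(t0))
# Eq. (homphiYeq): 2 (1 - M0) [phidd - alpha (1 - M0) phi] + phid M0dot = 0
res2 = 2 * (1 - M0) * (phidd - alpha * (1 - M0) * phi(t0)) + phid * M0dot
print(res1, res2)
```

Both residuals vanish because $\dot{M}_0=0$ and $\ddot{\phi}=\omega^2\phi=(1-a_0^2)\beta^2\phi$.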
It means that unstable D-branes decay more slowly when they move faster. For a massive scalar $\phi$, the decreasing $\omega^2$ leads to a smaller oscillating frequency, which is $|\omega|=\gamma\sqrt{1-a_0^2}\leq\gamma$. Finally, we should clarify one remaining point about the two field equations (\ref{e:homYphieq}) and (\ref{e:homphiYeq}): we can actually get the following relation from these two equations \begin{equation}\label{e:homYphiequiveq} 1-M_0=\frac{\dot{\phi}^2}{U}=\dot{X}^2. \end{equation} One can prove that any solutions of $M_0(t)$ and $\phi(t)$ satisfying this equation automatically satisfy Eqs.\ (\ref{e:homYphieq}) and (\ref{e:homphiYeq}) (note that $\dot{M}_0=-2\dot{\phi}[\ddot{\phi}-\alpha(1-M_0)\phi]/U$ from Eq.\ (\ref{e:homYphiequiveq})). Actually, this relation (\ref{e:homYphiequiveq}) leads to the vanishing of the kinetic parts of the DBI actions (\ref{e:dbicouscalar2}) and (\ref{e:phidbicoumassless}) in the homogeneous case, i.e., $1+M+\partial X\cdot\partial X=0$ and $y=0$ respectively. This is similar to the results for the equation of motion (\ref{e:Teom}): the solutions satisfying $1+\partial X\cdot\partial X=0$ are always solutions to the equation of motion (\ref{e:Teom}), as pointed out in the previous section. We also mentioned that, for the tachyon field $X=T$ case, the equation $1+\partial T\cdot\partial T=0$ can be achieved only when the tachyon rolls down to the vacuum. Hence, we can conjecture that the equation (\ref{e:homYphiequiveq}) for a tachyon $X=T$ could be achieved only towards the end of condensation.
\subsection{The spacetime-dependent case} In the spacetime-dependent case, denoting $M=\sum_J\partial Y^J\cdot\partial Y^J$ and expanding Eq.\ (\ref{e:Yphieom}) give \begin{eqnarray}\label{e:Yphieom2} [U(1+M+\frac{1}{2}H^{IJ}\cdot H^{IJ})+\partial\phi\cdot \partial\phi+\frac{1}{2}H^I\cdot H^I][\Box Y^I+\partial^ \mu(H_{\mu\nu}^{IJ})\partial^\nu Y^J \nonumber \\ -V\partial^\mu(H_{\mu\nu}^I)\partial^\nu\phi] -\frac{1}{4} (\partial_\mu Y^I+H_{\mu\nu}^{IJ}\partial^\nu Y^J-V H_{\mu\nu}^I\partial^\nu\phi)[2U\partial^\mu M+ \nonumber \\ U\partial^\mu(H^{IJ}\cdot H^{IJ})+\partial^\mu(H^I\cdot H^I)+2\alpha\phi\partial^\mu\phi(H^{IJ}\cdot H^{IJ})+ \nonumber \\ 4\alpha\phi(1+M+\frac{1}{2}H^{IJ}\cdot H^{IJ})\partial^\mu\phi+4\partial_\rho\phi \partial^\mu\partial^\rho\phi]=0. \end{eqnarray} Expanding Eq.\ (\ref{e:phiYeom}) gives \begin{equation}\label{e:phiYeom2} I_1+I_3=0, \end{equation} where the linear and cubic order terms of $\phi$ are respectively \begin{eqnarray} I_1=(1+M+\frac{1}{2}H^{IJ}\cdot H^{IJ})[\Box\phi+ \alpha\phi(1+M+\frac{1}{2}H^{IJ}\cdot H^{IJ})+ \nonumber\\ \partial^\mu(H_{\mu\nu}^I)\partial^\nu Y^I] -\frac{1}{2}(\partial_\mu\phi+H_{\mu\nu}^I\partial^\nu Y^I)\partial^\mu(M+\frac{1}{2}H^{IJ}\cdot H^{IJ}), \nonumber \end{eqnarray} \begin{eqnarray} I_3=[\alpha\phi^2(1+M+\frac{1}{2}H^{IJ}\cdot H^{IJ}) +\frac{1}{2}H^I\cdot H^I][\Box\phi+\alpha\phi(1+M+ \nonumber \\ \frac{1}{2}H^{IJ}\cdot H^{IJ})+\partial^\mu(H_{\mu\nu} ^I)\partial^\nu Y^I]+[(\partial\phi\cdot\partial\phi) \Box\phi-\partial_\mu\phi\partial_\nu\phi\partial^\mu \partial^\nu\phi] \nonumber \\ +(\partial\phi\cdot\partial\phi)\partial^\mu(H_{\mu\nu} ^I)\partial^\nu Y^I-H_{\mu\nu}^I\partial^\nu Y^I [\partial_\rho\phi\partial^\mu\partial^\rho \phi +\alpha\phi\partial^\mu\phi(1 \nonumber \\ +M+\frac{1}{2}H^{IJ}\cdot H^{IJ})]-\frac{1}{4} (\partial_\mu\phi+H_{\mu\nu}^I\partial^\nu Y^I)[2\alpha\phi^2\partial^\mu M \nonumber \\ +\alpha\phi^2\partial^\mu( H^{IJ}\cdot{H^{IJ}}) +\partial^\mu(H^I\cdot{H^I)}]. 
\nonumber \end{eqnarray} For a tachyon or a massive scalar $\phi$ with $\alpha\neq0$, the exact solutions of $Y^I$ and $\phi$ satisfying Eqs.\ (\ref{e:Yphieom2}) and (\ref{e:phiYeom2}) are respectively \begin{equation}\label{e:stYphisol} \partial_\mu Y^I(x^\mu)=a_\mu^I, \textrm{ } \textrm{ }\textrm{ } a^2=\sum_Ia_\mu^Ia^{I\mu}\geq-1, \end{equation} \begin{equation}\label{e:stphiYsol} \phi(x^\mu)=\phi_+e^{ik\cdot x}+\phi_-e^{-ik\cdot x}, \textrm{ }\textrm{ }\textrm{ } k^2=(1+a^2)\alpha, \end{equation} with their momenta satisfying the coupling relations \begin{equation}\label{e:Yphimomcoup1} \frac{a_\mu^I}{a_\nu^I}=\frac{a_\mu^J}{a_\nu^J} =\frac{k_\mu}{k_\nu}, \end{equation} which are equivalent to \begin{equation}\label{e:Yphimomcoup2} \frac{a_0^I}{a_0^J}=\frac{a_1^I}{a_1^J}=\cdots=\frac{a_p^I} {a_p^J}=\kappa^{IJ}, \textrm{ }\textrm{ }\textrm{ } \frac{a_0^I}{k_0}=\frac{a_1^I}{k_1}=\cdots= \frac{a_p^I}{k_p}=\kappa^{I}, \end{equation} where $\kappa^{IJ}$ and $\kappa^{I}$ are constants. These momentum coupling relations give rise to the vanishing of the coupling terms \begin{eqnarray}\label{e:coupvan} H_{\mu\nu}^I=0, & H_{\mu\nu}^{IJ}=0. \end{eqnarray} For the solution of the massless scalars $Y^I$, we choose the solution (\ref{e:maslesscasol2}) rather than (\ref{e:maslesscasol1}) because the coupling terms $H_{\mu\nu}^I$ and $H_{\mu\nu}^{IJ}$ cannot vanish if we choose the latter as the solution of $Y^I$. The momenta of the solution (\ref{e:maslesscasol1}) satisfy the condition $k^2=0$, while those of the tachyon or massive scalar field $\phi$ satisfy $k^2=\alpha\neq0$. Thus, they cannot give the coupling relation (\ref{e:Yphimomcoup1}), which comes from the vanishing of $H_{\mu\nu}^I$ and $H_{\mu\nu}^{IJ}$.
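The vanishing (\ref{e:coupvan}) is immediate once every $a_\mu^I$ is proportional to $k_\mu$, since the antisymmetric combinations then cancel; a small numeric sketch (arbitrary test momenta) makes this explicit:

```python
# a_mu^I = kappa^I k_mu (the coupling relation); check that H^I and H^{IJ} vanish.
k = [0.9, 0.2, 0.5]                       # momenta k_mu of phi (test values)
kappa = {'I': 0.7, 'J': -1.3}             # proportionality constants kappa^I
a = {n: [kap * km for km in k] for n, kap in kappa.items()}

# gradients: d_mu phi ~ k_mu * (common scalar), d_mu Y^I = a_mu^I
s = 2.4                                    # arbitrary common scalar factor in d_mu phi
HI = max(abs(k[m] * s * a['I'][n] - k[n] * s * a['I'][m])
         for m in range(3) for n in range(3))
HIJ = max(abs(a['I'][m] * a['J'][n] - a['I'][n] * a['J'][m])
          for m in range(3) for n in range(3))
print(HI, HIJ)
```

Both maxima vanish up to floating-point rounding, reproducing $H^I_{\mu\nu}=H^{IJ}_{\mu\nu}=0$.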
Since the transverse massless scalars $Y^I$ describe the fluctuation of the D-brane in the transverse directions, the solutions of $Y^I$ (\ref{e:stYphisol}) imply that the D-brane is inclined at an angle related to $a_i^I$ and moves with a uniform speed related to $a_0^I$ in the bulk, with $a_\mu^I$ satisfying the coupling relations (\ref{e:Yphimomcoup2}). From the solution of $\phi$ (\ref{e:stphiYsol}), we can see that the absolute values of the mass squared $|-k^2|$ also become smaller than the original values due to the uniform motion of the transverse massless scalars $Y^I$. We can also learn that the mass squared $|-k^2|$ is dependent on the value $a^2$ but not necessarily on its components $a_\mu^I$. For a tachyon $\phi$, the decreasing $|-k^2|$ implies that inclined unstable D-branes also decay slower when moving faster. But, in this case, the spacetime-dependent tachyon solution is complex when we require $k_0$ to be imaginary. We can only have real solutions in the homogeneous case and in the static case. The former case has been discussed in the previous subsection. For a massive scalar $\phi$, the spacetime-dependent solution (\ref{e:stphiYsol}) is real and so is valid. In this case, $-1<a^2<0$ in terms of the coupling relations (\ref{e:Yphimomcoup2}). It is clear that the mass of the massive scalar is given by $-k^2=\gamma^2(1+a^2)$, which is smaller than $\gamma^2$ since $-1<a^2<0$. In terms of the momentum coupling relation, we can give a relation between the field $\phi$ and $Y^I$ by neglecting the constant term of $Y^I$ and denoting $Y=Y^I/\kappa^I$ \begin{eqnarray} \phi=\phi_s\sin(Y)+\phi_c\cos(Y). \end{eqnarray} From it, we further have the relation between $\Phi$ and $Y$ \begin{eqnarray} \Phi=\frac{1}{\gamma}\sin^{-1}[\gamma\phi_s\sin(Y)+\gamma\phi_c\cos(Y)]. \end{eqnarray} That is, the scalar $\Phi$ oscillates in a mode proportional to that of the uniformly moving D-brane. 
For a massless scalar $\phi$ with $\alpha=0$, the solutions of $Y^I$ can be either (\ref{e:maslesscasol1}) or (\ref{e:maslesscasol2}). Correspondingly, the solution of $\phi$ is (\ref{e:maslesscasol1}) or (\ref{e:maslesscasol2}) as well. \section{Coupling to gauge fields} \label{sec:coupgauge} The case of coupling to gauge fields is more complicated. We first work out the determinant in the DBI action including both a general scalar and the gauge fields and write it in an explicit form. But these forms of the determinant are different on D-branes of different dimensions. Another point is that, for varying gauge fields, we usually need to choose gauge conditions. The DBI effective action of a general scalar field $X$ coupled to the $U(1)$ gauge fields is given in the form (\ref{e:simbacdbi}) without the transverse massless scalars $Y^I$ \begin{equation}\label{e:dbicougauge} S=-\int d^{p+1}x \mathcal{C}{(X)}\sqrt{-\det(\eta_{\mu\nu} +\partial_\mu{X}\partial_\nu{X}+F_{\mu\nu})}. \end{equation} Aspects of static kink (and vortex) solutions from this action for a tachyon have been studied in \cite{Mukhopadhyay:2002en,Sen:2003tm,Sen:2003zf,Kim:2003in,Kim:2003ma, Brax:2003wf,Afonso:2006ws,Cho:2006pg}, where it was shown that the electromagnetic fields were constant. Due to the existence of the constant electromagnetic fields, the separations between kinks and anti-kinks are reduced. The electromagnetic fields are also restricted to be constant in the homogeneous case \cite{Kim:2003he}, in which the tachyon grows more slowly with time in the presence of constant electromagnetic fields. In what follows, we shall use the method of redefinition adopted in Sec.\ 2 to explore solutions in the coupled system of a general scalar and gauge fields on D-branes of various dimensions.
\subsection{D-strings} For D-strings, the action (\ref{e:dbicougauge}) is simply \begin{equation}\label{e:1dordbi} S=-\int dtdx \mathcal{C}{(X)}\sqrt{1-E(t,x)^2 +\partial_\mu{X}\partial^\mu{X}}, \end{equation} where $E(t,x)=F_{01}$ is the electric field along the $x$ direction on a D-string. Implementing the field redefinitions on the field $X$ as given in Sec. 2, the action (\ref{e:1dordbi}) can be rewritten as: \begin{equation}\label{e:1ddbi} S=-\int dtdx V(\phi)\sqrt{\left(1-E^2\right)U(\phi) +\partial_\mu{\phi}\partial^\mu{\phi}}, \end{equation} where $U(\phi)=\mathcal{C}_m/V(\phi)=1+\alpha\phi^2$ as before and $\mu,\nu$ run over $0,1$. The equations of motion of the electric field and the field $\phi$ are respectively: \begin{equation}\label{e:1delee1} \partial^\mu{\left(\frac{E}{y}\right)}=0, \end{equation} \begin{equation}\label{e:1dphie1} \partial^\mu{\left(\frac{\partial_\mu{\phi}}{y}\right)}+ \left(1-E^2\right) \frac{\alpha\phi}{y}=0, \end{equation} where $y$ is the kinetic part of the action (\ref{e:1ddbi}): $\mathcal{L}=-Vy$. For the pure gauge field case with vanishing $\phi$, only the equation (\ref{e:1delee1}) is left, which reduces to $\partial^\mu E=0$. Hence, the electric field $E$ without coupling to any other fields in the DBI theory can only be constant on D-strings.
With non-trivial $\phi$, if we denote $r=E^2$, the above equations of motion (\ref{e:1delee1}) and (\ref{e:1dphie1}) give respectively \begin{equation}\label{e:1delee2} \frac{1}{2}(U+\partial\phi\cdot\partial\phi)\partial^\mu r-r[\frac{1}{2}\partial^\mu(\partial\phi\cdot\partial \phi)+\alpha(1-r)\phi\partial^\mu\phi]=0, \end{equation} \begin{eqnarray}\label{e:1dphie2} U(1-r)[\Box\phi+\alpha(1-r)\phi]+\frac{1}{2}U\partial r\cdot\partial\phi \nonumber \\ +[(\partial\phi \cdot\partial\phi)\Box\phi-\partial_\mu\phi\partial_\nu \phi\partial^\mu\partial^\nu\phi]=0. \end{eqnarray} It is easy to see that there is an exact solution for $\phi$ when the electric field is constant: \begin{equation}\label{e:s1dphiemsol} \phi(x^\mu)=\phi_+e^{ik\cdot x}+\phi_-e^{-ik\cdot x}, \textrm{ }\textrm{ } \textrm{ } k^2=(1-r)\alpha. \end{equation} In the special case $r=1$, $\phi$ becomes a massless scalar and has the massless scalar solutions (\ref{e:maslesscasol1}) and (\ref{e:maslesscasol2}). In the homogeneous case, Eqs.\ (\ref{e:1delee2}) and (\ref{e:1dphie2}) become respectively \begin{equation}\label{e:hom1delee2} \dot{r}(U-\dot{\phi}^2)+2r\dot{\phi}[\ddot{\phi} -\alpha(1-r)\phi]=0, \end{equation} \begin{equation}\label{e:hom1dphie2} 2(1-r)[\ddot{\phi}-\alpha(1-r)\phi]+\dot{r}\dot{\phi}=0. \end{equation} These two equations are respectively similar to Eqs.\ (\ref{e:homYphieq}) and (\ref{e:homphiYeq}) for the case of coupling to transverse massless scalars. Similarly, from them one can get \begin{equation}\label{e:1demphiequiveq} 1-r=\frac{\dot{\phi}^2}{U}=\dot{X}^2, \end{equation} which also leads to the vanishing of the kinetic parts of the DBI actions (\ref{e:1dordbi}) and (\ref{e:1ddbi}). For the tachyon field $X=T$, this equation is also expected to be achieved only as the tachyon approaches the vacuum.
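That the plane wave (\ref{e:s1dphiemsol}) solves Eq.\ (\ref{e:1dphie2}) for constant $r$ can be checked numerically: the cubic bracket cancels identically for a single mode, leaving only the linear dispersion relation $k^2=(1-r)\alpha$. The sketch below uses the real massive-scalar case ($\alpha=-\gamma^2$) with arbitrary test values.

```python
import math

gamma, r = 1.0, 0.19               # massive scalar alpha = -gamma^2, constant r = E^2
alpha = -gamma**2
k1 = 0.3
k0 = math.sqrt(k1**2 - (1 - r) * alpha)   # from k^2 = -k0^2 + k1^2 = (1-r) alpha
h = 1e-4

def phi(t, x):                     # real plane-wave mode
    return 0.8 * math.cos(k1 * x - k0 * t)

def dd(f, t, x, i, j):             # central second derivatives
    if i == j == 0:
        return (f(t + h, x) - 2 * f(t, x) + f(t - h, x)) / h**2
    if i == j == 1:
        return (f(t, x + h) - 2 * f(t, x) + f(t, x - h)) / h**2
    return (f(t + h, x + h) - f(t + h, x - h)
            - f(t - h, x + h) + f(t - h, x - h)) / (4 * h**2)

t0, x0 = 0.2, 0.5
pt = (phi(t0 + h, x0) - phi(t0 - h, x0)) / (2 * h)
px = (phi(t0, x0 + h) - phi(t0, x0 - h)) / (2 * h)
ptt, pxx, ptx = dd(phi, t0, x0, 0, 0), dd(phi, t0, x0, 1, 1), dd(phi, t0, x0, 0, 1)

U = 1 + alpha * phi(t0, x0)**2
box = -ptt + pxx                   # Box phi with eta = diag(-1, +1)
dphi2 = -pt**2 + px**2
cross = pt**2 * ptt - 2 * pt * px * ptx + px**2 * pxx
# Eq. (1dphie2) with constant r, so the dr term drops:
residual = U * (1 - r) * (box + alpha * (1 - r) * phi(t0, x0)) + (dphi2 * box - cross)
print(residual)
```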
When the electric field is constant, the homogeneous solution of $\phi$ is \begin{equation} \phi=\phi_+e^{\omega t}+\phi_-e^{-\omega t}, \textrm{ }\textrm{ }\textrm{ } \omega^2=\left(1-r\right)\beta^2, \end{equation} for a tachyon, or \begin{equation} \phi=\phi_s\sin{(\omega t)}+\phi_c\cos{(\omega t)}, \textrm{ }\textrm{ }\textrm{ } \omega^2=\left(1-r\right)\gamma^2, \end{equation} for a massive scalar. Thus the tachyon grows more slowly due to the presence of a non-vanishing constant electric field. At late time, $|\dot{T}|\simeq\sqrt{1-r}$, which is less than 1. Hence, the constant electric field tends to slow down the decay process of unstable D-strings. At the critical value $r=1$, the tachyon becomes a massless scalar, which implies that an unstable D-string with a sufficiently strong electric field becomes stable. For a massive scalar $\phi$, the oscillating frequency of the field becomes smaller due to the presence of a non-vanishing electric field. \subsection{D2-branes} For D2-branes in the presence of gauge fields, the action (\ref{e:dbicougauge}) can be rewritten as \begin{equation}\label{e:2demdbi} S=-\int d^3x \mathcal{C}{(X)}\left[1+\partial_\mu{X} \partial^\mu{X}+\frac{1}{2}F_{\mu\nu}F^{\mu\nu}-G_X^2 \right]^{\frac{1}{2}}, \end{equation} where \begin{equation} G_X=\widetilde{F}^\mu\partial_\mu{X}, \end{equation} \begin{equation} \widetilde{F}^\mu=\frac{1}{2}\epsilon^{\mu\nu\rho} F_{\nu\rho}. \end{equation} $G_X$ is the coupling term between the general scalar field $X$ and the dual gauge field strength. $\epsilon_{\mu\nu\rho}$ is the 3-dimensional Levi-Civita tensor with $\epsilon_{\mu\nu\rho}=-\epsilon^{\mu\nu\rho}$ and $\epsilon_{012}=1$.
After the field redefinitions given in Sec.\ 2, we reach the action of $\phi$: \begin{equation}\label{e:2dphidbi} S=-\int d^3xV(\phi)y=-\int d^3xV(\phi)\left[\left(1+ \frac{1}{2}F_{\mu\nu}F^{\mu\nu}\right)U(\phi)+\partial_\mu {\phi}\partial^\mu{\phi}-G^2\right]^{\frac{1}{2}}, \end{equation} where $G=\widetilde{F}^\mu\partial_\mu{\phi}$. The equations of motion for the gauge field can be written: \begin{equation} \partial^\mu\left(\frac{F_{\mu\nu}-V(\phi)G\epsilon_{\mu\nu\rho} \partial^\rho{\phi}}{y}\right)=0, \end{equation} which can be recast into \begin{equation}\label{e:2demeq} \partial^\mu\left(\frac{F_{\mu\nu}}{y}\right)=V\epsilon_ {\mu\nu\rho}\partial^\mu\left(\frac{G}{y}\right)\partial^ \rho\phi. \end{equation} The equation of motion for the field $\phi$ is \begin{equation}\label{e:2dphieq} \partial^\mu\left(\frac{\partial_\mu{\phi}-G \widetilde{F}_\mu}{y}\right)+\left(1+\frac{1}{2} F_{\mu\nu}F^{\mu\nu}\right)\frac{\alpha\phi}{y}=0. \end{equation} \subsubsection{Time evolution} First, we consider the space-independent solutions from the DBI action (\ref{e:2dphidbi}) with the field strength elements: $\phi(t)$, $F_{01}=E_1(t)$, $F_{02}=E_2(t)$ and $F_{12}=B=const$. In this case, the only non-vanishing coupling term is $G=-\dot{\phi}B$ and Eq.\ (\ref{e:2demeq}) becomes \begin{equation} \partial^0\left(\frac{E_1(t)}{y}\right)=0, \textrm{ }\textrm{ } \textrm{ } \partial^0\left(\frac{E_2(t)}{y}\right)=0. \end{equation} They lead to \begin{equation} [(1-r)U-\dot{\phi}^2]\dot{E}_j+[\dot{\phi}\ddot{\phi} -\alpha(1-r)\phi\dot{\phi}+\frac{1}{2}U\dot{r}]E_j=0, \end{equation} where the index $j$ runs over $1,2$ and \begin{equation} r=\frac{E_1^2+E_2^2}{1+B^2}. 
\end{equation} Multiplying both sides of this equation by $E_j$ and summing over $j=1,2$, we get a single equation \begin{equation} \dot{r}(U-\dot{\phi}^2)+2r\dot{\phi}[\ddot{\phi} -\alpha(1-r)\phi]=0. \end{equation} On the other hand, the equation of motion of $\phi$ (\ref{e:2dphieq}) becomes \begin{equation} 2(1-r)[\ddot{\phi}-\alpha(1-r)\phi]+\dot{r}\dot{\phi}=0. \end{equation} Thus the above two equations of $r$ and $\phi$ are respectively the same as Eqs.\ (\ref{e:hom1delee2}) and (\ref{e:hom1dphie2}), except that the definition of $r$ is different here. Therefore, we can also get the same relation as Eq.\ (\ref{e:1demphiequiveq}) from the two equations. From the above equations of motion, we know that, when the ratio $r$ of the electromagnetic energy densities is constant, there is a time-dependent solution of $\phi$, which is \begin{equation} \phi=\phi_+e^{\omega t}+\phi_-e^{-\omega t}, \textrm{ }\textrm{ }\textrm{ } \omega^2=\left(1-r\right)\beta^2, \end{equation} for a tachyon, and \begin{equation} \phi=\phi_s\sin{(\omega t)}+\phi_c\cos{(\omega t)}, \textrm{ }\textrm{ }\textrm{ } \omega^2=\left(1-r\right)\gamma^2, \end{equation} for a massive scalar. Thus the effective mass squared $\omega^2$ also decreases for a non-vanishing value of $r$. In particular, when $r=1$, the field $\phi$ becomes a massless scalar. Moreover, from the above solutions we can also learn that, as long as we keep the ratio $r$ constant, we get the same solution $\phi$ with the same effective mass $\omega^2$, while the components of the electromagnetic fields $E_1$, $E_2$ and $B$ are allowed to change. Another feature we can learn from the expressions of $r$ and $\omega^2$ is that the electric fields tend to decrease while the magnetic field tends to increase the absolute value of the mass squared $\omega^2$. For a tachyon, this also leads to a slower growth rate of the field.
At late time, the tachyon grows at a rate $|\dot{T}|\simeq\sqrt{1-r}<1$, which means that unstable D-branes decay slower in the presence of non-vanishing constant electromagnetic fields. For a massive scalar, the field has a lower oscillating frequency in the presence of non-vanishing constant electromagnetic fields, indicating that the massive scalar has a smaller effective mass. \subsubsection{Fixed electromagnetic fields} Next, we consider the spacetime-dependent case by setting all components of the electromagnetic fields $F_{01}=E_1$, $F_{02}=E_2$ and $F_{12}=B$ to be constant. Then one can express Eq.\ (\ref{e:2dphieq}) as \begin{eqnarray} U(1+\frac{1}{2}F\cdot F)[\Box\phi-\widetilde{F}_\mu \widetilde{F}_\nu\partial^\mu\partial^\nu\phi+(1+\frac{1}{2}F\cdot F)\alpha\phi]+ \nonumber \\ \textrm{$[$}(\partial\phi\cdot\partial\phi)\Box\phi -\partial_\mu\phi\partial_\nu\phi\partial^\mu\partial^\nu\phi]+\widetilde{F} _\mu\widetilde{F}_\nu[\partial^\nu\phi\partial^\mu(\partial\phi\cdot\partial\phi) \nonumber \\ -(\partial\phi\cdot\partial\phi)\partial^\mu\partial^\nu \phi-\partial^\mu\phi\partial^\nu\phi\Box\phi]=0, \end{eqnarray} where $\widetilde{F}_\mu=(B,E_2,-E_1)$. This equation still has the classical solution (\ref{e:phisol1}) or (\ref{e:phisol2}) but with the momenta satisfying \begin{equation}\label{e:D2fixmomcou} \begin{array}{ccl} k\cdot k & = &(\widetilde{F}\cdot k)^2 +(1+\frac{1}{2}F\cdot F)\alpha \\ & = &(k_0B+E_1k_2-E_2k_1)^2+(1+B^2)(1-r)\alpha. \end{array} \end{equation} This momentum condition implies that the momenta $k_\mu$ vary on a surface in the momentum space, which is decided by the three components of the electromagnetic fields. When $k_1=k_2=0$, we recover the time-dependent solution obtained in the previous discussion. Let us discuss the momentum coupling relation in detail below respectively for a tachyon, massless scalar and massive scalar. 
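The second line of (\ref{e:D2fixmomcou}) is a purely algebraic consequence of $\widetilde{F}_\mu=(B,E_2,-E_1)$, $\frac{1}{2}F\cdot F=B^2-E_1^2-E_2^2$ and the definition of $r$; a numeric sketch over random values (plain Python, arbitrary seed) confirms the identity:

```python
import random

random.seed(1)
for _ in range(100):
    E1, E2, B, k0, k1, k2, alpha = (random.uniform(-2, 2) for _ in range(7))
    r = (E1**2 + E2**2) / (1 + B**2)
    # Ftilde_mu = (B, E2, -E1); contraction Ftilde.k with eta = diag(-1, 1, 1)
    Fk = -B * k0 + E2 * k1 - E1 * k2
    half_FF = B**2 - E1**2 - E2**2      # (1/2) F_{mu nu} F^{mu nu}
    lhs = Fk**2 + (1 + half_FF) * alpha
    rhs = (k0 * B + E1 * k2 - E2 * k1)**2 + (1 + B**2) * (1 - r) * alpha
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
ok = True
print(ok)
```

Note $(-Bk_0+E_2k_1-E_1k_2)^2=(k_0B+E_1k_2-E_2k_1)^2$ and $1+B^2-E_1^2-E_2^2=(1+B^2)(1-r)$, so the two lines agree exactly.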
For the case $\alpha=0$, the solution $\phi$ does not necessarily describe a massless scalar since its mass squared $k^2\neq0$ when $\widetilde{F}\cdot k\neq0$ or $G\neq0$. If $G\neq0$, $\phi$ becomes a tachyon with $k^2>0$, propagating faster than light. For instance, setting $k_1/E_1=k_2/E_2$, we get the momentum condition $(1+B^2)k_0^2=k_1^2+k_2^2$, which describes a massless scalar in a transverse direction propagating faster than light. To avoid superluminal propagation, the allowed momentum mode $k_\mu$ must satisfy \begin{equation} \widetilde{F}\cdot k=0, \end{equation} when $\alpha=0$. In this case, the massless scalar remains a massless scalar, but fluctuates only in some specific momentum modes that are determined by the constant electromagnetic fields on the D-brane. For $\alpha<0$, it is also possible that the mass squared $k^2$ becomes positive from the originally negative value $k^2=-\gamma^2$, making the field $\phi$ turn from a massive scalar into a tachyon. Thus, to avoid becoming a tachyon, the field $\phi$ can only be in the specific modes $k_\mu$ that satisfy \begin{equation} (\widetilde{F}\cdot k)^2\leq(1+B^2)(1-r)\gamma^2. \end{equation} For $\alpha>0$, the solution $\phi$ will always be a tachyon with $k^2>0$ when $r<1$. But, for the tachyon $\phi$, the time-like component $k_0$ is imaginary, $k_0=i\omega$. In this case, there are two situations for the above momentum condition (\ref{e:D2fixmomcou}) in order to keep $\omega$, $k_1$ and $k_2$ real: (a) When $B=0$, the momenta satisfy \begin{equation} (1-E_2^2)k_1^2+(1-E_1^2)k_2^2+2E_1E_2k_1k_2 =(1-E_1^2-E_2^2)\beta^2-\omega^2. \end{equation} For a given $\omega$, $k_1$ and $k_2$ can only vary on the circumference of an ellipse that is determined by the values of $E_1$ and $E_2$. The maximum value of $|\omega|$ occurs when $k_1=k_2=0$. So the constraint on $\omega$ is $|\omega|\leq\beta\sqrt{1-E_1^2-E_2^2}$. (b) When $B\neq 0$, $k_1$ and $k_2$ must satisfy $k_1/E_1=k_2/E_2$.
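The case (a) ellipse constraint can be checked against the momentum condition (\ref{e:D2fixmomcou}) with $B=0$ and $k_0=i\omega$, for which $\widetilde{F}\cdot k=E_2k_1-E_1k_2$ is real; a numeric sketch with arbitrary test fields:

```python
import random

random.seed(2)
beta = 1.3
E1, E2 = 0.5, 0.4                 # B = 0 case with E1^2 + E2^2 < 1 (test values)
checks = []
for _ in range(50):
    k1 = random.uniform(-0.4, 0.4)
    k2 = random.uniform(-0.4, 0.4)
    # solve the ellipse relation for omega^2:
    w2 = ((1 - E1**2 - E2**2) * beta**2
          - (1 - E2**2) * k1**2 - (1 - E1**2) * k2**2 - 2 * E1 * E2 * k1 * k2)
    if w2 < 0:
        continue
    # momentum condition with k0 = i omega and B = 0:
    # k^2 = omega^2 + k1^2 + k2^2, and Ftilde.k = E2 k1 - E1 k2
    lhs = w2 + k1**2 + k2**2
    rhs = (E2 * k1 - E1 * k2)**2 + (1 - E1**2 - E2**2) * beta**2
    checks.append(abs(lhs - rhs))
max_err = max(checks)
print(max_err)
```

The discrepancy vanishes to rounding error, so momenta on the ellipse do satisfy the original condition.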
Then the relations between the momenta are \begin{equation}\label{e:momreii} k_1^2=\frac{E_1^2}{r}[(1-r)\beta^2-\omega^2], \textrm{ }\textrm{ } \textrm{ } k_2^2=\frac{E_2^2}{r}[(1-r)\beta^2-\omega^2], \end{equation} where $r=(E_1^2+E_2^2)/(1+B^2)$ as before. The constraint on $\omega$ is $|\omega|\leq\beta\sqrt{1-r}$. From Eq.\ (\ref{e:momreii}), we know that $\partial_1\phi=0$ if $E_1=0$ and $\partial_2\phi=0$ if $E_2=0$ when $B\neq0$. This is consistent with the requirement of symmetry. \subsubsection{Electromagnetic wave} Now we consider the case with a single gauge field component $A_2(x^\mu)$ along the $x^2$ direction. We assume that $A_2$ is homogeneous along the $x^2$ direction, i.e., $A_2=A_2(x^0,x^1)$, so that it satisfies the Lorentz gauge $\partial^\mu A_\mu=\partial^2 A_2=0$. In the following discussion, we denote the indices $\mu,\nu=0,1,2$ and $m,n,a,b=0,1$. The components of the field strength are: \begin{eqnarray} F_{01}=0, & F_{02}=\partial_0 A_2(x^m)=E_2(x^m), & F_{12}=\partial_1 A_2(x^m)=B(x^m). \end{eqnarray} For convenience, we denote $A_2$ by $A$. As before, let us consider the DBI action for the pure gauge field with vanishing field $\phi=0$. The equation of motion (\ref{e:2demeq}) of the gauge field becomes: \begin{equation}\label{e:2dbiemeq} \partial^m\partial_m A+[(\partial_mA\partial^mA)\partial^n\partial_nA-\partial_mA\partial_nA\partial^m\partial^nA]=0. \end{equation} Obviously, a solution of this equation is \begin{equation}\label{2demsol} A=A_+e^{iq_mx^m}+A_-e^{-iq_mx^m}, \textrm{ }\textrm{ }\textrm{ } q_0^2=q_1^2, \end{equation} where $A_\pm$ are constants. It describes a beam of planar electromagnetic waves propagating along the $x^1$ direction on the D2-brane, polarised along the $x^2$ direction. It is the same solution as the one from the Maxwell equation under the Lorentz gauge $\partial^\mu F_{\mu\nu}=\Box A=0$. Let us further consider the case with non-trivial $\phi$.
Since there is only a single component of the gauge field $A=A_2$, the coupling term can be expressed as $G=\epsilon_{mn2}\partial^m\phi\partial^nA=\epsilon_{mn}\partial^m\phi\partial^nA$. Eq.\ (\ref{e:2demeq}) contains three equations, with $\nu=0,1,2$. For the $\nu=0,1$ components, $\partial^\mu F_{\mu\nu}=0$. They are respectively \begin{eqnarray} \epsilon_{\mu 0\rho}\{\frac{1}{2}G[\partial^\mu(\partial\phi\cdot\partial\phi) +U\partial^\mu(\partial^mA\partial_mA)]-[U(1+\partial^mA\partial_mA)+\partial\phi\cdot\partial\phi] \nonumber \\ \partial^\mu G\}\partial^\rho\phi+U\partial_0A[\alpha\phi(1+\partial^mA\partial_mA) \partial^2\phi+\frac{1}{2}\partial^2(\partial\phi\cdot\partial\phi)-G \partial^2G]=0, \end{eqnarray} and \begin{eqnarray} \epsilon_{\mu 1\rho}\{\frac{1}{2}G[\partial^\mu(\partial\phi\cdot\partial\phi) +U\partial^\mu(\partial^mA\partial_mA)]-[U(1+\partial^mA\partial_mA)+\partial\phi\cdot\partial\phi] \nonumber \\ \partial^\mu G\}\partial^\rho\phi+U\partial_1A[\alpha\phi(1+\partial^mA\partial_mA) \partial^2\phi+\frac{1}{2}\partial^2(\partial\phi\cdot\partial\phi)-G \partial^2G]=0. 
\end{eqnarray} For the $\nu=2$ component, we use $\partial^\mu F_{\mu\nu}=\partial_m\partial^mA$, $\partial_\mu\widetilde{F}^{\mu\nu}=0$ and $\epsilon_{m2n}=-\epsilon_{mn}$ to get \begin{equation} J_1+J_3=0, \end{equation} where the first and the third order terms of $A$ are respectively \begin{eqnarray} J_1=U(U+\partial\phi\cdot\partial\phi)\partial^m\partial_mA+\epsilon_{mn}[ \frac{1}{2}G\partial^m\phi\partial^n(\partial\phi\cdot\partial\phi) -(\partial\phi\cdot\partial\phi)\partial^m\phi\partial^nG] \nonumber \\ -U[\epsilon_{mn}\partial^m\phi\partial^nG+\alpha\phi\partial^m\phi \partial_mA+\frac{1}{2}\partial^mA\partial_m(\partial\phi\cdot\partial\phi)], \nonumber \end{eqnarray} \begin{eqnarray} J_3=U(\partial^mA\partial_mA)[U\partial^m\partial_mA-\epsilon_{mn}\partial^m\phi\partial^n G]-U[\frac{1}{2}U\partial^mA\partial_m(\partial^aA\partial_aA)+ \nonumber \\ \alpha\phi(\partial^aA\partial_aA)\partial^m\phi\partial_mA] +UG[\partial^mG\partial_mA-G\partial^m\partial_mA+ \nonumber \\ \frac{1}{2}\epsilon_{mn}\partial^m\phi\partial^n(\partial^aA\partial_aA)] \nonumber \end{eqnarray} The equation of motion of $\phi$ is \begin{equation} I_1+I_3=0, \end{equation} where the first and the third order terms of $\phi$ are respectively \begin{eqnarray} I_1=(1+\partial^mA\partial_mA)[\Box\phi+(1+\partial^mA\partial_mA)\alpha\phi- \epsilon_{mn}\epsilon_{ab}\partial^m(\partial^a\phi\partial^bA)\partial^nA] \nonumber \\ +\frac{1}{2}[\epsilon_{mn}\epsilon_{ab}\partial^m(\partial^cA\partial_cA) \partial^nA\partial^a\phi\partial^bA-\partial_m\phi\partial^m(\partial^aA\partial_aA)], \nonumber \end{eqnarray} \begin{eqnarray} I_3=\alpha\phi^2(1+\partial^mA\partial_mA)[\Box\phi+(1+\partial^mA\partial_mA) \alpha\phi-\epsilon_{mn}\epsilon_{ab}\partial^m(\partial^a\phi\partial^bA) \partial^nA] \nonumber \\ +[(\partial\phi\cdot\partial\phi)\Box\phi-\partial_\mu\phi\partial_\nu\phi \partial^\mu\partial^\nu\phi]-\epsilon_{mn}\epsilon_{ab}(\partial\phi 
\cdot\partial\phi)\partial^m(\partial^a\phi\partial^bA)\partial^nA \nonumber \\ +G\partial_\mu\phi\partial^\mu G +G\epsilon_{mn}[\frac{1}{2}\partial^m(\partial \phi\cdot\partial\phi)\partial^nA-\partial^m\phi\partial^nA\Box\phi] \nonumber \\ +\frac{1}{2}\alpha\phi^2[\epsilon_{mn}G\partial^m(\partial^aA\partial_aA) \partial^nA-\partial\phi\cdot\partial(\partial^aA\partial_aA)]. \nonumber \end{eqnarray} Note that we denote $\partial\phi\cdot\partial\phi=\partial_\mu\phi\partial^\mu\phi$ and $\Box\phi=\partial_\mu\partial^\mu\phi$ in the above equations. The solutions of the gauge field and the general scalar field can be of the form \begin{equation}\label{e:r2demphisol} A=A_+e^{iq_mx^m}+A_-e^{-iq_mx^m}, \textrm{ }\textrm{ }\textrm{ } q_0^2=q_1^2, \end{equation} \begin{equation}\label{e:2dphiemsol} \phi(x^\mu)=\phi_+e^{ik_\mu x^\mu}+\phi_-e^{-ik_\mu x^\mu}, \textrm{ }\textrm{ } \textrm{ }\frac{k_0}{q_0}=\frac{k_1}{q_1}, \textrm{ }\textrm{ }\textrm{ } k^2=k_2^2=\alpha, \end{equation} where $A_\pm$ and $\phi_\pm$ are arbitrary constants. The momentum coupling relation is equivalent to \begin{eqnarray}\label{e:2dphiemcourel} q_1=\pm q_0, & k_1=\pm k_0, & k_2=\pm\sqrt{\alpha}. \end{eqnarray} Therefore, the gauge field solution with non-vanishing $\phi$ is just the same as that from the DBI action for the pure gauge field. The above solutions of $A$ and $\phi$ that satisfy the coupling condition $k_0/q_0=k_1/q_1$ give rise to \begin{equation} G=0. \end{equation} Let us discuss the solutions (\ref{e:r2demphisol}) and (\ref{e:2dphiemsol}) in detail for different $\alpha$ in what follows: (i) For a massless scalar $\phi$ with $\alpha=0$, its mass squared is $k^2=k_2^2=0$. In this case, the solutions (\ref{e:r2demphisol}) and (\ref{e:2dphiemsol}) of the two massless fields $A$ and $\phi$ are real and so are valid.
The solutions indicate that D$2$-branes carrying a beam of electromagnetic waves fluctuate in an oscillating mode, since the field $\phi$ with $\alpha=0$ can be viewed as a transverse massless scalar. (ii) For a tachyon $\phi$, $k_2^2=\beta^2$ and so $k_2=\pm\beta$. From the coupling relation in Eq.\ (\ref{e:2dphiemsol}), we learn that the momentum components $k_0$ and $k_1$ must be both real or both imaginary since $q_0$ and $q_1$ are real. When $k_0$ and $k_1$ are both imaginary, the field $\phi$ is a tachyon with a growing mode, and the solution (\ref{e:2dphiemsol}) is easily seen to be complex. If we demand that $\phi$ be real, then $k_0$ and $k_1$ must also be real. In this case, the field $\phi$ oscillates rather than grows, and it describes a scalar field propagating faster than light. There is another way to obtain a real solution for the field $\phi$: in terms of the momentum coupling condition in (\ref{e:2dphiemcourel}), we can simply set $k_0=k_1=0$. In this case, the solution (\ref{e:2dphiemsol}) is a stationary kink solution: $\phi(x^\mu)=\phi(x^2)=\phi_s\sin(\beta x^2)+\phi_c\cos(\beta x^2)$. (iii) For a massive scalar $\phi$, $k_2^2=-\gamma^2$ and so $k_2=\pm i\gamma$ is imaginary. The momenta $k_0$ and $k_1$ can again be both real or both imaginary. When they are real, the spacetime-dependent solution (\ref{e:2dphiemsol}) is complex since $k_2$ is imaginary. If we demand that the solution $\phi$ be real, then $k_0$ and $k_1$ should be imaginary. In this case, the field $\phi$ grows. We can also get a real solution for the field $\phi$ in the case of $k_0=k_1=0$, which reduces the solution (\ref{e:2dphiemsol}) to a stationary one: $\phi=\phi_+e^{\gamma x^2}+\phi_-e^{-\gamma x^2}$.
\subsection{D3-branes} In the $p=3$ case, the DBI action (\ref{e:dbicougauge}) can be rewritten as \begin{equation}\label{e:3ddbi} S=-\int d^4x\mathcal{C}{(X)}\left[1+\partial_\mu{X} \partial^\mu{X}+\frac{1}{2}F_{\mu\nu}F^{\mu\nu}-\frac {1}{16}(F_{\mu\nu}\widetilde{F}^{\mu\nu})^2-G_X^2 \right]^{\frac{1}{2}}, \end{equation} where \begin{equation} \widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu \rho\lambda}F_{\rho\lambda}, \end{equation} \begin{equation} G_X^2=(\widetilde{F}^{\mu\nu}\partial_\nu X)(\widetilde{F}_{\mu\rho}\partial^\rho X). \end{equation} After the field redefinitions, the action for a general scalar $\phi$ is obtained \begin{equation}\label{e:3dphidbi} \mathcal{L}=-Vy=-V\left[\left(1+\frac{1}{2}F\cdot F -\frac{1}{16}(F\cdot\widetilde{F})^2\right)U+\partial_\mu {\phi}\partial^\mu{\phi}-G^2\right]^{\frac{1}{2}}, \end{equation} where $G^2=G^\mu G_\mu=(\widetilde{F}^{\mu\nu}\partial_\nu\phi) (\widetilde{F}_{\mu\rho}\partial^\rho\phi)$. The equations of motion of the gauge field and the field $\phi$ are respectively \begin{equation}\label{e:3demeq} \partial^\mu\left(\frac{F_{\mu\nu}-\frac{1}{4}(F\cdot \widetilde{F})\widetilde{F}_{\mu\nu}}{y}\right)=V \epsilon_{\mu\nu\rho\lambda}\partial^\mu\left(\frac{G^\rho} {y}\right)\partial^\lambda\phi, \end{equation} and \begin{equation}\label{e:3dphieq} \partial^\mu\left(\frac{\partial_\mu\phi+\widetilde{F} _{\mu\nu}G^\nu}{y}\right)+\left(1+\frac{1}{2}F\cdot F- \frac{1}{16}(F\cdot\widetilde{F})^2\right)\frac{\alpha \phi}{y}=0. \end{equation} \subsubsection{Time evolution} First, we also consider the space-independent case with the electromagnetic field components: $\textbf{E}(t)=(E_1(t),E_2(t),E_3(t))$ and $\textbf{B}=(B_1,B_2,B_3)=const$. 
So $G^2=\dot{\phi}^2(B_1^2+B_2^2+B_3^2)$ and Eq.\ (\ref{e:3demeq}) gives \begin{eqnarray} \partial^0\left(\frac{E_1+(\textbf{B}\cdot\textbf{E})B_1} {y}\right)=0, \nonumber \\ \partial^0\left(\frac{E_2+(\textbf{B}\cdot\textbf{E})B_2} {y}\right)=0, \\ \partial^0\left(\frac{E_3+(\textbf{B}\cdot\textbf{E})B_3} {y}\right)=0. \nonumber \end{eqnarray} From these three equations, we can get \begin{equation}\label{e:3dtemeq} \partial^0\left(\frac{E_1}{y}\right)= \partial^0\left(\frac{E_2}{y}\right)= \partial^0\left(\frac{E_3}{y}\right)=0. \end{equation} Following the procedure in the previous discussion, we can get the solution of $\phi$ when the electric fields are constant: \begin{equation} \phi=\phi_+e^{\omega t}+\phi_-e^{-\omega t}, \textrm{ } \textrm{ }\textrm{ } \omega^2=\left(1-r\right)\beta^2, \end{equation} for a tachyon, and \begin{equation} \phi=\phi_s\sin{(\omega t)}+\phi_c\cos{(\omega t)}, \textrm{ } \textrm{ }\textrm{ } \omega^2=\left(1-r\right)\gamma^2, \end{equation} for a massive scalar, where \begin{equation} r=\frac{\textbf{E}\cdot\textbf{E}} {1+\textbf{B}\cdot\textbf{B}}. \end{equation} The results are similar to those on D-strings and D$2$-branes: for a tachyon field $\phi$, the growth rate becomes smaller, which means that unstable D$3$-branes decay slower, and, for a massive scalar $\phi$, the oscillating frequency becomes smaller due to the presence of non-vanishing constant electromagnetic fields. The mass squared of $\phi$ depends on the electromagnetic fields only through the ratio $r$, not on their individual components. We can also learn from the expressions of $r$ and $\omega^2$ that the electric fields tend to decrease, while the magnetic fields tend to increase, the absolute value of the mass squared $\omega^2$. Finally, we can still get the same equation as Eq.\ (\ref{e:1demphiequiveq}) but with a different definition of $r$ here.
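The dependence of the growth rate on $r$ alone can be made concrete with a small numerical illustration; the field values below are arbitrary and chosen only to show that stronger electric fields (larger $r$) slow the tachyonic growth:

```python
import numpy as np

# Growth rate of a tachyonic phi with constant E and B fields on a D3-brane:
# omega^2 = (1 - r) * beta^2, with r = E.E / (1 + B.B) and 0 <= r <= 1.
beta = 1.0                      # tachyon mass scale (illustrative value)
B = np.array([0.3, 0.0, 0.4])   # constant magnetic field (illustrative)

def growth_rate(E):
    r = np.dot(E, E) / (1.0 + np.dot(B, B))
    return beta * np.sqrt(max(1.0 - r, 0.0))

rates = [growth_rate(np.array([e, 0.0, 0.0])) for e in (0.0, 0.5, 1.0)]
# Stronger electric fields -> larger r -> slower growth (slower brane decay).
print(rates[0] > rates[1] > rates[2])  # True
```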
\subsubsection{Fixed electromagnetic fields} If we set the electromagnetic fields $F_{\mu\nu}$ to be constant, Eq.\ (\ref{e:3dphieq}) can be expressed as \begin{eqnarray} U[1+\frac{1}{2}F\cdot F-\frac{1}{16}(F\cdot\widetilde{F})^2] \{\Box\phi-\widetilde{F}_{\mu\nu}\widetilde{F}^{\mu\rho}\partial^ \nu\partial_\rho\phi+\alpha[1+\frac{1}{2}F\cdot F- \nonumber \\ \frac{1}{16}(F\cdot\widetilde{F})^2]\phi\} +[(\partial\phi\cdot\partial\phi)\Box\phi-\partial_\mu\phi\partial_\nu\phi\partial^\mu \partial^\nu\phi]+\widetilde{F}_{\mu\nu}\widetilde{F}^{\mu\rho}[ \partial^\nu\phi\partial_\rho\phi\Box\phi \nonumber \\ -\frac{1}{2}\partial_\alpha\phi\partial^\alpha(\partial^\nu\phi\partial_\rho\phi) +(\partial\phi\cdot\partial\phi)\partial^\nu\partial_\rho\phi-\frac{1}{2}\partial_\rho \phi\partial^\nu(\partial\phi\cdot\partial\phi)]+ \nonumber \\ \widetilde{F}_{\mu\nu}\widetilde{F}^{\mu\rho}\widetilde{F}_{ \alpha\beta}\widetilde{F}^{\alpha\gamma}[\partial^\beta\phi\partial_\gamma\phi\partial^\nu \partial_\rho\phi-\frac{1}{2}\partial_\rho\partial^\nu(\partial^\beta\phi\partial_\gamma \phi)]=0. \end{eqnarray} The solution of this equation can be (\ref{e:phisol1}) or (\ref{e:phisol2}) with the momenta satisfying \begin{equation} k^2=(\widetilde{F}_{\mu\nu}k^\nu)(\widetilde{F}^{\mu\rho} k_\rho)+[1+\textbf{B}^2-\textbf{E}^2-(\textbf{E}\cdot \textbf{B})^2]\alpha, \end{equation} which is similar to the results on D$2$-branes. To guarantee that the massless and massive scalars propagate no faster than light, the momenta $k_\mu$ of the field $\phi$ must be restricted to particular values. \subsubsection{Electromagnetic waves} Let us switch on a gauge field background with a single component $A_3$ along the $x^3$ direction on the D$3$-brane. To satisfy the Lorentz gauge, we still assume that $A_3$ is homogeneous along the $x^3$ direction, i.e., $\partial^3A_3=0$.
Then the non-vanishing components of the field strength are: \begin{eqnarray} F_{03}=\partial_0A_3=E_3, & F_{13}=\partial_1A_3=B_2, & F_{23}=\partial_2A_3=-B_1. \end{eqnarray} So $F_{\mu\nu}\widetilde{F}^{\mu\nu}=0$. For convenience, we denote $A_3$ by $A$ and $m=0,1,2$ in what follows. Setting $\phi=0$, the DBI actions (\ref{e:3ddbi}) and (\ref{e:3dphidbi}) become the DBI action for the gauge field in $d=4$ dimensions. Then the gauge field equation (\ref{e:3demeq}) takes the same form as Eq.\ (\ref{e:2dbiemeq}) but with $m=0,1,2$, and correspondingly the solution has the same form as (\ref{2demsol}), describing a beam of planar electromagnetic waves propagating along the $x^3$ direction. This solution also satisfies the Maxwell equation $\partial^\mu\partial_\mu A=0$. We now include the nontrivial field $\phi$ in the discussion. The components of $G_\mu$ are \begin{eqnarray} G_0=\partial_1A\partial_2\phi-\partial_2A\partial_1\phi, & G_1=\partial_0A\partial_2\phi-\partial_2A\partial_0\phi, \nonumber \\ G_2=\partial_1A\partial_0\phi-\partial_0A\partial_1\phi, & G_3=0. \end{eqnarray} We can expect that the solutions of $A$ and $\phi$ in Eq.\ (\ref{e:3demeq}) should be similar to those on D2-branes except that we are considering the case with one more dimension. From the discussion on D2-branes, we have learned that the coupled solutions of $A$ and $\phi$ would lead to $G^2=0$. So what we need to do is check whether the conditions $G_\mu=0$ can give the electromagnetic wave solution and the tachyon profile solution to their equations of motion (\ref{e:3demeq}) and (\ref{e:3dphieq}). Assuming $G_\mu=0$, Eq.\ (\ref{e:3demeq}) and Eq.\ (\ref{e:3dphieq}) can be reduced to \begin{equation} \partial^m\left(\frac{\partial_mA}{y}\right)=0, \end{equation} \begin{equation} \partial^\mu\left(\frac{\partial_\mu\phi}{y}\right) +(1+\partial^mA\partial_mA)\frac{\alpha\phi}{y}=0, \end{equation} where $y=\sqrt{(1+\partial^mA\partial_mA)U+\partial\phi\cdot\partial\phi}$.
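That proportional world-volume momenta make the coupling components vanish can be checked symbolically. The sketch below is illustrative: it sets the plane-wave amplitudes to one (since each $G_\mu$ is bilinear in the fields) and writes $k_m=\lambda q_m$ for $m=0,1,2$; index placement is omitted because only the proportionality of the derivatives matters for the antisymmetric combinations:

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols("x0 x1 x2 x3", real=True)
q0, q1, q2, lam = sp.symbols("q0 q1 q2 lam", real=True)
k3 = sp.symbols("k3")

# Plane waves with proportional momenta k_m = lam * q_m along x^0, x^1, x^2.
A = sp.exp(sp.I * (q0*x0 + q1*x1 + q2*x2))
phi = sp.exp(sp.I * (lam*q0*x0 + lam*q1*x1 + lam*q2*x2 + k3*x3))

def d(f, v):
    return sp.diff(f, v)

# The three non-trivial components of G_mu listed above:
G0 = d(A, x1)*d(phi, x2) - d(A, x2)*d(phi, x1)
G1 = d(A, x0)*d(phi, x2) - d(A, x2)*d(phi, x0)
G2 = d(A, x1)*d(phi, x0) - d(A, x0)*d(phi, x1)

print([sp.simplify(g) for g in (G0, G1, G2)])  # [0, 0, 0]
```

Each component is proportional to $q_m k_n - q_n k_m$, which vanishes identically under the momentum coupling condition, consistent with the reduction of the equations of motion above.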
The two equations respectively give \begin{eqnarray} [U(1+\partial_mA\partial^mA)+\partial\phi\cdot\partial\phi]\partial^n\partial_nA- \partial_nA[\frac{1}{2}U\partial^n(\partial_mA\partial^mA) \\ \nonumber +\alpha\phi(1+\partial_mA \partial^mA)\partial^n\phi+\frac{1}{2}\partial^n(\partial\phi\cdot\partial\phi)]=0, \end{eqnarray} \begin{eqnarray} U(1+\partial_mA\partial^mA)[\Box\phi+\alpha\phi(1+\partial_mA\partial^mA)]- \frac{1}{2}U\partial\phi\cdot\partial(\partial_mA\partial^mA) \nonumber \\ +[(\partial\phi\cdot\partial\phi)\Box\phi-\partial_\mu\phi\partial_\nu\phi \partial^\mu\partial^\nu\phi]=0. \end{eqnarray} The solutions of $A$ and $\phi$ from the above equations of motion can be given respectively: \begin{equation}\label{e:3demphisol} A=A_+e^{iq_mx^m}+A_-e^{-iq_mx^m}, \textrm{ }\textrm{ } \textrm{ } q_0^2=q_1^2+q_2^2, \end{equation} \begin{equation}\label{e:3dphiemsol} \phi(x^\mu)=\phi_+e^{ik_\mu x^\mu}+\phi_-e^{-ik_\mu x^\mu}, \textrm{ }\textrm{ } \textrm{ }\frac{k_0}{q_0}=\frac{k_1}{q_1} =\frac{k_2}{q_2}, \textrm{ }\textrm{ }\textrm{ } k_3^2=\alpha, \end{equation} where the momentum coupling relation in Eq.\ (\ref{e:3dphiemsol}) comes from the condition $G_\mu=0$. Thus, the coupling relation is similar to that on D$2$-branes. (i) For a massless scalar $\phi$ with $\alpha=0$, the solution (\ref{e:3dphiemsol}) can be real and so is valid. The solution and momentum coupling relation indicate that the massless scalar $\phi$ propagates together with the gauge field $A$, which describes a D$3$-brane fluctuating in an oscillating mode in one of the transverse directions. (ii) For a tachyon $\phi$ with $\alpha=\beta^2$, a real solution can exist when $k_0$, $k_1$ and $k_2$ are all real since $k_3=\sqrt{\alpha}$ is real. But in this case the field $\phi$ describes an oscillating scalar that propagates faster than light. Another possibility for a real solution is to set $k_0=k_1=k_2=0$. Then we can get a stationary tachyon kink solution.
(iii) For a massive scalar with $\alpha=-\gamma^2$, a real solution can be obtained when $k_0$, $k_1$ and $k_2$ are all imaginary since $k_3$ is imaginary. But the solution indicates that the field grows with time and so is unstable. In the case of $k_0=k_1=k_2=0$, we can also get a real stationary solution. \section{Conclusions} \label{sec:conclusionsch7w} We have shown that the equations of motion from the DBI effective action for a tachyon or an ordinary scalar field admit exact classical solutions if we choose appropriate potentials and field redefinitions. These solutions can also be obtained from the DBI action including the world-volume massless fields, which makes it possible to investigate the exact coupling relation between a general scalar $\phi$ (it can be a tachyon or an ordinary scalar) and the massless fields on a D-brane. The solution of an arbitrary field obtained from the DBI action including multiple fields is simply the one obtained from the DBI action with all other fields vanishing, except that in the former case there is a momentum coupling constraint, which comes from the vanishing of the coupling terms in the DBI action. The main results we obtained from these exact solutions and momentum coupling relations include: (i) For a tachyon, the obtained spacetime-dependent solution can only be complex, which is invalid since we are considering the DBI effective theory of a real field. But in the purely time-dependent case or purely space-dependent case, there exist real solutions. For an ordinary scalar, we find real-valued spacetime-dependent solutions corresponding to travelling waves with arbitrary amplitude. (ii) In the case of coupling to the massless scalar fields $Y^I$ in the directions transverse to the D-brane, the momentum coupling conditions between $\phi$ and $Y^I$ admit solutions in which the tachyon or massive scalar field $\phi$ couples to uniformly moving $Y^I$, but not to oscillating ones.
The solution of $\phi$ implies that the effective mass of the tachyon or massive scalar $\phi$ appears to decrease due to the uniform motion of $Y^I$ (or of the whole D-brane, since $Y^I$ describe the fluctuations of the D-brane). The amount of mass reduction depends on the magnitude of the velocity of the whole D-brane. The values of its components are allowed to change, i.e., the direction of the velocity of the whole D-brane can vary. At a critical value of the velocity, the tachyon or massive scalar field $\phi$ even loses all its mass and becomes a massless scalar. This mass loss implies that unstable D-branes decay slower when moving faster and that the massive scalar oscillates less frequently on faster-moving D-branes. (iii) In the case of coupling to gauge fields, we considered three situations. (a) In the space-independent case on D-branes of arbitrary dimensions or on D-strings, the effective mass of the general scalar $\phi$ coupled to constant electromagnetic fields also appears to decrease. The amount of mass reduction depends on the ratio $r$ of the energy densities of the electromagnetic fields. At the critical value $r=1$, the general scalar $\phi$ even loses all its mass. The solution of $\phi$ indicates that the electric fields tend to reduce, while the magnetic fields tend to increase, its effective mass. For the tachyon field, this means that the electric fields tend to slow down, while the magnetic fields tend to expedite, the decay process of unstable D-branes. (b) In the spacetime-dependent case on D$p$-branes with $p\geq2$, the momenta of the general scalar coupled to constant electromagnetic fields can only vary on a restricted surface in momentum space that is determined by the components of the electromagnetic fields. In order to keep the massless and massive scalars propagating no faster than light, their momenta must satisfy some extra constraints.
(c) On D$p$-branes with $p\geq2$, the gauge fields can vary as a beam of propagating electromagnetic waves. When the solution of $\phi$ is real, the tachyon field $\phi$ is stable, behaving in an oscillating mode, and the massive scalar field $\phi$ is unstable, behaving in a growing mode. In the presence of propagating electromagnetic waves, real stationary kink solutions of $\phi$ can also exist. For a massless scalar $\phi$, the real solution indicates that it propagates together with the gauge field $A$. That is, D-branes carrying a beam of propagating electromagnetic waves fluctuate in an oscillating mode that is related to the oscillating mode of the electromagnetic waves. \section*{Acknowledgements\markboth{Acknowledgements}{Acknowledgements}} We would like to thank M. Hindmarsh for many useful discussions. \newpage \bibliographystyle{JHEP}
\section{Introduction} As quantum computing and deep learning have recently begun to draw attention, notable research achievements have been pouring in over the past decades. In the field of deep learning, problems that were long considered inherent limitations, such as gradient vanishing, local minima, and learning inefficiency in large-scale parameter training, are gradually being conquered~\cite{pieee202105park}. On the one hand, innovative new deep learning algorithms such as the quantum neural network (QNN), the convolutional neural network (CNN), and the recurrent neural network (RNN) are completely changing the way various kinds of data are processed. Meanwhile, the field of quantum computing has also undergone rapid development in recent years. Quantum computing, which for a long time was recognized only for its potential, has opened up a new era of enormous possibilities with the recent advances in variational quantum circuits (VQC). The surprising potential of variational quantum algorithms was made clear by solving various combinatorial optimization problems and the intrinsic energy problems of molecules, which were difficult to solve using conventional methods, and further extensions are being considered to design machine learning algorithms using quantum computing. Among them, the field of quantum deep learning is growing rapidly, inheriting the achievements of existing deep learning research. Accordingly, numerous notable achievements related to quantum deep learning have been published, and active follow-up studies are being conducted at this time. In this paper, we first briefly introduce the background knowledge and basic principles of quantum deep learning and survey the current research directions. We then discuss the various directions and challenges of future research in quantum deep learning.
\subsection{Quantum Computing} Quantum computers use qubits as the basic units of computation, which represent a superposition state between $|0\rangle$ and $|1\rangle$~\cite{app20choi,ictc19choi,icoin20choi,ictc20oh}. A single-qubit state can be represented as a normalized two-dimensional complex vector, i.e., \begin{equation} \label{eq:qubit} |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \quad |\alpha|^2 + |\beta|^2 = 1, \end{equation} where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of observing $|0\rangle$ and $|1\rangle$ from the qubit, respectively. This can also be represented geometrically using polar coordinates $\theta$ and $\phi$, \begin{equation}\label{eq:bloch} |\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\phi}\sin(\theta/2)|1\rangle, \end{equation} where $0\leq\theta\leq\pi$ and $0\leq\phi<2\pi$. This representation maps a single-qubit state onto the surface of the 3-dimensional unit sphere, which is called the Bloch sphere. A multi-qubit system can be represented as the tensor product of $n$ single qubits, which exists as a superposition of $2^n$ basis states from $|00...00\rangle$ to $|11...11\rangle$. Quantum entanglement appears as a correlation between different qubits in such a system. For example, in the 2-qubit system $\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$, the observation of the first qubit directly determines that of the second qubit. These systems are controlled by quantum gates in a quantum circuit to perform a quantum computation for its purpose~\cite{icoin21choi,icoin21oh}. Quantum gates are unitary operators mapping one qubit system into another, and, as in classical computing, it is known that every quantum gate can be factorized into a combination of a few basic operators, such as the rotation operator gates and the CX gate~\cite{electronics20choi}.
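The single-qubit representation in Eqs.~(\ref{eq:qubit}) and (\ref{eq:bloch}) can be illustrated with a short numerical sketch; the amplitudes below are an arbitrary example, not from the paper:

```python
import numpy as np

# A normalized single-qubit state |psi> = alpha|0> + beta|1> and its
# Bloch-sphere angles (theta, phi); the values are an arbitrary example.
alpha, beta = 0.6, 0.8j
assert abs(abs(alpha)**2 + abs(beta)**2 - 1.0) < 1e-12  # normalization

p0, p1 = abs(alpha)**2, abs(beta)**2     # measurement probabilities
theta = 2 * np.arccos(abs(alpha))
phi = np.angle(beta) - np.angle(alpha)   # relative phase (global phase removed)

# Reconstruct the state (up to a global phase) from the Bloch angles:
psi = np.array([np.cos(theta/2), np.exp(1j*phi) * np.sin(theta/2)])
print(np.allclose(psi, [alpha, beta]))   # True
```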
The rotation operator gates $R_x(\theta), R_y(\theta), R_z(\theta)$ rotate a qubit state on the Bloch sphere around the corresponding axis by $\theta$, and the CX gate entangles two qubits by flipping one qubit state if the other is $|1\rangle$. These quantum gates exploit quantum superposition and entanglement to gain an advantage over classical computing, and it is well known that quantum algorithms can obtain an exponential computational gain over existing algorithms in certain tasks such as prime factorization~\cite{shor1999polynomial}. \section{Quantum Deep Learning} \subsection{Variational Quantum Circuits (VQC)} A variational quantum circuit (VQC) is a quantum circuit using rotation operator gates with free parameters to perform various numerical tasks, such as approximation, optimization, and classification. An algorithm using a variational quantum circuit is called a variational quantum algorithm (VQA); it is a classical-quantum hybrid algorithm because its parameter optimization is often performed by a classical computer. Owing to its universal function-approximation property~\cite{biamonte2021universal}, many algorithms using VQCs~\cite{cerezo2020variational} have been designed to solve various numerical problems~\cite{farhi2014quantum,kandala2017hardware,app20choi,electronics20choi,apwcs21kim}. This flow has led to many applications of VQAs in machine learning, typically by replacing the artificial neural network of an existing model with a VQC~\cite{schuld2019quantum,cong2019qcnn,bausch2020recurrent,dong2008quantum}. A VQC is similar to an artificial neural network in that it approximates functions through parameter learning, but it differs owing to several characteristics of quantum computing. Since all quantum gate operations are reversible linear operations, quantum circuits use entanglement layers instead of activation functions to obtain multilayer structures.
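The interplay of rotation gates and the CX gate described above can be checked with a minimal state-vector sketch (matrix conventions as in standard textbooks; the example itself is ours, not from the paper): applying $R_y(\pi/2)$ to the first qubit of $|00\rangle$ and then CX produces the entangled Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$ mentioned earlier.

```python
import numpy as np

def Ry(theta):
    """Rotation about the y axis of the Bloch sphere."""
    c, s = np.cos(theta/2), np.sin(theta/2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
# CX flips the target qubit when the control qubit is |1>.
CX = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)

# Ry(pi/2) on the first qubit, then CX, acting on |00>:
state = CX @ np.kron(Ry(np.pi/2), I2) @ np.array([1.0, 0.0, 0.0, 0.0])
# The entangled Bell state (|00> + |11>)/sqrt(2):
print(np.allclose(state, [1/np.sqrt(2), 0, 0, 1/np.sqrt(2)]))  # True
```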
These VQCs are called quantum neural networks, and this paper examines them through a classification according to their structure and characteristics. \subsection{Quantum Neural Networks} \begin{figure}[htp] \centering \includegraphics[width=1\columnwidth]{QNN.PNG} \caption{Illustration of QNN with the input $|\psi\rangle$, the parameter $\theta$ and linear entanglement structure.} \label{fig:QNN} \end{figure} In this section, we demonstrate how a basic quantum neural network (QNN) works with the simple example described in Fig.~\ref{fig:QNN}. A QNN processes data as follows. First, the input data is encoded into the corresponding qubit state of an appropriate number of qubits. Then, the qubit state is transformed through the parameterized rotation gates and entangling gates for a given number of layers. The transformed qubit state is then measured by obtaining the expected value of a Hamiltonian operator, such as the Pauli operators. These measurements are decoded back into the form of appropriate output data. The parameters are then updated by an optimizer such as the Adam optimizer. A neural network constructed in the form of a VQC can perform various roles in various forms, which will be explored below as quantum neural networks. \subsubsection{Quantum Convolutional Neural Networks} \begin{figure}[htp] \centering \includegraphics[width=1\columnwidth]{QCNN.PNG} \caption{Illustration of QCNN with the input $|\psi\rangle$, the parameter $\theta$ with single convolution and pooling layer.} \label{fig:QCNN} \end{figure} The quantum convolutional neural network (QCNN) was proposed in \cite{cong2019qcnn}, implementing the convolution layer and pooling layer on quantum circuits. According to the previous research results in \cite{ictc20oh,garg2020advances}, the QCNN circuit computation proceeds as follows. The first step is the same as in any other QNN model: encoding the input data into a qubit state with rotation operator gates.
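The generic QNN pipeline just described (encode, parameterized layers with entanglement, measure, decode) can be sketched as a small classical state-vector simulation. The gate choices, layer count, and parameter values below are illustrative assumptions, not the specific circuit of Fig.~\ref{fig:QNN}:

```python
import numpy as np

# Minimal 2-qubit "QNN" forward pass as a state-vector simulation:
# angle-encode the input, apply parameterized Ry layers entangled by CX,
# and decode by measuring <Z> on the first qubit.

def Ry(theta):
    c, s = np.cos(theta/2), np.sin(theta/2)
    return np.array([[c, -s], [s, c]])

CX = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)
Z1 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Pauli-Z on the first qubit

def qnn(x, params):
    state = np.array([1.0, 0.0, 0.0, 0.0])               # |00>
    state = np.kron(Ry(x), Ry(x)) @ state                # encoding layer
    for t1, t2 in params:                                # variational layers
        state = CX @ np.kron(Ry(t1), Ry(t2)) @ state
    return state @ Z1 @ state                            # <Z> expectation

out = qnn(0.3, params=[(0.1, -0.2), (0.4, 0.05)])
print(-1.0 <= out <= 1.0)  # True: an expectation value of Pauli-Z
```

In a training loop, a classical optimizer such as Adam would adjust `params` to fit the decoded outputs to target data.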
Then the convolution layer with quasi-local unitary gates filters the input data into a feature map, and the pooling layer with controlled rotation operators downsizes the feature map. By repeating this process sufficiently often, the fully connected layer acts on the qubit state as in classical CNN models. Finally, the measurement of the qubit state is decoded into output data of the desired size. The circuit parameters are updated with a gradient-descent-based optimizer after each measurement. Unfortunately, in the current quantum computing environment~\cite{preskill2018quantum}, it is difficult for a QCNN to perform better than existing classical CNNs. However, it is expected that QCNNs will be able to obtain sufficient computational gains over their classical counterparts in a future quantum computing environment where larger-scale quantum calculations are possible~\cite{cong2019qcnn,ictc20oh}. \section{Future Work Directions and Challenges} \subsection{Applications of Quantum Deep Learning to Reinforcement Learning} There are many research results applying deep learning to reinforcement learning to derive optimal actions from a complex state space~\cite{mnih2013playing,twc201912choi,tvt201905shin,tvt202106jung}. However, reinforcement learning research using quantum deep learning~\cite{dong2008quantum,chen2020variational,jerbi2021variational} is still in its infancy. The current approach is to replace the policy training network, a deep neural network in existing methods, with a quantum neural network, but there remains room for many algorithms applying the various ideas of classical deep reinforcement learning research. In particular, if it is proved that quantum computational gains can be obtained through QNNs in situations of high computational complexity due to a complex Markov decision process environment, quantum reinforcement learning will open a new horizon for reinforcement learning research.
\subsection{Applications of Quantum Deep Learning to Communication Networks} QNN and quantum reinforcement learning algorithms can be used in various research fields, and this paper considers their applications in terms of communications and networks. In terms of the acceleration of computation in fully distributed platforms, e.g., blockchain~\cite{isj202003saad,isj2021boo}, QNNs can be used. In addition, various advanced communication technologies such as the Internet of Things (IoT)~\cite{jsac201811dao,isj2021dao}, millimeter-wave networks~\cite{tvt2021jung,jcn201410kim}, caching networks~\cite{tmc202106malik,twc201910choi,twc202012choi,twc202104choi}, and video streaming/scheduling~\cite{ton201608kim,jsac201806choi,tmc201907koo,tmc2021yi} are good applications of QNN and quantum reinforcement learning algorithms. \subsection{Challenges} \subsubsection{Gradient Vanishing} The vanishing gradient is as crucial a problem in quantum deep learning as in classical deep learning. The disappearance of gradients while backpropagating through many hidden layers has been considered a chronic problem in deep neural network computation. Since quantum neural networks, like classical ones, also train their parameters with gradient descent methods, they must solve the same problem. Classical deep learning models solve this problem by utilizing an appropriate activation function, but quantum deep learning does not use activation functions, so a different solution is eventually needed. A previous study~\cite{mcclean2018barren} called this quantum gradient vanishing phenomenon the barren plateau, proving that as the number of qubits increases, the probability of encountering barren plateaus increases exponentially. This can be avoided by setting good initial parameters in a small-scale QNN, but dealing with this problem is unavoidable when designing a large-scale QNN. It is an open problem for which a solution is not yet clear.
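As context for the gradient-based training mentioned above: on quantum hardware, the gradient of a rotation-gate parameter is commonly obtained with the parameter-shift rule rather than numerical finite differences. The one-parameter sketch below (our example, not from the paper) uses the circuit $R_y(\theta)|0\rangle$ measured in $Z$, for which $f(\theta)=\cos\theta$, and shows that the shifted-evaluation formula reproduces the exact gradient:

```python
import numpy as np

# Parameter-shift rule: for f(theta) = <Z> after Ry(theta)|0>, i.e.
# f(theta) = cos(theta), the exact gradient equals
# [f(theta + pi/2) - f(theta - pi/2)] / 2, with no finite-difference error.

def f(theta):
    state = np.array([np.cos(theta/2), np.sin(theta/2)])  # Ry(theta)|0>
    return state @ np.diag([1.0, -1.0]) @ state           # <Z> = cos(theta)

theta = 0.7
grad_shift = (f(theta + np.pi/2) - f(theta - np.pi/2)) / 2
print(np.isclose(grad_shift, -np.sin(theta)))  # True
```

A barren plateau corresponds to this gradient (and its variance over random parameters) shrinking exponentially as circuits grow, which is why good initialization matters for small QNNs.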
\subsubsection{Near-Term Device Compatibility} Noisy intermediate-scale quantum (NISQ)~\cite{preskill2018quantum}, which refers to near-term quantum devices with few qubits and large computational errors, has already become a familiar term to quantum researchers. Many algorithms designed to realize quantum computational gains do not work at all in this NISQ environment and are expected to be implemented only decades from now. For example, a practical implementation of Shor's algorithm requires at least thousands of qubits even without an error correction process, while current quantum devices have only a few tens of qubits with non-negligible computational error rates of several percent. However, due to their relatively small circuit depth and qubit requirements, VQAs and the QNNs based on them are tolerant of these environmental constraints. Nevertheless, in order to increase the data processing capability of quantum neural networks, it is necessary to consider near-term device compatibility. For example, using many multi-qubit controlled gates for quantum entanglement is theoretically thought to increase the performance of a QNN, but it entails a large error rate and a complicated error correction process. Therefore, it is essential in quantum deep learning research to design algorithms with these tradeoffs in mind. \subsubsection{The Quantum Advantage} The term quantum supremacy may lead to the illusion that quantum algorithms are always better than classical algorithms performing the same function. However, given the inherent limitations of quantum computing, its benefits can only be realized through well-thought-out algorithms under certain circumstances. In fact, among variational quantum algorithms in particular, only a few have proven their quantum advantage, and only in limited situations.
Due to the universal approximation property of QNN, it is known that quantum deep learning can perform most of the computations performed in classical deep learning~\cite{biamonte2021universal}. Nevertheless, if one proceeds simply on this fact without considering where the quantum gain comes from, the result may be far less efficient than the existing classical algorithm. Therefore, in designing a new QNN-based deep learning algorithm, it is necessary to justify it by articulating its advantages over the corresponding classical models. \section{Conclusion}\label{sec:sec5} This paper introduces the basic concepts of quantum neural networks and their application scenarios in various fields. Furthermore, this paper presents research challenges and potential solutions for quantum neural network computation. \section*{Acknowledgment} This work was supported by the National Research Foundation of Korea (2019M3E4A1080391). Joongheon Kim is a corresponding author of this paper. \bibliographystyle{IEEEtran}
\section{\label{sec:Intro} Introduction} Phonons are the quasiparticles of lattice vibrations and therefore one of the fundamental excitations in a solid body. Of special interest are surface acoustic waves (SAWs), which are bound to the surface of a body. These waves find various applications in industry and research \cite{Delsing2019,Edmonson2004,Mauricio2005}. Since acoustic waves propagate about 100,000 times more slowly than light, they are used in miniaturized devices at GHz frequencies. Furthermore, SAWs can be excited in a controlled and efficient manner by so-called interdigital transducers (IDTs) on piezoelectric substrates. Therefore, devices based on SAWs are essential building blocks for RF technology and widely used in current wireless devices for telecommunication (LTE and 5G), sensors \cite{Polewczyk2020,Kuszewski2018} or radar systems. Besides these technological applications, SAWs are also a subject of fundamental research. Due to the advances in nano-lithography, higher frequencies and smaller wavelengths can be achieved. This opens the possibility to efficiently couple SAWs to other systems (e.g., qubits\cite{Satzinger2018, Manenti2017}, spin waves \cite{Verba2018,Dreher2012} and other hybrid systems \cite{Whiteley2019}). In this context, since SAWs can carry phonon angular momentum, the conversion of angular momentum between the phonon and spin systems has recently attracted significant attention \cite{Sasaki,Long2018}.\\ Techniques based on IDTs working as sender and receiver offer a very sensitive way to probe the transmitted power. However, this technique provides only limited information about the spatial distribution of the acoustic field in between the IDTs. This field, however, is an important basis for many phenomena. It can be mapped by various methods like atomic force microscopy \cite{Hesjedal2001}, scanning electron microscopy \cite{Eberharter1980} and x-ray measurements \cite{Sauer1999}.
In this paper, we want to emphasize micro-focussed Brillouin light scattering spectroscopy ($\upmu$BLS) as an additional method for mapping SAWs in the GHz regime. This method is already well established in the field of magnonics for probing the spatial distribution of spin waves on the micrometer scale \cite{Sebastian2015}. \section{\label{sec:Methods}Methods} Brillouin light scattering is the inelastic scattering of photons by quasiparticles such as phonons. During the BLS process, momentum and energy of the scattered photons are changed. Wavevector-resolved BLS spectroscopy is a standard tool to study thermally\cite{Wittkowski2000} and externally excited\cite{Vincent2005,Kruger2004} phonons. By varying the angle of incidence, the probed wavevector can be selected, whereby the dispersion relation can be mapped. However, the spatial resolution is limited by the laser spot size, which is usually some tens of $\upmu$m. Hence, this technique is not suited to investigate phonons on the micro- and nanoscale.\\ For this reason, Carlotti et \textit{al}. \cite{Carlotti2018} advocated the use of micro-focussed BLS, a technique developed in the field of magnonics \cite{Chumak2015}. A scheme of the setup is shown in Fig.\ref{fig:fig1} (a). While the physical principle of the scattering process stays the same, the light of a solid state laser ($\lambda=\unit[532]{nm}$) is strongly focussed onto the sample by a microscope objective. In this study, the objective has a numerical aperture of $NA=0.75$ and a magnification of 100x. This allows a spatial resolution down to $\unit[250]{nm}$ \cite{Sebastian2015}. Due to the focussing, the angle of incidence is no longer well-defined. Therefore, the BLS signal averages over a large wave-vector regime. The shift in frequency of the reflected photons can be measured by a Tandem Fabry-P\'erot interferometer (TFPI). The intensity of the scattered light is proportional to the power of the acoustic wave.
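The wave-vector range collected by such an objective can be estimated with a short back-of-the-envelope sketch (our own illustration, using only the $\lambda$ and $NA$ quoted above): in backscattering the transferred wavevector is doubled, and the numerical aperture limits the collected in-plane component, so $k_{\max} \approx 2\,(2\pi/\lambda)\,NA$.

```python
import math

lam = 532e-9          # laser wavelength (m), from the setup above
NA = 0.75             # numerical aperture of the objective
k_light = 2 * math.pi / lam

# Backscattering doubles the transferred wavevector; the NA limits
# the collected in-plane component.
k_max = 2 * k_light * NA            # rad/m
lambda_min = 2 * math.pi / k_max    # smallest accessible SAW wavelength

print(f"k_max      = {k_max / 1e6:.1f} rad/um")
print(f"lambda_min = {lambda_min * 1e9:.0f} nm")

# The SAW studied below (lambda = 680 nm) lies well inside this range.
k_saw = 2 * math.pi / 680e-9
print(k_saw < k_max)  # -> True
```

With these numbers $k_{\max}\approx 17.7$~rad/$\upmu$m, i.e., SAW wavelengths down to roughly $\lambda/(2\,NA)\approx\unit[355]{nm}$ are accessible, consistent with the sub-micrometer wavelengths investigated here.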
Furthermore, the polarization of the backscattered light can be analysed. The polarization entering the interferometer can be chosen by a $\lambda$/2-plate in front of it. Since the interferometer itself is optimized for only one polarization axis, this makes it possible to investigate the polarization of the scattered light. For investigations at the microscale, it is essential to control the position of the laser spot on the sample. Hence, an automatized positioning system combined with microscopic imaging is used. Both the data acquisition by the TFPI and the positioning system were controlled by central programming interfaces developed by THATec Innovation GmbH.\\ The investigated samples consist of single SAW resonators, which have been fabricated on a commercial GaN/Si wafer (produced by NTT-AT Japan). Undoped GaN ($\unit[1]{\upmu m}$) is grown on a Si substrate with a $\unit[0.3]{\upmu m}$ buffer layer. This is a so-called ``fast on slow'' structure, where confined modes, like Sezawa waves, can exist beside the Rayleigh mode. Three different IDT structures have been produced by electron beam lithography and conventional lift-off techniques. Their finger/interdigit widths are $w=\unit[170]{nm},~\unit[200]{nm}~\mathrm{and}~\unit[250]{nm}$. The IDTs are contacted by a PicoProbe connected to an RF generator. \begin{figure}[h] \begin{center} \scalebox{1}{\includegraphics[width=8.0 cm, clip]{Fig1neu1}} \end{center} \caption{\label{fig:fig1} a) Schematic BLS setup. The laser light is focussed by a microscope objective on the sample. The polarization of the backscattered light can be rotated by a $\lambda$/2-plate and is analysed by the polarization-sensitive Tandem Fabry-P\'erot interferometer. b) Normalized BLS intensity (logarithmic scale) for different excitation frequencies $f_{\mathrm{RF}}$ applied to the IDT. The square of the calculated excitation efficiency for the Rayleigh mode (blue) and Sezawa mode (red) is shown.
c) Two-dimensional time-averaged BLS intensity map of a SAW emitted from an IDT with $f_{\mathrm{RF}}=\unit[6.25]{GHz}$. The SAWs form a confined beam with a periodic interference pattern.} \end{figure} \section{\label{sec:Experiment}Experimental Results} An RF voltage applied to the IDT generates an alternating strain in the piezoelectric layer, and SAWs can be generated, which propagate away from the IDT. Fig.\ref{fig:fig1} (b) shows the time-averaged BLS intensity close to the IDT with a finger/interdigit width $w=\unit[170]{nm}$ as a function of the RF frequency $f_{\mathrm{RF}}$. The different maxima can be identified as the Rayleigh mode (R) at $\unit[6.25]{GHz}$ and the first Sezawa mode (S) at $\unit[7.0]{GHz}$. Furthermore, a pseudo-bulk mode (PB) has been found at higher frequencies. These three modes can also be observed for the other two devices with different finger/interdigit widths and therefore different wavevectors. The extracted frequencies for all three devices are shown in Fig.\ref{fig:fig3} (a). They correspond well to the dispersion relation that was obtained based on data presented by Mueller et \textit{al}. \cite{Muller2015} for a GaN layer on Si. Moreover, the square of the excitation efficiency has been calculated, based on the Fourier transform of the IDT's geometry \cite{Royer2000}, as a function of the wavevector $k$. Taking into account the dispersion relation, the excitation efficiency can be calculated as a function of the frequency for a given mode. This is shown in blue for the Rayleigh mode and in red for the first Sezawa mode. Both frequencies of the intensity maxima correspond well to the measured data. The noise level of the measurement has been added to the calculated curves.
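The Fourier-transform argument can be illustrated with a minimal sketch (our own illustration, not the authors' calculation; the number of finger pairs and the idealized electrode charge model are assumptions). With equal finger and gap widths $w$, the electrode pattern repeats every $4w$, so the excitation efficiency $\propto|\mathrm{FT}|^2$ peaks at the SAW wavelength $\lambda_0=4w$, i.e., $\unit[680]{nm}$ for the $w=\unit[170]{nm}$ device:

```python
import numpy as np

w_nm = 170                      # finger = gap width (nm), device from the text
period = 4 * w_nm               # one finger pair: +finger, gap, -finger, gap
n_pairs = 20                    # assumed number of finger pairs

i = np.arange(n_pairs * period)          # 1 nm sampling grid
phase = (i % period) / period
# Idealized electrode polarity: +1 / -1 under the two combs, 0 in the gaps.
pol = np.where(phase < 0.25, 1.0,
               np.where((phase >= 0.5) & (phase < 0.75), -1.0, 0.0))

eff = np.abs(np.fft.rfft(pol)) ** 2      # excitation efficiency ~ |FT|^2
freq = np.fft.rfftfreq(pol.size, d=1.0)  # cycles per nm
peak = 1 + np.argmax(eff[1:])            # skip the DC bin
lambda0 = 1.0 / freq[peak]               # peak wavelength in nm
print(lambda0)
```

For this symmetric polarity pattern the even harmonics cancel, so the spectrum contains the fundamental at $\lambda_0 = 4w$ plus weak odd harmonics, and the finite number of finger pairs sets the linewidth in $k$-space invoked in the interference discussion below.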
Please note that an absolute comparison of the calculated excitation efficiencies of the Rayleigh and Sezawa modes is not possible, since the model does not take into account the strain profile of the IDT, the amplitude profile of the SAW over the film thickness, and the surface sensitivity of the BLS. However, for the further discussion, it is important to notice that the excitation efficiency of the Sezawa mode at the Rayleigh mode resonance is not vanishing and vice versa.\\ An additional interesting parameter that has to be addressed is the polarization of the scattered light. It has been analyzed by a rotation of the $\lambda$/2-plate in combination with the polarization-sensitive TFPI. Rayleigh waves have been excited at $f=\unit[6.25]{GHz}$ and Sezawa waves at $f=\unit[6.9]{GHz}$. As a reference polarization, elastically scattered light from a laser mode has been used. Figure \ref{fig:fig0} shows that the measured BLS intensity follows a $\sin^2$ dependence on the $\lambda$/2-plate angle, indicating that the scattered light is linearly polarized and not changed in its polarization direction. This is in agreement with studies carried out with wavevector-resolved BLS\cite{Carlotti2018}. \begin{figure}[h] \begin{center} \scalebox{1}{\includegraphics[width=8.0 cm, clip]{PolRS}} \end{center} \caption{\label{fig:fig0} BLS intensity of Rayleigh waves a) and Sezawa waves b) as a function of the angle of the $\lambda$/2-plate in Fig.1. Zero angle corresponds to the reference polarization of elastically scattered light. The polarization of the backscattered light is not changed during the scattering process with Rayleigh waves or Sezawa waves.} \end{figure} As mentioned above, $\upmu$BLS is a scanning method, which allows the acoustic field at the surface of the sample to be measured. As an example, Fig.\ref{fig:fig1} (c) shows the time-averaged BLS intensity of emitted SAWs with a frequency of $\unit[6.25]{GHz}$.
Since the aperture of the IDT ($a=\unit[50]{\upmu m}$) is large compared to the wavelength of the SAW ($\lambda=\unit[680]{nm}$), the waves are emitted in a confined beam with no visible diffraction. Hence, the wave can be treated as a plane wave in the following explanation. \begin{figure}[h] \begin{center} \scalebox{1}{\includegraphics[width=8.0 cm, clip]{Fig2_final}} \end{center} \caption{\label{fig:fig2} a) Normalized time-averaged BLS intensity for an excitation frequency $f=\unit[6.25]{GHz}$ along the propagation direction extracted from Fig.\ref{fig:fig1}c). b) Normalized FFT of the BLS intensity. The wave-vector difference $\Delta k$ can be extracted from the position of the maximum.} \end{figure} It is important to note that the time-averaged BLS intensity oscillates along the propagation direction of the SAW with a periodicity $p=\unit[4.59]{\upmu m}$. For better visibility of this phenomenon, Fig.\ref{fig:fig2} (a) shows the BLS intensity summed over the width of the measured area. This periodic oscillation in intensity can be explained by the interference of co-propagating Rayleigh and Sezawa waves with the same frequency, but different wavevectors. This is possible due to the finite length of the IDT and the associated linewidth in k-space (see Fig~\ref{fig:fig1}~(b)).\\ We assume two plane waves $\Psi_i$: \begin{equation} \Psi_i = A_i \cos(k_{i}x-\omega t). \label{eqn:eqn1} \end{equation} The BLS intensity is proportional to the SAW intensity averaged over one oscillation period, which is expressed by: \begin{equation} \label{eqn:eqn3} \begin{split} I(x) & =\vert{\int{\Psi_R(x,t)+\Psi_S(x,t)\ dt }}\vert^2\\ & =A_R^2+A_S^2+{2{A_R A_S}}\cos[\Delta k x] \end{split} \end{equation} with \begin{equation} \Delta k = k_R-k_S = {{2\pi}\over p}. \label{eqn:eqn4} \end{equation} From the amplitude of the observed modulation the ratio between the amplitudes of the Rayleigh and Sezawa waves can be estimated to be $A_S/A_R \approx 0.5$.
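The last estimate can be reproduced with a short sketch (our own illustration; the inversion is elementary algebra on the interference expression above). Writing $r=A_S/A_R$, the fringe contrast of $I(x)$ is $C=2r/(1+r^2)$, which can be inverted on the branch $r<1$:

```python
import numpy as np

p = 4.59e-6                      # measured intensity period (m)
dk = 2 * np.pi / p               # wave-vector difference
print(dk)                        # -> about 1.37e6 rad/m

def contrast(r):
    """Fringe contrast of I(x) = A_R^2 + A_S^2 + 2 A_R A_S cos(dk x),
    with r = A_S / A_R."""
    return 2 * r / (1 + r ** 2)

def ratio_from_contrast(C):
    """Invert contrast -> amplitude ratio (branch r < 1)."""
    return (1 - np.sqrt(1 - C ** 2)) / C

# A modulation contrast of 0.8 corresponds to A_S/A_R = 0.5.
print(ratio_from_contrast(contrast(0.5)))  # approximately 0.5
```

In other words, the quoted $A_S/A_R\approx 0.5$ corresponds to a modulation depth of about 80\% of the mean intensity, while $\Delta k = 2\pi/p$ follows directly from the measured periodicity.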
This value is larger than one might estimate from the simple model shown in Fig. 1. However, as already explained before, the different excitation efficiencies together with the different BLS scattering cross sections for the two modes prevent a direct comparison. \\ \begin{figure}[h] \begin{center} \scalebox{1}{\includegraphics[width=8.0 cm, clip]{Fig3avr}} \end{center} \caption{\label{fig:fig3} a) Dispersion relation for SAWs in a $\unit[1.3]{\upmu m}$ thick GaN layer on Si substrate. From the BLS measurements for different excitation frequencies the maxima are evaluated and the positions are shown (black dots) for all tested devices (see Fig. \ref{fig:fig2} (b)). b) The wave-vector difference $\Delta k$ between the Rayleigh and Sezawa modes at a given frequency has been determined by BLS measurements along the propagation direction. The predicted wave-vector differences based on the dispersion relation are represented in green color.} \end{figure} The wave-vector difference $\Delta k$ has been extracted from the Fourier transform of the BLS intensity along the propagation direction, as shown in Fig.\ref{fig:fig2} (b). To confirm the interpretation of the two co-propagating and interfering waves, additional measurements at different excitation frequencies $f_{\mathrm{RF}}$ have been performed. To this end, the BLS intensity was mapped on a line along the propagation direction and the beating pattern was analyzed as shown above. All measurements showed similar oscillations. The resulting wave-vector differences are shown in Fig.\ref{fig:fig3} (b). In the investigated frequency range the wave-vector difference $\Delta k$ is nearly constant, which is in good agreement with the prediction based on the dispersion relation of the GaN-Si system. While the slopes of the Rayleigh and Sezawa branches differ for small wavevectors, they are comparable for waves with frequencies above $\unit[2]{GHz}$.
Deviations can be attributed to the fact that the value of the wave-vector difference is very sensitive to small changes of the dispersion.\\ \section{\label{sec:Conclusion}Conclusion} In this article we have used micro-focussed Brillouin light scattering spectroscopy to investigate the excitation and propagation of SAWs in a GaN layer on a Si substrate. In the first part, the excitation efficiency of the IDTs and the dispersion relation have been verified. We have found that Rayleigh and Sezawa waves are excited at the same frequency by an IDT structure. These waves interfere coherently while they propagate, and their time-averaged intensity is spatially modulated with a periodicity determined by their wave-vector difference at the excitation frequency. This pattern offers the opportunity to have a time-independent, space-modulated acoustic field with a well-defined wave-vector direction. \begin{acknowledgments} Financial support by the EU Horizon 2020 research and innovation program within the CHIRON project (contract no. 801055) is gratefully acknowledged. \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sect.1} The spectrum of bottomonium is very rich, with a large number of levels below the $B \bar B$ threshold. Among them are the three well-established $\Upsilon(n\,{}^3S_1)~(n=1,2,3)$ mesons \cite{ref.1} and the $1\,{}^3D_2$ state, discovered in \cite{ref.2}. There exist numerous studies of low-lying bottomonium levels, where different QCD motivated models are used \cite{ref.3,ref.4,ref.5,ref.6,ref.7,ref.8,ref.9}. However, a number of these levels have not been observed yet. Observation of the $D$-wave states, which lie below the open beauty threshold, is a difficult experimental task, as demonstrated by the CLEO experiment \cite{ref.2}, where, to discover the $1\,{}^3D_2$ level, four-photon cascade measurements in the $\Upsilon(3S)$ radiative decays had to be performed. In particular, neither the $1\,{}^3D_1$ state nor the members of the $2D$ multiplet, for which potential model calculations give masses around 10.45 GeV \cite{ref.4,ref.10}, i.e., below the $B\bar B$ threshold, have been observed as yet. One of the reasons is the very small dielectron width of a pure $n\,{}^3D_1$ state (for any $n$): here and in \cite{ref.10} values of $\sim 1$ eV are obtained. For that reason an observation of a pure $D$-wave vector state directly in $e^+e^-$ experiments seems to be impossible at the present stage. However, the situation may change for the $D$-wave vector states which lie above the open beauty threshold. For these bottomonium states the dielectron widths may become larger, as happens in the charmonium family, where the experimental dielectron width of $\psi(3770)$ (which is only 30 MeV above the $D\bar D$ threshold) is already ten times larger than for a pure $1\,{}^3D_1$ state \cite{ref.11,ref.12}. Moreover, the width of the $2\,{}^3D_1$ resonance $\psi(4160)$ is almost equal to that of $\psi(4040)$, which therefore cannot be considered a pure $3\,{}^3S_1$ state.
Such an increase of the dielectron width of a $D$-wave vector state, and at the same time a decrease of the width of an $S$-wave state, can occur if a rather large $S$---$D$ mixing between both states takes place \cite{ref.11,ref.12,ref.13}. A theoretical study of the $S$---$D$ mixing between vector states is simpler in bottomonium than in the charmonium family, since the experimental dielectron widths have now been measured for the six states $\Upsilon(nS)~(n=1,...,6)$ \cite{ref.1}. It is also essential that in the recent CLEO experiments the dielectron widths of the low-lying levels $\Upsilon(nS)~(n=1,2,3)$ and their ratios were measured with great accuracy \cite{ref.14}. These three levels can indeed be considered pure $S$-wave states, because for them the $S$---$D$ mixing is possible only via tensor forces, which give a very small mixing angle (see the Appendix). Then these pure $S$-wave states can be studied in the single-channel approach (SCA). In particular, we use here the well-developed relativistic string Hamiltonian (RSH) \cite{ref.15}. Moreover, just for these levels a comparison of the experimental and calculated dielectron widths and their ratios can be considered an important test of the theoretical approach and also of the calculated wave functions (w.f.) at the origin. There are not many theoretical studies of higher bottomonium states \cite{ref.10,ref.16,ref.17,ref.18}. Strictly speaking, for this task one needs to solve a many-channel problem, knowing the interactions within all channels and between them. Unfortunately, this program has not been realized yet, although some important steps in this direction have been taken recently in \cite{ref.19}, where a theory of the interactions between the channels due to a strong coupling to virtual (open) $B\bar B$ ($B_s\bar B_s$) channels was developed. In this paper, using the RSH for the low-lying states $\Upsilon(nS)~(n=1,2,3)$, we obtain a good description of the dielectron widths and their ratios.
After that we apply the same approach to higher bottomonium states (above the $B\bar B$ threshold), where the accuracy of our calculations becomes worse: in particular, within the SCA one cannot calculate the mass shift of a higher resonance, which can occur owing to the coupling to open channel(s). However, for the dielectron widths it is most important to define the w.f. at the origin with good accuracy. There exist several arguments in favor of the validity of the SCA for higher bottomonium states: \begin{enumerate} \item First, in charmonium this approximation gives the masses and the dielectron widths of $\psi(3770)$, $\psi(4040)$, $\psi(4160)$, and $\psi(4415)$ with good accuracy, providing a self-consistent description of their dielectron widths \cite{ref.12,ref.13}. \item Secondly, an open channel (e.g. $B\bar B$) can be considered as a particular case of a four-quark system $Q\bar Q q\bar q$, and, as shown in \cite{ref.20}, this open channel cannot significantly change the w.f. at the origin of heavy quarkonia, because the magnitude of a four-quark w.f. at the origin is two orders of magnitude smaller than for a $Q\bar Q$ meson. \item Finally, the masses of higher bottomonium states, calculated in the SCA, appear to be rather close to the experimental values, the difference between them being at most $50\pm 15$ MeV. \end{enumerate} For pure $n\,{}^3D_1$ bottomonium states (any $n$) our calculations give very small dielectron widths, $\leq 2$ eV, while for higher $D$-wave vector states the dielectron widths can increase owing to $S$---$D$ mixing through an open channel(s). Here an important point is that the mass of a higher $n\,{}^3D_1$ state $(n\geq 3)$ appears to be only 40--50 MeV larger than that of the $\Upsilon((n+1)S)$ state \cite{ref.4}, \cite{ref.10}, thus increasing the probability of the $S$---$D$ mixing between these resonances.
We will define the mixing angle here in a phenomenological way, as was done in charmonium \cite{ref.11}, \cite{ref.13}. Owing to the $S$---$D$ mixing, the dielectron widths of mixed $D$-wave bottomonium resonances appear to increase by two orders of magnitude and reach $\sim 100\pm 30$ eV, while the dielectron widths of the initially pure $n\,{}^3S_1$ resonances $(n=4,5,6)$ decrease. We also calculate the vector decay constants in bottomonium and briefly discuss the possibility of observing ``mixed $D$-wave'' resonances in $e^+e^-$ experiments. \section{Mass spectrum} \label{sect.2} The spectrum and the w.f. of the bottomonium vector states $(L=0,2)$ are calculated with the help of the RSH and a universal static potential from \cite{ref.21}. This Hamiltonian has been successfully applied to light \cite{ref.22} and heavy-light mesons \cite{ref.23,ref.24}, and also to heavy quarkonia \cite{ref.7,ref.25,ref.26}. In bottomonium the spin-averaged masses $M(nL)$ of the $nL$ multiplets are determined by a simpler mass formula than for other mesons, because it does not contain the self-energy and the string contributions, which in bottomonium are very small, $\leq 1$ MeV, and can be neglected \cite{ref.7}. As a result, the mass $M(nL)$ just coincides with the eigenvalue (e.v.) of the spinless Salpeter equation (SSE) or with the e.v. of the einbein equation, derived in the so-called einbein approximation (EA) \cite{ref.23}. Here we use the EA, because the $nS$-wave functions defined by the EA equation have an important advantage compared to the solutions of the SSE: they are finite near the origin, while the $nS$-w.f. of the SSE diverge for any $n$ and have to be regularized (e.g. as in \cite{ref.4}), introducing unknown parameters. At the same time the difference between the EA and SSE masses is small, $\leq 15$ MeV (the e.v. in the SSE are always smaller than the e.v. in the EA), and can be included in the theoretical error.
In Table~\ref{tab.1} the centroid masses $M(nL)$ are given both for the SSE and the EA, calculated for the same set of input parameters. We do not discuss here the singlet ground state $\eta_b$, recently discovered by BaBar \cite{ref.27}, because the hyperfine (HF) interaction introduces extra, not well-known parameters, while our goal here is to describe the bottomonium data without introducing extra parameters, using a universal potential which contains only fundamental parameters: the QCD constant $\Lambda$ and the string tension. The masses of $\Upsilon(n\,{}^3S_1)$ are very close to the centroid masses $M(nS)$ (with the exception of the ground state), because the HF splittings are small \cite{ref.26}: for higher radial excitations the difference between $M(n\,{}^3S_1)$ and the centroid mass is $\leq 4$ MeV for $n\geq 3$. Moreover, for the $D$-wave multiplets the fine-structure splittings are small \cite{ref.4}, and therefore the calculated centroid masses $M(nD)$ coincide with $M(n\,{}^3D_1)$ within the theoretical error. The RSH is defined by the expression from \cite{ref.15,ref.22}: \begin{equation} H_0=\frac{\textbf{p}^2+m_b^2}{\omega}+\omega+V_B(r). \label{eq.1} \end{equation} Here $m_b$ is the pole mass of the $b$ quark, for which the value $m_b=4.832$ GeV is used. This number corresponds to the current mass $m_b=4.235$ GeV, which is consistent with the conventional current $b$-quark mass of $4.20\pm 0.07$ GeV \cite{ref.1}. The static potential $V_B(r)$, defined below in Eq.~(\ref{eq.5}), carries the subscript B, which indicates that this potential was derived in background perturbation theory \cite{ref.15}. In (\ref{eq.1}) the variable $\omega$ can be defined in two ways: if the extremum condition is put on the Hamiltonian $H_0$, $\omega$ is equal to the kinetic energy operator, $\omega=\sqrt{\mathbf{p}^2+m_b^2}$. Substituting this operator $\omega$ into $H_0$, one arrives at the well-known SSE. However, the $S$-wave w.f.
of the SSE diverge near the origin, and their definition requires a regularization procedure, which introduces several additional parameters. Instead we prefer to use the EA, where the variable $\omega$ is determined from another extremum condition, put on the e.v. $M(nL)$. Then $\omega(nL)$ is not an operator anymore, but is equal to the matrix element (m.e.) of the kinetic energy operator and plays the role of a dynamical (constituent) quark mass. This constituent mass $\omega(nL)$ grows with increasing quantum numbers, and this fact appears to be very important for light and heavy-light mesons \cite{ref.23,ref.24}, while in bottomonium the difference between the dynamical mass $\omega_{nL}$ and the pole mass $m_b$ is not large, changing from $\sim 170$ MeV for the $1S$ ground state up to $\sim 300$ MeV for higher states, like $6S$. In the framework of the EA the w.f. of heavy-light mesons have been calculated and successfully applied to determine the pseudoscalar decay constants of the $D$, $D_s$, $B$, and $B_s$ mesons, giving good agreement with experiment \cite{ref.24}. It is of interest to note that in bottomonium the masses calculated in the EA and the SSE, and also in the nonrelativistic (NR) case (where $\omega_{\rm NR}(nL)= m_b$ for all states), do not differ much, even for higher states: such mass differences are $\leq 40$ MeV (see below and Tables \ref{tab.1} and \ref{tab.2}). Still, the w.f. at the origin calculated in the EA takes into account the relativistic corrections and gives better agreement with the experimental dielectron widths than the NR approach. In the EA the masses $M(nL)$ are defined by the mass formula: \begin{equation} M(nL)=\omega_{nL}+\frac{m_b^2}{\omega_{nL}}+ E_{nL}(\omega_{nL}), \label{eq.2} \end{equation} where $\omega(nL)$ and the e.v.
$E_{nL}$ have to be determined by solving two self-consistent equations \cite{ref.12,ref.23}, namely \begin{equation} \left[\frac{\textbf{p}^2}{\omega_{nL}}+V_B(r) \right]\varphi_{nL}(r) = E_{nL}~ \varphi_{nL}(r), \label{eq.3} \end{equation} and the equation \begin{equation} \omega^2_{nL}=m^2_b-\frac{\partial E_{nL}}{\partial \omega_{nL}}. \label{eq.4} \end{equation} In (\ref{eq.3}) we use for all mesons the universal static potential $V_B(r)$ from \cite{ref.7,ref.21}: \begin{equation} V_B(r) =\sigma(r)\, r - \frac43 \frac{\alpha_B(r)}{r}, \label{eq.5} \end{equation} with the following set of parameters: \begin{equation} \begin{array}{ll} m_b=4.832\,{\rm GeV}, & \Lambda_B(n_f=5)=0.335\,{\rm GeV},\\ M_B=0.95\,{\rm GeV}, & \sigma_0=0.178\,{\rm GeV}^2. \label{eq.6} \end{array} \end{equation} The QCD (vector) constant $\Lambda_B$, which determines the vector coupling constant $\alpha_B(r)$ (see Eq.~(\ref{eq.7}) below), depends on the number of flavors and can be expressed via the QCD constant in the $\overline{\textrm{MS}}$ regularization scheme; the connection between both constants has been established in \cite{ref.21,ref.28}. In particular, the two-loop constant $\Lambda_B(n_f=5)=335$ MeV in Eq.~(\ref{eq.6}) corresponds to the two-loop $\Lambda_{\overline{\textrm{MS}}}=244$ MeV, since they are related as $\Lambda_B(n_f=5)=1.3656~\Lambda_{\overline{\rm MS}}$ \cite{ref.28}. However, one cannot exclude that for low-lying bottomonium levels, like $1S$, $1P$, and $1D$, the choice $n_f=4$, equal to the number of active flavors, might be preferable, giving for their masses a better agreement with experiment. Here for simplicity we take $n_f=5$ for all states, because we are mostly interested in higher states, above the open beauty threshold. The constant $\sigma_0$ occurs in the expression for the variable string tension $\sigma(r)$ given by Eq.~(\ref{eq.8}).
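As a numerical illustration of Eqs.~(\ref{eq.2})--(\ref{eq.3}), the sketch below solves the radial equation for the $1S$ level on a grid and iterates the einbein condition to self-consistency. It is a simplified illustration of ours, not the authors' code: the running coupling $\alpha_B(r)$ is replaced by an assumed constant $\alpha_s=0.39$, the confining term is taken purely linear, and the extremum of the mass (\ref{eq.2}) in $\omega$ is imposed in the form $\omega^2=m_b^2+\langle\mathbf{p}^2\rangle$, which follows from $\partial M/\partial\omega=0$ via the Feynman--Hellmann theorem applied to Eq.~(\ref{eq.3}).

```python
import numpy as np

# Assumptions (ours, for illustration only): constant alpha_s instead of
# the running alpha_B(r); pure linear confinement sigma_0 * r (no flattening).
m_b, sigma, alpha_s = 4.832, 0.178, 0.39      # GeV, GeV^2, dimensionless

N, rmax = 500, 8.0                            # radial grid, r in GeV^-1
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

def ground_state(omega):
    """Lowest eigenvalue of p^2/omega + V(r) for u(r) = r R(r), u(0) = 0."""
    V = sigma * r - (4.0 / 3.0) * alpha_s / r
    H = (np.diag(2.0 / (omega * h * h) + V)
         + np.diag(-np.ones(N - 1) / (omega * h * h), 1)
         + np.diag(-np.ones(N - 1) / (omega * h * h), -1))
    E, vec = np.linalg.eigh(H)
    u = vec[:, 0] / np.sqrt(h)                # normalization: sum(u^2) h = 1
    p2 = omega * (E[0] - np.sum(V * u * u) * h)   # <p^2> = omega (E - <V>)
    return E[0], p2

# Extremum of M = omega + m_b^2/omega + E(omega) => omega^2 = m_b^2 + <p^2>
omega = m_b
for _ in range(50):                           # fixed-point iteration
    E0, p2 = ground_state(omega)
    omega_new = np.sqrt(m_b ** 2 + p2)
    if abs(omega_new - omega) < 1e-7:
        break
    omega = omega_new

M_1S = omega + m_b ** 2 / omega + E0          # spin-averaged M(1S), GeV
print(omega, M_1S)
```

With these assumed parameters the dynamical mass $\omega$ comes out somewhat above the pole mass $m_b$, as stated in the text, and $M(1S)$ lands in the vicinity of the Table~\ref{tab.1} value; the exact figure depends on the assumed $\alpha_s$.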
\begin{table}[t] \caption{\label{tab.1} The spin-averaged masses $M(nL)$ (MeV) of low-lying multiplets, calculated in the nonrelativistic (NR) case, for the spinless Salpeter equation (SSE), and in the einbein approximation (EA). In all cases the parameters of $V_B(r)$ are taken from Eq.~(\ref{eq.6})} \begin{center} \begin{tabular}{cllll} \hline \hline State &~NR &~SSE &~EA &~~~Exp. \cite{ref.1} \\ \hline $1S$ &~~9469 &~~9453 &~~9462 &~~9460.30$\pm$0.26~$(1^3S_1)$ \\ $1P$ &~~9894 &~~9884 &~~9888 &~~9900.1$\pm$0.6 \\ $2S$ &~10028 &~10010 &~10021 &~10023.3$\pm$0.3~$(2^3S_1)$ \\ $1D$ &~10153 &~10144 &~10146 &~10161.1$\pm$1.7~$(1^3D_2)$ \\ $2P$ &~10270 &~10256 &~10261 &~10260.0$\pm$0.6 \\ $1F$ &~10355 &~10345 &~10347 & \quad- \\ $3S$ &~10379 &~10356 &~10369 &~10355.2$\pm$0.5~$(3^3S_1)$ \\ $2D$ &~10460 &~10446 &~10450 &~\quad- \\ $3P$ &~10562 &~10541 &~10551 &~\quad- \\ \hline \hline \end{tabular} \end{center} \end{table} The vector coupling in the coordinate space $\alpha_B(r)$ is defined via the strong coupling in the momentum space $\alpha_{B}(q)$ \cite{ref.21}: \begin{eqnarray} \alpha_B(r) & = & \frac{2}{\pi}\int\limits_0^\infty dq\frac{\sin(qr)}{q}\alpha_B(q), \nonumber \\ \alpha_B(q) & = & \frac{4\pi}{\beta_0t_B}\left(1-\frac{\beta_1}{\beta_0^2} \frac{\ln t_B}{t_B}\right) \label{eq.7} \end{eqnarray} with $t_B=\ln\frac{\mathbf{q}^2+M_B^2}{\Lambda_B^2}$. The solutions of Eq.~(\ref{eq.3}) are calculated here considering two types of confining potential in (\ref{eq.5}): one with the string tension equal to a constant, $\sigma_0=0.178$ GeV$^2$, and the other with the string tension $\sigma(r)$ dependent on the $Q\bar Q$ separation $r$. Such a dependence of the string tension on $r$ appears if the creation of virtual light $q\bar q$ pairs is taken into account, causing a flattening of the confining potential at large distances, $\geq 1.0$ fm. This effect may become important for bottomonium states with $R(nL)\geq 1.0$ fm, giving a decrease of the masses (e.v.).
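Equation~(\ref{eq.7}) with the parameters (\ref{eq.6}) can be evaluated directly; a brief sketch of ours (the two-loop coefficients $\beta_0=11-\tfrac{2}{3}n_f$ and $\beta_1=102-\tfrac{38}{3}n_f$ are the standard ones):

```python
import math

n_f = 5
beta0 = 11.0 - 2.0 * n_f / 3.0            # = 23/3
beta1 = 102.0 - 38.0 * n_f / 3.0          # = 116/3
Lambda_B, M_B = 0.335, 0.95               # GeV, parameters from Eq. (6)

def alpha_B(q):
    """Two-loop background coupling in momentum space, Eq. (7).
    The background mass M_B freezes the coupling at small q."""
    t_B = math.log((q * q + M_B * M_B) / Lambda_B ** 2)
    return (4.0 * math.pi / (beta0 * t_B)) * (
        1.0 - (beta1 / beta0 ** 2) * math.log(t_B) / t_B)

print(alpha_B(0.0))   # frozen value at q = 0
print(alpha_B(5.0))   # much smaller at the b-quark scale
```

The background mass $M_B$ keeps $t_B$ finite at $q\to 0$, so the coupling freezes at a finite value of roughly 0.6 instead of developing the Landau pole of the ordinary perturbative coupling.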
The explicit expression for $\sigma(r)$ is taken here from \cite{ref.29}, where it was deduced from the analysis of radial Regge trajectories of light mesons: \begin{equation} \sigma(r) =\sigma_0 (1-\gamma f(r)) \label{eq.8} \end{equation} with the parameters taken from \cite{ref.29}: $\gamma=0.40$, $f(r\to 0) =0 $, $f(r\to\infty) =1.0$. \begin{table}[ht] \caption{\label{tab.2} The spin-averaged masses $M(nL)$ (MeV) of higher bottomonium states in the NR case, for the SSE, and in the einbein approximation (EA) for the potential $V_B(r)$ (\ref{eq.5})} \begin{center} \begin{tabular}{cllll} \hline \hline State &~NR &~SSE &~EA &~~~Exp. \cite{ref.1} \\ \hline $2F$ &~10623 &~10607 &~10613 & ~\quad- \\ $4S$ &~10657 &~10630 &~10645 & ~10579.4$\pm$1.2~$(4^3S_1)$~ \\ $3D$ &~10717 &~10698 &~10705 & ~\quad- \\ $4P$ &~10808 &~10783 &~10795 & ~\quad- \\ $3F$ &~10857 &~10835 &~10844 & ~\quad- \\ $5S$ &~10894 &~10862 &~10880 & ~10865$\pm$8~($\Upsilon(10860)$)~ \\ $4D$ &~10942 &~10916 &~10928 & ~\quad- \\ $5P$ &~11024 &~10998 &~11009 & ~\quad- \\ $6S$ &~11100 &~11067 &~11084 & ~11019$\pm$8~($\Upsilon(11020)$)~ \\ $5D$ &~11139 &~11109 &~11123 & ~\quad- \\ $7S$ &~11278 &~11240 &~11262 & ~\quad- \\ $6D$ &~11310 &~11270 &~11295 & ~\quad- \\ \hline \hline \end{tabular} \end{center} \end{table} In Tables \ref{tab.1} and \ref{tab.2} the masses $M(nL)$ are given only for the flattening potential (\ref{eq.8}): for low-lying levels they coincide with the masses calculated using the linear potential (with $\sigma=\textrm{const}=\sigma_0$) within $\leq 2$ MeV. For higher states, using the flattening potential, the masses (e.v.) are smaller by $\sim 10$--$60$ MeV (see the numbers in Table~\ref{tab.3}). In particular, the mass difference is only 12 MeV for the $4S$ and $3D$ states and already 40 MeV for the $6S$ and $5D$ states, reaching 64 MeV for the $7S$ state. It is evident that for the flattening potential the masses $M(nS)$ (any $n$) are closer to the experimental values.
\begin{table}[t] \caption{\label{tab.3} The masses of higher bottomonium states (MeV) for the static potential Eq.~(\ref{eq.5}) with the parameters (\ref{eq.6}) and two confining potentials: linear with $\sigma_0=0.178$ GeV$^2$ and the flattening potential Eq.~(\ref{eq.8})} \begin{center} \begin{tabular}{lccccccccc} \hline \hline ~State & $2D$ & $4S$ & $3D$ & $4P$ & $5S$ & $4D$ & $6S$ & $5D$ & $7S$\\ \hline ~Linear~\quad &~10456~ &~10656~ &~10717~ &~10812~ &~10901~ &~10950~ &~11122~ &~11163~ &~11326~\\ ~Flatt.\quad &10450 &10645 &10705 &10795 &10880 &10928 &11084 &11123 &11262\\ \hline \hline \end{tabular} \end{center} \end{table} For comparison, in Tables~\ref{tab.1} and \ref{tab.2} the masses $M_{\rm NR}(nL)$, calculated in the NR approximation (where $\omega(nL)=\textrm{const}=m_b$ for all states), are also given for the same static potential. These masses $M_{\rm NR}(nL)$ are always 10$-$20 MeV larger than in the EA, while in turn the EA masses are 10$-$20 MeV larger than the e.v. of the SSE. Such a small difference between the EA and SSE masses is accounted for here by including it in the theoretical error. For our further analysis it is also important that, due to the flattening effect, the w.f. at the origin of the higher states become significantly smaller than for the linear potential, providing a better agreement with the experimental dielectron widths. Sometimes the point of view is taken that in bottomonium the nonperturbative effects (determined by the confining potential) play an insignificant role for low-lying levels. To clarify this point we have compared two m.e. for a given $nS$ state: that of the confining (nonperturbative) potential, $\langle\sigma(r)\, r\rangle$, and that of the gluon-exchange (GE) (or ``perturbative'') potential, introducing their ratio $\eta(nS)$: \begin{equation} \eta(nS)=\frac{\langle\sigma(r)\, r\rangle_{nS}}{\langle|V_{\textrm{GE}}(r)|\rangle_{nS}}. \label{eq.9} \end{equation} The results of our calculations are presented in Table~\ref{tab.4}.
\begin{table}[ht] \caption{\label{tab.4} The ratios $\eta(nS)$} \begin{center} \begin{tabular}{lcccccc} \hline\hline ~State~ & $1S$ & $2S$ & $3S$ & $4S$ & $5S$ & $6S$ \\ \hline $\eta(nS)$ &~0.24~&~0.93~&~1.80~&~2.78~&~3.87~&~5.12~\\ \hline\hline \end{tabular} \end{center} \end{table} The values of $\eta(nS)$ in Table~\ref{tab.4} show that only for the $1S$ ground state the nonperturbative contribution is rather small, equal to $24\%$, while already for the $2S$ state both contributions are equally important. For higher $nS$ states the nonperturbative contribution dominates, being $\sim (n-1)$ times larger than the perturbative one. For that reason the GE potential can even be considered as a perturbation for higher resonances. We estimate the accuracy of our calculations to be about 15 MeV: the calculated masses depend only weakly on admissible variations of the parameters taken (the same accuracy was obtained in studies of heavy-light mesons \cite{ref.24} and the charmonium family \cite{ref.12}). Still, for higher resonances the accuracy of the calculated masses is worse, since the influence of open channel(s) is not taken into account. Here we can only estimate possible hadronic (decay channel) shifts by comparing calculated and experimental masses: for $\Upsilon(10580)$ and $\Upsilon(11020)$ a downward shift $\sim 50\pm15$ MeV is expected, while the mass $M(5S)$, calculated in the single-channel approximation, is close to the experimental mass of $\Upsilon(10860)$ (see Table~\ref{tab.2}). Up to now many bottomonium states, even those which lie below the $B\bar B$ threshold, have not yet been discovered, among them the $1D$ multiplet (two states), the $2D$ and $1F$ multiplets, and possibly the $3P$ multiplet, for which the centroid mass $M(3P)=10550(15)$~MeV\footnote{Here and below the number in brackets gives the theoretical uncertainty.}, very close to the threshold, is predicted (see Table~\ref{tab.1}).
The observation of these ``missing'' levels would be very important for the theory. For the further analysis it is also important that the differences between the masses of the $(n+1)S$ and $nD$ states ($n\geq 3$) are small, decreasing for larger $n$: they are equal to 60, 48, and 39 MeV for $n=3$, $4$, and $5$, respectively (see Table~\ref{tab.3}). The w.f. of the $nS$ and $nD$ states are given in the Appendix, together with m.e. like $\omega(nL)$ and $\langle\textbf{p}^2\rangle$, and those needed to determine the dielectron widths and vector decay constants. We also estimate the relativistic corrections by calculating the squared velocities $v^2/c^2$ for different states: their values do not change much, from 0.07 for $\Upsilon(1S)$ up to 0.11 for $\Upsilon(6S)$ (see the Appendix). These numbers illustrate the accuracy of the NR approximation. In conclusion we would like to stress two points again: first, in bottomonium the centroid masses $M(nL)$ coincide with the e.v. of the dynamical equation; second, the nonperturbative dynamics dominates for all $\Upsilon(nS)$ with $n\geq 2$. \section{Dielectron widths} \label{sect.3} The dielectron widths are defined here with the help of the van Royen$-$Weisskopf formula \cite{ref.30}, taking into account the QCD radiative corrections \cite{ref.31}. The widths $\Gamma_{ee}(nS)$ and $\Gamma_{ee}(nD)$ can also be expressed through the vector decay constants $f_V$, for which explicit expressions were derived in the framework of the field correlator method in \cite{ref.24}.
For the $S$-wave states we have \begin{equation} \Gamma_{ee}(n\,{}^3S_1)=\frac{4 \pi e_b^2\alpha^2}{3M_{nS}}f_V^2(nS)\beta_V = \frac{4e^2_b\alpha^2}{M^2_{nS}}|R_{nS}(0)|^2\xi_{nS} \beta_V, \label{eq.10} \end{equation} and a similar expression is valid for the $D$-wave vector states: \begin{equation} \Gamma_{ee}(n\,{}^3D_1)=\frac{4 \pi e_b^2\alpha^2}{3M_{nD}}f_V^2(nD)\beta_V = \frac{4e^2_b\alpha^2}{M^2_{nD}}|R_{nD}(0)|^2\xi_{nD} \beta_V, \label{eq.11} \end{equation} where the $D$-wave w.f. at the origin is defined according to Eq.~(\ref{eq.14}) below, derived in \cite{ref.32}. In Eqs.~(\ref{eq.10}) and (\ref{eq.11}) the QCD one-loop perturbative corrections enter via the factor $\beta_V$ \cite{ref.31}: \begin{equation} \beta_V=1-\frac{16}{3\pi}\alpha_s(M_V). \label{eq.12} \end{equation} However, one cannot exclude that higher-order perturbative corrections are not small; therefore, strictly speaking, the factor $\beta_V$, as well as $\alpha_s(M_V)$ in Eq.~(\ref{eq.12}), has to be considered as an effective constant. Nevertheless, this factor cannot be used as an arbitrary parameter: in different approaches its value typically varies in the range $0.75\pm 0.05$ \cite{ref.8,ref.16,ref.33}, which corresponds to an effective coupling $\alpha_s(M_V)=0.14\pm 0.04$. (About the choice of the renormalization scale, taken here equal to the mass $M_V$ of a vector $b\bar b$ meson, see the discussion in \cite{ref.34}.) In this scale we neglect the difference between the masses of the higher states, since they all lie in the narrow range 10.6$-$11.1 GeV. As a first step we analyse here the dielectron widths of the low-lying levels, $\Gamma_{ee}(\Upsilon(nS))~(n=1,2,3)$, and their ratios $r(m/n)$, because these do not depend on the factor $\beta_V$. As a second step, the value of $\beta_V$ is extracted from the magnitudes of the dielectron widths, which are now known with great accuracy owing to the CLEO data \cite{ref.14}.
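The size of the one-loop factor in Eq.~(\ref{eq.12}) is easily checked numerically; the short sketch below reproduces the numbers quoted in the text.

```python
import math

def beta_V(alpha_s):
    """One-loop QCD correction factor of Eq. (12)."""
    return 1.0 - 16.0 * alpha_s / (3.0 * math.pi)

# alpha_s(M_V) ~ 0.12 corresponds to the fitted beta_V ~ 0.80,
# while the CLEO coupling alpha_s = 0.142 gives beta_V ~ 0.76.
print(round(beta_V(0.12), 2))   # -> 0.8
print(round(beta_V(0.142), 2))  # -> 0.76
```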
Remarkably, the same value $\beta_V=0.80\pm 0.01$ is extracted from our fits to all three dielectron widths $\Gamma_{ee}(nS)~(n=1,2,3)$. This value of $\beta_V$ corresponds to $\alpha_s(\sim 10.6~\textrm{GeV})=0.12\pm0.01$, which appears to be $\sim 15\%$ smaller than the strong coupling $\alpha_s(10.330~\textrm{GeV})=0.142\pm 0.056$, recently extracted from the CLEO data on the total cross sections in $e^+e^{-}$ annihilation \cite{ref.35}. It is reasonable to assume that such a difference may occur due to second- and third-order perturbative corrections, which were taken into account in the CLEO analysis, while second- and higher-order perturbative corrections to the dielectron widths have not been calculated yet. Taking the central value from \cite{ref.35}, $\alpha_s(10.330~\textrm{GeV})=0.142$, one obtains $\beta_V=0.76$, which is only $5\%$ smaller than our number $\beta_V(M_V)=0.80$. From this comparison one can estimate that in bottomonium the contribution from unknown higher-order corrections to the dielectron width is positive and small, $\leq 10\%$. In theoretical studies of the dielectron widths and vector decay constants the QCD radiative corrections are often neglected, i.e., $\beta_V=1.0$ is taken \cite{ref.36,ref.37,ref.38}, while in our analysis a good description of the dielectron widths is achieved only with $\beta_V=0.80(1)$. On the contrary, in \cite{ref.17} a significantly smaller number, $\beta_V=0.46$, is exploited. Probably, such a small value of $\beta_V$ (or large strong coupling) has been used in \cite{ref.17} in order to suppress the large values of the w.f. at the origin for low-lying states, obtained in their model. Thus we start with the ratios of the dielectron widths for the $n\,{}^3S_1$ states $(n=1,2,3)$: \begin{equation}\label{eq.13} r(m/n) = \frac{\Gamma_{ee}(mS)}{\Gamma_{ee}(nS)}= \frac{M^2(nS)\,|R_{mS}(0)|^2}{M^2(mS)\,|R_{nS}(0)|^2}, \end{equation} which are fully determined by the w.f. at the origin (the masses are known from experiment).
Taking the w.f. at the origin calculated here (given in the Appendix), one arrives at the values of $r(m/n)$ given in Table~\ref{tab.5}. \begin{table}[h] \caption{\label{tab.5} The ratios of the dielectron widths $r(m/n)$ for low-lying $n\,{}^3S_1$ states} \begin{center} \begin{tabular}{llll} \hline\hline &~ $r(2/1)$ &~ $r(3/1)$ &~ $r(3/2)$ \\ \hline ~Theory\quad &~ 0.465 &~ 0.339 &~ 0.728 \\ ~Exp. \cite{ref.14} \quad &~$0.457\pm0.008$&~ $0.329\pm0.006$ &~ $0.720\pm0.016$\\ \hline\hline \end{tabular} \end{center} \end{table} Both the calculated and the experimental ratios agree with each other with an accuracy of $\leq 3\%$, and this result can be considered as a good test of our approach, as well as of the w.f. at the origin calculated here. Next we calculate the absolute values of the dielectron widths, which allow us to extract the QCD factor $\beta_V$. From the three dielectron widths $\Gamma_{ee}(nS)~(n=1,2,3)$ the same value $\beta_V=0.80(1)$ has been extracted. Everywhere below $\beta_V=0.80$ is used, for which the dielectron widths of the low-lying and higher states are given in Tables~\ref{tab.6} and \ref{tab.7}, respectively. \begin{table}[h] \caption{\label{tab.6} The dielectron widths (keV) of the $n\,{}^3S_1~(n=1,2,3)$ and $n\,{}^3D_1~(n=1,2)$ states for $\beta_V=0.80$} \begin{center} \begin{tabular}{ccc} \hline\hline Widths & Theory & Exp. \cite{ref.14} \\ \hline $~\Gamma_{ee}(1S)~$ & 1.320 &~1.354$\pm$0.024 \\ $\Gamma_{ee}(2S)$ & 0.614 &~0.619$\pm$0.014 \\ $\Gamma_{ee}(3S)$ & 0.447 &~0.446$\pm$0.011 \\ \hline $\Gamma_{ee}(1D)$ &~0.614$\times10^{-3}$ & \\ $\Gamma_{ee}(2D)$ &~1.103$\times10^{-3}$ & \\ \hline\hline \end{tabular} \end{center} \end{table} For the low-lying levels the dielectron widths (with $\beta_V=0.80$) agree with the experimental numbers within 3\% accuracy (see Table~\ref{tab.6}).
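Because $\beta_V$ and all electromagnetic factors cancel in Eq.~(\ref{eq.13}), the theoretical ratios of Table~\ref{tab.5} can be recovered directly from the calculated widths of Table~\ref{tab.6}; a minimal check:

```python
# Calculated dielectron widths (keV) from Table 6 (beta_V = 0.80).
gamma_ee = {"1S": 1.320, "2S": 0.614, "3S": 0.447}

def r(m, n):
    """Ratio r(m/n) of Eq. (13); the common factors cancel."""
    return gamma_ee[m] / gamma_ee[n]

for m, n in [("2S", "1S"), ("3S", "1S"), ("3S", "2S")]:
    print("r(%s/%s) = %.3f" % (m[0], n[0], r(m, n)))
# Reproduces the 0.465, 0.339, 0.728 of Table 5.
```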
The dielectron widths calculated here are compared with other theoretical predictions \cite{ref.17,ref.18} in Table~\ref{tab.7}: in \cite{ref.17} rather small dielectron widths are obtained, mostly due to the small $\beta_V=0.46$ taken there; this value is significantly smaller than ours (our $\beta_V=0.80$ is $\sim 70\%$ larger), i.e., the QCD radiative corrections assumed there are much larger than in our case. In \cite{ref.18}, as well as in our calculations, the dielectron width of $\Upsilon(10580)$ is larger than in experiment, while for the ground state their dielectron width is three times smaller than in our calculations and in experiment. \begin{table}[h] \caption{\label{tab.7} The dielectron widths $\Gamma_{ee}(nS)$ (keV) of pure $S$-wave states} \begin{center} \begin{tabular}{ccccccc} \hline\hline State & $1S$ & $2S$ & $3S$ & $ 4S$ & $5S$ & $6S$ \\ \hline GVGV\footnote{The numbers given are taken from the second paper in \cite{ref.17}} \cite{ref.17} & 1.01 & 0.35 & 0.25 & 0.18 & 0.14&-\\ CO \cite{ref.18} & 0.426 & 0.356 & 0.335 & 0.311&-&-\\ This paper & 1.320 & 0.614 & 0.447 & 0.372 & 0.314 & 0.270\\ Exp. \cite{ref.1} &~$1.340\pm0.018$ &~$0.612\pm0.011$&~$0.443\pm0.008$&~$0.272\pm0.029$ &~$0.31\pm0.07$&~ $0.13\pm0.03$\\ \hline\hline \end{tabular} \end{center} \end{table} To conclude this section, we would like to stress again that: \begin{enumerate} \item The value $\beta_V=0.80(1)$ should be considered as an effective constant, which implicitly takes into account the contributions from higher perturbative corrections. We expect that the higher-order perturbative corrections are positive and rather small, $\leq 10\%$. In the absence of higher corrections, the effective coupling $\alpha_s(10.6~{\rm GeV}) \sim 0.12(1)$ taken here appears to be slightly smaller than the strong coupling extracted from the analysis of the cross sections of $e^+e^-\rightarrow \textrm{hadrons}$ \cite{ref.35}.
\item The calculated ratios of the dielectron widths (for low-lying levels), which are independent of the unknown QCD factor $\beta_V$, agree with the experimental ratios with an accuracy better than $3\%$. Therefore one can expect that in our approach the w.f. at the origin (for low-lying levels) are calculated with good accuracy. \item The dielectron widths calculated here (with $\beta_V=0.80$) agree with experiment with an accuracy better than $5\%$. \end{enumerate} \section{The $S$--$D$ mixing between the $(n+1)\,{}^3S_1$ and $n\,{}^3D_1$ bottomonium states} \label{sect.4} In contrast to the case of the low-lying levels, the calculated dielectron widths of the pure $nS$ vector states with $n=4$ and $n=6$ exceed the experimental values: for the $4S$ and $6S$ states they are $25\%$ and a factor of two larger than the experimental widths of $\Upsilon(10580)$ and $\Upsilon(11020)$, respectively. Such a suppression of the dielectron widths occurs if one or more channels are open. Some reasons for that have been discussed in \cite{ref.16}, where it was shown that, in particular, in the Cornell coupled-channel model \cite{ref.39} the dielectron widths of higher charmonium states are not suppressed. Here, as in \cite{ref.13}, we assume that an open channel cannot significantly affect the w.f. at the origin calculated in the closed-channel approximation. This assumption is based on the study of a four-quark system in \cite{ref.20}, where the calculated w.f. at the origin of a four-quark system, like $Q\bar Q q\bar q$, appears to be about two orders of magnitude smaller than that of a heavy meson $Q\bar Q$. We expect this statement to be true also for the continuum w.f. at the origin (the w.f. of an open channel), which can be considered as a particular case of a four-quark system (this does not exclude that a continuum channel can strongly affect the $Q\bar Q$ w.f. at large distances).
Thus it is assumed that the suppression of the dielectron widths of higher states occurs due to the $S$--$D$ mixing between the $(n+1)S$ and $nD$ vector states, which happen to have close values of their masses. We also show in the Appendix that in bottomonium the $S$--$D$ mixing due to tensor forces appears to be very small, giving a mixing angle $\theta_T < 1^\circ$. For the $D$-wave states the w.f. at the origin is defined here as in \cite{ref.32}: \begin{equation} \label{eq.14} R_D(0)= \frac{5R_D''(0)}{2\sqrt{2}\omega_b^2}, \end{equation} and for the mixed states the physical (mixed) w.f. are given by \begin{eqnarray} R_{{\rm phys}~S}(0) & = & \cos\theta R_S(0)- \sin\theta \ R_D(0), \label{eq.15} \\ R_{{\rm phys}~D}(0) & = & \sin\theta R_S(0)+ \cos\theta\ R_D(0). \label{eq.16} \end{eqnarray} The w.f. at the origin of the pure $S$- and $D$-wave states and the derivatives $R_{nD}''(0)$ are given in the Appendix, together with the other m.e. needed to calculate the physical w.f. at the origin (see Table~\ref{tab.14}). As seen from Table~\ref{tab.12}, the w.f. $R_{nD}(0)$ are small, and therefore the dielectron widths of the pure $n\,{}^3D_1$ states appear to be very small, $\leq 2$ eV. Their values are given in Table~\ref{tab.8}. Notice that our widths are $\sim 10$ times smaller than those in \cite{ref.36}. On the contrary, the dielectron width of the $4\,{}^3S_1$ resonance is 25\% larger than in experiment. To obtain agreement with the experimental value, $\Gamma_{ee}(10580)=0.273\pm 0.022$ keV, we take into account the $4S$--$3D$ mixing and determine the mixing angle $\theta=27^\circ\pm 4^\circ$ from this fit. Thus $\Upsilon(10580)$ cannot be considered a pure $4S$ vector state; it is mixed with the initially pure $3\,{}^3D_1$ state. This second ``mixed'' state will be denoted here as $\tilde \Upsilon(\sim 10700)$; it acquires the dielectron width $\Gamma_{ee}(\tilde\Upsilon (10700))=0.095$ keV, which is about 60 times larger than the width of the pure $3\,{}^3D_1$ state.
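The effect of Eqs.~(\ref{eq.15})--(\ref{eq.16}) on the widths can be sketched numerically. Since $\Gamma_{ee}\propto |R(0)|^2/M^2$ and the $(n+1)S$ and $nD$ masses are close, the sketch below uses $\sqrt{\Gamma_{ee}}$ as a stand-in for $R(0)$; equal masses and a positive relative sign of the two w.f. are simplifying assumptions.

```python
import math

def mixed_widths(gamma_S, gamma_D, theta_deg):
    """Dielectron widths of a mixed S/D pair, Eqs. (15)-(16).
    sqrt(Gamma) is used as a proxy for R(0) (equal masses assumed)."""
    th = math.radians(theta_deg)
    r_S, r_D = math.sqrt(gamma_S), math.sqrt(gamma_D)
    g_S = (math.cos(th) * r_S - math.sin(th) * r_D) ** 2
    g_D = (math.sin(th) * r_S + math.cos(th) * r_D) ** 2
    return g_S, g_D

# Pure 4S and 3D widths (keV) from Table 8, mixed with theta = 27 deg:
g4S, g3D = mixed_widths(0.372, 1.435e-3, 27.0)
print(round(g4S, 3), round(g3D, 3))  # close to the 0.273 and 0.095 keV of Table 8
```

With the same function, the $6S$--$5D$ pair mixed with $\theta=40^\circ$ gives $\simeq 0.137$ and $0.135$ keV, the values of Eq.~(\ref{eq.17}).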
The dielectron widths are given in Table~\ref{tab.8} in two cases: without the $S$---$D$ mixing ($\theta=0$) and for mixed states, taking the same mixing angle $\theta=27^\circ$ for all higher states. An interpretation of the experimental width of $\Upsilon(10860)$ cannot be done in an unambiguous way: the calculated $\Gamma_{ee}(5\,{}^3S_1)$ of a pure $5S$ state ($\theta=0$) just coincides with the central value of the experimental $\Gamma_{ee}(\Upsilon(10860))=0.31\pm 0.07$ keV. It could mean that for some unknown reason the $5S$ and $4D$ vector states are not mixed. However, there exists another possibility, because the width $\Gamma_{ee}(10860)$ has a rather large experimental error. In particular, for the mixing angle $\theta=27^\circ$ one obtains $\Gamma_{ee}(\Upsilon(10860))=0.23$ keV, which just coincides with the lower bound of the experimental value. To decide which of the two possibilities is realized, more precise measurements of $\Gamma_{ee}(\Upsilon(10860))$ are needed. \begin{table}[h] \caption{\label{tab.8} The dielectron widths (keV) for the mixing angles $\theta=0$ and $\theta=27^\circ$ ($\beta_V=0.80$)} \begin{center} \begin{tabular}{cccc} \hline\hline ~Widths~ &\multicolumn{2}{c}{Theory} & Exp. \cite{ref.14} \\ \cline{2-3} &no mixing &with mixing & \\ \hline $\Gamma_{ee}(4S)$ & 0.372 &~0.273 &~0.272$\pm$0.029~\\ $\Gamma_{ee}(3D)$ &~ 1.435$\times 10^{-3}$&0.095 & \\ $\Gamma_{ee}(5S)$ & 0.314 &0.230 & 0.31$\pm$0.07 \\ $\Gamma_{ee}(4D)$ &~ 1.697$\times 10^{-3}$&0.084 & \\ $\Gamma_{ee}(6S)$ & 0.270 & 0.196 & 0.13$\pm$0.03 \\ $\Gamma_{ee}(5D)$ &~ 1.878$\times 10^{-3}$& 0.075 & \\ \hline\hline \end{tabular} \end{center} \end{table} An interesting opportunity can be realized for the originally pure $5\,{}^3D_1$ resonance. The experimental dielectron width of $\Upsilon(11020)$ is very small, $\Gamma_{ee}(11020)=0.13\pm 0.03$ keV, i.e., it is two times smaller than the number calculated here without the $6S$---$5D$ mixing (i.e., $\theta=0$). 
Even for the mixing angle $\theta=27^\circ$ the theoretical value is still $26\%$ larger than the experimental one (see Table~\ref{tab.8}). To fit the experimental number a rather large mixing angle, $40^\circ\pm 5^\circ$, has to be taken. For such a large angle the dielectron widths of both resonances, $\tilde\Upsilon(5D)$ (with mass $\sim 11120$ MeV) and $\Upsilon(11020)$, appear to be almost equal: \begin{equation}\label{eq.17} \left\{ \begin{array}{lll} \Gamma_{ee}(\Upsilon(11020))&=&0.137\pm 0.025~~ \textrm{keV}, \\ \Gamma_{ee}(\tilde\Upsilon(5D))&=&0.135\pm 0.025~~ \textrm{keV}. \end{array} \right. \end{equation} It is of interest to notice that this large angle is close to the mixing angle $\theta\cong 35^\circ$, which has been extracted in \cite{ref.12} and \cite{ref.40} to fit the dielectron widths in the charmonium family: $\psi(4040)$, $\psi(4160)$, and $\psi(4415)$. \section{Decay constants in vector channels} \label{sect.5} The decay constant in the vector channel, $f_V(nL)$, is expressed via the dielectron width in a simple way, as in Eqs.~(\ref{eq.10}) and (\ref{eq.11}). Therefore, from the experimental widths the ``experimental'' decay constants can easily be obtained. Still, an uncertainty is left, coming from the theoretical error of about $10\%$ in the QCD factor $\beta_V$. Also, in many papers the perturbative one-loop corrections are neglected, i.e., $\beta_V=1.0$ is taken \cite{ref.36,ref.37,ref.38}; this makes a comparison with other calculations more difficult. In our calculations we take $\beta_V=0.80$, which is slightly larger than $\beta_V\sim 0.70$, used in \cite{ref.8} and \cite{ref.33}. To determine the experimental $f_V(\textrm{exp})$ we take in this section the experimental data from the PDG \cite{ref.1} (not the CLEO data \cite{ref.14}), which are used in most of the cited theoretical papers.
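Inverting the first equality of Eq.~(\ref{eq.10}) gives the decay constant directly from a measured width; a minimal sketch ($e_b=1/3$, $\alpha=1/137.036$, masses in MeV):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant
E_B = 1.0 / 3.0        # b-quark electric charge

def f_V(mass_MeV, gamma_ee_keV, beta_V=0.80):
    """Vector decay constant (MeV) from Eq. (10):
    Gamma_ee = 4*pi*e_b^2*alpha^2*f_V^2*beta_V/(3*M)."""
    gamma_MeV = gamma_ee_keV * 1.0e-3
    return math.sqrt(3.0 * mass_MeV * gamma_MeV /
                     (4.0 * math.pi * E_B ** 2 * ALPHA ** 2 * beta_V))

# Upsilon(1S): M = 9460.3 MeV, Gamma_ee = 1.354 keV (PDG):
print(round(f_V(9460.3, 1.354)))       # ~ 800 MeV for beta_V = 0.80
print(round(f_V(9460.3, 1.354, 1.0)))  # ~ 715-720 MeV for beta_V = 1.0
```

Both numbers agree with the ``experimental'' entries of Table~\ref{tab.9} to within $1\%$.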
These decay constants are given in Table~\ref{tab.9}, both for $\beta_V=1.0$ and for $\beta_V=0.80$; the difference between them is $\sim 10\%$. The theoretical predictions for $f_V$ give significantly different numbers \cite{ref.36,ref.37,ref.38}, which shows that the decay constants are rather sensitive to the dynamical parameters of the interaction and to the model used. For comparison we give in Tables~\ref{tab.9} and \ref{tab.10} the decay constants $f_V(nS)$ and $f_V(nD)$, calculated here and in Ref.~\cite{ref.36}, where the relativistic Bethe$-$Salpeter method was used. All values needed for our calculations are presented in the Appendix. \begin{table} \caption{ \label{tab.9} The decay constants $f_V(nS)$ (MeV)} \begin{center} \begin{tabular}{ccccccc} \hline\hline State &~~~~$1S$~~~~ &~~~~$2S$~~~~ &~~~~$3S$~~~~ &~~~~$4S$~~~~ &~~~~$5S$~~~~ &~~~~$6S$~~~~ \\ \hline $\beta_V=1.0$ \cite{ref.36} &498(20) &366(27) &304(27)&259(22) &228(16) &- \\ ~This paper, $\beta_V=0.80$&794 &557 &483 &383 &355 &331\\ ~$f_V$(exp) for $\beta_V=0.80$ \cite{ref.1} &$798\pm6$& $556\pm6$& $481\pm5$& $381\pm19$& $413\pm45$& $268\pm30$\\ $f_V$(exp) for $\beta_V=1.0$ \cite{ref.1} &$715\pm5$& $497\pm5$& $430\pm4$& $341\pm17$& $369\pm40$& $240\pm27$\\ \hline\hline \end{tabular} \end{center} \end{table} For comparison we also mention here the values of $f_V(1S)$ from \cite{ref.37,ref.38}, where the QCD factor $\beta_V=1.0$ was used: $f_V(1S)= 529$ MeV in \cite{ref.37} is significantly smaller than the value $f_V(1S)=705(27)$ MeV in \cite{ref.38}, which is very close to the ``experimental'' $f_V(1S)$ (see Table~\ref{tab.9}). On the contrary, in \cite{ref.33} the perturbative corrections have been taken into account, with $\beta_V\sim 0.66$. There the values of $f_V$ are not given, but the dielectron widths of the $nS$ vector states $(n=1,2,3)$ are in reasonable agreement with experiment. For the $D$-wave states the w.f. at the origin, the second derivatives, and other m.e.
determining the vector decay constants $f_V(nD)$ are given in the Appendix. The calculated $f_V(nD)$ are presented in Table~\ref{tab.10}, together with the numbers from \cite{ref.36}. Unfortunately, at present there are no experimental data on the dielectron widths for those states. \begin{table} \caption{\label{tab.10} The decay constants $f_V(nD)$ (MeV)} \begin{center} \begin{tabular}{cccccc} \hline\hline State & ~~~~$1D$~~~~ & ~~~~$2D$~~~~& ~~~~$3D$~~~~ & ~~~~$4D$~~~~ & ~~~~$ 5D$~~~~\\ \hline $\beta_V=1.0$~\cite{ref.36} & 261(21) & 155(11)& 178(10)&-&-\\ This paper, $\beta_V=1.0$, $\theta=0^\circ$ &18 & 24 & 28 &~31~ &~33~\\ ~~~This paper, $\beta_V=1.0$, $\theta=27^\circ$~\footnote{In bottomonium the $2S$--$1D$ and $3S$--$2D$ states, occurring below the threshold, do not mix via tensor forces; see the discussion in the Appendix.} & - & - & 226 &~215~&~206~\\ \hline\hline \end{tabular} \end{center} \end{table} The decay constants of the pure $nD$ vector states $(n=1,2,3)$, calculated in our approach, appear to be $\sim 10$ times smaller than those from \cite{ref.36}, where the Bethe$-$Salpeter equation was used; the reason behind such a large discrepancy remains unclear. However, if the $4S$--$3D$ mixing is taken into account (with $\theta=27^\circ$), then the values of $f_V(3D)$ are close to each other in both approaches. \section{Summary and Conclusions} \label{sect.6} In this paper we have calculated the bottomonium spectrum and shown that the masses of the $(n+1)S$ and $nD$ states (for a given $n\geq 2$) are close to each other. We also assume that these states undergo $S$--$D$ mixing, which allows us to describe the dielectron widths of the higher states with good accuracy. There are several arguments in favor of such a mixing. \begin{enumerate} \item Suppression of the dielectron widths of $\Upsilon(10580)$ and $\Upsilon(11020)$. \item Similarity with the $S$--$D$ mixing effects in the charmonium family.
\item Strong coupling to the $B\bar B$ $(B_s\bar B_s)$ channel. This fact has been supported by recent observations of the resonances in processes like $e^+e^-\rightarrow \Upsilon(nS) \pi^+ \pi^-~ (n=1,2,3)$ and by the theoretical analysis in \cite{ref.19}. \end{enumerate} An important question is whether it is possible to observe the mixed $D$-wave vector resonances in $e^+e^-$ experiments. Our calculations give $M(3D)=10700(15)$ MeV (not including a possible hadronic shift) and $\Gamma_{ee}(\tilde\Upsilon(3D))\sim 95$ eV for the mixing angle $\theta=27^\circ$, which is three times smaller than $\Gamma_{ee}(\Upsilon(10580))$. For such a width the enhancement of this resonance in $e^+e^-$ processes might be suppressed, as compared to the peak from the $\Upsilon(10580)$ resonance. The situation remains unclear for the $5S$--$4D$ mixing, because the dielectron width of $\Upsilon(10860)$ contains a rather large experimental error, and a definite conclusion about the value of the mixing angle, or about the absence of mixing, cannot be drawn. We have considered both cases here, obtaining the mass $10930\pm 15({\rm th})$ MeV for the $4D$ state. It looks more probable to observe the resonance $\tilde\Upsilon(5D)$ (with the mass $\sim 11120$ MeV), for which the dielectron width may be almost equal to that of the conventional $\Upsilon(11020)$ resonance. However, since the cross sections of $e^+e^-$ processes depend also on other unknown parameters, like the total width and the branching ratios to hadronic channels, the chances of observing a mixed $5D$ vector resonance may be smaller than for $\Upsilon(11020)$, even for equal dielectron widths. Recently new observations in the mass region 10.6--11.0 GeV have been reported \cite{ref.41,ref.42}. The resonance $\Upsilon(10890)$, considered to be identical to $\Upsilon(10860)$, has been observed by the Belle Collaboration \cite{ref.41}.
Two resonances in the same region, $\Upsilon(10876)$ and $\Upsilon(10996)$, have been measured by the BaBar Collaboration \cite{ref.42}; they are supposed to be the conventional $\Upsilon(10860)$ and $\Upsilon(11020)$. Still, there are some differences between the masses and total widths of the resonances from \cite{ref.41,ref.42} and the PDG data \cite{ref.1}, so that further analysis of their parameters is needed. One also cannot exclude an overlap of $\Upsilon(11020)$ with the still unobserved $\tilde\Upsilon(5D)$ resonance, which could distort the shape and other resonance parameters of the conventional $\Upsilon(11020)$ resonance. \begin{acknowledgments} This work is supported by the Grant NSh-4961.2008.2. One of the authors (I.V.D.) is also supported by the grant of the {\it Dynasty Foundation} and the \textit{Russian Science Support Foundation}. \end{acknowledgments} \section*{Appendix \\The wave functions at the origin and some matrix elements} \label{sect.A}
\section{Introduction} The spirit of most extra-dimensional models of particle physics is to translate observed or desirable properties of ordinary 4D particle interactions into particular shapes or features (like warping or brane positions) within an assumed extra-dimensional geometry. In principle these features are hoped to be obtained by minimizing the energy of deforming the extra dimensions, but it is in practice a challenge to do so explicitly. Part of what makes this challenging is the fact that general covariance makes energy in itself not a useful criterion for distinguishing amongst various solutions. For instance for closed geometries invariance under time reparameterization implies {\em all} solutions have precisely zero energy. This has long been understood in cosmology, where the explanation of the geometry of the present-day universe is seen to be contingent on the history of how it evolved in the distant past. A similar understanding is also likely for the shapes of any present-day extra dimensions, suggesting we should seek to explain their properties in terms of how they have evolved over cosmological times. This is not the approach taken by most models of extra-dimensional cosmology, however, which usually explicitly assume extra dimensions to be stabilized at fixed values as the observed four dimensions change in time. This approach is taken usually for technical reasons: it is difficult to find explicit time-dependent solutions to the full higher-dimensional field equations. Instead, models of extra-dimensional cosmology usually use one of two simplifying approximations: either `mirage' or `4D effective' cosmology. In `mirage' cosmology \cite{MirageCosmology} brane-localized observers experience time-dependent geometries because they move through a static extra-dimensional bulk. In these models the branes are usually taken as `probe' branes, that don't back-react on the static bulk. 
An exception to this is for Randall-Sundrum type cosmologies \cite{RScosmo} involving codimension-1 branes, for which the Israel junction conditions \cite{IJC} allow back-reaction to be explicitly computed. In these models all extra-dimensional features are usually fixed from the get-go. In `effective 4D' cosmology the Hubble scale is assumed to be much smaller than the Kaluza-Klein (KK) mass scale, so that all of the time dependence in the geometry can be computed within the effective 4D theory, where some extra-dimensional features (like moduli) boil down to the values of various scalar fields. This is the approach most frequently used for string inflation, for example \cite{SIreviews}. Here some changes to the extra dimensions can be followed by seeing how the corresponding modulus fields evolve. But this can only be done for sufficiently slow expansion and only after it is already assumed that the extra dimensions are so small that the 4D approximation is valid. In particular, it cannot follow evolution where all dimensions are initially roughly the same size, to explain why some dimensions are larger than others. Our goal in this paper is to take some first steps towards going beyond these two types of approximations. To this end we explore the implications of previously constructed time-dependent solutions \cite{scaling solutions} to the full higher-dimensional field equations of chiral gauged 6D supergravity \cite{NS}, including the effects of back reaction from several codimension-2 source branes. When doing so it is crucial to work with a geometry with explicitly compactified extra dimensions, including a mechanism for stabilizing the extra-dimensional moduli, since it is well known that these can compete with (and sometimes ruin) what might otherwise appear as viable inflationary models\footnote{For early steps towards inflationary 6D models see \cite{HML}. } \cite{SIreviews}. 
For the system studied here this is accomplished using a simple flux-stabilization mechanism that fixes all bulk properties except the overall volume modulus. Incorporating the back-reaction of the branes in these solutions is the main feature new to this paper. It is important because it allows the explicit determination of how the extra-dimensional geometry responds to the choices made for a matter field, which we assume to be localized on one of the source branes. It also provides a mechanism for lifting the one remaining flat direction, through a codimension-two generalization of the Goldberger-Wise mechanism \cite{GW} of codimension-one Randall-Sundrum models. In order to compute the back-reaction we extend to time-dependent geometries the bulk-brane matching conditions that were previously derived for codimension-two branes only in the limit of maximally symmetric on-brane geometries \cite{Cod2Matching, BBvN, BulkAxions, susybranes}. We then apply these conditions to the time-dependent bulk geometries to see how their integration constants are related to physical choices made for the dynamics of an `inflaton' scalar field that we assume to be localized on one of the source branes. For the solutions we describe, the scale factor of the on-brane dimensions expands like $a(t) \propto t^p$, and our main interest is in the accelerating solutions (for which $p > 1$). The parameter $p$ is an integration constant for the bulk solution, whose value becomes related to the shape of the potential for the on-brane scalar. de Sitter solutions \cite{6DdS} are obtained in the limit $p \to \infty$, which corresponds to the limit where the on-brane scalar potential becomes independent of the inflaton. What is most interesting is what the other dimensions do while the on-brane geometry inflates: their radius expands with a universal expansion rate, $r(t) \propto \sqrt t$, that is $p$-independent for any finite $p$.
(By contrast, the extra dimensions do not expand at all for the special case of the de Sitter solutions.) The different expansion rates therefore cause the accelerated expansion of the on-brane directions to be faster than the growth of the size of the extra-dimensional directions, possibly providing the seeds of an understanding of why the on-brane dimensions are so much larger in our much later universe. Because the extra dimensions expand (rather than contract), the Kaluza-Klein mass scale falls with time, putting the solution deeper into the domain of validity of the low-energy semiclassical regime. Equivalently, the higher-dimensional gravity scale falls (in 4D Planck units) during the inflationary epoch. This opens up the intriguing possibility of reconciling a very low gravity scale during the present epoch with a potentially much higher gravity scale when primordial fluctuations are generated during inflation. In the limit where the motion is adiabatic, we verify that the time-dependence of the full theory is captured by the solutions of the appropriate effective low-energy 4D theory. The 4D description of the inflationary models turns out to resemble in some ways an extended inflation model \cite{ExtInf}, though with an in-principle calculable potential for the Brans-Dicke scalar replacing the cosmological-constant sector that is usually assumed in these models. The rest of this paper is organized as follows. The next section, \S2, summarizes the field equations and solutions that describe the bulk physics in the model of interest. A particular focus in this section is the time-dependence and the asymptotics of the solutions in the vicinity of the two source branes. These are followed in \S3\ by a description of the dynamics to be assumed of the branes, as well as the boundary conditions that are dictated for the bulk fields by this assumption.
The resulting matching conditions are then used to relate the parameters of the bulk solution to the various brane couplings and initial conditions assumed for the brane-localized scalar field. \S4\ then describes the same solutions from the point of view of a 4D observer, using the low-energy 4D effective theory that captures the long-wavelength physics. The low-energy field equations are solved and shown to share the same kinds of solutions as do the higher-dimensional field equations, showing how the two theories can capture the same physics. Some conclusions and outstanding issues are discussed in \S5. Four appendices provide the details of the brane properties; the derivation of the time-dependent codimension-two matching conditions; and the dimensional reduction to the 4D effective theory. \section{The bulk: action and solutions} In this section we summarize the higher-dimensional field equations and a broad class of time-dependent solutions, whose properties are matched to those of the source branes in the next section. For definiteness we use the equations of 6D chiral gauged supergravity \cite{NS} with flux-stabilized extra dimensions. The minimal set of fields to follow consists of the 6D metric, $g_{\ssM \ssN}$, the dilaton, $\phi$, and a flux-stabilizing Maxwell potential, $\cA_\ssM$. Although other fields are present in the full theory, only these three need be present in the simplest flux-stabilized solutions \cite{SSs, ConTrunc}. The action for these fields is \be \label{BulkAction} S_\mathrm{bulk} = - \int \exd^6 x \sqrt{-g} \; \left\{ \frac1{2\kappa^2} \, g^{\ssM\ssN} \Bigl( \cR_{\ssM \ssN} + \pd_\ssM \phi \, \pd_\ssN \phi \Bigr) + \frac14 \, e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN} + \frac{2 \, g_\ssR^2}{\kappa^4} \, e^\phi \right\} \,, \ee where $\kappa^2 = 8\pi G_6 = 1/M_6^4$ defines the 6D Planck scale and $\cF = \exd \cA$ is the field strength for the Maxwell field, whose coupling is denoted by $g$.
The coupling $g$ can be, but need not be, the same as the coupling $g_\ssR$ that appears in the scalar potential, since supersymmetry requires that $g_\ssR$ be the gauge coupling for a specific $U(1)_\ssR$ symmetry that does not commute with supersymmetry. The coupling $g$ equals $g_\ssR$ if $\cA_\ssM$ gauges this particular symmetry, but need not do so otherwise. The field equations coming from this action consist of the Einstein equation \be \label{BulkEinsteinEq} \cR_{\ssM\ssN} + \partial_\ssM \phi \, \partial_\ssN \phi + \kappa^2 e^{-\phi} \cF_{\ssM \ssP} {\cF_\ssN}^\ssP - \left( \frac{\kappa^2}{8} \, e^{-\phi} \cF_{\ssP\ssQ} \cF^{\ssP \ssQ} - \frac{g_\ssR^2}{\kappa^2} \, e^\phi \right) g_{\ssM\ssN} = 0 \,, \ee the Maxwell equation \be \label{BulkMaxwellEq} \nabla_\ssM (e^{-\phi} \cF^{\ssM \ssN}) = 0 \,, \ee and the dilaton equation \be \label{BulkDilatonEq} \Box \phi - \frac{2 \, g_\ssR^2 }{\kappa^2} \, e^\phi + \frac{\kappa^2}4 \, e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN} = 0 \,. \ee Notice that these equations are invariant under the rigid rescaling, \be \label{scaleinv} g_{\ssM \ssN} \to \zeta \, g_{\ssM \ssN} \quad \hbox{and} \quad e^\phi \to \zeta^{-1} \, e^\phi \,, \ee with $\cA_\ssM$ unchanged, which ensures the existence of a zero-mode that is massless at the classical level, and much lighter than the generic KK scale once quantum effects are included. \subsection{Bulk solutions} The exact solutions to these equations we use for cosmology are described in \cite{scaling solutions} (see also \cite{CopelandSeto}). Their construction exploits the scale invariance of the field equations to recognize that exact time-dependent solutions can be constructed by scaling out appropriate powers of time from each component function.
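(The invariance \pref{scaleinv} can be seen directly at the level of the action, a short check we include here for convenience: under $g_{\ssM\ssN} \to \zeta \, g_{\ssM\ssN}$ and $e^\phi \to \zeta^{-1} e^\phi$ with $\cA_\ssM$ fixed,
\be
\sqrt{-g} \to \zeta^3 \sqrt{-g} \,, \qquad g^{\ssM\ssN} \Bigl( \cR_{\ssM\ssN} + \pd_\ssM \phi \, \pd_\ssN \phi \Bigr) \to \zeta^{-1} \, g^{\ssM\ssN} \Bigl( \cR_{\ssM\ssN} + \pd_\ssM \phi \, \pd_\ssN \phi \Bigr) \,, \qquad e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN} \to \zeta^{-1} \, e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN} \,,
\ee
since $\cR_{\ssM\ssN}$ with lower indices is unchanged by a constant rescaling of the metric. Every term in the braces of eq.~\pref{BulkAction} therefore scales as $\zeta^{-1}$, so $S_\mathrm{bulk} \to \zeta^2 S_\mathrm{bulk}$ and the field equations are unchanged.)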
\subsubsection*{Time-dependent ansatz} Following \cite{scaling solutions} we adopt the following ansatz for the metric, \be \label{metric-ansatz} \exd s^2 = (H_0\tau)^c \left\{ \left[ -e^{2\omega(\eta)} \exd\tau^2 +e^{2 \alpha (\eta)} \delta_{ij} \exd x^i \exd x^j \right] + \tau^{2} \left[e^{2v(\eta)} \exd\eta^2 + e^{2 \beta(\eta)}\exd\theta^2\right] \right\} \,, \ee while the dilaton and Maxwell field are assumed to be \be \label{dilaton-ansatz} e^\phi = \frac{e^{\varphi(\eta)} }{(H_0\tau)^{2+c}} \quad \hbox{and} \quad \cA_\theta = \frac{ A_\theta(\eta) }{H_0} \,. \ee The power of time, $\tau$, appearing in each of these functions is chosen to ensure that all of the $\tau$-dependence appears as a common factor in each of the field equations. The 6D field equations then reduce to a collection of $\tau$-independent conditions that govern the profiles of the functions $\varphi$, $\omega$, $\alpha$, $\beta$, $v$ and $A_\theta$. For later convenience we briefly digress to describe the properties of these profiles in more detail. \subsubsection*{Radial profiles} Explicitly, with the above ansatz the Maxwell equation becomes \be A_\theta'' + \(\omega + 3 \alpha - \beta - v - \varphi \)' A_\theta' = 0 \,, \ee where primes denote differentiation with respect to the coordinate $\eta$. The dilaton equation similarly is \be \varphi'' + \( \omega + 3 \alpha - v + \beta \)' \varphi' + (2+c)(1+2c) \, e^{2(v-\omega)} + \frac{\kappa^2}2 \, e^{-(2\beta + \varphi)}(A_\theta')^2 -\frac{2g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} = 0 \,. 
\ee The $\tau$-$\eta$ Einstein equation is first order in derivatives, \be \label{eq:Einst} (2c+1) \, \omega' +3\alpha' + (2+c) \, \varphi' = 0 \,, \ee while the rest are second order \ba \omega''+\(\omega+3\alpha -v +\beta \)' \omega' +\frac{\kappa^2} 4 \, e^{-(2 \beta + \varphi)} (A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} -\left(c^2+\frac{5c}{2} +4 \right) \, e^{2(v-\omega)} &=&0\nn\\ \beta'' + \(\omega+3\alpha -v +\beta\)' \beta' +\frac{3\kappa^2}4 \, e^{-(2\beta +\varphi)}(A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} -\frac12(c+2)(2c+1) \, e^{2(v-\omega)} &=&0 \nn\\ \alpha'' + \(\omega+3\alpha -v+\beta\)' \alpha' - \frac{\kappa^2}4 \, e^{-(2\beta +\varphi)}(A_\theta')^2 +\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} -\frac{c}{2}\,(2c+1) \, e^{2(v-\omega)} &=&0\nn\\ \omega'' + 3\alpha'' + \beta'' + (\omega')^2 +3(\alpha')^2 + (\beta')^2 + (\varphi')^2 -\(\omega+3\alpha +\beta\)' v'\qquad\qquad\qquad\qquad\quad&&\nn\\ +\frac{3\kappa^2}4 \, e^{-(2\beta +\varphi)}(A_\theta')^2 +\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} -\frac12(c+2)(2c+1) \, e^{2(v-\omega)} &=&0 \,. \nn\\ \ea One linear combination of these --- the `Hamiltonian' constraint for evolution in the $\eta$ direction --- also doesn't involve any second derivatives, and is given by \ba \label{eq:Hamconst} &&(\varphi')^2 - 6\(\omega + \alpha + \beta\)' \alpha' - 2 \omega' \beta' \nn\\ && \qquad\qquad +\frac{\kappa^2}2 \, e^{-(2\beta +\varphi)}(A_\theta')^2 - \frac{4g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} +4 (c^2+c+1) \, e^{2(v-\omega)} = 0 \,. 
\ea As shown in \cite{scaling solutions}, these equations greatly simplify if we trade the four functions $\alpha$, $\beta$, $\omega$ and $\varphi$ for three new functions $\cx$, $\cy$ and $\cz$, using the redefinitions \ba \label{XYZdef} \omega&=&-\frac\cx4+\frac\cy4+ \left( \frac{2+c}{2c} \right) \cz \,, \qquad \alpha = -\frac\cx4+\frac\cy4- \left( \frac{2+c}{6c} \right) \cz \,,\nn\\ && \qquad \beta = \frac{3\cx}4+\frac\cy4+\frac\cz2 \quad \hbox{and} \quad \varphi = \frac\cx2-\frac\cy2-\cz \,. \ea Only three functions are needed to replace the initial four because these definitions are chosen to identically satisfy eq.~\pref{eq:Einst} which, for the purposes of integrating the equations in the $\eta$ direction, can be regarded as a constraint (because it doesn't involve any second derivatives). The function $v$ can be chosen arbitrarily by redefining $\eta$, and the choice \be v = -\frac\cx4+\frac{5\cy}4+\frac\cz2 \,, \ee proves to be particularly simple \cite{scaling solutions}. In terms of these variables the Maxwell equation becomes \be A_\theta'' - 2\cx' A_\theta' = 0 \,, \ee the dilaton equation is \be \(\frac12\cx-\frac12\cy-\cz\)''+(c+2)(2c+1) \, e^{2(\cy-\cz/c)} + \frac{\kappa^2}2 \, e^{-2\cx}(A_\theta')^2 - \frac{2g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}=0 \,, \ee and the remaining Einstein equations are \ba \(-\frac14\cx+\frac14\cy+\frac{2+c}{2c}\cz\)''+\frac{\kappa^2} 4 \, e^{-2 \cx} (A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} -\left( c^2+\frac{5c}{2}+4 \right) \, e^{2(\cy-\cz/c)} &=&0\nn\\ \( \frac34\cx+\frac14\cy+\frac12\cz \)'' +\frac{3\kappa^2}4 \, e^{-2\cx}(A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} -\frac12(c+2)(2c+1) \, e^{2(\cy-\cz/c)} &=&0\nn\\ \(\frac14\cx+\frac14\cy-\frac{2+c}{6c}\cz \)'' - \frac{\kappa^2}4 \, e^{-2\cx}(A_\theta')^2 +\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} -\frac12c(2c+1) \, e^{2(\cy-\cz/c)} &=&0 \nn\\ \(-\frac14\cx+\frac54\cy+\frac12\cz\)'' +(\cx')^2-(\cy')^2+\frac43 \, \frac{1+c+c^2}{c^2} 
\, (\cz')^2 \qquad\qquad\qquad\qquad&&\nn\\ +\frac{3\kappa^2}4 \, e^{-2\cx}(A_\theta')^2 +\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} -\frac12(c+2)(2c+1) \, e^{2(\cy-\cz/c)} &=&0 \,. \nn\\ \ea The combination of twice the second Einstein equation plus the Dilaton equation is completely independent of $\cy$ and $\cz$. This combination and the Maxwell equation can be exactly integrated, giving \ba \label{eq:chisoln} A_\theta &=& q \int\exd\eta \; e^{2\cx}\nn\\ e^{-\cx} &=& \left( \frac{\kappa \, q}{\lambda_1} \right) \cosh\left[ \lambda_1(\eta-\eta_1) \right], \ea where $q$, $\lambda_1$ and $\eta_1$ are integration constants. The remaining field equations then reduce to \ba \label{bulkXY} \cy''+\frac{4g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} - 4 (1+c+c^2) \, e^{2\cy-2\cz/c}&=&0\nn\\ \hbox{and} \qquad \cz'' - 3c\, e^{2\cy-2\cz/c}&=&0 \,, \ea together with the first-order constraint, eq.~\pref{eq:Hamconst}, that ensures that only two of the `initial conditions' --- $\cx'$, $\cy'$ and $\cz'$ --- are independent. \subsubsection*{Asymptotic forms} With these coordinates the singularities of the metric lie at $\eta \to \pm \infty$, which is interpreted as the position of two source branes. We now pause to identify the asymptotic forms to be required by the metric functions as these branes are approached. There are two physical conditions that guide this choice. First, we wish the limits $\eta \to \pm \infty$ to represent codimension-two points, rather than codimension-one surfaces, and so require $e^{2\beta} \to 0$ in this limit. In addition, we require the two extra dimensions to have finite volume, which requires $e^{\beta + v} \to 0$. 
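As a quick check of the exact solution, eq.~\pref{eq:chisoln}: the combination of twice the second Einstein equation plus the dilaton equation reads $\cx'' + \kappa^2 e^{-2\cx}(A_\theta')^2 = 0$, which with $A_\theta' = q \, e^{2\cx}$ becomes
\be
\cx'' + \kappa^2 q^2 \, e^{2\cx} = 0 \,.
\ee
For $e^{-\cx} = (\kappa q/\lambda_1) \cosh\left[\lambda_1(\eta-\eta_1)\right]$ one finds $\cx'' = -\lambda_1^2 \cosh^{-2}\left[\lambda_1(\eta-\eta_1)\right]$ while $\kappa^2 q^2 \, e^{2\cx} = \lambda_1^2 \cosh^{-2}\left[\lambda_1(\eta-\eta_1)\right]$, so the equation is satisfied for arbitrary $q$, $\lambda_1$ and $\eta_1$.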
In Appendix \ref{app:AsForm} we argue, following \cite{scaling solutions}, that these conditions require that both $\cy''$ and $\cz''$ vanish in the limit $\eta \to \pm \infty$, and so the functions $\cy$ and $\cz$ asymptote to linear functions of $\eta$ for large $|\eta|$: \be \label{eq:YZbcs} \cy \to \cy_\infty^\pm \mp\lambda_2^\pm\eta \quad \hbox{and} \quad \cz \to \cz_\infty^\pm \mp\lambda_3^\pm\eta \quad \hbox{as} \quad \eta \to \pm \infty \,, \ee where $\cy_\infty^\pm$, $\cz_\infty^\pm$, $\lambda_2^\pm$ and $\lambda_3^\pm$ are integration constants. The signs in eqs.~\pref{eq:YZbcs} are chosen so that $\lambda_2^\pm$ and $\lambda_3^\pm$ give the outward-pointing normal derivatives: {\em e.g.} $\lim_{\eta \to \pm \infty} N \cdot \partial \cy = \lambda_2^\pm$, where $N_\ssM$ denotes the outward-pointing unit normal to a surface at fixed $\eta$. Not all of the integration constants identified to this point are independent of one another, however. In particular, the asymptotic form as $\eta \to +\infty$ can be computed from that at $\eta \to - \infty$ by integrating the field equations, and so cannot be independently chosen. In principle, given a value for $c$ and for all of the constants $\lambda_i^+$, $\cx_\infty^+$, $\cy_\infty^+$ and $\cz_\infty^+$, integration of the bulk field equations yields the values for $\lambda_i^-$, $\cx_\infty^-$, $\cy_\infty^-$ and $\cz_\infty^-$. In addition, the integration constants need not all be independent even when we restrict attention purely to the vicinity of just one of the branes. There are several reasons for this.
One combination of these field equations --- the `Hamiltonian' constraint, eq.~\pref{eq:Hamconst} --- imposes a condition\footnote{If this constraint is satisfied as $\eta \to -\infty$, the equations of motion automatically guarantee it also holds as $\eta \to + \infty$.} that restricts the choices that can be made at $\eta \to - \infty$, \be \label{eq:powersconstraint} (\lambda_2^\pm)^2 = \lambda_1^2 + \frac43 \left( \frac{1+c+c^2}{c^2} \right) (\lambda_3^\pm)^2 \,. \ee Also, it turns out that the constants $\cx_\infty^\pm$ are not independent of the other parameters describing the bulk solution, like the flux-quantization integer $n$ to be discussed next. Next, flux quantization for the Maxwell field in the extra dimensions also imposes a relation amongst the integration constants. In the absence of brane sources, flux quantization implies \cite{scaling solutions} \be \frac n{g} = \frac{q}{H_0} \int_{-\infty}^\infty\exd\eta \; e^{2\cx} = \frac{\lambda_1^2}{q\kappa^2 H_0} \, \int_{-\infty}^\infty\exd\eta \, \cosh^{-2}\left[\lambda_1(\eta-\eta_1)\right] = \frac{2\lambda_1}{q\kappa^2 H_0} \,, \ee where $n$ is an integer. This gets slightly modified when branes are present, if the branes are capable of carrying a brane-localized Maxwell flux \cite{BulkAxions, susybranes} (as is the case in particular for the branes considered in \S3, below). In this case the flux-quantization condition is modified to \be \label{eq:bulkfluxquant} \frac n{g} = \sum_b \frac{\Phi_b(\phi)}{2\pi} +\frac{2\lambda_1}{q\kappa^2 H_0} \,, \ee where $\Phi_b$ is the flux localized on each brane. (More on this when we discuss brane properties in more detail in \S3.) Finally, since the above solutions transform into one another under constant shifts of $\eta$, we may use this freedom to reparameterize $\eta \to \eta+\eta_1$ to eliminate $\eta_1$, in which case \be e^{-\cx}=\frac{\kappa\, q}{\lambda_1} \, \cosh(\lambda_1\eta) = \frac{4\pi g}{\kappa H_0(2\pi n-g\sum_b\Phi_b)} \, \cosh(\lambda_1\eta).
\ee {}From this we see that the asymptotic form for $\cx$ is \be \cx\to\cx_\infty^\pm\mp\lambda_1\eta\,, \ee with \be \cx_\infty^\pm = \ln\left[ \kappa H_0\( \frac n{g}-\sum_b\frac{\Phi_b}{2\pi}\) \right] \,. \ee This shows explicitly how $\cx_\infty^\pm$ is related to other integration constants. All told, this leaves $c$, $H_0$, $\lambda_2^-$, $\lambda_3^-$, $\cy^-$ and $\cz^-$ (or, equivalently, $c$, $H_0$, $\lambda_2^+$, $\lambda_3^+$, $\cy^+$ and $\cz^+$) as the six independent integration constants of the bulk solution. These we relate to brane properties in subsequent sections. \subsection{Interpretation as 4D cosmology} In order to make contact with the cosmology seen by a brane-localized observer, we must put the 4D metric into standard Friedmann-Lemaitre-Robertson-Walker (FLRW) form. In particular, we should do so for the 4D Einstein-frame metric, for which the 4D Planck scale is time-independent. \subsubsection*{4D Einstein frame} Recall the 6D metric has the form \ba g_{\ssM \ssN} \, \exd x^\ssM\exd x^\ssN &=& (H_0\tau)^c \Bigl\{ \left[ -e^{2\omega} \exd\tau^2 + e^{2 \alpha} \delta_{ij} \,\exd x^i \exd x^j \right] + \tau^{2} \left[ e^{2v} \exd\eta^2 + e^{2\beta} \exd\theta^2 \right] \Bigr\} \nn\\ &=& \hat g_{\mu\nu}\exd x^\mu \exd x^\nu + \frac{(H_0\tau)^{2+c}}{H_0^2} \left[e^{2v}\exd\eta^2 +e^{2\beta} \exd\theta^2 \right] \,, \ea and denote by $\hat R_{\mu \nu}$ the Ricci tensor constructed using $\hat g_{\mu\nu}$. In terms of these, the time dependence of the 4D Einstein-Hilbert term is given by \be \frac{1}{2 \kappa^2} \sqrt{-g} \; g^{\ssM \ssN} R_{\ssM \ssN} = \frac{1}{2 \kappa^2 H_0^2} \sqrt{-\hat g} \; \hat g^{\mu\nu} \hat R_{\mu\nu} \; e^{\beta +v}(H_0 \tau)^{2+c} + \cdots \,. 
\ee This time dependence can be removed by defining a new 4D Einstein-frame metric \be \tilde g_{\mu\nu} = (H_0\tau)^{2+c} \hat g_{\mu\nu} \,, \ee whose components are \be \tilde g_{\mu\nu} \, \exd x^\mu \exd x^\nu = (H_0\tau)^{2+2c} \left[ -e^{2\omega} \exd \tau^2 + e^{2\alpha} \delta_{ij} \, \exd x^i\exd x^j\right] \,. \ee \subsubsection*{FLRW time} FLRW time is defined for this metric by solving $\exd t = \pm (H_0 \tau)^{1 + c} \exd \tau$. There are two cases to consider, depending on whether or not $c=-2$. If $c \ne -2$, then \be H_0 t = \frac{|H_0 \tau|^{2+c}}{|2+c|} \qquad ( \hbox{if} \quad c \ne -2)\,, \ee where the sign is chosen by demanding that $t$ increases as $\tau$ does. (If $c < -2$ then $t$ rises from 0 to $\infty$ as $\tau$ climbs from $-\infty$ to 0.) This puts the 4D metric into an FLRW-like form \be \label{FLRWwarpedform} \tilde g_{\mu\nu} \, \exd x^\mu \exd x^\nu = - e^{2\omega} \, \exd t^2 + a^2(t) \, e^{2\alpha} \delta_{ij} \, \exd x^i\exd x^j \,, \ee where \be a(t) = ( |c+2| \, H_0 t)^p \quad \hbox{with} \quad p = \frac{1+ c}{2+c} \qquad ( \hbox{if} \quad c \ne -2)\,. \ee \FIGURE[ht]{ \epsfig{file=pVSc2.eps,angle=0,width=0.35\hsize} \caption{A plot of the power, $p$, controlling the scale factor's expansion, vs the parameter $c$ appearing in the higher-dimensional ansatz. } \label{fig:pvsc} } Notice that $p > 1$ if $c < -2$, with $p \to 1$ as $c \to - \infty$ and $p \to + \infty$ when $c \to -2$ from below (see fig.~\ref{fig:pvsc}). This describes an accelerated power-law expansion, resembling the power-law expansion of `extended inflation' \cite{ExtInf} for which $\ddot a/a = p\,(p-1)/t^2 > 0$. Similarly, $p < 0$ if $-2 < c < -1$, with $p \to 0$ as $c \to - 1$ and $p \to - \infty$ as $c \to -2$ from above. Since $p < 0$ this describes a 4D universe that contracts as $t$ increases. Finally $0 < p < 1$ if $c > -1$, climbing monotonically from zero with increasing $c$ until $p \to 1$ as $c \to + \infty$. 
Since $\ddot a/a < 0$, this describes decelerated expansion. If $c=-2$, we instead define \be H_0 t = - \ln|H_0\tau| \qquad ( \hbox{if} \quad c = -2) \,, \ee in which case the FLRW metric again takes the form of eq.~\pref{FLRWwarpedform}, with \be a(t) = e^{H_0 t} \qquad ( \hbox{if} \quad c = -2)\,. \ee This is the limiting case of the de Sitter-like solutions, found in \cite{6DdS}. It may seem a surprise to find de Sitter solutions, given the many no-go results \cite{dSnogo}; however, these solutions thread a loophole in the no-go theorems. The loophole is the benign-looking assumption of compactness: that integrals of the form $I := \int \exd^n x \; \sqrt{g} \; \Box X$ must vanish, where $X$ is a suitable combination of bulk fields. This assumption is violated by the back-reaction of the branes, which can force the bulk fields to become sufficiently singular near the branes to make nonzero contributions to integrals like $I$ \cite{6DdS, BBvN}. \subsubsection*{$t$-dependence of other bulk fields} Recalling that the extra-dimensional metric has the form \be \exd s^2 = \frac{|H_0 \tau|^{2+c}}{H_0^2} \left( e^{2v } \exd \eta^2 + e^{2\beta} \exd \theta^2 \right) \,, \ee we see that the linear size of the extra dimensions is time-independent if $c = -2$, but otherwise behaves as \be r(t) \propto \frac{|H_0 \tau|^{1 + c/2}}{H_0} = \frac{(|c+2| \, H_0 t)^{1/2}}{H_0} \qquad ( \hbox{if} \quad c \ne -2) \,. \ee This shows that the extra dimensions universally grow as $r \propto \sqrt t$ for any $c \ne -2$. In particular $r(t)$ grows even if $a(t) \propto t^p$ shrinks (which happens when $p < 0$: {\em i.e.} when $-2 < c < -1$). When $a(t)$ grows, it grows faster than $r(t)$ whenever $p > \frac12$, which is true both for $c < -2$ and for $c > 0$. It is true in particular whenever the expansion of the on-brane directions accelerates ({\em i.e.} when $p > 1$). When $0 < p < \frac12$ (and so $-1 < c < 0$) it is the extra dimensions that grow faster.
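Both of these power laws follow in one step from the identity $|H_0\tau|^{2+c} = |c+2| \, H_0 t$; we record the intermediate algebra here for convenience:
\be
a \propto |H_0\tau|^{1+c} = \left( |H_0\tau|^{2+c} \right)^{\frac{1+c}{2+c}} = \left( |c+2| \, H_0 t \right)^{p} \quad \hbox{and} \quad r \propto \frac{|H_0\tau|^{1+c/2}}{H_0} = \frac{\left( |c+2| \, H_0 t \right)^{1/2}}{H_0} \,,
\ee
so that $a/r \propto t^{\,p-1/2}$, which makes explicit that the on-brane directions outgrow the extra dimensions precisely when $p > \frac12$. For instance, $c = -3$ gives $p = 2$ and $c = -\frac52$ gives $p = 3$, both with $r \propto \sqrt t$.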
Another useful comparison for later purposes is between the size of $r(t)$ and the 4D Hubble length, $H^{-1}(t)$. Since neither $r$ nor $H$ depends on time when $c = -2$, this ratio is also time-independent in this limit. But for all other values of $c$, the Hubble scale is given by $H := \dot a/a = p/t$, with $p = (c+1)/(c+2)$, as above. Consequently, the ratio of $H$ to the KK scale, $m_\KK = 1/r$, is given by \be H(t)\, r(t) \propto \frac{|c+1|}{( |c+2| \, H_0 t)^{1/2}} \,, \ee and so decreases as $t$ evolves. The dilaton also has a simple time-dependence when expressed as a function of $t$. It is time-independent if $c = -2$, but otherwise evolves as \be e^\phi \propto \frac{1}{(H_0 \tau)^{2+c}} \propto \frac{1}{t} \qquad ( \hbox{if} \quad c \ne -2) \,, \ee which shows that $r^2 e^\phi$ remains independent of $t$ for all $c$. Notice that this implies that evolution takes us deeper into the regime of weak coupling, since it is $e^\phi$ that is the loop-counting parameter of the bulk supergravity \cite{susybranes, TNCC}. \section{Brane actions and bulk boundary conditions} It is not just the geometry of the universe that is of interest in cosmology, but also how this geometry responds to the universe's energy distribution. So in order to properly exploit the above solutions to the field equations it is necessary to relate their integration constants to the physical properties of the matter that sources them. In the present instance this requires specifying an action for the two source branes that reside at $\eta \to \pm \infty$. To this end we imagine one brane to be a spectator, in the sense that it does not involve any on-brane degrees of freedom.
Its action therefore involves only the bulk fields, which to lowest order in a derivative expansion is\footnote{Although nominally involving one higher derivative than the tension term, the magnetic coupling, $\Phi$, describes the amount of flux that can be localized on the brane \cite{BulkAxions, susybranes}, and can be important when computing the energetics of flux-stabilized compactifications in supergravity because of the tendency of the tension to drop out of this quantity \cite{susybranes, TNCC}. We follow here the conventions for $\Phi$ adopted in \cite{BulkAxions, susybranes}, which differ by a factor of $e^{-\phi}$ from those of \cite{TNCC}.} \be \label{eq:Sspec} S_s = - \int \exd^4x \sqrt{-\gamma} \; \left\{ T_s - \frac12 \, \Phi_s \, e^{-\phi} \, \epsilon^{mn} \cF_{mn} + \cdots \right\} \,. \ee Here $T_s$ and $\Phi_s$ are dimensionful parameters, $\gamma_{mn}$ is the induced on-brane metric, and $\epsilon^{mn}$ is the antisymmetric tensor defined on the two dimensions transverse to the brane. Physically, $T_s$ denotes the tension of the spectator brane, while the magnetic coupling, $\Phi_s$, has the physical interpretation of the amount of flux that is localized at the brane \cite{BulkAxions, susybranes} (see Appendix \ref{app:FluxQ}). To provide the dynamics that drives the bulk time dependence we imagine localizing a scalar field --- or inflaton, $\chi$ --- on the second, `inflaton', brane with action \be \label{eq:Sinf} S_i = - \int \exd^4x \sqrt{- \gamma} \; \left\{ T_i + f(\phi) \Bigl[ \gamma^{\mu\nu} \pd_\mu \chi \pd_\nu \chi + V(\chi) \Bigr] - \frac12 \, \Phi_i \, e^{-\phi} \, \epsilon^{mn} \cF_{mn} + \cdots \right\} \,. \ee As before $T_i$ and $\Phi_i$ denote this brane's tension and bulk flux, both of which we assume to be independent of the bulk dilaton, $\phi$. 
In what follows we assume the following explicit forms, \be f(\phi) = e^{-\phi} \quad \hbox{and} \quad V(\chi) = V_0 + V_1 \, e^{\zeta \chi} + V_2 \, e^{2\zeta \chi} + \cdots \,, \ee but our interest is in the regime where the term $V_1 \, e^{\zeta \chi}$ dominates all the others in $V(\chi)$, and so we choose the coefficients $V_k$ appropriately. These choices --- $f = e^{-\phi}$ and $V = V_1 \, e^{\zeta \, \chi}$, as well as the $\phi$-independence of $T_s$, $T_i$, $\Phi_s$ and $\Phi_i$ --- are special because they preserve the scale invariance, eq.~\pref{scaleinv}, of the bulk equations of motion. As we see below, these choices for the functions $f(\phi)$ and $V(\chi)$ are required in order for the equations of motion for $\chi$ to be consistent with the power-law time-dependence we assume above for the solution in the bulk. In order to see why this is true, we require the matching conditions that govern how this action back-reacts onto the properties of the bulk solution that interpolates between the two branes. This requires the generalization to time-dependent systems of the codimension-two matching conditions worked out elsewhere \cite{Cod2Matching, BBvN} for the special case of maximally symmetric on-brane geometries. These matching conditions generalize the familiar Israel junction conditions that relate bulk and brane properties for codimension-one branes, such as those encountered in Randall-Sundrum type models \cite{RS}. \subsection{Time-dependent brane-bulk matching} When the on-brane geometry is maximally symmetric --- {\em i.e.} flat, de Sitter or anti-de Sitter --- the matching conditions for codimension-two branes are derived in refs.~\cite{Cod2Matching} (see also \cite{PST}), and summarized with examples in ref.~\cite{BBvN}. In Appendix \ref{matchingderivation} we generalize these matching conditions to the case where the on-brane geometry is time-dependent, in order to apply it to the situation of interest here. 
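Before doing so, we pause to record why the above choices preserve the invariance of eq.~\pref{scaleinv}, writing the scaling parameter as $s$ to avoid confusion with the coupling $\zeta$. Accompanying $g_{\ssM\ssN} \to s \, g_{\ssM\ssN}$ and $e^\phi \to s^{-1} e^\phi$ with the shift $\chi \to \chi - \zeta^{-1} \ln s$, each term of the brane action scales homogeneously:
\be
\sqrt{-\gamma} \; T \to s^2 \left( \cdots \right) \,, \qquad \sqrt{-\gamma} \; e^{-\phi} \gamma^{\mu\nu} \pd_\mu \chi \, \pd_\nu \chi \to s^2 \cdot s \cdot s^{-1} \left( \cdots \right) \,, \qquad \sqrt{-\gamma} \; e^{-\phi} \, V_1 e^{\zeta\chi} \to s^2 \cdot s \cdot s^{-1} \left( \cdots \right) \,,
\ee
since $\sqrt{-\gamma} \to s^2 \sqrt{-\gamma}$, $e^{-\phi} \to s \, e^{-\phi}$, $\gamma^{\mu\nu} \to s^{-1} \gamma^{\mu\nu}$ and $e^{\zeta\chi} \to s^{-1} e^{\zeta\chi}$. All terms, including the flux term, then scale by the same overall factor $s^2$ as the bulk action; a $\phi$-dependent tension or a non-exponential potential would spoil this.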
In this section we describe the result of this generalization. For simplicity we assume axial symmetry in the immediate vicinity of the codimension-2 brane, with $\theta$ being the coordinate labeling the symmetry direction and $\rho$ labeling a `radial' off-brane direction, with the brane located at $\rho = 0$. We do not demand that $\rho$ be proper distance, or even that $\rho$ be part of a system of orthogonal coordinates. However we do assume that there exist coordinates for which there are no off-diagonal metric components that mix $\theta$ with other coordinates: $g_{a \theta} = 0$. With those choices, the matching conditions for the metric are similar in form to those that apply in the maximally symmetric case: \be \label{eq:cod2matching-g} - \frac12 \left[ \sqrt{g_{\theta\theta}} \, \left(K^{mn}-K \proj^{mn}\right) - {\rm flat} \right] = \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}} \, \frac{\delta S_b}{\delta g_{mn}} \,, \ee while those for the dilaton and Maxwell field are \be \label{eq:cod2matching-phi} - \sqrt{g_{\theta\theta}} \, N^m\nabla_m\phi = \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}} \, \frac{\delta S_b}{\delta\phi} \,, \ee and \be \label{eq:cod2matching-A} - \sqrt{g_{\theta\theta}} \, e^{-\phi} \, N_mF^{mn} = \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}} \, \frac{\delta S_b}{\delta A_n} \,. \ee Here the action appearing on the right-hand-side is the codimension-two action, such as eq.~\pref{eq:Sspec} or \pref{eq:Sinf}, and `flat' denotes the same result for a metric without a singularity at the brane position. We define the projection operator $\proj^m_n = \delta^m_n - N^m N_n$, where $N^m$ is the unit normal to the brane, pointing into the bulk. The induced metric $\gamma_{mn}$ is the projection operator restricted to the on-brane directions, and has determinant $\gamma$. 
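For completeness we note that $\proj^m_n$ is indeed a projector onto the directions orthogonal to the normal, since $N_m N^m = 1$ implies
\be
\proj^m_k \, \proj^k_n = \delta^m_n - 2 N^m N_n + N^m (N_k N^k) N_n = \proj^m_n \quad \hbox{and} \quad \proj^m_n N^n = N^m - N^m (N_n N^n) = 0 \,.
\ee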
In principle the indices $m,n$ in eqs.~\pref{eq:cod2matching-g} run over all on-brane\footnote{When the metric has off-diagonal components mixing $\rho$ and brane directions, then $m,n$ also run over $\rho$. In our metric ansatz, those matching conditions vanish identically.} coordinates as well as $\theta$, and this might seem to present a problem because the codimension-2 action is not normally expressed as a function of $\theta$, which is a degenerate coordinate at the brane position. However, the $\theta - \theta$ matching condition is never really required, because it is not independent of the others. Its content can instead be found from the others by using the `Hamiltonian' constraint, eq.~\pref{eq:Hamconst}, in the near-brane limit \cite{Cod2Matching, BBvN, otheruvcaps}. \subsubsection*{Specialization to the bulk solutions} Specialized to the geometry of our bulk ansatz, the above considerations lead to the following independent matching conditions for the inflationary brane.
Writing the 4D on-brane coordinates as $\{ x^\mu \} = \{ t, x^i \}$, the $tt$, $ij$ and dilaton matching conditions become \ba \label{eq:matchingform} \Bigl[ e^{\beta-v}(\partial_n \beta + 3 \partial_n \alpha) \Bigr]_b &=& 1- \frac{\kappa^2}{2\pi} \left\{ T - H_0\Phi e^{-\varphi-v-\beta}A_\theta' + f(\phi) \left[ -\pd_\tau \chi \, \pd^\tau \chi + V(\chi) \right] \right\} \nn\\ \Bigl[ e^{\beta-v}(\partial_n \beta + 2 \partial_n \alpha + \partial_n \omega)\Bigr] &=&1 -\frac{\kappa^2}{2\pi} \, \left\{ T - H_0\Phi e^{-\varphi-v-\beta}A_\theta' + f(\phi) \left[ \pd_\tau \chi \, \pd^\tau \chi + V(\chi) \right] \right\} \nn\\ \Bigl[ e^{\beta-v} \partial_n \phi \Bigr] &=& \frac{\kappa^2}{2\pi} \, \left( f'(\phi) \left[ \pd_\tau\chi \, \pd^\tau\chi + V(\chi) \right] + H_0\Phi e^{-\varphi-v-\beta}A_\theta' \right) \,, \ea with $\partial_n = \pm \partial_\eta$ denoting the inward-pointing (away from the brane) radial derivative, and both sides are to be evaluated at the brane position --- {\em i.e.} with bulk fields evaluated in the limit\footnote{As we see below, any divergences in the bulk profiles in this near-brane limit are to be absorbed in these equations into renormalizations of the parameters appearing in the brane action.} $\eta \to \mp \infty$. In these equations $f'$ denotes $\exd f/\exd \phi$ while $A'_\theta = \partial_\eta A_\theta = F_{\eta \theta}$. \subsubsection*{Consistency with assumed time-dependence} We first record what $f(\phi)$ and $V(\chi)$ must satisfy in order for the matching conditions, eqs.~\pref{eq:matchingform}, to be consistent with the time-dependence assumed for the bulk cosmological solutions of interest here. Evaluating the left-hand side of the matching conditions, eqs.~\pref{eq:matchingform} using the ans\"atze of eqs.~\pref{metric-ansatz} and \pref{dilaton-ansatz} shows that they are time-independent. The same must therefore also be true of the right-hand side. 
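Concretely, using the forms established just below ($f = e^{-\phi}$, $\chi = \chi_0 + \chi_1 \ln|H_0\tau|$ and $\zeta\chi_1 = -(c+2)$), the required cancellations on the right-hand side work out as
\be
f(\phi) \, \pd_\tau \chi \, \pd^\tau \chi \propto (H_0\tau)^{2+c} \times (H_0\tau)^{-c} \, \tau^{-2} \propto \tau^0 \quad \hbox{and} \quad f(\phi) \, V(\chi) \propto (H_0\tau)^{2+c} \times |H_0\tau|^{\zeta\chi_1} \propto \tau^0 \,,
\ee
since $e^{-\phi} \propto (H_0\tau)^{2+c}$, $g^{\tau\tau} \propto (H_0\tau)^{-c}$, $\pd_\tau \chi = \chi_1/\tau$ and $e^{\zeta\chi} \propto |H_0\tau|^{\zeta\chi_1}$.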
We choose $f(\phi)$ and $V(\chi)$ by demanding that the time-dependence arising due to the appearance of $\phi$ on the right-hand side cancel with time-dependence of the $\chi$-dependent pieces. Comparing the bottom two equations of \pref{eq:matchingform} then shows that the time-dependence of $f(\phi)$ and $f'(\phi)$ must be the same, and so $f(\phi) = C e^{k\phi}$ for some constants $C$ and $k$. The scale $C$ can be absorbed into the normalization of $\chi$, and so is dropped from here on. Similarly, comparing the top two of eqs.~\pref{eq:matchingform} shows that the quantity $g^{\tau \tau} \partial_\tau \chi \, \partial_\tau \chi$ must scale with time in the same way as does $V(\chi)$. Furthermore, any scaling of $\chi$ with time must satisfy the $\chi$ equation of motion, found by varying the brane action with respect to $\chi$: \be \label{eq:chieqn} \pd_\mu \left[ \sqrt{- \gamma} \; e^{k\phi} \pd^\mu \chi \right] - \sqrt{- \gamma} \; e^{k\phi} V'(\chi) = 0 \,. \ee Specialized to a homogeneous roll, $\chi=\chi(\tau)$, this simplifies to \be \label{eq:chieqntau} \pd_\tau \left[ (H_0 \tau)^{2c} (H_0 \tau)^{-k(c+2)} e^{-2\omega}(H_0\tau)^{-c} \pd_\tau \chi \right] + (H_0 \tau)^{2c} (H_0 \tau)^{-k(c+2)} V'(\chi) = 0 \,. \ee All of these conditions are satisfied provided we assume a potential of the form \be V(\chi) = V_1 \, e^{\zeta \chi} \,, \ee and an inflaton solution of the form \be \label{chisoln} \chi = \chi_0+\chi_1\ln|H_0\tau| \,, \ee since in this case the time-dependence of the $\chi$ field equation factors. In what follows it is notationally useful to define $\hat V_1 := V_1 \, e^{\zeta \chi_0}$, allowing eqs.~\pref{eq:chieqn} and \pref{eq:chieqntau} to be rewritten as \be \label{inflaton-eom} H_0^2 e^{-2\omega} = \frac{\hat V_1 \zeta }{\chi_1(3+2\zeta \chi_1)}\,. \ee Notice that if $\zeta\chi_1 > 0$ then $V_1$ must also be non-negative. 
In this case the conditions that $\pd_\tau \chi \pd^\tau \chi$ and $V(\chi)$ scale like $e^{-k\phi}$ boil down to \be (H_0\tau)^{-c-2} \propto \tau^{\zeta \chi_1} \propto \tau^{k(c+2)} \,, \ee and so consistency between the scaling solutions and the matching condition implies $k = -1$, and so $f(\phi) = e^{-\phi}$ as anticipated earlier. It also determines the bulk time exponent $c$ in terms of brane properties: \be \label{ceqn} \zeta \chi_1 = - (c+2) \,. \ee \subsection{Relation between brane parameters and physical bulk quantities} We now use the above tools to establish more precisely the connection between brane properties and the physical characteristics of the bulk geometry. \subsubsection*{Determination of integration constants} Specializing the matching to the choices $f(\phi) = e^{-\phi}$ and $V(\chi) = V_1 e^{\zeta \chi}$, and using the $\tau$-dependence of the bulk and brane fields described in \S2, gives the matching conditions in a form that determines the bulk integration constants in terms of properties of the two branes. Consider first the spectator brane, for which the matching conditions are \ba \label{redmatching} e^{\beta-v} \( \lambda_2^+ - \frac{\lambda_3^+}c \) &=& 1 - \frac{\kappa^2 T_s}{2\pi} + \frac{\kappa^2}{2\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta'\nn\\ e^{\beta-v} \( \lambda_2^+ + \frac{1+2c}{3c} \, \lambda_3^+ \) &=& 1 - \frac{\kappa^2T_s}{2\pi} + \frac{\kappa^2}{2\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta' \\ e^{\beta-v} \( \lambda_1 - \lambda_2^+ - 2 \lambda_3^+ \) &=& \frac{\kappa^2}{\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta' \,, \nn \ea with all quantities evaluated at $\eta \to + \infty$. The difference between the first two of these implies \be \lambda_3^+ = 0\,, \ee for the asymptotic geometry near the spectator brane, which also implies\footnote{ From the constraint alone, $\lambda_1 = - \lambda_2^+$ is also allowed. The requirement of codimension-2 branes together with finite volume excludes this possibility.
For details, see appendix \ref{app:AsForm}.} $\lambda_1=\lambda_2^+$ once the bulk constraint, eq.~\pref{eq:powersconstraint}, is used. This is then inconsistent with the third matching condition at this brane unless we also choose the spectator brane to contain no flux, $\Phi_s=0$. Given this, the matching conditions then degenerate into the usual defect-angle/tension relation \cite{TvsA}, which for the coordinates used here reads \be \lambda_1=\lambda_2^+=e^{v-\beta}\(1-\frac{\kappa^2 T_s}{2\pi}\) \,. \ee This summarizes the near-brane geometry for a pure-tension brane for which $T_s$ does not depend on $\phi$. Next consider the inflaton brane, for which matching implies \ba e^{\beta-v} \( \lambda_2^- - \frac{\lambda_3^-}c \) &=& 1- \frac{\kappa^2}{2\pi} \, e^{-\varphi} \left[ e^{-2\omega} (H_0\chi_1)^2 + \hat V_1 - H_0 \, \Phi_i \, e^{-v-\beta}A_\theta'\right] - \frac{\kappa^2 T_i}{2\pi} \nn\\ e^{\beta-v} \( \lambda_2^- + \frac{1+2c}{3c} \, \lambda_3^- \) &=& 1- \frac{\kappa^2}{2\pi} \, e^{-\varphi} \left[ - e^{-2\omega} (H_0\chi_1)^2 + \hat V_1 - H_0 \, \Phi_i \, e^{-v-\beta}A_\theta'\right] - \frac{\kappa^2 T_i}{2\pi} \nn\\ e^{\beta-v} \( \lambda_1 - \lambda_2^- - 2 \lambda_3^- \) &=& \frac{\kappa^2}{\pi} \, e^{-\varphi} \left[ e^{-2\omega}(H_0\chi_1)^2 - \hat V_1 + H_0 \, \Phi_i \, e^{-v-\beta} A_\theta'\right] \,, \ea with the fields evaluated at $\eta \to - \infty$. Using the first two matching conditions to eliminate $\lambda_2^-$, and using eqs.~\pref{inflaton-eom} and \pref{ceqn} to eliminate $H_0$ and $c$ allows the isolation of $\lambda_3^-$, giving \be \label{lambda3matching} e^{\beta-v}\lambda_3^- = \frac{\kappa^2 \hat V_1}{2\pi} \( \frac{6+3 \, \zeta \chi_1}{3 + 2 \, \zeta \chi_1}\) \, e^{-\varphi}\,. \ee In general, matching for the inflaton brane is more subtle, since for it the above matching conditions typically diverge when evaluated at the brane positions.
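The algebra behind both brane-matching results is simple enough to check numerically. The following Python snippet is an illustrative cross-check only (all numerical inputs are arbitrary samples): subtracting the second matching condition from the first at either brane eliminates $\lambda_2$, leaving $\lambda_3$ multiplied by a coefficient that is nonzero for $c \neq -2$, which forces $\lambda_3^+ = 0$ at the (static) spectator brane; at the inflaton brane the same difference, combined with eqs.~\pref{inflaton-eom} and \pref{ceqn}, reproduces eq.~\pref{lambda3matching}.

```python
# Illustrative cross-check, not part of the original derivation.
# Subtracting the second matching condition from the first eliminates
# lambda_2, leaving lambda_3 times the coefficient
# -1/c - (1+2c)/(3c) = -(4+2c)/(3c), which is nonzero for c != -2.
# At the inflaton brane, the same difference plus
# H0^2 e^{-2w} = Vhat*zeta/(chi1*(3+2s)), c = -2-s (s = zeta*chi1),
# should reproduce eq. (lambda3matching). e^{-phi} is absorbed into Vhat.
s = 0.37                                   # s = zeta*chi1 (arbitrary sample)
c = -2.0 - s
K = 0.71                                   # kappa^2/(2*pi) (arbitrary)
Vhat = 1.3                                 # e^{-phi} \hat V_1 (arbitrary)
coeff = -1.0/c - (1.0 + 2.0*c)/(3.0*c)     # multiplies e^{beta-v} lambda_3
assert abs(coeff - (-(4.0 + 2.0*c)/(3.0*c))) < 1e-12 and coeff != 0.0
rhs = -2.0*K*Vhat*s/(3.0 + 2.0*s)          # RHS difference on the solution
lam3 = rhs/coeff                           # = e^{beta-v} lambda_3^-
expected = K*Vhat*(6.0 + 3.0*s)/(3.0 + 2.0*s)
print(abs(lam3/expected - 1.0))
```

The recovered $e^{\beta-v}\lambda_3^-$ agrees with the quoted $(\kappa^2\hat V_1/2\pi)(6+3\zeta\chi_1)/(3+2\zeta\chi_1)$ factor to machine precision.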
As usual \cite{Bren}, this divergence is absorbed into the parameters of the brane action, as we now briefly sketch. \subsubsection*{Brane renormalization} In general, in the near-brane limit $\beta-v=\cx-\cy$ varies linearly with $\eta$, approaching $\cx_\infty^\pm - \cy_\infty^\pm \mp (\lambda_1 - \lambda_2^\pm) \, \eta$ as $\eta \to \pm \infty$. This shows that unless $\lambda_2^\pm=\lambda_1$ (which with the constraint, eq.~\pref{eq:powersconstraint}, then implies $\lambda_3^\pm=0$), the left-hand sides of the above matching conditions diverge. These divergences are generic to sources of codimension two and higher, as is familiar from the divergence of the Coulomb potential at the position of a point charge (a codimension-3 source in 3 space dimensions). We absorb these divergences into renormalizations of the brane parameters, which in the present instance are $V_1$, $\zeta$, $T_i$ and $\Phi_i$, together with a wave-function renormalization of the on-brane field, $\chi$ (which for the present purposes amounts to a renormalization of $\chi_1$). To this end we regularize the matching conditions, by evaluating them at a small but nonzero distance away from the brane --- {\em i.e.} for $|\eta| = 1/\epsilon$ very large --- and assign an $\epsilon$-dependence to the couplings in such a way as to ensure that the renormalized results are finite as $\epsilon \to 0$. This is a meaningful procedure because the values of these parameters are ultimately determined by evaluating physical observables in terms of them, and measuring the values of these observables. Ultimately all of the uncertainties associated with the $\epsilon$ regularization cancel once the renormalized parameters are eliminated in this way in terms of observables, since a theory's predictive value is in the correlations it implies among the values of these observables.
In this section we (temporarily) denote the resulting renormalized (finite) brane parameters by a bar, {\em e.g.} for $\eta = -1/\epsilon$, \be \label{zrendef} \zeta \to \overline\zeta := Z_\zeta(\epsilon) \, \zeta \,, \quad V_1 \to \overline V_1 := Z_\ssV (\epsilon) \, V_1 \,, \quad \chi_1 \to \overline \chi_1 := Z_\chi (\epsilon) \, \chi_1 \quad \hbox{and so on} \,. \ee We define the parameters $Z_\ssV$, $Z_\zeta$ {\em etc.} so that $\overline \zeta$, $\overline V_1$ and the others remain finite. Since, as we show later, the integration constants like $\lambda_i^\pm$ are directly relatable to physical observables, the above matching conditions give us guidelines on how the various couplings renormalize. For instance, inspection of eq.~\pref{ceqn} shows that the product $\overline \zeta \, \overline \chi_1$ should remain finite, since it determines the physically measurable quantity $c$. Consequently \be \label{zetachiZs} Z_\zeta(\epsilon) Z_\chi (\epsilon) = \hbox{finite} \,. \ee Next, the finiteness of $\zeta \chi_1$ together with the particular combination of matching conditions that sets $\lambda_3^-$ --- {\em i.e.} eq.~\pref{lambda3matching} --- shows that when $\eta = -1/\epsilon$ we must define \be \label{eq:ZVexp} Z_\ssV = \frac{\overline V_1}{V_1} = e^{-\left[ \lambda_3^- + \frac32 \, (\lambda_2^- - \lambda_1) \right]/\epsilon} + \hbox{(finite)} \,, \ee in order to compensate for the divergent behaviour of $e^{\varphi + \beta - v}$. Using this in the inflaton equation, eq.~\pref{inflaton-eom}, and keeping in mind that (see below) $H_0$ is a physical parameter, we find \be H_0^2 \propto e^{-\( \lambda_1 - \lambda_2^- + \frac{2}{c} \lambda_3^- \) /\epsilon} \; \frac{\zeta }{\chi_1} \,, \ee and so this, together with eq.~\pref{zetachiZs}, leads to $Z_\zeta (\epsilon) / Z_\chi(\epsilon) = e^{-\( \lambda_1 - \lambda_2^- + \frac{2}{c} \, \lambda_3^- \)/\epsilon} \times \hbox{(finite)}$.
If we absorb only the exponential dependence on $1/\epsilon$ into the renormalizations --- {\em e.g.} taking $\hbox{`finite'} = 0$ in eq.~\pref{eq:ZVexp} --- this implies \ba Z_\zeta &=& e^{ - \frac12 \( \lambda_1 - \lambda_2^- + \frac{2}{c} \, \lambda_3^- \) / \epsilon} \nn\\ Z_\chi &=& e^{\frac12 \( \lambda_1 - \lambda_2^- +\frac{2}{c} \, \lambda_3^- \)/\epsilon} \,. \ea Finally, the matching conditions involving $T_i$ are rendered finite by defining \ba 1 - \frac{\kappa^2 \overline T_i}{2\pi} &:=& e^{- ( \lambda_2^- - \lambda_1 )/\epsilon} \( 1 - \frac{\kappa^2 T_i}{2\pi} \) + \hbox{(finite)} \,. \ea $\Phi_i$ does not require a divergent renormalization, as it appears as a finite quantity in the matching conditions. \subsubsection*{Connection to physical properties} Since the above section uses the finiteness of the bulk integration constants, $\lambda_i^\pm$, $H_0$, $c$ {\em etc.}, we pause here to relate these quantities more explicitly to physical observables. This ultimately is what allows us to infer the values taken by the finite renormalized parameters. First, $c$ and $H_0$ directly determine the power of time with which the scale factor for the on-brane dimensions expands, and are thereby measurable through cosmological observations that determine $\dot a/a$, $\ddot a/a$ and so on. Similarly, the volume of the extra dimensions is \be \cV_2 = \int \exd^2x \, \sqrt{g_2} = 2\pi (H_0\tau)^c\tau^2\int_{-\infty}^\infty \exd\eta \, \exp\(\frac\cx2+\frac{3\cy}2 +\cz\), \ee and the proper distance between the branes is given by \be L = (H_0\tau)^{c/2}\tau \int_{-\infty}^\infty \exd\eta \, \exp\( -\frac\cx4 +\frac{5\cy}4 + \frac\cz2 \) \,. \ee It is through relations such as these that physical quantities get related to the integration constants. In particular, convergence of these integrals implies conditions on the signs of the combinations $\lambda_1+4\lambda_2^\pm+2\lambda_3^\pm$ and $-\lambda_1+5\lambda_2^\pm+2\lambda_3^\pm$, all of which must be finite.
The same is true of $\lambda_2$, which can be regarded as a function of the other two powers through the constraint, eq.~\pref{eq:powersconstraint}. Finally, the fluxes, $\Phi_s$ and $\Phi_i$, appear in the flux quantization condition and are directly related to a (finite) physical quantity: the magnetic charge of the branes. The renormalized tensions, $T_s$ and $T_i$, similarly enter into expressions for the deficit angle at the corresponding brane location. \subsection{The 6D perspective in a nutshell} Before turning to the view as seen by a 4D observer, this section first groups the main results obtained above when using the time-dependent matching conditions, eqs.~\pref{redmatching}, to relate the constants of the bulk scaling solution to the (renormalized) parameters in the source-brane actions, eqs.~\pref{eq:Sspec} and \pref{eq:Sinf}. The physical couplings that we may specify on the inflaton brane are the renormalized quantities $V_1$, $\zeta $, $T_i$ and $\Phi_i$ (and we henceforth drop the overbar on renormalized quantities). On the spectator brane we similarly have $T_s$ and $\Phi_s$. We also get to specify `initial conditions' for the on-brane inflaton: $\chi_0$ and $\chi_1$, as well as the integer, $n$, appearing in the flux-quantization condition. Of these, $\chi_0$ and $V_1$ appear only in the combination $\hat V_1 = V_1 \, e^{\zeta \chi_0}$, and so the value of $\hat V_1$ can be regarded as an initial condition for the inflaton rather than a choice for a brane coupling. Altogether these comprise 8 parameters: 5 brane couplings; 1 bulk flux integer; and 2 inflaton initial conditions. We now summarize the implications these parameters impose on the integration constants in the bulk, and identify any consistency conditions amongst the brane properties that must be satisfied in order to be able to interpolate between them using our assumed scaling bulk solution.
\subsubsection*{Time dependence} First off, consistency of the scaling ansatz for the time dependence of all fields gives \be \label{cvszetachi} c = -2 - \zeta \chi_1 \,. \ee Notice that this involves only the brane coupling $\zeta$ --- whose value determines the flatness of the inflaton potential --- and the inflaton initial condition, $\chi_1$. In particular, $c = -2$, corresponding to a de Sitter on-brane geometry, if either $\zeta$ or $\chi_1$ is chosen to vanish. Next, we take the inflaton equation of motion on the brane to give the bulk parameter $H_0$ in terms of choices made on the inflationary brane: \be H_0^2 = e^{-\frac12 (\cx_\infty^- - \cy_\infty^-) +\frac{\zeta \chi_1}{2+\zeta \chi_1} \, \cz_\infty^-} \left( \frac{\hat V_1}{3+2 \, \zeta \chi_1} \right) \frac{\zeta }{ \chi_1} \,. \ee Among other things, this shows that the choice $\chi_1 = 0$ does not satisfy the $\chi$ field equation unless $\zeta$ or $V_1$ vanish. \subsubsection*{Consistency relations} Consider next how the number of couplings on the branes restricts the other integration constants in the bulk. Start with the spectator brane. Near the spectator brane we have $\lambda_3^+ = 0$ and \be \label{lamsums} \lambda_1 = \lambda_2^+ = e^{\cy_\infty^+ - \cx_\infty^+} \(1-\frac{\kappa^2 T_s}{2\pi}\) \,, \ee as well as $\Phi_s = 0$. Specifying $T_s$ therefore imposes two relations among the four remaining independent bulk integration constants, $\lambda_1$, $\lambda_2^+$, $\cy_\infty^+$ and $\cz_\infty^+$, relevant to asymptotics near the spectator brane. We regard eq.~\pref{lamsums} as being used to determine the value of two of these, $\lambda_2^+$ and $\cy_\infty^+$ say. Next we use the bulk equations of motion, eqs.~\pref{eq:chisoln} and \pref{bulkXY}, to integrate the bulk fields across to the inflaton brane. 
Starting from a specific choice for the fields and their $\eta$-derivatives at the spectator brane, this integration process leads to a unique result for the asymptotic behaviour at the inflaton brane. Given the 2-parameter set of solutions consistent with the spectator brane tension, integration of the bulk field equations should generate a 2-parameter subset of the parameters describing the near-inflaton-brane limit. Now consider matching at the inflaton brane. The three asymptotic powers describing the near-brane limit for the inflaton brane can be expressed as \ba \label{lamsumi} \lambda_1 &=& e^{\cz_\infty^- - \frac32 ( \cx_\infty^- - \cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi} \, \( \frac{\zeta \chi_1}{3+2 \, \zeta \chi_1} \) + e^{\cy_\infty^- - \cx_\infty^-} \( 1 -\frac{\kappa^2 T_i}{2\pi} \) + \frac{3\kappa^2 H_0 q \, \Phi_i}{2\pi} \nn\\\nn\\ \lambda_2^- &=& e^{ \cz_\infty^- - \frac32 ( \cx_\infty^- - \cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi} \( \frac{-6-3 \, \zeta \chi_1}{3+2 \, \zeta \chi_1} \) + e^{ \cy_\infty^- - \cx_\infty^-} \( 1 - \frac{\kappa^2 T_i}{2\pi} \) +\frac{\kappa^2 H_0 q\, \Phi_i}{2\pi}\\ \nn\\ \lambda_3^- &=& e^{ \cz_\infty^- - \frac32 ( \cx_\infty^- - \cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi} \( \frac{6+3 \, \zeta \chi_1}{3+2 \, \zeta \chi_1} \) \,, \nn \ea which follow from three of the four matching conditions at the inflaton brane.\footnote{Recall that for time-independent systems there are 3 metric matching conditions -- $(tt)$, $(ij)$ and $(\theta \theta)$ -- plus that for the dilaton, $\phi$. 
The Hamiltonian constraint then imposes one relation amongst these three conditions, that can be regarded as implicitly fixing how the brane action depends on $g_{\theta\theta}$.} Notice that the constant $q$ appearing here can be regarded as being a function of the flux-quantization integer $n$ and the inflaton-brane flux coupling, $\Phi_i$: \be q = \frac{4\pi g \lambda_1}{\kappa^2H_0[2\pi n -g\sum_b\Phi_b]} = \frac{4\pi g \lambda_1 }{\kappa^2H_0[2\pi n -g\Phi_i]}\,. \ee The three parameters $\lambda_1$, $\lambda_2^-$ and $\lambda_3^-$ are not independent because they must satisfy the constraint, eq.~\pref{eq:powersconstraint}, \be \label{hamconst2} (\lambda_2^-)^2 - (\lambda_1)^2 = \frac43 \left( \frac{1+c+c^2}{c^2} \right) (\lambda_3^-)^2 =\frac{12 + 12 \, \zeta \chi_1 +4(\zeta \chi_1)^2}{12 +12\, \zeta \chi_1 +3(\zeta \chi_1)^2} \; (\lambda_3^-)^2 \,, \ee whose validity follows as a consequence of the field equations because the same constraint holds for the parameters, $\lambda_1$, $\lambda_2^+$ and $\lambda_3^+$, that control the bulk asymptotics near the spectator brane. In principle, for a given set of inflaton-brane couplings we can regard two of eqs.~\pref{lamsumi} as fixing the remaining two free bulk parameters. The third condition does not over-determine these integration constants of the bulk, because the constraint, eq.~\pref{hamconst2}, is satisfied as an identity for all of the 2-parameter family of bulk solutions found by matching to the spectator brane. Consequently the third of eqs.~\pref{lamsumi} must be read as a constraint on one of the inflaton-brane properties. If we take this to be $\hat V_1$, say, then it can be interpreted as a restriction on the initial condition, $\chi_0$, in terms of the spectator-brane tension. This restriction is the consistency condition that is required if we wish to interpolate between the two branes using the assumed bulk scaling solution. 
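The rewriting of the constraint coefficient in eq.~\pref{hamconst2} in terms of $\zeta\chi_1$ is pure algebra, and is easy to confirm numerically. The following Python snippet is an illustrative check only (sample values arbitrary, not part of the analysis):

```python
# Illustrative algebra check: with c = -2 - s, where s = zeta*chi_1, the
# coefficient (4/3)(1+c+c^2)/c^2 appearing in the Hamiltonian constraint
# should equal (12 + 12 s + 4 s^2)/(12 + 12 s + 3 s^2).
def lhs(s):
    c = -2.0 - s
    return (4.0/3.0)*(1.0 + c + c*c)/(c*c)

def rhs(s):
    return (12.0 + 12.0*s + 4.0*s*s)/(12.0 + 12.0*s + 3.0*s*s)

max_err = max(abs(lhs(s) - rhs(s)) for s in (0.1, 0.37, 1.5, -0.2))
print(max_err)
```

Both forms agree identically, since $1+c+c^2 = 3+3s+s^2$ and $c^2 = 4+4s+s^2$ when $c=-2-s$.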
\subsubsection*{Inflationary choices} At the end of the day we see that consistency with the bulk geometry does not preclude us from having sufficient freedom to adjust brane properties like $\zeta$ and $\chi_1$ to dial the parameters $c$ and $H_0$ freely. This shows that there is enough freedom in our assumed brane properties to allow treating these bulk parameters as independent quantities that can be freely adjusted. In particular, we are free to choose the product $\zeta \chi_1$ to be sufficiently small and positive -- {\em c.f.} eq.~\pref{cvszetachi} -- to ensure an accelerated expansion: {\em i.e.} that $c$ is just slightly more negative than the de Sitter value of $-2$. This is the adjustment that is required to assure a `slow roll' within this model. We also see that the time-dependence of the solution is such that the brane potential energy shrinks as the brane expands. That is, evaluated at the solution, eq.~\pref{chisoln}, \be \label{Vovertime} \Bigl. V_1 \, e^{\zeta \chi} \Bigr|_{\rm soln} = \hat V_1 \, | H_0 \tau |^{\zeta \chi_1} = \hat V_1 \, \Bigl(|c+2| \, H_0 t \Bigr)^{\zeta \chi_1/(2+c)} = \frac{\hat V_1}{ \zeta \chi_1 H_0 t } \,. \ee This shows how inflation might end in this model. Suppose we take \be V(\chi) = V_0 + V_1 \, e^{\zeta \chi} + V_2 \, e^{2 \, \zeta \chi} + \cdots \,, \ee where $V_1$ is chosen much larger than $V_0$ or the other $V_k$'s. If $\chi$ is initially chosen so that $V(\chi) \simeq V_1 \, e^{\zeta \chi}$ is dominated by the term linear in $e^{\zeta \chi}$, then the above scaling bulk solution can be consistent with the brane-bulk matching conditions. But eq.~\pref{Vovertime} shows that this term shrinks in size when evaluated at this solution (as also do the terms involving higher powers of $e^{\zeta \chi}$), until eventually the $V_0$ term dominates. Once $V_0$ dominates the bulk scaling solution can no longer apply, plausibly also implying an end to the above accelerated expansion of the on-brane geometry.
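The final equality in eq.~\pref{Vovertime} rests on two small pieces of arithmetic: with $\zeta\chi_1 = -(c+2) > 0$ the exponent $\zeta\chi_1/(2+c)$ equals $-1$, and $|c+2| = \zeta\chi_1$. A minimal illustrative check (sample numbers arbitrary, not part of the analysis):

```python
# Illustrative arithmetic behind eq. (Vovertime): with zeta*chi1 = -(c+2)
# and c+2 < 0, the exponent zeta*chi1/(2+c) = -1 and |c+2| = zeta*chi1,
# so Vhat*(|c+2| H0 t)^{zeta*chi1/(2+c)} = Vhat/(zeta*chi1*H0*t).
s = 0.3                       # s = zeta*chi1 > 0 (arbitrary sample)
c = -2.0 - s
Vhat, H0, t = 1.7, 0.5, 4.0   # arbitrary samples
lhs = Vhat*(abs(c + 2.0)*H0*t)**(s/(2.0 + c))
rhs = Vhat/(s*H0*t)
print(abs(lhs - rhs))
```

Hence the brane potential energy on the solution falls off as $1/t$, independent of the particular value of $\zeta\chi_1 > 0$.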
If $V(\chi) \simeq V_0$, then the inflaton brane effectively has a $\phi$-dependent tension, $T_{\rm eff} = T_i + V_0 \, e^{-\phi}$, which breaks the bulk scale invariance and so can lift the bulk's flat direction \cite{Cod2Matching, BBvN, susybranes} and change the dynamics of the bulk geometry. Although this likely ends the inflationary evolution described above, it is unlikely in itself to provide a sufficiently graceful exit towards a successful Hot Big Bang epoch. Earlier calculations for maximally-symmetric branes show that such a tension leads to an effective potential (more about which below) proportional to $T_{\rm eff}' \propto - V_0 \, e^{-\phi}$, which points to a continued runaway along the would-be flat direction rather than a standard hot cosmology. We leave for further work the construction of a realistic transition from extra-dimensional inflation to later epochs, but expect that a good place to seek this interface is by modifying the assumption that $\Phi_s$ and/or $\Phi_i$ remain independent of $\phi$, since it is known \cite{TNCC} that when $\sum_b \Phi_b \propto e^{\phi}$ the low-energy scalar potential can act to stabilize $\phi$ at a minimum where the low-energy effective potential vanishes (classically). \section{The view from 4D} We now ask what the above dynamics looks like from the perspective of a 4D observer, as must be possible on general grounds within an effective theory in the limit when the Hubble scale, $H$, is much smaller than the KK scale. We can find the 4D description in this limit by explicitly compactifying the 6D theory. Our goal when doing so is to show how the low-energy 4D dynamics agrees with that of the explicit higher-dimensional solution, and to acquire a better intuition for how this inflationary model relates to more familiar 4D examples. 
\subsection{The 4D action} The simplest way to derive the functional form of the low-energy 4D action (at least at the classical level) is to use the classical scale invariance of the bulk field equations, since these are preserved by the choices we make for the branes --- at least during the inflationary epoch where $V \simeq V_1 \, e^{\zeta \chi}$. Since this symmetry must therefore also be a property of the classical 4D action, there must exist a frame for which it can be written in the following scaling form: \ba \label{4Deffaction} S_{\rm eff} &=& - \int \exd^4x \sqrt{ - \hat g_4} \; e^{-2 \varphi_4} \left[ \frac1{2\kappa_{4}^2} \hat g^{\mu\nu} \( \hat R_{\mu\nu} + Z_\varphi\, \pd_\mu \varphi_4 \pd_\nu\varphi_4 \) \right. \nn\\ && \qquad\qquad\qquad\qquad\qquad\qquad \left. \phantom{\frac12} + f^2 \, \hat g^{\mu\nu} \pd_\mu \chi \pd_\nu \chi + U_\JF \( e^{\zeta \chi - \varphi_4} \) \right] \\ &=& - \int \exd^4x \sqrt{ - {\bf g}_4} \; \left[ \frac1{2\kappa_{4}^2} {\bf g}^{\mu\nu} \( {\bf R}_{\mu\nu} + (6 + Z_\varphi) \, \pd_\mu \varphi_4 \pd_\nu\varphi_4 \) \right. \nn\\ && \qquad\qquad\qquad\qquad\qquad\qquad \left. \phantom{\frac12} + f^2 \, {\bf g}^{\mu\nu} \pd_\mu \chi \pd_\nu \chi + e^{2 \varphi_4} U_\JF \( e^{\zeta \chi - \varphi_4} \) \right] \,, \nn \ea where $\varphi_4$ denotes the 4D field corresponding to the flat direction of the bulk supergravity and $\chi$ is the 4D field descending from the brane-localized inflaton. The second version gives the action in the 4D Einstein frame, whose metric is defined by the Weyl transformation: \be {\bf g}_{\mu\nu} = e^{-2\varphi_4} \hat g_{\mu\nu} \,. \ee The potential, $U_\JF$, is an a-priori arbitrary function of the scale-invariant combination $e^{\zeta \chi - \varphi_4}$, whose functional form is not dictated purely on grounds of scale invariance. 
The detailed form of $U_\JF$ and the values of the constants $\kappa_4$, $Z_\varphi$ and $f$, are calculable in terms of the microscopic parameters of the 6D theory by dimensional reduction. As shown in detail in Appendix \ref{app:dimred}, we find $Z_\varphi = -4$, \ba \label{kappaJFandfmatching} \frac{1}{2 \kappa_4^2} &=& \int \exd \theta \exd \eta \; \frac{e^{-\omega + 3\alpha + \beta + v}}{2 \kappa^2 H_0^2} = \frac\pi{ \kappa^2 H_0^2} \int \exd \eta \; e^{2\cy-2\cz/c} \nn\\ &=& \frac\pi{ \kappa^2 H_0^2} \int \exd \eta \; \frac{\cz''}{3c} = -\frac{\pi\lambda_3^-}{H_0^2\kappa^2c} \nn\\ f^{2} &=& e^{-\cx_\infty^- +\cy_\infty^- - \frac2c\cz_\infty^-} \left( \frac{23-2c}{28+8c} \right) \,, \ea while the potential becomes \be \label{VEFmatching} V_\EF := e^{2\varphi_4} \, U_\JF = - C e^{2\varphi_4} + D e^{\zeta\chi + \varphi_4} \,, \ee with the constants $C$ and $D$ evaluating to \ba \label{CDmatching} C &=& \frac54 \, q H_0 \Phi_i - e^{-\cx_\infty^- + \cy_\infty^-} \(\frac{2\pi}{\kappa^2} - T_i \) - e^{-\cx_\infty^+ + \cy_\infty^+} T_s \nn\\ D &=& \frac54 e^{-\frac32 (\cx_\infty^- - \cy_\infty^-) + \cz_\infty^-} V_1\,. \ea In the regime of interest, with $\kappa^2 T_i/2\pi \ll 1$ and $\kappa^2 T_s /2\pi \ll 1$ and $V_1 > 0$, both $C$ and $D$ are positive. The unboundedness from below of $V_\EF$ as $\varphi_4 \to \infty$ is only an apparent problem, since the domain of validity of the semiclassical calculations performed here relies on the bulk weak-coupling condition, $e^{\varphi_4} \ll 1$. 
\subsection*{4D dynamics} The classical field equations obtained using this 4D effective action consist of the following scalar equations, \ba \frac{2}{\kappa_{4}^2} \, \Box\varphi_4 &=& -2C \, e^{2\varphi_4} + D \, e^{\zeta \chi + \varphi_4} \nn\\ 2{f^2} \, \Box\chi &=& \zeta D \, e^{\zeta \chi + \varphi_4} \,, \ea and the trace-reversed Einstein equations \be {\bf R}_{\mu\nu} + 2 \, \pd_\mu \varphi_4 \pd_\nu\varphi_4 + {2\kappa_{4}^2}{f^2} \, \pd_\mu \chi \pd_\nu \chi +\kappa_{4}^2 V_{\EF} \, {\bf g}_{\mu\nu} = 0 \,. \ee This system admits scaling solutions, with all functions varying as a power of time, \ba \label{4Dpowerlaw} {\bf g}_{\mu\nu} &=& (H_0\tau)^{2+2c} \( \eta_{\mu\nu} \, \exd x^\mu \exd x^\nu \) \nn\\ e^{\varphi_4} &=& e^{\varphi_{40}} (H_0\tau)^{-2-c} \nn\\ e^{\zeta\chi} &=& e^{\zeta \chi_0} (H_0\tau)^{\zeta\chi_1} = e^{\zeta \chi_0} (H_0\tau)^{-2-c}\,. \ea Notice that the consistency of the field equations with the power-law time-dependence requires $\zeta \chi_1=-2-c$, just like in six dimensions ({\em c.f.} eq.~\pref{ceqn}). With this, the scalar equations of motion are \ba \frac{2}{\kappa_{4}^2} \,H_0^2 (2c^2+5c+2) &=& -2 C \, e^{2 \varphi_{40}} + D \, e^{\zeta \chi_0 + \varphi_{40}} \nn\\ -2 (2c+1) {H_0^2 \, \chi_1}{f^2} &=& \zeta D \, e^{\zeta \chi_0 + \varphi_{40}} \,, \ea and the Einstein equations become \ba \frac{H_0^2}{\kappa_{4}^2} \( 2c^2+5c+5 \) + {2H_0^2 \, \chi_1^2}{f^2} &=& - C\, e^{2 \varphi_{40}} +D \, e^{\zeta \chi_0 + \varphi_{40}} \nn\\ \frac{H_0^2}{\kappa_{4}^2} (2c^2+3c+1) &=& -C\, e^{2 \varphi_{40}} +D\, e^{\zeta \chi_0 + \varphi_{40}} \,. \ea These four equations are to be solved for the three variables $\chi_0$, $\chi_1$ and $\varphi_{40}$ appearing in the power-law ansatz, eqs.~\pref{4Dpowerlaw}. This is not an over-determined problem because the four equations are not independent (a linear combination of the two scalar equations gives the second Einstein equation). 
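The non-independence of the four equations can also be exhibited concretely. The following Python snippet is an illustrative check only (arbitrary sample inputs, not part of the analysis): it fixes $\chi_1 = \zeta/\kappa_4^2 f^2$ and $e^{\varphi_{40}-\zeta\chi_0} = D(1-s)/2C$ with $s = \zeta\chi_1$ (the solution values derived next), then verifies that all four field equations are satisfied simultaneously.

```python
# Illustrative consistency check of the 4D scaling ansatz: with
# chi1 = zeta/(k4^2 f^2) and Y/X fixed by e^{phi40 - zeta*chi0} =
# D(1-s)/(2C), all four field equations hold at once, confirming the
# system is consistent rather than over-determined. Inputs are arbitrary.
k4, f, zeta = 1.0, 1.3, 0.4
C, D = 0.8, 1.1
s = zeta**2/(k4**2*f**2)              # s = zeta*chi1
c = -2.0 - s
chi1 = zeta/(k4**2*f**2)
X = 0.6                               # X = e^{2 phi40}; free overall scale
Y = X*2.0*C/(D*(1.0 - s))             # Y = e^{zeta*chi0 + phi40}
H0sq = zeta*D*Y/(-2.0*(2*c + 1)*chi1*f**2)   # from the chi scalar equation
eqs = [
    (2.0/k4**2)*H0sq*(2*c*c + 5*c + 2) - (-2*C*X + D*Y),   # phi_4 scalar eq
    -2.0*(2*c + 1)*H0sq*chi1*f**2 - zeta*D*Y,              # chi scalar eq
    (H0sq/k4**2)*(2*c*c + 5*c + 5) + 2*H0sq*chi1**2*f**2 - (-C*X + D*Y),
    (H0sq/k4**2)*(2*c*c + 3*c + 1) - (-C*X + D*Y),
]
worst = max(abs(e) for e in eqs)
print(worst)
```

All four residuals vanish to machine precision for any choice of the free scale $X$, reflecting the scale invariance of the system.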
Subtracting the two Einstein equations yields \be {\chi_1^2}{f^2} = - \frac{2+c}{\kappa_{4}^2} = \frac{\zeta\chi_1}{\kappa_{4}^2} \,, \ee and so discarding the trivial solution, $\chi_1=0$, we find \be \label{eq:chi1soln} {\chi_1} = \frac{\zeta}{\kappa_{4}^2 f^2} \,. \ee Next, dividing the two scalar equations gives the relation \be \label{solntoscalareqn} -\frac{2c^2+5c+2}{(2c + 1) \kappa_4^2 f^2 \chi_1} = -\frac{c+2}{\zeta} = \frac{1}{\zeta} \(1 - \frac{2C}{D} \, e^{\varphi_{40} - \zeta \chi_0 } \) \,, \ee where the first equality uses eq.~\pref{eq:chi1soln}. Combining eqs.~\pref{ceqn}, \pref{eq:chi1soln} and \pref{solntoscalareqn} finally gives \be \label{chivsCD} \frac{\zeta^2}{\kappa_{4}^2 f^2} = 1 - \frac{2C}{D} \, e^{\varphi_{40} - \zeta \chi_0 } \,. \ee This last equation shows that the scaling ansatz is only consistent with the field equations if $\chi_0$ is chosen appropriately, in agreement with what was found by matching between branes in the 6D perspective. It also shows, in particular, that $\zeta \chi_1$ can be dialed to be small and positive by suitably adjusting the scale-invariant (and time-independent) quantity $\varphi_4 - \zeta \chi$ so that the right-hand side of eq.~\pref{chivsCD} is sufficiently small and positive. This is not inconsistent with the microscopic choices made for the branes because the ratio $C/D$ is positive. The upshot is this: the above relations precisely reproduce the counting of parameters and the properties of the solutions of the full 6D theory, once the low-energy parameters $C$, $D$, $\kappa_4$ and $f$ are traded for the underlying brane properties, using eqs.~\pref{kappaJFandfmatching} and \pref{CDmatching}. \subsection{The 4D inflationary model} The 4D effective description also gives more intuition of the nature of the inflationary model, and why the scalar evolution can be made slow. Notice that the action, eq.~\pref{4Deffaction}, shows that the scalar target space is flat in the Einstein frame. 
Consequently, the slow-roll parameters are controlled completely by the Einstein-frame potential, eq.~\pref{VEFmatching}. In particular, \ba \varepsilon_\varphi &:=& \left( \frac{1}{V_\EF} \; \frac{\partial V_\EF}{\partial \varphi_4} \right)^2 = \left( \frac{ - 2 + (D/C) e^{\zeta\chi - \varphi_4}}{ - 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right)^2 \nn\\ \varepsilon_\chi &:=& \frac{1}{\kappa_4^2 f^2} \left( \frac{1}{V_\EF} \; \frac{\partial V_\EF}{\partial \chi} \right)^2 = \frac{\zeta^2}{\kappa_4^2 f^2} \left( \frac{ (D/C) e^{\zeta\chi - \varphi_4} }{ - 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right)^2 \,. \ea This shows that there are two conditions required for $V_\EF$ to have sufficiently small first derivatives for slow-roll inflation. First, $\varepsilon_\chi \ll 1$ requires $\zeta^2 \ll \kappa_4^2 f^2$, in agreement with the 6D condition $\zeta \chi_1 \ll 1$ once eq.~\pref{eq:chi1soln} is used. Second, $\varepsilon_\varphi \ll 1$ is generically {\em not} true, but can be made to be true through a judicious choice of initial conditions for $\zeta \chi - \varphi_4$: $(D/C) \, e^{\zeta \chi - \varphi_4} = 2 + \cO(\zeta \chi_1)$, in agreement with eq.~\pref{chivsCD}. Notice that in this case $\varepsilon_\chi \simeq \cO[ \zeta \chi_1 ]$ while $\varepsilon_\varphi \simeq \cO[(\zeta \chi_1)^2] \ll \varepsilon_\chi$. 
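The closed forms quoted for $\varepsilon_\varphi$ and $\varepsilon_\chi$ are straightforward to confirm by direct differentiation of $V_\EF$. The following Python snippet is an illustrative check only (sample values arbitrary, not part of the analysis), written in terms of the combination $r := (D/C)\, e^{\zeta\chi - \varphi_4}$:

```python
import math

# Illustrative check of the slow-roll expressions: differentiate
# V = -C e^{2p} + D e^{z*x + p} (p = varphi_4, x = chi, z = zeta,
# k4f = kappa_4*f) numerically and compare with the quoted closed forms
# written via r = (D/C) e^{z*x - p}.
C, D, z, k4f = 1.0, 1.0, 0.3, 1.0
p, x = 0.2, 1.4

def V(p, x):
    return -C*math.exp(2*p) + D*math.exp(z*x + p)

h = 1e-6
dVdp = (V(p + h, x) - V(p - h, x))/(2*h)
dVdx = (V(p, x + h) - V(p, x - h))/(2*h)
r = (D/C)*math.exp(z*x - p)
err_p = abs((dVdp/V(p, x))**2 - ((-2 + r)/(-1 + r))**2)
err_x = abs((dVdx/V(p, x))**2/k4f**2 - (z/k4f)**2*(r/(-1 + r))**2)
print(err_p, err_x)
```

Both errors vanish up to finite-difference accuracy, and setting $r = 2$ makes the numerator of $\varepsilon_\varphi$ vanish, reproducing the tuning condition described above.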
Next, consider the second derivatives of $V_\EF$: \ba \eta_{\varphi\varphi} &:=& \left( \frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{\partial \varphi_4^2} \right) = \frac{ - 4 + (D/C) e^{\zeta\chi - \varphi_4}}{ - 1 + (D/C) e^{\zeta\chi - \varphi_4}} \simeq -2 + \cO(\zeta \chi_1) \nn\\ \eta_{\varphi\chi} &:=& \frac{1}{\kappa_4 f} \left( \frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{ \partial \varphi_4 \partial \chi} \right) = \frac{\zeta}{\kappa_4 f} \left( \frac{ (D/C) e^{\zeta\chi - \varphi_4} }{ - 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right) \simeq \frac{2\,\zeta}{\kappa_4 f} + \cO(\zeta \chi_1)\\ \eta_{\chi\chi} &:=& \frac{1}{\kappa_4^2 f^2} \left( \frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{ \partial \chi^2} \right) = \frac{\zeta^2}{\kappa_4^2 f^2} \left( \frac{ (D/C) e^{\zeta\chi - \varphi_4} }{ - 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right) \simeq \frac{2\,\zeta^2}{\kappa_4^2 f^2} + \cO(\zeta \chi_1) \,,\nn \ea where the last, approximate, equality in each line uses eq.~\pref{chivsCD}. \FIGURE[ht]{ \epsfig{file=potentialexample.eps,angle=0,width=0.45\hsize} \caption{Sample potential evaluated for $C=D=1$ and $\zeta=0.3$. The red line denotes the path taken by the scaling solutions. } \label{fig:potential} } Notice that $\eta_{\varphi\varphi}$ is not itself small, even when $\zeta \ll \kappa_4 f$ and eq.~\pref{chivsCD} is satisfied. However, in the field-space direction defined by $\vec n := \vec\varepsilon/|\vec \varepsilon|$ we have $n_\chi \simeq \cO(1)$ and $n_\varphi \simeq \cO(\zeta \chi_1)$ and so \be \eta_{ab} n^a n^b = \cO( \zeta \chi_1) = \cO \left( \frac{ \zeta^2}{ \kappa_4^2 f^2} \right) \ll 1 \,. \ee Because $\eta_{\varphi\varphi}$ is negative and not small, slow roll is achieved only by choosing initial conditions to lie sufficiently close to the top of a ridge, with initial velocities chosen to be roughly parallel to the ridge (see Fig.~\ref{fig:potential}). 
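The quoted limits of the $\eta$-parameters at the ridge can likewise be confirmed numerically. The following Python snippet is an illustrative check only (sample values arbitrary, not part of the analysis): it chooses $\chi$ so that $(D/C)\,e^{\zeta\chi - \varphi_4} = 2$ exactly, then evaluates the second derivatives of $V_\EF$ by finite differences.

```python
import math

# Illustrative check at the ridge: choosing chi so that
# r = (D/C) e^{z*chi - p} = 2 exactly, the limits eta_pp -> -2,
# eta_px -> 2 z/(k4 f) and eta_xx -> 2 z^2/(k4 f)^2 should hold.
C, D, z, k4f = 1.0, 1.0, 0.3, 1.0
p = 0.0
x = math.log(2.0*C/D)/z + p/z         # enforces r = 2

def V(p, x):
    return -C*math.exp(2*p) + D*math.exp(z*x + p)

h = 1e-4
Vpp = (V(p + h, x) - 2*V(p, x) + V(p - h, x))/h**2
Vxx = (V(p, x + h) - 2*V(p, x) + V(p, x - h))/h**2
Vpx = (V(p + h, x + h) - V(p + h, x - h)
       - V(p - h, x + h) + V(p - h, x - h))/(4*h*h)
eta_pp = Vpp/V(p, x)
eta_px = Vpx/(V(p, x)*k4f)
eta_xx = Vxx/(V(p, x)*k4f**2)
print(eta_pp, eta_px, eta_xx)
```

As expected, $\eta_{\varphi\varphi}$ comes out at $-2$ even though both $\varepsilon$-parameters vanish there, which is the order-one negative curvature along the ridge discussed above.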
For single-field 4D models such an adjustment is unstable against de Sitter fluctuations of the inflaton field, and although more difficult to compute in the higher-dimensional theory, the low-energy 4D potential suggests that similar considerations are likely also to be true here. \section{Conclusions} In a nutshell, the previous sections describe a family of --- previously known \cite{scaling solutions} --- exact, explicit, time-dependent solutions to the field equations of 6D supergravity in the presence of two space-filling, positive-tension source branes. The solutions describe both the cosmological evolution of the on-brane geometry and the change with time of the extra-dimensional geometry transverse to the branes. These solutions have explicitly compact extra dimensions, with all but one modulus stabilized using an explicit flux-stabilization mechanism. The time evolution describes the response of the one remaining would-be modulus of the bulk geometry to the back-reaction of the source branes. \subsection{Bugs and features} The new feature added in this paper is to identify a choice for the dynamics of a brane-localized scalar field whose evolution is consistent with the bulk evolution, and so can be interpreted as the underlying dynamics that gives rise to the bulk evolution. In order to find this choice for the brane physics we set up and solve the codimension-two matching problem for time-dependent brane geometries, extending earlier analyses \cite{Cod2Matching, BBvN, susybranes} of these matching conditions for systems with maximally symmetric on-brane geometries. We also find the 4D theory that describes this system in the limit of slow evolution, where a low-energy effective field theory should apply. The low-energy theory turns out to be a simple scalar-tensor system involving two scalar fields in 4 dimensions: one corresponding to the brane-localized mode and one corresponding to the would-be flat direction of the bulk geometry.
We verify that the 4D system has time-dependent solutions that reproduce those of the full 6D equations (as they must). In particular, we identify a region of parameter space that describes an inflationary regime, including a limit for which the on-brane geometry is de Sitter. (The de Sitter solution is not a new one \cite{6DdS}, and evades the various no-go theorems \cite{dSnogo} because the near-brane behavior of the bulk fields dictated by the brane-bulk matching does not satisfy a smoothness assumption --- `compactness' --- that these theorems make.) For parameters near the de Sitter limit, the evolution is accelerated and takes a power-law slow-roll form, $a(t) \propto t^p$ with $p > 1$. (The de Sitter solution is obtained in the limit $p \to \infty$.) From the point of view of the low-energy 4D theory, the de Sitter solution corresponds to sitting at the top of a ridge, and the scaling solutions describe motion near to and roughly parallel with this ridge. Experience with the 4D potential suggests that the initial conditions required to obtain inflation in this model are likely to require careful tuning. {}From the 4D perspective, the inflationary scenario resembles old models of extended inflation \cite{ExtInf}, for which accelerated power-law expansion is found to arise when Brans-Dicke theory is coupled to matter having an equation of state $w = -1$. Having a Brans-Dicke connection is perhaps not too surprising, despite earlier difficulties finding extended inflation within a higher-dimensional context. Part of what is new here relative to early work is the scale invariance of the bulk supergravity that is not present, for example, in non-supersymmetric 6D constructions \cite{ExtInfKK}. Another new feature is brane-localized matter, which was not present in early searches within string theory \cite{ExtInfStr}. 
Brans-Dicke-like theories arise fairly generically in the low-energy limit of the 6D supergravity of interest here because back-reaction tends to ensure that the bulk dilaton, $\varphi_4$, couples to brane-localized matter in this way \cite{BulkAxions, susybranes}. For cosmological applications it is interesting that the 4D limit of the higher-dimensional system is not {\em exactly} a Brans-Dicke theory coupled to matter. It differs by having a scalar potential (rather than a matter cosmological constant) that is calculable from the properties of the underlying branes. It also differs by being `quasi-Brans Dicke', in that the scalar-matter coupling tends to itself depend on the Brans-Dicke field, $\varphi_4$. Both of these features are potentially attractive for applications because successful cosmology usually requires the Brans-Dicke coupling to be relatively large during inflation compared with the largest values allowed by present-day solar-system constraints \cite{ExtInfProb}. Having both field-dependent couplings and a scalar potential can allow these properties to be reconciled, by having the potential drive the scalar at late times to a value for which the coupling is small. (See, for instance, \cite{6Dquint} for a sample cosmology which uses this mechanism in a related example.) A noteworthy feature of the inflationary geometries is that the extra dimensions are not static (although they become static in the strict de Sitter limit). Instead they expand with $r(t) \propto \sqrt t$, while the scale factor of the on-brane directions expands even faster, $a(t) \propto t^p$ with $p > 1$. As a result the Kaluza-Klein mass scale shrinks, as does the higher-dimensional gravity mass scale (measured in 4D Planck units), during the inflationary expansion. If embedded into a full inflationary picture, including the physics of the late-epoch Hot Big Bang, such an inflationary scenario can have several attractive properties.
First, the relative expansion rates of the various dimensions might ultimately explain why the four on-brane dimensions are much larger than the others. It might also explain why two internal dimensions might be bigger than any others, if it were embedded into a 10-dimensional geometry with the `other' 4 dimensions stabilized. A second attractive feature is the disconnect that this scenario offers between the gravity scale during inflation and the gravity scale in the present-day universe.\footnote{In this respect our model is similar in spirit to ref.~\cite{VolInf}.} Inflationary models such as these can allow the current gravity scale to be low (in the multi-TeV range in extreme cases), and yet remain consistent with the observational successes of generating primordial fluctuations at much higher scales. Inflationary models like this might also point to a way out of many of the usual cosmological problems faced by low gravity-scale models \cite{ADD, MSLED}, such as a potentially dangerous oversupply of primordial KK modes. \subsection{Outstanding issues} The model presented here represents only the first steps down the road towards a realistic inflationary model along these lines, however, with a number of issues remaining to be addressed. Perhaps the most important of these are related to stability and to ending inflation and the transition to the later Hot Big Bang cosmology. Besides identifying the Standard Model sector and how it becomes reheated, it is also a challenge to identify why the cosmic expansion ends and why the present-day universe remains four-dimensional and yet is so close to flat. What is intriguing from this point of view is the great promise that the same 6D supergravity used here also has for addressing some of these late-universe issues \cite{TNCC}, especially for the effective cosmological constant of the present-day epoch.
In particular, these 6D theories generically lead to scalar-tensor theories at very low energies,\footnote{Remarkably, the same mechanism that can make the vacuum energy naturally small in 6D supergravity also protects this scalar's mass to be very light \cite{6Dquint, TNCC, susybranes}.} and so predict a quintessence-like Dark Energy \cite{6Dquint}. Successfully grafting the inflationary scenario described here onto this late-time cosmology remains unfinished, yet might provide a natural theory of initial conditions for the quintessence field as arising as a consequence of an earlier inflationary period (see \cite{QInf} for some other approaches to this problem, and \cite{DERev} for a more comprehensive review). Other outstanding issues ask whether (and if so, how) the extra dimensions help with the problems of many 4D inflationary models: initial-condition problems, fine-tuning and naturalness issues, and so on. Since some of these questions involve `Planck slop' coming from the UV completion \cite{SIreviews}, a helpful step in this direction might be to identify a stringy provenance for the 6D gauged chiral supergravity studied here \cite{CP}. Another interesting direction asks about the existence and properties of cosmological solutions that explore the properties of the extra dimensions more vigorously than is done by the model considered here. That is, although our model here solves the full higher-dimensional field equations, it is only the volume modulus of the extra-dimensional geometry that evolves with time, with all of the other KK modes not changing. Although our calculation shows that this is consistent with the full equations of motion, even for Hubble scales larger than the KK scale, it is probably not representative of the general case when $H > m_\KK$. More generally one expects other KK modes to become excited by the evolution, allowing a richer and more complex evolution. There remains much to do. 
\section*{Acknowledgements} We thank Allan Bayntun and Fernando Quevedo for helpful discussions. Our research was supported in part by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through Industry Canada, and by the Province of Ontario through the Ministry of Research and Innovation (MRI).
\section{Introduction and main results} In this note we are interested in the existence versus non-existence of stable sub- and super-solutions of equations of the form \begin{equation} \label{eq1} -div( \omega_1(x) \nabla u ) = \omega_2(x) f(u) \qquad \mbox{in $ {\mathbb{R}}^N$,} \end{equation} where $f(u)$ is one of the following non-linearities: $e^u$, $ u^p$ where $ p>1$ and $ -u^{-p}$ where $ p>0$. We assume that $ \omega_1(x)$ and $ \omega_2(x)$, which we call \emph{weights}, are smooth positive functions (we allow $ \omega_2$ to be zero at say a point) and which satisfy various growth conditions at $ \infty$. Recall that we say that a solution $ u $ of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ is stable provided \[ \int f'(u) \psi^2 \le \int | \nabla \psi|^2, \qquad \forall \psi \in C_c^2,\] where $ C_c^2$ is the set of $ C^2$ functions defined on $ {\mathbb{R}}^N$ with compact support. Note that the stability of $u$ is just saying that the second variation at $u$ of the energy associated with the equation is non-negative. In our setting this becomes: We say a $C^2$ sub/super-solution $u$ of (\ref{eq1}) is \emph{stable} provided \begin{equation} \label{stable} \int \omega_2 f'(u) \psi^2 \le \int \omega_1 | \nabla \psi|^2 \qquad \forall \psi \in C_c^2. \end{equation} One should note that (\ref{eq1}) can be re-written as \begin{equation*} - \Delta u + \nabla \gamma(x) \cdot \nabla u ={ \omega_2}/{\omega_1}\ f(u) \qquad \text{ in $ \mathbb{R}^N$}, \end{equation*} where $\gamma = - \log( \omega_1)$ and on occasion we shall take this point of view. \begin{remark} \label{triv} Note that if $ \omega_1$ has enough integrability then it is immediate that if $u$ is a stable solution of (\ref{eq1}) we have $ \int \omega_2 f'(u) =0 $ (provided $f$ is increasing). 
To see this let $ 0 \le \psi \le 1$ be supported in a ball of radius $2R$ centered at the origin ($B_{2R}$) with $ \psi =1$ on $ B_R$ and such that $ | \nabla \psi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. Putting this $ \psi$ into $ (\ref{stable})$ one obtains \[ \int_{B_R} \omega_2 f'(u) \le \frac{C}{R^2} \int_{R < |x| <2R} \omega_1,\] and so if the right hand side goes to zero as $ R \rightarrow \infty$ we have the desired result. \end{remark} The existence versus non-existence of stable solutions of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ or $ -\Delta u = g(x) f(u)$ in $ {\mathbb{R}}^N$ is now quite well understood, see \cite{dancer1, farina1, egg, zz, f2, f3, wei, ces, e1, e2}. We remark that some of these results examine the case where $ \Delta $ is replaced with $ \Delta_p$ (the $p$-Laplacian) and also in many cases the authors are interested in finite Morse index solutions or solutions which are stable outside a compact set. Much of the interest in these Liouville type theorems stems from the fact that the non-existence of a stable solution is related to the existence of a priori estimates for stable solutions of a related equation on a bounded domain. In \cite{Ni} equations similar to $ -\Delta u = |x|^\alpha u^p$ were examined on the unit ball in $ {\mathbb{R}}^N$ with zero Dirichlet boundary conditions. There it was shown that for $ \alpha >0$ one can obtain positive solutions for $ p $ supercritical with respect to the Sobolev embedding, and so one can view the term $ |x|^\alpha$ as restoring some compactness. A similar feature happens for equations of the form \[ -\Delta u = |x|^\alpha f(u) \qquad \mbox{in $ {\mathbb{R}}^N$};\] the value of $ \alpha$ can vastly alter the existence versus non-existence of a stable solution, see \cite{e1, ces, e2, zz, egg}.
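The scaling in Remark \ref{triv} can be made concrete with a power weight. The following sympy sketch is our own illustration (the weight $ \omega_1 = |x|^\alpha$ and the sample values of $N$ and $ \alpha$ are assumed purely for demonstration): it checks that the right-hand side of the Remark scales like $R^{N+\alpha-2}$, and hence tends to zero precisely when $ N+\alpha-2<0$.

```python
import sympy as sp

r, R = sp.symbols('r R', positive=True)
N, alpha = 3, sp.Rational(-3, 2)  # sample values with N + alpha - 2 = -1/2 < 0

# Radial part of (C/R^2) * integral of omega_1 = |x|^alpha over R < |x| < 2R;
# the surface measure of the unit sphere is an R-independent constant and is dropped.
bound = sp.integrate(r**alpha * r**(N - 1), (r, R, 2*R))/R**2

ratio = sp.simplify(bound/R**(N + alpha - 2))  # an R-independent constant
decay = sp.limit(bound, R, sp.oo)              # 0, since N + alpha - 2 < 0
```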
We now come to our main results and for this we need to define a few quantities: \begin{eqnarray*} I_G&:=& R^{-4t-2} \int_{ R < |x|<2R} \frac{ \omega_1^{2t+1}}{\omega_2^{2t}}dx , \\ J_G&:=& R^{-2t-1} \int_{R < |x| <2R} \frac{| \nabla \omega_1|^{2t+1} }{\omega_2^{2t}} dx ,\\I_L&:=& R^\frac{-2(2t+p-1)}{p-1} \int_{R<|x|<2R }{ \left( \frac{\omega_1^{p+2t-1}}{\omega_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ J_L&:= &R^{-\frac{p+2t-1}{p-1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla \omega_1|^{p+2t-1}}{\omega_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ I_M &:=& R^{-2\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{\omega_1^{p+2t+1}}{\omega_2^{2t}} \right)^{\frac{1}{p+1} } } \ dx, \\ J_M &:= & R^{-\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla \omega_1|^{p+2t+1}}{\omega_2^{2t}} \right)^{\frac{1}{p+1} } } dx. \end{eqnarray*} The three equations we examine are \[ -div( \omega_1 \nabla u ) = \omega_2 e^u \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (G), \] \[ -div( \omega_1 \nabla u ) = \omega_2 u^p \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (L), \] \[ -div( \omega_1 \nabla u ) = - \omega_2 u^{-p} \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (M),\] and where we restrict $(L)$ to the case $ p>1$ and $(M)$ to $ p>0$. By solution we always mean a $C^2$ solution. We now come to our main results in terms of abstract $ \omega_1 $ and $ \omega_2$. We remark that our approach to non-existence of stable solutions is the approach due to Farina, see \cite{f2,f3,farina1}. \begin{thm} \label{main_non_exist} \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution (super-solution) of $(L)$ if $ I_L,J_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $p- \sqrt{p(p-1)} < t<p+\sqrt{p(p-1)} $ ($0<t<\frac{1}{2}$). \item There is no positive stable super-solution of (M) if $ I_M,J_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$.
\end{enumerate} \end{thm} If we assume that $ \omega_1$ has some monotonicity we can do better. We will assume that the monotonicity condition is satisfied for big $x$, but really all one needs is for it to be satisfied on a suitable sequence of annuli. \begin{thm} \label{mono} \begin{enumerate} \item There is no stable sub-solution of $(G)$ with $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$ if $ I_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for either: \begin{itemize} \item some $ 1 \le t < p + \sqrt{p(p-1)}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$, or \\ \item some $ p - \sqrt{p(p-1)} < t \le 1$ and $ \nabla \omega_1(x) \cdot x \ge 0$ for big $ x$. \end{itemize} There is no positive super-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $ 0 < t < \frac{1}{2}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$. \item There is no positive stable super-solution of $(M)$ provided $ I_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$. \end{enumerate} \end{thm} \begin{cor} \label{thing} Suppose $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 9$. \item There is no positive stable sub-solution of $(L)$ if $$N<2+\frac{4}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive stable super-solution of $(M)$ if $$N<2+\frac{4}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} \end{cor} If one takes $ \omega_1=\omega_2=1$ in the above corollary, the results obtained for $(G)$ and $(L)$, and for some values of $p$ in $(M)$, are optimal, see \cite{f2,f3,zz}. We now drop all monotonicity conditions on $ \omega_1$.
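As a symbolic sanity check on the dimension bounds in Corollary \ref{thing} (this check is our addition, with sympy assumed available): the decay of $I_G$, $I_L$, $I_M$ under the assumptions of the corollary requires $N$ to be less than the exponent of $R$ appearing in each definition, and taking the supremum over the admissible range of $t$ reproduces the stated bounds.

```python
import sympy as sp

p, t = sp.symbols('p t', positive=True)

# (G): I_G carries R^(-4t-2), so decay needs N < 4t + 2 with t < 2:
sup_G = (4*t + 2).subs(t, 2)  # supremum 10, i.e. N <= 9

# (L): exponent 2(2t+p-1)/(p-1) with t < p + sqrt(p(p-1)):
sup_L = (2*(2*t + p - 1)/(p - 1)).subs(t, p + sp.sqrt(p*(p - 1)))
bound_L = 2 + 4*(p + sp.sqrt(p*(p - 1)))/(p - 1)

# (M): exponent 2(p+2t+1)/(p+1) with t < p + sqrt(p(p+1)):
sup_M = (2*(p + 2*t + 1)/(p + 1)).subs(t, p + sp.sqrt(p*(p + 1)))
bound_M = 2 + 4*(p + sp.sqrt(p*(p + 1)))/(p + 1)
```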
\begin{cor} \label{po} Suppose $ \omega_1 \le C \omega_2$ for big $x$, $ \omega_2 \in L^\infty$, $ | \nabla \omega_1| \le C \omega_2$ for big $x$. \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 4$. \item There is no positive stable sub-solution of $(L)$ if $$N<1+\frac{2}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive super-solution of $(M)$ if $$N<1+\frac{2}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} \end{cor} Some of the conditions on $ \omega_i$ in Corollary \ref{po} seem somewhat artificial. If we shift over to the advection equation (and we take $ \omega_1=\omega_2$ for simplicity) \[ -\Delta u + \nabla \gamma \cdot \nabla u = f(u), \] the conditions on $ \gamma$ become: $ \gamma$ is bounded from below and has a bounded gradient. In what follows we examine the case where $ \omega_1(x) = (|x|^2 +1)^\frac{\alpha}{2}$ and $ \omega_2(x)= g(x) (|x|^2 +1)^\frac{\beta}{2}$, where $ g(x) $ is positive except at say a point, smooth and where $ \lim_{|x| \rightarrow \infty} g(x) = C \in (0,\infty)$. For this class of weights we can essentially obtain optimal results. \begin{thm} \label{alpha_beta} Take $ \omega_1 $ and $ \omega_2$ as above. \begin{enumerate} \item If $ N+ \alpha - 2 <0$ then there is no stable sub-solution for $(G)$, $(L)$ (here we require it to be positive) and in the case of $(M)$ there is no positive stable super-solution. This case is the trivial case, see Remark \ref{triv}. \\ \textbf{Assumption:} For the remaining cases we assume that $ N + \alpha -2 > 0$. \item If $N+\alpha-2<4(\beta-\alpha+2)$ then there is no stable sub-solution for $ (G)$. \item If $N+\alpha-2<\frac{ 2(\beta-\alpha+2) }{p-1} \left( p+\sqrt{p(p-1)} \right)$ then there is no positive stable sub-solution of $(L)$. \item If $N+\alpha-2<\frac{2(\beta-\alpha+2) }{p+1} \left( p+\sqrt{p(p+1)} \right)$ then there is no positive stable super-solution of $(M)$. 
\item Furthermore, 2, 3 and 4 are optimal in the sense that if $ N + \alpha -2 > 0$ and the remaining inequality is not satisfied (and in addition we assume we do not have equality in the inequality) then we can find a suitable function $ g(x)$ which satisfies the above properties and a stable sub/super-solution $u$ for the appropriate equation. \end{enumerate} \end{thm} \begin{remark} Many of the above results can be extended to the case of equality either in $ N + \alpha - 2 \ge 0$ or in the other inequality, which depends on the equation we are examining. We omit the details because one cannot prove the results in a unified way. \end{remark} In showing that an explicit solution is stable we will need the weighted Hardy inequality given in \cite{craig}. \begin{lemma} \label{Har} Suppose $ E>0$ is a smooth function. Then one has \[ (\tau-\frac{1}{2})^2 \int E^{2\tau-2} | \nabla E|^2 \phi^2 + (\frac{1}{2}-\tau) \int (-\Delta E) E^{2\tau-1} \phi^2 \le \int E^{2\tau} | \nabla \phi|^2,\] for all $ \phi \in C_c^\infty({\mathbb{R}}^N)$ and $ \tau \in {\mathbb{R}}$. \end{lemma} By picking an appropriate function $E$ this gives, \begin{cor} \label{Hardy} For all $ \phi \in C_c^\infty$ and $ t , \alpha \in {\mathbb{R}}$ we have \begin{eqnarray*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2 &\ge& (t+\frac{\alpha}{2})^2 \int |x|^2 (1+|x|^2)^{-2+\frac{\alpha}{2}}\phi^2\\ &&+(t+\frac{\alpha}{2})\int (N-2(t+1) \frac{|x|^2}{1+|x|^2}) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2. \end{eqnarray*} \end{cor} \section{Proof of main results} \textbf{ Proof of Theorem \ref{main_non_exist}.} (1). Suppose $ u$ is a stable sub-solution of $(G)$ with $ I_G,J_G \rightarrow 0$ as $ R \rightarrow \infty$ and let $ 0 \le \phi \le 1$ denote a smooth compactly supported function.
Put $ \psi:= e^{tu} \phi$ into (\ref{stable}), where $ 0 <t<2$, to arrive at \begin{eqnarray*} \int \omega_2 e^{(2t+1)u} \phi^2 &\le & t^2 \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \\ && +\int \omega_1 e^{2tu}|\nabla \phi|^2 + 2 t \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi. \end{eqnarray*} Now multiply $(G)$ by $ e^{2tu} \phi^2$ and integrate by parts to arrive at \[ 2t \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \le \int \omega_2 e^{(2t+1) u} \phi^2 - 2 \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi,\] and now if one equates like terms one arrives at \begin{eqnarray} \label{start} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^2 & \le & \int \omega_1 e^{2tu} \left( | \nabla \phi |^2 - \frac{ \Delta \phi}{2} \right) dx \nonumber \\ && - \frac{1}{2} \int e^{2tu} \phi \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} Now substitute $ \phi^m$ into this inequality for $ \phi$, where $ m $ is a large integer, to obtain \begin{eqnarray} \label{start_1} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} & \le & C_m \int \omega_1 e^{2tu} \phi^{2m-2} \left( | \nabla \phi |^2 + \phi |\Delta \phi| \right) dx \nonumber \\ && - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi \end{eqnarray} where $ C_m$ and $ D_m$ are positive constants depending only on $m$. We now estimate the terms on the right, but we mention that when one assumes the appropriate monotonicity on $ \omega_1$ it is the last integral on the right which one is able to drop. \begin{eqnarray*} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 & = & \int \omega_2^\frac{2t}{2t+1} e^{2tu} \phi^{2m-2} \frac{ \omega_1 }{\omega_2^\frac{2t}{2t+1}} | \nabla \phi|^2 \\ & \le & \left( \int \omega_2 e^{(2t+1) u} \phi^{(2m-2) \frac{(2t+1)}{2t}} dx \right)^\frac{2t}{2t+1}\\ &&\left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}.
\end{eqnarray*} Now, for fixed $ 0 <t<2$ we can take $ m $ big enough so $ (2m-2) \frac{(2t+1)}{2t} \ge 2m $ and since $ 0 \le \phi \le 1$ this allows us to replace the power on $ \phi$ in the first term on the right with $2m$ and hence we obtain \begin{equation} \label{three} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 \le \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} \left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}. \end{equation} We now take the test functions $ \phi$ to be such that $ 0 \le \phi \le 1$ with $ \phi $ supported in the ball $ B_{2R}$ with $ \phi = 1 $ on $ B_R$ and $ | \nabla \phi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. With this choice of $ \phi$ we obtain \begin{equation} \label{four} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi |^2 \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}. \end{equation} One similarly shows that \[ \int \omega_1 e^{2tu} \phi^{2m-1} | \Delta \phi| \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}.\] So, combining the results we obtain \begin{eqnarray} \label{comb} \nonumber \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} &\le& C_m \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}\\ &&- D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} We now estimate this last term. A similar argument using H\"{o}lder's inequality shows that \[ \int e^{2tu} \phi^{2m-1} | \nabla \omega_1| | \nabla \phi| \le \left( \int \omega_2 \phi^{2m} e^{(2t+1) u} dx \right)^\frac{2t}{2t+1} J_G^\frac{1}{2t+1}.
\] Combining the results gives that \begin{equation} \label{last} (2-t) \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{1}{2t+1} \le I_G^\frac{1}{2t+1} + J_G^\frac{1}{2t+1}, \end{equation} and now we send $ R \rightarrow \infty$ and use the fact that $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ to see that \[ \int \omega_2 e^{(2t+1) u} =0, \] which is clearly a contradiction. Hence there is no stable sub-solution of $(G)$. (2). Suppose that $u >0$ is a stable sub-solution (super-solution) of $(L)$. Then a similar calculation as in (1) shows that for $ p - \sqrt{p(p-1)} <t < p + \sqrt{p(p-1)}$, $( 0 <t<\frac{1}{2})$ one has \begin{eqnarray} \label{shit} (p -\frac{t^2}{2t-1} )\int \omega_2 u^{2t+p-1} \phi^{2m} & \le & D_m \int \omega_1 u^{2t} \phi^{2(m-1)} (|\nabla\phi|^2 +\phi |\Delta \phi |) \nonumber \\ && +C_m \frac{(1-t)}{2(2t-1)} \int u^{2t} \phi^{2m-1}\nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} One now applies H\"{o}lder's argument as in (1) but the terms $ I_L$ and $J_L$ will appear on the right hand side of the resulting equation. This shift from a sub-solution to a super-solution depending on whether $ t >\frac{1}{2}$ or $ t < \frac{1}{2}$ results from the sign change of $ 2t-1$ at $ t = \frac{1}{2}$. We leave the details for the reader. (3). This case is also similar to (1) and (2). \hfill $ \Box$ \textbf{Proof of Theorem \ref{mono}.} (1). Again we suppose there is a stable sub-solution $u$ of $(G)$. Our starting point is (\ref{start_1}) and we wish to be able to drop the term \[ - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi, \] from (\ref{start_1}). We can choose $ \phi$ as in the proof of Theorem \ref{main_non_exist} but also such that $ \nabla \phi(x) = - C(x) x$ where $ C(x) \ge 0$. So if we assume that $ \nabla \omega_1 \cdot x \le 0$ for big $x$ then we see that this last term is non-positive and hence we can drop the term.
The proof is then as before, but now we only require that $ \lim_{R \rightarrow \infty} I_G=0$. (2). Suppose that $ u >0$ is a stable sub-solution of $(L)$ and so (\ref{shit}) holds for all $ p - \sqrt{p(p-1)} <t< p + \sqrt{p(p-1)}$. Now we wish to use monotonicity to drop the term from (\ref{shit}) involving the term $ \nabla \omega_1 \cdot \nabla \phi$. Here $ \phi$ is chosen the same as in (1), but one notes that the coefficient of this term changes sign at $ t=1$, and hence by restricting $t$ to the appropriate side of $1$ (along with the above condition on $t$ and $\omega_1$) we can drop the last term depending on which monotonicity we have; hence to obtain a contradiction we only require that $ \lim_{R \rightarrow \infty} I_L =0$. The result for the non-existence of a stable super-solution is similar, but here one restricts $ 0 < t < \frac{1}{2}$. (3). The proof here is similar to (1) and (2) and we omit the details. \hfill $\Box$ \textbf{Proof of Corollary \ref{thing}.} We suppose that $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. \\ (1). Since $ \nabla \omega_1 \cdot x \le 0$ for big $x$ we can apply Theorem \ref{mono} to show the non-existence of a stable solution to $(G)$. Note that with the above assumptions on $ \omega_i$ we have that \[ I_G \le \frac{C R^N}{R^{4t+2}}.\] For $ N \le 9$ we can take $ 0 <t<2$ but close enough to $2$ so the right hand side goes to zero as $ R \rightarrow \infty$. Both (2) and (3) also follow directly from applying Theorem \ref{mono}. Note that one can say more about (2) by taking the multiple cases as listed in Theorem \ref{mono}, but we choose to leave this to the reader. \hfill $ \Box$ \textbf{Proof of Corollary \ref{po}.} Since we have no monotonicity conditions now, we will need both $I$ and $J$ to go to zero to show the non-existence of a stable solution.
Again the results are obtained immediately by applying Theorem \ref{main_non_exist} and we prefer to omit the details. \hfill $\Box$ \textbf{Proof of Theorem \ref{alpha_beta}.} (1). If $ N + \alpha -2 <0$ then using Remark \ref{triv} one easily sees there is no stable sub-solution of $(G)$ and $(L)$ (positive for $(L)$) or a positive stable super-solution of $(M)$. So we now assume that $ N + \alpha -2 > 0$. Note that the monotonicity of $ \omega_1$ changes when $ \alpha $ changes sign and hence one would think that we need to consider separate cases if we hope to utilize the monotonicity results. But a computation shows that in fact $ I$ and $J$ are just multiples of each other in all three cases so it suffices to show, say, that $ \lim_{R \rightarrow \infty} I =0$. \\ (2). Note that for $ R >1$ one has \begin{eqnarray*} I_G & \le & \frac{C}{R^{4t+2}} \int_{R <|x| < 2R} |x|^{ \alpha (2t+1) - 2t \beta} \\ & \le & \frac{C}{R^{4t+2}} R^{N + \alpha (2t+1) - 2t \beta}, \end{eqnarray*} and so to show the non-existence we want to find some $ 0 <t<2$ such that $ 4t+2 > N + \alpha(2t+1) - 2 t \beta$, which is equivalent to $ 2t ( \beta - \alpha +2) > (N + \alpha -2)$. Now recall that we are assuming that $ 0 < N + \alpha -2 < 4 ( \beta - \alpha +2) $ and hence we have the desired result by taking $ t <2$ sufficiently close to $2$. The proofs of the non-existence results for (3) and (4) are similar and we omit the details. \\ (5). We now assume that $N+\alpha-2>0$. In showing the existence of stable sub/super-solutions we need to consider $ \beta - \alpha + 2 <0$ and $ \beta - \alpha +2 >0$ separately. \begin{itemize} \item $(\beta - \alpha + 2 <0)$ Here we take $ u(x)=0$ in the case of $(G)$ and $ u=1$ in the case of $(L)$ and $(M)$. In addition we take $ g(x)=\E$. It is clear that in all cases $u$ is the appropriate sub or super-solution. The only thing one needs to check is the stability.
In all cases this reduces to trying to show that we have \[ \sigma \int (1+|x|^2)^{\frac{\alpha}{2} -1} \phi^2 \le \int (1+|x|^2)^{\frac{\alpha}{2}} | \nabla\phi |^2,\] for all $ \phi \in C_c^\infty$, where $ \sigma $ is some small positive constant; it is either $ \E$ or $ p \E$ depending on which equation we are examining. To show this we use the result from Corollary \ref{Hardy} and we drop a few positive terms to arrive at \begin{equation*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2\ge (t+\frac{\alpha}{2})\int \left (N-2(t+1) \frac{|x|^2}{1+|x|^2}\right) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2, \end{equation*} which holds for all $ \phi \in C_c^\infty$ and $ t,\alpha \in {\mathbb{R}}$. Now, since $N+\alpha-2>0$, we can choose $t$ such that $-\frac{\alpha}{2}<t<\frac{N-2}{2}$. So, the integrand on the right-hand side is positive, and since for small enough $\sigma$ we have \begin{equation*} \sigma \le (t+\frac{\alpha}{2})(N-2(t+1) \frac{|x|^2}{1+|x|^2}) \ \ \ \text {for all} \ \ x\in \mathbb{R}^N, \end{equation*} we get stability. \item ($\beta-\alpha+2>0$) In the case of $(G)$ we take $u(x)=-\frac{\beta-\alpha+2}{2} \ln(1+|x|^2)$ and $g(x):= (\beta-\alpha+2)(N+(\alpha-2)\frac{|x|^2}{1+|x|^2})$. By a computation one sees that $u$ is a sub-solution of $(G)$ and hence we now need only show the stability, which amounts to showing that \begin{equation*} \int \frac{g(x)\psi^2}{(1+|x|^{2 })^{-\frac{\alpha}{2}+1}}\le \int\frac{|\nabla\psi|^2}{ (1+|x|^2)^{-\frac{\alpha}{2}} }, \end{equation*} for all $ \psi \in C_c^\infty$. To show this we use Corollary \ref{Hardy}. So we need to choose an appropriate $t$ in $-\frac{\alpha}{2}\le t\le\frac{N-2}{2}$ such that for all $x\in {\mathbb{R}}^N$ we have \begin{eqnarray*} (\beta-\alpha+2)\left( N+ (\alpha-2)\frac{|x|^2}{1+|x|^2}\right) &\le& (t+\frac{\alpha}{2})^2 \frac{ |x|^2 }{1+|x|^2}\\ &&+(t+\frac{\alpha}{2}) \left(N-2(t+1) \frac{|x|^2}{1+|x|^2}\right).
\end{eqnarray*} A simple calculation shows that we just need to have \begin{eqnarray*} (\beta-\alpha+2)&\le& (t+\frac{\alpha}{2}) \\ (\beta-\alpha+2) \left( N+ \alpha-2\right) & \le& (t+\frac{\alpha}{2}) \left(N-t-2+\frac{\alpha}{2}\right). \end{eqnarray*} If one takes $ t= \frac{N-2}{2}$ in the case where $ N \neq 2$ and $ t $ close to zero in the case $ N=2$, one easily sees the above inequalities both hold, after considering all the constraints on $ \alpha,\beta$ and $N$. We now consider the case of $(L)$. Here one takes $g(x):=\frac {\beta-\alpha+2}{p-1}( N+ (\alpha-2-\frac{\beta-\alpha+2}{p-1}) \frac{|x|^2}{1+|x|^2})$ and $ u(x)=(1+|x|^2)^{ -\frac {\beta-\alpha+2}{2(p-1)} }$. Using essentially the same approach as in $(G)$ one shows that $u$ is a stable sub-solution of $(L)$ with this choice of $g$. \\ For the case of $(M)$ we take $u(x)=(1+|x|^2)^{ \frac {\beta-\alpha+2}{2(p+1)} }$ and $g(x):=\frac {\beta-\alpha+2}{p+1}( N+ (\alpha-2+\frac{\beta-\alpha+2}{p+1}) \frac{|x|^2}{1+|x|^2})$. \end{itemize} \hfill $ \Box$
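The explicit pair $(u,g)$ claimed for $(G)$ in the case $ \beta - \alpha + 2 > 0$ can be verified symbolically; the sketch below (our addition, with sympy assumed available) checks in radial coordinates that $-div(\omega_1 \nabla u) = \omega_2 e^u$ holds exactly, so $u$ is in particular a sub-solution.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
N, alpha, beta = sp.symbols('N alpha beta', real=True)

k = (beta - alpha + 2)/2
w1 = (1 + r**2)**(alpha/2)                 # omega_1
u = -k*sp.log(1 + r**2)                    # so e^u = (1 + r^2)^(-k)
g = 2*k*(N + (alpha - 2)*r**2/(1 + r**2))  # = (beta-alpha+2)(N + (alpha-2)|x|^2/(1+|x|^2))
w2 = g*(1 + r**2)**(beta/2)                # omega_2

# Radial form of -div(omega_1 grad u) in N dimensions:
lhs = -sp.diff(r**(N - 1)*w1*sp.diff(u, r), r)/r**(N - 1)
rhs = w2*(1 + r**2)**(-k)                  # omega_2 * e^u

residual = sp.simplify(lhs - rhs)          # should vanish identically
```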
\section{Introduction} Chlorine is a minor constituent of the Universe, with a solar abundance of only $\sim 3 \times10^{-7}$ relative to hydrogen \citep{Asplund2009}, which is orders of magnitude smaller than the elemental abundances of carbon, nitrogen, and oxygen. Nevertheless, chlorine has a strong chemical tendency to form hydrides (e.g., \H2Cl+, HCl) whose abundances can be as high as those of CH and H$_2$O, for example, in diffuse molecular clouds \citep{Neufeld2010, Sonnentrucker2010, Lis2010}. This is because of its unique thermochemical properties: first, chlorine has an ionization potential of 12.97 eV, which is lower than that of hydrogen, such that it is predominantly in its ionized stage, Cl$^+$, in the atomic phase of the interstellar medium (ISM), and second, Cl$^+$ can react exothermically with H$_2$, triggering an active chemistry. In fact, the interstellar chemical network of chlorine is thought to be relatively simple, dominated by a handful of hydrides \citep{Neufeld2009}, though the vast majority of interstellar chlorine will be in its neutral form in the presence of even small amounts of H$_2$. The reaction of Cl$^+$ with H$_2$ readily forms HCl$^+$, which has been detected in absorption in Galactic diffuse clouds \citep{DeLuca2012} harboring a significant amount (3--5\%) of the gas-phase chlorine. In turn, HCl$^+$ can further react exothermically with H$_2$ to form chloronium (\H2Cl+; first observed by \citealp{Lis2010}). Chloronium itself can react with free electrons to either form hydrogen chloride \citep[HCl; historically the first chlorine-bearing molecule detected in the ISM;][]{Blake1985} or release neutral chlorine. There are several pathways to destroy HCl through photodissociation, photoionization, or reactions with He$^+$, H$_3^+$, and C$^+$.
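The reaction chain just described (Cl$^+ \rightarrow$ HCl$^+ \rightarrow$ H$_2$Cl$^+ \rightarrow$ HCl or Cl) can be caricatured as a small kinetic system. The toy integration below is purely illustrative and is our addition: every rate coefficient, density, branching fraction, and source term is an invented placeholder rather than a measured value, and the point is only that such a chain relaxes to a steady state set by the slowest destruction channels.

```python
# Toy kinetic sketch of the chlorine chain described in the text.
# ALL numbers below are placeholders for illustration, not measured rates.
k1 = 1.0e-9   # cm^3 s^-1, assumed: Cl+ + H2 -> HCl+
k2 = 1.0e-9   # cm^3 s^-1, assumed: HCl+ + H2 -> H2Cl+
ke = 1.0e-7   # cm^3 s^-1, assumed: H2Cl+ + e- recombination
f = 0.1       # assumed branching of H2Cl+ + e- into HCl (the rest releases Cl)
d = 1.0e-9    # s^-1, assumed total HCl destruction rate (photons + ions)
n_H2, n_e = 1.0e3, 1.0e-1   # cm^-3, assumed fixed reservoir densities
S = 1.0e-18   # cm^-3 s^-1, assumed constant Cl+ source

n = {'Cl+': 0.0, 'HCl+': 0.0, 'H2Cl+': 0.0, 'HCl': 0.0}
dt = 1.0e5    # s; small compared with the fastest timescale 1/(k1 n_H2)
for _ in range(200_000):  # integrate to ~20 HCl lifetimes
    r1 = k1*n_H2*n['Cl+']
    r2 = k2*n_H2*n['HCl+']
    r3 = ke*n_e*n['H2Cl+']
    n['Cl+'] += dt*(S - r1)
    n['HCl+'] += dt*(r1 - r2)
    n['H2Cl+'] += dt*(r2 - r3)
    n['HCl'] += dt*(f*r3 - d*n['HCl'])

# Analytic steady states of the chain, for comparison:
ss = {'Cl+': S/(k1*n_H2), 'HCl+': S/(k2*n_H2),
      'H2Cl+': S/(ke*n_e), 'HCl': f*S/d}
```

The integrated abundances converge to the analytic values in `ss`; in particular, a fraction $f$ of the chloronium recombinations ends up as HCl.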
Methyl chloride, CH$_3$Cl, was also recently identified in the young stellar object IRAS $16293-2422$ and the gaseous coma of comet 67P/Churyumov-Gerasimenko \citep{Fayolle2017}, suggesting that chlorine chemistry in space also extends to more complex species (see, e.g., \citealp{Acharyya2017} for gas-grain chemical models of chlorine chemistry). From the nucleosynthetic point of view, elemental chlorine is produced on both short timescales in core-collapse supernovae, and long timescales in Asymptotic Giant Branch (AGB) stars and Type Ia supernovae \citep[p.164,][]{Clayton2003}. Chlorine is produced during oxygen burning, from fast reactions with the more abundant alpha elements, in stars massive enough to ignite it. In core-collapse supernovae excess neutrons cause a great increase in chlorine abundance, and hence they produce the majority of chlorine in the universe through explosive nucleosynthesis. Type Ia supernovae produce smaller amounts of chlorine through explosive oxygen burning. Chlorine can be found in two stable isotopes, \hbox{$^{35}{\rm Cl}$}\ and \hbox{$^{37}{\rm Cl}$}, with different sources producing different relative abundances. Low metallicity core-collapse supernovae, like the ones that likely enriched the pre-Solar nebula, produce a \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ ratio of $\sim$3 as measured in the Solar system \citep{Asplund2009}. Higher metallicity core-collapse supernovae can produce lower ratios \citep{Kobayashi2006,Kobayashi2011}. The isotopic ratios produced in AGB stars also depend heavily on metallicity, with low metallicities producing lower \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ values \citep{Cristallo2011}. Molecular absorbers at intermediate redshift ($z\sim1$) are powerful probes of the physico-chemical properties of distant galaxies. 
They offer a cosmological perspective on the chemical evolution of the Universe, for example with measurements of isotopic ratios at look-back times on the order of half or more of the present age of the Universe, but unfortunately, only a handful of such absorbers have been identified so far \citep[e.g.,][]{Combes2008,Wiklind2018}. In this paper, we collect observations of chlorine-bearing species in the two best studied and most molecule-rich redshifted radio molecular absorbers, with the goal of investigating their chemical properties and measuring the $^{35}$Cl/$^{37}$Cl isotopic ratio in different absorbing sightlines. The molecular absorber located in front of the quasar \hbox{PKS\,1830$-$211}\ is a face-on spiral galaxy at a redshift z=0.88582 (hereafter labeled MA0.89). The intervening galaxy lenses the quasar into two bright and compact images, leading to two independent sightlines in the southwest and northeast where absorption is detected (hereafter MA0.89~SW and MA0.89~NE), with impact parameters of $\sim$2 kpc and 4 kpc, respectively, in the absorber. These two compact, lensed images are embedded in a faint structure, reminiscent of an Einstein ring, seen at low radio frequencies \citep{Jauncey1991}. A wealth of molecules and their rare isotopologs have been detected in this absorber, especially in MA0.89 SW, which is characterized by a large H$_2$ column density ($\sim 2 \times 10^{22}$~cm$^{-2}$) and moderate gas density $\sim 10^3$~cm$^{-3}$ \citep[e.g.][]{Wiklind1996, Muller2011, Muller2013, Muller2014}. MA0.89 NE is characterized by a lower H$_2$ column density ($\sim 1 \times 10^{21}$~cm$^{-2}$) and more diffuse gas composition as shown by the enhancement of the relative abundances of hydrides such as OH$^+$, H$_2$O$^+$, and H$_2$Cl$^+$ \citep[e.g.,][]{Muller2016b}. The second molecular absorber (hereafter labeled MA0.68), located in front of the quasar \hbox{B\,0218$+$357}, shares similar properties with MA0.89. 
It is a nearly face-on spiral galaxy at an intermediate redshift z=0.68466, also lensing the background quasar into two main compact images and an Einstein ring, seen at low radio frequencies and centered on the faintest image. Molecular absorption has only been detected in one line of sight (toward the brightest image), at an impact parameter of $\sim$2 kpc from the absorber's center \citep{Wiklind1995, Muller2007}. A number of molecules and their isotopologs have also been observed in MA0.68 \citep[e.g.][]{Wallstrom2016}, which has an H$_2$ column density ($\sim 8 \times 10^{21}$~cm$^{-2}$) that is intermediate between MA0.89 SW and MA0.89 NE. With these two absorbers, we are thus exploring three independent lines of sight, with different redshifts, galactocentric distances, H$_2$ column densities, visual extinctions, metallicities, local radiation fields, cosmic-ray ionization rates, etc. Our observations and results are presented in Sections \ref{sec:obs} and \ref{sec:results}. We discuss our findings in terms of chemical properties derived from chlorine-bearing species and $^{35}$Cl/$^{37}$Cl isotopic ratios in Section~\ref{sec:discussion}. \section{Observations} \label{sec:obs} Observations of the two absorbers were obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) between 2014 and 2017. A summary of these observations is given in Table~\ref{tab:obs}. Details of the observational setups and data analysis are described below for each absorber separately. \begin{table*}[ht] \caption{Summary of ALMA observations.} \label{tab:obs} \begin{center} \begin{tabular}{ccccccccc} \hline \hline Target & Species & Rest freq. $^{(a)}$ & Sky freq. $^{(b)}$ & $\delta v$ $^{(c)}$ & Dates of observations & Project ID \\ & & (GHz) & (GHz) & (\hbox{km\,s$^{-1}$})& & \\ \hline \hbox{PKS\,1830$-$211} & ortho-H$_2^{35}$Cl$^+$ & 189.225 & 100.341 & 1.5 & 2014, Jul. 21 and Aug. 
26 & 2013.1.00020.S \\ & ortho-H$_2^{37}$Cl$^+$ & 188.428 & 99.919 && observed simultaneously with previous & \\ & para-H$_2^{35}$Cl$^+$ & 485.418 & 257.404 & 1.1& 2014, Jun. 06 and Jul. 29 & 2013.1.00020.S \\ & para-H$_2^{37}$Cl$^+$ & 484.232 & 256.775 && observed simultaneously with previous & \\ & H$^{35}$Cl & 625.919 & 331.908 & 0.9 & 2014, May 3, 6 & 2012.1.00056.S \\ & & & && 2014 Jun 30 & 2013.1.00020.S \\ & H$^{37}$Cl & 624.978 & 331.409 && observed simultaneously with previous & \\ \hline \hbox{B\,0218$+$357} & ortho-H$_2^{35}$Cl$^+$ & 189.225 & 112.322 & 2.6 & 2016 Oct 22, 2017 May 02 & 2016.1.00031.S \\ & ortho-H$_2^{37}$Cl$^+$ & 188.428 & 111.849 && observed simultaneously with previous & \\ \hline \end{tabular} \tablefoot{ $a)$ Rest frequencies of the strongest hyperfine component, taken from the Cologne Database for Molecular Spectroscopy \citep[CDMS; ][]{Muller2001}. $b)$ Sky frequencies are given taking redshifts $z_{abs}=0.88582$ and 0.68466 for MA0.89 and MA0.68, respectively, in the heliocentric frame. $c)$ Velocity resolution. } \end{center} \end{table*} \paragraph{MA0.68:} Observations of \H2Cl+ were obtained on 2016 October 22 and 2017 May 02. During the first run, the precipitable water vapor content was $\sim 2.4$~mm. The array was in a configuration where the longest baseline was 1.7~km. The bandpass calibrator was J\,0237+2848 ($\sim 2.2$~Jy at 100~GHz) and the gain calibrator was J\,0220+3241 ($\sim 0.2$~Jy at 100~GHz). During the second run, the precipitable water vapor content was $< 1$~mm. The array's longest baseline was 1.1~km. The bandpass calibrator was J0510+1800 ($\sim 2.6$~Jy at 100~GHz) and the gain calibrator J\,0205+3212 ($\sim 0.9$~Jy at 100~GHz). The data calibration was done with the CASA package following standard procedures. We further improved the data quality with a step of self-calibration of the phase, using a model of two point-sources for MA0.68. 
When combined, the two datasets result in a synthesized beam of $\sim 1.0 \arcsec \times 0.6 \arcsec$ (P.A.$=-176^\circ$) at 100~GHz, larger than the separation of $0.3\arcsec$ between the two lensed images of the quasar. However, we extracted the spectra along each line of sight by directly fitting each visibility set separately using the CASA-Python task UVMULTIFIT \citep{Marti-Vidal2014}, with a model of two point-sources. This is possible in the Fourier plane, thanks to the high signal-to-noise ratio of the data and the known and simple geometry of the source. The final spectrum was the weighted average of the individual spectra. \paragraph{MA0.89:} We complemented previous observations of the ortho and para forms of \H2Cl+\ toward MA0.89 presented by \cite{Muller2014} and \cite{LeGal2017} with new observations of HCl obtained with ALMA during three observing runs, on 2014 May 03, 2014 May 06, and 2014 Jun 30. Both HCl isotopologs were observed in the same spectral window, and within the same receiver tuning as the para-H$_2$O$^+$ data presented by \cite{Muller2017}. The data reduction followed the same method as for the MA0.68 data described above (self-calibration on the quasar, extraction of the spectra by visibility fitting, and weighted averaging into a final spectrum).
\section{Results} \label{sec:results} \begin{table*}[ht] \caption{Summary of results and line-of-sight properties.}\label{tab:ncol} \begin{center} \begin{tabular}{cccccccc} \hline \hline Sightline & $z_{\rm abs}$ & A$_{V,tot}$ & N(H$_2$) & N(H) & N(H$_2$Cl$^+$) $^a$ & N(HCl) $^a$ & $^{35}$Cl/$^{37}$Cl \\ & & (mag) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & (cm$^{-2}$) & \\ \hline MA0.89 SW & 0.88582 & 20 & $2 \times 10^{22}$ & $1 \times 10^{21}$ & $(2.3 \pm 0.3) \times 10^{13}$ & $(2.7 \pm 0.5) \times 10^{13}$ & $2.99 \pm 0.05$ $^b$ \\ & & & & & & & $3.28 \pm 0.08$ $^c$ \\ MA0.89 NE & 0.88582 & 2 & $1 \times 10^{21}$ & $2 \times 10^{21}$ & $(5.2 \pm 0.5) \times 10^{12}$ & $< 2.4 \times 10^{11}$ & $3.3 \pm 0.3$ $^b$ \\ \hline MA0.68 & 0.68466 & 8 & $8 \times 10^{21}$ & $4 \times 10^{20}$ & $(1.4 \pm 0.2) \times 10^{13}$ & -- & $2.2 \pm 0.3$ $^b$ \\ \hline \end{tabular} \tablefoot{ $a)$ Total column density, including both $^{35}$Cl and $^{37}$Cl isotopologs, and ortho and para forms (assuming OPR=3) of \H2Cl+; $b)$ Calculated from H$_2$Cl$^+$ isotopologs; $c)$ Calculated from HCl isotopologs.} \end{center} \end{table*} \begin{figure*}[ht!] \begin{center} \includegraphics[width=\textwidth]{fig-spec-all.eps} \caption{Absorption spectra of chlorine-bearing species observed toward MA0.89~SW (top three panels), MA0.89~NE (middle three panels), and MA0.68 (bottom panel). Hyperfine structure is indicated in green for $^{35}$Cl-isotopologs of each line toward MA0.89~SW. Hyperfine structure was deconvolved by fitting multiple Gaussian velocity components; these fits are shown in red on top of spectra, with residuals shown in blue. On the right, deconvolved opacity profiles (assuming optically thin lines with $f_c=1$, as explained in Section~\ref{sec:results}), normalized to their peak opacity, are shown, together with opacity profiles of some other species for comparison. 
The profile of p-H$_2$O$^+$ is of the strongest hyperfine component of the $1_{10}$-$1_{01}$ transition, at rest frequency of 607~GHz. The profile of o-H$_2$S is the $1_{10}$-$1_{01}$ transition, at rest frequency of 168.8~GHz, which was observed simultaneously with o-\H2Cl+\ in the same tuning.} \label{fig:spec} \end{center} \end{figure*} The spectra observed along the three lines of sight are shown in Fig.~\ref{fig:spec}. H$_2$Cl$^+$ is detected in all sightlines, unsurprisingly since it is ubiquitous in the Galactic diffuse and translucent gas \citep{Lis2010,Neufeld2012}. On the other hand, HCl is detected in only one of the two sightlines toward \hbox{PKS\,1830$-$211}\ (MA0.89~SW), with a stringent upper limit toward the other. It was not observed toward MA0.68. The \H2Cl+\ absorptions reach at most a few percent of the continuum level for all sightlines, so we must determine whether the absorption is optically thin and whether the source covering factor, $f_c$, is less than unity. In MA0.89~SW, the ground state transition of species like CH$^+$ and H$_2$O is strongly saturated and reaches a depth of nearly 100\% of the continuum level, implying that $f_c \approx 1$ for these species \citep{Muller2014,Muller2017}. In MA0.89~NE, the situation is not so clear, but the deepest absorption reaches a depth of several tens of percent of the continuum level \citep{Muller2017}. For MA0.68, the covering factor of the 557~GHz water line is known to be large and close to unity \citep{Combes1997}, as confirmed by recent ALMA data (Muller, private communication). The HCl absorption reaches a depth of $\sim$10\% of the continuum level toward MA0.89~SW, so without further evidence we consider $f_c=1$ for all species and all sightlines -- hence an optically thin line regime -- and our derived column densities (Table~\ref{tab:ncol}) can be viewed, strictly speaking, as lower limits. 
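Under these assumptions, the conversion from a fractional absorption depth to a peak opacity is simply $\tau = -\ln(1 - \mathrm{depth}/f_c)$. The following short numerical illustration (our own back-of-envelope check, not part of the observational pipeline) shows why adopting $f_c=1$ makes the derived column densities lower limits:

```python
# Back-of-envelope check: peak optical depth implied by a fractional
# absorption depth, for a given source covering factor f_c.
import math

def peak_tau(depth, f_c=1.0):
    """tau = -ln(1 - depth/f_c), for a continuum normalized to unity."""
    return -math.log(1.0 - depth / f_c)

# HCl toward MA0.89 SW absorbs ~10% of the continuum
tau_thin = peak_tau(0.10)           # f_c = 1, as assumed in the text
tau_cov = peak_tau(0.10, f_c=0.5)   # same depth if only half the source is covered

print(f"f_c = 1.0: tau = {tau_thin:.3f}")  # ~0.105, well in the optically thin regime
print(f"f_c = 0.5: tau = {tau_cov:.3f}")   # larger tau, hence larger column density
```

Any covering factor below unity increases the opacity (and hence the column density) inferred from the same absorption depth, which is why the $f_c=1$ values are, strictly speaking, lower limits.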
When detected, both H$_2$Cl$^+$ and HCl show their \hbox{$^{35}{\rm Cl}$}- and \hbox{$^{37}{\rm Cl}$}-isotopologs, observed simultaneously in the same spectral window (see a summary of the spectroscopic parameters of both species in Table~\ref{tab:hfs}). Considering optically thin lines and given the very small difference in their relative weight, the abundance ratio of the $^{35}$Cl- and $^{37}$Cl-isotopologs should reflect the $^{35}$Cl/$^{37}$Cl isotopic ratio (see Section~\ref{sec:35Cl/37Cl}). In MA0.89, the ground-state transitions of both the ortho and para forms of \H2Cl+\ were observed. It should be noted that, from the same data, \cite{LeGal2017} determined ortho-to-para ratios (OPR) in agreement with the spin statistical weight 3:1 for both sightlines, within the uncertainties. In MA0.68, only ortho-\H2Cl+\ was observed. We adopted OPR=3 for the calculations of \H2Cl+\ total column densities in all sightlines. Finally, both \H2Cl+\ and HCl harbor a hyperfine structure that needs to be deconvolved before comparing intrinsic absorption profiles. This was done by fitting an intrinsic normalized profile, composed of the sum of individual Gaussian velocity components and convolved with the hyperfine structure (see Table~\ref{tab:hfs}), using $\chi^2$ minimization and taking the errors to be 1$\sigma$. Both the $^{35}$Cl- and $^{37}$Cl-isotopologs were treated in a common fit, with the $^{35}$Cl/$^{37}$Cl ratio as a free parameter. In order to assess whether the hyperfine structure deconvolution was robust, we fitted the ortho and para lines of \H2Cl+\ separately for MA0.89 SW. We obtained a good solution, with residuals at the noise level, with four Gaussian velocity components for this line of sight. For MA0.89 NE and MA0.68 the detections have lower signal-to-noise ratios, so we limited the number of Gaussian components to one for the sake of simplicity. These fits still yield residuals consistent with the noise level.
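The joint deconvolution just described can be sketched as follows. This is an illustrative reimplementation, not the code used for the analysis, and the hyperfine offsets and weights below are placeholders rather than the CDMS values of Table~\ref{tab:hfs}; the key feature is that both isotopologs share the intrinsic Gaussian components, with the $^{35}$Cl/$^{37}$Cl ratio as a single free parameter:

```python
# Illustrative sketch of a joint hyperfine-deconvolution fit (our own
# reimplementation).  Hyperfine offsets/weights are hypothetical placeholders.
import numpy as np
from scipy.optimize import least_squares

HFS_35 = [(-5.0, 0.4), (0.0, 1.0), (7.0, 0.3)]   # (km/s offset, weight), placeholders
HFS_37 = [(-4.0, 0.4), (0.0, 1.0), (6.5, 0.3)]

def gauss(v, depth, v0, fwhm):
    sigma = fwhm / 2.3548
    return depth * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def profile(v, comps, hfs, scale=1.0):
    """Sum of intrinsic Gaussian components, convolved with the hyperfine
    stick spectrum (each component replicated at each offset with its weight)."""
    tau = np.zeros_like(v)
    for depth, v0, fwhm in comps:
        for dv, w in hfs:
            tau += scale * w * gauss(v, depth, v0 + dv, fwhm)
    return tau

def residuals(p, v, spec35, spec37, ncomp):
    comps = [tuple(p[3 * i:3 * i + 3]) for i in range(ncomp)]
    ratio = p[-1]                                 # 35Cl/37Cl, free parameter
    return np.concatenate([profile(v, comps, HFS_35) - spec35,
                           profile(v, comps, HFS_37, 1.0 / ratio) - spec37])

# Synthetic "observed" opacity spectra: one velocity component, true ratio of 3
v = np.linspace(-60.0, 60.0, 400)
rng = np.random.default_rng(0)
truth = [(0.05, 0.0, 20.0)]                       # (peak tau, v0, FWHM)
spec35 = profile(v, truth, HFS_35) + 1e-4 * rng.standard_normal(v.size)
spec37 = profile(v, truth, HFS_37, 1 / 3.0) + 1e-4 * rng.standard_normal(v.size)

fit = least_squares(residuals, x0=[0.03, 1.0, 15.0, 2.5],
                    bounds=([0.0, -30.0, 1.0, 1.0], [1.0, 30.0, 60.0, 10.0]),
                    args=(v, spec35, spec37, 1))
print("fitted 35Cl/37Cl ratio:", round(fit.x[-1], 2))  # recovers ~3
```

A least-squares fit with Gaussian errors is equivalent to the $\chi^2$ minimization described in the text; fitting both isotopologs in common is what makes the isotopic ratio a single, well-constrained parameter.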
All the velocity profiles in Fig.~\ref{fig:spec} were normalized to the peak opacity to enable a straightforward comparison. There is a striking difference in the opacity profiles of \H2Cl+\ and HCl toward MA0.89~SW: the former has prominent line wings, at velocities $| v | = 10-50$~\hbox{km\,s$^{-1}$}, while the latter is mostly a narrow component centered at $v=0$~\hbox{km\,s$^{-1}$}\ with FWHM $\sim 20$~\hbox{km\,s$^{-1}$}. In addition to the chlorine-bearing species of interest here, we also show in Fig.~\ref{fig:spec} the opacity profiles for some other species observed at the same epoch (to prevent effects from time variations of the absorption profile, see \citealp{Muller2008,Muller2014}). In MA0.89~SW, we find that the profile of \H2Cl+\ best matches that of H$_2$O$^+$, while the profile of HCl best matches that of H$_2$S. Similarly, in MA0.68, we observe that the FWHM of the \H2Cl+\ opacity is noticeably larger than that of H$_2$S, suggesting again that the two species trace different gas components. Using the column density ratios between the two lines of sight of MA0.89, $\gamma_{SW/NE}$, \citet{Muller2016b,Muller2017} could classify the different species in two simple categories: those with low $\gamma_{SW/NE}$ ($\lesssim 5$), for example ArH$^+$, OH$^+$, H$_2$Cl$^+$, and H$_2$O$^+$, which are known to trace gas with low molecular hydrogen fraction, \hbox{$f_{\rm H_2}$}\footnote{ \hbox{$f_{\rm H_2}$} = 2$\times$n(H$_2$)/[n(H)+2$\times$n(H$_2$)]}, of 1--10\% (see, e.g., \citealt{Neufeld2016}), and those with $\gamma_{SW/NE}$ above 20, for example CH, HF, HCO$^+$, HCN, and H$_2$S, tracing gas with much higher molecular hydrogen fraction, $>30$\% and up to 100\%. Since MA0.89~NE is dominated by gas with low \hbox{$f_{\rm H_2}$}\ the non-detection of HCl in this line of sight, with a large $\gamma_{SW/NE} > 90$, suggests that HCl is a tracer of high-\hbox{$f_{\rm H_2}$} gas, in agreement with chemical predictions (see Section~\ref{sec:discussion}). 
The line of sight towards MA0.68 is not yet fully characterized, but the estimated column density of H$_2$ and relative abundance of \H2Cl+\ suggest that it is intermediate between MA0.89~NE and SW in terms of molecular hydrogen fraction. \section{Discussion} \label{sec:discussion} \subsection{Chemistry} \begin{table*}[ht!] \caption{Abundance ratios and derived parameters for diffuse gas in MA0.89 SW and MA0.89 NE.} \label{tab:relat} \begin{center} \begin{tabular}{c|c|c|c} \hline\hline & MA0.89 SW & MA0.89 NE & Reference/comment \\ \hline [\HHClp] / [HCl] & 0.8 & $>$ 17 & present work \\ $[\rm{OH}^+]$ / [H$_2$O$^+$] & 5.9 & 11 & \cite{Muller2016b} \\ \hbox{$f_{\rm H_2}$}\ & $\sim$ 0.04 & $\sim$ 0.02 & \cite{Muller2016b} \\ $\zeta$ (s$^{-1}$) & $2 \times 10^{-14}$ & $3 \times 10^{-15}$ & \cite{Muller2016b} \\ $x_e$ & $2.5 \times 10^{-4}$ & $2.4 \times 10^{-4}$ & using [OH$^+$]/[H$_2$O$^+$] (Eq.~\ref{eq:xefromOH+/H2O+}) \\ G$_0$ & $\sim$ 0.5 $^\dagger$ & $>$ 10 & using [H$_2$Cl$^+$]/[HCl] and $x_e$ (Eq.~\ref{eq:G0fromH2Cl+/HCl}) \\ \hline \end{tabular} \tablefoot{$\dagger$ Likely underestimated due to extinction.} \end{center} \end{table*} Given the lack of observational constraints other than molecular abundances in the lines of sight through MA0.68 and MA0.89, and the fact that they sample different clouds and potentially different gas components, we have limited ourselves to a simple analytical model to explore the chlorine chemistry and how it varies with a few key parameters.
For diffuse cloud conditions, HCl and \HHClp\ are linked in a simple way, as HCl results from one specific channel of the dissociative recombination of \HHClp\ and is principally destroyed by photodissociation: \begin{equation} \mathrm{\HHClp + e^- \rightarrow HCl + H} \end{equation} \begin{equation} \mathrm{HCl + \gamma \rightarrow H + Cl} \end{equation} \noindent with reaction rates $k_1 = 7.5 \times 10^{-8}\, (T/300)^{-0.5}$~cm$^3$\,s$^{-1}$ (see Appendix~\ref{app:ChemicalConsiderations}) and $k_2 = 1.8 \times 10^{-9}\, G_0 \exp(-2.9\, A_V)$~s$^{-1}$, where G$_0$ is the scaling factor of the interstellar UV radiation field (ISRF), expressed in Draine units \citep{Heays2017}. Under these conditions, HCl is destroyed almost entirely by photodissociation. From these two reactions, we can write, at steady-state: \begin{equation} [\HHClp]/[\mathrm{HCl}] = \frac{k_2}{k_1 n_{e^-}} \end{equation} \noindent and, introducing the fractional ionization $x_e = n(e^-)/n_{\rm{H}}$ and neglecting the $A_V$ dependence of the photodissociation rate, since in a diffuse line of sight $A_V \ll 1$, we get \begin{equation} \label{eq:G0fromH2Cl+/HCl} \frac{[\HHClp]}{[\mathrm{HCl}]} \sim 1.4 \times 10^{-2} \frac{G_0}{n_{\rm{H}} x_e} \left(\frac{T}{100}\right)^{0.5}. \end{equation} \noindent A large [\HHClp]/[HCl] ratio is obtained for large values of G$_0$/$n_{\mathrm{H}}$, as suggested by \citet{Neufeld2009}.
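As a sanity check (ours, not part of the original analysis), the numerical coefficient in Eq.~\ref{eq:G0fromH2Cl+/HCl} follows directly from the two rates quoted above:

```python
# Sanity check of the numerical coefficient in the [H2Cl+]/[HCl] relation.
import math

k1_300 = 7.5e-8   # H2Cl+ + e- -> HCl + H, cm^3 s^-1 at T = 300 K
k2_0 = 1.8e-9     # HCl photodissociation, s^-1 per Draine unit, A_V -> 0

# [H2Cl+]/[HCl] = k2/(k1 n_e) with n_e = x_e n_H and k1 ~ (T/300)^-0.5;
# rewriting the temperature dependence as (T/100)^0.5 divides by sqrt(3)
coeff = (k2_0 / k1_300) / math.sqrt(3.0)
print(f"coefficient = {coeff:.2e}")   # ~1.4e-2, as quoted in the text
```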
We can perform a similar analysis for the [\OHp]/[\HHOp] ratio by considering the main formation process of \HHOp\ via the \OHp\ + \HH\ reaction and its destruction by dissociative recombination and reaction with molecular hydrogen: \begin{equation} \mathrm{\OHp + \HH \rightarrow \HHOp + H} \end{equation} \begin{equation} \mathrm{\HHOp + e^- \rightarrow products} \end{equation} \begin{equation} \mathrm{\HHOp + \HH \rightarrow H_3O^+ + H} \end{equation} \noindent with reaction rates $k_5 = 1.7 \times 10^{-9}$~cm$^3$\,s$^{-1}$, $k_6 = 4.3 \times 10^{-7}\, (T/300)^{-0.5}$~cm$^3$\,s$^{-1}$, and $k_7 = 6.4 \times 10^{-10}$~cm$^3$\,s$^{-1}$. Introducing the fractional ionization $x_e$ and the molecular fraction \hbox{$f_{\rm H_2}$}, we get \begin{equation} \label{eq:xefromOH+/H2O+} \frac{[\OHp]}{[\HHOp]} = \frac{k_6 [e^-] + k_7 [\HH]} {k_5 [\HH]} = \frac{2 k_6 x_e + k_7 \hbox{$f_{\rm H_2}$}}{k_5 \hbox{$f_{\rm H_2}$}} \sim 8.8 \times 10^2 \frac{x_e}{\hbox{$f_{\rm H_2}$}} \left(\frac{100}{T}\right)^{0.5} + 0.38 . \end{equation} Using the derived molecular fractions from \cite{Muller2016b}, we can estimate the fractional ionization from the [\OHp]/[\HHOp] ratio in the two MA0.89 lines of sight, assuming $T = 100$~K. The [\HHClp]/[HCl] ratio then allows for the derivation of G$_0$, again following \cite{Muller2016b} in assuming $n_{\rm{H}} = 35$~cm$^{-3}$. The calculated values are reported in Table~\ref{tab:relat}. As the derivation does not consider any dependence on the visual extinction, the derived values of the radiation field correspond to lower limits. We note that the SW line of sight has a significantly higher molecular fraction, and hence a larger total visual extinction ($A_V$; see Section~\ref{subsec:sightlines}), than the NE line of sight, so its ISRF is likely to be underestimated in our reasoning.
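The $x_e$ and G$_0$ values of Table~\ref{tab:relat} can be reproduced from these two relations; the following cross-check (our own, using $T = 100$~K and $n_{\rm H} = 35$~cm$^{-3}$ as above) recovers them:

```python
# Cross-check of the x_e and G0 values derived from the two analytic relations,
# using the reaction rates quoted in the text (T = 100 K, n_H = 35 cm^-3).
import math

k5, k6_300, k7 = 1.7e-9, 4.3e-7, 6.4e-10   # cm^3 s^-1 (k6 at 300 K)
k1_300, k2_0 = 7.5e-8, 1.8e-9
T, n_H = 100.0, 35.0

A = 2 * k6_300 / k5 * math.sqrt(3.0)       # ~8.8e2, coefficient of the OH+/H2O+ relation
B = k7 / k5                                # ~0.38, its constant term
C = (k2_0 / k1_300) / math.sqrt(3.0)       # ~1.4e-2, coefficient of the H2Cl+/HCl relation

def x_e(ratio_OHp_H2Op, f_H2):
    """Invert the [OH+]/[H2O+] relation for the fractional ionization."""
    return (ratio_OHp_H2Op - B) * f_H2 / (A * math.sqrt(100.0 / T))

def G0(ratio_H2Clp_HCl, xe):
    """Invert the [H2Cl+]/[HCl] relation for the radiation-field scaling."""
    return ratio_H2Clp_HCl * n_H * xe / (C * math.sqrt(T / 100.0))

xe_SW, xe_NE = x_e(5.9, 0.04), x_e(11.0, 0.02)
print(f"x_e:  SW {xe_SW:.1e},  NE {xe_NE:.1e}")                        # ~2.5e-4, ~2.4e-4
print(f"G0:   SW {G0(0.8, xe_SW):.1f},  NE > {G0(17.0, xe_NE):.0f}")   # ~0.5, > 10
```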
The results depend strongly on the value of $A_V$, but as an example: for a cloud of $A_V$ = 1 in MA0.89~SW, we calculate G$_0$ = 9. This suggests that both MA0.89 SW and MA0.89 NE have more intense radiation fields than in the Solar neighborhood. Our simple model is broadly consistent with the results of \citet{Neufeld2009}, though we note that the chlorine chemical network still has some uncertainties, as discussed in Appendix~\ref{app:ChemicalConsiderations}, which can significantly impact the model results. \citeauthor{Neufeld2009} find a column density ratio [\HHClp]/[HCl]~$\sim1$, as we measure in MA0.89, for a two-sided slab model with G$_0$ = 10 and $n_{\rm{H}}$ = 315 cm$^{-3}$ at $A_V\sim1$. However, the predicted column densities of these molecules are significantly smaller than the observed $\sim$10$^{13}$~cm$^{-2}$ in our lines of sight. \begin{figure}[ht] \begin{center} \includegraphics[width=8.8cm]{fig-ratio-CHLORINEBearingSpecies.eps} \caption{Abundance ratio of H$_2$Cl$^+$ (derived from the ortho and para forms, shown in the top and bottom panels, respectively) over HCl across the absorption profile toward MA0.89~SW. We use the measured ortho-to-para ratio of three to obtain the total H$_2$Cl$^+$ column density. The abundance ratio is calculated only for channels where the signal-to-noise ratio of the normalized profiles of both \H2Cl+\ and HCl is higher than five.} \label{fig:abundanceRatio-H2Cl+-HCl} \end{center} \end{figure} \subsubsection{Properties of each sightline} \label{subsec:sightlines} Here we discuss our results towards MA0.68 and MA0.89 in more detail and compare the physical and chemical properties in the different sightlines. \paragraph{MA0.89 SW:} This sightline is rich in molecular species, with almost 50 different species detected so far \citep[e.g.,][]{Muller2011,Muller2014}, and the H$_2$ column density is relatively high, at $\sim 10^{22}$~cm$^{-2}$.
We can estimate the total visual extinction $A_{V,tot}$ in MA0.89 SW using Bohlin's law \citep{Bohlin1978} and the H and H$_2$ column densities \citep{Chengalur1999,Muller2014}. We find $A_{V,tot} \approx 20$ mag, which is shared among a number of individual clouds or kinematical structures. Obviously, there is at least one, most likely a few, cloud(s) with $A_V$ high enough ($\sim 1$) in the line of sight for HCl to be detectable. Indeed, the line profile of HCl indicates two peaks close to $v \sim 0$~\hbox{km\,s$^{-1}$}. Those two velocity components were previously noticed as two distinct clouds with different chemical setups, as they show opposite peaks in their CF$^+$ vs CH$_3$OH absorption \citep{Muller2016}, suggesting chemical segregation. The multi-phase composition of the absorbing gas in this sightline has been clearly revealed by the recent observations of hydrides with ALMA \citep{Muller2014,Muller2016,Muller2017}. Accordingly, the absorption from \H2Cl+\ and HCl can be simply understood as tracing gas with different molecular fraction. In Fig.~\ref{fig:abundanceRatio-H2Cl+-HCl}, we see a clear difference between the core and wings of the line profiles: in the line center the gas is denser or better shielded and HCl dominates, with a [\H2Cl+]/[HCl] ratio of $\sim 0.5$, while in the wings the ratio increases to $\gtrsim 1$ and \H2Cl+\ dominates. This is consistent with different gas properties (i.e., \hbox{$f_{\rm H_2}$}) between the core and wings. \paragraph{MA0.89 NE:} Only \H2Cl+\ is detected in this line of sight, which is characterized by gas at low \hbox{$f_{\rm H_2}$}. The [H$_2$Cl$^+$]/[HCl] abundance ratio has a lower limit of $\sim$17, more than one order of magnitude higher than in the SW line of sight. Following the method outlined above, we estimate a total $A_V \approx 2$ mag. 
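These extinction estimates can be reproduced from the tabulated H and H$_2$ column densities, assuming the standard Galactic gas-to-extinction ratio of \citet{Bohlin1978} with $R_V = 3.1$ (our own cross-check):

```python
# Reproducing the quoted A_V estimates from N(H2) and N(H), assuming the
# Galactic gas-to-extinction ratio N_H/A_V = 5.8e21/3.1 cm^-2 mag^-1.
NH_per_AV = 5.8e21 / 3.1   # cm^-2 mag^-1, Bohlin et al. (1978), R_V = 3.1

sightlines = {             # N(H2), N(H) in cm^-2
    "MA0.89 SW": (2e22, 1e21),
    "MA0.89 NE": (1e21, 2e21),
    "MA0.68":    (8e21, 4e20),
}
for name, (NH2, NH) in sightlines.items():
    print(f"{name}: A_V ~ {(NH + 2 * NH2) / NH_per_AV:.1f} mag")
# ~22, ~2, and ~9 mag: consistent with the quoted ~20, ~2, and ~8 mag
```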
Based on a statistical analysis, \cite{Muller2008} argued that the large time variations of the opacity profiles\footnote{due to the intrinsic activity of the background quasar, causing morphological changes, hence changing illumination to the foreground absorber.} imply a relatively small number of individual clouds in the line of sight, $\lesssim 5$, and that the bulk of the absorption should arise from clouds not much smaller than the background continuum source, $\sim 1$~pc. ALMA observations of the CH$^+$ absorption \citep{Muller2017} reveal a complex velocity profile which can be fitted with $\sim 10$ different Gaussian velocity components, some with widths as small as a few \hbox{km\,s$^{-1}$}, spanning a total velocity interval between $-300$~\hbox{km\,s$^{-1}$}\ and $-100$~\hbox{km\,s$^{-1}$}. Taking a simplified model of $\sim$10 individual clouds along the line of sight results in each having an average $A_V$ of $\sim$0.2. From the chemical predictions, we indeed expect some production of \H2Cl+\ at this level of $A_V$ but very little HCl, consistent with the non-detection of HCl. \paragraph{MA0.68} Here, we estimate a total $A_V \approx 8$ magnitudes. The overall absorption profile is narrow compared to that of MA0.89 and we estimate the number of clouds to be $\sim 3$ based on the number of distinct velocity features in the profile \citep[e.g.][]{Wallstrom2016}, suggesting characteristics intermediate between MA0.89~SW and MA0.89~NE. The abundance of \H2Cl+\ relative to H$_2$ is also intermediate between the MA0.89 sightlines. \subsubsection{Elemental chlorine and abundances of chlorine-bearing molecules} Models of chlorine chemistry \citep[e.g.,][]{Neufeld2009} find that in the presence of H$_2$ most chlorine resides in the form of the neutral atom Cl. 
Observationally, a strong correlation is found between the column densities of Cl and H$_2$, in both the local ISM \citep{Jura1974,Sonnentrucker2006,Moomey2012} and in high-redshift damped Lyman-$\alpha$ absorption systems \citep{Balashev2015} over three orders of magnitude in H$_2$ column densities, with a relation\footnote{from an independent least-squares fit of the data from \citet{Moomey2012} and \citet{Balashev2015}.} \begin{equation} \mathrm{log[N_{Cl}]} = (0.79 \pm0.06) \times \mathrm{log[N_{H_2}]} - (2.13 \pm1.15) . \label{eq:ClH2corr} \end{equation} \noindent This trend is found to be independent of the overall gas metallicity, suggesting that neutral chlorine could be an excellent tracer of H$_2$. Assuming that the correlation in Eq.~\ref{eq:ClH2corr} also holds for our molecular absorbers at intermediate redshifts, we can infer the column density of neutral chlorine corresponding to the column density of H$_2$ in the different sightlines. With the observed column densities of H$_2$Cl$^+$ and HCl, we can then estimate the fraction of chlorine in these molecules (assuming neutral chlorine is vastly dominant and summing the \hbox{$^{35}{\rm Cl}$}\ and \hbox{$^{37}{\rm Cl}$}-isotopologs). Accordingly, we estimate that about 0.7\%, 1.8\%, and 0.9\% of chlorine is in the form of H$_2$Cl$^+$ for MA0.89~SW, MA0.89~NE, and MA0.68, respectively. For MA0.89~SW, we also estimate that about 0.9\% of Cl is in HCl. These values are uncertain by about an order of magnitude (due to dispersion of the $\mathrm{N_{Cl}-N_{H_2}}$ correlation and uncertainty in the H$_2$ column density), and averaged over the whole sightline. \subsubsection{HCl$^+$ and CH$_3$Cl} Besides \H2Cl+\ and HCl, we here discuss briefly some other chlorine-bearing species, already detected or potentially present in the ISM in our intermediate-redshift absorbers. 
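As a numerical cross-check of the chlorine budget discussed above (ours, not part of the original analysis), the quoted fractions follow from Eq.~\ref{eq:ClH2corr} and the column densities of Table~\ref{tab:ncol}:

```python
# Cross-check of the chlorine fractions, combining the N_Cl-N_H2 correlation
# with the observed H2Cl+ and HCl column densities.
import math

def N_Cl(N_H2):
    """Neutral-chlorine column from log N_Cl = 0.79 log N_H2 - 2.13."""
    return 10 ** (0.79 * math.log10(N_H2) - 2.13)

data = {  # sightline: (N(H2), N(H2Cl+)) in cm^-2
    "MA0.89 SW": (2e22, 2.3e13),
    "MA0.89 NE": (1e21, 5.2e12),
    "MA0.68":    (8e21, 1.4e13),
}
for name, (NH2, NH2Clp) in data.items():
    print(f"{name}: H2Cl+ holds {100 * NH2Clp / N_Cl(NH2):.1f}% of Cl")
print(f"MA0.89 SW: HCl holds {100 * 2.7e13 / N_Cl(2e22):.1f}% of Cl")
# -> 0.7%, 1.8%, 0.9%, and 0.9%, matching the values quoted in the text
```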
As previously mentioned, HCl$^+$ is the first chlorine-bearing species to be formed in the chlorine-chemistry network from the reaction of Cl$^+$ and H$_2$, in gas with a low enough \hbox{$f_{\rm H_2}$}\ to maintain some Cl$^+$, and as such, HCl$^+$ is an excellent tracer of the diffuse gas component. HCl$^+$ has a complex spectrum, with multiple ground-state hyperfine transitions around 1.44~THz \citep{DeLuca2012}. Unfortunately, those transitions fall outside the ALMA bands at a redshift of 0.89, so HCl$^+$ cannot be observed toward \hbox{PKS\,1830$-$211}. It could in principle be observed toward \hbox{B\,0218$+$357}\ in ALMA Band~10, although with the observational challenge of a weak background continuum source at low elevation. Another chlorine-bearing molecule, recently detected in the ISM, is methyl chloride (CH$_3$Cl). It has several hyperfine transitions which were covered in a deep spectral scan in the 7~mm band with the Australia Telescope Compact Array toward \hbox{PKS\,1830$-$211} \citep{Muller2011}. No features related to CH$_3$Cl (rest frequency near 79.8~GHz) are detected down to a few per mil of the continuum level. Simulated absorption spectra for the conditions appropriate for the MA0.89 absorber yield a peak optical depth of $\tau = 1.95\times 10^{-3}$ for the hyperfine-structure blend, with an upper limit on the column density of $N({\rm CH}_3{\rm Cl}) = 5\times 10^{12}$ cm$^{-2}$. CH$_3$Cl was recently identified in the young stellar object IRAS $16293-2422$ by \citet{Fayolle2017}, who infer a CH$_3$Cl column density of $4.6\times 10^{14}$ cm$^{-2}$ over a $0\farcs 5$ beam from these high-resolution ALMA data, under the assumption of a fixed excitation temperature of 102~K. A previous observation of HCl toward the same source by \citet{Peng2010} yielded well-constrained optical depths in the $J=1-0$ transition but referred to a much larger beam size of $13\farcs 5$.
If the HCl result is re-interpreted for a fixed excitation temperature of 102~K, the resulting column density of HCl would be $8.9\times 10^{14}$ cm$^{-2}$, and if the ratio of column densities is simply scaled by the ratio of beam areas, then the abundance ratio would be [CH$_3$Cl]/[HCl] $= 7\times 10^{-4}$. This is likely to be an underestimate, because the HCl-emitting source probably does not fill the $13\farcs 5$ beam. Even so, the [CH$_3$Cl]/[HCl] ratio in IRAS $16293-2422$ is not inconsistent with the value $(4\pm 2)\times10^{-3}$ determined from the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) measurements of the gaseous coma of comet 67P/Churyumov-Gerasimenko \citep{Fayolle2017}. This [CH$_3$Cl]/[HCl] ratio is consistent with our upper limit on CH$_3$Cl in MA0.89~SW. \subsection{$^{35}$Cl/$^{37}$Cl isotopic ratio} \label{sec:35Cl/37Cl} The $^{35}$Cl/$^{37}$Cl isotopic ratio has been studied in a variety of Galactic sources, ranging from circumstellar envelopes to molecular clouds and star-forming regions. The measured ratios vary between $\sim$1 and 5, though most fall around 2.5 \citep{Maas2018} with uncertainties large enough to be consistent with the solar system ratio of 3.13 \citep{Lodders2009}. This value is measured from meteorites, and hence is indicative of the conditions at the formation of the solar system. Galactic chemical evolution models by \citet{Kobayashi2011} predict that the chlorine isotopic ratio in the Solar neighborhood has decreased with increasing metallicity since the formation of the solar system, down to about 1.8 at the present day and Solar metallicity. From \H2Cl+\ absorption observed in 2012 in MA0.89~SW, \cite{Muller2014b} found a ratio $^{35}$Cl/$^{37}$Cl~=~$3.1_{-0.2}^{+0.3}$, consistent with the Solar system value. With the higher quality of the new 2014 data, we revise this ratio ($2.99 \pm 0.05$) and we are also able to measure it in MA0.89~NE ($3.3 \pm 0.3$).
These two measurements, at galactocentric radii of 2 and 4 kpc, respectively, are consistent with each other, suggesting there are only small effects from metallicity or stellar population differences in the disk of MA0.89. In addition, we obtain another measurement from the HCl isotopologs in MA0.89~SW, $3.28 \pm 0.08$, broadly consistent with the previous values and suggesting good mixing between the different gas components with low and high \hbox{$f_{\rm H_2}$}. In MA0.68, we find a $^{35}$Cl/$^{37}$Cl ratio of $2.2 \pm 0.3$, from the H$_2$Cl$^+$ isotopologs. \citet{Wallstrom2016} found overall very similar C, N, O, and S isotopic ratios between MA0.68 and MA0.89~SW, which show clear evolution effects compared to the ratios found in the Solar neighborhood. The $^{35}$Cl/$^{37}$Cl ratio is the first to deviate between MA0.89~SW and MA0.68, and it would be interesting to confirm this difference with further observations. A $^{35}$Cl/$^{37}$Cl ratio around three reflects nucleosynthesis products mainly from massive stars and core-collapse supernovae \citep{Kobayashi2011}. On the other hand, a \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ ratio of $\sim 2$ requires nucleosynthesis by either low-metallicity AGB stars or higher-metallicity Type II supernovae. As AGB stars affect their environment on long timescales, the first possibility requires MA0.68 to be a relatively old galaxy. For the second possibility, higher-metallicity Type II supernovae require short timescales and an intrinsically more metal-rich galaxy. The simplest explanation of the difference in chlorine isotopic ratios between MA0.89 and MA0.68 might be that MA0.68 has an intrinsically higher metallicity.
\section{Summary and conclusions} \label{sec:conclusions} We investigate the absorption of chlorine-bearing molecules in two molecular absorbers (three independent lines of sight with different properties) at intermediate redshift, MA0.89 located toward \hbox{PKS\,1830$-$211}\ ($z_{abs}=0.89$) and MA0.68 toward \hbox{B\,0218$+$357}\ ($z_{abs} = 0.68$). \H2Cl+\ was observed, and detected, toward all sightlines. HCl was observed only toward \hbox{PKS\,1830$-$211}, but detected only in one of the two sightlines. Our results and conclusions are summarized as follows: \begin{itemize} \item The comparison of the absorption spectra of chlorine-bearing species between the different sightlines and with that of other species, namely H$_2$O$^+$ and H$_2$S, provides us with a simple classification based on molecular fraction (\hbox{$f_{\rm H_2}$}). \H2Cl+\ and HCl trace the gas components with low and high \hbox{$f_{\rm H_2}$}, respectively. This picture is consistent with predictions from chemical modeling, where HCl requires a higher visual extinction ($A_V \sim 1$) than \H2Cl+\ to form. \item These two chlorine-bearing species can hence be used together to characterize the column of absorbing gas. In particular, toward MA0.89~SW, we find that the [\H2Cl+]/[HCl] abundance ratio varies from $\sim$0.5 at the line center, to $\sim$2 in the line wings, reflecting the multi-phase composition of the gas along the line of sight. The same ratio has a lower limit more than one order of magnitude higher toward MA0.89~NE. \item The detection of the \hbox{$^{35}{\rm Cl}$}- and \hbox{$^{37}{\rm Cl}$}- isotopologs allows us to measure the \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ isotopic ratio at look-back times of about half the present age of the Universe.
We find essentially the same values at galactocentric radii of 2~kpc (\hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ $= 2.99 \pm 0.05$ from \H2Cl+\ and $3.28 \pm 0.08$ from HCl) and 4~kpc (\hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ $= 3.3 \pm 0.3$) in the disk of MA0.89 and in the Solar neighborhood (\hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ $\sim 3$), while we find a lower ratio (\hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ $= 2.2 \pm 0.3$) in MA0.68. This could be interpreted as MA0.68 having an intrinsically higher metallicity. It would be interesting to obtain more measurements of the \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ isotopic ratio, including at higher redshifts, to investigate whether this isotopic ratio is a useful tracer of evolution. \item The comparison of the observed abundances of \H2Cl+\ and HCl with those predicted from chemical modeling suggests the need for a stronger interstellar radiation field in the disk of these absorbers than in the Solar neighborhood. Evidence for a slightly increased cosmic-ray ionization rate of atomic hydrogen was also found in MA0.89 \citep{Muller2016b}, potentially related to the higher star formation activity at these intermediate redshifts. \end{itemize} In short, the two chlorine-bearing species \H2Cl+\ and HCl are sensitive probes of the interstellar medium in redshifted absorbers. They offer diagnostics of the molecular fraction, gas composition, and nucleosynthesis enrichment via the \hbox{$^{35}{\rm Cl}$}/\hbox{$^{37}{\rm Cl}$}\ ratio. Their ground-state transitions are readily observable, for instance with ALMA, in a wide range of redshifts ($z \sim 0-6$), making them interesting tools to probe physico-chemical properties and evolution effects in the young Universe. \begin{acknowledgement} This paper uses the following ALMA data:\\ ADS/JAO.ALMA\#2012.1.00056.S \\ ADS/JAO.ALMA\#2013.1.00020.S \\ ADS/JAO.ALMA\#2016.1.00031.S. 
\\ ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. S.H.J.W. acknowledges support by the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and from Academia Sinica under AS-IA-106-M03. \end{acknowledgement}
\section{Introduction}\label{sec:intro} A fundamental problem in biology involves the origins of an innovation that allowed the development of organisms in our biosphere, beyond complex chemical reaction networks: the emergence of cells \citep{MajorTransitions,LuisiBook}. Cells define a clear scale of organization and, given their spatially confined structure, they constitute efficient units where molecules can easily interact, coordinate their dynamical patterns and establish a new level of selection. However, although it is often assumed that there was a transition from some type of `less-organised' prebiotic chemistry (surely including catalytic cycles) to a cell-based living chemistry, little is yet known concerning the potential pathways that could be followed to cross it. Once in place, protocell assemblies would require available resources for their maintenance and, thus, would naturally get inserted in diverse competitive dynamics in which the main selective unit would be the whole protocellular system. In this context, aggregate-level evolution is the right scale of analysis to be considered. Different types of protocellular systems of diverse complexity have been studied from a theoretical standpoint \citep{varelaMaturanaUribe1974,ganti1975,dyson1985,segre2001,soleReproductionComputation2007, maciaSole2007,mavelliRuizMirazo2007,ruizMirazo2011}. In particular, by considering the coupling of a template carrying information with vesicle replication and metabolism, it has been shown that Darwinian selection is the expected outcome of competition in a protocellular world \citep{munteanu2007}. However, an early pre-Darwinian stage in the development of biological organisms was likely to be dominated by supramolecular systems disconnected from information, closer to elementary forms of metabolism and strongly constrained by the molecular diversity of the available chemical repertoire. 
What type of competition and cooperation processes were at work in the chemical world leading to the emergence of early protocells? In that context, processes able to favour asymmetries in the chemical composition of vesicles should be expected to play a relevant role in defining the conditions under which protocellular assemblies could thrive. Recent laboratory experiments have actually shown how simple physical changes made to the lipid membrane of vesicles can drive competition between those vesicles when the supply of lipid is limited. First, \citet{chenSzostak2004} reported competitive dynamics in a population of vesicles, whereby vesicles that were osmotically swollen by an encapsulated cargo of RNA (or sucrose) stole lipids from their empty, osmotically relaxed counterparts by virtue of absorbing lipids more quickly. More recent experimental work has turned attention to other possible selective advantages of protocells, such as phospholipid- \citep{budinSzostak2011} and peptide- \citep{adamalaSzostak2013} driven competition amongst vesicles. Instead of membrane tension, the main factor for competition here is a different type of molecule inserted in the lipid bilayer, which changes its physical properties. In the first case, which will be the main focus of this work, fatty acid vesicles endowed with a membrane fraction of phospholipid are observed to steal lipid molecules from phospholipid-deficient neighbours, which shrink, while the former grow and keep their potential for division. Two basic physical mechanisms have been postulated to underlie phospholipid-driven growth, as explained in Fig. \ref{fig:indirect_direct_mechanisms} under the terms \emph{indirect} and \emph{direct} effects.
\begin{figure} \begin{center} \includegraphics[width=7 cm]{indirect_direct.pdf} \end{center} \caption{ {\bf Two mechanisms of phospholipid-driven growth.} \textbf{A} \emph{Indirect effect}, whereby the presence of phospholipid in a vesicle membrane drives growth simply through a geometric asymmetry: only the lipid section of the bilayer (grey) is able to release lipid (orange arrows), whereas the whole of the bilayer surface (made of lipids and phospholipids) is able to absorb lipid monomer (green arrows). The phospholipid fraction is pictured as one continuous block to highlight the principle only. The indirect effect can also be created by non-lipid surfactant molecules (e.g. peptides) residing long enough in the membrane to increase the surface absorption area. \textbf{B} \emph{Direct effect}, whereby the acyl tails of the phospholipids have a high affinity for packing closer to each other and increasing bilayer order, thus making the exit of the simple lipids more difficult. The direct effect is specific to the molecular structure of phospholipids. In both cases, growth eventually stops when the phospholipid fraction in the membrane becomes diluted. } \label{fig:indirect_direct_mechanisms} \end{figure} With the aim of complementing experimental results, and in an attempt to better formalise and investigate the competition processes at play, in this paper we develop a mathematical model of a competing population of vesicles. The model is based at the level of lipid kinetics, following the approach of \citet{ENVIRONMENT2010}. A vesicle in the population absorbs and releases lipid to and from its membrane at rates that depend on the current physical properties of that particular vesicle (such as membrane composition or extent of swelling). Using physically realistic parameters (i.e. 
lipid molecule sizes, vesicle aggregation numbers and CVC concentrations - see Table \ref{table:parameters}) we are able to qualitatively and often quantitatively reproduce experimental data for phospholipid-driven and osmotically driven competition. The paper is organised as follows. The Methods section introduces the kinetic model. A mean-field analysis is performed to give insight into why we should expect phospholipid-driven competition to result from a basic version of the model kinetics, followed by the description of a fast numerical method for solving the final equilibrium state of the full model. Then, the vesicle mixing procedure is specified, in order to be able to interface the model with experimental observations. The Results section summarises how well the kinetic model is able to reproduce experimental results and observations, including also some predictions for still untested protocell competition scenarios. Finally, in the Discussion section, we consider several possible limitations of our approach and conclude the study. \[ \, \] \noindent \framebox{\begin{minipage}[t]{1\columnwidth} \section*{Author Summary}\label{sec:author-summary} Synthetic protocell biology is bringing forward exciting experimental results that allow us to conceive in more realistic terms how the first living organisms could have emerged and started a process of Darwinian evolution. A remarkable finding has been the capacity of lipid vesicle populations to undergo competition and selection processes without the need of nucleotide replication mechanisms. This opens a completely new research avenue to explore and characterize pre-Darwinian modes of evolution leading to the first protocellular systems. In this work, we develop a mathematical model of vesicle competition to complement ongoing experimental efforts and to also provide a reliable way to investigate scenarios or conditions that are difficult to survey in the wet lab. 
Our model, which is based at the level of lipid kinetics, is demonstrated to reproduce diverse reported results and is helpful in providing new insights about the molecular mechanisms underlying protocell growth and competition dynamics. \end{minipage}} \section{Methods} \subsection{Theoretical Model of Vesicle Competition} The competition model involves a set of $n$ vesicles \[ {\cal V} = \{ {\cal V}_1, ..., {\cal V}_n \} \] each one characterized by a quadruple \[ {\cal V}_j = (L_\mu^j, P_\mu^j, L_c^j, B_c^j) \] embedded in a finite volume environment $\cal E$ defined by a triple $(\Omega_e, L_e,B_e)$. Each competing vesicle consists of a unilamellar membrane of two different lipid types: a fixed number of phospholipids $P_\mu$ (e.g. di-oleoyl-phosphatidic acid, DOPA) and a variable number of simple fatty acid lipids $L_\mu$ (e.g. oleic acid, OA). The $L$ lipids in the bilayer continuously exchange with the vesicle internal water pool (also considered a well-mixed chemical domain), and $\cal E$, whereas the $P$ phospholipids are considered approximately stationary due to their comparatively slow exchange rate (a reasonable assumption, given the very low CVC values of standard phospholipids compared to other species in water). The internal water pool of each vesicle hosts $L_c$ lipid monomers and also $B_c$ buffer species, which cannot permeate the bilayer but provide osmotic stability. These buffer species are also present in $\cal E$ with constant number $B_e$. Vesicles compete with each other as a consequence of taking up/releasing simple lipids $L$ from/to $\cal E$, which is a common limited resource. The initial system of vesicles is taken to be the result of mixing different vesicle populations, and is a closed system in a non-equilibrium state. The system equilibrates to a final state following the dynamics described below, with some vesicles growing bigger in surface at the expense of others, which shrink. 
We ignore spatial correlations and the possibility of direct vesicle-vesicle interactions, and assume a well-mixed set of vesicles (Fig. \ref{fig:schematic}). \begin{figure*} \begin{center} \includegraphics[width=13 cm]{schematic.pdf} \end{center} \caption{ {\bf Kinetic model of vesicle competition.} \textbf{A} Our model approach considers as a starting point a population of vesicles (of generally heterogeneous sizes and membrane compositions) in a well-mixed environment. \textbf{B} Each vesicle has a membrane composed of simple single chain lipids $L$, e.g. oleic acid (OA), and \textbf{C} sometimes more complex double chain phospholipids $P$, e.g. di-oleoyl-phosphatidic acid (DOPA). \textbf{D} outlines the kinetic interactions between vesicles. Here two vesicles are displayed. Vesicle 1, on the left hand side, has a mixed membrane with approximately 10 mol \% phospholipids $P$ (black) and the remainder single chain lipids $L$ (grey). Vesicle 2 consists purely of simple lipids $L$. In the ensuing competition, phospholipid-laden vesicle 1 will grow at the expense of vesicle 2, which will shrink.} \label{fig:schematic} \end{figure*} More precisely, each vesicle ${\cal V}_j$ is considered to release lipids to both aqueous phases (at each side of the bilayer) at the same rate $\lambda_j^\text{out}=k_\text{out} L_\mu^j {\textbf r}(\rho_j)$, and to absorb lipids from each phase at rate $\lambda_j^\text{in}= k_\text{in} S_\mu^j [L] {\textbf u}(\Phi_j)$, where $[L]$ is the molar concentration of lipid monomer in the respective phase. The uptake and release kinetics are symmetric on each side of the bilayer, which means that the lipid monomer concentration inside and outside each vesicle will be equal at equilibrium, $[L]_c^j = [L]_e = [L]^*$. Flip-flop of the simple lipid $L$ between membrane leaflets is considered very fast with respect to its uptake and release rates, and thus the bilayer is modelled as a single oily phase. 
The total number of lipids in the system $L_t$ is a conserved quantity set by the initial condition of mixing, always equal to the number of lipid monomers in the environment $L_e$, plus the number of lipids composing the vesicles. Therefore, at all times: \begin{equation} L_e + \sum_{j=1}^n \left( L_c^j + L_\mu^j \right) - L_t = 0 \label{eq:L_t-conservation} \end{equation} The state of the system is captured by enumerating the number of lipids in each of the aqueous pools inside the vesicles, and each of the vesicle membranes. The ODE system consists of $2n$ equations, two for each vesicle: \begin{equation} \frac{dL_c^j}{dt} = k_\text{out} L_\mu^j {\textbf r}(\rho_j) - k_\text{in} S_\mu^j [L]_c^j {\textbf u}(\Phi_j) \label{eq:L_c-ODE} \end{equation} \begin{equation} \frac{dL_\mu^j}{dt} = -2k_\text{out} L_\mu^j {\textbf r}(\rho_j) + k_\text{in} S_\mu^j ([L]_c^j+[L]_e) {\textbf u}(\Phi_j) \label{eq:L_mu-ODE} \end{equation} and $L_e$ can be deduced from constraint (\ref{eq:L_t-conservation}), once all $L_c$ and $L_\mu$ have been calculated at time $t$. To explain the choice of the lipid $L$ release kinetics: each lipid in a pure $L$ membrane is considered to have a uniform probability per unit time $k_\text{out}$ of dissociating from the membrane, and the function ${\textbf r}$ modifies this probability based on the current molecular fraction of phospholipid in the membrane $\rho = \frac{P_\mu}{P_\mu + L_\mu}$. In order to account for the \emph{direct effect}, we define the function $0 \le \textbf{r}(\rho) \le 1$ to be monotonically decreasing with increasing $\rho$, meaning that an increasing phospholipid fraction generally decreases bilayer fluidity, slowing down the rate of $L$ release from the membrane \citep{budinSzostak2011}. 
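To make the bookkeeping in Eqs. (\ref{eq:L_c-ODE}) and (\ref{eq:L_mu-ODE}) concrete, the following sketch (in Python, our choice of language; all numeric values are illustrative placeholders, not the calibrated parameters of Table \ref{table:parameters}) evaluates the two right-hand sides for a single vesicle, with the modifier functions $\textbf{r}$ and $\textbf{u}$ passed in as generic callables:

```python
# Right-hand sides of the two per-vesicle rate equations.
# r and u are supplied as callables; all numeric values are illustrative.

def vesicle_rhs(L_c_conc, L_e_conc, L_mu, P_mu, S_mu, Phi,
                r, u, k_out=1.0, k_in=2.0):
    """Return (dL_c/dt, dL_mu/dt) for one vesicle."""
    rho = P_mu / (P_mu + L_mu)                    # phospholipid fraction
    release = k_out * L_mu * r(rho)               # lipid leaving the bilayer
    uptake_in = k_in * S_mu * L_c_conc * u(Phi)   # absorbed from inner pool
    uptake_out = k_in * S_mu * L_e_conc * u(Phi)  # absorbed from environment
    # The membrane releases to BOTH aqueous phases (hence the factor 2)
    # and absorbs from both of them:
    return release - uptake_in, -2.0 * release + uptake_in + uptake_out
```

With $\textbf{r}=\textbf{u}=1$ and $k_\text{out} L_\mu = k_\text{in} S_\mu [L]$ on both sides of the bilayer, both derivatives vanish, reproducing the symmetric equilibrium $[L]_c = [L]_e$ discussed above.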
In a first approximation, $\textbf{r}$ is assumed linear: \begin{equation} \textbf{r}(\rho)=1-d\rho \label{eq:function-r} \end{equation} where parameter $0 \le d \le 1$ tunes how the lipid release rate is affected by phospholipid content ($1$ being maximally affected and $0$ being not at all). Conversely, the lipid uptake kinetics reflect that the probability of uptaking a lipid $L$ to the membrane is proportional to the density of lipid monomer in the immediate vicinity of the respective bilayer surface (i.e. the concentration of lipid in the surrounding medium), the area of surface available for absorption $S_\mu$ and the function ${\textbf u}$, based on the dimensionless geometric factor $\Phi= S_{\mu} / \sqrt[3]{36\pi \Omega^{2}}$ (where $\Omega$ is vesicle aqueous volume, in units consistent with $S_\mu$). We define a conditional function \begin{equation} \textbf{u}({\Phi})= \begin{cases} \exp\left(\frac{1}{\Phi}-1\right), & \Phi<1\\ 1, & \Phi\ge1 \end{cases}\label{eq:function-u} \end{equation} to denote that lipid uptake is only increased when the bilayer is stressed ($\Phi<1$). Flaccid vesicles do not have extra enhancement of the lipid uptake rate. The rationale for this function stems from \citet{ENVIRONMENT2010}, to account for osmotically-driven competition between vesicles \citep{chenSzostak2004}. To clarify some final assumptions, the vesicle surface area, referred to as $S_\mu = \frac{1}{2} (L_\mu \alpha_L + P_\mu \alpha_P)$, is the water-exposed area of the inner bilayer leaflet, or alternatively the water-exposed area of the outer bilayer leaflet of the vesicle. Membrane thickness is therefore considered negligible. Uptake and release kinetic constants $k_\text{in}$ and $k_\text{out}$ are set taking into account that spherical vesicles made purely of $L$ should be in equilibrium when the lipid monomer concentration inside and outside the vesicle is the experimentally observed CVC concentration for that amphiphilic compound (e.g. 
oleic acid), and also considering that $L$ uptake is orders of magnitude faster than $L$ release. For mixed membrane vesicles containing both $L$ and $P$ lipids, we assume that the lipid kinetics equations define what lipid monomer concentration inside and outside the vesicle $[L]_\text{eq}$ is necessary to keep the mixed membrane vesicle in equilibrium (however, in reality, the CVC of mixed lipid solutions is not a trivial matter \citep{cape2011}). For the purpose of lipid competition, $\cal E$ has a fixed volume of $\Omega_e$ litres, and each vesicle ${\cal V}_j$ has, in principle, a variable volume internal water pool of $\Omega_j = \Omega_e(L_c^j+B_c^j)/(L_e+B_e)$ litres. This condition ensures that, at all times, the interior of each vesicle is isotonic with respect to $\cal E$. However, we make the simplifying assumption in this work that vesicles exist in a solution with a comparatively high buffer concentration. Thus, each vesicle has an approximately constant aqueous volume $\Omega_j \approx \Omega_e(B_c^j/B_e)$ largely determined by the number of buffer molecules it has trapped inside the internal water pool, with $L$ flux to and from the water pool having marginal osmotic effects. Model parameters are given in Table \ref{table:parameters}. \subsection{Mean Field Approximation} To gain intuition into why one should expect phospholipid fraction and surface growth to be correlated in the vesicle population model described above, we can make a mean-field approximation. This approach considers a reduced scenario where many details associated with the full model are ignored in order to keep only the logic of the problem (Fig. \ref{fig:meanfield_ricard}). 
\begin{figure} \begin{center} \includegraphics[width=8.3cm]{meanfield.pdf} \end{center} \caption{ {\bf Mean-field model of vesicle population dynamics.} Considering vesicles as simplified aggregates permits some analytical treatment.} \label{fig:meanfield_ricard} \end{figure} The first simplification will be to ignore the internal structure of the vesicles, describing them instead as coarse-grained `aggregates', denoted by pairs ${\cal V}_j = (L_j, P_j)$, which contain just lipids and phospholipids. This step can be considered justified on the grounds that, at equilibrium, the amount of lipid monomer residing in the vesicle water pools (which typically have tiny volumes, around 1 quintillionth of a litre) is marginal as compared to the lipid composing the vesicle membranes. Since the internal structure or topology of the vesicles is disregarded, it actually amounts to treating them as elongated micelles or flat bilayers. The second simplification involves reducing the lipid uptake and release equations to their most basic form, independent of membrane tension (${\textbf u}(\Phi)=1$) and independent of membrane phospholipid fraction (${\textbf r}(\rho)=1$) respectively. 
Thus, the ODE system reduces to $n$ simplified equations, where for each aggregate: \begin{equation} \frac{dL_j}{dt} = -k_\text{out} L_j + \frac{1}{2} k_\text{in} (L_j \alpha_L + P_j \alpha_P) [L]_e \label{eq:L-ODE-aggregate} \end{equation} Under these conditions, at equilibrium, the molar lipid concentration in the environment $[L]_{e}=[L]_\text{eq}$ is related to the number of lipids and phospholipids in an aggregate by the following function: \begin{equation} f(L_j, P_j) = [L]_\text{eq} = \frac{2k_\text{out}}{k_\text{in}} \frac{L_j}{L_j \alpha_L + P_j \alpha_P} \label{eq:function-f-aggregate} \end{equation} For a fixed number of phospholipids $P_j > 0$, the mapping $f: L_j \rightarrow [L]_\text{eq}$ can be verified to be one-to-one, meaning that each aggregate is in equilibrium at only one specific outside lipid concentration, dependent on the number of lipids $L_j$ it contains. Thus, no multiple equilibria of the population are allowed from this type of aggregate dynamics. Now consider two arbitrarily chosen aggregates $i$ and $j$ in the population of $n$ aggregates, which are competing for lipid. Their ODEs, when written as: \[ \frac{dL_i}{dt} = -k_\text{out} L_i + \eta (L_i \alpha_L + P_i \alpha_P)(L_t - \sum_{m=1}^n L_m) \] \[ \frac{dL_j}{dt} = -k_\text{out} L_j + \eta (L_j \alpha_L + P_j \alpha_P)(L_t - \sum_{m=1}^n L_m) \] where $\eta = {k_\text{in}}/{2N_A \Omega_e}$, are reminiscent of the Lotka-Volterra competition equations associated to species sharing and competing for a common set of resources \citep{lotka1925book}. 
If we look for the equilibrium solutions of the previous system, using $dL_i/dt=dL_j/dt=0$, we obtain \begin{equation} \frac{L_i \alpha_L + P_i \alpha_P}{L_j \alpha_L + P_j \alpha_P} = \frac{L_i}{L_j} \end{equation} which leads to the following proportionality relation at equilibrium: \begin{equation} L_i = \left ( \frac{P_i}{P_j} \right ) L_j \label{eq:proportionality-at-eq-aggregate} \end{equation} This result immediately tells us that, for a given ratio $P_i/P_j$, the relative sizes of the two chosen vesicles are correlated. Unless $P_i=P_j$, one of the vesicles will be larger and the other smaller. For each pair $(P_i, P_j)$ with $P_i \ne P_j$, a single solution is found. When the functions ${\textbf u}$ and/or ${\textbf r}$ are not constant, unless they have a trivial form, it is generally not possible to show analytically what shape the correlation between phospholipid fraction and surface growth will take. However, in the next section we develop a fast numerical way to find the equilibrium configuration of the fully-fledged vesicle population model, with vesicles recovering their internal structure. As compared to numerically integrating the ODE set, the method provides the extra advantages of (i) being faster and thus scaling better for large vesicle populations and (ii) being able to calculate competition `tipping points' (i.e. critical points that mark the transition between growing and shrinking) directly. \subsection{Fast Computation of Competition Equilibrium} In this section we provide a general numerical approach to solving the equilibrium configuration of a possibly heterogeneous population of vesicles competing for a limited supply of lipid. These vesicles may be osmotically swollen, laden with phospholipid, or a mixture of both, and can be arbitrary in number. The method allows the lipid uptake and release functions $\textbf{u}$ and $\textbf{r}$ to take arbitrary forms, subject to some requirements detailed below. 
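Before detailing that method, the mean-field proportionality relation (\ref{eq:proportionality-at-eq-aggregate}) is easy to confirm numerically. The sketch below (Python; the lumped constant $\eta$, the particle counts and the head areas are all arbitrary illustrative choices, not the values of Table \ref{table:parameters}) relaxes two competing aggregates, Eq. (\ref{eq:L-ODE-aggregate}), by forward Euler integration:

```python
# Forward-Euler relaxation of two mean-field aggregates competing for
# lipid; all parameter values are arbitrary illustrative choices.

K_OUT = 1.0
ETA = 1e-5                  # eta = k_in / (2 N_A Omega_e), lumped constant
ALPHA_L, ALPHA_P = 0.3, 0.7
P = [300.0, 100.0]          # fixed phospholipid counts of the two aggregates
L_T = 1e5                   # total lipid in the (closed) system

def relax(L, dt=0.01, steps=20000):
    """Integrate dL_j/dt = -k_out L_j + eta S_j (L_t - sum L) to steady state."""
    for _ in range(steps):
        free = L_T - sum(L)                        # monomer still available
        dL = [-K_OUT * Lj + ETA * (Lj * ALPHA_L + Pj * ALPHA_P) * free
              for Lj, Pj in zip(L, P)]
        L = [Lj + dt * dLj for Lj, dLj in zip(L, dL)]
    return L

L_eq = relax([1000.0, 1000.0])
# At steady state the aggregate sizes lock to the phospholipid ratio:
# L_eq[0] / L_eq[1] -> P[0] / P[1], as the proportionality relation predicts.
```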
We start by defining a function $f: L_\mu \rightarrow [L]_\text{eq}$, like (\ref{eq:function-f-aggregate}), which gives the inside/outside lipid monomer concentration $[L]_\text{eq}$ necessary to maintain a particular vesicle ${\cal V}_j$ at equilibrium, given that this vesicle has a specific number of lipids/phospholipids in the membrane, and a specific volume: \begin{equation} f(L_\mu^j, P_\mu^j, \Omega_j)=[L]_\text{eq}^j=\frac{2k_\text{out}}{k_\text{in}} \frac{L_\mu^j}{L_\mu^j \alpha_L + P_\mu^j \alpha_P} \frac{\textbf{r}(\rho_j)}{\textbf{u}({\Phi_j})} \label{eq:function-f} \end{equation} The inverse of this function yields useful information: it is the mapping of $[L]_\text{eq}$ to the number of lipids which must exist in the membrane of a particular vesicle, in order for that vesicle to be at equilibrium. However, due to the difficulty in isolating $L_\mu$ from the potentially non-linear functions $\textbf{u}$ and $\textbf{r}$, in most cases the inverse mapping is not possible to write in closed form. Nevertheless, if uptake and release functions $\textbf{u}$ and $\textbf{r}$ make function $f$ both (i) one-to-one and onto and (ii) continuous, then it follows that the inverse mapping is a function $f^{-1}$, which can be numerically calculated for vesicle ${\cal V}_j$ by using $f$ and binary searching for an $L_\mu$ which satisfies: \begin{equation} f^{-1}([L]_\text{eq}, P_\mu^j, \Omega_j) = L_\mu^j\;|\;f(L_\mu^j, P_\mu^j, \Omega_j) - [L]_\text{eq} = 0 \label{eq:function-f-inverse} \end{equation} using appropriate search bounds (normally: $L_\mu^\text{min}=0$, $L_\mu^\text{max}=L_t$). Crucially, having a means to calculate $f^{-1}$ gives a way of determining the total number of lipids existing in all equilibrated vesicle membranes, given that the inside/outside lipid monomer concentration in the heterogeneous vesicle mixture is $[L]_\text{eq}$. 
For each $[L]_\text{eq}$, we know that each vesicle has a \emph{unique} number of membrane lipids $L_\mu$, because $f^{-1}$ is itself one-to-one. This means that a certain $[L]_\text{eq}$ can only admit one single equilibrium configuration of vesicles, not multiple equilibrium configurations, and this lack of ambiguity is a desirable property for the method. The lipid monomer concentration $[L]^*$ inside/outside all vesicles in this single equilibrium configuration can be found by making use of the lipid conservation principle (\ref{eq:L_t-conservation}): \begin{multline} [L]^* = [L]_\text{eq}\;|\;\sum_{j=1}^n f^{-1}([L]_\text{eq}, P_\mu^j, \Omega_j) + \\ [L]_\text{eq}N_A\Omega_e - L_t = 0 \label{eq:L-star} \end{multline} That is, at $[L]^*$, the lipid making up the membranes of all equilibrated vesicles, plus the lipid monomer inside and outside the vesicles, is equal to the total lipid in the system $L_t$ set by the initial condition. Expression (\ref{eq:L-star}) can also be solved by binary search of $[L]_\text{eq}$ between appropriate bounds, normally $[L]_\text{eq}^\text{(min)}=\text{max}(f(L_\mu=0))$ over all vesicles and $[L]_\text{eq}^\text{(max)}=\text{min}(f(L_\mu=L_t))$ over all vesicles. Finally, knowing $[L]^*$ allows us to fully reconstruct the final sizes of all vesicles at equilibrium by substituting $[L]_\text{eq}=[L]^*$ into (\ref{eq:function-f-inverse}) for each vesicle ${\cal V}_j$. In the equilibrated population, some vesicles will have grown larger in surface area at the expense of others, which will have shrunk. 
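The nested bisections just described, the inner one computing $f^{-1}$ via (\ref{eq:function-f-inverse}) and the outer one solving (\ref{eq:L-star}) for $[L]^*$, can be sketched as follows. This is a minimal Python illustration: the kinetic constants, the linear $\textbf{r}$ with $d=0.5$, and the lumped factor $N_A\Omega_e$ are made-up toy choices, not the calibrated values of Table \ref{table:parameters}:

```python
import math

# Toy nested-bisection solver for the competition equilibrium.
# All constants below are illustrative placeholders.
ALPHA_L, ALPHA_P = 0.3, 0.7   # head areas of L and P
K_OUT, K_IN = 1.0, 1e3
D = 0.5                       # strength of the direct effect in r(rho)
N_A_OMEGA_E = 1e6             # N_A * Omega_e: concentration -> monomer count

def f(L_mu, P_mu, Omega):
    """Monomer concentration keeping this vesicle at equilibrium (f)."""
    S_mu = 0.5 * (L_mu * ALPHA_L + P_mu * ALPHA_P)
    rho = P_mu / (P_mu + L_mu)
    Phi = S_mu / (36.0 * math.pi * Omega ** 2) ** (1.0 / 3.0)
    u = math.exp(min(1.0 / Phi - 1.0, 50.0)) if Phi < 1.0 else 1.0  # capped
    r = 1.0 - D * rho
    return (2.0 * K_OUT / K_IN) * L_mu / (L_mu * ALPHA_L + P_mu * ALPHA_P) * r / u

def bisect(g, lo, hi, iters=60):
    """Root of a monotone increasing g with g(lo) <= 0 <= g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def f_inv(L_eq, P_mu, Omega, L_t):
    """Membrane lipid count solving f(L_mu, ...) = L_eq (the inverse map)."""
    return bisect(lambda L_mu: f(L_mu, P_mu, Omega) - L_eq, 1e-9, L_t)

def equilibrium_monomer(vesicles, L_t):
    """[L]* closing the total lipid budget over all vesicles."""
    def excess(L_eq):
        membranes = sum(f_inv(L_eq, Pm, Om, L_t) for Pm, Om in vesicles)
        return membranes + L_eq * N_A_OMEGA_E - L_t
    lo = max(f(1e-9, Pm, Om) for Pm, Om in vesicles)
    hi = min(f(L_t, Pm, Om) for Pm, Om in vesicles)
    return bisect(excess, lo, hi)

# Example: two vesicles given as (P_mu, Omega) pairs; the phospholipid-rich
# one ends up with more membrane lipid once the shared pool equilibrates.
vesicles = [(200.0, 1e-3), (50.0, 1e-3)]
L_star = equilibrium_monomer(vesicles, 1e5)
```

At the returned $[L]^*$, substituting back through $f^{-1}$ recovers the final membrane sizes, and the lipid budget closes to numerical precision.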
When a population of vesicles has competed for lipid via phospholipid-driven competition, the `tipping point' is the critical number of membrane phospholipids $P_\mu^\text{crit}$ separating those vesicles which have lost lipid from those which have gained lipid, and is found by: \begin{equation} P_\mu^\text{crit} = P_\mu \;|\; f^{-1}([L]^*, P_\mu^j, \Omega_j) - \frac{2S_{\mu} - P_\mu\alpha_P}{\alpha_L} = 0 \label{eq:Pu-critical} \end{equation} where $S_{\mu} = S_{\mu}^0$, again solvable by binary searching, this time in the range $0 \leq P_\mu \leq 2S_{\mu}^0 / \alpha_P$, (from a pure lipid membrane to a pure phospholipid membrane). Expression (\ref{eq:Pu-critical}) amounts to asking how many phospholipids a hypothetical vesicle would require in order not to grow in surface area when the lipid monomer concentration has stabilised at $[L]^*$. Likewise, the number of phospholipids required to achieve any arbitrary surface area growth can be found by setting $S_{\mu}$ to the value desired. The critical phospholipid number can be stated more usefully as the critical phospholipid molecular fraction \begin{equation} \rho_0^\text{crit}=\frac{P_\mu^\text{crit}\alpha_L}{2S_{\mu}^0+P_\mu^\text{crit}(\alpha_L- \alpha_P)} \label{eq:rho0-critical} \end{equation} a vesicle has in the initial condition, a time when all vesicles have a surface of $S_{\mu}^0$. For osmotically-driven competition, the critical volume separating shrinking vesicles from growing vesicles is found by searching (\ref{eq:Pu-critical}) for vesicle volume instead: \begin{equation} \Omega^\text{crit} = \Omega \;|\; f^{-1}([L]^*, P_\mu, \Omega) - \frac{2S_{\mu} - P_\mu\alpha_P}{\alpha_L} = 0 \label{eq:Vv-critical} \end{equation} where $S_{\mu} = S_{\mu}^0$. 
This may be alternatively stated as the critical $\Phi$ in the initial condition: \begin{equation} \Phi_0^\text{crit}= \frac{S_{\mu}^0}{\sqrt[3]{36\pi (\Omega^\text{crit})^{2}}} \label{eq:phi0-critical} \end{equation} If no sign change results when evaluating the functions (\ref{eq:function-f-inverse})--(\ref{eq:Vv-critical}) at the upper and lower search bounds, then the respective equation cannot be solved by this numerical bisection approach. Otherwise, typically 30 iterations of binary search were used to converge to an accurate answer\footnote{Model vesicles described in this work contain either pure fatty acid membranes, or fatty acid mixed with one other type of phospholipid. Another avenue (not explored here) would be to vary the type of phospholipid from vesicle to vesicle. In this case, competition equilibrium can still be calculated by adding two more arguments to function $f$: the first would detail the head area for the phospholipid in vesicle ${\cal V}_j$; the second would describe how this phospholipid changes bilayer fluidity and alters the simple lipid $L$ release rate (e.g. parameter $d$ to function \textbf{r} could be supplied).}. \subsection{Modelling Vesicle Mixing} In order to interface our theoretical model with experimentally reported results, it is necessary to define an acceptable procedure for mixing two competing vesicle populations. The basic mixing procedure outlined in this section establishes the boundary conditions for competition, which are (i) the number of vesicles present, (ii) their respective compositions, (iii) the environment volume $\Omega_e$ and (iv) the total amount of lipid $L$ in the system ($L_t$). Below, to match specific experimental scenarios, simple lipids $L$ are considered to be OA, and phospholipids $P$ are considered to be DOPA.
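Before turning to the mixing procedure, note that the conversions (\ref{eq:rho0-critical}) and (\ref{eq:phi0-critical}) from critical numbers to initial-condition fractions are plain arithmetic once $P_\mu^\text{crit}$ or $\Omega^\text{crit}$ has been found by bisection. A minimal sketch (function names are ours; parameter values are those of Table \ref{table:parameters}):

```python
from math import pi

# Convert the critical numbers found by bisection into the critical
# initial-condition fractions of eqs. (rho0-critical) and (phi0-critical).
# Parameter values from Table (parameters).
ALPHA_L = 0.3     # OA lipid head area [nm^2]
ALPHA_P = 0.7     # DOPA lipid head area [nm^2]
S_MU_0 = 3.142e4  # surface area of a 100nm spherical vesicle [nm^2]

def rho0_critical(P_mu_crit):
    """Critical DOPA molecular fraction in the initial condition,
    eq. (rho0-critical)."""
    return P_mu_crit * ALPHA_L / (2 * S_MU_0 + P_mu_crit * (ALPHA_L - ALPHA_P))

def phi0_critical(omega_crit):
    """Critical reduced-surface state Phi in the initial condition,
    eq. (phi0-critical)."""
    return S_MU_0 / (36 * pi * omega_crit ** 2) ** (1 / 3)
```

As a sanity check, $P_\mu^\text{crit}=2S_\mu^0/\alpha_P$ (a pure phospholipid membrane) gives $\rho_0^\text{crit}=1$, and the volume of a spherical 100nm vesicle gives $\Phi_0^\text{crit}\approx 1$.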
\subsubsection*{Competition Volume} The equilibrium finding method outlined in the `Fast Computation of Competition Equilibrium' section above requires summing over a finite number of vesicles. Likewise, dynamic simulations of the model require a finite ODE set. However, vesicle populations in a real laboratory experiment will typically have millions of vesicles competing for lipid. In our modelling approach, it is therefore necessary to consider a small volume `patch' of each of the solutions being mixed. Each patch volume is large enough to contain sufficiently many vesicles to be \emph{representative} of the vesicle density in the solution it pertains to, but not so many vesicles that numerical solution becomes infeasibly slow. A patch volume $\Omega_p = \Omega_\text{stoi}$ litres (Table \ref{table:parameters}) was utilised for stoichiometric calculations using the equilibrium finding method, which translates into around 2000 vesicles being involved in 1:1 mixing. Full dynamic simulation of the model with the Gillespie Direct SSA algorithm forced a yet smaller patch volume $\Omega_{p} = \Omega_\text{dyn}$ litres to be used (to give $\tau$ jumps which were not too small), translating into around 40 vesicles being involved in 1:1 mixing. \subsubsection*{Mixing for Phospholipid-Driven Competition} In order to mix a fixed population of DOPA:OA vesicles which have molecular fraction $\rho$ of DOPA in their membranes, in ratio $R$ with a variable population of pure OA vesicles, we assume the following basic steps. Firstly, a suspension of OA lipid monomers in concentration $RC_{0}$ molar (assuming $RC_{0} \gg \text{CVC}$ for oleic acid) is extruded (possibly multiple times) through 100nm diameter pores. This leads to a more homogeneous population of 100nm diameter pure OA unilamellar vesicles. Each vesicle is assumed spherical ($\Phi=1$) with aqueous volume $\Omega^0$.
The molar concentration of OA vesicles in the extruded suspension is approximately \begin{equation} C_\text{ves}^\text{OA}=\frac{RC_{0}}{N^\text{OA}} \label{eq:cvesOA} \end{equation} where $N^\text{OA}$ is called the `aggregation number', equal to the total number of lipids forming a vesicle bilayer (in this case, just OA lipids). The lipid monomer concentration in the aqueous solution inside/outside the vesicles is $[L]_\text{eq}^\text{OA}$, the CVC value, maintaining them at equilibrium. Secondly, a mixed suspension containing both OA lipid monomers (in molar concentration $C_{0}$) and DOPA phospholipids (in molar concentration $gC_{0}$, where $g = \frac{\rho}{1-\rho}$) is extruded through 100nm diameter pores. This, similarly, leads to a population of 100nm diameter unilamellar DOPA:OA vesicles. Again each vesicle is assumed spherical ($\Phi=1$) with aqueous volume $\Omega^0$, but now part of the bilayer consists of DOPA phospholipid in molecular fraction $\rho$. The molar concentration of DOPA:OA vesicles in the extruded suspension is approximately \begin{equation} C_\text{ves}^\text{DOPA:OA}=\frac{C_{0}(1+g)}{N^\text{DOPA:OA}} \label{eq:cvesDOPAOA} \end{equation} where the aggregation number $N^\text{DOPA:OA}$ is now calculated as the sum of both the OA lipids and DOPA phospholipids making up each closed bilayer. In turn, the OA lipid monomer concentration inside/outside the vesicles is $[L]_\text{eq}^\text{DOPA:OA}$, the CVC value for the model DOPA:OA vesicles, maintaining them at equilibrium. Competition starts ($t=0$) when the extruded vesicle solutions above are mixed. We mix a volume $\Omega_p$ of each solution, creating a new mixed system of volume $\Omega_e = 2\Omega_p$, containing DOPA:OA vesicles in number $N_A \Omega_p C_\text{ves}^\text{DOPA:OA}$ and OA vesicles in number $N_A \Omega_p C_\text{ves}^\text{OA}$. The initial lipid monomer concentration in the environment becomes $\frac{1}{2}([L]_\text{eq}^\text{DOPA:OA}+[L]_\text{eq}^\text{OA})$. 
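The vesicle bookkeeping for this 1:1 mixing step can be sketched directly from (\ref{eq:cvesOA}) and (\ref{eq:cvesDOPAOA}), using the values of Table \ref{table:parameters}; the function name is ours. The sketch reproduces the earlier statement that roughly 2000 vesicles take part in 1:1 stoichiometric mixing with patch volume $\Omega_\text{stoi}$:

```python
# Bookkeeping for the 1:1 mixing step, eqs. (cvesOA) and (cvesDOPAOA),
# with parameter values from Table (parameters).
N_A = 6.022e23          # Avogadro's number [1/mol]
C_0 = 0.001             # mix concentration unit [M]
N_OA = 209439           # aggregation number, 100nm OA vesicle
N_DOPA_OA = 184799      # aggregation number, 100nm DOPA:OA vesicle
OMEGA_STOI = 3.478e-13  # stoichiometric patch volume [litres]

def vesicles_in_patch(R=1.0, rho=0.1):
    """Number of pure OA and DOPA:OA vesicles contributed by one patch
    of each extruded suspension when mixing in ratio R."""
    g = rho / (1 - rho)                # DOPA-to-OA monomer ratio
    c_oa = R * C_0 / N_OA              # eq. (cvesOA)
    c_mix = C_0 * (1 + g) / N_DOPA_OA  # eq. (cvesDOPAOA)
    return N_A * OMEGA_STOI * c_oa, N_A * OMEGA_STOI * c_mix

n_oa, n_mix = vesicles_in_patch()
# For R = 1 and rho = 0.1: about 1.0e3 pure OA vesicles and about
# 1.3e3 DOPA:OA vesicles, i.e. on the order of 2000 vesicles in total.
```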
Throughout mixing, and during competition, buffer concentration is constant at $[B]$ in all solutions, at a value high enough for vesicles to maintain approximately constant volume $\Omega^0$. Modelling the opposite scenario, namely a fixed population of pure OA vesicles mixed with a variable population of DOPA:OA vesicles, just requires switching the $R$ multiplier from (\ref{eq:cvesOA}) to (\ref{eq:cvesDOPAOA}). \subsubsection*{Mixing for Osmotically-Driven Competition} When a fixed population of isotonic OA vesicles is to be mixed in ratio $R$ with a variable population of swelled OA vesicles, again two extruded vesicle suspensions are prepared. The first is prepared in buffer at molar concentration $[B]$ and extruded through 100nm diameter pores, leading to unilamellar OA vesicles at $\Phi=1$ in molar concentration \begin{equation} C_\text{ves}^\text{isotonic}=\frac{C_{0}}{N^\text{OA}} \end{equation} The second suspension is prepared in a solution which contains an additional membrane impermeable (or slowly permeating) solute, such as sucrose, mixed with the buffer, increasing the overall molar concentration of osmotically active species to $[B]_0 \ge [B] + 0.7$. This suspension is made of unilamellar OA vesicles at $\Phi=1$ in molar concentration $RC_\text{ves}^\text{isotonic}$, and each vesicle encapsulates buffer at concentration $[B]_0$. The buffer concentration outside the vesicles in the second suspension is then reduced to $[B]$, making the external solution hypotonic with respect to the vesicle interiors. The vesicles swell to maximum size, and then transiently break, allowing for the escape of buffer molecules in excess. They later reseal with a residual buffer gradient of $[B]_\Delta^\text{max}=0.16$M across the membrane, corresponding to a maximum osmotic pressure of 4 atm \citep{chenSzostak2004}. 
In our model, each vesicle is therefore assumed to swell to volume $\Omega = \Omega^0(1 + (0.16/[B]))$, which remains constant for the duration of competition. The decrease of the environmental buffer concentration is considered to take place at the same instant as mixing with the initial isotonic population. This defines the initial condition ($t=0$) when competition starts. The mixed overall volume is $\Omega_e = 2\Omega_p$, where isotonic vesicles number $N_A \Omega_p C_\text{ves}^\text{isotonic}$, and the swelled vesicles number an $R$ multiple of this. The lipid monomer concentration in this new, larger environment is initially $[L]_\text{eq}^\text{OA}$. \subsubsection*{Vesicle Breakage} During competition, to a first approximation, we assume that all of the original vesicles remain intact, with none breaking apart through excessive osmotic stress. By using the Morse equation for osmotic pressure and data supplied in \citep[Supplementary Material]{chenSzostak2004}, we were able to calculate an approximate burst tolerance $\epsilon \approx 0.21$ for our model pure oleate vesicles, where these vesicles burst through excessive osmotic pressure when $\Phi < 1 - \epsilon$. Pure OA vesicles reached a minimum of $\Phi=0.77$ in our phospholipid-driven competition simulations, and a minimum of $\Phi=0.70$ in our osmotically-driven competition simulations reported in Fig. \ref{fig:experimental_comparison}. These values do not overly exceed the burst tolerance. \subsubsection*{Control Experiments: Mixing With Buffer} Mixing a vesicle population with a buffer solution is modelled as doubling the current system volume and diluting the initial vesicle density to one half.
In this case, we assume that the buffer solution contains no vesicles, but free lipid monomer at concentration just below the CVC of oleic acid\footnote{General note: The above procedures define a `concentration approach' to mixing, where two equal volumes are mixed, and the number of vesicles in the variable population is controlled by increasing or decreasing vesicle concentration. Another approach to mixing would be the `volume approach' whereby the variable population has a fixed vesicle concentration, but instead a variable volume which controls the number of vesicles present. Volume mixing was found to produce nearly equivalent outcomes, so only results following the concentration mixing procedure are here reported.}. \section{Results} \subsection*{Two Competing Populations: Comparison with Experimental Results} Figure \ref{fig:experimental_comparison} compares predictions made by our kinetic model against experimentally reported surface growth of vesicles in phospholipid-driven \cite{budinSzostak2011} and osmotically-driven \cite{chenSzostak2004} competition. \begin{figure*}[ht] \begin{center} \includegraphics[width=17.34cm]{experimental_comparison.pdf} \end{center} \caption{ {\bf Comparison between kinetic model predictions and experimental results.} Top plots show \emph{dynamics} of phospholipid-driven competition. \textbf{A} Surface growth of DOPA:OA vesicles over time (green lines) and \textbf{B} surface shrinkage of OA vesicles over time (blue lines), when a population of DOPA:OA ($\rho=0.1$) vesicles are mixed 1:1 with pure OA vesicles (following our mixing procedure with $\Omega_{p}=\Omega_\text{dyn}$, 25 DOPA:OA are mixed with 20 OA). Coloured dots in figure backgrounds reproduce original experimental results from \cite[Figs 1A, 1B therein]{budinSzostak2011} respectively. Middle plots show \emph{vesicle stoichiometry effects} in phospholipid-driven competition. 
\textbf{C} Continued surface growth of DOPA:OA population as more OA vesicles added and \textbf{D} plateau in surface shrinkage of OA vesicles as more DOPA:OA vesicles added. Black markers in figure backgrounds reproduce experimental results from \cite[Figs 1C, 1D therein]{budinSzostak2011} respectively. Bottom plots show osmotically-driven competition results. \textbf{E} Growth dynamics of swelled OA vesicles (blue line) and shrinkage of isotonic vesicles (red line) compared against experimental best-fit exponential decay curves (grey lines) from \cite[Figs 1D, 1B therein]{chenSzostak2004} respectively. \textbf{F} Stoichiometry effects in osmotically-driven competition: shrinkage of OA vesicle surface reaches a plateau as more swelled vesicles are added. Black markers in figure background reproduce experimental results from \cite[Fig. 2A therein]{chenSzostak2004}. See text for discussion.} \label{fig:experimental_comparison} \end{figure*} \begin{figure*}[!ht] \begin{center} \includegraphics[width=17.34cm]{tipping_points.pdf} \end{center} \caption{ {\bf Lipid competition tipping points.} \textbf{A} Phospholipid competition between 30 model phospholipid-laden vesicles, each with DOPA fraction randomly assigned over the uniform interval $0 < \rho_0 < 1$. Depending on its initial DOPA fraction, each vesicle starts at a point on the horizontal blue line, and grows (green arrows) or shrinks (red arrows) to a point on the black line. The form of the black line is \emph{specific} to this particular competing population, and is computed by (\ref{eq:L-star}). Orange crosses show agreement with equilibrated stochastic simulation of the model, validating the fast computation of competition equilibrium method. Competition `tipping point' is shown by blue circle: any vesicle with $\rho_0^\text{crit} > 0.573$ gains lipids from its competitors. 
\textbf{B} Phospholipid competition in four different populations of 30 model vesicles, with DOPA fraction randomly assigned over uniform intervals (i) $0 < \rho_0 < 0.25$, (ii) $0 < \rho_0 < 0.5$, (iii) $0.25 < \rho_0 < 0.75$ and (iv) $0.3 < \rho_0 < 1.0$, demonstrating the context-dependence of competition. \textbf{C} Osmotic competition between 30 model oleate vesicles each swelled by extra internal sucrose, randomly assigned over the uniform interval $0 \le [B]_\Delta < 0.16$ molar. Any vesicle starting at tension state $\Phi_0^\text{crit} > 0.802$ gains lipids from its competitors. See text for full discussion. } \label{fig:tipping_points} \end{figure*} Top figures \ref{fig:experimental_comparison}A and \ref{fig:experimental_comparison}B show the \emph{dynamics} of surface area change in phospholipid-driven competition. Figure \ref{fig:experimental_comparison}A details, in real time, the relative surface area of a tracked population of DOPA:OA ($\rho=0.1$) vesicles, when this population is mixed $1:1$ with (i) pure OA vesicles (green lines), (ii) DOPA:OA ($\rho=0.1$) vesicles (blue lines) or (iii) buffer (black lines). In Fig. \ref{fig:experimental_comparison}B, the tracked population is instead pure OA vesicles, which are mixed $1:1$ with the same three options outlined above. Stochastic simulation of our lipid kinetics model \cite{g1976} correctly predicts that when mixed $1:1$, DOPA:OA vesicles steal lipid and grow (green lines, \ref{fig:experimental_comparison}A) at the expense of the pure OA vesicles, which shrink (blue lines, \ref{fig:experimental_comparison}B). In this case, there is also fairly good quantitative agreement with the experimentally observed time courses, with the \emph{indirect effect} alone ($d=0$ lines) accounting for most of the surface area change in our model. 
For the other cases, the kinetic model correctly predicts approximately no surface area change (no competition) when similar populations are mixed, or when a population is mixed with buffer. Middle figures \ref{fig:experimental_comparison}C and \ref{fig:experimental_comparison}D show phospholipid-driven competition from a different angle: that of \emph{vesicle stoichiometry}. Stoichiometry explores the final equilibrium size of vesicles in a tracked population, when this population is mixed with a different population containing approximately $R$ times as many vesicles. In this approach, the trend of final equilibrium surface area versus mixing ratio is explored, rather than the dynamics on the way to equilibrium. Figure \ref{fig:experimental_comparison}C details the final surface area of a tracked population of DOPA:OA ($\rho=0.1$) vesicles, when this population is mixed $1:R$ with a population of pure OA vesicles. Figure \ref{fig:experimental_comparison}D details the opposite scenario, whereby the tracked population is OA vesicles, mixed $1:R$ with DOPA:OA vesicles. The $R=1$ cases in Figs \ref{fig:experimental_comparison}C and \ref{fig:experimental_comparison}D correspond to the surface sizes reached in the limit of time in Figs \ref{fig:experimental_comparison}A and \ref{fig:experimental_comparison}B respectively. Calculating competition equilibrium by means of the fast computation approach outlined in the Methods section, we were able to verify that our model exhibits continual growth of DOPA:OA ($\rho=0.1$) vesicles as more OA vesicles are added (Fig. \ref{fig:experimental_comparison}C). In the opposite scenario, we also verified that the model shows a plateau in the shrinkage of pure OA vesicles as more DOPA:OA ($\rho=0.1$) vesicles are added (Fig. \ref{fig:experimental_comparison}D).
In both cases, the \emph{indirect effect} ($d=0$ lines) alone drives the majority of the surface size change, with the \emph{direct effect} then `tuning' the fit to experimental outcomes. Importantly, the general outcome of phospholipid-driven competition in our model is for vesicles stealing lipid to grow in surface and finish at high $\Phi > 1$ values (excess surface, flaccid), and for vesicles losing lipid to suffer reduced surface, finishing at $\Phi < 1$ values (osmotically tense, spherical). This is observed experimentally, and indeed provides the basis for the conjecture that phospholipid-laden vesicles are more likely to divide spontaneously when gentle external shearing forces are applied \cite[p5250]{budinSzostak2011}. Moving to osmotically-driven competition, Fig. \ref{fig:experimental_comparison}E shows stochastic simulation of a swelled population of vesicles competing with an initially isotonic (non-swelled) population. Simulation outcomes match quite well the experimental best-fit time courses, in particular for the growth of the swelled vesicles (not so accurately for the shrinkage of the non-swelled vesicles). In any case, it must be noted that the original experimental data has considerable variance. Then, Fig. \ref{fig:experimental_comparison}F shows that the kinetics model qualitatively reproduces the stoichiometric observation whereby adding more swelled vesicles to a population of initially non-swelled vesicles will cause the shrinkage of the non-swelled vesicles to plateau, rather than to continue (note the logarithmic scale of Fig. \ref{fig:experimental_comparison}F). As with phospholipid-driven competition, in our model, the pure OA vesicles involved in osmotically-driven competition can finish at a variety of surface sizes. However, unlike phospholipid-driven competition, all OA vesicles will finish with the same $\Phi < 1$ value (equal osmotic stress). 
Here, surface changes do not translate to final differences in $\Phi$, partly because the vesicles start with different aqueous volumes. This residual osmotic swelling is also observed experimentally in vesicles stealing lipid through osmotically-driven competition. In fact, it stands as the main criticism of the osmotically-driven competition scenario: swelled vesicles have to overcome a stronger energetic barrier in order to divide, making this an improbable route to spontaneous vesicle division \cite{adamalaSzostak2013}. Our kinetic model can also be used to make predictions or to find competition `tipping points' in the more general scenario where completely heterogeneous populations of phospholipid-laden and/or osmotically swollen vesicles compete for lipid (Figs \ref{fig:tipping_points} and \ref{fig:competition3d}), even if some of these experiments have not been realised in the lab yet. \subsection*{Competition Tipping Points in Diverse Populations} Figure \ref{fig:tipping_points}A shows that within a population of phospholipid-laden vesicles, where each vesicle has a randomly assigned phospholipid fraction in the membrane between 0 and 100\%, the critical DOPA fraction needed for growth (tipping point), in this case, is just over 57\%. Figure \ref{fig:tipping_points}B compares different heterogeneous populations competing for phospholipid, and reveals an important observation: \emph{competition is always context dependent}. That is to say, a certain amount of membrane phospholipid does not guarantee a certain final surface area. Rather, final surface depends on the boundary conditions of the competition event (that is, the parameters influencing the solution of (\ref{eq:L-star})), which includes the number and composition of competitor vesicles present\footnote{The initial lipid $L$ content of each individual vesicle is not explicitly part of these boundary conditions. 
In fact, in our model, when total lipid $L_t$ is fixed, initial vesicle surface sizes have no effect on the final equilibrium of the system, only on the transient dynamics leading there.}. For example, population (i) in Fig. \ref{fig:tipping_points}B has vesicles with low DOPA fraction as compared to vesicles in population (iv), yet in some cases, the vesicles in the former population have larger final surface growth than vesicles in the latter. This concurs with the experimental observation that even small differences in phospholipid content can drive growth \cite[p5251]{budinSzostak2011}. The dotted black lines in Figs \ref{fig:tipping_points}A and \ref{fig:tipping_points}B are the same competition events run when the \emph{direct effect} is present, and maximally enabled ($d=1$). The direct effect makes the competition tipping point slightly lower, but no general statement can be made about the extent to which it affects vesicle growth, for this again depends on the specifics of the competition event. For example, the direct effect has marginal influence on vesicle growth trends in the population shown in Fig. \ref{fig:tipping_points}B (iii), but is more relevant in population (ii). Figure \ref{fig:tipping_points}C shows that in a heterogeneous population where pure OA model vesicles are swelled with residual buffer up to 0.16M, vesicles with low initial $\Phi$ values steal lipid from those with higher (less swelled) $\Phi$ values, with the tipping point between growing and shrinking at $\Phi_0^\text{crit}=0.8$. As a last remark, orange crosses marked on Figs \ref{fig:tipping_points}A and \ref{fig:tipping_points}C show that full stochastic simulations of the model (run all the way to equilibrium) agree with and thus validate the fast computation of competition equilibrium method. 
\subsection*{Theoretical Predictions Beyond Current Experimental Results} Finally, we were able to explore more widely some of the parameter space for phospholipid-driven and osmotically-driven competition, using our model to make some predictions. Figure \ref{fig:competition3d}A shows the stoichiometry results of phospholipid-driven competition in this wider context. A population of DOPA:OA ($\rho=0.1$) vesicles is mixed with a second population, but the phospholipid content of the second population, as well as the mixing ratio $R$, are varied. Taking a slice through the surface labelled `pop1' when $\rho^\text{pop2}_0 = 0$ shows the result reported in Fig. \ref{fig:experimental_comparison}C as the red line. Interestingly, this figure predicts that the absolute growth of the second population of vesicles will be maximal when their phospholipid fraction is around 50\%, and will decline again towards no overall growth as the phospholipid fraction approaches 100\%. Figure \ref{fig:competition3d}B explores the stoichiometry of osmotically-driven competition in a similar way to phospholipid-driven competition. A fixed population of swelled vesicles is mixed with a second population, and the trapped residual buffer inside vesicles in the second population, as well as the mixing ratio $R$, are varied. To conclude these predictions, Fig. \ref{fig:competition3d}C shows the effects of osmotically-driven versus phospholipid-driven competition, still a completely unreported scenario in the experimental literature, whereby a population of swelled pure oleate vesicles competes for lipid with a population of DOPA:OA vesicles.
\noindent \begin{table*} \begin{tabular}{|l|l|l|l|} \hline Parameter & Description & Value & Unit \\ \hline $S_\mu^0$ & \scriptsize {Surface area of 100nm spherical vesicle} & \scriptsize {$3.142\times10^{4}$} & {\scriptsize {[}$nm^2${]}}\\ $\Omega^0$ & \scriptsize {Volume of 100nm spherical vesicle} & \scriptsize {$5.236\times10^{5}$} & {\scriptsize {[}$nm^3${]}}\\ $[B]$ & \scriptsize {Buffer concentration} & \scriptsize {$0.2$} & {\scriptsize {[}$M${]}}\\ $k_\text{out}$ & \scriptsize {OA lipid release constant} & \scriptsize {$7.6\times10^{-2}$} & {\scriptsize {[}$s^{-1}${]}}\\ $k_\text{in}$ & \scriptsize {OA lipid uptake constant} & \scriptsize {$7.6\times10^{3}$} & {\scriptsize {[}$s{}^{-1}$$M^{-1}$$nm^{-2}${]}}\\ $\alpha_L$ & \scriptsize {OA lipid head area$^{(1)}$} & \scriptsize {0.3} & {\scriptsize {[}$nm^{2}${]}}\\ $\alpha_P$ & \scriptsize {DOPA lipid head area$^{(1)}$} & \scriptsize {0.7} & {\scriptsize {[}$nm^{2}${]}}\\ $\epsilon$ & \scriptsize {OA vesicle burst tolerance} & \scriptsize {$0.21$} & {}\\ $[L]_\text{eq}^\text{OA}$ & \scriptsize {100nm OA vesicle, OA monomer equilibrium concentration} & \scriptsize {$6.667\times10^{-5}$} & {\scriptsize {$[M]$}}\\ $[L]_\text{eq}^\text{DOPA:OA}$ & \scriptsize {100nm DOPA:OA vesicle, OA monomer equilibrium concentration} & \scriptsize {$5.294\times10^{-5}$} & {\scriptsize {$[M]$}}\\ $N^\text{OA}$ & \scriptsize {100nm OA vesicle aggregation number} & \scriptsize {209439} & {\scriptsize {total lipids}}\\ $N^\text{DOPA:OA}$ & \scriptsize {100nm DOPA:OA vesicle aggregation number} & \scriptsize {184799} & {\scriptsize {total lipids}}\\ $\Omega_\text{stoi}$ & \scriptsize {Competition volume unit for stoichiometric calculations} & \scriptsize {$3.478\times10^{-13}$} & {\scriptsize {[}$dm^{3}${]}}\\ $\Omega_\text{dyn}$ & \scriptsize {Competition volume unit for dynamics simulations} & \scriptsize {$6.956\times10^{-15}$} & {\scriptsize {[}$dm^{3}${]}}\\ $C_0$ & \scriptsize {Mix concentration unit} & \scriptsize {0.001} 
& {\scriptsize {[}$M${]}}\\ $[B]_\Delta^\text{max}$ & \scriptsize {Residual buffer concentration inside maximally swelled OA vesicles$^{(2)}$} & \scriptsize {$0.16$} & {\scriptsize {$[M]$}}\\ \hline \end{tabular} \caption{ {\bf Vesicle competition model parameters.}} \label{table:parameters} \end{table*} \begin{figure*}[ht] \begin{center} \includegraphics[width=17.34cm]{competition3d.pdf} \end{center} \caption{ {\bf Wider exploration of three different vesicle competition scenarios.} Relative surface growths of two vesicle populations is explored in a broader context for three different competition scenarios detailed by the key. \textbf{A} Phospholipid competition. Population 1, a fixed population of vesicles with initial DOPA phospholipid fraction $\rho_0^{pop1}=0.1$, is mixed $1:R$ with population 2, whose vesicles have initial DOPA fraction $\rho_0^{pop2}$. \textbf{B} Osmotic competition. Population 1, a fixed population of vesicles swelled by residual buffer $[B]_\Delta=0.08M$, is mixed $1:R$ with population 2, whose vesicles are swelled by residual buffer $[B]_\Delta^{pop2}$. \textbf{C} Phospholipid versus osmotically-driven competition. Vesicles with initial DOPA fraction $\rho_0^{pop1}$ are mixed $1:1$ with pure oleate vesicles swelled by residual buffer $[B]_\Delta^{pop2}$. In all cases, for DOPA laden vesicles, the direct effect is maximally enabled ($d=1$). Blue lines on plots highlight when the relative surface growth is 1.} \label{fig:competition3d} \end{figure*} \section{Discussion} In this work, we have presented a theoretical model of the transfer kinetics of lipid molecules between vesicles. We have shown that reasonable rate equations chosen for simple lipid uptake and release allow to reproduce fairly well data from controlled laboratory experiments on phospholipid-driven competition and osmotically-driven competition. Furthermore, we have been able to predict the outcome of several yet-to-be-performed experiments. 
Thus, it is time to recapitulate, considering possible limitations of our approach, clarifying several points that remain open, and giving a more general perspective on the problem addressed. The main assumption we made when modelling phospholipid-driven competition is that the membrane phospholipid fraction is approximately stationary with respect to the timescale of simple lipid transfer between the supramolecular structure (i.e., the membrane bilayer) and the aqueous solution (both inwards and outwards)\footnote{ Likewise, the assumption we make with osmotically-driven competition is that the residual buffer inside the vesicles permeates very slowly through the bilayer membrane.}. In reality, the off-rate of a lipid molecule from a bilayer is inversely proportional to the number of carbon atoms in the acyl chain of the lipid concerned, and phospholipids do have a small non-zero transfer rate (with a half time from hours to days \cite{budinSzostak2011}). If phospholipid transfer were included in our model, the equilibrium reached in the limit of time would always be that of a completely homogeneous population. This is because the $P$ phospholipid would redistribute amongst the vesicles until all were equilibrated with the same phospholipid monomer concentration in solution $[P]_\text{eq}$, which trivially occurs when all vesicles have the same number of membrane phospholipids $P_\mu$. With no remaining asymmetries in $P_\mu$ to drive competition, all vesicles would finish with the same number of simple lipids $L$. The appearance and then disappearance of competition would follow the same type of dynamics as those experimentally reported for nervonic acid \cite[Fig. 4D, therein]{budinSzostak2011}, which redistributes between vesicles (simulation results not shown).
However, if vesicles contained a metabolism which synthesised phospholipid, then lasting $P_\mu$ asymmetries between vesicles could be continually maintained as steady states, in spite of the exchanging $P$ fraction. The results of this study can be interpreted as the competition advantage bestowed upon a vesicle by a membrane phospholipid fraction, \emph{given that this fraction is somehow maintained as constant}. Not explicitly modelling phospholipid synthesis processes grants a simplified lipid scenario (i.e. a materially-closed system which subsequently settles to equilibrium) where some analysis can be carried out. The next point that deserves discussion is the role of the \emph{direct effect} in driving the phospholipid-driven competition simulations performed with our model. A first observation to make is that even when the direct effect is disabled ($d=0$), the remaining indirect effect can account for the majority of the vesicle surface growth observed experimentally in phospholipid-driven competition (blue lines, Figs \ref{fig:experimental_comparison}C and \ref{fig:experimental_comparison}D). Thus, whilst a direct effect could improve the fit to experimental results, we should conclude from our treatment of the problem and the results obtained that the indirect effect is the main mechanism driving vesicle growth dynamics. A second observation is that, as stated in the Results in reference to Fig. \ref{fig:tipping_points}B, the exact contribution of the direct effect depends on the specific details of the competition scenario. In the case of the latter figure, the lipid release multiplier function \textbf{r} linearly decreases the simple lipid off-rate with increasing phospholipid content, but this `context dependence' should also be true for different choices of function \textbf{r}. One curiosity in the results (both in vitro and in silico) is how DOPA:OA ($\rho=0.1$) vesicles grow continually as more OA vesicles are added (Fig.
\ref{fig:experimental_comparison}C). This is unintuitive, since the growth of the DOPA:OA vesicles should imply a dilution of their phospholipid content, which would seemingly reduce the indirect and direct effects, thus giving a negative feedback to eventually curb the DOPA:OA growth profile. The reason why our model reproduces this continuous growth result has to do with the mathematics underlying the kinetic modelling. In the limit of infinitely many $L_\mu$ lipids in the membranes of our model DOPA:OA vesicles, the inside/outside lipid concentration required to sustain them at equilibrium (given by function $f$, (\ref{eq:function-f})) tends to \emph{but crucially never actually reaches} the CVC concentration of pure oleic acid: \begin{equation} \lim_{L_{\mu}\rightarrow\infty} f = \frac{2k_\text{out}}{k_\text{in} \alpha_L} = [L]_\text{eq}^\text{OA} \label{eq:function-f-limit} \end{equation} This is true, even if a model DOPA:OA vesicle contains just one single phospholipid in the membrane. Now, as more OA vesicles are mixed with the DOPA:OA vesicles, the population becomes increasingly dominated by OA vesicles and the lipid monomer concentration in the environment subsequently rises toward $[L]_\text{eq}^\text{OA}$. As this happens, (\ref{eq:function-f-limit}) implies that the DOPA:OA vesicles will be absorbing more and more $L$ lipids, in order to grow to a size in equilibrium with the external lipid monomer concentration. The DOPA:OA growth is thus halted only by the number of lipids in the system being limited to $L_t$. In our kinetics model, this continuous growth happens with or without the direct effect present. The direct effect can drive larger growths when vesicles are small, but as surface area increases and the direct effect diminishes, it is the indirect effect which persists and continues to drive growth as more OA vesicles are added.
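The limit (\ref{eq:function-f-limit}) can be checked directly against the parameter values of Table \ref{table:parameters}: with $k_\text{out}=7.6\times10^{-2}$, $k_\text{in}=7.6\times10^{3}$ and $\alpha_L=0.3$, the quantity $2k_\text{out}/(k_\text{in}\alpha_L)$ evaluates to $6.667\times10^{-5}$ M, which is precisely the tabulated CVC $[L]_\text{eq}^\text{OA}$:

```python
# Numerical check of eq. (function-f-limit): as L_mu -> infinity, the
# equilibrium monomer concentration required by a DOPA:OA vesicle tends
# to 2*k_out/(k_in*alpha_L), the CVC of pure oleic acid.
# Parameter values from Table (parameters).
K_OUT = 7.6e-2      # OA lipid release constant [1/s]
K_IN = 7.6e3        # OA lipid uptake constant [1/(s M nm^2)]
ALPHA_L = 0.3       # OA lipid head area [nm^2]
L_EQ_OA = 6.667e-5  # tabulated OA monomer equilibrium concentration [M]

limit_f = 2 * K_OUT / (K_IN * ALPHA_L)
# limit_f = 0.152 / 2280 ≈ 6.667e-5 M, matching [L]_eq^OA.
```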
A final point worth highlighting is that when the lipid uptake function $\textbf{u}$ given in (\ref{eq:function-u}) is not conditional, as we assumed, but simply \begin{equation} \textbf{u}({\Phi})= \exp\left(\frac{1}{\Phi}-1\right) \label{eq:function-u-continuous-only} \end{equation} for all membrane states (meaning that even flaccid vesicles have differential rates of lipid uptake), then, quite interestingly, the continuous DOPA:OA growth effect cannot be reproduced. In this case, it can be shown that \begin{equation} \lim_{L_{\mu}\rightarrow\infty} f = \frac{2k_\text{out}}{k_\text{in} \alpha_L}\cdot \exp(1) > [L]_\text{eq}^{OA} \label{eq:function-f-limit2} \end{equation} meaning that the DOPA:OA vesicles do not show the same continued growth as the lipid monomer concentration in the environment rises toward $[L]_\text{eq}^{OA}$. Rather, the DOPA:OA vesicles grow much more slowly, and they even reach a finite stable size when the outside lipid monomer concentration is exactly $[L]_\text{eq}^{OA}$. Thus, to best reproduce experimental outcomes, a crucial part of our lipid uptake kinetics was to accelerate lipid uptake \emph{only} in osmotically stressed vesicle states, not in flaccid ones. Understanding in full detail the dynamics of these colloidal systems is certainly not an easy task. In any case, we consider this work just a step further in the development of semi-realistic, coarse-grained descriptions of phenomena that, in reality, are extremely complex. Self-assembly processes involving heterogeneous component mixtures and the formation of dynamic supramolecular structures that could hypothetically lead to biologically relevant forms of material organization, like protocells \cite{mouritsen2005,rasmussen2009}, constitute a tremendous challenge, indeed, both for experimental and theoretical `systems chemistry' research \cite{ruizMirazo2014} and for synthetic biology \cite{soleReproductionComputation2007,sole2009}. 
In particular, the connection between basic metabolic reaction networks and membrane dynamics (including stationary growth and division cycles \cite{mavelliRuizMirazo2013}) needs to be explored much more extensively, since it is one of the key aspects in establishing a plausible route from physics and chemistry towards biological phenomenology. \section*{Acknowledgments} R.S. and B.S.-E. acknowledge support from the Botin Foundation and from the Santa Fe Institute. K.R.-M. acknowledges support from the Basque Government (Grant IT 590-13), the Spanish Ministry of Science (MINECO Grant FFI2011-25665), and COST Action CM 1304 (Emergence and Evolution of Complex Chemical Systems). F.M. acknowledges support from MIUR (PRIN 2010/11 2010BJ23MN\_003). \bibliographystyle{apalike}
\section{Introduction} Let $S$ be a polynomial ring over a field $K$ and $I$ a squarefree monomial ideal of $S$. We denote by $G(I)$ the minimal set of monomial generators of $I$. The \textit{arithmetical rank} of $I$, denoted by $\ara I$, is defined as the minimal number $r$ of elements $a_1, \ldots, a_r \in S$ such that \begin{displaymath} \sqrt{(a_1, \ldots, a_r)} = \sqrt{I}. \end{displaymath} When the above equality holds, we say that \textit{$a_1, \ldots, a_r$ generate $I$ up to radical}. By definition, $\ara I \leq \mu (I)$ holds, where $\mu (I)$ denotes the cardinality of $G(I)$. On the other hand, Lyubeznik \cite{Ly83} proved that \begin{equation} \label{eq:ara>pd} \pd_S S/I \leq \ara I, \end{equation} where $\pd_S S/I$ denotes the projective dimension of $S/I$ over $S$. Since $\height I \leq \pd_S S/I$ always holds, we have \begin{displaymath} \height I \leq \pd_S S/I \leq \ara I \leq \mu (I). \end{displaymath} It is then natural to ask when $\ara I = \pd_S S/I$ holds. Many authors have investigated this problem; see \cite{Barile96, Barile08-1, Barile08-2, BKMY, BariTera08, BariTera09, Kim_h2CM, KTYdev1, KTYdev2, Ku, Mo, SchmVo}. In particular, in \cite{KTYdev1, KTYdev2} it was proved that the equality $\ara I = \pd_S S/I$ holds in the two cases $\mu (I)-\height I \le 2$ and $\arithdeg I-\indeg I \le 1$. Here $\arithdeg I$ denotes the \textit{arithmetic degree} of $I$, which is equal to the number of minimal primes of $I$, and $\indeg I$ denotes the \textit{initial degree} of $I$; see Section 2. As a result we know that $\ara I = \pd_S S/I$ for squarefree monomial ideals $I$ with $\mu (I) \le 4 $ or with $\arithdeg I \le 3 $. \par In this paper we therefore concentrate our attention on the following two cases: $\mu (I) = 5 $ and $\arithdeg I = 4$. The main result of this paper is as follows: \begin{theorem} \label{claim:MainResult} Let $I$ be a squarefree monomial ideal of $S$. 
Suppose that $I$ satisfies one of the following conditions$:$ \begin{enumerate} \item $\mu (I) \leq 5$. \item $\arithdeg I \leq 4$. \end{enumerate} Then \begin{displaymath} \ara I = \pd_S S/I. \end{displaymath} \end{theorem} Note that there exists an ideal $I$ with $\mu (I) = 6$ such that $\ara I > \pd_S S/I$ when $\chara K \neq 2$; see \cite[Section 6]{KTYdev2}. \par After we recall some definitions and properties of Stanley--Reisner ideals and hypergraphs in Sections 2 and 3, we give a combinatorial characterization, using hypergraphs, of squarefree monomial ideals $I$ with $\pd_S S/I =\mu (I)-1$ in Section 4. This is necessary because in Section 6 we use the fact that the projective dimension of $S/I$ is characteristic-free for a squarefree monomial ideal $I$ with $\mu(I)=5$. \par In the case $\mu (I) \le 4 $, all squarefree monomial ideals were classified using hypergraphs in \cite{KTYdev1, KTYdev2}. But in the case $\mu (I) =5$, $\height I=2$ and $\pd S/I=3$, which is the essentially difficult part of our problem, a similar classification is practically impossible because of the huge number of hypergraphs involved. According to a computer search, there are about $2.3 \cdot 10^6$ hypergraphs corresponding to such ideals. We therefore need a reduction, and we focus on the set of the most ``general'' members among them, which we call a {\it generic set}. In Section 5 we give a formal definition of a generic set and prove that it is enough to show $\ara I = \pd_S S/I$ for each member $I$ of the generic set. But we cannot obtain the generic set in our case without a computer. In Section 6, we give an algorithm to find a generic set for the ``connected'' squarefree monomial ideals $I$ with $\mu (I) =5$, $\height I=2$ and $\pd S/I=3$ using {\it CoCoA} and {\it Nauty}. As a result of this computation, we found that the generic set consists of just three ideals. In Section 7, we prove $\ara I = \pd_S S/I$ when $\mu (I) =5$ by showing the same equality for all three members of the generic set. 
\par Finally, in Section 8, we focus on the squarefree monomial ideals $I$ with $\arithdeg I \leq 4$. Here we use another reduction. In terms of simplicial complexes, as shown in \cite{BariTera08, Kim_h2CM}, we may remove a face with a free vertex from a simplicial complex $\Delta$ when we consider the problem of whether $\ara I_{\Delta} = \pd_S S/I_{\Delta}$ holds. We translate this into the language of hypergraphs, and after such a reduction we show $\ara I = \pd_S S/I$ for the remaining ideals $I$ with $\arithdeg I \leq 4$. \section{Preliminaries} In this section, we recall some definitions and properties which are needed to prove Theorem \ref{claim:MainResult}. \par Let $M$ be a Noetherian graded $S$-module and \begin{displaymath} F_{\bullet} : 0 \longrightarrow \bigoplus_{j \geq 0} S(-j)^{{\beta}_{p, j}} \longrightarrow \cdots \longrightarrow \bigoplus_{j \geq 0} S(-j)^{{\beta}_{0, j}} \longrightarrow M \longrightarrow 0 \end{displaymath} a graded minimal free resolution of $M$ over $S$, where $S(-j)$ is a graded free $S$-module whose $k$th piece is given by $S_{k-j}$. Then ${\beta}_{i, j} = {\beta}_{i, j} (M)$ is called a \textit{graded Betti number} of $M$ and ${\beta}_i = \sum_{j} {\beta}_{i, j}$ is called the $i$th \textit{$($total$)$ Betti number} of $M$. The projective dimension of $M$ over $S$ is defined to be $p$ and is denoted by $\pd_S M$ or by $\pd M$. The \textit{initial degree} of $M$ and the \textit{regularity} of $M$ are defined by \begin{displaymath} \indeg M = \min \{ j \; : \; {\beta}_{0, j} (M) \neq 0 \}, \quad \reg M = \max \{ j-i \; : \; {\beta}_{i, j} (M) \neq 0 \}, \end{displaymath} respectively. \par Next, we recall some definitions and properties of Stanley--Reisner ideals, especially Alexander duality. \par Let $V =[n]:= \{ 1, 2, \ldots, n \}$. A \textit{simplicial complex} $\Delta$ on the vertex set $V$ is a collection of subsets of $V$ satisfying the conditions (a) $\{ v \} \in \Delta$ for all $v \in V$; (b) $F \in \Delta$ and $G \subset F$ imply $G \in \Delta$. 
An element of $V$ is called a \textit{vertex} of $\Delta$ and an element of $\Delta$ is called a \textit{face}. A maximal face of $\Delta$ is called a \textit{facet} of $\Delta$. Let $F$ be a face of $\Delta$. The \textit{dimension} of $F$, denoted by $\dim F$, is defined by $|F|-1$, where $|F|$ denotes the cardinality of $F$. If $\dim F = i$, then $F$ is called an $i$-face. The \textit{dimension} of $\Delta$ is defined by $\dim \Delta := \max \{ \dim F : F \in \Delta \}$. A simplicial complex which consists of all subsets of its vertex set is called a \textit{simplex}. Let $u$ be a new vertex and $F \subset V$. The \textit{cone from $u$ over $F$} is a simplex on the vertex set $F \cup \{u \}$; see \cite[Definition 1, p.\ 3687]{BariTera08}. We denote it by $\cone_u F$. Then the union $\Delta \cup \cone_u F$ is a simplicial complex on the vertex set $V \cup \{ u \}$. \par The \textit{Alexander dual complex} ${\Delta}^{\ast}$ of $\Delta$ is defined by ${\Delta}^{\ast} = \{ F \subset V : V \setminus F \notin \Delta \}$. If $\dim \Delta < n-2$, then ${\Delta}^{\ast}$ is also a simplicial complex on the same vertex set $V$. \par For a simplicial complex $\Delta$ on the vertex set $V = [n]$, we can associate a squarefree monomial ideal $I_{\Delta}$ of $S = K[x_1, \ldots, x_n]$ which is generated by all products $x_{i_1} \cdots x_{i_s}$, $1 \leq i_1 < \cdots < i_s \leq n$ with $\{ i_1, \ldots, i_s \} \notin \Delta$. The ideal $I_{\Delta}$ is called the \textit{Stanley--Reisner ideal} of $\Delta$. It is well known that the minimal prime decomposition of $I_{\Delta}$ is given by \begin{displaymath} I_{\Delta} = \bigcap_{F \in \Delta : \text{facet}} P_{F}, \end{displaymath} where $P_G = (x_i : i \in V \setminus G)$ for $G \subset V$. On the other hand, for a squarefree monomial ideal $I \subset S$ with $\indeg I \geq 2$, there exists a simplicial complex $\Delta$ on the vertex set $V = [n]$ such that $I = I_{\Delta}$. 
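As a small computational illustration of the Stanley--Reisner correspondence just described, the minimal generators of $I_\Delta$ are exactly the minimal non-faces of $\Delta$, which can be computed directly from the facets. The following sketch (in Python; the boundary-of-a-triangle complex is a toy example chosen here, not one from the paper) assumes $\Delta$ is given by its list of facets on $V=[n]$:

```python
from itertools import combinations

def faces(facets):
    # All faces of Delta: every subset of every facet (including the empty face).
    fs = set()
    for F in facets:
        for r in range(len(F) + 1):
            fs.update(frozenset(c) for c in combinations(sorted(F), r))
    return fs

def minimal_nonfaces(n, facets):
    # Non-faces of Delta on V=[n], kept only if they contain no smaller non-face;
    # these index the minimal monomial generators of I_Delta.
    fs = faces(facets)
    nonfaces = [frozenset(c) for r in range(1, n + 1)
                for c in combinations(range(1, n + 1), r)
                if frozenset(c) not in fs]
    return [N for N in nonfaces if not any(M < N for M in nonfaces)]

# Boundary of a triangle: facets {1,2},{1,3},{2,3} on V=[3].  The only
# minimal non-face is {1,2,3}, so I_Delta = (x1*x2*x3) -- matching the
# prime decomposition (x3) ∩ (x2) ∩ (x1).
print(minimal_nonfaces(3, [{1, 2}, {1, 3}, {2, 3}]))  # -> [frozenset({1, 2, 3})]
```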
If $\dim \Delta < n-2$, i.e., $\height I \geq 2$, then we can consider the squarefree monomial ideal $I^{\ast} := I_{{\Delta}^{\ast}}$, which is called the \textit{Alexander dual ideal} of $I = I_{\Delta}$. Then \begin{displaymath} I^{\ast} = I_{{\Delta}^{\ast}} = (x^{V \setminus F} \; : \; \text{$F \in \Delta$ is a facet}), \end{displaymath} where $x^G = \prod_{i \in G} x_i$ for $G \subset V$. It is easy to see that $I^{\ast \ast} = I$, $\indeg I^{\ast} = \height I$, and $\arithdeg I^{\ast} = \mu (I)$ hold. Moreover, the equality $\reg I^{\ast} = \pd_S S/I$ also holds; see \cite[Corollary 1.6]{Terai}. \section{Hypergraphs} For this section, we refer to Kimura, Terai and Yoshida \cite{KTYdev1}, \cite{KTYdev2} for more detailed information. \par Let $V = [\mu]$. A \textit{hypergraph} $\calH$ on the vertex set $V$ is a collection of subsets of $V$ with $\bigcup_{F \in \calH} F = V$. The definitions and notations of the vertex, face, and dimension are the same as those for a simplicial complex. We set $B({\calH}) = \{ v \in V : \{ v \} \in \calH \}$ and $W(\calH)=V \setminus B(\calH)$. For a hypergraph $\calH$ on a vertex set $V$, we define the {\it $i$-subhypergraph} of $\calH$ by $\calH^{i }=\{ F \in \calH: \ \dim F= i\}$. We sometimes identify $B(\calH)$ with $\calH^{0}$. For $U \subset V(\calH)$, we define the restriction of a hypergraph $\calH$ to $U$ by $\calH_{U}=\{ F \in \calH: \ F \subset U \}$. \par A hypergraph $\calH$ on the vertex set $V$ is called \textit{disconnected} if there exist hypergraphs ${\calH}_1, {\calH}_2 \subsetneq \calH$ on vertex sets $V_1, V_2 \subsetneq V$, respectively such that ${\calH}_1 \cup {\calH}_2 = \calH$, $V_1 \cup V_2 = V$, and $V_1 \cap V_2 = \emptyset$. A hypergraph which is not disconnected is called \textit{connected}. \par Let $I$ be a squarefree monomial ideal of $S=K[x_1, \ldots, x_n]$ with $G(I) = \{ m_1, \ldots, m_{\mu} \}$. 
We associate a hypergraph $\calH (I)$ on the vertex set $V = [\mu]$ with $I$ by setting \begin{displaymath} \calH (I) := \big\{ \{ j \in V \; : \; \text{$m_j$ is divisible by $x_i$} \} \; : \; i=1, 2, \ldots, n \big\}. \end{displaymath} \begin{definition} \label{defn:def_var} Let $F$ be a face of $\calH (I)$. Then there exists a variable $x$ of $S$ such that \begin{equation} \label{eq:def_var} F = \{ j \in V \; : \; \text{$m_j$ is divisible by $x$} \}. \end{equation} We call a variable $x$ of $S$ with condition (\ref{eq:def_var}) a \textit{defining variable} of $F$ (in $\calH (I)$). \par Conversely, we say that \textit{a variable $x$ of $S$ defines a face $F$} of $\calH (I)$ if $x$ is a defining variable of $F$. \end{definition} Note that the choice of a defining variable is not necessarily unique. Since the minimal generators of a concrete squarefree monomial ideal $I$ do not have indices, we usually regard $\calH (I)$ as a hypergraph with unlabeled vertices. If $\calH (I)$ can be regarded as a subhypergraph of $\calH (J)$ as unlabeled hypergraphs for two squarefree monomial ideals $I$ and $J$, we write $\calH (I) \subset \calH (J)$ by abuse of notation. \par On the other hand, we can construct a squarefree monomial ideal from a given hypergraph $\calH$ on the vertex set $V = [\mu]$ if $\calH$ satisfies the following \textit{separability condition}: \begin{displaymath} \begin{aligned} &\text{For any two vertices $i, j \in V$,} \\ &\text{there exist faces $F, G \in \calH$ such that $i \in F \setminus G$ and $j \in G \setminus F$. } \end{aligned} \end{displaymath} The construction is as follows: first, we assign a squarefree monomial $A_F$ to each face $F \in \calH$ such that $A_F$ and $A_G$ are coprime whenever $F \neq G$. Then we set \begin{displaymath} I = (\prod_{\genfrac{}{}{0pt}{}{F \in \calH}{j \in F}} A_F \; : \; j = 1, 2, \ldots, \mu), \end{displaymath} which is a squarefree monomial ideal with $\calH (I) = \calH$ by virtue of the separability. 
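The passage from $G(I)$ to $\calH(I)$, and the separability condition, can be sketched directly (a toy Python illustration; monomials are encoded as sets of variable names, and the ideal $(xy, yz, zx)$ is a hypothetical example, not one from the paper):

```python
def hypergraph(gens):
    # H(I): each variable x defines the face { j : x divides m_j },
    # with generators indexed 1..mu.
    variables = set().union(*gens)
    return {frozenset(j for j, m in enumerate(gens, 1) if x in m)
            for x in variables}

def separable(H, mu):
    # Separability: any two vertices i != j are separated by faces
    # F, G with i in F \ G and j in G \ F.
    return all(any(i in F and j not in F for F in H) and
               any(j in G and i not in G for G in H)
               for i in range(1, mu + 1)
               for j in range(1, mu + 1) if i != j)

# I = (xy, yz, zx): each variable defines a 1-face, so H(I) is a triangle.
gens = [{'x', 'y'}, {'y', 'z'}, {'z', 'x'}]
H = hypergraph(gens)
print(sorted(sorted(F) for F in H))  # -> [[1, 2], [1, 3], [2, 3]]
print(separable(H, 3))               # -> True
```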
When we assign a variable $x_F$ for each $F \in \calH$, we write the corresponding ideal as $I_{\calH}$ in $K[x_F : F \in \calH]$. \par For later use we prove the following proposition: \begin{proposition} \label{claim:hgraph} Let $I, I'$ be squarefree monomial ideals of polynomial rings $S, S'$, respectively. Suppose that $\mu (I)=\mu (I')$ and $\calH (I) \subset \calH (I')$. Then we have $\ara I \leq \ara I'$. \end{proposition} \begin{proof} Set $\calH = \calH (I)$, $\calH' = \calH (I')$. We may assume that $I=I_{\calH }$ and $I'=I_{\calH '}$ with ${\calH \subset \calH '}$. Set $G(I')=\{m'_1, \dots, m'_{\mu}\}$. For $i=1,2, \dots ,\mu$, let $m_i$ be the monomial obtained from $m'_i$ by substituting $1$ for $x_F$ for every $F \in \calH ' \setminus \calH $. We may assume that $G(I)=\{m_1, \dots, m_{\mu}\}$. Assume that $q_1', \ldots, q_r'$ generate $I'$ up to radical. For $i=1,2, \dots ,r$, let $q_i$ be the polynomial obtained from $q'_i$ by substituting $1$ for $x_F$ for every $F \in \calH ' \setminus \calH $. We show that $q_1, \ldots, q_r$ generate $I$ up to radical. Since $q'_i \in I'$ for $i=1,2, \dots ,r$, we have $q_i \in I$. On the other hand, since $m'_i$ lies in $\sqrt{(q_1', \ldots, q_r')}$, we have ${m'_i}^p \in ( q_1', \ldots, q_r')$ for some $p \ge 1$, and substituting yields $m_i^p \in ( q_1, \ldots, q_r)$. Hence $q_1, \ldots, q_r$ generate $I$ up to radical. \end{proof} \section{Squarefree monomial ideals whose projective dimension is close to the number of generators} Let $S= K[x_{1},x_{2},\ldots,x_{n}]$ be the polynomial ring in $n$ variables over a field $K$. We fix a squarefree monomial ideal $I= (m_{1}, m_{2}, \dots , m_{\mu})$, where $G(I)=\{m_{1}, m_{2}, \dots , m_{\mu} \}$ is the minimal generating set of monomials for $I$. For a squarefree monomial ideal $I$, the Taylor resolution of $S/I$ gives $\pd_S S/I \le \mu (I)$. In this section we give a combinatorial characterization, using hypergraphs, of squarefree monomial ideals $I$ with $\pd_S S/I =\mu (I)-1$. 
\par First we consider the condition $\pd_S S/I = \mu (I)$. The following proposition is easy and well known. \begin{proposition} \label{claim:mu} The following conditions are equivalent for a squarefree monomial ideal $I:$ \begin{enumerate} \item[(1)] $\pd_S S/I = \mu (I)$. \item[(2)] For the hypergraph $\calH:=\calH(I)$ we have $B(\calH) =V(\calH)$. \end{enumerate} \end{proposition} \par By Lyubeznik \cite{Ly83} we have $\pd_S S/I = \mbox{\rm cd} I:= \max \{ i: H^{i}_{I}(S) \ne 0 \}$. Assuming that $\pd_S S/I \le \mu (I)-1$, we have $\pd_S S/I =\mu (I)-1$ if and only if $H^{\mu -1}_{I}(S) \ne 0$. We give a combinatorial interpretation of the condition $H^{\mu -1}_{I}(S) \ne 0$. \par Consider the following {\v C}ech complex: \begin{eqnarray*} \lefteqn{ C^{\bullet} = \bigotimes_{i=1}^{\mu} (0 \longrightarrow S \longrightarrow S_{m_{i}} \longrightarrow 0 ) }\\ & & = 0 \longrightarrow S \stackrel{\delta^{1}}{\longrightarrow} \bigoplus_{1 \le i \le \mu } S_{m_{i}} \stackrel{\delta^{2}}{\longrightarrow} \bigoplus_{1 \le i < j \le \mu } S_{m_{i}m_{j}} \stackrel{\delta^{3}}{\longrightarrow} \cdots \stackrel{\delta^{\mu}}{\longrightarrow} S_{m_{1}m_{2}\cdots m_{\mu}} \longrightarrow 0. \end{eqnarray*} We describe $\delta^{r+1}$ as follows. Put $R:= S_{m_{i_{1}}m_{i_{2}}\cdots m_{i_{r}}}$ and $\{j_{1},j_{2}, \dots , j_{s}\}= \{ 1,2, \dots ,\mu \}\setminus \{i_{1},i_{2}, \dots , i_{r}\}$, where $j_{1}< j_{2} < \dots < j_{s}$ and $r+s=\mu$. Let $\psi _{j_p}: R \longrightarrow R_{m_{j_p}}$ be the natural map. For $u \in R$, we have $$ \delta^{r+1}(u)= \sum _{p=1}^{s}(-1)^{\mid \{q : i_{q}< j_{p}\}\mid } \psi _{j_p}(u) =\sum _{p=1}^{s}(-1)^{j_{p}-p}\psi _{j_p}(u). $$ \par For $F \subset [n]$, we define $x^{F} :=\prod_{i \in F} x_{i}$. We define a simplicial complex $\Delta(F)$ by \begin{displaymath} \Delta(F)= \big\{ \{ i_{1},i_{2}, \dots , i_{r}\} \subset [\mu] \; : \ \ x^{F}\mid \prod _{j \in \{ 1,2, \dots ,\mu \}\setminus \{i_{1},i_{2}, \dots , i_{r}\}}m_{j} \big\}. 
\end{displaymath} \par For $\alpha \in {\bf Z}^{n}$, there is a unique decomposition $\alpha=\alpha _{+} - \alpha_{-}$ such that $\alpha_{+}, \alpha_{-} \in {\bf N}^{n}$ and $\mbox{\rm supp }\alpha_{+} \cap \mbox{\rm supp }\alpha_{-} = \emptyset$. Then we have $\mbox{\rm supp }\alpha_{-}=\{ i : \alpha_{i}<0 \}.$ \begin{lemma}[{\cite[Lemma 2.5]{St1}}] For $\alpha \in {\bf Z}^{n}$, give $ \Delta(\mbox{\rm supp }\alpha_{-})$ the orientation induced by $1 <2 < \cdots < \mu $. Then we have the following isomorphism of complexes$:$ $C^{\bullet}_{\alpha} \cong \tilde{C}_{\bullet} (\Delta(\mbox{\rm supp }\alpha_{-}))$ such that $C^{r}_{\alpha} \cong \tilde{C}_{\mu-r-1} (\Delta(\mbox{\rm supp }\alpha_{-}))$. \end{lemma} \par Using the above lemma we obtain the following theorem: \begin{theorem} \label{claim:mu-1} The following conditions are equivalent for a squarefree monomial ideal $I:$ \begin{enumerate} \item[(1)] $\pd_S S/I = \mu (I)-1$. \item[(2)] The hypergraph $\calH:=\calH(I)$ satisfies $B(\calH) \neq V(\calH)$ and one of the following conditions$:$ \begin{enumerate} \item[(i)] The graph $(W(\calH), \calH_{W(\calH)}^{1})$ contains a complete bipartite graph as a spanning subgraph. \item[(ii)] There exists $i \in B(\calH)$ such that $\{i, j\} \in \calH$ for all $j \in W(\calH)$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} We may assume $B(\calH) \ne V(\calH)$. By the above lemma we have the following isomorphisms: \begin{eqnarray*} H^{\mu -1}_{I}(S)_{\alpha} & = & H^{\mu -1}(C^{\bullet})_{\alpha} \\ & \cong & \tilde {H}_{0}(\Delta(\mbox{\rm supp }\alpha_{-}); K)\\ & \cong & \tilde {H}_{0}(\Delta(\mbox{\rm supp }\alpha_{-})^{(1)}; K ), \end{eqnarray*} where $\Delta(\mbox{\rm supp }\alpha_{-})^{(1)} :=\{ F \in \Delta(\mbox{\rm supp }\alpha_{-}): \ \dim F \le 1 \}$ is the 1-skeleton of $\Delta(\mbox{\rm supp }\alpha_{-})$. Hence $H^{\mu -1}_{I}(S)_{\alpha} = 0$ if and only if $\Delta(\mbox{\rm supp }\alpha_{-})^{(1)}$ is connected. 
\par We claim that $\Delta(\mbox{\rm supp }\alpha_{-})^{(1)}$ is connected for all $\alpha \in {\bf Z}^{n}$ if and only if the graph $(U, {U \choose 2} \setminus \calH^{1}_{U})$ is connected for all $W(\calH) \subset U \subset V(\calH)$, where ${U \choose 2} :=\{ \{i,j \} \subset U: \ i \ne j \}$. Let $U$ be the vertex set of $\Delta(\mbox{\rm supp }\alpha_{-})^{(1)}$ for $\alpha \in {\bf Z}^{n}$. Then $U \supset W(\calH)$ and ${U \choose 2} \setminus \calH^{1}_{U} \subset \Delta(\mbox{\rm supp }\alpha_{-})^{(1)}$. Hence if $(U, {U \choose 2} \setminus \calH^{1}_{U})$ is connected, then so is $\Delta(\mbox{\rm supp }\alpha_{-})^{(1)}$. On the other hand, fix $U$ such that $W(\calH) \subset U \subset V(\calH)$. Put $U=W(\calH) \cup B'$, where $B' \subset B(\calH)$. By a suitable change of variables, we may assume that $B'=\{1,2,\dots ,p \}$. For $1 \le j \le p$, set $\{x_{i}: \, x_{i}| m_{j}, x_{i} \nmid m_{\ell} \mbox{ for } 1 \le \ell \le \mu \mbox{ with } \ell \ne j\} =\{x_{i_{j1}}, \dots , x_{i_{js_{j}}}\}$. Take $\alpha \in {\bf Z}^{n}$ such that $\mbox{\rm supp }\alpha_{-}=[n]\setminus \{i_{11}, \dots , i_{1s_{1}}, i_{21}, \dots , i_{2s_{2}}, \dots , i_{p1}, \dots , i_{ps_{p}} \}$. Then ${U \choose 2} \setminus \calH^{1}_{U}$ is equal to the set of all two-element faces of $\Delta(\mbox{\rm supp }\alpha_{-})$. Hence the claim follows. \par The graph $(U, {U \choose 2} \setminus \calH^{1}_{U})$ is connected for all $U$ such that $W(\calH) \subset U \subset V(\calH)$ if and only if the following conditions (I) and (II) are satisfied: \begin{enumerate} \item[(I)] The graph $(W(\calH), {W(\calH) \choose 2} \setminus \calH^{1}_{W(\calH) })$ is connected. \item[(II)] For $i \in B(\calH)$ set $U_{i}=W(\calH) \cup \{ i \}$. The graph $(U_{i}, {U_{i} \choose 2} \setminus \calH^{1}_{U_{i}})$ is connected for all $i \in B(\calH)$. 
\end{enumerate} Hence the condition $\pd_S S/I = \mu (I)-1$ holds if and only if one of the following conditions (i)' or (ii)' is satisfied: \begin{enumerate} \item[(i)'] The graph $(W(\calH), {W(\calH) \choose 2} \setminus \calH^{1}_{W(\calH) })$ is disconnected. \item[(ii)'] The graph $(U_{i}, {U_{i} \choose 2} \setminus \calH^{1}_{U_{i}})$ is disconnected for some $i \in B(\calH)$. \end{enumerate} The condition (i)' is equivalent to the condition (i). Under the assumption that the condition (i) (or equivalently (i)') does not hold, the conditions (ii) and (ii)' are equivalent. Hence we are done. \end{proof} \begin{corollary} \label{claim:char-free} The condition $\pd_S S/I = \mu (I)-1$ is independent of the base field $K$ for a monomial ideal $I$. \end{corollary} \section{Generic set} In general, we cannot bound the number of variables when we classify squarefree monomial ideals with a certain given property. In that case, it is convenient to consider finitely generated squarefree monomial ideals in a polynomial ring in infinitely many variables. \par Let $K$ be a field and $S_{\infty} = K[x_1, x_2, \ldots]$ a polynomial ring in countably many variables over $K$. Let $\mathcal{I}$ be the set of all finitely generated squarefree monomial ideals of $S_{\infty}$. For $I \in \mathcal{I}$, we denote by $X(I)$ the set of all variables which appear in one of the minimal monomial generators of $I$. \begin{definition} \label{defn:GenSet} Let $\mathcal{C}$ be a property on $\mathcal{I}$; that is, the subset \begin{displaymath} \mathcal{I}(\mathcal{C}) := \{ I \in \mathcal{I}: I \mbox{ satisfies the property } \mathcal{C}\} \end{displaymath} is uniquely determined. A subset $\mathcal{A} \subset \mathcal{I}$ is called a \textit{generic set} on $\mathcal{C}$ if the following two conditions are satisfied: \begin{enumerate} \item $\mathcal{A} \subset \mathcal{I} (\mathcal{C})$. 
\item For any $J \in \mathcal{I} (\mathcal{C})$ with $G(J) = \{ m_1, \ldots, m_{\mu} \}$, there exist $I \in \mathcal{A}$ with $X(I) = \{ x_1, \ldots, x_n \}$ and $G(I) = \{ m_1', \ldots, m_{\mu}' \}$ where $m_{i}' = x_{t_{i1}} \cdots x_{t_{i j_i}}$, and (possibly trivial) squarefree monomials $M_1, \ldots, M_n$ on $X(J)$ that are pairwise coprime such that \begin{displaymath} M_{t_{i1}} \cdots M_{t_{i j_i}} = m_i, \qquad i = 1, 2, \ldots, \mu. \end{displaymath} \end{enumerate} \par We say that a generic set $\mathcal{A}$ on $\mathcal{C}$ is \textit{minimal} if $\mathcal{A}$ is minimal among generic sets on $\mathcal{C}$ with respect to inclusion, and we say $\mathcal{A}$ on $\mathcal{C}$ is \textit{reduced} if $$ K[x_F : F \in \calH (I)]/I_{\calH (I)} \cong K[X(I)]/ (I \cap K[X(I)]) $$ for all $I \in \mathcal{A}$. \end{definition} \par A minimal generic set has the following property: \begin{proposition} \label{claim:GenSet} Let $\mathcal{A}$ be a minimal generic set on some property $\mathcal{C}$. \begin{enumerate} \item For $J \in \mathcal{I} (\mathcal{C})$, there exists $I \in \mathcal{A}$ such that $\calH (J) \subset \calH (I)$ and $\mu (J) = \mu (I)$. \item If $\calH (I') \subset \calH (I)$ and $\mu (I') = \mu (I)$ for $I', I \in \mathcal{A}$, then $I' = I$. \end{enumerate} \end{proposition} \begin{proof} (1) By definition, there exists an ideal $I \in \mathcal{A}$ with the condition (2) of Definition \ref{defn:GenSet}. In particular, $\mu (I) = \mu (J) =: \mu$. We use the same notations as in Definition \ref{defn:GenSet}. Take a face $F \in \calH (J)$ and let $y_{F}$ be a defining variable of $F$ in $\calH (J)$. Take $i \in F$. Then $y_F$ divides $m_{i}$. Since $m_{i}$ can be written as the product $M_{t_{i 1}} \cdots M_{t_{i j_i}}$, we may assume that $M_{t_{i 1}}$ is divisible by $y_F$. Then for $1 \leq \ell \leq \mu$, the variable $y_F$ divides $m_{\ell}$ if and only if the variable $x_{t_{i1}}$ divides $m_{\ell}'$. 
This means that $x_{t_{i1}}$ defines the face $F$ of $\calH (I)$. \par (2) First note that in the proof of (1), we also proved that $\calH (J) \subset \calH (I)$ holds when $I$ and $J$ are related as in Definition \ref{defn:GenSet} (2). Therefore, by the minimality of $\mathcal{A}$, it is enough to prove that if $\calH (J) \subset \calH(I)$ with $\mu (J) = \mu (I) = \mu$ for $I, J \in \mathcal{I}$, then $I$ and $J$ are related in that way. \par Let $G(J) = \{ m_1, \ldots, m_{\mu} \}$ and $G(I) = \{ m_1', \ldots, m_{\mu}' \}$. For $F \in \calH (J)$ (resp.\ $G \in \calH (I)$), we denote by $M_F$ (resp.\ $M_G'$) the product of all defining variables of $F$ in $\calH (J)$ (resp.\ $G$ in $\calH (I)$). Then \begin{displaymath} m_i = \prod_{\genfrac{}{}{0pt}{}{F \in \calH (J)}{i \in F}} M_F, \qquad m '_i = \prod_{\genfrac{}{}{0pt}{}{F \in \calH (J)}{i \in F}} M_F' \prod_{\genfrac{}{}{0pt}{}{G \in \calH (I) \setminus \calH (J)}{ i \in G}} M_G'. \end{displaymath} Since $F \in \calH (I)$, we have $M_F' \neq 1$. Let $x_F$ be a variable which divides $M_F'$. Then substituting $M_F$ for $x_F$, and $1$ for each variable dividing one of the $M_F'/x_F$ or $M_G'$, yields the desired relation. \end{proof} By virtue of Propositions \ref{claim:hgraph} and \ref{claim:GenSet} we have the following proposition: \begin{proposition} \label{claim:Reduction} Let $\mathcal{A}$ be a generic set on some property $\mathcal{C}$. Suppose that for all $I, J \in \mathcal{I}(\mathcal{C})$ we have $$ \pd_{K[X(I)]} K[X(I)]/(I \cap K[X(I)]) =\pd _{K[X(J)]} K[X(J)]/(J \cap K[X(J)]). $$ We also assume that $$ \ara (I \cap K[X(I)]) = \pd_{K[X(I)]} K[X(I)]/(I \cap K[X(I)]) $$ for all $I \in \mathcal{A}$. Then we have $$ \ara (I \cap K[X(I)]) = \pd_{K[X(I)]} K[X(I)]/(I \cap K[X(I)]) $$ for all $I \in \mathcal{I}(\mathcal{C})$. \end{proposition} \par Let $I$ be a squarefree monomial ideal of $S$. 
The next proposition guarantees that we may assume that $\calH (I)$ is connected when we consider the problem on the arithmetical rank. \begin{proposition} \label{claim:connected} Let $I_1, I_2$ be squarefree monomial ideals of $S$. Suppose that $X (I_1) \cap X (I_2) = \emptyset$. Then we have \begin{displaymath} \ara (I_1 + I_2) \leq \ara I_1 + \ara I_2. \end{displaymath} \par Moreover, if $\ara I_i = \pd_S S/{I_i}$ holds for $i = 1, 2$, then $\ara (I_1 + I_2) = \pd_S S/{(I_1 + I_2)}$ also holds. \end{proposition} \begin{proof} When $f_1, \ldots, f_{s_1} \in S$ generate $I_1$ up to radical and $g_1, \ldots, g_{s_2} \in S$ generate $I_2$ up to radical, it is clear that these $s_1 + s_2$ elements generate $I_1 + I_2$ up to radical. Hence, we have the desired inequality. \par Let $F_{\bullet}, G_{\bullet}$ be minimal free resolutions of $S/I_1, S/I_2$, respectively. Then $F_{\bullet} \otimes G_{\bullet}$ provides a minimal free resolution of $S/{(I_1 + I_2)}$, and its length is equal to \begin{displaymath} \pd_S S/I_1 + \pd_S S/I_2 = \ara I_1 + \ara I_2 \geq \ara (I_1 + I_2) \end{displaymath} when $\ara I_i = \pd_S S/{I_i}$ holds for $i = 1, 2$. Since the inequality $\ara (I_1 + I_2) \geq \pd_S S/(I_1 + I_2)$ always holds by (\ref{eq:ara>pd}), we have the desired equality. \end{proof} \par Let $I$ be a squarefree monomial ideal with $G(I) = \{ m_1, \ldots, m_{\mu} \}$. We say that $I$ is \textit{connected} if for any distinct indices $i, j \in [\mu]$, there exists a sequence of indices $i=i_0, i_1, \ldots, i_{\ell} = j$ such that $\gcd (m_{i_{k-1}}, m_{i_k}) \neq 1$ for all $k=1, \ldots,\ell$. In other words, the corresponding hypergraph $\calH (I)$ is connected. \begin{remark} \label{rmk:connected} Proposition \ref{claim:connected} guarantees that we may assume $\indeg I \geq 2$. If $\indeg I = 1$, then $I$ can be written as $I_1 + (x)$, where $x \notin X (I_1)$. Then $\pd_S S/I = \pd_S S/{I_1} + 1$ and $\ara I \leq \ara I_1 + 1$. 
Therefore if $\ara I_1 = \pd_S S/{I_1}$ holds, then $\ara I = \pd_S S/I$ also holds. \end{remark} \section{An algorithm to find a minimal reduced generic set} \label{sec:algo} In this section we present an algorithm for computing a minimal reduced generic set on the property: \begin{displaymath} {\mathcal{C}}: \ \mu (I) = 5, \ \pd_{K[{X(I)}]} K[{X(I)}]/(I \cap K[{X(I)}]) = 3, \ \height I = 2, \ \text{$I$ is connected}. \end{displaymath} \par The algorithm consists of three main steps: \begin{enumerate} \item [Step 1)] List all the connected hypergraphs with five vertices which satisfy the separability condition; \item [Step 2)] Choose the hypergraphs $\calH$ with $\height I_{\calH}= 2$ and with $\pd K[X(I_{\calH})]/I_{\calH}=3$; \item [Step 3)] Find a minimal reduced generic set on ${\mathcal{C}}$. \end{enumerate} \par Since we use a computer, we must note that the projective dimension does not depend on the characteristic of $K$ when $\mu (I) = 5$. \begin{lemma} \label{claim:charfree} Let $I$ be a squarefree monomial ideal of $S$ with $\mu (I) = 5$. The projective dimension $\pd_S S/I$ does not depend on the characteristic of $K$. \end{lemma} \begin{proof} We fix the base field $K$. If $\pd_S S/I =1$, then $I$ is a principal ideal and the claim is clear. \par Suppose $\pd_S S/I =2$. Then the height of $I$ is either 1 or 2. Assume that $\height I=1$. Then the total Betti numbers do not change if we remove the prime components of height one from $I$. Hence we may assume that $\height I=2$. Since $\pd_S S/I =\height I$, the ring $S/I$ is Cohen-Macaulay. Since Cohen-Macaulayness of height two monomial ideals does not depend on the base field, we are done in this case. \par If $\pd_S S/I \ge 4$, then $\pd_S S/I$ does not depend on the characteristic of $K$ by Proposition \ref{claim:mu} and Corollary \ref{claim:char-free}. \par Finally, suppose $\pd_S S/I =3$. By the above arguments, it is not possible to have $\pd_S S/I \ne 3$ over another base field. 
Then we are done. \end{proof} \par Now we explain the details of each step. \par \bigskip \par {\bf Step 1):} To list all the connected hypergraphs with five vertices, we consider a correspondence between a hypergraph and a bipartite graph. \par First we give the necessary definitions. \par Let $G$ be a graph on the vertex set $V$ with the edge set $E(G)$. For a vertex $v \in V$, the neighbourhood of $v$ is defined by $N(v) := \{ u \in V : \{ u, v \} \in E(G) \}$. For a subset $V' \subset V$, we write $G_{V'}$ for the induced subgraph of $G$ on $V'$. For a vertex set $V$ we consider a labeled partition $V=X \cup Y$; that is, the two parts $X$ and $Y$ are distinguished. We call $X$ the {\it first subset }(or the {\it indeterminate-part subset}) of $V$ and $Y$ the {\it second subset} (or the {\it generator-part subset}) of $V$. Note that a bipartite graph with a labeled partition $V=X \cup Y$ can be regarded as a directed bipartite graph on the partition $V=X \cup Y$ such that every edge $\{x, y\}$ has the direction from $x\in X$ to $y \in Y$. \begin{definition} \label{defn:incidence_graph} Set $X = \{ x_1, \ldots, x_n \}$, $Y = \{ y_1, \ldots, y_{\mu} \}$. A bipartite graph $G$ with the labeled partition $V = X \cup Y$ is said to be a \textit{corresponding graph} if it satisfies the following three conditions: \begin{enumerate} \item The graph $G$ is connected; \item $N(y_i)\not \subset N(y_j)$ for all $i\neq j$; \item $N(x_i) \ne N(x_j)$ for any $i \neq j$. \end{enumerate} \end{definition} \par Two corresponding graphs $G$ and $G'$ with labeled partitions $V=X \cup Y$ and $V'=X' \cup Y'$, respectively, are isomorphic if there are bijections $f: X \longrightarrow X'$ and $g: Y \longrightarrow Y'$ such that $\{x, y\} \in E(G)$ for $x\in X$ and $y \in Y$ if and only if $\{f(x), g(y)\} \in E(G')$. \par Now we describe the correspondence more concretely. 
\begin{proposition} \label{claim:incidence_graph} We set $X = \{ x_1, \ldots, x_n \}$, $Y = \{ y_1, \ldots, y_{\mu} \}$ and $V = X \cup Y$. Let $\mathcal{G}$ be the set of all isomorphism classes of corresponding graphs with the labeled partition $V= X \cup Y$ and $\mathcal{S}$ be the set of all isomorphism classes of connected hypergraphs with the vertex set $Y$ and with $n$ faces which satisfy the separability condition. \par Then the map $\phi : \mathcal{G} \rightarrow \mathcal{S}$ defined by sending $G \in \mathcal{G}$ to $\calH =\{F_1, \dots , F_n\} \in \mathcal{S}$ where $F_i=\{y_j : \{x_i, y_j\} \in E(G)\}$ gives a one-to-one correspondence between these two sets. \end{proposition} \begin{proof} Let $G$ be a corresponding graph on the labeled partition $V = X \cup Y$. The connectivity of $G$ corresponds to that of the hypergraph $\phi (G)$. The condition (2) of Definition \ref{defn:incidence_graph} corresponds to the separability condition for the hypergraph $\phi (G)$. Moreover the condition (3) of Definition \ref{defn:incidence_graph} guarantees that the hypergraph $\phi (G)$ has $n$ faces. Hence the map $\phi$ is well defined. A hypergraph $\calH=\{F_1, \dots , F_n\}$ on the vertex set $Y=\{y_1, \ldots, y_{\mu}\}$ with the separability condition is associated to the bipartite graph on the labeled partition $X \cup Y$, where $X=\{x_1, \dots, x_n\}$ with $E(G)=\{\{x_i,y_j\} : y_j \in F_i \}$. This correspondence gives the inverse map of $\phi$. \end{proof} \par By virtue of Proposition \ref{claim:incidence_graph}, it is enough to enumerate all the corresponding graphs with labeled partition $V= X \cup Y$ such that $\mid Y \mid =5$ and $\mid X \mid \le 2^{5}=32$ to list the corresponding hypergraphs. \par To perform this computation we used the existing software {\it Nauty} (see \cite{Mk}), whose main purpose is to calculate a set of non-isomorphic graphs in an efficient way. 
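For illustration, the map $\phi$ of Proposition \ref{claim:incidence_graph} and its inverse can be realized in a few lines. The following Python sketch is purely illustrative; it is not the C/{\it Nauty} code we actually used, and all names in it are ours:

```python
# Illustrative sketch (not the C/Nauty code used for the actual computation):
# the map phi sends a corresponding graph, given by its edges {x_i, y_j}, to the
# hypergraph whose i-th face is F_i = { y_j : {x_i, y_j} is an edge }.

def graph_to_hypergraph(edges, n):
    """edges: set of pairs (i, j) encoding the edge {x_i, y_j}; n = |X|."""
    return [frozenset(j for (i2, j) in edges if i2 == i) for i in range(1, n + 1)]

def hypergraph_to_graph(faces):
    """The inverse map: recover the edge set from the face list F_1, ..., F_n."""
    return {(i, j) for i, F in enumerate(faces, start=1) for j in F}

# A toy corresponding graph with n = 3 and mu = 3:
edges = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 1), (3, 3)}
faces = graph_to_hypergraph(edges, 3)

# phi is invertible on its image, and condition (2) of the definition of a
# corresponding graph is exactly the separability condition on the faces.
assert hypergraph_to_graph(faces) == edges
separable = all(not (F <= G) for F in faces for G in faces if F is not G)
assert separable
```

Condition (3) of Definition \ref{defn:incidence_graph} corresponds to the computed faces being pairwise distinct, which can be checked in the same way.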
We also customized it with a routine in the C language to obtain the graphs satisfying the conditions in Definition \ref{defn:incidence_graph}. See the site \cite{KRT_soft} for technical information and examples of computation. On this site we also provide, under the GPL license, the source code used to generate the software applications. \par As a result, in the case $\mu=5$ the number of the corresponding graphs, or equivalently the number of the 5-vertex hypergraphs with the separability condition, is around $1.8\cdot 10^7$. We denote the set of the above graphs by ${\mathcal{P}}_1$. \par \bigskip \par {\bf Step 2):} To each element $G \in {\mathcal{P}}_1$, we associate the squarefree monomial ideal $I=(m_1, \dots , m_5)$ in $S=K[x_1, \dots , x_{32}]$, where $m_i=\prod_{x_j \in N(y_i)}x_j$ for $i=1,\dots,5$. \par Then we can use {\it CoCoA} (\cite{Co}) to determine whether $\height I =2 $ and $\pd S/I=3$. Choose all graphs in ${\mathcal{P}}_1$ which correspond to an ideal satisfying the above properties. Set this subset as ${\mathcal{P}}_2$. \par By this computation we know that the cardinality of ${\mathcal{P}}_2$ is around $2.3 \cdot 10^6$. \par \bigskip \par {\bf Step 3): } For the last step of our algorithm we need to find a minimal generic set on the property ${\mathcal{C}}$, or, equivalently, the corresponding set of bipartite graphs. For this purpose we define a partial order on bipartite graphs: \begin{definition} \label{defn:pord_biparGraph} Let $G$ and $G'$ be bipartite graphs on the labeled partitions $X \cup Y$ and $X' \cup Y$, respectively. We introduce a partial order $\preceq$ by setting $G' \preceq G$ if and only if there is a subset $X''$ of $X$ such that $G_{X'' \cup Y} \cong G'$. \end{definition} \par We want to list all the maximal corresponding graphs in ${\mathcal{P}}_2$ with respect to the partial order $\preceq$. 
To reach this goal we partition the set ${\mathcal{P}}_2$ of the corresponding graphs \begin{displaymath} {\mathcal{P}}_2 = \bigcup {\mathcal{G}}_i \end{displaymath} where $G \in {\mathcal{G}}_i$ if the first subset $X$ has cardinality $i$. Let $\ell$ be the maximum $i$ such that ${\mathcal{G}}_i \neq \emptyset$. Obviously each $G \in {\mathcal{G}}_{\ell}$ is a maximal graph. Set ${\mathcal{M}}_{\ell}={\mathcal{G}}_{\ell}$. \par As the next step we list the maximal graphs in ${\mathcal{G}}_{\ell - 1}$. For each graph $G \in {\mathcal{M}}_{\ell}$ and for each vertex $x_i \in X $ let $G_i' = G_{V(G) \setminus {\{ x_i \}}}$. We make the list $\{G_i' : G \in {\mathcal{M}}_{\ell},\ x_i \in X \}$. If a graph in ${\mathcal{G}}_{\ell - 1}$ is isomorphic to some $G_i'$, then we remove it. At the end of this process we have only the maximal graphs in ${\mathcal{G}}_{\ell - 1}$. We denote the set of the maximal graphs in ${\mathcal{G}}_{\ell - 1}$ by ${\mathcal{M}}_{\ell - 1}$. \par As the next step we similarly list the maximal graphs in ${\mathcal{G}}_{\ell - 2}$. For each graph $G \in {\mathcal{G}}_{\ell -1}$ and for each vertex $x_i \in X$ let $G_i' = G_{V(G) \setminus {\{ x_i\}}}$. We make the list $ \{G_i' : G \in {\mathcal{G}}_{\ell -1},\ x_i \in X \}. $ If a graph in ${\mathcal{G}}_{\ell - 2}$ is isomorphic to a graph in the list, then we remove it. At the end of this process we have only the maximal graphs in ${\mathcal{G}}_{\ell - 2}$. We denote the set of the maximal graphs in ${\mathcal{G}}_{\ell - 2}$ by ${\mathcal{M}}_{\ell - 2}$. \par We repeat a similar procedure for each $i \leq \ell -3$ and define ${\mathcal{M}}_i$ as above. \par Set \begin{displaymath} {\mathcal{P}}_3 = \bigcup {\mathcal{M}}_i. \end{displaymath} Then ${\mathcal{P}}_3 $ is the set of maximal corresponding graphs. A corresponding set of squarefree monomial ideals gives a minimal generic set on $\mathcal{C}$. 
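The pruning in Step 3 can be summarized in a few lines of code. The following Python sketch is our illustrative reimplementation with hypothetical names; in the actual computation the isomorphism tests are delegated to {\it Nauty}. A hypergraph is represented as a list of faces, a canonical form is computed by brute force over all relabellings of the vertices, and each layer ${\mathcal{G}}_i$ is filtered by the one-vertex deletions coming from the layer above:

```python
from itertools import permutations

def canon(faces, mu):
    """Canonical form of a hypergraph on vertices 0, ..., mu-1 under
    relabelling of vertices (the Y-part) and reordering of faces (the
    X-part).  Brute force over all mu! permutations; Nauty does this
    far more efficiently in the real computation."""
    return min(
        tuple(sorted(tuple(sorted(p[v] for v in F)) for F in faces))
        for p in permutations(range(mu))
    )

def maximal_layers(layers, mu):
    """layers[i]: the hypergraphs whose corresponding graphs have |X| = i
    (the sets G_i).  Returns the sets M_i of maximal elements with respect
    to the partial order, processing the layers from the top down."""
    maximal = {}
    deletions_from_above = set()
    for i in sorted(layers, reverse=True):
        maximal[i] = [H for H in layers[i]
                      if canon(H, mu) not in deletions_from_above]
        # one-face deletions of this layer prune the layer below
        deletions_from_above = {canon(H[:k] + H[k + 1:], mu)
                                for H in layers[i] for k in range(len(H))}
    return maximal

# Toy example on 3 vertices: the top layer contains one hypergraph with
# three faces; both two-face hypergraphs below arise as deletions of it,
# so neither is maximal.
layers = {3: [[(0, 1), (1, 2), (0, 2)]],
          2: [[(0, 1), (1, 2)], [(0, 2), (1, 2)]]}
result = maximal_layers(layers, 3)
assert len(result[3]) == 1 and result[2] == []
```

The brute-force isomorphism test is exponential in $\mu$, which is harmless for $\mu = 5$ ($5! = 120$ relabellings per graph), but the size of ${\mathcal{P}}_2$ is why an efficient tool such as Nauty is needed in practice.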
\par As a result of performing the algorithm, we obtain $\mid {\mathcal{P}}_3 \mid =3$. More concretely, we have the following theorem: \begin{theorem} \label{claim:result} Let $\mathcal{C}$ be the property on $ \mathcal{I}$ such that \begin{displaymath} \mathcal{I}(\mathcal{C}) =\left\{ I \in \mathcal{I} \; : \; \begin{aligned} &\mu (I) = 5, \ \pd_{K[{X(I)}]} K[{X(I)}]/(I \cap K[{X(I)}]) = 3, \\ &\height I = 2, \ \text{$I$ is connected} \end{aligned} \right\}. \end{displaymath} Then $\{ J_1, J_2, J_3 \}$ is a minimal reduced generic set on $\mathcal{C}$, where we define $J_i=(m_1, m_2, m_3, m_4, m_5 )$, $i=1,2,3$, as follows$:$ For $J_1$ where $\mid X(J_1) \mid = 23:$ \begin{displaymath} \begin{aligned} m_1 &= x_1 x_3 x_7 x_8 x_9 x_{10} x_{12} x_{14} x_{15} x_{18} x_{19} x_{20} x_{21} x_{22}, \\ m_2 &= x_2 x_4 x_5 x_8 x_9 x_{11} x_{13} x_{14} x_{17} x_{18} x_{19} x_{20} x_{21} x_{23}, \\ m_3 &= x_3 x_5 x_9 x_{11} x_{12} x_{15} x_{16} x_{17} x_{19} x_{20} x_{22} x_{23}, \\ m_4 &= x_4 x_6 x_{10} x_{11} x_{12} x_{13} x_{16} x_{18} x_{19} x_{21} x_{22} x_{23}, \\ m_5 &= x_6 x_7 x_{10} x_{13} x_{14} x_{15} x_{16} x_{17} x_{20} x_{21} x_{22} x_{23}. \end{aligned} \end{displaymath} \par For $J_2$ where $\mid X(J_2) \mid = 24:$ \begin{displaymath} \begin{aligned} m_1 &= x_1 x_4 x_7 x_9 x_{10} x_{12} x_{15} x_{16} x_{18} x_{19} x_{20} x_{22} x_{23} x_{24}, \\ m_2 &= x_2 x_5 x_8 x_9 x_{11} x_{14} x_{15} x_{17} x_{18} x_{19} x_{21} x_{22} x_{23} x_{24}, \\ m_3 &= x_3 x_6 x_7 x_8 x_{11} x_{12} x_{13} x_{16} x_{17} x_{19} x_{20} x_{21} x_{23} x_{24}, \\ m_4 &= x_4 x_5 x_{10} x_{11} x_{12} x_{13} x_{14} x_{15} x_{20} x_{21} x_{22} x_{23},\\ m_5 &= x_6 x_{10} x_{13} x_{14} x_{16} x_{17} x_{18} x_{20} x_{21} x_{22} x_{24}. 
\end{aligned} \end{displaymath} \par For $J_3 $ where $\mid X(J_3) \mid = 25:$ \begin{displaymath} \begin{aligned} m_1 &= x_1 x_5 x_6 x_7 x_{11} x_{12} x_{15} x_{18} x_{19} x_{20} x_{21} x_{22} x_{23} x_{24}, \\ m_2 &= x_2 x_5 x_8 x_9 x_{11} x_{13} x_{15} x_{16} x_{17} x_{18} x_{21} x_{22} x_{24} x_{25}, \\ m_3 &= x_3 x_6 x_8 x_{10} x_{12} x_{14} x_{15} x_{16} x_{17} x_{20} x_{21} x_{23} x_{24} x_{25}, \\ m_4 &= x_4 x_7 x_9 x_{10} x_{13} x_{14} x_{17} x_{18} x_{19} x_{20} x_{22} x_{23} x_{24} x_{25}, \\ m_5 &= x_{11} x_{12} x_{13} x_{14} x_{16} x_{19} x_{21} x_{22} x_{23} x_{25}. \end{aligned} \end{displaymath} \end{theorem} \begin{remark} We have performed a similar algorithm for a minimal generic set on the property: \begin{displaymath} {\mathcal{C}}: \ \mu (I) = 5, \ \pd_{K[{X(I)}]} K[{X(I)}]/(I \cap K[{X(I)}])= 3, \ \height I = 3, \ \text{$I$ is connected}. \end{displaymath} As a result, we know that a minimal generic set consists of 9 ideals, which coincides with the list chosen from all corresponding hypergraphs in \cite{KTYdev2}, where a combinatorial argument is used without a computer. \end{remark} \section{The case $\mu (I) \leq 5$} In this section, we compute the arithmetical rank of a squarefree monomial ideal $I$ with $\mu (I) \leq 5$. \begin{theorem} \label{claim:5gen} Let $I$ be a squarefree monomial ideal of $S=K[x_1, \ldots, x_n]$. Suppose that $\mu (I) \leq 5$. Then \begin{displaymath} \ara I = \pd_S S/I. \end{displaymath} \end{theorem} \begin{proof} If $\height I = 1$, then $I$ is of the form \begin{displaymath} I = (x_1, \ldots, x_t)I'S, \end{displaymath} where $I'$ is a squarefree monomial ideal of $S' = K[x_{t+1}, \ldots, x_n]$ with $\height I' \geq 2$. Then the equality $\ara I = \pd_S S/I$ holds if the same equality holds for $I' \subset S'$. Hence we may assume that $\height I \geq 2$. By \cite{KTYdev1, KTYdev2}, it is already known that $\ara I = \pd_S S/I$ holds when $\mu (I) - \height I \leq 2$ or $\mu (I) - \pd_S S/I \leq 1$. 
The remaining cases are $\mu (I) = 5$, $\height I = 2$, and $\pd_S S/I = 2, 3$. If $\pd_S S/I = 2$, then $\pd_S S/I = \height I = 2$, and thus $S/I$ is Cohen--Macaulay. The equality $\ara I = \pd_S S/I$ for such an ideal was proved in \cite{Kim_h2CM}. Therefore we only need to consider the case that $\mu (I) = 5$, $\height I = 2$, and $\pd_S S/I = 3$. By Proposition \ref{claim:Reduction} and Theorem \ref{claim:result}, it is enough to prove $\ara I = \pd_S S/I$ for the following three cases: \begin{enumerate} \item $I=J_1 \cap K[X]$ with $\mid X \mid = 23$; \item $I=J_2 \cap K[X]$ with $\mid X \mid = 24$; \item $I=J_3 \cap K[X]$ with $\mid X \mid = 25$. \end{enumerate} \par Since we have $\ara I \geq \pd_S S/I = 3$ by (\ref{eq:ara>pd}), it is sufficient to show $\ara I \leq 3$. \par \textbf{Case (1):} We prove that the following $3$ elements $g_1, g_2, g_3$ generate $I$ up to radical: \begin{displaymath} g_1 = x_{16} f_1 f_2, \quad g_2 = m_1 f_1 + m_4, \quad g_3 = m_2 f_2 + m_5, \end{displaymath} where \begin{displaymath} f_1 = \gcd (m_1, m_3) + \gcd (m_4, m_5) m_2, \quad f_2 = \gcd (m_2, m_3) + \gcd (m_4, m_5) m_1. \end{displaymath} Set $J = (g_1, g_2, g_3)$. Note that $\sqrt{x_{16} \gcd (m_1, m_3) \gcd (m_2, m_3)} = m_3$, where $\sqrt{m}$ denotes the product of all variables that appear in the monomial $m$. Then it is easy to see that $\sqrt{J} \subset I$. We prove the opposite inclusion. \par Since $x_{16} m_4 m_5 = x_{16} (g_2 - f_1 m_1) (g_3 - f_2 m_2) \in J$, we have $x_7 x_{14} x_{15} x_{17} x_{20} m_4 \in \sqrt{J}$ and $x_4 x_{11} x_{12} x_{18} x_{19} m_5 \in \sqrt{J}$. Then $x_{17} m_4^2 = x_{17} m_4 (g_2 - f_1 m_1) \in \sqrt{J}$ because $x_7 x_{14} x_{15} x_{20}$ divides $m_1$. Therefore we have $x_{17} m_4 \in \sqrt{J}$, and thus $x_{17} f_1 m_1 = x_{17} (g_2 - m_4) \in \sqrt{J}$. Since $x_{17}$ divides $\gcd (m_2, m_3)$, the relation $\gcd (m_2, m_3) g_1 \in J$ implies $x_{16} \gcd (m_2, m_3) f_1 \in \sqrt{J}$. 
Then $x_{16} \gcd (m_2, m_3) \gcd (m_1, m_3) \in \sqrt{J}$ and $x_{16} \gcd (m_2, m_3) \gcd (m_4, m_5) m_2 \in \sqrt{J}$ because $m_4$ divides $\gcd (m_1, m_3) \gcd (m_4, m_5) m_2$. Hence $m_3, \gcd (m_4, m_5) m_2 \in \sqrt{J}$. Similarly, we have $\gcd (m_4, m_5) m_1 \in \sqrt{J}$. Then $g_2 \in J$ (resp.\ $g_3 \in J$) implies that $m_1, m_4 \in \sqrt{J}$ (resp.\ $m_2, m_5 \in \sqrt{J}$), as desired. \par \textbf{Case (2):} We prove that the following $3$ elements $g_1, g_2, g_3$ generate $I$ up to radical: \begin{displaymath} g_1 = x_{13} \left( \frac{m_4}{x_{13}} + m_3 \right), \quad g_2 = m_1 m_2 \left( \frac{m_4}{x_{13}} + m_3 \right) + m_5, \quad g_3 = m_1 + m_2 + m_3. \end{displaymath} Set $J = (g_1, g_2, g_3)$. Then it is clear that $\sqrt{J} \subset I$. We prove the opposite inclusion. \par Since $x_{13} m_5 = x_{13} (g_2 - m_1 m_2 ((m_4/x_{13}) + m_3)) = x_{13} g_2 - m_1 m_2 g_1 \in J$, we have $m_5 \in \sqrt{J}$. Then $x_{13} x_{18} m_3^2 = x_{18} m_3 (g_1 - m_4) \in \sqrt{J}$ because $x_{18} m_3 m_4$ is divisible by $m_5 \in \sqrt{J}$. This implies $x_{18} m_3 \in \sqrt{J}$. Since $x_{18}$ is a common factor of $m_1$ and $m_2$, we have $m_3^2 = m_3 (g_3 - m_1 - m_2) \in \sqrt{J}$. That is, $m_3 \in \sqrt{J}$, and thus $m_4 = g_1 - x_{13} m_3, m_1 + m_2 = g_3 - m_3, m_1 m_2 (m_4/x_{13}) = g_2 - m_1 m_2 m_3 - m_5 \in \sqrt{J}$. Since $m_4/x_{13}$ divides $m_1 m_2$, we have $m_1 m_2 \in \sqrt{J}$. Therefore $m_1, m_2 \in \sqrt{J}$, as desired. \par \textbf{Case (3):} Since the length of the Lyubeznik resolution (\cite{Ly88}) of $I = (m_5, m_1, m_2, m_3, m_4)$ with respect to this order of generators is equal to $3$, the inequality $\ara I \leq 3$ follows from \cite{Kim_lyures}. \end{proof} \section{The case $\arithdeg I \leq 4$} Let $I$ be a squarefree monomial ideal of $S$ with $\arithdeg I \leq 4$. In this section, we prove that $\ara I = \pd_S S/I$ holds for such an ideal $I$. 
\begin{theorem} \label{claim:Adual_4gen} Let $S$ be a polynomial ring over a field $K$ and $I$ a squarefree monomial ideal of $S$ with $\arithdeg I \leq 4$. Then \begin{displaymath} \ara I = \pd_S S/I. \end{displaymath} \end{theorem} \par To prove the theorem, we first reduce to simple cases. Then we find $\pd_S S/I$ elements which generate $I$ up to radical. In this step, we use a result due to Schmitt and Vogel \cite{SchmVo}. \begin{lemma}[{Schmitt and Vogel \cite[Lemma, p.\ 249]{SchmVo}}] \label{claim:SchmVo} Let $R$ be a commutative ring and $\mathcal{P}$ a finite subset of $R$. Suppose that subsets ${\mathcal{P}}_0, {\mathcal{P}}_1, \ldots, {\mathcal{P}}_r$ of $\mathcal{P}$ satisfy the following three conditions$:$ \begin{enumerate} \item[{(SV1)}] $\bigcup_{\ell = 0}^r {\mathcal{P}}_{\ell} = \mathcal{P};$ \item[{(SV2)}] $|{\mathcal{P}}_0| = 1;$ \item[{(SV3)}] for all $\ell >0$ and for all elements $a, a'' \in {\mathcal{P}}_{\ell}$ with $a \neq a''$, there exist an integer ${\ell}' < \ell$ and an element $a' \in {\mathcal{P}}_{{\ell}'}$ such that $a a'' \in (a')$. \end{enumerate} Set $I = ({\mathcal{P}})$ and \begin{displaymath} g_{\ell} = \sum_{a \in {\mathcal{P}}_{\ell}} a, \quad \ell = 0, 1, \ldots, r. \end{displaymath} Then $g_0, g_1, \ldots, g_r$ generate $I$ up to radical. \end{lemma} \par First, we reduce to the case where $\dim \calH (I^{\ast}) \leq 1$. To do this, we need the following lemma. \begin{lemma} \label{claim:red_to_1dim} Let $I = I_{\Delta}$ be a squarefree monomial ideal of $S=K[x_1, \ldots, x_n]$ with $\arithdeg I \leq 4$ and $\indeg I \geq 2$. Then there exists a squarefree monomial ideal $I' = I_{{\Delta}'}$ such that $\dim \calH (I'^{\ast}) \leq 1$ and that $\Delta$ is obtained from ${\Delta}'$ by adding cones recursively. \end{lemma} \begin{proof} Since $\mu (I^{\ast}) \leq 4$ and $\height I^{\ast} \geq 2$, we have $\dim \calH (I^{\ast}) \leq 2$ by \cite[Proposition 3.4]{KTYdev1}. 
We prove the lemma by induction on the number $c$ of $2$-faces of $\calH (I^{\ast})$. \par If $c=0$, then $\dim \calH (I^{\ast}) \leq 1$ and we may take ${\Delta}' = \Delta$. \par Assume $c \geq 1$. Then $\arithdeg I = 4$ since $\indeg I \geq 2$, and there exists a $2$-face $F \in \calH (I^{\ast})$. We may assume $F = \{ 1, 2, 3 \}$ and let $x_1, \ldots, x_t$ ($t \geq 1$) be all defining variables of $F$. Let \begin{displaymath} I = I_{\Delta} = P_1 \cap P_2 \cap P_3 \cap P_4 \end{displaymath} be the minimal prime decomposition of $I$. Then we can write $P_i = P_i' + (x_1)$ for $i=1, 2, 3$, where $x_1 \notin P_i'$. Note that $x_1 \notin P_4$. Set \begin{equation} \label{eq:prime_decomp} I' = P_1' \cap P_2' \cap P_3' \cap P_4 \quad (\subset S' = K[x_2, \ldots, x_n]), \end{equation} and let ${\Delta}'$ be the simplicial complex associated with $I'$. Then $\Delta = {\Delta}' \cup \cone_{x_1} (G_4)$, where $G_4$ is the facet of $\Delta'$ corresponding to $P_4$. If $t>1$, then $\calH ({I'}^{\ast}) = \calH (I^{\ast})$ and (\ref{eq:prime_decomp}) is still the minimal prime decomposition of $I'$. Hence we can reduce to the case $t=1$ by applying the above operation $t-1$ times. If (\ref{eq:prime_decomp}) is not a minimal prime decomposition of $I'$, then $I' = P_1' \cap P_2' \cap P_3'$ and $\dim \calH (I'^{\ast}) \leq 1$. Otherwise $\calH (I'^{\ast}) = \calH (I^{\ast}) \setminus \{ F \}$ and the number of $2$-faces of $\calH (I'^{\ast})$ is smaller than that of $\calH (I^{\ast})$, as required. \end{proof} \par Barile and Terai \cite[Theorem 2, p.\ 3694]{BariTera08} (see also \cite[Section 5]{Kim_h2CM}) proved that if $\ara I' = \pd_{S'} {S'}/{I'}$ holds, then $\ara I = \pd_{S} {S}/{I}$ also holds, where the notations are the same as in Lemma \ref{claim:red_to_1dim}. Therefore we only need to consider the case where $\dim \calH (I^{\ast}) \leq 1$. \par Next, we reduce to the case where $\calH (I^{\ast})$ is connected. 
\begin{lemma} \label{claim:inter_section} Let $I_1, I_2$ be squarefree monomial ideals of $S$. Suppose that $X (I_1) \cap X (I_2) = \emptyset$. Then \begin{displaymath} \ara (I_1 \cap I_2) \leq \ara I_1 + \ara I_2 - 1. \end{displaymath} \par Moreover, if $\ara I_i = \pd_S S/{I_i}$ holds for $i = 1, 2$, then $\ara (I_1 \cap I_2) = \pd_S S/{(I_1 \cap I_2)}$ also holds. \end{lemma} \begin{proof} Let $\ara I_i = s_i + 1$ for $i=1, 2$. Then there exist $s_1 + 1$ elements $f_0, f_1, \ldots, f_{s_1} \in S$ (resp.\ $s_2 + 1$ elements $g_0, g_1, \ldots, g_{s_2} \in S$) which generate $I_1$ (resp.\ $I_2$) up to radical. \par Set \begin{displaymath} h_{\ell} = \sum_{j=0}^{\ell} f_{\ell - j} g_j, \quad \ell = 0, 1, \ldots, s_1 + s_2 \end{displaymath} and $J = (h_0, h_1, \ldots, h_{s_1 + s_2})$, where $f_i = 0$ (resp.\ $g_j = 0$) if $i > s_1$ (resp.\ $j > s_2$). If $\sqrt{J} = I_1 \cap I_2$, then \begin{displaymath} \ara (I_1 \cap I_2) \leq s_1 + s_2 + 1 = \ara I_1 + \ara I_2 - 1. \end{displaymath} \par We prove $\sqrt{J} = I_1 \cap I_2$. Since $h_{\ell} \in I_1 \cap I_2$, we have $\sqrt{J} \subset I_1 \cap I_2$. We prove the opposite inclusion. Note that $I_1 \cap I_2 = I_1 I_2$ since $X (I_1) \cap X (I_2) = \emptyset$. By Lemma \ref{claim:SchmVo}, we have $f_i g_j \in \sqrt{J}$ for all $0 \leq i \leq s_1$ and for all $0 \leq j \leq s_2$. Then \begin{displaymath} (f_0, f_1, \ldots, f_{s_1}) (g_0, g_1, \ldots, g_{s_2}) \subset \sqrt{J}. \end{displaymath} Hence \begin{displaymath} \sqrt{(f_0, f_1, \ldots, f_{s_1}) (g_0, g_1, \ldots, g_{s_2})} \subset \sqrt{J}. \end{displaymath} On the other hand, \begin{displaymath} \begin{aligned} \sqrt{(f_0, f_1, \ldots, f_{s_1}) (g_0, g_1, \ldots, g_{s_2})} &\supset \sqrt{(f_0, f_1, \ldots, f_{s_1})} \sqrt{(g_0, g_1, \ldots, g_{s_2})} \\ &= I_1 I_2 = I_1 \cap I_2. \end{aligned} \end{displaymath} Therefore we have $I_1 \cap I_2 \subset \sqrt{J}$, as desired. \par To prove the second part of the lemma, we set $I = I_1 \cap I_2$. 
Then $I^{\ast} = I_1^{\ast} + I_2^{\ast}$. Note that $X (I_1) \cap X (I_2) = \emptyset$ implies $X (I_1^{\ast}) \cap X (I_2^{\ast}) = \emptyset$. Since the inequality $\ara I \geq \pd_S S/I$ always holds by (\ref{eq:ara>pd}), it is sufficient to prove the opposite inequality. \par By assumption, we have \begin{equation} \label{eq:inter_section1} \ara I = \ara (I_1 \cap I_2) \leq \ara I_1 + \ara I_2 - 1 = \pd_S S/I_1 + \pd_S S/I_2 - 1. \end{equation} On the other hand, since $X (I_1^{\ast}) \cap X (I_2^{\ast}) = \emptyset$, we have \begin{displaymath} {\beta}_{i,j} (S/{I^{\ast}}) = \sum_{\ell = 0}^j \sum_{m=0}^i {\beta}_{m, \ell} (S/{I_1^{\ast}}) {\beta}_{i-m, j - \ell} (S/{I_2^{\ast}}). \end{displaymath} Assume that the regularity of $S/{I_1^{\ast}}$ (resp.\ $S/{I_2^{\ast}}$) is given by ${\beta}_{i_1, j_1} (S/{I_1^{\ast}}) \neq 0$ (resp.\ ${\beta}_{i_2, j_2} (S/{I_2^{\ast}}) \neq 0$). Then \begin{displaymath} \reg S/{I_1^{\ast}} = j_1 - i_1, \qquad \reg S/{I_2^{\ast}} = j_2 - i_2, \end{displaymath} and ${\beta}_{i_1 + i_2, j_1 + j_2} (S/{I^{\ast}}) \geq {\beta}_{i_1, j_1} (S/{I_1^{\ast}}) {\beta}_{i_2, j_2} (S/{I_2^{\ast}}) > 0$. Hence, \begin{equation} \label{eq:inter_section2} \reg I^{\ast} = \reg (S/{I^{\ast}}) + 1 \geq j_1 + j_2 - (i_1 + i_2) + 1. \end{equation} Therefore \begin{displaymath} \begin{alignedat}{3} \pd_S S/I = \reg I^{\ast} &\geq \reg S/{I_1^{\ast}} + \reg S/{I_2^{\ast}} + 1 &\quad &\text{(by (\ref{eq:inter_section2}))}\\ &= \reg I_1^{\ast} + \reg I_2^{\ast} - 1 &\quad &\quad \\ &= \pd_S S/{I_1} + \pd_S S/{I_2} -1 &\quad &\quad \\ &\geq \ara I &\quad &\text{(by (\ref{eq:inter_section1}))}. \end{alignedat} \end{displaymath} \end{proof} \par Let $I$ be a squarefree monomial ideal with $\arithdeg I \leq 4$. By Remark \ref{rmk:connected} we may assume $\indeg I \geq 2$. It was proved in \cite[Theorems 5.1 and 6.1]{KTYdev1} that $\ara I = \pd_S S/I$ holds when $\arithdeg I - \indeg I \leq 1$. 
Hence we only need to consider the case where $\arithdeg I =4$ and $\indeg I = 2$, equivalently, $\mu (I^{\ast}) = 4$ and $\height I^{\ast} = 2$. Then corresponding hypergraphs $\calH := \calH (I^{\ast})$ were classified in \cite[Section 3]{KTYdev2}. As a consequence of Lemmas \ref{claim:red_to_1dim} and \ref{claim:inter_section}, we may assume that $\calH$ is connected and $\dim \calH \leq 1$. Moreover since we know $\ara I = \pd_S S/I$ holds when $\arithdeg I = \reg I$, i.e., $\mu (I^{\ast}) = \pd_S S/I^{\ast}$ by \cite[Theorem 5.1]{KTYdev1}, we may assume that $W({\calH}) \neq \emptyset$ by Proposition \ref{claim:mu}. Thus $\calH$ coincides with one of the following hypergraphs: \par \begin{picture}(65,50)(-15,0) \put(-15,35){${\calH}_{1}$:} \put(10,10){\circle{5}} \put(35,10){\circle{5}} \put(10,35){\circle{5}} \put(35,35){\circle{5}} \put(12.5,10){\line(1,0){20}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \end{picture} \begin{picture}(65,50)(45,0) \put(45,35){${\calH}_2$:} \put(70,10){\circle{5}} \put(95,10){\circle{5}} \put(70,35){\circle{5}} \put(95,35){\circle*{5}} \put(72.5,10){\line(1,0){20}} \put(72.5,35){\line(1,0){20}} \put(70,12.5){\line(0,1){20}} \put(95,12.5){\line(0,1){20}} \end{picture} \begin{picture}(65,50)(105,0) \put(105,35){${\calH}_3$:} \put(130,10){\circle{5}} \put(155,10){\circle*{5}} \put(130,35){\circle{5}} \put(155,35){\circle*{5}} \put(132.5,10){\line(1,0){20}} \put(132.5,35){\line(1,0){20}} \put(130,12.5){\line(0,1){20}} \put(155,12.5){\line(0,1){20}} \end{picture} \begin{picture}(65,50)(165,0) \put(165,35){${\calH}_4$:} \put(190,10){\circle*{5}} \put(215,10){\circle{5}} \put(190,35){\circle{5}} \put(215,35){\circle*{5}} \put(192.5,10){\line(1,0){20}} \put(192.5,35){\line(1,0){20}} \put(190,12.5){\line(0,1){20}} \put(215,12.5){\line(0,1){20}} \end{picture} \begin{picture}(65,50)(225,0) \put(225,35){${\calH}_5$:} \put(250,10){\circle*{5}} \put(275,10){\circle*{5}} \put(250,35){\circle{5}} 
\put(275,35){\circle*{5}} \put(252.5,10){\line(1,0){20}} \put(252.5,35){\line(1,0){20}} \put(250,12.5){\line(0,1){20}} \put(275,12.5){\line(0,1){20}} \end{picture} \par \begin{picture}(65,50)(285,0) \put(285,35){${\calH}_6$:} \put(310,10){\circle{5}} \put(335,10){\circle{5}} \put(310,35){\circle{5}} \put(335,35){\circle{5}} \put(312.5,10){\line(1,0){20}} \put(312.5,35){\line(1,0){20}} \put(310,12.5){\line(0,1){20}} \put(335,12.5){\line(0,1){20}} \put(312,33){\line(1,-1){21}} \end{picture} \begin{picture}(65,50)(-15,0) \put(-15,35){${\calH}_7$:} \put(10,10){\circle{5}} \put(35,10){\circle{5}} \put(10,35){\circle{5}} \put(35,35){\circle*{5}} \put(12.5,10){\line(1,0){20}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \put(12,33){\line(1,-1){21}} \end{picture} \begin{picture}(65,50)(45,0) \put(45,35){${\calH}_8$:} \put(70,10){\circle{5}} \put(95,10){\circle*{5}} \put(70,35){\circle{5}} \put(95,35){\circle{5}} \put(72.5,10){\line(1,0){20}} \put(72.5,35){\line(1,0){20}} \put(70,12.5){\line(0,1){20}} \put(95,12.5){\line(0,1){20}} \put(71.5,33.5){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(105,0) \put(105,35){${\calH}_9$:} \put(130,10){\circle{5}} \put(155,10){\circle*{5}} \put(130,35){\circle{5}} \put(155,35){\circle*{5}} \put(132.5,10){\line(1,0){20}} \put(132.5,35){\line(1,0){20}} \put(130,12.5){\line(0,1){20}} \put(155,12.5){\line(0,1){20}} \put(132,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(165,0) \put(160,35){${\calH}_{10}$:} \put(190,10){\circle*{5}} \put(215,10){\circle{5}} \put(190,35){\circle{5}} \put(215,35){\circle*{5}} \put(192.5,10){\line(1,0){20}} \put(192.5,35){\line(1,0){20}} \put(190,12.5){\line(0,1){20}} \put(215,12.5){\line(0,1){20}} \put(192,33){\line(1,-1){21}} \end{picture} \par \begin{picture}(65,50)(225,0) \put(220,35){${\calH}_{11}$:} \put(250,10){\circle*{5}} \put(275,10){\circle*{5}} \put(250,35){\circle{5}} \put(275,35){\circle*{5}} \put(252.5,10){\line(1,0){20}} 
\put(252.5,35){\line(1,0){20}} \put(250,12.5){\line(0,1){20}} \put(275,12.5){\line(0,1){20}} \put(252,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(285,0) \put(280,35){${\calH}_{12}$:} \put(310,10){\circle*{5}} \put(335,10){\circle{5}} \put(310,35){\circle{5}} \put(335,35){\circle*{5}} \put(312.5,10){\line(1,0){20}} \put(312.5,35){\line(1,0){20}} \put(310,12.5){\line(0,1){20}} \put(335,12.5){\line(0,1){20}} \put(311.5,11.5){\line(1,1){22}} \end{picture} \begin{picture}(65,50)(-15,0) \put(-20,35){${\calH}_{13}$:} \put(10,10){\circle*{5}} \put(35,10){\circle*{5}} \put(10,35){\circle{5}} \put(35,35){\circle*{5}} \put(12.5,10){\line(1,0){20}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \put(11.5,11.5){\line(1,1){22}} \end{picture} \begin{picture}(65,50)(45,0) \put(40,35){${\calH}_{14}$:} \put(70,10){\circle{5}} \put(95,10){\circle{5}} \put(70,35){\circle{5}} \put(95,35){\circle{5}} \put(72.5,10){\line(1,0){20}} \put(72.5,35){\line(1,0){20}} \put(70,12.5){\line(0,1){20}} \put(95,12.5){\line(0,1){20}} \put(72,12){\line(1,1){21}} \put(72,33){\line(1,-1){21}} \end{picture} \begin{picture}(65,50)(105,0) \put(100,35){${\calH}_{15}$:} \put(130,10){\circle{5}} \put(155,10){\circle{5}} \put(130,35){\circle{5}} \put(155,35){\circle*{5}} \put(132.5,10){\line(1,0){20}} \put(132.5,35){\line(1,0){20}} \put(130,12.5){\line(0,1){20}} \put(155,12.5){\line(0,1){20}} \put(132,12){\line(1,1){21}} \put(132,33){\line(1,-1){21}} \end{picture} \par \begin{picture}(65,50)(165,0) \put(160,35){${\calH}_{16}$:} \put(190,10){\circle{5}} \put(215,10){\circle*{5}} \put(190,35){\circle{5}} \put(215,35){\circle*{5}} \put(192.5,10){\line(1,0){20}} \put(192.5,35){\line(1,0){20}} \put(190,12.5){\line(0,1){20}} \put(215,12.5){\line(0,1){20}} \put(192,12){\line(1,1){21}} \put(192,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(225,0) \put(220,35){${\calH}_{17}$:} \put(250,10){\circle*{5}} \put(275,10){\circle*{5}} 
\put(250,35){\circle{5}} \put(275,35){\circle*{5}} \put(252.5,10){\line(1,0){20}} \put(252.5,35){\line(1,0){20}} \put(250,12.5){\line(0,1){20}} \put(275,12.5){\line(0,1){20}} \put(251.5,11.5){\line(1,1){22}} \put(252,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(285,0) \put(280,35){${\calH}_{18}$:} \put(310,10){\circle*{5}} \put(335,10){\circle{5}} \put(310,35){\circle{5}} \put(335,35){\circle{5}} \put(312.5,35){\line(1,0){20}} \put(310,12.5){\line(0,1){20}} \put(335,12.5){\line(0,1){20}} \put(312,33){\line(1,-1){21}} \end{picture} \begin{picture}(65,50)(-15,0) \put(-20,35){${\calH}_{19}$:} \put(10,10){\circle*{5}} \put(35,10){\circle*{5}} \put(10,35){\circle{5}} \put(35,35){\circle{5}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \put(12,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(45,0) \put(40,35){${\calH}_{20}$:} \put(70,10){\circle*{5}} \put(95,10){\circle*{5}} \put(70,35){\circle{5}} \put(95,35){\circle*{5}} \put(72.5,35){\line(1,0){20}} \put(70,12.5){\line(0,1){20}} \put(95,12.5){\line(0,1){20}} \put(72,33){\line(1,-1){21.5}} \end{picture} \par \begin{picture}(65,50)(105,0) \put(100,35){${\calH}_{21}$:} \put(130,10){\circle{5}} \put(155,10){\circle*{5}} \put(130,35){\circle{5}} \put(155,35){\circle*{5}} \put(132.5,10){\line(1,0){20}} \put(130,12.5){\line(0,1){20}} \put(155,12.5){\line(0,1){20}} \put(132,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(165,0) \put(160,35){${\calH}_{22}$:} \put(190,10){\circle*{5}} \put(215,10){\circle*{5}} \put(190,35){\circle{5}} \put(215,35){\circle*{5}} \put(192.5,10){\line(1,0){20}} \put(190,12.5){\line(0,1){20}} \put(215,12.5){\line(0,1){20}} \put(192,33){\line(1,-1){21.5}} \end{picture} \begin{picture}(65,50)(225,0) \put(220,35){${\calH}_{23}$:} \put(250,10){\circle*{5}} \put(275,10){\circle*{5}} \put(250,35){\circle{5}} \put(275,35){\circle{5}} \put(252.5,35){\line(1,0){20}} \put(250,12.5){\line(0,1){20}} \put(275,12.5){\line(0,1){20}} 
\end{picture} \begin{picture}(65,50)(285,0) \put(280,35){${\calH}_{24}$:} \put(310,10){\circle*{5}} \put(335,10){\circle*{5}} \put(310,35){\circle{5}} \put(335,35){\circle*{5}} \put(312.5,35){\line(1,0){20}} \put(310,12.5){\line(0,1){20}} \put(335,12.5){\line(0,1){20}} \end{picture} \par Note that $\calH$ is contained in the hypergraph $\calH_{17}$: \par \noindent \begin{center} \begin{picture}(55,60)(-10,-5) \put(-25,39){${\calH}_{17}$:} \put(10,10){\circle*{5}} \put(2,1){$2$} \put(35,10){\circle*{5}} \put(38,1){$3$} \put(10,35){\circle{5}} \put(2,39){$1$} \put(35,35){\circle*{5}} \put(38,39){$4$} \put(12.5,10){\line(1,0){20}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \put(11.5,11.5){\line(1,1){22}} \put(12,33){\line(1,-1){21.5}} \end{picture} \end{center} \par \noindent Throughout, we label the vertices of $\calH$ as above. Then $I$ is of the form $I = P_1 \cap P_2 \cap P_3 \cap P_4$ with \begin{displaymath} \begin{aligned} P_1 &= (x_{11}, \ldots, x_{1 i_1}, x_{41}, \ldots, x_{4 i_4}, x_{51}, \ldots, x_{5 i_5}), \\ P_2 &= (x_{11}, \ldots, x_{1 i_1}, x_{21}, \ldots, x_{2 i_2}, x_{61}, \ldots, x_{6 i_6}, y_{21}, \ldots, y_{2 j_2}), \\ P_3 &= (x_{21}, \ldots, x_{2 i_2}, x_{31}, \ldots, x_{3 i_3}, x_{51}, \ldots, x_{5 i_5}, y_{31}, \ldots, y_{3 j_3}), \\ P_4 &= (x_{31}, \ldots, x_{3 i_3}, x_{41}, \ldots, x_{4 i_4}, x_{61}, \ldots, x_{6 i_6}, y_{41}, \ldots, y_{4 j_4}), \end{aligned} \end{displaymath} where $\{ x_{st} \}$, $\{ y_{u v} \}$ are all distinct variables of $S$ and $i_s \geq 0$, $j_u \geq 0$. Then we have \begin{displaymath} I^{\ast} = (X_1 X_4 X_5, X_1 X_2 X_6 Y_2, X_2 X_3 X_5 Y_3, X_3 X_4 X_6 Y_4), \end{displaymath} where \begin{displaymath} X_s = x_{s1} \cdots x_{s i_s}, \quad s = 1, 2, 3, 4; \qquad Y_u = y_{u1} \cdots y_{u j_u}, \quad u = 2, 3, 4. \end{displaymath} Here we set $X_s = 1$ (resp.\ $Y_u = 1$) when $i_s = 0$ (resp.\ $j_u=0$). \par Let $N = i_1 + \cdots + i_6 + j_2 + j_3 + j_4$. 
Then one can easily construct a graded minimal free resolution of $I^{\ast} $ and compute $\reg I^{\ast}$. \begin{lemma} \label{claim:reg4gen} Let $I$ be a squarefree monomial ideal with $\indeg I^{\ast} \geq 2$. Suppose that $\calH := \calH (I^{\ast})$ coincides with one of $\calH_1, \ldots, \calH_{24}$. \begin{enumerate} \item When $\calH$ coincides with one of $\calH_{11}, \calH_{17}, \calH_{20}$, \begin{displaymath} \pd_S S/I = \reg I^{\ast} = \max \{ N - j_2 - 2, N - j_3 - 2, N - j_4 - 2 \}. \end{displaymath} \item When $\calH = \calH_{22}$, \begin{displaymath} \pd_S S/I = \reg I^{\ast} = \max \{ N - j_2 - 2, N - j_3 - 2 \}. \end{displaymath} \item When $\calH$ coincides with one of $\calH_4, \calH_5, \calH_{12}, \calH_{13}, \calH_{24}$, \begin{displaymath} \pd_S S/I = \reg I^{\ast} = \max \{ N - j_2 - 2, N - j_4 - 2 \}. \end{displaymath} \item Otherwise, $\pd_S S/I = \reg I^{\ast} = N-2$. \end{enumerate} \end{lemma} \par Now we find $\pd_S S/I$ elements of $I$ which generate $I$ up to radical. First, we consider the case of $\calH_{17}$. The following construction for $\calH = \calH_{17}$ is also valid for the other cases except for the two cases of $\calH_{1}$ and $\calH_{14}$. \par Set \begin{displaymath} r_1 = N - j_4 - 3, \quad r_2 = N - j_3 -3, \quad r_3 = N - j_2 -3, \quad r = \max \{ r_1, r_2, r_3 \}. \end{displaymath} Then $\pd_S S/I = r+1$. 
We define sets ${\mathcal{P}}_{\ell}^{(1)}, \ldots, {\mathcal{P}}_{\ell}^{(8)}$ as follows and set ${\mathcal{P}}_{\ell} = {\mathcal{P}}_{\ell}^{(1)} \cup \cdots \cup {\mathcal{P}}_{\ell}^{(8)}$ for $\ell = 0, 1, \ldots, r$: \begin{displaymath} \begin{aligned} {\mathcal{P}}_{\ell}^{(1)} &= \left\{ x_{1 {\ell}_1} x_{3 {\ell}_3} \; : \; {\ell}_1 + {\ell}_3 = \ell + 2; \ 1 \leq {\ell}_s \leq i_s \ (s = 1, 3) \right\}, \\ {\mathcal{P}}_{\ell}^{(2)} &= \left\{ x_{1 {\ell}_1} w_{3 {\ell}_3} w_{4 {\ell}_4} \; : \; \begin{aligned} &{\ell}_1 + {\ell}_3 + {\ell}_4 + i_3 = \ell + 3, \\ &1 \leq {\ell}_1 \leq i_1; \ 1 \leq {\ell}_3 \leq i_2 + i_5 + j_3; \ 1 \leq {\ell}_4 \leq i_4 + i_6 + j_4 \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(3)} &= \left\{ x_{3 {\ell}_3} w_{1 {\ell}_1} w_{2 {\ell}_2} \; : \; \begin{aligned} &{\ell}_3 + {\ell}_1 + {\ell}_2 + i_1 = \ell + 3, \\ &1 \leq {\ell}_3 \leq i_3; \ 1 \leq {\ell}_1 \leq i_4 + i_5; \ 1 \leq {\ell}_2 \leq i_2 + i_6 + j_2 \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(4)} &= \left\{ x_{2 {\ell}_2} x_{4 {\ell}_4} \; : \; {\ell}_2 + {\ell}_4 + i_1 + i_3 = \ell + 2; \ 1 \leq {\ell}_s \leq i_s \ (s = 2, 4) \right\}, \\ {\mathcal{P}}_{\ell}^{(5)} &= \left\{ x_{4 {\ell}_4} w_{2 {\ell}_2} w_{3 {\ell}_3} \; : \; \begin{aligned} &{\ell}_4 + ({\ell}_2 - i_2) + ({\ell}_3 - i_2) + i_1 + i_2 + i_3 = \ell + 3, \\ &1 \leq {\ell}_4 \leq i_4; \ i_2 < {\ell}_2 \leq i_2 + i_6 + j_2; \ i_2 < {\ell}_3 \leq i_2 + i_5 + j_3 \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(6)} &= \left\{ x_{2 {\ell}_2} x_{5 {\ell}_5} w_{4 {\ell}_4} \; : \; \begin{aligned} &{\ell}_2 + {\ell}_5 + ({\ell}_4 - i_4) + i_1 + i_3 + i_4 = \ell + 3, \\ &1 \leq {\ell}_s \leq i_s \ (s = 2, 5); \ i_4 < {\ell}_4 \leq i_4 + i_6 + j_4 \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(7)} &= \left\{ x_{5 {\ell}_5} x_{6 {\ell}_6} \; : \; {\ell}_5 + {\ell}_6 + i_1 + \cdots + i_4 = \ell + 2; \ 1 \leq {\ell}_s \leq i_s \ (s = 5, 6) \right\}, \\ {\mathcal{P}}_{\ell}^{(8)} &= \left\{ 
x_{5 {\ell}_5} y_{2 {\ell}_2} y_{4 {\ell}_4} \; : \; \begin{aligned} &{\ell}_5 + {\ell}_2 + {\ell}_4 + i_1 + \cdots + i_4 + i_6 = \ell + 3, \\ &1 \leq {\ell}_5 \leq i_5; \ 1 \leq {\ell}_u \leq j_u \ (u = 2, 4) \end{aligned} \right\}. \end{aligned} \end{displaymath} Here, \begin{displaymath} \begin{aligned} w_{1 {\ell}_1} &= \left\{ \begin{alignedat}{3} &x_{4 {\ell}_1}, &\quad & 1 \leq {\ell}_1 \leq i_4, \\ &x_{5 {\ell}_1 - i_4}, &\quad & i_4 < {\ell}_1 \leq i_4 + i_5, \end{alignedat} \right. \\ w_{2 {\ell}_2} &= \left\{ \begin{alignedat}{3} &x_{2 {\ell}_2}, &\quad & 1 \leq {\ell}_2 \leq i_2, \\ &x_{6 {\ell}_2 - i_2}, &\quad & i_2 < {\ell}_2 \leq i_2 + i_6, \\ &y_{2 {\ell}_2 - i_2 - i_6}, &\quad & i_2 + i_6 < {\ell}_2 \leq i_2 + i_6 + j_2, \end{alignedat} \right. \\ w_{3 {\ell}_3} &= \left\{ \begin{alignedat}{3} &x_{2 {\ell}_3}, &\quad & 1 \leq {\ell}_3 \leq i_2, \\ &x_{5 {\ell}_3 - i_2}, &\quad & i_2 < {\ell}_3 \leq i_2 + i_5, \\ &y_{3 {\ell}_3 - i_2 - i_5}, &\quad & i_2 + i_5 < {\ell}_3 \leq i_2 + i_5 + j_3, \end{alignedat} \right. \\ w_{4 {\ell}_4} &= \left\{ \begin{alignedat}{3} &x_{4 {\ell}_4}, &\quad & 1 \leq {\ell}_4 \leq i_4, \\ &x_{6 {\ell}_4 - i_4}, &\quad & i_4 < {\ell}_4 \leq i_4 + i_6, \\ &y_{4 {\ell}_4 - i_4 - i_6}, &\quad & i_4 + i_6 < {\ell}_4 \leq i_4 + i_6 + j_4. \end{alignedat} \right. 
\end{aligned} \end{displaymath} \par Note that for each $k$ $(k=1, 2, \ldots, 8)$, the range of $\ell$ with ${\mathcal{P}}_{\ell}^{(k)} \neq \emptyset$ is given by the following list: \begin{displaymath} \begin{aligned} {\mathcal{P}}_{\ell}^{(1)}: \quad &0 \leq \ell \leq i_1 + i_3 -2, \\ {\mathcal{P}}_{\ell}^{(2)}: \quad &i_3 \leq \ell \leq N - j_2 - 3 = r_3, \\ {\mathcal{P}}_{\ell}^{(3)}: \quad &i_1 \leq \ell \leq N - j_3 - j_4 - 3, \\ {\mathcal{P}}_{\ell}^{(4)}: \quad &i_1 + i_3 \leq \ell \leq i_1 + \cdots + i_4 -2, \\ {\mathcal{P}}_{\ell}^{(5)}: \quad &i_1 + i_2 + i_3 \leq \ell \leq N - j_4 - 3 = r_1, \\ {\mathcal{P}}_{\ell}^{(6)}: \quad &i_1 + i_3 + i_4 \leq \ell \leq N - j_2 - j_3 -3, \\ {\mathcal{P}}_{\ell}^{(7)}: \quad &i_1 + \cdots + i_4 \leq \ell \leq i_1 + \cdots + i_6 -2, \\ {\mathcal{P}}_{\ell}^{(8)}: \quad &i_1 + \cdots + i_4 + i_6 \leq \ell \leq N - j_3 - 3 = r_2. \end{aligned} \end{displaymath} \par Now, we verify that ${\mathcal{P}}_{\ell}$, $\ell = 0, 1, \ldots, r$, satisfy the conditions (SV1), (SV2), and (SV3). In this case, the condition (SV1) means that $\bigcup_{\ell = 0}^{r} {\mathcal{P}}_{\ell}$ generates $I$, and it is satisfied. Since ${\mathcal{P}}_0 = {\mathcal{P}}_0^{(1)} = \{ x_{11} x_{31} \}$, the condition (SV2) is also satisfied. To check the condition (SV3), let $a, a''$ be two distinct elements in ${\mathcal{P}}_{\ell}$. We denote the indices of $a$ (resp.\ $a''$) by ${\ell}_s$ (resp.\ ${\ell}_s''$). First suppose $a, a'' \in {\mathcal{P}}_{\ell}^{(k)}$. Then there exists $s$ such that ${\ell}_s \neq {\ell}_s''$, and we may assume ${\ell}_s < {\ell}_s''$. Replacing ${\ell}_s''$ by ${\ell}_s$ in $a''$, we obtain the required element $a' \in {\mathcal{P}}_{{\ell}'}^{(k)}$ with ${\ell}' < \ell$. For example, take two distinct elements $a = x_{1 {\ell}_1} x_{3 {\ell}_3}, a'' = x_{1 {\ell}_1''} x_{3 {\ell}_3''} \in {\mathcal{P}}_{\ell}^{(1)}$ with ${\ell}_1 < {\ell}_1''$.
Then $a' = x_{1 {\ell}_1} x_{3 {\ell}_3''} \in {\mathcal{P}}_{{\ell}'}^{(1)}$, where ${\ell}' = {\ell}_1 + {\ell}_3'' - 2 < {\ell}_1'' + {\ell}_3'' - 2 = \ell$. Next, we assume $a \in {\mathcal{P}}_{\ell}^{(k)}$ and $a'' \in {\mathcal{P}}_{\ell}^{(k'')}$ with $k < k''$. If $k=1$, $k'' = 2$, then $a = x_{1 {\ell}_1} x_{3 {\ell}_3}$, $a'' = x_{1 {\ell}_1''} w_{3 {\ell}_3''} w_{4 {\ell}_4''}$. We can take $a' = x_{1 {\ell}_1''} x_{3 {\ell}_3} \in {\mathcal{P}}_{{\ell}'}^{(1)}$, where \begin{displaymath} \begin{aligned} {\ell}' &= {\ell}_1'' + {\ell}_3 - 2 \\ &= (\ell + 3 - ({\ell}_3'' + {\ell}_4'' + i_3)) + {\ell}_3 - 2 \\ &= \ell + 1 - {\ell}_3'' - {\ell}_4'' - (i_3 - {\ell}_3) \\ &< \ell. \end{aligned} \end{displaymath} Similarly, we can check the existence of the required ${\ell}'$ and $a' \in {\mathcal{P}}_{{\ell}'}^{(k')}$ for the other pairs $(k, k'')$; we just list the choices for $a'$ in Table \ref{tab:H1_list}. \begin{table} \caption{The choices for $a'$ in the case of ${\calH}_{17}$.} \label{tab:H1_list} \begin{center} \begin{tabular}{cccl} \hline $k$ & $k''$ & $k'$ & $a'$ \\ \hline $1$ & $2$ & $1$ & $x_{1 {\ell}_1''} x_{3 {\ell}_3}$ \\ $1$ & $3$ & $1$ & $x_{1 {\ell}_1} x_{3 {\ell}_3''}$ \\ $1$ & $4$--$8$ & \multicolumn{2}{c}{These cases do not occur.} \\ $2$ & $3$ & $1$ & $x_{1 {\ell}_1} x_{3 {\ell}_3''}$ \\ $2$ & $4$ & $2$ & $x_{1 {\ell}_1} x_{2 {\ell}_2''} x_{4 {\ell}_4''} = x_{1 {\ell}_1} w_{3 {\ell}_2''} w_{4 {\ell}_4''}$ \\ $2$ & $5$ & $2$ & $x_{1 {\ell}_1} w_{3 {\ell}_3''} x_{4 {\ell}_4''} = x_{1 {\ell}_1} w_{3 {\ell}_3''} w_{4 {\ell}_4''}$ \\ $2$ & $6$ & $2$ & $x_{1 {\ell}_1} x_{2 {\ell}_2''} w_{4 {\ell}_4''} = x_{1 {\ell}_1} w_{3 {\ell}_2''} w_{4 {\ell}_4''}$ \\ $2$ & $7$ & $2$ & $x_{1 {\ell}_1} x_{5 {\ell}_5''} x_{6 {\ell}_6''} = x_{1 {\ell}_1} w_{3 i_2 + {\ell}_5''} w_{4 i_4 + {\ell}_6''}$ \\ $2$ & $8$ & $2$ & $x_{1 {\ell}_1} x_{5 {\ell}_5''} y_{4 {\ell}_4''} = x_{1 {\ell}_1} w_{3 i_2 + {\ell}_5''} w_{4 i_4 + i_6 + {\ell}_4''}$ \\ $3$ & $4$ & $3$ &
$x_{3 {\ell}_3} x_{4 {\ell}_4''} x_{2 {\ell}_2''} = x_{3 {\ell}_3} w_{1 {\ell}_4''} w_{2 {\ell}_2''}$ \\ $3$ & $5$ & $3$ & $x_{3 {\ell}_3} x_{4 {\ell}_4''} w_{2 {\ell}_2''} = x_{3 {\ell}_3} w_{1 {\ell}_4''} w_{2 {\ell}_2''}$ \\ $3$ & $6$ & $3$ & $x_{3 {\ell}_3} x_{5 {\ell}_5''} x_{2 {\ell}_2''} = x_{3 {\ell}_3} w_{1 i_4 + {\ell}_5''} w_{2 {\ell}_2''}$ \\ $3$ & $7$ & $3$ & $x_{3 {\ell}_3} x_{5 {\ell}_5''} x_{6 {\ell}_6''} = x_{3 {\ell}_3} w_{1 i_4 + {\ell}_5''} w_{2 i_2+ {\ell}_6''}$ \\ $3$ & $8$ & $3$ & $x_{3 {\ell}_3} x_{5 {\ell}_5''} y_{2 {\ell}_2''} = x_{3 {\ell}_3} w_{1 i_4 + {\ell}_5''} w_{2 i_2 + i_6 + {\ell}_2''}$ \\ $4$ & $5$ & $4$ & $x_{2 {\ell}_2} x_{4 {\ell}_4''}$ \\ $4$ & $6$ & $4$ & $x_{2 {\ell}_2''} x_{4 {\ell}_4}$ \\ $4$ & $7,8$ & \multicolumn{2}{c}{These cases do not occur.} \\ $5$ & $6$ & $4$ & $x_{2 {\ell}_2''} x_{4 {\ell}_4}$ \\ $5$ & $7$ & $5$ & $x_{4 {\ell}_4} x_{6 {\ell}_6''} x_{5 {\ell}_5''} = x_{4 {\ell}_4} w_{2 i_2 + {\ell}_6''} w_{3 i_2+ {\ell}_5''}$ \\ $5$ & $8$ & $5$ & $x_{4 {\ell}_4} y_{2 {\ell}_2''} x_{5 {\ell}_5''} = x_{4 {\ell}_4} w_{2 i_2 + i_6 + {\ell}_2''} w_{3 i_2 + {\ell}_5''}$ \\ $6$ & $7$ & $6$ & $x_{2 {\ell}_2} x_{5 {\ell}_5''} x_{6 {\ell}_6''} = x_{2 {\ell}_2} x_{5 {\ell}_5''} w_{4 i_4+ {\ell}_6''}$ \\ $6$ & $8$ & $6$ & $x_{2 {\ell}_2} x_{5 {\ell}_5''} y_{4 {\ell}_4''} = x_{2 {\ell}_2} x_{5 {\ell}_5''} w_{4 i_4 + i_6 + {\ell}_4''}$ \\ $7$ & $8$ & $7$ & $x_{5 {\ell}_5''} x_{6 {\ell}_6}$ \\ \hline \end{tabular} \end{center} \end{table} \par \bigskip \par Next we consider the two exceptional cases: $\calH = \calH_1, \calH_{14}$: \par \noindent \begin{center} \begin{picture}(100,60)(-20,-5) \put(-20,39){${\calH}_1$:} \put(10,10){\circle{5}} \put(2,1){$2$} \put(35,10){\circle{5}} \put(38,1){$3$} \put(10,35){\circle{5}} \put(2,39){$1$} \put(35,35){\circle{5}} \put(38,39){$4$} \put(12.5,10){\line(1,0){20}} \put(12.5,35){\line(1,0){20}} \put(10,12.5){\line(0,1){20}} \put(35,12.5){\line(0,1){20}} \end{picture} 
\begin{picture}(90,60)(95,-5) \put(95,39){${\calH}_{14}$:} \put(130,10){\circle{5}} \put(122,1){$2$} \put(155,10){\circle{5}} \put(158,1){$3$} \put(130,35){\circle{5}} \put(122,39){$1$} \put(155,35){\circle{5}} \put(158,39){$4$} \put(132.5,10){\line(1,0){20}} \put(132.5,35){\line(1,0){20}} \put(130,12.5){\line(0,1){20}} \put(155,12.5){\line(0,1){20}} \put(132,12){\line(1,1){21}} \put(132,33){\line(1,-1){21}} \end{picture} \end{center} \par The case of ${\calH}_1$ is easy because in this case, \begin{displaymath} \begin{aligned} I &= (x_{11}, \ldots, x_{1 i_1}, x_{41}, \ldots, x_{4 i_4}) \cap (x_{11}, \ldots, x_{1 i_1}, x_{21}, \ldots, x_{2 i_2}) \\ &\cap (x_{21}, \ldots, x_{2 i_2}, x_{31}, \ldots, x_{3 i_3}) \cap (x_{31}, \ldots, x_{3 i_3}, x_{41}, \ldots, x_{4 i_4}) \\ &= (x_{1 {\ell}_1} x_{3 {\ell}_3} \; : \; 1 \leq {\ell}_s \leq i_s \ (s= 1, 3)) + (x_{2 {\ell}_2} x_{4 {\ell}_4} \; : \; 1 \leq {\ell}_s \leq i_s \ (s= 2, 4)) \\ &= (x_{11}, \ldots, x_{1 i_1}) \cap (x_{31}, \ldots, x_{3 i_3}) + (x_{21}, \ldots, x_{2 i_2}) \cap (x_{41}, \ldots, x_{4 i_4}). \end{aligned} \end{displaymath} Hence, $I$ is the sum of the two squarefree monomial ideals $I_1:=(x_{11}, \ldots, x_{1 i_1}) \cap (x_{31}, \ldots, x_{3 i_3})$ and $I_2:=(x_{21}, \ldots, x_{2 i_2}) \cap (x_{41}, \ldots, x_{4 i_4})$ with $\arithdeg I_i = \indeg I_i$ ($i=1, 2$). It is known by Schmitt and Vogel \cite[Theorem 1, p.\ 247]{SchmVo} that $\ara I_i = \pd_S S/I_i$ ($i=1, 2$). Since $X (I_1) \cap X (I_2) = \emptyset$, we have $\ara I = \pd_S S/I$ by Proposition \ref{claim:connected}. \par For the case of ${\calH}_{14}$, we use Lemma \ref{claim:SchmVo} again. In this case, $\pd_S S/I = N-2$. Note that $N = i_1 + \cdots + i_6$ since $j_2 = j_3 = j_4 = 0$. 
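As a sanity check on the ${\calH}_1$ case above, the displayed decomposition of $I$ can be verified mechanically for small parameters. The following Python sketch (toy sizes $i_1 = \cdots = i_4 = 2$, hypothetical variable names) encodes squarefree monomials as frozensets of variables, so that lcm's of generators become unions:

```python
from itertools import product

def minimalize(gens):
    """Minimal generating set of a squarefree monomial ideal."""
    gens = set(gens)
    return {g for g in gens if not any(h < g for h in gens)}

def intersect(I, J):
    """Intersection of monomial ideals, generated by pairwise lcm's (= unions)."""
    return minimalize(a | b for a, b in product(I, J))

i = 2  # toy value: i_1 = i_2 = i_3 = i_4 = 2
x = {s: [f"x{s}{t}" for t in range(1, i + 1)] for s in (1, 2, 3, 4)}
# the four minimal primes of I in the H_1 case
prime = {1: x[1] + x[4], 2: x[1] + x[2], 3: x[2] + x[3], 4: x[3] + x[4]}
P = {a: {frozenset([v]) for v in vs} for a, vs in prime.items()}

lhs = intersect(intersect(P[1], P[2]), intersect(P[3], P[4]))
rhs = minimalize({frozenset([a, b]) for a in x[1] for b in x[3]} |
                 {frozenset([a, b]) for a in x[2] for b in x[4]})
assert lhs == rhs  # I = (x_{1 l1} x_{3 l3}) + (x_{2 l2} x_{4 l4})
```

The check confirms, for this toy instance, that $P_1 \cap P_2 \cap P_3 \cap P_4$ has exactly the quadratic generators $x_{1 \ell_1} x_{3 \ell_3}$ and $x_{2 \ell_2} x_{4 \ell_4}$.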
We define ${\mathcal{P}}_{\ell} := {\mathcal{P}}_{\ell}^{(1)} \cup \cdots \cup {\mathcal{P}}_{\ell}^{(5)}$, $\ell = 0, 1, \ldots, N-3$, by \begin{displaymath} \begin{aligned} {\mathcal{P}}_{\ell}^{(1)} &= \left\{ x_{1 {\ell}_1} x_{3 {\ell}_3} \; : \; {\ell}_1 + {\ell}_3 = \ell + 2; \ 1 \leq {\ell}_s \leq i_s \ (s = 1, 3) \right\}, \\ {\mathcal{P}}_{\ell}^{(2)} &= \left\{ x_{1 {\ell}_1} w_{3 {\ell}_3} w_{4 {\ell}_4} \; : \; \begin{aligned} &{\ell}_1 + {\ell}_3 + {\ell}_4 + i_3 = \ell + 3, \\ &1 \leq {\ell}_1 \leq i_1; \ 1 \leq {\ell}_3 \leq i_2 + i_5; \ 1 \leq {\ell}_4 \leq i_4 + i_6, \\ &\text{${\ell}_3 \leq i_2$ or ${\ell}_4 \leq i_4$} \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(3)} &= \left\{ x_{3 {\ell}_3} w_{1 {\ell}_1} w_{2 {\ell}_2} \; : \; \begin{aligned} &{\ell}_3 + {\ell}_1 + {\ell}_2 + i_1 = \ell + 3, \\ &1 \leq {\ell}_3 \leq i_3; \ 1 \leq {\ell}_1 \leq i_4 + i_5; \ 1 \leq {\ell}_2 \leq i_2 + i_6, \\ &\text{${\ell}_1 \leq i_4$ or ${\ell}_2 \leq i_2$} \end{aligned} \right\}, \\ {\mathcal{P}}_{\ell}^{(4)} &= \left\{ x_{2 {\ell}_2} x_{4 {\ell}_4} \; : \; {\ell}_2 + {\ell}_4 + i_1 + i_3 = \ell + 2; \ 1 \leq {\ell}_s \leq i_s \ (s = 2, 4) \right\}, \\ {\mathcal{P}}_{\ell}^{(5)} &= \left\{ x_{5 {\ell}_5} x_{6 {\ell}_6} \; : \; {\ell}_5 + {\ell}_6 + i_1 + \cdots + i_4 = \ell + 3; \ 1 \leq {\ell}_s \leq i_s \ (s = 5, 6) \right\}. \\ \end{aligned} \end{displaymath} Here, \begin{displaymath} \begin{alignedat}{3} w_{1 {\ell}_1} &= \left\{ \begin{alignedat}{3} &x_{4 {\ell}_1}, &\quad & 1 \leq {\ell}_1 \leq i_4, \\ &x_{5 {\ell}_1 - i_4}, &\quad & i_4 < {\ell}_1 \leq i_4 + i_5, \end{alignedat} \right. &\qquad w_{2 {\ell}_2} &= \left\{ \begin{alignedat}{3} &x_{2 {\ell}_2}, &\quad & 1 \leq {\ell}_2 \leq i_2, \\ &x_{6 {\ell}_2 - i_2}, &\quad & i_2 < {\ell}_2 \leq i_2 + i_6, \end{alignedat} \right. 
\\ w_{3 {\ell}_3} &= \left\{ \begin{alignedat}{3} &x_{2 {\ell}_3}, &\quad & 1 \leq {\ell}_3 \leq i_2, \\ &x_{5 {\ell}_3 - i_2}, &\quad & i_2 < {\ell}_3 \leq i_2 + i_5, \end{alignedat} \right. &\qquad w_{4 {\ell}_4} &= \left\{ \begin{alignedat}{3} &x_{4 {\ell}_4}, &\quad & 1 \leq {\ell}_4 \leq i_4, \\ &x_{6 {\ell}_4 - i_4}, &\quad & i_4 < {\ell}_4 \leq i_4 + i_6. \end{alignedat} \right. \end{alignedat} \end{displaymath} \par Note that for each $k$ $(k=1, 2, \ldots, 5)$, the range of $\ell$ with ${\mathcal{P}}_{\ell}^{(k)} \neq \emptyset$ is given by the following list: \begin{displaymath} \begin{aligned} {\mathcal{P}}_{\ell}^{(1)}: \quad &0 \leq \ell \leq i_1 + i_3 -2, \\ {\mathcal{P}}_{\ell}^{(2)}: \quad &i_3 \leq \ell \leq \max \{ N - i_5 - 3, N - i_6 - 3 \}, \\ {\mathcal{P}}_{\ell}^{(3)}: \quad &i_1 \leq \ell \leq \max \{ N - i_5 - 3, N - i_6 - 3 \}, \\ {\mathcal{P}}_{\ell}^{(4)}: \quad &i_1 + i_3 \leq \ell \leq i_1 + \cdots + i_4 -2, \\ {\mathcal{P}}_{\ell}^{(5)}: \quad &i_1 + \cdots + i_4 - 1\leq \ell \leq i_1 + \cdots + i_6 -3 = N - 3. \end{aligned} \end{displaymath} \par In the same way as in the case of ${\calH}_{17}$, we can check that ${\mathcal{P}}_{\ell}$, $\ell = 0, 1, \ldots, N-3$, satisfy the conditions (SV1), (SV2), and (SV3). For the condition (SV3), as for ${\calH}_{17}$, we list the choices for $a' \in {\mathcal{P}}_{{\ell}'}^{(k')}$ for given $a, a'' \in {\mathcal{P}}_{\ell}$ in Table \ref{tab:H3_list}.
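Since (SV2) and (SV3) are finite combinatorial statements, they can be checked mechanically on small instances. A hedged Python sketch for the single family ${\mathcal{P}}_{\ell}^{(1)}$ alone (encoding squarefree monomials as frozensets, so that the product of two monomials corresponds to the union of their variable sets; the encoding and toy sizes are illustrative only):

```python
from itertools import combinations

def check_sv(P):
    """Check the Schmitt-Vogel conditions (SV2) and (SV3) for sets P[0], ..., P[r]
    of squarefree monomials encoded as frozensets of variables."""
    assert len(P[0]) == 1                        # (SV2): P_0 consists of one monomial
    for ell in range(len(P)):
        for a, a2 in combinations(P[ell], 2):    # (SV3): distinct a, a'' in P_ell
            # some a' in an earlier P_{ell'} must divide the product a * a''
            assert any(ap <= (a | a2) for lp in range(ell) for ap in P[lp])

# Toy instance of the family P^(1): monomials x_{1,l1} x_{3,l3} with l1 + l3 = ell + 2
i1, i3 = 3, 3
P = [{frozenset({("x1", l1), ("x3", ell + 2 - l1)})
      for l1 in range(max(1, ell + 2 - i3), min(i1, ell + 1) + 1)}
     for ell in range(i1 + i3 - 1)]
check_sv(P)  # raises AssertionError if a condition fails
```

Within ${\mathcal{P}}_{\ell}^{(1)}$, the element $a' = x_{1\ell_1} x_{3\ell_3''}$ always divides $a \cdot a''$ and sits at a strictly smaller level, which is exactly what the check confirms.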
\begin{table} \caption{The choices for $a'$ in the case of ${\calH}_{14}$.} \label{tab:H3_list} \begin{center} \begin{tabular}{ccccl} \hline $k$ & $k''$ & $k'$ & additional condition & $a'$ \\ \hline $1$ & $2$ & $1$ & & $x_{1 {\ell}_1''} x_{3 {\ell}_3}$ \\ $1$ & $3$ & $1$ & & $x_{1 {\ell}_1} x_{3 {\ell}_3''}$ \\ $1$ & $4,5$ & \multicolumn{3}{c}{These cases do not occur.} \\ $2$ & $3$ & $1$ & & $x_{1 {\ell}_1} x_{3 {\ell}_3''}$ \\ $2$ & $4$ & $2$ & & $x_{1 {\ell}_1} x_{2 {\ell}_2''} x_{4 {\ell}_4''} = x_{1 {\ell}_1} w_{3 {\ell}_2''} w_{4 {\ell}_4''}$ \\ $2$ & $5$ & $2$ & ${\ell}_3 \leq i_2$ & $x_{1 {\ell}_1} w_{3 {\ell}_3} x_{6 {\ell}_6''} = x_{1 {\ell}_1} w_{3 {\ell}_3} w_{4 i_4 + {\ell}_6''}$ \\ $2$ & $5$ & $2$ & ${\ell}_4 \leq i_4$ & $x_{1 {\ell}_1} x_{5 {\ell}_5''} w_{4 {\ell}_4} = x_{1 {\ell}_1} w_{3 i_2 + {\ell}_5''} w_{4 {\ell}_4}$ \\ $3$ & $4$ & $3$ & & $x_{3 {\ell}_3} x_{4 {\ell}_4''} x_{2 {\ell}_2''} = x_{3 {\ell}_3} w_{1 {\ell}_4''} w_{2 {\ell}_2''}$ \\ $3$ & $5$ & $3$ & ${\ell}_1 \leq i_4$ & $x_{3 {\ell}_3} w_{1 {\ell}_1} x_{6 {\ell}_6''} = x_{3 {\ell}_3} w_{1 {\ell}_1} w_{2 i_2 + {\ell}_6''}$ \\ $3$ & $5$ & $3$ & ${\ell}_2 \leq i_2$ & $x_{3 {\ell}_3} x_{5 {\ell}_5''} w_{2 {\ell}_2} = x_{3 {\ell}_3} w_{1 i_4 + {\ell}_5''} w_{2 {\ell}_2}$ \\ $4$ & $5$ & \multicolumn{3}{c}{This case does not occur.} \\ \hline \end{tabular} \end{center} \end{table} \par This completes the proof of Theorem \ref{claim:Adual_4gen}. \begin{remark} \label{rmk:H1toH3} For the exceptional case $\calH = \calH_{14}$, we can also consider the corresponding elements in the construction for $\calH_{17}$; in fact, these generate $I$ up to radical. However, in this case the range of $\ell$ with ${\mathcal{P}}_{\ell} \neq \emptyset$ is $0 \leq \ell \leq N-2$ because of ${\mathcal{P}}_{\ell}^{(7)}$. Therefore, we cannot obtain $\ara I$ from this construction. This is also true for $\calH = \calH_1$.
\end{remark} \begin{acknowledgement} The authors thank the referee for reading the manuscript carefully. \end{acknowledgement}
\section{Introduction}\label{sec:intro} \cite{baker2016} has created a novel method to measure Economic Policy Uncertainty, the EPU index, which has attracted significant attention and been followed by a strand of literature since its proposal. However, it entails a carefully designed framework and significant manual effort to complete its calculation. Recently, there has been significant progress in the methodology of the generation process of EPU\@, e.g.\ differentiating contexts for uncertainty~\citep{saltzman2018}, generating the index based on Google Trends~\citep{castelnuovo2017}, and correcting EPU for Spain~\citep{ghirelli2019}. I extend the scope of index generation by proposing a generalized method, namely the Wasserstein Index Generation model (WIG). The WIG model incorporates several methods that are widely used in machine learning: word embedding~\citep{mikolov2013}, Wasserstein Dictionary Learning~\citep[WDL]{schmitz2018}, the Adam algorithm~\citep{kingma2015}, and Singular Value Decomposition (SVD). The idea behind all of these methods is essentially dimension reduction. Indeed, WDL reduces the dataset to bases and associated weights, and SVD shrinks the dimension of the bases once more to produce a one-dimensional index for further analysis. I test WIG's effectiveness in generating the Economic Policy Uncertainty index~\citep[EPU]{baker2016} and compare the result against an existing one~\citep{azqueta-gavaldon2017} generated by the auto-labeling Latent Dirichlet Allocation \citep[LDA]{blei2003} method. Results reveal that the model requires a much smaller dataset to achieve better results, without human intervention. Thus, it can also be applied to generate other time-series indices from news headlines in a faster and more efficient manner. Recently, \cite{shiller2017} called for more attention to collecting and analyzing text data of economic interest.
The WIG model responds to this call in terms of generating time-series sentiment indices from texts by means of machine learning algorithms. \section{Methods and Material} \subsection{Wasserstein Index Generation Model}\label{subsec:wig} \citet{schmitz2018} proposes an unsupervised machine learning technique to cluster documents into topics, called Wasserstein Dictionary Learning (WDL), wherein both documents and topics are considered as discrete distributions over the vocabulary. These discrete distributions can be reduced to bases and corresponding weights that capture most of the information in the dataset and thus shrink its dimension. Consider a corpus with \(M\) documents and a vocabulary of \(N\) words. These documents form a matrix \(Y=\left[y_{m}\right] \in \mathbb{R}^{N \times M}\), where \(m \in\left\{1, \dots, M\right\}\) and each \(y_{m} \in \Sigma^{N}\); that is, each document is a discrete distribution lying in the \(N\)-dimensional simplex. Our aim is to represent and reconstruct these documents by topics \(T \in \mathbb{R}^{N \times K}\) with associated weights \(\Lambda \in \mathbb{R}^{K \times M}\), where \(K\) is the total number of topics to be clustered. Note that each topic is a distribution over the vocabulary, and each weight column represents its associated document as a weighted barycenter of the underlying topics. We also obtain a distance matrix \(C \in \mathbb{R}^{N \times N}\) for the total vocabulary by first generating word embeddings and then measuring word distances pairwise with a metric function, that is, \(C_{ij} = d^2(x_i, x_j)\), where \(x \in \mathbb{R}^{N \times D}\), \(d(\boldsymbol{\cdot})\) is the Euclidean distance, and \(D\) is the embedding depth. \footnote{\cite{saltzman2018} proposes differentiating the use of ``uncertainty'' in both positive and negative contexts.
In fact, word embedding methods, for example Word2Vec \citep{mikolov2013}, can do more. They consider not only the positive and negative contexts for a given word, but all possible contexts for all words. } Further, we can calculate the distances between documents and topics, namely the Sinkhorn distance. It is essentially a \(2\)-Wasserstein distance, with the addition of an entropic regularization term to ensure faster computation. \footnote{One could refer to \cite{cuturi2013} for the Sinkhorn algorithm and \cite{villani2003} for the theoretic results in optimal transport.} \begin{definition}[Sinkhorn Distance] Given \(\mu, \nu \in \mathscr{P}(\Omega)\), where \(\mathscr{P}(\Omega)\) denotes the Borel probability measures on \(\Omega\), \(\Omega \subset \mathbb{R}^{N}\), and \(C\) the cost matrix, \begin{equation}\label{def:sinkhorn} \begin{aligned} S_{\varepsilon} (\mu, \nu; C) & := \min_{\pi \in \Pi(\mu, \nu)} \langle\pi , C\rangle + \varepsilon \mathcal{H}(\pi) \\ s.t.\ \Pi(\mu, \nu) & :=\left\{\pi \in \mathbb{R}_{+}^{N \times N}, \pi \mathds{1}_{N}=\mu, \pi^{\top} \mathds{1}_{N}=\nu\right\}, \end{aligned} \end{equation} where \(\mathcal{H}(\pi) := \langle\pi,\log(\pi)\rangle\) and \(\varepsilon\) is the Sinkhorn weight. \end{definition} Given the distance function for a single document, we can set up the loss function for the training process: \begin{equation}\label{eq:lossfcn} \begin{aligned} & \min_{R, A} \sum_{m=1}^{M} \mathcal{L}\left(y_m, y_{S_{\varepsilon}} \left(T(R), \lambda_m(A) ; C, \varepsilon\right)\right), \\ & given~~t_{nk}(R) := \frac{e^{r_{nk}}}{\sum_{n'} e^{r_{n'k}}},~~ \lambda_{km}(A) := \frac{e^{a_{km}}}{\sum_{k'} e^{a_{k'm}}}. \end{aligned} \end{equation} In Equation~\ref{eq:lossfcn}, \(y_{S_{\varepsilon}}\left(\boldsymbol{\cdot}\right)\) is the reconstructed document given topics \(T\) and weight \(\lambda_m\) under the Sinkhorn distance (Equation~\ref{def:sinkhorn}).
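To make the Sinkhorn computation concrete, the following is a minimal NumPy sketch of the Sinkhorn--Knopp scaling iteration used to evaluate this distance (an illustrative sketch, not the paper's implementation; it returns only the transport-cost term \(\langle\pi, C\rangle\), and the toy histograms and cost matrix are hypothetical):

```python
import numpy as np

def sinkhorn_cost(mu, nu, C, eps=0.1, n_iter=500):
    """Entropy-regularised OT between histograms mu, nu with cost matrix C."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):           # alternating marginal-matching scalings
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    pi = u[:, None] * K * v[None, :]  # approximate optimal coupling
    return float((pi * C).sum())

# toy vocabulary of 3 "words" at positions 0, 1, 2 with squared-distance costs
pts = np.array([0.0, 1.0, 2.0])
C = (pts[:, None] - pts[None, :]) ** 2
mu = np.array([0.7, 0.2, 0.1])
nu = np.array([0.1, 0.2, 0.7])
d_self = sinkhorn_cost(mu, mu, C)
d_cross = sinkhorn_cost(mu, nu, C)
```

For small \(\varepsilon\) the returned value approaches the unregularized optimal transport cost: here the self-cost is near zero, while the \(\mu \to \nu\) cost is close to \(2.0\), the cost of the monotone optimal plan.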
Moreover, the constraint that \(T\) and \(\Lambda\) be distributions in Equation~\ref{def:sinkhorn} is automatically fulfilled by the column-wise \textit{Softmax} operation in the loss function. The process is formulated in Algorithm~\ref{alg:wdl}: we first initialize the matrices \(R\) and \(A\) with samples from a standard normal distribution and take \textit{Softmax} of them to obtain \(T\) and \(\Lambda\). \(\nabla_{T}\mathcal{L}(\boldsymbol{\cdot}~;\varepsilon)\) and \(\nabla_{\Lambda}\mathcal{L}(\boldsymbol{\cdot}~;\varepsilon)\) are the gradients of the loss function with respect to the topics \(T\) and the weights \(\Lambda\). The parameters \(R\) and \(A\) are then updated by the Adam optimizer with these gradients and learning rate \(\rho\). The \textit{Softmax} operation is applied again to keep the unit-simplex constraints satisfied (as shown in Equation~\ref{eq:lossfcn}). \begin{algorithm} \caption{Wasserstein Index Generation\protect} \begin{algorithmic}[1] \REQUIRE Word distribution matrix \(Y\). Batch size \(s\).\\ Sinkhorn weight \(\varepsilon\). Adam learning rate \(\rho\). \ENSURE Topics \(T\), weights \(\Lambda\). \STATE Initialize \(R, A \sim \mathcal{N}(0,1)\). \STATE \(T \leftarrow Softmax(R)\), \(\Lambda \leftarrow Softmax(A)\). \FOR{Each batch of documents} \STATE \(R \leftarrow R - Adam\left(\nabla_{T} \mathcal{L}(\boldsymbol{\cdot}~;\varepsilon);~\rho\right)\),\\ \(A \leftarrow A - Adam\left(\nabla_{\Lambda} \mathcal{L}(\boldsymbol{\cdot}~;\varepsilon);~\rho\right)\). \STATE \(T \leftarrow Softmax\left(R\right)\), \(\Lambda \leftarrow Softmax\left(A\right)\). \ENDFOR \end{algorithmic} \label{alg:wdl} \end{algorithm} Next, we generate the time-series index. Applying Singular Value Decomposition (SVD) with one component, we can shrink the vocabulary dimension from \(T^{N \times K}\) to \(\widehat T^{1 \times K}\).
Next, we multiply \(\widehat T\) by \(\Lambda^{K \times M}\) to get \(Ind^{1 \times M}\), the document-wise score given by the SVD\@. Summing these scores by month and scaling the resulting series to have a mean of 100 and unit standard deviation, we obtain the final index. \subsection{Data and Computation} I collected data from \textit{The New York Times} comprising news headlines from Jan.~1, 1980 to Dec.~31, 2018. The corpus contains 11,934 documents and 8,802 unique tokens. \footnote{ Plots given in Figure~\ref{results}, however, are from Jan.~1, 1985 to Aug.~31, 2016 to maintain the same range as~\cite{azqueta-gavaldon2017} for comparison.} Next, I preprocess the corpus for the training process, for example by removing special symbols, combining entities, and lemmatizing each token. \footnote{Lemmatization refers to the process of converting each word to its dictionary form according to its context.} Given this lemmatized corpus, I use Word2Vec to generate embedding vectors for the entire dictionary and can thus calculate the distance matrix \(C\) for any pair of words. To calculate the gradients (as shown in Algorithm~\ref{alg:wdl}), I choose the automatic differentiation library PyTorch \citep{paszke2017} to differentiate the loss function and then update the parameters using the Adam algorithm \citep{kingma2015}. To determine several important hyper-parameters, I use cross validation, as is common in machine learning. One-third of the documents are set aside as testing data and the rest are used for training: embedding depth \(D = 10\), Sinkhorn weight \(\varepsilon = 0.1\), batch size \(s = 64\), number of topics \(K = 4\), and Adam learning rate \(\rho = 0.005\). Once the parameters are set at their optimal values, the entire dataset is used for training, and thus the topics \(T\) and their associated weights \(\Lambda\) are obtained.
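The SVD and rescaling steps described above can be sketched in a few lines of NumPy (toy shapes and random inputs, purely illustrative; in the actual model, \(T\) and \(\Lambda\) come from Algorithm~\ref{alg:wdl}):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 50, 4, 120                      # vocab size, topics, documents (toy)
T = rng.random((N, K)); T /= T.sum(0)     # column-stochastic topic matrix
Lam = rng.random((K, M)); Lam /= Lam.sum(0)

# Rank-one SVD of T: the first right-singular vector gives a 1 x K topic score.
U, s, Vt = np.linalg.svd(T, full_matrices=False)
T_hat = Vt[0:1, :]                        # shape (1, K)

scores = (T_hat @ Lam).ravel()            # document-wise scores, shape (M,)

# Aggregate by month (toy calendar: 12 months of 10 documents each), then
# rescale to mean 100 and unit standard deviation.
months = np.repeat(np.arange(12), 10)
monthly = np.array([scores[months == m].sum() for m in range(12)])
index = 100 + (monthly - monthly.mean()) / monthly.std()
```

The final `index` array is the monthly time series; by construction it has mean 100 and standard deviation 1.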
\section{Results}\label{results} \begin{figure}[H] \centering \includegraphics[width=\linewidth]{newsvdplot_anno.png} \caption{ Original EPU~\protect\citep{baker2016}, EPU with LDA~\protect\citep{azqueta-gavaldon2017}, and EPU with WIG in Sec.~\protect\ref{subsec:wig}. } \label{fig:epu} \end{figure} As shown in Figure~\ref{fig:epu}, the EPU index generated by the WIG model clearly resembles the original EPU\@. Moreover, WIG detects emotional spikes better than LDA, especially during major geopolitical events, such as ``Gulf War I,'' ``Bush Election,'' ``9/11,'' ``Gulf War II,'' and so on. For comparison, I calculated the cumulative difference between the original EPU and the indices generated by WIG and LDA, respectively (Figure~\ref{fig:cumsumdiff}, \ref{appen}). Results indicate that the WIG model slightly outperforms LDA. To examine this point further, I apply the Hodrick--Prescott filter\footnote{ The HP filter was applied with a monthly smoothing parameter of 129600. } to the three EPU indices and calculate Pearson's and Spearman's correlation coefficients between the raw series, the cycle components, and the trend components, as shown in Table~\ref{tab:correlation}, \ref{appen}. These tests also suggest that the series generated by WIG captures the EPU's behavior better than LDA over this three-decade period. Moreover, this method requires only a small dataset compared with LDA\@. The dataset used in this article contains only news headlines, and the dimensionality of the dictionary is only a small fraction of that used by the LDA method. The WIG model takes only half an hour of computation and still produces similar results.\footnote{ A comparison of the datasets is given in Table~\ref{tab:comparison}, \ref{appen}. } Further, it extends the scope of automation in the generation process. Previously, LDA was considered an automatic-labeling method, but it still requires human interpretation of topic terms to produce time-series indices.
By introducing SVD, we eliminate this requirement and generate the index automatically, as a black-box method. However, the model by no means loses interpretability: the key terms are still retrievable from the WDL output if one wishes to view them. Lastly, given these advantages, the WIG model is not restricted to generating EPU; it could potentially be applied to any dataset on a given topic whose time-series sentiment index is of economic interest. The only requirement is that the input corpus be related to that topic, which is easily satisfied. \section{Conclusions} I proposed a novel method for generating time-series indices of economic interest using unsupervised machine learning techniques. It can be applied as a black-box method, requires only a small dataset, and is applicable to the generation of other time-series indices. The method incorporates techniques from machine learning research, including word embedding, Wasserstein Dictionary Learning, and the widely used Adam algorithm. \section*{Acknowledgements} I am grateful to Alfred Galichon for launching this project and to Andr\'es Azqueta-Gavald\'on for kindly providing his EPU data. I would also like to express my gratitude to the referees at the 3rd Workshop on Mechanism Design for Social Good (MD4SG~'19) at the ACM Conference on Economics and Computation (EC~'19) and the participants at the Federated Computing Research Conference (FCRC 2019) for their helpful remarks and discussions. I also appreciate the helpful suggestions from the anonymous referee. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\section{Introduction}\label{section-Introduction} We consider the following nonlinear heat equation (NLH) \begin{equation} \label{NLH} \left\{ \begin{array}{l} u_t=\Delta u + |u|^{p-1}u,\\ u(.,0)=u_0\in L^\infty (\R^N,\R), \end{array} \right. \mbox{ } \end{equation} where $p>1$ and $u(x,t):\R^N\times [0,T)\to \R$. Equation \eqref{NLH} is considered as a model for many physical situations, such as heat transfer, combustion theory, thermal explosion, etc.\ (see more in Kapila \cite{KSJAM80}, Kassoy and Poland \cite{KPsiam80,KPsiam81}, Bebernes and Eberly \cite{BEbook89}). First, note that equation \eqref{NLH} is well-posed in $L^\infty$. More precisely, for each $u_0 \in L^\infty(\R^N)$, one of the following statements holds: \begin{itemize} \item either the solution is global in time; \item or the maximal existence time is finite, i.e.\ $T_{max} <+\infty$, and \begin{equation}\label{norm-infty-u-infty} \lim_{t \to T_{max}} \left\| u(\cdot, t) \right\|_{L^\infty} = +\infty. \end{equation} \end{itemize} In the latter case, $T_{max} >0$ is called the blowup time of the solution, and a point $a \in \mathbb{R}^N$ is called a blowup point of the solution if there exists a sequence $(a_n,t_n) \to (a,T)$ as $ n \to +\infty$ such that $$ \left| u(a_n, t_n) \right| \to +\infty \text{ as } n \to + \infty. $$ \iffalse We can see that we have the trivial blowup solution to \eqref{NLH} given as follows $$ \psi_T(t) = \kappa (T-t)^{-\frac{1}{p-1}} \text{ where } \kappa = (p-1)^{-\frac{1}{p-1}}. $$ According to the classification investigated by Merle and Matano \cite{MMcpam04}, the blowup solutions of the equation \eqref{NLH} satisfying the estimate \begin{equation}\label{defi-type-I} \left\| u(\cdot, t) \right\|_{L^\infty(\mathbb R^N)} \le C \psi_T(t), \forall t \in [0,T), \end{equation} are called \textit{Type I} blowup solutions; otherwise, they are of \textit{Type II}. In the context of the paper, we aim to study \textit{Type I}.
There is a huge literature concerning the study of blowup solutions of \textit{Type I}; we cite for example ...\\ \fi \medskip Blowup for equation \eqref{NLH} has been studied intensively by many mathematicians and no list can be exhaustive. This is the case for the question of deriving blowup profiles, which is completely understood in one space dimension (see in particular Herrero and Vel\'azquez \cite{HVasps92,HVdie92,HVaihn93,HVcras94}), unlike the higher dimensional case, where much less is known (see for example Vel\'azquez \cite{Vcpde92,VELtams93,VELiumj93}, Zaag \cite{ZAAaihp02,ZAAcmp02, Zdmj06,Zmme02} together with the recent contributions by Merle and Zaag \cite{MZimrn21,MZ22}). \medskip In the one dimensional case, Herrero and Vel\'azquez proved the following, unless the solution is space independent (see also Filippas and Kohn \cite{FKcpam92}): \begin{itemize} \item[$(i)$] Either \begin{equation*} \sup_{|x-a| \le K \sqrt{(T-t)|\ln(T-t)|}} \left| (T-t)^{\frac{1}{p-1}}u(x,t) - \varphi\left(\frac{x-a}{\sqrt{(T-t)|\ln(T-t)|}} \right) \right| \to 0, \end{equation*} where $\varphi(z)=(p-1+b_g|z|^{2})^{-\frac{1}{p-1}}$ and the constant $b_g= \frac{(p-1)^2}{4p}$ is unique (note that Herrero and Vel\'azquez proved that this behavior is generic in \cite{HVcras94,HVasnsp92}). \item[$(ii)$] Or, for some integer $k \ge 2$, \begin{equation*} \sup_{|x-a| \le K (T-t)^{\frac{1}{2k}}} \left| (T-t)^{\frac{1}{p-1}}u(x,t) - \varphi_k\left(\frac{x-a}{(T-t)^{\frac{1}{2k}}} \right) \right| \to 0, \end{equation*} where $\varphi_k(z)=(p-1+b|z|^{2k})^{-\frac{1}{p-1}}$ and $b$ is an \textit{arbitrary} positive number. \end{itemize} In particular, we are interested in constructing blowup solutions with a prescribed behavior, given by a ``generic approximation'' called the blowup profile of the solution. \\ The existence of such solutions was observed by Vel\'{a}zquez, Galaktionov and Herrero \cite{VelGalHer91}, who indicated formally how one might find these solutions.
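The fact that any $b>0$ is admissible in (ii) can be checked by a direct computation, which we recall for the reader's convenience: in the self-similar variables $z=\frac{x-a}{(T-t)^{\frac{1}{2k}}}$, $s=-\ln(T-t)$, the diffusion term carries the exponentially small factor $e^{-s\left(1-\frac{1}{k}\right)}$, and the limiting equation is the first-order equation $0=-\frac{1}{2k}z\varphi_k'-\frac{\varphi_k}{p-1}+\varphi_k^p$. Indeed, from $\varphi_k(z)=(p-1+b z^{2k})^{-\frac{1}{p-1}}$ we get
\[
\varphi_k'(z)=-\frac{2kb z^{2k-1}}{p-1}\,\varphi_k^p(z),
\qquad
\varphi_k(z)=\left(p-1+b z^{2k}\right)\varphi_k^p(z),
\]
so that
\[
-\frac{1}{2k}z\varphi_k'-\frac{\varphi_k}{p-1}+\varphi_k^p
=\frac{b z^{2k}}{p-1}\varphi_k^p-\varphi_k^p-\frac{b z^{2k}}{p-1}\varphi_k^p+\varphi_k^p=0
\]
for every $b>0$.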
Later, Bricmont and Kupiainen \cite{BKnon94} gave a rigorous proof of the construction of such profiles (see also Herrero and Vel\'azquez \cite{HVdie92} for the profile $\varphi_4$). In \cite{AVjfdram97}, Angenent and Vel\'{a}zquez gave a construction of blowup solutions for the mean curvature flow inspired by the construction of (ii). Most of the constructions are made in one dimension $N=1$. In higher dimensions $N\geq 2$, Merle and Zaag recently gave the construction of a new profile of type I with a superlinear power in the Sobolev subcritical range; for more details see \cite{MZ22}. \medskip In this paper we revisit the construction of (ii) given in Section 4 of \cite{BKnon94}. Our construction has the advantage that it uses the modulation parameter. We shall use a topological ``shooting'' argument to prove the existence of the solutions constructed in Theorem \ref{Theorem-principal}. The construction is essentially an adaptation of Wazewski's principle (see \cite{Conbook78}, Chapter II and the references given there). The use of topological methods in the analysis of singularities for blow-up phenomena seems to have been introduced by Bressan in \cite{Breiumj90}. \medskip \noindent The following is the main result of the paper. \begin{theorem} \label{Theorem-principal}Let $p>1$ and $k \in \mathbb N$, $k \ge 2$. Then there exist $\delta_0>0$ and $ \tilde T >0$ such that for all $\delta \in (0,\delta_0)$ and $T \in (0,\tilde T)$, we can construct initial data $u_0 \in L^\infty(\R)$ such that the corresponding solution to equation \eqref{NLH} blows up in finite time $T$, and only at the origin. Moreover, there exists a flow $b(t) \in C^1(0,T)$ such that the following description is valid:\\ (i) For all $t\in [0,T)$, it holds that \begin{equation}\label{theorem-intermediate} \left \| (T-t)^{\frac{1}{p-1}} u(\cdot, t) - f_{b(t)}\left(\frac{\cdot}{(T-t)^{\frac{1}{2k}}} \right)\right \|_{L^\infty(\mathbb{R})}\lesssim (T-t)^{\frac{\delta}{2}(1-\frac{1}{k})}.
\end{equation} (ii) There exists $b^*>0$ such that $b(t)\to b^*$ as $t\to T$ and \begin{equation}\label{estimate-b-t-b-*} \left| b(t) - b^* \right| \lesssim (T-t)^{\delta (1-\frac 1k)}, \forall t \in (0,T), \end{equation} where $f_{b(t)}$ is defined by \begin{equation} f_{ b(t)}(y)= \left(p-1+ b(t) y^{2k}\right)^{-\frac{1}{p-1}}. \end{equation} \end{theorem} \begin{rem} One of the most important steps of the proof is to project the linearized partial differential equation \eqref{equation-q} on $H_m$, given by \eqref{eigenfunction-Ls}. We note that this is technically different from the work of Bricmont and Kupiainen \cite{BKnon94}, where the authors project the integral equation. Consequently, we will have an additional difficulty coming from the projection of the different terms, see for example Lemma \ref{Lemma-Pn_partialq} and Lemma \ref{Lemma-P-n-mathcal-L-s}. \end{rem} \begin{rem} Using $ \frac{b_0}{2} \le b(t) \le 2 b_0 $ and \eqref{estimate-b-t-b-*}, it holds that \begin{equation*} \left\| \left( p-1 + b(t) y^{2k} \right)^{-\frac{1}{p-1}} - \left( p-1 + b^* y^{2k} \right)^{-\frac{1}{p-1}} \right\|_{L^\infty(\mathbb{R})} \lesssim \left| b(t) - b^* \right| \lesssim (T-t)^{\delta\left( 1 -\frac{1}{k}\right)}, \end{equation*} which yields \begin{equation} \left \| (T-t)^{\frac{1}{p-1}} u(\cdot, t) - f_{b^*}\left(\frac{\cdot}{(T-t)^{\frac{1}{2k}}} \right)\right \|_{L^\infty(\mathbb{R})}\lesssim (T-t)^{\frac{\delta}{2}(1-\frac{1}{k})}, \forall t \in [0,T). \end{equation} \end{rem} The paper is organised as follows. In Sections \ref{section-Formulation} and \ref{Section-Spectral-properties-Ls}, we give the formulation of the problem. In Section \ref{section-Proof-assuming-estimates}, we give the proof of the existence of the profile assuming some technical results. In particular, we construct a shrinking set and give an example of initial data giving rise to the blow-up profile, and at the end of the section we give the proof of Theorem \ref{Theorem-principal}.
The topological argument of Section \ref{section-Proof-assuming-estimates} uses a number of estimates given by Proposition \ref{proposition-ode}; we give the proof of this proposition in Section \ref{Section-proof-proposition-ode}. \textbf{Acknowledgement:} The author Giao Ky Duong is supported by the scientific research project of An Giang University under the Grant 22.05.SP. \section{Formulation of the problem}\label{section-Formulation} Let us consider $k \in \N$, $k \ge 2$, and let $u$ be a solution to \eqref{NLH} which blows up in finite time $T>0$. Then, we introduce the following \textit{blow-up variables}: \begin{equation}\label{change-variable} w(y,s)=(T-t)^{-\frac{1}{p-1}}u(x,t),\;y=\frac{x}{(T-t)^{\frac{1}{2k}}},\;\; s=-\ln (T-t). \end{equation} Since $u$ solves \eqref{NLH} for all $(x,t)\in\R^N\times[0,T)$, $w(y,s)$ satisfies the following equation \begin{equation}\label{equation-w} \frac{\partial w}{\partial s}=I^{-2}(s) \Delta w - \frac{1}{2k} y \cdot \nabla w -\frac{1}{p-1} w +|w|^{p-1}w, \end{equation} where $I(s)$ is defined by \begin{equation}\label{defi-I-s} I(s) = e^{\frac{s}{2}\left(1-\frac{1}{k} \right)}. \end{equation} Adopting the \textit{setting} investigated in \cite{BKnon94}, we consider a $C^1$ flow $b$ and introduce \begin{equation}\label{decompose-equa-w-=q} w (y,s)= f_{b(s)}(y) \left(1 + e_{b(s)}(y)q(y,s) \right) \end{equation} where $f_b$ and $e_b$ are defined respectively by \begin{equation}\label{defi-profile} f_b(y)= \left(p-1+ b y^{2k}\right)^{-\frac{1}{p-1}}, \end{equation} and \begin{eqnarray} e_b(y) = \left( p-1 + b |y|^{2k} \right)^{-1} \label{defi-e-b}. \end{eqnarray} The flow $b$ will arise as an unknown function that will be constructed together with the linearized solution $q$. Since $f_b e_b=f_b^{p}$, by \eqref{decompose-equa-w-=q}, $q$ can be written as follows \begin{eqnarray}\label{decom-q-w-} q=wf_b^{-p}- (p-1+by^{2k}).
\end{eqnarray} \noindent In the following we consider $(q,b)(s)$ which satisfies the following equation \beqtn\label{equation-q} \pa_s q =\mathcal{L}_s q+ \mathcal{N} (q) +\mathcal{D}_s(\nabla q)+\mathcal{R}_s(q) +b'(s)\mathcal{M}(q), \eeqtn where \begin{eqnarray} \mathcal{L}_s q & = & I^{-2}(s) \Delta q-\frac{1}{2k}y \cdot \nabla q+q,\;\;\ I(s)=\dsp e^{\frac s2(1-\frac 1k)},\label{operator-Ls}\\ \mathcal{N}(q)&=&\left| 1+e_bq \right|^{p-1}(1+e_bq)-1-p e_b q \label{nonlinear-term}\\\mathcal{D}_s(\nabla q)&=&-\frac{4pkb}{p-1}I^{-2}(s) e_by^{2k-1}\nabla q, \label{equation-Ds} \\ \mathcal{R}_s(q)&=& I^{-2}(s)y^{2k-2} \left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \label{equation-Rs} \\ \mathcal{M} (q) & = &\frac{p}{p-1}y^{2k} (1+ e_bq) \label{new-term} \end{eqnarray} and the constants $\alpha_i$ are given by \begin{equation}\label{defi-constant-in-R} \begin{matrix} \alpha_1 =-2k(2k-1)\frac{b}{p-1}; & \alpha_2=4pk^2\frac{b^2}{(p-1)^2}; & \alpha_3=-2pk(2k-1)\frac{b}{p-1};\alpha_4 =4p(2p-1)k^2\frac{b^2}{(p-1)^2} .\\ \end{matrix} \end{equation} \iffalse Our goal is to prove the following Proposition: \begin{prop} There exists $ s_1<\infty$ and $\varepsilon >0$, such that for $s_0 > s_1$ and $g $ in $C^0(\mathbb{R})$ such that the equation \eqref{equation-w} with initial data \eqref{initial-data} has a unique classical solution, which satisfies \begin{equation}\label{theorem-intermediate} \left \|w(.,s)- f_{b(s)}(.)\right \|_{\infty}\to 0\mbox{ as $s\to \infty$}, \end{equation} and $$ b(s) = b(T, b_0, p, k) + O(e^{-s(k-1)}), \text{ and } b(T, b_0, p, k) >0. $$ \begin{equation}\label{defi-profile} f_b(y)= \left(p-1+ b y^{2k}\right)^{-\frac{1}{p-1}},\;\; k>1, \end{equation} and $f_b$ satisfy \begin{equation} 0=-\frac k 2\nabla f_b-\frac{1}{p-1}f_b+|f_b|^{p-1}f_b. \end{equation} \end{prop} First we introduce the derivation of $w$ from $f_b$. 
It is convenient to write $w $ in the form \beqtn\label{definition-q} w(y,s)=f_{b}(y) \left (1+e_b(y)q(y,s)\right), \eeqtn where, \begin{equation}\label{defi-e-b} e_b(y)=\left (p-1+by^{2k}\right)^{-1} . \end{equation} \fi \iffalse \begin{rem} From \eqref{definition-q}, we can write \[q=(w-f_b) (f_b e_b)^{-1}\] we note that $f_b e_b=f_b^{p}$, then we obtain \[q=(w-f_b) \left (p-1+by^{2k}\right )^{\frac{p}{p-1}}\] \textcolor{blue}{\[q=wf_b^{-p}-\left (p-1+by^{2k}\right )\] } \end{rem} \medskip \fi \section{Decomposition of the solution}\label{Section-Spectral-properties-Ls} \subsection{Fundamental solution associated with $\mathcal{L}_s$} Let us define the Hilbert space $L^2_{\rho_s}(\R)$ by \beqtn\label{define-L-2-rho-s} L^2_{\rho_s}(\R)=\{f \in L^2(\R),\; \int_{\R}f^2\rho_s dy<\infty\}, \eeqtn where \begin{equation}\label{defi-rho-s} \displaystyle \rho_s=\frac{I(s)}{\sqrt{4\pi}} e^{-\frac{I^{2}(s)y^2}{4}}, \end{equation} and $I(s)$ is defined by \eqref{defi-I-s}.\\ In addition, we denote \beqtn\label{eigenfunction-Ls} H_m(y,s)=I^{-m}(s)h_m(I(s) y)=\sum_{\ell=0}^{[\frac{m}{2}]}\frac{m!}{\ell!(m-2\ell)!}(-I^{-2}(s))^\ell y^{m-2\ell} \eeqtn where $h_m(z)$ is the Hermite polynomial (physicists' version) \beqtn\label{definition-h-n-z} h_m(z)=\sum_{\ell=0}^{[\frac{m}{2}]}\frac{m!}{\ell!(m-2\ell)!}(-1)^\ell z^{m-2\ell}. \eeqtn In particular, it is well known that \[\int h_nh_m\rho_s dy=2^nn!\delta_{nm},\] which yields \beqtn\label{scalar-product-hm} \dsp ({H}_n(.,s),{H}_m(.,s))_s=\int {H}_n(y){H}_m(y)\rho_s(y)dy=I^{-2n}2^n n!\delta_{nm}. \eeqtn \textbf{Jordan block's decomposition of $\mathcal{L}_s$} \medskip By a simple computation (relying on fundamental identities of Hermite polynomials), we have \beqtn\label{Ls-Hm} \mathcal{L}_s H_m(y,s)= \left\{ \begin{array}{lll} & \left(1-\frac{m}{2k} \right)H_m(y,s)+m(m-1)(1-\frac{1}{k})I^{-2}(s)H_{m-2}&\mbox{ if }m\geq 2\\ & \left(1-\frac{m}{2k} \right)H_m(y,s)&\mbox{ if }m\in\{0,1\} \end{array} \right. .
\eeqtn We define $\mathcal{K}_{s,\sigma}$ as the fundamental solution to \beqtn \pa_s \mathcal{K}_{s\sigma}=\mathcal{L}_s \mathcal{K}_{s\sigma} \text{ for } s > \sigma \mbox{ and }\mathcal{K}_{\sigma\sigma}=Id. \eeqtn By using the Mehler's formula, we can explicitly write its kernel as follows \beqtn\label{Kernel-Formula} \dsp \mathcal{K}_{s\sigma}(y,z)=e^{s-\sigma}\mathcal{F} \left ( e^{-\frac{s-\sigma}{2k}}y-z \right ) \eeqtn where \beqtn\label{Kernel-Formula-F} \dsp \mathcal{F}(\xi)=\frac{L(s,\sigma)}{\sqrt{4\pi}}e^{-\frac{L^2(s,\sigma)\xi^2}{4}}\mbox{ where } L(s, \sigma) =\frac{I(\sigma)}{\sqrt{1-e^{-(s-\sigma)}}}\mbox{ and }I(s)=\dsp e^{\frac s2(1-\frac 1k)}. \eeqtn In addition, we have the following equalities \beqtn \mathcal{K}_{s\sigma}H_n(.,\sigma)=e^{(s-\sigma)(1-\frac{n}{2k})}H_n(.,s), n \ge 0. \label{kernel-Hn} \eeqtn \iffalse b) \textit{Multi-dimensional case:} Let $N \ge 2$ and the case is a natural extension of the setting in the one dimensional one. Indeed, we introduce $L^2_{\rho_s}(\mathbb{R}^N)$ as in \eqref{define-L-2-rho-s} with $$ \rho_s (y) = \frac{I^N(s)}{(4\pi)^\frac{N}{2}} e^{- \frac{I^2(s)|y|^2}{4}}, y \in \mathbb{R}^N.$$\\ In addition, let $\alpha$ be a multi-index in $\mathbb{N}^N$, $\alpha = (\alpha_1,...,\alpha_N)$ and $|\alpha|= \alpha_1+...+\alpha_N$. Similarly the one dimensional case, we have Jordan's block's decomposition \begin{equation} \mathcal{L}_s H_\alpha (y) = \left\{ \begin{array}{rcl} \left( 1 - \frac{|\alpha|}{2k} \right) H_\alpha(y) + \end{array} \right. \end{equation} Corresponding to eigenvalue $\lambda_m = 1 - \frac{m}{2k}$, eigenspace $\mathcal{E}_m$ is given by $$\mathcal{E}_m = \left\langle H_\alpha(y), |\alpha| =m \right\rangle, $$ where $H_\alpha$ defined by \begin{eqnarray*} H_\alpha(y,s) = \Pi_{i=1}^N H_{\alpha_i}(y_i,s) \text{ with } H_{\alpha_i} \text{ given in } \eqref{eigenfunction-Ls}. 
\end{eqnarray*} In particular, semigroup $\mathcal{K}_{s,\sigma}$ has the same structure as the first case that its kernel explicitly given by $$ \mathcal{K}_{s, \sigma}(y,z) = \frac{e^{s- \sigma} L^N(s, \sigma) }{(4\pi)^\frac{N}{2}} e^{-\frac{L^2(s,\sigma)}{4} \left|e^{-\frac{s -\sigma}{2k} y - z} \right|}.$$ \fi \subsection{Decomposition of $q$} For the sake of controlling the unknown function $q \in L^2_{\rho_s}$, we will expand it with respect to the polynomials $H_m(y,s)$. We start by writing \begin{equation}\label{decomposition-q2} q(y,s) = \sum_{m=0}^{[M]}q_m(s) H_m(y,s)+ q_-(y,s) \equiv \dsp q_+(y,s)+q_-(y,s), \end{equation} where $[M]$ is the largest integer less than $M$, with \begin{equation}\label{defi-M} M=\frac{2kp}{p-1} . \end{equation} From \eqref{scalar-product-hm}, we define \beqtn\label{defi-q_m} \begin{array}{rcl} q_m(s) = P_m(q) = \dsp \frac{\left\langle q,H_m \right\rangle_{L^2_{\rho_s}}}{\langle H_m,H_m\rangle_{L^2_{\rho_s}}}, \end{array} \eeqtn as the projection of $q$ on $H_m$. In addition, $q_-(y,s)$ can be seen as the projection of $q$ onto $\{H_m, m \ge [M]+1\}$ and we also write \beqtn\label{projector-P-} q_-=P_-(q). \eeqtn \subsection{Equivalent norms} Let us consider $L^\infty_M$ defined by \begin{equation}\label{defi-L-M} L^{\infty}_M(\R)=\{g \text{ such that } (1+|y|^M)^{-1} g \in L^\infty (\R)\}, \end{equation} and $L^\infty_M$ is complete with the norm \begin{equation}\label{defi-norm-L-M} \|g\|_{L^\infty_M} = \|(1+|y|^M)^{-1} g \|_{L^\infty}, \end{equation} \iffalse Considering $C^0(\R)$ which we introduce the norms for: Let us consider $q\in C^0(\R)$ with the decomposition in \eqref{decomposition-q2}, \fi we introduce \beqtn\label{norm-q} \|q\|_s=\sum_{m=0}^{[M]}|q_m|+|q_-|_s, \eeqtn where \beqtn\label{norm-q-||-s} |q_-|_s=\displaystyle \sup_{y}\frac{|q_-(y,s)|}{I(s)^{-M}+|y|^M}.
\eeqtn It is straightforward to check that \[C_1(s)\|q\|_{L^\infty_M} \leq \|q\|_s\leq C_2(s)\|q\|_{L^\infty_M}\mbox{ where } C_1(s), C_2(s) >0.\] \iffalse In particular, we introduce $\|.\|_s$ as follows \beqtn\label{norm-q-2} \|q\|_s=\sum_{m=0}^{[M]}|q_m|+|q_-|_s, \eeqtn where \beqtn\label{defi-|-|-s-norm} |q_-|_s=\displaystyle \sup_{y}\frac{|q_-(y,s)|}{I(s)^{-M}+|y|^M} \eeqtn As a matter of fact, we have the following equivalence: \[C_1(s)\|q\|_{L^\infty_M} \leq \|q\|_s\leq C_2(s)\|q\|_{L^\infty_M} \mbox{ for some } C_1(s), C_2(s) \in \R^{*}_+,\] which yields \fi Finally, we derive that $L^\infty_M(\mathbb{R})$ is also complete with the norm $\|.\|_s$. \section{The existence assuming some technical results}\label{section-Proof-assuming-estimates} As mentioned before, we only give the proof in the one dimensional case. This section is devoted to the proof of Theorem \ref{Theorem-principal}. We proceed in five steps, each of them making a separate subsection. \begin{itemize} \item In the first subsection, we define a shrinking set $V_{\delta,b_0}(s)$ and translate our goal of making $q(s)$ go to $0$ in $L^\infty_M(\mathbb{R})$ in terms of belonging to $V_{\delta,b_0}(s)$. \item In the second subsection, we exhibit a $2k$-parameter family of initial data for equation \eqref{equation-q} whose coordinates are very small (with respect to the requirements of $V_{\delta,b_0}(s)$), except for the first $2k$ components $q_0,..,q_{2k-1}$. \item In the third subsection, we solve the local in time Cauchy problem for equation \eqref{equation-q} coupled with some orthogonality condition. \item In the fourth subsection, using the spectral properties of equation \eqref{equation-q}, we reduce our goal from the control of $q(s)$ (an infinite dimensional variable) in $V_{\delta,b_0}(s)$ to the control of its $2k$ first components $(q_0,..,q_{2k-1})$ (a $2k$-dimensional variable) in $\left[ -I(s)^{-\delta}, I(s)^{-\delta} \right]^{2k}$.
\item In the last subsection, we solve the finite dimensional problem using the shooting argument and conclude the proof of Theorem \ref{Theorem-principal}. \end{itemize} \subsection{Definition of the shrinking set $V_{\delta,b_0}(s)$} In this part, we introduce the shrinking set that controls the asymptotic behavior of our solution. \begin{definition}\label{definition-shrinking-set} Let us consider an integer $k > 1$, the reals $ \delta >0 $, $ b_0 >0$ and $M$ given by \eqref{defi-M}; we define $V_{\delta,b_0}(s)$ to be the set of all $(q,b) \in L^\infty_M \times \mathbb{R}$ satisfying \begin{equation}\label{bound-for-q-m} \left| q_{m} \right| \le I^{-\delta }(s) ,\quad \forall\; 0\leq m \leq [M],\;\; m\not = 2k, \end{equation} \begin{eqnarray} \left| q_{2k} \right| \le I^{-2\delta } (s) , \end{eqnarray} \begin{equation}\label{bound-for-q--} \left| q_- \right|_s \le I^{-\delta}(s), \quad \end{equation} and \begin{equation}\label{bound-b} \frac{b_0}{2}\leq b \leq 2 b_0, \end{equation} where $q_m$ and $q_-$ are defined in \eqref{decomposition-q2}, $I(s)$ is defined in \eqref{defi-I-s} and the norm $|\cdot |_s$ is defined in \eqref{norm-q-||-s}. \end{definition} \subsection{Preparation of initial data} In this part, we aim to give a suitable family of initial data for our problem.
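Let us first explain the count of parameters: by the Jordan block decomposition \eqref{Ls-Hm}, the mode $H_j$ carries the eigenvalue $\lambda_j=1-\frac{j}{2k}$, so that
\[
\lambda_j>0 \mbox{ for } 0\le j\le 2k-1,\qquad \lambda_{2k}=0,\qquad \lambda_j<0 \mbox{ for } j\ge 2k+1.
\]
The $2k$ expanding modes $q_0,\dots,q_{2k-1}$ cannot be controlled by the linear flow and will be handled by the topological argument, the neutral mode $q_{2k}$ will be cancelled thanks to the modulation parameter $b$, and the remaining modes are naturally decaying; this is why the family below depends on $2k$ parameters $(d_0,\dots,d_{2k-1})$.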
Let us consider $(d_0, d_1,...,d_{2k-1}) \in \R^{2k}$, $\delta>0$ and $ b_0 >0$; we then define \begin{equation}\label{initial-data-new} \psi(d_0,...,d_{2k-1},y,s_0)=\sum_{i=0}^{2k-1} d_i I^{-\delta }(s_0) y^i, \end{equation} \iffalse and $g$ will fixed later so that it guarantees the orthogonal condition at $s_0$: \begin{equation}\label{condition-g-initial-s-0} P_{2k}( q(s_0)) = 0, \text{ and } \|g \|_{L^\infty_M} \le I^{-\delta}(s_0), \end{equation} \fi Then, we have the following result. \begin{lemma}[Decomposition of initial data in different components]\label{lemma-initial-data} Let us consider $(d_i)_{0\le i \le 2k-1} \in \R^{2k}$ satisfying $ \max_{0 \le i \le 2k-1 } \left| d_i \right| \le 1 $ and $b_0 >0$ given arbitrarily. Then, there exists $ \delta_1(b_0)$ such that for all $\delta \le \delta_1$, there exists $s_1(\delta_1, b_0) \ge 1$ such that for all $s_0 \ge s_1$, the following properties are valid with $\psi(d_0,...,d_{2k-1})$ defined in \eqref{initial-data-new}: \begin{itemize} \item[(i)] There exists a quadrilateral $ \mathbb{D}_{s_0} \subset \left[-2,2\right]^{2k} $ such that the mapping \begin{equation}\label{defi-mapping-Gamma-initial-data} \begin{gathered} \Gamma: \mathbb{D}_{s_0} \to \mathbb{R}^{2k} \hfill \\ \hspace{-1.5cm} (d_0,...,d_{2k-1}) \mapsto (\psi_0,...,\psi_{2k-1}) \hfill \\ \end{gathered}, \end{equation} is a linear one-to-one map from $ \mathbb{D}_{s_0}$ to $\hat{\mathcal{V}}(s_0)$, with \begin{equation}\label{define-hat-V-A-s} \hat{\mathcal{V}}(s) = \left[ -I(s)^{-\delta}, I(s)^{-\delta} \right]^{2k}, \end{equation} where $(\psi_0,...,\psi_{2k-1})$ are the coefficients of the initial data $\psi(d_0,...,d_{2k-1})$ given by the decomposition \eqref{decomposition-q2}. In addition to that, we have \begin{equation}\label{des-Gamma-boundary-ne-0} \left. \Gamma \right|_{\partial \mathbb{D}_{s_0}} \subset \partial \hat{\mathcal{V}}(s_0) \text{ and } \text{deg}\left(\left. \Gamma \right|_{\partial \mathbb{D}_{s_0}} \right) \ne 0.
\end{equation} \item[(ii)] For all $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$, the following estimates are valid \begin{eqnarray} \left| \psi_0 \right| \le I^{-\delta}(s_0),\dots, \left| \psi_{2k-1} \right| \le I^{-\delta}(s_0), \quad \psi_{2k} =\psi_{M} =0 \text{ and } \psi_-\equiv 0. \end{eqnarray} \end{itemize} \end{lemma} \begin{proof} The proof of the Lemma is quite similar to \cite[Proposition 4.5]{TZpre15}. - \textit{The proof of item (i):} From \eqref{initial-data-new}, the definition of $H_n $ in \eqref{eigenfunction-Ls} and \eqref{defi-q_m}, we get \begin{eqnarray} \left| \psi_n (s_0) - d_n I^{-\delta}(s_0) \right| \le C(d_0,...,d_{2k-1}) I^{-\delta - 2}(s_0) \end{eqnarray} which concludes item (i). - \textit{ The proof of item (ii):} From \eqref{initial-data-new}, $\psi(d_0,...,d_{2k-1},s_0)$ is a polynomial of order $2k-1$, so it follows that $$ \psi_{n} =0, \forall n \in \{2k,..,M\} \text{ and } \psi_{-} \equiv 0.$$ In addition, since $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$, we use item (i) to deduce that $(\psi_0,...,\psi_{2k-1}) \in \hat{\mathcal{V}}(s_0)$ and $$ \left| \psi_{n} \right| \le I^{-\delta}(s_0), \forall n \in \{ 0,...,2k-1\} $$ which concludes item (ii) and the proof of Lemma \ref{lemma-initial-data}. \end{proof} \begin{rem} Note that $s_0= -\ln (T)$ is the \textit{master constant}: in almost every argument in this paper, it is taken sufficiently large depending on the choice of all the other constants ($\delta_0$ and $b_0$). In addition, we denote by $C$ a universal constant that is independent of $b_0$ and $s_0$. \end{rem} \subsection{Local in time solution for the problem \eqref{equation-q} $\&$ \eqref{Modulation-condition}} As set up at the beginning, besides the main unknown $q$, the modulation flow $b$ plays an important role in our analysis, since it allows us to cancel the perturbation of the neutral mode corresponding to the eigenvalue $\lambda_{2k} = 0$ of the linear operator $\mathcal{L}_s$.
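Indeed, the neutrality of $H_{2k}$ can be read directly from \eqref{Ls-Hm}: since $1-\frac{2k}{2k}=0$, we have
\[
\mathcal{L}_s H_{2k}(y,s)=2k(2k-1)\left(1-\frac{1}{k}\right) I^{-2}(s)\, H_{2k-2}(y,s),
\]
so that $H_{2k}$ is annihilated by $\mathcal{L}_s$ up to a term which is exponentially small as $s\to\infty$; the component $q_{2k}$ is therefore not damped by the linear flow, which is the reason for introducing the modulation.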
In particular, the modulation flow is one of the main contributions of our paper. The flow $b$ is uniquely determined via the following orthogonality condition \beqtn\label{Modulation-condition} \langle q, H_{2k} \rangle_{L^2_{\rho_s}} = 0. \eeqtn As a matter of fact, this cancellation ensures that $q_{2k} =0$, where $q_{2k}$ is the projection of the solution on $ H_{2k}$, corresponding to the eigenvalue $ \lambda_{2k} =0 $, since this neutral mode obstructs the control of our solution. Consequently, our problem given by \eqref{equation-q} is coupled with the condition \eqref{Modulation-condition}. In the following, we aim to establish the local existence and uniqueness. \begin{prop}[Local existence of the coupled problem \eqref{equation-q} $\&$ \eqref{Modulation-condition}] Let $(d_{i})_{0\leq i\leq 2k-1} \in \mathbb{R}^{2k} $ satisfy $\max_{0\le i \le 2k-1} |d_i| \le 2$ and let $\delta >0, b_0 >0$. There exists $s_2 ( \delta, b_0) \ge 1$ such that for all $ s_0 \ge s_2$, the following property holds: If we choose initial data $\psi$ as in \eqref{initial-data-new}, then there exists $s^* > s_0$ such that the coupled problem \eqref{equation-q} $ \&$ \eqref{Modulation-condition} has a unique solution on $[s_0, s^* ]$. Assume furthermore that the solution satisfies $(q,b)(s) \in V_{ \delta, b_0}(s)$ for all $s \in [s_0,s^*]$; then, the solution can be extended beyond the time $s^*$, i.e. the existence and uniqueness of $(q,b)$ are valid on $[s_0,s^*+\varepsilon]$, for some small $\varepsilon >0$. \end{prop} \begin{proof} Let us consider initial data $w_0$ defined as in \eqref{decompose-equa-w-=q} with $q(s_0) = \psi(d_0,d_1,...,d_{2k-1}) $ given as in \eqref{initial-data-new}. Since equation \eqref{NLH} is locally well-posed in $L^\infty$, the solution $w$ to equation \eqref{equation-w} exists on $\left[s_0, \tilde s \right]$ for some $\tilde s > s_0 $.
Next, we need to prove that $w$ is uniquely decomposed as in \eqref{decompose-equa-w-=q} and that $(q,b)(s)$ solves \eqref{equation-q} and \eqref{Modulation-condition}. The result follows from the Implicit Function Theorem. Let us define the functional $\mathscr{F}$ by \begin{equation}\label{defimathscr-F-functional} \mathscr{F}(s,b) = \left\langle w f_b^{-p} - \left( p-1 +b y^{2k} \right), H_{2k} \right\rangle_{L^2_{\rho_s}}. \end{equation} For $b_0 >0$ and at $s=s_0$, from the definition of $\psi(d_0,\dots,d_{2k-1})$ in \eqref{initial-data-new}, it directly follows that \begin{equation}\label{equality-F=0} \mathscr{F}(s_0, b_0) =0. \end{equation} \iffalse Regarding to \eqref{initial-data-new}, $g$ need to satisfy \begin{equation}\label{condition-on-g} \langle g f_{b_0}^{- p}, H_{2k} \rangle_{L^2_{\rho_s}} =0. \end{equation} \fi \noindent \medskip Next, we aim to verify \begin{eqnarray}\label{partial-F-s-0ne-0} \frac{ \partial \mathscr{F}}{\partial b } (s_0, b_0) \ne 0. \end{eqnarray} From \eqref{defimathscr-F-functional}, we obtain \begin{eqnarray} \frac{\partial \mathscr{F}}{\partial b}(s,b) = \left\langle w \frac{p y^{2k}}{p-1} f_b^{-1} - y^{2k}, H_{2k} \right\rangle_{L^2_{\rho_s}}.\label{formula-partial-F} \end{eqnarray} \noindent According to \eqref{decompose-equa-w-=q}, we express $w(s_0)$ as follows $$ w(y,s_0) = f_{b_0} \left( 1 + I^{-\delta}(s_0)f_{b_0}^{p-1}(y)\sum_{i=0}^{2k-1} d_i y^i \right).
$$ Then, we have \begin{eqnarray} & & \frac{\partial \mathscr{F}}{\partial b}(s_0,b_0) = I^{-\delta}(s_0)\frac{p}{p-1}\left\langle f_{b_0}^{p-1}(y) \sum_{i=0}^{2k-1} d_i y^{i+2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}} \label{scalar-product-modulation-1}\\ &+&\frac{1}{p-1} \left\langle y^{2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}} :=A+B.\nonumber \end{eqnarray} Using \eqref{eigenfunction-Ls} and \eqref{scalar-product-hm}, we immediately have $$ B=\frac{2^{2k}(2k)!}{p-1}I^{-4k}(s_0).$$ \\ In addition, we use \eqref{defi-e-b} to get the following expression \begin{equation}\label{eb0-modulation} e_{b_0}(y) = (p-1)^{-1} \left( \sum_{l=0}^L \left(- \frac{b_0 y^{2k}}{p-1} \right)^l + \left( -\frac{b_0 y^{2k}}{p-1}\right)^{L+1}\left( 1+\frac{b_0 y^{2k}}{p-1}\right)^{-1} \right), \end{equation} for $L \in \mathbb{N}, L \ge 2$ arbitrarily. \noindent Now, we decompose the part $A$ in \eqref{scalar-product-modulation-1} by \[\begin{array}{l} A=I^{-\delta}(s_0)\frac{p}{p-1}\left\langle f_{b_0}^{p-1}(y) \sum_{i=0}^{2k-1} d_i y^{i+2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}}\\ =\dsp \frac{p}{p-1} I^{-\delta}(s_0)\sum_{i=0}^{2k-1} d_i \left (\int_{|y|\leq 1}e_{b_0}(y) y^{i+2k} H_{2k} \rho_{s_0}(y)dy+ \int_{|y|\geq 1}e_{b_0}(y) y^{i+2k} H_{2k} \rho_{s_0}(y)dy\right ) \\ =A_1+A_2.\\ \end{array} \] Since $e_{b_0}y^{2k}$ is bounded, we apply Lemma \ref{small-integral-y-ge-I-delta} to get $$ |A_2| \lesssim I^{-4k-\delta}(s_0),$$ provided that $s_0 \ge s_{2,2}(\delta)$. Besides that, we use \eqref{eb0-modulation} with $L\ge 2$ arbitrarily and we write $A_1$ as follows \begin{eqnarray*} A_1 = (p-1)^{-1} I^{-\delta}(s_0) \sum_{i=0}^{2k-1} d_i \int_{|y| \le 1} \left[ \sum_{l=0}^L \left( -\frac{b_0 y^{2k}}{p-1} \right)^l +\left( -\frac{b_0 y^{2k}}{p-1} \right)^{L+1}\left( 1+\frac{b_0 y^{2k}}{p-1}\right)^{-1} \right] y^{i +2k} H_{2k}(s_0) \rho_{s_0} dy. \end{eqnarray*} Using Lemmas \ref{lemma-scalar-product-H-m} and \ref{small-integral-y-ge-I-delta}, we get \begin{equation*} |A_1|\lesssim I^{-4k - \delta}(s_0).
\end{equation*} By adding all related terms, we obtain \begin{equation*} \frac{ \partial \mathscr{F}}{ \partial b} (s_0, b_0) = \frac{2^{2k} (2k)!}{p-1} I^{-4k}(s_0) \left(1+ O(I^{-\delta}(s_0))\right) \ne 0, \end{equation*} provided that $s_0 \ge s_{2,3}(\delta, b_0)$. Thus, \eqref{partial-F-s-0ne-0} follows. By equality \eqref{equality-F=0} and \eqref{partial-F-s-0ne-0} and using the Implicit Function Theorem, we obtain the existence of a unique $s^* > s_0$ and $b \in C^1(s_0,s^*)$ such that $q$, defined as in \eqref{decom-q-w-}, verifies \eqref{equation-q} and the orthogonality condition \eqref{Modulation-condition}. Moreover, if we assume furthermore that $(q,b)$ remains in the set $V_{\delta,b_0}(s)$ for all $s \in [s_0,s^*]$, then we can repeat the above computation, using the bounds given in Definition \ref{definition-shrinking-set}, and we obtain \begin{eqnarray*} \frac{ \partial \mathscr{F}}{ \partial b} \left. \right|_{(s,b) = (s^*, b(s^*))} = \frac{2^{2k} (2k)!}{p-1} I^{-4k}(s^*) \left( 1 + O(I^{-\delta}(s^*)) \right) \ne 0. \end{eqnarray*} Then, we can apply the Implicit Function Theorem to get the existence and uniqueness of $(q,b)$ on the interval $[s^*,s^* +\varepsilon]$ for some small $\varepsilon >0$, and the conclusion of the Proposition follows. \end{proof} \iffalse the decomposition is valid on the interval $[s_0,s_1]$ for some $s_1 > s_0$. \noindent \medskip Now, we assume that the solution $(q,b)(s)$ exists on $[s_0,s^*]$, we will prove \eqref{definition-q} remains valid on $[s^*, s^* + \varepsilon ]$ for some $\varepsilon >0$. Similarly, it is sufficient to prove \begin{equation}\label{partial-F-s-*} \frac{ \partial \mathscr{F}}{ \partial b} (s,b) \left. \right|_{(s,b) = (s^*, b(s^*))} \ne 0. \end{equation} Using the fact that $ w = f_b + f_b^p q$ and \eqref{formula-partial-F}, we get \begin{eqnarray*} & & \frac{ \partial \mathscr{F}}{ \partial b} (s,b) \left.
\right|_{(s,b) = (s^*, b(s^*))} = \left\langle w \frac{p y^{2k}}{p-1} f_b^{-1} - y^{2k}, H_{2k} \right\rangle_{L^2_{\rho_s}}\\ & = & \left\langle y^{2k}, H_{2k}(s^*) \right\rangle_{L^2_{\rho_{s^*}}} + \left\langle \frac{p}{p-1} y^{2k} f_{b(s^*)}^{p-1} q(s^*) , H_{2k}(s^*) \right\rangle_{L^2_{\rho_{s^*}}}. \end{eqnarray*} Firstly, we have \begin{eqnarray*} \left\langle y^{2k}, H_{2k}(s^*) \right\rangle_{L^2_{\rho_{s^*}}} = I^{-4k}(s^*) 2^{4k} (2k!) + O\left(I^{-4k -2}(s^*) \right), \end{eqnarray*} and using the fact that $V_{A, \delta, b_0}(s^*)$, we apply Lemma \ref{small-integral-y-ge-I-delta} to obtain \begin{eqnarray*} & &\left\langle \frac{p}{p-1} y^{2k} f_{b(s^*)}^{p-1} q(s^*) , H_{2k}(s^*) \right\rangle_{L^2_{\rho_{s^*}}} \\ &=& \int_{y \le I^{-\delta}(s^*)} \frac{p}{p-1} y^{2k} f_{b(s^*)}^{p-1}(y) q(y,s^*) , H_{2k}(y,s^*) \rho_{s^*}(y) dy + O(e^{-\frac{I(s^*)}{8}}). \end{eqnarray*} In particular, on the interval $[0,I^{-\delta}(s^*)]$, we use Taylor expansion to get \begin{eqnarray*} y^{2k} f_{b(s^*)}^{p-1}(y) q(y,s^*) = \sum_{j=0}^M q_j(s^*)\sum_{l=0}^{2k+L-1+j} a_{j,l} H_{l}(y,s^*) + A_2(y,s^*), \end{eqnarray*} where \begin{eqnarray*} \left| A_2(y,s^*) \right| &\le & C\left[ I^{-\delta}(s^*) y^{2k + L} + y^{2k} |q_-(y,s^*)| \right] \\ & \le & C A I^{-\delta}(s^*) \left[ y^{2k + L} + y^{2k} ( I^{-M}(s^*) + y^M) \right] . \end{eqnarray*} Then, it follows \begin{eqnarray*} \left|\int_{y \le I^{-\delta}(s^*)} \rho_{s^*}(y) dy \right| \le CA I^{-4k -\delta} (s^*). \end{eqnarray*} Finally, we get \begin{eqnarray*} \frac{ \partial \mathscr{F}}{ \partial b} (s,b) \left. \right|_{(s,b) = (s^*, b(s^*))} = I^{-4k}(s^*) 2^{4k} (2k)! \left( 1 + O(AI^{-4k-\delta}(s^*)) \right) \ne 0, \end{eqnarray*} provided that $s^* \ge s_0 \ge s_1(A, \delta)$, thus, \eqref{partial-F-s-*} follows and the proof of the Lemma is concluded. 
\fi \subsection{Reduction to a finite dimensional problem} Recalling the shrinking set $V_{\delta, b_0 }$ from Definition \ref{definition-shrinking-set}, it is sufficient to prove that there exists a unique global solution $(q,b)$ on $[s_0, +\infty)$, for some $s_0 $ sufficiently large, such that $$ (q,b)(s) \in V_{\delta, b_0}(s), \forall s \ge s_0.$$ In particular, we show in this part that the control of the infinite-dimensional problem is reduced to a finite-dimensional one. To get this key result, we first show the following a priori estimates. \begin{prop}[A priori estimates] \label{proposition-ode} Let $b_0 > 0$ and $k \in \mathbb{N}, k \ge 2$, then there exists $\delta_{3}(k, b_0) > 0$ such that for all $\delta \in (0,\delta_3)$, there exists $s_3(\delta, b_0)$ such that for all $s_0 \ge s_3$, the following property holds: Assume $(q,b)$ is a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} such that $(q,b)(s) \in V_{\delta, b_0}(s) $ for all $s\in[\tau, \bar s]$, for some $\bar s \geq s_0$ and $s_0 \le \tau \le \bar s$, and $q_{2k}(s)=0$ for all $s\in [\tau, \bar s]$; then, for all $s\in [\tau,\bar s]$, the following properties hold: \begin{itemize} \item[(i)] (ODEs on the finite modes). For all $j \in \{ 0,...,[M] \}$, we have $$\left |q_j'(s)-\left( 1-\frac{j}{2k}\right)q_j(s) \right |\leq CI^{-2\delta}(s). $$ \item[$(ii)$] (Smallness of the modulation $b(s)$). It satisfies that \begin{equation*}\label{estimat-derivative-b} \left| b'(s) \right| \leq C I^{-\delta}(s)\mbox{ and }\frac 34b_0\leq b(s)\leq \frac 54b_0. \end{equation*} \item[$(iii)$] (Control of the infinite-dimensional part $q_-$): We have the following a priori estimate \[ \begin{array}{lll} \left| q_-(s)\right|_s &\le & e^{-\frac{s-\tau}{p-1}} \left| q_-(\tau)\right|_\tau + C \left( I^{-\frac{3}{2} \delta}(s) + e^{-\frac{s-\tau}{p-1}} I^{-\frac{3}{2}\delta}(\tau)\right).
\end{array}
\]
\end{itemize}
\end{prop}
\begin{proof}[Proof of Proposition \ref{proposition-ode}] This result plays an important role in our proof. In addition, its proof is based on a long and technical computation. To help the reader in following the paper, we postpone the complete proof to Section \ref{Section-proof-proposition-ode}.
\end{proof}
Consequently, we have the following result.
\begin{prop}[Reduction to a finite-dimensional problem]\label{propositionn-transversality} Let $b_0 >0$ and $ k \in \mathbb{N}, k \ge 2 $; then there exists $\delta_4(b_0)$ such that for all $ \delta \in (0,\delta_4)$, there exists $s_4(b_0, \delta)$ such that for all $s_0 \ge s_4$, the following property holds: Assume that $(q,b)$ is a solution to \eqref{equation-q} $\&$ \eqref{Modulation-condition} corresponding to initial data $q(s_0) = \psi(d_0,...,d_{2k-1},s_0)$ defined as in \eqref{initial-data-new} with $ \max_{0 \le i \le 2k-1} |d_i| \le 2 $, and that $(q,b)(s)\in V_{\delta,b_0}(s)$ for all $s \in [s_0, \bar s]$ for some $\bar s > s_0$ with $(q,b)( \bar s) \in \partial V_{\delta,b_0}( \bar s)$; then the following properties are valid:
\begin{itemize}
\item[(i)] \textbf{(Reduction to finite modes)}: Let $q_0,...,q_{2k-1}$ be the projections defined as in \eqref{defi-q_m}; then, we have
$$\left (q_0,..,q_{2k-1}\right )(\bar s) \in \partial \hat{V}(\bar s),$$
where $I(s)$ is given by \eqref{defi-I-s}.\\
\item[(ii)] \textbf{(Transverse crossing)} There exist $m\in\{0,..,2k-1\}$ and $\omega \in \{-1,1\}$ such that
\[\omega q_m(\bar s)=I(\bar s)^{-\delta}\mbox{ and }\omega \frac{d q_m}{ds}(\bar s)>0.\]
\end{itemize}
\end{prop}
\begin{rem}
In (ii) of Proposition \ref{propositionn-transversality}, we show that the solution $q(s)$ crosses the boundary $\partial V_{\delta, b_0}(s)$ at $\bar s$ with positive speed; in other words, all points on $\pa V_{\delta, b_0}(\bar s)$ are \textit{strict exit points} in the sense of \cite[Chapter 2]{Conbook78}.
\end{rem}
\begin{proof}
Let us prove Proposition \ref{propositionn-transversality}, assuming Proposition \ref{proposition-ode}. We consider $\delta \le \delta_3$ and $s_0 \ge s_3$ so that Proposition \ref{proposition-ode} holds.\\
\noindent - \textit{Proof of item (i)}: To get the conclusion of this item, we aim to show that for all $s \in [s_0, \bar s]$,
\begin{equation}\label{improve-q-j-ge-2k+1}
\left| q_j(s) \right| \le \frac{1}{2} I^{-\delta}(s), \forall j \in \{ 2k+1,...,[M] \} (\text{note that } q_{2k} \equiv 0),
\end{equation}
and
\begin{equation}\label{improve-q_-}
\left| q_-(s) \right|_s \le \frac{1}{2} I^{-\delta}(s).
\end{equation}
+ For \eqref{improve-q-j-ge-2k+1}: From item (i) of Proposition \ref{proposition-ode}, we have
$$ \left[ q_j(s) \pm \frac{1}{2} I^{-\delta}(s) \right]' = \left( 1 - \frac{j}{2k} \right) q_j(s) \pm \frac{\delta}{2} \left( \frac{1}{2k} - \frac{1}{2} \right) I^{-\delta}(s) + O(I^{-2 \delta}(s)). $$
Since $1-\frac{j}{2k} \le -\frac{1}{2k} < 0$ for $j > 2k$, the right-hand side is negative (resp. positive) on the upper (resp. lower) boundary $q_j(s) = \frac{1}{2} I^{-\delta}(s)$ (resp. $-\frac{1}{2} I^{-\delta}(s)$), provided that $\delta \le \delta_{4,1}$ and $s_0$ is large. Hence, with initial data $q_j(s_0) =0$, so that $ q_j(s_0) \in \left( -\frac{1}{2}I^{-\delta}(s_0), \frac{1}{2} I^{-\delta}(s_0) \right) $, it follows that
$$ q_j(s) \in \left( -\frac{1}{2}I^{-\delta}(s), \frac{1}{2} I^{-\delta}(s) \right), \forall s \in [s_0, \bar s], $$
which concludes \eqref{improve-q-j-ge-2k+1}.
+ For \eqref{improve-q_-}: We distinguish two cases: $ s - s_0 \le s_0 $ and $s - s_0 \ge s_0$. In the first case, we apply item (iii) of Proposition \ref{proposition-ode} with $\tau = s_0$ to obtain
\begin{eqnarray*}
\left| q_-(s) \right|_s \le C \left( I^{-\frac{3}{2}\delta}(s) + e^{-\frac{s-s_0}{p-1}} I^{-\frac{3}{2}\delta}(s_0) \right) \le \frac{1}{2} I^{-\delta}(s),
\end{eqnarray*}
provided that $\delta \le \delta_{4,2}$ and $s_0 \ge s_{4,2}(\delta) $.
In the second case, we use item (iii) again with $\tau = s - s_0 $, and we obtain
\begin{eqnarray*}
\left| q_-(s) \right|_s & \le & e^{-\frac{s_0}{p-1}} I^{-\delta}(\tau) + C\left( I^{-\frac{3}{2}\delta}(s) + e^{-\frac{s_0}{p-1}} I^{-\frac{3}{2}\delta}(\tau) \right)\\
& \le & C ( e^{-\frac{s_0}{p-1}} I^{\delta}(s) I^{-\frac{3}{2}\delta}(\tau) + I^{-\frac{1}{2}\delta}(s) ) I^{-\delta}(s) \le \frac{1}{2} I^{-\delta}(s).
\end{eqnarray*}
Thus, \eqref{improve-q_-} follows. Finally, using the definition of $V_{\delta, b_0}(s)$, the fact $(q,b)(\bar s) \in \partial V_{\delta, b_0}(\bar s)$, estimates \eqref{improve-q-j-ge-2k+1}, \eqref{improve-q_-}, and item (ii) of Proposition \ref{proposition-ode}, we get the conclusion of item (i).
\iffalse We define $\sigma=s-\tau$ and take $s_0=-\ln T\ge 3 \sigma,\;(\mbox{ that is } T\le e^{-3\sigma})$ so that for all $\tau \geq s_0$ and $s\in [\tau, \tau +\sigma]$, we have \[\tau\leq s\leq \tau +\sigma\leq \tau+\frac{1}{3} s_0\leq \frac{4}{3} \tau.\] From (i) of proposition Proposition \ref{proposition-ode}, we can write for all $2k\leq j\leq [M] $ \[\left |\left ( e^{-(1-\frac{j}{2k})t}q_j(t)\right )' \right |\leq C e^{-(1-\frac{j}{2k})t}I^{-2\delta}(t),\] We consider that $\tau \leq s\leq \frac{4}{3}\tau$.
Integrating the last inequality between $\tau$ and $s$, we obtain \[ \begin{array}{lll} |q_j(s)|&\leq &e^{-(1-\frac{j}{2k})(\tau-s)} q_j(\tau)+ C(s-\tau)e^{(1-\frac{j}{2k})s} I^{-2\delta}(\tau)\\ &\leq &e^{(1-\frac{j}{2k})(s-\tau)} q_j(\tau)+ C(s-\tau)e^{(1-\frac{j}{2k})s} I^{-\frac{4}{3}\delta}(s), \end{array} \] There exists $\tilde s_1=\max \{s_j,\;5\le j\le 9\}$, such that if $s\geq \tilde s_1$, then we can easily derive \[|q(s)|\leq Ce^{(1-\frac{j}{2k})(s-\tau)} I^{-\frac{4}{3}\delta}(s)+I^{-\frac{7}{6}\delta}(s)< \frac{1}{2}I^{-\delta}(s).\] In a similar fashion, exists $\tilde s_2=\max \{s_j,\;9\le j\le 13\}$, for all $s\geq \tilde s_2$, we obtain \[|q_-(s)|_s< \frac{1}{2}I^{-\delta}(s).\] Thus we finish the prove of (ii) of Proposition \ref{propositionn-transversality}.\\
\fi
\noindent - \textit{Proof of item (ii)}: From item (i), there exist $m\in\{0,...,2k-1\}$ and $\omega=\pm 1$ such that $q_m(\bar s)=\omega I(\bar s)^{-\delta}$. By item (i) of Proposition \ref{proposition-ode}, we see that for $\delta>0$ small and $s_0$ large,
\[\omega q_m'(\bar s)\geq \left(1-\frac{m}{2k}\right)\omega q_m(\bar s)-CI^{-2\delta}(\bar s)\geq \left(1-\frac{m}{2k}\right)I^{-\delta}(\bar s)-CI^{-2\delta}(\bar s)>0,\]
which concludes the proof of Proposition \ref{propositionn-transversality}. It remains to prove Proposition \ref{proposition-ode}; this will be done in Section \ref{Section-proof-proposition-ode}.
\end{proof}
\subsection{Topological \textit{``shooting method''} for the finite-dimensional problem and proof of Theorem \ref{Theorem-principal}}
In this part, we give the complete proof of Theorem \ref{Theorem-principal} by using a topological \textit{shooting method}.
\begin{proof}[Proof of Theorem \ref{Theorem-principal}] We aim to find $\delta>0$, $s_0 > 0$ (equivalently $T = e^{-s_0} > 0$) and $(d_0,..,d_{2k-1}) \in \mathbb{D}_{s_0}$ such that problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} with initial data $ \psi(d_0,...,d_{2k-1},s_0)$ defined as in \eqref{initial-data-new} has a solution $(q(s),b(s))_{d_0,..,d_{2k-1}}$ defined for all $s \in [s_0,\infty)$ such that
\beqtn
\|q(s)\|_{L^\infty_M} \leq C I^{-\delta}(s)\mbox{ and } |b(s)-b^*|\leq C I^{-2\delta}(s),
\label{goal-of-the proof}
\eeqtn
for some $b^*>0$. Let $b_0$, $\delta$ and $ s_0$ be such that Lemma \ref{initial-data-new}, Proposition \ref{propositionn-transversality} and Proposition \ref{proposition-ode} hold, and denote $T= e^{-s_0}$ (small and positive since $s_0$ is large enough).
We proceed by contradiction: from (ii) of Lemma \ref{initial-data-new}, we assume that for all $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$ there exists $s_*=s_*(d_0,..,d_{2k-1}) < +\infty$ such that
\begin{equation*}
\begin{array}{ll}
q_{d_0,..,d_{2k-1}}(s)\in V_{\delta,b_0}(s), & \forall s\in [s_0, s_*], \\
q_{d_0,..,d_{2k-1}}(s_*)\in \pa V_{\delta,b_0}(s_*).&
\end{array}
\end{equation*}
By using item (i) of Proposition \ref{propositionn-transversality}, we get $(q_0,..,q_{2k-1})(s_*) \in \pa \hat{\mathcal{V}}(s_*)$, and we introduce $\Phi$ by
\[\Phi:
\begin{array}{ll}
\mathbb{D}_{s_0}\to \pa [-1,1]^{2k}&\\
(d_0,..,d_{2k-1})\mapsto I^{\delta}(s_*)(q_0,..,q_{2k-1})(s_*),
\end{array}
\]
which is well defined and satisfies the following properties:
\begin{itemize}
\item[$(i)$] $\Phi$ is continuous from $\mathbb{D}_{s_0}$ to $\pa [-1,1]^{2k}$, thanks to the continuity in time of $q$ on the one hand, and the continuity of $s_*$ in $(d_0,...,d_{2k-1})$ on the other hand, which is a direct consequence of the transversality in item (ii) of Proposition \ref{propositionn-transversality}.
\item[(ii)] It holds that $\Phi \left. \right|_{\partial \mathbb{D}_{s_0}}$ has nonzero degree. Indeed, for all $(d_0,...,d_{2k-1}) \in \partial \mathbb{D}_{s_0}$, we derive from item (i) of Lemma \ref{lemma-initial-data} that $s_*(d_0,...,d_{2k-1}) =s_0$ and
$$ \text{ deg}\left( \Phi \left. \right|_{\partial \mathbb{D}_{s_0}} \right) \ne 0. $$
\end{itemize}
By degree theory (Wazewski's principle), such a $\Phi$ cannot exist. Thus, there exists $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$ such that the corresponding solution satisfies $(q,b)(s) \in V_{\delta, b_0}(s)$ for all $s \ge s_0$.
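For completeness, let us recall the standard topological obstruction used in the last step (assuming, as the construction suggests, that $\mathbb{D}_{s_0}$ is homeomorphic to a closed ball of $\mathbb{R}^{2k}$). Since $\Phi$ is continuous on all of $\mathbb{D}_{s_0}$ with values in $\pa [-1,1]^{2k}$, the restriction $\Phi \left. \right|_{\pa \mathbb{D}_{s_0}}$ extends continuously to the interior within $\pa [-1,1]^{2k}$; it is therefore homotopic to a constant map, so its degree must vanish, in contradiction with item (ii) above. This is the contradiction that forces the existence of a trapped solution.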
\iffalse and by (iii) of Proposition \ref{propositionn-transversality}, $\Phi$ is continuous.\\ In the following we will prove that $\Phi$ has nonzero degree, which mean by the degree theory (Wazewski's principle) that for all $s\in [s_0, \infty )$ $q(s)$ remains in $V_{\delta,b_0}(s)$, which is a contradiction with the Exit Proposition.\\ Indeed Using Lemma \ref{initial-data-new}, and the fact that $q(-\ln T)=\psi_{d_0,..,d_{2k-1}}$, we see that when $(d_0,..,d_{2k-1})$ is on the boundary of the quadrilateral $\mathbb{D}_T$, $q_0,..,q_{2k-1}(-\ln T)\in \pa [-I^{-2\delta}(s),I^{-2\delta}(s)]^{2k}$ and $q(-\ln T)\in V_{\delta,b_0}(-\ln T)$ with strict inequalities for the other components.\\ By the Exit proposition \ref{propositionn-transversality}, $q(s)$ leaves $V_{\delta,b_0}$ at $s_0=-\ln T$, hence $s_*=-\ln T$.\\ Using (ii) of Proposition \ref{propositionn-transversality}, we get that the restriction of $\Phi$ on he boundary of $\mathbb{D}_T$ is of degree $1$, which means by the shooting method that for all $s\in [s_0, \infty )$ $q(s)$ remains in $V_{\delta,b_0}(s)$, which is a contradiction.\\ We conclude that there exists a value $(d_0,..,d_{2k-1})\in \mathbb{D}$ such that for all $s\geq -\ln T$, $q_{d_0,..,d_{2k-1}}\in V_{\delta,b_0}(s)$, which means that \beqtn\label{estimation-linftyM-q} \left \|\frac{q}{1+|y|^M}\right\|_{L^\infty}\leq C I^{-\delta}(s), \eeqtn and using the definition of\fi
In particular, we derive from \eqref{decompose-equa-w-=q}, $M=\frac{2kp}{p-1}$, and the estimate
\[|f_be_b|=f_b^p\leq C(1+|y|^{2k})^{-\frac{p}{p-1}} \le C(1+|y|^{M})^{-1}, \]
that
\[\|w(y,s)-f_{b}\|_{L^\infty}=\|f_{b}e_bq\|_{L^\infty} \leq C I^{-\delta}(s).\]
So, we conclude item (i) of Theorem \ref{Theorem-principal}.\\
\noindent - \textit{Proof of item (ii)}: From (ii) of Proposition \ref{proposition-ode}, it immediately follows that there exists $b^* \in \mathbb{R}^*_+$ such that
$$ b(s) \to b^* \text{ as } s \to +\infty, $$
which is equivalent to
$$ b(t) \to b^* \text{ as } t \to
T.$$ In particular, by integrating the estimate on $b'(s)$ from item (ii) of Proposition \ref{proposition-ode} between $s$ and $+\infty$ and using the fact that $b(s)\to b^*$ (see \eqref{convegence-b-s}), we obtain
\[|b(s)-b^*|\leq Ce^{-\delta s(1-\frac{1}{k})} = C I^{-2\delta}(s).\]
Recalling that $s = -\ln(T-t)$, \eqref{goal-of-the proof} follows, which yields the conclusion of item (ii) of Theorem \ref{Theorem-principal}. \\
\end{proof}
\section{Proof of Proposition \ref{proposition-ode} }\label{Section-proof-proposition-ode}
In this section, we prove Proposition \ref{proposition-ode}. We just have to project equation \eqref{equation-q} to get the equations satisfied by the different coordinates of the decomposition \eqref{decomposition-q2}. More precisely, the proof will be carried out in two subsections:
\begin{itemize}
\item In the first subsection, we write the equations satisfied by $q_j$, $0\le j\leq [M]$; then, we prove items (i) and (ii) of Proposition \ref{proposition-ode}.
\item In the second subsection, we first derive from equation \eqref{equation-q} an equation satisfied by $q_-$, and prove the estimate in item (iii) of Proposition \ref{proposition-ode}.
\end{itemize}
\subsection{The proof of items (i) and (ii) of Proposition \ref{proposition-ode} }\label{subsection-proof-i-ii}
\begin{itemize}
\item In Part 1, we project equation \eqref{equation-q} to get the equations satisfied by $q_j$ for $0 \leq j\leq [M]$.
\item In Part 2, we use the precise estimates from Part 1 to conclude items (i) and (ii) of Proposition \ref{proposition-ode}.
\end{itemize}
\medskip
\textbf{Part 1: The projection of equation \eqref{equation-q} on the eigenfunctions of the operator $\mathcal{L}_s$.} Let $(q,b)$ be a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} trapped in $V_{\delta, b_0}(s)$ for all $s \in [s_0, \bar s]$ for some $\bar s > s_0$.
Then, we have the following:
\medskip
\textbf{a) First term $\pa_s q$:} In this part, we aim to estimate the error between $\partial_s q_n(s)$ and $P_n(\partial_s q)$ in the following lemma.
\iffalse
\begin{lemma} There exist $\delta_0 > 0 $ such that for all $0<\delta < \delta_0$ and $b_0 >0$, there exists $s_{2}(A,\delta, b_0) \ge 1$ such that for all $ s_0 \ge s_2$, the following property holds: Assume that $(q,b) \in V_{\delta, b_0}(s )$ for all $s \in [s_0,s^*], $ for some $s^* >s_0$, satisfying \eqref{equation-q}- \eqref{modulation-equation}, then, the following estimates hold \begin{equation}\label{esti-par-q-m-P-partial-s-q} \left| \partial_s q_n - P_n(\partial_s q) -n \left( 1- \frac{1}{k} \right) q_n \right| \le C A^{\max(n-1,0)} I^{j -2k } (s), \quad \forall s \in (s_0,s^*), \forall n \in \{0,1,...,2k-1\}. \end{equation} and for $ n=2k$, we have \begin{equation}\label{partial-q-2k-s} \left| \partial_s q_{2k} - P_{2k} (\partial_s q) - 2k \left( 1-\frac{1}{k} \right) q_{2k} \right| \le CA I^{-\delta}(s). \end{equation} \end{lemma}
\fi
\begin{lemma}\label{Lemma-Pn_partialq} For all $n \in \{0,1,...,[M]\}$, it holds that
$$ P_{n} (\partial_s q)=\partial_s q_n (s) - \left (1-\frac 1k\right )(n+1)(n+2) I^{-2}(s) q_{n+2} (s), \forall s \in [s_0, \bar s]. $$
\end{lemma}
\begin{proof}We only give the proof for $n\geq 2$; for $n=0,1$, it is easy to derive the result.
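Before starting the computation, let us illustrate the objects at hand. With the normalization $H_n(y,s) = I^{-n}(s)\, h_n(I(s)y)$, where the $h_n$ are the classical polynomials associated with the weight $e^{-\frac{z^2}{4}}$ (this is the convention consistent with $\langle H_n, H_n \rangle_{L^2_{\rho_s}} = I^{-2n}(s)\, 2^n n!$ used below), the first eigenfunctions read
\[
H_0(y,s) = 1, \quad H_1(y,s) = y, \quad H_2(y,s) = y^2 - 2 I^{-2}(s), \quad H_3(y,s) = y^3 - 6 I^{-2}(s)\, y.
\]
For instance, for $n=2$ one checks directly that
\[
\partial_s H_2(y,s) = -2\, \partial_s \left( I^{-2}(s) \right) = 2\left(1-\frac{1}{k}\right) I^{-2}(s) = 2\left(1-\frac{1}{k}\right) I^{-2}(s)\, H_0(y,s),
\]
in agreement with the general formula for $\partial_s H_n$ derived in the proof below.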
Using \eqref{defi-q_m}, we have the following equality
$$\langle H_n, H_n \rangle_{L^2_{\rho_s}} q_n(s) = \langle q,H_n(s)\rangle_{L^2_{\rho_s}},$$
which implies
\begin{eqnarray*}
\langle H_n, H_n \rangle_{L^2_{\rho_s}} \partial_s q_n(s) & = &\langle \partial_s q, H_n \rangle_{L^2_{\rho_s}} + \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}} + \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \\
& - & \partial_s \langle H_n,H_n \rangle_{L^2_{\rho_s}} q_n.
\end{eqnarray*}
Dividing by $\langle H_n, H_n \rangle_{L^2_{\rho_s}}$ and rearranging, we can write
\begin{equation}\label{estimate-q-s-partial-sn}
\partial_s q_n = P_{n}( \partial_s q) + \tilde L,
\end{equation}
where
\begin{eqnarray*}
\tilde L = \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} + \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} - \partial_s \langle H_n,H_n \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n.
\end{eqnarray*}
We now aim to estimate $\tilde L$, provided that $(q(s),b(s)) \in V_{\delta,b_0}(s) $; we also recall that
$$ q = \sum_{j=0}^{[M]} q_j H_j + q_-. $$
+ For $\partial_s \langle H_n,H_n \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n $: We have
$$\langle H_n,H_n \rangle_{L^2_{\rho_s}} = I^{-2n}(s) 2^n n!, \text{ and } I(s) = e^{\frac{s}{2}\left(1 -\frac{1}{k} \right) } , $$
which implies
\begin{eqnarray*}
\partial_s \langle H_n, H_n \rangle_{L^2_{\rho_s}} = - n \left( 1 -\frac{1}{k} \right) \langle H_n, H_n \rangle_{L^2_{\rho_s}}.
\end{eqnarray*}
So, we obtain
$$ \partial_s \langle H_n,H_n \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n(s) = - n \left( 1 -\frac{1}{k} \right) q_n(s).$$
+ For $\left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} $: Using the fact that
$$ \partial_s \rho_s = \frac{1}{2} \left( 1 -\frac{1}{k}\right) \rho_s - \frac{1}{4} \left( 1 - \frac{1}{k} \right) I^2(s) y^2 \rho_s , $$
we get
\begin{eqnarray*}
\left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} & = & \frac{1}{2} \left( 1 -\frac{1}{k} \right) \langle q, H_n(s) \rangle_{L^2_{\rho_s}} - \frac{1}{4} \left( 1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}}\\
& = & \frac{1}{2} \left( 1 -\frac{1}{k} \right) q_n \langle H_n,H_n \rangle_{L^2_{\rho_s}} - \frac{1}{4}\left( 1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} .
\end{eqnarray*}
Thus, we derive
\begin{eqnarray*}
\left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} = \frac{1}{2} \left(1 -\frac{1}{k} \right) q_n - \frac{1}{4} \left(1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1}.
\end{eqnarray*}
Using the polynomial Hermite identities, we obtain
\[z^2h_n=zh_{n+1}+2nzh_{n-1}=h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2},\]
and we find the following identity
\begin{equation*}
y^2 H_{n}(y,s) = H_{n+2} (y,s) + (4n+2) I^{-2}(s) H_n(y,s) + 4 n(n-1) I^{-4}(s) H_{n-2}(y,s).
\end{equation*}
\iffalse note that when $n=0,$ or $1$ the sum in the right hand side vanishes, and $q \in V_{A,\delta, b_0}(s), \forall s \in [s_0,s^*]$, we get \begin{eqnarray*} \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} = (4n+2) q_n + O(AI^{-\delta -2}(s)). \end{eqnarray*} \fi
This implies that
\begin{eqnarray*}
& & \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} \\
& = & I^{2}(s) \left[ q_{n+2} \| H_{n+2}\|^2_{L^2_{\rho}} + (4n+2) I^{-2}(s) q_n (s) \|H_n\|^2_{L^2_\rho} + 4n(n-1) q_{n-2} I^{-4}(s) \|H_{n-2}\|^2_{L^2_{\rho}} \right],
\end{eqnarray*}
which yields
\begin{eqnarray*}
& & \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} \\
&= & -n \left(1 - \frac{1}{k} \right)q_n - n(n-1) \left(1-\frac{1}{k} \right) q_{n-2} I^{-2}(s) \frac{\|H_{n-2}\|^2_{L^2_\rho}}{\|H_{n}\|^2_{L^2_\rho}} \\
&+&\left(1-\frac{1}{k} \right) (n+2)(n+1) I^{-2}(s) q_{n+2},
\end{eqnarray*}
for all $ n \in \{0,...,[M]\} \text{ and } \forall s \in [s_0,\bar s]$ (with the convention that $q_j =0$ if $j<0$).
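For later use, note that the normalization $\langle H_n, H_n \rangle_{L^2_{\rho_s}} = I^{-2n}(s)\, 2^n n!$ recalled above immediately gives the ratios
\[
\frac{\|H_{n+2}\|^2_{L^2_{\rho_s}}}{\|H_{n}\|^2_{L^2_{\rho_s}}} = 4 (n+1)(n+2)\, I^{-4}(s), \qquad \frac{\|H_{n-2}\|^2_{L^2_{\rho_s}}}{\|H_{n}\|^2_{L^2_{\rho_s}}} = \frac{I^{4}(s)}{4 n(n-1)} \quad (n \ge 2),
\]
which are precisely the factors producing the coefficients $(n+1)(n+2)$ and $n(n-1)$ in the computation above.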
\\
+ $ \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} $:
\begin{eqnarray*}
\partial_s H_n(s) &=& - n I'(s) I^{-n-1}(s) h_n(I(s)y) + I'(s) y h'_n(I(s)y) I^{-n}(s) \\
& = & - \frac{n}{2} \left(1-\frac{1}{k} \right) H_n(s) + \frac{n}{2} \left( 1-\frac{1}{k} \right) y H_{n-1}(s).
\end{eqnarray*}
Let us recall the following identity on Hermite polynomials
\begin{equation}
y H_{n-1} (y,s) = H_n(y,s) + I^{-2} (s) 2(n-1) H_{n-2}(y,s).
\end{equation}
So, we can rewrite $\partial_s H_n$ as follows
\begin{equation}\label{formula-partia-s-Hn}
\partial_s H_n (y,s) = n (n-1) \left(1 -\frac{1}{k} \right) I^{-2}(s) H_{n-2}(y,s).
\end{equation}
Thus, we obtain
\begin{eqnarray*}
\langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} &=& n (n-1) \left(1-\frac{1}{k} \right) I^{-2}(s)q_{n-2} \frac{\|H_{n-2}\|^2_{L^2_{\rho}}}{ \| H_{n}\|^2_{L^2_\rho}}.
\end{eqnarray*}
Finally, we obtain
\begin{equation*}
\partial_s q_n = P_{n}( \partial_s q) + \left (1-\frac 1k\right )(n+1)(n+2) I^{-2}(s)\, q_{n+2}, \forall n \in \{0,1,...,[M]\},
\end{equation*}
which concludes the proof of the Lemma.
\end{proof}
\textbf{ b) Second term $\mathcal{L}_s (q)$}
\begin{lemma}\label{Lemma-P-n-mathcal-L-s} For all $0\leq n\leq [M]$, it holds that
\begin{equation}\label{P-n-mathcal-L-n-s-q}
P_n( \mathcal{L}_s q)= \left(1-\frac{n}{2k}\right)q_n+\left(1-\frac{1}{k}\right)(n+1)(n+2) I^{-2} q_{n+2}.
\end{equation}
\end{lemma}
\begin{proof} As in the proof of Lemma \ref{Lemma-Pn_partialq}, we only give the proof for $n\geq 2$; for $n=0,1$, it is easy to derive the result. We write $P_n( \mathcal{L}_s q)$ as follows:
\[\begin{array}{lll}
P_n(\mathcal{L}_s q)&=& \dsp \int\left ( I^{-2}(s) \Delta q-\frac{1}{2}y \cdot \nabla q+q \right )H_n \rho_s dy+ \int\frac 12\left(1-\frac{1}{k}\right)y\nabla q H_n \rho_s dy \\
& =&A_1+\frac 12\left(1-\frac{1}{k}\right)A_2.
\end{array}
\]
In the following, we will use the Hermite polynomial identity \eqref{Hermite-identities-ell=2} given by Lemma \ref{Hermite_Identies}. Using integration by parts and the polynomial identities, we obtain
\[
\begin{array}{rcl}
\dsp A_1 &=& \int \left (I^{-2}(s) \Delta q-\frac{1}{2}y \cdot \nabla q+q\right ) H_n \rho_s dy\\
&=&\dsp \int \left( I^{-2}\div{(\nabla q\,\rho_s)} +q\,\rho_s \right) H_{n} dy,\\
&=& -\dsp I^{-2} \int\nabla q\, nH_{n-1}\rho_s dy+q_n\|H_n\|_{L^2_{\rho_s}}^2,\\
&=&\dsp n(n-1)I^{-2}\int q H_{n-2} \rho_s dy-\frac n 2 \int y qH_{n-1}\rho_s dy+q_n\|H_n\|_{L^2_{\rho_s}}^2\\
&=&\dsp \left (1-\frac{n}{2}\right ) q_n \|H_n\|_{L^2_{\rho_s}}^2.
\end{array}
\]
By a similar computation, using the change of variable $z=Iy$ and introducing $\rho(z)=I^{-1}\rho_s(y)$, we get
\[\begin{array}{l}
A_2=\dsp \int y\nabla q H_n\rho_s dy\\
= \dsp \left (- \int q H_{n}\rho_s dy- \int q nyH_{n-1}\rho_s dy + \frac{I^2}{2}\int qy^2H_{n} \rho_sdy\right ) \\
= \dsp \left (-q_n\|H_n\|^2-I^{-n} n\int q z h_{n-1}\rho dz + \frac{1}{2} I^{-n}\int z^2 q h_{n}\rho dz\right ).
\end{array}
\]
Using the polynomial Hermite identities
\[zh_{n-1}=h_n+2(n-1)h_{n-2}\quad \mbox{and}\quad z^2h_n=zh_{n+1}+2nzh_{n-1}=h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2},\]
we obtain
\[\begin{array}{rcl}
A_2 &=&\dsp -q_n\|H_n\|^2-I^{-n} n\int q (h_n+2(n-1)h_{n-2})\rho dz \\
&+& \frac{1}{2} I^{-n}\int q [h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2}]\rho dz \\
& = & -q_n\|H_n\|^2-nq_n\|H_n\|^2-2n(n-1) I^{-2}q_{n-2}\|H_{n-2}\|^2+\frac{1}{2}q_{n+2}I^2\|H_{n+2}\|^2\\
&+& (2n+1) q_n\|H_n\|^2+2n(n-1)q_{n-2}I^{-2}\|H_{n-2}\|^2\\
&=&\left (n q_n+ 2(n+2)(n+1) I^{-2} q_{n+2}\right )\|H_n\|^2.
\end{array}
\]
Thus, adding all the related terms, we obtain
\[\dsp P_n( \mathcal{L}_s q)= \left(1-\frac{n}{2k}\right)q_n+\left(1-\frac{1}{k}\right)(n+2)(n+1) I^{-2} q_{n+2},
\]
which concludes the proof of Lemma \ref{Lemma-P-n-mathcal-L-s}.
\end{proof}
\textbf{c) Third term, the nonlinear term $\mathcal{N} (q)$}\\
In this part, we aim to estimate the projection of $\mathcal{N}(q)$ on $H_n$, for $n \in \{0,1,...,[M] \} $. More precisely, we have the following lemma:
\begin{lemma}\label{projecion-H-n-N} Let $b_0 > 0$; then there exists $ \delta_5(b_0)>0$ such that for all $ \delta \in (0, \delta_5)$, there exists $s_5(b_0,\delta) \ge 1$ such that for all $s_0 \ge s_5$, the following property is valid: Assume $(q,b)(s) \in V_{\delta, b_0}(s)$ for all $s \in [s_0,\bar s]$ for some $\bar s > s_0$; then we have
\beqtn\label{bound-N}
\left| P_n(\mathcal{N}) \right| \leq CI^{-2\delta }(s), \quad \forall n \in \{0,1,..., [M]\},
\eeqtn
for all $s \in \left[s_0, \bar s \right]$.
\end{lemma}
\begin{proof} We argue as in \cite{BKnon94}. First, let us recall the nonlinear term $\mathcal{N}$ and the projection $ P_n(\mathcal{N}) $, defined as in \eqref{nonlinear-term} and \eqref{defi-q_m}, respectively. The main goal is to use the estimates defining $V_{\delta, b_0}(s)$ to get an improved bound on $P_n(\mathcal{N})$. Firstly, we recall the following identity
\begin{equation}\label{e-b-identity-L}
e_b(y)=(p-1)^{-1}\left (\sum_{\ell=0}^{L}\left (-\frac{by^{2k}}{p-1}\right )^\ell + \left (-\frac{by^{2k}}{p-1}\right )^{L+1}e_b(y)\right), \forall L \in \mathbb{N}^*.
\end{equation}
\iffalse The main goal of this Lemma is to improve the bounds of the $P_n(\mathcal{N})$. The main idea follows the fact that, the projection only is affected on a very small neighborhood of $0$, for instant $[0,I^{-\delta}(s)]$ with $ I^{-\delta}(s) \to 0 \text{ as } s \to +\infty$, and the rest part on $[I^{-\delta}, +\infty)$ we be ignored.
In addition to that, on the main interval, we can apply Taylor expansion as well to get some cancellation to the nonlinear term, that finally deduces a good estimate on the projection $ P_n(\mathcal{N})$. Let us start to the detail of the proof: We now recall of necessary estimates for the proof in the below. \fi
Since $ (q,b)(s) \in V_{\delta, b_0}(s) $ for all $s \in [s_0, \bar s]$, we get the following
\begin{eqnarray}
\left| e_{b}(y) q(y) \right| = | e_{b}(y)| \left| \left( \sum_{m=0}^M q_m(s) H_m(y,s) +q_-(y,s) \right) \right| \le C I^{-\delta }(s) (1 + |y|^M), \label{rough-esti-e-b-q}
\end{eqnarray}
which implies
\begin{eqnarray}
\left|\mathcal{N}(q)(y,s) \right| \le C \left| 1 + e_{b}(y,s) q(y,s) \right|^p \le C[1+I^{-p\delta}(s)(1 + |y|^{Mp})]. \label{rough-estimate-N-q}
\end{eqnarray}
By applying Lemma \ref{small-integral-y-ge-I-delta} with $f(y) =\mathcal{N}(y)$ and $K =pM, \delta =0$, we obtain
\begin{eqnarray}\label{projection-xc-N}
\left| \int_{|y| \ge 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C e^{-\frac{I(s)}{8}}, \forall n \in \{0,1,...,[M]\},
\end{eqnarray}
and it follows that
\begin{eqnarray}
\left| \int_{|y| \ge 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C I^{-2\delta -2n}, \forall n \in \{0,1,...,[M]\},\label{esti-integral-N-lea-I-delta}
\end{eqnarray}
provided that $s_0 \ge s_{1,1}(\delta, M)$. We now claim that the following estimate
\begin{eqnarray}
\left|\int_{|y| \le 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C I^{-2\delta -2n}, \forall n \in \{ 0,1..., [M]\}, \label{integral-N-H-n-les-I-delta}
\end{eqnarray}
directly concludes the proof of Lemma \ref{projecion-H-n-N}.
Indeed, assume that \eqref{esti-integral-N-lea-I-delta} and \eqref{integral-N-H-n-les-I-delta} hold; then we derive
$$ \left| \left\langle \mathcal{N}, H_n(y,s) \right\rangle_{L^2_{\rho_s} } \right| \le C I^{-2\delta -2n}(s), \forall n \in \{0,1...,[M]\}, $$
which implies, since $\langle H_n, H_n \rangle_{L^2_{\rho_s}} = I^{-2n}(s) 2^n n!$,
$$ |P_n(\mathcal{N})| \le CI^{-2\delta}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{0,1...,[M]\}, $$
which concludes \eqref{bound-N}, and hence Lemma \ref{projecion-H-n-N}. Now, it remains to prove \eqref{integral-N-H-n-les-I-delta}. From \eqref{rough-esti-e-b-q}, we have
\begin{eqnarray}\label{esti-e-b-q-yle-1}
\left| e_{b(s)}(y) q(y,s) \right| \le CI^{-\delta}(s), \forall s \in [s_0,\bar s] \text{ and } |y| \leq 1.
\end{eqnarray}
Then, we apply a Taylor expansion to $\mathcal{N}(q)$ in the variable $z = q e_{b}$ (here and below, we write $b$ for $b(s)$) and we get
\begin{eqnarray}
\mathcal{N}(q)&=&|1+e_bq|^{p-1}(1+e_bq)-1-p e_b q =\sum_{j=2}^{K}c_j (e_bq)^j + R_K, \label{defi-R-K}
\end{eqnarray}
where $K $ will be fixed later; the reader should bear in mind that we only consider $|y| \leq 1$ in this part.
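As a simple illustration of the expansion \eqref{defi-R-K}, consider the case $p=3$: then the expansion is exact with no remainder, since for $1 + e_b q > 0$ (which holds for $|y| \le 1$ and $s_0$ large, by \eqref{esti-e-b-q-yle-1}),
\[
\mathcal{N}(q) = (1+e_bq)^{3} - 1 - 3 e_b q = 3 (e_b q)^2 + (e_b q)^3.
\]
For general $p>1$, the $c_j$ are the generalized binomial coefficients $\binom{p}{j}$, and $R_K$ collects the Taylor error of order $K+1$.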
For the remainder $R_K$, we derive from \eqref{esti-e-b-q-yle-1} that
\begin{eqnarray}
\left| R_K(y,s) \right| \le C \left| e_{b}(y)q(y,s) \right|^{K+1} \le C I^{-\delta (K+1)}(s).\label{property-R-K}
\end{eqnarray}
Besides that, we recall from \eqref{decomposition-q2} that $ q = q_+ + q_-$, and we can then express
\begin{eqnarray*}
\sum_{j=2}^K c_j (e_{b} q )^j = \sum_{j=2}^{K}d_{j} (e_bq_+)^j + \sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell} = A + S,
\end{eqnarray*}
where
\begin{eqnarray}
A = \sum_{j=2}^{K}d_{j} (e_bq_+)^j \text{ and } S =\sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell}, \text{ for some } d_j, \tilde d_{j,\ell} \in \mathbb{R}.\label{defi-A-S}
\end{eqnarray}
From the above expressions, we can decompose $\mathcal{N}$ as
$$ \mathcal{N} = A + S + R_K,$$
and we also have
\begin{eqnarray*}
\int_{|y| \le 1 } \mathcal{N}(y,s) H_n(y,s) \rho_s(y) dy &=& \int_{|y| \le 1} A H_n(y,s) \rho_s(y) dy +\int_{|y| \le 1} S H_n(y,s) \rho_s(y) dy \\
&+& \int_{|y| \le 1} R_K H_n(y,s) \rho_s(y) dy.
\end{eqnarray*}
- \textit{The integral for $R_K$}: Note that $H_n$, defined in \eqref{eigenfunction-Ls}, satisfies
$$ \left| H_n(y,s) \right| \le C (1 + |y|^n) \le C, \forall |y| \le 1, $$
hence, it follows from \eqref{property-R-K} that
\begin{eqnarray}
\left|\int_{|y| \le 1 } R_K(y,s) H_n(y,s)\rho_s(y) dy\right| &\le & C I^{-\delta(K+1)}(s) \int_{|y| \le 1} e^{-\frac{I^2(s)y^2}{4}} I(s) dy \nonumber\\
& \le & C I^{1-\delta (K+1)}(s) \le CI^{-2\delta - 2n }(s), \forall s \in [s_0,\bar s],\label{estimate-on-integral-R-K}
\end{eqnarray}
provided that $K \ge K_1(\delta,M)$.
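Let us make explicit the Gaussian bound used in the last estimate (a minor refinement, not needed below): by the change of variable $z = I(s) y$,
\[
\int_{|y| \le 1} e^{-\frac{I^2(s) y^2}{4}}\, I(s)\, dy = \int_{|z| \le I(s)} e^{-\frac{z^2}{4}}\, dz \le \int_{\mathbb{R}} e^{-\frac{z^2}{4}}\, dz = 2\sqrt{\pi},
\]
so that the right-hand side of \eqref{estimate-on-integral-R-K} could even be replaced by $C I^{-\delta(K+1)}(s)$; the cruder bound $C I^{1-\delta(K+1)}(s)$ is already sufficient for our purpose.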
\noindent - \textit{ The integral for $S$:} Since $(q,b)(s) \in V_{\delta,b_0}(s)$ for all $s \in [s_0,\bar s]$, we can estimate as follows
\begin{eqnarray*}
\left| q_+(y,s) \right|^\ell + |q_-(y,s)|^\ell \le \left| \sum_{m=0}^M q_m(s) H_{m}(y,s) \right|^\ell + C I^{-\ell \delta}(s)(I^{-M}(s)+|y|^M)^\ell \le C I^{-\ell \delta}(s),
\end{eqnarray*}
for all $ |y| \le 1 \text{ and } \ell \in \mathbb{N}.$ Returning to \eqref{defi-A-S}, we can estimate as follows
\begin{eqnarray*}
\left| S(y,s) \right| \le C \left(\left|q_+(y,s) \right| |q_-(y,s)| +|q_-(y,s)|^2 \right)\le C I^{-2\delta}(s) ( I^{-M}(s) + |y|^M ),
\end{eqnarray*}
provided that $s_0 \ge s_{1,3}(K)$. Thus, we derive
\begin{eqnarray*}
\left| \int_{|y| \le 1} S(y,s) H_n(y,s) \rho_s(y) dy \right| \le C I^{-2 \delta}(s) \int_{|y| \le 1} \left(I^{-M}(s) + |y|^M \right) |H_n(y,s)| e^{-\frac{I^2(s) y^2 }{4}} I(s) dy.
\end{eqnarray*}
According to \eqref{eigenfunction-Ls}, after the change of variable $z = I(s) y$, we have
\begin{eqnarray}
& & \hspace{-0.8cm} \int_{|y| \le 1} \left(I^{-M}(s) + |y|^M \right) |H_n(y,s)| e^{-\frac{I^2(s) y^2 }{4}} I(s) dy \label{changin-variable-z}\\
&=& I^{-M-n}(s) \int_{|z| \le I(s)} (1+|z|^M) |h_n(z)| e^{-\frac{|z|^2}{4}} dz \le C I^{-M-n}(s). \nonumber
\end{eqnarray}
Finally, we have
\begin{eqnarray}
\left| \int_{|y| \le 1} S(y,s) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta-2n}(s), \forall n \le M,\label{integral-S-H-n} \forall s \in [s_0,\bar s],
\end{eqnarray}
provided that $s_0 \ge s_{1,3}(K)$.
\noindent - \textit{The integral for $A$}: From \eqref{decomposition-q2} and \eqref{eb0-modulation}, we expand $e_{b}(y) = \sum_{\ell =0}^{K-1} E_\ell b^\ell y^{2\ell k } + O(y^{2kK})$ for $|y| \le 1$, with coefficients $E_\ell \in \R$.
\iffalse
\begin{eqnarray*}
e_{b}(y) = \sum_{\ell =0}^{K-1} E_\ell b^\ell y^{2\ell k } + O(y^{K 2k }), \text{ with } E_j \in \R.
\end{eqnarray*} Then, with $q_+ $ defined as in \eqref{decomposition-q2}, we conclude \fi
With $q_+ $ defined as in \eqref{decomposition-q2}, we conclude
\begin{eqnarray*}
\left( e_{b}q_+ \right)^j = \left( \sum_{\ell =0}^{K-1} E_\ell b^\ell y^{2\ell k } \right)^j \left( \sum_{m=0}^{\left[ M \right]} q_m H_m \right)^j + O(|q_+|^2 y^{2kK} ), \forall j \ge 2.
\end{eqnarray*}
By using the technique in \eqref{changin-variable-z} (changing variable $z = I(s) y$), we obtain
\begin{eqnarray}
\int_{|y| \le 1 } |y|^{2kK} |q_+|^2(y) \rho_s (y) dy & \le & C I^{-2\delta}(s) \int_{|y| \le 1 } |y|^{2kK} \left( \sum_{m=0}^{\left[ M\right]} \left|H_m(y,s) \right| \right)^2 \rho_s dy \nonumber\\
&\le & C I^{-2\delta -2kK }(s) \le C I^{-2\delta - 2n}(s), \label{estimates-the-errors-y-K}
\end{eqnarray}
provided that $ K \ge K_2(\delta, M)$ is large enough. In addition, we derive from the definition of $H_m$ in \eqref{eigenfunction-Ls} that
\begin{eqnarray*}
\left( \sum_{\ell=0}^{K-1} E_\ell b^\ell y^{2k\ell} \right)^j \left( \sum_{m=0}^{\left[ M \right]} q_m H_m \right)^j = \sum_{i=0}^{L} \mathcal{A}_i(s) y^i \text{ where } L = j\left( \left[M \right] + 2k(K-1)\right),
\end{eqnarray*}
and the $\mathcal{A}_i$ satisfy
\begin{eqnarray*}
\left| \mathcal{A}_i(s) \right| \le C I^{-2\delta}(s).
\end{eqnarray*}
Now, we apply Lemmas \ref{lemma-scalar-product-H-m} and \ref{small-integral-y-ge-I-delta} to deduce
\begin{eqnarray}
\left| \int_{|y| \le 1 } \left( \sum_{\ell=0}^{K-1} E_\ell b^\ell y^{2k\ell} \right)^j \left( \sum_{m=0}^{ [M] } q_m H_m \right)^j H_n(y,s)\rho_s(y) dy \right| \le C I^{-2\delta -2n} (s). \label{estinate-polynomial-q-+}
\end{eqnarray}
Thus, we get
\begin{eqnarray}
\left| \int_{|y| \le 1} A(y,s) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta-2n}(s), \forall n \le M, \forall s \in [s_0,\bar s].
\label{estimate-A} \end{eqnarray} According to \eqref{estimate-on-integral-R-K}, \eqref{integral-S-H-n} and \eqref{estimate-A}, we have \begin{eqnarray} \left| \int_{|y| \le 1} \mathcal{N}(q) H_n(y,s) \rho_s(y) dy \right| \le C I^{-2\delta -2n}(s),\label{estimate-} \end{eqnarray} provided that $s_0 \ge s_{1,3}(K)$ and $K \ge K_2$. Thus, \eqref{integral-N-H-n-les-I-delta} follows, which concludes the proof of the Lemma. \end{proof} \medskip \textbf{d) Fourth term $b'(s)\mathcal{M} (q)$.} Let us recall the definition of $\mathcal{M}$, \[\mathcal{M}(q)=\frac{p}{p-1}y^{2k} (1+e_bq).\] We then have the following result: \begin{lemma}\label{lemma-P-n-M} Let $b_0 >0$. Then there exists $\delta_6(b_0)$ such that for all $ \delta \in (0, \delta_6)$ there exists $ s_6(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_6$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s)$ for all $s \in [s_0, \bar s]$ for some arbitrary $\bar s$; then it holds that \begin{equation}\label{project-P-n-M} P_{n} \left( \mathcal{M} (q) (s) \right) = \left\{ \begin{array}{rcl} \frac{p}{p-1} + O(I^{-\delta}(s)) & \text{ if } & n = 2k \\ O(I^{-\delta}(s)) & \text{ if } & n \ne 2k, n \in \{0,1,...,[M]\} \end{array} \right. \end{equation} for all $s \in [s_0, \bar s]$. \end{lemma} \begin{proof} We first decompose $$ \left\langle \mathcal{M}, H_n(y,s) \right\rangle_{ L^2_{\rho_s}} = \left\langle \frac{p}{p-1} y^{2k} , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} + \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}}.$$ From \eqref{eigenfunction-Ls}, we get the following \begin{eqnarray}\label{part-1-M} \left\langle \frac{p}{p-1} y^{2k} , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} = \frac{p}{p-1} \left\{ \begin{array}{rcl} \|H_{2k}\|^2_{L^2_{\rho_s}} & \text{ if } & n=2k\\[0.2cm] O(I^{-2k-2}(s)) & \text{ if } & n < 2k \\ 0 & \text{ if } & n > 2k \end{array} \right. \end{eqnarray} Now we focus on the scalar product $$ \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} .$$ We decompose \begin{eqnarray*} \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} &=& \int_{|y| \le 1} \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy \\ &+&\int_{|y| \ge 1} \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy.
\end{eqnarray*} Since $q \in V_{ \delta, b_0}(s)$ for all $ s \in [s_0,s^*]$, the following estimate holds \begin{eqnarray*} \left| \frac{p}{p-1} y^{2k} e_b(y) q \right| \le C I^{-\delta}(s) |y|^{2k} (1+ |y|^{M}). \end{eqnarray*} Using Lemma \ref{small-integral-y-ge-I-delta}, we conclude \begin{eqnarray} & & \left| \int_{|y| \ge 1 } \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy \right| \label{integral_M-I-ge-I-delta} \\ &\le & C I^{-\delta} e^{-\frac{1}{8} I(s)} \le C I^{-2\delta}(s), \forall s \in [s_0,s^*],\nonumber \end{eqnarray} provided that $s_0 \ge s_3(\delta)$. Let us decompose \begin{eqnarray*} \frac{p}{p-1} y^{2k} e_b(y) q = \frac{p}{p-1} y^{2k} e_b(y) q_+ + \frac{p}{p-1} y^{2k} e_b(y) q_- . \end{eqnarray*} Since $q \in V_{\delta, b_0}(s)$ and $e_b$ is bounded, we get \begin{eqnarray*} \left| \frac{p}{p-1} y^{2k} e_b(y) q_- \right| \le C I^{-\delta}(s) |y|^{2k} (I^{-M}(s) + |y|^M). \end{eqnarray*} By the same technique as in \eqref{changin-variable-z}, we obtain \begin{equation}\label{bound-for-y-2k-e-b-q-} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q_- H_n (y,s) \rho_s(y) dy \right| \le CI^{-2\delta -2n}(s), \forall s \in [s_0,s^*] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} In addition, using \eqref{decomposition-q2} and \eqref{eb0-modulation}, we write \begin{eqnarray*} \frac{p}{p-1} y^{2k} e_b(y) q_+ = \sum_{i=0}^M \sum_{j=1}^{K} m_{i,j} b^j q_i(s) y^{2kj} H_i(y,s) + O\left( I^{-\delta}(s) y^{(K+1)2k} (1 + |y|^M ) \right). \end{eqnarray*} Repeating the technique in \eqref{changin-variable-z} (the change of variables $z = I(s) y$), we obtain \begin{eqnarray*} \left| \int_{|y| \le 1} I^{-\delta}(s) y^{(K+1)2k} (1 + |y|^M ) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta -2n} (s), \forall s \in [s_0,s^*], n \in \{ 0,1,..., [M]\}, \end{eqnarray*} provided that $ K $ is large enough.
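To make the exponent bookkeeping in the last step explicit, one may argue as in \eqref{changin-variable-z} (a sketch; $c_n$ below denotes a constant collecting the Hermite integrals): after the change of variables $z = I(s)y$, \begin{eqnarray*} \int_{|y|\le 1} I^{-\delta}(s)\, |y|^{(K+1)2k} (1+|y|^M)\, |H_n(y,s)|\, \rho_s(y)\, dy &\le& C I^{-\delta-(K+1)2k-n}(s) \int_{|z|\le I(s)} |z|^{(K+1)2k} (1+|z|^M)\, |h_n(z)|\, e^{-\frac{|z|^2}{4}}\, dz \\ &\le & c_n\, I^{-\delta-(K+1)2k-n}(s), \end{eqnarray*} so that a bound of the form $CI^{-2\delta-2n}(s)$ holds as soon as $(K+1)2k \ge \delta + n$, which is guaranteed for all $n \le [M]$ once $2k(K+1) \ge \delta + M$.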
Besides that, we use the fact that $ q \in V_{\delta, b_0}(s) $ to get \begin{eqnarray*} \left| q_i(s) \right| \le CI^{-\delta}(s), \end{eqnarray*} and, since each $H_i$ can be written as a polynomial in $y$, we apply Lemma \ref{lemma-scalar-product-H-m} and Lemma \ref{small-integral-y-ge-I-delta} to derive \begin{eqnarray*} & & \left| \int_{|y| \le 1} \left( \sum_{i=0}^M \sum_{j=1}^{K} m_{i,j} b^j q_i(s) y^{2kj} H_i(y,s) \right) H_n(y,s) \rho_s(y) dy \right| \\ &\le & CI^{-\delta -2n}(s), \forall s \in [s_0,s^*] \text{ and } n \in \{ 0,1,..., [M]\}. \end{eqnarray*} Finally, we get \begin{equation}\label{bound-y-2k-e-b-q+} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q_+ H_n (y,s) \rho_s(y) dy \right| \le CI^{-\delta -2n}(s), \forall s \in [s_0,s^*] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} Now, we combine \eqref{bound-for-y-2k-e-b-q-} with \eqref{bound-y-2k-e-b-q+} to deduce \begin{equation}\label{integra-M-y-le-I-delta} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q H_n (y,s) \rho_s(y) dy \right| \le CI^{-\delta -2n}(s), \forall s \in [s_0,s^*] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} We use \eqref{integral_M-I-ge-I-delta} and \eqref{integra-M-y-le-I-delta} to conclude \begin{equation}\label{part-2-M} \left| \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} \right| \le CI^{-\delta-2n}(s), \forall s \in [s_0,s^*] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} Finally, by \eqref{part-1-M} and \eqref{part-2-M} we conclude the proof of the Lemma.
\end{proof} \medskip \textbf{e) Fifth term $\mathcal{D}_s (q)$}\\ \begin{lemma}[Estimation of $P_n(\mathcal{D}_s)$] \label{lemma-P-n-mathcal-D} Let $b_0 > 0$, then there exists $\delta_7(b_0) > 0$ such that for all $\delta \in (0,\delta_7)$, there exists $s_7(\delta, b_0)$ such that for all $s_0 \ge s_7$, the following property holds: Assume $(q,b)(s) \in V_{\delta, b_0}(s)$ for all $ s \in [s_0,\bar s]$ for some $\bar s \ge s_0$, then we have \begin{equation}\label{projec-P-n-mathcal-D} \left| P_n(\mathcal{D}_s(q)) \right| \leq C I^{-2\delta} (s), \text{ for all } s \in [s_0,\bar s], \end{equation} for all $ 0 \le n \le M $. \end{lemma} \begin{proof} Let us now recall from \eqref{equation-Ds} that \[\mathcal{D}_s(\nabla q)=-\frac{4pkb}{p-1}I_s^{-2} y^{2k-1}e_b\nabla q.\] From \eqref{defi-q_m} and \eqref{scalar-product-hm}, it is sufficient to estimate \begin{eqnarray*} \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } = -\frac{4pkb}{p-1} I^{-2}(s) \int_{\mathbb R} y^{2k-1} e_b \nabla q H_n(y,s) \rho_s(y) dy. \end{eqnarray*} From the facts that $\nabla (H_n)=nH_{n-1}$ and $\rho_s(y) = \frac{I(s)}{4\pi} e^{-\frac{I^2(s) y^2}{4}}$, we use integration by parts to derive \begin{eqnarray*} & & \langle \mathcal{D}_s, H_n(y,s) \rangle_{L^2_{\rho_s} } \\ & = & \frac{4pkb}{p-1}I^{-2}(s)\left (\int \nabla (y^{2k-1}e_b)q H_n \rho_s(y)dy\right. + n\int y^{2k-1}e_b q H_{n-1} \rho_s dy \left . -\frac 1 2I^2(s)\int y^{2k} e_b q H_n \rho_s dy \right ). \end{eqnarray*} Then, we explicitly write the scalar product by four integrals as follows \begin{eqnarray*} \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } &=&\frac{4pkb}{p-1}I^{-2}(s)\left \{(2k-1)\int y^{2k-2}e_bq H_n \rho_s(y)dy\right . -2kb\int y^{4k-2} e_b^{2}q H_n \rho_s(y)dy\\ &&+n\int y^{2k-1}e_b q H_{n-1} \rho_s dy \left . -\frac 1 2I^2(s)\int y^{2k} e_b q H_n \rho_s dy \right\}.
\end{eqnarray*} By the technique established in Lemma \ref{lemma-P-n-M}, we can prove $$ \left| \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } \right| \le C I^{-2\delta -2n}, \forall s \in [s_0,s^*], \text{ and } n \in \{0,1,...,[M] \}, $$ which yields \eqref{projec-P-n-mathcal-D}, and the conclusion of the Lemma follows. \end{proof} \medskip \textbf{f) Sixth term $\mathcal{R}_s (q)$} \begin{lemma}[Estimation of $P_n(\mathcal{R}_s)$]\label{Lemma-Rs-n} Let $b_0 > 0$, then there exists $\delta_8(b_0) >0$ such that for all $ \delta \in (0, \delta_8)$ there exists $s_8(b_0, \delta) \ge 1$ such that for all $ s_0 \ge s_8$, the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s)$ for all $s \in [s_0,\bar s]$ for some $\bar s \ge s_0$; then \begin{equation}\label{bound-P-n-mathcal-R-s} \left| P_n(\mathcal{R}_s(q)) \right| \leq C I^{-2\delta}(s), \end{equation} for all $s \in [s_0,\bar s]$ and $0 \le n \le M$. \end{lemma} \begin{proof} The technique is essentially the same as for the other terms above. First, we recall the definition of $\mathcal{R}_s$ given in \eqref{equation-Rs}, \[ \mathcal{R}_s(q)= I^{-2}(s)y^{2k-2}\left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \] then, we have the following \begin{eqnarray*} P_n(\mathcal{R}_s) = \frac{\left\langle \mathcal{R}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s}} }{ \left\| H_n(s) \right\|_{L^2_{\rho_s}}^2 }, \end{eqnarray*} where $\left\| H_n(s) \right\|_{L^2_{\rho_s}}^2$ is computed in \eqref{scalar-product-hm}.
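Let us note explicitly how a scalar-product bound transfers to the projection $P_n$ (a short computation, assuming, as suggested by \eqref{scalar-product-hm} and \eqref{changin-variable-z}, that $\left\| H_n(s)\right\|^2_{L^2_{\rho_s}}$ scales like $c_n I^{-2n}(s)$ for some constant $c_n>0$): if $\left| \left\langle \mathcal{R}_s, H_n \right\rangle_{L^2_{\rho_s}}\right| \le CI^{-2\delta-2n}(s)$, then \[ \left| P_n(\mathcal{R}_s)\right| = \frac{\left| \left\langle \mathcal{R}_s, H_n \right\rangle_{L^2_{\rho_s}}\right|}{\left\| H_n(s)\right\|^2_{L^2_{\rho_s}}} \le \frac{C}{c_n}\, I^{-2\delta-2n}(s)\, I^{2n}(s) \le C' I^{-2\delta}(s). \]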
In particular, we observe that \eqref{bound-P-n-mathcal-R-s} immediately follows from \begin{eqnarray} \left| \left\langle \mathcal{R}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s}} \right| \le CI^{-2\delta- 2n}, \forall s \in [s_0,s^*] \text{ and } \forall n \in \{ 0,1,...,[M]\}.\label{scalar-product-mathcal-R-H-n} \end{eqnarray} The proof of \eqref{scalar-product-mathcal-R-H-n} proceeds as in Lemma \ref{lemma-P-n-M}; we refer the reader to that proof for the details, which finishes the proof of the Lemma. \end{proof} \textbf{Part 2: Proof of (i) and (ii) of Proposition \ref{proposition-ode}}: \medskip \noindent \textit{- Proof of (i) of Proposition \ref{proposition-ode}:}\\ Combining Lemmas \ref{Lemma-Pn_partialq}--\ref{Lemma-Rs-n} with the estimates defining $V_{\delta, b_0}(s)$, we obtain (i) of Proposition \ref{proposition-ode}: \[\forall n \in \{0,...,[M]\},\;\;\left |\pa_s q_n-\left( 1-\frac{n}{2k} \right)q_n\right |\leq CI^{-2\delta}(s), \forall s \in [s_0, \bar s],\] provided that $\delta \le \delta_3$ and $s_0 \ge s_3(\delta, b_0)$. Thus, we conclude item (i). \medskip \noindent \textit{- Proof of (ii) of Proposition \ref{proposition-ode}: Smallness of the modulation parameter.
}\\ Let us recall the equation satisfied by $q$: \beqtn\label{equation-q-bis} \pa_s q =\mathcal{L}_s q+b'(s)\mathcal{M}(q) +\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s(q). \eeqtn This part aims to obtain an estimate on the modulation parameter $b(s)$. To this end, we project equation \eqref{equation-q-bis} on $H_{2k}$ and take into consideration that $q_{2k}=0$, which yields \beqtn\label{modulation-equation} 0=\frac{p}{p-1}b'(s)\left (1+ P_{2k}(y^{2k}e_bq)\right )+P_{2k}(\mathcal{N}) +P_{2k}(\mathcal{D}_s)+P_{2k}(\mathcal{R}_s). \eeqtn Using the estimates given by \eqref{bound-N} and Lemmas \ref{lemma-P-n-M}, \ref{lemma-P-n-mathcal-D} and \ref{Lemma-Rs-n}, we obtain \beqtn\label{inequality-b} |b'(s)|\leq CI(s)^{-2\delta}=C e^{\delta\frac{1-k}{k}s}, \eeqtn where $0<\delta\leq \min_{5\le j\le 8} \delta_j$ is a strictly positive real number, which gives us the smallness of the modulation parameter in (ii) of Proposition \ref{proposition-ode}, and we obtain \beqtn b(s)\to b^*\mbox{ as }s\to \infty,\;\; (t\to T). \label{convegence-b-s} \eeqtn Integrating inequality \eqref{inequality-b} between $s_0$ and infinity, we obtain \[ |b^*-b_0|\leq C e^{\delta\frac{1-k}{k}s_0}.\] We conclude that there exists $s_{9}$ such that for $s_0\geq s_9$ big enough, we have \[\frac{3}{4} b_0\leq b^*\leq \frac{5}{4} b_0,\] which is (ii) of Proposition \ref{proposition-ode}. \subsection{The proof of item (iii) of Proposition \ref{proposition-ode} }\label{proof-item-iii} Here, we prove the last identity of Proposition \ref{proposition-ode}. As in the previous subsection, we proceed in two parts: \begin{itemize} \item In Part 1, we project equation \eqref{equation-q} using the projector $P_-$ defined in \eqref{projector-P-}. \item In Part 2, we prove the estimate on $q_-$ given by (iii) of Proposition \ref{proposition-ode}.
\end{itemize} \textbf{Part 1: The projection of equation \eqref{equation-q} using the projector $P_-$.} Let $(q,b)$ be a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} trapped in $V_{\delta, b_0}(s)$ for all $s \in [s_0, \bar s]$ for some $\bar s > s_0$. Then, we have the following results: \medskip \textbf{First term $\pa_s q$.}\\ \begin{lemma} For all $s \in [s_0, \bar s]$, it holds that \begin{equation}\label{esti-par-q-m-P-partial-s-q_-} P_-(\partial_s q)=\partial_s q_- - I^{-2}(1-\frac{1}{k})\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2}(s)H_n. \end{equation} \end{lemma} \begin{proof} We first have \[ \begin{array}{lll} P_-(\partial_s q)-\partial_s q_- &= &-\displaystyle \left (\partial_s q-P_-(\partial_s q)\right )+\left (\partial_s q-\partial_s q_-\right ) , \\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\partial_s q)H_n+\sum_{n=0}^{[M]}\partial_s (q_n H_n),\\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\partial_s q)H_n+\sum_{n=0}^{[M]}\partial_s q_n H_n+\sum_{n=2}^{[M]}q_n\partial_s H_n, \end{array} \] and we recall from \eqref{formula-partia-s-Hn} that for all $n\ge 2$ \[ \partial_s H_n (y,s) = n (n-1) \left(1 -\frac{1}{k} \right) I^{-2}(s) H_{n-2}(y,s).\] Then, by Lemma \ref{Lemma-Pn_partialq}, we obtain the desired result \[P_-(\partial_s q)=\partial_s q_- -I^{-2}\left(1-\frac{1}{k}\right)\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2}(s)H_n.\] \end{proof} \textbf{Second term $\mathcal L_s q$.}\\ By the spectral properties given in Section \ref{Section-Spectral-properties-Ls}, we can write \begin{lemma} For all $s \in [s_0, \bar s]$, it holds that \[P_-(\mathcal{L}_s q)=\mathcal{L}_s q_- -I^{-2} (1-\frac{1}{k}) \displaystyle\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2} H_n.\] \end{lemma} \begin{proof} We write \[ \begin{array}{lll} P_-(\mathcal{L}_s q)-\mathcal{L}_s q_- &= &-\displaystyle \left (\mathcal{L}_s q-P_-(\mathcal{L}_s q)\right )+\left (\mathcal{L}_s q-\mathcal{L}_s q_-\right ) , \\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\mathcal{L}_s q) H_n+ \mathcal{L}_s
\left (q-q_-\right ),\\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\mathcal{L}_s q) H_n+ \sum_{n=0}^{[M]}q_n\mathcal{L}_s(H_n) . \end{array} \] From \eqref{Ls-Hm}, we obtain \[\begin{array}{lll} \displaystyle \sum_{n=0}^{[M]}q_n\mathcal{L}_s (H_n)&=&\displaystyle q_0+(1-\frac{1}{2k})q_1 H_1+\sum_{n=2}^{[M]}q_n\left [(1-\frac{n}{2k})H_n+I^{-2} n(n-1)(1-\frac{1}{k})H_{n-2}\right ],\\ &=& \displaystyle\sum_{n=0}^{[M]} (1-\frac{n}{2k})q_n H_n+ I^{-2} (1-\frac{1}{k}) \displaystyle\sum_{n=0}^{[M]-2}(n+1)(n+2)q_{n+2} H_n \end{array}. \] We deduce from Lemma \ref{Lemma-P-n-mathcal-L-s} that \[P_-(\mathcal{L}_s q)-\mathcal{L}_s q_-=- I^{-2} (1-\frac{1}{k})\left [ [M]([M]+1)q_{[M]+1} H_{[M]-1} +([M]+1)([M]+2)q_{[M]+2} H_{[M]} \right ].\] \end{proof} \medskip \textbf{Third term $\mathcal{N}$.}\\ \begin{lemma}\label{lemma-estimation-P--N} Let $b_0 >0$, then there exists $\delta_{10}(b_0)$ such that for all $ \delta \in (0, \delta_{10})$ there exists $ s_{10}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{10}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{N})|\leq C\left (I(s)^{-2\delta}+I(s)^{-p\delta} \right )\left (I(s)^{-M}+|y|^M\right ). \] \end{lemma} \begin{proof} We argue as in \cite{BKnon94}. We recall from \eqref{nonlinear-term} that \[\mathcal{N}(q)=|1+e_bq|^{p-1}(1+e_bq)-1-p e_b q. \] We proceed in a similar fashion as for the projection $P_n(\mathcal{N})$: we will give estimates in the outer region $|y|\geq 1$ and the inner region $|y|\leq 1$. Let us first define $\chi_0 \in C_0^{\infty}(\R^+,[0,1])$ with $\mathrm{supp}(\chi_0) \subset [0,2]$ and $\chi_0=1$ on $[0,1]$, and set \beqtn\label{def-chi} \chi (y)=\chi_0\left (|y|\right ).
\eeqtn Using the fact that \[\mathcal{N}= \chi \mathcal{N}+\chi^c\mathcal{N}, \] we claim the following: \begin{cl}\label{estimation-P-N} \begin{eqnarray} (i)\;\; \left |P_-(\chi^c \mathcal{N} )\right |&\leq &C I(s)^{-\delta p}\left (I(s)^{-M}+|y|^M\right ),\\ (ii)\;\; \left |P_-(\chi \mathcal{N} )\right |&\leq &C I(s)^{-2\delta}\left (I(s)^{-M}+|y|^M\right ). \end{eqnarray} \end{cl} \begin{proof} First, we will estimate $P_-(\chi^c \mathcal{N})$, then $P_-(\chi \mathcal{N})$, and conclude the proof of the claim.\\ (i) Let us first write \[ \begin{array}{lll} P_-(\chi^c \mathcal{N})&=&\chi^c \mathcal{N} -\sum_{n\leq [M]+1} P_n(\chi^c \mathcal{N})H_n\\ &=&\dsp \chi^c \mathcal{N} -\sum_{n\leq [M]+1}\frac{\int_{|y|\geq 1 } \mathcal{N} H_n \rho_s dy}{\|H_n\|^2_{L^{2}_{\rho_s}}} H_n. \end{array} \] Using the definition of the shrinking set, we can write \[|\chi^c\mathcal{N}|\leq |\chi^c (CI^{-\delta}e_b|y|^M)^p|=|\chi^c \left (CI^{-\delta}(e_by^{2k})|y|^{\frac{2k}{p-1}}\right )^p|. \] By the fact that $|e_by^{2k}|\leq C$ and $M=\frac{2kp}{p-1}$, we have \[|\chi^c\mathcal{N}|\leq CI^{-\delta p}|y|^M. \] Then, using \eqref{projection-xc-N}, we deduce (i) of Claim \ref{estimation-P-N}: \beqtn |P_-(\chi^c \mathcal{N})|\leq CI(s)^{-\delta p}\left ( I(s)^{-M}+|y|^M\right ). \label{bound-P-Ncchi} \eeqtn (ii) In the inner region $|y|\leq 1$, we proceed as in the proof of Lemma \ref{projecion-H-n-N}. For $|y|\leq 1$, using the Taylor expansion as in \eqref{defi-R-K}, we write \[\chi \mathcal{N}=\chi\left (A+S+R_K\right ),\] where $A$ and $S$ are given by \eqref{defi-A-S}, \[ A =\chi \sum_{j=2}^{K}d_{j} (e_bq_+)^j \text{ and } S =\chi \sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell}, \text{ for some } d_j, \tilde d_{j,\ell} \in \mathbb{R}. \] We get for $K$ large, \beqtn\label{bound-RK2} |\chi R_K|\leq I^{-2\delta}(s) I^{-M}(s).
\eeqtn We proceed in a similar fashion as in the proof of Lemma \ref{projecion-H-n-N}, and we write $A$ as \beqtn \begin{array}{lll}\label{A1-A2} A&=&\chi \sum_{j=2}^{K}d_j\left ( e_b q_+ \right )^j \\ & =&\dsp\chi \sum_{\textbf{n},p} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i}+I(s)^{-2\delta}b(s)^{\frac{2k(L+1)}{2k}} y^{2k(L+1)}\chi Q,\\ & = &A_1+A_2, \end{array} \eeqtn where $\textbf{n}=\left ( n_1,.....,n_{M}\right )$, $\sum n_i\geq 2$ and $\chi Q$ is bounded. Then, we divide the sum $A_1$ as follows \beqtn \begin{array}{lll} A_1&=&\chi\dsp \sum_{\textbf{n},p} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i},\\ & =&\dsp \chi\dsp \sum_{\textbf{n},p, p+\sum n_i\leq M} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i} +\dsp \chi\sum_{\textbf{n},p,p+\sum n_i> M} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i}\\ &=& A_{1,1}+A_{1,2}. \end{array} \eeqtn In the first sum, $A_{1,1}$, we replace $\chi=1-\chi^c$ by $-\chi^c$, since $1$ will not contribute to $A_-$. Using the fact that $|y|\geq 1$ on the support of $\chi^c$ and by \eqref{bound-b}, we get \[\dsp \chi^c \left |y^p \Pi_{i=1}^{[M]}H_i^{n_i}\right |\leq C |y|^M.\] Since $H_m$ is bounded as follows \[|H_m(y,s)|\leq C(I(s)^{-m}+|y|^m),\] we obtain by \eqref{bound-b} \[\dsp \chi \left |y^p \Pi_{i=1}^{[M]}H_i^{n_i}\right |\leq C(I(s)^{-M}+|y|^M). \] We conclude by the definition of the shrinking set given by \eqref{definition-shrinking-set} that \beqtn |A_{1,2}|\leq CI(s)^{-2\delta}\chi(y) \left (I(s)^{-M}+|y|^{M} \right ). \eeqtn By the properties of the shrinking set and the bound for $q_-$, we obtain the bound for the term $A_2$ defined by \eqref{A1-A2}; more precisely, we have \[ |A_2|\leq CI(s)^{-2\delta}\chi(y) \left (I(s)^{-M}+|y|^{M} \right ).\] Then, we conclude that \beqtn\label{bound-A-} |P_-(A)|=|A_-|\leq C I(s)^{-2\delta}(I(s)^{-M}+|y|^M), \eeqtn which yields the conclusion of item (ii).
\end{proof} Now, we return to the proof of the Lemma. We deduce from \eqref{bound-P-Ncchi}, \eqref{bound-RK2} and \eqref{bound-A-} the following estimate for $P_-(\mathcal{N})$: \beqtn |P_-(\mathcal{N})|=|\mathcal{N}_-|\leq C (I(s)^{-2\delta}+I(s)^{-p\delta})( I(s)^{-M}+|y|^M), \eeqtn which ends the proof of Lemma \ref{lemma-estimation-P--N}. \end{proof} \textbf{Fourth term $b'(s)\mathcal{M} (q)$.}\\ \begin{lemma}\label{lemma-estimation-P--M} Let $b_0 >0$, then there exists $\delta_{11}(b_0)$ such that for all $ \delta \in (0, \delta_{11})$ there exists $ s_{11}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{11}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{M})|\leq C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ). \] \end{lemma} We recall that \[\mathcal{M}=\frac{p}{p-1}y^{2k} (1+e_bq),\] so we can write \[ P_-\left (\mathcal{M}(q)\right )=\frac{p}{p-1}P_-(y^{2k}e_bq).\] Let us write \[P_-(y^{2k}e_bq)= P_-(\chi y^{2k}e_bq)+P_-(\chi^c y^{2k}e_bq). \] We claim the following: \begin{cl}\label{estimation-P-M} \begin{eqnarray} (i)\;\; \left |P_-(\chi^c y^{2k} e_b q )\right |&\leq &C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ),\\ (ii)\;\; \left |P_-(\chi y^{2k} e_b q)\right |&\leq & C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ).
\end{eqnarray} \end{cl} \begin{proof} Let us first write \[ \begin{array}{lll} P_-(\chi^c y^{2k} e_b q)&=&\chi^c y^{2k} e_b q -\sum_{n\leq [M]+1} P_n(\chi^c y^{2k} e_b q)H_n\\ &=&\dsp \chi^c y^{2k} e_b q -\sum_{n\leq [M]+1}\frac{\int_{|y|\geq 1 } y^{2k} e_b q H_n \rho_s dy}{\|H_n\|^2_{L^{2}_{\rho_s}}}H_n. \end{array} \] When $ |y|\geq 1$, using \eqref{definition-shrinking-set}, we can write \[|y^{2k}e_b q|\leq C |q|\leq \frac{CI(s)^{-\delta}}{b(s)}|y|^M \leq CI(s)^{-\delta}|y|^M .\] ii) As for i), we write \[P_-(\chi y^{2k}e_b q)=\chi y^{2k} e_b q-\sum_{n\leq [M]+1} P_n(\chi y^{2k} e_b q)H_n.\] By Lemma \ref{lemma-P-n-M} we have $\left |\sum_{n\leq [M]+1} P_n(\chi y^{2k} e_b q) \|H_n\|^{-2}_{L^{2}_{\rho_s}} \right |\leq C I(s)^{-\delta}$. \\ We conclude using the definition of the shrinking set, and we obtain the following estimate \[|\chi y^{2k} e_b q|\leq C I(s)^{-\delta}. \] \end{proof} \textbf{Fifth term $\mathcal{D}_s (\nabla q)$} \begin{lemma} Let $b_0 >0$, then there exists $\delta_{12}(b_0)$ such that for all $ \delta \in (0, \delta_{12})$ there exists $ s_{12}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{12}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{D}_s)|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).\] \end{lemma} \begin{proof} Let us first write \[\begin{array}{lll} P_-(\mathcal{D}_s) &=&\mathcal{D}_s-\dsp \sum_{n=0}^{[M]} P_n(\mathcal{D}_s)H_n. \end{array}\] Since we are using the properties given by the shrinking set in Definition \ref{definition-shrinking-set}, it will be more convenient to estimate \beqtn \begin{array}{lll} d &=&\dsp \int_{\sigma}^{s}d\tau\mathcal{K}_{s,\tau }(y,z)\mathcal{D}_s(\nabla q).
\\ \end{array} \eeqtn Using integration by parts, we obtain \beqtn \begin{array}{lll} d&=&\dsp 4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \partial _z\left (\mathcal{K}_{s,\tau }(y,z) e_b(z) z^{2k-1}\right )q(z,\tau),\\ &=&\dsp 4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \mathcal{K}_{s,\tau }(y,z)\partial _z\left ( e_b(z) z^{2k-1}\right )q(z,\tau)\\ &&\dsp +4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \partial _z(\mathcal{K}_{s,\tau }(y,z)) e_b(z) z^{2k-1}q(z,\tau),\\ &=&d_1+d_2. \end{array} \label{decomposition-integral-d} \eeqtn For the estimation of the first term $d_1$, we argue in a similar fashion as in the projection of $P_n(\mathcal{M})$, see Lemma \ref{lemma-P-n-M}. For the second term, we argue as in Bricmont and Kupiainen \cite{BKnon94}. Indeed, we need to bound $\partial _z \mathcal{K}_{s,\tau}$. From \eqref{Kernel-Formula} we obtain \beqtn |\partial _z\mathcal{K}_{s,\tau }(y,z)|\leq C L\mathcal{F}_{\frac 12 L^2}\left (e^{\frac{s-\tau}{2k}}y-z\right )\leq \frac{CI(s)}{\sqrt{s-\tau}}\mathcal{F}_{\frac 12 L^2}\left (e^{\frac{s-\tau}{2k}}y-z\right ), \eeqtn where $L=\frac{I(s)^{2}}{(1-e^{-(s-\sigma)})}$, $\mathcal{F}$ is defined by \eqref{Kernel-Formula-F} $\text{ and } I(s)=\dsp e^{\frac s2(1-\frac 1k)}$. Then, by Definition \ref{definition-shrinking-set}, we obtain \[|d_2|\leq CI(s)^{-1} I(s)^{-\delta},\] and we conclude that there exists $\delta_{12}$ such that for all $0<\delta\leq \delta_{12}$, \beqtn\label{P--mathcalDs-part1} |d|\leq CI^{-2\delta }\left (I(s)^{-M}+|y|^M\right ). \eeqtn \medskip On the other hand, by Lemma \ref{lemma-P-n-mathcal-D}, we obtain \beqtn\label{P--mathcalDs-part2} |\sum_{n=0}^{[M]} P_n(\mathcal{D}_s)H_n|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).
\eeqtn We conclude from \eqref{P--mathcalDs-part1} and \eqref{P--mathcalDs-part2} that \[|P_-(\mathcal{D}_s)|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).\] \end{proof} \medskip \textbf{Sixth term $\mathcal{R}_s(q)$} \begin{lemma}\label{P--mathcal-Rs} Let $b_0 >0$, then there exists $\delta_{13}(b_0)$ such that for all $ \delta \in (0, \delta_{13})$ there exists $ s_{13}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{13}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \beqna |P_-(\mathcal{R}_s (q))| \leq C I(s)^{-2\delta}\left (I(s)^{-M}+|y|^M\right ). \eeqna \end{lemma} \begin{proof} We recall from \eqref{equation-Rs} that \[ \mathcal{R}_s (q)= I(s)^{-2}y^{2k-2}\left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \] and we proceed as for the estimation of $P_-(\mathcal{M})$. \end{proof} \textbf{Part 2: Proof of the identity (iii) in Proposition \ref{proposition-ode} (estimate on $q_-$)} If we apply the projector $P_-$ to equation \eqref{equation-q}, we obtain \begin{eqnarray*} \partial_s q_-=\mathcal{L}_s q_-+ P_-\left (\mathcal{N} (q) +\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+ b'(s)\mathcal{M}(q)\right ). \end{eqnarray*} Using the kernel of the semigroup generated by $\mathcal{L}_s$, the integral equation associated with the equation above reads, for all $s\in [\tau, s_1]$, \[ \begin{array}{lll} q_-(s)&=&\mathcal{K}_{s\tau} q_-(\tau)\\ &&+\displaystyle \int_{\tau}^{s} \mathcal{K}_{s s' } \left (P_-\left [\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+b'(s')\mathcal{M}(q)\right ]\right )ds'.
\end{array} \] Using Lemma \ref{lemma-estimation-K-phi}, we get \[ \begin{array}{lll} |q_-(s)|_{s}&\leq& e^{-\frac{1}{p-1}(s-\tau)}|q_-(\tau)|_{\tau}\\ && +\dsp \int_{\tau}^{s} e^{-\frac{1}{p-1}(s-s')} \left |P_-\left [\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+b'(s')\mathcal{M}(q)\right ]\right |_{s} ds'. \end{array} \] By Lemma \ref{lemma-estimation-P--N}, Lemma \ref{lemma-estimation-P--M}, Lemma \ref{P--mathcal-Rs}, equations \eqref{P--mathcalDs-part1}, \eqref{P--mathcalDs-part2} and the smallness of the modulation parameter $b(s)$ given by (ii) of Proposition \ref{proposition-ode}, we obtain \[ \begin{array}{lll} |q_-(s)|_{s} &\leq& e^{-\frac{1}{p-1}(s-\tau)}|q_-(\tau)|_{\tau}+\dsp \int_{\tau}^{s} e^{-\frac{1}{p-1}(s-s')} I(s')^{-\delta\frac{\min(p,2)+1}{2}} ds'. \end{array} \] Then, for $\delta \le \delta_3$, it holds that $$ \left| q_-(s)\right|_s \le e^{-\frac{s-\tau}{p-1}} \left| q_-(\tau)\right|_\tau + C \left( I^{-\frac{3}{2} \delta}(s) + e^{-\frac{s-\tau}{p-1}} I^{-\frac{3}{2}\delta}(\tau)\right), $$ which concludes the proof of the last identity of Proposition \ref{proposition-ode}.
\section{INTRODUCTION} In recent years, great progress has been made in the 3D field \cite{pon2020object,wang2020pointtracknet,battrawy2019lidar,wei2019conditional,qi2017pointnet++,griffiths2020finding}. Beyond 2D perception \cite{deng2009imagenet,simonyan2014very,huang2017densely,ren2015faster,liu2016ssd,he2017mask,li2017fully,wei2018quantization}, 3D scene understanding is indispensable because of its wide range of real-world applications. As one of the most active 3D topics, 3D object detection \cite{qin2019monogrnet, li2019stereo, li2019gs3d,srivastava2019learning,simonelli2019disentangling,wang2019pseudo,ku2019improving,brazil2019m3d,liu2019deep,qi2019deep} addresses the problem of detecting objects' tight bounding boxes in 3D space. It has attracted increasing attention due to strong demand in computer vision and robotics, \emph{e.g. } autonomous driving, robot navigation and augmented reality. Point clouds are one of the key 3D representations, consisting of points in 3D space. Since point clouds provide rich geometric information, many 3D object detection methods \cite{ku2018joint, yi2020segvoxelnet, zhou2019iou, chen2019fast, yan2018second, liang2019multi,shi2019pv} use them as inputs. Thanks to the development of deep learning, many approaches \cite{ mousavian20173d,yan2018second, liang2019multi,shi2019pv} adopt neural networks for 3D object detection and achieve remarkable performance on various benchmarks \cite{dai2017scannet,geiger2012we,song2015sun}. Despite their high accuracy, these learning-based methods require large amounts of 3D groundtruth. However, annotating such 3D data is a complex task demanding accurate quality control. The labeling process is expensive and time-consuming even when utilizing crowd-sourcing systems (\emph{e.g. } MTurk). Compared with 3D labels, Tang \emph{et al.} \cite{tang2019transferable} mention that labeling 2D bounding boxes can be 3-16 times faster.
Benefiting from large-scale 2D datasets, such as ImageNet \cite{deng2009imagenet} and MS COCO \cite{lin2014microsoft}, we can also obtain accurate 2D bounding boxes from high-performance 2D detectors. In addition, with the improvement of 3D sensors, it has become much easier to acquire high-quality raw 3D training data. Thus, a valuable direction is exploring how to leverage 2D bounding boxes along with sparse point clouds from LiDAR for 3D object detection. \begin{figure}[tb] \centering \includegraphics[width=0.85\linewidth]{intro.jpg} \caption{An overview of our proposed framework. In our algorithm, we only need 2D labels along with sparse point clouds from LiDAR. We find that semantic information in 2D bounding boxes and 3D structure information in point clouds are enough for 3D vehicle detection.} \label{fig:overview} \vspace{-6mm} \end{figure} In this work, we propose frustum-aware geometric reasoning (FGR) for weakly supervised 3D vehicle detection. Fig. \ref{fig:overview} illustrates the overview of our method. Our framework leverages only 2D bounding box annotations and sparse point clouds. It consists of two components: coarse 3D segmentation and 3D bounding box estimation. In the coarse 3D segmentation stage, we first employ RANdom SAmple Consensus (RANSAC) \cite{fischler1981random} to remove the ground part and extract the frustum area for each object. To tackle cases with different point densities, we propose a context-aware adaptive region growing method that leverages contextual information and adjusts the threshold adaptively. The aim of the 3D bounding box estimation stage is to calculate the 3D bounding boxes of the segmented vehicle point clouds. More specifically, we design an anti-noise framework to estimate bounding rectangles in Bird's Eye View (BEV) and localize the key vertex. Then, the key vertex and its edges are used to intersect the frustum.
Based on 2D bounding boxes, our pipeline can generate pseudo 3D labels, which are finally used to train a 3D detector. To evaluate our method, we conducted experiments on the KITTI Object Detection Benchmark \cite{geiger2012we}. Experimental results show that FGR achieves performance comparable to fully supervised methods and can be applied as a 3D annotator given 2D labels. The code is available at {\small \url{https://github.com/weiyithu/FGR}}. \section{Related Work} \noindent \textbf{LiDAR-based 3D Object Detection:} With access to high-quality 3D data, LiDAR-based methods \cite{ku2018joint, yi2020segvoxelnet, zhou2019iou, chen2019fast, yan2018second, liang2019multi, shi2019pv} are able to predict more accurate results than image-based methods. AVOD \cite{ku2018joint} adopts an RPN to generate 3D proposals and leverages a detection network to perform accurate oriented 3D bounding box regression and category classification. Lang \emph{et al.} \cite{lang2019pointpillars} develop a well-designed encoder called PointPillars to learn the representation of point clouds organized in vertical columns. Similar to PointPillars, PointRCNN \cite{shi2019pointrcnn} also utilizes PointNet \cite{qi2017pointnet} as the backbone. It consists of two stages and directly uses point clouds to produce 3D proposals. Following \cite{shi2019pointrcnn}, STD \cite{yang2019std} designs a sparse-to-dense strategy to better refine proposals. Beyond PointRCNN \cite{shi2019pointrcnn}, Part-$A^2$ \cite{shi2020points} comprises a part-aware stage and a part-aggregation stage, which can better leverage free-of-charge part supervision from 3D labels. Compared with two-stage detectors, single-stage detectors \cite{yang20203dssd, he2020structure} are more efficient. 3DSSD \cite{yang20203dssd} adopts a delicate box prediction network including a candidate generation layer and an anchor-free regression head to balance speed and accuracy.
SA-SSD \cite{he2020structure} leverages the structure information of 3D point clouds with an auxiliary network. \noindent \textbf{Frustum-based 3D Object Detection:} Some 3D object detection methods \cite{shen2019frustum,qi2018frustum,wang2019frustum} first extract the frustum area tightly fitted to the 2D bounding boxes. In particular, F-PointNet \cite{qi2018frustum} predicts 3D bounding boxes from the points in frustums and achieves efficiency as well as high recall for small objects. It can also handle strong occlusion or cases with very sparse points. Beyond F-PointNet, F-ConvNet \cite{wang2019frustum} aggregates point-wise features as frustum-level feature vectors and arrays these aggregated features as a feature map. It also applies an FCN to fuse frustum-level features and refine features over a reduced 3D space. Frustum VoxNet \cite{shen2019frustum} is another work relying on frustums. Instead of using the whole frustums, it determines the parts of most interest and voxelizes these parts of the frustums. Inspired by these methods, we also utilize the frustum area to calculate 3D bounding boxes. \noindent \textbf{Weakly Supervised / Semi-supervised 3D Object Detection:} Few works \cite{tang2019transferable, qin2020weakly, meng2020weakly, zhao2020sess} focus on weakly supervised or semi-supervised 3D object detection. Leveraging 2D labels and part of the 3D labels, Tang \emph{et al.} \cite{tang2019transferable} propose a cross-category semi-supervised method (CC), which transfers 3D information from other classes to the current class while maintaining 2D-3D consistency. With weaker supervision, our method still outperforms CC by a large margin. Also without using any 3D labels, VS3D \cite{qin2020weakly} only adopts a pretrained model as the supervision. However, its performance is much worse than that of fully supervised methods.
Recently, Meng \emph{et al.} \cite{meng2020weakly} propose a two-stage architecture for weakly supervised 3D object detection, which requires a few precisely labeled object instances. Compared with their method, our FGR does not need any 3D labels. As mentioned in \cite{tang2019transferable, qin2020weakly}, annotating 3D bounding boxes is time-consuming, and it is worth exploring 3D object detection from only 2D labels. \begin{figure*}[tb] \centering \includegraphics[width=0.95\linewidth]{pipeline.jpg} \caption{The pipeline of FGR. Our framework consists of two parts. For coarse 3D segmentation, we first estimate the ground plane and remove it from the whole point cloud. Then we adopt the context-aware adaptive region growing algorithm to get the segmentation mask. However, there still exists noise in the segmented point set. To solve this problem, we propose anti-noise key vertex localization. Finally, we use the key vertex and key edges to intersect the frustum to predict 3D bounding boxes.} \label{fig:pipeline} \vspace{-1mm} \end{figure*} \section{Frustum-aware Geometric Reasoning} Given 2D bounding boxes and LiDAR point clouds, our method aims at 3D vehicle detection without 3D labels. Fig. \ref{fig:pipeline} illustrates the pipeline of frustum-aware geometric reasoning. Our framework generates pseudo 3D labels, which are then used to train a 3D detector. There are two components in our approach: coarse 3D segmentation and 3D bounding box estimation. We introduce these two parts respectively in this section. \subsection{Coarse 3D Segmentation} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} A straightforward method for point cloud segmentation is applying the region growing algorithm \cite{adams1994seeded} to separate objects from the frustum area, which we regard as our baseline. However, our experimental results demonstrate that the baseline method is not accurate.
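For reference, the plain region-growing baseline can be sketched in a few lines. This is a simplified brute-force illustration, not the released FGR code; the function and variable names are our own:

```python
import numpy as np

def region_grow(points, seed_idx, radius):
    """Naive region growing: breadth-first expansion over points whose
    Euclidean distance to an already-accepted point is below `radius`.
    Brute-force O(n^2); a KD-tree would be used for real point clouds."""
    in_region = np.zeros(len(points), dtype=bool)
    in_region[seed_idx] = True
    frontier = [seed_idx]
    while frontier:
        i = frontier.pop()
        dist = np.linalg.norm(points - points[i], axis=1)
        for j in np.flatnonzero(~in_region & (dist < radius)):
            in_region[j] = True
            frontier.append(j)
    return np.flatnonzero(in_region)

# Two well-separated clusters: growing from cluster one never reaches cluster two.
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [5.0, 0, 0], [5.1, 0, 0]])
print(region_grow(pts, 0, 0.3))  # [0 1 2]
```

Such seed-based growth is exactly what fails when ground points bridge a vehicle to its surroundings, motivating the ground removal and adaptive thresholds described next.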
\noindent \textbf{Ground Plane Estimation:} We observe that, because most vehicles lie on the ground, ground points heavily affect region growing results. To alleviate this influence, we first adopt RANSAC \cite{fischler1981random} to estimate the ground plane and remove it. \noindent \textbf{Context-aware Adaptive Region Growing:} It is difficult to detect a vehicle that is highly occluded by others. However, if we first detect the other vehicles and remove their points from the whole point cloud, it becomes much easier to segment the current vehicle. Given a series of 2D bounding boxes $\{B_i\}$, we extract frustums and sort them according to the median depth of the points in each frustum. Vehicles nearer to the camera are processed earlier. \begin{algorithm*}[tb] \setlength{\belowcaptionskip}{-20pt} \caption{ The pipeline of context-aware adaptive region growing} \label{alg} \begin{algorithmic}[1] \Require Whole point cloud $P_{all}$, \ points set in frustum region $F^i$ of vehicle $O^i$, \ distance thresholds $\{\phi^k\}$ \Ensure Segmented points set $M_{best}^i$ \For{$k=1$ to $n$:} \While{ there exists a point $P^i_j$ in $F^i$ which does not belong to any connected component in $\{C_{jk}^{i}\}$} \State Set up an empty points set $C_{jk}^{i}$, add $P^i_j$ to $C_{jk}^{i}$ \While{there exists a point $p$ in $C_{jk}^{i}$ which has not been processed} \State Search points set $q$ from whole point cloud $P_{all}$ whose Euclidean distance to $p$ is smaller than $\phi^k$ \State For each point in $q$, \ check if the point has already been processed, \ and if not add it to $C_{jk}^{i}$ \EndWhile \State If $\frac{|C_{jk}^{i} \cap F^i|}{|C_{jk}^{i}|} < \theta_{seg}$, we treat this connected component as an outlier and remove it \EndWhile \State From connected component set $\{C_{jk}^i\}$, select $M^i_k$ which has the most points as the segmentation result for $\phi^k$ \EndFor \State From $\{M^i_k\}$, select $M_{best}^i$ which has the most points as the final
segmentation result. \end{algorithmic} \end{algorithm*} Algorithm \ref{alg} shows the pipeline of context-aware adaptive region growing. For a distance threshold $\phi_{k}$, we treat two points as neighbours if their Euclidean distance is smaller than $\phi_k$. According to this definition, for each vehicle $O^i$, we randomly select a point $P_{j}^i$ in the frustum area and calculate its connected component $C_{jk}^{i}$. We repeat the previous steps until each point in $F^i$ belongs to a connected component. Finally, the vehicle segmentation mask is selected from $\{C_{jk}^{i}\}$. Among these connected components, we filter out $C_{jk}^{i}$ if it satisfies the following criterion: \begin{equation} \frac{|C_{jk}^{i} \cap F^i|}{|C_{jk}^{i}|} < \theta_{seg} \end{equation} where $\theta_{seg}$ is a threshold and $| \cdot |$ denotes the size of a set. This criterion indicates that if the proportion of a connected component inside the frustum is smaller than a threshold, we consider that this component belongs to other objects rather than to the current vehicle. After removing these outliers, we choose the connected component $M^i_k$ with the most points as the segmentation set of vehicle $O^i$. For each threshold $\phi_k$, we calculate a segmentation set $M^i_k$. It is reasonable to set a different $\phi_k$ for different objects according to their point densities. Thus, we traverse the threshold set $\{\phi_k\}$ and select the segmented point set with the most points as the final segmentation result $M_{best}^i$. \subsection{3D Bounding Box Estimation} A baseline method to calculate the 3D bounding box of the segmented point set $M_{best}^i$ is to estimate the minimum-area rectangle that encases all points. Although this is a mature algorithm, it is sensitive to noise points and does not make use of frustum information. Different from this baseline approach, for the vehicle $O^i$, we first locate the key vertex $V^i$ of $M_{best}^i$ in an anti-noise manner.
Then, based on this key vertex, we calculate the final 3D bounding box by computing the intersection with the frustum boundaries. This stage is conducted in Bird's Eye View (BEV). \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{vertex.pdf} \caption{The illustration of anti-noise key vertex localization. Adopting an iterative framework, our method can detect noise points and remove them. The algorithm stops if the key vertex's position changes only slightly between two iterations.} \label{fig:vertex} \vspace{-6mm} \end{figure} \noindent \textbf{Anti-noise Key Vertex Localization:} Fig. \ref{fig:vertex} illustrates the procedure of anti-noise key vertex localization. For a bounding rectangle, we can separate out four right triangles according to its four vertices. We define the right-angle vertex of the triangle containing the most points as the key vertex and its corresponding legs as key edges. \begin{figure}[tb] \centering \includegraphics[width=0.7\linewidth]{ambiguity.png} \caption{Using the area as the objective function will bring ambiguity. For the points in the red triangle, there are two optimal bounding rectangles (the yellow rectangle and the green rectangle have the same area). However, the yellow rectangle is what we want. } \label{fig:ambiguity} \vspace{-2mm} \end{figure} In many cases, we can only observe points in one right triangle. If we use the area as the objective function to calculate the optimal bounding rectangle, we will get two results, as shown in Fig. \ref{fig:ambiguity}. To address this problem, we design a new objective function. Suppose we want to estimate the bounding rectangle of a point set $Q$, whose key edges are $l_1$ and $l_2$ respectively.
The following equation describes our new objective function $f$: \begin{equation} \begin{aligned} f &= \frac{|Q_{0}|}{|Q|} \\ Q_{0} = \{q|\ q \in Q, &\ ||q, l_1|| > \theta_{rect} , \ ||q, l_2|| > \theta_{rect}\} \end{aligned} \end{equation} where $||q, l||$ is the vertical distance between point $q$ and edge $l$, and $\theta_{rect}$ is a threshold. If most points are close to the key edges, we consider the bounding rectangle well estimated. We first enumerate rectangles encasing $Q$ with orientations from 0 to 90 degrees at 0.5 degree intervals. Then we select the one with minimum $f$ as the bounding rectangle of $Q$. Because accurate segmentation labels are unavailable, our predicted segmentation masks often contain noise, which heavily influences the calculation of bounding rectangles. For a noise point, if we delete it from $Q$ and recompute the bounding rectangle, the position of the key vertex will vary a lot. In contrast, the key vertex is stable if we remove a few inlier points, because many points lie on the key edges. In light of this observation, we develop an iterative framework. For an estimated bounding rectangle, we delete points near the key edges and recompute a new bounding rectangle. If the key vertex changes only slightly, we treat the key vertex of this new bounding rectangle as $V^i$; otherwise we repeat the above process until this condition is satisfied. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{intersection.pdf} \caption{We use the key vertex and key edges to intersect the frustum. From these intersection points, we can get the final 3D bounding box. } \label{fig:intersection} \vspace{-5mm} \end{figure} \noindent \textbf{Frustum Intersection:} Due to occlusion, the observed points sometimes only cover a part of the whole vehicle. In other words, it is inaccurate to directly use bounding rectangles as the final prediction.
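The orientation sweep with the objective $f$ described above can be sketched as follows. This is a simplified illustration under our own naming (`best_rectangle` is not from the released code): for each candidate yaw, the points are rotated, the axis-aligned bounding rectangle is taken, and each corner is scored by the fraction of points far from both of its adjacent edges:

```python
import numpy as np

def best_rectangle(Q, theta, step_deg=0.5):
    """Sweep yaw in [0, 90) degrees; for each yaw, rotate Q, take the
    axis-aligned bounding rectangle, and score each of the four corners by
    f = fraction of points farther than `theta` from BOTH adjacent edges.
    Return the (f, yaw, corner) minimising f."""
    best = None
    for deg in np.arange(0.0, 90.0, step_deg):
        a = np.deg2rad(deg)
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        P = Q @ R.T                          # points in the candidate frame
        (x0, y0), (x1, y1) = P.min(0), P.max(0)
        # distances of every point to the four rectangle edges
        dx0, dx1 = P[:, 0] - x0, x1 - P[:, 0]
        dy0, dy1 = P[:, 1] - y0, y1 - P[:, 1]
        for dx, dy, corner in [(dx0, dy0, (x0, y0)), (dx0, dy1, (x0, y1)),
                               (dx1, dy0, (x1, y0)), (dx1, dy1, (x1, y1))]:
            f = np.mean((dx > theta) & (dy > theta))
            if best is None or f < best[0]:
                best = (f, deg, corner)
    return best

# L-shaped toy set: points lying along two perpendicular edges.
Q = np.array([[i / 10, 0.0] for i in range(11)]
             + [[0.0, i / 10] for i in range(1, 11)])
f, yaw, corner = best_rectangle(Q, theta=0.05)
print(f, yaw)  # the axis-aligned rectangle fits perfectly: f = 0 at yaw 0
```

The minimum-$f$ criterion keeps the corner whose legs hug the observed points, resolving the area-tie ambiguity of Fig. 4.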
Fortunately, we are able to compute accurate results by leveraging the 2D bounding boxes (frustums). From Fig. \ref{fig:intersection}, we can easily get the 8 vertices of the 3D bounding box given the key vertex and key edges of the bounding rectangle. \section{Experiments} In this section, we conduct extensive experiments to verify the effectiveness of our method. First we describe the dataset and implementation details. Then we exhibit the main results on the KITTI dataset \cite{geiger2012we}. Experiments in \S\ref{sec:ablation} verify the rationality of our method. Finally, we show some qualitative results. \subsection{Dataset and Implementation Details} \noindent \textbf{KITTI Dataset:} As a frequently used 3D object detection benchmark, the KITTI dataset \cite{geiger2012we} contains 7,481 training pairs and 7,518 testing pairs of RGB images and point clouds. We follow \cite{chen2017multi} to divide the original training samples into a train split (3,712 samples) and a val split (3,769 samples). Our algorithm was evaluated on car category samples. For experiments using 3D detectors, we report Average Precision for the 3D and Bird's Eye View (BEV) tasks. According to the 2D bounding box height, occlusion and truncation level, the benchmark is divided into three difficulty levels: Easy, Moderate and Hard. When evaluating the quality of generated labels, we use mean IoU and precision as evaluation metrics. To better evaluate precision, we utilized IoU thresholds of 0.3, 0.5 and 0.7 respectively. \noindent \textbf{Implementation Details:} In the KITTI dataset \cite{geiger2012we}, vehicles' roll/pitch angles can be assumed to be zero, and we only need to estimate the yaw angle (\emph{i.e. } orientation). Therefore, we can leverage Bird's Eye View (BEV) to detect vehicles instead of operating in 3D space. We set the threshold $\theta_{seg}$ to 0.8 for context-aware adaptive region growing. $\{\phi^k\}$ was enumerated from 0.1 to 0.7 with a 0.1 interval.
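In BEV, the frustum-intersection step described earlier reduces to intersecting the lines through the key edges with the lines bounding the frustum. A generic helper for this (our own sketch, with hypothetical names and example coordinates) might look like:

```python
import numpy as np

def line_intersect(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e in the BEV plane.
    Solves the 2x2 system [d | -e] [t, s]^T = q - p; returns None for
    (near-)parallel lines."""
    A = np.array([[d[0], -e[0]], [d[1], -e[1]]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None                      # parallel: no unique intersection
    t, _ = np.linalg.solve(A, np.asarray(q, float) - np.asarray(p, float))
    return np.asarray(p, float) + t * np.asarray(d, float)

# A key-edge line through an assumed key vertex (1, 1) along +x, intersected
# with a frustum boundary ray from the camera origin along direction (2, 1):
print(line_intersect((1, 1), (1, 0), (0, 0), (2, 1)))  # [2. 1.]
```

Intersecting both key-edge lines with the frustum boundary lines recovers the occluded BEV corners, from which the 8 box vertices follow once the vertical extent is attached.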
As for anti-noise key vertex localization, we chose one-tenth of the key edge length as $\theta_{rect}$. During the localization process, if the change of the key vertex's position was smaller than 0.01 meter, we stopped the iterative algorithm and obtained the final bounding rectangles. PointRCNN \cite{shi2019pointrcnn} was adopted as our 3D detector backbone. We finally filtered out 3D bounding boxes with clearly unreasonable sizes. \begin{table*}[t!] \centering \caption{Performance comparison with other fully supervised methods on the KITTI test set.} \resizebox{0.7\textwidth}{!}{ \begin{tabular}{|c|c||ccc|ccc|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{3D labels}& \multicolumn{3}{c|}{AP\textsubscript{3D}(IoU=0.7)} & \multicolumn{3}{c|}{AP\textsubscript{BEV}(IoU=0.7)} \\ & & \multicolumn{1}{c}{Easy} & \multicolumn{1}{c}{Moderate} & Hard & \multicolumn{1}{c}{Easy} & \multicolumn{1}{c}{Moderate} & Hard \\ \hline MV3D \cite{chen2017multi}& \checkmark &74.97& 63.63 & 54.00 &86.62 & 78.93 &69.80 \\ AVOD \cite{ku2018joint}& \checkmark & 76.39 &66.47 & 60.23 &89.75& 84.95& 78.32 \\ F-PointNet \cite{qi2018frustum} & \checkmark& 82.19 &69.79 &60.59 &91.17 &84.67 &74.77 \\ SECOND \cite{yan2018second}& \checkmark& 83.34 &72.55 &65.82&89.39 &83.77 &78.59 \\ PointPillars \cite{lang2019pointpillars} & \checkmark& 82.58 &74.31 &68.99&90.07 &86.56 &82.81 \\ PointRCNN \cite{shi2019pointrcnn}& \checkmark& 86.96 &75.64 &70.70 & 92.13& 87.39& 82.72 \\ SegVoxelNet \cite{yi2020segvoxelnet}& \checkmark& 86.04 &76.13 &70.76 & 91.62& 86.37& 83.04 \\ F-ConvNet \cite{wang2019frustum} & \checkmark&87.36 &76.39 &66.69& 91.51 & 85.84 &76.11 \\ Part-$A^2$ \cite{shi2020points} & \checkmark&87.81 &78.49& 73.51 & 91.70 & 87.79 & 84.61 \\ SASSD \cite{he2020structure} & \checkmark&88.75 &79.79 &74.16 & 95.03 & 91.03 &85.96 \\ \hline FGR & $\times$& 80.26& 68.47& 61.57& 90.64& 82.67& 75.46\\ \hline \end{tabular}} \vspace{-4mm} \label{tab:test} \end{table*} \begin{table}[tb] \centering \caption{Performance comparison
with other weakly supervised methods on KITTI val set.} \resizebox{0.37\textwidth}{!}{ \begin{tabular}{|c||c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{AP\textsubscript{3D}(Easy)} \\ & \multicolumn{1}{c}{IoU=0.25} &\multicolumn{1}{c}{IoU=0.5} &\multicolumn{1}{c|}{IoU=0.7 } \\ \hline CC \cite{tang2019transferable} &69.78 &- &- \\ VS3D \cite{qin2020weakly}& - & 40.32& - \\ WS3D \cite{meng2020weakly}& - &- & 84.04 \\ \hline FGR & \bf{97.24} & \bf{97.08} & \bf{86.11} \\ \hline \end{tabular}} \label{tab:val} \end{table} \iffalse \begin{table}[tb] \centering \caption{Performance comparison with other weakly supervised methods on KITTI val set.} \resizebox{0.4\textwidth}{!}{ \begin{tabular}{|c|c||c|c|c|} \hline \multirow{2}{*}{IoU} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{AP\textsubscript{3D}} \\ & & \multicolumn{1}{c}{Easy} &\multicolumn{1}{c}{Moderate} &\multicolumn{1}{c|}{Hard } \\ \hline \multirow{2}{*}{0.25} &CC \cite{tang2019transferable} &69.78 &58.66 &51.40 \\ &FGR & \bf{97.24} & \bf{89.67} & \bf{89.06} \\ \hline \multirow{2}{*}{0.5}&VS3D \cite{qin2020weakly}& 40.32 & 37.36 & 31.09 \\ &FGR & \bf{97.08} & \bf{89.55} & \bf{88.76} \\ \hline \multirow{2}{*}{0.7}&WS3D \cite{meng2020weakly}& 84.04 &75.10& \bf{73.29} \\ &FGR & \bf{86.11} & \bf{74.86} & 67.87 \\ \hline \end{tabular}} \label{tab:val} \end{table} \fi \begin{table}[tb] \centering \caption{Evaluation results with different 3D backbones on KITTI val set} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{|c|c||ccc|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{3D labels} & \multicolumn{3}{c|}{AP\textsubscript{3D}} \\ & & \multicolumn{1}{c}{Easy} &\multicolumn{1}{c}{Moderate} &\multicolumn{1}{c|}{Hard} \\ \hline PointRCNN \cite{shi2019pointrcnn} & \checkmark& 88.88 &78.63& 77.38 \\ F-ConvNet \cite{wang2019frustum} & \checkmark& 89.02 &78.80 &77.09 \\ Part-$A^2$ \cite{shi2020points} & \checkmark& 89.47& 79.47 & 78.54 \\ \hline Ours (PointRCNN) & $\times$ &86.11 & 74.86 & 67.87 \\ Ours 
(F-ConvNet) & $\times$ & 86.40 & 73.87 & 67.03 \\ Ours (Part-$A^2$) & $\times$ & 86.33 & 73.75 & 67.53 \\ \hline \end{tabular}} \vspace{-4mm} \label{tab:backbone} \end{table} \subsection{Main Results} Table \ref{tab:test} shows the comparison with other fully supervised methods on the KITTI test set. Despite not using any 3D labels, our FGR still achieves performance comparable to some fully supervised methods.
We also compare FGR with other weakly supervised methods on the KITTI validation set. However, the supervision used by these methods differs from ours (\emph{i.e. } CC \cite{tang2019transferable} uses 2D labels and 3D labels from other classes, VS3D \cite{qin2020weakly} uses a 2D pretrained model and WS3D \cite{meng2020weakly} uses part of the 3D labels), and we report these results for reference. From Table \ref{tab:val}, we can see that our approach outperforms these weakly supervised methods. Note that although the supervision of our method is weaker than that of CC \cite{tang2019transferable}, FGR still outperforms it by a large margin. To better demonstrate the effectiveness of FGR, we also show results using other 3D detector backbones in Table \ref{tab:backbone}. In addition, compared with fully supervised baselines, the performance of our FGR degrades only marginally, which indicates that a potential application of FGR is to annotate 3D data given 2D bounding boxes and point clouds. These experimental results confirm that we are able to detect vehicles accurately without the aid of 3D groundtruth. \subsection{Ablation Studies} \label{sec:ablation} We conducted experiments to confirm the effectiveness of each module in our method. To decouple the influence of 3D detectors, we evaluate the quality of the generated labels in this subsection. \begin{table}[tb] \vspace{-5mm} \centering \caption{Ablation studies for coarse 3D segmentation.
} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c|c||c|c|c|c|} \hline ground plane & \multirow{2}{*}{adaptive} & \multirow{2}{*}{Mean IoU} & \multicolumn{3}{c|}{Precision} \\ estimation& & & \multicolumn{1}{c}{IoU=0.3} &\multicolumn{1}{c}{IoU=0.5} &\multicolumn{1}{c|}{IoU=0.7} \\ \hline & & 0.5311 & 74.40 & 61.05 & 41.09\\ $\checkmark$& & 0.6563 & 84.82 & 74.34 & 58.96 \\ $\checkmark$&$\checkmark$ & \bf{0.7845} & \bf{97.90} & \bf{96.70} & \bf{83.28} \\ \hline \end{tabular}} \label{tab:ablation_seg} \end{table} \begin{figure}[tb] \centering \includegraphics[width=0.9\linewidth]{seg.pdf} \caption{Ablation studies for coarse 3D segmentation. Without ground plane estimation, ground points will be segmented as inlier points. Also, the best threshold for different vehicles varies according to point densities. Yellow bounding boxes indicate groundtruth. Best viewed in BEV. } \label{fig:ablation_seg} \vspace{-5mm} \end{figure} \noindent \textbf{Coarse 3D Segmentation:} We conducted ablation studies to verify the necessity of ground plane estimation and the effectiveness of context-aware adaptive region growing. We fixed $\phi$ at 0.4 for the experiment that did not use adaptive thresholds. Table \ref{tab:ablation_seg} and Fig. \ref{fig:ablation_seg} illustrate quantitative and qualitative results respectively. From the second column in the figure, we can see that some ground points are treated as inliers without ground plane estimation. In this way, the points of the vehicle connect to other regions through ground points, which accounts for a failed segmentation. From the third and fourth columns, we conclude that a fixed threshold $\phi$ is unable to handle all cases. On the one hand, if $\phi$ is too small, we get incomplete segmentation masks (the first and second rows). On the other hand, if $\phi$ is too large, the segmented set includes many outliers (the third row). The best thresholds $\phi$ for these three cases are 0.7, 0.6 and 0.3, respectively.
\begin{figure*}[tb] \centering \includegraphics[width=1.0\linewidth]{visual.pdf} \caption{Qualitative results. We illustrate RGB images, Bird's Eye View (BEV) and LiDAR point clouds respectively. Predicted bounding boxes are drawn in purple while the groundtruth is in yellow.} \label{fig:visual} \vspace{-3mm} \end{figure*} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{rect.pdf} \caption{Ablation studies for 3D bounding box estimation. If we adopt the baseline method, the bounding rectangle will include noise points. Due to occlusion, it is necessary to implement frustum intersection to better leverage 2D bounding boxes. Yellow bounding boxes indicate the groundtruth while purple ones are prediction results. Green points represent key vertices and cyan lines are frustum boundaries. Best viewed in BEV.} \label{fig:ablation_rect} \end{figure} \begin{table}[tb] \centering \caption{Ablation studies for 3D bounding box estimation. } \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c|c||c|c|c|c|} \hline anti-noise key & frustum & \multirow{2}{*}{Mean IoU} & \multicolumn{3}{c|}{Precision} \\ vertex localization & intersection & & \multicolumn{1}{c}{IoU=0.3} &\multicolumn{1}{c}{IoU=0.5} &\multicolumn{1}{c|}{IoU=0.7} \\ \hline & & 0.5418 & 76.42 & 56.25 & 41.32\\ $\checkmark$& & 0.6255 & 89.37 & 69.31 & 54.50 \\ $\checkmark$&$\checkmark$ & \bf{0.7845} & \bf{97.90} & \bf{96.70} & \bf{83.28} \\ \hline \end{tabular}} \label{tab:ablation_rect} \vspace{-3mm} \end{table} \noindent \textbf{3D Bounding Box Estimation:} In this part, we incrementally applied the anti-noise key vertex localization module and the frustum intersection module to our framework. Table \ref{tab:ablation_rect} and Fig. \ref{fig:ablation_rect} exhibit the experimental results. From the second column in the figure, we find that noise points still exist after coarse 3D segmentation. If we directly operate on these noisy point sets, we will get loose bounding rectangles.
In addition, due to occlusion, bounding rectangles tend to be smaller than the groundtruth (the third column in Fig. \ref{fig:ablation_rect}). We would get wrong results if we directly used these bounding boxes. However, the position of the key vertex is correct, and it can be leveraged along with the key edges to intersect the frustum. \subsection{Qualitative Results} To better illustrate the superiority of our method, we visualize some generated labels in Fig. \ref{fig:visual}. For simple cases which are non-occluded and have enough points, our method achieves remarkable accuracy. Moreover, our method is also able to handle occlusion cases. These qualitative results demonstrate the effectiveness of FGR and verify that it is possible to combine semantic information in 2D bounding boxes and 3D structure in point clouds for vehicle detection. \section{Conclusion} In this paper, we propose frustum-aware geometric reasoning for 3D vehicle detection without using 3D labels. The proposed framework consists of coarse 3D segmentation and 3D bounding box estimation modules. Our method achieves promising performance compared with fully supervised methods. Experimental results indicate that a potential application of FGR is to annotate 3D labels. \section*{ACKNOWLEDGMENT} This work was supported in part by the National Natural Science Foundation of China under Grant U1713214, Grant U1813218, Grant 61822603, in part by Beijing Academy of Artificial Intelligence (BAAI), and in part by a grant from the Institute for Guo Qiang, Tsinghua University. {\small \bibliographystyle{IEEEtran}
\section{Introduction} Knowledge Base Population (KBP) is the process of populating (or building from scratch) a Knowledge Base (KB) with new knowledge elements. A considerable number of Natural Language Processing (NLP) tasks, such as question answering, use knowledge bases. Thus, there is an unfading need for more complete and better-populated knowledge bases. FarsBase \cite{asgari2019farsbase} is the first knowledge base in the Persian language. Unlike other knowledge bases such as DBpedia~\cite{auer2007dbpedia} and BabelNet~\cite{navigli2010babelnet}, which only minimally support the Persian language, FarsBase is specially designed for the Persian language. Similar to other knowledge bases, FarsBase faces severe challenges such as staying up-to-date and expanding its existing knowledge. Some knowledge bases, such as Wikidata~\cite{vrandevcic2014wikidata}, rely on human contributors to annotate structured data and to prevent the entrance of erroneous knowledge instances into the knowledge base. Unlike Wikidata, FarsBase does not have such community support, which emphasizes the need for a system that extracts knowledge automatically and prevents wrong relation instances from being passed to the knowledge base. In this paper, we introduce the FarsBase Knowledge Base Population system (FarsBase-KBP) to address this issue. Note that, while our KBP system uses different modules employing different methods for relation extraction, the knowledge they extract must be checked to detect possible redundancy or conflict. For example, if two modules extract two different birthdays for the same person, then at least one of these modules has generated an erroneous fact, indicating a conflict that should be detected. Additionally, there is the problem of mapping the predicates, subjects, and objects in the extracted triples, which may be extracted as raw text, to the canonicalized predicates and entities in the knowledge base.
We address these issues using canonicalization techniques. It should also be noted that, unlike English and other high-resource languages, which can rely on annotated data and use supervised methods, Persian is considered a low-resource language: due to the lack of the required gold training datasets, supervised methods are generally not applicable. To overcome this problem, we employ unsupervised and distantly supervised methods, which do not require such data. Our contributions are as follows: \begin{enumerate} \item Unlike Relation Extraction (RE) systems, in an Information Extraction (IE) system the relations are not canonicalized: the arguments of the extracted relations are in the form of plain text, and they are not linked to an existing knowledge base. To address this issue, we propose an entity linking system which links subjects and objects to knowledge base entities, along with a canonicalization system which maps relation phrases to knowledge base predicates (see section \ref{sec:mapper}). The entity linking system applies an entity linking method for the Persian language, namely ParsEL~\cite{asgaribidhendi2020parsel}, to link the arguments of the relation as the subject and object of a FarsBase triple. For the first time, we introduce a canonicalization system which is designed especially for the Persian language and links relationship types to the pre-defined FarsBase predicate set. In this linking process, we use the current triples and mapping data in FarsBase, as well as the knowledge extracted by the other FarsBase-KBP modules. Both the entity linking and canonicalization systems are state-of-the-art for the Persian language. \item To the best of our knowledge, there are few studies on relation and knowledge extraction for the Persian language, and, moreover, no studies at all on canonicalization (mapping relation phrases to knowledge base predicates) or knowledge fusion in the Persian language.
Here we propose four modules for information extraction and two modules for relation extraction, all of which are innovative and state-of-the-art methods for the Persian language. For example, the Dependency Pattern (DP) and Persian Syntactic Information Extractor (PSIE{}) modules are introduced and devised for the first time in FarsBase-KBP. TokensRegex is also applied to the Persian language for the first time and uses FarsBase hierarchical ontology classes instead of NER tags. \item For the first time, we introduce and publish a gold dataset by which we evaluate the performance of knowledge extraction in FarsBase-KBP. This dataset contains 22015 sentences, in which the entities and relation types are linked to the FarsBase ontology. This gold dataset can be reused for benchmarking KBP systems in the Persian language. \end{enumerate} The rest of this paper is structured as follows. In section \ref{related-work}, we review the literature on knowledge base population, then mention other research that employs knowledge fusion to make use of different sources for knowledge extraction, and finally discuss studies on knowledge extraction and knowledge base population for the Persian language. In section \ref{proposed-approach}, we give a brief background on the research related to each of the modules of FarsBase-KBP, and then explain how each component of our system operates and how all these components work together to improve the quality of the extracted knowledge. In the last two sections, we present the results of our experiments and the obtained conclusions. \section{Related Work} \label{related-work} Knowledge Base Population is defined as the process of extending a knowledge base with information extracted from text. The goal is to update the knowledge base or keep it current with new information \cite{glass2018dataset}.
In this section, we first review the literature in the field of knowledge base population, then categorize and describe studies in this field. The last subsection presents related studies focused on the Persian language. \subsection{Automatic Knowledge Base Population} Automatic knowledge base construction and population have recently received significant attention in academic research. As the size of knowledge bases kept growing, the need for their automatic construction and population arose. Existing knowledge bases are usually highly incomplete. For example, only 6.2\% of \textit{persons} from Freebase \cite{bollacker2008freebase} have a \textit{place of birth} \cite{min2013distant}, and only 1\% of them have an \textit{ethnicity} \cite{west2014knowledge}. Additionally, manual completion of existing knowledge bases is expensive and time-consuming. Therefore, the automatic construction of knowledge bases from scratch, populating them with missing information, and adding new knowledge to them has attracted a lot of academic attention \cite{adel2018deep}. The Text Analysis Conference (TAC) is an annual series of open technology evaluations organized by the National Institute of Standards and Technology (NIST). The KBP track of TAC encourages the development of systems that can extract information from unstructured text in order to populate an existing knowledge base or to construct a cold-start (built from scratch) knowledge base \cite{getman2018laying}. The TAC KBP track consists of several tasks, such as entity discovery and linking, and slot filling \cite{adel2018deep}. Slot filling is a version of KBP in which specific missing information (slots) is searched for in the document collection and filled with the desired values \cite{glass2018dataset}. The knowledge base population task is a follow-up task to relation extraction. KBP systems usually consist of one or more relation extractors or knowledge extractors.
Relation extraction and knowledge extraction are well-studied, yet growing, fields of research. These extractors usually utilize basic NLP tasks such as Named Entity Recognition (NER), dependency or constituency parsing, and Entity Linking (EL) to find triples (subject, object, and predicate). Some systems, such as FRED \cite{gangemi2017semantic}, propose the integration of a stack of native Semantic Web (SW) machine reading tasks. FRED is a machine reader for the semantic web which extracts knowledge from raw text in the form of an RDF graph representation. This extracted knowledge can be used to populate a knowledge base \cite{consoli2015using}. We provide more details about the relation and knowledge extraction literature in the proposed approach section, where the FarsBase-KBP extractor modules are described. \subsection{KBP Categories} Regarding their method of extraction and source of information, there are four main categories of literature in this field \cite{dong2014knowledge}: \textbf{Built on Structured Data:} Approaches which populate knowledge bases using structured data sources (like Wikipedia infoboxes), such as DBpedia \cite{auer2007dbpedia}, Freebase \cite{bollacker2008freebase}, YAGO \cite{suchanek2007yago}, and YAGO2 \cite{hoffart2013yago2}. YAGO facts are automatically extracted from Wikipedia, using a combination of heuristic and rule-based methods. YAGO2 is a more recent extension of YAGO in which entities, facts, and events are anchored in both time and space. \textbf{Open Information Extraction, Web-scale:} Approaches which apply open Information Extraction (IE) techniques to the entire Web, ranging from Reverb~\cite{fader2011identifying}, PRISMATIC~\cite{fan2010prismatic} and OLLIE~\cite{schmitz2012open} to MINIE~\cite{gashteovski2017minie} and Graphene~\cite{cetto2018graphene}.
Graphene is a recent Open IE approach that presents a two-layered hierarchical representation of syntactically simplified sentences in the form of core facts and accompanying contexts that are semantically linked by rhetorical relations. \textbf{Making Taxonomies:} Approaches which construct taxonomies (is-a hierarchies), as opposed to general KBs with multiple types of predicates, such as Probase \cite{wu2012probase}. Probase is a universal, general-purpose, and probabilistic taxonomy which is automatically constructed from a corpus of 1.6 billion web pages. It consists of an iterative learning algorithm to extract ``is-a'' pairs from web texts, plus a taxonomy construction algorithm to connect these pairs into a hierarchical structure. \textbf{Fixed Ontology, Web-scale:} Approaches which extract information from the entire web, but use a fixed ontology (schema), such as PROSPERA~\cite{nakashole2011scalable}, DeepDive~\cite{niu2012deepdive}, Knowledge Vault~\cite{dong2014knowledge}, and the Never-Ending Language Learner (NELL)~\cite{carlson2010toward}. NELL is the first knowledge base that uses automatic extraction of knowledge with very little human intervention. The original architecture of NELL consists of four knowledge extraction components. The knowledge integrator module handles knowledge fusion in NELL. This module promotes knowledge instances to beliefs if they are extracted by one high-confidence source or by multiple sources. The latest version of NELL \cite{mitchell2018never} keeps the same architecture and adds five new extractor modules, such as an image classifier. Other, more recent knowledge bases employ automatic knowledge extraction methods, such as DeepDive and Knowledge Vault. The main difference between Knowledge Vault and NELL is that Knowledge Vault fuses facts extracted from text with prior knowledge derived from Freebase~\cite{bollacker2008freebase}.
The proposed knowledge base population system, FarsBase-KBP, belongs to the last category. The FarsBase-KBP knowledge fusion module utilizes an approach similar to the knowledge integrator in NELL, although, unlike NELL, it links the entities of the extracted triples to FarsBase. FarsBase-KBP and NELL also differ in how humans intervene in the knowledge fusion module. In NELL, the triples are transferred directly to the knowledge base after the fusion stage, while within a limited daily time window, some of the knowledge base triples are examined by experts and false triples are identified. In FarsBase-KBP, however, any triple extracted by the fusion module must be checked by a human expert and is added to the knowledge base only if approved. \subsection{Entity Linking} In the context of KBP, Entity Linking (EL) is the task of mapping all of the subjects and some of the objects of the triples in a raw text to their corresponding entities in a knowledge base. With the appearance of FarsBase, EL has become a feasible task for the Persian language as well. In our previous work, we proposed ParsEL~\cite{asgaribidhendi2020parsel}, a language-independent end-to-end entity linking system that utilizes both context-dependent and context-independent features for entity linking. ParsEL is the first entity linking system applied to the Persian language with FarsBase as the external dataset. Our experiments showed that the proposed method outperforms Babelfy~\cite{moro2014entity}, the state-of-the-art multilingual end-to-end entity linking algorithm. \subsection{Information Extraction} Information extraction (IE) is one of the essential tasks in NLP. Its purpose is to extract structured information from raw, unstructured text.
An IE system receives raw text and generates a set of triples or n-ary propositions, usually in the form of (subject, predicate, object), as structured information, in which the predicate is the part of the raw text that represents the relationship between the subject and some objects~\cite{shi2019brief}. Roy et al. provided a new supervised OIE approach that uses a set of unsupervised OIE systems and a small amount of tagged data. As its input features, this method utilizes the output of several unsupervised OIE systems as well as a diverse set of lexical and syntactic information, including word embeddings, POS embeddings, syntactic role embeddings, and dependency structure~\cite{roy2019supervising}. \subsection{Relation Extraction} Relation Extraction (RE) is a specific case of IE, in which entities and the semantic (mapped) relations between them are identified in the input text. In other words, an RE system predicts whether the input text expresses a relationship between some arguments or not. In addition, the RE system must predict which relation class from a particular ontology matches the identified relation in the input text. Supervised RE methods require training datasets to learn the model. Generating such annotated datasets for RE is time-consuming and expensive, hence resource-poor languages lack such datasets. In a review study, Shi et al. showed that when supervised methods cannot be used for relation extraction, distant supervision methods yield the best results~\cite{shi2019brief}. Trisedya et al. proposed an end-to-end RE model for KB enrichment based on a neural encoder-decoder model, utilizing distantly supervised training data with co-reference resolution and paraphrase detection~\cite{trisedya2019neural}. Gao et al. proposed Neural Snowball, a new bootstrapping method for learning new relations by transferring semantic knowledge about existing relations.
They designed Relation Siamese Networks (RelSN) to learn a similarity criterion for relations based on their tagged data and existing relations~\cite{gao2019neural}. Smirnova and Cudré-Mauroux reviewed RE methods utilizing distant supervision and summarized the two main challenges in this field: noisy labels automatically collected from the KB, and the evaluation and training problems induced by the incompleteness of the KB~\cite{smirnova2018relation}. \subsection{Canonicalization} Among the essential systems which contribute to the construction and population of KBs are Open Information Extraction (OIE) approaches. However, one fundamental problem is that the relation phrases in the extracted triples of an OIE system are not linked or mapped to the knowledge base ontology predicates, which leads to a large number of ambiguous and redundant triples. Canonicalization is the task of mapping plain-text relation phrases to the appropriate predicates in the knowledge base. The quality of canonicalization directly affects the quality of the KBP system. Various methods have been proposed for canonicalization. Putri et al. proposed a novel approach based on distant supervision and a Siamese network that compares two sequences of word embeddings, representing an OIE relation and a predefined KB relation~\cite{putri2019aligning}. Vashishth et al. proposed CESI~\cite{vashishth2018cesi}, a system for Canonicalization using Embeddings and Side Information, a novel approach that performs canonicalization on top of learned embeddings of Open KBs. The side information is obtained from AMIE~\cite{galarraga2013amie} and Stanford KBP~\cite{surdeanu2012multi}. Lin and Chen proposed an approach for canonicalization which utilizes the side information of the original data sources, including the entity linking knowledge, the types of the detected candidate entities, as well as the domain knowledge of the source text.
They jointly modelled the canonicalization problem of entity and relation phrases, proposed a clustering method, and demonstrated the effectiveness of this approach through extensive experiments on two different datasets~\cite{lin2019canonicalization}. Other studies have suggested clustering methods for canonicalization. Galárraga and Heitz presented a machine-learning-based strategy utilizing the AMIE algorithm~\cite{galarraga2013amie} that canonicalizes Open IE triples by clustering synonymous names and phrases~\cite{galarraga2014canonicalizing}. \subsection{Literature in Low-Resource Languages} In this subsection, by a low-resource (resource-poor) language, we mean a language without many labelled datasets and corpora dedicated to the training of supervised methods. If we look at KBP as a whole system, different approaches have been proposed, most of them in the TAC-KBP challenges, typically utilizing components such as IE, RE, EL, canonicalization and fusion. The TAC-KBP benchmarking is performed over three languages, namely English, Spanish and Chinese, which are not low-resource languages~\cite{getman2018laying}. The suggested state-of-the-art KBP benchmarking methods, such as KnowledgeNet~\cite{mesquita2019knowledgenet} and CC-DBP~\cite{glass2018dataset}, are also focused on English. To the best of our knowledge, considering KBP as a whole system, no specific research has been found for low-resource languages. However, for the subsystems of a KBP system, such as IE and RE, some research has been done in low-resource languages, but most of it is cross-lingual. A few of these studies, which are novel, up to date and state-of-the-art, are discussed subsequently. Taghizadeh et al.
proposed a system for RE in Arabic using a cross-language learning method, which utilizes the training data of other languages, such as Universal Dependency (UD)~\cite{straka2015parsing} parsing and the similarity of UD trees across languages, and trains an RE model for Arabic text~\cite{taghizadeh2018cross}. Zakria et al. proposed a method for RE in Arabic exploiting the properties of Arabic Wikipedia articles. The proposed system extracts sentences that contain the principal entity, a secondary entity and a relation from a Wikipedia article, then utilizes WordNet and DBpedia to build the training set. A Naive Bayes classifier is then trained and tested on these datasets~\cite{zakria2019relation}. AlArfaj reviewed the different state-of-the-art RE methods for Arabic and showed that the majority of RE approaches utilize a combination of statistical and linguistic-pattern techniques. The review proposes that linguistic-pattern methods provide high precision, but their recall is very low, and patterns are specified in regular expression form, which makes it challenging to cope with language variety~\cite{alarfaj2019towards}. Sarhan et al. proposed a semi-supervised pattern-based bootstrapping technique for Arabic RE utilizing a dependency parser to omit negative relations, as well as features like stemming, semantic expansion using synonyms, and an automatic scoring technique to measure the reliability of the generated patterns and extracted relations~\cite{sarhan2016semi}. Sun et al. proposed a distantly supervised RE model based on a Piecewise Convolutional Neural Network (PCNN) to expand the Tibetan corpus. They also added a self-attention mechanism and a soft-label method to decrease wrong labels, and use embeddings from the ELMo language model~\cite{peters2018deep} to solve the semantic ambiguity problem~\cite{sun2019improved}. Lkhagvasuren and Rentsendorj presented MongoIE, an OIE system for the Mongolian language, utilizing rule-based and classification approaches.
Their classification method showed better results than the rule-based approach~\cite{lkhagvasuren2020open}. Peng~\cite{peng2017jointly} suggested that learning under low-resource conditions needs special techniques when there is inadequate training data, including distant supervision, multi-domain learning, and multi-task learning. However, no particular approach was proposed for low-resource languages. \subsection{Literature in the Persian Language} There are few studies on knowledge extraction and knowledge base population in the Persian language. Moradi et al. \cite{moradi2015commonsense} propose a system focused on certain generic relation types, such as the is-a relationship, to extract relation instances. The Hasti project, proposed by Shamsfard and Barforoush \cite{shamsfard2004learning}, is another work which introduced an automatic ontology building approach. It extracts lexical and ontological knowledge from Persian raw text and uses logical, template-driven, and semantic analysis methods. Momtazi and Moradiannasab \cite{momtazi2019statistical} also proposed a statistical n-gram method which extracts knowledge from unstructured Persian texts. \section{Proposed Approach} \label{proposed-approach} In this section, we describe the architecture of FarsBase-KBP and propose our approach, in which triples extracted by different extractors are integrated and stored in a knowledge graph. Six extractor components are used in total, including four information extractors and two relation extractors. The predicates in the triples extracted by the information extractors are plain text, not predefined relations. Consequently, their predicates are not linked to relations in the knowledge base, and a canonicalization process is needed to link the predicates of the extracted relations to the Knowledge Graph (KG) resources. \subsection{FarsBase-KBP Architecture} Our system, FarsBase-KBP, works on top of six extractor modules and fuses their output triples.
Figure \ref{fig:Arch} shows a block diagram of the architecture and data flow of FarsBase-KBP. First, a crawler crawls the Web and extracts raw texts. After a pre-processing stage, the extracted text is delivered to the entity linking module. This module links the entities of the pre-processed text and passes the processed text to each of the six extractor components. The two relation extractors produce triples which are mapped to KG entities and delivered to the Candidate-Fact-Triples-Repository (CFTR) for further processing. Note that each triple has a corresponding confidence value, which is assigned by its extractor module. The four information extractors work independently and produce information triples containing a subject, an object, and a non-canonicalized predicate. These triples are delivered to the Canonicalization Module (CM). Next, the CM maps the plain-text predicates to the KG ontology and produces triples with their corresponding confidences. At this stage, the canonicalized triples are ready to be stored in the CFTR. \begin{figure} \includegraphics[width=\linewidth]{Figures/Arch.jpg} \caption{The architecture of the FarsBase knowledge base population system (FarsBase-KBP)} \label{fig:Arch} \end{figure} Afterwards, the Knowledge Fusion module integrates the triples stored in the CFTR and extracts those that satisfy a minimum confidence threshold predefined in the module. After this stage, the triples are passed to human experts, who check their correctness. The correct triples are then added to FarsBase. \subsection{Extraction Modules} \subsubsection{PredPatt Extractor} To provide a multilingual state-of-the-art unsupervised baseline method alongside our other extraction modules, we utilized PredPatt from the Decomp project~\cite{white2016universal} on the Universal Dependencies (UD) project~\cite{nivre2016universal} annotations for the Persian language.
The UD project provides a standard syntactic annotation which can be used across different languages. It proposes a unified grammatical representation which is valid cross-linguistically and includes a universal set of dependency relations and POS tags. Seraji et al. presented UD for the Persian language~\cite{seraji2016universal}. While the UD project presents syntactic annotation, the Decomp project aims to propose a set of protocols for augmenting datasets in UD format with universal decompositional semantics. In simpler terms, the UD project proposes syntactic annotation, while the Decomp project proposes semantic annotation in different languages. PredPatt is a software package for processing UD-annotated corpora and annotating them with Decomp protocols. Viewed as a black box, PredPatt takes a UD-annotated dataset in the desired language as input, and its output is that dataset augmented with predicate-argument information alongside other syntactic and semantic annotations. Different studies propose that PredPatt can be utilized as an Open IE system~\cite{govindarajan2019decomposing,claro2019multilingual} and that it outperforms other state-of-the-art Open IE systems~\cite{zhang2017evaluation}. Our main idea here is to utilize PredPatt to extract sets of predicate-arguments from input Persian sentences. For every input sentence, this module provides a predicate and two or more arguments. All possible ordered pairs of arguments are then considered as subjects and objects, and the resulting triples are sent to the output as candidate relationships. For example, if three arguments are specified for a predicate in a sentence in the form (arg1, pred, arg2, arg3), six triples are generated at the output: (arg1, pred, arg2), (arg1, pred, arg3), (arg2, pred, arg1), (arg2, pred, arg3), (arg3, pred, arg1), and (arg3, pred, arg2). It should be noted that these triples are not necessarily correct relationships.
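As an illustration, the candidate generation step described above (all ordered argument pairs around a predicate) amounts to a few lines of Python; the function name is ours, not taken from the FarsBase-KBP code:

```python
from itertools import permutations

def candidate_triples(predicate, arguments):
    """All ordered (subject, predicate, object) candidates formed
    from a predicate and its two or more extracted arguments."""
    return [(subj, predicate, obj) for subj, obj in permutations(arguments, 2)]

# Three arguments yield 3 * 2 = 6 candidate triples, matching the example above.
triples = candidate_triples("pred", ["arg1", "arg2", "arg3"])
```

As noted, downstream filtering is still required, since most of these permutations are not correct relationships.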
In the next section, we will discuss the performance of this module as a state-of-the-art baseline on our gold dataset. \subsubsection{Dependency Pattern Extractor} The dependency pattern information extractor is the first of our novel information extractor modules. Extraction with dependency patterns is an innovative method that attempts to extract information triples using ``unique dependency trees''. The main idea is that sentences with the same dependency tree structure can contain similar relationships: if a particular form of dependency tree is observed in a sentence and the sentence contains a relationship, other sentences with the same dependency tree structure probably contain a relationship with the same pattern. In such cases, the subject, object, and predicate can be extracted from the words at the same indices in all sentences. For example, if in one sentence the first word is the subject, then in all other sentences with the same dependency pattern, the subject can be extracted from the first word as well~\cite{asgari2019farsbase}. Two dependency parse trees have the same dependency pattern if identical trees are produced when the words are replaced with their corresponding POS tags. In other words, the sequences of POS tags in both sentences are precisely the same, and the dependency type and head of each word in the same position are also equal. To build this module, we employed expert annotators to extract the desired rules. It should be noted that, despite the use of human annotators, this method is not a supervised method: the experts extract the rules once, and the rules can then be used indefinitely, so the system in fact works as an unsupervised rule-based system which does not need any training dataset.
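A minimal sketch of this pattern test and position-based extraction, assuming a toy token representation (dicts with `word`, `pos`, `head`, `dep`); the data format, rule encoding, and example sentence are illustrative only, not the module's actual implementation:

```python
def dependency_pattern(parse):
    """Abstract a parsed sentence into its dependency pattern: words are
    dropped, keeping only (POS tag, head index, dependency relation) per token."""
    return tuple((tok["pos"], tok["head"], tok["dep"]) for tok in parse)

def extract_by_pattern(parse, rules):
    """If this sentence's pattern was annotated, read the subject, predicate
    and object off the word positions marked for that pattern."""
    rule = rules.get(dependency_pattern(parse))
    if rule is None:
        return None
    subj_i, pred_i, obj_i = rule
    words = [tok["word"] for tok in parse]
    return (words[subj_i], words[pred_i], words[obj_i])

# A toy parse (English words for readability; head index 0 marks the root).
parse = [
    {"word": "Ali",     "pos": "PROPN", "head": 2, "dep": "nsubj"},
    {"word": "visited", "pos": "VERB",  "head": 0, "dep": "root"},
    {"word": "Tehran",  "pos": "PROPN", "head": 2, "dep": "obj"},
]
# Annotators marked positions (subject, predicate, object) = (0, 1, 2) for this pattern.
rules = {dependency_pattern(parse): (0, 1, 2)}
```

Any new sentence whose POS sequence, head indices, and dependency relations all coincide with an annotated pattern then yields a triple from the marked positions.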
In order to produce the rules, we first extracted dependency patterns for all sentences in a raw text corpus and then computed the most frequent dependency patterns in the corpus. We then asked human annotators to inspect the sentences of each of those patterns and annotate the subject, object and predicate for every relationship that can be mined from the sentences. In this way, we can determine which word positions in a given dependency tree, and in what order, define the components of a relationship, namely the subject, object, and predicate. In this fashion, we were able to extract a set of relationship patterns for specific dependency tree structures. Currently, 3499 frequent dependency patterns have been extracted automatically from Wikipedia texts, and experts have annotated 6\% of them. This module operates based on these extracted rules. For every new sentence the module receives, it first produces the dependency tree of the sentence and then compares this tree with the tree structures of the patterns generated earlier. If a matching dependency pattern is found, the module extracts a relation based on the annotations defined for that pattern. \subsubsection{Persian Syntactic Information Extractor{} (PSIE{})} The goal of the Persian Syntactic Information Extractor{} is to extract relation instances from unlabelled and unannotated text, without any predefined relations as a training set. This method uses the grammatical structures of sentences to extract relation instances. In this approach, all the predicates are based on the verbs of the sentences and do not come from an already-known, predefined relation set. Therefore, evaluating the results is not easy, and human expert intervention is required. As with the other IE modules, another problem, mentioned before, is that the predicates in the relation instances must be canonicalized to the FarsBase ontology. This approach can be classified as an open information extraction approach.
KnowItAll \cite{etzioni2004web} first introduced unsupervised knowledge extraction. The first model introduced in this project was TextRunner \cite{banko2007open}. TextRunner comprises three main components, namely a self-supervised learner, a single-pass extractor, and a redundancy-based assessor. The Wikipedia-based Open Extractor (WOE) \cite{wu2010open} is another unsupervised knowledge extraction method, which extracts knowledge from Wikipedia articles and uses the dependency path between entities to improve performance. OLLIE \cite{schmitz2012open} and BONIE \cite{saha2017bootstrapping} are two more recent unsupervised methods used for knowledge extraction. Our system uses a different unsupervised method for triple extraction, i.e., dependency parsing and constituency trees are combined to extract the triples. Details of this module have been published in \cite{asgari2019farsbase}. \subsubsection{RePersian - an automated ReVerb approach for the Persian language} ReVerb \cite{etzioni2011open} is an unsupervised knowledge extraction method that was first introduced and applied in the KnowItAll \cite{etzioni2004web} project. ReVerb aimed to improve the performance of TextRunner by imposing lexical and syntactical constraints on relations. We employ RePersian~\cite{sahebnassagh2020repersian}, which uses a novel approach to produce the regular expressions in ReVerb automatically. To find the most frequent regular expressions, the Dadegan Dependency Parsing Treebank~\cite{dadegan2012persian} was processed with an automated algorithm. The algorithm found the best regular expressions for the subject, object, and Mosnad in the Persian language. We have already explained the details of this method in the previously published RePersian~\cite{sahebnassagh2020repersian} article. \subsubsection{Distant Supervision} One of the main challenges in relation extraction is the time and effort needed to create a manually labelled dataset.
The distant supervision module utilizes a knowledge base to address this issue. To be more precise, the idea is that if a sentence contains a pair of entities which are related to each other in the knowledge base, this sentence may express the corresponding semantic relation. Mintz et al. \cite{mintz2009distant} used DBpedia and a collection of Wikipedia articles to build a distantly supervised dataset, and then extracted thousands of new relation instances from that dataset. There have been many modifications to this method, such as the multi-instance learning approach proposed by Surdeanu et al. \cite{surdeanu2012multi}. The single-instance approach assumes that each sentence has one pair of entities and only one semantic relation between them, whereas the multi-instance approach jointly models all the instances of a pair of entities in the text and all their relations. Recent studies have employed word embeddings and various types of deep neural networks to perform relation extraction from distantly supervised datasets. Using word embeddings eliminates the need for manually engineered features. Trisedya et al. \cite{distiawan2019neural} have recently proposed another distantly supervised relation extractor for knowledge base enrichment. Their neural end-to-end relation extraction model integrates the extraction and canonicalization tasks. Thus, their model reduces the error propagation between relation extraction and named entity disambiguation that makes existing pipeline approaches error-prone. To obtain high-quality training data, they adapt distant supervision and augment it with co-reference resolution and paraphrase detection. We use FarsBase as our knowledge base, together with the unstructured text that our crawler module retrieves, to create a distantly supervised dataset (DS-dataset). First, all the sentences of the raw text are checked by the module. 
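This check can be sketched as follows (a minimal illustration of the idea, not FarsBase-KBP code; the knowledge-base entry and sentences are hypothetical):

```python
# Hedged sketch of the candidate-selection step of distant supervision
# (our own illustration): a sentence becomes a candidate for predicate p
# if it mentions a pair of entities related by p in the knowledge base.
# The KB entry and sentences below are hypothetical.

kb = {("Belarus", "Alexander Lukashenko"): "fkgo:leaderName"}

def candidates(sentences, kb):
    out = []
    for s in sentences:
        for (subj, obj), pred in kb.items():
            # naive string containment stands in for real entity linking
            if subj in s and obj in s:
                out.append((s, subj, pred, obj))
    return out

sents = ["Alexander Lukashenko has been the leader of Belarus since 1994.",
         "Minsk hosted the summit."]
print(candidates(sents, kb))  # -> one candidate, labelled fkgo:leaderName
```

In the actual pipeline, entity mentions come from the entity linker rather than raw string matching.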
Any sentence that contains a pair of related FarsBase entities is considered a candidate for expressing the same semantic relation that holds between the entity pair in FarsBase. Then, a piecewise convolutional neural network (PCNN) with multi-instance learning is employed to extract knowledge from the dataset. It should be noted that our distantly supervised dataset is the first such dataset provided for the Persian language. Moreover, the distant supervision component also produces frequent tokens and compound verbs for every predicate, which are used in our canonicalization and integration phases. \subsubsection{TokensRegex} The TokensRegex extraction module is a rule-based extractor built on Stanford TokensRegex \cite{chang2014tokensregex}. TokensRegex is a framework for specifying regular expressions over sequences of tokens and their additional side information, such as NER and POS tags, for identifying and acting on patterns in text. After entity linking on the text, we use the class of each linked entity instead of named entity tags. Human experts have defined 166 TokensRegex rules for 58 frequent predicates on our gold dataset, based on words, POS tags, and entity classes. The subject and object are defined in the body of each regular expression, and each expression refers to a predicate in FarsBase. These rules then serve as the relation extraction rules of this module. Each sentence of the input raw text is examined against the rules, and if the desired pattern is found, a relation is extracted. Figure \ref{fig:TokensRegex} shows an example of a TokensRegex rule matching a sentence, from which the TokensRegex module extracts an fb:nationality triple. 
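The effect of such a rule can be sketched in plain Python (a hedged re-implementation for illustration only; the actual module uses Stanford TokensRegex, and the trigger word, entity classes, and tokens below are hypothetical):

```python
# Hedged sketch of a TokensRegex-style rule, re-implemented in plain
# Python for illustration (the real module uses Stanford TokensRegex).
# The trigger word, entity classes, and predicate are hypothetical.

def match_nationality(tokens):
    """tokens: list of (word, entity_class) pairs.
    Rule sketch: [class:Person] ... "citizen" ... [class:Country]
    -> (person, fb:nationality, country)."""
    person = country = None
    seen_trigger = False
    for word, cls in tokens:
        if cls == "Person" and person is None:
            person = word
        elif word == "citizen" and person is not None:
            seen_trigger = True
        elif cls == "Country" and seen_trigger:
            country = word
            break
    if person and country:
        return (person, "fb:nationality", country)
    return None

toks = [("Lukashenko", "Person"), ("is", "O"), ("a", "O"),
        ("citizen", "O"), ("of", "O"), ("Belarus", "Country")]
print(match_nationality(toks))
```

The real rules additionally constrain POS tags and map each pattern to its FarsBase predicate.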
\begin{figure} \frame{\includegraphics[width=\linewidth]{Figures/TokenRegex.png}} \caption{A TokensRegex example matching fb:nationality predicate} \label{fig:TokensRegex} \end{figure} \subsection{Relation to Knowledge Mapper} \label{sec:mapper} To extract a standard triple for the knowledge graph, we must link two types of data: \paragraph{Entity Linking (Relation arguments to KG entities)} In the KG, every subject must be an entity. Our entity linker, ParsEL, finds all entity mentions in the tokens of the sentence and disambiguates and links them to the knowledge graph. The disambiguation process is a challenging task. The entity linker assigns a confidence value to each link. If all tokens of an argument belong to the same entity, the argument is linked to it. When two different entities are candidates for the same argument, two triples are generated, one for each entity, with correspondingly lower confidence values (weighted by the entity confidences). These triples still have a chance of being transferred to the final KG by the Knowledge Fusion Module if they are extracted from multiple sources. \paragraph{Canonicalization (Relation type to KG ontology predicate)} We use a canonicalization module (CM) component to map the relations extracted by the four IE components to the knowledge graph ontology. A baseline method is proposed alongside our FarsBase canonicalization module. \textbf{Baseline:} To implement a canonicalization baseline, we examined which of the state-of-the-art methods are applicable to the Persian language. CESI~\cite{vashishth2018cesi} is one of the canonicalization methods introduced in the related works section. CESI needs side information to perform canonicalization, which is obtained from AMIE~\cite{galarraga2013amie} and Stanford KBP~\cite{surdeanu2012multi}; the latter is not available for the Persian language. Therefore, this method could not be implemented for Persian. 
The approach of Lin and Chen for canonicalization is another state-of-the-art method, which we considered implementing for the Persian language. Like CESI, this method utilizes side information from the original data sources, including the entity linking knowledge, the types of the candidate entities detected, as well as the domain knowledge of the source text. The entity linking knowledge and domain knowledge required by this approach are not available for the Persian language; thus, this method could not be implemented for Persian either. To prepare a baseline for canonicalization, we implemented the Galárraga and Heitz method~\cite{galarraga2014canonicalizing} for the Persian language. It was the only method among the state-of-the-art methods introduced in the related works section that could be implemented for Persian. To this end, we first fed the DS-dataset generated by the distant supervision module to the PredPatt, Dependency Pattern, RePersian and PSIE{} modules, which produced a set of tuples. We then gave all of these tuples, 3,059,530 relationships in total, to the AMIE algorithm, which extracted a number of rules. Next, we cluster each pair of predicates whose rules entail the following logical relationship in both directions: \[ (s_1, \mathrm{Pred}_1, o_1) \Rightarrow (s_1, \mathrm{Pred}_2, o_1) \;\wedge\; (s_2, \mathrm{Pred}_2, o_2) \Rightarrow (s_2, \mathrm{Pred}_1, o_2). \] Subsequently, a mapping table (from raw-text relations to standard FarsBase relations) was extracted from these clusters. This table was then checked and filtered by human experts to remove incorrect mappings, yielding the final mapping table used as the baseline. It should be noted that in the Galárraga and Heitz method~\cite{galarraga2014canonicalizing} the input data are extensive, which is why the method yields outstanding results in English. No comparably large data exist in Persian. 
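The bidirectional-rule clustering step described above can be sketched as follows (our own illustration, not the baseline implementation; the predicate names, including the raw-text relation `rahbar`, are hypothetical):

```python
# Hedged sketch of the bidirectional-rule clustering used for the
# baseline (our own illustration). Predicate names are hypothetical.

def cluster_predicates(rules):
    """rules: set of (p1, p2) pairs meaning AMIE found the rule
    (s, p1, o) => (s, p2, o). Cluster p1 and p2 iff both
    (p1, p2) and (p2, p1) are present, i.e. the implication
    holds in both directions."""
    clusters = set()
    for p1, p2 in rules:
        if (p2, p1) in rules and p1 != p2:
            clusters.add(frozenset((p1, p2)))
    return clusters

rules = {("rahbar", "fkgo:leaderName"),       # raw-text relation => KG predicate
         ("fkgo:leaderName", "rahbar"),       # ...and the reverse direction
         ("fkgo:birthPlace", "fkgo:location")}  # one direction only: no cluster
print(cluster_predicates(rules))
# -> {frozenset({'rahbar', 'fkgo:leaderName'})}
```

Clusters that pair a raw-text relation with a KG predicate yield entries of the mapping table; one-directional rules are discarded.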
All extractors were applied to the DS-dataset, and more than three million tuples were extracted, which is far fewer than in the corresponding English datasets. As a result, only about six thousand rules were extracted by the AMIE algorithm. A large number of these rules encode transitive relations, which are not useful for canonicalization. Also, some extracted rules did not follow the aforementioned logical relationship. Eventually, 22 clusters of rules were obtained. Some of these clusters contained only rules between FarsBase predicates, with no plain-text relation, and thus produced practically no new information for canonicalization. Other clusters included only plain-text relations, without any FarsBase predicate; we mapped such clusters to FarsBase predicates manually. After all these modifications, 15 clusters remained for preparing the mapping table. As a result, this algorithm did not provide good results, owing to the lack of sufficient resources in Persian, despite being the only available state-of-the-art method that could be implemented for the Persian language. It should be noted that we did not use available Persian raw-text corpora larger than Persian Wikipedia, such as MirasText~\cite{SABETI18.385}, to create the DS-dataset. We created our DS-dataset with the help of Persian Wikipedia, containing more than 125 million words, while the MirasText corpus has more than 1.4 billion words. Despite its much larger size, the MirasText corpus is not suitable for this purpose. MirasText has been extracted from the texts of Persian news websites. To create a distantly supervised dataset in Persian, we need to look for entities of FarsBase, the only available knowledge base for Persian, in a raw corpus. 
In contrast to MirasText, many more sentences containing FarsBase entities are expected in Persian Wikipedia, for two reasons: (I) most FarsBase triples are extracted from Wikipedia infoboxes, and (II) most of the entities in the infoboxes also appear in the sentences of the corresponding Wikipedia articles, whereas such sentences are far less frequent in MirasText. \textbf{FarsBase Canonicalization module:} This module maps a relation to a standard FarsBase predicate in the steps explained below: \begin{enumerate} \item If the extracted relation predicate phrase exists in the Wikipedia-to-FarsBase mapping set, the algorithm maps it to the corresponding KG predicate. The mapping table of Wikipedia infobox predicates to the FarsBase ontology was introduced in our previous work \cite{asgari2019farsbase}. \item If the previous criterion is not satisfied, the mapper uses the information extracted by the distant supervision component to find the best-matching candidate KG predicate. The mapper matches the words and compound verbs of the sentence against the corresponding information from the distant supervision component. For each match, the mapper awards the candidate predicate one positive point. Finally, the mapper selects the predicate with the highest score. \end{enumerate} \subsection{Knowledge Fusion Module} The Knowledge Fusion Module is a simple ensemble classifier. This module accepts a triple as a fact and sends it to the next component if one of these two conditions is satisfied: \begin{enumerate} \item The triple (with any confidence) is extracted by more than one extractor (two to five). \item The triple is extracted by just one extractor, but its confidence is higher than a specified threshold. 
\end{enumerate} \section{Experimental Results} \label{experiments} In this section, we introduce the FarsBase-KBP gold dataset, which contains 22,015 facts labelled and verified by human experts, and then provide experimental results. This corpus has been used in the evaluation of all the components of FarsBase-KBP. Note that all the modules are unsupervised, and the corpus was created only for evaluation purposes. \subsection{FarsBase-KBP Gold Dataset} We evaluated FarsBase-KBP and its components over a corpus of 22,015 facts which are labelled by human experts as gold data. To build this corpus, we searched for the subject and object of each FarsBase triple in Wikipedia articles in the Persian language. More precisely, a sentence is a candidate for a triple's predicate if the sentence contains both the subject and the object of the considered triple. We chose 406 distinct frequent FarsBase predicates, for which the sentences were annotated by human experts. In the end, we collected a gold dataset of 22,015 sentences, each with its corresponding subject, object, and predicate. The corpus includes the following automatic preprocessing: \begin{itemize} \item Tokens of each sentence. \item Part-of-speech (POS) tags. \item Named entity recognition (NER) tags. \item Dependency parsing trees. \item Entities linked to FarsBase. \item FarsBase classes for subject and object. \end{itemize} Figure \ref{fig:GoldData} shows an instance of the corpus in JSON format. In this example, the subject is ``Belarus'' and the object is ``Alexander Lukashenko'', and both are linked to the corresponding entities in the FarsBase knowledge graph. The predicate is ``fkgo:leaderName'', which is the standard ontology predicate in FarsBase and is approved by a human expert; thus this attribute is the gold part of the corpus. 
Other features, such as the subject, object, KG classes, tokens, NER tags, and POS tags, are also included in the gold data. \begin{figure} \frame{\includegraphics[width=\linewidth]{Figures/GoldData.png}} \caption{An instance of the gold corpus used to evaluate FarsBase-KBP components} \label{fig:GoldData} \end{figure} Figure \ref{fig:PredicateFrequency} shows the distribution of predicate instance frequencies in the gold data, sorted by frequency. The observed frequencies approximately follow a normal distribution. Table \ref{tab:expcond} lists the most frequent predicates in the gold corpus. \begin{figure} \frame{\includegraphics[width=\linewidth]{Figures/PredicateFrequency.jpg}} \caption{Distribution of predicate instances based on frequency in the gold data} \label{fig:PredicateFrequency} \end{figure} \begin{table}[htbp] \centering \caption{The most frequent predicates in the gold corpus} \label{tab:expcond} \begin{tabular}{lclc} \hline Predicate & Instance Count & Predicate & Instance Count\\ \hline fkgo:location & 1110 & fkgo:type & 457\\ foaf:name & 915 & fkgo:order & 388\\ fkgo:birthPlace & 694 & fkgo:field & 347\\ fkgo:occupation & 679 & fkgo:team & 309\\ fkgo:speciality & 679 & fkgo:releaseDate & 292\\ fkgo:genus & 555 & fkgo:language & 269\\ fkgo:nationality & 474 & fkgo:family & 255\\ fkgo:birthDate & 471 & fkgo:notableWork & 237\\ \hline \end{tabular} \end{table} \subsection{Evaluations} Table \ref{tab:Eval} shows the results and evaluation metrics of each FarsBase-KBP extraction module. We fed the system with the previously mentioned corpus and used it as a gold dataset to evaluate the precision, recall, and F\textsubscript{1} (harmonic mean of precision and recall) of each module. Note that every sentence in the dataset may express multiple triples, but only one triple per sentence is defined in the gold data. Also, each module may extract more than one triple for some of the sentences. 
To calculate the recall, we only consider the gold triples of our dataset. \begin{table}[htbp] \centering \caption{Evaluation Statistics} \label{tab:stat} \begin{tabular}{llllll} \hline Module Name & Triples & Corrects & Wrongs & OSO & Tri./Sen. \\ \hline DependencyPattern & 418 & 71 & 24 & 323 & 0.019 \\ DistantSupervision & 17745 & \textbf{4632} & \textbf{13113} & 0 & 0.806 \\ PredPatt & \textbf{66384} & 13 & 82 & \textbf{66289} & \textbf{3.0154} \\ RePersian & 7865 & 51 & 241 & 7573 & 0.3573 \\ TokensRegex & 37351 & 3306 & 917 & 33128 & 1.6966 \\ PSIE & 44809 & 94 & 484 & 44231 & 2.0354 \\ \hline Fusion & 39375 & 3730 & 1280 & 34365 & 1.8119 \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \centering \caption{Evaluation Results} \label{tab:Eval} \begin{tabular}{llll} \hline Module Name & Precision & Recall & F\textsubscript{1} \\ \hline DependencyPattern & 0.7474 & 0.0032 & 0.0064 \\ DistantSupervision & 0.261 & \textbf{0.2104} & 0.233 \\ PredPatt & 0.1368 & 0.0006 & 0.0012 \\ RePersian & 0.1747 & 0.0023 & 0.0046 \\ TokensRegex & \textbf{0.7829} & 0.1502 & 0.252 \\ PSIE & 0.1626 & 0.0043 & 0.0083 \\ \hline Fusion & 0.7313 & 0.1779 & \textbf{0.2862} \\ \hline \end{tabular} \end{table} The first six rows of Table \ref{tab:stat} show the following information for the six relation and knowledge extraction modules: \begin{itemize} \item Triples: the number of triples extracted from the 22,015 sentences. \item Corrects: the number of extracted triples which are correct and exist in the gold dataset. \item Wrongs: the number of extracted triples whose entities are correct but whose relation is incorrect. \item OSO: the number of extracted triples not existing in the gold dataset. \item Tri./Sen.: the number of extracted triples per sentence. \end{itemize} The last row shows these metrics for the Fusion module when the confidence threshold is 0.9. 
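From the counts in Table \ref{tab:stat}, the values in Table \ref{tab:Eval} appear to follow the standard definitions, with precision computed as Corrects/(Corrects+Wrongs) and recall measured against the 22015 gold triples; a minimal sketch of this reading (ours, not the actual evaluation code):

```python
# Hedged sketch of how the metrics in the tables relate (our reading of
# the tables, not the evaluation code). Assumption: recall is measured
# against the 22015 gold triples of the dataset.

GOLD_TRIPLES = 22015

def metrics(corrects, wrongs):
    p = corrects / (corrects + wrongs)          # precision
    r = corrects / GOLD_TRIPLES                 # recall vs. gold triples
    f1 = 2 * p * r / (p + r)                    # harmonic mean
    return round(p, 4), round(r, 4), round(f1, 4)

print(metrics(3306, 917))    # TokensRegex row -> (0.7829, 0.1502, 0.252)
print(metrics(4632, 13113))  # DistantSupervision row -> (0.261, 0.2104, 0.233)
```

The reproduced values match the rounded entries of those two rows.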
As can be seen, the fusion module outperformed all of the individual modules based on the F\textsubscript{1} metric, while its precision and recall are comparable with the performance of the best extractors. Experts must approve each triple before it is added to the knowledge graph. Therefore, the precision metric has the highest importance in the entire extraction process. Human intervention in a web-scale system, such as our proposed system, as well as in other web-scale machine reading systems, is for verification, not for creating a supervised system. In this setting, human users are asked to review and accept or reject the triples for which the system has calculated relatively high confidence. We could reduce the confidence threshold, and as a result the recall would become much higher. However, this would dramatically increase the number of triples that human users have to examine, changing their function from verifiers to annotators. Because human verifiers help the system voluntarily, presenting such a volume of incorrect triples for them to verify would exhaust their motivation to continue. In other words, we want human users to have only the role of verification, and we provide them with high-confidence triples to verify. Also, if for any reason human users are not available to verify triples, the system must work with maximum precision to prevent the promotion of wrong triples to FarsBase, even at the cost of reducing the recall. Figure \ref{fig:Recall-F1-Threshold} shows the effect of changing the confidence threshold on the recall and F\textsubscript{1} measures of the Fusion module. The chart depicts the recall and F\textsubscript{1} of the fusion module for different thresholds and shows the inverse relationship between the threshold and recall. The plotted bullets mark the recall and F\textsubscript{1} values of the individual modules. 
The distant supervision module crosses the recall curve before threshold 0.1, whereas the other modules cross it only after 0.9. F\textsubscript{1} also decreases from threshold 0.1 onward. Because of the low precision of the distant supervision module, it crosses the F\textsubscript{1} curve only after 0.4; the other modules cross it after 0.9. As expected, higher thresholds lead to a lower recall. \begin{figure} \frame{\includegraphics[width=\linewidth]{Figures/Recall-F1-Threshold.jpg}} \caption{Effect of changing the confidence threshold on Recall and F\textsubscript{1} measure of the Fusion module} \label{fig:Recall-F1-Threshold} \end{figure} Figure \ref{fig:threshold_changes} shows the effect of changing the confidence threshold on the precision of the Fusion module. As expected, higher thresholds lead to higher precision. The bullets on the chart show the precision of each individual extraction module. According to the results, the precision of the fusion module outperforms two of the extraction modules at threshold 0; at threshold 0.1, the fusion module extracts triples with precision higher than all of the modules except the DependencyPattern module. Finally, the fusion module overtakes all the extraction modules at threshold 0.8. \begin{figure} \frame{\includegraphics[width=\linewidth]{Figures/Precision-Threshold.jpg}} \caption{Effect of changing the confidence threshold on Precision of the Fusion module} \label{fig:threshold_changes} \end{figure} Comparing Figures \ref{fig:Recall-F1-Threshold} and \ref{fig:threshold_changes}, we can infer that while a higher threshold results in higher precision, it has the opposite effect on recall and F\textsubscript{1}. Table \ref{tab:Common} shows the number of commonly extracted triples between each pair of modules. The last two rows show, for each module, the total number of extracted triples and the number of correct triples, i.e., those that exist in the gold data. 
Note that the common triples are calculated on all triples extracted by the modules, not only those that exist in the gold data. The results show that the distant supervision and TokensRegex modules share the largest number of common triples (2300). To find out which pairs of modules extract similar triples, one should take into account the number of triples extracted by each module and consider the ratio of common triples to the average number of triples extracted by the two modules. \begin{table}[htbp] \centering \caption{The number of common extracted triples between every pair of modules} \label{tab:Common} \begin{tabular}{lllllll} \hline Module & DisSup & TokReg & DepPat & PSIE & RePer & PredPatt \\ \hline DisSup & 0 & 2300 & 104 & 36 & 104 & 20 \\ TokReg & 2300 & 0 & 36 & 124 & 334 & 8 \\ DepPat & 104 & 36 & 0 & 20 & 36 & 10 \\ PSIE & 36 & 124 & 20 & 0 & 442 & 62 \\ RePer & 104 & 334 & 36 & 442 & 0 & 496 \\ PredPatt & 20 & 8 & 10 & 62 & 496 & 0 \\ \hline Triples & 17745 & 37351 & 418 & 44809 & 7865 & 66384 \\ Corrects & 4632 & 3306 & 71 & 94 & 51 & 13 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{conclusions} In this paper, we provide a novel gold corpus for evaluating knowledge extractors in the Persian language. We also introduce multiple unsupervised components for relation and knowledge extraction in Persian. The acquired results show that using a fusion module operating on top of all individual extractor components can increase the precision of knowledge extraction. In future work, we will address the low recall and precision of some of our extractor components. We will also provide new supervised and unsupervised modules to increase the recall and precision of the system. In addition, we plan to add extractors that work on other types of resources, such as tabular data. Another improvement is to add a rule learner module to infer facts and generate new triples. 
Also, by adding a type checker to the Knowledge Fusion module, wrong triples can be filtered based on the domain and range of each predicate. Moreover, the modules extract many triples at the end of each day, so verifying all of them is a demanding task. One potential solution is to utilize crowdsourcing to accelerate the triple verification process. The output of the PredPatt module is strongly affected by the quality of universal dependency parsing. Based on our observations, the current universal dependency parser for the Persian language makes many errors, especially on long sentences. In future work, we can create another UD corpus by converting the Dadegan dependency treebank and implement a better UD parser. \section*{Acknowledgment} The authors would like to thank Mr. Mehrdad Nasser and Ms. Raana Saheb-Nassagh at the Data Mining Laboratory, Faculty of Computer Science, Iran University of Science and Technology, for implementing their methods, namely Distant Supervision RE and RePersian, for FarsBase. The authors also wish to express their sincere appreciation to Dr. Sayyed Ali Hossayni for his constructive guidance and valuable comments. \section*{References}
\section{Introduction} \label{sec:intro} The empirical investigation of quiet Sun\footnote{In this work, we use the term ``quiet Sun'' to refer to the solar surface away from sunspots and active regions.} magnetism is a very important but extremely challenging problem. A large (probably dominant) fraction of the solar magnetic flux resides in magnetic accumulations outside active regions, forming network and inter-network patches (e.g., \citeNP{SNSA02}). It is difficult to obtain conclusive observations of these structures, mainly because of two reasons. First, the size of the magnetic concentrations is much smaller than the spatial resolution capability of modern spectro-polarimetric instrumentation. Estimates obtained with inversion codes yield typical values for the filling factor of the resolution element between $\sim$1\% and 30\%. The interpretation of the polarization signal becomes non-trivial in these conditions and one needs to make use of detailed inversion codes to infer the magnetic field in the atmosphere. Second, the observed signals are extremely weak (typically below $\sim$1\% of the average continuum intensity), demanding both high sensitivity and high resolution. Linear polarization is rarely observed in visible lines, so one is usually left with Stokes~$I$ and~$V$ alone. \citeN{S73} proposed to use the pair of \ion{Fe}{1} lines at 5247 and~5250~\AA \, which have very similar excitation potentials and oscillator strengths (and, therefore, very similar opacities) but different Land\'e factors, to determine the intrinsic field strength directly from the Stokes~$V$ line ratio. That work led to the subsequent popularization of this spectral region for further studies of unresolved solar magnetic structures. Later, the pair of \ion{Fe}{1} lines at 6302~\AA \, became the primary target of the Advanced Stokes Polarimeter (ASP, \citeNP{ELT+92}), mainly due to their lower sensitivity to temperature fluctuations. 
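The line-ratio technique of \citeN{S73} can be illustrated with a schematic numerical model (our own toy calculation, not taken from the cited works): each line is modeled as a Gaussian whose Stokes~$V$ is the half-difference of the two Zeeman-shifted $\sigma$-components, with splitting $\Delta\lambda_B = 4.67\times 10^{-13}\, g\, \lambda^2 B$ (in \AA , with $B$ in G); the Land\'e-factor-scaled amplitude ratio of the 5250.2~\AA \, ($g=3.0$) and 5247.1~\AA \, ($g=2.0$) lines stays near unity for weak fields and drops below it once the splitting saturates.

```python
# Hedged toy model of the Stenflo line-ratio idea (our own illustration;
# the Gaussian line depth and width below are arbitrary assumptions).
import numpy as np

K = 4.67e-13  # Zeeman splitting: Delta_lambda[A] = K * g * lambda[A]^2 * B[G]

def v_amplitude(lam0, g, B, depth=0.7, width=0.06):
    """Peak |Stokes V| of a toy Gaussian line, modeled as the
    half-difference of the two Zeeman-shifted sigma components."""
    lam = np.linspace(lam0 - 0.5, lam0 + 0.5, 2001)
    dlb = K * g * lam0 ** 2 * B
    I = lambda w: 1.0 - depth * np.exp(-(((w - lam0) / width) ** 2))
    V = 0.5 * (I(lam + dlb) - I(lam - dlb))
    return np.max(np.abs(V))

def scaled_ratio(B):
    # V_5250 / V_5247, normalized by the Lande factors (3.0 vs 2.0)
    return (2.0 * v_amplitude(5250.2, 3.0, B)) / (3.0 * v_amplitude(5247.1, 2.0, B))

print(scaled_ratio(100), scaled_ratio(1500))  # ~1 (weak field) vs. clearly < 1
```

The departure of the scaled ratio from unity is what encodes the intrinsic field strength, independently of the filling factor.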
The success of the ASP has contributed largely to the currently widespread use of the 6302~\AA \, lines by the solar community. Recent advances in infrared spectro-polarimetric instrumentation now permit the routine observation of another very interesting pair of \ion{Fe}{1} lines, namely those at 15648 and 15653~\AA \, (hereafter, the 1.56~$\mu$m lines). Examples are the works of \citeN{LR99} and \citeN{KCS+03}. The large Land\'e factors of these lines, combined with their very long wavelengths, result in an extraordinary Zeeman sensitivity. Their Stokes~$V$ profiles exhibit patterns where the $\sigma$-components are completely split for fields stronger than $\sim$400~G at typical photospheric conditions. They also produce stronger linear polarization. On the downside, this spectral range is accessible to very few polarimeters. Furthermore, the 1.56~$\mu$m lines are rather weak in comparison with the above-mentioned visible lines. Unfortunately, the picture revealed by the new infrared data often differs drastically from what was being inferred from the 6302~\AA \, observations (e.g., \citeNP{LR99}; \citeNP{KCS+03}; \citeNP{SNSA02}; \citeNP{SNL04}; \citeNP{DCSAK03}), particularly in the inter-network. \citeN{SNSA03} proposed that the discrepancy in the field strengths inferred from the visible and infrared lines may be explained by magnetic inhomogeneities within the resolution element (typically 1\arcsec). If multiple field strengths coexist in the observed pixel, then the infrared lines will be more sensitive to the weaker fields of the distribution whereas the visible lines will provide information on the stronger fields (see also the discussion about polarimetric signal increase in the 1.56 $\mu$m lines with weakening fields in \citeNP{SAL00}). This conjecture has been tested recently by \citeN{DCSAK06} who modeled simultaneous observations of visible and infrared lines using unresolved magnetic inhomogeneities. 
A recent paper describing numerical simulations by \citeN{MGCRC06} casts some doubts on the results obtained using the 6302~\AA \, lines. Our motivation for the present work is to resolve this issue by observing the quiet Sun simultaneously at 5250 and 6302~\AA . We know that unresolved magnetic structure might result in different field determinations in the visible and the infrared, but the lines analyzed in this work are close enough in wavelength and Zeeman sensitivity that one would expect to obtain the same results for both spectral regions. \section{Methodology} \label{sec:method} Initially, our goal was to observe simultaneously at 5250 and 6302~\AA \ because we expected the 5250~\AA \ lines to be a very robust indicator of intrinsic field strength, which we could then use to test under what conditions the 6302~\AA \ lines are also robust. Unfortunately, as we show below, it turns out that in most practical situations the 5250~\AA \ lines are not more robust than the 6302~\AA \ pair. This left us without a generally valid reference frame against which to test the 6302~\AA \ lines. We then decided to employ a different approach for our study, namely to analyze the uniqueness of the solution obtained when we invert the lines and how the solutions derived for both pairs of lines compare to each other. In doing so, there are some subtleties that need to be taken into consideration. An inversion technique necessarily relies on a number of physical assumptions about the solar atmosphere in which the lines are formed and about the (in general polarized) radiative transfer. This implies that the conclusions obtained from applying a particular inversion code to our data may be biased by the modeling implicit in the inversion. Therefore, a rigorous study requires the analysis of solutions from a wide variety of inversion procedures. Ideally, one would like to cover at least the most frequently employed algorithms. 
For our purposes here we have chosen four of the most popular codes, spanning a wide range of model complexity. They are described in some detail in section~\ref{sec:obs} below. When dealing with Stokes inversion codes, there are two very distinct problems that the user needs to be aware of. First, it may happen that multiple different solutions provide satisfactory fits to our observations. This problem is of a very fundamental nature. It is not a problem with the inversion algorithm but with the observables themselves. Simply put, they do not carry sufficient information to discriminate among those particular solutions. The only way around this problem is either to supply additional observables (e.g., more spectral lines) or to restrict the allowed range of solutions by incorporating sensible constraints in the physical model employed by the inversion. A second problem arises when the solution obtained does not fit the observed profiles satisfactorily. This can happen because the physical constraints in the inversion code are too stringent (and thus no good solution exists within the allowed range of parameters), or simply because the algorithm happened to stop at a secondary minimum. This latter problem is not essential because one can always discard ``bad'' solutions (i.e., those that result in a bad fit to the data) and simply try again with a different initialization. In this work we are interested in exploring the robustness of the observables insofar as they relate to the former problem, i.e.~the underlying uniqueness of the solution. We do this by performing a large number of inversions with random initializations and analyzing the dependence of the solutions on the merit function $\chi^2$ (defined below). Ideally, one would like to see that for small values of $\chi^2$, all the solutions are clustered around a central value with a small spread (the behavior for large $\chi^2$ is not very relevant for our purposes here). 
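The multi-start procedure can be sketched with a toy two-parameter model (our own illustration, not one of the inversion codes used here; SciPy's Nelder-Mead minimizer stands in for the actual fitting engines). The toy model has a deliberate $\pm$width degeneracy, a trivial example of multiple solutions fitting the data equally well, which we fold out before checking the spread:

```python
# Hedged toy illustration of the multi-start chi^2 strategy (ours, not
# an actual inversion code): fit a two-parameter Gaussian line profile
# from many random initializations and inspect the low-chi^2 cluster.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
true_depth, true_width = 0.7, 0.3
obs = 1 - true_depth * np.exp(-(x / true_width) ** 2) + 0.01 * rng.normal(size=x.size)

def chi2(p):
    depth, width = p
    model = 1 - depth * np.exp(-(x / width) ** 2)
    return np.sum((obs - model) ** 2)

solutions = []
for _ in range(20):
    p0 = rng.uniform([0.1, 0.05], [1.0, 1.0])  # random initialization
    res = minimize(chi2, p0, method="Nelder-Mead")
    solutions.append((res.fun, res.x))

best_fit = min(f for f, _ in solutions)
# width enters the model squared, so +w and -w fit equally well;
# fold this trivial degeneracy out before checking the clustering
best = np.array([(p[0], abs(p[1])) for f, p in solutions if f < 2 * best_fit])
print(best.mean(axis=0), best.std(axis=0))  # clustered near (0.7, 0.3)
```

In the well-posed case, all low-$\chi^2$ solutions cluster tightly; a multi-modal or broad distribution would signal the non-uniqueness discussed above.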
This should happen regardless of the inversion code employed and the pair of lines analyzed. If that were the case we could conclude that our observables are truly robust. Otherwise, one needs to be careful when analyzing data corresponding to that particular scenario. \section{Observations and Data Reduction} \label{sec:obs} The observations used in this work were obtained during an observing run in March 2006 with the Spectro-Polarimeter for Infrared and Optical Regions (SPINOR, see \citeNP{SNEP+06}), attached to the Dunn Solar Telescope (DST) at the Sacramento Peak Observatory (Sunspot, NM USA, operated by the National Solar Observatory). SPINOR allows for the simultaneous observation of multiple spectral domains with nearly optimal polarimetric efficiency over a broad range of wavelengths. The high-order adaptive optics system of the DST (\citeNP{RHR+03}) was employed for image stabilization and to correct for atmospheric turbulence. This allowed us to attain sub-arcsecond spatial resolution during some periods of good seeing. The observing campaign was originally devised to obtain as much information as possible on the unresolved properties of the quiet Sun magnetic fields. In addition to the 5250 and 6302~\AA \, domains discussed here, we also observed the \ion{Mn}{1} line sensitive to hyperfine structure effects (\citeNP{LATC02}) at 5537~\AA \, and the \ion{Fe}{1} \ line pair at 1.56~$\mu$m. SPINOR was operated in a configuration with four different detectors which were available at the time of observations (see Table~\ref{tab:detectors}): The Rockwell TCM 8600 infrared camera, with a format of 1024$\times$1024 pixels, was observing the 1.56~$\mu$m region. The SARNOFF CAM1M100 of 1024$\times$512 pixels was used at 5250~\AA . Finally, the two dual TI TC245 cameras of 256$\times$256 pixels (the original detectors of the Advanced Stokes Polarimeter) were set to observe at 5537 and 6302~\AA . 
Unfortunately, we encountered some issues during the reduction of the 5537~\AA \, and 1.56~$\mu$m data and it is unclear at this point whether or not they are usable. Therefore, in the remainder of this paper we shall focus on the analysis of the data taken at 5250 and 6302~\AA . The spectral resolutions quoted in the table are estimated as the quadratic sum of the spectrograph slit size, diffraction limit and pixel sampling. In order to have good spectrograph efficiency at all four wavelengths simultaneously we employed the 308.57~line~mm$^{-1}$ grating (blaze angle 52$^{\rm o}$), at the expense of obtaining a relatively low dispersion and spectral resolution (see Table~\ref{tab:detectors} for details). \begin{deluxetable}{lccccc} \tablewidth{0pt} \tablecaption{SPINOR detector configuration \label{tab:detectors}} \tablehead{Camera & Wavelength & Spectral resolution & Spectral sampling & Usable range & Field of view \\ & (\AA ) & (m\AA ) & (m\AA ) & (\AA ) & along the slit (\arcsec)} \startdata ROCKWELL & 15650 & 190 & 150 & 150 & 187 \\ SARNOFF & 5250 & 53 & 31 & 15 & 145 \\ TI TC245 & 5537 & 40 & 29 & 5 & 95 \\ TI TC245 & 6302 & 47 & 24 & 6 & 95 \\ \enddata \end{deluxetable} Standard flatfield and bias correction procedures were applied to the images. Subsequent processing included the removal of spectrum curvature and the alignment of both polarized beams, using for this purpose a pair of hairlines inserted across the slit. Calibration operations were performed to determine the SPINOR polarimetric response matrix by means of calibration optics located at the telescope beam exit port. In this manner we can decontaminate the datasets from instrumental polarization introduced by the polarimeter. Finally, it is also important to consider the contamination introduced by the telescope. To this aim we obtained telescope calibration data with an array of linear polarizers situated over the DST entrance window. 
By rotating these polarizers to different angles, it is possible to feed light in known polarization states into the system. A cross-dispersing prism was placed in front of one of the detectors, allowing us to obtain calibration data simultaneously across the entire visible spectrum. Details on the procedure may be found in \citeN{SNEP+06}. In this paper we focus on two scans near disk center, one over a relatively large pore (at solar heliocentric coordinates longitude -25.40$^{\rm o}$ , latitude -3.68$^{\rm o}$ ) and the other over a quiet region (coordinates longitude 0.01$^{\rm o}$ , latitude -7.14$^{\rm o}$ ). The pore map was taken with rather low spatial resolution ($\sim$1.5\arcsec) but exhibits a large range of polarization signal amplitudes. The quiet map, on the other hand, has very good spatial resolution ($\sim$0.6\arcsec) but the polarization signals are much weaker. The noise level, measured as the standard deviation of the polarization signal in continuum regions, is approximately 7$\times$10$^{-4}$ and 5$\times$10$^{-4}$ times the average quiet Sun continuum intensity at 5250 and 6302~\AA , respectively. We used several different inversion codes for the various tests presented here, namely: SIR (Stokes Inversion based on Response functions, \citeNP{RCdTI92}); MELANIE (Milne-Eddington Line Analysis using a Numerical Inversion Engine) and LILIA (LTE Inversion based on the Lorien Iterative Algorithm, \citeNP{SN01a}); and MISMA (MIcro-Structured Magnetic Atmosphere, \citeNP{SA97}). The simplest of these algorithms is MELANIE, which implements a Milne-Eddington type of inversion similar to that of \citeN{SL87}. The free parameters considered include a magnetic field vector that is constant along the line of sight, the magnetic filling factor, the line-of-sight velocity and several spectral line parameters that represent the thermal properties of the atmosphere (Doppler width $\Delta \lambda_D$, line-to-continuum opacity ratio $\eta_0$, source function $S$ and damping $a$). 
The \ion{Fe}{1} lines at 6302~\AA \, belong to the same multiplet and their $\eta_0$ are related by a constant factor. Assuming that the formation height is similar enough for both lines we can also consider that they have the same $\Delta \lambda_D$, $S$ and $a$. In this manner the same set of free parameters can be used to fit both lines. In the case of the 5250~\AA \, lines we only invert the \ion{Fe}{1} pair and assume that both lines have identical opacities $\eta_0$. SIR considers a model atmosphere defined by the depth stratification of variables such as temperature, pressure, magnetic field vector, line-of-sight velocity and microturbulence. Atomic populations are computed assuming LTE for the various lines involved, making it possible to fit observations of lines from multiple chemical elements with a single model atmosphere that is common to all of them. Unlike MELANIE, it can produce line asymmetries by incorporating gradients with height in the velocity and other parameters. LILIA is a different implementation of the SIR algorithm. It works very similarly, with some practical differences that are not necessary to discuss here. MISMA is another LTE code but has the capability to consider three atmospheric components (two magnetic and one non-magnetic) that are interlaced on spatial scales smaller than the photon mean free path. Perhaps the most interesting feature of this code for our purposes here is that it implements a number of magneto-hydrodynamic (MHD) constraints, such as momentum balance, as well as mass and magnetic flux conservation. In this manner it is possible to derive the full vertical stratification of the model atmosphere from a limited number of free parameters (e.g., the magnetic field and the velocity at the base of the atmosphere). In all of the inversions presented here we employed the same set of atomic line parameters, which are listed in Table~\ref{tab:atomic}. 
\begin{deluxetable}{cccccc} \tablewidth{0pt} \tablecaption{Atomic line data \label{tab:atomic}} \tablehead{ & Wavelength & Excit. Potential & $\log(gf)$ & Lower level & Upper level \\ Ion & (\AA) & (eV) & & configuration & configuration } \startdata \ion{Fe}{1} & 4121.8020 & 2.832 & -1.300 & $^3$P$_2$ & $^3$F$_3$\\ \ion{Cr}{2} & 5246.7680 & 3.714 & -2.630 & $^4$P$_{1/2}$ & $^4$P$_{3/2}$\\ \ion{Fe}{1} & 5247.0504 & 0.087 & -4.946 & $^5$D$_2$ & $^7$D$_3$\\ \ion{Ti}{1} & 5247.2870 & 2.103 & -0.927 & $^5$F$_3$ & $^5$F$_2$\\ \ion{Cr}{1} & 5247.5660 & 0.961 & -1.640 & $^5$D$_0$ & $^5$P$_1$\\ \ion{Fe}{1} & 5250.2089 & 0.121 & -4.938 & $^5$D$_0$ & $^7$D$_1$\\ \ion{Fe}{1} & 5250.6460 & 2.198 & -2.181 & $^5$P$_2$ & $^5$P$_3$\\ \ion{Fe}{1} & 6301.5012 & 3.654 & -0.718 & $^5$P$_2$ & $^5$D$_2$\\ \ion{Fe}{1} & 6302.4916 & 3.686 & -1.235 & $^5$P$_1$ & $^5$D$_0$\\ \ion{Fe}{1} & 8999.5600 & 2.832 & -1.300 & $^3$P$_2$ & $^3$P$_2$\\ \enddata \end{deluxetable} \section{Results} \label{sec:results} Figures~\ref{fig:map1} and~\ref{fig:map2} show continuum maps and reconstructed magnetograms of the pore and quiet Sun scans in the 5250~\AA \, region. Notice the much higher spatial resolution in the quiet Sun observation (Fig~\ref{fig:map1}). Similar maps, but with somewhat smaller fields of view, can be produced at 6302~\AA . \begin{figure*} \epsscale{2} \plotone{f1.eps} \caption{Continuum intensity map (left) and reconstructed magnetogram (right) of the quiet Sun scan near 5250~\AA . The magnetogram shows the integrated Stokes~$V$ signal over a narrow bandwidth on the red lobe of the \ion{Fe}{1} 5250.2~\AA \, line. In both panels, values are referred to the average quiet Sun intensity. The angular scale on the vertical axis measures the distance from the upper hairline. \label{fig:map1} } \end{figure*} \begin{figure*} \epsscale{2} \plotone{f2.eps} \caption{Continuum intensity map (left) and reconstructed magnetogram (right) of the pore scan near 5250~\AA . 
The magnetogram shows the integrated Stokes~$V$ signal over a narrow bandwidth on the red lobe of the \ion{Fe}{1} 5250.2~\AA \, line. In both panels, values are referred to the average quiet Sun intensity. The angular scale on the vertical axis measures the distance from the upper hairline. \label{fig:map2} } \end{figure*} The first natural step in the analysis of these observations, before even considering any inversions, is to calculate the Stokes~$V$ amplitude ratio of the \ion{Fe}{1} line 5250.2 to 5247.0~\AA \, (hereafter, the line ratio). One would expect to obtain a rough idea of the intrinsic magnetic field strength from this value alone. A simple calibration was derived by taking the thermal stratification of the Harvard-Smithsonian Reference Atmosphere (HSRA, see \citeNP{GNK+71}) and adding random (depth-independent) Gaussian temperature perturbations with an amplitude (standard deviation) of $\pm$300~K, different magnetic field strengths and a fixed macroturbulence of 3~km~s$^{-1}$ (this value corresponds roughly to our spectral resolution). Line-of-sight gradients of temperature, field strength and velocity are also included. Figure~\ref{fig:calib} shows the line ratio obtained from the synthetic profiles as a function of the magnetic field employed to synthesize them. The line ratio is computed simply as the peak-to-peak amplitude ratio of the Stokes~$V$ profiles. Notice that when the same experiment is carried out with a variable macroturbulent velocity the scatter increases considerably, even for relatively small values of up to 1~km~s$^{-1}$ (right panel). The syntheses of Fig~\ref{fig:calib} consider the partial blends of all 6 lines in the 5250~\AA \, spectral range. \begin{figure*} \epsscale{2} \plotone{f3.eps} \caption{Line ratio calibration using synthetic Stokes profiles of \ion{Fe}{1} \ 5250~\AA \ emergent from the HSRA model atmosphere after adding random temperature perturbations and magnetic fields. 
Left: Macroturbulence was kept fixed at 3~km~s$^{-1}$. Right: Random macroturbulence varies between 0 and 1~km~s$^{-1}$. \label{fig:calib} } \end{figure*} Figure~\ref{fig:mapratios} shows the observed line ratios for both scans. According to our calibration (see above), ratios close to 1 indicate strong fields of nearly (at least) $\sim$2~kG, whereas larger values would suggest the presence of weaker fields, down to the weak-field saturation regime at (at most) $\sim$500~G corresponding to a ratio of 1.5. Figure~\ref{fig:mapratios} is somewhat disconcerting at first sight. The pore exhibits the expected behavior with strong $\sim$2~kG fields at the center that decrease gradually towards the outer boundaries until they become weak. The network and plage patches, on the other hand, contain relatively large areas with high ratios of 1.4 and even 1.6 at some locations. This is in sharp contrast with the strong fields ($\sim$1.5~kG) that one would expect in network and plage regions (e.g., \citeNP{SSH+84}; \citeNP{SAL00}; \citeNP{BRRCC00}). \begin{figure*} \epsscale{2} \plotone{f4.eps} \caption{Amplitude ratio of Stokes~$V$ in \ion{Fe}{1} 5250.2 to \ion{Fe}{1} 5247.0~\AA . Left: Network patches observed in the quiet Sun scan. Right: A small pore (centered on coordinates [10,-30] approximately) and surrounding plage. Ratios close to 1 are indicative of a strong field of nearly 2~kG. Ratios close to 1.6 correspond to the weak-field (up to $\sim$500~G) regime. Black areas exhibit a circular polarization amplitude smaller than 1\% and have not been inverted. \label{fig:mapratios} } \end{figure*} In view of these results we carried out inversions of the Stokes $I$ and $V$ profiles of the spectral lines in the 5250~\AA \, region emergent from the pore. We used the code LILIA considering a single magnetic component with a constant magnetic field embedded in a (fixed) non-magnetic background, taken to be the average quiet Sun. 
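The peak-to-peak line-ratio measurement used throughout this analysis is straightforward to state in code. Below is a minimal Python sketch; the antisymmetric Stokes~$V$ profiles are hypothetical illustrative shapes, not the observations or the synthesis output of any of the codes discussed here.

```python
import numpy as np

def line_ratio(v_5250, v_5247):
    """Peak-to-peak Stokes V amplitude ratio of Fe I 5250.2 to 5247.0 A."""
    return np.ptp(v_5250) / np.ptp(v_5247)

# Hypothetical antisymmetric Stokes V profiles on a common wavelength grid
wl = np.linspace(-0.3, 0.3, 201)                 # offset from line center (A)
v_5247 = 0.010 * wl * np.exp(-(wl / 0.08) ** 2)  # weaker lobe amplitudes
v_5250 = 0.013 * wl * np.exp(-(wl / 0.08) ** 2)  # stronger lobe amplitudes

ratio = line_ratio(v_5250, v_5247)  # 1.3 for these identically shaped profiles
```

In real data the two profiles differ in shape as well as amplitude, which is precisely why broadening affects the two lines unequally and scatters the calibration.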
The resulting field-ratio scatter plot is shown in Figure~\ref{fig:calibpore}. Note that the scatter in this case is much larger than that obtained with the HSRA calibration above. It is important to point out that the line ratio depicted in the figure is that measured on the synthetic profiles. Therefore, the scatter cannot be ascribed to inaccuracies of the inversion. As a verification test we picked one of the models with kG fields that produced a line ratio of $\simeq$1.4 and synthesized the emergent profiles with a different code (SIR), obtaining the same ratio. We therefore conclude that, when the field is strong, many different atmospheres are able to produce similar line ratios if realistic thermodynamic fluctuations and turbulence are allowed in the model. For weak fields, the ratio tends to a value of $\simeq$1.5 without much fluctuation. \begin{figure} \epsscale{1} \plotone{f5.eps} \caption{Line ratio calibration using synthetic Stokes profiles obtained in the inversion of the pore. \label{fig:calibpore} } \end{figure} Accepting then that we could no longer rely on the line ratio of \ion{Fe}{1} 5247.0 and 5250.2~\AA \, as an independent reference to verify the magnetic fields obtained with the 6302~\AA \, lines, we considered the result of inverting each spectral region separately. Figure~\ref{fig:porefields} depicts the scatter plot obtained. At the center of the pore, where we have the strongest fields (right-hand side of the figure), there is a very good correlation between the results of both measurements. However, those points lie systematically below the diagonal of the plot. This may be explained by the different ``formation heights'' of the lines. The \ion{Fe}{1} lines at 5250~\AA \, generally form somewhat higher than those at 6302~\AA . If the field strength decreases with height, one would expect to retrieve a slightly lower field strength when using 5250~\AA . Unfortunately, the correlation breaks down for the weaker fields. 
In Figure~\ref{fig:poreerrors} we can see that both sets of lines yield approximately the same field strengths (with slightly lower values for the 5250 inversions, as discussed above) for longitudinal fluxes above some $\sim$500~G. Below this limit our diagnosis is probably not sufficiently robust. \begin{figure} \epsscale{1} \plotone{f6.eps} \caption{Intrinsic field strengths determined from inverting the lines at 5250 (ordinates) and 6302 (abscissa) observed in the pore. The solid line shows the diagonal of the plot. \label{fig:porefields} } \end{figure} \begin{figure} \epsscale{1} \plotone{f7.eps} \caption{Differences in intrinsic field strengths measured at 6302 and 5250~\AA \, as a function of the average longitudinal magnetic flux density. The solid line represents the median value over 100~G bins. The dashed lines represent the standard deviation of the points in each bin. \label{fig:poreerrors} } \end{figure} If instead of the intrinsic field we consider the longitudinal magnetic flux, we obtain a fairly good agreement between both spectral regions. The Milne-Eddington inversions with MELANIE yield a Pearson correlation coefficient of 0.89. In principle, the agreement is somewhat worse for the LILIA inversions, with a correlation coefficient of 0.60. However, we have found that this is due to a few outlier points. Removing them results in a correlation coefficient of 0.82. The situation becomes more complicated in the network. The results of inverting a network patch with MELANIE and LILIA can be seen in Figure~\ref{fig:fluxes}. The inversions with MELANIE (upper panels) do not agree very well with each other. The correlation coefficient is only 0.23. The 5250~\AA \, map (upper right panel in the figure) looks considerably more noisy and rather homogeneous, compared to the 6302~\AA \, map (upper left). The LILIA inversions (lower panels) exhibit somewhat better agreement (correlation is 0.65), but again look very noisy at 5250~\AA . 
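An outlier-clipped correlation of the kind quoted above can be sketched as follows. The sigma-clipping criterion is an assumption made here for illustration; the text does not specify how the outlier points were identified, and the data below are synthetic.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two (flattened) flux-density maps."""
    return np.corrcoef(np.ravel(x), np.ravel(y))[0, 1]

def pearson_clipped(x, y, nsigma=3.0):
    """Correlation after discarding points whose residual from a straight-line
    fit exceeds nsigma residual standard deviations (assumed criterion)."""
    x, y = np.ravel(x).astype(float), np.ravel(y).astype(float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    keep = np.abs(resid) < nsigma * resid.std()
    return pearson(x[keep], y[keep])

# Synthetic example: a tight linear relation degraded by one gross outlier
x = np.arange(20.0)
y = x.copy()
y[5] += 50.0
r_raw = pearson(x, y)           # strongly degraded by the single outlier
r_clip = pearson_clipped(x, y)  # close to 1 once the outlier is removed
```

This illustrates how a handful of outliers can pull a correlation from 0.82 down to 0.60, as reported for the LILIA flux comparison.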
\begin{figure} \epsscale{1} \plotone{f8.eps} \caption{Longitudinal flux density from the inversions of a network patch at 6302 (left) and 5250~\AA \, (right). Upper panels: Inversions with the Milne-Eddington code MELANIE. Lower panels: Inversions with the LTE code LILIA. Spatial scales are in pixels. \label{fig:fluxes} } \end{figure} Figure~\ref{fig:NWa} depicts average profiles over a network patch. The 5250 line ratio for this profile is 1.21. We started by exploring the uniqueness of the magnetic field strength inferred with the simplest algorithm, MELANIE. Each one of these average profiles was inverted 100 times with random initializations. The results are presented in Figures~\ref{fig:uniqME5250} and~\ref{fig:uniqME6302}, which show the values obtained versus the goodness of the fit, measured by the merit function $\chi^2$, which in this work is defined as: \begin{equation} \label{eq:chisq} \chi^2={1 \over N_p} \sum_{i=1}^{N_p}{ (I_i^{obs}-I_i^{syn})^2 \over \sigma_i^2} \, , \end{equation} where $N_p$ is the number of wavelengths and the $\sigma_i$ have been taken to be 10$^{-3}$, so that a value of $\chi^2$=1 would represent on average a good fit at the 10$^{-3}$ level. The average profiles inverted here have a much lower noise (near 10$^{-4}$) and thus it is sometimes possible to obtain $\chi^2$ smaller than 1. The $\chi^2$ represented in the plots is the one corresponding to the Stokes~$V$ profile only (the inversion codes consider both $I$ and $V$, but $I$ is consistently well reproduced and does not help to discriminate among the different solutions). Most of the fits correspond to kG fields, indicating that inversions of network profiles are very likely to yield high field strengths. However, there exists a very large spread of field strength values that provide reasonably good fits to the observed data. 
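For reference, the merit function of Eq.~(\ref{eq:chisq}) amounts to a mean of squared, noise-normalized residuals. The following is a minimal Python sketch, not the actual implementation used by any of the inversion codes:

```python
import numpy as np

def chi_squared(obs, syn, sigma=1e-3):
    """Eq. (1): mean squared residual between observed and synthetic profiles,
    normalized by the assumed noise level sigma (here 1e-3, as in the text)."""
    obs = np.asarray(obs, dtype=float)
    syn = np.asarray(syn, dtype=float)
    return np.mean(((obs - syn) / sigma) ** 2)

# Sanity check: residuals exactly at the noise level give chi^2 = 1
obs = np.zeros(50)
syn = np.full(50, 1e-3)
chi2 = chi_squared(obs, syn)  # 1.0
```

Since the averaged profiles have noise near $10^{-4}$ while $\sigma_i=10^{-3}$ is kept fixed, fits can land below $\chi^2=1$, exactly as noted above.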
This is especially true for the 5250~\AA \, lines, for which it is possible to fit the observations virtually equally well with fields either weaker than 500~G or stronger than 1~kG. In the case of 6302~\AA \, the best solutions are packed around $\simeq$1.5~kG, although other solutions of a few hecto-Gauss (hG) are only slightly worse than the best fit. \begin{figure} \epsscale{1} \plotone{f9.eps} \caption{Average observed Stokes~$V$ profiles in a network patch (located approximately around coordinates [20,7] in Figure~\ref{fig:map1}). Ordinate values are related to the average quiet Sun continuum intensity. The 5250 line ratio (the relevant \ion{Fe}{1} lines are marked with arrows) is 1.21. \label{fig:NWa} } \end{figure} \begin{figure} \epsscale{1} \plotone{f10.eps} \caption{Representation of the solutions from 100 different inversions of the 5250~\AA \, region with random initializations obtained using the Milne-Eddington code MELANIE. The field strength inferred by the inversion is plotted versus the quality of the fit, as measured by the merit function $\chi^2$. \label{fig:uniqME5250} } \end{figure} \begin{figure} \epsscale{1} \plotone{f11.eps} \caption{Representation of the solutions from 100 different inversions of the 6302~\AA \, region with random initializations obtained using the Milne-Eddington code MELANIE. The field strength inferred by the inversion is plotted versus the quality of the fit, as measured by the merit function $\chi^2$. \label{fig:uniqME6302} } \end{figure} It could be argued that Milne-Eddington inversions are too simplistic to deal with network profiles, since they are known to exhibit fairly strong asymmetries (both in area and in amplitude) that cannot be reproduced by a Milne-Eddington model. With this consideration in mind, we made a similar experiment using the LILIA and SIR codes. Figures~\ref{fig:uniqLI5250} and~\ref{fig:uniqLI6302} show the results obtained with LILIA. 
The magnetic and non-magnetic atmospheres have been forced to have the same thermodynamics in order to reduce the possible degrees of freedom (on the downside, this introduces an implicit assumption on the solar atmosphere). Again, the 6302~\AA \, lines seem to yield somewhat more robust inversions. The best fits correspond to field strengths of approximately 1.5~kG, with weaker fields delivering somewhat lower fit quality. The 5250~\AA \, lines give nearly random results (although they tend to be clustered between 500 and 800~G there is a tail of good fits with up to almost 1400~G). Similar results are obtained using SIR. In order to give an idea of what the different $\chi^2$ values mean, we present some of the fits in Fig~\ref{fig:fits}. \begin{figure} \epsscale{1} \plotone{f12.eps} \caption{Representation of the solutions from 100 different inversions of the 5250~\AA \, region with random initializations obtained using the LTE code LILIA. The field strength inferred by the inversion is plotted versus the quality of the fit, as measured by the merit function $\chi^2$. \label{fig:uniqLI5250} } \end{figure} \begin{figure} \epsscale{1} \plotone{f13.eps} \caption{Representation of the solutions from 100 different inversions of the 6302~\AA \, region with random initializations obtained using the LTE code LILIA. The field strength inferred by the inversion is plotted versus the quality of the fit, as measured by the merit function $\chi^2$. \label{fig:uniqLI6302} } \end{figure} \begin{figure*} \epsscale{2} \plotone{f14.eps} \caption{Fits obtained with MELANIE (left) and LILIA (right) to the average network profile observed at 5250 (upper four panels) and 6302~\AA \, (lower four panels). For each case we show the best fit (smaller $\chi^2$) and a not so good one. 
\label{fig:fits} } \end{figure*} The thermal stratifications obtained in the 6302~\AA \, inversions are relatively similar, although the weaker fields require a hotter upper photosphere than the stronger fields (see Fig~\ref{fig:tlil6302}). On the other hand, the 5250~\AA \, inversions do not exhibit a clear correlation between the magnetic fields and temperature inferred (Fig~\ref{fig:tlil5250}). \begin{figure} \epsscale{1} \plotone{f15.eps} \caption{Temperature stratification of the models obtained from the 6302~\AA \, inversions with LILIA. Solid line: Models that include kG fields. Dashed line: Models with sub-kG field. \label{fig:tlil6302} } \end{figure} \begin{figure} \epsscale{1} \plotone{f16.eps} \caption{Temperature stratification of the models obtained from the 5250~\AA \, inversions with LILIA. Solid line: Models that include kG fields. Dashed line: Models with sub-kG field. \label{fig:tlil5250} } \end{figure} The MISMA code was also able to find good solutions with either weak or strong fields. The smallest $\chi^2$ values correspond systematically to kG fields, but there are also some reasonably good fits ($\chi^2 \simeq 1$) obtained with weak fields of $\sim$500~G (Figs~\ref{fig:misma6302} and~\ref{fig:misma5250}). However, we found that all the weak-field solutions for 6302~\AA \, have a temperature that increases outwards in the upper photosphere (Fig~\ref{fig:misma6302}). This might be useful to discriminate between the various solutions. The 5250~\AA \, lines, on the other hand, do not exhibit this behavior (Fig~\ref{fig:misma5250}). \begin{figure} \epsscale{1} \plotone{f17.eps} \caption{Inversions of the 6302~\AA \, region. Average magnetic field in the lower photosphere of the MISMA component that harbors more flux, as a function of the quality of the fit $\chi^2$. Asterisks: Solutions where the temperature decreases outwards in the upper photosphere. Diamonds: Solutions where the temperature increases outwards in the upper photosphere. 
\label{fig:misma6302} } \end{figure} \begin{figure} \epsscale{1} \plotone{f18.eps} \caption{Inversions of the 5250~\AA \, region. Average magnetic field in the lower photosphere of the MISMA component that harbors more flux, as a function of the quality of the fit $\chi^2$. Asterisks: Solutions where the temperature decreases outwards in the upper photosphere. Diamonds: Solutions where the temperature increases outwards in the upper photosphere. \label{fig:misma5250} } \end{figure} In principle it would seem plausible to discard the models with outward increasing temperature using physical arguments. This would make us conclude from the 6302~\AA \, lines that the fields are actually very strong, between $\sim$1.5 and~2.5~kG (it would not be possible to draw similar conclusions from the 5250~\AA \, lines). In any case, it would be desirable to have a less model-dependent measurement that could be trusted regardless of what the thermal stratification is. Interestingly, when we invert all four \ion{Fe}{1} lines simultaneously, both at 5250 and 6302~\AA , the weak-field solutions disappear from the low-$\chi^2$ region of the plot and the best solutions gather between approximately 1 and 1.4~kG (see Fig~\ref{fig:mismaboth}). \begin{figure} \epsscale{1} \plotone{f19.eps} \caption{Inversions of combined 5250 and 6302~\AA \, profiles. Average magnetic field in the lower photosphere of the MISMA component that harbors more flux, as a function of the quality of the fit $\chi^2$. \label{fig:mismaboth} } \end{figure} \citeN{SNARMS06} present a list of spectral lines with identical excitation potentials and oscillator strengths. We decided to test one of the most promising pairs, namely the \ion{Fe}{1} lines at 4122 and 9000~\AA. The choice was made based on their equivalent widths, Land\' e factors and also being reasonably free of blends in the quiet solar spectrum. 
Simulations similar to those in Figure~\ref{fig:calib} showed that the line-ratio technique is still incapable of retrieving an unambiguous field strength due to the presence of line broadening. Also, even if the {\it line} opacities are the same, the continuum opacities are significantly different at such disparate wavelengths, but in any case the line opacity is much stronger than the background continuum where the Stokes~$V$ lobes are formed. When we tested the robustness of inversion codes applied to this pair of lines, we obtained extremely reliable results as described below. Unfortunately we do not have observations of these two lines and therefore resorted to synthetic profiles to perform the tests. We used a 2-component reference model, where both components have the thermal structure of HSRA. The magnetic component has an arbitrary magnetic field and line-of-sight velocity. The field has a linear gradient and goes from $\simeq$1.6~kG at the base of the photosphere to 1~kG at continuum optical depth $\log(\tau_{5000})$=-4. The velocity field goes from 2.5~km~s$^{-1}$ to 0.4~km~s$^{-1}$ at $\log(\tau_{5000})$=-4. The magnetic component has a filling factor of 0.2. The reference macroturbulence is 3~km~s$^{-1}$, which is roughly the value inferred from our observations. The spectra produced by this model at 5250 and 6302~\AA \, are very similar to typical network profiles. We considered the reference 4122 and 9000~\AA \, profiles as simulated observations and inverted them with two different methods. \begin{figure} \epsscale{1} \plotone{f20.eps} \caption{Inversions of the proposed \ion{Fe}{1} 4122 and 9000~\AA \, profiles. Average magnetic field in the lower photosphere of the magnetic component as a function of the quality of the fit $\chi^2$. \label{fig:lilia9000} } \end{figure} In order to make the test as realistic as possible, we gave the inversion code a somewhat erroneous non-magnetic profile. 
Specifically, we multiplied the profile from the reference non-magnetic component by a factor of 0.9 and shifted it 1 pixel towards the red. The wavelength shift corresponds to roughly 0.36~km~s$^{-1}$ in both regions. This distorted non-magnetic profile was given as input to the inversion codes. We inverted the reference profiles using LILIA with 100 different initializations. Only $\sim$30\% of the inversions converged to a reasonably low value of $\chi^2$, with the results plotted in Figure~\ref{fig:lilia9000}. We can see that the inversions are extremely consistent over a range of $\chi^2$ much larger than in the previous cases. In fact, none of the solutions are compatible with weak fields, suggesting that these lines are much better at discriminating intrinsic field strengths. A much more demanding verification for the diagnostic potential of these new lines is to use a simpler scenario in the inversion than in the synthesis of the reference profiles, incorporating typical uncertainties in the calculation. After all, the real Sun will always be more complex than our simplified physical models. Thus, inverting the reference profiles with the Milne-Eddington code is an appropriate test. For this experiment we not only supplied the same ``distorted'' non-magnetic profile as above, but we also introduced a systematic error in the $\log(gf)$ of the lines. We forced the opacity of the 9000~\AA \, line to be 20\% lower than that of 4122 (instead of taking them to be identical, as their tabulated values would indicate). Again, we performed 100 different inversions of the reference set of profiles with random initializations. In this case the inversion results are astonishingly stable, with 98 out of the 100 inversions converging to a $\chi^2$ within 15\% of the best fit. The single-valued magnetic field obtained for those 98 inversions has a median of 1780~G with a standard deviation of only 3~G. 
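The distortion applied to the non-magnetic profile (scaling by 0.9 and a one-pixel shift towards the red) and the conversion of a pixel shift into a Doppler velocity can be sketched as below. The dispersion value is illustrative, chosen so that one pixel reproduces the quoted $\sim$0.36~km~s$^{-1}$ near 4122~\AA ; it is not taken from the instrument tables.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def distort_profile(profile, scale=0.9, shift_pix=1):
    """Scale the profile and shift it shift_pix pixels towards the red."""
    return np.roll(scale * np.asarray(profile, dtype=float), shift_pix)

def pixel_shift_velocity(dispersion_mA, wavelength_A):
    """Doppler velocity (km/s) equivalent to a one-pixel wavelength shift."""
    return C_KMS * (dispersion_mA * 1e-3) / wavelength_A

# Illustrative: a ~5 mA pixel near 4122 A corresponds to roughly 0.36 km/s
v = pixel_shift_velocity(4.95, 4122.0)
```

Note that \texttt{np.roll} wraps the trailing edge around to the front; for real spectra one would pad or truncate at the edges instead.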
The small scatter does not reflect the systematic errors introduced by several factors, including: a) the inability of the Milne-Eddington model to reproduce the comparatively more complex reference profiles; b) the artificial error introduced in the atomic parameters of the 9000~\AA \, line; c) the distortion (scale and shift) of the non-magnetic profile provided to the inversion code. A final caveat with this new pair is that, even though the synthetic atlas of \citeN{SNARMS06} indicates that the 9000~\AA \, line Stokes~$V$ profile is relatively free of blends, this still needs to be confirmed by observations (there is a very prominent line nearby that may complicate the analysis otherwise). \section{Conclusions} \label{sec:conc} The ratio of Stokes~$V$ amplitudes at 5250 and 5247~\AA \, is a very good indicator of the intrinsic field strength in the absence of line broadening, e.g. due to turbulence. However, line broadening tends to smear out spectral features and reduce the Stokes~$V$ amplitudes. This reduction is not the same for both lines, depending on the profile shape. If the broadening could somehow be held constant, one would obtain a line-ratio calibration with very low scatter. However, if the broadening is allowed to fluctuate, even with amplitudes as small as 1~km~s$^{-1}$, the scatter becomes very large. Fluctuations in the thermal conditions of the atmosphere further complicate the analysis. This paper is not intended to question the historical merits of the line-ratio technique, which led researchers to learn that fields seen in the quiet Sun at low spatial resolution are mostly of kG strength with small filling factors. However, it is important to know its limitations. Otherwise, the interpretation of data such as those in Figure~\ref{fig:mapratios} could be misleading. 
Before this work, most of the authors were under the impression that measuring the line ratio of the 5250~\AA \, lines would always provide an accurate determination of the intrinsic field strength. With very high-resolution observations, such as those expected from the Advanced Technology Solar Telescope (ATST, \citeNP{KRK+03}) or the Hinode satellite, there is some hope that most of the turbulent velocity fields may be resolved. In that case, the turbulent broadening would be negligible and the line-ratio technique would be more robust. However, even with the highest possible spatial resolution, velocity and temperature fluctuations along the line of sight will still produce turbulent broadening. From the study presented here we conclude that, away from active region flux concentrations, it is not straightforward to measure intrinsic field strengths from either 5250 or 6302~\AA \, observations taken separately. Weak-flux internetwork observations would be even more challenging, as demonstrated recently by \citeN{MG07}. Surprisingly enough, the 6302~\AA \, pair of \ion{Fe}{1} lines is more robust than the 5250~\AA \, lines in the sense that it is indeed possible to discriminate between weak and strong field solutions if one is able to rule out a thermal stratification with temperatures that increase outwards. Even so, this is only possible when one employs an inversion code that has sufficient MHD constraints (an example is the MISMA implementation used here) to reduce the space of possible solutions. The longitudinal flux density obtained from inversions of the 6302~\AA \, lines is better determined than that obtained with 5250~\AA . This happens regardless of the inversion method employed, although using a code like LILIA provides better results than a simpler one such as MELANIE. The best fits to average network profiles correspond to strong kG fields, as one would expect. 
An interesting conclusion of this study is that it is possible to obtain reliable results by inverting simultaneous observations at both 5250 and 6302~\AA . Obviously, this would be possible with relatively sophisticated algorithms (e.g., LTE inversions) but not with simple Milne-Eddington inversions. The combination of two other \ion{Fe}{1} lines, namely those at 4122 and 9000~\AA , seems to provide a much more robust determination of the quiet Sun magnetic fields. Unfortunately, these lines are very distant in wavelength and few spectro-polarimeters are capable of observing them simultaneously. Examples of instruments with this capability are the currently operational SPINOR and THEMIS, as well as the planned ATST and GREGOR. Depending on the evolution time scales of the structures analyzed, it may be possible for some other instruments to observe the blue and red lines alternately. \acknowledgments This work has been partially funded by the Spanish Ministerio de Educaci\'on y Ciencia through project AYA2004-05792
\section*{Abstract} {\bf We monitor the time evolution of the temperature of phononic collective modes in a one-dimensional quasicondensate submitted to losses. At long times the ratio between the temperature and the energy scale $mc^2$, where $m$ is the atomic mass and $c$ the sound velocity, takes, within a precision of 20\%, an asymptotic value. This asymptotic value is observed while $mc^2$ decreases in time by a factor as large as 2.5. Moreover, this ratio is shown to be independent of the loss rate and of the strength of interactions. These results confirm theoretical predictions, and the measured stationary ratio is in quantitative agreement with the theoretical calculations. } \section{Introduction} There have been many efforts and much progress in the last decades in the realization and investigation of isolated many-body quantum systems. The effect of coupling to an environment has however regained interest in recent years. While such a coupling was mainly considered as detrimental for the study of many-body quantum physics, it has been shown that proper engineering of the coupling to an environment can enable the realization of interesting many-body quantum states such as entangled states or highly correlated states~\cite{barreiro_open-system_2011,tomita_observation_2017}. The effect of coupling to an environment is still a largely open question. The simplest kind of coupling, which is ubiquitous in experiments, is a loss process where particles leave the system. Losses are particularly relevant in exciton-polariton condensates, but they are also present, or can be engineered, in ultra-cold atomic degenerate Bose gases. If one considers a Bose-Einstein condensate (BEC) wavefunction, losses are treated as a dissipative term added to the Gross-Pitaevskii equation, which describes the evolution of the BEC at the mean-field level. 
This approach was successful in describing the effect of local losses in an atomic BEC~\cite{barontini_controlling_2013} and many aspects of exciton-polariton condensates~\cite{wouters_excitations_2007,wouters_spatial_2008,keeling_spontaneous_2008,lagoudakis_quantized_2008}. In the latter case, a pumping process ensures the presence of a steady state. Beyond this mean-field approach, the loss process introduces fluctuations, which are due to the shot noise associated with the quantization of the particles. Both the dissipation and the fluctuations produced by losses were taken into account in stochastic theoretical descriptions~\cite{carusotto_spontaneous_2005,wouters_stochastic_2009,grisins_degenerate_2016,bouchoule_cooling_2018}\footnote{Other approaches such as the Keldysh formalism have been developed~\cite{szymanska_nonequilibrium_2006}.}. While for exciton-polariton condensates a pumping process is present, in atomic Bose gases the sole effect of losses can be investigated. In~\cite{grisins_degenerate_2016,bouchoule_cooling_2018} the time evolution of a Bose-Einstein condensate, or a quasicondensate in reduced dimension, submitted to homogeneous losses has been theoretically investigated. The dissipative term is responsible for cooling: although the loss process is homogeneous, losses per unit length occur at a higher rate in regions of higher densities -- just because there are more atoms -- which leads to a decrease of density fluctuations and thus of their associated interaction energy. On the other hand, the stochastic nature of losses tends to increase density fluctuations and thus the interaction energy; this corresponds to a heating term. As a result of the competition between both effects, it has been predicted that phononic collective modes acquire, at large times, a temperature $T$ such that $k_BT$ decreases proportionally to the energy scale $mc^2$, where $m$ is the atomic mass and $c$ the speed of sound. 
The precise value of the asymptotic ratio $k_B T/(mc^2)$ depends on the loss process and the geometry~\cite{bouchoule_cooling_2018}. An intrinsic homogeneous loss process present in cold-atom setups is the three-body loss process, where a loss event corresponds to an inelastic collision involving three atoms and amounts to the loss of the three atoms. In~\cite{schemmer_cooling_2018}, the asymptotic ratio $k_B T/(mc^2)$ associated with these three-body losses has been experimentally observed, and its value is in agreement with theoretical predictions. On the other hand, there was up to now no experimental evidence of an asymptotic ratio $k_B T/(mc^2)$ in the case of a one-body loss process~\cite{rauer_cooling_2016}. A one-body loss process corresponds to a uniform loss rate: each atom has the probability $\Gamma dt$ of being lost during a time interval $dt$, regardless of its position and its energy. In this paper, we demonstrate the presence of an asymptotic value of the ratio $k_BT/(mc^2)$ for one-dimensional harmonically confined quasicondensates submitted to one-body losses, and our results are in agreement with theoretical predictions. \section{Description of the experiment and data analysis} We use an atom-chip set-up, described in detail in~\cite{schemmer_out--equilibrium_2019}, to produce ultracold gases of $^{87}$Rb atoms, polarized in the stretch state $|F=2,m_F=2\rangle$ and confined in a very elongated magnetic trap. The transverse confinement is realized by three parallel wires aligned along $z$, running an AC current modulated at 400~kHz, together with a homogeneous longitudinal magnetic field $B_0=\unit[2.4]{G}$~\cite{trebbia_roughness_2007}: atoms are confined transversely in the time-averaged potential and the transverse oscillation frequency $\omega_\perp/(2\pi)$, which depends on the data set, lies in the interval [1.5-4.0]$\,$kHz. 
A longitudinal harmonic confinement of frequency $\omega_z/(2\pi)=\unit[9.5]{Hz}$ is realized by a pair of wires perpendicular to $z$. Using standard radio-frequency (RF) evaporative cooling, we prepare clouds whose temperature $T$ and chemical potential\footnote{The energy offset used for the chemical potential is the energy of the transverse ground state, {\it i.e.} $\hbar\omega_\perp$.} $\mu_p$, depending on the data set, lie in the ranges $\mu_p/(2\pi\hbar)\in \unit[1.0-3.1]{kHz}$ and $T \in \unit[40-75]{nK}$. The ratios $k_B T/(\hbar \omega_\perp)$ and $\mu_p/(\hbar\omega_\perp)$ lie in the ranges $0.3-0.7$ and $0.6-1.2$ respectively, such that the clouds are quasi-one-dimensional. The clouds lie deep in the quasicondensate regime~\cite{kheruntsyan_pair_2003}, which is characterized by strongly reduced density fluctuations -- the two-body zero-distance correlation function $g^{(2)}(0)$ being close to one -- while longitudinal phase fluctuations are still present. We then increase the frequency of the RF knife by about 25~kHz, so that it no longer induces losses but ensures the removal of the residues of three-body recombination events. In contrast to the three-body process, no intrinsic process leads to a one-body loss term in our experiment, and one-body losses have to be engineered. We introduce homogeneous one-body losses by coupling the trapped atoms to the untrapped state $|F=1,m_F=1\rangle$, which lies at an energy $\hbar\omega_{HFS}+(3/2)\mu_B B_0$ below the trapped state $|F=2,m_F=2\rangle$, where $\omega_{HFS}/(2\pi)\simeq \unit[6.8347]{GHz}$ is the hyperfine splitting of the $^{87}$Rb ground state. The coupling is realized by a microwave (MW) field produced by a voltage-controlled oscillator connected to an antenna placed a few centimeters away from the atomic cloud. We use a noise generator to produce a MW power spectrum with a rectangular shape 200~kHz wide. Its central frequency $\omega_0$ may be varied in time. 
During the preparation phase of our ultra-cold cloud, $\omega_0$ is chosen such that the transition is shifted from resonance by about 5~MHz, so that the MW does not induce any noticeable losses. At time $t=0$, we suddenly shift $\omega_0$ to its resonance value to induce losses. The large width of the MW power spectrum, compared to $\omega_\perp$ and to the chemical potential of the atoms, ensures that the loss rate is homogeneous over the size of the atomic cloud and is not affected by interaction effects. We adapt the loss rate $\Gamma$ by adjusting the power of the MW field. \begin{figure} \centerline{\includegraphics[width=0.9\textwidth]{schema_and_profile_and_ripples-2.pdf}} \caption{Implementation of one-body losses in our magnetically-confined gas and data analysis. (a) Sketch of the MW coupling between the trapped and untrapped states. (b) A typical density profile $n_0(z)$, obtained by averaging over about 20 images. The data point to which it corresponds (5$^{\rm{th}}$ data point of data set 6) is the encircled data point in Fig.~\ref{fig.evolNat}(a). The shape expected for a quasicondensate is shown in green, with the peak density $n_p$ as the single fitting parameter. (c) Density-ripple power spectrum for the same data as (b), together with the theoretical fit yielding the temperature $T$. The red solid line in (b) is the density profile expected for a cloud at a temperature $T$ using the Yang-Yang equation of state and the local density approximation.} \label{fig.drawing_analysis} \end{figure} We analyze the atomic cloud using absorption images taken after a time of flight $t_f=\unit[8]{ms}$ following the sudden switch-off of the confining potential. We acquire an ensemble of about 20 images taken in the same experimental conditions. The fast transverse expansion of the cloud provides an effective instantaneous switch-off of the interactions with respect to the longitudinal motion, and the gas evolves as a non-interacting gas during $t_f$. 
Averaging over the data set, we extract the longitudinal density profile $n_0(z)$. The longitudinal velocity width, of the order of $k_B T /(\hbar n_p)$, where $n_p$ is the peak linear density~\cite{mora_extension_2003}, is small enough that the longitudinal density profile is not affected by the time of flight, and $n_0(z)$ is equal to the density profile of the cloud prior to the trap release. From $n_0(z)$, we extract the total atom number and the peak density $n_p$. The latter is obtained by fitting the central part of the measured density profile with the profile expected for a gas lying in the quasicondensate regime. To compute the quasicondensate density profile we rely on the local density approximation (LDA): the gas at position $z$ is described by a homogeneous gas at chemical potential $\mu(z)=\mu_p - m\omega_z^2z^2/2$, and the linear density is derived from $\mu(z)$ using the equation of state of a homogeneous quasicondensate. The latter, which relates the chemical potential $\mu$ to the linear density $n$, is $\mu=\hbar\omega_\perp(\sqrt{1+4n a_{3D}}-1)$, where $a_{3D}$ is the 3D scattering length~\cite{salasnich_effective_2002,fuchs_hydrodynamic_2003}. For $na_{3D}\ll 1$ it reduces to the pure 1D expression $\mu=g_{1D}n$, where the 1D coupling constant\footnote{In our case, $\omega_\perp\ll \hbar/(ma_{3D}^2)$ such that we are far from the confinement-induced resonance.} is $g_{1D}=2\hbar\omega_\perp a_{3D}$~\cite{olshanii_atomic_1998}. At larger $n a_{3D}$ it includes the effect of the transverse swelling of the wavefunction. The longitudinal quasicondensate profile extends over $2R$, where $R= \sqrt{2\mu_p/(m\omega_z^2)}$. Fig.~(\ref{fig.drawing_analysis})(b) shows a typical experimental density profile $n_0$, together with the theoretical quasicondensate profile. The good agreement between most of the cloud's shape and the calculated profile confirms that the cloud lies deep in the quasicondensate regime. 
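As an aside, the LDA construction described above is straightforward to sketch numerically. The following minimal Python illustration (our own, not the actual analysis code; the parameter values are assumptions chosen within the ranges quoted earlier) inverts the equation of state to obtain the density profile $n_0(z)$:

```python
import math

# Minimal sketch of the LDA quasicondensate profile (assumed parameters,
# chosen within the ranges quoted in the text; not the actual analysis code).
hbar = 1.054571817e-34            # J s
m = 1.44316060e-25                # 87Rb atomic mass, kg
a3D = 5.3e-9                      # 3D scattering length of 87Rb, m (approximate)
omega_perp = 2 * math.pi * 2.1e3  # transverse trap frequency, rad/s
omega_z = 2 * math.pi * 9.5       # longitudinal trap frequency, rad/s
mu_p = 2 * math.pi * hbar * 2.0e3 # peak chemical potential, J (assumed value)

def n_of_mu(mu):
    """Invert mu = hbar*omega_perp*(sqrt(1 + 4*n*a3D) - 1) for the density n."""
    if mu <= 0.0:
        return 0.0
    x = mu / (hbar * omega_perp) + 1.0
    return (x * x - 1.0) / (4.0 * a3D)

def n0(z):
    """LDA profile: local chemical potential mu(z) = mu_p - m*omega_z^2*z^2/2."""
    return n_of_mu(mu_p - 0.5 * m * omega_z**2 * z**2)

n_p = n0(0.0)                               # peak linear density, atoms/m
R = math.sqrt(2 * mu_p / (m * omega_z**2))  # half-length of the profile, m
```

With these numbers one obtains a peak linear density of order $10^8$~atoms/m and a half-length $R$ of a few tens of microns; the profile vanishes for $|z|\geq R$, and for $na_{3D}\ll 1$ the inversion reduces to $n\simeq\mu/g_{1D}$, as it should.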
This agreement also confirms that the loss rate is small enough that the cloud shape has time to follow adiabatically the decrease of the atom number. The temperature determination is realized with the well-established density-ripple thermometry method~\cite{dettmer_observation_2001,imambekov_density_2009,manz_two-point_2010,schemmer_monitoring_2018,schemmer_cooling_2018}. This thermometry uses the fact that thermally excited phase fluctuations initially present in the cloud transform into density fluctuations during $t_f$, such that single-shot images of the cloud present large random density ripples. From the set of acquired images, we extract the power spectrum of the density ripples. More precisely, we extract from each image $\rho_q=\int_{-R}^{R} dz\, \delta n(z)e^{iqz}$, where $\delta n(z)=n(z)-n_0(z)$, and we then compute the density-ripple power spectrum $\langle |\rho_q|^2\rangle$, from which we remove the expected flat contribution of optical shot noise. The power spectrum is then fitted with the spectrum expected for a quasicondensate of peak density $n_p$ confined in a harmonic longitudinal potential, calculated using the LDA~\cite{schemmer_monitoring_2018}, with the temperature as the fitting parameter\footnote{ We take into account the finite imaging resolution by multiplying the theoretical power spectrum by $e^{-k^2\sigma^2}$, where $\sigma$ is the rms width of the imaging point-spread function. Due to the finite depth of focus, $\sigma$ depends on the size of the cloud along the imaging axis, which itself depends on $\omega_\perp$. Thus $\sigma$ may depend on the data set, but for a given data set we use the same $\sigma$ for all evolution times. }. Fig.~\ref{fig.drawing_analysis}(c) shows an example of a power spectrum (corresponding to the encircled data point in Fig.~(\ref{fig.evolNat})(a)), together with the theoretical fit yielding the temperature $T$. 
This thermometry probes fluctuations whose wavelengths are much larger than the healing length $\xi=\hbar/\sqrt{m g n_p}$, such that the temperature corresponds to the temperature of the phononic collective modes. \section{Experimental results} We investigate the time evolution of the atomic cloud for 6 different data sets, which correspond to different transverse oscillation frequencies -- {\it i.e.} different 1D effective coupling constants --, different initial situations -- {\it i.e.} different atom numbers and temperatures -- and different MW powers -- {\it i.e.} different one-body loss rates. They are listed in Table~\ref{table.data}. \begin{table} \centering \begin{tabular}{|l|l|l|c|} \hline {\bf data-set number}&{\bf $\omega_\perp/(2\pi)$ (kHz)}& {\bf $\Gamma$ (s$^{-1}$)}&{\bf Symbol}\\ \hline \hline 1 & 1.5 & 3.8 &{\color{jaunePython} $\blacktriangleleft$} \\ \hline 2 & 1.5 & 1.6 & {\color{turquoisePython} $\bigstar$}\\ \hline 3 & 2.1 & 5.2 & {\color{bleuPython} \raisebox{-0.08cm}{{\huge $\bullet$}}} \\ \hline 4 & 3.1 & 4.9 & {\color{vertPython} $\blacksquare$} \\ \hline 5 & 3.1 & 2.5 & {\color{rougePython} $\blacktriangle$} \\ \hline 6 & 4.0 & 4.5 &{\color{violetPython} $\blacktriangleright$}\\ \hline \end{tabular} \caption{Data sets presented in this paper, with the associated symbol used in Figures 2 and 3.} \label{table.data} \end{table} We plot in Fig.~(\ref{fig.evolNat})(a) the time evolution of the total atom number for the different data sets. The exponential decrease of the atom number, shown by the good agreement with the exponential laws represented by straight lines in the semi-log plot, confirms that we realized a uniform one-body loss process. The loss rate varies by roughly a factor 2 between different data sets. As pointed out in the introduction, a relevant energy scale is $m c_p^2$, where $c_p$ is the sound velocity computed at the center of the cloud, which fulfills $mc_p^2=n\partial_n(\mu)|_{n=n_p}$. 
For pure 1D quasicondensates, $mc_p^2=g_{1D}n_p$, where $n_p$ is the peak density. In our data sets the linear densities can reach values which are not small compared to $1/a_{3D}$, and we use the more general expression $mc_p^2=n_pg_{1D}/\sqrt{1+4n_p a_{3D}}$. The evolution of the energy $mc_p^2$ for the data sets is shown in Fig.~(\ref{fig.evolNat})(b). The variation of $mc_p^2$ in time is as large as a factor 2.5. \begin{figure} \centerline{\includegraphics[height=6cm]{NvsGammat_final.pdf}\includegraphics[height=6cm]{mcp2vsGammat_final.pdf} } \caption{(a) Evolution of the total atom number for the different sets of data, shown in semi-log scale. The loss rate $\Gamma$ for each data set is deduced from an exponential fit, shown as a dashed line. (b) Evolution of the energy scale $mc_p^2$ for the same data sets as a function of $\Gamma t$. Both plots use the color codes of Table~\ref{table.data}. } \label{fig.evolNat} \end{figure} The time evolution of the ratio $y=k_B T/(mc_p^2)$ is shown in Fig.~(\ref{fig.evolTemp})(a) for all the data sets. The theory for 1D harmonically confined gases~\cite{bouchoule_cooling_2018} predicts that $y$ converges at long times towards the asymptotic value $y_{\infty}^{\rm{theo}}=0.75$, shown as a solid black line in Fig.~(\ref{fig.evolTemp})(a). The observed behavior is compatible with this prediction: the spread of the values of $y$ among different data sets decreases as $\Gamma t$ increases and, at long times, all data gather around $y_{\infty}^{\rm{theo}}=0.75$, regardless of the loss rate $\Gamma$ and of the transverse oscillation frequency. For the data sets 2, 3 and 6, $y$ deviates by no more than 20\% from $y_\infty^{theo}$ over the whole time evolution, while $mc_p^2$ decreases by a factor of up to 2.5. For all data sets $y$ is approximately stationary for times $t>0.7/\Gamma$, and we denote by $y_\infty$ the mean value of $y$ for times $t>0.7/\Gamma$. Fig.~(\ref{fig.evolTemp})(b) shows $y_\infty$, plotted versus the transverse oscillation frequency. 
The results are close to $y_\infty^{theo}$, with $|y_\infty - y_\infty^{theo}|/y_\infty^{theo} < 0.2$. The observed discrepancy between $y_\infty$ and $y_\infty^{theo}$ may be due, on the one hand, to our finite thermometry precision (the value $y_\infty$ is within one error bar of $y_\infty^{theo}$ for most data sets) and, on the other hand, to the fact that the criterion $\Gamma t > 0.7$ might be insufficient to ensure that the asymptotic value of $y$ has been attained. A quantitative experimental investigation of the time evolution of $y$ under the effect of losses is difficult with our data sets. The reason is that the initial conditions we produce are such that the maximal deviation between $y(t=0)$ and $y_\infty^{theo}$ is comparable to our thermometry resolution. We attribute this to the preparation scheme: for our experimental procedure, three-body losses during the evaporative cooling probably impose a value of $y$ close to 0.75~\cite{schemmer_cooling_2018}. On the theoretical side, for a given initial condition, the expected time evolution of $y$ can be computed using the dynamical equations derived in~\cite{bouchoule_cooling_2018}. For a pure 1D harmonically confined cloud, the equation reduces to $dy/d(\Gamma t)= -y/3 + 1/4$. To take into account the 3D effects due to the transverse swelling of the wave-function, we solved numerically the general equations given in~\cite{bouchoule_cooling_2018}. We show as dashed-dotted black lines in Fig.~(\ref{fig.evolTemp})(a) the expected time evolution for initial conditions corresponding to the $1^{\rm st}$ and the $5^{\rm th}$ data sets. The expected convergence of $y$ towards $y_\infty^{theo}$ is found to be very slow for data set 5. Here transverse swelling effects slow down the dynamics\footnote{Because of the transverse swelling effect at large density, for some initial parameters, the function $y(t)$ could even be non-monotonic. }. Experimentally, the convergence appears to be slightly faster. 
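For reference, the pure-1D evolution equation $dy/d(\Gamma t)=-y/3+1/4$ relaxes exponentially towards the fixed point $y_\infty^{theo}=3/4$ at rate $\Gamma/3$. A short numerical sketch (ours, not the analysis code) checks the closed-form solution against a forward-Euler integration:

```python
import math

def y_closed_form(s, y0):
    # Solution of dy/ds = -y/3 + 1/4 with s = Gamma*t: exponential
    # relaxation towards the fixed point y_inf = 3/4 at rate 1/3.
    return 0.75 + (y0 - 0.75) * math.exp(-s / 3.0)

def y_euler(s, y0, steps=200_000):
    # Forward-Euler integration of the same equation, as a cross-check.
    y, ds = y0, s / steps
    for _ in range(steps):
        y += ds * (-y / 3.0 + 0.25)
    return y
```

Whatever the initial ratio $y_0$, the solution approaches $0.75$ on the scale of a few $1/\Gamma$, consistent with the data gathering around $y_\infty^{theo}$ at long times.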
For initial situations corresponding to data set 1, on the other hand, transverse swelling effects are expected to speed up the dynamics. The data are consistent with this behavior. \begin{figure} \centerline{ \includegraphics[height=6cm]{yvsGammat_final.pdf}\includegraphics[height=6cm]{y_inf_final.pdf}} \caption{ (a) Time evolution of the ratio $y=k_B T/(mc_p^2)$ for the different data sets, shown as a function of $\Gamma t$. The solid horizontal line shows $y_\infty^{theo}=0.75$. The dashed-dotted lines are the computed expected time evolutions corresponding to the initial situations of data sets 1 and 5. (b) For each data set, when available, the mean value of $y$ for times larger than $0.7/\Gamma$. Error bars show the standard deviation among the data points that fulfill $\Gamma t > 0.7$.} \label{fig.evolTemp} \end{figure} \section{Conclusion} In this paper, we identify for the first time the asymptotic temperature of a 1D quasicondensate submitted to a one-body loss process: more precisely, we show that the ratio $k_B T/(mc_p^2)$ reaches an asymptotic value, close to the theoretical prediction of 0.75. In a previous work~\cite{rauer_cooling_2016}, which investigated the evolution of the temperature of a quasicondensate under the effect of losses, 1D quasicondensates were shown to reach lower ratios $k_B T /(mc_p^2)$, in disagreement with theoretical predictions. The difference between the work~\cite{rauer_cooling_2016} and the present work is twofold. First, in~\cite{rauer_cooling_2016} the out-coupling is realised with a monochromatic field, in which case, for chemical potentials of the order of the transverse trapping frequency, the homogeneity of the loss process is not guaranteed. Such an inhomogeneity makes the loss process sensitive to the energy of the atoms, a phenomenon not accounted for by the model. In this paper the use of a wide MW power spectrum ensures the homogeneity of the losses. 
Second, the data sets in~\cite{rauer_cooling_2016} for which ratios $y$ lower than expected are reported correspond to losses engineered via a radio-frequency field that couples magnetic states within the hyperfine level $F=2$: in contrast to what happens when using microwave outcoupling to $F=1$, the transfer of the trapped atoms, which are in the $|F=2,m_F=2\rangle$ state, to the untrapped states $|F=2,m_F'\leq 0 \rangle$ involves the intermediate state $|F=2,m_F=1\rangle$, which is held in the magnetic trap. Since both states contribute to the images and a priori host uncorrelated fluctuations, one expects a decrease of the density-ripple power spectrum and thus of the fitted temperature. This work leads to many open questions. First, the thermometry we use probes the collective modes which lie in the phononic regime\footnote{Their frequency is much smaller than $\mu/\hbar$.}, while theoretical predictions~\cite{johnson_long-lived_2017} indicate that collective modes of higher frequency reach, under the effect of losses, higher temperatures. As already pointed out in~\cite{johnson_long-lived_2017}, for clouds confined in a smoothly varying potential, information on higher-frequency collective modes may be retrieved from the wings of the cloud, namely the part of the density profile that extends beyond the size $R$ of the quasicondensate. Indeed, as losses occur, we observe a growth of the fraction of atoms present in the wings, and the density in the wings typically largely exceeds that expected for a cloud at thermal equilibrium at a temperature equal to the one extracted from the density-ripple thermometry, as is shown in Fig.~(\ref{fig.drawing_analysis}). This calls for further theoretical and experimental investigations. Second, the theoretical prediction that for phonons the ratio $k_B T/(mc^2)$ reaches an asymptotic value is {\it a priori} also valid in higher dimensions. 
It is an open question whether coupling to higher-frequency collective modes, an inefficient process in 1D~\cite{johnson_long-lived_2017}, prevents the phonon modes from attaining this asymptotic behavior. Finally, it would be interesting to extend the investigation of the effect of losses to regimes different from the (quasi-)condensate regime. In the case of 1D Bose gases with contact interactions, which are described by the Lieb-Liniger model, a description in terms of the evolution of the distribution of rapidities~\cite{lieb_exact_1963} would make it possible to generalize these studies to all possible states of the gas. \section{Acknowledgments} The authors thank Bernhard Rauer for interesting discussions and Marc Cheneau for his suggestion of using a microwave field to induce losses. M. S. gratefully acknowledges support by the Studienstiftung des deutschen Volkes. This work was supported by Région Île de France (DIM NanoK, Atocirc project). The authors thank Sophie Bouchoule of C2N (centre nanosciences et nanotechnologies, CNRS / UPSUD, Marcoussis, France) for the development and microfabrication of the atom chip. Alan Durnez and Abdelmounaim Harouri of C2N are acknowledged for their technical support. C2N laboratory is a member of RENATECH, the French national network of large facilities for micronanotechnology.
\subsubsection*{Acknowledgements.} We warmly thank the organizers of \emph{Women in Numbers Europe 4} for putting together this team. We also wish to thank Chris Leonardi and Andrew Sutherland for helpful conversations around the contents of \cite{Eprint} and \cite{Sutherland} respectively. The first author was supported by the European Union’s H2020 Programme under grant agreement number ERC-669891. The second author was supported by the Austrian Science Fund, Project P34808. The fifth author was supported by the Magnus Ehrnrooth grant 336005 and Academy of Finland grant 351271. The last author was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 286237555 -- TRR 195 and by the Italian program Rita Levi Montalcini for young researchers, Edition 2020. \section*{Introduction} Homomorphic encryption is a method of encrypting plaintext that allows users to compute directly with the ciphertext. This has many interesting applications, including being able to engage in cloud computing without giving up your data to the owner of the cloud. Scientifically, the premise is easy to describe: Suppose that the plaintext and the ciphertext space both have a ring structure, and that we encrypt plaintext via a map between these spaces. If this map is a ring homomorphism, then this describes a \emph{fully homomorphic encryption} scheme. Creating such a ring homomorphism that describes secure encryption (requiring, for example, that such a map should be efficiently computable and hard to invert) is, however, much harder than describing its properties. The closest the scientific community has come to constructing an example of fully homomorphic encryption is using maps based on (variants of) the \emph{Learning With Errors} (LWE) problem from lattice-based cryptography \cite{Regev, LPR, GSW, FHEW, TFHE}. 
However, known LWE-based constructions are not truly fully homomorphic: Decrypting a message that was encrypted using LWE relies on the `error' that was used in the encryption being small, and adding and multiplying encrypted messages together causes the error to grow. Once the error is too large, the data can no longer be decrypted, so methods such as bootstrapping need to be employed to correct this growth (see e.g.~\cite{FHEW}). These methods may lead to practical fully homomorphic encryption in the future, but more research is needed. In this paper, we explore an alternative approach for homomorphic encryption, introduced by Leonardi and Ruiz-Lopez in~\cite{Eprint}. Their construction relies on the \emph{Learning Homomorphisms with Noise} (LHN) problem introduced by Baumslag, Fazio, Nicolosi, Shpilrain, and Skeith in~\cite{Baumslag}: Roughly speaking, this is the problem of recovering a group homomorphism from the knowledge of the images of certain elements multiplied by noise. The focus of~\cite{Eprint} is on the difficulty of constructing post-quantum secure instantiations of their primitive, but we believe the construction is interesting even in a classical setting. A big advantage of the LHN approach over an LWE approach is that the noise, which plays the role of the errors in LWE-based homomorphic encryption, does not grow with repeated computation in the way that the errors grow in lattice-based constructions. As such, there is no limitation on the number of additions that can be computed on encrypted data. However, it is less clear how to extend this construction to a multiplicative homomorphism. In this respect, the LHN approach is akin in some sense to the Benaloh~\cite{Benaloh} or Paillier~\cite{Paillier} cryptosystems. 
In the nonabelian setting, Leonardi and Ruiz-Lopez's construction has some hope of being post-quantum secure, unlike the Benaloh or Paillier constructions, but as they already explored in their work, this is nontrivial to instantiate, and our work only strengthens this claim, as we show that even solvable groups may admit quantum attacks. $\,$\\ \noindent Our main contributions address \textbf{finite groups} and include: \begin{enumerate} \item Reducing the security of the abelian group instantiation of Leonardi and Ruiz-Lopez's public key homomorphic encryption scheme to the discrete logarithm problem in 2-groups (under certain plausible assumptions); this gives a polynomial-time classical attack if the 2-part of the relevant group is cyclic, and a practical classical attack if it is a product of a small number of cyclic groups. \item Highlighting an abelian group instantiation of Leonardi and Ruiz-Lopez's homomorphic encryption scheme for which there is no known practical classical attack, namely, the product of many cyclic 2-groups. This may be of interest to the community as a new example of unbounded additively homomorphic encryption. \item Highlighting assumptions that need to be made in order to apply any discrete-logarithm-derived attack, with a view to constructing (more) examples of groups on which there is no known practical attack on Leonardi and Ruiz-Lopez's homomorphic encryption scheme. \item A description of a quantum attack on an instantiation with solvable groups, under certain assumptions. \end{enumerate} The layout of this paper is as follows: In Section~\ref{sec:cryptosystem}, we recap the public key homomorphic encryption scheme proposed by Leonardi and Ruiz-Lopez in~\cite{Eprint}. In Section~\ref{sec:2}, we discuss some simple instantiations: The abelian case, the noiseless case, and we give some basic security requirements (including some recalled from~\cite{Eprint}). 
In Section~\ref{abelian-case} we describe our reduction from the abelian group instantiation of the Leonardi--Ruiz-Lopez primitive to the (extended) discrete logarithm problem, under certain assumptions. In Section~\ref{sectionConvert}, we describe some ways of instantiating the primitive with nonabelian groups to which our attack on abelian groups would also apply. In Section~\ref{sec:solvable}, we describe our quantum attack on instantiations with solvable groups, under certain assumptions on how such groups would be represented. In Section~\ref{future-work}, we outline our plans for future work. \section{Preliminaries}\label{sec:cryptosystem} In this section, we describe the public-key additive homomorphic encryption scheme of Leonardi and Ruiz-Lopez~\cite[Sec.\ 5.2]{Eprint}; we will refer to it throughout this work as \emph{Leonardi--Ruiz-Lopez encryption}. Fix three finitely generated groups $G, H, K$ and probability distributions $\xi$ on $G$ and $\chi$ on $H$. These data should be chosen in such a way that operations can be performed efficiently in the groups and we can sample from both distributions efficiently. A natural choice could be, for instance, to take $G,H,K$ finite and $\xi,\chi$ to be uniform distributions. The groups $G,H,K$ and the distributions $\xi,\chi$ are public. In the following sections we will mostly work with finite groups and we will always make it clear when this is the case. \medskip \noindent For the \textbf{key generation}, Alice \begin{itemize} \item chooses efficiently computable secret homomorphisms $\varphi \colon G \to H$ and $\psi \colon H \to K$ such that she can efficiently sample from $\ker(\psi)$ and such that the center $\mathrm{Z}(H)$ of $H$ is not contained in $\ker(\psi)$; \item chooses a natural number $m$; \item samples elements $g_1,\dots, g_m \in G$ via $\xi$ and secret elements $h_1,\dots,h_m \in \ker(\psi)$ via $\chi$; \item chooses an element $\tau \in \mathrm{Z}(H)\setminus \ker(\psi)$ of order $2$. 
\end{itemize} Alice computes the public key as the set \[ \{(g_1,\varphi(g_1)h_1), \dots, (g_m,\varphi(g_m)h_m), \tau \}. \] Note that, whereas the elements $g_1,\dots,g_m$ are public, both $\varphi$ and $h_1,\dots,h_m$ are private (as are also $\psi$ and $\ker(\psi)$). \medskip \noindent For \textbf{encrypting} a one-bit message $\beta \in \{0,1\}$, Bob chooses a natural number $\ell$, then samples a word $w=w_1\cdots w_{\ell}$ over the indices $ \{1,\dots, m\}$ and using Alice's public key, he computes \[ (g,h')=(g_{w_1}\cdots g_{w_{\ell}},\varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}} ). \] He then sends $(g,h)=(g,h'\tau^\beta)$ to Alice. \medskip \noindent For \textbf{decrypting} $(g,h)$, Alice computes $\nu=\psi(\varphi(g))^{-1}\cdot\psi(h) \in K$ and deduces that the message $\beta$ equals $0$ if $\nu$ equals $1_K$ (and else, $\beta$ equals $1$). \medskip To see that the decryption indeed produces the correct message $\beta$, recall that $h_1,\dots, h_m$ are contained in $\ker(\psi)$. Hence $\nu=\psi(\tau)^\beta$ and, since $\tau$ is not contained in $\ker(\psi)$, the element $\nu$ equals $1$ if and only if $\beta$ equals $0$. 
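To make the mechanics above concrete, the following sketch implements key generation, encryption, and decryption in a toy choice of groups of our own (not from~\cite{Eprint}, and with no security whatsoever: here $\psi$ is the Legendre symbol, which anyone can compute from the public modulus). We take $G=\Z/(p-1)$ written additively, $H=(\Z/p\Z)^{\times}$, $\varphi(a)=u^a$ for a secret $u$, $\ker(\psi)$ the subgroup of squares, and $\tau=-1$:

```python
import random

# Toy instantiation of Leonardi--Ruiz-Lopez encryption (our choice of
# groups, for illustration only): it is NOT secure, since here psi is
# the Legendre symbol, which anyone can compute from the public p.
p = 10007                                 # prime with p % 4 == 3
assert p % 4 == 3

# G = Z/(p-1) written additively, H = (Z/p)^*, K = {1, p-1}.
u = random.randrange(2, p - 1)            # secret element defining phi
phi = lambda a: pow(u, a, p)              # phi: G -> H, a |-> u^a
psi = lambda x: pow(x, (p - 1) // 2, p)   # psi: H -> K, kernel = squares
tau = p - 1                               # -1 mod p: order 2, psi(tau) != 1

m = 5
gs = [random.randrange(p - 1) for _ in range(m)]
hs = [pow(random.randrange(1, p), 2, p) for _ in range(m)]  # noise in ker(psi)
pub = [(gi, phi(gi) * hi % p) for gi, hi in zip(gs, hs)]    # public pairs

def encrypt(beta, ell=8):
    w = [random.randrange(m) for _ in range(ell)]   # Bob's secret word
    g = sum(pub[i][0] for i in w) % (p - 1)
    h = 1
    for i in w:
        h = h * pub[i][1] % p
    return g, h * pow(tau, beta, p) % p

def decrypt(g, h):
    # nu = psi(phi(g))^{-1} * psi(h); inverses are trivial in {1, p-1}
    nu = psi(phi(g)) * psi(h) % p
    return 0 if nu == 1 else 1

assert all(decrypt(*encrypt(b)) == b for b in (0, 1))
```

Correct decryption here is exactly the computation $\nu=\psi(\varphi(g))^{-1}\psi(h)=\psi(\tau)^\beta$ described above, since the noise elements are squares and are erased by $\psi$.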
For the convenience of the reader, we give a schematic summary of the data described above: \begin{center} \renewcommand{\arraystretch}{1.5}% \begin{tabular}{ |c|c|c| } \hline \ \ Public information \ \ & \ \ Alice's private information \ \ & \ \ Bob's private information \ \ \\ \hline $G, H, K, \xi, \chi$ & $\varphi:G\rightarrow H$ & \\ \ $(g_1,\varphi(g_1)h_1), \dots, (g_m,\varphi(g_m)h_m)$ \ & $\psi:H\rightarrow K$ & $\beta\in\{0,1\}$ \\ $\tau\in \mathrm{Z}(H)\setminus\ker(\psi)$ & $\ker(\psi)$& word $w$\\ $(g,h)=(g,h'\tau^\beta) \in G \times H$ & $h_1,\ldots,h_m\in\ker(\psi)$ & \\ \hline \end{tabular} \end{center} \subsection{Homomorphic properties}\label{sec:homomorphic} The primary selling point of Leonardi--Ruiz-Lopez encryption is that it is \emph{unbounded additive homomorphic}. We say that an encryption function $E$ from plaintext to ciphertext space is \emph{additive} if the plaintext space admits an additive operator $+$ and, given encryptions $E(\beta)$ and $E(\tilde{\beta})$ of messages $\beta$ and $\tilde{\beta}$ respectively, one can compute a valid encryption $E(\beta + \tilde{\beta})$ of $\beta + \tilde{\beta}$ (without the knowledge of the plaintext $\beta + \tilde{\beta}$). We say that $E$ is \emph{unbounded additive homomorphic} if such additions can be performed an unbounded number of times without introducing systematic decryption failures.\footnote{ The number of additions is bounded in, for example, LWE-based homomorphic encryption, where the error grows too large. } The reader may have observed that $\tau$ having order two is not necessary for successful decryption; this property is needed to make the encryption additive, as we now recall from~\cite{Eprint}.
Write the encryptions of $\beta$ and $\tilde{\beta}$ sampled from $\{0,1\}$ as \[(g,h'\tau^{\beta}) = (g_{w_1}\cdots g_{w_{\ell}},\varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}} \tau^\beta)\] and \[(\tilde{g},\tilde{h}'\tau^{\tilde{\beta}}) = ({g}_{\tilde{w}_1}\cdots {g}_{\tilde{w}_{\tilde{\ell}}}, \varphi({g}_{\tilde{w}_1}){h}_{\tilde{w}_1}\cdots \varphi({g}_{\tilde{w}_{\tilde{\ell}}}){h}_{\tilde{w}_{\tilde{\ell}}} \tau^{\tilde{\beta}})\] respectively. Then, as $\tau$ is central in $H$ and has order 2, we can construct a valid encryption of $\beta + \tilde{\beta}$ via the observation that \begin{align*} &\varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}} \tau^\beta {\varphi}({g}_{\tilde{w}_1})h_{\tilde{w}_1}\cdots {\varphi}({g}_{\tilde{w}_{\tilde{\ell}}}){h}_{\tilde{w}_{\tilde{\ell}}} \tau^{\tilde{\beta}} \\= &\,\varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}} {\varphi}({g}_{\tilde{w}_1}){h}_{\tilde{w}_1}\cdots {\varphi}({g}_{\tilde{w}_{\tilde{\ell}}}){h}_{\tilde{w}_{\tilde{\ell}}} \tau^{\beta + \tilde{\beta}}; \end{align*} this encryption is given by \[(g,h'\tau^\beta)(\tilde{g},\tilde{h}'\tau^{\tilde{\beta}})=(g\tilde{g},h'\tau^{\beta}\tilde{h'}\tau^{\tilde{\beta}})=(g\tilde{g},h'\tilde{h'}\tau^{\beta+\tilde{\beta}}).\] \begin{rmk} Note that, for Leonardi--Ruiz-Lopez encryption to be fully homomorphic, it would also need to be \emph{multiplicative}: That is, at the very least, given valid encryptions of $E(\beta)$ and $E(\tilde{\beta})$, we should be able to deduce a valid encryption of $\beta\tilde{\beta}$. It is not at all obvious if this is even possible. On an abstract level our encryption function from plaintext to ciphertext space maps \[E: \{0,1\} \rightarrow G\times H,\] where the domain can be naturally endowed with a ring structure using addition and multiplication mod 2, but there seems to be no natural extra operation on $G\times H$ that would allow us to deduce a valid encryption of $\beta\tilde{\beta}$. 
We stress that in the case of LWE-based homomorphic encryption, both plaintext and ciphertext spaces come equipped with a ring structure, so the equivalent of our function $E$ is generally taken to be a ring homomorphism. The map $E$ being a ring homomorphism is, however, not always strictly necessary to deduce a valid encryption of $\beta\tilde{\beta}$. If in future work we were to succeed in deducing a valid encryption of $\beta \tilde{\beta}$, we expect that this will only apply to a specific instantiation of Leonardi--Ruiz-Lopez encryption, where there is more structure to be exploited, rather than to the setting of abstract groups. \end{rmk} \subsection{Remarks on Leonardi--Ruiz-Lopez encryption} \noindent Some remarks on the construction above: \begin{enumerate}[label=$(\mathrm{R}\arabic*)$] \item The underlying hard problem of this encryption scheme is described as the $\textsf{LHN-PKE}$ problem, so named as it is based on the Learning Homomorphisms with Noise problem (LHN) but is adapted to this Public Key Encryption scheme (PKE). \begin{definition}\label{def-LHN-PKE} Let $G, H, K, \xi,$ and $\chi$ be as above.
We define the \emph{$\textsf{LHN-PKE}$} problem for $G, H, K, \xi,$ and $\chi$ to be: Given $G, H, K, \xi,$ and $\chi$, for any \begin{itemize} \item $\varphi$ sampled uniformly at random from $\Hom(G,H)$, \item $\psi$ sampled uniformly at random from the elements of $\Hom(H,K)$ whose kernel does not contain $\mathrm{Z}(H)$, \item $g_1,\ldots,g_m$ sampled from $G$ using $\xi$, \item $h_1,\ldots,h_m$ sampled from $\ker(\psi)$ using $\chi$, \item $\tau$ sampled from the order $2$ elements of $\mathrm{Z}(H) \setminus \ker(\psi)$ using $\chi$, \item $\beta$ sampled uniformly at random from $\{0,1\}$, \item small $\ell$ and word $w=w_1\cdots w_{\ell}$ sampled uniformly at random from $\{1,\dots,m\}^{\ell}$, \end{itemize} recover $\beta$ from the following information: \begin{itemize} \item $(g_1,\varphi(g_1)h_1), \ldots, (g_m,\varphi(g_m)h_m)$, \item $\tau$, \item $g=g_{w_1}\cdots g_{w_{\ell}}$, \item $h = h'\tau^\beta = \varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}} \tau^\beta$. \end{itemize} \end{definition} \item In order for the encryption and decryption to work, the assumptions that $\tau$ is central and that it has order $2$ are not necessary. The reason we work under these assumptions is, as explained in \cref{sec:homomorphic}, that in this case, the cryptosystem is unbounded additive homomorphic. \item\label{rmk:wlog-gen} Once Alice has fixed the elements $g_1,\dots,g_m$ and determined the public key, all computations inside $G$ actually take place inside the subgroup $\langle g_1,\dots, g_m \rangle$ that is generated by $g_1,\ldots,g_m$. So for cryptanalysis, we may and will assume that $G$ is generated by $g_1,\dots,g_m$, i.e.\ that $G=\langle g_1,\dots, g_m \rangle$. \item The work of~\cite{Eprint} was inspired by~\cite{Baumslag}, which introduces the Learning Homomorphisms with Noise problem in order to construct a symmetric primitive.
However, the noise accumulates in the construction of~\cite{Baumslag} in a manner akin to the error growth in LWE constructions. Leonardi and Ruiz-Lopez also introduce a symmetric primitive in~\cite{Eprint}, but we focus on the PKE construction in this work. \item The noise consists of the elements $h_1,\dots,h_m$ that are mixed into the product $h'$ in the second component $h$ of the ciphertext. These elements are chosen to be in the kernel of $\psi$ and therefore get erased during decryption. `Being contained in the kernel' of $\psi$ can thus be thought of as an equivalent of `the error being small' in the LWE-based encryption of~\cite{Regev} or `the noise being small' in LHN-based encryption of~\cite{Baumslag}. The strength of Leonardi--Ruiz-Lopez encryption is that the noise does not accumulate and will not lead to systematic decryption errors, since in the decryption process, we can erase the noise neatly by applying~$\psi$. \end{enumerate} \section{Simple instantiations and security}\label{sec:2} In this section, we describe some simple instantiations of Leonardi--Ruiz-Lopez encryption. The abelian case is a central focus of this paper as it is much simpler to describe than the general case; the description below is for the reader who wishes only to understand the abelian case. We also describe the noiseless case in order to highlight the role that the noise plays in the encryption. Finally we discuss the requirements on the setup parameters of Leonardi--Ruiz-Lopez encryption in order to achieve security against some naive classical attacks, concluding this section with a list of properties that the groups must have for any classically secure instantiation. 
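Before turning to instantiations, the additive combination rule recalled in \cref{sec:homomorphic} can be checked in a toy abelian example of our own choosing ($H=(\Z/p\Z)^{\times}$, $\tau=-1$; illustrative only and insecure): componentwise multiplication of two ciphertexts yields an encryption of the sum of the two bits modulo $2$.

```python
import random

# Sketch of the additive combination rule in a toy abelian instantiation
# (our choice: H = (Z/p)^*, tau = -1; purely illustrative and insecure).
p = 10007; assert p % 4 == 3
u = 3                                         # secret exponent: phi(a) = u^a
psi = lambda x: pow(x, (p - 1) // 2, p)       # Legendre symbol, ker = squares
tau = p - 1                                   # central, order 2, not in ker(psi)

def enc(beta):
    r = random.randrange(p - 1)               # Bob's randomness
    noise = pow(random.randrange(1, p), 2, p) # an element of ker(psi)
    return r, pow(u, r, p) * noise * pow(tau, beta, p) % p

def add(c1, c2):                              # the componentwise group law
    return (c1[0] + c2[0]) % (p - 1), c1[1] * c2[1] % p

def dec(c):
    g, h = c
    return 0 if psi(pow(u, g, p)) * psi(h) % p == 1 else 1

assert dec(add(enc(1), enc(1))) == 0          # tau^2 = 1: bits add mod 2
assert dec(add(enc(0), enc(1))) == 1
```

The last two assertions are exactly the identity $(g,h'\tau^{\beta})(\tilde{g},\tilde{h}'\tau^{\tilde{\beta}})=(g\tilde{g},h'\tilde{h'}\tau^{\beta+\tilde{\beta}})$ from \cref{sec:homomorphic}, with $\tau^2=1$ reducing the exponent modulo $2$.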
\subsection{The abelian case} If $H$ is abelian, we can rewrite $(g,h)= (g_{w_1}\cdots g_{w_{\ell}},\varphi(g_{w_1})h_{w_1}\cdots \varphi(g_{w_{\ell}})h_{w_{\ell}}\cdot\tau^\beta )$ as \[ (g,h)= (g_{w_1}\cdots g_{w_{\ell}},\varphi(g_{w_1})\cdots \varphi(g_{w_{\ell}})h_{w_1}\cdots h_{w_{\ell}}\cdot\tau^\beta )=(g,\varphi(g)h_{w_1}\cdots h_{w_{\ell}}\cdot\tau^\beta).\] If both $G$ and $H$ are abelian, it makes sense to switch to the following notation: Instead of choosing indices $w_1,\dots,w_{\ell}$, Bob just chooses non-negative integers $r_1,\dots,r_m$ and encrypts $\beta$ to \[ (g,h)= (g_1^{r_1}\cdots g_m^{r_m},\varphi(g_1^{r_1})\cdots \varphi(g_m^{r_m})h_1^{r_1}\cdots h_m^{r_m}\cdot\tau^\beta )=(g,\varphi(g)h_1^{r_1}\cdots h_m^{r_m}\cdot\tau^\beta).\] This system has been claimed to not be quantum secure in \cite[Section~8.2]{Eprint}, cf.\ also \cref{sec:LRL-cryptanalysis}, while we discuss security in the classical sense in \cref{sec:assumptions}. Moreover, in \cref{sectionConvert} we describe instantiations of the $\textsf{LHN-PKE}$ problem in which $G$ and $H$ are nonabelian but the security reduces to the case in which they are. \subsection{The noiseless case}\label{sec:noiseless} Let us assume that $h_1=h_2=\dots=h_m=1$. Then the public key consists of all pairs $(g_i,\varphi(g_i))$ together with $\tau$ and Bob would encrypt the message $\beta \in \{0,1\}$ to \[ (g,h)=(g_{w_1}\cdots g_{w_{\ell}},\varphi(g_{w_1})\cdots \varphi(g_{w_{\ell}})\tau^\beta)=(g,\varphi(g)\tau^\beta). \] Let Eve be an attacker who is aware of the fact that Alice decided to work in a noiseless setting. Then Eve knows all $g_i$'s as well as their images $\varphi(g_i)$ from the public key. If she can write $g$ as a product in $g_1,\dots,g_m$, then she can compute $\varphi(g)$. Knowing $\tau$ from the public key, Eve can then decrypt $(g,h)=(g,\varphi(g)\tau^\beta)$. 
Note that even if Eve did not use the same word $w_1\cdots w_{\ell}$ as Bob to write $g$ as a product in $g_1,\dots,g_m$, she would nonetheless obtain the correct value of $\varphi(g)$. Of course, finding such a word might still be a hard problem, even if $m=1$. For example, if $G$ is the multiplicative group of a finite field and $g_1$ is a generator, finding a word in $g_1$ defining $g$ is the same as solving the discrete logarithm problem, which is believed to be hard for classical computers (though there are quantum algorithms to solve it, see \cite{Shor_1, Shor_2}). Alice, on the other hand, will probably have a closed form describing $\varphi$ that does not require writing elements as products in $g_1,\dots,g_m$ when she applies $\varphi$ in the decryption process (and similarly for $\psi$). In the $m=1$ example above, she would choose $\varphi$ to take every element to a certain power. For attacking the cryptosystem in the general case, a possible strategy is to construct attacks that reduce to the noiseless case. We will come back to such attacks in \cref{sec:security}. \subsection{Security}\label{sec:security} In all that follows, let $\lambda$ be the security parameter.\footnote{ Typically, we want any computations undertaken by the user to have complexity that is polynomial in $\lambda$, and an attacker who attempts to decrypt by guessing any unknowns should only succeed with probability at most $2^{-\lambda}$.} \subsubsection{Many homomorphisms} First of all, note that an attacker who can guess both $\varphi$ and $\psi$ can decrypt in the same way Alice does.
Denote the set of all possible choices for $\psi$ by \[\operatorname{Hom}(H,K)^- = \{ \psi \in \operatorname{Hom}(H,K) : \mathrm{Z}(H) \not\subseteq \ker(\psi)\}.\] To avert brute force attacks, the groups $G,H,K$ should be chosen in such a way that $\operatorname{Hom}(G,H)$ and $\operatorname{Hom}(H,K)^-$ are of size at least $\Theta(2^{\lambda})$,\footnote{ We are using Bachmann-Landau notation for complexity, see for example Section 1.2.11.1 of~\cite{Knuth}. } and $\varphi$ and $\psi$ should be sampled uniformly at random from $\operatorname{Hom}(G,H)$ and $\operatorname{Hom}(H,K)^-$ respectively. This ensures that if an attacker guesses $\varphi$ she succeeds with probability at most $2^{-\lambda}$, and similarly for $\psi$. \subsubsection{Words} As we already saw in the noiseless case, there are links between the $\textsf{LHN-PKE}$ problem and the ability of an attacker to write $g$ as an expression in the generators $g_1,\ldots,g_m$. Assume for instance that Eve wants to decrypt the ciphertext $(g,h)$. She knows that $g$ is a product in $g_1,\dots,g_m$, and recall that $g_1,\dots,g_m$ are public. If she knows the exact expression of $g$ as a product in $g_1,\dots,g_m$ that Bob used in the encryption process, then she can compute $h'$ from the public key, erase it from $h=h'\tau^\beta$, and recover the message $\beta$. It is important to note that, unlike in the noiseless case, it is in general not enough to find \textit{any} expression of $g$ as a product in $g_1,\dots,g_m$, because such a product will in general not produce the correct term $h'$, as it leads to a different accumulation of the noise. That is, the attacker needs to recover the \emph{correct} word $w$, not just any expression of $g$ in $g_1,\ldots,g_m$.
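This point can be illustrated with a small computation in a toy abelian instantiation (parameters of our own choosing, illustrative only): words with the same multiset of indices produce the same $h'$, but two words with different multisets can multiply to the same $g$ while accumulating different noise.

```python
# Toy illustration (our parameters) of why Eve needs Bob's *exact* word:
# in the noisy setting, two words with the same product g can accumulate
# different noise, so the wrong word does not cancel h'.
p = 10007
u = 5                                          # secret exponent: phi(a) = u^a
gs = [2, 3]                                    # generators in G = Z/(p-1)
hs = [49, 121]                                 # noise: squares mod p
ells = [pow(u, g, p) * h % p for g, h in zip(gs, hs)]  # public 2nd components

def second_component(word):
    h = 1
    for i in word:
        h = h * ells[i] % p
    return h

# Words with the same multiset of indices give the same h' (H abelian)...
assert second_component([0, 0, 1]) == second_component([1, 0, 0])
# ...but words with different multisets can give the same g and yet a
# different h': here 2+2+2 = 3+3 = 6 in G, while the noise parts differ.
wA, wB = [0, 0, 0], [1, 1]
assert sum(gs[i] for i in wA) == sum(gs[i] for i in wB)
assert second_component(wA) != second_component(wB)
```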
In the cryptanalysis we carry out in \cref{abelian-case} for finite abelian $G$, $H$, and $K$, we give a reduction of $\textsf{LHN-PKE}$ to the extended discrete logarithm problem for finite $2$-groups; our reduction circumvents the issue of finding the correct word. \subsubsection{An attack on instances with few normal subgroups} The idea behind the following attack is to replace Alice's secret $\psi:H\rightarrow K$ with some new $\bar \psi:H\rightarrow L$ erasing the noise without erasing~$\tau^\beta$ (for instance $L$ could be a quotient of $H$, as described below). A similar attack was also described in Section 7.2 of \cite{Eprint}. Assume that Eve knows a normal subgroup $N$ of $H$ that contains all elements $h_1,\dots,h_m$ but does not contain $\tau$. She can then define $\bar \psi\colon H \to H/N$ as the natural projection and, by applying $\bar \psi$ to all second coordinates of the elements $(g_i,\varphi(g_i)h_i)$ in the public key and to the second coordinate of the encrypted message $(g,h)$, she can switch to the noiseless case; cf.\ \cref{sec:noiseless}. We deduce, in particular, that there should be at least $\Theta(2^{\lambda})$ normal subgroups in $H$, so that if an attacker guesses $\ker(\psi)$ she succeeds with probability at most $2^{-\lambda}$. \subsubsection{An attack on instances with weak normal subgroups} Now suppose that an attacker can find a normal subgroup $N$ of $H$ that contains $\varphi(g_i)h_i$ for all $i=1,\dots,m$ but does not contain $\tau$ (note that these elements are all public so it is easy to check these conditions). Then she can directly apply the projection $H\to H/N$ to the second coordinate in the encrypted message $(g,h)$ and can deduce that $\beta$ equals zero if and only if she obtained the neutral element in $H/N$.
To avoid such an attack, it seems that Alice should check, after the key generation process, whether the normal closure of $\langle \varphi(g_1)h_1,\dots,\varphi(g_m)h_m \rangle$ contains $\tau$, and if it does not, she should choose a different key. For cryptanalysis, we may thus assume that $H$ equals the normal closure of $\langle \varphi(g_1)h_1,\dots,\varphi(g_m)h_m \rangle$ (as otherwise, we can just work in this smaller group). In particular, if $H$ is abelian, we may assume $H=\varphi(G)\ker\psi$. \subsubsection{A summary of the discussed security assumptions}\label{sec:security-assumptions} We conclude this section with a list of necessary properties for security deduced from the list of naive attacks above: \begin{enumerate}[label=$(\mathrm{S}\arabic*)$] \item\label{it:S1} $\operatorname{Hom}(G,H)$ and $\operatorname{Hom}(H,K)^-$ are of size exponential in the security parameter; \item\label{it:S2} finding the precise word $w$ used to express $g$ as a product in the $g_i$'s in the encryption phase has complexity that is exponential in the security parameter; \item\label{it:S3} the number of normal subgroups in $H$ is exponential in the security parameter; \item\label{it:S4} the normal closure of $\langle \varphi(g_1)h_1,\dots,\varphi(g_m)h_m \rangle$ contains $\tau$. \end{enumerate} \section{Cryptanalysis in the finite abelian case}\label{abelian-case} In this section, we discuss the hardness of $\textsf{LHN-PKE}$ under the assumption that $G$ and $H$ are finite and that $H$ is abelian. Leonardi and Ruiz-Lopez~\cite{Eprint} dismissed the abelian instantiation due to an argument that there should exist a polynomial-time quantum algorithm for the $\textsf{LHN-PKE}$ problem; the reduction is more complex than is suggested in~\cite{Eprint}, but their statement is true, as we show in \cref{sec:LRL-cryptanalysis}.
Nevertheless, the unbounded homomorphic property of the proposed cryptosystem is sufficiently powerful that a classically secure construction would also be of great interest to the cryptographic community. We show that, if $H$ is abelian, under some mild assumptions that we introduce in \cref{sec:assumptions}, the $\textsf{LHN-PKE}$ problem for $G$ and $H$ can be reduced to the extended discrete logarithm problem (cf.\ \cref{def:eDLP}) in some specific abelian $2$-group. Assume that $G$ and $H$ are finite and that $H$ is abelian. Given the following public information \begin{enumerate} \item $\{(g_1,\varphi(g_1)h_1),\ldots,(g_m,\varphi(g_m)h_m),\tau\}$ and \item $(g,h)=(g,h'\tau^\beta) \in G \times H$ \end{enumerate} one can proceed as follows: \begin{itemize} \item In case $G$ is not abelian, the $\textsf{LHN-PKE}$ problem for $G,H,K,\xi,$ and $\chi$ is reduced to the $\textsf{LHN-PKE}$ problem for $\overline{G} = G/[G,G], H, K, \xi$, and $\chi$ following \cref{G_also_abelian}: Here $[G,G]$ denotes the commutator subgroup of $G$, so $\overline{G}$ is abelian. \item In case $G$ is also abelian and satisfies some additional assumptions presented in \cref{sec:assumptions}, the $\textsf{LHN-PKE}$ problem for $G,H,K,\xi,$ and $\chi$ is reduced to the eDLP problem in $G$, as defined in \cref{sec:assumptions}. \item In case $G$ is also abelian and satisfies some additional assumptions presented in \cref{sec:assumptions}, the $\textsf{LHN-PKE}$ problem for $G,H,K,\xi,$ and $\chi$ reduces to the $\textsf{LHN-PKE}$ problem for $G_2, H_2, K, \xi$, and $\chi$, where $G_2$ and $H_2$ are the Sylow 2-subgroups of $G$ and $H$, respectively. \end{itemize} To conclude, in \cref{sec:limitations} we discuss the genericity and limitations of the assumptions made in \cref{sec:assumptions}, as well as the impact of the reductions made. 
\subsection{A simplified setting for $\textsf{LHN-PKE}$ when $H$ is abelian} In this section we take $H$ to be abelian and we show that, for cryptanalysis, one can consider, instead of $G$, its abelianization $G/[G,G]$. \begin{lemma}\label{G_also_abelian} Assume $H$ is abelian. Then the $\textsf{LHN-PKE}$ problem for $G,H,K,\xi,$ and $\chi$ is at most as hard as the $\textsf{LHN-PKE}$ problem for $\overline{G} = G/[G,G], H, K, \xi$, and $\chi$. \end{lemma} \begin{proof} Since $H$ is abelian, the commutator subgroup $[G,G]$ of $G$ is contained in the kernel of $\varphi$. Define $\overline{G}=G/[G,G]$. Then $\varphi\colon G\to H$ induces a well-defined homomorphism $\overline{\varphi} \colon \overline{G} \to H$ and any ciphertext $(g,h)$ can be interpreted as the ciphertext $(\bar g, h)$ in the cryptosystem given by $\overline{\varphi} \colon \overline{G} \to H$, $\psi\colon H \to K$ as before and public key given by $(\overline{g_i},\bar \varphi(\overline{g_i})h_i)$ for $i=1,\dots,m$ together with the same $\tau$ as before. Indeed, if Bob used the word $w_1\cdots w_{\ell}$ to encrypt $(g,h)$, then the same word gives rise to the encryption $(\bar g, h)$ of the same message in the new cryptosystem. \qed \end{proof} \subsection{Cryptanalysis for abelian groups}\label{sec:assumptions} In \cref{sec:assumptions} we discuss the $\textsf{LHN-PKE}$ problem for $G,H,$ and $K$ finite and abelian. In each subsection, we explicitly mention under which assumptions from the following list we are working.
For a group $\Gamma$ and $\gamma_1,\ldots,\gamma_m\in\Gamma$, we are interested in the following properties: \begin{enumerate}[label=$(\mathrm{A}\arabic*)$] \item\label{it:A1} $\Gamma$ is abelian; \item\label{it:A2} the largest positive odd factor of the order $|\Gamma|$ of $\Gamma$ is known (or easily computable); \item\label{it:A3} $\gamma_1,\ldots,\gamma_m\in \Gamma$ are such that $\Gamma=\gen{\gamma_1}\oplus\ldots\oplus\gen{\gamma_m}$; \item\label{it:A4} The orders $|\gamma_1|,\ldots,|\gamma_m|$ of $\gamma_1,\ldots,\gamma_m$ are known (or easily computable). \end{enumerate} Under Assumptions \ref{it:A1}, \ref{it:A3}, and \ref{it:A4} the following problem is well-posed. \begin{definition}\label{def:eDLP} The \emph{extended discrete logarithm problem (eDLP)} for $\Gamma$ is the problem of determining, for each $x\in \Gamma$, the unique vector \[(\alpha_1,\ldots,\alpha_m)\in \Z/|\gamma_1|\Z\times \ldots\times\Z/|\gamma_m|\Z\] such that \[x=\gamma_1^{\alpha_1}\cdots \gamma_m^{\alpha_m}.\] \end{definition} \noindent Note that eDLP is just called \emph{discrete logarithm problem} (DLP) in \cite{Sutherland}. For a discussion of existing algorithms to solve it, we refer to \cref{sec:eDLP2gps}. \subsubsection{From $\textsf{LHN-PKE}$ to eDLP}\label{sec:attack2gps} \noindent In this section, we work under Assumptions \ref{it:A1}, \ref{it:A3}, and \ref{it:A4} of \cref{sec:assumptions}. More precisely, we assume that $G$ and $H$ are abelian, $G$ is finite and satisfies $G=\gen{g_1}\oplus\ldots\oplus\gen{g_m}$, and the orders of $g_1,\ldots,g_m$ are known. \begin{proposition}\label{lem:h_i<g_i} The $\textsf{LHN-PKE}$ problem for $G,H,K,\xi,$ and $\chi$ is at most as hard as the \emph{eDLP} problem in $G$. 
\end{proposition} \begin{proof} Let $r_1,\ldots,r_m\in\Z$ be the non-negative integers chosen by Bob to write $$g=g_1^{r_1}\cdots g_m^{r_m} \ \ \textup{ and } \ \ h=\varphi(g)h_1^{r_1}\cdots h_m^{r_m}\cdot\tau^\beta.$$ For each $i\in\{1,\ldots,m\}$, set $\ell_i=\varphi(g_i)h_i$ and, given that the order of $\varphi(g_i)$ divides $|g_i|$, compute \[ \ell_i^{|g_i|}=(\varphi(g_i)h_i)^{|g_i|}=h_i^{|g_i|}. \] As a consequence, the subgroup $M$ of $H$ that is generated by $Y=\{\ell_i^{|g_i|} : i=1,\ldots,m\}$ is contained in $\ker(\psi)$ and thus does not contain $\tau$. Since the orders of the $g_i$'s are known, the subgroup $M$ can be easily determined. We show that solvability of the eDLP in $G$ yields solvability of the $\textsf{LHN-PKE}$ problem in this context. To this end, solve the eDLP for $g$ in $G$ to obtain exponents $s_1,\ldots,s_m$ with $g=g_1^{s_1}\cdots g_m^{s_m}$; by the uniqueness in \cref{def:eDLP}, one has $s_i\equiv r_i \bmod |g_i|$ for every $i$. From the public information, one easily computes \begin{align*} X & = \prod_{i=1}^m(\ell_i^{s_i})^{-1}\cdot h=\prod_{i=1}^m\ell_i^{r_i-s_i}\cdot\tau^{\beta}=\prod_{i=1}^m\varphi(g_i)^{r_i-s_i}h_i^{r_i-s_i}\cdot \tau^{\beta}. \end{align*} Since $r_i-s_i$ is a multiple of $|g_i|$, it is straightforward to see that $X\in M$ if and only if $\beta=0$, i.e., working modulo the subgroup $M$, one can recover $\beta$ from the public information. \qed \end{proof} \subsubsection{From abelian groups to $2$-groups}\label{sec:reduce_2} In this section, we assume that \ref{it:A1} and \ref{it:A2} from \cref{sec:assumptions} hold, i.e.\ we assume that both $G$ and $H$ are finite abelian groups and that the odd parts of $|G|$ and $|H|$ are known. We let $G_2$ and $H_2$ denote the unique Sylow $2$-subgroups of $G$ and $H$, respectively. In particular, $\tau$ and $\tau^\beta$ belong to $H_2$. In the following, we show that in the case of abelian groups, the $\textsf{LHN-PKE}$ problem for the original data reduces to the $\textsf{LHN-PKE}$ problem for the 2-parts $G_2$ and $H_2$ (with the induced data).
\begin{theorem}\label{result_abelian} The $\textsf{LHN-PKE}$ problem for $G, H, K, \xi$, and $\chi$ is at most as hard as the $\textsf{LHN-PKE}$ problem for $G_2, H_2, K, \xi$, and $\chi$. \end{theorem} \begin{proof} Write the orders of $G$ and $H$ as \[ |G|=2^{n_G}q_G, \ |H|=2^{n_H}q_H \] where $q_G$ and $q_H$ are odd numbers. Define $q$ as the least common multiple of $q_G$ and $q_H$ or any other odd common multiple (such as $q_{G}q_{H}$ which might be easier to compute). Then for any elements $g \in G$ and $h \in H$, the orders of $g^q$ and $h^q$ are both powers of $2$, i.e.\ $g^q\in G_2$ and $h^q\in H_2$. Moreover, $G$ and $H$ being abelian, the assignment $x \rightarrow x^q$ defines homomorphisms $G\to G_2$ and $H\to H_2$. Equip $G_2$ and $H_2$ with the induced distributions $\xi$ and $\chi$ and write $\varphi_2\colon G_2\to H_2$ and $\psi_2\colon H_2\to K$ for the restrictions of $\varphi$ and $\psi$ to $G_2$ and $H_2$, respectively. Assume that $\beta$ can be recovered from $(G_2, H_2, K, \xi,\chi)$. We show that $\beta$ can be determined from $(G, H, K, \xi,\chi)$. Recall that the pair $(g,h)=(g_1^{r_1}\cdots g_m^{r_m},\varphi(g)h_1^{r_1}\cdots h_m^{r_m}\cdot\tau^\beta)=(g,h'\tau^\beta) \in G\times H$ is public. Raising both entries to their $q$-th power, one obtains \[ (g^q,h^q)=((g_1^q)^{r_1}\cdots (g_m^q)^{r_m},\varphi(g^q)\cdot(h_1^q)^{r_1}\cdots (h_m^q)^{r_m}\cdot(\tau^q)^\beta)=(g^q,(h')^q\tau^\beta)\in G_2\times H_2. \] From the last equation it is clear that the elements $g_1,\dots,g_m \in G$ and $h_1,\dots,h_m \in H$ are replaced with their $q$-th powers $g_1^q,\dots,g_m^q \in G_2$ and $h_1^q,\dots,h_m^q\in H_2$. Moreover, as $\tau=\tau^q$, it holds that $\tau \in H_2\setminus\ker(\psi)$ and so the pair $(\tau,\beta)$ is preserved. By assumption $\beta$ is determined from the data associated to the $2$-parts and so the proof is complete. 
\qed \end{proof} \subsection{Assumptions and reductions}\label{sec:limitations} Assume in this section that $G$ and $H$ are finite and that $H$ is abelian. Thanks to \ref{rmk:wlog-gen}, for cryptanalysis purposes, we can replace $G$ with $\tilde{G}=\gen{g_1,\ldots,g_m}$ and we do so. Moreover, in view of \cref{G_also_abelian}, the $\textsf{LHN-PKE}$ problem on the pair $(\tilde{G},H)$ is reduced to the $\textsf{LHN-PKE}$ problem on the pair $(\overline{G},H)$, where $\overline{G}$ denotes the abelianization of $\tilde{G}$. Assumption \ref{it:A1} holds for the latter pair; the elements $g_1,\ldots,g_m\in G$ are replaced with their images $\overline{g_1}, \ldots,\overline{g_m}$ in $\overline{G}$, and the homomorphism $\varphi:\tilde{G}\rightarrow H$ with the induced homomorphism $\overline{G}\rightarrow H$, which we identify with $\varphi$, for simplicity. Assuming now that \ref{it:A2} holds for $H$ and $\overline{G}$, \cref{result_abelian} allows us to reduce the $\textsf{LHN-PKE}$ problem on $(\overline{G},H)$ to the $\textsf{LHN-PKE}$ problem on $(\overline{G}_2,H_2)$, where $\overline{G}_2$ and $H_2$ denote the Sylow $2$-subgroups of $\overline{G}$ and $H$. Here the elements $\overline{g_1},\ldots,\overline{g_m}$ and $h_1,\ldots,h_m$ are replaced by \[\overline{g_1}^q,\ldots,\overline{g_m}^q \ \ \textup{ and } \ \ h_1^q,\ldots,h_m^q\] where $q$ denotes the least common multiple of the odd parts of $|\overline{G}|$ and $|H|$. Set $X_2=\gen{\overline{g_1}^q,\ldots,\overline{g_m}^q}$ and suppose, at last, that \ref{it:A3} and \ref{it:A4} hold for $X_2$, namely that \[X_2=\gen{\overline{g_1}^q}\oplus\ldots\oplus\gen{\overline{g_m}^q}\] and the sizes of the summands are known. Then, as a consequence of \cref{lem:h_i<g_i}, the $\textsf{LHN-PKE}$ problem for $(\overline{G}_2,H_2)$ is reduced to the eDLP problem in $\overline{G}_2$.
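The chain of reductions just summarised can be sketched end to end in toy additive groups of our own choosing, where the concluding eDLP step is trivial because $G$ comes with its standard basis (we take $G=\Z/4\times\Z/4$, $H=\Z/2\times\Z/8$, $\psi$ the projection onto the first coordinate, so $\ker(\psi)=\{0\}\times\Z/8$ and $\tau=(1,4)$):

```python
import random

# End-to-end sketch (toy sizes, our own parameters) of the attack from
# the proposition above: compute M = <|g_i| * ell_i>, solve the (here
# trivial) eDLP for g, strip off s_i * ell_i, and test membership in M.
GM, HM = (4, 4), (2, 8)                       # component moduli of G and H

def add(x, y, mods):                          # componentwise addition
    return tuple((a + b) % m for a, b, m in zip(x, y, mods))

def mul(k, x, mods):                          # k-fold sum of x
    return tuple(k * a % m for a, m in zip(x, mods))

gs = [(1, 0), (0, 1)]                         # independent, both of order 4
phis = [(1, 2), (0, 6)]                       # phi(g_1), phi(g_2): a valid hom
hs = [(0, 3), (0, 5)]                         # noise in ker(psi) = {0} x Z/8
tau = (1, 4)                                  # order 2, psi(tau) != 0
ells = [add(f, h, HM) for f, h in zip(phis, hs)]   # public second components

# Eve: the subgroup M generated by the |g_i| * ell_i = |g_i| * h_i lies
# in ker(psi) and misses tau (here both generators equal (0, 4)).
M = {(0, 0)}
for order, ell in zip((4, 4), ells):
    t = mul(order, ell, HM)
    M |= {mul(k, t, HM) for k in range(8)}

def attack(g, h):
    s = g                                     # eDLP: trivial in the standard basis
    X = h
    for s_i, ell in zip(s, ells):
        X = add(X, mul(-s_i % 8, ell, HM), HM)   # subtract s_i * ell_i
    return 0 if X in M else 1                 # X = (element of M) + beta * tau

# Simulate Bob and check that both messages are recovered.
for beta in (0, 1):
    r = [random.randrange(20), random.randrange(20)]
    g = add(mul(r[0], gs[0], GM), mul(r[1], gs[1], GM), GM)
    h = add(add(mul(r[0], ells[0], HM), mul(r[1], ells[1], HM), HM),
            mul(beta, tau, HM), HM)
    assert attack(g, h) == beta
```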
While assumptions~\ref{it:A1}--\ref{it:A4} are natural to make when constructing groups that both admit efficient computation and satisfy the security requirements \ref{it:S1}--\ref{it:S4} of \cref{sec:security-assumptions}, there exist examples of groups where these assumptions are not satisfied or may be at odds with our security requirements. In this section we discuss assumptions~\ref{it:A1}--\ref{it:A4} with a view towards constructing instantiations of Leonardi--Ruiz-Lopez encryption to which our classical attack does not apply, or at least is not polynomial-time. \subsubsection{The eDLP in finite abelian $p$-groups}\label{sec:eDLP2gps} Let $p$ be a prime number and let $G$ be a finite abelian $p$-group given as \[G=\gen{g_1}\oplus\ldots\oplus\gen{g_m},\] where the orders of the summands are known. Let $e\in\Z$ be such that $p^e$ is the exponent of $G$. Then, according to \cite[Cor.~1]{Sutherland}, the eDLP in $G$ can be solved using \begin{equation} \label{re} O\left( \frac{\log(e+1)}{\log\log(e+2)}\log|G|+\frac{\log_p|G|}{m}p^{m/2} \right) \end{equation} group operations on a classical computer. In particular, when $m$ is $O(\log\log |G|)$, the eDLP has complexity polynomial in $\log|G|$. When $p=2$, setting $n= \log_p|G|$, we deduce the following from the performance result in \cite[Table~1]{Sutherland}: \begin{itemize} \item When $m =1,2,4,8,$ the counts are dominated by the first term of \eqref{re}. This explains the initial cost decrease when $m$ increases for a fixed $n$. \item Using Shanks's algorithm, there is a possibility of improving the factor $n/m$ to $\sqrt{n/m}$, though this is not relevant for applications: these normally require that $n/m$ is close to $1$. \end{itemize} To the best of our knowledge, there is no existing work on the eDLP that beats~\cite{Sutherland}.
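For intuition only, the eDLP of \cref{def:eDLP} can be solved by brute force in tiny groups; the following sketch (our toy example in $(\Z/32\Z)^{\times}=\gen{3}\oplus\gen{31}$, with $|3|=8$ and $|31|=2$) simply exhausts all exponent vectors, whereas Sutherland's algorithm is the tool at cryptographic sizes.

```python
from itertools import product

# Brute-force eDLP solver for tiny abelian groups, only to make the
# definition concrete (toy example: (Z/32)^* = <3> (+) <31>).
n = 32
gens, orders = [3, 31], [8, 2]

def edlp(x):
    # Exhaust all exponent vectors (alpha_1, alpha_2); the cost is the
    # full group order, hopeless beyond toy sizes.
    for alphas in product(*(range(o) for o in orders)):
        y = 1
        for g, a in zip(gens, alphas):
            y = y * pow(g, a, n) % n
        if y == x % n:
            return alphas
    return None

assert edlp(3 * 3 * 31 % n) == (2, 1)   # 9 * 31 = 279 = 23 mod 32
```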
\subsubsection{On the security assumptions \ref{it:A2} and \ref{it:A4}} \label{RSAetc} In \cref{sec:attack2gps} above we described how and under which assumptions the $\textsf{LHN-PKE}$ problem for $G$, $H$, and $K$ can be reduced to solving the eDLP in $G$. The necessity of Assumptions \ref{it:A2} and \ref{it:A4} in particular comes from the fact that there are well-known examples of abelian groups, some of them already used successfully in existing cryptographic protocols, that do not satisfy them. One such type of group is the RSA group: these groups are given in the form $(\mathbb{Z}/N\mathbb{Z})^{\times}$, where $N = pq$ is hard to factor. In this case, Alice would know the factorisation and hence the order of the group, but an adversary should not be able to compute it. Another type of such groups, where even the creators of the cryptosystem might not know the group's order, is that of ideal class groups of imaginary quadratic fields. This is an interesting category of groups for cryptography since it allows one to work in a \emph{trustless setup}. In other words, we do not need a trusted third party to generate groups of secure order, in contrast to cryptosystems that employ RSA groups for example, where a trusted third party needs to generate a secure, i.e.\ hard-to-factor, modulus $N \in \mathbb{N}$ for the groups $(\mathbb{Z}/N\mathbb{Z})^{\times}$. Despite the fact that neither the structure nor the order of the ideal class group is known, the group operation is efficient and the elements of the group have a compact representation, via reduced binary quadratic forms. An excellent reference for trustless unknown-order groups is the paper by Dobson, Galbraith and Smith \cite{DGS}; they also take into account Sutherland's algorithm \cite{SuthClass} and they propose new security parameters for cryptosystems that employ ideal class groups.
In the same paper the authors also discuss other groups of unknown order that can be used, namely genus-$3$ Jacobians of hyperelliptic curves, initially introduced by Brent \cite{Brent}. Even though these groups appear to have some computational advantages when compared to ideal class groups, these advantages exist only in theory for now, since these genus-$3$ Jacobians have not yet been implemented. \subsubsection{Regarding Assumption \ref{it:A3}} In this section, we discuss Assumption \ref{it:A3}. For the sake of simplicity and in view of \cref{sec:reduce_2}, we restrict to $2$-groups, but everything can be said similarly for arbitrary finite abelian groups using \cref{result_abelian}. The case where Assumption \ref{it:A3} holds for $\gen{g_1,\ldots,g_m}$, i.e., $g_1,\dots,g_m$ are independent and satisfy $\langle g_1,\dots,g_m \rangle =\gen{g_1}\oplus\ldots\oplus\gen{g_m}$, seems to be the key case for the following reasons. First of all, when sampling $g_1,\dots,g_m$ from $G$ uniformly, it is very likely that these are independent, at least if $m$ is small in comparison with the number of cyclic factors of $G$. For example, if $G=C_2^\lambda$ is a direct product of $\lambda$ cyclic groups of order $2$, then the probability of sampling $g_1,\dots,g_m$ that are independent is $$ \frac{(2^\lambda-1)(2^\lambda-2)(2^\lambda-2^2)\cdots(2^\lambda-2^{m-1})}{2^{m\lambda}},$$ which is very close to $1$ if $m\ll \lambda$. In addition, the case where $g_1,\dots,g_m$ are independent can be seen as the generic case, and we expect that an attacker could use similar strategies as developed in \cref{sec:assumptions} to design an attack for the dependent case. Indeed, if $g_1,\dots,g_m$ are dependent, the attacker will obtain more information from the public key than in the independent case. \\ There is also another strategy for an attack if \ref{it:A3} does not hold for $G$ and the number of cyclic factors in $H$ is small. More precisely, assume that $|\Hom(H,\F_2)|$ is sub-exponential.
If Eve can write any element $g \in G$ as a product in $g_1,\dots,g_m$ then she can recover the secret message $\beta$ from the ciphertext $(g,h)$ as follows. For every maximal subgroup $M$ of $H$ not containing $\tau$ (by assumption, there are only sub-exponentially many of these) convert $h$ into an element $\tilde h$ in $H/M$. Decrypt $(g,\tilde h) \in G\times H/M$ as if the noise was erased completely in $H/M$ (as explained in Section \ref{sec:noiseless}) and check on a number of self-encrypted messages whether this provides a correct decryption function. Since $\ker(\psi)\neq H$, there will always be a maximal subgroup $M$ of $H$ that contains $h_1,\dots,h_m$ but does not contain $\tau$, so eventually this procedure will indeed provide a decryption function. \subsection{Comparison with the quantum attack}\label{sec:LRL-cryptanalysis} Throughout this section, assume that \ref{it:A3} holds for the abelian group $G$. Even though we have only performed classical cryptanalysis for finite groups so far, we show in the following two subsections how one can perform quantum attacks when $G$ is either torsion-free or finite. The mixed case can be considered as a combination of the two cases: first dealing with the free part of the group and then recovering $\beta$ as explained in the finite case. \subsubsection{The torsion-free case}\label{subs:TorsionFree} Assume, as in \cite[Sec.\ 8.2]{Eprint}, that $G=\gen{g_1}\oplus\ldots\oplus\gen{g_m}$ is isomorphic to $\Z^{m}$, i.e.\ the orders $|g_i|$ are all infinite. We briefly recall the discussion from \cite[Sec.\ 8.2]{Eprint}. To this end, let $f:\Z^{m+1}\rightarrow G$ be defined by \[ (a_1,\ldots,a_{m+1}) \longmapsto g^{a_{m+1}}g_1^{a_1}\cdots g_m^{a_m}. \] Then the kernel of $f$ is equal to $\gen{(r_1,\ldots,r_m,1)}$ and it can be determined, using Shor's algorithm~\cite{Shor_2}, in quantum polynomial time in $m$; cf.\ \cite{ImIv22,Wat01}.
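The mechanics of this kernel-based exponent recovery (both here and in the torsion case below) can be emulated classically on a toy example, with brute force standing in for the quantum subroutine. The group $\mathbb{Z}/8\times\mathbb{Z}/4$, the choice $m=2$, and the secret exponents are illustrative choices, not parameters from the scheme:

```python
# Toy emulation of the kernel computation: G = Z/8 x Z/4 (additive),
# m = 2, secret exponents r = (3, 2), so g = 3*g1 + 2*g2. Brute force
# replaces the quantum kernel-finding step; all values are illustrative.
orders = (8, 4)
r = (3, 2)

def f(a1, a2, a3):
    # f(a1, a2, a3) = a3*g + a1*g1 + a2*g2, written additively
    return ((a3 * r[0] + a1) % orders[0], (a3 * r[1] + a2) % orders[1])

# find a kernel element with last coordinate 1 by exhaustive search
s1, s2, s3 = next((a1, a2, 1)
                  for a1 in range(orders[0])
                  for a2 in range(orders[1])
                  if f(a1, a2, 1) == (0, 0))
# recover the exponents via r_i = -s_i * s_{m+1} mod |g_i|
recovered = ((-s1 * s3) % orders[0], (-s2 * s3) % orders[1])
assert recovered == r
```

On a quantum computer the exhaustive search is replaced by Shor-type period finding, which is what makes the attack efficient for cryptographic group sizes.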
Once $(r_1,\ldots,r_m,1)$ is known, it is easy to recover $\beta$ from the encrypted message $(g,h)$ and the public information. \subsubsection{The torsion case} Assume in this section that $G$ is finite and that \ref{it:A4} holds. Let, moreover, $f:\Z^{m+1}\rightarrow G$ be defined by \[ (a_1,\ldots,a_{m+1}) \longmapsto g^{a_{m+1}}g_1^{a_1}\cdots g_m^{a_m}. \] Then a set of generators of $\ker(f)$ can be determined in time polynomial in $\log|G|$ on a quantum computer \cite{Kitaev,Shor_2,Simon}; see also \cite{EHK04}. Note that $\ker(f)$ contains $|g_1|\Z\times \ldots\times |g_m|\Z\times |g|\Z$. Let now $(s_1,\ldots,s_{m+1})$ be one of the generators found. Then $g=g_1^{-s_1s_{m+1}}\cdots g_m^{-s_ms_{m+1}}$ and so it follows that \[ g_1^{r_1+s_1s_{m+1}}\cdots g_m^{r_m+s_ms_{m+1}}=1. \] This is the same as saying that, for each $1\leq i\leq m$, one has $$r_i\equiv -s_is_{m+1}\bmod |g_i|. $$ Modding out $H$ by the subgroup generated by all $h_i^{|g_i|}$ one can recover $\beta$ and thus solve the $\textsf{LHN-PKE}$ problem; cf.\ \cref{lem:h_i<g_i}. \section{An attack that reduces to the abelian case}\label{sectionConvert} A general strategy for an attacker to solve the $\textsf{LHN-PKE}$ problem could be to convert the $H$-part of the ciphertext as follows. Assume that Eve has access to the ciphertext \[(g,h) = (g_{w_1}\cdots g_{w_\ell}, \varphi(g_{w_1})h_{w_1}\cdots\varphi(g_{w_{\ell}})h_{w_\ell}\tau^\beta) \in G \times H\] encrypted by Bob using Alice's public key \[\{(g_1,\varphi(g_1)h_1),\ldots,(g_m,\varphi(g_m)h_m),\tau\}.\]Eve can then choose a homomorphism $\vartheta\colon H \to \overline{H}$ for some group $\overline{H}$ such that $\vartheta(\tau)\neq 1$ and compute $\vartheta(h)$. 
Her new pair $(g,\overline{h})=(g,\vartheta(h))$ is then of the form \[ (g_{w_1}\cdot \dotsc \cdot g_{w_{\ell}},\overline{\varphi}(g_{w_1})\overline{h}_{w_1}\cdot \dotsc \cdot \overline{\varphi}(g_{w_{\ell}})\overline{h}_{w_{\ell}} \overline{\tau}^\beta) \] where, for all $i$, we set $\overline{h}_i=\vartheta(h_i)$ and write $\overline{\varphi}=\vartheta\circ\varphi$ and $\overline{\tau}=\vartheta(\tau)$. Since $\overline{\tau}\neq 1$, this pair still contains the information on $\beta$ and, if $\vartheta$ is chosen cleverly, it might be much simpler to deduce $\beta$ from $(g,\overline{h})$. Note that in general, we cannot define a suitable counterpart $\overline{\psi}$ of $\psi$ here, so the information that $\tau$ is not contained in the kernel of $\psi$ cannot be directly converted into a statement on $\overline{\tau}$ and has to be considered individually (if necessary). \medskip A special case of this strategy is to reduce, if possible, to an abelian group $\overline{H}$, i.e., the goal is to eventually apply the attack that we describe in Section~\ref{abelian-case} even when $G$ and $H$ are nonabelian. Suppose that, given $\tau$ and $H$, Eve is able to find an abelian group $\overline{H}$ and an efficiently computable homomorphism $\vartheta: H \rightarrow \overline{H}$ such that $\vartheta(\tau) \neq 1$ and such that $\vartheta(\tau)$ is not contained in the subgroup generated by certain powers of $\vartheta(\varphi(g_i)h_i)$. This condition can be checked using the public key and we will specify below which powers are sufficient. Let $\overline{G}=G/[G,G]$ be the abelianization of $G$ and $\bar \varphi \colon \overline{G} \to \overline{H}$ be the homomorphism obtained from $\vartheta\circ \varphi \colon G\to \overline{H}$ by reducing modulo $[G,G]$. For all $i$, define $\bar h_i=\vartheta(h_i)$, set $\bar \tau=\vartheta(\tau)$ and, for all $g$ in $G$, let $\bar g \in \overline{G}$ be the image of $g$ in $\overline{G}$.
Eve first replaces the encrypted message by \[(\bar g, \vartheta(h))=(\bar g_{w_1}\cdots \bar g_{w_\ell}, \bar\varphi(\bar g_{w_1})\bar h_{w_1}\cdots\bar\varphi(\bar g_{w_{\ell}})\bar h_{w_\ell}\bar \tau^\beta) \in \overline{G} \times \overline{H}\] in order to work inside the abelian groups $\overline{G}$ and $\overline{H}$. Now she proceeds in a similar way as in the proof of \cref{result_abelian}: First, she computes the group orders of $\overline{G}$ and $\overline{H}$ and finds the least common multiple $q$ of their odd parts. Then she converts the tuple $(\bar g, \vartheta(h))$ above into a tuple with entries inside the $2$-Sylow subgroups $\overline{G}_2$, $\overline{H}_2$ of $\overline{G}$ and $\overline{H}$ by taking both entries to their $q$-th powers: \[(\bar g^q, \vartheta(h)^q)=(\bar g_{w_1}^q\cdots \bar g_{w_\ell}^q, \bar\varphi(\bar g_{w_1}^q)\bar h_{w_1}^q\cdots\bar\varphi(\bar g_{w_{\ell}}^q)\bar h_{w_\ell}^q\bar \tau^\beta) \in \overline{G}_2 \times \overline{H}_2.\] Finally, Eve can proceed in a similar way as in the proof of \cref{lem:h_i<g_i}: For all $i$ define $\bar l_i=(\bar\varphi(\bar g_i)\bar h_i)^q=\vartheta(\varphi(g_i)h_i)^q$ and let $\beta_i$ be the order of $\bar\varphi(\bar g_i)$. Eve can compute the elements $\bar l_i$ and the numbers $\beta_i$ using the public key. Then $(\bar l_i)^{\beta_i}=(\bar h_i^q)^{\beta_i}$ can also be computed from the public key. If $\bar \tau$ is not contained in the subgroup $M$ generated by $(\bar l_1)^{\beta_1},\dots,(\bar l_m)^{\beta_m}$, then working inside $\overline{H} /M$ can reveal the value of $\beta$ as in the proof of \cref{lem:h_i<g_i} under similar assumptions as stated there. \begin{rmk}\label{rmk:attack-ab} Observe that the above attack applies to any finite group $H$ if $\tau \not\in [H,H]$ and the image of $\tau$ in $H/[H,H]$ is not contained in the subgroup generated by the images of certain powers of $\varphi(g_i)h_i$.
In particular, it is advisable (but likely not sufficient) to sample $\tau$ from~$[H,H]$. \end{rmk} \section{Normal forms and the solvable case}\label{sec:solvable} In this section we consider the case where the platform groups are finite and solvable and give evidence of why, given the efficiency constraints attached to the system, the groups should not be expected to provide secure postquantum cryptosystems (in analogy to \cref{sec:LRL-cryptanalysis}). For the background on finite solvable groups we refer to the very friendly \cite[Ch.\ 3]{Isaacs}. \begin{definition} For a finite group $\Gamma$, the \emph{derived series} of $\Gamma$ is the series $(\Gamma^{(i)})_{i\geq 1}$ defined inductively by \[\Gamma^{(1)}=\Gamma \ \ \textup{ and } \ \ \Gamma^{(i+1)}=[\Gamma^{(i)},\Gamma^{(i)}].\] If for some index $m$ the group $\Gamma^{(m)}$ is trivial, then $\Gamma$ is said to be \emph{solvable}. \end{definition} In a finite solvable group $\Gamma$ each quotient $\Gamma^{(i)}/\Gamma^{(i+1)}$ is abelian. Moreover, only finitely many such quotients are nontrivial and, if one is trivial, all subsequent ones are, too. Until the end of this section, assume that $H$ is solvable. Then, for cryptanalysis purposes and in analogy with \cref{G_also_abelian}, we assume without loss of generality that $G$ is also solvable. We let $s$ be the \emph{derived length} of $H$, i.e.\ $s$ is such that $H^{(s+1)}=1$ but $H^{(s)}\neq 1$. Then, without loss of generality, we assume that $G^{(s+1)}=1$. We remark that, if $s=1$, then $G$ and $H$ are abelian. \begin{rmk}[Efficient communication and computation]\label{rmk:efficiency} Alice and Bob, as part of their message exchange, need to be able to communicate elements and perform operations in the groups efficiently. An often favourable approach (also proposed in \cite[\S~4.2]{Baumslag}) is that of using \emph{normal forms} of elements with respect to a \emph{polycyclic presentation}, cf.\ \cite[Ch.~2]{EickHab}. 
For instance, when working with $G$ and $H$ abelian, \ref{it:A3} and \ref{it:A4} holding for $G$ is almost the same as saying that $G$ is given by a polycyclic presentation and the expression $g=g_1^{r_1}\cdots g_m^{r_m}$ is the normal form of $g$ with respect to this presentation. The word ``almost'' in the previous sentence is there to stress that the $r_i$'s are determined only by their class modulo $|g_i|$ and need not lie in the range $\{0,\ldots,|g_i|-1\}$, which is what normal forms require (see below). \end{rmk} We briefly explain here what it means for an element $g$ of the solvable group $G$ to be communicated in a \emph{normal form} with respect to a polycyclic presentation respecting the derived filtration. To do so, for each $i\in\{1,\ldots,s\}$: \begin{enumerate}[label=$(\alph*)$] \item let $m_i$ denote the minimum number of generators of $G^{(i)}/G^{(i+1)}$, \item let $g_{i1},\ldots, g_{im_i}$ be elements of $G^{(i)}$ such that \[ G^{(i)}/G^{(i+1)}=\gen{g_{i1}G^{(i+1)}}\oplus\cdots\oplus\gen{g_{im_i}G^{(i+1)}}, \] \item for each $j\in\{1,\ldots,m_i\}$ let $o_{ij}$ denote the order of $g_{ij}$ modulo $G^{(i+1)}$. \end{enumerate} Then any element $g\in G$ can be uniquely represented by a vector $$\delta=(\delta_{11},\ldots,\delta_{1m_1},\delta_{21},\ldots,\delta_{2m_2},\ldots,\delta_{s1},\ldots,\delta_{sm_s})$$ of integers $0\leq \delta_{ij}<o_{ij}$ of length $n=m_1+\ldots+m_s$ such that \[ g=\prod_{i=1}^s\prod_{j=1}^{m_i}g_{ij}^{\delta_{ij}}, \] which is precisely the normal form of $g$ with respect to the chosen generators $\{g_{ij}\}$. Note that the data $(a)$--$(c)$ mentioned above should be public for Alice and Bob to be able to share the elements with each other. If the chosen generators fit into a \emph{polycyclic sequence}, then the group operation is performed through the \emph{collection process} \cite[\S~2.2]{EickHab}.
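The distinction between an arbitrary exponent vector and a normal form can be seen already in a toy abelian group; the orders $(4,6)$ below are an arbitrary illustrative choice:

```python
# Exponent vectors that differ by multiples of the orders name the same
# group element; only the reduced vector is the normal form. Toy abelian
# group Z/4 x Z/6 (additive), chosen purely for illustration.
orders = (4, 6)

def element(exps):
    # reduce an exponent vector to the group element it represents
    return tuple(e % o for e, o in zip(exps, orders))

assert element((7, 11)) == element((3, 5))   # two exponent vectors, one element
normal_form = element((7, 11))
assert normal_form == (3, 5)
assert all(0 <= d < o for d, o in zip(normal_form, orders))
```

This is exactly the point of the remark above: the exponent vector $(7,11)$ is a valid expression of the element, but only $(3,5)$ is its normal form.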
It should be mentioned that, though polycyclic presentations generally yield good practical performance, it is, to the best of our knowledge, not clear whether multiplication in these presentations can always be performed in polynomial time \cite{LGSo98}. Note that the $H$-part $h'$ of the ciphertext likewise depends on the vector $\delta$: \[ h'=\prod_{i=1}^s\prod_{j=1}^{m_i}(\varphi(g_{ij})h_{ij})^{\delta_{ij}}. \] Shor's algorithm, applied on each level $G^{(i)}/G^{(i+1)}$, is polynomial in the log of the size of this quotient. Moreover, Shor's algorithm really does determine the vector $\delta$ of $g$ because of the condition $0\leq \delta_{ij}<o_{ij}$. In particular, Eve can recover the vector $\delta$ of $g$ and use it to compute \[ \tau^{\beta}=\left(\prod_{i=1}^s\prod_{j=1}^{m_i}(\varphi(g_{ij})h_{ij})^{\delta_{ij}}\right)^{-1}\cdot h. \] The complexity of this algorithm on each quotient $G^{(i)}/G^{(i+1)}$ is polynomial in $\log|G^{(i)}/G^{(i+1)}|$, which makes the overall complexity polynomial in $\log |G|$. There is obviously no separation in complexity between the algorithm being run by the adversary and the one being run by the user. This implies that for the system to be secure, we would need to have $|G| > 2^{2^{\lambda}}$, making it hard to say if it would even be possible to represent elements of $G$ in a computer. \begin{rmk} Given that the quotients of consecutive elements of the derived series are abelian, it is natural to ask whether the classical attack we designed for abelian groups could be generalised to an attack in the solvable context. It seems, however, that the nonabelian solvable case is substantially different from the abelian one.
Among the limitations are: \begin{itemize} \item $\tau$ could for instance belong to $H^{(s)}$ while $g_1,\ldots, g_m$ will typically live in $H^{(1)}\setminus H^{(2)}$ (in the general setting we are indeed not necessarily working with normal forms); \item without knowing $\varphi$, it is not at all clear at which depths in the derived filtration $\varphi(g_1),\ldots,\varphi(g_m)$ are to be found in $H$ (as publicly given only with noise); \item dealing with quotients is more delicate as one always has to consider normal closures of subgroups. \end{itemize} \end{rmk} \section{Future work}\label{future-work} In future work, we plan to consider several types of finite groups $G$, $H$, and $K$ for instantiating Leonardi--Ruiz-Lopez encryption and explore whether we can either construct attacks for the corresponding cryptosystems or prove security results. A first candidate would be the group $C_2^\lambda$ for all groups $G,H,K$. As none of the classical attacks presented in this paper apply in this case, Leonardi--Ruiz-Lopez encryption might prove to be classically secure for this choice. Other abelian candidates are the RSA groups and ideal class groups mentioned in Section \ref{RSAetc}. As a first nonabelian example, we plan to work with certain $p$-groups and use strategies that allow us to circumvent the attacks presented in Section \ref{sec:solvable}. In particular, it would be beneficial to work with presentations of groups that are not based on normal forms and yet allow efficient computation. The advantage of working with nonabelian groups is that it may be possible to construct a post-quantum additive homomorphic cryptosystem.
\section{Preliminaries}\label{prelim} Throughout this paper, we use the shorthand notation $[d] = \{1,\ldots,d\}$. We write \begin{eqnarray*} \begin{aligned} H(\mathcal{B}_t,\ket{\phi}) = - \sum_{k = 1}^{d}|\inp{\phi}{b_k^t}|^2 \log |\inp{\phi}{b_k^t}|^2, \end{aligned} \end{eqnarray*} for the Shannon entropy~\cite{shannon:info} arising from measuring the pure state $\ket{\phi}$ in basis $\mathcal{B}_t = \{\ket{b_1^t},\ldots,\ket{b_d^t}\}$. In general, we will use $\ket{b_k^t}$ with $k \in [d]$ to denote the $k$-th element of a basis $\mathcal{B}_t$ indexed by $t$. We also briefly refer to the R\'enyi entropy of order 2 (collision entropy) of measuring $\ket{\phi}$ in basis $\mathcal{B}_t$ given by $H_2(\mathcal{B}_t,\ket{\phi}) = - \log \sum_{k=1}^d |\inp{\phi}{b_k^t}|^4$~\cite{cachin:renyi}. \subsection{Mutually unbiased bases} We also need the notion of mutually unbiased bases (MUBs), which were initially introduced in the context of state estimation~\cite{wootters:mub}, but appear in many other problems in quantum information. The following definition closely follows the one given in~\cite{boykin:mub}. \begin{definition}[MUBs] \label{def-mub} Let $\mathcal{B}_1 = \{|b^1_1\rangle,\ldots,|b^1_{d}\rangle\}$ and $\mathcal{B}_2 = \{|b^2_1\rangle,\ldots,|b^2_{d}\rangle\}$ be two orthonormal bases in $\mathbb{C}^d$. They are said to be \emph{mutually unbiased} if $|\langle b^1_k |b^2_l\rangle| = 1/\sqrt{d}$, for every $k,l \in[d]$. A set $\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ of orthonormal bases in $\mathbb{C}^d$ is called a \emph{set of mutually unbiased bases} if each pair of bases is mutually unbiased. \end{definition} We use $N(d)$ to denote the maximal number of MUBs in dimension $d$. In any dimension $d$, we have that $\mbox{N}(d) \leq d+1$~\cite{boykin:mub}. If $d = p^k$ is a prime power, we have that $\mbox{N}(d) = d+1$ and explicit constructions are known~\cite{boykin:mub,wootters:mub}.
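As a quick numerical illustration of the definition, the computational basis and the Fourier basis are mutually unbiased in any dimension; the choice $d=5$ below is arbitrary:

```python
# Check that the computational and Fourier bases in dimension d = 5
# satisfy |<b1_k|b2_l>| = 1/sqrt(d) for all k, l; d = 5 is illustrative.
import numpy as np

d = 5
F = np.array([[np.exp(2j * np.pi * k * l / d) for l in range(d)]
              for k in range(d)]) / np.sqrt(d)   # columns: Fourier basis
B1 = np.eye(d)                                   # columns: computational basis
overlaps = np.abs(B1.conj().T @ F)
assert np.allclose(overlaps, 1 / np.sqrt(d))
```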
If $d = s^2$ is a square, $\mbox{N}(d) \geq \mbox{MOLS}(s)$ where $\mbox{MOLS}(s)$ denotes the number of mutually orthogonal $s \times s$ Latin squares~\cite{wocjan:mub}. In general, we have $\mbox{N}(n m) \geq \min\{\mbox{N}(n),\mbox{N}(m)\}$ for all $n,m \in \mathbb{N}$~\cite{zauner:diss,klappenecker:mubs}. It is also known that in any dimension, there exists an explicit construction for 3 MUBs~\cite{grassl:mub}. Unfortunately, not very much is known for other dimensions. For example, it is still an open problem whether there exists a set of $7$ MUBs in dimension $d=6$. We say that a unitary $U_t$ transforms the computational basis into the $t$-th MUB $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ if for all $k \in [d]$ we have $\ket{b^t_k} = U_t\ket{k}$. Here, we are particularly concerned with two specific constructions of mutually unbiased bases. \subsubsection{Latin squares} First of all, we consider MUBs based on mutually orthogonal Latin squares~\cite{wocjan:mub}. Informally, an $s \times s$ Latin square over the symbol set $[s] = \{1,\ldots,s\}$ is an arrangement of elements of $[s]$ into an $s \times s$ square such that in each row and each column every element occurs exactly once. Let $L_{ij}$ denote the entry in a Latin square in row $i$ and column $j$. Two Latin squares $L$ and $L'$ are called mutually orthogonal if and only if $\{(L_{i,j},L'_{i,j})|i,j \in [s]\} = \{(u,v)|u,v \in [s]\}$. From any $s\times s$ Latin square we can obtain a basis for $\mathbb{C}^{s}\otimes \mathbb{C}^{s}$. First, we construct $s$ of the basis vectors from the entries of the Latin square itself. Let $\ket{v_{1,\ell}} = (1/\sqrt{s}) \sum_{i,j\in [s]} E^L_{i,j}(\ell) \ket{i,j}$ where $E^L$ is a predicate such that $E^L_{i,j}(\ell) = 1$ if and only if $L_{i,j} = \ell$. Note that for each $\ell$ we have exactly $s$ pairs $i,j$ such that $E^L_{i,j}(\ell) = 1$, because each element of $[s]$ occurs exactly $s$ times in the Latin square.
Secondly, from each such vector we obtain $s-1$ additional vectors by adding successive rows of an $s \times s$ (complex) Hadamard matrix $H = (h_{ij})$ as coefficients to obtain the remaining $\ket{v_{t,j}}$ for $t \in [s]$, where $h_{ij} = \omega^{ij}$ with $i,j \in \{0,\ldots,s-1\}$ and $\omega = e^{2 \pi i/s}$. Two additional MUBs can then be obtained in the same way from the two non-Latin squares where each element occurs for an entire row or column respectively. From each mutually orthogonal Latin square and these two extra squares which also satisfy the above orthogonality condition, we obtain one basis. This construction therefore gives $\mbox{MOLS}(s) + 2$ many MUBs. It is known that if $s = p^k$ is a prime power itself, we obtain $p^k+1\approx \sqrt{d}$ MUBs from this construction. Note, however, that there do exist many more MUBs in prime power dimensions, namely $d+1$. If $s$ is not a prime power, it is merely known that $\mbox{MOLS}(s) \geq s^{1/14.8}$~\cite{wocjan:mub}. As an example, consider the following $3 \times 3$ Latin square and the $3 \times 3$ Hadamard matrix\\ \begin{center} \begin{tabular}{lr} \begin{tabular}{|c|c|c|} \hline 1 & 2 & 3\\\hline 2 & 3 & 1\\\hline 3 & 1 & 2 \\\hline \end{tabular}, & $ H = \left(\begin{array}{ccc} 1 &1& 1\\ 1 &\omega &\omega^2\\ 1 &\omega^2& \omega \end{array}\right)$, \end{tabular} \end{center} where $\omega = e^{2 \pi i/3}$. First, we obtain vectors \begin{eqnarray*} \ket{v_{1,1}} &=& (\ket{1,1} + \ket{2,3} + \ket{3,2})/\sqrt{3}\\ \ket{v_{1,2}} &=& (\ket{1,2} + \ket{2,1} + \ket{3,3})/\sqrt{3}\\ \ket{v_{1,3}} &=& (\ket{1,3} + \ket{2,2} + \ket{3,1})/\sqrt{3}. \end{eqnarray*} With the help of $H$ we obtain 3 additional vectors from the ones above.
From the vector $\ket{v_{1,1}}$, for example, we obtain \begin{eqnarray*} \ket{v_{1,1}} &=& (\ket{1,1} + \ket{2,3} + \ket{3,2})/\sqrt{3}\\ \ket{v_{2,1}} &=& (\ket{1,1} + \omega \ket{2,3} + \omega^2 \ket{3,2})/\sqrt{3}\\ \ket{v_{3,1}} &=& (\ket{1,1} + \omega^2 \ket{2,3} + \omega \ket{3,2})/\sqrt{3}. \end{eqnarray*} This gives us the basis $\mathcal{B} = \{\ket{v_{t,\ell}}|t,\ell \in [s]\}$ for $s = 3$. The construction of another basis follows in exactly the same way from a mutually orthogonal Latin square. The fact that two such squares $L$ and $L'$ are mutually orthogonal ensures that the resulting bases will be mutually unbiased. Indeed, suppose we are given another such basis, $\mathcal{B'} = \{\ket{u_{t,\ell}}|t,\ell \in [s]\}$ belonging to $L'$. We then have for any $\ell,\ell' \in [s]$ that $|\inp{u_{1,\ell'}}{v_{1,\ell}}|^2 = |(1/s) \sum_{i,j\in [s]} E^{L'}_{i,j}(\ell') E^L_{i,j}(\ell)|^2 = 1/s^2$, as there exists exactly one pair $i,j \in [s]$ such that $E^{L'}_{i,j}(\ell') E^L_{i,j}(\ell) = 1$. Clearly, the same argument holds for the additional vectors derived from the Hadamard matrix. \subsubsection{Generalized Pauli matrices} The second construction we consider is based on the generalized Pauli matrices $X_d$ and $Z_d$~\cite{boykin:mub}, defined by their actions on the computational basis $C = \{\ket{1},\ldots,\ket{d}\}$ as follows: $$ X_d\ket{k} = \ket{k+1}, \quad \mbox{and} \quad Z_d\ket{k} = \omega^k\ket{k},~\forall \ket{k} \in C, $$ where $\omega = e^{2 \pi i/d}$. We say that $\left(X_{d}\right)^{a_1} \left(Z_{d}\right)^{b_1} \otimes \cdots \otimes \left(X_{d}\right)^{a_N} \left(Z_{d}\right)^{b_N}$ for $a_k,b_k \in \{0,\ldots,d-1\}$ and $k \in [N]$ is a \emph{string of Pauli matrices}. If $d$ is a prime, it is known that the $d+1$ MUBs constructed first by Wootters and Fields~\cite{wootters:mub} can also be obtained as the eigenvectors of the matrices $Z_d,X_d,X_dZ_d,X_dZ_d^2,\ldots,X_dZ_d^{d-1}$~\cite{boykin:mub}.
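This eigenvector characterisation is easy to verify numerically for small primes; the sketch below checks it for $d=3$ (an illustrative choice):

```python
# Numerical check for d = 3: the eigenbases of Z, X, XZ, XZ^2 are
# pairwise mutually unbiased; d = 3 is an illustrative prime.
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # X|k> = |k+1 mod d>
Z = np.diag([w ** k for k in range(d)])      # Z|k> = w^k |k>
mats = [Z] + [X @ np.linalg.matrix_power(Z, j) for j in range(d)]
bases = [np.linalg.eig(M)[1] for M in mats]  # columns are eigenvectors
for a in range(len(bases)):
    for b in range(a + 1, len(bases)):
        overlaps = np.abs(bases[a].conj().T @ bases[b])
        assert np.allclose(overlaps, 1 / np.sqrt(d), atol=1e-8)
```

Since each of the $d+1$ matrices has nondegenerate spectrum for prime $d$, the eigenbases are determined up to phases and ordering, neither of which affects the unbiasedness check.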
If $d = p^k$ is a prime power, consider all $d^2-1$ possible strings of Pauli matrices excluding the identity and group them into sets $C_1,\ldots,C_{d+1}$ such that $|C_i| = d - 1$ and $C_i \cap C_j = \emptyset$ for $i \neq j$ and all elements of $C_i$ commute. Let $B_i$ be the common eigenbasis of all elements of $C_i$. Then $B_1,\ldots,B_{d+1}$ are MUBs~\cite{boykin:mub}. A similar result for $d = 2^k$ has also been shown in~\cite{lawrence:mub}. A special case of this construction is given by the three mutually unbiased bases in dimension $d=2^k$ obtained by applying the unitaries $\mathbb{I}^{\otimes k}$, $H^{\otimes k}$ and $K^{\otimes k}$ with $K = (\mathbb{I} + i\sigma_x)/\sqrt{2}$ to the computational basis. \subsection{2-designs} For the purposes of the present work, \emph{spherical $t$-designs} (see for example Ref.\ \cite{Renesetal04a}) can be defined as follows. \begin{definition}[$t$-design] Let $\{|\tau_1\rangle,\ldots,|\tau_{m}\rangle\}$ be a set of state vectors in $\mathbb{C}^d$. They are said to form a $t$-design if \begin{eqnarray*} \begin{aligned} \frac{1}{m}\sum_{i=1}^m [|\tau_i\rangle \langle \tau_i|]^{\otimes t} = \frac{\Pi_+^{(t,d)}}{\mathop{\mathrm{Tr}}\nolimits \Pi_+^{(t,d)}}, \end{aligned} \end{eqnarray*} where $\Pi_+^{(t,d)}$ is a projector onto the completely symmetric subspace of ${\mathbb{C}^d}^{\otimes t}$ and \begin{eqnarray*} \begin{aligned} \mathop{\mathrm{Tr}}\nolimits \Pi_+^{(t,d)}=\cmb{d+t-1}{d-1}=\frac{(d+t-1)!}{(d-1)!~t!} \end{aligned} \end{eqnarray*} is its dimension. \end{definition} Any set $\mathbb{B}$ of $d+1$ MUBs forms a \emph{spherical $2$-design} \cite{KlappeneckerRotteler05a,Renesetal04a}, i.e., we have for $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{d+1}\}$ with $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ that \begin{eqnarray*} \begin{aligned} \frac{1}{d(d+1)}\sum_{t=1}^{d+1}\sum_{k=1}^{d} [|b^t_k\rangle \langle b^t_k|]^{\otimes 2} &= 2\frac{\Pi_+^{(2,d)}}{d(d+1)}.
\end{aligned} \end{eqnarray*} \section{Uncertainty relations} We now prove tight entropic uncertainty relations for measurements in MUBs in square dimensions. The main result of~\cite{maassen:entropy}, which will be very useful for us, is stated next. \begin{theorem}[Maassen and Uffink] Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two orthonormal bases in a Hilbert space of dimension $d$. Then for all pure states $\ket{\psi}$ \begin{eqnarray} \begin{aligned} \label{eq:maasenuffinkbound} \frac{1}{2}\left[ H(\mathcal{B}_1,\ket{\psi})+H(\mathcal{B}_2,\ket{\psi})\right]\geq -\log c(\mathcal{B}_1,\mathcal{B}_2), \end{aligned} \end{eqnarray} where $c(\mathcal{B}_1,\mathcal{B}_2)=\max \left \{|\langle b_1|b_2\rangle|:|b_1\rangle \in \mathcal{B}_1,|b_2\rangle \in \mathcal{B}_2\right \}$. \end{theorem} The case when $\mathcal{B}_1$ and $\mathcal{B}_2$ are MUBs is of special interest for us. More generally, when one has a set of MUBs, a straightforward application of (\ref{eq:maasenuffinkbound}) leads to the following corollary, also noted in~\cite{azarchs:entropy}. \begin{corollary}\label{MUderived} Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ be a set of MUBs in a Hilbert space of dimension $d$. Then \begin{eqnarray} \begin{aligned} \label{eq:manymubsbound} \frac{1}{m} \sum_{t=1}^m H(\mathcal{B}_t,|\psi\rangle)\geq \frac{\log d}{2}. \end{aligned} \end{eqnarray} \end{corollary} \begin{proof} Using (\ref{eq:maasenuffinkbound}), one gets that for any pair of MUBs $\mathcal{B}_t$ and $\mathcal{B}_{t'}$ with $t\neq t'$ \begin{eqnarray} \begin{aligned}\label{eq:maassenuffinkij} \frac{1}{2}\left[ H(\mathcal{B}_t,|\psi\rangle)+H(\mathcal{B}_{t'},|\psi\rangle)\right]\geq \frac{\log d}{2}. \end{aligned} \end{eqnarray} Adding up the resulting inequality over all pairs $t\neq t'$ we get the desired result (\ref{eq:manymubsbound}). \end{proof} Here, we now show that this bound can in fact be tight for a large set of MUBs.
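The Maassen–Uffink bound is straightforward to test numerically. The sketch below checks it for the pair consisting of the computational and Fourier bases (for which $c = 1/\sqrt{d}$) on a random state; the dimension $d=4$, the seed, and the test state are illustrative choices:

```python
# Numerical sanity check of the Maassen-Uffink bound for two MUBs
# (computational and Fourier bases), where c = 1/sqrt(d); the dimension
# d = 4, the seed, and the random test state are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 4
F = np.array([[np.exp(2j * np.pi * k * l / d) for l in range(d)]
              for k in range(d)]) / np.sqrt(d)

def shannon_H(basis, psi):
    # Shannon entropy (base 2) of measuring psi in the given basis
    p = np.abs(basis.conj().T @ psi) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
lhs = 0.5 * (shannon_H(np.eye(d), psi) + shannon_H(F, psi))
assert lhs >= 0.5 * np.log2(d) - 1e-9   # average entropy >= (log d)/2
```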
\subsection{MUBs in square dimensions} Corollary \ref{MUderived} gives a lower bound on the average of the entropies of a set of MUBs. The obvious question is whether that bound is tight. We show that the bound is indeed tight when we consider product MUBs in a Hilbert space of square dimension. \begin{theorem}\label{squareThm} Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m\geq 2$ be a set of MUBs in a Hilbert space $\mathcal{H}$ of dimension $s$. Let $U_t$ be the unitary operator that transforms the computational basis to $\mathcal{B}_t$. Then $\mathbb{V}=\{\mathcal{V}_1,\ldots,\mathcal{V}_m\}$, where \begin{eqnarray*} \begin{aligned} \mathcal{V}_t=\left \{U_t|k\rangle \otimes U_t^* |l\rangle: k,l\in[s] \right \}, \end{aligned} \end{eqnarray*} is a set of MUBs in $\mathcal{H} \otimes \mathcal{H}$, and it holds that \begin{eqnarray} \begin{aligned}\label{eq:squaremubsbound} \min_{\ket{\psi}} \frac{1}{m} \sum_{t=1}^m H(\mathcal{V}_t,|\psi\rangle)= \frac{\log d}{2}, \end{aligned} \end{eqnarray} where $d=\dim(\mathcal{H} \otimes \mathcal{H})=s^2$. \end{theorem} \begin{proof} It is easy to check that $\mathbb{V}$ is indeed a set of MUBs. Our proof works by constructing a state $\ket{\psi}$ that achieves the bound in Corollary~\ref{MUderived}. It is easy to see that the maximally entangled state \begin{eqnarray*} \begin{aligned} |\psi\rangle = \frac{1}{\sqrt{s}}\sum_{k=1}^{s}|kk\rangle, \end{aligned} \end{eqnarray*} satisfies $U\otimes U^*|\psi\rangle=|\psi\rangle$ for any $U\in \textrm{U}(s)$. Indeed, \begin{eqnarray*} \begin{aligned} \langle \psi |U\otimes U^*|\psi\rangle&=\frac{1}{s}\sum_{k,l=1}^{s} \langle k |U|l\rangle\langle k |U^*|l\rangle\\ &=\frac{1}{s}\sum_{k,l=1}^{s} \langle k |U|l\rangle\langle l |U^\dagger|k\rangle\\ &=\frac{1}{s}\mathop{\mathrm{Tr}}\nolimits UU^\dagger=1.
\end{aligned} \end{eqnarray*} Therefore, for any $t\in[m]$ we have that \begin{eqnarray*} \begin{aligned} H(\mathcal{V}_t,|\psi\rangle) &=-\sum_{kl}|\langle kl|U_t\otimes U_t^*|\psi\rangle|^2\log|\langle kl|U_t\otimes U_t^*|\psi\rangle|^2\\ &=-\sum_{kl}|\langle kl|\psi\rangle|^2\log|\langle kl|\psi\rangle|^2\\ &=\log s=\frac{\log d}{2}. \end{aligned} \end{eqnarray*} Taking the average of the previous equation we get the desired result. \end{proof} \subsection{MUBs based on Latin squares} We now consider mutually unbiased bases based on Latin squares~\cite{wocjan:mub} as described in Section~\ref{prelim}. Our proof again follows by providing a state that achieves the bound in Corollary~\ref{MUderived}, which turns out to have a very simple form. \begin{lemma}\label{LSentropy} Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m \geq 2$ be any set of MUBs in a Hilbert space of dimension $d=s^2$ constructed on the basis of Latin squares. Then $$ \min_{\ket{\psi}} \frac{1}{m} \sum_{\mathcal{B}\in\mathbb{B}} H(\mathcal{B},\ket{\psi}) = \frac{\log d}{2}. $$ \end{lemma} \begin{proof} Consider the state $\ket{\psi} = \ket{1,1}$ and fix a basis $\mathcal{B}_t = \{\ket{v^t_{i,j}}|i,j \in [s]\} \in \mathbb{B}$ coming from a Latin square. It is easy to see that there exists exactly one $j \in [s]$ such that $\inp{v^t_{1,j}}{1,1} = 1/\sqrt{s}$. Namely this will be the $j \in [s]$ at position $(1,1)$ in the Latin square. Fix this $j$. For any other $\ell \in [s], \ell \neq j$, we have $\inp{v^t_{1,\ell}}{1,1} = 0$. But this means that there exist exactly $s$ vectors in $\mathcal{B}_t$ such that $|\inp{v^t_{i,j}}{1,1}|^2 = 1/s$, namely exactly the $s$ vectors derived from $\ket{v^t_{1,j}}$ via the Hadamard matrix. The same argument holds for any such basis $\mathcal{B}_t \in \mathbb{B}$.
We get \begin{eqnarray*} \sum_{\mathcal{B} \in \mathbb{B}} H(\mathcal{B},\ket{1,1}) &=& -\sum_{\mathcal{B} \in \mathbb{B}} \sum_{i,j \in [s]} |\inp{v^t_{i,j}}{1,1}|^2 \log |\inp{v^t_{i,j}}{1,1}|^2\\ &=& -|\mathbb{B}| s \frac{1}{s} \log \frac{1}{s}\\ &=& |\mathbb{B}| \frac{\log d}{2}. \end{eqnarray*} The result then follows directly from Corollary~\ref{MUderived}. \end{proof} \subsection{Using a full set of MUBs} We now provide an alternative proof of an entropic uncertainty relation for a full set of mutually unbiased bases. This has previously been proved in~\cite{sanchez:entropy2}. Nevertheless, because our proof is very simple using existing results about 2-designs, we include it here for completeness, in the hope that it may offer additional insight. \begin{lemma}\label{FullentropyCollision} Let $\mathbb{B}$ be a set of $d+1$ MUBs in a Hilbert space of dimension $d$. Then $$ \frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H_2(\mathcal{B},\ket{\psi}) \geq \log\left(\frac{d+1}{2}\right). $$ \end{lemma} \begin{proof} Let $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ and $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{d+1}\}$. We can then write \begin{eqnarray*} \frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H_2(\mathcal{B},\ket{\psi}) &=& - \frac{1}{d+1}\sum_{t=1}^{d+1} \log \sum_{k=1}^d |\inp{b^t_k}{\psi}|^4\\ &\geq& -\log\left(\frac{1}{d+1}\sum_{t=1}^{d+1}\sum_{k=1}^d |\inp{b^t_k}{\psi}|^4\right)\\ &=&\log\left(\frac{d+1}{2}\right), \end{eqnarray*} where the inequality follows from the concavity of the $\log$, and the final equality follows directly from the fact that a full set of MUBs forms a 2-design and~\cite[Theorem 1]{KlappeneckerRotteler05a}. \end{proof} We then obtain the original result by Sanchez-Ruiz~\cite{sanchez:entropy2} by noting that $H(\cdot) \geq H_2(\cdot)$. \begin{corollary}\label{Fullentropy} Let $\mathbb{B}$ be a set of $d+1$ MUBs in a Hilbert space of dimension $d$.
Then $$ \frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H(\mathcal{B},\ket{\psi}) \geq \log\left(\frac{d+1}{2}\right). $$ \end{corollary} \section{Locking} We now turn our attention to locking. We first explain the connection between locking and entropic uncertainty relations. In particular, we show that for MUBs based on generalized Pauli matrices, we only need to look at such uncertainty relations to determine the exact strength of the locking effect. We then consider how good MUBs based on Latin squares are for locking. In order to determine how large the locking effect is for some set of mutually unbiased bases $\mathbb{B}$, and the state \begin{equation}\label{rhoAB} \rho_{AB} = \sum_{t=1}^{|\mathbb{B}|} \sum_{k=1}^{d} p_{t,k} (\outp{k}{k} \otimes \outp{t}{t})_A \otimes (\outp{b^t_k}{b^t_k})_B, \end{equation} we must find an optimal bound for $\mathcal{I}_c(\rho_{AB})$. Here, $\{p_{t,k}\}$ is a probability distribution over $\mathbb{B} \times [d]$. That is, we must find a POVM $M_A \otimes M_B$ that maximizes Eq.\ (\ref{mutualInfo}). It has been shown in~\cite{terhal:locking} that we can restrict ourselves to taking $M_A$ to be the local measurement determined by the projectors $\{\outp{k}{k} \otimes \outp{t}{t}\}$. It is also known that we can limit ourselves to taking the measurement $M_B$ consisting of rank one elements $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ only~\cite{davies:access}, where $\alpha_i \geq 0$ and $\ket{\Phi_i}$ is normalized.
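To make these quantities concrete, the following sketch (a hypothetical toy example, not taken from the paper's numerics) evaluates the classical mutual information between the ensemble label and the outcome of one fixed rank-one measurement; the accessible information is the maximum of this quantity over all measurements $M_B$.

```python
import numpy as np

def mutual_info(prior, states, povm):
    """I(X;Y) in bits between the ensemble label X and the POVM outcome Y."""
    # joint distribution p(x, y) = prior[x] * Tr(povm[y] @ rho_x)
    p = np.array([[prior[x] * np.real(np.trace(povm[y] @ states[x]))
                   for y in range(len(povm))] for x in range(len(states))])
    px, py = p.sum(axis=1), p.sum(axis=0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / np.outer(px, py)[mask])))

proj = lambda v: np.outer(v, np.conj(v))          # rank-one projector |v><v|
s2 = 1 / np.sqrt(2)
# toy ensemble: |0>,|1>,|+>,|-> with uniform prior, measured in the computational basis
states = [proj([1, 0]), proj([0, 1]), proj([s2, s2]), proj([s2, -s2])]
povm = [proj([1, 0]), proj([0, 1])]
print(mutual_info([0.25] * 4, states, povm))      # prints 0.5
```

The fixed measurement recovers exactly one of the two encoded bits here; what makes $\mathcal{I}_{acc}$ hard to compute in general is the maximization over all such POVMs.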
Maximizing over $M_B$ then corresponds to maximizing Bob's accessible information~\cite[Eq.\ (9.75)]{peres:book} for the ensemble $\mathcal{E} = \{p_{k,t},\outp{b^t_k}{b^t_k}\}$ \begin{eqnarray} \begin{aligned}\label{accessible} &&\mathcal{I}_{acc}(\mathcal{E})= \max_M \left(- \sum_{k,t} p_{k,t} \log p_{k,t} + \right.\\ &&\left.\sum_{i} \sum_{k,t} p_{k,t} \alpha_i \bra{\Phi_i}\rho_{k,t}\ket{\Phi_i} \log \frac{p_{k,t} \bra{\Phi_i}\rho_{k,t}\ket{\Phi_i}}{\bra{\Phi_i}\mu\ket{\Phi_i}} \right), \end{aligned} \end{eqnarray} where $\mu = \sum_{k,t} p_{k,t} \rho_{k,t}$ and $\rho_{k,t} = \outp{b^t_k}{b^t_k}$. Therefore, we have $\mathcal{I}_c(\rho_{AB}) = \mathcal{I}_{acc}(\mathcal{E})$. We are now ready to prove our locking results. \subsection{An example} We first consider a very simple example with only three MUBs that provides the intuition behind the remainder of our paper. The three MUBs we consider now are generated by the unitaries $\mathbb{I}$, $H$ and $K = (\mathbb{I} + i\sigma_x)/\sqrt{2}$ when applied to the computational basis. For this small example, we also investigate the role of the prior over the bases and the encoded basis elements. It turns out that a non-uniform prior does not increase the strength of the locking effect. Actually, it is possible to show the same for encodings in many other bases. However, we do not consider this case in full generality so as not to obscure our main line of argument. \begin{lemma}\label{3mubLocking} Let $U_1=\mathbb{I}^{\otimes n}$, $U_2 = H^{\otimes n}$, and $U_3 = K^{\otimes n}$, where $n$ is an even integer. Let $\{p_t\}$ with $t \in [3]$ be a probability distribution over the set $\mathcal{S} = \{U_1,U_2,U_3\}$. Suppose that $p_1,p_2,p_3 \leq 1/2$ and let $p_{t,k} = p_t (1/d)$. Consider the ensemble $\mathcal{E} = \{ p_t \frac{1}{d},U_t \outp{k}{k}U_t^\dagger\}$ with $k \in \{0,1\}^n$; then $$ \mathcal{I}_{acc}(\mathcal{E}) = \frac{n}{2}.
$$ If, on the other hand, there exists a $t \in [3]$ such that $p_t > 1/2$, then $\mathcal{I}_{acc}(\mathcal{E}) > n/2$. \end{lemma} \begin{proof} We first give an explicit measurement strategy and then prove a matching upper bound on $\mathcal{I}_{acc}$. Consider the Bell basis vectors $\ket{\Gamma_{00}} = (\ket{00} + \ket{11})/\sqrt{2}$, $\ket{\Gamma_{01}} = (\ket{00} - \ket{11})/\sqrt{2}$, $\ket{\Gamma_{10}} = (\ket{01} + \ket{10})/\sqrt{2}$, and $\ket{\Gamma_{11}} = (\ket{01} - \ket{10})/\sqrt{2}$. Note that we can write for the computational basis \begin{eqnarray*} \ket{00} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{00}} + \ket{\Gamma_{01}})\\ \ket{01} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{10}} + \ket{\Gamma_{11}})\\ \ket{10} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{10}} - \ket{\Gamma_{11}})\\ \ket{11} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{00}} - \ket{\Gamma_{01}}). \end{eqnarray*} The crucial fact to note is that if we fix some $k_1k_2$, then there exist exactly two Bell basis vectors $\ket{\Gamma_{i_1i_2}}$ such that $|\inp{\Gamma_{i_1i_2}}{k_1k_2}|^2 = 1/2$. For the remaining two basis vectors the inner product with $\ket{k_1k_2}$ will be zero. A simple calculation shows that we can express the two qubit basis states of the other two mutually unbiased bases analogously: for each two qubit basis state there are exactly two Bell basis vectors such that the inner product is zero and for the other two the inner product squared is $1/2$. We now take the measurement given by $\{\outp{\Gamma_i}{\Gamma_i}\}$ with $\ket{\Gamma_i} = \ket{\Gamma_{i_1i_2}} \otimes \ldots \otimes \ket{\Gamma_{i_{n-1}i_{n}}}$ for the binary expansion of $i = i_1i_2\ldots i_n$. Fix a $k = k_1k_2\ldots k_n$. By the above argument, there exist exactly $2^{n/2}$ strings $i \in \{0,1\}^n$ such that $|\inp{\Gamma_i}{k}|^2 = 1/(2^{n/2})$. 
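The two-qubit building block of this argument is easy to check numerically; the snippet below (an illustrative aside, not part of the proof) verifies that every two-qubit computational basis state has squared overlap $1/2$ with exactly two Bell vectors and $0$ with the other two.

```python
import numpy as np

s2 = 1 / np.sqrt(2)
bell = np.array([          # rows: <Gamma_00|, <Gamma_01|, <Gamma_10|, <Gamma_11|
    [s2, 0, 0,  s2],       # (|00> + |11>)/sqrt(2)
    [s2, 0, 0, -s2],       # (|00> - |11>)/sqrt(2)
    [0, s2,  s2, 0],       # (|01> + |10>)/sqrt(2)
    [0, s2, -s2, 0],       # (|01> - |10>)/sqrt(2)
])
for k in range(4):                     # computational basis states |k1 k2>
    e = np.zeros(4); e[k] = 1.0
    p = (bell @ e) ** 2                # squared overlaps |<Gamma_i|k1 k2>|^2
    assert sorted(np.round(p, 12)) == [0.0, 0.0, 0.5, 0.5]
print("each |k1 k2> hits exactly two Bell vectors with probability 1/2")
```

Tensoring $n/2$ such pairs gives the $2^{n/2}$ strings with overlap squared $2^{-n/2}$ used above.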
Putting everything together, Eq.\ (\ref{accessible}) now gives us for any prior distribution $\{p_{t,k}\}$ that \begin{equation}\label{Ibell} -\sum_i \bra{\Gamma_i}\mu\ket{\Gamma_i} \log \bra{\Gamma_i}\mu\ket{\Gamma_i} - \frac{n}{2} \leq \mathcal{I}_{acc}(\mathcal{E}). \end{equation} For our particular distribution we have $\mu = \mathbb{I}/d$ and thus $$ \frac{n}{2} \leq \mathcal{I}_{acc}(\mathcal{E}). $$ We now prove a matching upper bound that shows that our measurement is optimal. For our distribution, we can rewrite Eq.\ (\ref{accessible}) for the POVM given by $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ to \begin{eqnarray*} \mathcal{I}_{acc}(\mathcal{E}) &=& \max_M \left(\log d + \right.\\ &&\left. \sum_i \frac{\alpha_i}{d} \sum_{k,t} p_{t} |\bra{\Phi_i}U_t\ket{k}|^2 \log |\bra{\Phi_i}U_t\ket{k}|^2 \right)\\ &=& \max_M \left(\log d - \sum_i \frac{\alpha_i}{d} \sum_{t} p_t H(\mathcal{B}_t,\ket{\Phi_i}) \right). \end{eqnarray*} It follows from Corollary~\ref{MUderived} that $\forall i\in \{0,1\}^n$ and $p_1,p_2,p_3\leq 1/2$, \begin{eqnarray*} (1/2-p_1) [H(\mathcal{B}_2,\ket{\Phi_i}) + H(\mathcal{B}_3,\ket{\Phi_i}) ]&+&\\ (1/2-p_2) [H(\mathcal{B}_1,\ket{\Phi_i}) + H(\mathcal{B}_3,\ket{\Phi_i})]&+&\\ (1/2-p_3)[H(\mathcal{B}_1,\ket{\Phi_i})+H(\mathcal{B}_2,\ket{\Phi_i})]&& \geq n/2. \end{eqnarray*} Reordering the terms we now get $\sum_{t=1}^3 p_{t} H(\mathcal{B}_t,\ket{\Phi_i})\geq n/2.$ Putting things together and using the fact that $\sum_i \alpha_i=d$, we obtain $$ \mathcal{I}_{acc}(\mathcal{E}) \leq \frac{n}{2}, $$ from which the result follows. If, on the other hand, there exists a $t \in [3]$ such that $p_t > 1/2$, then by measuring in the basis $\mathcal{B}_t$ we obtain $\mathcal{I}_{acc}(\mathcal{E}) \geq p_t n > n/2$. \end{proof} Above, we have only considered a non-uniform prior over the set of bases. 
In \cite{BalWehWin:pistar} it is observed that when we want to guess the XOR of a string of length $2$ encoded in one (unknown to us) of these three bases, the uniform prior on the strings is not the one that gives the smallest probability of success. This might lead one to think that a similar phenomenon could be observed in the present setting, i.e., that one might obtain better locking with three bases for a non-uniform prior on the strings. In what follows, however, we show that this is not the case. Let $p_t=\sum_{k} p_{k,t}$ be the marginal distribution on the basis, then the difference in Bob's knowledge between receiving only the quantum state and receiving the quantum state \emph{and} the basis information is given by \begin{eqnarray*} \Delta(p_{k,t})=H(p_{k,t})-\mathcal{I}_{acc}(\mathcal{E}) -H(p_t), \end{eqnarray*} subtracting the basis information itself. Consider the post-measurement state $\nu=\sum_i \bra{\Gamma_i}\mu\ket{\Gamma_i}\ket{\Gamma_i}\bra{\Gamma_i}$. Using (\ref{Ibell}) we obtain \begin{eqnarray} \label{gap1} \Delta(p_{k,t})\leq H(p_{k,t})-S(\nu)+n/2 -H(p_t), \end{eqnarray} where $S$ is the von Neumann entropy. For the state \begin{eqnarray*} \rho_{12} = \sum_{k=1}^{d} \sum_{t=1}^{3} p_{k,t}(\outp{t}{t})_1 \otimes (U_t \outp{k}{k} U_t^{\dagger})_2, \end{eqnarray*} we have that \begin{eqnarray*} S(\rho_{12})=H(p_{k,t}) &\leq& S(\rho_1) +S(\rho_2)\\ &=& H(p_t) +S(\mu)\\ &\leq& H(p_t)+S(\nu). \end{eqnarray*} Using (\ref{gap1}) and the previous equation we get \begin{eqnarray*} \Delta(p_{k,t})\leq n/2, \end{eqnarray*} for any prior distribution. This bound is saturated by the uniform prior and therefore we conclude that the uniform prior results in the largest gap possible. \subsection{MUBs from generalized Pauli Matrices} We first consider MUBs based on the generalized Pauli matrices $X_d$ and $Z_d$ as described in Section~\ref{prelim}. We consider a uniform prior over the elements of each basis and the set of bases.
Choosing a non-uniform prior does not lead to a better locking effect. \begin{lemma}\label{equiv} Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_{m}\}$ be any set of MUBs constructed on the basis of generalized Pauli matrices in a Hilbert space of prime power dimension $d = p^N$. Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then $$ \mathcal{I}_{acc}(\mathcal{E}) = \log d - \frac{1}{m} \min_{\ket{\psi}} \sum_{\mathcal{B}_t \in \mathbb{B}} H(\mathcal{B}_t,\ket{\psi}). $$ \end{lemma} \begin{proof} We can rewrite Eq.\ (\ref{accessible}) for the POVM given by $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ as \begin{eqnarray*} \mathcal{I}_{acc}(\mathcal{E}) &=& \max_M \left(\log d + \right.\\ &&\left. \sum_i \frac{\alpha_i}{d m} \sum_{k,t} |\inp{\Phi_i}{b^t_k}|^2 \log |\inp{\Phi_i}{b^t_k}|^2 \right)\\ &=& \max_M \left(\log d - \sum_i \frac{\alpha_i}{d} \sum_{t} p_{t} H(\mathcal{B}_t,\ket{\Phi_i}) \right). \end{eqnarray*} For convenience, we split up the index $i$ into $i = ab$ with $a = a_1,\ldots,a_N$ and $b=b_1,\ldots,b_N$, where $a_\ell,b_\ell \in \{0,\ldots,p-1\}$ in the following. We first show that applying generalized Pauli matrices to the basis vectors of a MUB merely permutes those vectors. \begin{claim} Let $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ be a basis based on generalized Pauli matrices (Section~\ref{prelim}) with $d = p^N$. Then $\forall a,b \in \{0,\ldots,p-1\}^N, \forall k \in [d]$ we have that $\exists k' \in [d],$ such that $\ket{b^{t}_{k'}} = X_d^{a_1}Z_d^{b_1} \otimes \ldots \otimes X_d^{a_N}Z_d^{b_N}\ket{b^t_k}$. \end{claim} \begin{proof} Let $\Sigma_p^i$ for $i \in \{0,1,2,3\}$ denote the generalized Paulis $\Sigma_p^0 = \mathbb{I}_p$, $\Sigma_p^1 = X_p$, $\Sigma_p^3 = Z_p$, and $\Sigma_p^2 = X_p Z_p$. Note that $X_p^uZ_p^v = \omega^{uv} Z_p^v X_p^u$, where $\omega = e^{2\pi i/p}$.
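The phase in this commutation relation is convention dependent: assuming the convention $X_p\ket{j} = \ket{j+1 \bmod p}$ and $Z_p\ket{j} = \omega^j\ket{j}$ it comes out as $\omega^{-uv}$, and it flips sign under the inverse convention. Either way, $X_p^uZ_p^v$ and $Z_p^vX_p^u$ agree up to the phase $\omega^{\pm uv}$, which is all the argument uses. A small numerical check (illustrative, not from the paper):

```python
import numpy as np
from itertools import product

p = 5                                    # any prime works here
w = np.exp(2j * np.pi / p)
X = np.roll(np.eye(p), 1, axis=0)        # shift: X|j> = |j+1 mod p>
Z = np.diag(w ** np.arange(p))           # clock: Z|j> = w^j |j>
mp = np.linalg.matrix_power
for u, v in product(range(p), repeat=2):
    # X^u Z^v = w^(-uv) Z^v X^u under the convention assumed above
    assert np.allclose(mp(X, u) @ mp(Z, v), w ** (-u * v) * mp(Z, v) @ mp(X, u))
```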
Furthermore, define $ \Sigma_p^{i,(x)} = \mathbb{I}^{\otimes (x - 1)} \otimes \Sigma_p^{i} \otimes \mathbb{I}^{\otimes (N-x)} $ to be the Pauli operator $\Sigma_p^i$ applied to the $x$-th qupit. Recall from Section~\ref{prelim} that the basis $\mathcal{B}_t$ is the unique simultaneous eigenbasis of the set of operators in $C_t$, i.e., for all $k \in [d]$ and $f,g \in \{0,\ldots,p-1\}^N$, $\ket{b^t_k} \in \mathcal{B}_t$ and $c_{f,g}^t \in C_t$, we have $c_{f,g}^t \ket{b^t_k}=\lambda_{k,f,g}^t \ket{b^t_k} \textrm{ for some value }\lambda^t_{k,f,g}$. Note that any vector $\ket{v}$ that satisfies this equation is proportional to a vector in $\mathcal{B}_t$. To prove that any application of one of the generalized Paulis merely permutes the vectors in $\mathcal{B}_t$ is therefore equivalent to proving that $\Sigma^{i,(x)}_{p} \ket{b^t_k}$ are eigenvectors of $c_{f,g}^t$ for any $f,g \in \{0,\ldots,p-1\}^N$ and $i \in \{1, 3\}$. This can be seen as follows: Note that $c_{f,g}^t=\bigotimes_{n=1}^N \left(\Sigma^{1, (n)}_{p}\right)^{f_n} \left(\Sigma^{3,(n)}_{p}\right)^{g_n}$ for $f = (f_1,\ldots,f_N)$ and $g=(g_1, \ldots, g_N)$ with $f_n,g_n \in \{0,\ldots,p-1\}$~\cite{boykin:mub}. A calculation then shows that $$ c_{f,g}^t \Sigma^{i,(x)}_p \ket{b^t_k}= \tau_{f_x,g_x, i} \lambda_{k,f,g}^t \Sigma^{i,(x)}_{p} \ket{b^t_k}, $$ where $\tau_{f_x,g_x, i}=\omega^{g_x}$ for $i = 1$ and $\tau_{f_x,g_x,i}=\omega^{-f_x}$ for $i = 3$. Thus $\Sigma^{i,(x)}_{p} \ket{b^t_k}$ is an eigenvector of $c^t_{f,g}$ for all $t, f, g$ and $i$, which proves our claim. \end{proof} Suppose we are given $\ket{\psi}$ that minimizes $\sum_{\mathcal{B}_t \in \mathbb{B}} H(\mathcal{B}_t,\ket{\psi})$. We can then construct a full POVM with $d^2$ elements by taking $\{\frac{1}{d}\outp{\Phi_{ab}}{\Phi_{ab}}\}$ with $\ket{\Phi_{ab}} = (X_d^{a_1}Z_d^{b_1} \otimes \ldots \otimes X_d^{a_N}Z_d^{b_N})^\dagger\ket{\psi}$.
However, it follows from our claim above that $\forall a,b,k, \exists k'$ such that $|\inp{\Phi_{ab}}{b^t_k}|^2 = |\inp{\psi}{b^{t}_{k'}}|^2$, and thus $H(\mathcal{B}_t,\ket{\psi}) = H(\mathcal{B}_t,\ket{\Phi_{ab}})$, from which the result follows. \end{proof} Determining the strength of the locking effects for such MUBs is thus equivalent to proving bounds on entropic uncertainty relations. We thus obtain as a corollary of Theorem~\ref{squareThm} and Lemma~\ref{equiv} that, for dimensions which are the square of a prime power $d = p^{2N}$, using any product MUBs based on generalized Paulis does not give us any better locking than just using 2 MUBs. \begin{corollary}\label{pauliLocking} Let $\mathbb{S}=\{\mathcal{S}_1,\ldots,\mathcal{S}_{m}\}$ with $m \geq 2$ be any set of MUBs constructed on the basis of generalized Pauli matrices in a Hilbert space of prime power dimension $s = p^N$. Define $U_t$ as the unitary that transforms the computational basis into the $t$-th MUB, i.e., $\mathcal{S}_t = \{U_t\ket{1},\ldots,U_t\ket{s}\}$. Let $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{m}\}$ be the set of product MUBs with $\mathcal{B}_t = \{U_t \otimes U_t^* \ket{1},\ldots,U_t \otimes U_t^*\ket{d}\}$ in dimension $d=s^2$. Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then $$ \mathcal{I}_{acc}(\mathcal{E}) = \frac{\log d}{2}. $$ \end{corollary} \begin{proof} The claim follows from Theorem~\ref{squareThm} and the proof of Lemma~\ref{equiv}, by constructing a similar measurement formed from vectors $\ket{\hat{\Phi}_{\hat{a}\hat{b}}} = K_{a^1b^1} \otimes K_{a^2b^2}^* \ket{\psi}$ with $\hat{a} = a^1a^2$ and $\hat{b} = b^1b^2$, where $a^1,a^2$ and $b^1,b^2$ are defined like $a$ and $b$ in the proof of Lemma~\ref{equiv}, and $K_{ab} = (X_d^{a_1}Z_d^{b_1}\otimes\ldots\otimes X_d^{a_N}Z^{b_N}_d)^\dagger$ from above. \end{proof} The simple example we considered above is in fact a special case of Corollary~\ref{pauliLocking}.
It shows that if the vector that minimizes the sum of entropies has certain symmetries, such as for example the Bell states, the resulting POVM can even be much simpler. \subsection{MUBs from Latin Squares} At first glance, one might think that the product MUBs based on generalized Paulis are not well suited for locking just because of their product form. Perhaps MUBs with entangled basis vectors do not exhibit this problem. To this end, we examine how well MUBs based on Latin squares can lock classical information in a quantum state. All such MUBs are highly entangled, with the exception of the two extra MUBs based on non-Latin squares. Surprisingly, it turns out that \emph{any} set of at least two MUBs based on Latin squares does equally well at locking as using just 2 such MUBs. Thus such MUBs perform equally ``badly'', i.e., we cannot improve the strength of the locking effect by using more MUBs of this type. \begin{lemma}\label{LSlocking} Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m \geq 2$ be any set of MUBs in a Hilbert space of dimension $d=s^2$ constructed on the basis of Latin squares. Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then $$ \mathcal{I}_{acc}(\mathcal{E}) = \frac{\log d}{2}. $$ \end{lemma} \begin{proof} Note that we can again rewrite $\mathcal{I}_{acc}(\mathcal{E})$ as in the proof of Lemma~\ref{equiv}. Consider the simple measurement in the computational basis $\{\outp{i,j}{i,j}|i,j \in [s]\}$. The result then follows by the same argument as in Lemma~\ref{LSentropy}. \end{proof} \section{Conclusion and Open Questions} We have shown tight bounds on entropic uncertainty relations and locking for specific sets of mutually unbiased bases. Surprisingly, it turns out that using more mutually unbiased bases does not always lead to a better locking effect. It is interesting to consider what may make these bases so special.
The example of three MUBs considered in Lemma~\ref{3mubLocking} may provide a clue. These three bases are given by the common eigenbases of $\{\sigma_x \otimes \sigma_x, \sigma_x \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_x\}$, $\{\sigma_z \otimes \sigma_z, \sigma_z \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_z\}$ and $\{\sigma_y \otimes \sigma_y, \sigma_y \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_y\}$ respectively~\cite{boykin:mub}. However, $\sigma_x \otimes \sigma_x$, $\sigma_z \otimes \sigma_z$ and $\sigma_y \otimes \sigma_y$ commute and thus also share a common eigenbasis, namely the Bell basis. This is exactly the basis we used as our measurement. For all MUBs based on generalized Pauli matrices, the MUBs in prime power dimensions are given as the common eigenbases of similar sets consisting of strings of Paulis. It would be interesting to determine the strength of the locking effect on the basis of the commutation relations of elements of \emph{different} sets. Perhaps it is possible to obtain good locking from a subset of such MUBs where none of the elements from different sets commute. It is also worth noting that the numerics of~\cite{terhal:locking} indicate that at least in dimension $p$ using more than three bases does indeed lead to a stronger locking effect. It would be interesting to know whether the strength of the locking effect depends not only on the number of bases, but also on the dimension of the system in question. Whereas general bounds still elude us, we have shown that merely choosing mutually unbiased bases is not sufficient to obtain good locking effects or high lower bounds for entropic uncertainty relations. We thus have to look for different properties. \acknowledgments We would like to thank Harry Buhrman, Hartwig Bosse, Matthias Christandl, Richard Cleve, Debbie Leung, Serge Massar, David Poulin, and Ben Toner for discussions.
We would especially like to thank Andris Ambainis and Andreas Winter for many helpful comments and interesting discussions. We would also like to thank Debbie Leung, John Smolin and Barbara Terhal for providing us with explicit details on the numerical studies conducted in~\cite{terhal:locking}. Thanks also to Matthias Christandl and Serge Massar for discussions on errors in string commitment protocols, to which end claim 1 was proved in the first place. Thanks also to Matthias Christandl and Ronald de Wolf for helpful comments on an earlier version of this note. We are supported by an NWO vici grant 2004-2009 and by the EU project QAP (IST 015848).
\section{Introduction} The recent increased interest in the analysis of hydrodynamic disc flows is motivated, on one hand, by the study of turbulent processes, and, on the other, by the investigation of regular structure formation in protoplanetary discs. Indeed, many astrophysical discs are thought to be neutral or to have ionization rates too low to couple effectively with the magnetic field. Among these are cool and dense areas of protoplanetary discs, discs around young stars, and X-ray transient and dwarf nova systems in quiescence (see e.g. Gammie and Menou 1998, Sano et al. 2000, Fromang, Terquem and Balbus 2002). Observational data show that astrophysical discs often exhibit radial gradients of thermodynamic variables (see e.g. Sandin et al. 2008, Isella et al. 2007). To what extent these inhomogeneities affect the processes occurring in the disc is still an open question. It has been found that strong local entropy gradients in the radial direction may drive the Rossby wave instability (Lovelace et al. 1999, Li et al. 2000), which converts thermal into kinetic energy and leads to vortex formation. However, in astrophysical discs, radial stratification is more likely to be weak. In this case, the radial entropy (temperature) variation on the global scale leads to the existence of baroclinic perturbations over the barotropic equilibrium state. This more realistic situation has recently become a subject of extensive study. Klahr and Bodenheimer (2003) pointed out that the radial stratification in the disc can lead to the global baroclinic instability. Numerical results show that the resulting state is highly chaotic and transports angular momentum outwards. Later Klahr (2004) performed a local 2D linear stability analysis of a radially stratified flow with constant surface density and showed that baroclinic perturbations can grow transiently during a limited time interval.
Johnson and Gammie (2005) derived analytic solutions for 3D linear perturbations in radially stratified discs in the Boussinesq approximation. They find that leading and trailing waves are characterized by positive and negative angular momentum flux, respectively. Later Johnson and Gammie (2006) performed numerical simulations, in the local shearing sheet model, to test the radial convective stability and the effects of baroclinic perturbations. They found no substantial instability due to the radial stratification. This result reveals a controversy over the issue of baroclinic instability. Presently, it seems that nonlinear baroclinic instability is an unlikely development in the local dynamics of sub-Keplerian discs with weak radial stratification. Potential vorticity production, and the formation and development of vortices in radially stratified discs, have been studied by Petersen et al. (2007a,b) using pseudospectral simulations in the anelastic approximation. They show that the existence of thermal perturbations in radially stratified disc flows leads to the formation of vortices. Moreover, stronger vortices appear in discs with higher temperature perturbations or in simulations with higher Reynolds numbers, and the transport of angular momentum may be both outward and inward. Keplerian differential rotation in the disc is characterized by a strong velocity shear in the radial direction. It is known that shear flows are non-normal and exhibit a number of transient phenomena due to the non-orthogonal nature of the operators (see e.g. Trefethen et al. 1993). In fact, the studies described above did not take into account the possibility of mode coupling and energy transfer between different modes due to the shear flow induced mode conversion. Mode coupling is inherent to shear flows (cf. Chagelishvili et al.
1995) and often, in many respects, defines the role of perturbation modes in the system dynamics and the further development of nonlinear processes. Thus, a correct understanding of the energy exchange channels between different modes in the linear regime is vital for a correct understanding of the nonlinear phenomena. Indications of the shear induced mode conversion can be found in a number of previous studies. Barranco and Marcus (2005) report that vortices are able to excite inertial gravity waves in 3D spectral simulations. Brandenburg and Dintrans (2006) have studied the linear dynamics of perturbation spatial Fourier harmonics (SFH) to analyze nonaxisymmetric stability in the shearing sheet approximation. The temporal evolution of the perturbation gain factors reveals a wave nature after the radial wavenumber changes sign. Compressible waves are present, along with vortical perturbations, in the simulation by Johnson \& Gammie (2005b), but their origin is not particularly discussed. In parallel, there are a number of papers that focus on the investigation of the shear induced mode coupling phenomena. The study of the linear coupling of modes in Keplerian flows has been conducted in the local shearing sheet approximation (Tevzadze et al. 2003, 2008) as well as in 2D global numerical simulations (Bodo et al. 2005, hereafter B05). Tevzadze et al. (2003) studied the linear dynamics of three-dimensional small scale perturbations (with characteristic scales much less than the disc thickness) in vertically (stably) stratified Keplerian discs. They show that vortex and internal gravity wave modes are coupled efficiently. B05 performed global numerical simulations of the linear dynamics of initially imposed two-dimensional pure vortex mode perturbations in compressible Keplerian discs with constant background pressure and density. The two modes possible in this system are effectively coupled: vortex mode perturbations are able to generate density-spiral waves.
The coupling is, however, strongly asymmetric: it is effective for wave generation by vortices, but not vice versa. The resulting dynamical picture points out the importance of mode coupling and the necessity of considering compressibility effects for processes with characteristic scales of the order of, or larger than, the disc thickness. Bodo et al. (2007) extended this work to nonlinear amplitudes and found that mode coupling is an efficient channel for energy exchange and is not an artifact of the linear analysis. B05 is particularly relevant to the present study, since it studies the dynamics of mode coupling in 2D unstratified flows and is a good starting point for a further extension to radially stratified flows. Later, Heinemann \& Papaloizou (2009a) derived WKBJ solutions of the generated waves and performed numerical simulations of the wave excitation by turbulent fluctuations (Heinemann \& Papaloizou 2009b). In the present paper we study the linear dynamics of perturbations and analyze shear flow induced mode coupling in the local shearing sheet approximation. We investigate the properties of mode coupling using qualitative analysis within the three-mode approximation. Within this approximation we tentatively distinguish vorticity, entropy and pressure modes. Quantitative results on mode conversion are derived numerically. It seems that a weak radial stratification, while being a weak factor for disc stability, still provides an additional degree of freedom (an active entropy mode), opening new options for velocity shear induced mode conversion that may be important for the system behavior. One of the direct results of mode conversion is the possibility of linear generation of the vortex mode (i.e., potential vorticity) by compressible perturbations.
We want to stress the possibility of coupling between high and low frequency perturbations, considering that high frequency oscillations have often been neglected in previous investigations, in particular for protoplanetary discs. Conventionally, there are two distinct viewpoints commonly employed in the investigation of hydrodynamic astrophysical discs. In one case (self-gravitating galactic discs) the emphasis is placed on the investigation of the dynamics of spiral-density waves, and vortices, although normally present in numerical simulations, are thought to play a minor role in the overall dynamics. In the other case (non-self-gravitating hydrodynamic discs) the focus is on the potential vorticity perturbations, and density-spiral waves are often thought to play a minor role. Here, discussing the possible (multi) mode couplings, we want to draw attention to the possible flaws of these simplified views (see e.g. Mamatsashvili \& Chagelishvili 2007). In many cases, mode coupling makes different perturbations participate equally in the dynamical processes despite a significant difference in their temporal scales. In the next section we present the mathematical formalism of our study. We describe the three-mode formalism and give a schematic picture of the linear mode coupling in the radially sheared and stratified flow. Numerical analysis of the mode coupling is presented in Sec. 3. We evaluate mode coupling efficiencies at different radial stratification scales of the equilibrium pressure and entropy. The paper is summarized in Sec. 4.
\section{Basic equations} The governing ideal hydrodynamic equations of two-dimensional compressible disc flow in polar coordinates are: \begin{equation} {\partial \Sigma \over \partial t} + {1 \over r} {\partial \left( r \Sigma V_r \right) \over \partial r} + {1 \over r} {\partial \left( \Sigma V_\phi \right)\over \partial \phi} = 0~, \end{equation} \begin{equation} {\partial V_r \over \partial t} + V_r{\partial V_r \over \partial r}+ {V_\phi \over r}{\partial V_r \over \partial \phi} - {V_\phi^2 \over r} = -{1 \over \Sigma} {\partial P \over \partial r} - {\partial \psi_g \over \partial r} ~, \end{equation} \begin{equation} {\partial V_\phi \over \partial t} + V_r{\partial V_\phi \over \partial r}+ {V_\phi \over r}{\partial V_\phi \over \partial \phi} + {V_r V_\phi \over r} = -{1 \over \Sigma r}{\partial P\over \partial \phi}~, \end{equation} \begin{equation} {\partial P \over \partial t} + V_r{\partial P \over \partial r}+ {V_\phi \over r}{\partial P \over \partial \phi} = - {\gamma P} \left( {1 \over r} {\partial (r V_r) \over \partial r} + {1 \over r} {\partial V_\phi \over \partial \phi} \right)~, \end{equation} where $V_r$ and $V_\phi$ are the radial and azimuthal flow velocities, respectively; $P(r,\phi)$, $\Sigma(r,\phi)$ and $\gamma$ are the pressure, the surface density and the adiabatic index. $\psi_g$ is the gravitational potential of the central mass; in the absence of self-gravitation $\psi_g \sim -{1 / r}$. This potential determines the Keplerian angular velocity: \begin{equation} {\partial \psi_g \over \partial r} = \Omega_{Kep}^2 r ~,~~~~ \Omega_{Kep} \sim r^{-3/2}. \end{equation} \subsection{Equilibrium state} We consider an axisymmetric $(\partial / \partial \phi \equiv 0)$, azimuthal $(\bar {V}_{r} = 0)$ and differentially rotating basic flow: $\bar {V}_{\phi}= \Omega(r)r$.
In the 2D radially stratified equilibrium (see Klahr 2004), all variables are assumed to follow a simple power law behavior: \begin{equation} \bar {\Sigma}(r) = \Sigma_0 \left( {r \over r_0} \right)^{-\beta_\Sigma},~~~~\bar {P}(r) = P_0\left( {r \over r_0} \right)^{-\beta_P} ~, \end{equation} where overbars denote equilibrium and $\Sigma_0$ and $P_0$ are the values of the equilibrium surface density and pressure at some fiducial radius $r = r_0$. The entropy can be calculated as: \begin{equation} \bar S = \bar P \bar \Sigma^{-\gamma} = P_0 \Sigma_0^{-\gamma} \left(r \over r_0 \right)^{-\beta_S} ~, \end{equation} where \begin{equation} \beta_S \equiv \beta_P - \gamma \beta_\Sigma ~. \end{equation} $\bar S$ is sometimes called the potential temperature; the physical entropy can be derived from it as $C_V \log \bar S + {\rm const}$. This equilibrium shows a deviation from the Keplerian profile due to the radial stratification: \begin{equation} \Delta \Omega^2(r) = \Omega^2(r) - \Omega^2_{Kep}= {1 \over r {\bar {\Sigma}(r)}} {\partial {\bar{P}(r)}\over \partial r} = - {P_0 \over \Sigma_0} {\beta_P \over r_0^2}\left( {r \over r_0} \right)^{\beta_\Sigma-\beta_P-2} ~. \end{equation} Hence, the described state is sub-Keplerian or super-Keplerian when the radial gradient of pressure is negative ($\beta_P>0$) or positive ($\beta_P<0$), respectively. Although these discs are non-Keplerian, they are still rotationally supported, since the deviation from the Keplerian profile is small: $~\Delta \Omega^2(r)\ll \Omega^2_{Kep}$.
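As a quick sanity check of the deviation $\Delta\Omega^2$ derived above, the following sketch (an illustration with arbitrarily chosen parameter values, not taken from the paper) compares a finite-difference evaluation of $(1/r\bar\Sigma)\,\partial\bar P/\partial r$ against the closed-form right-hand side:

```python
P0, Sigma0, r0 = 2.0, 3.0, 1.5      # arbitrary equilibrium constants (illustrative)
bSig, bP = 0.7, 1.2                 # sample stratification exponents (hypothetical)
P = lambda r: P0 * (r / r0) ** (-bP)
Sigma = lambda r: Sigma0 * (r / r0) ** (-bSig)

h = 1e-6
for r in (0.8, 1.5, 3.0):
    dPdr = (P(r + h) - P(r - h)) / (2 * h)          # central-difference dP/dr
    lhs = dPdr / (r * Sigma(r))                     # (1/(r Sigma)) dP/dr
    rhs = -(P0 / Sigma0) * (bP / r0**2) * (r / r0) ** (bSig - bP - 2)
    assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```

With $\beta_P > 0$ the right-hand side is negative at every radius, confirming the sub-Keplerian case discussed above.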
\subsection{Linear perturbations} We split the physical variables into mean and perturbed parts: \begin{equation} \Sigma(r,\phi) = {\bar {\Sigma}(r)} + {{\Sigma}^\prime(r,\phi)} ~, \end{equation} \begin{equation} P(r,\phi) = {\bar{P}(r)} + P^\prime(r,\phi) ~, \end{equation} \begin{equation} V_r(r,\phi) = V_r^\prime(r,\phi) ~, \end{equation} \begin{equation} V_\phi(r,\phi) = \Omega(r) r + V_\phi^\prime(r,\phi) ~. \end{equation} In order to remove background trends from the perturbations we employ the global radial power-law scaling for the perturbed quantities: \begin{equation} \hat \Sigma(r) \equiv \left({r \over r_0}\right)^{-\delta_\Sigma} \Sigma^\prime(r) ~, \end{equation} \begin{equation} \hat P(r) \equiv \left({r \over r_0}\right)^{-\delta_P} P^\prime(r) ~, \end{equation} \begin{equation} \hat {\bf V}(r) \equiv \left({r \over r_0}\right)^{-\delta_V} {\bf V}^\prime(r) ~. \end{equation} With these definitions one obtains the following dynamical equations for the scaled perturbed variables: \begin{equation} \left\{ {\partial \over \partial t} + \Omega(r) {\partial \over \partial \phi} \right\} {\hat \Sigma \over \Sigma_0} + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{equation} $$ \left( {r \over r_0} \right)^{-\beta_\Sigma-\delta_\Sigma+\delta_V} \left[ {\partial \hat V_r \over \partial r} + {1 \over r} {\partial \hat V_\phi \over \partial \phi} + {1+\delta_V-\beta_\Sigma \over r} \hat V_r \right] = 0 ~, $$ \begin{equation} \left\{ {\partial \over \partial t} + \Omega(r) {\partial \over \partial \phi} \right\} \hat V_r - 2\Omega(r) \hat V_\phi + \end{equation} $$ {c_s^2 \over \gamma} \left({r \over r_0} \right)^{\beta_\Sigma+\delta_P-\delta_V} {\partial \over \partial r} {\hat P \over P_0} + c_s^2 {\delta_P \over \gamma r_0} \left( {r \over r_0} \right)^{\beta_\Sigma+\delta_P-\delta_V-1} {\hat P \over P_0} + $$ $$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ c_s^2 {\beta_P \over \gamma r_0} \left( {r \over r_0}
\right)^{2\beta_\Sigma+\delta_\Sigma-\beta_P-\delta_V-1} {\hat \Sigma \over \Sigma_0} = 0 ~, $$ \begin{equation} \left\{ {\partial \over \partial t} + \Omega(r) {\partial \over \partial \phi} \right\} \hat V_\phi + \left( 2 \Omega(r) + r {\partial \Omega(r) \over \partial r} \right) \hat V_r + ~~~~~~~~~~~~~~~ \end{equation} $$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {c_s^2 \over \gamma r_0} \left( {r \over r_0} \right)^{\beta_\Sigma+\delta_P-\delta_V-1} {\partial \over \partial \phi} {\hat P \over P_0} = 0 ~, $$ \begin{equation} \left\{ {\partial \over \partial t} + \Omega(r) {\partial \over \partial \phi} \right\} {\hat P \over P_0} + \end{equation} $$ \gamma \left( {r \over r_0} \right)^{-\beta_P+\delta_V-\delta_P} \left[ {\partial \hat V_r \over \partial r} + {1 \over r} {\partial \hat V_\phi \over \partial \phi} + {1+\delta_V-\beta_P/\gamma \over r} \hat V_r \right] = 0 ~, $$ where $c_s^2 = \gamma P_0/\Sigma_0$ is the squared sound speed at $r=r_0$. \subsection{Local approximation} The linear dynamics of perturbations in differentially rotating flows can be effectively analyzed in the local co-rotating shearing sheet frame (e.g., Goldreich \& Lynden-Bell 1965; Goldreich \& Tremaine 1978). This approximation simplifies the mathematical description of flows with inhomogeneous velocity. In radially stratified flows the spatial inhomogeneity of the governing equations comes not only from the equilibrium velocity, but also from the pressure, density and entropy profiles. In this case we first re-scale the perturbations in the global frame in order to remove background trends from the linear perturbations, rather than using the complete form of the perturbations to the equilibrium (see Eqs. 14-16). Hence, using the re-scaled linear perturbations ($\hat P$, $\hat \Sigma$, $\hat {\rm \bf V}$), we may simplify the local shearing sheet description as follows.
Introduction of a local Cartesian co-ordinate system: \begin{equation} x \equiv r - r_0~,~~~~ y \equiv r_0 (\phi - \Omega_0 t)~,~~~~{x \over r_0} ,~ {y \over r_0} \ll 1~, \end{equation} \begin{equation} {\partial \over \partial x} = {\partial \over \partial r}~,~~~ {\partial \over \partial y} = {1 \over r_0}{\partial \over \partial \phi}~,~~~ {\partial \over \partial t} = {\partial \over \partial t} - r_0 \Omega_0 {\partial \over \partial y}, \end{equation} where $\Omega_0$ is the local rotation angular velocity at $r=r_0$, transforms the global differential rotation into a local radial shear flow, and the two Oort constants define the local shear rate: \begin{equation} A \equiv {1 \over 2} r_0 \left[ {\partial \Omega(r) \over \partial r}\right]_{r=r_0}~,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{equation} \begin{equation} B \equiv - {1 \over 2} \left[ r{\partial \Omega(r)\over \partial r} + 2\Omega(r) \right]_{r=r_0}= -A - \Omega_0~. \end{equation} Hence, the equations describing the linear dynamics of perturbations in the local approximation read as follows: \begin{equation} \left\{ {\partial \over \partial t} + 2Ax {\partial \over \partial y} \right\} {\hat P \over \gamma P_0} + \end{equation} $$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left[ {\partial \hat V_x \over \partial x} + {\partial \hat V_y \over \partial y} + {1+\delta_V-\beta_P/\gamma \over r_0} \hat V_x \right] = 0 ~, $$ \begin{equation} \left\{ {\partial \over \partial t} + 2Ax {\partial \over \partial y} \right\} {\hat V_x} - 2\Omega_0 \hat V_y + \end{equation} $$ ~~~~~~~~~~~~~~~~~~~~~ c_s^2 \left[ {\partial \over \partial x} {\hat P \over \gamma P_0} + {\delta_P + \beta_P/\gamma \over r_0} {\hat P \over \gamma P_0} - {\beta_P \over \gamma r_0} {\hat S \over \gamma P_0}\right] = 0 ~, $$ \begin{equation} \left\{ {\partial \over \partial t} + 2Ax {\partial \over \partial y} \right\} {\hat V_y} - 2B \hat V_x + c_s^2 {\partial \over \partial y} {\hat P \over \gamma P_0} =0 ~, \end{equation} \begin{equation} \left\{ {\partial
\over \partial t} + 2Ax {\partial \over \partial y} \right\} {\hat S \over \gamma P_0} - {\beta_S \over \gamma r_0} \hat V_x = 0 ~, \end{equation} where $\hat S $ is the entropy perturbation: \begin{equation} \hat S \equiv \hat P - c_s^2 \hat \Sigma ~. \end{equation} Now we may adjust the global scaling law of the perturbations in order to simplify the local shearing sheet description (see Eqs. 25,26): \begin{equation} 1 + \delta_V - \beta_P/\gamma = 0 ~, \end{equation} \begin{equation} \delta_P + \beta_P/\gamma = 0 ~. \end{equation} Let us introduce spatial Fourier harmonics (SFHs) of perturbations with time dependent phases: \begin{equation} \left( \begin{array}{c} {\hat V}_x({\bf r},t) \\ {\hat V}_y({\bf r},t) \\ {{\hat P}({\bf r},t) / \gamma P_0} \\ {{\hat S}({\bf r},t) / \gamma P_0} \end{array} \right) = \left( \begin{array}{r} u_x({\bf k}(t),t) \\ u_y({\bf k}(t),t) \\ -{\rm i} p({\bf k}(t),t) \\ s({\bf k}(t),t) \end{array} \right) \times \end{equation} $$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \exp \left( {\rm i} k_x(t) x + {\rm i} k_y y \right) ~, $$ with \begin{equation} k_x(t) = k_x(0) - 2Ak_y t~. \end{equation} Using the above expansion and Eqs. (27-30), we obtain a compact ODE system that governs the local dynamics of the SFHs of perturbations: \begin{equation} {{\rm d} \over {\rm d} t} p - k_x(t) u_x - k_y u_y = 0 ~, \end{equation} \begin{equation} {{\rm d} \over {\rm d} t} u_x - 2 \Omega_0 u_y + c_s^2 k_x(t) p - c_s^2 k_P s = 0 ~, \end{equation} \begin{equation} {{\rm d} \over {\rm d} t} u_y - 2 B u_x + c_s^2 k_y p = 0 ~, \end{equation} \begin{equation} {{\rm d} \over {\rm d} t} s - k_S u_x = 0 ~, \end{equation} where \begin{equation} k_P = {\beta_P \over \gamma r_0} ~,~~~ k_S = {\beta_S \over \gamma r_0} ~. \end{equation} The potential vorticity, \begin{equation} W \equiv k_x(t)u_y - k_y u_x - 2B p ~, \end{equation} is a conserved quantity in barotropic flows: $W = {\rm const.}$ when $k_P=0$.
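The system (34)-(37) and the conservation statement are straightforward to check numerically. The sketch below (our illustrative parameter values, with Keplerian Oort constants; units with $c_s=\Omega_0=1$ are an assumption of this sketch) integrates the SFH system with a fourth-order Runge--Kutta step and verifies that $W$ stays constant when $k_P=0$:

```python
# RK4 integration of the SFH system, Eqs. (34)-(37), verifying that the
# potential vorticity W (Eq. 39) is conserved in the barotropic case k_P = 0.
# Units with c_s = Omega_0 = 1; all parameter values are illustrative.
cs, Omega0 = 1.0, 1.0
A = -0.75 * Omega0                       # Keplerian shear, A = -(3/4) Omega_0
B = -A - Omega0                          # second Oort constant, Eq. (24)
ky, kx0 = 2.0, -5.0
kP, kS = 0.0, 0.2                        # barotropic: k_P = 0

def kx(t):
    return kx0 - 2.0 * A * ky * t        # drifting radial wavenumber, Eq. (33)

def rhs(t, y):
    p, ux, uy, s = y
    return [kx(t) * ux + ky * uy,                                  # Eq. (34)
            2 * Omega0 * uy - cs**2 * kx(t) * p + cs**2 * kP * s,  # Eq. (35)
            2 * B * ux - cs**2 * ky * p,                           # Eq. (36)
            kS * ux]                                               # Eq. (37)

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k1)])
    k3 = rhs(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k2)])
    k4 = rhs(t + dt, [a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def W(t, y):
    p, ux, uy, s = y
    return kx(t) * uy - ky * ux - 2 * B * p   # potential vorticity, Eq. (39)

y, t, dt = [0.1, 0.0, 0.05, 0.0], 0.0, 1e-3
W0 = W(t, y)
for _ in range(5000):
    y = rk4_step(t, y, dt)
    t += dt
assert abs(W(t, y) - W0) < 1e-6          # W = const when k_P = 0
```

Setting $k_P \neq 0$ instead makes $W$ drift according to ${\rm d}W/{\rm d}t = -c_s^2 k_P k_y\, s$, the baroclinic source exploited below.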
\subsection{Perturbations at rigid rotation} The dispersion equation of our system can be obtained in the shearless limit ($A=0$, $B=-\Omega$). Hence, using a Fourier expansion of perturbations in time $\propto \exp({\rm i} \omega t)$, we obtain: \begin{equation} \omega^4 - \left( c_s^2 k^2 + 4 \Omega_0^2 - c_s^2 \eta \right) \omega^2 - c_s^4 \eta k_y^2 = 0 ~, \end{equation} where \begin{equation} \eta \equiv k_P k_S = {\beta_P \beta_S \over \gamma^2 r_0^2} ~. \end{equation} Solutions of Eq. (40) describe a compressible density-spiral mode and a convective mode that involves perturbations of entropy and potential vorticity. For weakly stratified discs $(\eta \ll k^2)$, the frequencies are: \begin{equation} \bar \omega_{p}^2 = c_s^2 k^2 + 4 \Omega_0^2 ~, \end{equation} \begin{equation} \bar \omega_{c}^2 = - {c_s^4 \eta k_y^2 \over c_s^2 k^2 + 4 \Omega_0^2} ~. \end{equation} The high frequency solutions ($\bar \omega_{p}^2$) describe the density-spiral waves and will be referred to later as P-modes. The low frequency solutions ($\bar \omega_{c}^2$), instead, describe the radial buoyancy mode due to the stratification. In barotropic flows ($\eta=0$) this mode degenerates into the stationary zero-frequency vortical solution. Therefore, we may refer to it as a baroclinic mode. The mode describes an instability when $\eta>0$; in this case the equilibrium pressure and entropy gradients point in the same direction. Klahr (2004) anticipated this result, although he worked in the constant surface density limit ($\beta_\Sigma=0$). The same behavior has been obtained for axisymmetric perturbations by Johnson and Gammie (2005). In contrast, in our model baroclinic perturbations are intrinsically non-axisymmetric.
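The two branches of Eq. (40) and the accuracy of the weak-stratification approximations, Eqs. (42)-(43), can be checked directly; the parameter values below are illustrative (units with $c_s=\Omega_0=1$, our assumption):

```python
# Solve the biquadratic dispersion relation, Eq. (40), for omega^2 and
# compare with the weak-stratification limits, Eqs. (42)-(43).
import math

cs, Omega0 = 1.0, 1.0
kx, ky = 5.0, 2.0
k2 = kx**2 + ky**2
eta = 0.01                               # eta = k_P * k_S << k^2

# Eq. (40) as omega^4 + b*omega^2 + c = 0:
b = -(cs**2 * k2 + 4 * Omega0**2 - cs**2 * eta)
c = -cs**4 * eta * ky**2
disc = math.sqrt(b * b - 4 * c)
w2_p = (-b + disc) / 2                   # high-frequency (P-mode) branch
w2_c = (-b - disc) / 2                   # low-frequency (convective) branch

w2_p_approx = cs**2 * k2 + 4 * Omega0**2                            # Eq. (42)
w2_c_approx = -cs**4 * eta * ky**2 / (cs**2 * k2 + 4 * Omega0**2)   # Eq. (43)

assert abs(w2_p - w2_p_approx) / w2_p_approx < 1e-2
assert abs(w2_c - w2_c_approx) < 1e-5
assert w2_c < 0 < w2_p                   # eta > 0: convective branch unstable
```

For $\eta<0$ the constant term changes sign and both roots $\omega^2$ are real and positive, recovering a stable, oscillatory buoyancy mode.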
Hence, our result obtained in the rigidly rotating limit shows that the local exponential stability of the radial baroclinic mode is governed by the Schwarzschild-Ledoux criterion: the mode is unstable when \begin{equation} {{\rm d} \bar P \over {\rm d} r} {{\rm d} \bar S \over {\rm d} r} > 0 ~. \end{equation} The dynamics of the linear modes can be described using the modal equations for the eigenfunctions: \begin{equation} \left\{ {{\rm d}^2 \over {\rm d} t^2} + \bar \omega_{p,c}^2 \right\} \Phi_{p,c}(t) = 0 ~, \end{equation} where $\Phi_p(t)$ and $\Phi_c(t)$ are the eigenfunctions of the pressure and convective (baroclinic) modes, respectively. The form of these functions can be derived from Eqs. (34-41) in the shearless limit: \begin{equation} \Phi_{p,c}(t) = (\bar \omega_{p,c}^2+c_s^2 \eta) p(t) - 2 \Omega_0 W(t) - c_s^2 k_P k_x s(t) ~. \end{equation} All physical variables in our system ($p$, $u_x$, $u_y$, $s$) can be expressed through the two modal eigenfunctions and their first time derivatives ($\Phi_{p,c}$, $\Phi_{p,c}^\prime$). Hence, we can fully derive the perturbation field of a specific mode individually by setting the eigenfunction of the other mode equal to zero. As we will see later, the Keplerian shear leads to the degeneracy of the convective buoyancy mode. In this case only the shear-modified density-spiral wave mode eigenfunction can be employed in the analysis. \subsection{Perturbations in shear flow: mode coupling} It is well known that velocity shear introduces non-normality into the governing equations, which significantly affects the dynamics of different perturbations. In this case we benefit from the shearing sheet transformation and seek the solutions in the form of the so-called Kelvin modes. These originate from the vortical solutions derived in the seminal paper by Kelvin (1887). In fact, as argued more recently (see e.g., Volponi and Yoshida 2002), the shearing sheet transformation leads to some sort of generalized modal approach.
Shear modes arising in such a description differ from linear modes with exponential time dependence in many respects. Primarily, the phases of these continuous spectrum shear modes vary in time through the shearing wavenumber; their amplitudes can be time dependent; and, most importantly, they can couple during limited time intervals. On the other hand, shear modes can be well separated asymptotically, where the analytic WKBJ solution for each mode becomes increasingly accurate. In the following, we will simply refer to these shearing sheet solutions as ``modes''. The character of shear flow effects depends significantly on the value of the velocity shear parameter. To estimate the time-scales of the processes we compare the characteristic frequencies of the linear modes $|\bar \omega_p|$, $|\bar \omega_c|$ and the velocity shear $|A|$. In order to speak about the modification of a linear mode by the velocity shear, the basic frequency of the mode should be higher than the one set by the shear itself: $\omega^2 > A^2$. Otherwise the modal solution cannot be used to calculate the perturbation dynamics, since perturbations will obey the shear induced variations at shorter timescales. In quasi-Keplerian differentially rotating discs with weak radial stratification: \begin{equation} \bar \omega_p^2 \gg A^2 ~~~~ {\rm and} ~~~ \bar \omega_{c}^2 \ll A^2 ~, {~~~ \rm when ~~~} {\beta_P \beta_S \over \gamma^2} \ll 1 ~. \end{equation} In this case the convective mode diverges from its modal behavior and is strongly affected by the velocity shear: the thermal and kinematic parts obey shear driven dynamics individually. Therefore, we tentatively distinguish shear driven vorticity (W) and entropy (S) modes. On the contrary, the high frequency pressure mode is only modified by the action of the background shear. Hence, we adopt the above described three mode (S, W and P) formalism as the framework for our further study.
For the description of the P mode in differential rotation we define the function: \begin{equation} \Psi_p(t) = \omega_p^2(t) p(t) - 2\Omega_0 W(t) - c_s^2 k_P k_x(t) s(t) ~, \end{equation} where \begin{equation} \omega_p^2(t) = c_s^2 k^2(t) - 4B \Omega_0 ~. \end{equation} This can be considered as the generalization of the $\Phi_p(t)$ eigenfunction to the case of the shear flow, obtained by accounting for the temporal variation of the radial wavenumber. In order to analyze the mode coupling in the considered limit, we rewrite Eqs. (34-39) as follows: \begin{equation} \left\{ {{\rm d}^2 \over {\rm d} t^2} + f_p {{\rm d} \over {\rm d} t} + \omega_p^2 - \Delta \omega_p^2 \right\} \Psi_p = \chi_{pw} W + \chi_{ps} s ~, \end{equation} \begin{equation} \left\{ {{\rm d} \over {\rm d} t} + f_s \right\} s = \chi_{s p 1} {{\rm d} \Psi_p \over {\rm d} t} + \chi_{s p 2} \Psi_p + \chi_{s w} W ~, \end{equation} \begin{equation} {{\rm d} W \over {\rm d} t} = \chi_{ws} s ~, \end{equation} where $f_p$ and $\Delta \omega_p^2$ describe the shear flow induced modification to the P-mode \begin{equation} f_p = 4 A { k_x k_y \over k^2} - 2 {(\omega_p^2)^\prime \over \omega_p^2 } ~, \end{equation} \begin{equation} \Delta \omega_p^2 = {(\omega_p^2)^{\prime \prime} \over \omega_p^2} + f_p {(\omega_p^2)^\prime \over \omega_p^2 } + 8AB{k_y^2 \over k^2} ~, \end{equation} the parameter $f_s$ describes the modification to the entropy mode \begin{equation} f_s = c_s^2 \eta {k_x^2 (\omega_p^2)^\prime \over k^2 \omega_p^4} ~, \end{equation} and the $\chi$ parameters describe the coupling between the different modes: \begin{equation} \chi_{pw} = 2 \Omega_0 \Delta \omega_p^2(t) + 4A {k_y^2 \over k^2} \omega_p^2 ~, \end{equation} \begin{equation} \chi_{ps} = c_s^2 k_P k_x \left( \Delta \omega_p^2 + 4B {k_y \over k_x} {(\omega_p^2)^\prime \over \omega_p^2} - 8AB {k_y^2 \over k^2} \right) ~, \end{equation} \begin{equation} \chi_{s p 1} = {k_S k_x \over k^2 \omega_p^2 }~, \end{equation} \begin{equation} \chi_{s p 2} =
-{k_S k_x \over k^2 \omega_p^2 } \left( {(\omega_p^2)^\prime \over \omega_p^2} + 2B {k_y \over k_x} \right) ~, \end{equation} \begin{equation} \chi_{s w} = -{2 \Omega k_S k_x \over k^2 \omega_p^2} \left( {(\omega_p^2)^\prime \over \omega_p^2} + 2B {k_y \over k_x} + {k_y \omega_p^2 \over 2 \Omega k_x } \right) ~, \end{equation} \begin{equation} \chi_{w s} = -c_s^2 k_P k_y ~. \end{equation} Here a prime denotes a time derivative. Equations (50-52) describe the linear dynamics of the modes and their coupling in the considered three mode model. In this limit, our interpretation is that the homogeneous parts of the equations describe the individual dynamics of the modes, while the right hand side terms act as source terms and describe the mode coupling. This tentative separation is already fruitful in a qualitative description of mode coupling. The dynamics of the density-spiral wave mode in differential rotation is described by the homogeneous part of Eq. (50). The homogeneous part of Eq. (51) describes the modifications to the entropy dynamics. The inhomogeneous parts of Eqs. (50-52) reveal the coupling terms between the three linear modes that originate from the background velocity shear and radial stratification. We analyze the mode coupling dynamics numerically, but use the coupling $\chi$ coefficients for a qualitative description. A sketch of the mode coupling in the above described three-mode approximation is shown in Fig. \ref{coupling}. The figure reveals a complex picture of the three mode coupling that originates from the combined action of velocity shear and radial stratification. The temporal variation of the coupling coefficients during the swing of the perturbation SFHs from leading to trailing phases is shown in Fig. \ref{chi}. The relative amplitudes of the $\chi_{pw}$ and $\chi_{ps}$ parameters reveal that potential vorticity is a somewhat more effective source of P-mode perturbations when compared to the entropy mode.
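The time dependence of the coupling coefficients is easy to trace in a few lines of code. The sketch below (illustrative, with Keplerian Oort constants and parameter values loosely following the caption of Fig. \ref{chi}; units with $c_s=\Omega_0=1$ are our assumption) evaluates two representative channels, Eqs. (58) and (61), across the swing of $k_x(t)$ through zero:

```python
# Trace two representative coupling coefficients, chi_sp1 (Eq. 58) and
# chi_ws (Eq. 61), along the swing k_x(t) = k_x(0) - 2*A*k_y*t of an SFH.
# Units with c_s = Omega_0 = 1; Keplerian Oort constants; illustrative values.
cs, Omega0 = 1.0, 1.0
A, B = -0.75 * Omega0, -0.25 * Omega0    # Oort constants for Omega ~ r^(-3/2)
ky, kx0 = 1.0, -6.0
kP = kS = 0.5

def kx(t):
    return kx0 - 2.0 * A * ky * t        # Eq. (33)

def w2(t):
    return cs**2 * (kx(t)**2 + ky**2) - 4.0 * B * Omega0   # Eq. (49)

def chi_sp1(t):
    return kS * kx(t) / ((kx(t)**2 + ky**2) * w2(t))       # Eq. (58)

def chi_ws(t):
    return -cs**2 * kP * ky                                # Eq. (61)

t_swing = kx0 / (2.0 * A * ky)           # time at which k_x(t) = 0
assert abs(kx(t_swing)) < 1e-12
# chi_sp1 changes sign as the SFH swings from leading to trailing phase,
assert chi_sp1(t_swing - 1.0) < 0.0 < chi_sp1(t_swing + 1.0)
# while the baroclinic W <- S channel stays constant in time.
assert chi_ws(0.0) == chi_ws(10.0)
```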
On the other hand, it seems that the S-mode excitation sources due to potential vorticity ($\chi_{sw}$) can be stronger when compared with the P-mode sources ($\chi_{sp1}$, $\chi_{sp2}$). The effect of the stratification parameters on the mode coupling is somewhat more apparent. First, we may conclude that the excitation of the entropy mode, which depends on the parameters $\chi_{sp1}$, $\chi_{sp2}$ and $\chi_{sw}$, is generally a stronger process for higher entropy stratification scales $k_S$ (see Eqs. 58-60). Second, we see that the generation of the potential vorticity, depending on the $\chi_{ws}$ parameter, proceeds more effectively at high pressure stratification scales $k_P$. Third, we see a profound asymmetry in the three-mode coupling: the P-mode is not coupled with the W-mode {\it directly}. A quantitative estimate of the mode excitation parameters can be obtained numerically. In this case, the amplitudes of the generated W and S modes can be estimated through the values of potential vorticity or entropy outside the coupling area. In order to quantify the second order P-mode dynamics we define its modal energy as follows: \begin{equation} E_P(t) \equiv |\Psi_p(t)^\prime|^2 + \omega_p(t)^2 |\Psi_p(t)|^2 ~. \end{equation} This quadratic form is a good approximation to the P-mode energy in the areas where it obeys adiabatic dynamics: $k_x(t)/k_y \gg 1$. The presented qualitative analysis suggests that perturbations of the density-spiral waves can generate entropy perturbations not only due to the flow viscosity (not included in our formalism), but also kinematically, due to the velocity shear induced mode coupling. The generated entropy perturbations should further excite potential vorticity through baroclinic coupling. Hence, it seems that in baroclinic flows, contrary to the barotropic case, P-mode perturbations are able to generate potential vorticity through a three-mode coupling mechanism: P $\to$ S $\to$ W.
We believe that traces of the described mode coupling can also be seen in Klahr (2004), where the process has not been fully resolved due to the numerical filters used to remove higher frequency oscillations. \begin{figure} \begin{center} \includegraphics[width=80mm]{Coupling.eps} \end{center} \caption{ Mode coupling scheme. In the zero shear limit the two second order modes, the P-mode and the buoyancy mode with eigenfunctions $\Phi_p$ and $\Phi_c$, are uncoupled. In the shear flow, when the characteristic time of shearing is shorter than the buoyancy mode temporal variation scale ($A^2 > \bar \omega_c^2$), we use the three mode formalism. In this limit we consider the coupling of the P, W, and S modes. The $\chi$ parameters describe the strength of the coupling channels. The asymmetry of the mode coupling is revealed in the fact that compressible oscillations of the pressure mode are not able to directly generate potential vorticity, but still do so via interaction with the S-mode and further baroclinic ties with the W-mode.} \label{coupling} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{chi.eps} \end{center} \caption{The coupling $\chi$ parameters vs. the ratio of radial to azimuthal wavenumbers, $k_x(t)/k_y$, as $k_x(t)$ passes through zero during the time interval $\Delta t = 4 \Omega_0^{-1}$. Here $k_y = H^{-1}$, $k_P = k_S = 0.5 H^{-1}$. } \label{chi} \end{figure} \section{Numerical Results} In order to study the mode coupling dynamics in more detail we employ numerical solutions of Eqs. (34-37). We impose initial conditions that correspond to one of the three modes and use a standard Runge-Kutta scheme for numerical integration (MATLAB ode34 RK implementation). Perturbations corresponding to the individual modes at the initial point in time are derived in Appendix A. \subsection{W-mode: direct coupling with S and P-modes} In this subsection we consider the dynamics of SFHs when only perturbations of the potential vorticity are imposed initially.
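As a rough illustration of this experiment (a sketch, not the paper's actual setup: the initial state below is a crude vorticity-dominated stand-in for the Appendix A mode decomposition, and all parameter values are ours), integrating Eqs. (34)-(37) through the swing of $k_x(t)$ shows entropy perturbations appearing from an initially entropy-free state:

```python
# Toy W-mode run: integrate Eqs. (34)-(37) from a vorticity-dominated state
# (s = 0 initially) and check that entropy (S-mode) perturbations appear
# after k_x(t) swings through zero. Illustrative values; c_s = Omega_0 = 1.
cs, Omega0 = 1.0, 1.0
A, B = -0.75 * Omega0, -0.25 * Omega0    # Keplerian Oort constants
ky, kx0 = 2.0, -10.0
kP = kS = 0.2                            # baroclinic equilibrium, eta > 0

def rhs(t, y):
    kx = kx0 - 2.0 * A * ky * t          # Eq. (33)
    p, ux, uy, s = y
    return [kx * ux + ky * uy,                                  # Eq. (34)
            2 * Omega0 * uy - cs**2 * kx * p + cs**2 * kP * s,  # Eq. (35)
            2 * B * ux - cs**2 * ky * p,                        # Eq. (36)
            kS * ux]                                            # Eq. (37)

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k1)])
    k3 = rhs(t + dt / 2, [a + dt / 2 * b for a, b in zip(y, k2)])
    k4 = rhs(t + dt, [a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

# pure azimuthal velocity: W(0) = kx0*uy != 0 while s(0) = 0
y, t, dt = [0.0, 0.0, 0.1, 0.0], 0.0, 5e-4
smax = 0.0
while t < 8.0:                           # the swing (k_x = 0) occurs at t = 10/3
    y = rk4_step(t, y, dt)
    t += dt
    if t > 5.0:
        smax = max(smax, abs(y[3]))
assert smax > 1e-4                       # S-mode generated by mode coupling
```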
As known from previous studies (see Chagelishvili et al. 1997, Bodo et al. 2005), vorticity perturbations are able to excite acoustic modes nonadiabatically in the vicinity of the area where $k_x(t)=0$. Here we observe a similar, but more complex, behavior of mode coupling: the W-mode is able to generate P and S-modes simultaneously. Fig. \ref{SFH_w1} shows the evolution of the W-mode perturbations in a flow with growing baroclinic perturbations ($\eta>0$). The results show the excitation of both S and P-mode perturbations due to mode coupling that occurs in a short period of time in the vicinity of $t=10$. The subsequent growth of the negative potential vorticity is due to the baroclinic coupling of entropy and potential vorticity perturbations. Fig. \ref{SFH_w2} shows the evolution of the potential vorticity SFH in flows with negative $\eta$. After the mode coupling and generation of P and S-modes, we observe a decrease of the potential vorticity. This represents the well known fact that stable stratification (positive Richardson number) can play the role of a ``baroclinic viscosity'' for the vorticity perturbations. Numerical calculations show that the efficiency of the mode coupling generally decreases as we increase the azimuthal wavenumber $k_y$, corresponding to an increase of the density-spiral wave frequency: lower frequency waves couple more efficiently. To test the effect of the background stratification parameters on the mode coupling, we calculate the amplitude of the entropy and the energy of the P-mode perturbations generated in flows with different pressure and entropy stratification scales. The amplitudes are calculated after a $10 \Omega_0^{-1}$ time interval from the change in sign of the radial wavenumber. In this case, the modes are well isolated and the energy of the P-mode can be well defined. Fig. \ref{surf_w} shows the results of these calculations. It seems that the mode coupling efficiency is higher for stronger radial gradients.
In particular, the numerical results generally verify our qualitative estimate that the S-mode generation predominantly depends on the entropy stratification scale $k_S$, while P-mode excitation is stronger for higher values of $\eta$. \begin{figure} \begin{center} \includegraphics[width=84mm]{SFH_w1.eps} \end{center} \caption{Evolution of the W-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Mode coupling occurs in the vicinity of $t=10 \Omega_0^{-1}$, where $k_x(t)=0$. The excitation of the P and S-modes is clearly seen in the panels for the pressure ($P$) and entropy ($S$) perturbations. Perturbations of the potential vorticity start to grow due to the baroclinic coupling with entropy perturbations. } \label{SFH_w1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{SFH_w2.eps} \end{center} \caption{Evolution of the W-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with negative $\eta$: $k_P=-0.2H^{-1}$, $k_S=0.2H^{-1}$. Interestingly, the SFH dynamics shows the decay of the potential vorticity after the mode coupling and excitation of the P- and S-modes at $t=10 \Omega_0^{-1}$. The latter is the anticipated behavior in baroclinically stable flows. } \label{SFH_w2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{surf_w.eps} \end{center} \caption{Surface graph of the generated S and P-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$. Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the entropy perturbations show a stronger dependence on $k_S$ (left panel), while both entropy and pressure scales are important (approximately a $k_S k_P$ dependence) for the generation of P-modes (right panel). See the electronic edition of the journal for color images.} \label{surf_w} \end{figure} \subsection{S-mode: direct coupling with W and P-modes} Fig.
\ref{SFH_s1} shows the evolution of the S-mode SFH in a flow with growing baroclinic perturbations. Here we observe two shear flow phenomena: mode coupling and transient amplification. Entering the nonadiabatic area (around $t = 10$), the entropy SFH is able to generate the P-mode, while undergoing transient amplification itself. The transient growth of the entropy is modest, and the growth rate decreases with increasing $k_y$. The W-mode is instead constantly coupled to entropy perturbations through baroclinic forces, although higher entropy perturbations at later times give a higher rate of growth of the potential vorticity. The total energy of the perturbations is, however, dominated at the end by the P-mode. Fig. \ref{surf_s} shows the dependence of the W and P-mode generation on the pressure and entropy stratification scales. As expected from the qualitative estimates, P-mode excitation depends almost solely on the pressure stratification scale $k_P$, while the generation of potential vorticity generally grows with $\eta$. \begin{figure} \begin{center} \includegraphics[width=84mm]{SFH_s1.eps} \end{center} \caption{Evolution of the S-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Perturbations of the potential vorticity grow from the beginning due to the baroclinic coupling with entropy perturbations. Excitation of the P-mode is clearly seen in the panel for the pressure ($P$), while the panel for the entropy perturbations ($S$) shows swing amplification in the nonadiabatic area around $k_x(t)=0$. The change of the amplitude of the entropy SFH affects the growth factor of the potential vorticity SFH.} \label{SFH_s1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{surf_s.eps} \end{center} \caption{Surface graph of the generated W and P-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$.
Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the potential vorticity perturbations show a predominant dependence on $k_S$ (left panel), while only the pressure stratification scale $k_P$ is important for the generation of P-modes (right panel). See the electronic edition of the journal for color images.} \label{surf_s} \end{figure} \subsection{P-mode: direct coupling with S-mode and indirect coupling with W-mode} Fig. \ref{SFH_p1} shows the evolution of an initially imposed P-mode SFH in a flow with growing baroclinic perturbations. The {\it oscillating} behavior of the entropy perturbation for $t < 10$ is given by the P-mode. This oscillating component has a zero mean value when averaged over time-scales longer than the wave period. The existence of the {\it aperiodic} S-mode is instead characterized by a nonzero mean value. When the radial wavenumber $k_x(t)$ changes sign at $t = 10$, we can observe the appearance of a nonzero mean value (marked on the plot by the horizontal dashed line), indicating that the high frequency oscillations of the P-mode are able to generate the aperiodic perturbations of the S-mode. The aperiodic part of the entropy perturbation is then able to generate potential vorticity perturbations. However, as we see from Eq. (54) and Fig. \ref{coupling}, there is no direct coupling between the P and W-modes. Therefore, the P-mode generates the S-mode by shear flow induced mode conversion, while the W-mode is further generated because of its baroclinic ties with the entropy SFH. We describe this situation as three-mode coupling or, in other words, indirect coupling of the P-mode to the W-mode. Note that, although the S and W-mode generation is apparent from the dynamics of the entropy and potential vorticity SFHs, energetically it plays a minor role as compared to the compressible energy carried by the P-mode. Fig. \ref{SFH_p2} shows that the P-mode generates potential vorticity with a positive sign.
However, the sign of the generated potential vorticity depends on the initial phase of the P-mode. Hence, our numerical results show generation of the W-mode with both positive and negative signs. It is also interesting to look at the P-mode dynamics in flows stable to baroclinic perturbations (see Fig. \ref{SFH_p2}). The initially imposed P-mode is able to generate the S-mode and consequently the W-mode, which gives a growth of the potential vorticity with time. Apart from the intrinsic limitations (the dependence of the sign of the generated potential vorticity on the initial phase of the P-mode and the low efficiency of the W-mode generation), this process demonstrates the fact that potential vorticity can actually be generated in flows with positive radial buoyancy ($\eta<0$) and positive Richardson number. Fig. \ref{surf_p} shows the dependence of the S and W-mode generation on the pressure and entropy stratification scales. In good agreement with the qualitative estimates, the S-mode excitation depends strongly on the entropy stratification scale $k_S$, while the generation of the potential vorticity generally grows with $\eta$. \begin{figure} \begin{center} \includegraphics[width=84mm]{SFH_p1.eps} \end{center} \caption{Evolution of the P-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Mode coupling occurs in the vicinity of $t=10\Omega_0^{-1}$, where the W and S-modes are excited. The amplitude of the generated aperiodic contribution to the entropy perturbation is marked by the red dotted line. Further, this component leads to the baroclinic production of potential vorticity with negative sign.} \label{SFH_p1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{SFH_p2.eps} \end{center} \caption{Same as in the previous figure but for $k_P=-0.2H^{-1}$ and $ k_S=0.2H^{-1}$. Perturbations are stable to baroclinic forces.
However, production of the potential vorticity with positive sign is still observed.} \label{SFH_p2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{surf_p.eps} \end{center} \caption{Surface graph of the generated S and W-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$. Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the entropy perturbations mainly depend on $k_S$ (left panel), while both pressure and entropy stratification scales are important for the generation of W-mode perturbations (right panel). See the electronic edition of the journal for color images.} \label{surf_p} \end{figure} \section{Conclusion and Discussion} We have studied the dynamics of linear perturbations in a 2D, radially stratified, compressible, differentially rotating flow with different radial density, pressure and entropy gradients. We employed a global radial scaling of the linear perturbations and removed the algebraic modulation due to the background stratification. We derived a local dispersion equation for nonaxisymmetric perturbations and the corresponding eigenfunctions in the zero shear limit. We show that the local stability of baroclinic perturbations in the radially stratified equilibrium state is defined by the Schwarzschild-Ledoux criterion. We study the shear flow induced linear coupling and the related possibility of energy transfer between the different modes of perturbations using both qualitative and more detailed numerical analysis. We employ a three-mode formalism and describe the behavior of the S, W and P-modes under the action of the baroclinic and velocity shear forces in the local approximation. We find that the system exhibits an asymmetric coupling pattern with five energy exchange channels between the three different modes. The W-mode is coupled to the S and P-modes: perturbations of the potential vorticity are able to excite entropy and compressible modes.
The amplitude of the generated S-mode grows with the increase of the entropy stratification scale of the background ($k_S$), while the amplitude of the generated P-mode perturbations grows with the increase of the background baroclinic index ($\eta$). The S-mode is coupled to the W and P-modes: the amplitude of the generated P-mode perturbations grows with the increase of the background pressure stratification scale ($k_P$), while the amplitude of the W-mode grows with the increase of the baroclinic index. The P-mode is coupled to the S-mode: the amplitude of the generated entropy perturbations grows with the increase of the background entropy stratification scale. On the other hand, there is no direct energy exchange channel from the P to the W mode and, therefore, no direct conversion is possible. Our results, however, show that the P-mode is still able to generate the W-mode through an indirect three-mode P-S-W coupling scheme. This linear inviscid mechanism indicates that compressible perturbations are able to generate potential vorticity via aperiodic entropy perturbations. The dynamics of radially stratified discs has already been studied both by the linear shearing sheet formalism and by direct numerical simulations. However, previous studies focus on the baroclinic stability and vortex production by entropy perturbations, neglecting the coupling with higher frequency density waves. The most vivid signature of density wave excitation in radially stratified disc flows can be seen in Klahr (2004). The numerical results presented there on the linear dynamics of perturbation SFHs show high frequency oscillations after the radial wavenumber changes sign. However, focusing on the energy dynamics, the author filters out the high frequency oscillations from the analysis. The purpose of the numerical simulations by Johnson and Gammie (2006) was the investigation of velocity shear effects on the radial convective stability and the possibility of the development of baroclinic instability. 
Therefore, no significant amount of compressible perturbations is present initially, and it is hard to judge if high frequency oscillations appear later in the simulations. Petersen et al. (2007a,b) employed the anelastic approximation, which does not resolve the coupling of potential vorticity and entropy with density waves. Moreover, if produced, high frequency density waves soon develop into spiral shocks (see, e.g., Bodo et al. 2007). The anelastic gas approximation intentionally neglects this complication and simplifies the description down to the low frequency dynamics. Numerical simulations of hydrodynamic turbulence in unstratified disc flows showed that the dominant part of the turbulent energy is accumulated in high frequency compressional waves (see, e.g., Shen et al. 2006). On the other hand, it is vortices that are thought to play a key role in hydrodynamic turbulence in accretion discs, as well as in planet formation in protoplanetary disc dynamics. Therefore, any link and possible energy exchange between high frequency compressible oscillations and aperiodic vortices can be an important factor in the astrophysical situations described above. Based on the present findings we speculate that density waves can participate in the development of regular vortical structures in discs with negative radial entropy gradients. Numerical simulations have shown that thermal (entropy) perturbations can generate vortices in baroclinic disc flows (see, e.g., Petersen et al. 2007a,b). Hence, vortex development through this mechanism depends on the existence of initial regular entropy perturbations, i.e., thermal plumes, in differentially rotating baroclinic disc flows. It seems that compressional waves with linear amplitudes can heat the flow through two different channels: viscous dissipation and shear flow induced mode conversion. 
However, there is a distinct difference between the entropy production by the kinematic shear mechanism and by viscous dissipation. In the latter case, compressional waves first need to be tightly stretched down to the dissipation length-scales by the background differential rotation in order to be subject to viscous damping. As a result, the entropy produced by the viscous dissipation of compressional waves takes the shape of narrow stretched lines. These thermal perturbations can baroclinically produce potential vorticity of a similar configuration. However, this is clearly not an optimal form of potential vorticity for the development of long-lived vortical structures. On the contrary, entropy perturbations produced through the mode conversion channel can have the form of localized thermal plumes. These can be very similar to those used in the numerical simulations by Petersen et al. (2007a,b). In this case compressional waves can eventually lead to the development of persistent vortical structures of different polarity. Hence, high frequency oscillations of the P-mode can participate in the generation of anticyclonic vortices that further accelerate dust trapping and planetesimal formation in protoplanetary discs with equilibrium entropy decreasing radially outwards. Using the local linear approximation we have shown the possibility of potential vorticity generation in flows with both positive and negative radial entropy gradients (Richardson numbers). In fact, the standard alpha description of accretion discs implies a \emph{positive} radial stratification of entropy and hence a weak baroclinic decay of existing vortices. In this case there will be a competition between the ``baroclinic viscosity'' and the potential vorticity generation due to mode conversion. Hence, it cannot be strictly ruled out that a significant amount of compressional perturbations can lead to the development of anticyclonic vortices even in flows with positive entropy gradients. 
In this case, radial stratification opens an additional degree of freedom for velocity shear induced mode conversion to operate, although the viability of this scenario needs further investigation. This paper presents results obtained within the linear shearing sheet approximation. At nonlinear amplitudes, the P-mode leads to the development of shock waves. These shocks induce local heating in the flow. Therefore, a realistic picture of entropy production and vortex development in radially stratified discs with a significant amount of compressible perturbations needs to be analyzed by direct numerical simulations. \section*{Acknowledgments} A.G.T. was supported by GNSF/PRES-07/153. A.G.T. would like to acknowledge the hospitality of Osservatorio Astronomico di Torino. This work is supported in part by ISTC grant G-1217.
\section{Introduction} Analytic approximations and perturbation methods are of common use in different branches of general relativity theory. The most common methods are the post-Newtonian approximations, able to deal with the gravitational field of any system in the non-relativistic limit, the post-Minkowskian approximations, which are appropriate for relativistic systems in the weak gravitational field regime, and the perturbation formalisms, which expand (at linear order in general) around some exact solution of the Einstein field equations. The B3 session {\em Analytic approximations, perturbation methods, and their applications} of the GR18 conference was dominated by issues more or less directly related to the problem of detecting gravitational waves with detectors that are currently operating or planned to be built in the near future. Among the most important targets for gravitational-wave detectors are coalescing binary systems made of compact objects of different kinds. These sources produce ``chirps'' of gravitational radiation whose amplitude and frequency increase in time. For successful detection of the chirp signals and extraction of their astrophysically important parameters it is crucial to theoretically predict the gravitational waveforms with sufficient accuracy. Among the important sources for the future space-based LISA detector are extreme-mass-ratio binaries consisting of a small compact body (of stellar mass) orbiting a supermassive black hole. The need for accurate modelling of the orbital dynamics of such binaries motivates some recent work on the problem of calculating the gravitational self-force experienced by a point particle moving in the background space-time of a more massive body. Several talks in the session reported progress on different aspects of the gravitational self-force computations. These were the talks by {\em S~Detweiler}, {\em M~Favata}, {\em J~L~Friedman}, {\em W~Hikida}, {\em N~Sago}, and {\em B~F~Whiting}. 
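For orientation, the leading (Newtonian quadrupole) order chirp law, which the post-Newtonian computations discussed in this session systematically refine, can be written in standard notation as
\[
\dot{f}_{\mathrm{gw}} \;=\; \frac{96}{5}\,\pi^{8/3}\left(\frac{G\mathcal{M}}{c^{3}}\right)^{5/3} f_{\mathrm{gw}}^{11/3},
\qquad
\mathcal{M}=\frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}},
\]
where $f_{\mathrm{gw}}$ is the gravitational-wave frequency and $\mathcal{M}$ is the chirp mass of a binary with component masses $m_1$ and $m_2$. 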
For comparable-mass binaries, computations based on the post-Newtonian approximation of general relativity are useful for constructing gravitational-wave templates for data-analysis purposes. The post-Newtonian approximation describes with great accuracy the inspiral phase of these systems. The analytic post-Newtonian results are currently matched to recent numerical calculations of the merger and ring-down phases of black hole binaries \cite{Buonanno&Cook&Pretorius2007,Boyle&al2007}. In the contributed talks (by {\em L~Blanchet}, {\em B~R~Iyer}, and {\em M~Vas\'uth}) recent results on incorporating the spin-orbit effects (also within the accuracy beyond the leading-order effects) as well as the generalization to eccentric orbits (most of the explicit analytic results concern circular orbits) were presented. Recently a new approach to the perturbative solution of the problem of motion and radiation in general relativity was developed. This is the approach pioneered by Goldberger and Rothstein \cite{Goldberger&Rothstein2006} in which effective field theory methods were used to describe the dynamics of nonrelativistic extended objects coupled to gravity. {\em B~Kol} discussed this approach as well as some of its applications. {\em A~Tartaglia} presented a new semi-analytic method for computing the emission of gravitational waves by a compact object moving in the background of a more massive body. Several other topics were tackled in the contributed talks. {\em C~L\"ammerzahl} discussed the influence of the cosmic expansion on the physics in gravitationally bound systems. {\em D~Singh} presented an analytic perturbation approach for classical spinning particle dynamics, based on the Mathisson-Papapetrou-Dixon equations of motion. {\em H~Sotani} studied gravitational radiation from collapsing magnetized dust. {\em A~S~Kubeka} remarked on the computation of the Ricci tensor for non-stationary axisymmetric space-times. 
In the rest of this article all talks contributed to the B3 session are sketched in more detail (in the order in which they were presented at the conference). \section{Contributed talks} \subsection{{\em Self-force analysis in extreme-mass-ratio inspiral} by S~Detweiler and I~Vega (reported by S~Detweiler)} The motion of a small object of mass $m$ orbiting a supermassive black hole of mass $M$ deviates slightly from a geodesic and has an acceleration that scales as the ratio $m/M$ of the masses. This acceleration includes the dissipative effects of radiation reaction and is said to result from the gravitational self-force acting on $m$ \cite{Poisson2004}. As an alternative, the effects of the self-force may be described as geodesic motion in an appropriately regularized metric of the perturbed spacetime \cite{Detweiler2005}. The LISA effort requires accurate gravitational wave templates for data analysis. For extreme-mass-ratio inspirals the templates should include both the dissipative and conservative effects of the self-force. The talk described a novel, efficient method for simultaneously calculating both the gravitational self-force as well as its effect on the gravitational waveform. The Authors replaced the usual singular point source with a distributed, abstract, analytically determined source. The resulting perturbation in the field from this special distributed source is guaranteed to be differentiable at the location of the particle and to provide the appropriate self-force effect on the motion of the particle. At the same time, the field from the distributed source is identically equal to the actual perturbed field in the wave zone. Thus, this abstract field simultaneously provides both the self-force acting on a point source and also the effect of the self-force on the waveform of the radiation. 
\subsection{{\em The adiabatic approximation and three-body effects in extreme-mass-ratio inspirals} by M~Favata} Extreme-mass-ratio inspirals (EMRIs) are an important class of LISA sources consisting of a compact object inspiralling into a supermassive black hole. The detection of these sources and the precision measurement of their parameters rely on the accurate modeling of their orbital dynamics. A precise description of the binary's orbit requires an evaluation of the compact object's self-force. The adiabatic approximation (more appropriately referred to as the {\em radiative approximation} \cite{Pound&Poisson2007a,Pound&Poisson2007b}) consists of computing the time-averaged rates of change of the three conserved quantities of geodesic motion. Its use greatly simplifies the computation of the orbital evolution. However, the adiabatic approximation ignores corrections to the conservative dynamics proportional to the mass $m$ of the compact object. These `post-adiabatic' corrections will affect the binary's positional orbital elements, for example, by causing $O(m)$ corrections to the pericenter precession rate. Using a toy model of an electric charge orbiting a central mass and perturbed by the electromagnetic self-force, Pound, Poisson, and Nickel \cite{Pound&Poisson&Nickel2005} have called into question the accuracy of the adiabatic approximation, especially for eccentric orbits. In order to estimate the size of the post-adiabatic phase errors in the gravitational case, the Author presented an analytical computation, accurate to second post-Newtonian order, of the small-eccentricity corrections to the gravitational-wave phase. These post-Newtonian eccentricity corrections to the phase can be significant not only for EMRIs but for other binary sources as well. 
Using this phase expansion it was found that the post-adiabatic phase errors are sufficiently small that waveforms based on the adiabatic approximation can be used for EMRI detection, but not for precise parameter measurements. The Author also discussed the effect of a third mass orbiting the EMRI. The analysis models the system as a hierarchical triple using the equations of motion of Blaes, Lee, and Socrates \cite{Blaes&Lee&Socrates2002}. To have a measurable effect on the EMRI's waveform, the distant mass must be sufficiently close to the compact object that both the inner and outer binaries would be detected as EMRIs. Such ``double-EMRI'' systems are rare. \subsection{{\em Extreme-mass-ratio binary inspiral in a radiation gauge} by~J~L~Friedman, T~S~Keidl, Dong-Hoon~Kim, E~Messaritaki and A~G~Wiseman (reported by~J~L~Friedman)} Gravitational waves from the inspiral of a stellar-size black hole of mass $m$ to a supermassive black hole of mass $M$ can be accurately approximated by a point particle moving in a Kerr background. To find the particle's orbit to first order in the mass ratio $m/M$, one must compute the self-force. The computation requires a renormalization, but the well-known MiSaTaQuWa prescription \cite{Mino&Sasaki&Tanaka1997,Quinn&Wald1997} involves a harmonic gauge, a gauge that is not well suited to perturbations of the Kerr geometry. In a harmonic gauge, one solves ten coupled PDEs, instead of the single decoupled Teukolsky equation for the gauge invariant components ($\psi_0$ or $\psi_4$) of the perturbed Weyl tensor. In the talk progress was reported in finding the renormalized self-force from $\psi_0$ or $\psi_4$. Following earlier work by Chrzanowski and by Cohen and Kegeles, a radiation gauge was adopted to reconstruct the perturbed metric from the perturbed Weyl tensor. 
The Weyl tensor component is renormalized by subtracting a singular part obtained using a recent Detweiler-Whiting version \cite{Detweiler&Whiting2003} of the singular part of the perturbed metric as a local solution to the perturbed Einstein equations. The Authors' method relies on the fact that the corresponding renormalized $\psi_0$ is a {\em sourcefree} solution to the Teukolsky equation. One can then reconstruct a nonsingular renormalized metric in a radiation gauge, a gauge that exists only for vacuum perturbations. More details can be found in Ref.\ \cite{Keidl&al2007}. \subsection{{\em Adiabatic evolution of three `constants' of motion in Kerr spacetime} by~W~Hikida, K~Ganz, H~Nakano, N~Sago and T~Tanaka (reported by~W~Hikida)} General orbits of a particle of small mass around a Kerr black hole are characterized by three parameters: the energy, the angular momentum, and the Carter constant. For the energy and angular momentum, one can evaluate their rates of change from the fluxes of energy and angular momentum at infinity and on the event horizon according to the balance argument. On the other hand, for the Carter constant, one cannot use the balance argument because the conserved current associated with it is not known. Recently Mino proposed a new method of evaluating the averaged rate of change of the Carter constant by using the radiative field. The Authors developed a simplified scheme for the practical evaluation of the evolution of the Carter constant based on Mino's proposal. In the talk this scheme was described in some detail, and the derivation of explicit analytic formulae for the rates of change of the energy, the angular momentum, and the Carter constant was presented. Also some numerical results for large eccentric orbits were shown. For more details see Ref.\ \cite{Ganz&al2007}. 
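For reference, in Boyer-Lindquist coordinates the three conserved quantities of Kerr geodesic motion mentioned above can be written in a common convention ($\mu$ denoting the particle's rest mass and $a$ the black-hole spin parameter) as
\[
E=-p_t,\qquad L_z=p_\phi,\qquad
Q=p_\theta^{2}+\cos^2\theta\left[a^2\left(\mu^2-E^2\right)+\frac{L_z^{2}}{\sin^2\theta}\right],
\]
and it is precisely the absence of a known conserved current associated with $Q$ that prevents the balance argument from being applied to it. 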
\subsection{{\em Gravitational self-force on a particle orbiting a Schwarzschild black hole} by~L~Barack and N~Sago (reported by~N~Sago)} In the talk the calculation of the gravitational self-force acting on a pointlike particle moving around a Schwarzschild black hole was presented. The calculation was done in the Lorenz gauge. First, the Lorenz-gauge metric perturbation equations were solved directly using numerical evolution in the time domain. Then the back-reaction force from each of the multipole modes of the perturbation was computed. Finally, the {\em mode sum} scheme was applied to obtain the physical self-force. The temporal component of the self-force describes the rate of loss of orbital energy. As a check of their scheme, the Authors compared their result for this component with the total flux of gravitational-wave energy radiated to infinity and through the event horizon. The radial component of the self-force was also calculated. From their result for the self-force, the Authors computed the correction to the orbital frequency due to the gravitational self-force, taking into account both the dissipative and the conservative effects. More details can be found in Ref.\ \cite{Barack&Sago2007}. \subsection{{\em Mobile quadrupole as a semianalytic method for gravitational-wave emission} by~A~Tartaglia, M~F~De~Laurentis, A~Nagar and N~Radicella (reported by~A~Tartaglia)} The quadrupole formula is the simplest approximation for studying the gravitational-wave emission from a binary system. The formula gives its best performance for quasi-circular and quasi-stationary motion of the emitters. Whenever the motion is far from the quasi-circular approximation, other semi-analytic methods or numerical calculations of growing complexity must be used. In the talk a situation was studied where the gravitational wave is emitted by a concentrated object accelerating in the background field of a central mass. 
Provided one knows the background metric, the gravitational wave represents a first order perturbation on it, so that the space-time trajectory of each object of the pair is almost geodesic. Once the equations of a geodesic are written, the motion can be thought of as an instantaneous rotation around an (instantaneously at rest) curvature centre for the space trajectory. In this condition the quadrupole formula is easily applicable at each place along the geodesic, after calculating the curvature and the equivalent angular velocity as the ratio between the three-speed and the curvature radius. Everything is reduced to a problem of ordinary geometry. The energy emission rate and the waveforms obtained in this way must simply be converted from the local time to the time of a faraway inertial observer. The approach was applied to the capture of a mass by a Kerr black hole. The method is much lighter (from the computational point of view) than numerical relativity, while giving comparable results. For research results relevant to the presented approach see Refs.\ \cite{Dymnikova1977,Dymnikova&Popov1980,Schnittma&Bertschinger2004}. \subsection{{\em The non-radiated multipoles in the perturbed Kerr spacetime} by~L~R~Price and B~F~Whiting (reported by~B~F~Whiting)} For the self-force problem in general relativity, it has been shown that the perturbed metric produced by a finite-mass test point-particle has a singular part which exerts no influence on the particle, while the self-force which the particle experiences arises entirely from a metric perturbation which is smooth at the location of the particle \cite{Detweiler&Whiting2003}. However, metric reconstruction from the perturbed Weyl tensor is unable to yield perturbations for the non-radiated multipoles in Petrov type II spacetimes, such as that surrounding the Kerr black hole \cite{Whiting&Price2005}. 
In the talk a new form of the perturbed Einstein equations, developed by the Authors on the basis of the Newman-Penrose formalism, was presented. With its assistance, progress towards filling the low multipole gap, which will contribute to the calculation of regularization parameters for the self-force problem, was discussed. \subsection{{\em Higher-order spin effects in the radiation field of compact binaries} by~L~Blanchet, A~Buonanno and G~Faye (reported by~L~Blanchet)} In the talk the investigation, motivated by the search for gravitational waves emitted by binary black holes, of the gravitational radiation field of compact objects with spins was discussed. The approach is based on the multipolar post-Newtonian wave generation formalism and on the formalism of point particles with spin (Papapetrou-Dixon-Bailey-Israel). The Authors computed: (i) the spin-orbit coupling effect in the binary's equations of motion one post-Newtonian (PN) order beyond the dominant effect (confirming a previous result by Tagoshi et al.\ \cite{Tagoshi&Ohashi&Owen2001}), (ii) the spin-orbit coupling effects in the binary's mass and current quadrupole moments at the same order, (iii) the spin-orbit contributions in the gravitational-wave energy flux, and (iv) the secular evolution of the binary's orbital phase up to 2.5PN order (for maximally rotating black holes). Previous results on the spin-orbit effect at the lowest order were computed in Refs.\ \cite{Kidder&Will&Wiseman1993,Kidder1995}. Crucial ingredients for obtaining the next-order 2.5PN contribution in the orbital phase are the binary's energy and the spin precession equations. These results provide more accurate gravitational-wave templates to be used in the data analysis of rapidly rotating Kerr-type black-hole binaries with the ground-based interferometric detectors and the space-based detector LISA. Details of the presented results were published in Refs.\ \cite{Faye&Blanchet&Buonanno2006,Blanchet&Buonanno&Faye2006}. 
\subsection{{\em The 3PN gravitational wave luminosity from inspiralling compact binaries in~eccentric orbits} by~K~G~Arun, L~Blanchet, B~R~Iyer and M~S~S~Qusailah (reported by~B~R~Iyer)} Some details of the computation of the complete gravitational-wave luminosity of inspiralling compact binaries on quasi-elliptical orbits up to the third post-Newtonian (3PN) order using multipolar post-Minkowskian formalism were presented. There are two types of contributions to the gravitational-wave luminosity at 3PN order: the instantaneous type terms, which depend on the dynamics of the binary only at the retarded instant, and the hereditary terms, which are sensitive to dynamics of the system in the entire past. The new inputs for the calculation of the 3PN instantaneous terms include the mass octupole and current quadrupole at 2PN for general orbits and the 3PN accurate mass quadrupole. Using the 3PN quasi-Keplerian representation of elliptical orbits obtained recently \cite{Memmesheimer&Gopakumar&Schafer2004}, the flux is averaged over the binary's orbit. The hereditary terms have a `tail', `tail of tail' and `tail-squared' contributions which are computed using a semi-analytic procedure extending the earlier work of Blanchet and Sch\"afer at 1.5PN \cite{Blanchet&Schafer1993}. This semi-analytic extension uses the 1PN quasi-Keplerian parametrisation of the binary and exploits the doubly periodic nature of the orbital motion. The final 3PN accurate energy flux averaged over the binary's orbit was presented in the modified harmonic (which contains no logarithmic terms) and the ADM coordinates. Also a gauge invariant expression of the flux was provided in terms of the orbital frequency and the periastron precession constant. The results are consistent with those obtained by perturbation theory in the test particle limit to order $e_t^2$ (where $e_t$ is the so-called time eccentricity) and the 3PN circular orbit results. 
These results form the starting input for the construction of templates for inspiralling binaries in quasi-eccentric orbits, an astrophysically possible class of sources for both the ground-based and the space-based gravitational-wave interferometers. \subsection{{\em On the influence of the cosmic expansion on the physics in gravitationally bound systems} by~C~L\"ammerzahl and H~Dittus (reported by~C~L\"ammerzahl)} It is an old question whether the cosmological expansion influences the dynamics of gravitationally bound systems \cite{Lammerzahl&Preuss&Dittus2007}. Though it has sometimes been claimed that the expansion will tear apart gravitationally bound systems, the majority of papers covering this issue derive no measurable influence. In the talk some additional arguments for the latter conclusion were given. It was shown that (i) the gravitational field created by an isolated body feels only a tiny influence, (ii) the planetary orbits are also practically inert against the expansion, and (iii) Doppler tracking of satellites in deep space is also only marginally influenced by the cosmic expansion. \subsection{{\em Spin evolution in binary systems} by~M~Vas\'uth and J~Maj\'ar (reported~by~M~Vas\'uth)} Gravitational waves emitted by compact binary systems are characterized by different parameters of the binary. Among them the effects of rotation of the orbiting bodies appear at 1.5 post-Newtonian (PN) order, both in the dynamical description and in the wave generation problem. In the talk the evolution of the individual spins of the bodies was discussed for compact binaries in circular and general eccentric orbits. For a 2PN description the spin precession equations were analyzed up to 0.5PN order. To lowest order the angles between the total angular momentum and the spin vectors are constant, and the spin-spin interaction causes an additional harmonic dependence. The true anomaly parameterization proved to be useful in the description of eccentric orbits. 
In the circular and general cases a linear and harmonic dependence of the angles describing the orientation of the spins was found. \subsection{{\em Matched asymptotic expansion as a classical effective field theory} by~B~Kol} In the talk it was explained how the method of {\em classical effective field theory}, borrowed from {\em quantum field theory} by Goldberger and Rothstein \cite{Goldberger&Rothstein2006} in the context of the motion of a compact object within a background whose typical length scale is much larger, is equivalent to {\em matched asymptotic expansion}, and moreover offers additional insight. Feynman diagrams, divergences, (dimensional) regularization, counter-terms and the Feynman gauge all appear. The ideas were demonstrated by the case of caged black holes (black holes within a compact dimension). Within this method the source is replaced by a ``black box'' effective action. Another application of these ideas is to the inspiral problem of a binary system. The Author presented a computation, utilizing high energy physics methods, of the radiation reaction force for the case of a scalar field. \subsection{{\em An analytic perturbation approach for classical spinning particle dynamics} by~D~Singh} The Author presented a perturbation method to analytically describe the dynamics of a classical spinning particle, based on the Mathisson-Papapetrou-Dixon equations of motion. By a power series expansion with respect to the particle's spin magnitude, it was demonstrated how to obtain an analytic representation of the particle's kinematic and dynamical degrees of freedom that is formally applicable to infinite order in the expansion parameter. Within this formalism, it is possible to identify a classical analogue of radiative corrections to the particle's mass and spin due to the spin-gravity interaction. 
The robustness of this approach was demonstrated by showing how to explicitly compute the first-order momentum and spin tensor components for arbitrary particle motion in a general space-time background. Potentially interesting applications based on this perturbation approach were also considered. For more details see Ref.\ \cite{Singh2007}. \subsection{{\em Gravitational radiation from collapsing magnetized dust} by~H~Sotani, S~Yoshida and K~D~Kokkotas (reported by~H~Sotani)} The Authors studied the influence of magnetic fields on the axial gravitational waves emitted during the collapse of a homogeneous dust sphere. It was found that while the energy emitted depends weakly on the initial matter perturbations it has strong dependence on the strength and the distribution of the magnetic field perturbations. The gravitational wave output of such a collapse can be up to an order of magnitude larger or smaller calling for detailed numerical 3D studies of collapsing magnetized configurations. More details are given in Ref.\ \cite{Sotani&al2007}. \subsection{{\em Gravitational waveforms for compact binaries} by~M~Vas\'uth and J~Maj\'ar (reported by~M~Vas\'uth)} Among the promising sources of gravitational radiation are binary systems of compact stars. The detectable signal is characterized by different parameters of the system, e.g., rotation of the bodies and the eccentricity of the orbit. The Authors presented a method to evaluate the gravitational wave polarization states for inspiralling compact binaries and considered eccentric orbits and the spin-orbit contribution in the case of two spinning objects up to 1.5 post-Newtonian order. In the circular orbit limit the presented results are in agreement with existing results. For more details see Ref.\ \cite{Vasuth&Majar2007}. 
\subsection{{\em On the Ricci tensor for non-stationary axisymmetric space-times} by~A~S~Kubeka} The results on the Ricci tensor for non-stationary axisymmetric space-times determined by Chandrasekhar \cite{Chandrasekhar1975} have been found to be incorrect in both the linear and non-linear regimes. However, the incorrectness of the Ricci tensor does not affect the well-known results on linear perturbations of the Schwarzschild black hole solution. \section*{Acknowledgments} MV's contribution was supported by OTKA grant No.\ F049429. The work of SD, LP, IV and BW was supported by NSF Grant No.\ PHY-0555484. \section*{References}
\section{{\label{sec:intro}} Introduction} Understanding the steady state of non-equilibrium systems is the subject of intense research. The typical situation is a solid in contact with two heat baths at different temperatures. Unlike equilibrium systems, for which the Boltzmann-Gibbs formalism provides an explicit description of the steady state, no equivalent theory is available for non-equilibrium stationary states (NESS). In the last few years, efforts have been concentrated on stochastic lattice gases (\cite{S1}). For the latter, valuable information on the steady state, such as the typical macroscopic profile of conserved quantities and the form of the Gaussian fluctuations around this profile, has been obtained (\cite{S1}). Recently, Bertini, De Sole, Gabrielli, Jona-Lasinio and Landim proposed a definition of non-equilibrium thermodynamic functionals via a macroscopic fluctuation theory (MFT) which gives, for large diffusive systems, the probability of atypical profiles in NESS (\cite{BL1},\cite{BL2}). The method relies on the theory of hydrodynamic limits and can be seen as an infinite-dimensional generalization of the Freidlin-Wentzell theory. The approach of Bertini et al. provides a variational principle from which one can write the equation of the time evolution of the typical profile responsible for a given fluctuation. The resolution of this variational problem is, however, in general very difficult, and it has been carried out for only two models: the Symmetric Simple Exclusion Process (SSEP) (\cite{BL2}) and the Kipnis-Marchioro-Presutti (KMP) model (\cite{BL3}). Hence, it is of extreme importance to identify simple models where one can test the validity of MFT. The most studied stochastic lattice gas is the Simple Exclusion Process. Particles perform random walks on a lattice, but jumps to occupied sites are suppressed. Hence the only interaction is due to the exclusion condition.
The only quantity conserved by the bulk dynamics is the number of particles. In this situation, the heat reservoirs are replaced by particle reservoirs which fix the density at the boundaries. The KMP process is a Markov process composed of particles on a lattice. Each particle has an energy, and a stochastic mechanism exchanges energy between nearest-neighbor particles (\cite{KMP}). The real motivation is to extend MFT to Hamiltonian systems (\cite{B-r}). Unfortunately, for the latter, even the derivation of the typical temperature profile adopted by the system in the steady state is out of reach of current techniques (\cite{BLR}). The difficulty is to show that the systems behave ergodically, i.e., that the only time-invariant measures locally absolutely continuous w.r.t.\ the Lebesgue measure are, for infinitely extended spatially uniform systems, of the Gibbs type. For some stochastic lattice gases this can be proven, but it remains a challenging problem for Hamiltonian dynamics. We investigate here the MFT for a system of harmonic oscillators perturbed by a conservative noise (\cite{Ber1}, \cite{BO}, \cite{BBO}). These stochastic perturbations are there to reproduce (qualitatively) the effective (deterministic) randomness coming from the Hamiltonian dynamics (\cite{OVY}, \cite{LO}, \cite{FFL}). This hybrid system can be considered as a first modest step in the direction of purely Hamiltonian systems. From a more technical point of view, the SSEP and the KMP process are gradient systems and have only one conserved quantity. For gradient systems the microscopic current is a gradient (\cite{KL}), so that the macroscopic diffusive character of the system is trivial. Dealing with non-gradient models, we have to show that, microscopically, the current is a gradient up to a small fluctuating term. The decomposition of the current into these two terms is known in the hydrodynamic limit literature as a \textit{fluctuation-dissipation equation} (\cite{EMY}).
In general, it is extremely difficult to solve such an equation. Our model has two conserved quantities, energy and deformation, and is non-gradient. But fortunately, an exact fluctuation-dissipation equation can be established. In fact, we are not able to apply MFT to the two conserved quantities, but only to the temperature field, which is a simple, but non-linear, functional of the energy and deformation fields. The paper is organized as follows. In section \ref{SEC:2} we define the model. In section \ref{SEC:3} we establish the fluctuation-dissipation equation and obtain hydrodynamic limits for the system in a diffusive scale. Section \ref{SEC:4} is devoted to a physical interpretation of the fluctuating term appearing in the fluctuation-dissipation equation. In section \ref{SEC:5} we compute the covariance of the fluctuation fields in the NESS by a dynamical approach and show that the covariance of the energy field exhibits a non-locality which we recover in the large deviation functional (the quasi-potential). The latter is studied in section \ref{SEC:6} for the temperature field. \section{{\label{SEC:2}} The Model} We consider the dynamics of an open system of length $N$. Atoms are labeled by $x \in \{1,\dots, N\}$. Atoms $1$ and $N$ are in contact with two heat reservoirs at different temperatures $T_{\ell}$ and $T_r$. Momenta of atoms are denoted by $p_1, \dots, p_{N}$ and the distances between particles by $r_1, \dots, r_{N-1}$. The Hamiltonian of the system is given by \begin{eqnarray*} {\ensuremath{\mathcal H}}^N = \sum_{x=1}^{N} {e}_x, \quad {e}_x = \frac{ p_x^2 + r_x^2}2 \qquad x= 1,\dots, N-1\\ {e}_{N} = \frac{p_{N}^2}2.
\end{eqnarray*} We consider stochastic dynamics in which the probability density distribution on the phase space at time $t$, denoted by $P(t, p, r)$, evolves according to the Fokker-Planck equation \begin{equation*} \partial_t P = N^2 {\mathcal L}^* P \end{equation*} Here ${\mathcal L} = {\mathcal A} + \gamma {\mathcal S}+{\mathcal B}_{1,T_\ell} +{\mathcal B}_{N,T_r}$ is the generator of the process and ${\mathcal L}^*$ the adjoint operator. The factor $N^2$ in front of ${\mathcal L}^*$ is there because we have speeded up time by $N^2$; this corresponds to a diffusive scaling. ${\mathcal A}$ is the usual Hamiltonian vector field \begin{eqnarray*} {\mathcal A} = \sum_{x=1}^{N-1} (p_{x+1} - p_x) \partial_{r_x} + \sum_{x=2}^{N-1} (r_x - r_{x-1}) \partial_{p_x}\\ + (r_1-\ell) \partial_{p_1} - (r_{N-1}-\ell) \partial_{p_{N}} \end{eqnarray*} The constant $\ell$ fixes the deformation at the boundaries. $\mathcal S$ is the generator of the stochastic perturbation and $\gamma>0$ is a positive parameter that regulates its strength. The operator ${\mathcal S}$ acts only on the momenta $\{p_x\}$ and generates a diffusion on the surface of constant kinetic energy. It is defined as follows. For every pair of nearest-neighbor atoms $x$ and $x+1$, consider the one-dimensional surface of constant kinetic energy $e$ \begin{equation*} {\mathbb S}_e^1 =\{ (p_x,p_{x+1}) \in {\mathbb R}^2; p_x^2 +p_{x+1}^2 =e\} \end{equation*} The following vector field $X_{x,x+1}$ is tangent to ${\mathbb S}_{e}^1$ \begin{equation} \label{eq:4} X_{x, x +1} = p_{x+1} \partial_{p_x} - p_x \partial_{p_{x+1}} \end{equation} so $X_{x,x+1}^2$ generates a diffusion on ${\mathbb S}_e^1$ (Brownian motion on the circle).
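As a quick numerical illustration (a sketch, not taken from the paper): the diffusion generated by $\tfrac12 X_{x,x+1}^2$ is Brownian motion on the circle $p_x^2+p_{x+1}^2=e$, so it can be simulated exactly by rotating the pair of momenta through Gaussian angle increments; the pair's kinetic energy is conserved along the way. The step size and initial momenta below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([1.3, -0.7])   # momenta (p_x, p_{x+1}); arbitrary initial values
e0 = p @ p                  # kinetic energy of the pair, conserved by the noise

# Brownian motion on the circle p_x^2 + p_{x+1}^2 = e0: rotate the pair
# by an angle increment theta ~ N(0, dt) at each step.
dt, nsteps = 1e-3, 10_000
for _ in range(nsteps):
    theta = np.sqrt(dt) * rng.standard_normal()
    c, s = np.cos(theta), np.sin(theta)
    p = np.array([c * p[0] + s * p[1], -s * p[0] + c * p[1]])

assert abs(p @ p - e0) < 1e-9   # energy conserved up to rounding
```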
We define \begin{equation*} {\mathcal S}= \frac {1}{2} \sum_{x=1}^{N-1} X_{x, x +1}^2 \end{equation*} ${\mathcal B}_{1,T_\ell}$ and ${\mathcal B}_{N,T_r}$ are the two boundary generators of Langevin baths at temperatures $T_\ell$ and $T_r$ \begin{equation*} {\mathcal B}_{x,T}= \frac 12 \left(T \partial_{p_x}^2 - p_x \partial_{p_x} \right) \end{equation*} The bulk dynamics conserves two quantities: the total energy ${\mathcal H}^N=\sum_{x=1}^{N} {e}_x$ and the total deformation ${\mathcal R}^N=\sum_{x=1}^{N-1} r_x$. The energy conservation law can be read locally as (\cite{Ber1}, \cite{BO}) \begin{equation*} e_x (t) - e_x (0) = J^e_{x} (t) -J_{x+1}^e (t) \end{equation*} where $J^e_{x} (t)$ is the total energy current between $x-1$ and $x$ up to time $t$. This can be written as \begin{equation*} J^e_{x} (t)=N^2 \int_0^t j^e_{x} (s)ds + M_{x} (t) \end{equation*} In the above, $M_{x} (t)$ is a martingale, i.e. a stochastic noise with mean $0$. The instantaneous energy current $j^{e}_{x}$ can be written as \begin{equation*} j_{x}^e= -r_{x-1}p_{x} -\cfrac{\gamma}{2}\nabla(p_{x}^2) \end{equation*} The first term $-r_{x-1} p_x$ is the Hamiltonian contribution to the energy current, while the noise contribution is given by the discrete gradient $-(\gamma/2) \nabla(p_{x}^2)=(\gamma/2)(p_{x}^2 -p_{x+1}^2)$. Similarly, the instantaneous deformation current $j^{r}_{x}$ between $x-1$ and $x$ is given by \begin{equation*} j^{r}_{x}=-p_{x} \end{equation*} We denote by $\mu_{ss}=<\cdot>_{ss}$ the invariant probability measure of the process. In the case $T_\ell = T_r=T$, the system is in thermal equilibrium.
There is no heat flux, and the Gibbs invariant measure (or canonical measure) is a product Gaussian measure $\mu_{ss}=\mu^{T,\ell}$ depending on the temperature $T$ and the mean deformation $\ell$: \begin{equation} \label{eq:mu} \mu^{T,\ell} = Z_{T}^{-1} \exp \left\{ -\cfrac{1}{2T}\sum_{x=1}^{N} p_x^2 -\cfrac{1}{2T}\sum_{x=1}^{N-1} (r_x-\ell)^2 \right\} \end{equation} \section{{\label{SEC:3}} Fluctuation-dissipation equation and Hydrodynamic limit} Diffusive interacting particle systems can be classified into two categories: gradient systems and non-gradient systems (\cite{KL}). For the former, the current of the conserved quantities can be written as a spatial discrete gradient. For example, the SSEP and the KMP process are gradient systems. A powerful approach, introduced by Varadhan (\cite{V}) to study non-gradient systems, is to obtain a fluctuation-dissipation equation, meaning a decomposition of the current $j$ of the conserved quantities as the sum of a microscopic gradient $\nabla h$ and a fluctuating term of the form ${\mathcal L} u$: \begin{equation} \label{eq:fde0} j=\nabla h + {\mathcal L}u \end{equation} where ${\mathcal L}$ is the generator of the interacting particle system. In general, the equality (\ref{eq:fde0}) holds only as an approximation in a suitable Hilbert space (\cite{KL}). Fortunately, for our system, we can write an equality like (\ref{eq:fde0}) without approximations.
The fluctuation-dissipation equation for the deformation current $j^r$ and the energy current $j^e$ is given by (\cite{Ber1}) \begin{equation} \label{eq:fde} \begin{cases} j^{r}_{x}= -\gamma^{-1}\nabla(r_x) + {\mathcal L} h_x\\ j^e_{x}=\nabla\left[\phi_x \right]+{\mathcal L} g_x \end{cases} \end{equation} where $$\phi_x = \cfrac{1}{2\gamma}r_x^2 + \cfrac{\gamma}{2} p_x^2 +\cfrac{1}{2\gamma} p_x p_{x+1} +\cfrac{\gamma}{4}\nabla(p_{x+1}^2)$$ and $$h_x= \gamma^{-1} p_x, \quad g_x =\cfrac{p_x^2}{4}+\cfrac{p_x}{2\gamma} (r_x + r_{x-1})$$ Assume that initially the system starts from a local equilibrium $<\cdot>$ with macroscopic deformation profile $u_0 (q)$ and energy profile ${\varepsilon}_0 (q)$, $q \in [0,1]$. This means that if the macroscopic point $q \in [0,1]$ is related to the microscopic point $x$ by $q=x/N$ then at time $t=0$ \begin{equation*} <r_{[Nq]} (0)> \to u_0(q), \quad <e_{[Nq]} (0)> \to {\varepsilon}_0 (q) \end{equation*} as $N$ goes to infinity. The currents are related to conserved quantities by the conservation law \begin{eqnarray*} \partial_t <r_{[Nq]}(t)> \approx -N \partial_q <j_{[Nq]}^r (t)>,\\ \partial_{t} <e_{[Nq]}(t)> \approx - N \partial_q <j_{[Nq]}^e (t)>. \end{eqnarray*} By (\ref{eq:fde}) and the fact that the terms $N<{\mathcal L} h_x>$ and $N<{\mathcal L} g_x>$ are of order ${\mathcal O} (N^{-1})$ and do not contribute to the limit (\cite{Ber1}) we get \begin{equation*} \begin{cases} \partial_t <r_{[Nq]} (t)> \approx \gamma^{-1} \Delta <r_{[Nq]}(t)>\\ \partial_t <e_{[Nq]} (t)> \approx \Delta <\phi_{[Nq]} (t)>\\ \end{cases} \end{equation*} To close the hydrodynamic equations, one has to replace the term $<\phi_{[Nq]} (t)>$ by a function of the conserved quantities $<r_{[Nq]} (t)>$ and $<e_{[Nq]}(t)>$. 
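The exactness of the first identity in (\ref{eq:fde}) can be checked symbolically on a small chain. The following SymPy sketch (not from the paper; it adopts the convention that, for the current across the bond $(x-1,x)$, $\nabla(r_x)$ denotes $r_x-r_{x-1}$) verifies $j^r_x = -\gamma^{-1}\nabla(r_x)+{\mathcal L}h_x$ with $h_x=p_x/\gamma$ at an interior atom.

```python
import sympy as sp

# Sketch: verify j^r_x = -(1/gamma)(r_x - r_{x-1}) + L h_x with h_x = p_x/gamma
# at an interior atom of a small chain (N = 5, atom x = 3).
N = 5
gamma = sp.symbols('gamma', positive=True)
p = sp.symbols(f'p1:{N + 1}')   # momenta p1..p5
r = sp.symbols(f'r1:{N}')       # deformations r1..r4
ell = sp.symbols('ell')

def X(x, f):
    # vector field X_{x,x+1} = p_{x+1} d/dp_x - p_x d/dp_{x+1}  (x = 1..N-1)
    return p[x] * sp.diff(f, p[x - 1]) - p[x - 1] * sp.diff(f, p[x])

def A(f):
    # Hamiltonian vector field: bulk terms plus boundary terms
    out = sum((p[x + 1] - p[x]) * sp.diff(f, r[x]) for x in range(N - 1))
    out += sum((r[x] - r[x - 1]) * sp.diff(f, p[x]) for x in range(1, N - 1))
    out += (r[0] - ell) * sp.diff(f, p[0]) - (r[N - 2] - ell) * sp.diff(f, p[N - 1])
    return out

def L(f):
    # the boundary Langevin generators annihilate h_x for an interior atom,
    # so only A + gamma*S contribute here
    return A(f) + gamma * sp.Rational(1, 2) * sum(X(x, X(x, f)) for x in range(1, N))

x = 3                          # interior atom (1-indexed)
h = p[x - 1] / gamma           # h_x = p_x / gamma
lhs = -p[x - 1]                # j^r_x = -p_x
rhs = -(r[x - 1] - r[x - 2]) / gamma + L(h)
assert sp.simplify(lhs - rhs) == 0
```

The same kind of check can in principle be run for the energy identity, with the appropriate bond-indexing of the discrete gradients.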
The replacement is obtained through a ``thermal local equilibrium'' statement (see \cite{Ber1}, \cite{BO}, \cite{ELS1}, \cite{ELS2}, \cite{KL}, \cite{S1} for rigorous justifications in the context of conservative interacting particle systems). We repeat here the arguments of \cite{BL3} for the convenience of the reader. The thermal local equilibrium assumption amounts to assuming that each given macroscopic region of the system is in equilibrium, but different regions may be in different equilibrium states, corresponding to different values of the parameters. Let us consider an atom at position $q=x/N$ which is far from the boundary, and introduce a number $2L+1$ of atoms which is very large in microscopic units ($L\gg 1$) but still infinitesimal at the macroscopic level ($(2L+1)/N \ll 1$). We hence choose $L=\epsilon N$ with $\epsilon \ll 1$ in order to satisfy these two conditions. We consider the system in the box $\Lambda_L (x)$ composed of the atoms labeled by $x-L, \ldots, x+L$. The time evolution of these $2L+1$ atoms is essentially given by the bulk dynamics; since the deformation and the energy in the volume containing the $2L+1$ atoms change only via boundary effects, and since we are looking at what happens after $N^2$ microscopic time units, the system composed of the $2L+1$ atoms has relaxed to the micro-canonical state $\lambda_{{\bar r}_q (t), {\bar e}_q (t)}$ corresponding to the local empirical deformation ${\bar r}_q (t)$ and the local empirical energy ${\bar e}_q (t)$ in the box $\Lambda_L (x)$. This means that we can divide the observables into two classes, according to their relaxation times: the fast observables, which relax to equilibrium values on a time scale much shorter than $t$ and have no effect on the hydrodynamical scales, and the slow observables, which are locally conserved by the dynamics and need much longer times to relax. We can then replace the term $<\phi_{[Nq]} (t)>$ by $\lambda_{{\bar r}_q (t), {\bar e}_q (t)} (\phi_0)$.
By equivalence of ensembles, in the thermodynamic limit $N \to \infty$ and then $\varepsilon \to 0$, this last quantity is equivalent to \begin{equation*} \cfrac{\gamma+\gamma^{-1}}{2} <e_{[Nq]} (t)> + \cfrac{\gamma^{-1}-\gamma}{4} (<r_{[Nq]} (t)>)^2 \end{equation*} We have obtained the time evolution of the deformation/energy profiles $u(t,q)=\lim <r_{[Nq]} (t)>$, $\varepsilon (t,q)= \lim <e_{[Nq]} (t)>$ in the bulk. At the boundaries, the Langevin baths fix the temperature at $T_\ell$ and $T_r$. Hence it is more natural to introduce the pair of deformation/temperature profiles rather than the deformation/energy profiles. The temperature profile $T(t,q)$ is related to $u(t,q)$ and $\varepsilon (t,q)$ by $\varepsilon (t,q) = T(t,q) +u(t,q)^2/2$. The deformation and temperature profiles evolve according to the following equations \begin{equation} \label{eq:hl} \begin{cases} \partial_t T = \cfrac{1}{2}(\gamma+\gamma^{-1}) \Delta T + \gamma^{-1}(\nabla u)^2,\\ \partial_t u =\gamma^{-1} \Delta u,\\ T(t,0)=T_\ell,\;\; T(t,1)=T_r,\\ u(t,0)=u(t,1)=\ell,\\ T(0,q)=T_0 (q),\; u(0,q)=u_{0}(q). \end{cases} \end{equation} As $t$ goes to infinity, the system reaches its steady state, characterized in the thermodynamic limit by a linear temperature profile ${\bar T}(q)=T_\ell + (T_r -T_\ell) q$ and a constant deformation profile ${\bar r}(q)= \ell$. The system satisfies Fourier's law and the conductivity is given by $(\gamma+\gamma^{-1})/2$ (\cite{BO}). \section{{\label{SEC:4}} Interpretation of the fluctuation-dissipation equation} We have seen that the functions $h_x$ and $g_x$ have no influence on the form of the hydrodynamic equations. This is explained by the fact that they are related to \textit{first order} corrections to local equilibrium, as we show below. Assume $T_{\ell (r)}=T \pm \delta T /2$ with $\delta T$ small. For $\delta T = 0$, the stationary state $<\cdot>_{ss}$ equals the Gibbs measure $\mu^{T,\ell}$ (see (\ref{eq:mu})).
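A direct finite-difference integration of the hydrodynamic equations (\ref{eq:hl}) illustrates the relaxation to this steady state. The sketch below (all parameter values and initial profiles are arbitrary choices, not from the paper) checks that $u$ flattens to $\ell$ and $T$ converges to the linear profile $\bar T$.

```python
import numpy as np

# Explicit finite-difference integration of (eq:hl); illustrative parameters.
g, Tl, Tr, ell = 2.0, 1.0, 2.0, 0.5
b = 0.5 * (g + 1.0 / g)               # thermal diffusivity (the conductivity)
M = 50
q = np.linspace(0.0, 1.0, M + 1)
dq = 1.0 / M
T = np.full(M + 1, 1.5)               # arbitrary initial temperature profile
u = ell + 0.3 * np.sin(np.pi * q)     # arbitrary initial deformation profile
T[0], T[-1] = Tl, Tr                  # Dirichlet boundary conditions
u[0] = u[-1] = ell
dt = 0.2 * dq**2 / b                  # stable explicit time step
for _ in range(60_000):
    lapT = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dq**2
    lapu = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dq**2
    gradu = (u[2:] - u[:-2]) / (2.0 * dq)
    T[1:-1] += dt * (b * lapT + gradu**2 / g)   # dT/dt = b T'' + (u')^2/g
    u[1:-1] += dt * lapu / g                    # du/dt = u''/g

assert np.allclose(u, ell, atol=1e-6)                 # flat deformation profile
assert np.allclose(T, Tl + (Tr - Tl) * q, atol=1e-6)  # linear temperature profile
```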
If $\delta T$ is small, it is natural to try an ansatz for $<\cdot>_{ss}$ of the form: $$\tilde{\mu} = Z^{-1} \,\prod_x dp_x dr_x \exp\left(- \cfrac{1}{2T(x/N)} (p_x^2 + (r_x -\ell)^2) \right)$$ where $T(\cdot)$ is the linear interpolation on $[0,1]$ between $T_\ell$ and $T_r$. ${\tilde \mu}$ is the ``local equilibrium'' approximation of $<\cdot>_{ss}$. Let $f_{ss}$ be the density of the stationary state $<\cdot>_{ss}$ with respect to ${\tilde \mu}$, i.e. the solution of ${\mathcal L}^{*,T(\cdot)} f_{ss} =0$. Here ${\mathcal L}^{*,T(\cdot)}$ is the adjoint operator of ${\mathcal L}$ in ${\mathbb L}^2 ({\tilde \mu})$. It turns out that \begin{eqnarray*} {\mathcal L}^{*,T(\cdot)}&=& -{\mathcal A} +\gamma {\mathcal S} + B_{1,T_\ell} +B_{N,T_r} \\ &+&\cfrac{\delta T}{T^2} \left( \cfrac{1}{N} \sum_{x=1}^{N-2} {\tilde j}^{e}_{x,x+1} -\ell \, \cfrac{1}{N} \sum_{x=1}^{N-1} {\tilde j}^r_{x,x+1}\right)\\ &+&\cfrac{\delta T}{T^2} \left( \cfrac{1}{N} \sum_{x=1}^{N-1} p_x p_{x+1} X_{x,x+1}\right) +\cfrac{\delta T}{4} (\partial^2_{p_1} -\partial^{2}_{p_N})\\ &+& {\mathcal O} ((\delta T)^2) +{\mathcal O}(N^{-1}) \end{eqnarray*} where ${\tilde j}^{e}$ and ${\tilde j}^r$ are the energy and deformation currents for the reversed dynamics at equilibrium. They are obtained from $j^e$ and $j^r$ by reversing the momenta, $p \to -p$. Expanding $f_{ss}$ at first order, $f_{ss}= 1 +\delta T\, v + o(\delta T)$, we get that for large $N$ and small temperature gradient $\delta T$, $v$ has to satisfy the following Poisson equation: \begin{equation*} (-{\mathcal A} + \gamma {\mathcal S}) v = T^{-2} \left(\cfrac{1}{N} \sum_{x=1}^{N-2} {\tilde j}^{e}_{x} -\ell \, \cfrac{1}{N} \sum_{x=1}^{N-1} {\tilde j}^r_{x}\right) \end{equation*} Let ${\hat v}$ be the function obtained from $v$ by reversing the momenta.
By the fluctuation-dissipation equation (\ref{eq:fde}) we get \begin{equation*} {\hat v}=\cfrac{1}{NT^2}\sum_{x=1}^{N-1} (g_x- \ell h_x ) + {\mathcal O} (N^{-1}) \end{equation*} Therefore the functions $g_x$ and $h_x$ are directly related to the first order corrections to local equilibrium. \section{{\label{SEC:5}} Non-equilibrium Fluctuations and steady State Correlations} Assume that initially the system starts from a local equilibrium $<\cdot>$ with macroscopic deformation profile $u_0 (q)$ and temperature profile $T_0 (q)$, $q \in [0,1]$. The time-dependent deformation fluctuation field $R_t^N$ and energy fluctuation field $Y_t^N$ are defined by $$R_t^N (H)=\cfrac{1}{\sqrt N}\sum_{x=1}^{N} H\left (x/N \right)\left(r_x (t) - u\left(t,x/N\right) \right)$$ $$Y_t^N (G)=\cfrac{1}{\sqrt N}\sum_{x=1}^{N} G\left(x/N\right)\left(e_x (t) -\varepsilon(t,x/N) \right)$$ where $H,G$ are smooth test functions and $(T(t,\cdot), u(t,\cdot))$ are solutions of the hydrodynamic equations (\ref{eq:hl}) with $\varepsilon=T+u^2 /2$.
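In thermal equilibrium ($T_\ell=T_r=T$, stationary initial data) the $r_x$ are i.i.d.\ Gaussian with mean $\ell$ and variance $T$ under (\ref{eq:mu}), so $R^N_t(H)$ is Gaussian with variance $\frac{T}{N}\sum_x H(x/N)^2 \to T\int_0^1 H^2\,dq$, consistent with the equilibrium case of the covariances computed below. A Monte Carlo sketch (arbitrary parameter choices) of this:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 20_000            # chain length and number of independent samples
T, ell = 1.5, 0.5             # equilibrium temperature and mean deformation
x = np.arange(1, N + 1) / N
H = np.sin(np.pi * x)         # smooth test function (hypothetical choice)

# Equilibrium samples of (r_1, ..., r_N): i.i.d. N(ell, T) under (eq:mu).
r = rng.normal(ell, np.sqrt(T), size=(K, N))
R = (H * (r - ell)).sum(axis=1) / np.sqrt(N)   # fluctuation field R^N(H)
var_emp = R.var()
var_th = T * np.mean(H**2)    # ~ T * int_0^1 H^2 dq
assert abs(var_emp - var_th) / var_th < 0.1
```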
The fluctuation-dissipation equations (section \ref{SEC:3}) give (\cite{FNO}, \cite{S1}): \begin{equation*} \begin{cases} R_t^N (H)=R_0^N (H)+ \cfrac{1}{\gamma}\int_{0}^t R_s^N(\Delta H)ds+{\ensuremath{\mathcal M}}_t^{1,N}\\ Y_t^N (G)=Y_0^N (G)+\gamma \int_{0}^{t} Y_s^N (\Delta G)ds\\ \phantom{Y_t^N (G)} + \int_{0}^{t}ds\left\{\cfrac{1}{\sqrt{N}}\sum_{x\in {\mathbb T}_N}(\Delta G)(x/N)f_x(\omega_s)\right\}\\ \phantom{Y_t^N (G)} +{\ensuremath{\mathcal M}}_t^{2,N} \end{cases} \end{equation*} where ${\ensuremath{\mathcal M}}^{1,N}$ and ${\ensuremath{\mathcal M}}^{2,N}$ are martingales and $f_x$ is the function defined by $$f_x (\omega)= \cfrac{\left(\gamma^{-1}-\gamma\right)}{2} r_x^2-\left(\cfrac{1}{2\gamma}p_{x+1}p_x -\cfrac{\gamma}{4}\nabla^{*} p_x^2\right)$$ The covariances of the limit martingales are computed using standard stochastic calculus and the thermal equilibrium property (\cite{S1}, \cite{FNO}): \begin{equation*} \left<\left({\ensuremath{\mathcal M}}^{1,N}_t\right)^2\right> \rightarrow \cfrac{2}{\gamma}\int_0^t ds \int_{[0,1]}dq T(q,s)(\nabla H)^{2}(q) \end{equation*} \begin{equation*} \begin{split} \left<\left({\ensuremath{\mathcal M}}^{2,N}_t\right)^2\right> \rightarrow \cfrac{2}{\gamma}\int_{[0,1]}dq \int_0^t ds u^{2} (q,s) T(q,s)(\nabla G)^{2}(q)\\ +(\gamma+\gamma^{-1})\int_0^t ds \int_{[0,1]}dq T^{2}(q,s)(\nabla G)^{2}(q) \end{split} \end{equation*} \begin{eqnarray*} \left<{\ensuremath{\mathcal M}}^{1,N}_t {\ensuremath{\mathcal M}}^{2,N}_t\right> \rightarrow \\ \cfrac{2}{\gamma}\int_0^t ds\int_{[0,1]}dq u(s,q) T(s,q)(\nabla G)(q)(\nabla H)(q) \end{eqnarray*} Hence $R_t^N$ converges, as $N$ goes to infinity, to the solution of the linear stochastic differential equation: \begin{equation} \label{eq:R} \partial_t R =\cfrac{1}{\gamma}\Delta R -\nabla\left[\sqrt{\cfrac{2}{\gamma}T(t,q)}W_{1}(t,q)\right] \end{equation} where $W_1(t,q)$ is a standard space-time white noise.\\ The description of the limit for the energy fluctuation field is more demanding.
We first have to close the equation. In order to do so, we use a ``dynamical Boltzmann-Gibbs lemma'' (\cite{KL}, \cite{S1}). Observables are divided into two classes: non-hydrodynamical and hydrodynamical. The former are non-conserved quantities and fluctuate on a much faster scale than the others (on the time scale in which the latter evolve). Hence, they should average out, and only their projection on the hydrodynamical variables should persist in the limit. One expects that there exist constants $C, D$ such that \begin{eqnarray*} \cfrac{1}{\sqrt N} \int_0^t ds \sum_{x=1}^{N} (\Delta G)(x/N) \left\{ f_x (\omega_s)\right.\\ \left. -C(r_x -u(s,x/N)) -D\left(e_x-\varepsilon (s,x/N)\right)\right\} \end{eqnarray*} vanishes as $N$ goes to infinity. The constants $C$ and $D$ depend on the macroscopic point $q=x/N$ and on the time $t$. In order to compute these constants, we assume thermal local equilibrium. Around the macroscopic point $q$, the system is considered to be in equilibrium with a fixed value of the deformation $u(t,q)$ and of the temperature $T(t,q)$. The constants $C,D$ are then computed by projecting the function $f_x$ on the deformation and energy fields (\cite{KL}, \cite{S1}).
If $\mu^{T,\ell}$ is the Gibbs equilibrium measure with temperature $T$ and mean deformation $\ell$ (the mean energy is then ${\varepsilon}={\ell}^{2}/2 +T$), we have $\Phi(\ell,{\varepsilon})=\mu^{T,\ell}(f_{x})=\varepsilon +{\ell}^{2}/2$ and then \begin{equation*} C=\partial_\ell \Phi (u(s,q), {\varepsilon} (s,q)), \quad D= \partial_{\varepsilon} \Phi (u(s,q),{\varepsilon} (s,q)) \end{equation*} Therefore the time-dependent energy fluctuation field $Y_t^N$ converges, as $N$ goes to infinity, to the solution of the linear stochastic differential equation: \begin{widetext} \begin{equation} \label{eq:Y} \partial_t Y =\cfrac{1}{2}\left(\gamma+\cfrac{1}{\gamma}\right)\Delta Y +\cfrac{1}{2}\left(\cfrac{1}{\gamma}-\gamma\right)\Delta (u(t,q) R) -\nabla\left[\sqrt{\gamma +\gamma^{-1}}T(t,q)W_{2}(t,q)+u(t,q)\sqrt{\cfrac{2T(t,q)}{\gamma}}W_1(t,q)\right] \end{equation} \end{widetext} where $W_2 (t,q)$ is a standard space-time white noise independent of $W_1 (t,q)$. Remark that the deterministic terms in (\ref{eq:R}) and (\ref{eq:Y}) result from linearizing the nonlinear equations (\ref{eq:hl}). We now compute the fluctuation fields for the NESS $<\cdot>_{ss}$, which is obtained as the stationary solution of the Langevin equations (\ref{eq:R}-\ref{eq:Y}). The field $L_t$ defined by $L_t= -\ell R_t +Y_t$ is the solution of the Langevin equation \begin{equation*} \partial_t L = b\Delta L -\nabla \left[\sqrt{2b}{T} (t,q) W_2 (q,t) \right] \end{equation*} with $b=\cfrac{1}{2} (\gamma +\gamma^{-1})$. The fields $R_t$ and $L_t$ are solutions of decoupled linear Langevin equations and converge, as $t$ goes to infinity, to independent Gaussian fields.
It follows that $R_t$ and $Y_t$ converge to stationary fluctuation fields ${R}_{ss}$ and ${Y}_{ss}$ such that \begin{eqnarray*} \text{Cov}({R}_{ss}(G),{R}_{ss}(H))=\int_{0}^{1}dq G(q)H(q){\bar T}(q)\\ \text{Cov}({Y}_{ss}(G),{Y}_{ss} (H))=\int_{0}^{1}dq G(q) H(q)\left\{{\bar T}^2 (q)+\ell^2{\bar T}(q)\right\}\\ +2(T_\ell -T_r)^{2} \int_0^1 G(q) (\Delta^{-1} H) (q) dq\\ \text{Cov}({Y}_{ss} (G),{R}_{ss}(H))=\ell\int_{0}^{1} H(q) G(q){\bar T}(q)dq \end{eqnarray*} Observe that the covariance of the energy fluctuations is composed of two terms. The first one corresponds to the Gaussian fluctuations of the energy under the local equilibrium state, while the second term represents the contribution to the covariance due to the long range correlations in the NESS. As in the case of the SSEP and the KMP process, the correction is given by the Green function of the Dirichlet Laplacian (\cite{BL3}, \cite{S2}). \section{{\label{SEC:6}} Large fluctuations} \subsection{Macroscopic dynamical behavior} Assume that initially the system is prepared in a state with deformation profile $u_0$, energy profile ${\varepsilon}_0$ and hence temperature profile $T_0= {\varepsilon}_0 -{u}_0^2 /2$. In a diffusive scale, the deformation (resp. energy, resp. temperature) profiles $u$ (resp. ${\varepsilon}$, resp. $T$), where ${\varepsilon}=T+{u^2}/2$, evolve according to the hydrodynamic equations (\ref{eq:hl}). Our aim is to obtain the large deviation principle corresponding to the law of large numbers $(\ref{eq:hl})$. It consists in estimating the probability that the empirical quantities (deformation, energy, temperature) do not follow the corresponding solutions of $(\ref{eq:hl})$ but remain close to some prescribed paths. This probability will be exponentially small in $N$, and we look for the exponential rate.
We follow the classical procedure in large deviation theory (\cite{BL2},\cite{KL}): we perturb the dynamics in such a way that the prescribed paths become typical, and we compute the cost of such a perturbation.\\ Fix a path ${\mathcal Y} (t, \cdot)=(u(t,\cdot),{\varepsilon} (t,\cdot))$. The empirical deformation profile ${\mathcal R}^N_t$ and empirical energy profile ${\mathcal E}_t^N$ are defined by \begin{eqnarray} \label{eq:RE0} {\mathcal R}_t^N (q)= N^{-1} \sum_{x=1}^N r_x (t) {\bf 1}_{\left[x/N, (x+1)/N \right)} (q),\\ {\mathcal E}_t^N (q)= N^{-1} \sum_{x=1}^N e_x (t) {\bf 1}_{\left[x/N, (x+1)/N \right)} (q).\nonumber \end{eqnarray} In the appendix, we explain how to define a Markovian dynamics associated to a couple of functions $H(t,q), G(t,q)$, $q \in [0,1]$, such that the perturbed system has hydrodynamic limits given by $u$ and $\varepsilon$. This is possible if the function $F=(H,G)$ solves the Poisson equation \begin{equation} \label{eq:poisson} \begin{cases} \partial_t {\mathcal Y}= \Delta {\mathcal Y} -\nabla(\sigma \nabla F)\\ F(t,0)=F(t,1)=(0,0) \end{cases} \end{equation} where the mobility $\sigma:=\sigma (u,\varepsilon)$ is given by \begin{equation} \label{eq:mobility} \sigma(u,\varepsilon)= 2\left( \begin{array}{cc} T & uT\\ uT & u^2 T +T^2 \end{array} \right), \quad T={\varepsilon}-u^2 /2 \end{equation} The perturbed process defines a probability measure $\tilde{\mathbb P}$ on the deformation/energy path space by means of the empirical deformation and energy profiles (see (\ref{eq:RE0})).
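The mobility matrix (\ref{eq:mobility}) is positive definite whenever $T>0$: its leading principal minor is $2T$ and its determinant simplifies to $4T^3$, so the quadratic cost functional below is well defined. A symbolic check (a sketch, not from the paper):

```python
import sympy as sp

u, T = sp.symbols('u T', positive=True)
# mobility matrix sigma(u, epsilon) written in terms of u and T = eps - u^2/2
sigma = 2 * sp.Matrix([[T, u * T], [u * T, u**2 * T + T**2]])

# determinant 4 T^3 > 0 and leading minor 2 T > 0: positive definite for T > 0
assert sp.simplify(sigma.det() - 4 * T**3) == 0
assert sigma[0, 0] > 0
```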
Our goal is to estimate the probability \begin{eqnarray*} \phantom{a}&\phantom{=}&{\mathbb P} \left[({\mathcal R}_s^N, {\mathcal E}_{s}^N)\sim(u(s, \cdot),\varepsilon (s, \cdot)),\; s\in [0,t]\right]\\ &=&{\tilde {\mathbb E}} \left[ \cfrac{d{\mathbb P}}{d{\tilde{{\mathbb P}}}} {\bf 1}_{\left\{({\mathcal R}_s^N, {\mathcal E}_{s}^N )\sim(u(s, \cdot),\varepsilon (s, \cdot)), \; s\in[0,t]\right\}}\right] \end{eqnarray*} To avoid irrelevant complications due to the fluctuations of the initial state, which have no bearing on the derivation of the quasi-potential, we assume that the initial profiles $u_0$ and $T_0$ are the stationary profiles ${\bar r} (q)=\ell$ and ${\bar T} (q) =T_\ell +(T_r -T_\ell)q$. The function $F$ is such that $${\tilde {\mathbb P}} \left[({\mathcal R}_s^N, {\mathcal E}_{s}^N)\sim(u(s,\cdot),{\varepsilon} (s, \cdot)),\; s \in [0,t]\right]\approx 1$$ In the appendix we show that in the large $N$ limit, under ${\tilde {\mathbb P}}$, the Radon-Nikodym derivative is given by \begin{equation*} \cfrac{d{\mathbb P}}{d{\tilde{{\mathbb P}}}} \approx \exp\left\{ -N J_{[0,t]}(u,{\varepsilon})\right\} \end{equation*} where \begin{equation} \label{eq:29} J_{[0,t]} (u,{\varepsilon})=\cfrac{1}{2}\int_0^{t} ds<\nabla F (s, \cdot), \sigma \nabla F (s,\cdot)>_q \end{equation} with $\sigma$ standing for $\sigma (u(s,\cdot),\varepsilon(s,\cdot))$ and $<\cdot,\cdot>_q$ for the usual scalar product in ${\mathbb L}^2 ([0,1],dq)$. Hence we have obtained \begin{eqnarray*} {\mathbb P} \left[({\mathcal R}_s^N, {\mathcal E}_{s}^N)\sim(u(s, \cdot),{\varepsilon}(s, \cdot)),\; s\in [0,t]\right] \\ \approx \exp\left\{ -N J_{[0,t]}(u,{\varepsilon})\right\} \end{eqnarray*} \subsection{The quasi-potential} To understand what the quasi-potential is, consider the following situation.
Assume that the system is macroscopically in the stationary profile $(u(-\infty, \cdot), {\varepsilon} (-\infty,\cdot))= (\ell, {\bar T}(\cdot)+\ell^2/2)$ at $t=-\infty$, but that at $t=0$ we find it in the state $(u(q),\varepsilon(q))$. We want to determine the most probable trajectory followed in the spontaneous creation of this fluctuation. According to the preceding subsection, this trajectory is the one that minimizes $J_{[-\infty,0]}$ among all trajectories $({\hat u}, {\hat \varepsilon})$ connecting the stationary profiles to $(u,\varepsilon)$. The quasi-potential is then defined by \begin{equation*} W (u,\varepsilon) = \inf_{({\hat u}, {\hat \varepsilon})} J_{[-\infty,0]} ({\hat u}, {\hat \varepsilon}) \end{equation*} MFT postulates that the quasi-potential $W$ is the appropriate generalization of the free energy for non-equilibrium systems, and this has been proven rigorously for the SSEP (\cite{BL2}). $W$ is the solution of an infinite-dimensional Hamilton-Jacobi equation which is in general very difficult to solve. It has been solved for specific models (SSEP and KMP) having a single conservation law (\cite{BL2}, \cite{BL3}). For the system we consider, two quantities are conserved and we are not able to solve this Hamilton-Jacobi equation. Nevertheless, we can compute the quasi-potential for the temperature profile (\ref{eq:30}) in the case $\gamma=1$. The latter is obtained by projecting the quasi-potential $W$ on the deformation/energy profiles with a prescribed temperature profile. Consider the system in its steady state $<\cdot>_{ss}$. Our aim here is to estimate the probability that the empirical kinetic energy defined by \begin{equation} \label{eq:30} \Theta^N (q) = N^{-1} \sum_{x=1}^N p_x^2 {\bf 1}_{\left[x/N, (x+1)/N \right)} (q) \end{equation} is close to some prescribed temperature profile $\pi (q)$ different from the linear profile ${\bar T} (q)= T_\ell +(T_r -T_\ell)q$.
This probability will be exponentially small in $N$ \begin{equation*} \left< \left[ \Theta^N (q) \sim \pi (q) \right] \right>_{ss} \approx \exp( -N V(\pi)) \end{equation*} By MFT, the rate function $V(\pi)$ coincides with the following \textit{projected quasi-potential} \begin{equation*} V(\pi)=\inf_{t >0} \inf_{(u,\varepsilon)\in \ensuremath{\mathcal A}_{t,\pi}} J_{[0,t]} (u,\varepsilon) \end{equation*} where the path set ${\ensuremath{\mathcal A}}_{t,\pi}$ is defined by \begin{eqnarray*} {\ensuremath{\mathcal A}}_{t,\pi} = \left\{ (u,\varepsilon); \quad {\varepsilon}(t, \cdot)-\cfrac{u^2(t, \cdot)}{2}=\pi(\cdot); \right.\\ \left. {\phantom{\cfrac{u^2(t, \cdot)}{2}}}u (0,\cdot)=\ell,\, T(0,\cdot)= {\bar T}(\cdot)\right\} \end{eqnarray*} Paths ${\mathcal Y}=(u,\varepsilon) \in {\mathcal A}_{t,\pi}$ must also satisfy the boundary conditions \begin{equation} \label{eq:bc} u(t,0)=u(t,1)=\ell, \quad \varepsilon (t,0)=T_\ell+\ell^2/2, \, \varepsilon (t,1)=T_r +\ell^2/2 \end{equation} In fact, it can be shown that $J_{[0,t]} (u,\varepsilon) =+\infty$ if the path ${\mathcal Y}$ does not satisfy these boundary conditions.\\ Our main result is the computation of the projected quasi-potential: \begin{equation} \label{eq:32} V (\pi)= \inf_{\tau \in \ensuremath{\mathcal T}} [{\ensuremath{\mathcal F}} (\pi, \tau) ] \end{equation} where $\ensuremath{\mathcal T}=\{ \tau \in C^{1}([0,1]); \; \tau' (q)>0,\; \tau(0)=T_\ell,\; \tau (1)=T_r\}$ and \begin{equation*} {\mathcal F} (\pi,\tau)= \int_0^1 dq \left[\cfrac{\pi (q)}{\tau(q)} -1 - \log\cfrac{\pi(q)}{\tau (q)} -\log \cfrac{\tau' (q)}{(T_r - T_{\ell})}\right] \end{equation*} Before proving (\ref{eq:32}), let us make some remarks. First, $V(\pi)$ is equal to the rate function of the KMP process (\cite{BL3}). Nevertheless, the deep reason for this coincidence is not easy to understand.
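Two elementary consistency checks of (\ref{eq:32}) can be done numerically (a sketch with assumed bath temperatures $T_\ell=1$, $T_r=2$, not values from the paper): at the typical profile $\pi={\bar T}$, the choice $\tau={\bar T}$ gives ${\mathcal F}=0$, hence $V({\bar T})=0$, while any admissible non-linear $\tau$ gives a strictly positive value, consistent with the infimum being attained at $\tau={\bar T}$.

```python
import numpy as np

Tl, Tr = 1.0, 2.0                      # assumed bath temperatures
M = 2000
q = np.linspace(0.0, 1.0, M + 1)
w = np.full(M + 1, 1.0 / M)            # trapezoid quadrature weights
w[0] = w[-1] = 0.5 / M
Tbar = Tl + (Tr - Tl) * q              # linear steady-state profile

def F(pi, tau):
    """F(pi,tau) = int_0^1 [pi/tau - 1 - log(pi/tau) - log(tau'/(Tr-Tl))] dq."""
    taup = np.gradient(tau, q)
    integrand = pi / tau - 1.0 - np.log(pi / tau) - np.log(taup / (Tr - Tl))
    return integrand @ w

# admissible competitor: increasing, tau(0) = Tl, tau(1) = Tr, but non-linear
tau2 = Tl + (Tr - Tl) * (q + 0.2 * q * (1.0 - q))

assert abs(F(Tbar, Tbar)) < 1e-12      # minimizer tau = Tbar gives V(Tbar) = 0
assert F(Tbar, tau2) > 1e-3            # any other admissible tau costs more
```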
The symmetric part ${\mathcal S}$ of the generator ${\mathcal L}$ is more or less a time-continuous version of the KMP process for the kinetic energy, but the Hamiltonian part has a non-trivial effect on the latter since it mixes momenta with positions. Hence, the quasi-potential for the kinetic energy cannot be derived from the computations for the KMP process. Secondly, we are able to compute $V$ only for $\gamma=1$. When $\gamma=1$ the hydrodynamic equations for the deformation and for the energy are decoupled, but since the temperature is a non-linear function of deformation and energy, it is not clear why this helps; it does, however. Formula (\ref{eq:32}) shows that the large deviation functional $V$ is nonlocal and consequently not additive: the probability of a temperature profile in disjoint macroscopic regions is not given by the product of the separate probabilities. Nonlocality is a generic feature of NESS and is related to the ${\mathcal O} (N^{-1})$ corrections to local thermal equilibrium. \vspace{0.5cm} Let us call $S(\pi)$ the right hand side of equality (\ref{eq:32}). \vspace{0.5cm} For time-independent deformation/energy profiles $(r(q),e(q))$ and $\tau (q) \in \ensuremath{\mathcal T}$ we define the functional \begin{equation} \label{eq:U} U(r,e,\tau)=\int_0^1 dq \left\{ \cfrac{T}{\tau} -1 - \log \cfrac{T}{\tau} -\log \cfrac{\tau'}{T_r - T_\ell} +\cfrac{(r-\ell)^2}{2\tau}\right\} \end{equation} where $T(q)=e(q)-r(q)^2/2$ is the temperature profile corresponding to $(r(q),e(q))$. 
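The variational formula (\ref{eq:32}) can be probed numerically. The following sketch is an added illustration, not part of the paper's derivation: the boundary temperatures, grid, and perturbed profile are arbitrary choices. It evaluates ${\cal F}(\pi,\tau)$ on a grid and confirms that it vanishes for $\pi=\tau={\bar T}$ and is positive for a fluctuation away from the stationary profile, as the pointwise inequality $x-1-\log x\geq 0$ predicts.

```python
import numpy as np

# Illustrative numerical sketch, not from the paper: discretize
#   F(pi, tau) = \int_0^1 [ pi/tau - 1 - log(pi/tau) - log(tau'/(T_r - T_l)) ] dq
# on a uniform grid.  Boundary temperatures and the perturbed profile
# below are arbitrary choices made for the illustration.
T_l, T_r = 1.0, 2.0
q = np.linspace(0.0, 1.0, 2001)

def trapz(y):
    # trapezoidal rule on the grid q
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(q)))

def F(pi_prof, tau):
    dtau = np.gradient(tau, q)  # tau'(q); exact for a linear tau
    return trapz(pi_prof / tau - 1.0 - np.log(pi_prof / tau)
                 - np.log(dtau / (T_r - T_l)))

Tbar = T_l + (T_r - T_l) * q                     # stationary linear profile
F0 = F(Tbar, Tbar)                               # integrand vanishes pointwise
F1 = F(Tbar + 0.3 * np.sin(np.pi * q)**2, Tbar)  # pi != tau
print(F0, F1)
```

With these choices the second value is strictly positive, consistent with $V(\pi)>0$ for any profile $\pi\neq{\bar T}$.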
Define the function $\tau:=\tau(r,e)$ in ${\ensuremath{\mathcal T}}$ as the unique increasing solution of \begin{equation} \label{eq:tau} \begin{cases} \tau^2 \cfrac{\Delta \tau}{(\nabla \tau)^2} = \tau -T -\cfrac{1}{2} (r-\ell)^2\\ \tau(0)=T_\ell,\; \tau (1)= T_r \end{cases} \end{equation} Fix deformation/energy paths satisfying boundary conditions (\ref{eq:bc}) and define ${\mathcal Y}$ and ${\mathcal Z}$ by \begin{equation} \label{eq:YZ} {\mathcal Y}=\left(\begin{array}{c}u\\\varepsilon \end{array}\right),\qquad {\mathcal Z}=\left[\partial_t {\mathcal Y}-\Delta {\mathcal Y}+\nabla(\sigma \nabla(\delta U))\right] \end{equation} In the appendix we show the following formula \begin{widetext} \begin{eqnarray} \label{eq:43} J_{[0,t]}(u,\varepsilon)=U(u(t,\cdot),{\varepsilon} (t,\cdot),\tau(u(t,\cdot),{\varepsilon} (t,\cdot)))-U(u(0, \cdot),{\varepsilon}(0,\cdot),\tau(u(0,\cdot),{\varepsilon} (0,\cdot)))\\ \nonumber \\ +\cfrac{1}{2}\int_0^t ds \left<\nabla^{-1}{\mathcal Z}, \sigma^{-1}\nabla^{-1}{\mathcal Z} \right>_q+\cfrac{1}{4}\int_0^t ds \int_0^1 dq (u(s,q)-\ell)^4 \cfrac{(\nabla \tau)^2 (s,q)}{\tau^4 (s,q)}\nonumber \end{eqnarray} \end{widetext} where \begin{equation*} \delta U=\left(\begin{array}{c}\cfrac{\delta U}{\delta r}\\ \phantom{a}\\ \cfrac{\delta U}{\delta {e}} \end{array}\right) (u,\varepsilon,\tau(u,\varepsilon)) \end{equation*} If $(u,\varepsilon)$ belongs to ${\mathcal A}_{t,\pi}$, \begin{eqnarray*} U(\, u(0, \cdot),{\varepsilon}(0,\cdot),\tau(u(0,\cdot),{\varepsilon} (0,\cdot))\,)\\ =U(\,\ell, \bar{T}+\ell^2/2,\tau(\ell, {\bar T} +\ell^2 /2)\,)=0 \end{eqnarray*} and $$U(u(t, \cdot),{\varepsilon} (t,\cdot), \tau(u(t, \cdot), \varepsilon(t, \cdot))) \ge {\mathcal F} (\pi, \tau (u(t,\cdot),{\varepsilon} (t,\cdot))).$$ The last two terms on the right hand side of (\ref{eq:43}) are nonnegative, so that for every path in ${\mathcal A}_{t,\pi}$ we have \begin{eqnarray*} J_{[0,t]}(u,\varepsilon) \geq S(\pi) \end{eqnarray*} and hence we obtain \begin{equation} 
\label{eq:firstinequality} V(\pi)\geq S(\pi) \end{equation} To obtain the reverse inequality, we have to construct an optimal path $(u^*,{\varepsilon}^*) \in \ensuremath{\mathcal A}_{t,\pi}$ such that the last two terms on the right hand side of (\ref{eq:43}) are equal to $0$, i.e. \begin{equation} \label{eq:44} \begin{cases} \partial_t {\mathcal Y}=\Delta {\mathcal Y}-\nabla(\sigma \nabla(\delta U))\\ u(t,q)=\ell \end{cases} \end{equation} We denote by $T^{*}={\varepsilon}^* -{u^*}^2/2$ the corresponding temperature. Inserting this into (\ref{eq:43}), we obtain \begin{equation} \label{eq:10000} J_{[0,t]}(u^*,{\varepsilon}^*)=U(u^{*}(t),{\varepsilon}^{*}(t),\tau(u^*(t) , {\varepsilon}^*(t))) \end{equation} By the definition (\ref{eq:U}) of $U$ and by using the fact that $u^{*} (t,q)=\ell$, we obtain \begin{eqnarray} \label{eq:10001} J_{[0,t]}(u^*,{\varepsilon}^*)={\ensuremath{\mathcal F}}(T^{*}(t, \cdot),\tau(u^*(t, \cdot), {\varepsilon}^{*}(t, \cdot))) \nonumber\\ ={\ensuremath{\mathcal F}}(\pi,\tau(\ell,\pi +\ell^2 /2)) \end{eqnarray} The variational problem defining $S$ is solved for $\tau= \tau(\ell,\pi+\ell^2 /2)$ (\cite{BL3}) so that $$S(\pi)={\ensuremath{\mathcal F}} (\pi, \tau(\ell,\pi +\ell^2 /2))$$ and therefore we have \begin{equation*} V(\pi)=\inf_{t >0} \inf_{\ensuremath{\mathcal A}_{t,\pi}} J_{[0,t]}(u,\varepsilon) \leq S(\pi) \end{equation*} This inequality together with (\ref{eq:firstinequality}) shows that $V(\pi)=S(\pi)$. It remains to prove that such a ``good'' path exists. The proof is similar to \cite{BL3} and we shall merely outline it. Equation (\ref{eq:44}) is equivalent to the following one \begin{equation} \label{eq:45} \begin{cases} \begin{array}{l} \partial_t T^{*}=-\Delta(T^*)+2\nabla\left[\cfrac{(T^*)^2}{(\tau^*)^2}\nabla(\tau^*)\right]\\ u^*(t,q)=\ell \end{array} \end{cases} \end{equation} where $\tau^*(t,\cdot)= \tau(\ell, T^*(t,\cdot) +\ell^2/2)$. Let us denote by $\theta^*(s, \cdot)=T^{*}(t-s, \cdot)$ the time reversed path of $T^*$. 
$\theta^{*}$ can be constructed by the following procedure. We define $\theta^{*}(s,q),\; s\in[0,t],\; q\in [0,1]$ by $$\theta^{*}(s, \cdot)= \rho (s, \cdot) -\rho(s, \cdot)^2 \cfrac{\Delta \rho (s, \cdot)}{[(\nabla \rho)(s, \cdot)]^2}$$ where $\rho(s,q)$ is the solution of \begin{equation*} \label{eq47} \begin{cases} \partial_s \rho =\Delta \rho\\ \rho (s,0)=T_{\ell},\qquad \rho (s,1)=T_r\\ \rho(0,q)=\rho_0(q)=\tau(\ell, \pi+\ell^2/2)(q) \end{cases} \end{equation*} Note that, by (\ref{eq:tau}), the relation between $\theta^*$ and $\rho$ says exactly that $\rho(s,\cdot)=\tau(\ell, \theta^{*}(s,\cdot)+\ell^2/2)$; in particular $\theta^{*}(0,\cdot)=\pi(\cdot)$. It can be checked that $T^{*}(s,q)=\theta^{*}(t-s,q)$ solves $(\ref{eq:45})$. Moreover, we have $T^{*}(0,\cdot)=\theta^{*}(t,\cdot)$ and $T^{*}(t,\cdot)=\pi(\cdot)$. This path belongs to ${\ensuremath{\mathcal A}}_{t,\pi}$ only as $t\to \infty$ since $\theta^{*}(t,\cdot)$ goes to $\bar{T}(\cdot)$ as $t\to +\infty$. Hence we in fact have to define $T^*$ by the preceding procedure in some time interval $[t_1,t]$ and to interpolate between $\bar{T}(\cdot)$ and $T^{*}(t_1, \cdot)$ in the time interval $[0,t_1]$ (see \cite{BL2}, \cite{BL3} for details). This optimal path is also obtained as the time reversed solution of the hydrodynamic equation corresponding to the process with generator ${\ensuremath{\mathcal L}}^{*}$. It is easy to show that this last hydrodynamic equation is in fact the same as the hydrodynamic equation corresponding to ${\ensuremath{\mathcal L}}$. This is the ``generalized'' Onsager-Machlup theory developed in \cite{BL1} for NESS: ``the spontaneous emergence of a macroscopic fluctuation takes place most likely following a trajectory which can be characterized in terms of the time reversed process.'' Observe also the following a priori non-trivial fact: the optimal path is obtained with a constant deformation profile. \section{Conclusions} In the present work we obtained hydrodynamic limits, Gaussian fluctuations and (partially) large fluctuations for a model of harmonic oscillators perturbed by a conservative noise. 
Up to now, MFT has been restricted to gradient systems with a single conservation law. This work is hence the first one where MFT is applied to a non-gradient model with two conserved quantities. The quasi-potential for the temperature has been computed in the case $\gamma=1$ and it turns out that it coincides with that of the KMP process. Our results show that this system exhibits generic features of non-equilibrium models: long range correlations and non-locality of the quasi-potential. Nevertheless, our study is not completely satisfactory. It would be interesting to extend the previous results to the case $\gamma \neq 1$ and to compute the quasi-potential for the two conserved quantities and not only for the temperature. The difficulty is that there does not exist a general strategy to solve the corresponding infinite-dimensional Hamilton-Jacobi equation.
\section{Introduction} The uncertainty principle in harmonic analysis is a class of theorems which state that a nontrivial function and its Fourier transform can not both be too sharply localized. For background on different appropriate notions of localization and an overview on the recent renewed interest in mathematical formulations of the uncertainty principle, see the survey \cite{JP.FS}. This paper will adopt the broader view that the uncertainty principle can be seen not only as a statement about the time-frequency localization of a single function but also as a statement on the degradation of localization when one considers successive elements of an orthonormal basis. In particular, the results that we consider show that the elements of an orthonormal basis as well as their Fourier transforms can not be uniformly concentrated in the time-frequency plane. Hardy's Uncertainty Principle \cite{JP.Ha} may be viewed as an early theorem of this type. To set notation, define the \emph{Fourier transform} of $f\in L^1({\mathbb{R}})$ by $$ \widehat{f}(\xi)=\int f(t)e^{-2i\pi t\xi} {d}t, $$ and then extend to $L^2({\mathbb{R}})$ in the usual way. \medskip \begin{theorem} [Hardy's Uncertainty Principle] Let $a,b,C,N>0$ be positive real numbers and let $f \in L^2({\mathbb{R}})$. Assume that for almost every $x,\xi \in {\mathbb{R}}$, \begin{equation} \label{eq_hardy} |f(x)|\leq C(1+|x|)^Ne^{-\pi a|x|^2}\ \ \ \hbox{ and } \ \ \ |\widehat{f}(\xi)|\leq C(1+|\xi|)^Ne^{-\pi b|\xi|^2}. \end{equation} The following hold: \begin{itemize} \item If $ab>1$ then $f=0$. \item If $ab=1$ then $f(x)=P(x)e^{-\pi a|x|^2}$ for some polynomial $P$ of degree at most $N$. \end{itemize} \end{theorem} This theorem has been further generalized where the pointwise condition (\ref{eq_hardy}) is replaced by integral conditions in \cite{JP.BDJ}, and by distributional conditions in \cite{JP.De}. Also see \cite{AP.GZ} and \cite{AP.HL}. 
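The borderline case $ab=1$ of Hardy's theorem can be checked numerically. The sketch below is an added illustration (the grid and truncation are ad hoc choices): with the normalization of the Fourier transform fixed above, the Gaussian $f(t)=e^{-\pi t^2}$ satisfies $\widehat f=f$, so both bounds in (\ref{eq_hardy}) hold with $N=0$ and $a=b=1$.

```python
import numpy as np

# Numerical sketch (added illustration; grid and truncation are ad hoc):
# with the normalization \hat f(xi) = \int f(t) e^{-2 i pi t xi} dt used
# above, f(t) = e^{-pi t^2} is its own Fourier transform, i.e. the
# borderline case a = b = 1 (with P constant) of Hardy's theorem.
t = np.linspace(-10.0, 10.0, 40001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)

def fhat(xi):
    # Riemann-sum approximation of the Fourier integral
    return np.sum(f * np.exp(-2j * np.pi * t * xi)) * dt

for xi in (0.0, 0.5, 1.3):
    print(xi, fhat(xi).real, np.exp(-np.pi * xi**2))
```

The quadrature error is far below the displayed digits here, since the Gaussian is smooth and decays faster than any bound of the form (\ref{eq_hardy}) requires.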
One may interpret Hardy's theorem by saying that the set of functions which, together with their Fourier transforms, are bounded by $C(1+|x|)^Ne^{-\pi|x|^2}$ is finite dimensional, in the sense that its span is a finite dimensional subspace of $L^2({\mathbb{R}})$. In the case $ab<1$, the class of functions satisfying the condition (\ref{eq_hardy}) has been fully described by B. Demange \cite{JP.De}. In particular, it is an infinite dimensional subset of $L^2({\mathbb{R}})$. Nevertheless, it can not contain an infinite orthonormal sequence. This was first proved by Shapiro in \cite{JP.Sh}: \begin{theorem} [Shapiro's Umbrella Theorem] \label{ap.sh-umbrel} Let $\varphi,\psi \in L^2({\mathbb{R}})$. If $\{ e_k \} \subset L^2 ({\mathbb{R}})$ is an orthonormal sequence of functions such that for all $k$ and for almost all $x,\xi\in{\mathbb{R}}$, $$ |e_k(x)|\leq |\varphi(x)|\quad\mbox{and}\quad|\widehat{e}_k(\xi)|\leq |\psi(\xi)|, $$ then the sequence $\{e_k\}$ is finite. \end{theorem} Recent work of A. De Roton, B. Saffari, H.S. Shapiro, and G. Tennenbaum, see \cite{JP.DeR}, shows that the assumption $\varphi,\psi \in L^2 ({\mathbb{R}})$ can not be substantially weakened. Shapiro's elegant proof of Theorem \ref{ap.sh-umbrel} uses a compactness argument of Kolmogorov, see \cite{JP.Sh2}, but does not give a bound on the number of elements in the finite sequence. A second problem of a similar nature studied by Shapiro in \cite{JP.Sh} is that of bounding the means and variances of orthonormal sequences. For $f\in L^2({\mathbb{R}})$ with $\norm{f}_2=1$, we define the following associated \emph{mean} $$ \mu(f)={\rm Mean}(|f|^2) = \int t|f(t)|^2\mbox{d}t, $$ and the associated \emph{variance} $$ \Delta^2(f)= {\rm Var} (|f|^2) = \int|t-\mu(f)|^2|f(t)|^2\mbox{d}t. $$ It will be convenient to work also with the \emph{dispersion} $\Delta(f)\equiv\sqrt{\Delta^2(f)}$. 
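To make these definitions concrete, here is a small quadrature sketch (an added illustration; the grid is an arbitrary choice) computing $\mu$ and $\Delta^2$ for the normalized Gaussian $g(t)=2^{1/4}e^{-\pi t^2}$, for which $\mu(g)=0$ and $\Delta^2(g)=\frac{1}{4\pi}$; since $\widehat g=g$, the same values hold on the Fourier side.

```python
import numpy as np

# Sketch (added illustration; quadrature grid is an arbitrary choice):
# compute the mean and dispersion defined above for the normalized Gaussian
# g(t) = 2^{1/4} e^{-pi t^2}.  Exact values: mu(g) = 0, Delta^2(g) = 1/(4 pi);
# since \hat g = g, the same values hold for the Fourier transform.
t = np.linspace(-8.0, 8.0, 16001)
dt = t[1] - t[0]
g = 2**0.25 * np.exp(-np.pi * t**2)

norm2 = np.sum(g**2) * dt                    # ||g||_2^2, should be ~1
mu = np.sum(t * g**2) * dt                   # Mean(|g|^2)
var = np.sum((t - mu)**2 * g**2) * dt        # Var(|g|^2)
print(norm2, mu, var, 1.0 / (4 * np.pi))
```

The product $\Delta(g)\Delta(\widehat g)=\frac{1}{4\pi}$ is exactly the extremal value in Heisenberg's inequality, recovered below as the $n=0$ case of the sharp mean-dispersion principle.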
In \cite{JP.Sh}, Shapiro posed the question of determining for which sequences of real numbers $\{a_n\}_{n=0}^{\infty},$ $\{b_n\}_{n=0}^{\infty},$ $\{c_n\}_{n=0}^{\infty},$ $\{d_n\}_{n=0}^{\infty} \subset {\mathbb{R}}$ there exists an orthonormal basis $\{e_n\}_{n=0}^{\infty}$ for $L^2 ({\mathbb{R}})$ such that for all $n \geq 0$ $$ \mu(e_n)=a_n,\ \mu(\widehat{e}_n)=b_n,\ \Delta(e_n)=c_n,\ \Delta(\widehat{e}_n)=d_n. $$ Using again Kolmogorov's compactness argument, he proved the following, \cite{JP.Sh}: \begin{theorem} [Shapiro's Mean-Dispersion Principle]\label{ap.sh-meandisp} There does not exist an infinite orthonormal sequence $\{e_n\}_{n=0}^{\infty} \subset L^2({\mathbb{R}})$ such that all four of $\mu(e_n)$, $\mu(\widehat{e}_n)$, $\Delta(e_n), \Delta(\widehat{e}_n)$ are uniformly bounded. \end{theorem} \medskip An extension of this theorem in \cite{JP.Po} shows that if $\{e_n\}_{n=0}^{\infty}$ is an orthonormal basis for $L^2({\mathbb{R}})$ then two dispersions and one mean $\Delta(e_n), \Delta(\widehat{e}_n),\mu(e_n)$ can not all be uniformly bounded. Shapiro recently pointed out a nice alternate proof of this result using the Kolmogorov compactness theorem from \cite{JP.Sh}. The case for two means and one dispersion is different. In fact, it is possible to construct an orthonormal basis $\{e_n\}_{n=0}^{\infty}$ for $L^2({\mathbb{R}})$ such that the two means and one dispersion $\mu(e_n)$, $\mu(\widehat{e}_n),\Delta(e_n)$ are uniformly bounded, see \cite{JP.Po}. Although our focus will be on Shapiro's theorems, let us also briefly refer the reader to some other work in the literature concerning uncertainty principles for bases. The classical Balian-Low theorem states that if a set of lattice coherent states forms an orthonormal basis for $L^2({\mathbb{R}})$ then the window function satisfies a strong version of the uncertainty principle, e.g., see \cite{AP.CP,AP.GHHK}. For an analogue concerning dyadic orthonormal wavelets, see \cite{AP.battle}. 
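For comparison with Theorem \ref{ap.sh-meandisp}, the Hermite basis shows how dispersions must grow along an orthonormal basis. The following sketch is an added illustration: it builds the Hermite functions $h_n$ of Section \ref{ap.herm.sec} by rescaling the standard oscillator eigenfunctions $\psi_n$ (an equivalent normalization), with an ad hoc grid, and checks $\mu(h_n)=0$ and $\Delta^2(h_n)=\frac{2n+1}{4\pi}$ for the first few $n$.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

# Sketch (added illustration): dispersions of the Hermite basis grow like
# sqrt(n), so no uniform bound as in the Mean-Dispersion Principle can hold.
# We use h_n(t) = (2 pi)^{1/4} psi_n(sqrt(2 pi) t), where psi_n are the
# standard L^2-normalized oscillator eigenfunctions; the grid is ad hoc.
t = np.linspace(-10.0, 10.0, 40001)
dt = t[1] - t[0]
x = np.sqrt(2 * pi) * t

def h(n):
    Hn = hermval(x, [0] * n + [1])  # physicists' Hermite polynomial H_n
    psi = Hn * np.exp(-x**2 / 2) / np.sqrt(2**n * factorial(n) * np.sqrt(pi))
    return (2 * pi)**0.25 * psi

# mu(h_n) = 0 by symmetry, so Delta^2(h_n) = \int t^2 h_n(t)^2 dt
disp2 = [np.sum(t**2 * h(n)**2) * dt for n in range(6)]
print([round(d, 5) for d in disp2])
print([round((2 * n + 1) / (4 * pi), 5) for n in range(6)])
```

The computed values match $(2n+1)/(4\pi)$, the exact dispersions recorded in the example of Section \ref{ap.sharp.md.sec}.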
\subsection*{Overview and main results} The goal of this paper is to provide quantitative versions of Shapiro's Mean-Dispersion Principle and Umbrella Theorem, i.e., Theorems \ref{ap.sh-umbrel} and \ref{ap.sh-meandisp}. Section \ref{sec2.ap} addresses the Mean-Dispersion Theorem. The main results of this section are contained in Section \ref{ap.sharp.md.sec} where we prove a sharp quantitative version of Shapiro's Mean-Dispersion Principle. This result is sharp, but the method of proof is not easily applicable to more general versions of the problem. Sections \ref{ap.herm.sec} and \ref{ap.rayleigh.sec} respectively contain necessary background on Hermite functions and the Rayleigh-Ritz technique which is needed in the proofs. Section \ref{ap.riesz.shmd.sec} proves a version of the mean-dispersion theorem for Riesz bases. Section \ref{sec3.ap} addresses the Umbrella Theorem and variants of the Mean-Dispersion Theorem. The main results of this section are contained in Section \ref{gen.md.ap} where we prove a quantitative version of the Mean-Dispersion Principle for a generalized notion of dispersion, and in Section \ref{JP.sec:QUT} where we prove a quantitative version of Shapiro's Umbrella Theorem. Explicit bounds on the size of possible orthonormal sequences are given in particular cases. Since the methods of Section \ref{sec2.ap} are no longer easily applicable here, we adopt an approach based on geometric combinatorics. Our results use estimates on the size of spherical codes, and the theory of prolate sphero\"\i dal wavefunctions. Section \ref{ap.sphcode.sec} contains background results on spherical codes, including the Delsarte, Goethals, Seidel bound. Section \ref{ap.apponb.sec} proves some necessary results on projections of one set of orthonormal functions onto another set of orthonormal functions. 
Section \ref{ap.psw.sec} gives an overview of the prolate sphero\"\i dal wavefunctions and makes a connection between projections of orthonormal functions and spherical codes. Section \ref{ap.rieszang.sec} concludes with extensions to Riesz bases. \section{Growth of means and dispersions} \label{sec2.ap} In this section, we use the classical Rayleigh-Ritz technique to give a quantitative version of Shapiro's Mean-Dispersion Theorem. We also prove that, in this sense, the Hermite basis is the best concentrated orthonormal basis of $L^2({\mathbb{R}})$. \subsection{The Hermite basis} \label{ap.herm.sec} Results of this section can be found in \cite{JP.FS}. The Hermite functions are defined by $$ h_k(t)=\frac{2^{1/4}}{\sqrt{k!}}\left(-\frac{1}{\sqrt{2\pi}}\right)^ke^{\pi t^2} \left(\frac{{d}}{{d}t}\right)^ke^{-2\pi t^2}. $$ It is well known that the Hermite functions are eigenfunctions of the Fourier transform, satisfy $\widehat{h_k}=i^{-k}h_k$, and form an orthonormal basis for $L^2({\mathbb{R}})$. Let us define the {\em Hermite operator} $H$ for functions $f$ in the Schwartz class by $$Hf(t) = -\frac{1}{4\pi^2}\frac{d^2}{dt^2}f(t) + t^2 f(t).$$ It is easy to show that \begin{equation} \label{JP.eigen_val} Hh_k = \left( \frac{2k+1}{2\pi} \right) h_k, \end{equation} so that $H$ may also be seen as the densely defined, positive, self-adjoint, unbounded operator on $L^2(\mathbb{R})$ defined by $$ Hf = \sum_{k=0}^{\infty} \frac{2k+1}{2\pi} \langle f, h_k \rangle h_k. $$ From this, it immediately follows that, for each $f$ in the domain of $H$ \begin{eqnarray} \langle Hf,f \rangle&=& \sum_{k=0}^{\infty} \frac{2k+1}{2\pi} |\langle f, h_k \rangle |^2 = \int |t|^2 |f(t)|^2 dt + \int |\xi|^2 |\widehat{f}(\xi)|^2 d\xi \label{JP.eq:fundherm} \\ &=&\mu(f)^2\norm{f}_2^2+\Delta^2(f)+\mu(\widehat{f})^2\|\widehat{f}\|_2^2+\Delta^2(\widehat{f}). 
\notag \end{eqnarray} \subsection{The Rayleigh-Ritz Technique} \label{ap.rayleigh.sec} The Rayleigh-Ritz technique is a useful tool for estimating eigenvalues of operators, see \cite[Theorem XIII.3, page 82]{JP.RS4}. \begin{theorem} [The Rayleigh-Ritz Technique] \label{JP.ral_ritz_thm} Let $H$ be a positive self-adjoint operator and define $$ \lambda_k (H) = \sup_{\varphi_0, \cdots, \varphi_{k-1}} \ \ \inf_{\psi \in [\varphi_0, \cdots, \varphi_{k-1}]^{\perp},\|\psi\|_2 = 1, \psi \in D(H)} \langle H \psi, \psi \rangle$$ where $D(H)$ is the domain of $H$. Let $V$ be an $(n+1)$-dimensional subspace, $V\subset D(H),$ and let $P$ be the orthogonal projection onto $V$. Let $H_V = PHP$ and let $\widetilde{H_V}$ denote the restriction of $H_V$ to $V$. Let $\mu_0 \leq \mu_1 \leq \cdots \leq \mu_n$ be the eigenvalues of $\widetilde{H_V}$. Then $$ \lambda_k (H) \leq \mu_k, \ \ \ k=0, \cdots, n. $$ \end{theorem} The following corollary is a standard and useful application of the Rayleigh-Ritz technique. For example, \cite[Chapter 12]{JP.LL} contains a version in the setting of Schr\"odinger operators. \begin{corollary} \label{JP.trace_cor} Let $H$ be a positive self-adjoint operator, and let $\varphi_0, \cdots, \varphi_n$ be an orthonormal set of functions. Then \begin{equation} \label{JP.eq:trace} \sum_{k=0}^n \lambda_k(H) \leq \sum_{k=0}^n \langle H\varphi_k, \varphi_k \rangle. \end{equation} \end{corollary} \begin{proof} If some $\varphi_k \notin D(H)$ then positivity of $H$ implies that \eqref{JP.eq:trace} trivially holds since the right hand side of the equation would be infinite. We may thus assume that $\varphi_0, \cdots, \varphi_n\in D(H)$. Define the $(n+1)$-dimensional subspace $V = {\rm span} \ \{\varphi_k\}_{k=0}^n$ and note that the operator $\widetilde{H_V}$ is given by the matrix $M =[\langle H \varphi_j, \varphi_k \rangle]_{0\leq j,k \leq n}$. Let $\mu_0, \cdots, \mu_n$ be the eigenvalues of $\widetilde{H_V}$, i.e., of the matrix $M$. 
By Theorem \ref{JP.ral_ritz_thm}, $$ \sum_{k=0}^n \lambda_k (H) \leq \sum_{k=0}^n \mu_k = {\rm Trace} (M) = \sum_{k=0}^n \langle H\varphi_k, \varphi_k \rangle $$ which completes the proof of the corollary. \end{proof} \subsection{The Sharp Mean-Dispersion Principle} \label{ap.sharp.md.sec} \begin{theorem} [Mean-Dispersion Principle] \label{JP.th:optherm} Let $\{e_k\}_{k=0}^{\infty}$ be any orthonormal sequence in $L^2({\mathbb{R}})$. Then for all $n\geq 0$, \begin{equation} \label{JP.eq:optherm} \sum_{k=0}^n \left( \Delta^2(e_k) + \Delta^2(\widehat{e_k}) + |\mu(e_k)|^2 + |\mu(\widehat{e_k})|^2 \right) \geq \frac{(n+1)(2n+1)}{4\pi}. \end{equation} Moreover, if equality holds for all $0 \leq n\leq n_0$, then there exists $\{c_k\}_{k=0}^{n_0} \subset {\mathbb{C}}$ such that $|c_k|=1$ and $e_k=c_kh_k$ for each $0 \leq k \leq n_0$. \end{theorem} \begin{proof} Since $H$ is positive and self-adjoint, Corollary \ref{JP.trace_cor} applies, and it follows that for each $n \geq 0$ one has \begin{equation} \label{JP.trace_eq} \sum_{k=0}^n \frac{2k+1}{2\pi} \leq \sum_{k=0}^n \langle He_k, e_k \rangle. \end{equation} From (\ref{JP.eq:fundherm}), note that since $\|e_k\|_2=1$, $$ \langle H e_k, e_k \rangle = \Delta^2 (e_k) + \Delta^2 (\widehat{e_k}) + |\mu(e_k)|^2 + |\mu(\widehat{e_k})|^2. $$ This completes the proof of the first part. Assume equality holds in (\ref{JP.eq:optherm}) for all $n=0,\ldots,n_0$, in other words, that for $n=0,\ldots,n_0$, $$ \scal{He_n,e_n}= \Delta^2(e_n) + \Delta^2(\widehat{e_n}) + |\mu(e_n)|^2 + |\mu(\widehat{e_n})|^2 = \frac{2n+1}{2\pi}. $$ Let us first apply (\ref{JP.eq:fundherm}) to $f=e_0$: $$ \sum_{k=0}^{\infty} \frac{2k+1}{2\pi} |\langle e_0, h_k \rangle |^2 =\scal{He_0,e_0}=\frac{1}{2\pi} =\sum_{k=0}^{\infty} \frac{1}{2\pi} |\langle e_0, h_k \rangle |^2 $$ since $\|e_0\|_2=1$. Thus, for $k\geq1$, one has $\scal{e_0,h_k}=0$ and hence $e_0=c_0h_0$. Also $\|e_0\|_2=1$ implies $|c_0|=1$. 
Next, assume that we have proved $e_k=c_kh_k$ for $k=0,\ldots,n-1$. Since $e_n$ is orthogonal to $e_k$ for $k<n,$ one has $\scal{e_n,h_k}=0$. Applying (\ref{JP.eq:fundherm}) for $f=e_n$ we obtain that, $$ \sum_{k=n}^{\infty} \frac{2k+1}{2\pi} |\langle e_n, h_k \rangle |^2 =\sum_{k=0}^{\infty} \frac{2k+1}{2\pi} |\langle e_n, h_k \rangle |^2 =\scal{He_n,e_n}=\frac{2n+1}{2\pi} =\sum_{k=n}^{\infty} \frac{2n+1}{2\pi} |\langle e_n, h_k \rangle |^2. $$ Thus $\scal{e_n,h_k}=0$ for $k>n$. It follows that $e_n=c_nh_n$. \end{proof} \begin{examplenum} For all $n \geq 0$, the Hermite functions satisfy $$\mu(h_n) = \mu(\widehat{h_n}) =0 \ \ \ \hbox{ and } \ \ \ \Delta^2 (h_n) =\Delta^2 (\widehat{h_n}) = \frac{2n+1}{4\pi}.$$ \end{examplenum} For comparison, let us remark that Bourgain has constructed an orthonormal basis $\{b_n\}_{n=1}^{\infty}$ for $L^2({\mathbb{R}})$, see \cite{JP.B}, which satisfies $\Delta^2 (b_n) \leq \frac{1}{2\sqrt{\pi}} + \varepsilon$ and $\Delta^2 (\widehat{b_n}) \leq \frac{1}{2\sqrt{\pi}} + \varepsilon$. However, it is difficult to control the growth of $\mu(b_n), \mu(\widehat{b_n})$ in this construction. For other bases with more structure, see the related work in \cite{AP.BCGP} that constructs an orthonormal basis of lattice coherent states $\{g_{m,n}\}_{m,n\in{\mathbb{Z}}}$ for $L^2({\mathbb{R}})$ which is logarithmically close to having uniformly bounded dispersions. The means $(\mu(g_{m,n}), \mu( \widehat{g_{m,n}}))$ for this basis lie on a translate of the lattice ${\mathbb{Z}} \times {\mathbb{Z}}$. It is interesting to note that if one takes $n=0$ in Theorem \ref{JP.th:optherm} then this yields the usual form of Heisenberg's uncertainty principle (see \cite{JP.FS} for equivalences between uncertainty principles with sums and products). In fact, using (\ref{JP.eq:fundherm}), Theorem \ref{JP.th:optherm} also implies a more general version of Heisenberg's uncertainty principle that is implicit in \cite{JP.FS}. 
In particular, if $f\in L^2({\mathbb{R}})$ with $\norm{f}_2=1$ is orthogonal to $h_0,\ldots,h_{n-1}$ then $$ \Delta^2(f) + \Delta^2(\widehat{f}) + |\mu(f)|^2 + |\mu(\widehat{f})|^2 \geq \frac{2n+1}{2\pi}. $$ For instance, if $f$ is odd, then $f$ is orthogonal to $h_0$, and $\mu(f)=\mu(\widehat{f})=0$. Using the usual scaling trick, we thus get the well known fact that the optimal constant in Heisenberg's inequality, e.g., see \cite{JP.FS}, is given as follows $$ \Delta(f)\Delta(\widehat{f}) \geq \begin{cases} \frac{1}{4\pi}\norm{f}_2^2&\mbox{in general}\\ \frac{3}{4\pi}\norm{f}_2^2&\mbox{if $f$ is odd} \end{cases}. $$ \begin{corollary} \label{ap.cor.opt.md} Fix $A>0.$ If $\{e_k\}_{k=0}^{n} \subset L^2({\mathbb{R}})$ is an orthonormal sequence which, for $k=0,\ldots,n$, satisfies $$ |\mu(e_k)|,\ |\mu(\widehat{e_k})|,\ \Delta(e_k),\ \Delta(\widehat{e_k})\leq A, $$ then $n\leq 8\pi A^2$. \end{corollary} \begin{proof} According to Theorem \ref{JP.th:optherm}, $$ 4(n+1)A^2\geq \sum_{k=0}^n \left( \Delta^2(e_k) + \Delta^2(\widehat{e_k}) + |\mu(e_k)|^2 + |\mu(\widehat{e_k})|^2 \right) \geq \frac{(n+1)(2n+1)}{4\pi}. $$ It follows that $2n+1\leq 16\pi A^2$, and hence $n\leq 8\pi A^2$. \end{proof} This may also be stated as follows: \begin{corollary} \label{JP.cor:cor2} If $\{e_k\}_{k=0}^{\infty} \subset L^2({\mathbb{R}})$ is an orthonormal sequence, then for every $n$, $$ \max\{|\mu(e_k)|,\ |\mu(\widehat{e_k})|,\ \Delta(e_k),\ \Delta(\widehat{e_k})\ : 0 \leq k \leq n\} \geq \sqrt{\frac{2n+1}{16\pi}}. $$ \end{corollary} \subsection{An extension to Riesz bases} \label{ap.riesz.shmd.sec} Recall that $\{x_k\}_{k=0}^{\infty}$ is a \emph{Riesz basis} for $L^2({\mathbb{R}})$ if there exists an isomorphism, $U:L^2({\mathbb{R}})\to L^2({\mathbb{R}})$, called the \emph{orthonormalizer} of $\{x_k\}_{k=0}^{\infty}$, such that $\{Ux_k\}_{k=0}^{\infty}$ is an orthonormal basis for $L^2({\mathbb{R}})$. 
It then follows that, for every $\{a_n\}_{n=0}^{\infty} \in\ell^2$, \begin{equation} \frac{1}{\norm{U}^2}\sum_{n=0}^{\infty} |a_n|^2 \leq \norm{\sum_{n=0}^{\infty} a_nx_n}_{2}^2\leq\norm{U^{-1}}^2\sum_{n= 0}^{\infty} |a_n|^2. \label{JP.eq:riesz} \end{equation} One can adapt the results of the previous sections to Riesz bases. To start, note that the Rayleigh-Ritz technique leads to the following, cf. \cite[Theorem XIII.3, page 82]{JP.RS4}: \begin{lemma} \label{JP.lem:traceR} Let $H$ be a positive, self-adjoint, densely defined operator on $L^2 ({\mathbb{R}})$, and let $\{x_k\}_{k=0}^{\infty}$ be a Riesz basis for $L^2({\mathbb{R}})$ with orthonormalizer $U$. Then, for every $n\geq 0$, \begin{equation} \label{JP.eq:traceR} \sum_{k=0}^n \lambda_k(H) \leq \norm{U}^2 \sum_{k=0}^n \langle Hx_k, x_k \rangle. \end{equation} \end{lemma} \begin{proof} We adopt the notation of the proof of Corollary \ref{JP.trace_cor} and write $\varphi_k=Ux_k$, so that $x_k=U^{-1}\varphi_k$. It is then enough to notice that $$ M=[\scal{Hx_j,x_k}]=[\scal{HU^{-1}\varphi_j,U^{-1}\varphi_k}]=[\scal{U^{-1}\,^*HU^{-1}\varphi_j,\varphi_k}]. $$ As $U^{-1}\,^*HU^{-1}$ is a positive operator, the Rayleigh-Ritz theorem gives $$ \sum_{k=0}^n\langle Hx_k, x_k \rangle\geq \sum_{k=0}^n\lambda_k(U^{-1}\,^*HU^{-1}). 
$$ But, \begin{eqnarray*} \lambda_k(U^{-1}\,^*HU^{-1})&=&\sup_{\varphi_0, \cdots, \varphi_{k-1}} \ \ \inf_{\psi \in [\varphi_0, \cdots, \varphi_{k-1}]^{\perp},\|\psi\|_2=1} \scal{U^{-1}\,^*HU^{-1}\psi,\psi}\\ &=& \sup_{\varphi_0, \cdots, \varphi_{k-1}} \inf_{\psi \in [\varphi_0, \cdots, \varphi_{k-1}]^{\perp},\|\psi\|_2=1} \scal{HU^{-1}\psi,U^{-1}\psi}\\ &=& \sup_{\varphi_0, \cdots, \varphi_{k-1}} \inf_{\tilde\psi \in [U^*\varphi_0, \cdots, U^*\varphi_{k-1}]^{\perp},\|U\tilde\psi\|_2=1} \scal{H\tilde\psi,\tilde\psi} \end{eqnarray*} and, as $1=\|U\tilde\psi\|_2\leq \norm{U}\,\|\tilde\psi\|_2$ implies $\|\tilde\psi\|_2\geq 1/\norm{U}$, $$ \lambda_k(U^{-1}\,^*HU^{-1})\geq \frac{1}{\norm{U}^2} \sup_{\tilde\varphi_0, \cdots, \tilde\varphi_{k-1}} \inf_{\tilde\psi \in [\tilde\varphi_0, \cdots, \tilde\varphi_{k-1}]^{\perp},\|\tilde\psi\|_2=1} \scal{H\tilde\psi,\tilde\psi}=\frac{1}{\norm{U}^2}\lambda_k(H). $$ \end{proof} Adapting the proofs of the previous section, we obtain the following corollary. \begin{corollary} If $\{ x_k \}_{k=0}^{\infty}$ is a Riesz basis for $L^2({\mathbb{R}})$ with orthonormalizer $U$ then for all $n$, $$ \sum_{k=0}^n \left( \Delta^2(x_k) + \Delta^2(\widehat{x_k}) + |\mu(x_k)|^2 + |\mu(\widehat{x_k})|^2 \right) \geq \frac{(n+1)(2n+1)}{4\pi\norm{U}^2}. $$ Thus, for every $A>0$, there are at most $8\pi A^2\norm{U}^2$ elements of the basis $\{ x_k \}_{k=0}^{\infty}$ such that $|\mu(x_k)|$, $|\mu(\widehat{x_k})|$, $\Delta(x_k)$, $\Delta(\widehat{x_k})$ are all bounded by $A$. In particular, $$ \max\{|\mu(x_k)|,\ |\mu(\widehat{x_k})|,\ \Delta(x_k),\ \Delta(\widehat{x_k}):\ 0 \leq k \leq n \} \geq \frac{1}{\norm{U}}\sqrt{\frac{2n+1}{16\pi}}. $$ \end{corollary} \section{Finite dimensional approximations, spherical codes and the Umbrella Theorem} \label{sec3.ap} \subsection{Spherical codes} \label{ap.sphcode.sec} Let ${\mathbb{K}}$ be either ${\mathbb{R}}$ or ${\mathbb{C}}$, and let $d\geq1$ be a fixed integer. 
We equip ${\mathbb{K}}^d$ with the standard Euclidean scalar product and norm. We denote by ${\mathbb{S}}_d$ the unit sphere of ${\mathbb{K}}^d$. \begin{definition} Let $A$ be a subset of $\{z\in{\mathbb{K}}\,:\ |z|\leq 1\}$. A spherical $A$-code is a finite subset $V \subset {\mathbb{S}}_d$ such that if $u,v \in V$ and $u\not=v$ then $\scal{u,v}\in A$. \end{definition} Let $N_{\mathbb{K}}(A,d)$ denote the maximal cardinality of a spherical $A$-code. This notion has been introduced in \cite{JP.DeGoSi} in the case ${\mathbb{K}}={\mathbb{R}}$ where upper-bounds on $N_{\mathbb{R}}(A,d)$ have been obtained. These are important quantities in geometric combinatorics, and there is a large associated literature. Apart from \cite{JP.DeGoSi}, the results we use can all be found in \cite{JP.CoSl}. Our prime interest is in the quantity $$N_{\mathbb{K}}^s(\alpha,d)=\begin{cases}N_{\mathbb{R}}([-\alpha,\alpha],d),&\mbox{when }{\mathbb{K}}={\mathbb{R}}\\ N_{\mathbb{C}}(\{z\in {\mathbb{C}}:|z|\leq \alpha\},d),&\mbox{when }{\mathbb{K}}={\mathbb{C}}\\ \end{cases}$$ for $\alpha\in (0,1]$. Of course $N_{\mathbb{R}}^s(\alpha,d)\leq N_{\mathbb{C}}^s(\alpha,d)$. Using the standard identification of ${\mathbb{C}}^d$ with ${\mathbb{R}}^{2d}$, namely identifying $Z=(x_1+iy_1,\ldots, x_d + iy_d)\in{\mathbb{C}}^d$ with $\tilde Z=(x_1,y_1,\ldots, x_d, y_d)\in{\mathbb{R}}^{2d}$, we have $\scal{\tilde Z,\tilde Z'}_{{\mathbb{R}}^{2d}}=\mbox{Re}\,\scal {Z,Z'}_{{\mathbb{C}}^d}$. Thus $N_{\mathbb{C}}^s(\alpha,d)\leq N_{\mathbb{R}}^s(\alpha,2d)$. In dimensions $d=1$ and $d=2$ one can compute the following values for $N_{\mathbb{K}}^s(\alpha,d)$: \begin{itemize} \item $N_{\mathbb{R}}^s(\alpha,1)=1$ \item If $0 \leq \alpha < 1/2$ then $N_{\mathbb{R}}^s(\alpha,2)=2$ \item If $\cos\frac{\pi}{N}\leq \alpha<\cos\frac{\pi}{N+1}$ and $3 \leq N$ then $N_{\mathbb{R}}^s(\alpha,2)=N$. \end{itemize} In higher dimensions, one has the following result. 
\begin{lemma} \label{ap.sphcodebnd.lem} If $0\leq \alpha<\frac{1}{d}$ then $N_{\mathbb{K}}^s(\alpha,d)=d$. \end{lemma} \begin{proof} An orthonormal basis of ${\mathbb{K}}^d$ is a spherical $[-\alpha,\alpha]$-code so that $N_{\mathbb{K}}^s(\alpha,d)\geq d$. For the converse, let $\alpha<1/d$ and assume towards a contradiction that $w_0,\ldots,w_d$ is a spherical $[-\alpha,\alpha]$-code with $d+1$ elements. Let us show that $w_0,\ldots,w_d$ would then be linearly independent in ${\mathbb{K}}^d$, which is impossible. Suppose that $\displaystyle\sum_{j=0}^{d}\lambda_jw_j=0,$ and without loss of generality that $|\lambda_j|\leq|\lambda_0|$ for $j=1,\ldots,d$. Then $\lambda_0\norm{w_0}^2=\displaystyle-\sum_{j=1}^{d}\lambda_j\scal{w_j,w_0}$ so that $|\lambda_0|\leq |\lambda_0|d\alpha.$ As $d\alpha<1$ we get that $\lambda_0=0$ and then $\lambda_j=0$ for all $j$. \end{proof} In general, it is difficult to compute $N_{\mathbb{K}}^s(\alpha,d)$. A coarse estimate using volume counting proceeds as follows. \begin{lemma} \label{ap.expbnd.lem} If $0 \leq \alpha <1$ is fixed, then there exist constants $0<a_1<a_2$ and $0<C$ such that for all $d$ $$ \frac{1}{C}e^{a_1 d}\leq N_{\mathbb{K}}^s(\alpha,d)\leq Ce^{a_2 d}. $$ Moreover, for $\alpha\leq 1/2$ one has $N_{\mathbb{K}}^s(\alpha,d) \leq 3^d$ if ${\mathbb{K}}={\mathbb{R}}$, and $N_{\mathbb{K}}^s(\alpha,d) \leq 9^d$ if ${\mathbb{K}}={\mathbb{C}}$. \end{lemma} \begin{proof} \noindent The counting argument for the upper bound proceeds as follows. Let $\{w_j\}_{j=1}^N$ be a spherical $A$-code, with $A =[-\alpha, \alpha]$ or $A= \{z\in {\mathbb{C}} : |z| \leq \alpha \}.$ For $j\not=k$, one has $$ \norm{w_j-w_k}^2=\norm{w_j}^2+\norm{w_k}^2-2\mbox{Re}\,\scal{w_j,w_k} \geq 2-2\alpha. $$ So, the open balls $\displaystyle B\left(w_j,\sqrt{\frac{1-\alpha}{2}}\right)$ of center $w_j$ and radius $\displaystyle\sqrt{\frac{1-\alpha}{2}}$ are all disjoint and included in the ball of center $0$ and radius $\dst1+\sqrt{\frac{1-\alpha}{2}}$. 
Therefore $$ Nc_d\left(\frac{1-\alpha}{2}\right)^{hd/2}\leq c_d\left(1+\sqrt{\frac{1-\alpha}{2}}\right)^{hd} $$ where $c_d$ is the volume of the unit ball in ${\mathbb{K}}^d$, $h=1$ if ${\mathbb{K}}={\mathbb{R}}$ and $h=2$ if ${\mathbb{K}}={\mathbb{C}}$. This gives the bound $N\leq \left(1+\sqrt{\frac{2}{1-\alpha}}\right)^{hd}$. Note that for $\alpha\leq 1/2$ we get $N\leq 3^d$ if ${\mathbb{K}}={\mathbb{R}}$ and $N\leq 9^d$ if ${\mathbb{K}}={\mathbb{C}}$. The lower bound may likewise be obtained by a volume-counting argument, see \cite{JP.CoSl}. \end{proof} The work of Delsarte, Goethals, and Seidel \cite{JP.DeGoSi} provides a method for obtaining more refined estimates on the size of spherical codes. For example, taking $\beta = - \alpha$ in Example 4.5 of \cite{JP.DeGoSi} shows that if $\alpha<\frac{1}{\sqrt{d}}$ then \begin{equation} \label{delsarte_bnd.ap} \displaystyle N_{\mathbb{R}}^s(\alpha,d)\leq \frac{(1-\alpha^2)d}{1-\alpha^2d}. \end{equation} Equality can only occur for spherical $\{-\alpha,\alpha\}$-codes. Also, note that if $\alpha=\frac{1}{\sqrt{d}}\sqrt{1-\frac{1}{d^k}}$, then $\frac{1-\alpha^2}{1-\alpha^2d}d\sim d^{k+1}$. \subsection{Approximations of orthonormal bases} \label{ap.apponb.sec} We now make a connection between the cardinality of spherical codes and projections of orthonormal bases. Let $\mathcal{H}$ be a Hilbert space over ${\mathbb{K}}$ and let $\Psi=\{\psi_k\}_{k=1}^{\infty}$ be an orthonormal basis for $\mathcal{H}$. For an integer $d\geq 1$, let ${\mathbb{P}}_d$ be the orthogonal projection on the span of $\{\psi_1,\ldots,\psi_d\}$. For $\varepsilon>0$, we say that an element $f\in \mathcal{H}$ is $(\varepsilon,d)$-approximable if $\norm{f-{\mathbb{P}}_df}_{\mathcal{H}}<\varepsilon$, and define ${{\mathcal S}}_{\varepsilon,d}$ to be the set of all $f\in {\mathcal{H}}$ with $||f||_{\mathcal{H}}=1$ that are $(\varepsilon,d)$-approximable.
We denote by $A_{\mathbb{K}}(\varepsilon,d)$ the maximal cardinality of an orthonormal sequence in ${{\mathcal S}}_{\varepsilon,d}$. \begin{examplenum} Let $\{e_j\}_{j=1}^n$ be the canonical basis for ${\mathbb{R}}^n$, and let $\{\psi_j\}_{j=1}^{n-1}$ be an orthonormal basis for $V^{\perp}$, where $V = {\rm span} \{(1,1,\ldots,1)\}$. Then $\norm{e_k-{\mathbb{P}}_{n-1}e_k}_2=\frac{1}{\sqrt{n}}$ holds for each $1 \leq k \leq n$, and hence $A_{\mathbb{R}}(\frac{1}{\sqrt{n}},n-1)\geq n$. \end{examplenum} Our interest in spherical codes stems from the following result, cf. \cite[Corollary 1]{JP.Po}. \begin{proposition} \label{JP.prop:estimate} If $0<\varepsilon<1/\sqrt{2}$ and $\alpha=\frac{\varepsilon^2}{1-\varepsilon^2},$ then $A_{\mathbb{K}}(\varepsilon,d)\leq N_{\mathbb{K}}^s(\alpha,d)$. \end{proposition} \begin{proof} Let $\{\psi_k\}_{k=1}^{\infty}$ be an orthonormal basis for $\mathcal{H}$, and let ${\mathcal S}_{\varepsilon,d}$ and ${\mathbb{P}}_d$ be as above. Let $\{f_j\}_{j=1}^N \subset {\mathcal{H}}$ be an orthonormal set contained in ${\mathcal S}_{\varepsilon,d}$. For each $k=1,\ldots,N$, $j=1,\ldots,d$, let $a_{k,j}=\scal{f_k,\psi_j}$ and write ${\mathbb{P}}_df_k=\displaystyle\sum_{j=1}^da_{k,j}\psi_j$ so that $\norm{f_k-{\mathbb{P}}_df_k}_{\mathcal{H}}<\varepsilon$. Write $v_k=(a_{k,1},\ldots,a_{k,d})\in{\mathbb{K}}^d$; then, for $k\not=l$, \begin{eqnarray} \scal{v_k,v_l}&=&\scal{{\mathbb{P}}_df_k,{\mathbb{P}}_df_l}=\scal{{\mathbb{P}}_df_k-f_k+f_k,{\mathbb{P}}_df_l-f_l+f_l}\nonumber\\ &=&\scal{{\mathbb{P}}_df_k-f_k,{\mathbb{P}}_df_l-f_l}+\scal{{\mathbb{P}}_df_k-f_k,f_l}+\scal{f_k,{\mathbb{P}}_df_l-f_l}\nonumber\\ &=&\scal{{\mathbb{P}}_df_k-f_k,{\mathbb{P}}_df_l-f_l}+\scal{{\mathbb{P}}_df_k-f_k,f_l-{\mathbb{P}}_df_l}+\scal{f_k-{\mathbb{P}}_df_k,{\mathbb{P}}_df_l-f_l}\nonumber\\ &=&\scal{f_k-{\mathbb{P}}_df_k,{\mathbb{P}}_df_l-f_l} \label{JP.eq:orsp} \end{eqnarray} since ${\mathbb{P}}_df_k-f_k$ is orthogonal to ${\mathbb{P}}_df_l$.
It follows from the Cauchy-Schwarz inequality that $\abs{\scal{v_k,v_l}}\leq \varepsilon^2$. On the other hand, $$ \norm{v_k}_{{\mathbb{K}}^d}=\norm{{\mathbb{P}}_df_k}_{\mathcal{H}} =(\norm{f_k}_{\mathcal{H}}^2- \norm{f_k-{\mathbb{P}}_df_k}_{\mathcal{H}}^2)^{1/2}\geq(1-\varepsilon^2)^{1/2}. $$ It follows that $w_k=\displaystyle\frac{v_k}{\norm{v_k}_{{\mathbb{K}}^d}}$ satisfies, for $k\not=l$, $\abs{\scal{w_k,w_l}}= \frac{\abs{\scal{v_k,v_l}}}{\norm{v_k}_{{\mathbb{K}}^d}\norm{v_l}_{{\mathbb{K}}^d}}\leq\frac{\varepsilon^2}{1-\varepsilon^2},$ and $\{w_k\}_{k=1}^N$ is a spherical $[-\alpha,\alpha]$-code in ${\mathbb{K}}^d$. \end{proof} Note that the proof only uses orthogonality in a mild way. Namely, if instead $\{f_j\}_{j=1}^N \subset \mathcal{H}$ with $\norm{f_j}_{\mathcal{H}} =1$ satisfies $|\scal{f_j,f_k}| \leq \eta^2$ for $j\neq k$, then Equation (\ref{JP.eq:orsp}) becomes $\scal{v_k,v_l}=\scal{f_k-{\mathbb{P}}_df_k,{\mathbb{P}}_df_l-f_l} +\scal{f_k,f_l},$ so that $|\scal{v_k,v_l}| \leq \varepsilon^2+\eta^2$, and the end of the proof shows that $\displaystyle N\leq N_{\mathbb{K}}^s\left(\frac{\varepsilon^2+\eta^2}{1-\varepsilon^2},d\right)$. In view of Proposition \ref{JP.prop:estimate}, it is natural to ask the following question. Given $\alpha=\frac{\varepsilon^2}{1-\varepsilon^2}$, is there a converse inequality of the form $N_{\mathbb{K}}^s(\alpha,d)\leq CA_{\mathbb{K}}(\varepsilon',d')$ with $C>0$ an absolute constant and $\varepsilon\leq \varepsilon'\leq C\varepsilon$, $d\leq d'\leq Cd$? Note that for $\varepsilon$ such that $\alpha<1/d$, we have $A_{\mathbb{K}}(\varepsilon,d)=N_{\mathbb{K}}^s(\alpha,d)=d$. \subsection{Prolate sphero\"\i dal wave functions} \label{ap.psw.sec} In order to obtain quantitative versions of Shapiro's theorems, we will make use of the prolate sphero\"\i dal wave functions. For a detailed presentation of prolate sphero\"\i dal wave functions, see \cite{JP.SP,JP.LP1,JP.LP2}.
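Before specialising to prolate functions, the bound of Proposition \ref{JP.prop:estimate} is easy to test numerically. The sketch below is our own illustration (the dimensions and the randomly drawn orthonormal set are arbitrary choices, not taken from the text): it truncates an orthonormal set to its first $d$ coordinates and verifies that the normalised coefficient vectors form a spherical $[-\alpha,\alpha]$-code with $\alpha=\varepsilon^2/(1-\varepsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, N = 40, 30, 8   # ambient dimension, truncation level, size of the set

# Random orthonormal set {f_k} in R^n; P_d keeps the first d coordinates.
F = np.linalg.qr(rng.standard_normal((n, N)))[0].T    # rows are orthonormal
tails = np.linalg.norm(F[:, d:], axis=1)              # ||f_k - P_d f_k||
eps = tails.max() + 1e-12                             # worst approximation error

V = F[:, :d]                                          # coefficient vectors v_k
W = V / np.linalg.norm(V, axis=1, keepdims=True)      # unit vectors w_k
coh = np.abs(W @ W.T - np.eye(N)).max()               # max |<w_k, w_l>|, k != l
alpha = eps**2 / (1 - eps**2)
```

As the proof predicts, `coh` never exceeds `alpha`, whichever orthonormal set is drawn.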
Fix $T,\Omega>0$ and let $\{\psi_n\}_{n=0}^{\infty}$ be the associated prolate spheroidal wave functions, as defined in \cite{JP.SP}. The system $\{\psi_n\}_{n=0}^{\infty}$ is an orthonormal basis for $PW_\Omega\equiv \{f\in L^2({\mathbb{R}})\,:\ \mbox{supp}\,\widehat{f}\subseteq [-\Omega,\Omega]\},$ and the $\psi_n$ are eigenfunctions of the differential operator $$L=(T^2-x^2) \frac{{d^2}}{{d} x^2} -2x \frac{{d}}{{d}x} - \frac{\Omega^2}{T^2}x^2.$$ As in the previous section, for an integer $d\geq 0$, define ${\mathbb{P}}_d$ to be the projection onto the span of $\psi_0,\ldots,\psi_{d-1},$ and for $\varepsilon>0$ define $$ \mathcal{S}_{\varepsilon,d}=\{f\in L^2({\mathbb{R}})\,: \norm{f}_2=1, \norm{f-{\mathbb{P}}_df}_2 <\varepsilon\}. $$ For the remainder of the paper, the orthonormal basis used in the definitions of $\mathcal{S}_{\varepsilon,d}$, ${\mathbb{P}}_d$, and ${A}_{\mathbb{K}} (\varepsilon, d)$ will always be chosen as the prolate sphero\"\i dal wave functions. Note that these quantities depend on the choice of $T,\Omega$. Finally, let $$ {\mathcal P}_{T,\Omega,\varepsilon}=\left\{f\in L^2({\mathbb{R}})\,:\ \int_{|t|>T}|f(t)|^2\,\mbox{d}t\leq\varepsilon^2 \quad\mbox{and}\quad\int_{|\xi|>\Omega}|\widehat{f}(\xi)|^2\,\mbox{d}\xi\leq\varepsilon^2\right\} $$ and ${\mathcal P}_{T,\varepsilon}={\mathcal P}_{T,T,\varepsilon}$. \begin{theorem}[Landau-Pollak \cite{JP.LP2}] \label{JP.th:LPth12} Let $T,\ \Omega,\ \varepsilon$ be positive constants and let $d=\lfloor 4T\Omega\rfloor+1$. Then, for every $f\in {\mathcal P}_{T,\Omega,\varepsilon}$, $$ \norm{f-{\mathbb{P}}_df}_2^2\leq 49\varepsilon^2\norm{f}_2^2. $$ In other words, ${\mathcal P}_{T,\Omega,\varepsilon}\cap\{f\in L^2 ({\mathbb{R}})\,:\ \norm{f}_2=1\}\subset\mathcal{S}_{7\varepsilon,d}$. \end{theorem} It follows that the first $d=\lfloor 4T^2\rfloor+1$ elements of the prolate sphero\"\i dal basis well approximate ${\mathcal P}_{T,\varepsilon}$, and that ${\mathcal P}_{T,\varepsilon}$ is ``essentially'' $d$-dimensional.
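The heuristic that ${\mathcal P}_{T,\Omega,\varepsilon}$ is ``essentially'' $d$-dimensional with $d\approx 4T\Omega$ can be illustrated numerically. The sketch below is our own crude discretisation (grid parameters are arbitrary, and a discrete Fourier model replaces the actual prolate functions): it builds the time- and band-limiting operator on a periodic grid and counts its eigenvalues exceeding $1/2$; the count comes out close to $4T\Omega$.

```python
import numpy as np

# Discretised time/band-limiting operator D_T P_Omega D_T on [-L/2, L/2].
L, n = 16.0, 512
T, Omega = 2.0, 1.0                      # time cutoff and band limit
t = (np.arange(n) - n / 2) * (L / n)     # centered sample grid
xi = np.fft.fftfreq(n, d=L / n)          # frequencies in cycles per unit

# P_Omega: band-limit via an FFT mask; D_T: restrict samples to [-T, T].
band = (np.abs(xi) <= Omega).astype(float)
P = np.fft.ifft(band[:, None] * np.fft.fft(np.eye(n), axis=0), axis=0).real
D = np.diag((np.abs(t) <= T).astype(float))
M = D @ P @ D

eigs = np.linalg.eigvalsh((M + M.T) / 2)
count = int((eigs > 0.5).sum())          # expected to be close to 4*T*Omega = 8
```

The eigenvalues cluster near $1$ for the first $\approx 4T\Omega$ indices and then plunge towards $0$, in accordance with the Landau--Pollak picture.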
\subsection{Generalized means and dispersions} \label{gen.md.ap} As an application of the results on prolate spheroidal wave functions and spherical codes, we shall address a more general version of the mean-dispersion theorem. Consider the following generalized means and variances. For $p>1$ and $f\in L^2({\mathbb{R}})$ with $\norm{f}_2=1$, we define the associated \emph{$p$-variance} $$ \Delta_p^2(f)=\inf_{a\in{\mathbb{R}}}\int|t-a|^p|f(t)|^2\mbox{d}t. $$ One can show that the infimum is actually a minimum and is attained for a unique $a\in{\mathbb{R}}$ that we call the \emph{$p$-mean} $$ \mu_p(f)= \mbox{arg min}_{a \in {\mathbb{R}}}\,\int|t-a|^p|f(t)|^2\mbox{d}t. $$ As before, define the \emph{$p$-dispersion} $\Delta_p(f)\equiv\sqrt{\Delta_p^2(f)}$. The proof of the Mean-Dispersion Theorem for $p=2$ via the Rayleigh-Ritz technique relied on the special relation \eqref{JP.eq:fundherm} of means and dispersions with the Hermite operator. In general, beyond the case $p=2$, such simple relations are not present and the techniques of Section \ref{sec2.ap} are not so easily applicable. However, we shall show how to use the combinatorial techniques from the beginning of this section to obtain a quantitative version of Theorem \ref{ap.sh-meandisp} for generalized means and dispersions. The following lemma is a modification of \cite[Lemma 1]{JP.Po}. \begin{lemma} \label{JP.lem:Po-prolate} Let $A>0$ and $p>1$. Suppose $g\in L^2({\mathbb{R}})$ with $\norm{g}_2=1$ satisfies $$ |\mu_p(g)|,\ |\mu_p(\widehat{g})|,\ \Delta_p(g),\ \Delta_p(\widehat{g})\leq A. $$ Then, for every $\varepsilon>0$, $g\in{\mathcal P}_{A+(A/\varepsilon)^{2/p},\varepsilon}$. \end{lemma} This gives a simple proof of a strengthened version of Shapiro's Mean-Dispersion Theorem: \begin{corollary} \label{JP.cor:cor3} Let $0<A$, $1<p<\infty,$ $0<\varepsilon <\frac{1}{7\sqrt{2}}$, $\alpha=49\varepsilon^2/(1-49\varepsilon^2)$, and set $d=\lfloor 4\bigl(A+(A/\varepsilon)^{2/p}\bigr)^2\rfloor+1$.
If $\{e_k\}_{k=1}^{N} \subset L^2 ({\mathbb{R}})$ is an orthonormal set such that for all $1 \leq k \leq N$, $$ |\mu_p(e_k)|,\ |\mu_p(\widehat{e_k})|,\ \Delta_p(e_k),\ \Delta_p(\widehat{e_k})\leq A, $$ then $$N \leq N_{\mathbb{C}}^s(\alpha,d)\leq N_{\mathbb{R}}^s(\alpha,2d).$$ \end{corollary} \begin{proof} According to Lemma \ref{JP.lem:Po-prolate}, $e_1,\ldots,e_N$ are in ${\mathcal P}_{A+(A/\varepsilon)^{2/p},\varepsilon}$. The definition of $d$ and Theorem \ref{JP.th:LPth12} show that $\{ e_j \}_{j=1}^N \subset {\mathcal S}_{7\varepsilon,d}$. According to Proposition \ref{JP.prop:estimate}, $N\leq A_{\mathbb{C}}(7\varepsilon,d)\leq N_{\mathbb{C}}^s(\alpha,d)\leq N_{\mathbb{R}}^s(\alpha,2d)$, where $\alpha=49\varepsilon^2/(1-49\varepsilon^2)$. \end{proof} This approach does not, in general, give sharp results. For example, in the case $p=2$ the bound obtained by Corollary \ref{JP.cor:cor3} is not as good as the one given in Section 2. To see this, assume that $p=2$ and $A\geq 1$. Then $4A^2(1+1/\varepsilon)^2\leq d\leq 5A^2(1+1/\varepsilon)^2$. In order to apply the Delsarte, Goethals, Seidel bound (\ref{delsarte_bnd.ap}) we now choose $\varepsilon$ so that $\alpha<\frac{1}{2\sqrt{d}}$, which then gives $N\leq 4d$. Our aim is thus to make $d$ as small as possible by choosing $\varepsilon$ as large as possible. For this, let us first take $\varepsilon\leq 1/50$ so that $\alpha\leq 50\varepsilon^2$. It is then enough that $50\varepsilon^2\leq\frac{1}{4A(1+1/\varepsilon)}$, that is $\varepsilon^2+\varepsilon-\frac{1}{200A}\leq 0$. We may thus take $\varepsilon= \frac{\sqrt{1+\frac{1}{50A}}-1}{2}$. Note that, as $A\geq 1$, we get that $\varepsilon\leq\frac{\sqrt{1+\frac{1}{50}}-1}{2}<1/50$. This then gives $$ N\leq 4d\leq 20A^2\left(1+\frac{2}{\sqrt{1+\frac{1}{50A}}-1}\right)^2 =20A^2\left(1+100A\Bigl(\sqrt{1+\frac{1}{50A}}+1\Bigr)\right)^2 \leq C A^4.
$$ In particular, the combinatorial methods allow one to take $N= CA^4$ in Corollary \ref{JP.cor:cor3}, whereas the sharp methods of Section \ref{ap.sharp.md.sec} give $N = 8 \pi A^2,$ see Corollary \ref{ap.cor.opt.md}. \subsection{The Quantitative Umbrella Theorem} \label{JP.sec:QUT} A second application of our method is a quantitative form of Shapiro's umbrella theorem. As with the mean-dispersion theorem, Shapiro's proof does not provide a bound on the number of elements in the sequence. As before, the combinatorial approach is well suited to this setting, whereas the approach of Section \ref{sec2.ap} is not easily applicable. Given $f \in L^2 (\mathbb{R})$ and $\varepsilon>0$, define $$ C_f(\varepsilon)=\inf\left\{ T \geq 0 \,:\ \int_{|t|>T}| f(t)|^2\leq\varepsilon^2\norm{f}_2^2 \right\}.$$ Note that if $f$ is not identically zero, then for all $0<\varepsilon<1$ one has $0 < C_f (\varepsilon) < \infty$. \begin{theorem} \label{JP.th:umbrella} Let $\varphi,\psi\in L^2({\mathbb{R}})$ and $M = \max \{ \norm{\varphi}_2,\norm{\psi}_2 \} \geq 1$. Fix $\frac{1}{50M} \geq \varepsilon >0$, $T >\max \{ C_\varphi(\varepsilon),C_\psi(\varepsilon) \}$, and $d = \lfloor 4T^2\rfloor+1$. If $\{e_n\}_{n=1}^N$ is an orthonormal sequence in $L^2({\mathbb{R}})$ such that for all $1 \leq n \leq N,$ and for almost all $x, \xi \in{\mathbb{R}}$, \begin{equation} \label{JP.eq:umbrella} |e_n(x)|\leq|\varphi(x)|\quad\mbox{and}\quad|\widehat{e}_n(\xi)|\leq|\psi(\xi)|, \end{equation} then \begin{equation} \label{ap.bnd.umbrella.thm} N \leq N^s_{\mathbb{C}}(50\varepsilon^2M^2,d) \leq N^s_{\mathbb{R}}(50\varepsilon^2M^2,2d). \end{equation} In particular, $N$ is bounded by a constant depending only on $\varphi$ and $\psi$. \end{theorem} \begin{proof} By (\ref{JP.eq:umbrella}) and the condition $T>\max \{ C_\varphi(\varepsilon),C_\psi(\varepsilon) \}$, we have $\{ e_n \}_{n=1}^N \subset{\mathcal P}_{T,\varepsilon M}$.
According to Theorem \ref{JP.th:LPth12}, ${\mathcal P}_{T,\varepsilon M}\subset {\mathcal S}_{7\varepsilon M,d}$. It now follows from Proposition \ref{JP.prop:estimate} that $$ N \leq A_{{\mathbb{C}}} \left( 7 \varepsilon M, d \right) \leq N^s_{\mathbb{C}} \left( \frac{49\varepsilon^2M^2}{1 - 49 \varepsilon^2 M^2},d \right) \leq N^s_{\mathbb{C}}(50\varepsilon^2M^2,d) \leq N^s_{\mathbb{R}}(50\varepsilon^2M^2,2d) . $$ \end{proof} Let us give two applications where one may get an explicit upper bound by making a proper choice of $\varepsilon$ in the proof above. \begin{proposition} \label{JP.ex:power} Let $p>1/2$ and $C \geq \sqrt{\frac{2p-1}{2}}$ be fixed. If $\{e_n \}_{n=1}^N \subset L^2({\mathbb{R}})$ is an orthonormal set such that for all $1 \leq n \leq N$, and for almost every $x,\xi \in {\mathbb{R}}$, $$|e_n(x)|\leq\frac{C}{(1+|x|)^p} \ \ \ \hbox{ and } \ \ \ |\widehat{e}_n(\xi)|\leq\frac{C}{(1+|\xi|)^p},$$ then $$ N\leq \begin{cases} 9^{\left(\frac{200\sqrt{2}C}{\sqrt{2p-1}}\right)^{\frac{4}{2p-1}}} &\mbox{ if } 1/2 < p,\\ 16\left(\frac{400C^2}{2p-1}\right)^{\frac{1}{p-1}}&\mbox{ if } 1 < p \leq 3/2,\\ {4} \left(\frac{500C^2}{2p-1}\right)^{\frac{2}{2p-3}}&\mbox{ if } 3/2 < p.\\ \end{cases}$$ \end{proposition} \begin{proof} If $\varphi(x)=\frac{C}{(1+|x|)^p}$, then $M = ||\varphi||_2 = C\sqrt{\frac{2}{2p-1}} \geq 1$, and a computation for $0 < \varepsilon \leq 1$ shows that $C_\varphi(\varepsilon)=\frac{1}{\varepsilon^{2/(2p-1)}}-1.$ Let $\delta =\delta(\varepsilon)=\frac{4}{\varepsilon^{4/(2p-1)}}$ and $\alpha = \alpha(\varepsilon) =\frac{100 C^2\varepsilon^2}{2p-1}$. Taking $T = C_{\varphi}(\varepsilon)$ implies that $d = \lfloor 4T^2\rfloor+1 \leq \delta(\varepsilon)$. If $0 < \varepsilon \leq \frac{1}{50M}$, then Theorem \ref{JP.th:umbrella} gives the bound $N\leq N^s_{\mathbb{C}}\left(\alpha (\varepsilon ),\delta (\varepsilon ) \right)$. We shall choose $\varepsilon$ differently for the various cases.
{\em Case 1.} For the case $1/2 <p,$ take $\varepsilon=\frac{1}{50M},$ and use the exponential bound given by Lemma \ref{ap.expbnd.lem} for $N^s_{\mathbb{C}}(\alpha (\varepsilon), \delta (\varepsilon))$ to obtain the desired estimate. {\em Case 2.} For the case $1<p\leq 3/2$, let $\varepsilon_0= \left(\frac{\sqrt{2p-1}}{20C}\right)^{\frac{2p-1}{2(p-1)}}$, $\alpha=\alpha(\varepsilon_0)$, and $\delta=\delta(\varepsilon_0)$. Note that $\alpha = \frac{1}{2\sqrt{\delta}}< \frac{1}{\sqrt{2\delta}}$, and also that $\varepsilon_0 \leq \frac{1}{50M},$ since $1 < p \leq 3/2$. Thus the bound (\ref{delsarte_bnd.ap}) yields $N \leq N^s_{\mathbb{R}}(\alpha,2\delta)\leq \frac{2(1-\alpha^2)\delta}{1-2\delta\alpha^2}= 4(1-\alpha^2)\delta\leq 4\delta$. The desired estimate follows. {\em Case 3.} For the case $3/2<p$, define $\varepsilon_1=\left(\frac{\sqrt{2p-1}}{50C\sqrt{2}}\right)^{\frac{2p-1}{2p-3}}$ and note that $\varepsilon_1 \leq \frac{1}{50M}.$ Since $3/2<p$, taking $\varepsilon<\varepsilon_1$ and setting $\alpha=\alpha(\varepsilon)$, $\delta=\delta(\varepsilon)$ implies that $\alpha(\varepsilon) < 1/ \delta(\varepsilon)$. Thus, by Lemma \ref{ap.sphcodebnd.lem}, $N \leq \delta(\varepsilon)$ for all $\varepsilon < \varepsilon_1$. Hence, $N \leq \delta (\varepsilon_1),$ and the desired estimate follows. \end{proof} Note that in the case $1/2 <p$, the upper bound in Proposition \ref{JP.ex:power} approaches infinity as $p$ approaches $1/2$. Indeed, we refer the reader to the counterexamples for $p<1/2$ in \cite{JP.DeR, By.ap}. The case $p=1/2$ seems to be open, as the construction of \cite{JP.DeR} requires an extra logarithmic factor. For perspective in the case $3/2<p$, if one takes $C = C_p = \sqrt{\frac{2p -1}{2}}$, then the upper bound in Proposition \ref{JP.ex:power} approaches $4$ as $p$ approaches infinity. \begin{proposition} Let $0<a\leq 1$ and $C\geq(2a)^{1/4}$ be fixed.
If $\{e_n\}_{n=0}^N \subset L^2 ({\mathbb{R}})$ is an orthonormal set such that for all $n$ and for almost every $x, \xi \in {\mathbb{R}}$, $$|e_n(x)|\leq Ce^{-\pi a|x|^2} \ \ \ \hbox{ and } \ \ \ |\widehat{e}_n(\xi)|\leq Ce^{-\pi a|\xi|^2},$$ then $$N \leq 2+ \frac{8}{a\pi} \max \Big\{ 2\ln \left( \frac{50 C \sqrt{\pi} e^{\pi}}{a^{1/4}} \right), \ln \left( \frac{50 \pi C^2 e^{\pi a /2}}{a^{5/2}e^{2\pi}} \right) \Big\}.$$ \end{proposition} \begin{proof} Let $\gamma_a(x)=Ce^{-\pi a|x|^2}$ and let $C_a(\varepsilon)=C_{\gamma_a}(\varepsilon)$. First note that \begin{eqnarray*} \int_{|t|>T} |\gamma_a (t)|^2 dt &=& \int_{|t|>T}C^2e^{-2\pi a|t|^2} {d}t =\frac{2C^2}{\sqrt{a}}\int_{T\sqrt{a}}^{\infty} \frac{(1+s^2)e^{-2\pi s^2}}{1 +s^2} {d}s\\ &\leq& \frac{C^2\pi (1+aT^2)}{\sqrt{a}} e^{-2\pi a T^2}, \end{eqnarray*} while $M=||\gamma_a||_2 = (\int_{{\mathbb{R}}} C^2e^{-2\pi a|t|^2} {d}t)^{1/2} =\frac{C}{(2a)^{1/4}}$. In particular, $\norm{\gamma_a}_2\geq1$. Now for every $T>0$, set $\varepsilon(T)=2^{-1/4} \sqrt{\pi} \sqrt{1+aT^2}e^{-\pi a T^2}$, so that $C_a\bigl(\varepsilon(T)\bigr) \leq T$. By Theorem \ref{JP.th:umbrella}, we get that $N\leq N_{\mathbb{R}}^s(50\varepsilon^2(T)M^2,8T^2+2)$, provided $\varepsilon(T) \leq \frac{1}{50M}$. Let us first see what condition should be imposed on $T$ to have $\varepsilon(T) \leq \frac{1}{50M}$. Setting $s=(1+aT^2)$, this condition is equivalent to $\sqrt{s} e^{- \pi s} \leq \frac{e^{-\pi} a^{1/4}}{50 C\sqrt{\pi}}.$ Thus, it suffices to take $s \geq \frac{2}{\pi} \ln \left( \frac{50 C \sqrt{\pi} e^{\pi}}{a^{1/4}} \right)$, and $T^2 \geq \frac{2}{a\pi} \ln \left( \frac{50 C \sqrt{\pi} e^{\pi}}{a^{1/4}} \right)$. We will now further choose $T$ large enough to have $50\varepsilon(T)^2M^2< \frac{1}{8T^2+2}$, so that Lemma \ref{ap.sphcodebnd.lem} will imply $N \leq N_{\mathbb{R}}^s(50\varepsilon(T)^2M^2,8T^2+2)=8T^2+2$. This time, the condition reads $(1 + aT^2)(1 +4T^2) e^{-2\pi a T^2} < \frac{\sqrt{a}}{50 \pi C^2}$.
Let $r= a(4T^2 +1)$. Thus, it suffices to take $r^2 e^{-\frac{\pi}{2} r} < \frac{a^{5/2} e^{2\pi}}{50 \pi C^2 e^{{\pi a}/{2}}}$. It is enough to take $r> \frac{4}{\pi} \ln \left( \frac{50 \pi C^2 e^{\pi a /2}}{a^{5/2}e^{2\pi}} \right)$, and $T^2 > \frac{1}{a\pi} \ln \left( \frac{50 \pi C^2 e^{\pi a /2}}{a^{5/2}e^{2\pi}} \right)$. Combining the bounds for $T^2$ from the previous two paragraphs yields $$N \leq 2+ \frac{8}{a\pi} \max \Big\{ 2\ln \left( \frac{50 C \sqrt{\pi} e^{\pi}}{a^{1/4}} \right), \ln \left( \frac{50 \pi C^2 e^{\pi a /2}}{a^{5/2}e^{2\pi}} \right) \Big\}.$$ \end{proof} A careful reading of the proof of the Umbrella Theorem shows the following: \begin{proposition} Let $C>0$, and let $1\leq p,q,\widehat{p},\widehat{q}\leq \infty$ satisfy $\displaystyle\frac{1}{p}+\frac{1}{q}=1$ and $\displaystyle\frac{1}{\widehat{p}}+\frac{1}{\widehat{q}}=1$. Let $\varphi\in L^{2p}({\mathbb{R}})$ and $\psi\in L^{2\widehat{p}}({\mathbb{R}})$, and suppose that $\varphi_k\in L^{2q}({\mathbb{R}})$ and $\psi_k\in L^{2\widehat{q}}({\mathbb{R}})$ satisfy $\norm{\varphi_k}_{2q} \leq C$, $\norm{\psi_k}_{2\widehat{q}}\leq C$. There exists $N$ such that, if $\{e_k\}\subset L^2 ({\mathbb{R}})$ is an orthonormal set which for all $k$ and almost every $x, \xi \in{\mathbb{R}}$ satisfies $$ |e_k(x)|\leq \varphi_k(x) \ \varphi(x) \quad \ \ \ \hbox{ and } \ \ \ \quad |\widehat{e_k} (\xi)|\leq \psi_k(\xi) \ \psi(\xi), $$ then $\{e_k\}$ has at most $N$ elements. As with previous results, a bound for $N$ can be obtained in terms of spherical codes. The bound for $N$ depends only on $\varphi, \psi, C$. \end{proposition} Indeed, let $\varepsilon>0$ and take $T>0$ large enough to have $\displaystyle\int_{|t|>T}|\varphi(t)|^{2p}\,\mbox{d}t\leq\varepsilon^p/C^{2p}$.
Then \begin{eqnarray*} \int_{|t|>T}|e_k(t)|^2\,\mbox{d}t &\leq&\int_{|t|>T}|\varphi_k(t)\varphi(t)|^2\,\mbox{d}t \leq \left(\int_{|t|>T}|\varphi_k(t)|^{2q}\,\mbox{d}t\right)^{1/q}\left(\int_{|t|>T}|\varphi(t)|^{2p}\,\mbox{d}t\right)^{1/p}\\ &\leq& C^2(\varepsilon^p/C^{2p})^{1/p}=\varepsilon. \end{eqnarray*} A similar estimate holds for $\widehat{e_k}$ and we conclude as in the proof of the Umbrella Theorem. \subsection{Angles in Riesz bases} \label{ap.rieszang.sec} Let us now conclude this section with a few remarks on Riesz bases. Let $\{x_k\}_{k=0}^{\infty}$ be a Riesz basis for $L^2({\mathbb{R}})$ with orthogonalizer $U$ and recall that, for every sequence $\{a_n\}_{n=0}^{\infty} \in\ell^2$, \begin{equation} \frac{1}{\norm{U}^2}\sum_{n=0}^{\infty}|a_n|^2 \leq \norm{\sum_{n= 0}^{\infty} a_nx_n}_2^2\leq\norm{U^{-1}}^2\sum_{n=0}^{\infty} |a_n|^2. \label{JP.eq:riesz2} \end{equation} Taking $a_n=\delta_{n,k}$ in (\ref{JP.eq:riesz2}) shows that $\frac{1}{\norm{U}}\leq\norm{x_k}_2 \leq\norm{U^{-1}}$. Then taking $a_n=\delta_{n,k}+\lambda\delta_{n,l}$, $k\not=l$ and $\lambda=t,-t$, $t>0$ gives $$ \frac{1}{\norm{U}^2}(1+t^2)\leq\norm{x_k}_2^2+t^2\norm{x_l}_2^2+2t\abs{\mbox{Re}\scal{x_k,x_l}} \leq\norm{U^{-1}}^2(1+t^2) $$ thus $\abs{\mbox{Re}\scal{x_k,x_l}}^2$ is \begin{eqnarray*} &\leq&\min\Bigl((\norm{x_k}_2^2-\norm{U}^{-2})(\norm{x_l}_2^2-\norm{U}^{-2}), (\norm{U^{-1}}^2-\norm{x_k}_2^2)(\norm{U^{-1}}^2-\norm{x_l}_2^2)\Bigr)\\ &\leq&\norm{x_k}_2^2\norm{x_l}_2^2\min\ent{ \left(1-\frac{1}{\norm{U}^2\norm{U^{-1}}^2}\right)^2 ,\left(\frac{\norm{U^{-1}}^2}{\norm{U}^2}-1\right)^2} \end{eqnarray*} while taking $\lambda=it,-it$, $t>0$ gives the same bound for $\abs{\mbox{Im}\scal{x_k,x_l}}^2$. It follows that \begin{equation} \label{JP.eq:c(u)} \abs{\scal{x_k,x_l}}\leq C(U)\norm{x_k}_2 \norm{x_l}_2 \leq C(U)\norm{U^{-1}}^2 \end{equation} where $$ C(U):=\sqrt{2} \min\ent{1-\left(\frac{1}{\norm{U}\norm{U^{-1}}}\right)^2,\left(\frac{\norm{U^{-1}}}{\norm{U}}\right)^2-1}.
$$ We may now adapt the proof of Proposition \ref{JP.prop:estimate} to Riesz bases: \begin{proposition} \label{JP.prop:estimateriesz} Let $\{\psi_k\}_{k=1}^{\infty}$ be an orthonormal basis for $L^2({\mathbb{R}})$. Fix $d\geq 1$ and let ${\mathbb{P}}_d$ be the projection on the span of $\{\psi_1,\ldots,\psi_d\}$. Let $\{x_k\}_{k=1}^{\infty}$ be a Riesz basis for $L^2({\mathbb{R}})$ and let $U$ be its orthogonalizer. Let $\varepsilon>0$ be such that $\varepsilon<\min\left(\norm{U}^{-2},\sqrt{\frac{\norm{U}^{-2}-C(U)\norm{U^{-1}}^2}{2}}\right)$ and let \begin{equation} \label{JP.eq:defalpha} \alpha=\frac{\varepsilon^2+C(U)\norm{U^{-1}}^2}{\norm{U}^{-2}-\varepsilon^2}. \end{equation} If $\{x_k\}_{k=1}^N$ satisfies $\norm{x_k-{\mathbb{P}}_dx_k}_2<\varepsilon$ then $N \leq N_{\mathbb{K}}^s(\alpha,d)$. \end{proposition} \begin{proof} Assume without loss of generality that $x_1,\ldots,x_N$ satisfy $\norm{x_k-{\mathbb{P}}_dx_k}<\varepsilon$ and let $a_{k,j}=\scal{x_k,\psi_j}$. Write $v_k=(a_{k,1},\ldots,a_{k,d})\in{\mathbb{K}}^d$; then the same computation as in (\ref{JP.eq:orsp}) shows, for $k\not=l$, $$ \scal{v_k,v_l}=\scal{x_k-{\mathbb{P}}_dx_k,{\mathbb{P}}_dx_l-x_l}+\scal{x_k,x_l} $$ thus $\abs{\scal{v_k,v_l}}\leq\varepsilon^2+\abs{\scal{x_k,x_l}}$. On the other hand $$ \norm{v_k}=\norm{{\mathbb{P}}_dx_k}=(\norm{x_k}^2-\norm{x_k-{\mathbb{P}}_dx_k}^2)^{1/2}\geq(\norm{U}^{-2}-\varepsilon^2)^{1/2}. $$ It follows from (\ref{JP.eq:c(u)}) that $w_k=\displaystyle\frac{v_k}{\norm{v_k}}$ satisfies, for $k\not=l$, $$ \abs{\scal{w_k,w_l}}\leq\frac{\varepsilon^2+C(U)\norm{U^{-1}}^2}{\norm{U}^{-2}-\varepsilon^2} $$ and $\{w_k\}$ forms a spherical $[-\alpha,\alpha]$-code in ${\mathbb{K}}^d$. \end{proof} Note that the condition on $\varepsilon$ implies that $0<\alpha<1$.
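The algebraic identity underlying both Proposition \ref{JP.prop:estimate} and Proposition \ref{JP.prop:estimateriesz}, namely $\scal{{\mathbb{P}}_dx,{\mathbb{P}}_dy}=\scal{x-{\mathbb{P}}_dx,{\mathbb{P}}_dy-y}+\scal{x,y}$ for an orthogonal projection ${\mathbb{P}}_d$, holds for arbitrary vectors and can be checked numerically. The sketch below is our own illustration, using a coordinate projection and randomly drawn vectors:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 15, 9
# Orthogonal projection P_d onto the first d coordinates of R^n.
P = np.diag((np.arange(n) < d).astype(float))

x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = (P @ x) @ (P @ y)                 # <P_d x, P_d y> = <v_k, v_l>
rhs = (x - P @ x) @ (P @ y - y) + x @ y # identity used in both propositions
```

In the orthonormal case $\scal{x,y}=0$ this reduces to (\ref{JP.eq:orsp}); in the Riesz case the extra term $\scal{x,y}$ is what the constant $C(U)$ controls.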
Also note that if $U$ is a near isometry in the sense that $(1+\beta)^{-1}\leq\norm{U}^2\leq\norm{U^{-1}}^2\leq 1+\beta$, then $C(U)\leq\sqrt{2}\frac{\beta(2+\beta)}{(1+\beta)^2}$ and $\alpha\leq\frac{(1+\beta)\varepsilon^2+\beta(2+\beta)}{1-(1+\beta)\varepsilon^2}$. In particular, if $U$ is near enough to an isometry, meaning that $\beta$ is small enough, then this $\alpha$ is comparable with the $\alpha$ of Proposition \ref{JP.prop:estimate}. As a consequence, we may then easily adapt the proofs of results that relied on Proposition \ref{JP.prop:estimate} to obtain statements about Riesz bases. For example, an Umbrella Theorem for Riesz bases reads as follows: \begin{theorem} \label{JP.th:umbrellaR} Let $\varphi,\psi\in L^2({\mathbb{R}})$ with $\norm{\varphi}_2,\norm{\psi}_2 \geq1$. Let $\{f_n\}_{n=1}^{\infty}$ be a Riesz basis for $L^2({\mathbb{R}})$ with orthogonalizer $U$ near enough to an isometry, in the sense that $(1+\beta)^{-1}\leq\norm{U}^2\leq\norm{U^{-1}}^2\leq 1+\beta$ with $\beta$ small enough. Then there exists a constant $N=N(\varphi,\psi,\beta)$, depending only on $\varphi, \psi$ and $\beta$, such that the number of terms of the basis that satisfy $$|f_n(x)|\leq|\varphi(x)|\quad\mbox{and}\quad|\widehat{f}_n(\xi)|\leq|\psi(\xi)|$$ for almost all $x, \xi\in{\mathbb{R}}$ is bounded by $N$. As with previous results, a bound on $N$ can be given in terms of spherical codes. \end{theorem} \subsection*{Acknowledgements} A portion of this work was performed during the Erwin Schr\"odinger Institute (ESI) Special Semester on ``Modern methods of time-frequency analysis.'' The authors gratefully acknowledge ESI for its hospitality and financial support. The authors also thank Professor H.S. Shapiro for valuable comments related to the material.
\section{Introduction} At present, much attention is focused on galaxy clusters due to the potential application of the thermal Sunyaev-Zeldovich (SZ) effect as a cosmological tool. Together with observations of X-ray emission, the SZ effect allows a measurement of the Hubble constant to be made if a complete sample of galaxy clusters is used (see reviews by Rephaeli 1995 and Birkinshaw 1998). Recent advances in interferometric tools have now allowed accurate mapping of the SZ decrement, producing two-dimensional images which facilitate comparison of the X-ray emission and the SZ effect. The SZ effect is typically of arcminute scale, which is not observable with most interferometers designed to achieve high angular resolution. The exception is the Ryle Telescope, which has been used successfully to image the SZ effect at 2 cm. Another way to achieve the necessary beam size and sensitivity is to use an interferometer designed for millimeter wavelengths equipped with low-noise centimeter-wave receivers. We used this approach at the Owens Valley Radio Observatory Millimeter Array (OVRO) and Berkeley-Illinois-Maryland Association Millimeter Array (BIMA), where we have now detected the SZ effect in over 20 clusters at 1 cm, with preliminary results given in Carlstrom {\it et al.} (1996, 1997). The accuracy of cm-wave observations of the SZ effect can be limited by emission from unresolved radio point sources towards galaxy clusters. The choice of 28.5 GHz as the observing frequency was influenced by four different factors: the large beam size required to be sensitive to the SZ decrement using existing interferometers, the availability of low-noise HEMT amplifiers, atmospheric transparency, and the expected low radio source contamination due to the falling flux density of most radio sources with increasing frequency.
The interferometric technique makes it possible to detect radio point sources with longer baselines, which have little sensitivity to the SZ effect, and then remove their contribution from the short-baseline data. Though such removal will produce point-source-free SZ images, the uncertainties in the removal of sources, due to the limited signal-to-noise and imperfect coherence, can introduce systematic noise. When the flux densities of point sources are high, modeling and removal can result in systematic bias levels comparable to the size of the SZ effect. Since there are no published surveys at 28.5 GHz, it is not possible for us to predict accurately the number of radio sources expected to be present in a given cluster. In a few hours, however, it is possible for us to map a cluster with sufficient sensitivity to image unresolved 28.5 GHz radio sources which may complicate SZ mapping. Clusters with no bright sources are then observed for longer periods, $\sim$ 20 to 50 hours, to obtain adequate signal-to-noise images of the SZ effect. In this paper, our primary goal is to provide information on clusters which contain radio point sources at 28.5 GHz. Future publications will present our results on SZ detections in detail. Given that the cluster sample presented in this paper is incomplete in terms of either redshift or X-ray luminosity, statistical studies with this sample relating to cluster properties should be treated with caution. Section 2 of this paper describes observations made with the OVRO and BIMA arrays. The detected 1 cm radio source sample and its properties are presented in Section 3, where we also estimate the radio source contamination in measuring the Hubble constant through a joint analysis of SZ and X-ray data by considering the galaxy cluster Abell 2218 as an example. \section{Observations} The observed cluster sample is presented in Table 1.
The pointing centers of clusters were derived from the existing literature and were checked with optical images when such images were available. Usually, optical coordinates of the central galaxy were taken as the pointing coordinates of a given cluster. If there was not a clear central galaxy, centroid coordinates from X-ray observations were used (e.g., Ebeling {\it et al.} 1996, Ebeling {\it et al.} 1997). Our sample ranges in redshift from $\sim$ 0.15 to 0.85, with the lower limit imposed by the large angular scale of nearby clusters, to which the interferometer would not be sensitive, and the upper limit based on the X-ray detection limit of distant clusters. This sample was observed at OVRO with six telescopes of the millimeter array during the summers of 1995 and 1996, with six telescopes of the BIMA array during the summer of 1996, and with nine telescopes of the BIMA array during the summer of 1997. We equipped both arrays with low-noise 1 cm receivers, especially designed for the detection of the SZ effect. Each receiver contains a cryogenically cooled scalar feed-horn and HEMT amplifier covering the frequency range 26 to 36 GHz. The system temperatures scaled above the atmosphere ranged from 30 to 45 K. During the 1995 OVRO observations, our receivers were sensitive to linear polarization. Because the polarization position angle of our calibrating sources, which are expected to be polarized at up to the 10\% level, rotates on the sky relative to the receivers, the calibration process for cluster fields observed in 1995 introduced additional uncertainties. For long time-series calibrator observations with large parallactic angle coverage, the flux variation can be corrected by estimating the polarization. However, for short observations we expect an additional 5\% to 10\% uncertainty in the flux density of sources imaged in our 1995 cluster sample. We upgraded our receivers so that observations during 1996 and 1997 detected circular polarization, which is not subject to this effect.
For clusters that were initially observed in 1995 and were reobserved in later years, we have opted to use the latest data to avoid additional uncertainties. Integration time on each cluster field ranged from $\sim$ 3 hours to 50 hours, with the short integration times on clusters where we happened to detect a bright radio source. For each cluster, $\sim$ 5 minute observations of a secondary calibrator from the VLA calibrator list were interleaved with every $\sim$ 25 minutes spent on a cluster. Between different clusters, $\sim$ 45 to 60 minutes were spent observing planets, with care taken to observe Mars frequently since it is used as our primary flux calibrator. The flux densities of the secondary gain and phase calibrators were calibrated relative to Mars. The brightness temperature of Mars was calculated using a thermal-radiative model with an estimated uncertainty of 4\% (Rudy 1987). In Table 2, we present 1 cm flux densities of gain and phase calibrators determined through this process for the summer 1997 observations, which can be useful for future observational programs at this wavelength. Some of these calibrator sources are likely to be variable at 28.5 GHz, but during the time scale of our 1997 observations, 2 months, the maximum variation was found to be less than 4\%. The uncertainties in the reported flux densities in Table 2 are less than $\sim$ 5\%. For the OVRO data, the MMA software package (Scoville {\it et al.} 1993) was used to calibrate the visibility data and then write it in UV-FITS format. We flagged all of the data taken when one antenna was shadowed by another, cluster data that was not bracketed in time by phase calibrator data (mostly at the end or beginning of an observation), and, rarely, data with anomalous correlations. We followed the same procedure for data from BIMA, except that the MIRIAD software package (Wright \& Sault 1993) was used for calibration and data editing purposes. 
The image processing and CLEANing were done using DIFMAP (Shepherd, Pearson, \& Taylor 1994). We cleaned all fields uniformly, based on the rms noise level. Our automated mapping algorithm within DIFMAP was able to find sources with extended structures, all of which were confirmed by comparison with low frequency data, such as the VLA D-array 1.4 GHz NVSS survey (Condon {\it et al.} 1996). In general, $\sim$ 2000 clean iterations with a low clean loop gain of 0.01 were used to avoid instabilities and artifacts that can occur in fields with a large number of sources. We looked for radio point sources in naturally weighted maps with visibilities greater than 1 k$\lambda$ in BIMA data and 1.5 k$\lambda$ in OVRO data. Since the interferometer is less sensitive with only the long baseline data, we obtained flux densities of detected sources in maps made with all visibilities. Using images made with all the UV data also allowed us to look for sources with extended structure. Typical synthesized beam sizes in these images range from 12$''$ to 30$''$. For typical cluster and control blank fields with no bright radio sources ($\geq$ 1 mJy), and no evident SZ decrement, the noise distribution was found to be a Gaussian centered at zero. These images did not contain any pixels with peak flux density $\leq$ -4 $\sigma$ within 250$''$. The mean rms noise level for all our 56 cluster observations is 0.24 mJy beam$^{-1}$, while the lowest rms noise level is 0.11 mJy beam$^{-1}$ for BIMA observations and 0.07 mJy beam$^{-1}$ for OVRO observations. Given the decrease in sensitivity due to the primary beam attenuation away from the image centers, we only report sources within 250$''$ of the pointing coordinates. A Gaussian-noise analysis suggested that within 250$''$ from the center in all 56 cluster fields, only 1 noise pixel is expected at a level above 4 $\sigma$. 
Among all 56 cluster fields, there was only one instance where a source was clearly detected at a distance greater than 250$''$ from the cluster center: in CL 0016+16 we found a source $\sim$ 290$''$ away from the pointing coordinates, which is discussed in Carlstrom {\it et al.} (1996). \section{Results and Discussion} In Table 3, we report the flux densities of detected 1 cm radio sources. When calculating these flux densities, we have corrected for the beam response. To determine the primary beam pattern at BIMA, the radio source 3C454.3, with a flux density of $\sim$ 8.7 Jy at 28.5 GHz, was observed with a grid pattern of pointing offsets, and then a two dimensional Gaussian fit was performed to the flux density values. A 300$''$ by 300$''$ grid with 75$''$ spacing was best fit by a Gaussian with a major axis of 386$''$, a minor axis of 380$''$ (FWHM) and a position angle of -85.31$^{\circ}$, with an uncertainty of 3$''$. The rms residual from the fit was $\sim$ 0.01 Jy. A 360$''$ by 360$''$ grid with 90$''$ spacing was best fit by a Gaussian with a major axis of 382$''$ and a minor axis of 379$''$, also with an rms residual of 0.01 Jy. Given the small difference between the two axes and a positional uncertainty of at least $\sim$ 5$''$ at BIMA, we have utilized a symmetric Gaussian model with a 380$''$ FWHM. At OVRO, we have made holographic measurements of the beam pattern and have corrected the fluxes based on the position of sources relative to a modeled Gaussian distribution, which resulted in a primary beam of 235$''$ (FWHM). For our 1 cm sample, we searched the literature for low frequency counterparts within 15$''$ of the 28.5 GHz radio source coordinates. A low frequency source was accepted as a counterpart when the difference between our coordinates and the published coordinates was less than the combined astrometric uncertainty of our coordinates and the low frequency counterpart coordinates. 
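As a concrete illustration of the primary beam correction described above, the sketch below divides an observed flux density by a symmetric Gaussian beam response, using the 380$''$ FWHM BIMA beam from the fit. The function name and example values are illustrative, not taken from the actual reduction pipeline.

```python
import math

def beam_corrected_flux(s_obs_mJy, offset_arcsec, fwhm_arcsec):
    """Divide the observed flux density by the Gaussian primary-beam
    response at the source's offset from the pointing center."""
    response = math.exp(-4.0 * math.log(2.0) * (offset_arcsec / fwhm_arcsec) ** 2)
    return s_obs_mJy / response

# A source 190'' off-center in a BIMA field (380'' FWHM) sits at the
# half-power point, so its corrected flux is twice the observed value.
print(beam_corrected_flux(1.0, 190.0, 380.0))  # approximately 2.0
```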
The error in the 1 cm coordinates ranges from $\sim$ 3$''$ to 10$''$, which is equivalent to the image resolution divided by the signal-to-noise with which the source was detected. For cluster fields with bright radio sources, the signal-to-noise was low due to small integration times, producing uncertainties in position as high as $\sim$ 10$''$. Still, identification of such sources was easier due to their relatively high flux densities. For published sources, the astrometric errors ranged from sub-arcseconds, mostly from VLA observations, to a few arcseconds. The mean difference between our coordinates and published coordinates was $\sim$ 6$''$. Based on Moffet \& Birkinshaw (1989), we estimated the field density of 5 GHz radio sources towards clusters with a flux limit of 1 mJy to be $\sim$ 25 degree$^{-2}$. Therefore, the probability of an unrelated radio source, with a 5 GHz flux density above 1 mJy, lying within 6$''$ is $<$ 0.5\%. When there is a well known counterpart from the literature coincident with the detected 1 cm source, we have noted the commonly used name in Table 3. We have calculated the spectral index of individual radio sources by fitting all known flux densities, with the spectral index $\alpha$ defined through $S \propto \nu^{-\alpha}$. In Fig.\ 1, we show a histogram of the calculated spectral indices of 52 sources for which we have found radio observations at other frequencies. In this plot, we have not included the sources 1635+6613 towards Abell 2218 and 1615-0608 towards Abell 2163, which are found with flux densities that peak between 1.4 GHz and 28.5 GHz (e.g., Fig.\ 2). These sources may indicate self-absorbed radio cores, with the spectral turnover due to free-free absorption. Such turnovers in inverted spectra are found in Gigahertz Peaked Spectrum (GPS) sources, though the definition of GPS sources calls for peaked spectra between 0.5 and 10 GHz (De Vries, Barthel, \& O'Dea 1997). 
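The two-point version of the spectral index calculation, with the convention $S \propto \nu^{-\alpha}$ used above, can be sketched as follows. The function and the example flux densities are hypothetical; the paper fits all known flux densities, not just two points.

```python
import math

def spectral_index(s1_mJy, nu1_GHz, s2_mJy, nu2_GHz):
    """Two-point spectral index alpha, with the convention S propto nu^-alpha,
    from flux densities measured at two frequencies."""
    return -math.log(s2_mJy / s1_mJy) / math.log(nu2_GHz / nu1_GHz)

# A source falling from 10 mJy at 1.4 GHz to 1 mJy at 28.5 GHz is steep:
alpha = spectral_index(10.0, 1.4, 1.0, 28.5)
print(round(alpha, 2))  # steep spectrum, alpha approx 0.76
```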
The increase in turnover frequency to well above 10 GHz could be due to an increase in ambient density. Also, 1615-0608 towards Abell 2163 is known to be variable based on VLA observations by Herbig \& Birkinshaw (1994). During our observations, the flux density of this source did not change significantly: we measured a flux density of 1.12 $\pm$ 0.29 mJy in 1995 (OVRO) and 0.93 $\pm$ 0.42 mJy in 1997 (BIMA) at 1 cm. In Table 3, we report the 1995 flux density value since the tabulated VLA measurements were made closer to our 1995 observations. Abell 2163 is also known to contain one of the largest radio halo sources ever found. We did not detect any emission from the cluster center, which is understandable given that the halo was detected only at 1.4 GHz, with an integrated flux density of $\sim$ 6 mJy and a steep spectral index of $\sim$ 1.5. In our sample we also find 3 sources with inverted spectra between 1.4 and 28.5 GHz: 0152+0102 towards Abell 267, 0952+5151 towards Zw 2701, and 1155+2326 towards Abell 1413. These could represent either free-free emission due to a starburst, or synchrotron emission from a weak AGN, or both, with an optically thick part of a thermal bremsstrahlung component that extends to high frequencies. Since the inverted spectral indices are less than -2, which is the value expected for optically thick thermal sources, it is more likely that these sources represent multiple non-thermal components. The relatively flat-spectrum ($-0.5 \lesssim \alpha \lesssim 0.5$) sources may indicate unresolved cores and hotspots, and further high resolution observations are necessary to resolve the full structure. These sources include 0152+0102 towards Abell 267 and 2201+2054 towards Abell 2409. 
In Table 3, the identification of a source as a central galaxy (CG) was only made when we have used the optical coordinates of the central galaxy from the literature as the pointing coordinates, and when we have detected a radio source at 1 cm within 10$''$ of the observed coordinates. We have found 13 such sources, which may well represent the radio emission associated with the central cD galaxy of the cluster. Due to the low resolution of our observations, most of the radio sources are unresolved, but in a few cases we find some evidence for extended emission. These sources include 0037+0907 and 0307+0908 towards Abell 68 (Fig.\ 3), 1716+6708 (4C +67.26) towards RXJ1716+6708 (Fig.\ 4), 1335+4100 (4C +41.26) towards Abell 1763, and 1017+5934 towards Abell 959. The nature of the extended emission associated with these sources should be further studied, and high resolution observations at several frequencies will be helpful in this regard. 1335+4100 (4C +41.26) towards Abell 1763 is a well studied FR II type radio source (e.g., Owen 1975). For our sample of 52 radio sources with known flux densities at lower frequencies, a mean spectral index of 0.77 $\pm$ 0.06 and a median of 0.84 are found. If the three sources with inverted spectral indices are not considered, the mean and median rise to 0.85 $\pm$ 0.06 and 0.85 respectively. To avoid a biased estimate of the spectral index distribution, however, we must consider counterparts at 1 cm for all sources detected at lower frequencies. Galaxy clusters CL 0016+16, Abell 2218 and Abell 665 have been observed at 1.4, 4.85, 14.9 and 20.3 GHz by Moffet \& Birkinshaw (1989), and their observations are complete to a flux density limit of 1 mJy at 4.85 GHz. In each of these three clusters, we selected sources in the low frequency survey which were located within 300$''$ of the cluster center. 
We list these sources, their flux densities at 1.4 GHz, expected flux densities at 28.5 GHz based on 1.4 and 4.85 GHz spectral index, observed flux densities at 28.5 GHz, and calculated spectral indices between 1.4 and 28.5 GHz in Table 4. At 28.5 GHz, we detect all sources with flux densities greater than 7 mJy at 1.4 GHz, at a detection level greater than 3 $\sigma$. We looked for counterparts of these sources at 28.5 GHz, which should form a complete sample and not bias the spectral index distribution. Also, given that we looked for 28.5 GHz counterparts only within 15$''$ of the 1.4 GHz source coordinates, we expect all detections at a level above 3$\sigma$ to be real. For this sample, we find a mean spectral index of 0.71 $\pm$ 0.08, and a median of 0.71. The 1 cm spectral index distribution agrees with that of the 6 cm mJy population with a median of 0.75 (Donnelly {\it et al.} 1987). However, the 1 cm distribution is steeper than the sub-mJy and the $\mu$Jy populations, where medians of 0.35 (Windhorst {\it et al.} 1993) and 0.38 (Fomalont {\it et al.} 1991) were found at 4.85 and 8.4 GHz respectively. The latter sub-mJy populations have been identified with faint blue galaxies. Our sample could be part of the lower frequency mJy and sub-mJy populations, but given the lack of detailed optical data for most of our sources, we cannot exactly state the optical nature of our 28.5 GHz sample. We compare our results with a 1.4 GHz survey by Condon, Dickey, \& Salpeter (1990) in areas without rich galaxy clusters, in order to address whether we are finding an overabundance of radio sources at 28.5 GHz towards galaxy clusters. They found a total of 354 radio sources, down to a flux limit of 1.5 mJy, in a total surveyed area of about 12 square degrees. Seven of these sources are thought to be associated with galaxy clusters, which includes Abell 851 (source 0942+4658 in Table 3). 
Ignoring this small contamination, we calculated the expected flux densities of the 1.4 GHz sources at 28.5 GHz based on the mean spectral index value of our sample. For a spectral index of 0.71, we found 170 sources with fluxes greater than 0.4 mJy at 1 cm, which is the lowest 4 $\sigma$ detection limit of our observations. Given that the ratio of the total area observed by the 1.4 GHz survey to that of our survey is $\sim$ 15, we only expect $\sim$ 7 to 15 sources to be present with flux densities greater than 0.4 mJy at 28.5 GHz, and therefore to be detected in our observations. Given that we find 62 sources, ignoring the inverted and unusual spectrum sources, we conclude that we are finding at least $\sim$ 4 times more sources than usually expected. Given the primary beam attenuation and the nonuniformity of the flux density limit from one cluster field to another, the above ratio is only a lower bound. If we take these facts into account, we find that our sample at 28.5 GHz contains 7 times more sources than one would normally expect based on a low frequency radio survey devoid of clusters. This result may have some consequences when planning and reducing data from large field observations, such as our planned degree-square SZ effect survey. The excess of sources might be explained through gravitational lensing of a background radio population by cluster potentials. Our cluster sample ranges in redshift from $\sim$ 0.15 to 0.85, with a mean of 0.29 $\pm$ 0.02, and a median of 0.23. If the excess source counts are indeed an effect due to lensing, the background population should be at a lower flux level than what we have observed. An optimal lensing configuration suggests that background sources should be at angular diameter distances twice that of the galaxy clusters, which are assumed to be the lensing potentials. 
For the range of cluster redshifts, the background source sample should be at redshifts between $\sim$ 0.4 and 1.4, with a mean redshift of $\sim$ 0.7. In terms of well known radio source samples, a possibility for such an unlensed population between redshifts of 0.4 and 1.4 is the sub-mJy radio sources at 5 and 8.4 GHz (Windhorst {\it et al.} 1993). For simplicity, we consider a cluster potential based on the singular isothermal sphere (SIS) model of Schneider, Ehlers \& Falco (1992). Such a potential brightens background sources, but dilutes their spatial distribution, by the magnification factor, \begin{equation} \mu(\theta) = \left| 1 - \frac{\theta_E}{\theta} \right|^{-1}, \end{equation} where $\theta$ is the angle, or distance, to the radio source from the cluster center, and $\theta_E$ is the Einstein angle, which depends on the distances to a given cluster and the background radio sources ($\theta > \theta_E$). The Einstein angle can be observationally determined through optical images of clusters where background sources are lensed into arcs whose redshifts are known. In order to estimate reliable Einstein angles for background sources at redshifts around $\sim$ 0.7, we considered two well studied clusters. In Abell 2218 an arc is found $\sim$ 21$''$ from the cluster center with a measured redshift of 0.702 (Pell\'{o} {\it et al.} 1992), and in Abell 370 an arc is found $\sim$ 36$''$ from the cluster center, with a redshift of 0.72 (Kneib {\it et al.} 1994). The 1 cm source sample ranges from $\sim$ 0 to 250$''$ in distance from the individual cluster centers, with a mean of $\sim$ 94 $\pm$ 10$''$, and a median of 97$''$. These values suggest a mean magnification factor of $\sim$ 1.4, implying that we should expect 10 to 20 sources at 28.5 GHz towards galaxy clusters, based on our earlier estimate of 7 to 15 sources and not accounting for the spatial dilution due to lensing. It is unlikely that lensing can account for the significant excess number of radio sources we have detected at 28.5 GHz. 
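The SIS magnification of Equation (1) is easy to evaluate for the quantities quoted above. The function name is illustrative; the Einstein angles come from the Abell 2218 and Abell 370 arcs, and the 94$''$ offset is the mean of the 1 cm sample.

```python
def sis_magnification(theta_arcsec, theta_E_arcsec):
    """SIS magnification mu = |1 - theta_E/theta|^-1 for a source
    outside the Einstein radius (theta > theta_E)."""
    return 1.0 / abs(1.0 - theta_E_arcsec / theta_arcsec)

# Einstein angles of ~21'' and ~36'' against the mean source offset of ~94''
# bracket a modest magnification of order 1.4:
for theta_E in (21.0, 36.0):
    print(round(sis_magnification(94.0, theta_E), 2))  # prints 1.29, then 1.62
```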
Also, VLA A-array observations at 1.4 GHz to a flux density limit of 1 mJy have not yet produced convincing evidence for the existence of gravitationally lensed radio sources, such as radio arcs, towards galaxy clusters (e.g. Andernach {\it et al.} 1997). An apparent detection of gravitational lensing towards clusters, based on the tangential orientation of radio sources, is discussed in Bagchi \& Kapahi (1995). Recently, Smail {\it et al.} (1997) have observed an increase in sub-mm surface flux density towards the clusters Abell 370 and CL 2244-02, which was interpreted as due to gravitational lensing by cluster potentials of strongly star-forming galaxies at redshifts $\gtrsim$ 1. Given that the number counts of our sample cannot be totally explained as due to a lensing effect, and that we do not have enough resolution to look for alterations that might be a result of lensing, we conclude that a large fraction of the detected 1 cm sample must be associated with the clusters towards which they were found. In Fig.\ 6, we plot the source counts per solid angle at 1 cm, which were binned in logarithmic intervals of 0.2 mJy into 8 different bins. The solid angle for each flux bin takes into account the variation in sensitivity of our observations. A large number of sources are found in the lowest bin, which may suffer from nonuniform detection due to the variation in noise level from one cluster field to another. The maximum-likelihood fit to a power-law distribution of the observed sources, normalised to the source counts greater than 1.6 mJy, is $N(>S) = (59 ^{+20}_{-15}) \times (S/$mJy$)^{-0.96 \pm 0.14}$ in a total surveyed area of 2.5 $\times$ 10$^{-4}$ sr, where $N(>S)$ is the integral number of sources with flux densities greater than $S$ in mJy. 
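The fitted integral counts and the corresponding differential slope can be sketched as follows, using the best-fit central values from the text (the function name is illustrative, and the quoted uncertainties on the fit are ignored here).

```python
def cumulative_counts(s_mJy, norm=59.0, alpha=0.96):
    """Best-fit integral counts N(>S) = norm * (S/mJy)^-alpha
    over the 2.5e-4 sr surveyed area."""
    return norm * s_mJy ** (-alpha)

# Differentiating N(>S) propto S^-0.96 gives dN/dS propto S^-1.96,
# i.e. a differential slope gamma = alpha + 1.
gamma = 0.96 + 1.0
print(round(gamma, 2))                  # 1.96
print(round(cumulative_counts(1.0)))    # counts above 1 mJy, central value 59
```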
Given that we only looked for sources towards a sample of X-ray luminosity selected galaxy clusters, and that we have not carried out 28.5 GHz observations to a uniform flux density limit in all observed fields, the above number count-flux relationship should not be treated as true in general for all radio sources at 28.5 GHz. However, our result may be useful when studying radio source contamination in planned CMB anisotropy and SZ experiments. The corresponding differential source count slope $\gamma$ is $\sim$ 1.96 ($dN/dS \propto S^{-\gamma}$). This slope is similar to what is found for 6 cm mJy radio sources ($\gamma$ $\approx$ 1.8, Donnelly {\it et al.} 1987) with flux densities similar to our sample, but is marginally flatter than that of the sub-mJy population of radio sources ($\gamma$ $\approx$ 2.3, Windhorst {\it et al.} 1993). The flattening of the slope from the expected Euclidean value ($\gamma = 2.5$) is likely to be due to the dependence of radio luminosity on galaxy cluster properties, as we may be finding bright radio sources towards X-ray luminous clusters. Recently, Loeb \& Refregier (1997) have suggested that the value of the Hubble constant determined through a joint analysis of SZ and X-ray data may be underestimated due to radio point source contamination. We address this issue based on our 28.5 GHz data and low frequency observations towards Abell 2218, which was also studied in Loeb \& Refregier (1997). There are 5 known sources within 300$''$ of the cluster center (Moffet \& Birkinshaw 1989), out of which we detect 3 (see Fig.\ 5) down to a flux density of $\sim$ 1 mJy. By subtracting these three sources and using all visibilities and a Gaussian UV taper of 0.5 at 0.9 k$\lambda$, we find an SZ decrement with a signal-to-noise ratio greater than 20. The restored beam size of this map is 110$''$ by 98$''$. 
By extrapolating low frequency flux densities to 1 cm, based on the 1.4 and 4.85 GHz spectral indices, we infer an unaccounted intensity of $\sim$ 250 Jy sr$^{-1}$. Assuming that the flux densities in the map at the expected location of the unsubtracted sources are the real 28.5 GHz flux densities of the undetected sources, we estimate an upper limit on the unaccounted intensity of $\sim$ 590 Jy sr$^{-1}$. The latter value is equivalent to the noise contribution in the observed SZ decrement. The two intensities are equivalent to $\sim$ 10 and 25 $\mu$K respectively, which we take as the range of errors $\Delta T_{sz}$ in the observed SZ temperature decrement. The central SZ temperature decrement towards Abell 2218, $T_{sz}$, ranges from $\sim$ 0.6 to 1.1 mK, based on different $\beta$-model fits to the SZ morphology (see also Jones {\it et al.} 1993). The Hubble constant, $H$, varies as $H \propto {T_{sz}}^{-2}$. Thus, the offset between the true and calculated Hubble constant, $\Delta H$, is: \begin{equation} \frac{\Delta H}{H} \sim \frac{2 \Delta T_{sz}}{T_{sz}}. \end{equation} For Abell 2218, we find that the fractional correction to the Hubble constant from not accounting for sources with flux densities less than 0.5 mJy at 28.5 GHz ranges from $\sim$ 1\% to 6\%. If sources with flux densities less than 0.1 mJy are not accounted for, we estimate an upper limit on the offset of 2\%. These values are in agreement with Loeb \& Refregier (1997), who suggested that the 5 GHz sub-mJy population (Windhorst {\it et al.} 1993) may affect the derivation of the Hubble constant at 15 GHz by 7\% to 13\%, if sources fainter than 0.1 mJy at 15 GHz are not properly taken into account. Given that the intensity of the SZ decrement has a spectral index of -2, and assuming a spectral index of 0.7 for the radio source flux contribution, we estimate the frequency dependence of the correction as $\nu^{-2.7}$. 
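Equation (2) can be evaluated directly for the bracketing values quoted above. The exact quoted range of $\sim$ 1\% to 6\% reflects the specific $\beta$-model fits in the text, so the raw brackets computed below are only illustrative.

```python
def hubble_offset(delta_T_uK, T_sz_uK):
    """Fractional offset in H from unaccounted point-source flux,
    using Delta H / H ~ 2 Delta T / T (Equation 2)."""
    return 2.0 * delta_T_uK / T_sz_uK

# Unaccounted-intensity errors of 10-25 uK against a 0.6-1.1 mK decrement;
# the pairing of errors with decrements depends on the adopted beta-model.
print(round(hubble_offset(10.0, 1100.0), 3))  # approx 0.018
print(round(hubble_offset(25.0, 600.0), 3))   # approx 0.083
```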
Thus, at 15 GHz, we also find that the Hubble constant may be underestimated by up to 13\%. The contribution from free-free emission, which scales as $\nu^{-0.1}$, is not expected to contribute to an underestimation of the Hubble constant at a level of more than 0.1\% at 28.5 GHz. At high frequencies ($>$ 90 GHz), the free-free and dust emissions, with dust scaling as $\nu^{\beta}$, $3<\beta<4$ at 100 GHz, may become the dominant source of error. Therefore, based on the 28.5 GHz data towards Abell 2218, we conclude that the error in the Hubble constant through a joint analysis of SZ data at 28.5 GHz and X-ray emission observations is not expected to be larger than the error introduced by the analysis (such as $\beta$-models) and the unknown nature of the galaxy cluster shape (oblate vs. prolate etc.), which can amount to up to 30\% (e.g. Roettiger {\it et al.} 1997). \acknowledgments We wish to thank the staff at the OVRO and BIMA observatories for their assistance with our observations, in particular J. R. Forster, J. Lugten, S. Padin, R. Plambeck, S. Scott, and D. Woody. We also thank C. Bankston and P. Whitehouse at the MSFC for helping with the construction of the SZ receivers, and M. Pospieszalski for the Ka-band HEMT amplifiers. We also gratefully acknowledge H. Ebeling, A. Edge, H. Bohringer, S. Allen, C. Crawford, A. Fabian, W. Voges, J. Huchra and P. Henry for providing us results from X-ray observations of galaxy clusters prior to publication. ARC acknowledges useful discussions with A. Fletcher on an early draft of the paper. JEC acknowledges support from an NSF Young Investigator Award and the David and Lucile Packard Foundation. Initial support to build hardware for the SZ observations came from a NASA CDDF grant. Radio astronomy with the OVRO millimeter array is supported by NSF grant AST96-13717, and astronomy with the BIMA array is supported by NSF grant AST96-13998.
\section{Introduction} A celebrated theorem of Dirac \cite{Dirac52} asserts that a graph $G$ on $n \ge 3$ vertices contains a Hamilton cycle whenever its minimum degree, denoted $\delta(G)$, is at least $\frac{n}{2}$. Moreover, this is best possible as can be seen from the complete bipartite graph $K_{\floor{\frac{n-1}{2}},\ceil{\frac{n+1}{2}}}$. Dirac's theorem is one of the most influential results in the study of Hamiltonicity of graphs and has seen generalisations in many directions over the years (for some examples consider the surveys \cite{lisurvey,gouldsurvey,bennysurvey} and references therein). In this paper we discuss one such direction by considering what conditions ensure that we can find various \textit{$2$-factors} in $G$. Here, a \textit{$2$-factor} is a spanning $2$-regular subgraph of $G$, or equivalently a union of vertex-disjoint cycles that contains every vertex of $G$; hence, $2$-factors can be seen as a natural generalisation of Hamilton cycles. Brandt, Chen, Faudree, Gould and Lesniak \cite{brandt97} proved that for a large enough graph the same degree condition as in Dirac's theorem, $\delta(G)\ge n/2$, allows one to find a $2$-factor with exactly $k$ cycles. \begin{thm} If $k \geq 1$ is an integer and $G$ is a graph of order $n \geq 4k$ such that $\delta(G) \geq \frac{n}{2}$, then $G$ has a $2$-factor consisting of exactly $k$ cycles. \end{thm} Once again, this theorem gives the best possible bound on the minimum degree, using the same example as for the tightness of Dirac's theorem above. This indicates that perhaps if we restrict our attention to Hamiltonian graphs, thereby excluding this example, a smaller minimum degree might be enough. That this is in fact the case was conjectured by Faudree, Gould, Jacobson, Lesniak and Saito \cite{Faudree05}. 
\begin{conj}\label{conj:main} For any $k \in \mathbb{N}$ there are constants $c_k <1/2,$ $n_k$ and $a_k$ such that any Hamiltonian graph $G$ of order $n\ge n_k$ with $\delta(G) \ge c_k n+a_k$ contains a $2$-factor consisting of $k$ cycles. \end{conj} Faudree et al.\ proved their conjecture for $k=2$ with $c_2=5/12.$ The conjecture was shown to be true for all $k$ by S\'{a}rk\"{o}zy \cite{Sarkozy08} with $c_k=1/2-\varepsilon$ for an uncomputed small value of $\varepsilon>0.$ Gy\"ori and Li \cite{gyori12} announced that they can show that $c_k=5/11+\varepsilon$ suffices. The best known bound is due to DeBiasio, Ferrara and Morris \cite{DeBiasio14}, who showed that $c_k= \frac{2}{5}+\varepsilon$ suffices. On the other hand, no constructions of Hamiltonian graphs of very high minimum degree without $2$-factors of $k$ cycles are known. Faudree et al.~\cite{Faudree05} say ``we do not know whether a linear bound of minimum degree in Conjecture~\ref{conj:main} is appropriate''. S\'{a}rk\"{o}zy~\cite{Sarkozy08} says ``the obtained bound on the minimum degree is probably far from best possible; in fact, the ``right'' bound might not even be linear''. DeBiasio et al.~\cite{DeBiasio14} say ``one vexing aspect of Conjecture~\ref{conj:main} and the related work described here is that it is possible that a sublinear, or even constant, minimum degree would suffice to ensure a Hamiltonian graph has a 2-factor of the desired type''. In particular, in \cite{Faudree05,Sarkozy08,DeBiasio14} they all ask the question of whether the minimum degree needs to be linear in order to guarantee a $2$-factor consisting of $k$ cycles. 
We answer this question by showing that the minimum degree required to find $2$-factors consisting of $k$ cycles in Hamiltonian graphs is indeed sublinear in $n.$ \begin{thm}\label{thm:main} For every $k\in \mathbb{N}$ and $ \varepsilon>0$, there exists $N = N(k,\varepsilon)$ such that if $G$ is a Hamiltonian graph on $n \geq N$ vertices with $\delta (G) \geq \varepsilon n$, then $G$ has a $2$-factor consisting of $k$ cycles. \end{thm} \subsection{An overview of the proof} We now give an overview of the proof to help the reader navigate the rest of the paper. In the next section we will show that any $2$-edge-coloured graph $G$ on $n$ vertices with minimum degree linear in both colours contains a blow-up of a short colour-alternating cycle. This is an auxiliary result which we need for our main proof. There, we also introduce ordered graphs and prove a result which, given an ordering of the vertices of $G$, allows us to find a blow-up as above that is also consistent with the ordering, meaning that, given two parts of the blow-up, the vertices of one part all come before those of the other. The main part of the proof appears in \Cref{sec:main}. The key idea is, given a graph $G$ with a Hamilton cycle $H=v_1\ldots v_nv_1$, to build an auxiliary $2$-edge-coloured graph $A$ whose vertex set is the set of edges $e_i=v_iv_{i+1}$ of $H$, where for any edge $v_iv_j\in G\setminus H$ we have a red edge between $e_i$ and $e_j$ and a blue edge between $e_{i-1}$ and $e_{j-1}$ in $A$. The crucial property of $A$ is that given any vertex-disjoint union of colour-alternating cycles $S$ in $A$ one can find a $2$-factor $F(S)$ in $G$, consisting of the edges of $H$ which are not vertices of $S$ and the edges of $G$ not in $H$ which gave rise to the edges of $S$ in $A$. However, we cannot control the number of cycles in $F(S)$ (except knowing that $F(S)$ has at most $|S|$ cycles), since it depends on the structure of $S$ and also on how $S$ is embedded within $A$. 
To circumvent this issue we will instead find a large blow-up of $S$. Then, within this blow-up, we show how to find a modification of $S$, denoted $S^+$, which has the property that $F(S^+)$ has precisely one cycle more than $F(S)$. Similarly, we find another modification $S^-$ such that the corresponding $2$-factor $F(S^-)$ has precisely one cycle less than $F(S)$. Since the number of cycles in $F(S)$ is bounded, if our blow-up of $S$ is sufficiently large we can perform these operations multiple times and therefore obtain a $2$-factor with the target number of cycles. \section{Preliminaries}\label{sec:prelim} Let us first fix some notation and conventions that we use throughout the paper. For a graph $G=(V,E)$, let $\delta(G)$ denote its minimum degree, $\Delta(G)$ its maximum degree and $d(v)$ the degree of a vertex $v \in V$. For us, a \emph{$2$-edge-coloured graph} is a triple $G = (V, E_1, E_2)$ such that both $G_1 = (V, E_1)$ and $G_2 = (V,E_2)$ are simple graphs. We always think of $E_1$ as the set of \emph{red} edges and of $E_2$ as the set of \emph{blue} edges of $G$. Accordingly, we define $\delta_1(G)$ to be the minimum degree of red edges of $G$ (that is, $\delta(G_1)$), and analogously $\Delta_1(G)$, $\delta_2(G)$, etc. Note that with our definition the same two vertices may be connected by two edges with different colours. In this case, we say that $G$ has a \emph{double edge}. A \textit{blow-up} $G(t)$ of a $2$-edge-coloured graph $G$ (with no two vertices joined by both a red and a blue edge) is constructed by replacing each vertex $v$ with a set of $t$ independent vertices and adding a complete bipartite graph between any two such sets corresponding to adjacent vertices, in the colour of their edge. When working with \emph{digraphs} we always assume they are simple, so without loops and with at most one edge from any vertex to another (but we allow edges in both directions between the same two vertices). 
\subsection{Colour-alternating cycles}\label{subs:cycles} In this subsection, our goal is to prove that any $2$-edge-coloured graph, which is dense in both colours contains a blow-up of a colour-alternating cycle. We begin with the following auxiliary lemma that will only be used in the subsequent lemma where we will apply it to a suitable auxiliary digraph to give rise to many colour-alternating cycles. \begin{lem} Let $k \ge 2$ be a positive integer. A directed graph on $n$ vertices with minimum out-degree at least $\frac{n\log (2k)}{k-1}$ has at least $\frac{n^\ell}{2k^{\ell+1}}$ cycles of length $\ell$ for some fixed $2\le \ell\le k.$ \end{lem} \begin{proof} Let us sample $k$ vertices $v_1,\ldots, v_{k}$ from $V(G),$ independently, uniformly at random, with repetition. We denote by $X_i$ the event that vertex $v_i$ has no out-neighbour in $S:=\{v_1,\ldots, v_k\}.$ We know that $\mathbb{P}(X_i)\le \left(1-\frac{\log (2k)}{k-1}\right)^{k-1}\le \frac{1}{2k}.$ If no $X_i$ occurs then the subgraph induced by $S$ has minimum out-degree at least $1$ so contains a directed cycle. The probability of this occurring is at least: $$ \mathbb{P}\left(\overline{X_1}\cap \ldots \cap \overline{X_k}\right)=1-\mathbb{P}(X_1\cup \ldots \cup X_k)\ge 1-k\mathbb{P}(X_i) \ge 1/2,$$ where we used the union bound. This means that in at least $n^k/2$ outcomes we can find a cycle of length at most $k$ within $S.$ In particular, there is an $\ell \le k$ such that in at least $\frac{n^k}{2k}$ outcomes the cycle we find has length exactly $\ell$. Note that the same cycle might have been counted multiple times, but at most $k^\ell n^{k-\ell}$ times. This implies that $C_\ell$ occurs at least $\frac{n^\ell}{2k^{\ell+1}}$ times. \end{proof} Now, we use this lemma to conclude that there are many copies of some short colour-alternating cycle in any $2$-edge-coloured graph which has big minimum degree in both colours. 
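For the reader's convenience, the second inequality in the bound $\mathbb{P}(X_i)\le \left(1-\frac{\log (2k)}{k-1}\right)^{k-1}\le \frac{1}{2k}$ used in the proof above follows from the standard estimate $1-x \le e^{-x}$:

```latex
\[
\left(1-\frac{\log (2k)}{k-1}\right)^{k-1}
\;\le\; \exp\!\left(-(k-1)\cdot\frac{\log (2k)}{k-1}\right)
\;=\; e^{-\log (2k)}
\;=\; \frac{1}{2k},
\]
using that $1-x \le e^{-x}$ holds for all real $x$.
```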
\begin{lem}\label{lem:many-cycles} For every $\gamma \in (0,1)$ there exist $c=c(\gamma), L = L(\gamma)$ and $K = K(\gamma)$ such that, if $G$ is a $2$-edge-coloured graph on $n \geq K$ vertices satisfying $\delta_1(G), \delta_2(G) \geq \gamma n$, then $G$ contains at least $cn^\ell$ copies of a colour-alternating cycle of some fixed length $4 \le \ell \le L$. \end{lem} \begin{proof} Let $k=(8/\gamma^2) \log (8/\gamma^2),$ so that $\gamma^2/4 \ge \log (2k)/(k-1).$ We set $L=2k,$ $K=8k/\gamma^2$ and $c=(\gamma/2)^{2k}/(4k^{k+1}).$ We build a digraph $D$ on the same vertex set as $G$ by placing an edge from $v$ to $u$ if and only if there are at least $\gamma^2n/2$ vertices $w$ such that $vw$ is red and $wu$ is blue. Let us first show that every vertex of $D$ has out-degree at least $\gamma^2n/4.$ There are at least $\gamma n$ red neighbours of $v$ and each has at least $\gamma n$ blue neighbours, so there are at least $\gamma^2n^2$ red-blue paths of length $2$ starting at $v.$ Let us assume that there are fewer than $\gamma^2n/2$ vertices $u$ for which there are at least $\gamma^2n/2$ vertices $w$ such that $vw$ is red and $wu$ is blue. In this case there are fewer than $\gamma^2n/2 \cdot n+ n \cdot \gamma^2n/2$ red-blue paths starting at $v,$ which is a contradiction. Note that we allowed $u=v$ in the above consideration, so we deduce that the minimum out-degree of $D$ is at least $\gamma^2n/2-1\ge \gamma^2n/4.$ The previous lemma implies that there is some $\ell \le k$ such that $D$ contains at least $n^{\ell}/(2k^{\ell+1})$ copies of $C_\ell.$ For any such cycle, by replacing each directed edge by a red-blue path of $G$ between its endpoints, ensuring we do not reuse a vertex, we obtain at least $(\gamma^2n/2-\ell)(\gamma^2n/2-\ell-1)\cdots(\gamma^2n/2-2\ell+1)\ge (\gamma/2)^{2\ell}n^\ell$ colour-alternating $C_{2\ell}$'s in $G$.
Noticing that each such $C_{2\ell}$ may arise in at most $2$ different ways from a directed $C_{\ell}$ of $D,$ we deduce that there are at least $n^{\ell}/(2k^{\ell+1})\cdot(\gamma/2)^{2\ell}n^\ell/2\ge c(\gamma)n^{2\ell}$ colour-alternating $C_{2\ell}$'s in $G$. \end{proof} The reason for formulating the above lemma is that we can deduce the existence of the blow-up of a cycle from the existence of many copies of this cycle, using the hypergraph version of the celebrated K\H{o}v\'ari--S\'os--Tur\'an theorem proved by Erd\H{o}s in \cite{kst}: \begin{thm}\label{thm:kst} Let $\ell,t \in \mathbb{N}$. There exists $C=C(\ell,t)$ such that any $\ell$-graph on $n$ vertices with at least $Cn^{\ell-1/t^\ell}$ edges contains $K^{(\ell)}(t)$, the complete $\ell$-partite hypergraph with parts of size $t,$ as a subgraph. \end{thm} We are now ready to find our desired blow-up. \begin{lem}\label{lem:cycle-blow-up} For every $\gamma \in (0,1)$ and $t\in \mathbb{N}$, there exist positive integers $L = L(\gamma)$ and $K = K(\gamma,t)$ such that, if $G$ is a $2$-edge-coloured graph on $n \geq K$ vertices satisfying $\delta_1(G), \delta_2(G) \geq \gamma n$, then $G$ contains $\mathcal{C}(t6^L)$ where $\mathcal{C}$ is a colour-alternating cycle with $|V(\mathcal{C})| \leq L.$ \end{lem} \begin{proof} Let $L=L(\gamma),c=c(\gamma),K \ge K(\gamma)$ be the parameters of \Cref{lem:many-cycles}, so that we can find $cn^\ell$ copies of a colour-alternating cycle of length $4 \le \ell \le L.$ Let $C=C(L,t6^L)\ge C(\ell,t6^L)$ be the parameter given by \Cref{thm:kst}. By assigning each vertex of $V(G)$ to one of $\ell$ parts uniformly at random, we can find a partition of $V(G)$ into $V_1,\ldots,V_\ell$ such that there are at least $cn^\ell/\ell^\ell$ colour-alternating cycles $v_1\ldots v_\ell$ with $v_i \in V_i$.
Moreover, for at least half of these cycles the edges between each pair $V_i,V_{i+1}$ have the same colour pattern. We now build an $\ell$-graph $H$ on the same vertex set as $G$ whose edges correspond to the vertex sets of such colour-alternating cycles. So we know $H$ has at least $\frac{c}{2\ell^\ell}n^\ell \ge Cn^{\ell-1/(t^\ell\cdot 6^{\ell L})}$ many edges, by taking $K$ large enough, depending on $t,L.$ So \Cref{thm:kst} implies that $H$ contains $K^{(\ell)}(t6^L)$ as a subgraph, which corresponds to a desired $\mathcal{C}(t6^L).$ \end{proof} \subsection{Ordered graphs} In our arguments it will not be enough to just find a blow-up of a colour-alternating cycle as in the previous subsection; we will also care about the ``order'' in which the cycles are embedded. In this subsection we give some notation about ordered graphs and a result which we will need later. An \emph{ordered graph} is a graph together with a total order of its vertex set. Here, whenever $G$ is a graph on an indexed vertex set $V(G) = \{v_1, \dots, v_n\}$, we assume that $G$ is ordered by $v_i < v_j \iff i < j$. An \emph{ordered subgraph} of an ordered graph $G$ is a subgraph of $G$ that is endowed with the order that is induced by $G$, and if not stated otherwise, we assume that subgraphs of $G$ are always endowed with that order. For us, two vertices $u < v$ of an ordered graph $G$ are called \emph{neighbouring} if the set of vertices between $u$ and $v$, that is $\{x \in V(G) \mid u \leq x \leq v\}$, is either just $\{u,v\}$ or the whole vertex set $V(G)$. Given an ordered graph $G$, we say a blow-up $H=G(k)$ of $G$ is \emph{ordered consistently} if for any $x,y \in V(H)$ which belong to parts of the blow-up coming from vertices $u,v\in G$ respectively, we have $x <_H y$ iff $u <_G v.$ \begin{lem}\label{lem:ordered} Let $t,L\in \mathbb{N},$ let $H$ be a graph on $L$ vertices and suppose $H(t2^L)\subseteq G$ for an ordered graph $G$.
There exists an ordering of $H$ for which the consistently ordered $H(t)$ is an ordered subgraph of $G.$ \end{lem} \begin{proof} We prove the result by induction on $L,$ where the $L=1$ case is immediate. Let $\{V_1, \dots, V_L\}$ be the clusters of vertices of $H(t2^L),$ so $|V_i|=t2^L.$ Let $w_1, \dots, w_L$ be the median vertices of the sets $V_1, \dots ,V_L$ with respect to the ordering of $H(t2^L)$ induced by $G$ and assume without loss of generality that $w_1$ is the smallest of them. We now throw away all vertices of $V_1$ that are larger than $w_1$ and all vertices of $V_i$ that are smaller than $w_i$ for $i \geq 2$. This leaves us with $L$ sets $\{W_1, \dots, W_L\}$ of size $\ceil{|V_i|/2}=t2^{L-1}$ with the property that $v_1 \in W_1, v_i \in W_i \implies v_1 \leq_G w_1<_G w_i\leq_G v_i$ for all $i \geq 2$. If $v\in H$ corresponds to $V_1$ and we denote $H'=H-v$ then $\mathcal{W} = \{W_2, \dots, W_L\}$ spans $H'(t2^{L-1})\subseteq G\setminus V_1$. By the induction hypothesis we can find a consistently ordered $H'(t)$ as an ordered subgraph of $G\setminus V_1$, which together with any subset of size $t$ of $W_1$ gives the desired consistently ordered $H(t)$ in $G$. \end{proof} \section{Proof of \Cref{thm:main}}\label{sec:main} \subsection{Constructing an auxiliary graph}\label{constrA} Throughout the whole section, let $G$ be a Hamiltonian graph on $n$ vertices. First of all, let us fix a Hamilton cycle $H$ of $G$ and name the vertices of $G$ such that $H = v_1 v_2 \dots v_n v_1$. We assume that $G$ is ordered according to this labelling. Also, let us denote the edges of $H$ by $e_1, e_2, \dots , e_n$ such that $e_1 = v_1 v_2, \dots , e_n = v_n v_1$. In all our following statements, we will identify $v_{n+1}$ and $v_1$, and more generally $v_i$ and $v_j$, as well as $e_i$ and $e_j$, if $i$ and $j$ are congruent modulo $n$.
Furthermore, since we can always picture $G$ as a large cycle with some edges inside it, we call all the edges that are not part of $H$ the \emph{inner edges} of $G$. Our goal is to find a $2$-factor with a fixed number of cycles in $G$. Note that, if $G$ is dense, it is not hard to find a large collection of vertex-disjoint cycles in $G$. The difficulty lies in the fact that we want this collection to be spanning while still controlling the exact number of cycles. Naturally, we have to rely on the Hamiltonian structure of $G$ to give us such a spanning collection of cycles. Indeed, when building these cycles we will try to use large parts of the Hamilton cycle $H$ as a whole and connect them correctly using some inner edges of $G$. It is convenient for our approach to construct an auxiliary graph $A$ out of $G$ that captures the information we need about the inner edges of $G$. \begin{defn} \label{defn:aux} Given the setup above, we define the \emph{auxiliary graph} $A = A(G,H)$ as the following ordered, $2$-edge-coloured $n$-vertex graph: \begin{enumerate} \item Every vertex of $A$ corresponds to exactly one edge of $H$, thus we have $V(A) = \{e_1, \dots , e_n\}$ and we order the vertices of $A$ according to this labelling; \item two vertices $e_i = v_i v_{i+1}$ and $e_j = v_j v_{j+1}$ of $A$ are connected with a red edge if there is an inner edge of $G$ connecting $v_{i+1}$ and $v_{j+1}$; \item similarly, the vertices $e_i$ and $e_j$ of $A$ are connected with a blue edge if there is an inner edge of $G$ connecting $v_i$ and $v_j$. \end{enumerate} \end{defn} Throughout this section, let $A = A(G,H)$ for our fixed $G$ and $H$. Note that, by the above definition, every edge $\ell \in E(A)$ corresponds to a unique inner edge $e$ of $G$. In the following, we denote this edge by $e(\ell) \in E(G)$. To be precise, if $\ell = e_i e_j$, then $e(\ell) \coloneqq v_{i+1} v_{j+1}$ if $\ell$ is a red edge and $e(\ell) \coloneqq v_i v_j$ if $\ell$ is a blue edge.
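The construction in \Cref{defn:aux} is entirely mechanical, so it can be sketched in a few lines of Python (the $0$-based indexing and all names are our own: vertex $i$ of $A$ stands for the Hamilton-cycle edge joining $v_i$ and $v_{i+1}$, indices taken modulo $n$):

```python
def auxiliary_graph(n, inner_edges):
    """Auxiliary graph A(G, H): vertices 0..n-1 stand for the Hamilton-cycle
    edges, vertex i being the edge v_i v_{i+1} (mod n).  Two vertices i < j
    get a red edge if G has the chord v_{i+1} v_{j+1}, and a blue edge if G
    has the chord v_i v_j."""
    inner = {frozenset(e) for e in inner_edges}
    red, blue = set(), set()
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset({(i + 1) % n, (j + 1) % n}) in inner:
                red.add((i, j))
            if frozenset({i, j}) in inner:
                blue.add((i, j))
    return red, blue

# A 6-cycle with the single chord v_0 v_3:
red, blue = auxiliary_graph(6, [(0, 3)])
print(red, blue)  # one chord gives exactly one red and one blue edge of A
```

As the example shows, each inner edge produces exactly one red and one blue edge of $A$, in line with the observation that follows.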
Conversely, every inner edge of $G$ corresponds to exactly one red edge and to one blue edge of $A$. This leads to the following observation: \begin{obs} \label{deltaA} For $i \in \{1, \dots, n\}$, we have $d^{A}_1(e_i) = d^{G}(v_{i+1})-2$ and $d^{A}_2(e_i) = d^{G}(v_i)-2$. In particular, we have $\delta_1(A) = \delta_2(A) = \delta(G) - 2$. \end{obs} In \Cref{ExA} we give an example of a Hamiltonian graph and its corresponding auxiliary graph. \begin{figure}[ht] \caption{Let us call the left graph $G$ and fix its Hamilton cycle $H = v_1 \dots v_8 v_1$. Then the graph on the right is the auxiliary graph $A(G,H)$.} \includegraphics[scale = 0.8]{2-factors_picture1.pdf} \label{ExA} \end{figure} The motivation for defining $A$ in this way is given by the fact that $2$-regular (possibly non-spanning!) subgraphs $S \subseteq A$ satisfying some extra conditions naturally correspond to $2$-factors of $G$. Recall that in our setting, two vertices $e_i$ and $e_j$ of $A$ are neighbouring if and only if $i-j \equiv \pm 1 \pmod{n}$. Let us make the following definition: \begin{defn} \label{defn:correspondence} Given the same setup as above and a subgraph $S \subseteq A$ that is a union of vertex-disjoint colour-alternating cycles without neighbouring vertices (i.e.\ if $e_i \in V(S)$ then $e_{i-1},e_{i+1}\notin V(S)$), we define its corresponding subgraph $F(S) \subseteq G$ as follows: \begin{enumerate} \item $V(F(S)) \coloneqq V(G)$; \item the edges of $F(S)$ are all the edges of $H$ except for those that correspond to vertices of $S$. Additionally, for every edge $\ell \in E(S)$, let the corresponding inner edge $e(\ell)$ be an edge of $F(S)$ too. That is, $E(F(S)) \coloneqq \left(\{e_1, \ldots, e_n\} \setminus V(S)\right) \cup \{e(\ell) \mid \ell \in E(S)\}$. \end{enumerate} \end{defn} \begin{lem} \label{correspondence} If $S \subseteq A$ is a union of vertex-disjoint colour-alternating cycles without neighbouring vertices, then $F(S) \subseteq G$ is a $2$-factor.
\end{lem} In order to illustrate the above definitions, consider the Hamiltonian graph given in \Cref{ExA} and the subgraphs $S_1$ and $S_2$ of the corresponding auxiliary graph where $S_1$ is just the cycle $e_2 e_4 e_6 e_8 e_2$ and $S_2$ is the union of the cycles $e_1 e_3 e_1$ and $e_5 e_7 e_5$. Their corresponding $2$-factors $F(S_1)$ and $F(S_2)$ are shown as dashed in \Cref{ExCorr}. Note that they use the same inner edges of $G$ but still have different numbers of cycles. \begin{figure}[ht] \caption{$2$-factors $F(S_1)$ and $F(S_2)$ used in the illustration above.} \includegraphics[scale = 0.8]{2-factors_picture2.pdf} \label{ExCorr} \end{figure} \begin{proof}[Proof of \Cref{correspondence}] Since $F \coloneqq F(S)$ consists of exactly $n$ edges, it suffices to show that $\delta(F)\ge 2$. Let $v_j$ be an arbitrary vertex of $F$. We distinguish two cases: If both edges $e_{j-1}, e_j \notin V(S)$, then $e_{j-1}, e_j \in E(F)$ and $v_j$ is incident to $e_{j-1}$ and $e_j$ in $F$. Otherwise, exactly one of the edges $e_{j-1}$ and $e_j$ is a vertex of $S$, since $S$ contains no neighbouring vertices. In this case we use the fact that every vertex $e_i$ of $S$ is incident to a red edge $\ell_i$ and to a blue edge $\ell_i'$. Hence, by \Cref{defn:correspondence}, either $e_{j-1}\in S$ and $e_j \notin S$, in which case $v_j$ is incident to $e_j$ and $e(\ell_{j-1})$ in $F$, or $e_{j-1}\notin S$ and $e_j \in S$, in which case $v_j$ is incident to $e_{j-1}$ and $e(\ell_j')$ in $F$. In both cases these two edges are distinct, as one of them is an inner edge of $G$ and the other one is not. \end{proof} We note that $F(S)$ does not only depend on the structure of $S$ but also on the order in which $S$ is embedded within $A$. However, it is immediate that if $S$ is embedded in auxiliary graphs of two Hamiltonian graphs (possibly with different numbers of vertices) in the same order, then $F(S)$ has the same number of cycles in both cases.
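The bookkeeping of \Cref{defn:correspondence} is easy to simulate. The sketch below (our own $0$-based indexing as before; it only illustrates the edge bookkeeping and does not check that $S_2$ actually embeds into the auxiliary graph of \Cref{ExA}) builds $E(F(S))$ and verifies $2$-regularity for the example $S_2$ above:

```python
def two_factor_edges(n, S_vertices, S_edges):
    """E(F(S)) from the definition: keep every Hamilton-cycle edge whose
    A-vertex is not in S, and add e(l) for each edge l of S.  A-vertex i is
    the H-edge v_i v_{i+1} (mod n); S_edges are triples (i, j, colour)."""
    F = {frozenset({i, (i + 1) % n})
         for i in range(n) if i not in S_vertices}
    for i, j, colour in S_edges:
        if colour == 'red':
            F.add(frozenset({(i + 1) % n, (j + 1) % n}))
        else:
            F.add(frozenset({i, j}))
    return F

def is_two_regular(n, F):
    deg = [0] * n
    for e in F:
        for v in e:
            deg[v] += 1
    return all(d == 2 for d in deg)

# S_2 from the illustration: the 2-cycles {e_1, e_3} and {e_5, e_7}, each a
# red-blue double edge (0-based, e_1 -> 0, so the vertices are 0, 2, 4, 6).
S_vertices = {0, 2, 4, 6}
S_edges = [(0, 2, 'red'), (0, 2, 'blue'), (4, 6, 'red'), (4, 6, 'blue')]
F = two_factor_edges(8, S_vertices, S_edges)
print(len(F), is_two_regular(8, F))  # n = 8 edges, every vertex has degree 2
```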
\begin{obs}\label{obs:samecomps} Let $A_1=A(G_1,H_1)$ and $A_2=A(G_2,H_2)$. Let $S_1$ and $S_2$ be disjoint unions of colour-alternating cycles without neighbouring vertices which are isomorphic as coloured subgraphs of $A_1$ and $A_2$, and whose corresponding vertices appear in the same order along $H_1$ and $H_2.$ Then $F(S_1)$ and $F(S_2)$ consist of the same number of cycles. \end{obs} We remark that it is not always true that all $2$-factors of $G$ arise as $F(S)$ for some $S \subseteq A.$ \subsection{Controlling the number of cycles} \label{control} It is not hard to see that the auxiliary graph $A$ (of a graph with a big enough minimum degree) must contain a colour-alternating cycle $C$, which corresponds to a $2$-factor $F(C) \subseteq G$ by \Cref{correspondence} (disregarding, for the moment, the issue of $C$ containing neighbouring vertices). However, it is not at all obvious how to determine the number of components of $F(C)$ in general. We begin by giving a rough upper bound. \begin{obs} \label{obs:no.cycles} If $C \subseteq A$ is a non-empty colour-alternating cycle of length $L$ without neighbouring vertices, then the number of components of the corresponding $2$-factor $F(C)$ is at most $L$. \end{obs} \begin{proof} Note that the $2$-factor $F(C)$ contains exactly $L$ inner edges and, since $F(C) \neq H$, each cycle of $F(C)$ must contain at least one inner edge (in fact, at least two in our setting). \end{proof} However, in order to prove \Cref{thm:main}, we need to be able to show the existence of a $2$-factor consisting of exactly $k$ cycles, for a fixed predetermined number $k$. This is where we are going to make use of \Cref{lem:cycle-blow-up,lem:ordered}, which allow us to find a consistently ordered blow-up of $C$. This will give us the freedom to find slight modifications of $C$ with different numbers of cycles in $F(C)$.
\subsubsection{Going up} In this subsection we give a modification of a union of colour-alternating cycles whose corresponding $2$-factor has precisely one more cycle. \begin{defn}\label{defn:goingup} Let $S$ be a disjoint union of colour-alternating cycles with $V(S)=\{s_1, \dots, s_m\}$ and let $C$ be a cycle of $S$. We construct a $2$-edge-coloured ordered graph $U(S,C)$ as follows: \begin{enumerate} \item Start with a copy of $S$ and for every $s_i \in V(C)$, add a vertex $s_{i+1/2}$; \item For every red or blue edge $s_i s_j \in E(C)$, add an edge $s_{i+1/2} s_{j+1/2}$ of the same colour; \item Order the resulting graph according to the order of the indices of its vertices. \end{enumerate} Given a $2$-edge-coloured ordered graph $U$, we say that $U$ is a \emph{going-up version} of $S$ if there exists a component $C$ of $S$ such that $U$ and $U(S,C)$ are isomorphic $2$-edge-coloured ordered graphs. \end{defn} In other words, $U(S,C)$ consists of $S$ with an additional copy of $C$, ordered in such a way that the vertices of the new copy of $C$ immediately follow their corresponding vertices of the original copy of $C$. In particular, $U$ is also a disjoint union of colour-alternating cycles and is an ordered subgraph of a consistently ordered $S(2)$. Note that if $S$ contains no double edges, then neither does $U.$ \Cref{fig:goingup} shows what a going-up version $U$ of $S$ looks like if $S$ is just a colour-alternating $C_4$. \Cref{fig:goingupG} shows what the corresponding $2$-factors look like (assuming $S \subseteq U \subseteq A$). Note that the dashed cycles of $F(U)$ have the same structure as the dashed cycles in $F(S)$, but $F(U)$ additionally has a new bold cycle. We now show that a similar situation occurs in general.
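A minimal sketch of the construction of $U(S,C)$ in \Cref{defn:goingup} (the representation is our own: coloured edges on numeric indices, with \texttt{Fraction} indices so that each new vertex $s_{i+1/2}$ slots in immediately after $s_i$ in the order):

```python
from fractions import Fraction

def going_up(S_edges, C_vertices):
    """U(S, C): copy S and, for every edge s_i s_j of the cycle C, add the
    edge s_{i+1/2} s_{j+1/2} of the same colour.  Since S is a disjoint
    union of cycles, the edges of C are exactly the edges of S with both
    endpoints in C_vertices."""
    half = Fraction(1, 2)
    U = list(S_edges)
    for i, j, colour in S_edges:
        if i in C_vertices and j in C_vertices:
            U.append((i + half, j + half, colour))
    return U

# S = a single alternating C_4 (red: s1s2, s3s4; blue: s2s3, s4s1):
S = [(1, 2, 'red'), (3, 4, 'red'), (2, 3, 'blue'), (4, 1, 'blue')]
U = going_up(S, {1, 2, 3, 4})
print(len(U))  # 8: the four edges of S plus a coloured copy of C
```

Sorting the resulting indices reproduces the interleaving $s_1, s_{3/2}, s_2, s_{5/2}, \dots$ required for $U$ to sit inside a consistently ordered $S(2)$.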
\begin{figure}[ht] \caption{A colour-alternating cycle $S$ and a going-up version $U$ of it.} \includegraphics[scale = 0.8]{2-factors_going_up.pdf} \label{fig:goingup} \end{figure} \begin{figure}[ht] \caption{$2$-factors corresponding to $U$ and $S$ given in \Cref{fig:goingup}.} \includegraphics[scale = 0.8]{2-factors_going_up_G.pdf} \label{fig:goingupG} \end{figure} \begin{lem}[Going up]\label{lem:goingup} Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices and let $U$ be an ordered subgraph of $A$ without neighbouring vertices that is a going-up version of $S$. Then, the $2$-factor $F(U) \subseteq G$ has exactly one component more than $F(S)$. \end{lem} \begin{proof}[Proof of \Cref{lem:goingup}] For an edge $e=v_k v_{k+1}\in H$ we let $v^{+}(e)=v_{k+1}$ and $v^{-}(e)=v_k.$ We denote the vertices of $S$ by $s_1, \dots, s_m$ according to their order in $A$. Let $C$ be a colour-alternating cycle $s_{j_1} \dots s_{j_k} s_{j_1}$ in $S$ for which $U=U(S,C)$. Let us denote the vertices of $U$ by $u_1, \dots, u_m$ and $u_{j_1 + 1/2}, \dots, u_{j_k+1/2}$ as they appear along $H$ such that $u_1, \dots, u_m$ make a copy of $S$ and $u_{j_1}, \dots, u_{j_k}$ correspond to $C$. The vertices $v^+(u_{j_i})$ and $v^-(u_{j_i + 1/2})$ are connected in $F(U)$ by paths $P_i \subseteq H$ for $i \in \{1, \dots, k\}$. Furthermore, since $C$ is a colour-alternating cycle, either $v^+(u_{j_i})v^+(u_{j_{i+1}})\in E(G)$ for all odd $i$ and $v^-(u_{j_i + 1/2})v^-(u_{j_{i+1} + 1/2})\in E(G)$ for all even $i$ or vice versa in terms of parity.
This means that, taking all $P_i$ and these edges, we obtain one cycle $$Z:=v^+(u_{j_1})v^+(u_{j_2})P_2v^-(u_{j_2+1/2})v^-(u_{j_3+1/2})P_3v^{+}(u_{j_3})\dots P_kv^-(u_{j_k + 1/2})v^-(u_{j_1+1/2})P_1v^+(u_{j_1})\in F(U),$$ if $C$ starts with a red edge (which is exactly the bold cycle in the example shown in \Cref{fig:goingupG}) or $$Z:=v^-(u_{j_1+1/2})v^-(u_{j_2+1/2})P_2v^+(u_{j_2})v^+(u_{j_3})P_3v^{-}(u_{j_3+1/2})\dots P_kv^+(u_{j_k})v^+(u_{j_1})P_1v^-(u_{j_1+1/2})\in F(U),$$ if $C$ starts with a blue edge. Let us now consider the graph $G'$ that is obtained from $G$ by deleting $Z$ (including all edges incident to vertices of $Z$) and adding the edges $S_{j_i} = v^-(u_{j_i}) v^+(u_{j_i + 1/2})$ for $i \in \{1, \dots, k\}$. Let $H'$ be the Hamilton cycle of $G'$ made of $H$ and the $S_{j_i}$'s, ordered according to the order of $G$. We claim that sending the vertices $s_i$ to $S_i$ if $s_i \in C$ and to $u_i$ otherwise for $i \in \{1, \dots, m\}$ gives an order-preserving isomorphism from $S$ to its image $S' \subseteq A(G', H')$. Indeed, if $s_i, s_j \notin C$, then the fact that $u_i u_j$ is a red or a blue edge whenever $s_i s_j$ is a red or a blue edge just follows from \Cref{defn:goingup}. Furthermore, if $s_{j_i} s_{j_{i+1}}$ is a red edge for $i \in \{1, \dots, k\}$, then $v^+(u_{j_i + 1/2})$ is adjacent to $v^+(u_{j_{i+1}+1/2})$, which means that $S_{j_i} S_{j_{i+1}}$ is a red edge. This works analogously for blue edges of $C$, which shows the claim. Hence, by \Cref{obs:samecomps}, the $2$-factor $F(S')$ in $G'$ has the same number of components as $F(S)$ in $G$. However, since $F(S')$ is by definition just $F(U) \setminus Z$, this completes the proof. \end{proof} \subsubsection{Going down} We now turn to the remaining case, when we want to find a $2$-factor with fewer components than one that we already found. \begin{defn} Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices.
We say that a vertex $e_k \in V(A)$ \emph{separates components of $F(S)$} if the vertices $v_k$ and $v_{k+1}$ lie in different connected components of $F(S)$. \end{defn} \begin{obs}\label{obs:sepcomps} If $F(S)$ has more than one connected component, then at least one vertex of $S$ separates components. \end{obs} \begin{proof} Since $F(S)$ is not connected, there must exist vertices $v_k,v_{k+1}$ of $H$ belonging to different components of $F(S).$ Let $e_k=v_k v_{k+1},$ so $e_k \notin E(F(S))$. Since the only edges of $H$ (that is, vertices of $A$) that are not in $E(F(S))$ are vertices of $S,$ $e_k$ is the desired separating vertex. \end{proof} We are now ready to construct a going-down version of $S$ giving rise to a $2$-factor with one cycle less. \begin{defn}\label{defn:goingdown} Let $S$ be a disjoint union of colour-alternating cycles with $V(S)=\{s_1, \dots, s_m\}$. For any $s_k \in V(S)$ we construct the $2$-edge-coloured ordered graph $D=D(S,s_k)$ as follows: \begin{enumerate} \item Start with a copy of $S$ and for every vertex $s_i$ in the cycle $C\subseteq S$ that contains $s_k$, add the vertices $s_{i+1/3}$ and $s_{i+2/3}$ to $D$; \item if $i, j \neq k$ and if $s_i s_j$ is a red or a blue edge of $C$, then add the edges $s_{i+1/3}s_{j+1/3}$ and $s_{i+2/3}s_{j+2/3}$ of the same colour to $D$; \item if $s_i s_k$ is the blue edge of $S$ incident to $s_k$, then delete it and add the blue edges $s_i s_{k+1/3}$, $s_{i+1/3} s_{k+2/3}$ and $s_{i+2/3} s_k$ to $D$; \item if $s_i s_k$ is the red edge of $S$ incident to $s_k$, then add the red edges $s_{i+1/3} s_{k+2/3}$ and $s_{i+2/3} s_{k+1/3}$ to $D$; \item order the resulting graph according to the order of the indices of its vertices. \end{enumerate} Let $S\subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices, so that $F(S)$ exists.
We say that a $2$-edge-coloured ordered graph $D$ is a \emph{going-down version} of $S$ if there exists a vertex $s_k$ that separates components of $F(S)$ such that $D$ and $D(S,s_k)$ are isomorphic $2$-edge-coloured ordered graphs. \end{defn} In other words, $D=D(S,s_k)$ consists of a copy of $S$ with two added copies of the cycle containing $s_k$, where the edges incident to $s_k$ and its copies are rewired in a certain way. It is easy to see that every vertex of $D$ is still incident to exactly one edge of each colour, so $D$ is still a disjoint union of colour-alternating cycles. Note also that $D$ is an ordered subgraph of a consistently ordered $S(3).$ If $S$ contains no double edges, then neither does $D$. \Cref{fig:goingdown} shows a going-down version $D = D(S, s_1)$ for $S$ on $\{s_1, \dots, s_4\}$ being again a colour-alternating $C_4$. Note that $F(D)$, shown in \Cref{fig:goingdownG}, contains two paths, marked as dotted and bold, that connect the two dashed parts of $F(D)$ that resemble the two disjoint cycles of $F(S),$ into a single cycle. We will show that this occurs in general. \begin{figure}[ht] \caption{A colour-alternating cycle $S$ and a going-down version $D(S,s_1)$ of it.} \includegraphics[scale = 0.8]{2-factors_going_down.pdf} \label{fig:goingdown} \end{figure} \begin{figure}[ht] \caption{$2$-factors corresponding to $S$ and $D(S,s_1)$ given in \Cref{fig:goingdown}.} \includegraphics[scale = 0.8]{2-factors_going_down_G.pdf} \label{fig:goingdownG} \end{figure} \begin{lem}[Going down]\label{lem:goingdown} Let $S \subseteq A$ be a disjoint union of colour-alternating cycles without neighbouring vertices and let $D$ be an ordered subgraph of $A$ without neighbouring vertices that is a going-down version of $S$. Then the $2$-factor $F(D) \subseteq G$ has exactly one component less than $F(S)$.
\end{lem} \begin{proof} For an edge $e=v_kv_{k+1}\in H$ we let $v^{+}(e)=v_{k+1}$ and $v^{-}(e)=v_k.$ We denote the vertices of $S$ by $s_1, \dots, s_m$ where $D=D(S,s_1)$ and $s_1$ separates components of $F(S)$. We denote the vertices of $D$ by $d_1, \dots, d_m$ and $d_{4/3}, d_{5/3}, d_{j_1+1/3}, d_{j_1 + 2/3}, \dots, d_{j_k + 2/3}$ as they appear along $H$ such that $d_1, \dots, d_m$ make a copy of $S$ in which $d_1$ corresponds to $s_1$ and $d_1,d_{j_1}, \dots, d_{j_k}$ to the cycle $C = s_1 s_{j_1} \dots s_{j_k} s_1$ of $S.$ The vertices $v^+(d_{j_i})$ and $v^-(d_{j_i + 1/3})$, as well as the vertices $v^+(d_{j_i+1/3})$ and $v^-(d_{j_i+2/3})$, are connected in $F(D)$ by paths $P_i \subseteq H$ and $Q_i \subseteq H$ respectively, for all $i \in \{1, \dots, k\}$. If $C$ begins with a red edge, then $$P:=v^+(d_1)v^+(d_{j_1})P_1v^-(d_{j_1+1/3})v^-(d_{j_2+1/3})P_2v^{+}(d_{j_2})\dots P_kv^-(d_{j_k + 1/3})v^-(d_{5/3})\in F(D),$$ where $v^+(d_1)v^+(d_{j_1})\in F(D)$ by \Cref{defn:goingdown} part 4, $v^-(d_{j_k + 1/3})v^-(d_{5/3})\in F(D)$ by part $3$, and the edges between the paths $P_i$ are in $F(D)$ by part 2, in the same way as in the going-up case. Similarly, $$Q:=v^+(d_{5/3})v^+(d_{j_1+1/3})Q_1v^-(d_{j_1+2/3})v^-(d_{j_2+2/3})Q_2v^{+}(d_{j_2+1/3})\dots Q_kv^-(d_{j_k +2/3})v^-(d_{1})\in F(D).$$ On the other hand, if $C$ begins with a blue edge, then we have $$P:=v^-(d_{5/3})v^-(d_{j_1+1/3})P_1v^+(d_{j_1})v^+(d_{j_2})P_2\dots P_kv^+(d_{j_k})v^+(d_{1})\in F(D),$$ $$Q:=v^-(d_{1})v^-(d_{j_1+2/3})Q_1v^+(d_{j_1+1/3})v^+(d_{j_2+1/3})Q_2\dots Q_kv^+(d_{j_k+1/3})v^+(d_{5/3})\in F(D).$$ So in either case the path $P\subseteq F(D)$ contains $P_1, \dots, P_k$ and has endpoints $v^+(d_1),v^-(d_{5/3})$, while $Q \subseteq F(D)$ contains $Q_1, \dots, Q_k$ and has endpoints $v^+(d_{5/3}),v^-(d_1)$. For example, in \Cref{fig:goingdownG} the paths $P$ and $Q$ correspond to the dotted and the bold path respectively.
Our goal now is to show that $P$ and $Q$ connect two ``originally distinct'' components that are ``inherited'' from $F(S)$. Consider the graph $G'$ that is obtained from $G$ by deleting all the vertices of paths $P_i$ and $Q_i$ (equivalently all inner vertices of $P$ and $Q$) and adding the edges $S_{j_i} = v^-(d_{j_i}) v^+(d_{j_i + 2/3})$ for $i \in \{1, \dots, k\}$. Let $H'$ be the Hamilton cycle of $G'$ made of $H$ and the $S_{j_i}$'s, ordered according to the order of $G$. First, we claim that the map that sends $s_1$ to $d_{4/3}$ and $s_i$ to $S_i$ if $s_i$ is part of $C \setminus \{s_1\}$ and to $d_i$ otherwise for $i \in \{2, \dots, m\}$ is an order-preserving isomorphism from $S$ onto its image $S' \subseteq A(G', H')$. Indeed, by \Cref{defn:goingdown} parts 3 and 4, for $i \in \{1, k\}$, if $s_1s_{j_i}$ is red then $d_{4/3}d_{j_i+2/3}$ is a red edge of $A$, so $v^+(d_{4/3})v^+(d_{j_i+2/3})\in F(D)$, implying that $d_{4/3} S_{j_i}$ is red in $A'.$ If $s_1s_{j_i}$ is blue then $d_{4/3}d_{j_i}$ is a blue edge of $A$, so $v^-(d_{4/3})v^-(d_{j_i})\in F(D)$, implying that $d_{4/3} S_{j_i}$ is blue in $A'.$ For $i\neq k$ the edge $S_{j_i} S_{j_{i+1}}$ is of the same colour as $s_{j_i}s_{j_{i+1}}$ by \Cref{defn:goingdown} part 2, and for $s_i,s_j \notin C$ we know $d_i d_j$ has the same colour by part 1. Therefore, by \Cref{obs:samecomps}, $F(S')$ has the same number of components as $F(S)$. Since $s_1$ separates components of $F(S)$, we know that $d_{4/3}$ separates components of $F(S')$. This means in particular that $d_1$ and $d_{5/3}$ lie in two different cycles $C_1$ and $C_2$ of $F(S')$. Now, observe that we obtain $F(D)$ from $F(S')$ by deleting $d_1$ and $d_{5/3}$ and adding the paths $P$ and $Q$. However, since $P$ connects $v^+(d_1)$ and $v^-(d_{5/3})$ and $Q$ connects $v^+(d_{5/3})$ and $v^-(d_1)$, this process joins $C_1$ and $C_2$ into one big cycle and hence, $F(D)$ has exactly one component less than $F(S)$.
\end{proof} \subsection{Completing the proof}\label{subs:complete} We are now ready to put all the ingredients together in order to complete our proof of \Cref{thm:main} in the way that has already been outlined throughout the previous section. \begin{proof}[Proof of \Cref{thm:main}] Let $k$ be a positive integer and $\varepsilon$ a positive real number. Let $L=L(\varepsilon/2),K=K(\varepsilon/2,2^k)$ be the parameters coming from \Cref{lem:cycle-blow-up}. Let $N\ge \max(4/\varepsilon,K).$ Now, suppose that $G$ is a Hamiltonian graph on $n \geq N$ vertices with minimum degree $\delta(G) \geq \varepsilon n$. Let us fix a Hamilton cycle $H \subseteq G$, name the vertices of $G$ such that $H = v_1 v_2 \dots v_n v_1$ and assume that $G$ is ordered according to this labelling. Let $A = A(G,H)$ be the ordered, $2$-edge-coloured auxiliary graph corresponding to $G$ and $H$ according to \Cref{defn:aux}. We know by \Cref{deltaA} that $\delta_\nu(A)\ge \delta(G)-2 \ge \varepsilon n - 2 \ge \frac{\varepsilon}{2} n$ for $\nu \in \{1,2\}$, since $n \ge 4/\varepsilon$. \Cref{lem:cycle-blow-up} shows that there is a $\mathcal{C}(2^k6^L)\subseteq A$ where $\mathcal{C}$ is a colour-alternating cycle of length at most $L$ without double edges. \Cref{lem:ordered} allows us to find a consistently ordered $\mathcal{C}(2^k3^L)$ as an ordered subgraph of $A.$ By removing every second vertex of $\mathcal{C}(2^k3^L)$ in $A$ we obtain a consistently ordered $\mathcal{C}'=\mathcal{C}(2^{k-1}3^L)$ that is an ordered subgraph of $A$ without neighbouring vertices. Since $\mathcal{C}\subseteq \mathcal{C}'$, by \Cref{correspondence} we obtain a $2$-factor $F(\mathcal{C}) \subseteq G$. Let $\ell$ be the number of cycles of $F(\mathcal{C}).$ By \Cref{obs:no.cycles}, we know that $1 \leq \ell \leq L$. Let us first assume that $k > \ell$. We find a sequence $S_0, S_1, \dots, S_{k-\ell}$ defined as follows: let $S_0 = \mathcal{C};$ given $S_{i-1}$ let $\mathcal{C}_{i-1}$ be an arbitrary cycle of $S_{i-1}$ and let $S_{i} = U(S_{i-1}, \mathcal{C}_{i-1})$.
By construction, $S_i$ is again a disjoint union of colour-alternating cycles, without double edges, and is an ordered subgraph of $\mathcal{C}(2^i) \subseteq \mathcal{C}'$ (since by construction $S_i\subseteq S_{i-1}(2)$). Therefore, for all $i\le k-\ell$ there is an order-preserving embedding of $S_i$ into $A$ without neighbouring vertices. So, by \Cref{correspondence} and \Cref{lem:goingup} we deduce that $F(S_i)$ has one more cycle than $F(S_{i-1})$. In particular, the $2$-factor $F(S_{k-\ell}) \subseteq G$ consists of exactly $k$ components. Let us now assume that $k < \ell$. Here, we find a sequence $S_0, S_1, \dots, S_{\ell - k}$ of disjoint unions of colour-alternating cycles that are ordered subgraphs of $A$ without neighbouring vertices such that $F(S_i)$ consists of $\ell-i$ cycles. Let $S_0 = \mathcal{C},$ and assume we are given $S_{i-1}$ for $i\le \ell-k$ with $F(S_{i-1})$ having $\ell-i+1 \ge k+1 \ge 2$ cycles. This means that $S_{i-1}$ has a vertex $v_{i-1}$ that separates components of $F(S_{i-1})$ by \Cref{obs:sepcomps}. We let $S_i = D(S_{i-1}, v_{i-1})$, which is a disjoint union of colour-alternating cycles, without double edges, and is an ordered subgraph of a consistently ordered $\mathcal{C}(3^i)$ (since by construction $S_i \subseteq S_{i-1}(3)$). Note that $\ell - k \leq L$ by \Cref{obs:no.cycles} and hence, $ \mathcal{C}(3^i) \subseteq \mathcal{C}(3^{\ell - k}) \subseteq \mathcal{C}'$, so we can find a copy of $S_i$ in $A$ without neighbouring vertices. By \Cref{lem:goingdown}, $F(S_i)$ has one less cycle than $F(S_{i-1})$, so exactly $\ell - i$ cycles. In particular, $F(S_{\ell - k})$ is a $2$-factor in $G$ with $k$ cycles, which concludes the proof. \end{proof} \section{Concluding remarks and open problems} In this paper we show that in a Hamiltonian graph the minimum degree condition needed to guarantee a $2$-factor with $k$ cycles is sublinear in the number of vertices. The best lower bound is still only a constant.
In the case of a $2$-factor with two components, the best bounds are given by Faudree et al.\ \cite{Faudree05}, who construct Hamiltonian graphs of minimum degree $4$ without a $2$-factor with $2$ components. In the case of $2$-factors with $k$ components, no constructions have been given previously, but it is easy to see that a minimum degree of at least $k+2$ is necessary: \begin{prop} There are arbitrarily large Hamiltonian graphs with minimum degree $k+1$ which do not have a $2$-factor with $k$ components. \end{prop} \begin{proof} Let $G$ consist of a cycle $\mathcal{C}$ of length $n-k+1$ and an independent set $U$ of size $k-1$ with all the edges between $\mathcal{C}$ and $U$ added. It is easy to see that for $n\ge 2k$, $G$ is Hamiltonian and has minimum degree $k+1$. However, $G$ does not have a $2$-factor with $k$ components (e.g.\ because every cycle in a $2$-factor of $G$ must use at least one vertex in $U$). \end{proof} For fixed $k$, we do not know of any Hamiltonian graphs with non-constant minimum degree which do not have a $2$-factor with $k$ components. This indicates that the necessary minimum degree in \Cref{conj:main} may in fact be much smaller, perhaps even a constant (depending on $k$). A step in this direction was made by Pfender \cite{Pfender04}, who showed that in the $k=2$ case a Hamiltonian graph $G$ with minimum degree $7$ contains a $2$-factor with $2$ cycles in the very special case when $G$ is claw-free. If one takes greater care with the various parameters in \Cref{sec:prelim}, one can show that a minimum degree of $\frac{Cn}{\sqrt[4]{\log \log n / (\log \log \log n)^2}}$ suffices for finding an ordered blow-up of a short cycle, so in particular this minimum degree is enough to find $2$-factors consisting of a fixed number of cycles. We believe that it would be messy but not too hard to improve this a little bit further, but to reduce the minimum degree condition to $n^{1-\varepsilon}$ would require some new ideas.
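The construction in the proposition above is easy to check by computer for small instances; in the sketch below (the representation and the particular Hamilton cycle are our own choices, made for illustration) we verify the minimum degree and exhibit an explicit Hamilton cycle for $n=12$, $k=4$:

```python
def lower_bound_graph(n, k):
    """A cycle C on vertices 0..n-k (length n-k+1) together with an
    independent set U = {n-k+1, ..., n-1} of k-1 vertices, each joined to
    every vertex of C."""
    m = n - k + 1
    cycle = {frozenset({i, (i + 1) % m}) for i in range(m)}
    joins = {frozenset({u, c}) for u in range(m, n) for c in range(m)}
    return cycle | joins

def degrees(n, E):
    deg = [0] * n
    for e in E:
        for v in e:
            deg[v] += 1
    return deg

n, k = 12, 4
E = lower_bound_graph(n, k)
# One Hamilton cycle: walk along C, detouring through one vertex of U after
# each of the cycle vertices 0, 2, ..., 2(k-2).  (This particular detour
# scheme needs n >= 3k-3, which holds for our instance.)
ham, u = [], n - k + 1
for c in range(n - k + 1):
    ham.append(c)
    if c % 2 == 0 and c < 2 * (k - 1):
        ham.append(u)
        u += 1
assert min(degrees(n, E)) == k + 1
assert len(ham) == n
assert all(frozenset({ham[i], ham[(i + 1) % n]}) in E for i in range(n))
print("minimum degree k+1 and Hamiltonicity: verified")
```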
On the other hand, we do believe that our approach of finding alternating cycles in the auxiliary graph could still be useful in this case, but one needs to either find a better way of finding ordered blow-ups of short cycles or obtain a better understanding of how the number of cycles in $F(S)$ depends on the order and structure of a disjoint union of colour-alternating cycles $S$. Another possibility is to augment the auxiliary graph in order to include edges that connect the front/back to the back/front vertex of two edges of the Hamilton cycle, which would allow us to obtain a $1$-to-$1$ correspondence between $2$-factors of $G$ and suitable structures in this new auxiliary graph. Another way of saying that a graph is Hamiltonian is that it has a $2$-factor consisting of a single cycle. A further question of possible interest is whether knowing that $G$ contains a $2$-factor consisting of $\ell$ cycles already allows the minimum degree condition needed for having a $2$-factor with $k>\ell$ cycles to be weakened. \textbf{Acknowledgements.} We are extremely grateful to the anonymous referees for their careful reading of the paper and many useful suggestions.
\section{Introduction} Nowadays the classical theory of black holes in four-dimensional asymptotically flat spacetime is thought of as elegant and well understood. General relativity provides a unique family of exact solutions for stationary black holes which, in the most general case, involves only three physical parameters: the mass, angular momentum and the electric charge. Since classically the stationary black holes are ``dead" objects, it is of crucial importance to explore their characteristic responses to external perturbations of different sorts. {\it Superradiance} is one such response, namely, the phenomenon of amplification of scalar, electromagnetic and gravitational waves scattered by a rotating black hole. Though the phenomenon of superradiance, as a Klein-paradox state of non-gravitational quantum systems, has been known for a long time (see \cite{mano} and references therein), Zel'dovich was the first to suggest the idea of superradiant amplification of waves scattered by a rotating black hole \cite{zeldovich}. To support the idea, he explored a heuristic model of the scattering of a wave by a rotating and absorbing cylinder. It turned out that when the wave spectrum contains a frequency $ \omega $ fulfilling the condition $ \omega < m \Omega $, where $ m $ is the azimuthal number or magnetic quantum number of the wave and $ \Omega $ is the angular velocity of the cylinder, the reflection of the wave occurs with amplification. In other words, the rotating cylinder effectively acts as an amplifier, transmitting its rotational energy to the reflected wave. Zel'dovich concluded that a similar phenomenon must occur with rotating black holes as well, where the horizon plays the role of an absorber. Similar arguments showing that certain modes of scalar waves must be amplified by a Kerr black hole were also given in \cite{misner}.
A complete theory of the superradiance in the Kerr metric was developed by Starobinsky in \cite{starobinsky1}. (See also Ref.\cite{starobinsky2}). The appearance of superradiance in microscopic string models of rotating black holes was studied in a recent paper \cite{dias1}. Physically, the superradiant scattering is a process of stimulated radiation which arises from the excitation of negative energy modes in the ergosphere of the black hole. It is a wave analogue of the Penrose process \cite{penrose}, in which a particle entering the ergosphere decays into two particles, one of which has a negative energy relative to infinity and is absorbed by the black hole. This allows the other particle to leave the ergosphere with greater energy than the initial one, thereby extracting rotational energy from the black hole. In a quantum-mechanical picture, superradiance is stimulated emission of quanta, which must be accompanied by their spontaneous emission as well \cite{zeldovich}. The spontaneous superradiance arises due to the quantum instability of the vacuum in the Kerr metric, leading to a pair production of particles. When leaving the ergosphere these particles carry positive energy and angular momentum from the black hole to infinity, whereas inside the ergosphere they form negative energy and angular momentum flows into the black hole \cite{unruh}. The phenomenon of superradiance, after all, has a deep conceptual significance for understanding the stability properties of black holes. As early as 1971, Zel'dovich \cite{zeldovich} noted that placing a reflecting mirror (a resonator) around a rotating black hole would result in re-amplification of superradiant modes, and eventually the system would develop an instability. This instability was later studied in \cite{press1}, and the system is now known as a ``black hole bomb."
This study also motivated general questions on the stability of rotating black holes against small external perturbations. Using analytical and numerical methods, it has been shown that Kerr black holes are stable against massless scalar, electromagnetic, and gravitational perturbations \cite{press2}. However, the situation turned out to be different for perturbing massive bosonic fields. As is known, classical particles of energy $ E $ and mass $ m $, obeying the condition $ E < m $, undergo finite motion in the gravitational potential of the black hole. From a quantum-mechanical point of view, there exists a certain probability for such particles to tunnel through the potential barrier into the black hole. As a consequence, the bound states of the particles inside the potential well must become {\it quasistationary} or {\it quasinormal} (see, for instance, \cite{ag1} and references therein). Similarly, for fields with mass $ \mu $, a wave of frequency $ \omega < \mu $ can be thought of as a ``bound particle" and therefore must undergo repetitive reflections between the potential well and the horizon. In the regime of superradiance, this will cause exponential growth of the number of particles in the quasinormal states, leading to an instability \cite{damour, zouros, detweiler, dolan}. Thus, for massive bosonic fields the potential barrier of the black hole plays the role of a mirror in the heuristic model of the black hole bomb. There are also alternative models where a reflecting mirror leading to the instability arises due to an extra dimension which, from a Kaluza-Klein point of view, acts as a mass term (see, for instance, Ref.\cite{dias2}). In recent years, the question of the stability of black holes to external perturbations has been the subject of extensive studies in four and higher-dimensional spacetimes with a cosmological constant.
In particular, analytical and numerical works have revealed the perturbative stability of nonrotating black holes in de Sitter or anti-de Sitter (dS/AdS) spacetimes of various dimensions \cite{kodama1, konoplya}. Though a similar general analysis of the stability of rotating black holes in cosmological spacetimes remains an open question, significant progress has been achieved in understanding their superradiant instability \cite{hreall, cardoso1, cardoso2, cardoso3}. The causal structure of AdS spacetime shows that spatial infinity corresponds to a finite region with a timelike boundary. Because of this property, the spacetime exhibits a ``box-like" behavior, ensuring repetitive reflections of massless bosonic waves between spatial infinity and a Kerr-AdS black hole. The authors of \cite{hreall} have shown that the Kerr-AdS black hole in five dimensions admits a corotating Killing vector which remains timelike everywhere outside the horizon, provided that the angular velocity of the boundary Einstein space does not exceed the speed of light. This means that there is no way to extract energy from the black hole. However, these authors have also given simple arguments showing that for over-rotating Kerr-AdS black holes, whose typical size is constrained to $ r_{+} < l $, where $ l $ is a length scale determined by the negative cosmological constant, the superradiant instability may occur. That is, small Kerr-AdS black holes may become unstable against external perturbations. The idea was further developed in \cite{cardoso1, cardoso2}. In particular, it was found that there must exist a critical radius for the location of the mirror in the black hole bomb model. Below this radius, the superradiant condition is violated and the system becomes classically stable.
Extending this fact to the case of the small Kerr-AdS black holes in four dimensions, the authors proved that the black holes indeed exhibit a superradiant instability to massless scalar perturbations. Later on, it was shown that the small Kerr-AdS black holes are also unstable to gravitational perturbations \cite{cardoso3}. In a recent work \cite{kodama2}, it was argued that the instability properties of the Kerr-AdS black holes to gravitational perturbations are equivalent to those against massless scalar perturbations. The main purpose of the present paper is to address the superradiant instability of small rotating charged AdS black holes with two independent rotation parameters in minimal five-dimensional gauged supergravity. In Sec.II we discuss some properties of the spacetime metric given in the Boyer-Lindquist coordinates which are rotating at spatial infinity. In particular, we define a corotating Killing vector and calculate the angular velocities of the horizon as well as its electrostatic potential. We also discuss the ``hidden" symmetries of the metric and demonstrate the separability of the Hamilton-Jacobi equation for massive charged particles. Section III is devoted to the study of the Klein-Gordon equation. We show that it is completely separable for massive charged particles and present the decoupled radial and angular equations in the most compact form. In Sec.IV we consider the near-horizon behavior of the radial equation and find the threshold frequency for the superradiance. In Sec.V we examine the instability of the small AdS black holes to low-frequency scalar perturbations. Here we construct the solution of the radial equation in the region close to the horizon and in the far-region. By matching these solutions in an intermediate region, we obtain the frequency spectrum for the quasinormal modes. 
We show that in the regime of superradiance the black hole exhibits instability to ``selective" modes of the perturbations: Namely, only the modes of even orbital quantum number $ \ell $ exponentially grow with time. We also show that the modes of odd $ \ell $ do not exhibit any damping, but oscillate with frequency-shifts. In the Appendix we study the angular equation for AdS modified spheroidal harmonics in five dimensions. \section{The metric and its properties} The general metric for rotating charged AdS black holes in the bosonic sector of minimal supergravity theory in five dimensions was recently found in \cite{cclp}. The theory is described by the action \begin{eqnarray} S&=& \int d^5x \sqrt{-g} \left(R+\frac{12}{l^2} -\frac{1}{4}\, F_{\alpha \beta}F^{\alpha \beta} +\frac{1}{12\sqrt{3}}\,\epsilon^{\mu\nu \alpha\beta\lambda}F_{\mu\nu} F_{\alpha\beta}A_{\lambda}\right)\,, \label{5sugraaction} \end{eqnarray} leading to the coupled Einstein-Maxwell-Chern-Simons field equations \begin{equation} {R_{\mu}}^{\nu}= 2\left(F_{\mu \lambda} F^{\nu \lambda}-\frac{1}{6}\,{\delta_{\mu}}^{\nu}\,F_{\alpha \beta} F^{\alpha \beta}\right) - \frac{4}{l^2}\,{\delta_{\mu}}^{\nu}\,, \label{einmax} \end{equation} \begin{equation} \nabla_{\nu}F^{\mu\nu}+\frac{1}{2\sqrt{3}\sqrt{-g}}\,\epsilon^{\mu \alpha\beta\lambda\tau} F_{\alpha\beta}F_{\lambda\tau}=0\,. \label{maxw} \end{equation} The general black hole solution of \cite{cclp} to these equations can be written in the form \begin{eqnarray} \label{gsugrabh} ds^2 & = & - \left( dt - \frac{a \sin^2\theta}{\Xi_a}\,d\phi - \frac{b \cos^2\theta}{\Xi_b}\,d\psi \right)\nonumber \left[f \left( dt - \frac{a \sin^2\theta}{\Xi_a}\,d\phi\, - \frac{b \cos^2\theta}{\Xi_b}\,d\psi \right)\nonumber \right. \\[2mm] & & \left. 
\nonumber + \frac{2 Q}{\Sigma}\left(\frac{b \sin^2\theta}{\Xi_a}\,d\phi + \frac{a \cos^2\theta}{\Xi_b}\,d\psi \right) \right] + \,\Sigma \left(\frac{r^2 dr^2}{\Delta_r} + \frac{d\theta^{\,2}}{ ~\Delta_{\theta}}\right) \\[2mm] && + \,\frac{\Delta_{\theta}\sin^2\theta}{\Sigma} \left(a\, dt - \frac{r^2+a^2}{\Xi_a} \,d\phi \right)^2 +\,\frac{\Delta_{\theta}\cos^2\theta}{\Sigma} \left(b\, dt - \frac{r^2+b^2}{\Xi_b} \,d\psi \right)^2 \nonumber \\[2mm] && +\,\frac{1+r^2\,l^{-2}}{r^2 \Sigma } \left( a b \,dt - \frac{b (r^2+a^2) \sin^2\theta}{\Xi_a}\,d\phi - \, \frac{a (r^2+b^2) \cos^2\theta}{\Xi_b}\,d\psi \right)^2, \end{eqnarray} where \begin{eqnarray} f& =&\frac{\Delta_r - 2 a b\, Q -Q^2}{r^2 \Sigma}+ \frac{Q^2} {\Sigma^2}\,\,,~~~~ \Xi_a=1 - \frac{a^2}{l^2}\,\,,~~~~ \Xi_b=1 - \frac{b^2}{l^2}\,\,, \nonumber \\[6mm] \Delta_r &= &\left(r^2 + a^2\right)\left(r^2 + b^2\right)\left(1+r^2l^{-2} \right)+ 2 a b\, Q + Q^2 - 2 M r^2\,, \nonumber \\[4mm] \Delta_\theta & = & 1 -\frac{a^2}{l^2} \,\cos^2\theta -\frac{b^2}{l^2} \,\sin^2\theta \,,~~~~~ \Sigma = r^2+ a^2 \cos^2\theta + b^2 \sin^2\theta \, \label{gsugrametfunc} \end{eqnarray} We see that the metric is characterized by the parameters of mass $M $, electric charge $ Q $ as well as by two independent rotation parameters $ a $ and $ b $. The cosmological constant is taken to be negative determining the cosmological length scale as $ l^2= - 6/\Lambda $. Throughout this paper we suppose that the rotation parameters satisfy the relation $ a^2, b^2 < l^2 $. For the potential one-form of the electromagnetic field, we have \begin{equation} A= -\frac{\sqrt{3}\,Q}{2 \,\Sigma}\,\left(dt- \frac{a \sin^2\theta}{\Xi_a}\,d\phi -\frac{b \cos^2\theta}{\Xi_b}\,d\psi \right)\,.\label{sugrapotform1} \end{equation} We recall that in their canonical forms, the Kerr-Newman-AdS metric in four dimensions as well as the Kerr-AdS metric in five dimensions are given in the Boyer-Lindquist coordinates which are rotating at spatial infinity. 
To allow an easy comparison of our description with those in four and five dimensions, we give the metric (\ref{gsugrabh}) in the asymptotically rotating Boyer-Lindquist coordinates $ x^{\mu}=\{t, r, \theta, \phi, \psi\}$ with $\mu=0, 1, 2, 3, 4 $ (see Ref.\cite{aliev1}). It is easy to see that for $ Q=0 $, it recovers the five-dimensional Kerr-AdS solution of \cite{hhtr}. The authors of \cite{cclp} have calculated the physical parameters and examined the global structure and the supersymmetric properties of the solution in (\ref{gsugrabh}). In particular, they showed that for appropriate ranges of the parameters, the solution is free of closed timelike curves (CTCs) and naked singularities, describing a regular rotating charged black hole. The determinant of the metric (\ref{gsugrabh}) does not involve the electric charge parameter $ Q $ and is given by \begin{equation} \sqrt{-g}= \frac{r \Sigma \sin\theta\,\cos\theta}{\Xi_a\, \Xi_b}\,\,,\label{5ddeterminant} \end{equation} whereas the contravariant metric components have the form \begin{eqnarray} g^{00}&=&-\frac{1}{\Sigma}\left\{ \frac{(r^2+a^2)(r^2+b^2)\left[ r^2+l^2(1-\Xi_a\Xi_b)\right]\,+2\,ab\left[ (r^2+a^2+b^2)\, Q+ a b M\right]}{\Delta_r} \nonumber \right. \\[2mm] & & \left.
\nonumber -l^2\left(1-\frac{\Xi_a\Xi_b}{\Delta_\theta}\right)\right\}\,,~~~~~ g^{11}=\frac{\Delta_r}{r^2\Sigma}\,\,,~~~~~~g^{22}=\frac{\Delta_\theta}{\Sigma}\,\,, \\[3mm] g^{03}&=&\frac{\Xi_a}{\Sigma} \left\{ \frac{a\, \Xi_b}{\Delta_\theta} - \frac{(r^2+b^2) \left[ b Q+ a\, \Xi_b (r^2+a^2) \right]+ 2\, a b (a Q +b M) }{\Delta_r} \right\}\,,\nonumber \\[3mm] g^{04}&=& \frac{\Xi_b}{\Sigma} \left\{ \frac{b\, \Xi_a}{\Delta_\theta} - \frac{(r^2+a^2) \left[ a Q+ b\, \Xi_a (r^2+b^2) \right]+ 2\, a b (b Q +a M) }{\Delta_r} \right\}\,,\nonumber \\[3mm] g^{33}&=&\frac{\Xi_a^2}{\Sigma}\left\{ \frac{\cot^2\theta+\Xi_b}{\Delta_\theta} + \frac{(r^2+b^2)\left[ b^2-a^2+(r^2+a^2)(1-\Xi_b) \right] -2b\,(a Q+b M)}{\Delta_r} \right\}\,,\nonumber \\[3mm] g^{44}&=&\frac{\Xi_b^2}{\Sigma}\left\{ \frac{\tan^2\theta+\Xi_a}{\Delta_\theta} + \frac{(r^2+a^2)\left[ a^2-b^2+(r^2+b^2)(1-\Xi_a) \right] -2a\,(b Q+a M)}{\Delta_r} \right\}\,,\nonumber \\[3mm] g^{34}&=& -\frac{\Xi_a \Xi_b}{\Sigma} \left\{ \phantom{\bigg\{ }\frac{a b}{l^2} \left[ \frac{1}{\Delta_\theta}- \frac{(r^2+a^2)(r^2+b^2)}{\Delta_r} \right] + \frac{2 a b M+ (a^2+b^2)\,Q}{\Delta_r}\phantom{\bigg\} } \right\}\,. \label{contras} \end{eqnarray} The horizons of the black hole are governed by the equation $ \Delta_r=0 $, which can be regarded as a cubic equation with respect to $ r^2 $. It has two real roots; $ r_1=r_{+}^2 $ and $ r_2=r_{0}^2$. The largest of these roots, $ r_{+}^2 > r_{0}^2 $, represents the radius of the event horizon. However, when the equations \begin{eqnarray} \Delta_r&=& 0 \,\,, ~~~~~~\frac{d \Delta_r}{dr}=0 \label{extremeeq} \end{eqnarray} are satisfied simultaneously, the two roots coincide, $ r_{+}^2 = r_{0}^2 = r_{e}^2 $, representing the event horizon of an extreme black hole. 
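As a numerical illustration (our own sketch; the parameter values below are our own choice and not taken from the text), the horizon equation $ \Delta_r=0 $ can be solved for $ x = r^2 $ by bisection:

```python
def delta_r(x, M, Q, a, b, l):
    """Horizon function Delta_r, viewed as a cubic in x = r^2."""
    return (x + a*a)*(x + b*b)*(1.0 + x/l**2) + 2.0*a*b*Q + Q*Q - 2.0*M*x

def outer_horizon(M, Q, a, b, l):
    """Largest root x_+ = r_+^2 of Delta_r = 0, found by bisection.
    Assumes a non-extremal black hole and that halving from a large
    starting value lands inside the region where Delta_r < 0."""
    hi = 10.0 * M * l          # Delta_r > 0 for sufficiently large x
    lo = hi
    while delta_r(lo, M, Q, a, b, l) > 0.0:
        hi = lo
        lo *= 0.5
        if lo < 1e-12:
            raise ValueError("no horizon found for these parameters")
    for _ in range(200):       # bisect: Delta_r(lo) < 0 < Delta_r(hi)
        mid = 0.5 * (lo + hi)
        if delta_r(mid, M, Q, a, b, l) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For example, with $ M=1 $, $ Q=0.1 $, $ a=0.1 $, $ b=0.05 $, $ l=10 $ one finds $ x_+\approx 1.94 $, close to the flat-space value $ 2M $.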
From these equations, it follows that the parameters of the extreme black hole must obey the relations \begin{eqnarray} 2 M_{e}l^2 &=& 2 \left(r_{e}^2+ a^2 +b^2 +l^2 \right)r_{e}^2 +r_{e}^4 + a^2 b^2 + \left(a^2 + b^2\right) l^2\,,\\[2mm] Q_{e}&= &\frac{r_{e}^2}{l} \left(2 r_{e}^2+ a^2 + b^2 +l^2\right)^{1/2} - a b\,. \label{extparam} \end{eqnarray} The time translational and rotational (bi-azimuthal) isometries of the spacetime (\ref{gsugrabh}) are defined by the Killing vector fields \begin{equation} {\bf \xi}_{(t)}= \partial / \partial t\,, ~~~~ {\bf \xi}_{(\phi)}= \partial / \partial \phi \, , ~~~~ {\bf \xi}_{(\psi)}= \partial / \partial \psi \,. \label{3killings} \end{equation} Using these Killing vectors one can also introduce a corotating Killing vector \begin{equation} \chi = {\bf \xi}_{(t)}+ \Omega_{a}\,{\bf \xi}_{(\phi)}+ \Omega_{b} \,{\bf \xi}_{(\psi)}\,,\label{corotating} \end{equation} where $ \Omega_{a} $ and $ \Omega_{b} $ are the angular velocities of the event horizon in two independent orthogonal 2-planes of rotation. We have \begin{eqnarray} \Omega_{a}&=&\frac{\Xi_a [a(r_+^2+b^2)+b Q]}{(r_+^2+a^2)(r_+^2+b^2)+a b Q }\,\,,~~~~~~~~~ \Omega_{b}=\frac{\Xi_b [b(r_+^2+a^2)+a Q] }{(r_+^2+a^2)(r_+^2+b^2)+a b Q}\,\,. \end{eqnarray} It is straightforward to show that the corotating Killing vector in (\ref {corotating}) is null on the event horizon of the black hole, i.e. it is tangent to the null generators of the horizon, confirming that the quantities $ \Omega_{a} $ and $ \Omega_{b} $ are indeed the angular velocities of the horizon. We also need the electrostatic potential of the horizon relative to an infinitely distant point. It is given by \begin{eqnarray} \Phi_H & = - & A \cdot \chi = - \left(A_0 + \Omega_{a}\,A_{\phi} +\Omega_{b}\,A_{\psi}\right)|_{r=r_+ } \,\,. 
\label{hpot} \end{eqnarray} Substituting into this expression the components of the potential in (\ref{sugrapotform1}), we find the explicit form of the electrostatic potential \begin{equation} \Phi_H=\frac{\sqrt{3}}{2}\,\frac{ Q r_+^2}{(r_+^2+a^2)(r_+^2+b^2)+a b Q }\,. \end{equation} It is also important to note that, in addition to the global isometries, the spacetime (\ref{gsugrabh}) also possesses hidden symmetries generated by a second-rank Killing tensor. The existence of the Killing tensor ensures the complete separability of variables in the Hamilton-Jacobi equation for geodesic motion of uncharged particles \cite{dkl}. Below, we describe the separation of variables in the Hamilton-Jacobi equation for charged particles. \subsection{The Hamilton-Jacobi equation for charged particles} The Hamilton-Jacobi equation for a particle of electric charge $ e $ moving in the spacetime under consideration is given by \begin{equation} \frac{\partial S}{\partial \lambda}+\frac{1}{2}g^{\mu\nu}\left(\frac{\partial S}{\partial x^\mu}-eA_\mu\right)\left(\frac{\partial S}{\partial x^\nu}-e A_\nu\right)=0 \,, \label{HJeq} \end{equation} where $ \lambda $ is an affine parameter. Since the potential one-form (\ref{sugrapotform1}) respects the Killing isometries (\ref{3killings}) of the spacetime as well, $ \pounds_{\xi} A^{\mu}=0 $, we assume that the action $ S $ can be written in the form \begin{equation} S=\frac{1}{2}m^2\lambda - E t+L_\phi \phi +L_\psi \psi + S_r(r)+S_\theta(\theta)\,, \label{ss} \end{equation} where the constants of motion represent the mass $ m $, the total energy $ E $ and the angular momenta $ L_{\phi} $ and $ L_{\psi} $ associated with the rotations in the $ \phi $ and $ \psi $ 2-planes.
Substituting this action into equation (\ref{HJeq}) and using the metric components in (\ref{contras}) along with the contravariant components of the potential \begin{eqnarray} A^{0}&=&\frac{\sqrt{3}\, Q}{2\Sigma }\,\, \frac{(r^2+a^2)(r^2+b^2)+a b Q}{\Delta_r}\,,\nonumber \\[2mm] A^{3}&=& \frac{\sqrt{3}\, Q \Xi_a}{2\Sigma }\,\,\,\frac{a (r^2+b^2)+b Q}{\Delta_r}\,, \nonumber \\[2mm] A^{4}&=& \frac{\sqrt{3}\, Q \Xi_b}{2\Sigma }\,\,\frac{b (r^2+a^2)+a Q}{\Delta_r}\,, \label{potcontras} \end{eqnarray} and \begin{equation} A^\mu A_\mu=-\frac{3\,Q^2 r^2}{4\,\Sigma \Delta_r}\,, \label{sqpot} \end{equation} we obtain two independent ordinary differential equations for $ r $ and $ \theta$ motions: \begin{eqnarray} && \frac{\Delta_r}{r^2} \left(\frac{dS_r}{dr}\right)^2 +\frac{\left(a b E-b \Xi_a L_\phi-a \Xi_b L_\psi\right)^2}{r^2} - \frac{\left[(r^2+a^2)(r^2+b^2)+a b Q \right]^2}{\Delta_r r^2} \cdot \nonumber \\[2mm] && ~~~~~ \left\{E- \frac{L_\phi\Xi_a \left[a(r^2+b^2)+b\,Q\right]}{(r^2+a^2)(r^2+b^2)+a b Q }-\frac{ L_\psi\Xi_b \left[b(r^2+a^2)+a Q \right]}{(r^2+a^2)(r^2+b^2)+a b Q} \right.\nonumber \\[2mm] && \left. ~~~~~ - \frac{\sqrt{3}}{2}\,\frac{e Q r^2}{(r^2+a^2)(r^2+b^2)+a b Q }\right\}^2 +m^2 r^2 =-K\,, \label{radeq1} \end{eqnarray} \begin{eqnarray} && \Delta_{\theta} \left(\frac{dS_\theta}{d\theta}\right)^2 +\frac{L_{\phi}^2 \,\Xi_{a}^2 \left(\cot^2\theta +\Xi_b\right) + L_{\psi}^2 \,\Xi_{b}^2 \left(\tan^2\theta +\Xi_a\right)- 2 a b l^{-2} L_{\phi} L_{\psi} \Xi_{a}\Xi_{b}}{\Delta_{\theta}}\nonumber \\[2mm] && ~~~+ \frac{ E^2 l^2 \left(\Delta_\theta - \Xi_a \Xi_b \right) - 2 E \left(a L_\phi + b L_\psi\right)\Xi_a \Xi_b }{\Delta_{\theta}} + m^2 \left(a^2 \cos^2\theta+ b^2\sin^2\theta\right)= K\,, \label{angulareq1} \end{eqnarray} where $ K $ is a constant of separation. 
The complete separability in the Hamilton-Jacobi equation (\ref{HJeq}) occurs due to the existence of a new quadratic integral of motion $ K=K^{\mu\nu }p_{\mu} p_{\nu} \,$, which is associated with the hidden symmetries of the spacetime. Here $ K^{\mu\nu }$ is an irreducible Killing tensor generating the hidden symmetries. Using equation (\ref{angulareq1}) along with $ - m^2= g^{\mu\nu }p_{\mu} p_{\nu} \,$, we obtain that the Killing tensor is given by \begin{eqnarray} &&K^{\mu\nu}= \, l^2\left(1-\frac{\Xi_a\Xi_b}{\Delta_\theta}\right)\delta^\mu_t\delta^\nu_t + \frac{\Xi_a \Xi_b}{\Delta_\theta} \left[a\left( \delta^\mu_t \delta^\nu_\phi+\delta^\mu_\phi \delta^\nu_t\right)+b\left( \delta^\mu_t \delta^\nu_\psi+\delta^\mu_\psi \delta^\nu_t\right)\right] \nonumber \\[2mm] && + \frac{1}{\Delta_\theta}\left[ \Xi_{a}^2 \left(\cot^2\theta +\Xi_b\right) \delta^\mu_\phi\delta^\nu_\phi + \Xi_{b}^2 \left(\tan^2\theta +\Xi_a\right) \delta^\mu_\psi\delta^\nu_\psi -\frac{a b \Xi_a \Xi_b}{l^2} \left(\delta^\mu_\phi\delta^\nu_\psi+\delta^\mu_\psi\delta^\nu_\phi\right)\right] \nonumber \\[2mm] && - \,g^{\mu\nu}\left(a^2\cos^2\theta + b^2\sin^2\theta\right) + \Delta_\theta\, \delta^\mu_\theta\delta^\nu_\theta\,. \label{killingt} \end{eqnarray} This expression agrees with that given in \cite{dkl} up to terms involving symmetrized outer products of the Killing vectors. Similarly, for the vanishing cosmological constant, $ l \rightarrow \infty $, it recovers the result of \cite{fs1}. \section{The Klein-Gordon equation} We now consider the Klein-Gordon equation for a scalar field with charge $ e $ and mass $ \mu $ in the background of the metric (\ref{gsugrabh}). It is given by \begin{equation} \left(D^{\mu}D_{\mu}-\mu^2 \right)\Phi= 0\,, \label{KGeq} \end{equation} where $ D_{\mu}= \nabla_{\mu}- ie A_{\mu} $ and $ \nabla_{\mu} $ is the covariant derivative operator.
Decomposing the indices as $ \mu= \{1, 2, M \} $, in which $ M = 0,\, 3, \,4 $, we can write the above equation in the form \begin{eqnarray} \frac{1}{r}\frac{\partial}{\partial r} \left(\frac{\Delta_r}{r}\, \frac{\partial \Phi}{\partial r}\right) +\frac{1}{\sin 2\theta} \frac{\partial}{\partial\theta}\left(\sin 2\theta\, \Delta_\theta \frac{\partial \Phi}{\partial \theta} \right)+ \nonumber \\ [2mm] \left(g^{MN}\frac{\partial^2 \Phi}{\partial x^M \partial x^N} - 2 i e A^M \frac{\partial \Phi}{\partial x^M}-e^2 A_M A^M \Phi \right)\Sigma &=& \mu^2 \Sigma \Phi \,. \label{decomeq} \end{eqnarray} It is easy to show that with the components of $ g^{MN} $ given in (\ref{contras}) and with equations (\ref{potcontras}) and (\ref{sqpot}), this equation is manifestly separable in the variables $ r $ and $ \theta $. That is, one can assume that its solution admits the ansatz \begin{equation} \Phi=e^{-i \omega t + i m_\phi \phi +i m_\psi \psi} S(\theta) R(r)\,, \label{sansatz} \end{equation} where $ m_\phi $ and $ m_\psi $ are the ``magnetic" quantum numbers related to the $ \phi $ and $ \psi $ 2-planes of rotation, so that they both must take integer values. In what follows, for definiteness, we restrict ourselves to the case of positive frequency $ (\omega > 0) $ and positive $ m_\phi $ and $ m_\psi $. The substitution of the expression (\ref{sansatz}) into equation (\ref{decomeq}) results in two decoupled ordinary differential equations for the angular and radial functions. The angular equation is given by \begin{eqnarray} &&\frac{1}{\sin 2\theta}\frac{d}{d\theta}\left(\sin 2\theta\,\Delta_\theta\frac{d S}{d\theta}\right) +\frac{1}{\Delta_\theta}\left[\omega^2 l^2 \left(\Xi_a \Xi_b - \Delta_\theta \right) - m_\phi^2 \Xi_{a}^2 \left(\cot^2\theta +\Xi_b\right)- m_\psi^2 \Xi_{b}^2 \left(\tan^2\theta +\Xi_a\right)\right.\nonumber \\[2mm] && \left.
~~~~~ +2 \,\Xi_a \Xi_b \left( \omega a m_\phi + \omega b m_\psi + \frac{a b} {l^{2}}\, m_\phi m_\psi \right) - \Delta_\theta \mu^2 \left(a^2\cos^2\theta+b^2\sin^2\theta \right)\right] S = - \lambda S\,, \label{angular1} \end{eqnarray} where $\lambda $ is a constant of separation. With regular boundary conditions at $ \theta=0 $ and $ \theta=\pi/2 $, this equation describes a well-defined Sturm-Liouville problem with eigenvalues $ \lambda_{\ell}(\omega) $, where $\ell $ is thought of as an ``orbital" quantum number. The corresponding eigenfunctions are five-dimensional (AdS modified) spheroidal functions $ S(\theta)= S_{\ell\, m_\phi m_\psi}(\theta|a \omega\,, b\omega) $. In some special cases of interest, the eigenvalues are calculated in the Appendix; see equation (\ref{flat}). The radial equation can be written in the form \begin{equation} \frac{\Delta_r}{r} \frac{d}{d r}\left(\frac{\Delta_r}{r} \frac{d R}{d r}\right) +U(r)\,R=0\,, \label{radial1} \end{equation} where \begin{eqnarray} \label{rpotential} &&U(r)=-\Delta_r \left[\lambda + \mu^2 r^2 + \frac{\left(a b\, \omega-b \Xi_a m_\phi-a \Xi_b m_\psi\right)^2}{r^2}\right]+\frac{\left[(r^2+a^2)(r^2+b^2)+ab\,Q \right]^2}{r^2} \cdot \nonumber \\[4mm] &&\left\{\omega- \frac{m_\phi\Xi_a \left[a(r^2+b^2)+b Q\right]}{(r^2+a^2)(r^2+b^2)+a b Q } - \frac{m_\psi\Xi_b \left[b(r^2+a^2)+a Q\right]}{(r^2+a^2)(r^2+b^2)+a b Q} -\frac{\sqrt{3}}{2}\,\frac{ e Q r^2}{(r^2+a^2)(r^2+b^2)+a b Q } \right\}^2.\nonumber \\[2mm] \label{radpot1} \end{eqnarray} When the cosmological constant vanishes, $ l \rightarrow \infty $, the above expressions agree with those obtained in \cite{fs2} for a five-dimensional Myers-Perry black hole. \section{The Superradiance Threshold} The radial equation can easily be solved near the horizon.
For this purpose, it is convenient to introduce a new radial function $ \mathcal{R}$ defined by \begin{equation} R= \left[\frac{r}{(r^2+a^2)(r^2+b^2)+a b Q}\right]^{1/2} \,\mathcal{R}\, \label{newrad} \end{equation} and a new radial variable, the so-called tortoise coordinate $ r_* $, obeying the relation \begin{equation} \frac{dr_*}{dr}=\frac{(r^2+a^2)(r^2+b^2)+a b Q}{\Delta_r}\,\,. \label{tortoise} \end{equation} With these new definitions the radial equation (\ref{radial1}) can be transformed into the form \begin{eqnarray} \frac{d^2\mathcal{R}}{dr_*^2} +V(r)\mathcal{R}=0\,, \label{radial2} \end{eqnarray} where the effective potential is given by \begin{eqnarray} &&V(r)= -\frac{\Delta_r \left[r^2 \left(\lambda +\mu^2 r^2\right)+\left(a b\,\omega-b \Xi_a m_\phi-a \Xi_b m_\psi\right)^2\right]}{\left[(r^2+a^2)(r^2+b^2)+a b Q\right]^2} -\frac{\Delta_r }{2 r u^{3/2}}\,\frac{d}{d r}\left(\frac{\Delta_r }{r u^{3/2}}\,\frac{d u}{d r}\right) + \nonumber \\[4mm] && \left\{\omega- \frac{m_\phi\Xi_a \left[a(r^2+b^2)+b\,Q\right]}{(r^2+a^2)(r^2+b^2)+a b Q } - \frac{m_\psi\Xi_b \left[b(r^2+a^2)+a Q\right]}{(r^2+a^2)(r^2+b^2)+a b Q} -\frac{\sqrt{3}}{2}\,\frac{e Q r^2}{(r^2+a^2)(r^2+b^2)+a b Q }\right\}^2\,. \nonumber\\[2mm] \label{effective} \end{eqnarray} For brevity, we have also introduced \begin{equation} u=\frac{(r^2+a^2)(r^2+b^2)+a b Q}{r}\,. \end{equation} In what follows, we consider a massless scalar field, $ \mu=0 $. We see that at the horizon $ r = r_+ $ $ (\Delta_r = 0) $, the effective potential in equation (\ref{effective}) becomes \begin{eqnarray} V(r_+ )&=&(\omega-m_\phi \Omega_{a}-m_\psi \Omega_{b}-e \Phi_{H})^2\,. \label{p} \end{eqnarray} With this in mind, it is easy to verify that for an observer near the horizon, the asymptotic solution of the wave equation \begin{equation} \Phi=e^{-i \omega t + i m_\phi \phi +i m_\psi \psi}\, e^{-i(\omega- \omega_{p})r_*}S(\theta) \,, \label{sansatzh} \end{equation} corresponds to an ingoing wave at the horizon.
The threshold frequency \begin{equation} \omega_{p}= m_\phi \Omega_{a}+m_\psi \Omega_{b}+e \Phi_{H} \label{bound} \end{equation} determines the frequency range \begin{equation} 0< \omega < \omega_{p} \label{fbound} \end{equation} for which the phase velocity of the wave changes its sign. As in the four-dimensional case \cite{press2}, this fact is the signature of superradiance. That is, when the condition (\ref{fbound}) is fulfilled there must exist a superradiant outflow of energy from the black hole. From equation (\ref{bound}), it follows that the electric charge of the black hole changes the superradiance threshold frequency for charged particles. Next, turning to the asymptotic behavior of the solution at spatial infinity, we recall that in this region the AdS spacetime reveals a box-like behavior. In other words, at spatial infinity the spacetime effectively acts as a reflective barrier. Therefore, we require the vanishing field boundary condition \begin{equation} \Phi\rightarrow 0 \,~~~~ {\rm as} ~~~~ r\rightarrow \infty\,. \label{boundinf} \end{equation} With the boundary conditions (\ref{sansatzh}) and (\ref{boundinf}), namely, requiring a purely ingoing wave at the horizon and a purely damped wave at infinity, we arrive at a characteristic-value problem for the complex frequencies of quasinormal modes of the massless scalar field; see \cite{detweiler}. The imaginary part of these frequencies describes the damping of the modes. A characteristic mode is stable if the imaginary part of its complex frequency is negative ({\it positive damping}), while for a positive imaginary part, the mode undergoes exponential growth ({\it negative damping}). In the latter case, the system will develop an instability. \section{Instability} In this section we describe the instability of small five-dimensional AdS black holes, $ r_+ \ll l $, in the regime of low-frequency perturbations.
That is, we assume that the wavelength of the perturbations is much larger than the typical size of the horizon, $ 1/\omega \gg r_+ $. In addition, we also assume slow rotation, i.e.\ we restrict ourselves to linear-order terms in the rotation parameters $ a $ and $ b $. With these approximations, we can apply the matching method first developed by Starobinsky \cite{starobinsky1} and later on used by many authors (see \cite{cardoso2} and references therein) to construct the solutions of the radial equation (\ref{radial1}) in the region near the horizon and in the far-region. There exists an intermediate region where the two solutions overlap; matching them enables us to calculate the frequency of quasinormal modes and explore the (in)stability of these modes. \subsection{Near-region solution} For small and slowly rotating black holes, in the region close to the horizon, $ r- r_+ \ll 1/\omega $, and in the regime of low-frequency perturbations, $ 1/\omega \gg r_+ \, $, the radial equation (\ref{radial1}) can be approximated by the equation \begin{equation} 4\Delta_x \frac{d}{dx} \left(\Delta_x \frac{dR}{dx}\right) +\left[ x_+^3 \left(\omega-\omega_p\right)^2- \ell(\ell+2)\Delta_x \right] R=0\,, \label{nearrad1} \end{equation} where we have used the new radial coordinate $ x=r^2 $ and \begin{equation} \Delta_x \simeq x^2-2 M x +Q^2 = (x-x_+)(x-x_-)\,. \end{equation} The superradiance threshold frequency is given by equation (\ref{bound}), in which one must now take \begin{eqnarray} \Omega_a \simeq\frac{a}{x_+} +\frac{b Q}{x_+^2}\,\,,~~~~~~ \Omega_b \simeq \frac{b}{x_+} +\frac{a Q}{x_+^2}\,\,, ~~~~~~ \Phi_H \simeq \frac{\sqrt{3}}{2}\frac{Q}{x_+}\,. \label{nearvel} \end{eqnarray} In obtaining the above equations we have neglected the term involving $ r_+ ^2/l^2 $ as well as all terms of second and higher order in the rotation parameters. 
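To linear order in the rotation parameters, the near-horizon quantities (\ref{nearvel}) and the threshold frequency (\ref{bound}) are elementary to evaluate. A minimal numerical sketch (the parameter values below are illustrative placeholders, not tied to any specific black hole):

```python
import math

def horizons(M, Q):
    """Roots x_+ and x_- of Delta_x ~ x^2 - 2Mx + Q^2, with x = r^2."""
    d = math.sqrt(M**2 - Q**2)
    return M + d, M - d

def threshold_frequency(M, Q, a, b, e, m_phi, m_psi):
    """omega_p = m_phi*Omega_a + m_psi*Omega_b + e*Phi_H,
    with Omega_a, Omega_b, Phi_H taken to linear order in a, b as in (nearvel)."""
    x_plus, _ = horizons(M, Q)
    Omega_a = a / x_plus + b * Q / x_plus**2
    Omega_b = b / x_plus + a * Q / x_plus**2
    Phi_H = (math.sqrt(3) / 2) * Q / x_plus
    return m_phi * Omega_a + m_psi * Omega_b + e * Phi_H
```

As a sanity check, for a neutral test field on a non-rotating hole the threshold vanishes, and $\omega_p$ is symmetric under the simultaneous exchange $a \leftrightarrow b$, $m_\phi \leftrightarrow m_\psi$, as expected from the structure of (\ref{nearvel}).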
With the approximation employed, the eigenvalues $ \lambda_{\ell} $ are replaced by their five-dimensional flat spacetime value $ \lambda_{\ell} \simeq \ell(\ell+2) $ (see the Appendix). Next, it is convenient to define a new dimensionless variable \begin{equation} z=\frac{x-x_+}{x-x_-}\,\,, \end{equation} which in the near-horizon region goes to zero, $ z\rightarrow 0 $. Then equation (\ref{nearrad1}) can be put in the form \begin{equation} z (1-z) \frac{d^2R}{dz^2} +(1-z)\frac{d R}{dz} +\left[ \frac{1-z}{z}\,\Omega^2 -\frac{\ell(\ell+2)}{4(1-z)}\right]R=0\,, \label{nearrad2} \end{equation} where \begin{equation} \Omega=\frac{x_+^{3/2}}{2}\, \frac{\omega-\omega_p}{x_+-x_-}\,. \label{newsuperf} \end{equation} It is straightforward to check that the ansatz \begin{equation} R(z)=z^{i \Omega}\,(1-z)^{1+\ell/2}\,F(z)\,, \end{equation} when substituted into the above equation, leads to the hypergeometric equation of the form (\ref{hyperg1}) for the function $ F(z)= F(\alpha\,, \beta\,,\gamma, z) $ with \begin{equation} \alpha= 1+\ell/2 + 2 i \Omega\,,~~~~~~~\beta= 1+\ell/2\,,~~~~~~~ \gamma=1+2i\Omega\,. \label{nearradpara} \end{equation} The physical solution of this equation corresponding to the ingoing wave at the horizon, $ z\rightarrow 0 $, is given by \begin{equation} R(z)= A z^{-i \Omega}\,(1-z)^{1+\ell/2}\,F\left(1+\ell/2\,, 1+\ell/2 - 2 i \Omega\,, 1-2i\Omega\,, z\right)\,, \label{nearphys} \end{equation} where $ A $ is a constant. For large enough values of the wavelength, this solution may overlap with the far-region solution. Therefore, we need to consider the large $ r $ $ (z \rightarrow 1) $ limit of this solution. 
For this purpose, we use the functional relation between the hypergeometric functions of the arguments $ z $ and $ 1- z $ \cite{abramowitz}, which in our case has the form \begin{eqnarray} && F\left(1+\ell/2\,, 1+\ell/2 - 2 i \Omega\,, 1-2i\Omega\,, z\right)=\nonumber\\[2mm]&& ~~~~~~\frac{\Gamma(-1-\ell)\,\Gamma(1-2i\Omega) }{\Gamma(-\ell/2)\,\Gamma(-\ell/2-2i\Omega)}\,F\left(1+\ell/2\,, 1+\ell/2 - 2 i \Omega\,, 2+\ell\,, 1- z\right) + \nonumber\\[2mm]&& ~~~~~~\frac{\Gamma(1+\ell)\,\Gamma(1-2i\Omega) }{\Gamma(1+\ell/2)\,\Gamma(1+\ell/2-2i\Omega)}\,\left(1-z\right)^{-1-\ell}F\left(-\ell/2-2i\Omega\,,-\ell/2 \,, -\ell\,, 1- z\right). \label{funtr1} \end{eqnarray} Taking this into account in equation (\ref{nearphys}), we obtain that the large $ r $ behavior of the near-region solution is given by \begin{eqnarray} R \sim A \Gamma(1-2i\Omega)\left[\frac{\Gamma(-1-\ell)\,(r_+^2- r_-^2)^{1+\ell/2}}{\Gamma(-\ell/2)\,\Gamma(-\ell/2-2i\Omega)}\,\,r^{-2-\ell} + \frac{\Gamma(1+\ell)\,(r_+^2- r_-^2)^{-\ell/2}}{\Gamma(1+\ell/2)\,\Gamma(1+\ell/2-2i\Omega)}\, \,r^{\ell}\right], \label{larnear} \end{eqnarray} where we have also used the fact that $F(\alpha\,, \beta\,,\gamma, 0 )=1 $. \subsection{Far-region solution} In this region, $ r \gg r_+ $, the effects of the black hole are suppressed and the radial equation (\ref{radial1}) in this approximation is reduced to the form \begin{equation} \left(1+\frac{r^2}{l^2}\right) \frac{d^2R}{dr^2} + \left(\frac{3}{r}+\frac{5r}{l^2}\right)\frac{dR}{dr} +\left[\frac{\omega^2}{1+\frac{r^2}{l^2}}-\frac{\ell(\ell+2)}{r^2} \right]R=0\,. \end{equation} Defining a new variable \begin{equation} y= \left(1+\frac{r^2}{l^2}\right)\,, \end{equation} we can also put the equation into the form \begin{equation} y(1-y)\frac{d^2R}{dy^2} +\left(1-3y \right) \frac{dR}{dy} -\frac{1}{4}\,\left[\frac{\omega^2 l^2}{y}-\frac{\ell(\ell+2)}{y-1}\right]R=0\,. 
\label{adsrad1} \end{equation} We note that this is an equation in a pure AdS spacetime and, therefore, we look for its solution satisfying the boundary conditions at infinity, $ y\rightarrow\infty $, and at the origin of the AdS space, $ y\rightarrow 1 $. Again, one can show that the ansatz \begin{equation} R=y^{\omega l/2}(1-y)^{\ell/2}\, F(y)\,, \end{equation} transforms equation (\ref{adsrad1}) into the hypergeometric equation of the form (\ref{hyperg1}), where the parameters of the hypergeometric function $ F(\alpha\,, \beta\,,\gamma\,, y ) $ are given by \begin{equation} \alpha= 2+\ell/2 + \omega l/2\,,~~~~~~\beta= \ell/2 + \omega l/2\,,~~~~~~\gamma =1+ \omega l\,. \end{equation} The solution of this equation vanishing at $ y \rightarrow \infty $, i.e. obeying the boundary condition (\ref{boundinf}), is given by \begin{equation} R(y)= B y^{-2 -\ell/2}\,(1-y)^{\ell/2}\,F\left(2+\ell/2+ \omega l/2\,\,,2+\ell/2- \omega l/2\,\,, 3 \,\,, 1/y\right)\,, \label{nearphys1} \end{equation} where $ B $ is a constant. We are also interested in the small $ r $ $ (y\rightarrow 1) $ behavior of this solution. 
Using the expansion of the hypergeometric function in (\ref{nearphys1}) in terms of the hypergeometric functions of the argument $ 1-y $ given by \begin{eqnarray} && F\left(2+\ell/2+ \omega l/2\,\,,2+\ell/2- \omega l/2\,\,, 3 \,\,, 1/y\right)=\nonumber\\[3mm]&& \frac{\Gamma(3)\Gamma(1+\ell)\,\,y^{2+\ell/2- \omega l/2}\,(y-1)^{-1-\ell}}{\Gamma(2+\ell/2+ \omega l/2 )\,\Gamma(2+\ell/2 - \omega l/2)}\,F\left(1-\ell/2-\omega l/2\,\,, -1-\ell/2 - \omega l/2\,\,, -\ell\,\,, 1- y\right) \nonumber\\[3mm]&& + \frac{\Gamma(3)\Gamma(-1-\ell)\,\,y^{2+\ell/2+ \omega l/2}}{\Gamma(1-\ell/2+ \omega l/2 )\,\Gamma(1-\ell/2 - \omega l/2)}\,F\left(2+\ell/2+\omega l/2\,\,, \ell/2 + \omega l/2\,\,, 2+\ell\,\,, 1- y\right)\,,\nonumber\\ \label{funtr2} \end{eqnarray} we find that for small values of $ r $ the asymptotic solution has the form \begin{eqnarray} &&R(r)\sim B \Gamma(3)(-1)^{\ell/2}\left[\frac{\Gamma(1+\ell)\,l^{2+\ell} \,\,r^{-2-\ell} }{\Gamma(2+\ell/2+\omega l/2)\,\Gamma(2+\ell/2-\omega l/2)}\right. \nonumber\\[3mm] && \left. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +\,\frac{\Gamma(-1-\ell)\,l^{-\ell}\,\,r^{\ell}}{\Gamma(1-\ell/2+\omega l/2)\,\Gamma(1-\ell/2-\omega l/2)}\right]\,. \label{farradsmall} \end{eqnarray} Requiring the regularity of this solution at the origin of the AdS space $(r=0)$, we obtain the quantization condition \begin{equation} 2+\ell/2-\omega l/2 = -n \,, \label{quantization} \end{equation} where $ n $ is a non-negative integer playing the role of a ``principal'' quantum number. We recall that with this condition the gamma function $ \Gamma(2+\ell/2-\omega l/2) $ diverges. Thus, we find that the discrete frequency spectrum for scalar perturbations in the five-dimensional AdS spacetime is given by \begin{equation} \omega_n=\frac{2n+\ell+4}{l}\,. \label{fspectrum} \end{equation} This formula generalizes the four-dimensional results of \cite{cklemos, burgess} to five dimensions. 
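The quantization condition (\ref{quantization}) and the resulting spectrum (\ref{fspectrum}) follow from elementary arithmetic, which can be checked directly; a small sketch:

```python
import math

def omega_n(n, ell, l):
    """Normal-mode spectrum (fspectrum) of a massless scalar in pure AdS5."""
    return (2 * n + ell + 4) / l

l = 1.0
for n in range(6):
    for ell in range(5):
        # the argument of Gamma(2 + l/2 - omega*l/2) hits the non-positive
        # integer -n, where the gamma function has a pole, suppressing the
        # irregular r^{-2-l} piece of (farradsmall)
        arg = 2 + ell / 2 - omega_n(n, ell, l) * l / 2
        assert abs(arg - (-n)) < 1e-12
        try:
            math.gamma(arg)  # math.gamma rejects poles at non-positive integers
            assert False
        except ValueError:
            pass
```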
Since at infinity the causal structure of the AdS black hole is similar to that of the pure AdS background, it is natural to assume that equation (\ref{fspectrum}) equally well governs the frequency spectrum at large distances from the black hole. However, the important difference is related to the inner boundaries which are different; for the AdS spacetime we have $ r=0 $, while for the black hole in this spacetime we have $ r=r_+ $. Therefore, to catch the effect of the black hole, the solution (\ref{farradsmall}) must ``respond'' to the ingoing wave condition at the boundary $ r=r_+ $. Physically, this means that one must take into account the possibility for tunneling of the wave through the potential barrier into the black hole and scattering back. As we have described above, this would result in the quasinormal spectrum with the complex frequencies \begin{equation} \omega= \omega_n + i \sigma\,, \label{complexf} \end{equation} where $ \sigma $ is assumed to be a small quantity describing the damping of the quasinormal modes. Taking this into account in equation (\ref{farradsmall}), we first note that \begin{equation} \Gamma(2+\ell/2+\omega l/2)\,\Gamma(2+\ell/2-\omega l/2)= \Gamma(4+\ell+n+il\sigma/2)\,\Gamma(-n-il\sigma/2)\,. \label{product} \end{equation} Next, applying to this expression the functional relations for the gamma functions \cite{abramowitz} \begin{equation} \Gamma(k+z)=(k-1+z)(k-2+z)\ldots(1+z)\Gamma(1+z)\,,~~~~~ \Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin{\pi z}}\,, \label{gammakpz} \end{equation} where $ k $ is a non-negative integer, it is easy to show that for $ l\sigma \ll 1 $ \begin{equation} \Gamma(2+\ell/2+\omega l/2)\,\Gamma(2+\ell/2-\omega l/2) =-\frac{2 i}{l \sigma}\,\frac{(\ell+3+n)!}{(-1)^{n+1}n!}\,. \label{product1} \end{equation} Similarly, one can also show that \begin{eqnarray} \Gamma(1-\ell/2+\omega l/2)\,\Gamma(1-\ell/2-\omega l/2)=\Gamma(-1-\ell-n)\,\Gamma(3+n)\,. 
\label{product2} \end{eqnarray} Substituting now these expressions into equation (\ref{farradsmall}), we obtain the desired form of the far-region solution at small values of $ r $. It is given by \begin{eqnarray} R&=& B \Gamma(3)(-1)^{\ell/2}\left[ \frac{\Gamma(-1-\ell)\, l^{-\ell}\,\,r^{\ell}} {\Gamma(-1-\ell-n)\,\Gamma(3+n)}+ i \sigma\, \frac{(-1)^{n+1} n!\,\Gamma(1+\ell)}{2(3+\ell+n)!}\,\,l^{3+\ell}\, r^{-2-\ell} \right]. \label{farnearsol} \end{eqnarray} \subsection{Overlapping} Comparing the large $ r $ behavior of the near-region solution in (\ref{larnear}) with the small $ r $ behavior of the far-region solution in (\ref{farnearsol}), we conclude that there exists an intermediate region $ r_+ \ll r-r_+\ll 1/\omega $ where these solutions overlap. Matching them in this region allows us to obtain the damping factor in the form \begin{eqnarray} \sigma &=&- 2 i \,\frac{(r^2_+- r^2_-)^{1+\ell}}{l^{3+2\ell}} \frac{(3+\ell+n)!\,(1+\ell+n)! }{(-1)^{\ell}\, n! \,(2+n)! \,[\ell!(1+\ell)!]^2} \, \frac{\Gamma(1+\ell/2)\,\Gamma(1+\ell/2-2 i \Omega)} {\Gamma(-\ell/2)\,\Gamma(-\ell/2-2 i \Omega )}\,\,. \label{sigma} \end{eqnarray} We note that in this expression the quantity $ \Omega $ is given by \begin{equation} \Omega=\frac{r_+^{3}}{2}\, \frac{\omega_n-\omega_p}{r_+^2-r_-^2}\,. \label{newsuperf2} \end{equation} It is also important to note that, in contrast to the related expression in four dimensions \cite{cardoso2}, equation (\ref{sigma}) involves the term $\ell/2 $ in the arguments of the gamma functions. Therefore, its further evaluation requires us to consider the cases of even and odd values of $\ell $ separately. 
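The gamma-function manipulations above rely only on the two functional relations (\ref{gammakpz}); for real arguments these can be verified directly with the standard library (the complex-argument case used in the text would require, e.g., an arbitrary-precision library):

```python
import math

def gamma_shift(k, z):
    """Gamma(k+z) = (k-1+z)(k-2+z)...(1+z) * Gamma(1+z), k a non-negative integer."""
    prod = 1.0
    for j in range(1, k):
        prod *= j + z
    return prod * math.gamma(1 + z)

z = 0.37
for k in range(1, 7):
    assert math.isclose(gamma_shift(k, z), math.gamma(k + z), rel_tol=1e-12)

# reflection formula Gamma(z) * Gamma(1-z) = pi / sin(pi*z)
assert math.isclose(math.gamma(z) * math.gamma(1 - z),
                    math.pi / math.sin(math.pi * z), rel_tol=1e-12)
```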
\subsubsection{Even $ \ell $ } In this case using the functional relations \cite{abramowitz} \begin{eqnarray} &&\Gamma(k+ i z)\,\Gamma(k - i z)=\Gamma(1+ i z)\,\Gamma(1- i z)\prod_{j=1}^{k-1}\left(j^2+z^2\right)\,\,,\\[2mm] &&\Gamma(1+ i z)\,\Gamma(1- i z)=\frac{\pi z}{\sinh \pi z}\,, \label{func3} \end{eqnarray} one can show that \begin{eqnarray} \frac{\Gamma(1+\ell/2)\,\Gamma(1+\ell/2-2 i \Omega)} {\Gamma(-\ell/2)\,\Gamma(-\ell/2-2 i \Omega )}=-2 i \Omega \,[(\ell/2)!]^2 \prod_{j=1}^{\ell/2}\left(j^2+4 \Omega^2 \right)\,. \end{eqnarray} Substituting this expression into equation (\ref{sigma}) we find that \begin{eqnarray} \sigma & = & - \left(\omega_n-\omega_p\right) \, \frac{2 (3+\ell+n)!\,(1+\ell+n)! }{(-1)^{\ell}\, n! \,(2+n)! \,[\ell!(1+\ell)!]^2} \, \frac{r_+^3\,(r^2_+- r^2_-)^{\ell}}{l^{3+2\ell}} [(\ell/2)!]^2 \prod_{j=1}^{\ell/2}\left(j^2+4 \Omega^2 \right)\,.\nonumber\\ \label{sigmaf} \end{eqnarray} We see that the sign of this expression crucially depends on the sign of the factor $ \left(\omega_n-\omega_p\right)$; in the superradiant regime $ \omega_n < \omega_p $ it is positive. In other words, we have the negative damping effect, as we have discussed at the end of Sec.~IV, resulting in exponential growth of the modes with characteristic time scale $ \tau=1/\sigma $. Thus, the small AdS black holes under consideration become unstable to the superradiant scattering of massless scalar perturbations of even $ \ell $, or equivalently of even sum $ m_\phi + m_\psi $. We recall that we consider the positive frequency modes and the positive magnetic quantum numbers $ m_\phi $ and $ m_\psi $\,. 
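The sign statement can be made concrete by evaluating (\ref{sigmaf}) numerically. In the sketch below all parameter values are illustrative placeholders, chosen only to respect $r_- < r_+ \ll l$:

```python
import math

def sigma_even(ell, n, r_plus, r_minus, l, omega_n, omega_p):
    """Damping factor of eq. (sigmaf); valid for even ell only."""
    assert ell % 2 == 0
    Omega = (r_plus**3 / 2) * (omega_n - omega_p) / (r_plus**2 - r_minus**2)
    pref = (2 * math.factorial(3 + ell + n) * math.factorial(1 + ell + n)
            / ((-1)**ell * math.factorial(n) * math.factorial(2 + n)
               * (math.factorial(ell) * math.factorial(1 + ell))**2))
    geom = r_plus**3 * (r_plus**2 - r_minus**2)**ell / l**(3 + 2 * ell)
    prod = math.factorial(ell // 2)**2
    for j in range(1, ell // 2 + 1):
        prod *= j**2 + 4 * Omega**2
    return -(omega_n - omega_p) * pref * geom * prod

# superradiant regime omega_n < omega_p: negative damping (sigma > 0), growth
assert sigma_even(2, 0, 0.1, 0.05, 10.0, 0.6, 1.0) > 0
# non-superradiant omega_n > omega_p: ordinary damping (sigma < 0)
assert sigma_even(2, 0, 0.1, 0.05, 10.0, 1.0, 0.6) < 0
```

Since every factor multiplying $-(\omega_n-\omega_p)$ is positive for even $\ell$, the sign of $\sigma$ is fixed entirely by the superradiance condition, as stated in the text.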
\subsubsection{Odd $ \ell $ } In order to evaluate the combination of the gamma functions appearing in equation (\ref{sigma}) for odd values of $ \ell $, we appeal to the relations \cite{abramowitz} \begin{eqnarray} \Gamma\left(k+\frac{1}{2}\right)=\pi^{1/2}2^{-k}(2k-1)!!\,,~~~~~ \Gamma\left(\frac{1}{2}+iz\right)\Gamma\left(\frac{1}{2}-iz\right)=\frac{\pi}{\cosh{\pi z}}\,. \label{funct} \end{eqnarray} Using these relations along with those given in (\ref{gammakpz}), after some algebra, we obtain \begin{eqnarray} \frac{\Gamma(1+\ell/2)\,\Gamma(1+\ell/2-2 i \Omega)} {\Gamma(-\ell/2)\,\Gamma(-\ell/2-2 i \Omega )} = \frac{(\ell!!)^2 }{ 2^{1+\ell}}\,\prod_{j=1}^{(\ell+1)/2} \left[\left(j-\frac{1}{2}\right)^2+4 \Omega^2\right]. \end{eqnarray} With this in mind, we put equation (\ref{sigma}) in the form \begin{eqnarray} \sigma &=&- i \,\frac{(r^2_+- r^2_-)^{1+\ell}}{l^{3+2\ell}} \frac{(3+\ell+n)!\,(1+\ell+n)!\,(\ell!!)^2 }{(-1)^{\ell}\,2^{\ell}\, n! \,(2+n)! \,[\ell!(1+\ell)!]^2} \,\,\prod_{j=1}^{(\ell+1)/2} \left[\left(j-\frac{1}{2}\right)^2+4 \Omega^2\right]. \label{sigmaoddf} \end{eqnarray} We see that this expression is purely imaginary and does not change sign in the superradiant regime. In other words, these modes do not undergo any damping, but they do oscillate with frequency-shifts. \section{Conclusion} In this paper, we have discussed the instability properties of small-size, $ r_+\ll l $, charged AdS black holes with two rotation parameters, which are described by the solution of minimal five-dimensional gauged supergravity recently found in \cite{cclp}. The remarkable symmetries of this solution allow us to perform a complete separation of variables in the field equations governing scalar perturbations in the background of the AdS black holes. We began by demonstrating the separability of variables in the Hamilton-Jacobi equation for massive charged particles as well as in the Klein-Gordon equation for a massive charged scalar field. 
In both cases, we have presented the decoupled radial and angular equations in their most compact form. Next, exploring the behavior of the radial equation near the horizon, we have found the threshold frequency for the superradiance of these black holes. Restricting ourselves to slow rotation and to low-frequency perturbations, when the characteristic wavelength scale is much larger than the typical size of the black hole, we have constructed the solutions of the radial equation in the region close to the horizon and in the far-region of the spacetime. Performing the matching of these solutions in an overlapping region of their validity, we have derived an analytical formula for the frequency spectrum of the quasinormal modes. Analyzing the imaginary part of the spectrum for modes of even and odd $ \ell $ separately, we have revealed a new feature: In the regime of superradiance only the modes of even $ \ell $ undergo negative damping, their amplitudes growing exponentially. On the other hand, the modes of odd $ \ell $ turn out to be insensitive to the regime of superradiance, oscillating without any damping, but with frequency-shifts. This new feature is inherent in the five-dimensional AdS black hole spacetime and absent in four dimensions, where the small-size AdS black holes exhibit the instability to all modes of scalar perturbations in the regime of superradiance \cite{cardoso2}. We emphasize once again that our result was obtained in the regime of low-frequency perturbations and for small-size, slowly rotating AdS black holes. Therefore, its validity is guaranteed for a certain range of the perturbation frequencies and parameters of the black holes within the approximation employed. The full analysis beyond this approximation requires a numerical treatment. Meanwhile, one should remember that the characteristic frequencies of the modes relevant for the superradiant instability are governed by the radius of the AdS space. 
This means that the instability will not occur for an arbitrary range of the black hole parameters. We also emphasize that the different stability properties of even and odd modes of scalar perturbations arise only in the five-dimensional case with reflective boundary conditions. The physical reason for this is apparently related to the ``fermionic constituents'' of the five-dimensional AdS black hole. Therefore, it would be interesting to explore this effect in the spirit of \cite{maldacena} using an effective string theory picture, where even $ \ell $ refers to bosons and odd $ \ell $ to fermions. This is a challenging project for future work. \section{Acknowledgments} A. N. thanks the Scientific and Technological Research Council of Turkey (T{\"U}B\.{I}TAK) for partial support under the Research Project No. 105T437. O. D. also thanks T{\"U}B\.{I}TAK for a postdoctoral fellowship through the Programme BIDEB-2218.
\section{Model Description} Let $\Omega=[-1,1]^N$ for a positive integer $N$ and consider a set $\{S_j\}_{j=1}^M$ of mutually exclusive and exhaustive subsets of $\Omega$. A typical example will be the generalized quadrants, i.e. $$S_j = \left\{x \in \Omega \left| sgn (x_i) = \alpha_i \mbox{, } 1 \leq i \leq N \mbox{ and } \sum_{i=0}^{N-1} \alpha_{i+1} 2^i = 2^N+1-2j \right. \right\},$$ where the unique set of $\{\alpha_i\}$ denotes the binary decomposition of the integer $j < M$. Given a mapping $f: \Omega \longrightarrow \Omega$, we define the \textit{truncator} map as the following discrete dynamical system: \begin{equation} x(n+1)_i = x(n)_i sgn \left( f \left( x(n) \right)_i \right). \label{eq:trunc1} \end{equation} In this paper we specialize to the case of \textit{shuffling} maps, i.e. $f$ which can be expressed as a set of invertible operators $A_j$ associated with each component $S_j$ of $\Omega$. Specifically, consider the finite group $G=\{1, 2, \ldots, M\}$ endowed with an operation $\circ$ such that, for every $g \in G$, $g \circ g = 1$. This group is naturally isomorphic to the direct product group $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \ldots \times \mathbb{Z}_2$ of $\log_2 M$ factors, which can be represented as a modulo multiplication group $M_n$ for some large enough $n$ such that $\phi(n)=M$, where $\phi$ is the Euler totient function. In this setting, assign an orientation-reversing invertible $\ell_\infty$ isometry $A_j$ to each component $S_j$ of $\Omega$, with the property that $A_j \left(S_i \right) = S_{i \circ j}$. The associated \textit{shuffling} map is given by a mapping $\varphi: G \longrightarrow G$ such that $f \left|_{S_i} \right. = A_{\varphi(i)}$. Using this notation, the resulting truncator dynamics can be described as \begin{equation} x(n+1) = \sum_{i=1}^M A_{\varphi (i)} \left( x(n) \right) {\bf 1}_{S_i} \left(x(n) \right). 
\label{eq:trunc2} \end{equation} These dynamics arise in a variety of settings \cite{palle1,palle2,kawamura1}. We were driven to study the truncator dynamics because they represent the frozen phase limit ($\beta \rightarrow \infty$) of a class of interacting particle systems describing economic interactions and opinion formation \cite{theo1,theo2,theo3,bornholdt1}. In this setting, the points $x$ represent configurations of a spin network and the shuffling map represents the interaction Hamiltonian that describes the influence of local and global effects on the flipping of individual spins. Another setting where such truncator dynamics arise is that of random Boolean networks. Often such models are used to describe regulatory networks (e.g. genetic or metabolic networks in biology \cite{troein1,troein2,troein3}) and they are also used to describe instances of the satisfiability problem \cite{parisi}. In this latter setting, global optimization algorithms are constructed to flip the values of Boolean variables populating the nodes of a graph in such a way as to maximize the probability that the clauses represented by the graph connections are simultaneously satisfied. Our goal in this paper is to characterize the periodic attractors of the truncator map. Specifically we consider random endomorphisms of $G$ \cite{diaconis,peres} and derive the distribution of periodic orbits of the resulting random truncator dynamics. Of course the full truncator map (\ref{eq:trunc1}) is generically chaotic \cite{gilmore}, because there is sensitivity to initial conditions in the neighborhood of the boundaries between the components $S_j$ (e.g. the axes, when the components are generalized quadrants). Here we will restrict our attention to shuffling maps and the resulting restricted truncator dynamics (\ref{eq:trunc2}), which capture the spectrum of periodic attractors. 
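As a concrete illustration of the basic step (\ref{eq:trunc1}) — with an arbitrarily chosen map $f$, not one taken from the references, and the convention $sgn(0)=+1$, since the boundary set has measure zero:

```python
def sgn(v):
    return 1 if v >= 0 else -1   # convention: sgn(0) = +1

def truncator_step(x, f):
    """One step of (eq:trunc1): x_i <- x_i * sgn(f(x)_i)."""
    fx = f(x)
    return [xi * sgn(fi) for xi, fi in zip(x, fx)]

# an arbitrary illustrative map on Omega = [-1,1]^2
f = lambda x: [-x[0], x[0] * x[1]]

x = [0.5, -0.25]
orbit = [x]
for _ in range(8):
    x = truncator_step(x, f)
    orbit.append(x)

# the truncator only flips signs: magnitudes are conserved along the orbit
assert all([abs(c) for c in y] == [0.5, 0.25] for y in orbit)
```

The invariance of the magnitudes makes explicit why the dynamics reduce to a map on the index set of the components $S_j$, as exploited in the next section.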
In a later step we plan to use this analysis as a building block for understanding the transitions between the basins of attraction of the periodic attractors we describe here. \section{Algebraic Dynamics} In order to better describe the orbits of (\ref{eq:trunc2}) we define a new, noncommutative operation on $G$. This operation encodes the action of the shuffling map $\varphi$ on $G$: $$g_1 \ast g_2 = g_1 \circ \varphi (g_2).$$ Abusing notation and identifying each $x \in \Omega$ with the index of the component $S_i$ in which it lies (i.e. the $i$ such that ${\bf 1}_{S_i} (x) = 1$), and subsequently every index with the corresponding member of $G$, we can describe every orbit of (\ref{eq:trunc2}) as a sequence: \begin{equation} g \rightarrow g^{\ast 2} \doteq g \ast g \rightarrow g^{\ast 3} \doteq (g \ast g) \ast (g \ast g) \rightarrow \cdots. \label{eq:path} \end{equation} Here is a list of some preliminary results for this algebraic structure: \begin{theorem} \label{homo1} If $\varphi$ is a homomorphism with respect to $\circ$ then it is also a homomorphism with respect to $\ast$. Conversely, if $\varphi$ is a surjective homomorphism with respect to $\ast$ then it is also a homomorphism with respect to $\circ$. \end{theorem} \begin{proof} The first statement follows immediately from the definition of the $\ast$ operation, since for any $g_1, g_2 \in G$, both $\varphi(g_1 \ast g_2)$ and $\varphi(g_1) \ast \varphi(g_2)$ are equal to $\varphi(g_1) \circ \varphi^{(2)} (g_2)$ (where $\varphi^{(k)}$ denotes the $k$-fold iteration of $\varphi$). 
For the second statement, we observe that, for every $g_2 \in {\rm Im} \varphi$, there exists some $g_3 \in G$ such that $g_2 = \varphi (g_3)$ and thus, $$\varphi(g_1 \circ g_2) = \varphi \left(g_1 \circ \varphi(g_3) \right) = \varphi(g_1 \ast g_3) = \varphi(g_1) \ast \varphi(g_3) = \varphi(g_1) \circ \varphi^{(2)} (g_3) = \varphi(g_1) \circ \varphi(g_2).$$ \end{proof} \begin{theorem} \label{fixedpoint} For any $\varphi$ (not necessarily a homomorphism), if the $\ast$ operation is commutative, then $\# \varphi^{-1} (1) = 1$ and $\varphi^{-1} (1) = 1$ is the unique attractor of (\ref{eq:trunc2}). \end{theorem} \begin{proof} The assumption implies that for every $g_1, g_2 \in G$, \begin{eqnarray*} g_1 \ast g_2 = g_2 \ast g_1 & \Longleftrightarrow & g_1 \circ \varphi(g_2) = g_2 \circ \varphi(g_1) \\ & \Longleftrightarrow & g_1 \circ \varphi(g_1) = g_2 \circ \varphi(g_2) \\ & \Longleftrightarrow & g_1 \ast g_1 = g_2 \ast g_2. \end{eqnarray*} But this implies that there exists a unique $\hat{g} \in G$ such that all points $x \in \Omega$ move into $S_{\hat{g}}$ after one step of (\ref{eq:trunc2}). Consider $\hat{g}$ itself. Since it remains fixed, it is a fixed point of (\ref{eq:trunc2}). This implies that $\varphi(\hat{g})=1$. Any $h \in \varphi^{-1} (1)$ is a fixed point since $h \ast h = h \circ \varphi(h) = h$. But if there was any other member of $\varphi^{-1} (1)$ different from $\hat{g}$, it would have to move to $\hat{g}$ in one step as we have already seen, refuting its stationarity. No other attractors are possible since all points converge to $\hat{g}$ in one step. Therefore, $\hat{g}$ is the unique fixed point of (\ref{eq:trunc2}). \end{proof} As an example, consider the case $N=2$ and the map $f(z) = {\frac {z + z^{-1}}{2}}$, where we think of $\Omega$ as the unit $\ell_\infty$ ball in ${\mathbb{C}}$. 
Of course $f^{-1} (z) = z - \sqrt{z^2 -1}$ which leads us to conclude that this is indeed a shuffling map, with $A_1 (z) = z$, $A_2 (z) = \bar{z} e^{i \pi}$, $A_3 (z) = z e^{i \pi}$ and $A_4 (z) = \bar{z}$, and therefore $\varphi(1) = 4$, $\varphi(2) = 3$, $\varphi(3) = 2$ and $\varphi(4) =1$. We observe that, with this choice of $\varphi$, $1 \ast 1 = 2 \ast 2 = 3 \ast 3 = 4 \ast 4 = 4 = \varphi^{-1} (1)$. This, according to Theorem \ref{fixedpoint}, every $z \in \Omega$ with positive real and negative imaginary parts will be a fixed point for the following dynamics \begin{eqnarray*} {\rm Re} z(n+1) & = & {\rm Re} z(n) sgn \left({\rm Re} \left ({\frac {z + z^{-1}}{2}} \right) \right) \\ {\rm Im} z(n+1) & = & {\rm Im} z(n) sgn \left({\rm Im} \left ({\frac {z + z^{-1}}{2}} \right) \right). \end{eqnarray*} Now, for every $\varphi: G \rightarrow G$ consider a new multiplication in $G$ defined so that it satisfies $$g \otimes g = \varphi (g)$$ and so that it is left-distributive\footnote{One can easily check that, if $1 \in \ker \varphi$, then $\otimes$ is bilaterally distributive.} with respect to the addition defined by $\circ$. Note that $\otimes$ is not necessarily associative, e.g. $\{(g \otimes g) \otimes g\} \circ \{g \otimes (g \otimes g) \} = [g, g^{\otimes 2}] = \varphi \left(g \circ \varphi(g) \right) \circ \varphi(g) \circ \varphi^{(2)} (g)$ which can be different from $1$ when $\varphi$ is not a homomorphism. Due to this potential non-associativity, we must be careful about defining $\otimes$ powers. In particular, let $\alpha_\ell \alpha_{\ell-1} \cdots \alpha_2 \alpha_1$ be the binary decomposition of the integer $k>1$. Then define $g^{\otimes k} = \left( \alpha_\ell g^{\otimes 2^{\ell-1}} \right) \otimes \left( \alpha_{\ell -1} g^{\otimes 2^{\ell-2}} \right) \otimes \cdots \otimes \left( \alpha_2 g^{\otimes 2} \right) \otimes (\alpha_1 g)$, where $g^{\otimes 2^j}= \left(g^{\otimes 2^{j-1}} \right) \otimes \left(g^{\otimes 2^{j-1}} \right)$. 
If $\varphi$ is a homomorphism, then $\otimes$ is commutative because for any $g, h \in G$, $$(g \circ h) \otimes ( g \circ h) = \varphi (g \circ h ) = \varphi (g) \circ \varphi (h)$$ while, the distributive property implies that $$(g \circ h) \otimes ( g \circ h) = (g \otimes g) \circ (h \otimes h) \circ (g \otimes h) \circ (h \otimes g) = \varphi (g) \circ \varphi (h) \circ (g \otimes h) \circ (h \otimes g)$$ and therefore $(g \otimes h) \circ (h \otimes g)=1$ which implies commutativity. For the same reason, a general $\varphi$ leads to the following identity: $$(g \otimes h) \circ (h \otimes g) = \varphi (g \circ h) \circ \varphi (g) \circ \varphi (h).$$ We define the commutator of two elements $g,h$ of $G$ as $[g,h] \doteq (g \otimes h) \circ (h \otimes g)$ and say that $g$ commutes with $h$ if $[g,h] = 1$. Also observe that $1$, the identity of the addition $\circ$, is a trapping element of $G$ with respect to $\otimes$ since for any $g \in G$, $$\varphi(g)= g \otimes g = (g \circ 1) \otimes g = (g \otimes g) \circ (1 \otimes g) = \varphi (g) \circ (1 \otimes g)$$ and therefore $1 \otimes g = g \otimes 1 = 1$. \section{Polynomial Roots} Now consider the ring of polynomials in $G$ using this multiplication and coefficients from $\mathbb{Z}_2$. The action of $\mathbb{Z}_2$ on $G$ is modeled as exterior multiplication of $\mathbb{Z}_2$ on $M_n$, the modulo multiplication group that represents $G$, i.e. $0 \cdot g = 1$ and $1 \cdot g = g$. \begin{theorem} \label{polynomial} For any $\varphi \in \hom (G,\circ)$, $g \in G$ and $p>0$, \begin{equation} g^{\ast p} = \bigodot_{k=0}^{p-1} \gamma_{k,p} \varphi^{(k)} (g), \label{eq:poly} \end{equation} where $\varphi^{(0)} (g) = g$, \begin{equation} \gamma_{k,p} \equiv \left( \gamma_{k,p-1} + \gamma_{k-1,p-1} \right) \pmod{2}, \label{eq:pascal} \end{equation} and $\gamma_{0,0}=1$, $\gamma_{k,0}=0$ for $k<0$. 
\end{theorem} \begin{proof} Observe that the expression for $g^{\ast p}$ is a polynomial in $G$ as described above, whose $k$-th term has degree $2^k$, since $\varphi^{(k)} (g) = g^{\otimes 2^k}$, as can be easily checked using induction. Observe further that the coefficients of these polynomials obey the binary version of the Pascal triangle. We will show this using induction in $p$. The desired result clearly holds for $p=1$ since $g = \gamma_{0,1} g$ and $\gamma_{0,1}= 1$. Assume the desired result holds for $p$. Then \begin{eqnarray*} g^{\ast (p+1)} & = & g^{\ast p} \ast g^{\ast p} = g^{\ast p} \circ \varphi (g^{\ast p}) = \left\{ \bigodot_{k=0}^{p-1} \gamma_{k,p} \varphi^{(k)} (g) \right\} \circ \varphi \left( \bigodot_{k=0}^{p-1} \gamma_{k,p} \varphi^{(k)} (g) \right) = \\ & = & \left\{ \bigodot_{k=0}^{p-1} \gamma_{k,p} \varphi^{(k)} (g) \right\} \circ \left\{ \bigodot_{k=1}^{p} \gamma_{k-1,p} \varphi^{(k)} (g) \right\} = \\ & = & \left(\gamma_{0,p}\, g \right) \circ \left(\gamma_{p-1,p}\, \varphi^{(p)} (g) \right) \circ \bigodot_{k=1}^{p-1} \left((\gamma_{k-1,p} + \gamma_{k,p}) \pmod{2} \right) \varphi^{(k)} (g) \end{eqnarray*} because when $\gamma_{k,p}$ and $\gamma_{k-1,p}$ are both equal to $1$, the corresponding term contains $\varphi^{(k)}(g) \circ \varphi^{(k)}(g) = 1$ and therefore drops out. \end{proof} Let's define the period of an element $g$ as \begin{equation} p^\ast (g) = \min \{i>1 | g^{\ast i} = g\} -1 \label{eq:periodef} \end{equation} where we understand the minimum of any empty set to be equal to $\infty$. Using this concept we can summarize a set of necessary and sufficient conditions for a truncator map to possess limit cycles of particular periods as follows: \begin{theorem} \begin{enumerate} \item For a general $\varphi$, $g \in \ker \varphi \Longleftrightarrow p^\ast (g) = 1$. \item If $g$ commutes with $g^{\otimes 2}$ and $1 \in \ker \varphi$, $g \in \ker \varphi^{(2)} \setminus \ker \varphi \Longleftrightarrow p^\ast (g) = 2$. 
\item Let $\bigtriangleup \doteq \{g \in G | g^{\ast 2} = g^{\otimes 4} \}$. If $g$ commutes with $g^{\otimes 2}$ and $g^{\otimes 4}$ and $1 \in \ker \varphi$, $$g \in \varphi^{-1} ({\rm Im}\varphi \cap \bigtriangleup ) \setminus \ker \varphi \Longleftrightarrow p^\ast (g) = 3.$$ \end{enumerate} \end{theorem} \begin{proof} Using (\ref{eq:periodef}) we see that $p^\ast (g)$ is equal to $1$ iff $g^{\ast 2} = g$ which is true iff $g \in \ker \varphi$, thus proving the first statement of the theorem. On the other hand, we clearly have $g^{\ast 3} = g \circ g^{\otimes 2} \left(g \circ g^{\otimes 2} \right)^{\otimes 2} = g \circ g^{\otimes 4} \circ \left[g, g^{\otimes 2} \right]$. But when $[g, g^{\otimes 2}] = 1$, $p^\ast (g) \leq 2$ iff $g \in \ker \varphi^{(2)}$. Together with the previous statement, we have proved the second statement of the theorem. In this case we have \begin{equation} g^{\ast 4} = g \circ g^{\otimes 2} \circ g^{\otimes 4} \circ g^{\otimes 8} \circ [g, g^{\otimes 2}] \circ [g, g^{\otimes 2}]^{\otimes 2} \circ [g, g^{\otimes 4}] \circ \left[g \circ g^{\otimes 4}, [g, g^{\otimes 2}] \right]. \label{eq:gast4} \end{equation} When $g$ commutes with $g^{\otimes 2}$ and $g^{\otimes 4}$, (\ref{eq:gast4}) simplifies to $g^{\ast 4} = g \circ g^{\otimes 2} \circ g^{\otimes 4} \circ g^{\otimes 8}$. Since $\varphi(g) \in \bigtriangleup$, $g^{\otimes 2} \circ g^{\otimes 4} \circ g^{\otimes 8} = 1$ and therefore $g^{\ast 4} = g$ which implies $p^\ast (g) \leq 3$. Requiring that $g \not \in \ker \varphi$ is sufficient to complete the proof of the last statement in the theorem because $g \in \ker \varphi^{(2)} \setminus \ker \varphi \Longrightarrow g^{\otimes 2} \circ g^{\otimes 4} \circ g^{\otimes 8} = g^{\otimes 2} \neq 1$. \end{proof} \section{Random Maps} Let $\mu \in {\mathcal M}_1 \left( G^G \right)$ be a probability measure on the set of maps from $G$ to itself. 
This can be described as a sequence of measures $\nu_g \in {\mathcal M}_1 (G)$ on $G$ indexed by the elements of $G$, such that for every $g, h \in G$: $$\nu_g (h) = \mu \left( \varphi (g) = h \right).$$ A uniformly random $\varphi$ maps each element $g$ to $1$ with probability $M^{-1}$. Thus the number of $g$ that are mapped to $1$ is a binomial random variable: $$\lambda \left( \left|\ker \varphi \right| = k \right) = \left( \begin{array}{c} M \\ k \end{array} \right) M^{-M} (M-1)^{M-k},$$ and therefore, \begin{theorem} Let $\lambda$ be the uniform measure in ${\mathcal M}_1 \left( G^G \right)$. Then: $$\lim_{M \rightarrow \infty} \lambda \left( \left|\ker \varphi \right| = k \right) = (ek!)^{-1}.$$ \end{theorem} We proceed by defining a transition matrix $\Phi$ such that for every pair $(i,j) \in G^2$, $\Phi_{i,j} = \mu(i^{\ast 2}=j)$. We consider a stochastic process on $G$ which propagates according to (\ref{eq:path}) with iid choices of $\varphi$ in every draw. Observe that $$\mu \left(i^{\ast 3} =j \right) = \sum_{k \in G} \mu \left(i^{\ast 2} =k \right) \mu \left(k^{\ast 2} =j \right) = \sum_{k \in G} \Phi_{ik} \Phi_{kj} = \left(\Phi^2 \right)_{ij}.$$ Now let $\imath$ be the identity mapping on $G$, and define an addition $+$ in $G^G$ such that $(\varphi_1 +\varphi_2)(g) = \varphi_1(g) \circ \varphi_2(g)$. Then, we can express $\ast$ powers of $g$ as $$g^{\ast p} = (\varphi + \imath)^{(p-1)} (g).$$ Then $$g^{\ast p} = g \Longleftrightarrow \varphi \left(\bigodot_{k=0}^{p-2} (\varphi + \imath)^{(k)} (g) \right) = 1.$$ Thus, when it is finite, $p^\ast$ can be computed as the first passage time into $\ker \varphi$ of a Markov chain on $G$ with transition matrix elements: \begin{eqnarray*} {\rm Pr} \left( \bigodot_{k=0}^p \left(\varphi + \imath \right)^{(k)} (g) = j \left| \bigodot_{k=0}^{p-1} \left(\varphi + \imath \right)^{(k)} (g) = i \right. 
\right) & = & \mu \left(\left(\varphi + \imath \right)^{(p)} (g) = i \circ j \right) \\ & = & \left(\Phi^p \right)_{g, i \circ j}. \end{eqnarray*} On the other hand when $p^\ast (g) = \infty$ ($g$ is transient for the dynamics), the Markov chain never enters $\ker \varphi$. This is a time-inhomogeneous process as seen in the expression for the $p$-step transition probabilities: \begin{eqnarray*} & & {\rm Pr} \left( \bigodot_{k=0}^p \left( \varphi + \imath \right)^{(k)} (g) =j \left| g=i \right. \right) = \\ & = & \sum_{k_1,k_2,\ldots,k_{p-1} \in G} \Phi_{i , i \circ k_1} \left(\Phi^2 \right)_{i \circ k_1, i \circ k_1 \circ k_2} \cdots \left(\Phi^p \right)_{\bigodot_{j=1}^{p-1} k_j \circ i, \bigodot_{j=1}^p k_j \circ i }. \end{eqnarray*} \section{Synchronous Spin Market Dynamics} In this section we show how to map a spin model of market microstructure onto the class of truncator dynamics. The state space $X$ of the model we want to consider is the set of spin configurations on a lattice on the $d$-dimensional torus\footnote{Here we use the notation ${\mathcal T}^d$ to denote the object $\underbrace{ {\mathcal S}^1 \times \ldots \times {\mathcal S}^1}_d$.} $Y \doteq \left({\mathcal Z}/L \right)^d \subset {\mathcal T}^d$, i.e. $X \subset \{-1,1 \}^Y$, for an appropriately chosen $L$ so that $|Y|=N$. The path of a typical element of $X$ is given by $\eta: Y \times \aleph \longrightarrow \{-1,1\}$ and each site $x \in Y$ is endowed with a (typically $\ell_1$) neighborhood ${\mathcal N} (x) \subset Y$ it inherits from the natural topology on the torus ${\mathcal T}^d$. We construct a discrete time Markov process with synchronous transitions updating all the spins simultaneously.
We proceed to construct a transition matrix for the spins, based on the following interaction potential: $$h(x,n) = \sum_{y \in {\mathcal N}(x)} \eta(y,n) - \alpha \eta(x,n) N^{-1} \left|\sum_{y \in Y} \eta(y,n) \right|,$$ where $\alpha>0$ is the coupling constant between local and global interactions. At time $n$ the spins change to $+1$ with probability $p^+ \doteq \left( 1+ \exp \left\{- 2\beta h \left(x,n \right) \right\} \right)^{-1}$ and to $-1$ with probability $p^- = 1- p^+$, where $\beta$ is the normalized inverse temperature. Let $f: X \longrightarrow X$ be such that for all $x \in Y$, $\left(f(\eta( \cdot, n)) \right) (x) \doteq \eta(x,n) h(x,n) = \sum_{y \in {\mathcal N}(x)} \eta(y,n) \eta(x,n) - \alpha N^{-1} \left|\sum_{y \in Y} \eta(y,n) \right|$. It is easy to check that the frozen phase of this system ($\beta \rightarrow \infty$) is a shuffling truncator map, as described above in (\ref{eq:trunc2}). In the frozen phase, the transitions are deterministic (each row of $\Phi$ has only one nonzero element). High but finite values of $\beta$ lead to the introduction of some genuine randomness in the transition matrix. To illustrate this procedure, let us consider the above spin market model with $N=4$ and $d=1$ and standard nearest neighbor topology in ${\mathcal S}^1$. Consider first $g=16$ which represents the quadrant $(-1,-1,-1,-1)$. There are two cases. When $\alpha <2$, $\varphi(g) = 1$, which represents the quadrant $(1,1,1,1)$; thus when $\alpha<2$ (subcritical regime), $g=16 \in \ker \varphi$ and therefore $p^\ast (16) = 1$. Notice that by symmetry, the same is true for $g=1$. On the other hand when $\alpha >2$ (supercritical regime), $\varphi (16) = 16$ and therefore $16 \ast 16 =1$ and $\varphi(1) = 16$. Thus, $p^\ast (16) = p^\ast (1) = 2$. Next consider the element $g=15$ representing the quadrant $(-1,-1,-1,1)$. Once again there are two cases, but this time they are separated by $4$ rather than $2$.
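The $g=16$ behaviour in both regimes can be checked numerically. The sketch below simulates the frozen-phase update (each spin follows the sign of $h$) on the $N=4$ ring; the integer labelling of configurations, $g = 1$ plus the binary encoding with $-1 \mapsto 1$, is our assumption, chosen to be consistent with the quadrant labels above:

```python
def label(cfg):
    # assumed labelling: g = 1 + binary encoding of cfg with -1 -> 1, +1 -> 0
    return 1 + int("".join("1" if s == -1 else "0" for s in cfg), 2)

def step(cfg, alpha, n=4):
    # frozen phase (beta -> infinity): each spin follows the sign of h(x, n);
    # ties (h = 0) do not occur in the cases checked here
    m = abs(sum(cfg)) / n
    return tuple(
        1 if cfg[(x - 1) % n] + cfg[(x + 1) % n] - alpha * cfg[x] * m > 0 else -1
        for x in range(n)
    )

def phi(cfg, alpha):
    # flip mask: g * g = g o phi(g) equals the next state, o = componentwise product
    return tuple(a * b for a, b in zip(cfg, step(cfg, alpha)))

down = (-1, -1, -1, -1)             # the configuration labelled g = 16
print(label(phi(down, alpha=1.0)))  # subcritical (alpha < 2): phi(16) = 1
print(label(phi(down, alpha=3.0)))  # supercritical (alpha > 2): phi(16) = 16
```

In the subcritical run $g=16$ lies in $\ker\varphi$ and is a fixed point; in the supercritical run $\varphi(16)=16$, so $16\ast16=1$, reproducing the period-2 behaviour described above.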
Specifically, when $\alpha<4$, $\varphi (15) = 12$, representing the quadrant $(-1,1,-1,-1)$, and thus $15 \ast 15 = 6$, representing quadrant $(1,-1,1,-1)$. Proceeding from $g^{\ast 2} = 6$ we see that $\varphi (6) = 16$ and therefore $6 \ast 6 = 11$, representing quadrant $(-1,1,-1,1)$. Thus, when $\alpha<4$ (the relevant subcritical regime), $p^\ast (15) = \infty$, draining into the period 2 attractor $6 \rightarrow 11 \rightarrow 6 \rightarrow \cdots$. On the other hand when $\alpha>4$ (the relevant supercritical regime) $\varphi (15) = 16$ and thus $15 \ast 15 = 2$, representing quadrant $(1,1,1,-1)$. Continuing from $g^{\ast 2} = 2$ we see that $\varphi (2) = 16$ and therefore $2 \ast 2 = 15$. So we conclude that in the supercritical regime, $p^\ast (15) = 2$. \section{Conclusions and Next Steps} We have presented a new methodological framework for analyzing a class of random symbolic dynamics. This framework draws on the iterated function systems (IFS) literature to identify Boolean maps with Boolean expressions, thus constructing an algebraic structure akin to the modulo multiplication groups. This structure in turn helps clarify the qualitative properties of the underlying interaction Hamiltonian by exhibiting parameter ranges which lead to different algebraic properties. We have shown constructively that large classes of symbolic dynamics, including random Boolean networks, can be described in terms of our proposed truncator maps. In the case of the Bornholdt spin market microstructure model we have shown examples of fixed points, period 2 cycles as well as transient points in configuration space. We proceeded to show that non-zero temperature can be accommodated by constructing a Markov chain in the space of automorphisms of our ring structure. In a particularly simple case we were able to compute explicitly the thermodynamic limit of the number of fixed points. 
This analytical result lends support to the conjecture that as the number of agents increases, with overwhelming probability there are but very few fixed points. A natural next step is to extend the analysis presented here beyond shuffling maps to general truncator dynamics. Such an extension will involve long memory as the iterated images become intertwined. On the other hand the resulting global mixing is likely to induce ergodic properties missing in the case of pure shuffling. Furthermore, even in the case of shuffling maps, the solution of the inhomogeneous exit problem identified above as a way to represent the spectrum of the truncator dynamics remains generally open. We plan to address this problem explicitly in future work. \bibliographystyle{amsalpha}
\section{Introduction} \label{sec:introduction} Over the past decade, several studies have focused on characterising and selecting quasars, using both optical and near-infrared (NIR) data from the $u$-band to the WISE W2 band. Astrometric data from the {\it Gaia} mission have contributed significantly to establishing the extra-galactic nature of several quasar candidates, which are expected to show a parallax consistent with zero \citep[see][and references therein for further information on the selection principles]{Heintz2018b, Geier2019}. The present work is based on the identification of three objects along the slit of the OSIRIS instrument at the Gran Telescopio Canarias (GTC), used for the spectroscopic follow-up observations of quasar candidates \citep{Geier2019}. The primary source, identified as GQ\,1114+1549A in the present paper, is a confirmed quasar candidate located at RA = 11:14:34.26, Dec = +15:49:44.80 (J2000.0) \citep{Heintz2018b}. The second source, here identified as GQ\,1114+1549B, had not been included in any previous survey, and its quasar nature was confirmed in the present work. The spectrum of that object yielded a redshift compatible with that of GQ\,1114+1549A, while having significantly different $g$, $r$ and $i$-band magnitudes compared to its companion. The spectrum of the third source was found to be consistent with that of a foreground star. The present paper presents the serendipitous discovery of GQ\,1114+1549B and complements it with archival and follow-up observations, analysed with the aim of characterising the nature of the quasar pair. The small angular separation of this pair of quasars warrants further analysis, particularly with the objective of establishing whether the pair constitutes a gravitationally bound system. The remainder of the paper is divided into three sections.
Section \ref{sec:data} covers the observations conducted on the quasar pair, section \ref{sec:results} describes the main results, while section \ref{Sect:diskussion} concludes by discussing the importance and implications of our discovery for future binary quasar studies. We assume a $\Lambda$CDM cosmological model with $H_0=67.4$ km s$^{-1}$Mpc$^{-1}$, $\Omega_M=0.32$ and $\Omega_{\Lambda}=0.68$ \citep{Planck2018}. \section{Observations} \label{sec:data} The first spectroscopic observations of the quasar pair were obtained with the OSIRIS instrument at the GTC. The candidate quasar GQ\,1114+1549A, selected according to optical data, NIR photometry and astrometry, was observed on December 4, 2018, when two 400 sec integrations were obtained. Using the Grism R1000B and a 1.23 arcsec slit, the observation yielded a resolution of $\mathcal{R}=500$, with a spectral range of 3750--7800 \AA. The observing conditions were a seeing of about 1.1 arcsec, spectroscopic sky transparency, and dark time. The target was observed at an average airmass of 1.43, at parallactic slit angle. While analysing the GTC spectrum of GQ\,1114+1549A, we found evidence for a second (adjacent) quasar on the slit. Follow-up observations of the quasar pair were obtained on July 2 and July 5, 2019 with the Nordic Optical Telescope (NOT). The total integration time was 2840 sec; the aim was to confirm the presence of a companion quasar in the field by aligning the slit with both quasars. Here, we used the low-resolution spectrograph AlFOSC and a grism covering the spectral range 3800--9000 \AA \ with a 1.3 arcsec wide slit providing a resolution of $\mathcal{R}\approx280$. The pair was observed in evening twilight, as it was already setting, with the position angle being 124$^\mathrm{o}$ East of North. As the airmass was high (1.5--1.8) the observation suffered from significant differential slitloss, but the spectral region from 4500--9000 \AA \ was well covered.
The spectroscopic data from all the observations were reduced using standard procedures in IRAF\,\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.}. The GTC spectra were flux calibrated using observations of the spectro-photometric standard star Feige 110 observed on the same night. The NOT spectra were flux calibrated using a sensitivity function from an earlier night. To reject cosmic rays we used La\_cosmic \citep{vanDokkum01}. We corrected the spectra for Galactic extinction using the extinction maps of \cite{Schlegel98}. To improve the absolute flux calibration we scaled the spectra to the $r$-band photometry from SDSS \citep{Alam15}. \section{Results} \label{sec:results} In Fig.~\ref{fig:2dspectrum} we show the discovery spectroscopy from the GTC. The top panel shows the 2-dimensional spectrum with the trace of GQ\,1114+1549A in the middle. Below that there is the trace of a bright star (see Fig.~\ref{fig:findchart}). Above the trace of GQ\,1114+1549A there is the weak trace of another source. In this trace, we noted the presence of possible broad emission lines indicative of this source also being a quasar. The two lower panels in Fig.~\ref{fig:2dspectrum} show the extracted 1-dimensional spectra of GQ\,1114+1549A and this possible second quasar on the slit (referred to as GQ\,1114+1549B). As the trace is very weak we had to bin the spectrum significantly to bring the emission lines out of the noise. \begin{figure} \epsscale{.95} \plotone{2dspec.eps} \caption{Here we show the GTC spectra in which the \ion{Si}{4}, \ion{C}{4}, and \ion{C}{3}] lines are well covered. The top panel shows the 2-dimensional spectrum with three objects on the slit.
The lowest trace is the bright star located west of GQ\,1114+1549A (see Fig.~\ref{fig:findchart}), the central trace is the primary quasar target (GQ\,1114+1549A) and the upper trace is the serendipitously discovered quasar (GQ\,1114+1549B). The two bottom panels show the quasar spectra, A (below) and B (above). The B-spectrum is suppressed by about a factor of 10 (i.e., we get only 10\% of the flux in the slit compared to the A-spectrum). The B-spectrum is binned by a factor of seven for better visibility. Over-plotted is also the $g$, $r$, and $i$-band photometry from SDSS.} \label{fig:2dspectrum} \end{figure} In Figure~\ref{fig:findchart} we show a 1$\times$1 arcmin$^2$ field around the two sources (marked with A and B) as imaged in the $r$-band by SDSS in DR12 \citep{Alam15}. We also indicate the slit orientations. From this image it is clear that the slit position covering GQ\,1114+1549A at the parallactic angle grazes another point source with similar brightness west of GQ\,1114+1549A, which must be GQ\,1114+1549B. The projected angular separation between the two objects is $8.76\pm0.11~\mathrm{arcsec}$ as measured from the SDSS $r$-band image\footnote{http://skyserver.sdss.org/dr12/en/tools/explore/Summary.aspx?}. \begin{figure} \epsscale{.80} \plotone{slit.eps} \caption{A 1$\times$1 arcmin$^2$ field around the two sources (marked A and B) as imaged in the $r$-band by SDSS DR12. North is up and East is to the left. We have over-plotted information on the proper motion from Gaia DR2 \citep{Gaia2018} with red arrows and red error ellipses showing the 2-$\sigma$ uncertainty on the proper motion. The quasars both have proper motions consistent with 0, whereas the two other objects are moving significantly and are hence unrelated. We also plot a schematic view of the slit during the GTC observation (dotted lines), which was centred on source A and aligned with the parallactic angle, oriented at 115$^\mathrm{o}$ East of North (EoN) at the time of the observation.
The position angle between the two objects is 124$^\mathrm{o}$ EoN, and this was the slit angle used during the NOT observation (full drawn lines).} \label{fig:findchart} \end{figure} The purpose of the NOT spectroscopy was to obtain a spectrum with both GQ\,1114+1549A and GQ\,1114+1549B properly aligned. As the source was already setting in evening twilight the observation was difficult, but we still managed to capture a useful spectrum, as shown in Fig.~\ref{fig:2dspectrum_not}. Here we again show both the 2-dimensional spectrum (top) and the extracted 1-dimensional spectra of both quasars. It is clear that both objects are quasars at a very similar redshift of 1.76$\pm0.01$. Taking both the GTC and NOT data into account, there is no doubt that a new physical binary quasar pair has been serendipitously discovered. Remarkably, this is the second time that we have discovered a quasar pair by chance using a random slit angle \citep[for the first discovery see][]{Heintz2016}. \begin{figure} \epsscale{.95} \plotone{2dspecNOT.eps} \caption{Shown here are the spectra from the NOT obtained using a slit properly aligned with both quasars. The top panel illustrates the 2-dimensional spectrum with the traces of both quasars separated by 8.76 arcsec. The two bottom panels show the 1-dimensional spectra covering the region from \ion{C}{3}] to \ion{Mg}{2} (marked with dashed, red lines). The spectra are not corrected for telluric absorption. The red dots show the $g$, $r$ and $i$-band photometry from SDSS.} \label{fig:2dspectrum_not} \end{figure} \section{Discussion and Conclusions} \label{Sect:diskussion} For two quasars at the same redshift at such a small projected separation, the first question to answer is whether this is a lensed source or a physical quasar pair. An overview of more than 200 known lensed quasars can be found in \citet{GaiaPairs2018}, but unfortunately the separations are not included in the list.
Currently, a handful of lensed systems are known with separation larger than 10 arcsec \citep{Inada2003,Inada2006,Dahle2013,Shu2018} so a separation of 8.76 arcsec does not exclude that the pair under study here is a lensed system. The redshifts of the two quasars are in our case also consistent within the errors. However, the spectra and spectral energy distributions, as mapped out by the photometry (see Table~\ref{tab:mag}), are remarkably different, with one of the pair members (A) being strongly reddened and the other (B) not. As an example, source A has a colour excess in the $r-z$ bands \citep[which is found to be a good tracer of reddening due to dust in high-$z$ quasars;][]{Heintz2018} of $r-z = 0.73\pm 0.01$\,mag, which is at the 95th percentile of the full sample of quasars from the SDSS data release 14 \citep{Paris18}. Source B with $r-z = 0.11\pm 0.01$\,mag falls well within one standard deviation of the mean of the same sample. Therefore, the system is most likely a physical pair of quasars with a projected proper distance of 75 kpc in the assumed cosmology.
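The quoted projected distance follows from the assumed cosmology. A minimal sketch (pure Python, Simpson integration of the Friedmann equation for a flat universe) recovers the arcsec-to-kpc scale at $z=1.76$:

```python
from math import pi, sqrt

C_KM_S, H0, OM, OL = 299792.458, 67.4, 0.32, 0.68  # assumed Planck 2018 cosmology
Z, SEP_ARCSEC = 1.76, 8.76                         # redshift and angular separation

def comoving_distance_mpc(z, n=10000):
    # D_C = (c / H0) * integral_0^z dz' / E(z'),  E(z) = sqrt(OM (1+z)^3 + OL)
    h = z / n
    total = sum(
        (1 if i in (0, n) else 4 if i % 2 else 2)  # Simpson weights (n even)
        / sqrt(OM * (1 + i * h) ** 3 + OL)
        for i in range(n + 1)
    )
    return (C_KM_S / H0) * total * h / 3

d_a = comoving_distance_mpc(Z) / (1 + Z)         # angular diameter distance (flat)
kpc_per_arcsec = d_a * 1000 * pi / (180 * 3600)  # ~8.6 kpc per arcsec at z = 1.76
print(round(kpc_per_arcsec * SEP_ARCSEC, 1))     # roughly 75-76 kpc, consistent
                                                 # with the quoted 75 kpc
```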
\begin{table*}[!htbp] \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}l c c c c c c c c c } \noalign{\smallskip} \hline \hline \noalign{\smallskip} \emph{Object} & \emph{u} & \emph{g} & \emph{r} & \emph{i} & \emph{z} & \emph{Y} & \emph{J} & \emph{H} & \emph{K$_s$}\\ & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & [mag] & [mag]\\ \hline A & 21.58 & 20.51 & 19.78 & 19.11 & 18.99 & 18.05 & 17.60 & 17.22 & 16.39 \\ B & 19.99 & 19.83 & 19.86 & 19.68 & 19.75 & 18.96 & 18.75 & 18.24 & 18.48 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular*} \caption{The optical and near-infrared magnitudes of objects A and B (all on the AB magnitude system) from the SDSS and UKIDSS catalogues \citep{Warren2007,Alam15}.} \label{tab:mag} \end{table*} Previously, \citet{Hennawi06,Hennawi2010} and \citet{Findlay2018} carried out extensive searches for binary QSO systems, using the Sloan Digital Sky Survey \citep[SDSS;][]{York00} and the 2QZ QSO catalogues. Many hundreds of physical quasar pairs are known, but most of these have separations substantially larger than 10 arcsec \citep{Findlay2018}. Only about 30 pairs have separations smaller than 10 arcsec in the SDSS footprint, where there are about half a million confirmed quasars \citep{Paris2018,Findlay2018}. It is known that the number of quasar pairs with small separation (here $\lesssim$30 arcsec) is significantly underestimated in catalogues based only on the SDSS/BOSS spectroscopic sample, primarily due to fibre collisions. \citet{Findlay2018} try to decrease this bias by searching for objects close to known quasars with quasar-like colours in the optical. However, significantly reddened quasars like GQ\,1114+1549A are typically missed in quasar searches based only on optical colours. Indeed, neither GQ\,1114+1549A nor GQ\,1114+1549B is classified as a quasar in SDSS.
It is one of the goals of our selection (which led to the discovery of GQ\,1114+1549A) to avoid the colour bias inherent in the SDSS quasar selection \citep[and to a lesser extent in the BOSS quasar selection, see also the discussion in][]{Krogager19}. Our previous serendipitous discovery of a quasar pair in \citet{Heintz2016} was also a system of two quasars with very different colours, in that case, however, also with different redshifts. It is therefore possible that the number of quasar pairs at small separation is significantly underestimated. A new study based only on a selection of point sources without significant proper motion in {\it Gaia} astrometric data next to known quasars would be a relatively easy way to explore this possibility. As seen in Fig.~\ref{fig:findchart}, astrometry is a very clean way of rejecting stars in searches for quasars \citep[see also][]{Heintz2018b}. \section*{Acknowledgments} \acknowledgments The majority of the present work was carried out during a summer school hosted at the Astronomical Institute of the Slovak Academy of Sciences (Tatransk{\'a} Lomnica, Slovakia). Over the course of group projects, the first five authors identified the binary quasar, under the supervision of the 7\textsuperscript{th} author. We would like to thank the organisers of the summer school, and acknowledge the support received from the ERASMUS+ grant number 2017-1-CZ01-KA203-035562 and the European Union's Horizon 2020 research and innovation programme, under grant agreement No 730890 (OPTICON). We further thank A. Angeletti, J. X. Prochaska and L. Delchambre for helpful discussions. The presented work was made possible by observations conducted with the Gran Telescopio Canarias (GTC) and with the Nordic Optical Telescope (NOT), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, on the island of La Palma.
Our data analysis is partly based on data obtained through the UKIRT Infrared Deep Sky Survey. The Cosmic Dawn Center is funded by the DNRF. K. E. Heintz also acknowledges support from a Project Grant (162948--051) from The Icelandic Research Fund. J. P. U. Fynbo thanks the Carlsberg Foundation for their support. \bibliographystyle{aasjournal}
\section{Introduction} Logic and declarative programming is often and successfully used in parts of desktop and server applications. We value the declarative techniques because it is easier to write programs that relate closely to the specification (or even write compilable specifications) and show their correctness. Declarative programming has found its place in most computer science curricula in some form (often Haskell and Prolog) as well. But especially in logic programming the applications are often theoretical or only used as part of a larger system. The parts that interact with the outside world are usually written in an imperative fashion. For embedded systems, where rule-based interaction with the outside world is often the majority of the application, declarative programming is an avenue not well explored. With the advent of very cheaply produced microchips that allow for direct hardware interaction, small and easily programmable systems have found a place in STEM education (Science, Technology, Engineering, Math) and are used to teach electrical engineering, signal processing, mechanical engineering, robotics, and of course, programming at all levels of school and academia~\cite{DBLP:conf/rie/AgatolioM16,DBLP:conf/teem/Martin-RamosSLS16,DBLP:conf/iticse/RussellJS16}. Systems like these are ubiquitous in the hobbyist realm and are most often used in IoT (Internet of Things) devices and home automation. We can categorize those systems and the manner in which they can be programmed as follows: \begin{itemize} \item \textbf{System on a chip} (SoC) devices like the Raspberry~Pi can, in principle, be programmed using any software a traditional desktop computer can be programmed with.
While the available resources of SoCs are limited, the available main memory is in the dozens or hundreds of megabytes and even the slowest devices have CPUs with at least 300~MHz while the faster ones use multicore architectures with operating frequencies in the gigahertz range. Those CPUs are often found in phones, tablets, TVs, and other multimedia devices as well. And since the SoC devices usually also have a standard-compliant Linux distribution installed, any programming interface suitable to work with the GPIO (General Purpose Input Output) can be used for the described tasks. There is hardly any mainstream programming language that cannot be used to program a Raspberry~Pi or similar SoC devices. Even the LEGO Mindstorms EV3 platform falls into this category and students can engage with this platform using (among others) Python, Java, Go, C, Ruby, Perl, and even Prolog~\cite{Sw17} as their programming language of choice. Almost any technology stack for declarative logic programming can be used on these devices. \item In contrast, the ways in which \textbf{microcontrollers} can be programmed are very limited. Microcontrollers often use 8-bit CPUs with operating frequencies ranging from 16 to 40~MHz and an operating memory of 0.5 to 8~KB. Even the larger microcontrollers like the ESP-family with 32-bit CPUs, operating frequencies of up to 240~MHz and 520~KB of memory are unsuitable for a modern Linux kernel and userspace, let alone the technology stack for declarative programming. For this kind of embedded programming there have traditionally been only two options. The approachable method that is often used in teaching beginner and intermediate courses is a graphical block-based programming language like Scratch that uses an approach of translating code templates that fit together like puzzle pieces into actual C source code. The second approach, taken at academic or advanced levels, is to program C code directly.
Both approaches limit the user with regard to available programming paradigms. Imperative programming seems to have no real alternatives, even though such systems, which can be equipped with sensors, buttons, lights, displays, etc., are, in principle, well suited to being programmed using other paradigms. Especially in interactive applications like environmental sensing and robotics, event-driven or rule-based declarative approaches are desirable. \end{itemize} There have been many attempts to bring declarative programming to embedded systems. Some declarative approaches, like LUSTRE~\cite{Halbwachs91thesynchronous} from the early '90s, aim at reactive and dataflow-oriented programming. Comparative experiments with implementations of embedded applications using abstract declarative languages (Prolog, OCaml) showed that while the abstract code is shorter, the overhead for the runtime environments is significant~\cite{DBLP:conf/sbcci/SpechtRCLCW07}. In the recent past there have been advances in bringing event-driven programming in the form of functional reactive programming (FRP) to the Arduino platform. The Juniper programming language~\cite{DBLP:conf/icfp/HelblingG16} is one such language that leverages the functional reactive style. The \texttt{frp-arduino} project% \footnote{\url{https://github.com/frp-arduino/frp-arduino}} provides a domain-specific language that is embedded into Haskell in order to create and compile FRP programs for the Arduino. There are other declarative programming approaches for the Arduino-based microcontroller platform like Microscheme\footnote{\url{https://github.com/ryansuchocki/microscheme}}, a Scheme subset for the Arduino platform. In the home automation context there have been projects that allow microcontroller systems with common sensor setups to be configured declaratively (like ESPHome\footnote{\url{https://esphome.io/}}), but this approach is limited to this specific domain and a small number of targeted devices and peripherals.
But in terms of logic programming the Arduino platform is sorely lacking. Logic programming languages like Datalog allow concise and clear descriptions of system behaviors. To use rule-based systems in the domain of robotics and home automation is very appealing. In this paper we propose a specific dialect of Datalog closely related to the Dedalus language~\cite{DBLP:conf/datalog/AlvaroMCHMS10} (Section \ref{sec:dedalus}) that includes IO operations. We define an evaluation order for the different types of rules (Section \ref{sec:evaluation}) and give a scheme to compile the Datalog code to C~code (Section \ref{sec:compilation}). This scheme can be used to program Arduino-based microcontrollers in an expressive and declarative fashion, which we show by providing some example programs (Sections \ref{sec:exampleprograms} and \ref{sec:macros}). \section{Target Platform} As our target platform we have chosen microcontrollers with the ATmega328 8-bit processor, like the Arduino Nano, Arduino UNO\footnote{\url{https://www.arduino.cc}}, or similar devices (see Figure~\ref{fig:arduino_picture}). The ATmega328 is comparatively cheap and widely used. This target platform comes with a set of limitations and design challenges: \begin{figure} \begin{center} \includegraphics[height=3.5cm]{MVIMG_20190509_160354_255.jpg} \includegraphics[height=3.5cm]{IMG_20190531_173550_396.jpg} \end{center} \caption{Arduino Nano and Uno Compatible Boards with 1 Euro Cent for Size Comparison} \label{fig:arduino_picture} \end{figure} \begin{itemize} \item There is only 2~KB of SRAM available that is used for both heap and stack data. This means we are limited in operational memory for storing derived facts and in algorithm design with regard to function call depth. \item 32~KB of Flash memory can be used to store the program. This might seem like a lot in comparison, but it is also used to store additional libraries for peripheral access that the user requires.
This is also quite limiting considering the algorithm design and the amount of source code we are allowed to generate. The \texttt{Arduino.h} header files with pin input and output and writing to the serial port already use 2~KB of that memory when compiled with size optimization enabled. \item A boot loader of about 2~KB is used for the firmware. \item The ATmega328 processor has an operational speed of 20~MHz, which is a lot compared to the amount of data we have to operate on. \item There is an additional EEPROM non-volatile storage of 1~KB. This storage is slow and is limited in the number of write cycles. If the user chooses to write to or read from this storage as an effectful operation (i.e. IO~predicate, see Section~\ref{sec:dedalus}), they can do so. \end{itemize} The chosen target platform gives us restrictions with regard to the resource usage to aim at. Since we generate C~code and our approach to interfacing with the rest of the system is generic, our approach works for other embedded systems and processors as well. The generic approach is also useful since there already is a huge ecosystem for embedded development. The ``PlatformIO'' platform\footnote{\url{https://platformio.org/}} (self-proclaimed ``open source ecosystem for IoT development'') has well over 600 different supported boards and over 6,400 libraries in its registry. There is no reason why this effort should be duplicated. \section{Extension to Dedalus language} \label{sec:dedalus} We base our work on the Dedalus\textsubscript{0} language (from here on just Dedalus). Dedalus is a special variant of Datalog with negation where the \textbf{final attribute of every predicate} is a ``timestamp'' from the domain of the whole numbers. We call this attribute the ``time suffix''. We give a quick overview over the Dedalus language~\cite{DBLP:conf/datalog/AlvaroMCHMS10}: \begin{itemize} \item Every subgoal of a rule must use the same variable $\mathcal{T}$ as time suffix.
\item Every rule head has the variable $\mathcal{S}$ as a time suffix. \item A rule is \textbf{deductive} if $\mathcal{S}$ is bound to $\mathcal{T}$, i.e. $\mathcal{S} = \mathcal{T}$ is a subgoal of this rule. \\ Example: $p(X, \mathcal{S}) \leftarrow q(X, Y, \mathcal{T}), p(Y, \mathcal{T}), \mathcal{S} = \mathcal{T}.$ \\ We allow for stratified negation in the deductive rules. \item A rule is \textbf{inductive} if $\mathcal{S}$ is bound to the successor of $\mathcal{T}$, i.e. $successor(\mathcal{T}, \mathcal{S})$ is a subgoal of this rule. \\ Example: $p(X, \mathcal{S}) \leftarrow q(X, Y, \mathcal{T}), p(Y, \mathcal{T}), successor(\mathcal{T}, \mathcal{S}).$ \\ We allow arbitrary negated body literals in inductive rules, because the program is always stratified with regards to the last component. \end{itemize} \noindent In Dedalus every rule is either deductive or inductive. To make it easier to work with those restrictions some syntactic sugar is added: \begin{itemize} \item For deductive rules the time argument is left out in the head of the rule and every subgoal. \\ Example: $p(X) \leftarrow q(X, Y), p(Y).$ \item For inductive rules the suffix ``@next'' is added to the rule head and the time argument is left out in the head of the rule and every subgoal. \\ Example: $p(X)@next \leftarrow q(X, Y), p(Y).$ \item For facts any timestamp of the domain is allowed as $\mathcal{S}$ (written using the @-notation). To keep the memory footprint low we only allow facts for the timestamp 0 in this notation. \\ Example: $p(5)@0.$ \end{itemize} If a fact is not transported from one timestamp to the next, we have a notion of deletion. But Dedalus is more than just Datalog with updates. With this extension our Datalog program now has a notion of time, where not everything happens at once, but the facts with some timestamp $T_n$ can be seen as ``happening earlier'' than the facts with timestamp $T_m$ for $n<m$.
Depending on the evaluation strategy, any fact with an earlier timestamp may be deduced before those with a later timestamp. The timestamp also captures a notion of state, similar to the Statelog language~\cite{DBLP:conf/dagstuhl/LausenLM98a}. This is useful for interactions with the environment. To facilitate this interaction we add a predicate type and two types of rules that are used to manage effectful functions of the system (IO): \begin{itemize} \item An \textbf{IO predicate} is a predicate that corresponds to a system function that has effectful behavior with regards to the environment. IO~predicates do not correspond to members of the minimal model of our program. \item An \textbf{IO~literal} is a literal from an IO~predicate. Depending on usage this can be considered a (restricted) variant of an action atom or external atom~\cite{DBLP:journals/ai/EiterSP99,DBLP:conf/ijcai/EiterIST05}. \item An \textbf{input rule} is an \textbf{inductive rule} whose last subgoal is a positive IO~literal corresponding to a system function that reads a value from the environment, like the current time or a sensor value. The system function is executed when it is needed to derive a fact for the next state. \item An \textbf{output rule} is a \textbf{deductive rule} that has an IO~literal as its head. The literal corresponds to a system function that changes the environment, like setting the output current of a pin. The system function is executed when the literal can be derived. \item A rule has at most one IO literal in either head or body. \item We also allow arithmetic comparison of bound variables and arbitrary arithmetic expressions within the operands of the comparison.
\end{itemize} \section{Program Evaluation} \label{sec:evaluation} \begin{figure} \tikzset{state/.style={draw, fill=black!10, minimum width=1cm, minimum height=6em}} \tikzset{op/.style={draw, fill=black!10, minimum width=3cm, minimum height=1.5em, node distance=1.5em}} \begin{center} \begin{tikzpicture}[auto,] \clip (-2,1) rectangle + (10,-4.1); \node [state] (curr) {$t_n$}; \node [state, below of = curr, node distance = 6em ] (next) {$t_{n+1}$}; \node [op, right = 1cm of curr.north east, anchor=north west] (curr_deduction) {deduction}; \node [op, below of = curr_deduction] (curr_output) {output}; \node [op, below of = curr_output] (curr_induction) {induction}; \node [op, below of = curr_induction] (curr_input) {input}; \node [op, below of = curr_input] (next_deduction) {deduction}; \node [op, below of = next_deduction] (next_output) {output}; \node [op, below of = next_output] (next_induction) {induction}; \node [op, below of = next_induction] (next_input) {input}; \node [minimum height=12em, fill=black!10, draw, right = 1cm of curr_deduction.north east, anchor=north west] (env) {environment}; \draw [->] (env.west|-curr_input) -> (curr_input.east); \draw [<->] (curr_deduction.west) -> (curr.east|-curr_deduction); \draw [<-] (env.west|-curr_output) -> (curr_output.east); \draw [<-] (curr_output.west) -> (curr.east|-curr_output); \draw [->] (curr.east|-curr_induction) -> (curr_induction.west); \draw [->] (curr_induction.-175) -> (next.61); \draw [->] (curr_input.west) -> (next.59); \draw [->] ($(curr_induction.-175) + (0,6em)$) -> ($(next.61) + (0,6em)$); \draw [->] ($(curr_induction.-175) - (0,6em)$) -> ($(next.61) - (0,6em)$); \draw [->] ($(curr_input.west) + (0,6em)$) -> ($(next.59) + (0,6em)$); \draw [->] ($(curr_input.west) - (0,6em)$) -> ($(next.59) - (0,6em)$); \draw [->] (env.west|-next_input) -> (next_input.east); \draw [<->] (next_deduction.west) -> (next.east|-next_deduction); \draw [<-] (env.west|-next_output) -> (next_output.east); \draw [<-] 
(next_output.west) -> (next.east|-next_output); \draw [->] (next.east|-next_induction) -> (next_induction.west); \draw [->] ($(curr.north west) - (0.25, 0)$) -- node[midway,sloped,below] {time} ($(next.south west) - (0.25, 0)$); \end{tikzpicture} \end{center} \caption{Fact Deduction Order} \label{fig:deductionOrder} \end{figure} \noindent Deduction of facts for the state $t_n$, the following state $t_{n+1}$, and scheduling and execution of effectful functions happens in 4 phases (see Figure~\ref{fig:deductionOrder}): \begin{enumerate} \item In the deduction phase all facts for the current timestamp are derived. During this phase only the deductive rules (i.e. the rules that derive facts for the current timestamp) are used. In our case we use a naive evaluation strategy (taking the strata into account) that uses the least amount of additional memory, but any Datalog evaluation strategy that computes the fixpoint can be used to derive the facts for the current timestamp. \item In the output phase IO~functions that write data or affect the environment can be executed. Output rules of the form $B \leftarrow A_1 \wedge A_2 \wedge \dots \wedge A_n.$ where $B$ is the single IO~literal are evaluated in the order they are written in the Datalog program. The function corresponding to $B\theta$ is evaluated once for every ground substitution $\theta$ for $A_1$ to $A_{n}$ where $A_1\theta, \dots, A_{n}\theta$ are in the minimal model. \item In the induction phase all facts for the next timestamp are derived. During this phase only the inductive rules (i.e. the rules that derive facts for the next timestamp) are used. Since facts derived through inductive rules may only depend on facts from the current timestamp, all necessary facts are known after one execution of each rule. Therefore all inductive rules are evaluated once (and in any order) for this timestamp. \item In the input phase IO~functions that read data from the environment can be executed.
Input rules of the form $B \leftarrow A_1 \wedge A_2 \wedge \dots \wedge A_n.$ where $A_n$ is the single IO~literal are evaluated in the order they are written in the Datalog program. The function corresponding to $A_n\theta$ is evaluated once for every ground substitution $\theta$ for $A_1$ to $A_{n-1}$ where $A_1\theta, \dots, A_{n-1}\theta$ are in the minimal model of the current state and the derived $B\theta$ is in the minimal model for the following state. \end{enumerate} The effectful functions corresponding to IO~literals are called once for every ground substitution of the free variables in the rest of the rule. This is not a restriction since deduplication can be made explicit by introducing additional rules. Let $I$ be the IO~literal in the output rule $I \leftarrow p(X)$; then the function corresponding to $I$ is executed for every $X$ with $p(X)$ in the minimal model, even though $X$ does not appear in $I$. With the introduction of the regular predicate $I'$ this can be rewritten to remove the multiple execution as the rules $I' \leftarrow p(X)$ and $I \leftarrow I'$, where the latter only ever has one or no ground substitution. Then the memory needed for the duplicate checks is explicit and transparent to the programmer. Since order is enforced in the input and output phases of the program, we deviate from a purely declarative description. It is not often the case that the order of gathering data from the environment or changing pin states is important. In case it is, the order is explicit to the programmer. Note that while we allow arithmetic comparison with arbitrary arithmetic expressions, new constants are only introduced by input rules. Since the number of facts for a specific timestamp generated by input rules is limited, the number of new constants introduced is finite as well. While termination does not hold for the whole program (and we do not want it to), the minimal model for any specific state is always finite.
We say that our program is locally terminating, meaning that every following timestamp is reached eventually. \section{IO~Literals and Example Programs} \label{sec:exampleprograms} Our application is statically typed and we only allow primitive types for our data values. This is why all predicates need to be declared beforehand with the static types of their arguments. As syntax we use something similar to what the Souffl\'e system~\cite{DBLP:conf/cav/JordanSS16} does to declare relations. \texttt{.decl r(unsigned long, byte)} declares the predicate \texttt{r} with two arguments and their respective types. Since we have no general mechanism for textual output of relations, we do not need to define names for the arguments. Before we can show example programs, we want to give some IO~predicates for the Arduino interface. Users can write their own IO~predicates to interface with any number of existing libraries for their system. On a most basic level, an embedded board communicates with the outside world by means of GPIO-pins (general purpose input/output) that are attached to sensors, actuators, or other mechanical or electrical components. Basic interface functions\footnote{\url{https://github.com/arduino/ArduinoCore-avr/blob/master/cores/arduino/Arduino.h}} for the pins in Arduino-based systems (see Figure \ref{fig:pininterface}) are the functions \texttt{pinMode} that sets whether a pin is in input or output mode, \texttt{digitalWrite} that sets the output voltage (usually between the constants \texttt{HIGH} and \texttt{LOW}) of a pin (both persistent until the next call), and \texttt{digitalRead} that reads the voltage on a pin and gives either a \texttt{LOW} or \texttt{HIGH} value.
\begin{figure}% \begin{verbatim}void pinMode(uint8_t pin, uint8_t mode); void digitalWrite(uint8_t pin, uint8_t val); int digitalRead(uint8_t pin); unsigned long millis(void);\end{verbatim}% \caption{Extract from \texttt{Arduino.h} Header Files} \label{fig:pininterface} \end{figure}% We define an IO~predicate, whose name always starts with a \texttt{\#} to denote that it is an IO~predicate, together with its arguments (left side) by arbitrary C~statements (right side). Every IO~predicate may only have one definition. Within the defining C-statements variables from the predicate arguments can be used (prepended with \texttt{\#} so as not to clash with constants like \texttt{HIGH} and \texttt{LOW}). Constants from the outside C-code may also be used as constants in the Datalog-code using \texttt{\#} as a prefix. These base functions are part of our standard library, but since there are many different community-created libraries, we allow arbitrary C-code for interaction with our Datalog system. \begin{figure} \begin{verbatim} #pinIn(P) = {pinMode(#P, INPUT);} #pinOut(P) = {pinMode(#P, OUTPUT);} #digitalWrite(P, Val) = {digitalWrite(#P, #Val);} #digitalRead(P, Val) = {int Val = digitalRead(#P);} #millis(T) = {unsigned long T = millis();} \end{verbatim} \caption{Defined IO Predicates from the Standard Library} \end{figure} Depending on the definition of an IO~predicate, some binding patterns for IO~literals are not allowed. Consider the IO~predicate defined as \begin{center}\texttt{\#digitalRead(P, Val) = \{int Val = digitalRead(\#P);\}}.\end{center} We say that the variable \texttt{P} is read in the definition (as its value is used in a function call) and the variable \texttt{Val} is set in the definition. This corresponds to the binding pattern bound-free. Every variable that is not read in the definition is considered set in the definition.
Some restrictions arise from inserting the definitions into the source code ``as is'': \label{sec:iorules} \begin{itemize} \item If the IO~literal is used as the head of a rule, all variables appearing must be bound and read in the definition. The definition is compiled ``as is''. \item If the IO~literal is used in an input rule, all variables read in the definition must be bound by the other literals in the query. If some variable is set in the definition and bound by other literals, we compile the use of $p(A)$ with $A$ bound but set in the definition as $p(A'), A'=A$. When the rule is rewritten this way, the variable set in the definition is free again and we use a later comparison to check whether the values are equal. \end{itemize} \begin{figure} \begin{minipage}[t]{0.4\textwidth} \begin{verbatim} .decl setup .decl pressed setup@0. #pinIn(2) :- setup. #pinOut(13) :- setup. \end{verbatim} \end{minipage} \begin{minipage}[t]{0.6\textwidth} \begin{verbatim} pressed@next :- #digitalRead(2, #HIGH). #digitalWrite(13, #HIGH) :- pressed. #digitalWrite(13, #LOW) :- !pressed. \end{verbatim} \end{minipage} \caption{Program That Changes the LED When a Button Is Pressed} \label{fig:touchblink} \end{figure} \noindent We give an example program that switches an LED (the internal LED on this example board is connected to pin 13) on when the button connected to pin 2 is pressed, and off when it is released (see Figure \ref{fig:touchblink}). This program only has input and output rules and defines a minimal behavior that can easily be adapted to arbitrary connected sensors (temperature, distance) and actuators (relays, motors). In the same manner we define the blink-program (see Figure \ref{fig:blinklarge}) that toggles the LED every second using the system function \texttt{millis} that returns the number of milliseconds since the microcontroller has been turned on.
In this example we use the deduction phase to deduce actions and depending on those actions we both affect the environment and change the following state. The switching action (\texttt{turn\_on} and \texttt{turn\_off}) is deduced explicitly and the current state (\texttt{on\_since} and \texttt{off\_since}) is passed into the following state through the inductive rules until the decision to toggle is reached. \begin{figure} \begin{minipage}[t]{0.4\textwidth} \begin{verbatim} .decl setup .decl now(unsigned long) .decl off_since(unsigned long) .decl on_since(unsigned long) .decl turn_off .decl turn_on setup@0. #pinOut(13) :- setup. off_since(0)@0. now(0)@0. \end{verbatim} \end{minipage} \begin{minipage}[t]{0.6\textwidth} \begin{verbatim} turn_off :- on_since(P), now(T), P+1000 < T. turn_on :- off_since(P), now(T), P+1000 < T. on_since(P)@next :- !turn_off, on_since(P). on_since(T)@next :- turn_on, now(T). off_since(P)@next :- !turn_on, off_since(P). off_since(T)@next :- turn_off, now(T). now(T)@next :- #millis(T). #digitalWrite(13, #HIGH) :- turn_on. #digitalWrite(13, #LOW) :- turn_off. \end{verbatim} \end{minipage} \caption{Blink-Program} \label{fig:blinklarge} \end{figure} \section{Runtime Environment and Compilation} \label{sec:compilation} \subsection{Memory Management} Our runtime environment uses two buffers to store deduced facts. One buffer is for the facts in the current state and the other buffer is for the facts in the following state. Since we discard facts from previous states and do not dynamically allocate memory, this scheme allows us to not store timestamp data or superfluous pointers at all. For the state transition the buffers are switched and the buffer for the next state is zeroed. The buffer size is given by the user during compilation as some unknown amount of memory might be needed for the other libraries and their data structures. 
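The two-buffer transition described above can be sketched in a few lines of C. This is only an illustrative sketch: the buffer size, the types, and the pointer-based swap are our own assumptions, with only the names \texttt{curr\_buff} and \texttt{next\_buff} taken from the generated code.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the two-buffer scheme (sizes and types are our
 * assumptions): one buffer holds the facts of the current state, the
 * other the facts of the following state. */
#define BUF_SIZE 400

static uint8_t buf_a[BUF_SIZE], buf_b[BUF_SIZE];
static uint8_t *curr_buff = buf_a;  /* facts of timestamp t_n     */
static uint8_t *next_buff = buf_b;  /* facts of timestamp t_{n+1} */

/* State transition: swap the buffer roles and zero the new next-state
 * buffer, so no timestamp has to be stored alongside the facts. */
void switch_buffers(void) {
    uint8_t *tmp = curr_buff;
    curr_buff = next_buff;
    next_buff = tmp;
    memset(next_buff, 0, BUF_SIZE);
}
```

Because only the pointers are swapped, a state transition costs one clear of a buffer and no copying of facts.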
Our facts are stored in the buffers in a simple manner: \begin{itemize} \item Predicates are numbered (from 1) and we use the first byte to store the predicate (up to 255 different predicates). \item Subsequent bytes are used for the arguments. \item Facts are stored one after the other in the buffer. \item The empty tails of the buffers are filled with zeroes. \end{itemize} \begin{figure} \tikzset{field/.style={draw, fill=black!10, minimum width=1.5em, minimum height=1.5em}} \tikzset{brace/.style={decoration={brace},decorate}} \tikzset{bnode/.style={midway,above}} \tikzset{mbnode/.style={bnode,below,yshift=-0.05cm}} \tikzset{mbrace/.style={brace,decoration={mirror}}} \begin{center} \begin{tikzpicture}[auto,node distance=0] \node [field] (p1_p) {$001$}; \node [field, right = of p1_p] (p1_1) {$003$}; \node [field, right = of p1_1] (p1_2) {$232$}; \node [field, right = of p1_2] (q1_p) {$002$}; \node [field, right = of q1_p] (q1_1) {$042$}; \node [field, right = of q1_1] (q1_2) {$000$}; \node [field, right = of q1_2] (q1_3) {$012$}; \node [field, right = of q1_3] (p2_p) {$001$}; \node [field, right = of p2_p] (p2_1) {$128$}; \node [field, right = of p2_1] (p2_2) {$012$}; \node [field, right = of p2_2] (f1) {$000$}; \node [field, right = of f1] (f2) {$000$}; \node [field, right = of f2] (fr) {$\dots$}; \draw [brace] (p1_p.north west) -- (p1_2.north east) node [bnode] {$p(1000)$}; \draw [brace] (q1_p.north west) -- (q1_3.north east) node [bnode] {$q(42,12)$}; \draw [brace] (p2_p.north west) -- (p2_2.north east) node [bnode] {$p(-12)$}; \draw [brace] (f1.north west) -- (fr.north east) node [bnode] {free memory}; \draw [mbrace] (p1_p.south west) -- (p1_p.south east) node [mbnode] {$p$}; \draw [mbrace] (p1_1.south west) -- (p1_2.south east) node [mbnode] {$1000$}; \draw [mbrace] (p2_1.south west) -- (p2_2.south east) node [mbnode] {$-12$}; \draw [mbrace] (q1_1.south west) -- (q1_1.south east) node [mbnode] {$42$}; \draw [mbrace] (q1_2.south west) -- (q1_3.south east) node
[mbnode] {$12$}; \draw [mbrace] (q1_p.south west) -- (q1_p.south east) node [mbnode] {$q$}; \draw [mbrace] (p2_p.south west) -- (p2_p.south east) node [mbnode] {$p$}; \end{tikzpicture} \caption{Mapping Example with Declarations \texttt{.decl p(int)}, \texttt{.decl q(byte, int)}} \label{fig:memoryMapping} \end{center} \end{figure} \noindent This memory management scheme is very simple but uses no additional memory on pointers for organizing the data structure. Fact access time is linear in the number of stored facts. This is a reasonable compromise since we cannot store many facts anyway. Consider predicates whose facts are 8~bytes long. If we want to use 800~bytes of our RAM for fact storage, we would allocate 400~bytes per buffer, which holds up to 50 facts. Saving memory on facts, pointers, and call stack by not using more complex data structures is reasonable. \subsection{Target Code} We compile the following functions to C~code: \begin{itemize} \item Switching buffers and clearing of a buffer. \item Writing values of the available data types to a buffer. \item Reading values of the available data types from a buffer. \item Inserting a fact into a buffer. \item Retrieving a fact position from a buffer according to the used binding patterns, with the first argument for the start of the memory area to search in and one additional argument for every bound value. At least the pattern where every value is bound is generated, since we use it for duplicate checking when inserting facts. These functions return 0 if there is no fitting fact in the buffer. \item Reading an argument value from a fact given the fact position in a buffer. \end{itemize} \noindent Additionally we compile the size and memory locations of the buffers (\texttt{curr\_buff}, \texttt{next\_buff}), the size of the facts depending on the predicate, and the mapping from predicates to numbers as constants into the code, effectively storing them in the program memory.
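To make the storage scheme concrete, the following sketch shows how fact insertion and linear-scan retrieval might look in C. The names (\texttt{insert\_fact}, \texttt{find\_fact}) and the per-predicate size table are illustrative assumptions, not the exact generated code.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the flat fact storage: byte 0 of a fact is the predicate
 * number (numbered from 1), the remaining bytes are its argument
 * values, and facts are packed back to back.  In the generated code
 * the per-predicate fact sizes would be compiled-in constants; here
 * they live in a small table (names and sizes are our assumptions). */
#define BUF_SIZE 64

/* total fact size in bytes per predicate number (index 0 unused);
 * here: p(int16) -> 3 bytes, q(byte, int16) -> 4 bytes */
static const size_t fact_size[] = { 0, 3, 4 };

static uint8_t buf[BUF_SIZE];
static size_t  buf_used = 0;

/* Append a fact; returns 1 on success, 0 if the buffer is full. */
int insert_fact(uint8_t pred, const uint8_t *args) {
    size_t len = fact_size[pred];
    if (buf_used + len > BUF_SIZE) return 0;
    buf[buf_used] = pred;
    memcpy(buf + buf_used + 1, args, len - 1);
    buf_used += len;
    return 1;
}

/* Linear scan for the next fact of predicate `pred`, starting at byte
 * offset `from`; returns the fact's offset, or -1 if none is found. */
long find_fact(uint8_t pred, size_t from) {
    size_t pos = from;
    while (pos < buf_used) {
        if (buf[pos] == pred) return (long)pos;
        pos += fact_size[buf[pos]];  /* skip over a foreign fact */
    }
    return -1;
}
```

A generated nested-loop join would call such a find function repeatedly, read argument bytes at the returned offset plus one, and advance past each hit by the fact size, similar in spirit to the compiled rule in Figure~\ref{fig:compiled_rule}.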
\begin{itemize} \item For every rule we generate a function without parameters that returns whether facts have been inserted. \item The generated function contains a nested-loop-join for every literal in the body with variables bound in order of appearance in the rule. \item The generated function contains an if-statement for every arithmetic comparison. \item Additionally we generate a duplicate check for the fact that is to be inserted, and an insertion statement. \end{itemize} \noindent The code that we compile the rule $p(A) \leftarrow q(A),p(B), A < B$ to, where all arguments are integers, is shown in Figure~\ref{fig:compiled_rule}. \begin{figure} \begin{tt} \begin{tabular}{@{}|c|l@{}} \multicolumn{1}{l}{}&\textcolor{Black}{% bool deductive{\U}rule{\U}1() {\LB}}\\ \multicolumn{1}{l}{}&\textcolor{Black}{% ~~bool inserted{\U}facts = false;}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] \cline{1-1} $q(A)$&\textcolor{MidnightBlue}{% ~~size{\U}t~q1~=~curr{\U}buff;}\\ &\textcolor{MidnightBlue}{% ~~while~((q1~=~q{\U}f(q1))~!=~0)~{\LB}~~~~~~~//~find~next~q-fact}\\ &\textcolor{MidnightBlue}{% ~~~~int~A~=~q{\U}arg1(q1);~~~~~~~~~~~~~~~//~read~first~argument}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] \cline{1-1} $p(B)$&\textcolor{Fuchsia}{% ~~~~size{\U}t~p1~=~curr{\U}buff;}\\ &\textcolor{Fuchsia}{% ~~~~while~((p1~=~p{\U}f(p1))~!=~0)~{\LB}~~~~~//~find~next~p-fact}\\ &\textcolor{Fuchsia}{% ~~~~~~int~B~=~p{\U}arg1(p1);~~~~~~~~~~~~~//~read~first~argument}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] \cline{1-1} $A\!<\!B$&\textcolor{OrangeRed}{% ~~~~~~if~(A~{\LT}~B)~{\LB}}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] \cline{1-1} $p(A)$&\textcolor{OliveGreen}{% ~~~~~~~~if~(p{\U}b(curr{\U}buff,~A)~==~0)~{\LB}~//~duplicate~check}\\ &\textcolor{OliveGreen}{% ~~~~~~~~~~insert{\U}p(curr{\U}buff,~A);~~~~~//~insertion}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] \multicolumn{1}{l}{}&\textcolor{Black}{% ~~~~~~~~~~inserted{\U}facts~=~true;}\\ \multicolumn{2}{@{}l@{}}{}\\[-9pt] $p(A)$&\textcolor{OliveGreen}{% 
~~~~~~~~{\RB}}\\ \cline{1-1} \multicolumn{2}{@{}l@{}}{}\\[-9pt] $A\!<\!B$&\textcolor{OrangeRed}{% ~~~~~~{\RB}}\\ \cline{1-1} \multicolumn{2}{@{}l@{}}{}\\[-9pt] $p(B)$&\textcolor{Fuchsia}{% ~~~~~~p1~+=~size{\U}of{\U}p;~//~advance~pointer~past~seen~fact}\\ &\textcolor{Fuchsia}{% ~~~~{\RB}}\\ \cline{1-1} \multicolumn{2}{@{}l@{}}{}\\[-9pt] $q(A)$&\textcolor{MidnightBlue}{% ~~~~q1~+=~size{\U}of{\U}q;~~~//~advance~pointer~past~seen~fact}\\ &\textcolor{MidnightBlue}{% ~~{\RB}}\\ \cline{1-1} \multicolumn{2}{@{}l@{}}{}\\[-9pt] \multicolumn{1}{l}{}&\textcolor{Black}{% ~~return~inserted{\U}facts;}\\ \multicolumn{1}{l}{}&\textcolor{Black}{% {\RB}}\\ \end{tabular} \end{tt} \caption{Compiled Rule $\textcolor{OliveGreen}{p(A)} \leftarrow \textcolor{MidnightBlue}{q(A)}, \textcolor{Fuchsia}{p(B)}, \textcolor{OrangeRed}{A < B}.$} % \label{fig:compiled_rule} \end{figure} For inductive rules instead of writing the fact to the buffer corresponding to the current timestamp, it is written into the buffer corresponding to the following (next) timestamp. IO~literals are compiled ``as is'' according to the rules from Section~\ref{sec:iorules} with their usage replaced by the C-statements they are defined with. \subsection{Compiled Source File} The end result of the compilation process is a C source file that can be compiled to machine code using the Arduino toolchain (for example PlatformIO or the Arduino IDE\footnote{\url{https://www.arduino.cc/en/Main/Software}}) and has the following general format: \begin{figure} \begin{verbatim} // includes // Buffer Declarations // Functions for Buffer Access // Reading and Writing Facts void setup() { // Buffer initialization // Facts for timestamp 0 } void loop() { do { // deductive phase added_facts = false; added_facts |= deductive_rule_1(); ... added_facts |= deductive_rule_i(); } while (added_facts); output_rule_1(); inductive_rule_1(); input_rule_1(); ... 
output_rule_j(); inductive_rule_n(); input_rule_m(); switch_buffers(); } \end{verbatim} \caption{Simplified Outline of Compiled Source File} \label{fig:compiled_outline} \end{figure} \noindent The \texttt{setup} and \texttt{loop} functions are the entry points for the processor. The setup-function is called once when the microcontroller is started and the loop-function is called repeatedly once the setup has finished. The loop function executes all the derivation steps in the proper order (see Figure \ref{fig:compiled_outline}). \section{Macro Expansion} \label{sec:macros} In the context of home automation and IoT, some tasks are quite common and need to be accomplished in many projects. Some of these tasks are initialization of sensors, persisting of facts into the EEPROM, or delayed deduction of facts, like one second in the future. To facilitate this, we allow for macro expansion in our programming language. Macros are written in square brackets and are placed in front of a rule. The rule is then rewritten on a syntactic level to accomplish the task. We give two macros as an example: \begin{itemize} \item The setup-macro rewrites a rule \texttt{[setup]head.} to \texttt{head :- setup.} and adds the fact \texttt{setup@0} that marks the first state $T_0$ with the fact \texttt{setup}. This can be used for initialization of pins and sensors as well as the initial state. \item The \texttt{[delay:1000]} macro (with any integer number) adds rules that deduce the fact that many milliseconds in the future.
The rule \texttt{[delay:X]head(Args) :- body(Args)} is replaced by the following rules: \begin{itemize} \item Initial time fact: \texttt{now(0)@0.} \item Reading current time: \texttt{now(T)@next :- \#millis(T).} \item Deriving the fact that is to be delayed: \\ \texttt{delayed\_head(Args, Curr) :- body(Args), now(Curr).} \item Deriving the delayed fact when the delay time is reached: \\ \texttt{head(Args) :- delayed\_head(Args, Await),\\ \hphantom{head(Args) :-} now(Curr), Await+X <= Curr.} \item The rule that transports the delay forward if the time is not yet reached:\\ \texttt{delayed\_head(Args, Await)@next :- delayed\_head(Args, Await),\\ \hphantom{delayed\_head(Args, Await)@next :- }now(Curr), Await+X > Curr.} \item New predicate declarations: ~~~~~~~ \texttt{.decl now(unsigned long)} \\ \texttt{.decl delayed\_head(<former arguments>, unsigned long)} \end{itemize} \end{itemize} \noindent These macros decrease the program size, and using them we can now show the final and very concise version (without the declarations) of our blink-program. Note that the expanded version is slightly different from the hand-crafted version of the blink-program but behaves the same. \begin{figure} \begin{minipage}[t]{0.5\textwidth} \begin{verbatim} [setup]#pinOut(13). [delay:1000]turn_on :- turn_off. #digitalWrite(13, #HIGH) :- turn_on. \end{verbatim} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{verbatim} [setup]turn_off. [delay:1000]turn_off :- turn_on. #digitalWrite(13, #LOW) :- turn_off. \end{verbatim} \end{minipage} \caption{Concise Blink-Program using Macros} \end{figure} \section{Conclusion} We have shown that programs for Arduino and similar microcontroller systems can be written in a declarative logic language with few restrictions using a slightly altered version of the Datalog dialect Dedalus. Effectful operations are introduced by defining an evaluation scheme where local termination still holds.
The Dedalus approach seems useful: it not only captures a notion of state changes during the execution in an interactive environment, the captured notion of time also allows us to use IO~functions depending on facts corresponding to the state we consider as ``now''. Then we have presented a straightforward translation scheme from our program code to Arduino-C that integrates well with existing library functions. While the generated code corresponds to a naive evaluation scheme, it is not algorithmically complex and does not use too much of the available program memory. Additionally we showed a method for code expansion that extends the usefulness of our language by autogenerating boilerplate code. This means that introductory examples of Arduino programs written in Datalog are as easy as, if not easier than, the equivalent C~programs. There are still a few open questions and areas for further research. Is it useful to apply transformations like magic sets, SLDmagic~\cite{DBLP:conf/cl/Brass00}, or our Push method~\cite{DBLP:conf/ershov/BrassS17} with the IO~rules as query goals? How well do other Datalog optimization and compilation schemes work with the limited operating memory? With a focus on the physical aspects of specific boards, can we analyze the program to find pins that are used as input but defined as output and vice versa? Can we identify otherwise incorrectly used system resources like pins that might be set differently multiple times in the same state, or facts that may not co-occur in the same state (like \texttt{led\_on} and \texttt{led\_off})? The initial state of the program is known beforehand (there is no dynamic database for EDB facts) and parameters are only introduced through input rules. Can a set of possible states for the application, parameterized in the arguments of the facts, be calculated beforehand and used as program state instead of a general purpose fact storage?
Since the memory on the chip is severely limited, can we give an upper bound on the number of facts deduced for every timestamp (e.g. the amount of memory needed for the runtime system) using functional dependency analysis for derived predicates~\cite{DBLP:conf/lopstr/EngelsBB17}? If this were known during compilation, the buffers could be sized appropriately and automatically. How quickly can the minimal model for a state be deduced, and can we give upper and lower bounds for the duration of one timestamp? The last two questions are especially interesting with regards to real-time applications and safety and liveness properties of embedded systems. \section{Related Work} \bibliographystyle{splncs03}
\section{Introduction and formulation of the problem}\label{intro} We study slightly compressible fluid flows in rotating porous media. The fluid has density $\rho$, velocity $v$, and pressure $p$. The porous medium is rotated with a constant angular velocity $\tilde\Omega \vec k$, with the unit vector $\vec k$ being the axis of rotation, and $\tilde\Omega\ge 0$ being the angular speed. Let $x$ be the coordinate vector of a position in the rotating frame. The equation for fluid flows written in the rotating frame, see e.g. \cite{VadaszBook}, is \begin{equation}\label{Drot} \frac{\mu}{k}v+ c\rho \tilde\Omega \vec k\times v + \rho \tilde\Omega^2 \vec k\times (\vec k\times x)=-\nabla p+\rho \vec g, \end{equation} where $\mu$ is the dynamic viscosity, $k$ the permeability, $c\rho \tilde\Omega \vec k\times v$ the Coriolis acceleration, $\tilde\Omega^2 \vec k\times (\vec k\times x)$ the centripetal acceleration, and $\vec g$ the gravitational acceleration. The basic assumption for equation \eqref{Drot} is that the flows obey Darcy's law \be \frac{\mu}{k}v=-\nabla p+\rho \vec g. \end{equation} However, in many situations, for instance, when the Reynolds number is large, this assumption is invalid. Instead, Forchheimer equations \cite{Forchh1901,Forchheimerbook} are usually used to model the flows in these cases. For example, the two-term Forchheimer's law states that \begin{equation} av+b|v|v=-\nabla p +\rho \vec g, \end{equation} where $a, b>0$ are some physical parameters. (See also Forchheimer's three-term and power laws in, e.g., \cite{MuskatBook,BearBook,NieldBook}.) A general form of the Forchheimer equations, taking into account Muskat's dimension analysis \cite{MuskatBook}, is \begin{equation}\label{Fnr} \sum_{i=0}^N a_i \rho^{\alpha_i} |v|^{\alpha_i} v=-\nabla p +\rho \vec g. \end{equation} Here we focus on the explicit dependence on the density, leaving the dependence on the dynamic viscosity and permeability encoded in the coefficients $a_i$'s.
The interested reader is referred to the books \cite{BearBook,NieldBook,StraughanBook} for more information about the Forchheimer equations and a larger family of Brinkman-Darcy-Forchheimer equations. For their mathematical analysis in the case of incompressible fluids, see e.g. \cite{Zabensky2015a,ChadamQin,Payne1999a,Payne1999b,MTT2016,CKU2006,KR2017} and references therein. For treatments of compressible fluids, see \cite{ABHI1,HI1,HI2,HIKS1,HKP1,HK1,HK2,CHK1,CHK2,CH1,CH2}. It is noted that the Forchheimer flows have drawn much less attention from mathematical research than the Darcy flows, and among the papers devoted to them, the number of those on compressible fluids is much smaller than that on incompressible ones. The equation for rotating flows corresponding to \eqref{Fnr}, written in the rotating frame, is \begin{equation}\label{FM} \sum_{i=0}^N a_i \rho^{\alpha_i} |v|^{\alpha_i} v+ \frac{2\rho \tilde\Omega}{\tilde\phi} \vec k\times v+\rho \tilde\Omega^2 \vec k\times (\vec k\times x)=-\nabla p +\rho \vec g, \end{equation} where $\tilde \phi\in(0,1)$ is the constant porosity. In particular, when $N=1$, the specific Forchheimer two-term law for rotating fluids \cite{Ward64,VadaszBook} is \begin{equation} \frac{\mu}{k} v+ \frac{c_F \rho}{\sqrt{k}}|v|v + \frac{2\rho \tilde\Omega}{\tilde\phi} \vec k\times v+\rho \tilde\Omega^2 \vec k\times (\vec k\times x)=-\nabla p +\rho \vec g, \end{equation} where $c_F$ is the Forchheimer constant. Even in this case, there is no mathematical analysis for compressible fluids in the literature. We make one simplification in \eqref{FM}: replacing $\displaystyle \frac{2\rho \tilde\Omega}{\tilde\phi} \vec k\times v$ with \begin{equation}\label{sim} \frac{2\rho_* \tilde\Omega}{\tilde\phi} \vec k\times v, \quad \rho_*=const.\ge 0.
\end{equation} We then approximate equation \eqref{FM} by \begin{equation}\label{FMR} \sum_{i=0}^N a_i \rho^{\alpha_i} |v|^{\alpha_i} v+ \mathcal R \vec k\times v+\rho \tilde\Omega^2 \vec k\times (\vec k\times x)=-\nabla p +\rho \vec g, \end{equation} where \begin{equation}\label{Rdef} \mathcal R=\frac{2\rho_* \tilde\Omega}{\tilde\phi}=const.\ge 0. \end{equation} Let $g:\mathbb{R}^+\rightarrow\mathbb{R}^+$ be a generalized polynomial defined by \begin{equation}\label{eq2} g(s)=a_0 + a_1s^{\alpha_1}+\cdots +a_Ns^{\alpha_N}=\sum_{i=0}^N a_i s^{\alpha_i}\quad\text{for } s\ge 0, \end{equation} where $N\ge 1$ is an integer, the powers $\alpha_0=0<\alpha_1<\alpha_2<\ldots<\alpha_N$ are real numbers, and the coefficients $a_0,a_N>0$ and $a_1,a_2,\ldots,a_{N-1}\ge 0$ are constants. Then equation \eqref{FMR} can be rewritten as \begin{equation}\label{FM1} g(\rho|v|) v+ \mathcal R \vec k\times v=-\nabla p +\rho \vec g - \rho \tilde\Omega^2 \vec k\times (\vec k\times x). \end{equation} Multiplying both sides of \eqref{FM1} by $\rho$ gives \begin{equation}\label{neweq1} g(|\rho v|) \rho v + \mathcal R \vec k\times (\rho v) =-\rho\nabla p+ \rho^2 \vec g- \rho^2 \tilde\Omega^2 \vec k\times (\vec k\times x). \end{equation} We will solve for $\rho v$ from \eqref{neweq1}, which is possible thanks to the following lemma. \begin{lemma}\label{Fsolve} Given any vector $k\in\mathbb R^3$, the function $F_0(v)\stackrel{\rm def}{=} g(|v|)v + k\times v$ is a bijection from $\mathbb R^3$ to $\mathbb R^3$. \end{lemma} \begin{proof} Note that $F_0$ is a continuous function on $\mathbb R^3$ and $$ \frac{F_0(v)\cdot v}{|v|}=\frac{g(|v|)|v|^2}{|v|}\to\infty \text{ as }|v|\to\infty.$$ Then it is well-known that $F_0(\mathbb R^3)=\mathbb R^3$, see e.g. \cite[Theorem 3.3]{Deimling2010}. It remains to prove that $F_0$ is one-to-one. Let $v, w\in \mathbb R^3$. 
We have \begin{equation*} \begin{split} (F_0(v)-F_0(w))\cdot (v-w) &= (g(|v|)v-g(|w|)w)\cdot (v-w)+ [k\times (v-w)]\cdot(v-w)\\ &=(g(|v|)v-g(|w|)w)\cdot (v-w) \\ &= a_0|v-w|^2+\sum_{i=1}^N a_i (|v|^{\alpha_i}v-|w|^{\alpha_i} w)\cdot (v-w). \end{split} \end{equation*} By applying \cite[Lemma 4.4, p.~13]{DiDegenerateBook} to each $(|v|^{\alpha_i}v-|w|^{\alpha_i} w)\cdot (v-w)$, for $i\ge 1$, we then obtain \begin{equation}\label{Fmono} (F_0(v)-F_0(w))\cdot (v-w)\ge a_0|v-w|^2+\sum_{i=1}^N C_i a_i |v - w|^{\alpha_i+2}, \end{equation} where $C_i>0$ depends on $\alpha_i$ and $a_i$, for $i=1,\ldots, N.$ If $F_0(v)=F_0(w)$, it follows from the monotonicity \eqref{Fmono} that \begin{equation*} 0=(F_0(v)-F_0(w))\cdot (v-w)\ge a_0 |v - w|^2, \end{equation*} which implies $v=w$. \end{proof} From now on, fix the vector $\vec k=(k_1,k_2,k_3)$ with $k_1^2+k_2^2+k_3^2=1$. We denote by ${\mathbf J} $ the $3\times 3$ matrix representing the rotation $\vec k\times$, that is, ${\mathbf J} x=\vec k\times x$ for all $x\in\mathbb R^3$. Explicitly, we have \begin{equation}\label{JR} {\mathbf J} =\begin{pmatrix} 0 & -k_3 & k_2\\ k_3 & 0 & -k_1 \\ -k_2& k_1 & 0 \end{pmatrix} \text{ and } {\mathbf J}^2=\begin{pmatrix} k_1^2-1&k_1 k_2 & k_1 k_3\\ k_1 k_2&k_2^2-1&k_2 k_3\\ k_1 k_3& k_2 k_3&k_3^2-1 \end{pmatrix}. \end{equation} \begin{definition}\label{FX} Throughout the paper, the function $g$ in \eqref{eq2} is fixed. We define the function $F:\mathbb R^3\to\mathbb R^3$ by \begin{equation}\label{Fdef} F(v)= g(|v|)v + \mathcal R {\mathbf J} v \quad \text{ for }v\in\mathbb R^3, \end{equation} and denote its inverse function, which exists thanks to Lemma \ref{Fsolve}, by \begin{equation} \label{Xdef} X=F^{-1}. \end{equation} \end{definition} Since $F$ is odd, so is $X$. Returning to equation \eqref{neweq1}, we can invert to obtain \begin{equation}\label{new2} \rho v= -X(\rho\nabla p - \rho^2 \vec g + \rho^2 \tilde\Omega^2 {\mathbf J}^2 x).
\end{equation} The equation of state for (isothermal) slightly compressible fluids is \begin{equation}\label{slight} \frac{1}{\rho} \frac{d\rho}{dp}=\kappa, \quad \text{where the constant compressibility } \kappa>0 \text { is small}. \end{equation} Using \eqref{slight}, we write \eqref{new2} as \begin{equation}\label{rveq} \rho v= -X(\kappa^{-1} \nabla \rho - \rho^2 \vec g+ \rho^2 \tilde\Omega^2 {\mathbf J}^2 x). \end{equation} Consider the equation of continuity \begin{equation}\label{eq5} \tilde \phi\frac{\partial \rho}{\partial t} +\nabla\cdot(\rho v)=0, \end{equation} where $t$ is the time variable. Combining \eqref{eq5} with \eqref{rveq} gives \begin{equation}\label{eq0} \tilde \phi\frac{\partial \rho}{\partial t}=\nabla\cdot(X(\kappa^{-1} \nabla \rho - \rho^2 \vec g+ \rho^2 \tilde \Omega^2 {\mathbf J}^2 x)). \end{equation} In the rotating frame, the gravitational field $\vec g$ becomes $\vec g(t)=\tilde{\mathcal G} e_0(t)$, with the unit vector \begin{equation} e_0(t)=(- \sin \theta \cos(\tilde\Omega t+\omega_0), -\sin\theta \sin(\tilde\Omega t+\omega_0), \cos \theta) , \end{equation} where $\theta\in[0,\pi]$ is the fixed angle between $\vec g$ and $\vec k$, the number $\omega_0=const.$, and $\tilde{\mathcal G}>0$ is the gravitational constant. We make a simple change of variable $u=\rho/\kappa$. Then we obtain from \eqref{eq0} the partial differential equation (PDE) \begin{equation}\label{ueq} \phi u_t= \nabla \cdot\Big (X\big(\nabla u+u^2 \mathcal Z(x,t)\big)\Big), \end{equation} where \begin{equation}\label{Zx} \mathcal Z(x,t)=-\mathcal G e_0(t)+ \Omega^2 {\mathbf J}^2 x, \end{equation} \begin{equation} \phi =\kappa\tilde \phi,\quad \mathcal G=\kappa^2\tilde{\mathcal G},\quad \Omega=\kappa \tilde \Omega. \end{equation} To reduce the complexity in our mathematical treatment, hereafter, we consider the involved parameters and all equations to be non-dimensional. This is allowed by using appropriate scalings. 
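The inverse $X=F^{-1}$ in \eqref{Xdef} has no closed form in general, but it is straightforward to compute numerically. The following Python sketch inverts $F$ by Newton's method using the explicit derivative of $F$; the coefficients $a_0,a_1,\alpha_1$, the value of $\mathcal R$, and the axis $\vec k=(0,0,1)$ are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Illustrative data (assumed): g(s) = a0 + a1*s**alpha1, axis k = e3.
a0, a1, alpha1, R = 1.0, 0.5, 1.0, 0.3
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])   # matrix of k x (.) for k = (0, 0, 1)

def g(s):
    return a0 + a1 * s**alpha1

def F(v):
    return g(np.linalg.norm(v)) * v + R * (J @ v)

def X(y, tol=1e-12):
    """Invert F(v) = y by Newton's method with the explicit Jacobian of F."""
    v = np.array(y, dtype=float)
    for _ in range(100):
        s = np.linalg.norm(v)
        Fp = g(s) * np.eye(3) + R * J
        if s > 0.0:
            # d/dv [g(|v|) v] contributes g'(|v|) v v^T / |v|
            Fp = Fp + a1 * alpha1 * s**(alpha1 - 1.0) * np.outer(v, v) / s
        step = np.linalg.solve(Fp, F(v) - y)
        v = v - step
        if np.linalg.norm(step) < tol:
            break
    return v

y = np.array([1.0, -2.0, 0.5])
print(np.allclose(F(X(y)), y))    # X is a right inverse of F
print(np.allclose(X(-y), -X(y)))  # X is odd, as F is
```

The Newton iteration converges quickly here because the symmetric part of the Jacobian is uniformly positive definite, in line with Lemma \ref{Fsolve}.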
In this paper, we study the initial and boundary value problem (IBVP) for equation \eqref{ueq}. More specifically, let $U$ be an open, bounded set in $\mathbb R^3$ with $C^1$ boundary $\Gamma=\partial U$. We study the following problem \begin{equation}\label{ibvpg} \begin{aligned} \begin{cases} \phi u_t=\nabla\cdot \Big(X\big(\nabla u+u^2 \mathcal Z(x,t)\big)\Big)\quad &\text{in}\quad U\times (0,\infty)\\ u(x,0)=u_0(x) \quad &\text{in}\quad U\\ u(x,t)= \psi(x,t)\quad &\text{on}\quad \Gamma\times (0,\infty), \end{cases} \end{aligned} \end{equation} where the initial data $u_0(x)$ and the Dirichlet boundary data $\psi(x,t)$ are given. We will focus on the mathematical analysis of problem \eqref{ibvpg}. We obtain various estimates of the solution in terms of the initial and boundary data. These estimates show how the solutions, in space and time, can be controlled by the initial and boundary data. We emphasize that the dependence on the problem's key parameters, including the angular speed of rotation, is expressed explicitly in our results. The paper is organized as follows. In section \ref{Prem}, we establish basic properties of the function $X$ which are crucial to our understanding of problem \eqref{ibvpg}. They reveal the nonlinear structure and the degeneracy of the nonlinear parabolic equation \eqref{ueq}. Moreover, they have explicit dependence on the physical parameters, which, as stated above, is an important goal of this paper. In section \ref{maxsec}, we prove the maximum principle for non-negative solutions of equation \eqref{ueq} in Theorem \ref{maxpr}. Using this, we derive the maximum estimates for non-negative solutions of the IBVP \eqref{ibvpg} in Corollary \ref{maxcor}. Section \ref{Gradprep} contains the Lady\v{z}enskaja--Ural$'$ceva-type embedding, Theorem \ref{LUembed}, with the weight $K[w,Q]$ which is related to the type of degeneracy of the nonlinear PDE \eqref{ueq}.
This is one of the key tools in obtaining higher integrability for the gradient later. In section \ref{L2asub}, we establish the estimate for the $L^{2-a}_{x,t}$-norm of the gradient in Theorem \ref{L2apos}. This is done through the $\mathcal K$-weighted $L^2$-norm first, see Proposition \ref{L2a}, and then by interpreting the weight $\mathcal K$. In section \ref{gradsec}, we estimate the $L^{s}_{x,t}$-norms of the gradient, which are interior in the spatial variables, for any finite number $s>2-a$. Specifically, we obtain estimates for $2-a<s\le 4-a$ in subsection \ref{L4ma}, and for $s>4-a$ in subsection \ref{Lhigher}. We use the iteration method by Lady\v{z}enskaja and Ural$'$ceva \cite{LadyParaBook68}. This is a classical technique but, with suitable modifications based on the structure of equation \eqref{ueq}, applies well to our complicated nonlinear PDE. Moreover, it is sufficiently explicit to allow us to track all the necessary constants. Section \ref{maxintime} is devoted to the estimates for the gradient's $L^\infty_tL^s_x$-norms, which, of course, are stronger norms than those in the previous section. It is worth mentioning that the derived estimates in this paper are already complicated; therefore, we strive to make them coherent, and hence more digestible, rather than sharp. Concerning the simplification \eqref{sim}, it is a common strategy when encountering a new nonlinear problem. As presented above, it allows us to formulate the whole fluid dynamics system as a scalar parabolic equation \eqref{ueq}. Such an approximation, usually with some average density $\rho_*$, makes the problem much more accessible, while still giving insight into the flows' behavior. More importantly, this approach paves the way to analyze the full model.
Indeed, in the general case, the $\mathcal R$ in \eqref{Rdef} becomes $\mathcal R(u)$, and the PDE \eqref{ueq} becomes \begin{equation}\label{unext} \phi u_t= \nabla \cdot\Big (X\big(u,\nabla u+u^2 \mathcal Z(x,t)\big)\Big), \end{equation} with $X(u, y)$ defined in the same way as \eqref{Xdef}. Therefore, we can reduce the fluid dynamics system to a scalar PDE again. Furthermore, the properties of the function $X$ established in subsection \ref{Xpropsec} with explicit dependence on $\mathcal R=\mathcal R(u)$, and other $X$-related results in section \ref{Gradprep} will play fundamental roles in understanding the structure of the PDE \eqref{unext}. This will be pursued and reported in a sequel (part II) of this paper. Finally, in spite of our focus on slightly compressible fluids, the techniques developed in the current paper can be combined with those in our previous work \cite{CHK1,CHK2} to model and analyze other types of compressible fluid flows such as the rotating isentropic flows for gases. \section{Preliminaries}\label{Prem} This section contains prerequisites and basic results on the function $X$. \subsection{Notation and elementary inequalities} A vector $x\in\mathbb R^3$ is denoted by a $3$-tuple $(x_1,x_2,x_3)$ and considered as a column vector, i.e., a $3\times 1$ matrix. Hence $x^{\rm T}$ is the $1\times 3$ matrix $(x_1\ x_2\ x_3)$. For a function $f=(f_1,f_2,\ldots,f_m):\mathbb R^n\to\mathbb R^m$, its derivative is the $m\times n$ matrix \begin{align*} f'=Df=\Big(\frac{\partial f_i}{\partial x_j}\Big)_{1\le i\le m,\,1\le j\le n}.
\end{align*} In particular, when $n=3$ and $m=1$, i.e., $f:\mathbb R^3\to \mathbb R$, the derivative is $$f'=Df=(f_{x_1}\ f_{x_2}\ f_{x_3}),$$ while its gradient vector is $$\nabla f=(f_{x_1},f_{x_2},f_{x_3})=(Df)^{\rm T}.$$ The Hessian matrix is $$D^2 f\stackrel{\rm def}{=} D(\nabla f)= \Big(\frac{\partial^2 f}{\partial x_j \partial x_i} \Big)_{i,j=1,2,3}.$$ Interpolation inequality for integrals: \begin{equation}\label{Lpinter} \int |f|^sd\mu\le \Big(\int |f|^p d\mu\Big)^\frac{q-s}{q-p}\Big(\int |f|^q d\mu\Big)^\frac{s-p}{q-p}\text{ for }0<p<s<q. \end{equation} \medskip For two vectors $x,y\in \mathbb R^3$, their dot product is $x\cdot y=x^{\rm T}y=y^{\rm T}x$, while $xy^{\rm T}$ is the $3\times 3$ matrix $(x_iy_j)_{i,j=1,2,3}$. Let $\mathbf A=(a_{ij})$ and $\mathbf B=(b_{ij})$ be any $3\times 3$ matrices of real numbers. Their inner product is \begin{equation*} \mathbf A:\mathbf B\stackrel{\rm def}{=} {\rm trace}(\mathbf A\mathbf B^{\rm T})=\sum_{i,j=1}^3 a_{ij}b_{ij}. \end{equation*} The Euclidean norm of the matrix $\mathbf A$ is $$|\mathbf A|=(\mathbf A:\mathbf A)^{1/2}=(\sum_{i,j=1}^3 a_{ij}^2)^{1/2}.$$ (Note that we do not use $|\mathbf A|$ to denote the determinant in this paper.) When $\mathbf A$ is considered as a linear operator, another norm is defined by \begin{equation}\label{opnorm} |\mathbf A|_{\rm op}=\max\Big\{ \frac{|Ax|}{|x|}:x\in\mathbb R^3,x\ne 0\Big\} = \max\{ |Ax|:x\in\mathbb R^3,|x|=1\}. \end{equation} We have the following inequalities \begin{align} \label{mm0} |\mathbf A y|&\le |\mathbf A|\cdot |y| \quad\text{ for any }y\in\mathbb R^3,\\ \label{mop} |\mathbf A y|&\le |\mathbf A|_{\rm op}\cdot |y|\quad \text{ for any }y\in\mathbb R^3,\\ \label{mmi} |\mathbf A \mathbf B|&\le |\mathbf A|\cdot |\mathbf B| \end{align} It is also known that \begin{equation}\label{nnorms} |\mathbf A|_{\rm op}\le |\mathbf A|\le c_*|\mathbf A|_{\rm op}, \end{equation} where $c_*$ is a positive constant independent of $\mathbf A$. 
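For $3\times 3$ matrices one may take $c_*=\sqrt 3$ in \eqref{nnorms}, since $|\mathbf A|^2$ is the sum of the three squared singular values while $|\mathbf A|_{\rm op}$ is the largest one. A randomized Python spot check of \eqref{mm0}--\eqref{nnorms} (illustrative only; the random samples are assumptions, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
ok = True
for _ in range(300):
    A = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3))
    y = rng.normal(size=3)
    fro = np.linalg.norm(A)       # Euclidean (Frobenius) norm |A|
    op = np.linalg.norm(A, 2)     # operator norm |A|_op (largest singular value)
    ok &= np.linalg.norm(A @ y) <= fro * np.linalg.norm(y) + 1e-9   # (mm0)
    ok &= np.linalg.norm(A @ y) <= op * np.linalg.norm(y) + 1e-9    # (mop)
    ok &= np.linalg.norm(A @ B) <= fro * np.linalg.norm(B) + 1e-9   # (mmi)
    ok &= op <= fro + 1e-9                                          # (nnorms), lower part
    ok &= fro <= np.sqrt(3.0) * op + 1e-9                           # (nnorms), c_* = sqrt(3)
print(ok)
```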
In particular, for matrix ${\mathbf J}$, we observe, for any $x\in\mathbb R^3$, that \begin{equation}\label{Jxineq} |{\mathbf J} x|\le |\vec k|\,|x|=|x|\quad\text{ and }\quad |{\mathbf J}^2x|\le |{\mathbf J} x|\le |x|. \end{equation} By choosing $x\ne 0$ perpendicular to $\vec k$, we conclude, for the norm \eqref{opnorm}, that $$|{\mathbf J}|_{\rm op}=|{\mathbf J}^2|_{\rm op}=1.$$ For the Euclidean norm, we have, from explicit formulas in \eqref{JR}, that \begin{equation}\label{Jnorm} |{\mathbf J}|^2=2|\vec k|^2=2, \end{equation} \begin{equation}\label{J2norm} |{\mathbf J}^2|^2=3-2|\vec k|^2+|\vec k|^4=2. \end{equation} \medskip We recall below some more elementary inequalities that will be used in this paper. First, \begin{equation}\label{ee3} \frac{x^p+y^p}2\le (x+y)^p\le 2^{(p-1)^+}(x^p+y^p)\quad \text{for all } x,y\ge 0,\quad p>0, \end{equation} where $z^+=\max\{z, 0\}$ for any $z\in\mathbb R$. Particularly, \begin{equation}\label{ee2} (x+y)^p\le 2^p(x^p+y^p)\quad \text{for all } x,y\ge 0,\quad p>0, \end{equation} \begin{equation}\label{ee4} x^\beta \le x^\alpha+x^\gamma\quad \text{for all } x\ge 0,\quad 0\le \alpha\le \beta\le\gamma, \end{equation} \begin{equation}\label{ee5} x^\beta \le 1+x^\gamma \quad \text{for all } x\ge 0,\quad 0\le \beta\le\gamma. \end{equation} By the triangle inequality and the second inequality of \eqref{ee3}, we have \begin{equation}\label{ee6} |x-y|^p\ge 2^{-(p-1)^+}|x|^p-|y|^p \quad \text{for all } x,y\in\mathbb R^n,\quad p>0. \end{equation} \subsection{Properties of the function $X$}\label{Xpropsec} It is obvious that the structure of the parabolic equation \eqref{ueq} depends greatly on the properties of the function $X$. Thus, this subsection is devoted to studying $X$. Recall that the functions $F$ and $X$ are defined in Definition \ref{FX}.
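The norm identities for ${\mathbf J}$ stated above, \eqref{Jxineq}--\eqref{J2norm}, can be confirmed numerically for any unit axis; a quick Python check (the particular axis and test vector below are arbitrary assumed examples):

```python
import numpy as np

k = np.array([1.0, 2.0, 2.0]) / 3.0                # a unit axis, |k| = 1
J = np.array([[0.0, -k[2], k[1]],
              [k[2], 0.0, -k[0]],
              [-k[1], k[0], 0.0]])                 # matrix of k x (.) from (JR)

x = np.array([0.3, -1.2, 0.7])
print(np.allclose(J @ x, np.cross(k, x)))          # J x = k x x
print(np.linalg.norm(J @ x) <= np.linalg.norm(x))  # (Jxineq)
print(np.isclose(np.sum(J**2), 2.0))               # (Jnorm):  |J|^2 = 2
print(np.isclose(np.sum((J @ J)**2), 2.0))         # (J2norm): |J^2|^2 = 2
print(np.isclose(np.linalg.norm(J, 2), 1.0))       # |J|_op = 1
print(np.isclose(np.linalg.norm(J @ J, 2), 1.0))   # |J^2|_op = 1
```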
Throughout the paper, we denote \begin{equation} \label{adef} a=\frac{\alpha_N}{1+\alpha_N}\in(0,1), \end{equation} \begin{equation}\label{defxione} \chi_0=g(1)=\sum_{i=0}^N a_i\text{ and }\chi_1=g(1)+\mathcal R=\chi_0+\mathcal R. \end{equation} \begin{lemma}\label{lem21} {\rm (i)} One has \begin{equation}\label{X0} \frac{c_1\chi_1^{-1}|y|}{(1+|y|)^a}\le |X(y)|\le \frac{c_2\chi_1^a|y|}{(1+|y|)^a} \text{ for all }y\in \mathbb R^3, \end{equation} where $c_1=\min\{1,\chi_0\}^a $ and $c_2=2^a c_1^{-1}\min\{a_0,a_N\}^{-1}$. Alternatively, \begin{equation}\label{X1} \chi_1^{-(1-a)}|y|^{1-a}-1\le |X(y)|\le c_3|y|^{1-a} \text{ for all }y\in \mathbb R^3, \end{equation} where $c_3=(a_N)^{a-1}$. {\rm (ii)} One has \begin{equation}\label{X2} \frac{c_4 \chi_1^{-2} |y|^2}{(1+|y|)^a} \le X(y)\cdot y\le \frac{c_2 \chi_1^a |y|^2}{(1+|y|)^a} \text{ for all }y\in \mathbb R^3, \end{equation} where $c_4=(\min\{1,a_0,a_N\}/2^{\alpha_N})^{1+a}$. Alternatively, \begin{equation}\label{X3} c_5\chi_1^{-2} (|y|^{2-a}-1) \le X(y)\cdot y\le c_3|y|^{2-a}\text{ for all }y\in \mathbb R^3, \end{equation} where $c_5=2^{-a}c_4$. \end{lemma} \begin{proof} Let $y\in\mathbb R^3$ and $v=X(y)$. Then, by \eqref{Xdef}, \begin{equation}\label{Fvy} F(v)=y.\end{equation} (i) Since $g(|v|)v$ and $\vec k\times v$ are orthogonal, we have from \eqref{Fdef} and \eqref{Fvy} that \begin{equation*} |y|^2 = g(|v|)^2|v|^2+\mathcal R^2|{\mathbf J} v|^2. \end{equation*} This and \eqref{Jxineq} show that \begin{equation*} g(|v|)^2|v|^2\le |y|^2\le (g(|v|)^2+\mathcal R^2)|v|^2. \end{equation*} Thus, \begin{equation}\label{gf1} g(|v|)|v|\le |y|\le (g(|v|)+\mathcal R)|v|. \end{equation} \textit{Proof of \eqref{X1}.} From the first inequality in \eqref{gf1}, \begin{equation}\label{vf} |y| \ge g(|v|)|v|\ge a_N |v|^{\alpha_N+1}, \text{ which implies }|v|\le (a_N^{-1} |y|)^\frac{1}{\alpha_N+1}=c_3|y|^{1-a}. \end{equation} So we obtain the second inequality of \eqref{X1}.
From \eqref{gf1}, \begin{equation}\label{fov} |y|\le \chi_1(1+|v|)^{\alpha_N+1}. \end{equation} Then we obtain the first inequality in \eqref{X1}. \textit{Proof of \eqref{X0}.} Since $X(0)=0$, we consider only $y,v\ne 0$. \textit{Case $|v|>1$.} By \eqref{gf1}, $|y|> g(1)\cdot 1=\chi_0$. Furthermore, by \eqref{vf}, \begin{equation}\label{v1} |v|\le (a_N^{-1}|y|)^{1-a}=\frac{2^a a_N^{a-1}|y|}{(|y|+|y|)^a}\le \frac{2^a \chi_1^a a_N^{-1}|y|}{(|y|+\chi_0)^a}\le \frac{2^a a_N^{-1}\chi_1^a|y|}{(\min\{1,\chi_0\})^a(|y|+1)^a}. \end{equation} On the other hand, we have from \eqref{gf1} that $|y|\le (g(1)|v|^{\alpha_N}+\mathcal R)|v|\le \chi_1 |v|^{\alpha_N+1}$. Then \begin{equation}\label{v2} |v|\ge (\chi_1^{-1}|y|)^{1-a}\ge \frac{\chi_1^{a-1}|y|}{|y|^a}\ge \frac{\chi_0^a \chi_1^{-1}|y|}{(|y|+1)^a}. \end{equation} \textit{Case $0<|v|\le 1$.} It follows from \eqref{gf1} that $a_0 |v|\le |y|\le \chi_1 |v|\le \chi_1$. Thus, \begin{align}\label{v3} |v|&\le a_0^{-1} |y|=\frac{a_0^{-1}|y|(|y|+\chi_0)^a}{(|y|+\chi_0)^a}\le \frac{a_0^{-1}|y|(\chi_1+\chi_0)^a}{(|y|+\chi_0)^a} \le \frac{ a_0^{-1}(2\chi_1)^a|y|}{(\min\{1,\chi_0\})^a(|y|+1)^a},\\ \label{v4} |v|&\ge \chi_1^{-1} |y|\ge \chi_1^{-1} |y| \frac{1}{(|y|+1)^a}. \end{align} From \eqref{v1} and \eqref{v3}, we obtain the second inequality in \eqref{X0}. From \eqref{v2} and \eqref{v4}, we obtain the first inequality in \eqref{X0}. (ii) Note that the second inequality of \eqref{X2}, respectively \eqref{X3}, follows from the Cauchy-Schwarz inequality and the second inequality of \eqref{X0}, respectively \eqref{X1}. Thus, we focus on proving the first inequalities of \eqref{X2} and \eqref{X3}. We have from \eqref{Fdef} and \eqref{gf1} that \begin{equation*} X(y)\cdot y=v\cdot F(v)=g(|v|)|v|^2 \ge \frac{g(|v|)|y|^2}{(g(|v|)+\mathcal R)^2} .
\end{equation*} We estimate \begin{align} \label{g2} g(|v|)+\mathcal R &\le \chi_1(1+|v|)^{\alpha_N},\\ \intertext{and, by using \eqref{ee2},} \label{g1} g(|v|) &\ge \min\{a_0,a_N\} (1+|v|^{\alpha_N}) \ge \min\{a_0,a_N\} 2^{-\alpha_N}(1+|v|)^{\alpha_N}. \end{align} Hence, \begin{align*} \frac{g(|v|)}{(g(|v|)+\mathcal R)^2} \ge \frac{\min\{a_0,a_N\} 2^{-\alpha_N}}{\chi_1^2(1+|v|)^{\alpha_N}}. \end{align*} Note, by \eqref{ee3}, that we also have \begin{equation*} 1+|y|\ge 1+g(|v|)|v|\ge \min\{1,a_N\}(1+|v|^{\alpha_N+1}) \ge \min\{1,a_N\}2^{-\alpha_N}(1+|v|)^{\alpha_N+1}. \end{equation*} Then, \begin{equation}\label{vy} (1+|v|)^{\alpha_N} \le (1+|y|)^a (2^{\alpha_N}/\min\{1,a_N\})^a. \end{equation} Therefore, \begin{equation}\label{gg} \frac{g(|v|)}{(g(|v|)+\mathcal R)^2} \ge \frac{\min\{a_0,a_N\} 2^{-\alpha_N}}{\chi_1^2(1+|y|)^a}(\min\{1,a_N\}2^{-\alpha_N})^a\ge \frac{c_4 \chi_1^{-2}}{(1+|y|)^a}. \end{equation} Hence we obtain the first inequality in \eqref{X2}. Then the first inequality of \eqref{X3} follows from this by considering $|y|\le 1$ and $|y|>1$ separately. \end{proof} \begin{remark} Compared to \eqref{X1}, the inequality \eqref{X0} indicates that $X(y)\to0$ as $|y|\to0$ at the rate $|y|$, while $|X(y)|\to\infty$ as $|y|\to\infty$ at a different rate $|y|^{1-a}$. This refined form \eqref{X0} and its proof originate from \cite[Lemma 2.1]{CHIK1}. \end{remark} \begin{lemma}\label{Fpinv} The function $F$ belongs to $C^1(\mathbb R^3)$, and the derivative matrix $F'(v)$ is invertible for each $v\in\mathbb R^3$. Consequently, $X\in C^1(\mathbb R^3)$ and $X'(y)$ is invertible for each $y\in\mathbb R^3$. \end{lemma} \begin{proof} Elementary calculations show that \begin{align} \label{Fprime} F'(v)&=g'(|v|)\frac{v v^{\rm T}}{|v|}+g(|v|)\mathbf I_3+ \mathcal R {\mathbf J}\text{ for }v\ne 0,\\ \label{Fp0} F'(0)&=g(0)\mathbf I_3+\mathcal R {\mathbf J}. \end{align} Clearly, $F'(v)$ is continuous at $v\ne 0$, and $F'(v)\to F'(0)$ as $v\to 0$. Therefore, $F\in C^1(\mathbb R^3)$.
For $z\in \mathbb R^3$, we have \begin{align*} z^{\rm T} F'(v) z &= \Big(g'(|v|)\frac{(z^{\rm T}v)^2}{|v|}+g(|v|)|z|^2\Big)\text{ for }v\ne 0,\\ z^{\rm T} F'(0) z&=g(0)|z|^2. \end{align*} Since $g'(s)>0$ for $s>0$, it follows that \begin{equation}\label{Fzz} z^{\rm T} F'(v) z \ge g(|v|)|z|^2 \text{ for all } v,z\in\mathbb R^3. \end{equation} Let $v\in\mathbb R^3$. If $F'(v)z=0$, then $0=z^{\rm T} F'(v)z\ge g(|v|)|z|^2$, which implies that $z=0$. Hence, $F'(v)$ is invertible. By the Inverse Function Theorem, the statements for $X$ follow from those for $F$. \end{proof} \begin{lemma}\label{Xder} For any $y\in\mathbb R^3$, the derivative matrix $X'(y)$ satisfies \begin{equation}\label{Xprime} c_6\chi_1^{-1}(1+|y|)^{-a}\le |X'(y)|\le c_7(1+\chi_1)^a (1+|y|)^{-a}, \end{equation} \begin{equation}\label{hXh} \xi^{\rm T} X'(y)\xi \ge c_8 \chi_1^{-2}(1+|y|)^{-a} |\xi|^2\text{ for all }\xi\in \mathbb R^3, \end{equation} where \begin{align*} c_6= \sqrt3(2^{-\alpha_N}\min\{1,a_N\})^a/(\alpha_N+2),\quad c_7=c_* 2^{\alpha_N}/\min\{a_0,a_N\},\quad c_8=c_4/(\alpha_N+2)^2. \end{align*} \end{lemma} \begin{proof} Let $y\in\mathbb R^3$. Then, by \eqref{Xdef}, $X'(y)=(F'(v))^{-1}$, with $F(v)=y.$ We first claim that \begin{equation}\label{claim} \frac{c_9}{g(|v|)+\mathcal R}\le |X'(y)|\le \frac{c_*}{g(|v|)}, \end{equation} where $c_9=\sqrt3/(\alpha_N+2)$, and $c_*>0$ is the positive constant in \eqref{nnorms}. Accepting \eqref{claim} for a moment, we prove the inequality \eqref{Xprime}. Observe, by \eqref{fov}, that \begin{equation}\label{g5} 1+|y|\le (\chi_1+1)(1+|v|)^{\alpha_N+1}. \end{equation} On the one hand, \eqref{g1} and \eqref{g5} yield \begin{equation}\label{g3} g(|v|) \ge \min\{a_0,a_N\} 2^{-\alpha_N} (1+|y|)^a/(1+\chi_1)^a. \end{equation} On the other hand, \eqref{g2} and \eqref{vy} give \begin{equation}\label{g4} g(|v|)+\mathcal R\le \chi_1(1+|y|)^a (2^{\alpha_N}/\min\{1,a_N\})^a.
\end{equation} Then, by combining \eqref{claim}, \eqref{g3} and \eqref{g4}, we obtain \eqref{Xprime}. We now prove the claim \eqref{claim}. \medskip \noindent\textit{Proof of the first inequality in \eqref{claim}.} First, we consider $y,v\ne 0$. In \eqref{Fprime}, the matrix $g'(|v|)|v|^{-1}v v^{\rm T}+g(|v|)\mathbf I_3$ is symmetric, while $\mathcal R {\mathbf J}$ is anti-symmetric. Hence they are orthogonal, and, together with \eqref{Jnorm}, we have \begin{equation*} \begin{split} |F'(v)|^2 &=\Big|g'(|v|)\frac{v v^{\rm T}}{|v|}+g(|v|)\mathbf I_3\Big|^2+\mathcal R^2 |{\mathbf J}|^2\\ &={\rm trace}\Big\{\big(g'(|v|)\frac{v v^{\rm T}}{|v|}+g(|v|)\mathbf I_3\big)^2\Big\}+2\mathcal R^2\\ &={\rm trace}\Big( (g'(|v|))^2v v^{\rm T} +2g'(|v|)g(|v|)\frac{v v^{\rm T}}{|v|} +(g(|v|))^2\mathbf I_3\Big)+2\mathcal R^2. \end{split} \end{equation*} Since ${\rm trace}(vv^{\rm T})=|v|^2$ and ${\rm trace}(\mathbf I_3)=3$, we have \begin{align*} |F'(v)|^2 &=(g'(|v|))^2 |v|^2 +2g'(|v|)g(|v|)|v| +3(g(|v|))^2+2\mathcal R^2\\ &=(g'(|v|) |v| +g(|v|))^2 +2g(|v|)^2+2\mathcal R^2. \end{align*} Note that \begin{equation*} g'(|v|)>0 \quad \text{ and }\quad g'(|v|)|v|\le \alpha_N g(|v|). \end{equation*} Then \begin{equation}\label{i1} |F'(v)|^2\le ((\alpha_N+1)^2+2)g(|v|)^2 +2\mathcal R^2 \le (\alpha_N+2)^2 (g(|v|)+\mathcal R)^2. \end{equation} Similarly, we have from \eqref{Fp0} that \begin{equation}\label{i2} |F'(0)|^2=3g(0)^2+ 2\mathcal R^2\le 3 (g(0)+\mathcal R)^2 \le (\alpha_N+2)^2(g(0)+\mathcal R)^2 . \end{equation} From \eqref{i1} and \eqref{i2}, it follows that \begin{equation}\label{FpU} |F'(v)|\le (\alpha_N+2)(g(|v|)+\mathcal R)\text{ for all }v\in \mathbb R^3. \end{equation} By \eqref{mmi}, $$\sqrt 3 = |\mathbf I_3| =| F'(v)X'(y) | \le |F'(v)|\cdot |X'(y)|,$$ which gives \begin{align*} |X'(y)|\ge \frac{\sqrt 3}{ |F'(v)|} \ge \frac{\sqrt3}{(\alpha_N+2)(g(|v|)+\mathcal R) }=\frac{c_9}{g(|v|)+\mathcal R}.
\end{align*} \medskip \noindent\textit{Proof of the second inequality in \eqref{claim}.} For $z\ne 0$, by the Cauchy-Schwarz inequality and \eqref{Fzz}, we have \begin{equation}\label{Fpz} |F'(v)z|\ge \frac{|z^{\rm T} F'(v)z|}{|z|} \ge g(|v|)|z|. \end{equation} For any $\xi\in\mathbb R^3\setminus \{0\}$, applying \eqref{Fpz} with $z=X'(y)\xi$, which is non-zero thanks to $X'(y)$ being invertible, gives \begin{align*} |\xi|\ge g(|v|)|X'(y)\xi|. \end{align*} Thus, the operator norm of $X'(y)$ is bounded by $|X'(y)|_{\rm op}\le 1/g(|v|)$, and then, by relation \eqref{nnorms}, \begin{equation*} |X'(y)|\le c_*|X'(y)|_{\rm op}\le c_*/g(|v|). \end{equation*} \medskip \noindent\textit{Proof of \eqref{hXh}.} Let $u=X'(y)\xi=(F'(v))^{-1}\xi$, which gives $\xi=F'(v)u$. Rewriting $[X'(y)\xi]\cdot \xi$ in terms of $u,v$ and using property \eqref{Fzz}, we have \begin{equation}\label{Xh1} [X'(y)\xi]\cdot \xi= u\cdot [ F'(v)u ]\ge |u|^2 g(|v|)=|X'(y)\xi|^2 g(|v|). \end{equation} From \eqref{mm0} and \eqref{FpU}, for any $\xi\in\mathbb R^3$: \begin{equation*} |\xi|=|F'(v)X'(y)\xi|\le |F'(v)| \cdot |X'(y)\xi|\le (\alpha_N+2) (g(|v|)+\mathcal R)|X'(y)\xi|, \end{equation*} thus \begin{equation}\label{Xh2} |X'(y)\xi|\ge \frac{|\xi|}{(\alpha_N+2)(g(|v|)+\mathcal R)}. \end{equation} Combining \eqref{Xh1}, \eqref{Xh2} and \eqref{gg} yields \begin{align*} \xi^{\rm T}X'(y)\xi&\ge \frac{g(|v|)}{(\alpha_N+2)^2(g(|v|)+\mathcal R)^2} |\xi|^2\ge \frac{c_4\chi_1^{-2} |\xi|^2}{(\alpha_N+2)^2(1+|y|)^a}. \end{align*} This proves \eqref{hXh}. \end{proof} \section{Maximum estimates}\label{maxsec} We will estimate the solutions of \eqref{ibvpg} by the maximum principle.
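Before doing so, we note that the two-sided bound \eqref{X0} of Lemma \ref{lem21} admits a quick numerical spot check. The Python sketch below uses an assumed two-term model $g(s)=a_0+a_1 s$ (so $N=1$ and $a=1/2$) with illustrative coefficients, inverts $F$ by Newton's method, and verifies \eqref{X0} on random samples; it is an illustration, not part of the analysis:

```python
import numpy as np

# Assumed sample model (not from the paper): g(s) = a0 + a1*s, N = 1, k = e3.
a0, a1, alpha1, R = 1.0, 0.5, 1.0, 0.3
aa = alpha1 / (1.0 + alpha1)     # the exponent a in (adef)
chi0 = a0 + a1                   # chi_0 = g(1)
chi1 = chi0 + R                  # chi_1 = g(1) + R
c1 = min(1.0, chi0)**aa
c2 = 2**aa / (c1 * min(a0, a1))  # a_N = a1 here since N = 1
J = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

def g(s): return a0 + a1 * s**alpha1
def F(v): return g(np.linalg.norm(v)) * v + R * (J @ v)

def X(y):
    v = np.array(y, dtype=float)           # Newton iteration for F(v) = y
    for _ in range(100):
        s = np.linalg.norm(v)
        Fp = g(s) * np.eye(3) + R * J
        if s > 0.0:
            Fp = Fp + a1 * alpha1 * s**(alpha1 - 1.0) * np.outer(v, v) / s
        v = v - np.linalg.solve(Fp, F(v) - y)
    return v

rng = np.random.default_rng(0)
ok = True
for _ in range(50):
    y = rng.normal(size=3) * rng.uniform(0.1, 50.0)
    r, x = np.linalg.norm(y), np.linalg.norm(X(y))
    lo = c1 * r / (chi1 * (1.0 + r)**aa)   # lower bound in (X0)
    hi = c2 * chi1**aa * r / (1.0 + r)**aa # upper bound in (X0)
    ok &= lo <= x <= hi
print(ok)
```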
Denote \begin{equation}\label{Phi0} \Phi(x,t)=\nabla u(x,t)+u^2(x,t)\mathcal Z(x,t).\end{equation} We rewrite the PDE in \eqref{ibvpg} in the non-divergence form as \begin{align*} \phi u_t=X'(\Phi):(D\Phi)^{\rm T} &=X'(\Phi):\big(D^2 u + u^2 D\mathcal Z(x,t)+ 2u \mathcal Z(x,t) Du\big)^{\rm T}\\ &=\mathbf A:\big (D^2 u + u^2 \Omega^2 {\mathbf J}^2 + 2u (\mathcal Z(x,t) Du)^{\rm T}\big ), \end{align*} where $\mathbf A=\mathbf A(x,t)= X'(\Phi(x,t))$. We write $\mathbf A=\mathbf S+\mathbf T,$ where $\mathbf S$ and $\mathbf T$ are the symmetric and anti-symmetric parts of $\mathbf A$, i.e., $$\mathbf S=(s_{ij})_{i,j=1,2,3}=\frac12(\mathbf A+\mathbf A^{\rm T})\quad \text{and}\quad \mathbf T=\frac12(\mathbf A-\mathbf A^{\rm T}).$$ Since $D^2u$ is symmetric, we have $\mathbf T:D^2u=0$, hence $\mathbf A:D^2u=\mathbf S :D^2u$. Similarly, ${\mathbf J}^2$ is symmetric, and we have $\mathbf A:{\mathbf J}^2=\mathbf S:{\mathbf J}^2$. Therefore, \begin{equation}\label{usym} \phi u_t=\mathbf S:D^2 u+u^2 \Omega^2 \mathbf S:{\mathbf J}^2 + 2u \mathbf A:(\mathcal Z(x,t) Du)^{\rm T}. \end{equation} Equation \eqref{usym} turns out to possess a maximum principle. Recall that the parabolic boundary of $U\times(0,T]$ is \begin{equation*} \partial_p (U\times(0,T])=\bar U\times[0,T]\setminus U\times (0,T]. \end{equation*} \begin{theorem}[Maximum principle]\label{maxpr} Suppose $T>0$, and $u\in C(\bar U\times[0,T])\cap C_{x,t}^{2,1}(U\times (0,T])$ with $u\ge 0$ is a solution of \eqref{ueq} on $U\times (0,T]$. Then \begin{equation}\label{max-u} \max_{\bar U\times [0,T]} u =\max_{\partial_p (U\times(0,T])} u. \end{equation} \end{theorem} \begin{proof} We make use of equation \eqref{usym} which is equivalent to \eqref{ueq}. We examine the second term on the right-hand side of \eqref{usym}.
Direct calculations using the formula of ${\mathbf J}^2$ in \eqref{JR} give $$\mathbf S:{\mathbf J}^2 =\sum_{i,j=1}^3 k_ik_j s_{ij} -\sum_{i=1}^3 s_{ii}=\vec k^{\rm T}\mathbf S\vec k -{\rm trace}(\mathbf S).$$ By \eqref{hXh}, we have $\xi^{\rm T}\mathbf A\xi \ge 0$ for all $\xi\in\mathbb R^3$, and, hence, \begin{equation}\label{Axx} \xi^{\rm T}\mathbf S\xi \ge 0\text{ for all }\xi\in\mathbb R^3. \end{equation} By \eqref{Axx} and the fact that $\mathbf S$ is symmetric, we have $\mathbf S\ge 0$ with eigenvalues $\lambda_1,\lambda_2,\lambda_3\ge 0$. Then, applying the Cauchy-Schwarz inequality and \eqref{mop} to $\vec k^{\rm T}\mathbf S \vec k$, we obtain \begin{equation} \label{newpos} \mathbf S:{\mathbf J}^2 \le |\mathbf S|_{\rm op} |\vec k|^2 -{\rm trace}(\mathbf S)=\max\{\lambda_1,\lambda_2,\lambda_3\}\cdot 1 - (\lambda_1+\lambda_2+\lambda_3)\le 0. \end{equation} Let $\varepsilon>0$. Set $u^\varepsilon(x,t)=e^{-\varepsilon t}u(x,t)$ and $\displaystyle M_\varepsilon=\max_{\bar U\times [0,T]} u^\varepsilon$. We prove that \begin{equation}\label{maxe} \max_{\bar U\times [0,T]} u^\varepsilon = \max_{\partial_p (U\times (0,T])} u^\varepsilon. \end{equation} Suppose \eqref{maxe} is false. Then $M_\varepsilon>0$ and there exists a point $(x_0,t_0)\in U\times (0,T]$ such that $u^\varepsilon(x_0,t_0)=M_\varepsilon$. At this maximum point $(x_0,t_0)$ we have \begin{equation}\label{vt1} u^\varepsilon_t (x_0,t_0)\ge 0, \quad \nabla u^\varepsilon(x_0,t_0)=0,\quad D^2 u^\varepsilon(x_0,t_0)\le 0. \end{equation} We observe the following: (a) The second property of \eqref{vt1} implies $\nabla u(x_0,t_0)=0$. (b) On the one hand, we have from \eqref{Axx} that $\mathbf S(x,t)\ge 0$ on $U\times(0,T]$. On the other hand, the last property of \eqref{vt1} implies $D^2 u(x_0,t_0)\le 0$. Then it is well-known that $\mathbf S(x_0,t_0): D^2u(x_0,t_0)\le 0$, see e.g. \cite[Chapter 2, Lemma 1]{FriedmanPara2008}. From \eqref{usym}, \eqref{newpos}, and (a), (b), we obtain $u_t(x_0,t_0)\le 0$.
Therefore, \begin{equation*} u^\varepsilon_t(x_0,t_0)=-\varepsilon e^{-\varepsilon t_0} u(x_0,t_0) +e^{-\varepsilon t_0} u_t(x_0,t_0) \le -\varepsilon u^\varepsilon(x_0,t_0)=-\varepsilon M_\varepsilon <0. \end{equation*} This contradicts the first inequality in \eqref{vt1}; hence, \eqref{maxe} holds true. Note that \begin{equation*} e^{-\varepsilon T}\max_{\bar U\times [0,T]} u\le \max_{\bar U\times [0,T]} u^\varepsilon = \max_{\partial_p (U\times (0,T])} u^\varepsilon\le \max_{\partial_p (U\times (0,T])} u \le \max_{\bar U\times [0,T]} u. \end{equation*} Then letting $\varepsilon\to 0$ yields \eqref{max-u}. \end{proof} \begin{corollary}\label{maxcor} Let $u\in C(\bar U\times[0,T])\cap C_{x,t}^{2,1}(U\times (0,T])$ with $u\ge 0$ solve problem \eqref{ibvpg} on a time interval $[0,T]$ for some $T>0$. Then it holds for all $t\in[0,T]$ that \begin{equation}\label{uM} \max_{x\in\bar U}u(x,t)\le M_0(t)\stackrel{\rm def}{=} \sup_{x\in U} |u_0(x)|+\sup_{(x,\tau)\in \partial U \times (0,t]} |\psi(x,\tau)|. \end{equation} \end{corollary} \begin{proof} Because of the continuity of $u$ on $\bar U\times [0,t]$, the quantity $M_0(t)$ is an upper bound for the maximum of $u$ on $\partial_p(U\times(0,t])$. Then applying inequality \eqref{max-u} with $T=t$ yields estimate \eqref{uM}. \end{proof} \section{Preparations for the gradient estimates}\label{Gradprep} This section contains technical preparations for the estimates for different norms of the gradient $\nabla u$ in the next three sections. Given two mappings $Q:U\to\mathbb R^3$ and $w:U\to \mathbb R$, we define a function $K[w,Q]$ on $U$ by \begin{equation}\label{Ku} K[w,Q](x)=(1+|\nabla w(x) + w^2(x) Q(x) |)^{-a}\quad\text{ for } x\in U. \end{equation} This function will be conveniently used in comparisons with $X(\nabla w(x) + w^2(x) Q(x))$ arising in the PDE \eqref{ueq}.
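The algebraic fact \eqref{newpos} underlying the maximum principle above, namely $\mathbf S:{\mathbf J}^2\le 0$ whenever $\mathbf S$ is symmetric positive semi-definite and $\vec k$ is a unit vector, is easy to spot-check numerically; the randomized Python sketch below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
ok = True
for _ in range(300):
    B = rng.normal(size=(3, 3))
    S = B @ B.T                        # random symmetric positive semi-definite S
    k = rng.normal(size=3)
    k /= np.linalg.norm(k)             # random unit axis
    J = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]]) # matrix of k x (.) from (JR)
    ok &= np.sum(S * (J @ J)) <= 1e-9  # S : J^2 = k^T S k - trace(S) <= 0
print(ok)
```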
\begin{lemma}\label{kuglem} If $s\ge a$, then the following inequalities hold on $U$: \begin{align} \label{kug1} K[w,Q]|\nabla w|^s &\le 2^{2s-a}|\nabla w|^{s-a}+2^{2s+1-a}(1+|w^2 Q|^s),\\ \label{kug2} K[w,Q]|\nabla w|^s&\ge 3^{-1}|\nabla w|^{s-a} - 3^{-1} (1+|w^2 Q|^s). \end{align} \end{lemma} \begin{proof} We denote, in this proof, $K=K[w,Q]$. Let $s\ge a$. By the triangle inequality and inequalities \eqref{ee2} and \eqref{ee5}, we have \begin{align*} K|\nabla w|^s &\le 2^s K\cdot (|\nabla w +w^2 Q|^s+|w^2 Q|^s)\\ &\le 2^s |\nabla w +w^2 Q|^{s-a}+ 2^s K|w^2 Q|^s\\ &\le 2^s\cdot 2^{s-a}( |\nabla w|^{s-a} + |w^2 Q|^{s-a})+2^s K|w^2 Q|^s. \end{align*} Using inequality \eqref{ee5} and the fact that $K\le 1$, we estimate \begin{equation*} |w^2 Q|^{s-a}\le 1+|w^2 Q|^s, \quad K|w^2 Q|^s\le 1+|w^2 Q|^s. \end{equation*} Hence, \begin{equation}\label{Kw1} K|\nabla w|^s \le 2^{2s-a}|\nabla w|^{s-a} +(2^{2s-a}+2^s)(1+|w^2 Q|^s). \end{equation} Noticing that $2^{2s-a}+2^s\le 2\cdot 2^{2s-a}$, we obtain \eqref{kug1} from \eqref{Kw1}. To prove \eqref{kug2}, we write $ |\nabla w|^{s-a} = K|\nabla w|^{s-a} (1+|\nabla w+w^2 Q| )^a$, and apply inequality \eqref{ee3} to have \begin{align*} |\nabla w|^{s-a} &\le K|\nabla w|^{s-a} (1+|\nabla w|^a+|w^2 Q|^a ) =K\cdot (|\nabla w|^{s-a} +|\nabla w|^s+|\nabla w|^{s-a}|w^2 Q|^a ). \end{align*} Concerning the last sum between the parentheses, applying inequality \eqref{ee5} to its first term gives \begin{align*} |\nabla w|^{s-a}\le 1 + |\nabla w|^s, \end{align*} and applying Young's inequality to its last term with the conjugate powers $s/(s-a)$ and $s/a$, when $s>a$, gives \begin{align*} |\nabla w|^{s-a}|w^2 Q|^a\le |\nabla w|^s+|w^2 Q|^s. \end{align*} Obviously, this inequality also holds when $s=a$. Thus, \begin{equation}\label{pre-kug} |\nabla w|^{s-a} \le 3K|\nabla w|^s+K\cdot(1+|w^2 Q|^s) \le 3K|\nabla w|^s+(1+|w^2 Q|^s). \end{equation} Rearranging gives \eqref{kug2}.
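In detail, the last step is the elementary rearrangement of \eqref{pre-kug}:

```latex
3K|\nabla w|^{s}\ \ge\ |\nabla w|^{s-a}-(1+|w^2Q|^{s})
\quad\Longrightarrow\quad
K|\nabla w|^{s}\ \ge\ 3^{-1}|\nabla w|^{s-a}-3^{-1}(1+|w^2Q|^{s}),
```

which is \eqref{kug2}.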
\end{proof} For our later convenience, we rewrite inequality \eqref{pre-kug}, by replacing $s-a$ with $s$, as \begin{equation}\label{kugs} |\nabla w|^s\le 3K[w,Q]|\nabla w|^{s+a} + (1+|w^2 Q|^{s+a}) \text{ for all }s\ge 0. \end{equation} \begin{theorem}\label{LUembed} For each $s\geq1$, there exists a constant $C>0$ depending only on $s$ and the number $a$ in \eqref{adef} such that for any function $w\in C^2(U)$ and non-negative function $\zeta\in C_c^1(U)$, the following inequality holds \begin{equation}\label{LUineq} \begin{split} \int_U K[w,Q] |\nabla w|^{2s+2} \zeta^2 dx &\le C \sup_{\text{\rm supp} \zeta }(w^2) \Big \{ \int_U K[w,Q] |D^2 w|^2 (|\nabla w|^{2s-2} +1)\zeta^2 dx\\ &\quad + \int_U K[w,Q] |\nabla w|^{2s} \Big(|\nabla \zeta|^2 +(|w||Q'|+|Q|)^2\zeta^2 w^2 \Big) dx\Big\} \\ &\quad +C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx. \end{split} \end{equation} Assume, in addition, that $Q$ and $Q'$ are bounded on $U$. Then, \begin{equation}\label{ibt} \begin{split} &\int_U K[w,Q] |\nabla w|^{2s+2} \zeta^2 dx \le C M_w^2 \int_U K[w,Q] |D^2 w|^2 |\nabla w|^{2s-2} \zeta^2 dx\\ &\quad + CM_w^2 \int_U K[w,Q]|\nabla w|^{2s} (|\nabla \zeta|^2 + (M_w\mu_Q+M_Q)^2 M_w^2 \zeta^2 ) dx \\ &\quad + C{\rm sgn}(s-1) M_w^2 \int_U K[w,Q] |D^2 w|^2 \zeta^2 dx + C |U|(M_w^2 M_Q)^{2s+2}(1+M_w^2 M_Q)^{2s}, \end{split} \end{equation} where \begin{align*} M_w=\sup_{x\in\text{\rm supp} \zeta } |w(x)|,\quad M_Q=\sup_{x\in U} |Q(x)|,\quad \mu_Q=\sup_{x\in U} |Q'(x)|. \end{align*} \end{theorem} \begin{proof} For convenience in computing the derivatives, we will first establish \eqref{LUineq} with $K[w,Q]$ replaced by the following function \begin{equation}\label{Kstar} K_*(x)= (1+ |\nabla w(x) + w^2(x)Q(x)|^2 )^{-a/2}. \end{equation} In this proof, the symbol $C$ denotes a generic positive constant depending only on $s$ and the number $a$ in \eqref{adef}, while $C_\varepsilon>0$ depends on $s$, $a$ and $\varepsilon$.
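For the reader's convenience, we record the derivative of $K_*$ that drives the term $I_1$ below. Writing $\Phi_w=\nabla w+w^2Q$, the chain rule gives (this is only an expanded intermediate step, not a new estimate)

```latex
\partial_i K_* = -a\,(1+|\Phi_w|^2)^{-a/2-1}\,\Phi_w\cdot\partial_i\Phi_w,
\qquad
\partial_i\Phi_w=\partial_i\nabla w+2w\,\partial_i w\,Q+w^2\,\partial_i Q,
```

so that $-\partial_i K_*$ produces precisely the kernel appearing in the integrand of $I_1$ below.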
We use Einstein's summation convention in our calculations. Let $$I = \int_U K_*(x) |\nabla w(x)|^{2s+2} \zeta^2(x) dx.$$ By integration by parts and direct calculations, we see that \begin{align*} I &=\int_U (K_* |\nabla w|^{2s} \partial_i w \zeta^2)\cdot \partial_i w dx = - \int_U \partial_i (K_* |\nabla w|^{2s} \partial_i w \zeta^2)\cdot w dx\\ &=I_1+I_2+I_3+I_4, \end{align*} where \begin{align*} I_1& = a \int_U \frac{(\nabla w + w^2Q) \cdot (\partial_i \nabla w + 2w\partial_i w Q+ w^2 \partial_i Q) }{(1+ |\nabla w + w^2Q|^2)^{a/2+1}} |\nabla w|^{2s} \partial_i w\cdot \zeta^2 \cdot w dx,\\ I_2&= - \int_U K_* |\nabla w|^{2s} \Delta w \cdot \zeta^2 \cdot w dx,\\ I_3&= - 2s \int_U K_* \Big(|\nabla w|^{2s-2} \partial_i\partial_j w \partial_j w\Big)\cdot \partial_i w \cdot \zeta^2 \cdot w dx,\\ I_4&= -2 \int_U K_* |\nabla w|^{2s} \partial_i w \zeta \partial_i \zeta \cdot w dx. \end{align*} Above, we used $\partial_i(|\nabla w|^{2s})=\partial_i\big((|\nabla w|^2)^s\big)=s(|\nabla w|^2)^{s-1}\partial_i (|\nabla w|^2)$ to avoid possible singularities when $\nabla w=0$. We estimate $I_1$ first. Observe that \begin{equation*} \frac{1}{(1+ |\nabla w + w^2Q|^2)^{a/2+1}} = \frac{K_*}{1+|\nabla w + w^2Q|^2}\le\frac{2K_*}{(1+|\nabla w + w^2Q|)^2} . \end{equation*} By the Cauchy-Schwarz and triangle inequalities, we have \begin{align*} I_1 &\le C \int_U \frac{K_* }{(1+|\nabla w + w^2Q|)^2} \cdot |\nabla w + w^2Q|\\ &\qquad \cdot \Big (|D^2 w| |\nabla w|^{2s+1} + |w||\nabla w|^{2s+2}|Q|+ w^2 |\nabla w|^{2s+1}|Q'| \Big) \zeta^2 \cdot |w| dx \\ &\le J_1+J_2+J_3, \end{align*} where \begin{align*} J_1&= C\int_U \frac{K_* }{1+|\nabla w + w^2Q|}(|D^2 w| |\nabla w|^{2s+1} ) \zeta^2 \cdot |w| dx,\\ J_2&= C\int_U \frac{K_* }{1+|\nabla w + w^2Q|}( |wQ||\nabla w|^{2s+2}) \zeta^2 \cdot |w| dx,\\ J_3&= C\int_U \frac{K_* }{1+|\nabla w + w^2Q|}(w^2 |Q'||\nabla w|^{2s+1} ) \zeta^2 \cdot |w| dx.
\end{align*} For $J_1$, by using the triangle inequality $|\nabla w|\le |\nabla w+ w^2 Q |+ |w^2 Q|$, we have \begin{equation*} \begin{split} J_1 &\le C\int_U \frac{K_* }{1+|\nabla w + w^2 Q|}(|\nabla w+ w^2 Q |^{2s+1} + |w^2 Q|^{2s+1}) |D^2 w| \zeta^2 |w| dx\\ &\le C \int_U K_* \Big( |\nabla w+ w^2 Q |^{2s}+ |w^2 Q|^{2s+1}\Big) |D^2 w| \zeta^2 |w| dx. \end{split} \end{equation*} Applying the triangle inequality and \eqref{ee2} gives $$ |\nabla w+ w^2 Q |^{2s}\le C( |\nabla w|^{2s}+ |w^2 Q|^{2s}).$$ Thus, \begin{equation*} J_1 \le C \int_U K_* |\nabla w|^{2s} |D^2 w| \zeta^2 |w| dx + C \int_U K_* \Big( |w^2 Q |^{2s}+ |w^2 Q|^{2s+1}\Big) |D^2 w| \zeta^2 |w| dx. \end{equation*} Let $\varepsilon>0$. Denote \begin{equation*} I_5=\int_U K_* |D^2 w|^2|\nabla w|^{2s-2}\zeta^2 w^2 dx,\quad I_6=\int_U K_* |D^2 w|^2\zeta^2 w^2 dx. \end{equation*} In the last inequality for $J_1$, we apply Cauchy's inequality to obtain \begin{equation*} \begin{split} J_1 &\le C \int_U K_* |\nabla w|^{2s} |D^2 w| \zeta^2 |w| dx + C \int_U K_* \Big( |w^2 Q |^{2s}+ |w^2 Q|^{2s+1}\Big) |D^2 w| \zeta^2 |w| dx\\ &\le \varepsilon I+ C_\varepsilon I_5 + C \int_U K_* \Big( |w^2 Q |^{4s}+ |w^2 Q|^{4s+2}\Big) \zeta^2 dx +CI_6. \end{split} \end{equation*} Similarly, \begin{equation*} \begin{split} J_2 &\le C \int_U \frac{K_* }{1+|\nabla w + w^2 Q|}(|\nabla w+ w^2 Q |^{2s+2} + |w^2 Q|^{2s+2}) \zeta^2 w^2 |Q|dx\\ &\le C \int_U K_* \Big( |\nabla w+ w^2 Q |^{2s+1}+ |w^2 Q|^{2s+2}\Big) \zeta^2 w^2|Q| dx\\ &\le C \int_U K_* \Big( |\nabla w|^{2s+1}+ |w^2 Q |^{2s+1}+ |w^2 Q|^{2s+2}\Big) \zeta^2 w^2|Q| dx. \end{split} \end{equation*} For $J_3$, neglecting the denominator in the integrand gives \begin{equation*} J_3 \le C\int_U K_* |\nabla w|^{2s+1} \zeta^2 |w|^3|Q'| dx.
\end{equation*} Combining the above estimates for $J_1$, $J_2$ and $J_3$ yields \begin{align*} I_1\le \varepsilon I+ C_\varepsilon I_5 +CI_6+ J_4+J_5, \end{align*} where \begin{align*} J_4&= C\int_U K_* |\nabla w|^{2s+1} (|Q|+|w||Q'|) \zeta^2 w^2 dx,\\ J_5&= C \int_U K_* \Big( |w^2 Q |^{4s}+ |w^2 Q|^{4s+2}\Big) \zeta^2 dx + C\int_U K_* \Big( |w^2 Q |^{2s+1}+ |w^2 Q|^{2s+2}\Big) \zeta^2 w^2 |Q|dx. \end{align*} We estimate, by using Cauchy's inequality, \begin{equation*} J_4 \le \varepsilon I+C_\varepsilon\int_U K_* |\nabla w|^{2s} (|Q|+|w||Q'|)^2 \zeta^2 w^4 dx, \end{equation*} and, with $K_*\le 1$, \begin{align*} J_5&\le C \int_U \Big( |w^2 Q |^{4s}+ |w^2 Q|^{4s+2}\Big) \zeta^2 dx + C\int_U \Big( |w^2 Q |^{2s+1}+ |w^2 Q|^{2s+2}\Big) \zeta^2 w^2 |Q|dx\\ &=C\int_U ( |w^2 Q |^{4s}+ |w^2 Q|^{4s+2}+ |w^2 Q |^{2s+2}+ |w^2 Q|^{2s+3}) \zeta^2 dx. \end{align*} Applying inequality \eqref{ee4} gives \begin{equation*} J_5\le C\int_U \Big( | w^2Q |^{2s+2}+ | w^2 Q|^{4s+2}\Big) \zeta^2 dx = C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx. \end{equation*} Therefore, \begin{align*} I_1&\le 2\varepsilon I+ C_\varepsilon I_5 +CI_6+C_\varepsilon\int_U K_* |\nabla w|^{2s} (|Q|+|w||Q'|)^2 \zeta^2 w^4 dx\\ &\quad + C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx. \end{align*} The terms $I_2$, $I_3$, $I_4$ can be bounded simply by \begin{align*} I_2+I_3+I_4 &\le \int_U K_* |\nabla w|^{2s} |\Delta w| \zeta^2 |w| dx + 2 s \int_U K_* |\nabla w|^{2s} |D^2 w| \zeta^2 |w| dx\\ &\quad +2 \int_U K_* |\nabla w|^{2s+1} \zeta |\nabla \zeta| |w|dx\\ &\le C\int_U K_* |\nabla w|^{2s} |D^2 w | \zeta^2 |w| dx +2\int_U K_* |\nabla w|^{2s+1} \zeta|\nabla \zeta|\, |w| dx. \end{align*} Applying Cauchy's inequality to each integral gives \begin{align*} I_2+I_3+I_4 & \le (\varepsilon I + C_\varepsilon I_5)+\Big(\varepsilon I +C_\varepsilon \int_U K_* |\nabla w|^{2s} |\nabla \zeta|^2 w^2 dx\Big). 
\end{align*} Combining the estimates of $I_1$ and $I_2+I_3+I_4$, we have \begin{equation*} I \le 4\varepsilon I +C_\varepsilon I_5+CI_6 + C_\varepsilon I_7 + C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx, \end{equation*} where \begin{align*} I_7=\int_U K_* |\nabla w|^{2s} \Big(|\nabla \zeta|^2 +(|w||Q'|+|Q|)^2\zeta^2 w^2 \Big)w^2 dx. \end{align*} Selecting $\varepsilon=1/8$, we obtain \begin{equation}\label{Ks} I \le C(I_5+I_6+I_7) +C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx. \end{equation} In each integral of $I_5$, $I_6$ and $I_7$, we bound \begin{align*} \zeta^2 w^2\le \zeta^2\sup_{\text{\rm supp}\,\zeta}(w^2),\quad |\nabla \zeta|^2 w^2\le |\nabla \zeta|^2 \sup_{\text{\rm supp}\,\zeta}(w^2). \end{align*} It then follows from \eqref{Ks} that \begin{equation}\label{LU0} \begin{split} \int_U K_* |\nabla w|^{2s+2} \zeta^2 dx &\le C \sup_{\text{\rm supp} \zeta }(w^2) \Big \{ \int_U K_* |D^2 w|^2 (|\nabla w|^{2s-2} +1)\zeta^2 dx\\ &\quad + \int_U K_* |\nabla w|^{2s} \Big(|\nabla \zeta|^2 +(|w||Q'|+|Q|)^2\zeta^2 w^2 \Big) dx\Big\} \\ &\quad +C\int_U \Big( 1+ | w^2 Q|^{2s}\Big) | w^2Q |^{2s+2} \zeta^2 dx. \end{split} \end{equation} We compare $K_*$ in \eqref{Kstar} with $K[w,Q]$ in \eqref{Ku}. Since \begin{equation*} 2^{-1}(1+ |\nabla w + w^2 Q|)^2 \le 1+ |\nabla w+ w^2 Q|^2 \le (1+ |\nabla w + w^2 Q|)^2, \end{equation*} we have \begin{equation}\label{KK} K[w,Q](x)\le K_*(x)\le 2^{a/2} K[w,Q](x)\quad\forall x\in U. \end{equation} Applying the first, respectively second, inequality of \eqref{KK} to the left-hand side, respectively right-hand side, of \eqref{LU0}, we obtain inequality \eqref{LUineq}. Now consider the case where $w$, $Q$ and $Q'$ are bounded. By simple estimates of the last two integrals of \eqref{LUineq} using the numbers $M_w$, $M_Q$, $\mu_Q$, and by using the following estimate $$|\nabla w|^{2s-2}+1\le 2|\nabla w|^{2s-2}+{\rm sgn}(s-1)$$ for the first integral on the right-hand side of \eqref{LUineq}, we obtain inequality \eqref{ibt}.
\end{proof} \section{Gradient estimates (I) }\label{L2asub} This section is focused on \emph{a priori} estimates for the gradient of a solution $u(x,t)$ of the IBVP \eqref{ibvpg}. Hereafter, we fix $T>0$. Let $u(x,t)$ be a $C_{x,t}^{2,1}(\bar U\times(0,T])\cap C(\bar U\times[0,T])$ solution of \eqref{ibvpg}, not necessarily non-negative. In the estimates of the Lebesgue norms below, we will use the energy method. For that, it is convenient to shift the solution to a function vanishing at the boundary. Let $\Psi(x,t)$ be the extension of $\psi(x,t)$ from $\Gamma\times(0,T]$ to $\bar U\times[0,T]$. It is assumed to have the regularity needed for the calculations below. All our following estimates, as far as the boundary data is concerned, will be expressed in terms of $\Psi$. This does not lose generality, since we can always translate them into $\psi$-dependent estimates, see e.g. \cite{HI1,JerisonKenig1995}. Define $\bar u=u-\Psi$ and $\bar u_0=u_0-\Psi$. We derive from \eqref{ibvpg} the equations for $\bar u$: \begin{equation}\label{ubar} \begin{aligned} \begin{cases} \phi \bar u_t=\nabla\cdot (X(\Phi(x,t)))-\phi \Psi_t\quad &\text{on}\quad U\times (0,T],\\ \bar u(x,0)=\bar u_0(x) \quad &\text{on}\quad U,\\ \bar u(x,t)= 0\quad &\text{on}\quad \Gamma\times (0,T], \end{cases} \end{aligned} \end{equation} where $\Phi(x,t)$ is the same notation as \eqref{Phi0}, i.e., $$\Phi(x,t)=\nabla u(x,t)+u^2(x,t)\mathcal Z(x,t)=\nabla u(x,t)+W(x,t),$$ with $W(x,t)=u^2(x,t)\mathcal Z(x,t).$ The following ``weight'' function will play an important role in our statements and proofs: $$\mathcal K(x,t)=K[u(\cdot,t),\mathcal Z(\cdot,t)](x)= (1+|\Phi(x,t)|)^{-a}.$$ We estimate the $L^{2-a}_{x,t}$-norm for $\nabla u$ in terms of the initial and boundary data.
Define \begin{equation}\label{MZ} M_{\mathcal Z}=\sup_{\bar U\times[0,T]}|\mathcal Z(x,t)|, \end{equation} \begin{equation}\label{Estar} {\mathcal E}_0=\inttx{0}{T}{U} (\chi_1^{2(2+a)}|\nabla \Psi|^2+ \phi\chi_1^2 (|\Psi_t|^2+\Psi^2))dxdt. \end{equation} \begin{xnotation} In the remainder of this paper, the constant $C$ is positive and generic with varying values in different places. It depends on the number $N$, the coefficients $a_i$'s and powers $\alpha_i$'s of the function $g$ in \eqref{eq2}, the number $c_*$ in \eqref{nnorms}, and the set $U$. In sections \ref{gradsec} and \ref{maxintime}, it further depends on the number $s$ and the subsets $U'$, $V$, $U_k$'s of $U$. However, it does not depend on the initial and boundary data of $u$, the functions $\Phi(x,t)$, $\mathcal Z(x,t)$, and numbers $T$, $T_0$, $t_0$, $\phi$, $\mathcal R$, $\chi_*$, $M_*$, whenever these are introduced. In particular, it is independent of the cut-off function $\zeta$ in Lemmas \ref{lem61}, \ref{lem62}, \ref{lem67}, Proposition \ref{prop63}, and inequality \eqref{iterate2}. \end{xnotation} \begin{proposition}\label{L2a} One has \begin{align} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt &\le C\Big\{ \chi_1^2 \phi (\|\bar u_0\|_{L^2}^2+\|u\|_{L^2(U\times (0,T))}^2)\notag\\ &\quad\qquad+(1+\chi_1)^{2(2+a)}M_{\mathcal Z}^2\|u\|_{L^4(U\times (0,T))}^4+{\mathcal E}_0\Big\},\label{gradu0}\\ \inttx{0}{T}{U} |\nabla u|^{2-a} dxdt &\le C \Big\{\chi_1^2 \phi (\|\bar u_0\|_{L^2}^2+\|u\|_{L^2(U\times (0,T))}^2) \notag\\ &\quad\qquad +(1+\chi_1)^{2(2+a)}(T+M_{\mathcal Z}^2\|u\|_{L^4(U\times (0,T))}^4)+{\mathcal E}_0\Big\}. \label{gradu1} \end{align} \end{proposition} \begin{proof} In this proof, we denote $J= \int_U \Psi_t \bar u dx$.
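We will also use the elementary lower bound — presumably the content of \eqref{ee6}, which is stated earlier in the paper — obtained from the triangle inequality:

```latex
|\xi|^2\le(|\xi+\eta|+|\eta|)^2\le 2|\xi+\eta|^2+2|\eta|^2
\quad\Longrightarrow\quad
|\xi+\eta|^2\ \ge\ \tfrac12|\xi|^2-|\eta|^2
\qquad\text{for }\xi,\eta\in\mathbb R^3.
```

It will be applied with $\xi=\nabla u$ and $\eta=W$.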
Multiplying the PDE in \eqref{ubar} by $\bar u$, integrating over the domain $U$, and using integration by parts, we have \begin{align*} \frac{\phi }{2} \frac{d}{dt} \int_U \bar u^2 dx &=- \int_U X(\Phi)\cdot \nabla \bar u dx-\phi J\\ &=-\int_UX(\Phi)\cdot \Phi dx +\int_UX(\Phi)\cdot (\nabla \Psi + W) dx-\phi J. \end{align*} By \eqref{X0} and \eqref{X2}, we have \begin{align*} \frac\phi 2 \frac{d}{dt} \int_U \bar u^2 dx&\le -c_4\chi_1^{-2}\int_U \mathcal K |\Phi|^2 dx +c_2\chi_1^a \int_U \mathcal K |\Phi|(|\nabla \Psi|+|W|)dx-\phi J. \end{align*} For the second integral on the right-hand side, applying Cauchy's inequality gives \begin{align*} \frac\phi 2 \frac{d}{dt} \int_U \bar u^2 dx&\le -c_4\chi_1^{-2}\int_U \mathcal K |\Phi|^2 dx\\ &\quad+\int_U \Big[\varepsilon\mathcal K|\Phi|^2 +C\varepsilon^{-1}\chi_1^{2a}\mathcal K(|\nabla \Psi|+|W|)^2\Big]dx-\phi J. \end{align*} Choosing $\varepsilon=c_4\chi_1^{-2}/2$, we obtain \begin{align}\label{ewJ} \frac\phi 2 \frac{d}{dt} \int_U \bar u^2 dx&\le -\frac{c_4\chi_1^{-2}}{2}\int_U \mathcal K |\Phi|^2 dx+C \chi_1^{2(1+a)}\int_U (|\nabla \Psi|^2+|W|^2)dx-\phi J. \end{align} Note that \begin{equation}\label{Wbound} |W|\le M_{\mathcal Z} u^2. \end{equation} Writing $\bar u$ in $J$ as $u-\Psi$, and using Cauchy's inequality, we have \begin{equation}\label{pJ} \phi |J|\le \int_U \phi |\Psi_t| (|u|+|\Psi|) dx\le \phi \int_U (u^2+|\Psi_t|^2+|\Psi|^2)dx. \end{equation} Utilizing estimates \eqref{Wbound} and \eqref{pJ} in \eqref{ewJ}, we derive \begin{align*} \frac\phi 2 \frac{d}{dt} \int_U \bar u^2 dx +\frac{c_4\chi_1^{-2}}{2}\int_U \mathcal K|\Phi|^2 dx &\le C\chi_1^{2(1+a)}M_{\mathcal Z}^2\int_U u^4 dx\\ &\quad+C \chi_1^{2(1+a)}\int_U |\nabla \Psi|^2dx +\phi\int_U (u^2 + |\Psi_t|^2+\Psi^2)dx.
\end{align*} Then integrating from $0$ to $T$ gives \begin{equation} \begin{aligned}\label{KP} \inttx{0}{T}{U} \mathcal K|\Phi|^2 dx dt &\le C \chi_1^2 \phi \|\bar u_0\|_{L^2}^2 +C\chi_1^{2(2+a)}M_{\mathcal Z}^2\inttx{0}{T}{U} u^4 dxdt+C\phi \chi_1^2\inttx{0}{T}{U} u^2 dxdt\\ &\quad +C{\mathcal E}_0. \end{aligned} \end{equation} For the left-hand side of \eqref{KP}, we use inequalities \eqref{ee6} and \eqref{Wbound} to obtain \begin{align*} \mathcal K|\Phi|^{2}=\mathcal K|\nabla u+W|^2\ge 2^{-1}\mathcal K |\nabla u|^2-\mathcal K|W|^2\ge 2^{-1} \mathcal K|\nabla u|^2-M_{\mathcal Z}^2 u^4. \end{align*} Combining this with \eqref{KP} yields \begin{align*} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt &\le 2\inttx{0}{T}{U} \mathcal K|\Phi|^2 dx dt + 2M_{\mathcal Z}^2\inttx{0}{T}{U} u^4 dxdt\\ &\le C \chi_1^2 \phi \|\bar u_0\|_{L^2}^2 +C(1+\chi_1)^{2(2+a)}M_{\mathcal Z}^2\inttx{0}{T}{U} u^4 dxdt\\ &\quad+C\phi\chi_1^2\inttx{0}{T}{U} u^2 dxdt +C{\mathcal E}_0, \end{align*} which proves \eqref{gradu0}. Using \eqref{kug2} and \eqref{Wbound}, we estimate the integrand on the left-hand side of \eqref{gradu0} by \begin{align*} \mathcal K|\nabla u|^2\ge 3^{-1}|\nabla u|^{2-a} -(1+|W|^2)\ge 3^{-1}|\nabla u|^{2-a} -(1+(u^2M_{\mathcal Z})^2). \end{align*} It follows that \begin{align*} \inttx{0}{T}{U} |\nabla u|^{2-a} dxdt &\le 3\inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt + 3\inttx{0}{T}{U} (1+M_{\mathcal Z}^2 u^4) dxdt\\ &\le C \chi_1^2 \phi (\|\bar u_0\|_{L^2}^2+\|u\|_{L^2(U\times (0,T))}^2) \\ &\quad+C(1+\chi_1)^{2(2+a)}\inttx{0}{T}{U} (1+M_{\mathcal Z}^2 u^4) dxdt+ C{\mathcal E}_0, \end{align*} which proves \eqref{gradu1}. \end{proof} To have more specific estimates, we examine the bounds for the constituents of the PDE in \eqref{ubar}.
Note, from \eqref{Zx} and \eqref{Jxineq}, that \begin{equation*} |\mathcal Z(x,t)| \le \mathcal G + \Omega^2 |{\mathbf J}^2 x| \le \mathcal G + \Omega^2 r_0 =\mathcal G(1+\Omega_*^2), \end{equation*} where \begin{equation} \label{r0} r_0=\max\{|x|: x\in \bar U \} \text{ and } \Omega_*=\Omega \sqrt{r_0/\mathcal G}=\tilde\Omega\sqrt{ r_0/\tilde{\mathcal G}}. \end{equation} Thus, the number $M_{\mathcal Z}$ in \eqref{MZ} can be bounded by \begin{equation}\label{MZ1} M_{\mathcal Z}\le \mathcal G(1+\Omega_*)^2. \end{equation} Next, it is obvious that $D\mathcal Z(x,t)=\Omega^2 {\mathbf J}^2$, hence, by \eqref{J2norm}, \begin{equation}\label{DZ} |D\mathcal Z(x,t)|=\mu_{\mathcal Z}\stackrel{\rm def}{=} \sqrt 2 \Omega^2=\sqrt 2(\mathcal G/r_0) \Omega_*^2. \end{equation} We rewrite $\mathcal R$ in \eqref{Rdef} as \begin{equation}\label{RR1} \mathcal R=\frac{2\rho_* \Omega}{\phi} =\frac{2\rho_* \sqrt{\mathcal G/r_0}}{\phi} \Omega_* =\frac{2\rho_* \sqrt{ \tilde{\mathcal G}/r_0}}{\tilde\phi} \Omega_*. \end{equation} From \eqref{MZ1}, \eqref{DZ} and \eqref{RR1}, we conveniently relate the upper bounds of $\mathcal R$, $M_{\mathcal Z}$, $\mu_{\mathcal Z}$ to a single parameter $\chi_*$ as follows \begin{equation}\label{RMm} \mathcal R\le \chi_*\stackrel{\rm def}{=} \max\{1,d_*(1+\Omega_*)\},\quad M_{\mathcal Z}\le \chi_*^2,\quad \mu_{\mathcal Z}\le \chi_*^2, \end{equation} where \begin{equation*} d_* =\sqrt{\mathcal G/r_0}\max\Big\{\frac{2\rho_*}{\phi},\sqrt{r_0},2^{1/4}\Big\}. \end{equation*} The reason for \eqref{RMm}, with the choice of $\chi_*\ge 1$, is to simplify many estimates that will be obtained later, and specify the dependence of those estimates on the parameters $d_*$ and $\Omega_*$.
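For completeness, here is the routine verification of \eqref{RMm} from \eqref{MZ1}, \eqref{DZ}, \eqref{RR1} and the definition of $d_*$. The three terms in the maximum defining $d_*$ give $d_*\ge 2\rho_*\sqrt{\mathcal G/r_0}/\phi$, $d_*\ge\sqrt{\mathcal G}$ and $d_*^2\ge\sqrt 2\,\mathcal G/r_0$, hence

```latex
\mathcal R=\frac{2\rho_*\sqrt{\mathcal G/r_0}}{\phi}\,\Omega_*\le d_*\Omega_*\le d_*(1+\Omega_*)\le\chi_*,
\qquad
M_{\mathcal Z}\le \mathcal G(1+\Omega_*)^2\le d_*^2(1+\Omega_*)^2\le\chi_*^2,
\qquad
\mu_{\mathcal Z}=\sqrt 2\,(\mathcal G/r_0)\,\Omega_*^2\le d_*^2\Omega_*^2\le\chi_*^2.
```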
\begin{remark} A very common situation is that the rotation is about the vertical axis; then $\vec k=\pm (0,0,1)$, and \begin{equation}\label{Jmatrix} {\mathbf J}=\pm \begin{pmatrix} 0 & -1 & 0\\ 1 & 0 & 0 \\ 0& 0 & 0 \end{pmatrix},\quad {\mathbf J}^2=-\begin{pmatrix} 1&0 & 0\\ 0&1&0\\ 0& 0&0 \end{pmatrix}. \end{equation} Thanks to \eqref{Jmatrix}, we can, in this case, replace the $r_0$ in \eqref{r0} with a smaller number, namely, $$r_0=\max\{(x_1^2+x_2^2)^{1/2}: x=(x_1,x_2,x_3)\in \bar U \}.$$ \end{remark} \begin{definition}\label{BEN} We will use the following quantities in our estimates throughout the paper: \begin{align*} M_*&=\sup_{ U\times[0,T]} |u|, \quad {\mathcal E}_*=\inttx{0}{T}{U} (|\nabla \Psi|^2+\phi |\Psi_t|^2+\phi \Psi^2)dxdt,\\ {\mathcal N}_0&=\phi \|\bar u_0\|_{L^2}^2 + TM_*^2+{\mathcal E}_*,\quad {\mathcal N}_*=\phi\|\bar u_0\|_{L^2}^2+T+ {\mathcal E}_*,\\ {\mathcal N}_2&=\phi( \|\bar u_0\|_{L^2}^2+\|\nabla u_0\|_{L^2}^2)+T+ {\mathcal E}_*,\\ {\mathcal N}_s&=\phi( \|\bar u_0\|_{L^2}^2+\|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L^s}^s)+T+ {\mathcal E}_* \text{ for $s>2$.} \end{align*} \end{definition} The estimates obtained in the rest of this paper will depend on the quantities in Definition \ref{BEN}. Among those, only $M_*$ still depends on the solution $u$. However, this quantity can be bounded in terms of initial and boundary data by using different techniques. For instance, in our original problem, $u=\rho/\kappa\ge 0$, hence we have from Corollary \ref{maxcor} that $M_*\le C M_0(T)$, see \eqref{uM}. Therefore, in the following, we say ``the estimates are expressed in terms of the initial and boundary data'' even when they contain $M_*$. \medskip As stated in section \ref{intro}, we will keep track of the dependence on certain physical parameters, particularly, the angular speed. Note, by \eqref{defxione} and \eqref{RMm}, that \begin{equation}\label{xichi} \chi_1\le C\chi_*.
\end{equation} With this and the fact that $\chi_*\ge 1$, we compare ${\mathcal E}_0$ in \eqref{Estar} with ${\mathcal E}_*$ by \begin{equation}\label{EEs} {\mathcal E}_0\le C\chi_*^{2(2+a)}{\mathcal E}_*. \end{equation} It is clear that \begin{align} \label{NNN0} {\mathcal E}_*&\le {\mathcal N}_0\le (M_*+1)^2 {\mathcal N}_*,\\ \label{NNNs} {\mathcal E}_*&\le {\mathcal N}_*\le {\mathcal N}_2\le {\mathcal N}_s \text{ for $s>2$.} \end{align} \begin{theorem}\label{L2apos} One has \begin{align} \label{gradu2} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt&\le C \Big\{\chi_*^2\phi \|\bar u_0\|_{L^2}^2 + \chi_*^{2(4+a)}TM_*^2(M_*+1)^2+\chi_*^{2(2+a)}{\mathcal E}_*\Big\},\\ \label{gradu3} \inttx{0}{T}{U} |\nabla u(x,t)|^{2-a}dxdt &\le C\Big\{ \chi_*^2\phi \|\bar u_0\|_{L^2}^2 + \chi_*^{2(4+a)}T(M_*+1)^4+\chi_*^{2(2+a)}{\mathcal E}_*\Big\}. \end{align} Consequently, the following more concise estimates hold \begin{equation} \label{gradu4} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt\le C\chi_*^{2(4+a)}(M_*+1)^2 {\mathcal N}_0, \end{equation} \begin{equation} \label{gradu6a} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt \le C\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*, \end{equation} \begin{equation} \label{gradu6b} \inttx{0}{T}{U} |\nabla u(x,t)|^{2-a}dxdt \le C\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*. \end{equation} \end{theorem} \begin{proof} By the definition of $M_*$, we obviously have \begin{equation} |u|\le M_*\text{ on } \bar U\times[0,T], \end{equation} hence, \begin{equation}\label{uu} \|u\|_{L^2(U\times (0,T))}^2\le CTM_*^2\quad\text{and}\quad \|u\|_{L^4(U\times (0,T))}^4\le CTM_*^4.
\end{equation} Using \eqref{uu}, the relations \eqref{xichi}, \eqref{EEs}, and the estimate of $M_{\mathcal Z}$ in \eqref{RMm} for the right-hand side of \eqref{gradu0}, we have \begin{align*} \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt &\le C\chi_*^2 \phi (\|\bar u_0\|_{L^2}^2 +TM_*^2) +C(1+\chi_*)^{2(2+a)}\cdot \chi_*^4\cdot T M_*^4 +C\chi_*^{2(2+a)}{\mathcal E}_*\\ &\le C\chi_*^2 \phi \|\bar u_0\|_{L^2}^2 +C\chi_*^{2(2+a)+4} T M_*^2 (1+M_*^2) +C\chi_*^{2(2+a)}{\mathcal E}_*. \end{align*} Then \eqref{gradu2} and \eqref{gradu4} follow. From \eqref{gradu4}, we infer \eqref{gradu6a} thanks to the last relation in \eqref{NNN0}. Similarly, we have from \eqref{gradu1} that \begin{equation*} \inttx{0}{T}{U} |\nabla u(x,t)|^{2-a}dxdt \le C\Big\{ \chi_*^2\phi (\|\bar u_0\|_{L^2}^2+TM_*^2)+\chi_*^{2(2+a)}(T+ \chi_*^4 \cdot T M_*^4) +\chi_*^{2(2+a)}{\mathcal E}_*\Big\}. \end{equation*} Then \eqref{gradu3} and \eqref{gradu6b} follow. \end{proof} We remark that while \eqref{gradu3} gives a direct estimate for the $L^{2-a}$-norm, the alternative estimate in the form of \eqref{gradu2} prepares for the iterations in section \ref{gradsec} below. \begin{remark}\label{smlrmk1} From the point of view of pure PDE estimates, the right-hand side of \eqref{gradu4} can be small, for a fixed $T>0$, while the right-hand sides of \eqref{gradu6a} and \eqref{gradu6b} cannot. This is because the smallness of $|u|$, $|\bar u_0|$, and ${\mathcal E}_*$ makes ${\mathcal N}_0$, but not ${\mathcal N}_*$, small. \end{remark} \section{Gradient estimates (II)} \label{gradsec} In this section, we establish the interior $L^s$-estimate of $\nabla u $ for all $s > 0$. For the remainder of this paper, $\zeta$ always denotes a function in $C^2_c(U\times [0,T])$ that satisfies $0\le \zeta(x,t)\le 1$ for all $(x,t)$. When such a function is specified, the quantity $\mathcal D_s$ is defined, for $s\ge 0$, by \begin{equation}\label{Ds} \mathcal D_s= \int_U|\nabla u_0(x)|^{2s+2}\zeta^2(x,0) dx.
\end{equation} The next two lemmas \ref{lem61} and \ref{lem62} are the main technical steps for the later iterative estimates of the gradient. \begin{lemma} \label{lem61} For any $s\ge 0$, one has \begin{equation}\label{iterate1} \begin{aligned} &\phi \sup_{t\in[0,T]}\int_U |\nabla u(x,t)|^{2s+2} \zeta^2(x,t) dx,\\ &(s+1)c_8\chi_1^{-2}\inttx{0}{T}{U} \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx dt\le \phi \mathcal D_s +CI_0, \end{aligned} \end{equation} where \begin{align*} I_0&=\mu_{\mathcal Z}^2 (1+\chi_1)^{2(1+a)}\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} u^4 \zeta^2 dx dt \\ &\quad + (1+\chi_1)^{2(1+a)}\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (M_{\mathcal Z}^2 u^2\zeta^2 + |\nabla \zeta|^2) dx dt + \inttx{0}{T}{U} |\nabla u|^{2s+2} \zeta|\zeta_t| dx dt. \end{align*} Consequently, \begin{align} \label{irat0} \phi \sup_{t\in[0,T]}\int_U |\nabla u(x,t)|^{2s+2} \zeta^2(x,t) dx &\le \phi \mathcal D_s +C J_0,\\ \label{iterate4} \inttx{0}{T}{U} \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx dt &\le C \chi_*^2 (\phi \mathcal D_s+J_0), \end{align} where \begin{align*} J_0 &=\chi_*^{2(3+a)}\widetilde M^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} \zeta^2 dx dt \\ &\quad + \chi_*^{2(3+a)}(\widetilde M+1)^2\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (\zeta^2 + |\nabla \zeta|^2) dx dt + \inttx{0}{T}{U} |\nabla u|^{2s+2} \zeta|\zeta_t| dx dt, \end{align*} with \begin{equation} \widetilde M=\sup\{ |u(x,t)|:(x,t)\in \operatorname{supp}\zeta\}. \end{equation} \end{lemma} \begin{proof} For each $j=1,2,3$ and $t\in(0,T)$, let $\{\varphi_n(x)\}_{n=1}^\infty$ be a sequence in $C_c^\infty(U)$ that approximates $|\nabla u(x,t)|^{2s} \partial_j u(x,t)\zeta^2(x,t)$ in $W_0^{1,2}(U)$. 
Multiplying equation \eqref{ueq} by $-\partial_j \varphi_n(x)$, integrating the resulting equation over $U$, and using integration by parts twice on the right-hand side give \begin{align*} -\phi\int_U \frac{\partial u}{\partial t} \partial_j\varphi_n dx &= -\int_U \frac{\partial}{\partial x_i}\Big[ X_i(\Phi(x,t)) \Big] \partial_j \varphi_n dx =\int_U X_i(\Phi(x,t)) \partial_i \partial_j \varphi_n dx\\ &= -\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_i \varphi_n dx. \end{align*} Letting $n\to\infty$ and summing over $j$ yield \begin{align*} -\phi\int_U \frac{\partial u}{\partial t}\nabla\cdot ( |\nabla u|^{2s} \nabla u\zeta ^2)dx &= -\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_i (|\nabla u|^{2s} \partial_j u \zeta^2) dx. \end{align*} Performing integration by parts again for the left-hand side, we obtain \begin{equation}\label{esti} \begin{aligned} & \frac{\phi}{2s+2} \frac{d}{dt} \int_U |\nabla u|^{2s+2} \zeta^2 dx\\ &= -\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_i (|\nabla u|^{2s} \partial_j u \zeta^2) dx+\frac{1}{s+1}\int_U |\nabla u|^{2s+2} \zeta\zeta_t dx\\ &=I_1+I_2+I_3+I_4, \end{aligned} \end{equation} where \begin{align*} I_1&= -\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_j \partial_i u |\nabla u|^{2s} \zeta^2 dx,\\ I_2&= -2\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_j u \, |\nabla u|^{2s} \, \zeta \partial_i \zeta dx,\\ I_3&= -2s\int_U \frac{\partial}{\partial x_j}\Big[ X_i(\Phi(x,t)) \Big] \partial_j u \, (|\nabla u|^{2s-2} \partial_i\partial_l u\partial_l u)\, \zeta^2 dx,\\ I_4&=\frac{1}{s+1} \int_U|\nabla u|^{2s+2} \zeta\zeta_t dx. \end{align*} For $i,j=1,2,3$, denote $X'_{ij}(y)=\partial X_i(y)/\partial y_j$. By the second inequality of \eqref{Xprime}, \begin{equation}\label{XpPh} |X'_{ij}(\Phi(x,t))|\le C(1+\chi_1)^a\mathcal K(x,t). \end{equation} Let $I_5=\int_U \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx$.
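The expansions of $I_1$, $I_2$, $I_3$ below all rest on the chain rule for $x\mapsto X_i(\Phi(x,t))$; with $\Phi_m=\partial_m u+u^2\mathcal Z_m$, it reads (an intermediate step recorded for the reader's convenience)

```latex
\frac{\partial}{\partial x_j}\Big[X_i(\Phi(x,t))\Big]
 = X'_{im}(\Phi)\,\partial_j\Phi_m
 = X'_{im}(\Phi)\big(\partial_m\partial_j u+\partial_j\mathcal Z_m\,u^2+2u\,\partial_j u\,\mathcal Z_m\big),
```

with summation over $m$; this is the decomposition behind the splittings $I_k=I_{k,1}+I_{k,2}+I_{k,3}$.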
\medskip \noindent\textit{Estimation of $I_1$.} We have \begin{align*} I_1 &= -\int_U X'_{im}(\Phi) ( \partial_m\partial _j u + \partial _j \mathcal Z_m u^2 + 2u \partial_j u \mathcal Z_m ) \partial_j \partial_i u |\nabla u|^{2s} \zeta^2 dx\\ &=I_{1,1}+I_{1,2}+I_{1,3}, \end{align*} where \begin{align*} I_{1,1}&=-\int_U D(\partial_j u) X'(\Phi) \nabla (\partial_j u ) |\nabla u|^{2s} \zeta^2 dx,\\ I_{1,2}&=-\int_U X'_{im}(\Phi) \partial _j \mathcal Z_m u^2 \partial_j \partial_i u |\nabla u|^{2s} \zeta^2 dx,\\ I_{1,3}&=-2 \int_U X'_{im}(\Phi) u \partial_j u \mathcal Z_m \partial_j \partial_i u |\nabla u|^{2s} \zeta^2 dx. \end{align*} To estimate $I_{1,1}$, by applying \eqref{hXh} to $y=\Phi$, $\xi=\nabla(\partial_j u)$, we have \begin{align*} I_{1,1} &\le - c_8\chi_1^{-2} \int_U \mathcal K \Big( \sum_{j=1}^3|\nabla(\partial_j u)|^2 \Big) |\nabla u|^{2s} \zeta^2 dx =-c_8\chi_1^{-2} I_5. \end{align*} To estimate $I_{1,2}$, using inequality \eqref{XpPh} to estimate $|X'_{im}(\Phi)|$, identity \eqref{DZ} for $|\partial_j\mathcal Z_m|$, and then applying the Cauchy inequality to $u^2 |D^2 u |$, we obtain \begin{align*} |I_{1,2}|&\le C \int_U (1+\chi_1)^a \mathcal K \mu_{\mathcal Z} u^2 |D^2 u | |\nabla u|^{2s} \zeta^2 dx\\ &\le \varepsilon I_5+ C\varepsilon^{-1}(1+\chi_1)^{2a} \mu_{\mathcal Z}^2 \int_U \mathcal K u^4 |\nabla u|^{2s} \zeta^2 dx . \end{align*} Similarly, we estimate $I_{1,3}$ by \begin{align*} |I_{1,3}|&\le C(1+\chi_1)^a \int_U \mathcal K M_{\mathcal Z} u |D^2 u | |\nabla u|^{2s+1} \zeta^2 dx\\ &\le \varepsilon I_5+ C\varepsilon^{-1}(1+\chi_1)^{2a} M_{\mathcal Z}^2\int_U \mathcal K u^2 |\nabla u|^{2s+2} \zeta^2 dx . \end{align*} Summing up, we obtain \begin{equation}\label{estione} \begin{aligned} I_1&\le (2\varepsilon -c_8\chi_1^{-2} )I_5+ C\varepsilon^{-1}(1+\chi_1)^{2a} \mu_{\mathcal Z}^2 \int_U \mathcal K u^4 |\nabla u|^{2s} \zeta^2 dx\\ &\quad + C\varepsilon^{-1}(1+\chi_1)^{2a} M_{\mathcal Z}^2\int_U \mathcal K u^2 |\nabla u|^{2s+2} \zeta^2 dx.
\end{aligned} \end{equation} \medskip \noindent\textit{Estimation of $I_2$.} We calculate \begin{align*} I_2 &=-2\int_U X'_{im}(\Phi)\Big ( \partial_m\partial _j u + \partial _j \mathcal Z_m u^2 + 2u \partial_j u \mathcal Z_m\Big ) \partial_j u \, |\nabla u|^{2s} \, \zeta \partial_i \zeta dx\\ &=I_{2,1}+I_{2,2}+I_{2,3}, \end{align*} where the integral is split according to the sum $\partial_m\partial _j u + \partial _j \mathcal Z_m u^2 + 2u \partial_j u \mathcal Z_m$. Using \eqref{XpPh} and Cauchy's inequality gives \begin{align*} |I_{2,1}| &\le C(1+\chi_1)^a\int_U \mathcal K |D^2 u||\nabla u|^{2s+1} \zeta |\nabla \zeta| dx\\ &\le \varepsilon I_5+C\varepsilon^{-1}(1+\chi_1)^{2a} \int_U \mathcal K |\nabla u|^{2s+2} |\nabla \zeta|^2 dx. \end{align*} Using \eqref{XpPh}, \eqref{DZ} and \eqref{MZ}, we obtain \begin{align*} |I_{2,2}| &\le C(1+\chi_1)^a\int_U \mathcal K |D\mathcal Z| u^2|\nabla u|^{2s+1} \zeta |\nabla \zeta| dx\\ &\le C(1+\chi_1)^a\mu_{\mathcal Z} \int_U \mathcal K u^2|\nabla u|^{2s+1} \zeta |\nabla \zeta| dx, \end{align*} and \begin{align*} |I_{2,3}| &\le C(1+\chi_1)^a\int_U \mathcal K |\mathcal Z| u|\nabla u|^{2s+2} \zeta |\nabla \zeta| dx\\ &\le C(1+\chi_1)^aM_{\mathcal Z}\int_U \mathcal K u|\nabla u|^{2s+2} \zeta |\nabla \zeta| dx. \end{align*} Thus, we have \begin{equation}\label{estitwo} \begin{aligned} &I_2\le \varepsilon I_5+C\varepsilon^{-1}(1+\chi_1)^{2a} \int_U \mathcal K |\nabla u|^{2s+2} |\nabla \zeta|^2 dx\\ &\quad+C(1+\chi_1)^a\mu_{\mathcal Z} \int_U \mathcal K u^2|\nabla u|^{2s+1} \zeta |\nabla \zeta| dx+C(1+\chi_1)^aM_{\mathcal Z}\int_U \mathcal K u|\nabla u|^{2s+2} \zeta |\nabla \zeta| dx.
\end{aligned} \end{equation} \medskip \noindent\textit{Estimation of $I_3$.} Using similar calculations to those for $I_2$, we have \begin{align*} I_3 &=-2s\int_U X'_{mi}(\Phi) \Big( \partial_m\partial _j u + \partial _j \mathcal Z_m u^2 + 2u \partial_j u \mathcal Z_m \Big) \cdot \partial_j u \, (|\nabla u|^{2s-2} \partial_i\partial_l u\partial_l u)\, \zeta^2 dx\\ &=I_{3,1}+I_{3,2}+I_{3,3}, \end{align*} where the integral, again, is split along the sum $\partial_m\partial _j u + \partial _j \mathcal Z_m u^2 + 2u \partial_j u \mathcal Z_m$. Rewriting $I_{3,1}$ and applying \eqref{hXh} to $y=\Phi$ and $\xi=\frac{1}2 \nabla( |\nabla u|^2)$, we have \begin{align*} I_{3,1} &=-2s\int_U X'_{im}(\Phi)\cdot [ \partial_m\partial _j u \cdot \partial_j u ] \cdot[\partial_i\partial_l u\, \partial_l u] \cdot |\nabla u|^{2s-2}\, \zeta^2 dx\\ &=-2s\int_U X'_{im}(\Phi) \Big [\frac{1}2 \partial _m |\nabla u|^2\Big ] \Big[\frac{1}2 \partial_i |\nabla u|^2\Big]\cdot |\nabla u|^{2s-2} \zeta^2 dx\\ &\le -2s\, c_8 \chi_1^{-2} \int_U \mathcal K \Big|\frac{1}2\nabla (|\nabla u|^2)\Big|^2 |\nabla u|^{2s-2} \zeta^2 dx \le 0. \end{align*} We estimate $I_{3,2}$ and $I_{3,3}$ similarly to $I_{1,2}$ and $I_{1,3}$, and obtain \begin{align*} |I_{3,2}|&\le C(1+\chi_1)^a\int_U \mathcal K |D\mathcal Z| u^2 \, |\nabla u|^{2s} \, |D^2 u| \zeta^2 dx\\ &\le \varepsilon I_5 + C\varepsilon^{-1}(1+\chi_1)^{2a} \mu_{\mathcal Z}^2 \int_U \mathcal K u^4 \, |\nabla u|^{2s} \zeta^2 dx, \end{align*} and \begin{align*} |I_{3,3}|&\le C(1+\chi_1)^a M_{\mathcal Z}\int_U \mathcal K u \, |\nabla u|^{2s+1} \, |D^2 u| \zeta^2 dx\\ &\le \varepsilon I_5 + C\varepsilon^{-1} (1+\chi_1)^{2a} M_{\mathcal Z}^2\int_U \mathcal K u^2 \, |\nabla u|^{2s+2} \zeta^2 dx. 
\end{align*} Therefore, \begin{equation}\label{estithree} \begin{aligned} I_3&\le 2\varepsilon I_5+ C\varepsilon^{-1}(1+\chi_1)^{2a} \mu_{\mathcal Z}^2 \int_U \mathcal K u^4 \, |\nabla u|^{2s} \zeta^2 dx\\ &\quad + C\varepsilon^{-1} (1+\chi_1)^{2a} M_{\mathcal Z}^2\int_U \mathcal K u^2 \, |\nabla u|^{2s+2} \zeta^2 dx. \end{aligned} \end{equation} Combining \eqref{esti} with the estimates \eqref{estione}, \eqref{estitwo} and \eqref{estithree}, we have \begin{align*} &\frac{\phi}{2s+2} \frac{d}{dt} \int_U |\nabla u|^{2s+2} \zeta^2 dx +(c_8\chi_1^{-2}-5\varepsilon)I_5 \le C\varepsilon^{-1}(1+\chi_1)^{2a}\mu_{\mathcal Z}^2 \int_U \mathcal K u^4 |\nabla u|^{2s} \zeta^2 dx \\ & +C\varepsilon^{-1} (1+\chi_1)^{2a}M_{\mathcal Z}^2\int_U \mathcal K u^2 |\nabla u|^{2s+2} \zeta^2 dx +C\varepsilon^{-1}(1+\chi_1)^{2a} \int_U \mathcal K |\nabla u|^{2s+2} |\nabla \zeta|^2 dx\\ & +C(1+\chi_1)^a\mu_{\mathcal Z}\int_U \mathcal K |u|^2|\nabla u|^{2s+1} \zeta |\nabla \zeta| dx + C(1+\chi_1)^aM_{\mathcal Z} \int_U \mathcal K |u||\nabla u|^{2s+2} \zeta |\nabla \zeta| dx +I_4. \end{align*} Using the Cauchy inequality, we have, for the fourth integral on the right-hand side, \begin{equation*} \begin{split} (1+\chi_1)^a\mu_{\mathcal Z} |u|^2|\nabla u|^{2s+1} \zeta |\nabla \zeta| &\le \frac 1 2 \left [\varepsilon^{-1}(1+\chi_1)^{2a}\mu_{\mathcal Z}^2 u^4 |\nabla u|^{2s} \zeta^2+ \varepsilon |\nabla u|^{2s+2} |\nabla \zeta|^2 \right], \end{split} \end{equation*} and, for the fifth integral on the right-hand side, \begin{equation*} \begin{split} (1+\chi_1)^aM_{\mathcal Z}|u| \zeta |\nabla \zeta| &\le \frac 1 2\left[ \varepsilon^{-1} (1+\chi_1)^{2a}M_{\mathcal Z}^2 u^2\zeta^2+ \varepsilon |\nabla \zeta|^2 \right]. 
\end{split} \end{equation*} Therefore, we obtain \begin{equation} \label{d10} \begin{aligned} & \frac{\phi}{2s+2} \frac{d}{dt} \int_U |\nabla u|^{2s+2} \zeta^2 dx +(c_8\chi_1^{-2}-5\varepsilon)I_5 \\ & \le C\varepsilon^{-1}(1+\chi_1)^{2a}\mu_{\mathcal Z}^2 \int_U \mathcal K |\nabla u|^{2s} u^4 \zeta^2 dx +C\varepsilon^{-1} (1+\chi_1)^{2a}M_{\mathcal Z}^2\int_U \mathcal K u^2 |\nabla u|^{2s+2} \zeta^2 dx \\ &\quad +C(\varepsilon^{-1}(1+\chi_1)^{2a}+\varepsilon) \int_U \mathcal K |\nabla u|^{2s+2} |\nabla \zeta|^2 dx + C \int_U |\nabla u|^{2s+2} \zeta|\zeta_t| dx. \end{aligned} \end{equation} Choosing $\varepsilon=c_8\chi_1^{-2}/10$ and integrating \eqref{d10} in time, we find that both $$\phi \sup_{t\in[0,T]}\int_U |\nabla u(x,t)|^{2s+2} \zeta^2(x,t) dx \text{ and } (s+1)c_8\chi_1^{-2}\inttx{0}{T}{U} \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx dt$$ are bounded from above by \begin{align*} &\phi \int_U |\nabla u_0(x)|^{2s+2} \zeta^2(x,0) dx +C\mu_{\mathcal Z}^2 (1+\chi_1)^{2(1+a)}\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} u^4 \zeta^2 dx dt \\ &\quad + C ((1+\chi_1)^{2(1+a)}+\varepsilon)\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (M_{\mathcal Z}^2 u^2\zeta^2 + |\nabla \zeta|^2) dx dt \\ &\quad + C\inttx{0}{T}{U} |\nabla u|^{2s+2} \zeta|\zeta_t| dx dt. \end{align*} Then using the fact $\varepsilon\le c_8 \chi_0^{-2}/10\le C$ for the last $\varepsilon$, we obtain \eqref{iterate1}. We now estimate $I_0$ further. We use \eqref{RMm} and \eqref{xichi} to estimate $M_{\mathcal Z}$, $\mu_{\mathcal Z}$, $\chi_1$, and note, for $m=2,4$, that $u^m \zeta^2\le \widetilde M^m \zeta^2$. With these estimates, we have \begin{equation} \label{JJ0}I_0\le C J_0. \end{equation} Hence, \eqref{irat0} follows directly from the first estimate in \eqref{iterate1}. Similarly, multiplying the second estimate in \eqref{iterate1} by $(s+1)^{-1}c_8^{-1}\chi_1^2$, then using \eqref{xichi} and \eqref{JJ0}, we obtain \eqref{iterate4}. 
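We also note, for clarity, that the above choice of $\varepsilon$ leaves a positive coefficient on $I_5$ in \eqref{d10}:
\begin{equation*}
c_8\chi_1^{-2}-5\varepsilon=c_8\chi_1^{-2}-\frac{5}{10}\,c_8\chi_1^{-2}=\frac{1}{2}\,c_8\chi_1^{-2}>0,
\end{equation*}
and multiplying the integrated inequality by $2s+2$ turns this coefficient into the factor $(s+1)c_8\chi_1^{-2}$ above.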
\end{proof} Next, we combine Lemma \ref{lem61} with the embedding in Theorem \ref{LUembed} to derive a bootstrapping estimate. \begin{lemma}\label{lem62} If $s\ge 0$ then \begin{equation} \label{Kug3} \inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+4} \zeta^2 dxdt+M_*^2 \inttx{0}{T}{U} \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx dt \le C(I_*+J_*), \end{equation} where \begin{align*} I_*&= \chi_*^2M_*^2 \phi \mathcal D_s + T\chi_*^{4(2s+3)}M_*^6 (M_*+1)^{8s+6} \\ &\quad + \chi_*^{2(4+a)}M_*^2(M_*+1)^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (\zeta^2 + |\nabla \zeta|^2) dx dt\\ &\quad + {\rm sgn}(s)M_*^2 \inttx{0}{T}{U} \mathcal K |D^2 u|^2 \zeta^2 dxdt, \end{align*} and \begin{align*} J_*&= T\chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a}\sup|\zeta_t|^2\\ &\quad + \chi_*^{4(1+a\cdot{\rm sgn}(s))}M_*^4 (M_*+1)^{4a\cdot{\rm sgn}(s)}\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} |\zeta_t|^2 dx dt. \end{align*} \end{lemma} \begin{proof} Denote \begin{align*} \alpha&=\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+4} \zeta^2 dxdt,& \gamma&=\inttx{0}{T}{U} |\nabla u|^{2s+2} \zeta|\zeta_t| dx dt,\\ \beta&=\inttx{0}{T}{U} \mathcal K |D^2 u|^2 |\nabla u|^{2s} \zeta^2 dx dt,& \beta_0&=\inttx{0}{T}{U} \mathcal K |D^2 u|^2 \zeta^2 dx dt. \end{align*} For $s\ge 0$, by applying \eqref{ibt} to $w(x)=u(x,t)$, $Q(x)=\mathcal Z(x,t)$, $s:=s+1$, and then integrating in $t$ from $0$ to $T$, we have \begin{align*} \alpha &\le C M_*^2 \beta + CM_*^2(1+(M_*\mu_{\mathcal Z}+M_{\mathcal Z})^2M_*^2) \inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (|\nabla \zeta|^2 + \zeta^2 ) dx dt \\ &\quad + C {\rm sgn}(s)M_*^2 \beta_0 + CT(M_*^2M_{\mathcal Z})^{2s+4}(1+M_*^2M_{\mathcal Z})^{2s+2}. 
\end{align*} Using \eqref{RMm} for upper bounds of $M_{\mathcal Z}$ and $\mu_{\mathcal Z}$, we have \begin{equation} \begin{aligned} \alpha &\le C M_*^2 \beta+ C\chi_*^4M_*^2(M_*+1)^4 \inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (|\nabla \zeta|^2 + \zeta^2 ) dx dt \\ &\quad + C {\rm sgn}(s)M_*^2 \beta_0 + CT\chi_*^{4(2s+3)}M_*^{4(s+2)}(M_*+1)^{4(s+1)}. \end{aligned} \end{equation} We estimate $\beta$ by using \eqref{iterate4} and the fact $\widetilde M\le M_*$ to have \begin{align*} CM_*^2 \beta &\le C \chi_*^2M_*^2\phi \mathcal D_s +C\chi_*^{2(4+a)}M_*^6\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} \zeta^2 dx dt \\ &\quad + C\chi_*^{2(4+a)}M_*^2(M_*+1)^2\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} ( \zeta^2 + |\nabla \zeta|^2) dx dt + C\chi_*^2M_*^2 \gamma. \end{align*} Thus, \begin{equation} \label{albe} \begin{aligned} 2\alpha+M_*^2 \beta &\le C\chi_*^2M_*^2 \phi \mathcal D_s +C\chi_*^{2(4+a)}M_*^6\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} \zeta^2 dx dt \\ &\quad + C \chi_*^{2(4+a)}M_*^2(M_*+1)^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (\zeta^2 + |\nabla \zeta|^2) dx dt\\ &\quad + C {\rm sgn}(s)M_*^2 \beta_0 + CT\chi_*^{4(2s+3)}M_*^{4(s+2)}(M_*+1)^{4(s+1)}+ C\chi_*^2M_*^2 \gamma. \end{aligned} \end{equation} For the second term on the right-hand side of \eqref{albe}, the integral is bounded by \begin{equation*} \inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} \zeta^2 dx dt \le \inttx{0}{T}{U} \mathcal K (|\nabla u|^{2s+2}+1) \zeta^2 dx dt\le \inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2}\zeta^2dx dt +CT. \end{equation*} Combining this with the third term on the right-hand side of \eqref{albe} gives \begin{align*} &2\alpha+M_*^2 \beta\le C\chi_*^2M_*^2 \phi \mathcal D_s + C \chi_*^{2(4+a)}M_*^2(M_*+1)^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2} (\zeta^2 + |\nabla \zeta|^2) dx dt\\ &\quad + C {\rm sgn}(s)M_*^2 \beta_0 + CT\chi_*^{4(2s+3)}M_*^{4(s+2)}(M_*+1)^{4(s+1)}+CT\chi_*^{2(4+a)}M_*^6 + C\chi_*^2 M_*^2 \gamma. 
\end{align*} As far as the two $T$-terms in the last inequality are concerned, the first one has $$M_*^{4(s+2)}(M_*+1)^{4(s+1)}\le M_*^6(M_*+1)^{8s+6},$$ while the second one has $$\chi_*^{2(4+a)}\le \chi_*^{10}\le \chi_*^{4(2s+3)}.$$ Therefore, \begin{equation} 2\alpha+M_*^2 \beta\le CI_* + C\chi_*^2 M_*^2 \gamma.\label{ab0} \end{equation} We estimate the last term by using Cauchy's inequality to obtain \begin{align*} C\chi_*^2 M_*^2 \gamma &\le \inttx{0}{T}{U} \mathcal K|\nabla u|^{2s+4} \zeta^2 dx dt + C\chi_*^4M_*^4\inttx{0}{T}{U} \mathcal K^{-1}|\nabla u|^{2s}|\zeta_t|^2 dxdt\\ &=\alpha + C\chi_*^4M_*^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s} \mathcal K^{-2}|\zeta_t|^2 dxdt. \end{align*} In the last integral, we have \begin{align*} |\nabla u|^{2s} \mathcal K^{-2}&\le |\nabla u|^{2s} (1+|\nabla u|+ M_*^2\chi_*^2)^{2a}\\ & \le C(|\nabla u|^{2s} +|\nabla u|^{2s+2a}+ M_*^{4a}\chi_*^{4a}|\nabla u|^{2s})\\ &\le C[1+|\nabla u|^{2s+2}+ M_*^{4a}\chi_*^{4a}(1+{\rm sgn}(s)|\nabla u|^{2s+2})], \end{align*} which can be conveniently rewritten as \begin{equation*} |\nabla u|^{2s} \mathcal K^{-2}\le C (M_*+1)^{4a}\chi_*^{4a}+C (M_*+1)^{4a\cdot{\rm sgn}(s)}\chi_*^{4a\cdot{\rm sgn}(s)}|\nabla u|^{2s+2}. \end{equation*} Thus, \begin{equation}\label{ab} \begin{aligned} C\chi_*^2M_*^2 \gamma &\le \alpha + C\chi_*^{4(1+a\cdot{\rm sgn}(s))}M_*^4 (M_*+1)^{4a\cdot{\rm sgn}(s)}\inttx{0}{T}{U} \mathcal K |\nabla u|^{2s+2}|\zeta_t|^2 dxdt\\ &\quad + C\chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a}\inttx{0}{T}{U} |\zeta_t|^2 dxdt\le \alpha + CJ_*. \end{aligned} \end{equation} Combining \eqref{ab0} and \eqref{ab} yields \begin{equation*} 2\alpha+M_*^2 \beta\le CI_*+\alpha+CJ_*, \text{ which implies } \alpha+M_*^2 \beta\le C(I_*+J_*). \end{equation*} We have proved \eqref{Kug3}. \end{proof} As one can see from \eqref{Kug3}, the integral of the higher power $2s+4$ of $|\nabla u|$, with the weight $\mathcal K(x,t)$, can be bounded by the corresponding integral of the lower power $2s+2$. 
However, it still involves a second-order term, which is the last integral of $I_*$. This term, as it turns out, can be estimated in \eqref{ab4} below. \subsection{Estimates for the $L_{x,t}^{4-a}$-norm}\label{L4ma} We start by using inequality \eqref{Kug3} with the smallest possible value for $s$, i.e., $s=0$. It will result in the $\mathcal K$-weighted $L_{x,t}^4$-estimate, and, consequently, the $L_{x,t}^{4-a}$-estimate for $|\nabla u|$. \begin{proposition}\label{prop63} One has \begin{equation}\label{ab4} \begin{aligned} &\inttx{0}{T}{U} \mathcal K |\nabla u|^{4} \zeta^2 dxdt+M_*^2\inttx{0}{T}{U} \mathcal K |D^2 u|^2 \zeta^2 dxdt\\ &\le C(1+\sup|\nabla \zeta|^2+\sup |\zeta_t|^2)\Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi( \|\bar u_0\|_{L^2}^2 +\mathcal D_0)\\ &\quad + \chi_*^{4(4+a)} T M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}. \end{aligned} \end{equation} \end{proposition} \begin{proof} Denote by $I$ the sum on the left-hand side of \eqref{ab4}. It follows from \eqref{Kug3} with $s=0$ that \begin{align*} I&\le C\chi_*^2M_*^2 \phi \mathcal D_0 + CT\cdot \big [\chi_*^{12}M_*^6 (M_*+1)^{6}+\chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a}\sup|\zeta_t|^2\big ] \\ &\quad + C \chi_*^{2(4+a)}M_*^2(M_*+1)^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2} (\zeta^2 + |\nabla \zeta|^2) dx dt\\ &\quad + C \chi_*^{4}M_*^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{2} |\zeta_t|^2 dx dt. \end{align*} The second term on the right-hand side is bounded by $$C(1+\sup|\zeta_t|^2)\chi_*^{12} T M_*^4(M_*+1)^8,$$ and the sum of the last two terms on the right-hand side is bounded by \begin{equation*} C(1+\sup |\nabla\zeta|^2+\sup |\zeta_t|^2) \chi_*^{2(4+a)} M_*^2(M_*+1)^4 \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt. \end{equation*} Hence, \begin{align*} I&\le C(1+\sup |\nabla\zeta|^2+\sup |\zeta_t|^2)\Big\{ \chi_*^2M_*^2 \phi \mathcal D_0+\chi_*^{12} T M_*^4(M_*+1)^8 \\ &\quad + \chi_*^{2(4+a)}M_*^2(M_*+1)^4 \inttx{0}{T}{U} \mathcal K|\nabla u|^2 dxdt\Big\}. 
\end{align*} Estimating the last integral by \eqref{gradu2} gives \begin{align*} I&\le C(1+\sup |\nabla\zeta|^2+\sup |\zeta_t|^2)\Big\{ \chi_*^2M_*^2 \phi \mathcal D_0+\chi_*^{12} T M_*^4(M_*+1)^8 \\ &\quad + \chi_*^{2(4+a)}M_*^2(M_*+1)^4 \Big[ \chi_*^2\phi \|\bar u_0\|_{L^2}^2 + \chi_*^{2(4+a)} T M_*^2(M_*+1)^2 + \chi_*^{2(2+a)} {\mathcal E}_*\Big]\Big\}. \end{align*} Grouping like terms on the right-hand side and using simple estimates yields inequality \eqref{ab4}. \end{proof} By selecting the cut-off function $\zeta$ in \eqref{ab4} appropriately, we derive the spatial, as well as the spatial-temporal, interior estimates for $|\nabla u|$. \begin{xnotation} For simplicity, we will write $V\Subset U$ to indicate that $V$ is an open, relatively compact subset of $U$. \end{xnotation} \begin{theorem}\label{thm64} Let $U'\Subset U$. {\rm (i)} One has \begin{equation}\label{ab1} \begin{aligned} \inttx{0}{T}{U'} \mathcal K |\nabla u|^{4} dxdt &\le C\Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi( \|\bar u_0\|_{L^2}^2 +\|\nabla u_0\|_{L^2}^2)\\ &\quad + \chi_*^{4(4+a)} T M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}. \end{aligned} \end{equation} Consequently, \begin{equation}\label{ab11} \inttx{0}{T}{U'} \mathcal K |\nabla u|^{4} dxdt \le C\chi_*^{4(4+a)} M_*^2(M_*+1)^{10} {\mathcal N}_2. \end{equation} {\rm (ii)} If $T_0$ is any number in $(0,T)$, then \begin{equation}\label{ab2} \begin{aligned} \int_{T_0}^{T}\int_{U'} \mathcal K |\nabla u|^{4} dxdt &\le C(1+T_0^{-1})^2\Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi \|\bar u_0\|_{L^2}^2 \\ &\quad + \chi_*^{4(4+a)} T M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}. \end{aligned} \end{equation} Consequently, \begin{equation}\label{ab22} \int_{T_0}^{T}\int_{U'} \mathcal K |\nabla u|^{4} dxdt \le C(1+T_0^{-1})^2\chi_*^{4(4+a)} M_*^2(M_*+1)^{10}{\mathcal N}_*. 
\end{equation} {\rm (iii)} If $s\in[2,4]$, then \begin{equation} \label{ab23} \inttx{0}{T}{U'} \mathcal K |\nabla u|^s dxdt \le C\chi_*^{(4+a)s}M_*^{s-2}(M_*+1)^{3s-2}{\mathcal N}_2, \end{equation} and, for $T_0\in(0,T)$, \begin{equation} \label{ab24} \int_{T_0}^T\int_{U'} \mathcal K |\nabla u|^s dxdt \le C(1+T_0^{-1})^{s-2}\chi_*^{(4+a)s}M_*^{s-2}(M_*+1)^{3s-2}{\mathcal N}_*. \end{equation} \end{theorem} \begin{proof} (i) We fix a cut-off function $\zeta=\zeta(x)$ with $\zeta\equiv 1$ on $U'$. We have $|\nabla\zeta|\le C$ and $\zeta_t\equiv 0$. Then, by using inequality \eqref{ab4}, we obtain \begin{align*} \inttx{0}{T}{U'} \mathcal K |\nabla u|^{4} dxdt &\le \inttx{0}{T}{U} \mathcal K |\nabla u|^{4}\zeta^2 dxdt\\ &\le C\Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi( \|\bar u_0\|_{L^2}^2 +\mathcal D_0)\\ &\quad + \chi_*^{4(4+a)} T M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}. \end{align*} This proves \eqref{ab1}. Now, note on the right-hand side of \eqref{ab1} that \begin{equation} \label{oo} \chi_*^{2(5+a)}M_*^2(M_*+1)^4, \chi_*^{4(4+a)} M_*^4(M_*+1)^8, \chi_*^{4(3+a)}M_*^2(M_*+1)^4 \le \chi_*^{4(4+a)}M_*^2(M_*+1)^{10}. \end{equation} Utilizing these estimates, we obtain \eqref{ab11} from \eqref{ab1}. (ii) We select a different cut-off function $\zeta=\zeta(x,t)$ such that $\zeta=0$ for $0\le t\le T_0/2$, and $\zeta=1$ on $U'\times [T_0,T]$, and its derivatives satisfy \begin{equation} \label{zprop} |\nabla\zeta|\le C \text{ and }0\le \zeta_t \le C T_0^{-1}, \end{equation} where $C>0$ is independent of $T_0,T$. With this function $\zeta$, it is obvious from \eqref{Ds} that $\mathcal D_0=0$. 
Then, by \eqref{ab4}, we have \begin{align*} \int_{T_0}^{T}\int_{U'} \mathcal K |\nabla u|^{4} dxdt &\le \inttx{0}{T}{U} \mathcal K |\nabla u|^{4}\zeta^2 dxdt\\ &\le C(1+T_0^{-2})\Big\{ C\chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi \|\bar u_0\|_{L^2}^2\\ &\quad + \chi_*^{4(4+a)} T M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}, \end{align*} which gives \eqref{ab2}. Utilizing \eqref{oo} again for the right-hand side of \eqref{ab2}, we obtain \eqref{ab22}. (iii) The inequalities \eqref{ab23} and \eqref{ab24} already hold for $s=2$ thanks to \eqref{gradu6a} and \eqref{NNNs}, and for $s=4$ thanks to \eqref{ab1} and \eqref{ab11}. Consider $2<s<4$ now. By interpolation inequality \eqref{Lpinter}, we have \begin{align*} \inttx{0}{T}{U'} \mathcal K |\nabla u|^s dxdt \le \Big( \inttx{0}{T}{U'} \mathcal K |\nabla u|^2 dxdt\Big)^\frac{4-s}2\Big( \inttx{0}{T}{U'} \mathcal K |\nabla u|^4 dxdt\Big)^\frac{s-2}2. \end{align*} Applying inequality \eqref{gradu6a}, respectively \eqref{ab11}, to estimate the first, respectively second, integral on the right-hand side, we obtain \begin{align*} \inttx{0}{T}{U'} \mathcal K |\nabla u|^s dxdt &\le C\Big(\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*\Big)^\frac{4-s}2\Big( \chi_*^{4(4+a)} M_*^2(M_*+1)^{10} {\mathcal N}_2\Big)^\frac{s-2}2, \end{align*} which yields \eqref{ab23}. Similarly, by \eqref{Lpinter}, \eqref{gradu6a} and \eqref{ab22}, we have \begin{align*} \int_{T_0}^T\int_{U'} \mathcal K |\nabla u|^s dxdt & \le \Big( \int_{T_0}^T\int_{U'} \mathcal K |\nabla u|^2 dxdt\Big)^\frac{4-s}2\Big( \int_{T_0}^T\int_{U'} \mathcal K |\nabla u|^4 dxdt\Big)^\frac{s-2}2 \\ &\le C \Big(\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*\Big)^\frac{4-s}2\Big( (1+T_0^{-1})^2\chi_*^{4(4+a)} M_*^2(M_*+1)^{10} {\mathcal N}_*\Big)^\frac{s-2}2, \end{align*} which implies \eqref{ab24}. \end{proof} The estimates obtained in Theorem \ref{thm64} contain the weight $\mathcal K(x,t)$. 
Below, we derive the estimates for the standard Lebesgue $L^{4-a}_{x,t}$-norm (without that weight). \begin{corollary}\label{cor65} Let $U'\Subset U$ and $T_0\in(0,T)$. {\rm (i)} One has \begin{equation}\label{ab31} \begin{aligned} \inttx{0}{T}{U'} |\nabla u|^{4-a} dxdt &\le C\chi_*^{4(4+a)} (M_*+1)^{12} {\mathcal N}_2, \end{aligned} \end{equation} and \begin{equation}\label{ab32} \begin{aligned} \int_{T_0}^{T}\int_{U'} |\nabla u|^{4-a} dxdt &\le C(1+T_0^{-1})^2\chi_*^{4(4+a)} (M_*+1)^{12}{\mathcal N}_*. \end{aligned} \end{equation} {\rm (ii)} If $s$ is any number in $(2-a,4-a)$, then \begin{equation} \label{ab33} \inttx{0}{T}{U'} |\nabla u|^s dxdt \le C\chi_*^{(4+a)(s+a)} (M_*+1)^{4(s+a-1)}{\mathcal N}_2, \end{equation} and \begin{equation} \label{ab34} \int_{T_0}^T\int_{U'} |\nabla u|^s dxdt \le C(1+T_0^{-1})^{s+a-2}\chi_*^{(4+a)(s+a)} (M_*+1)^{4(s+a-1)}{\mathcal N}_*. \end{equation} \end{corollary} \begin{proof} (i) Applying \eqref{kugs} to $s=4-a$ gives \begin{align*} |\nabla u|^{4-a}\le C \mathcal K |\nabla u|^4 +C (1+M_*\chi_*)^8. \end{align*} Combining this with \eqref{ab11}, respectively \eqref{ab22}, we obtain \eqref{ab31}, respectively \eqref{ab32}. (ii) Consider $2-a<s<4-a$. By interpolation inequality \eqref{Lpinter} and then using \eqref{gradu6b}, \eqref{NNNs} and \eqref{ab31}, we have \begin{align*} &\inttx{0}{T}{U'} |\nabla u|^s dxdt \le \Big( \inttx{0}{T}{U'} |\nabla u|^{2-a} dxdt\Big)^\frac{4-a-s}2\Big( \inttx{0}{T}{U'} |\nabla u|^{4-a} dxdt\Big)^\frac{s-(2-a)}2 \\ &\le C\Big[\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*\Big]^\frac{4-a-s}2 \Big[\chi_*^{4(4+a)} (M_*+1)^{12}{\mathcal N}_2\Big]^\frac{s-(2-a)}2 \\ &\le C\chi_*^{(4+a)(s+a)} (M_*+1)^{4(s+a-1)}{\mathcal N}_2. \end{align*} Thus, we obtain \eqref{ab33}. 
Similarly, by \eqref{Lpinter}, \eqref{gradu6b} and \eqref{ab32} we have \begin{align*} &\int_{T_0}^T\int_{U'} |\nabla u|^s dxdt \le \Big( \int_{T_0}^T\int_{U'} |\nabla u|^{2-a} dxdt\Big)^\frac{4-a-s}2\Big( \int_{T_0}^T\int_{U'} |\nabla u|^{4-a} dxdt\Big)^\frac{s-(2-a)}2 \\ &\le C\Big[\chi_*^{2(4+a)}(M_*+1)^4 {\mathcal N}_*\Big]^\frac{4-a-s}2 \Big[(1+T_0^{-1})^2\chi_*^{4(4+a)} (M_*+1)^{12}{\mathcal N}_*\Big]^\frac{s-(2-a)}2 \\ &\le C(1+T_0^{-1})^{s+a-2}\chi_*^{(4+a)(s+a)} (M_*+1)^{4(s+a-1)}{\mathcal N}_*. \end{align*} Thus, we obtain \eqref{ab34}. \end{proof} \begin{remark} The estimate \eqref{ab34} of the $L^s_{x,t}$-norm of $\nabla u(x,t)$, for $t>0$, requires, as far as the initial data $u_0$ is concerned, at most the $L^\infty$-norm of $u_0$. Therefore, it shows the (formal) regularization effect of the PDE \eqref{ueq}. This observation also applies to Corollary \ref{high1} and Theorem \ref{high5} below. \end{remark} \subsection{Estimates for higher $L_{x,t}^s$-norms}\label{Lhigher} In this subsection, we have estimates for the $L_{x,t}^s$-norms of $\nabla u$ with $s>4-a$. \begin{lemma}\label{lem67} Let $s>2$, and $V$ be an open subset of $U$. {\rm (i)} If $\zeta=\zeta(x)$ with compact support in $V$, then \begin{multline}\label{ab66} \inttx{0}{T}{U} \mathcal K |\nabla u|^{s+2}\zeta^2 dxdt \le C (1+\sup|\nabla \zeta|^2)\\ \cdot \Big\{ \chi_*^{4(s+2+a)}M_*^2 (M_*+1)^{4s+2} {\mathcal N}_s + \chi_*^{2(4+a)}M_*^2(M_*+1)^4 \inttx{0}{T}{V} \mathcal K |\nabla u|^s dx dt\Big\}. \end{multline} {\rm (ii)} If $\zeta=\zeta(x,t)$ with $\zeta(x,0)\equiv 0$ and, for each $t\in[0,T]$, the mapping $\zeta(\cdot,t)$ has compact support in $V$, then \begin{multline}\label{ab77} \inttx{0}{T}{U} \mathcal K |\nabla u|^{s+2} \zeta^2 dxdt \le C(1+\sup|\nabla \zeta|^2+\sup|\zeta_t|^2)\\ \cdot \Big\{ \chi_*^{4(s+2+a)}M_*^2 (M_*+1)^{4s+2} {\mathcal N}_* + \chi_*^{2(4+a)}M_*^2(M_*+1)^6\inttx{0}{T}{V} \mathcal K |\nabla u|^s dx dt\Big\}. 
\end{multline} \end{lemma} \begin{proof} Denote $$I=\inttx{0}{T}{U} \mathcal K |\nabla u|^{2(s+2)}\zeta^2 dxdt \text{ and }J=\inttx{0}{T}{V} \mathcal K |\nabla u|^{2(s+1)}dx dt.$$ (i) Consider $s>0$. We estimate $I$ by \eqref{Kug3}, neglecting the second term on the left-hand side. Note in this case that $\zeta_t=0$ and hence $J_*=0$. We then use \eqref{ab4} to estimate the last term of $I_*$. The result is \begin{align*} I &\le C M_*^2\chi_*^2\phi \mathcal D_s + CT\chi_*^{4(2s+3)}M_*^6 (M_*+1)^{8s+6} + C\chi_*^{2(4+a)}M_*^2(M_*+1)^4 (1+\sup|\nabla \zeta|^2) J\\ &\quad + C(1+\sup|\nabla \zeta|^2)\Big[ \chi_*^{2(5+a)}M_*^2(M_*+1)^4\phi (\|\bar u_0\|_{L^2}^2+\|\nabla u_0\|_{L^2}^2) + T \chi_*^{4(4+a)}M_*^4(M_*+1)^8 \\ &\quad +\chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_* \Big]. \end{align*} For the terms containing the initial data, we estimate \begin{align*} M_*^2\chi_*^2\le \chi_*^{2(5+a)}M_*^2(M_*+1)^4, \end{align*} and for the terms containing $T$, we use \begin{equation}\label{Tcoef} \chi_*^{4(2s+3)}M_*^6 (M_*+1)^{8s+6},\ \chi_*^{4(4+a)}M_*^4(M_*+1)^8\le \chi_*^{8(s+2)+4a}M_*^4 (M_*+1)^{8(s+1)}. \end{equation} Hence, we obtain \begin{multline}\label{Kug6} I\le C (1+\sup|\nabla \zeta|^2) \Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi(\|\bar u_0\|_{L^2}^2+ \|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L^{2s+2}}^{2s+2})\\ + T\chi_*^{8(s+2)+4a}M_*^4 (M_*+1)^{8(s+1)} +\chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_* + \chi_*^{2(4+a)}M_*^2(M_*+1)^4 J \Big\}. \end{multline} Now, consider $s>2$. 
By replacing $2s+2$ in \eqref{Kug6} with $s$, noting that $$I \text{ becomes }\inttx{0}{T}{U} \mathcal K |\nabla u|^{s+2}\zeta^2 dxdt,\quad J \text{ becomes }\inttx{0}{T}{V} \mathcal K |\nabla u|^s dx dt, $$ the power $8(s+2)+4a$ becomes $4(s+2+a)$, and the power $8(s+1)$ becomes $4s$, we obtain \begin{multline}\label{ab6} \inttx{0}{T}{U} \mathcal K |\nabla u|^{s+2}\zeta^2 dxdt \le C (1+\sup|\nabla \zeta|^2)\\ \cdot \Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi(\|\bar u_0\|_{L^2}^2+ \|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L^s}^s) + T\chi_*^{4(s+2+a)}M_*^4 (M_*+1)^{4s}\\ +\chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_* + \chi_*^{2(4+a)}M_*^2(M_*+1)^4 \inttx{0}{T}{V} \mathcal K |\nabla u|^s dx dt\Big\}. \end{multline} On the right-hand side of \eqref{ab6}, in order to group the terms $\phi(\|\bar u_0\|_{L^2}^2+ \|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L^s}^s)$, $T$, ${\mathcal E}_*$ together, we estimate their coefficients by $$\chi_*^{2(5+a)}M_*^2(M_*+1)^4,\chi_*^{4(s+2+a)}M_*^4 (M_*+1)^{4s},\chi_*^{4(3+a)}M_*^2(M_*+1)^4 \le \chi_*^{4(s+2+a)}M_*^2 (M_*+1)^{4s+2}.$$ Then inequality \eqref{ab66} follows from \eqref{ab6}. (ii) Consider $s>0$. Note that $\mathcal D_0=\mathcal D_s=0$. We have from \eqref{Kug3} that \begin{align*} I&\le C(1+\sup|\nabla \zeta|^2+\sup|\zeta_t|^2) \Big\{ T\cdot\big[\chi_*^{4(2s+3)}M_*^6 (M_*+1)^{8s+6} + \chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a}\big] \\ &\quad +\big[ \chi_*^{2(4+a)}M_*^2(M_*+1)^4 + \chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a}\big]\cdot J\Big\} + M_*^2 \inttx{0}{T}{U} \mathcal K |D^2 u|^2 \zeta^2 dxdt. \end{align*} We use \eqref{ab4} to estimate the last term $\displaystyle M_*^2 \inttx{0}{T}{U} \mathcal K |D^2 u|^2 \zeta^2 dxdt$. For the $T$-term, we use \eqref{Tcoef} again. For the $J$-term, we use \begin{equation*} \chi_*^{2(4+a)}M_*^2(M_*+1)^4 , \chi_*^{4(1+a)}M_*^4 (M_*+1)^{4a} \le \chi_*^{2(4+a)}M_*^2(M_*+1)^6. 
\end{equation*} Combining these estimates gives \begin{align*} I&\le C(1+\sup|\nabla \zeta|^2+\sup|\zeta_t|^2) \Big\{ T\chi_*^{8(s+2)+4a}M_*^4 (M_*+1)^{8(s+1)} + \chi_*^{2(4+a)} M_*^2(M_*+1)^6 J\\ &\quad + \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi \|\bar u_0\|_{L^2}^2+ T \chi_*^{4(4+a)} M_*^4(M_*+1)^8 + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\Big\}. \end{align*} Simplifying the right-hand side once more, we obtain \begin{equation}\label{Kug7} \begin{aligned} I &\le C(1+\sup|\nabla \zeta|^2+\sup|\zeta_t|^2) \Big\{ \chi_*^{2(5+a)}M_*^2(M_*+1)^4 \phi \|\bar u_0\|_{L^2}^2 \\ &\quad + T\chi_*^{8(s+2)+4a}M_*^4 (M_*+1)^{8(s+1)} + \chi_*^{4(3+a)}M_*^2(M_*+1)^4 {\mathcal E}_*\\ &\quad + \chi_*^{2(4+a)}M_*^2(M_*+1)^6 J\Big\}. \end{aligned} \end{equation} As in the proof of part (i), when $s>2$, replacing $2s+2$ in \eqref{Kug7} with $s$ yields \eqref{ab77}. \end{proof} \begin{theorem}\label{high0} If $U'\Subset U$ and $s\ge 4$, then \begin{equation}\label{ih0} \inttx{0}{T}{U'}\mathcal K |\nabla u|^{s}dxdt \le C \chi_*^{(4+a)(s+2)}M_*^2 (M_*+1)^{4s}{\mathcal N}_{s-2}. \end{equation} \end{theorem} \begin{proof} (a) When $s=4$, the inequality \eqref{ih0} holds true thanks to the estimate \eqref{ab11}. Hence, we focus only on the case $s>4$. (b) Consider the case $s=s_*+2m$ with $s_*>2$ and $m\in \mathbb N$. Let $V$ be an open subset of $U$ such that $U'\Subset V\Subset U$. We claim that \begin{equation}\label{kug0} \begin{aligned} \inttx{0}{T}{U'}\mathcal K |\nabla u|^{s}dxdt & \le C\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{4m} \inttx{0}{T}{V} \mathcal K |\nabla u|^{s_*}dxdt\\ &\quad +C \chi_*^{(4+a)s}M_*^2 (M_*+1)^{4s-6}{\mathcal N}_{s-2}. \end{aligned} \end{equation} \textit{Proof of \eqref{kug0}.} Let $\{U_k\}_{k=0}^m$ be a family of smooth, open subsets of $U$ such that \begin{equation}\label{Usets} U'\subset U_m\Subset U_{m-1}\Subset U_{m-2}\Subset \ldots \Subset U_{1} \Subset U_0 \subset V\Subset U. 
\end{equation} Denote $\displaystyle y_k=\inttx{0}{T}{U_k} \mathcal K |\nabla u|^{s_*+2k}dx dt$ for $0\le k\le m$. Let $k\in\{0,1,2,\ldots, m-1\}$. Choose $\zeta=\zeta_k(x)$, a $C^2$ cut-off function which is equal to $1$ on $U_{k+1}$ and has compact support in $U_k$. Applying \eqref{ab66} to $s:=s_*+2k$, we have \begin{equation}\label{yk2} y_{k+1}\le A y_k+B, \end{equation} where $A=C_k\chi_*^{2(4+a)} M_*^2(M_*+1)^4$ and $\displaystyle B= C_k\chi_*^{4(s_*+2k+2+a)}M_*^2(M_*+1)^{4(s_*+2k)+2}\widehat B$, with \begin{equation*} \widehat B =\phi(\|\bar u_0\|_{L^2}^2+ \|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L_{s_*+2k}}^{s_*+2k})+T+{\mathcal E}_* \end{equation*} for some $C_k>0$ independent of $T$. Note that \begin{align*} \|\nabla u_0\|_{L_{s_*+2k}}^{s_*+2k} =\int_U |\nabla u_0|^{s_*+2k}dx\le \int_U \big[|\nabla u_0|^2+|\nabla u_0|^{s_*+2(m-1)}\big]dx. \end{align*} Hence, \begin{equation*} \widehat B \le \phi(\|\bar u_0\|_{L^2}^2+ 2\|\nabla u_0\|_{L^2}^2+\|\nabla u_0\|_{L^{s_*+2(m-1)}}^{s_*+2(m-1)})+T+{\mathcal E}_* \le 2{\mathcal N}_{s-2}. \end{equation*} Let $C_*=2\max\{C_k:k=1,2,\ldots,m-1\}$. Hence, \begin{equation}\label{yk1} y_{k+1}\le A_* y_k+B_k, \end{equation} where \begin{align*} A_*=C_*\chi_*^{2(4+a)} M_*^2(M_*+1)^4,\quad B_k= C_*\chi_*^{4(s_*+2k+2+a)}M_*^2(M_*+1)^{4(s_*+2k)+2}{\mathcal N}_{s-2} = B_* S^k, \end{align*} with $ S=\chi_*^8(M_*+1)^8$ and $B_*=A_*\chi_*^{4s_*+2a}(M_*+1)^{4s_*-2}{\mathcal N}_{s-2}$. Iterating \eqref{yk1}, we obtain \begin{align*} y_{k+1} &\le A_*( A_*y_{k-1}+B_{k-1})+B_{k} = A_*^2y_{k-1}+A_*B_{k-1}+B_{k}\\ &\le A_*^3y_{k-2}+A_*^2B_{k-2}+A_*B_{k-1}+B_{k}\\ &\le \cdots\le A_*^{k+1}y_{0}+\sum_{j=0}^{k-1} A_*^{k-j} B_j + B_{k}. \end{align*} Letting $k=m-1$, we then have \begin{equation}\label{yk0} y_m \le A_*^m y_0 + \sum_{j=0}^{m-2} A_*^{m-1-j}B_j+B_{m-1} . 
\end{equation} Dealing with the middle sum on the right-hand side of \eqref{yk0}, elementary calculations show, for $0\le j\le m-2$, that \begin{align*} A_*^{m-j-1}B_j &=A_*^{m-j} \chi_*^{4s_*+2a+8j}(M_*+1)^{4s_*-2+8j}{\mathcal N}_{s-2}\\ &= C \chi_*^{4(s_*+2m)+2a(m-j+1)} M_*^{2(m-j)} (M_*+1)^{4(s_*+2m)-4(m-j)-2}{\mathcal N}_{s-2}\\ &\le C \chi_*^{4s+2a(m+1)}\cdot [ M_*^2 (M_*+1)^{2(m-j-1)} ]\cdot (M_*+1)^{4s-4(m-j)-2}{\mathcal N}_{s-2}\\ &= C \chi_*^{4s+2a(m+1)} M_*^2 (M_*+1)^{4s-2(m-j)-4}{\mathcal N}_{s-2}. \end{align*} Note that $m-j\ge 2$ and \begin{equation}\label{sm} s>2+2m. \end{equation} Then we have \begin{equation}\label{abm} A_*^{m-1-j}B_j \le C \chi_*^{(4+a)s} M_*^2(M_*+1)^{4s-8}{\mathcal N}_{s-2}. \end{equation} For the last term in \eqref{yk0}, one has \begin{align}\label{bm} B_{m-1} &=C\chi_*^{4(s_*+2m+a)}M_*^2(M_*+1)^{4(s_*+2m)-6}{\mathcal N}_{s-2}\notag\\ &=C\chi_*^{4(s+a)}M_*^2(M_*+1)^{4s-6}{\mathcal N}_{s-2}. \end{align} Then combining \eqref{yk0} with \eqref{abm} and \eqref{bm} gives \begin{align*} y_m&\le C\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{4m}y_0+C (m-1)\chi_*^{(4+a)s} M_*^2(M_*+1)^{4s-8}{\mathcal N}_{s-2}\\ &\quad +C\chi_*^{4(s+a)}M_*^2(M_*+1)^{4s-6}{\mathcal N}_{s-2}\\ &\le C\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{4m}y_0+C \chi_*^{(4+a)s}M_*^2 (M_*+1)^{4s-6}{\mathcal N}_{s-2}. \end{align*} Therefore, estimate \eqref{kug0} follows. (c) Consider the general case $s>4$ now. Then there exist $s_*\in(2,4]$ and integer $m\ge 1$ such that $s=s_*+2m$. We apply estimate \eqref{kug0} using the relation \eqref{sm}, and have \begin{align*} \inttx{0}{T}{U'}\mathcal K |\nabla u|^{s}dxdt & \le C\chi_*^{(4+a)(s-2)}M_*^{2m}(M_*+1)^{2(s-2)} \inttx{0}{T}{V} \mathcal K |\nabla u|^{s_*}dxdt\\ &\quad +C \chi_*^{(4+a)s}M_*^2 (M_*+1)^{4s-6}{\mathcal N}_{s-2}. 
\end{align*} Using \eqref{sm} again, $$M_*^{2m}\le M_*^2(M_*+1)^{2m-2}\le M_*^2(M_*+1)^{s-4}.$$ Then \begin{align*} \inttx{0}{T}{U'}\mathcal K |\nabla u|^{s}dxdt & \le C\chi_*^{(4+a)(s-2)}M_*^2(M_*+1)^{3s-8} \inttx{0}{T}{V} \mathcal K |\nabla u|^{s_*}dxdt\\ &\quad +C \chi_*^{(4+a)s}M_*^2 (M_*+1)^{4s-6}{\mathcal N}_{s-2}. \end{align*} Note, by Young's inequality and applying \eqref{ab11} to $U':=V$, that \begin{align*} &\inttx{0}{T}{V} \mathcal K |\nabla u|^{s_*}dxdt \le C(T+\inttx{0}{T}{V} \mathcal K |\nabla u|^{4}dxdt) \\ &\le C \chi_*^{4(4+a)}(M_*+1)^{12}\big[ \phi (\|\bar u_0\|_{L^2}^2+\|\nabla u_0\|_{L^2}^2) +T+{\mathcal E}_*\big]. \end{align*} Due to \eqref{NNNs}, we can conclude that \begin{equation*} \inttx{0}{T}{U'}\mathcal K |\nabla u|^{s}dxdt \le C\chi_*^{(4+a)(s+2)}M_*^2(M_*+1)^{3s+4} {\mathcal N}_2 +C \chi_*^{(4+a)s}M_*^2 (M_*+1)^{4s-6}{\mathcal N}_{s-2}, \end{equation*} and we obtain \eqref{ih0}. \end{proof} \begin{proposition}\label{high2} If $U'\Subset V\Subset U$ and $s=s_*+2m$ with $s_*>2$ and $m\in \mathbb N$, then \begin{equation}\label{kug3} \begin{aligned} \int_{T_0}^{T}\int_{U'}\mathcal K |\nabla u|^{s}dxdt & \le C(1+t_0^{-1})^{2m}\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{6m} \int_{T_0-t_0}^{T}\int_{V}\mathcal K |\nabla u|^{s_*}dxdt\\ &\quad +C(1+t_0^{-1})^{2m}\chi_*^{(4+a)s} M_*^2(M_*+1)^{4s-6}{\mathcal N}_*, \end{aligned} \end{equation} for any numbers $T_0$ and $t_0$ such that $0<t_0<T_0<T$. \end{proposition} \begin{proof} Let $\{U_k\}_{k=0}^m$ be as in \eqref{Usets}. Let $\tau_0=T_0-t_0<\tau_1<\tau_2<\ldots<\tau_m=T_0$ be evenly spaced. Define $\displaystyle y_k=\int_{\tau_k}^{T}\int_{U_k} \mathcal K |\nabla u|^{s_*+2k} dxdt$ for $0\le k\le m$. Let $k\in\{0,1,2,\ldots, m-1\}$. 
Let $\zeta_k(x,t)$ be a smooth cut-off function which is equal to one on $U_{k+1}\times [\tau_{k+1},T]$, has compact support in $U_k\times[\tau_k,T]$, and satisfies $$|\nabla \zeta_k|\le C'_k,\quad 0\le \zeta_{k,t}\le \frac2{\tau_{k+1}-\tau_k}=\frac{2m}{t_0},$$ where $C'_k>0$ is independent of $T,T_0,t_0$. Then using $s:=s_*+2k$ and $\zeta=\zeta_k$ in \eqref{ab77}, we have the same relation \eqref{yk2}, with the constants defined by \begin{align*} A&=C_k(1+t_0^{-2})\chi_*^{2(4+a)}M_*^2(M_*+1)^6,\\ B&=C_k(1+t_0^{-2})\chi_*^{4(s_*+2k+2+a)}M_*^2(M_*+1)^{4(s_*+2k)+2}{\mathcal N}_*, \end{align*} for some positive constant $C_k$ independent of $T,T_0,t_0$. Setting $C_*=\max\{C_k:k=0,1,\ldots,m-1\}$, we obtain \eqref{yk1} where \begin{align*} A_*&=C_*(1+t_0^{-1})^2\chi_*^{2(4+a)}M_*^2(M_*+1)^6,\\ B_k&=C_*(1+t_0^{-1})^2\chi_*^{4(s_*+2k+2+a)}M_*^2(M_*+1)^{4(s_*+2k)+2}{\mathcal N}_* = B_* S^k \end{align*} with the same $S=\chi_*^8(M_*+1)^8$, but \begin{align*} B_*&=C_*(1+t_0^{-1})^2\chi_*^{4(s_*+2+a)}M_*^2(M_*+1)^{4s_*+2}{\mathcal N}_*=A_* \chi_*^{4s_*+2a}(M_*+1)^{4s_*-4}{\mathcal N}_*. \end{align*} Then we obtain \eqref{yk0} by iteration again. For $0\le j\le m-2$, \begin{align*} A_*^{m-j-1}B_j &=A_*^{m-j} \chi_*^{4s_*+2a+8j}(M_*+1)^{4s_*+8j-4}{\mathcal N}_*\\ &\le C(1+t_0^{-1})^{2(m-j)} \chi_*^{4(s_*+2m)+2a(m-j+1)} M_*^{2(m-j)} (M_*+1)^{4s_*+6m+2j-4}{\mathcal N}_*. \end{align*} Simply estimating $M_*^{2(m-j)}\le M_*^2 (M_*+1)^{2(m-j-1)}$, we then have \begin{align*} A_*^{m-j-1}B_j &\le C(1+t_0^{-1})^{2(m-j)} \chi_*^{4(s_*+2m)+2a(m+1)} M_*^2(M_*+1)^{4s_*+8m-6}{\mathcal N}_*\\ &= C(1+t_0^{-1})^{2m} \chi_*^{(4+a)s} M_*^2(M_*+1)^{4s -6}{\mathcal N}_*. \end{align*} Also, \begin{equation*} B_{m-1}=C(1+t_0^{-1})^2\chi_*^{4(s+a)}M_*^2(M_*+1)^{4s-6}{\mathcal N}_*.
\end{equation*} Thus, we have from \eqref{yk0} that \begin{align*} y_m&\le C(1+t_0^{-1})^{2m}\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{6m}y_0\\ &\quad +C (m-1)(1+t_0^{-1})^{2m} \chi_*^{(4+a)s} M_*^2(M_*+1)^{4s -6}{\mathcal N}_*\\ &\quad +C(1+t_0^{-1})^2\chi_*^{4(s+a)}M_*^2(M_*+1)^{4s-6}{\mathcal N}_*. \end{align*} Hence, we obtain \eqref{kug3}. \end{proof} \begin{theorem}\label{high3} If $U'\Subset U$ and $s>4$, then one has, for any $T_0\in (0,T)$, that \begin{equation} \label{kug4} \int_{T_0}^{T}\int_{U'}\mathcal K |\nabla u|^{s}dxdt \le C (1+T_0^{-1})^s \chi_*^{(4+a)(s+2)}M_*^2 (M_*+1)^{4s+2}{\mathcal N}_*. \end{equation} \end{theorem} \begin{proof} There exist $2<s_*\le 4$ and integer $m\ge 1$ such that $s=s_*+2m$. Let $t_0:=T_0/2$, and $V$ be a set with $U'\Subset V\Subset U$. Applying \eqref{kug3}, we have \begin{equation}\label{kug3b} \begin{aligned} \int_{T_0}^{T}\int_{U'}\mathcal K |\nabla u|^{s}dxdt & \le C(1+T_0^{-1})^{2m}\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{6m} y_*\\ &\quad +C(1+T_0^{-1})^{2m}\chi_*^{(4+a)s} M_*^2(M_*+1)^{4s-6}{\mathcal N}_*. \end{aligned} \end{equation} where $\displaystyle y_*=\int_{T_0/2}^{T}\int_V\mathcal K |\nabla u|^{s_*}dxdt$. By Young's inequality and \eqref{ab22} applied to $U':=V$, we have \begin{equation*} y_*\le C\Big(T+\int_{T_0/2}^{T}\int_V\mathcal K |\nabla u|^{4}dxdt\Big) \le C(1+T_0^{-1})^2 \chi_*^{4(4+a)}(M_*+1)^{12}{\mathcal N}_*. \end{equation*} Then \begin{align*} C&(1+T_0^{-1})^{2m}\chi_*^{2(4+a)m}M_*^{2m}(M_*+1)^{6m}y_* \le C(1+T_0^{-1})^{s-2}\chi_*^{(4+a)(s-2)}M_*^{2}(M_*+1)^{8m-2}y_* \\ &\le C(1+T_0^{-1})^{s}\chi_*^{(4+a)(s+2)} M_*^2(M_*+1)^{4s+2}{\mathcal N}_*. \end{align*} Combining this with \eqref{kug3b} gives \eqref{kug4}. \end{proof} \begin{corollary}\label{high1} Let $U'\Subset U$ and $s> 4-a$. Then \begin{equation}\label{ih1} \inttx{0}{T}{U'} |\nabla u|^{s}dxdt \le C \chi_*^{(4+a)(s+a+2)}(M_*+1)^{4(s+a+1/2)}{\mathcal N}_{s+a-2}. 
\end{equation} Moreover, it holds, for any number $T_0\in(0,T)$, that \begin{equation} \label{ih2} \int_{T_0}^{T}\int_{U'}|\nabla u|^{s}dxdt \le C (1+T_0^{-1})^{s+a} \chi_*^{(4+a)(s+a+2)} (M_*+1)^{4(s+a+1)}{\mathcal N}_*. \end{equation} \end{corollary} \begin{proof} Using \eqref{kugs} and applying \eqref{ih0} with $s$ being substituted by $s+a$, we have \begin{align*} \inttx{0}{T}{U'}|\nabla u|^{s}dxdt &\le C\inttx{0}{T}{U'} (\mathcal K|\nabla u|^{s+a} + (\chi_*(M_*+1))^{2(s+a)})dxdt\\ &\le C \chi_*^{(4+a)(s+a+2)}M_*^2 (M_*+1)^{4(s+a)}{\mathcal N}_{s+a-2} + CT\chi_*^{2(s+a)}(M_*+1)^{2(s+a)}. \end{align*} Note that $T\le {\mathcal N}_{s+a-2}$. Then \eqref{ih1} follows. Similarly, using \eqref{kug4}, instead of \eqref{ih0}, we obtain \eqref{ih2}. \end{proof} \section{Gradient estimates (III)}\label{maxintime} This section is focused on the estimates for the $L_t^\infty L_x^s$-norms of $\nabla u$. For $s\ge 2$, replacing $s$ in \eqref{irat0} with $s/2-1$ gives \begin{equation}\label{iterate2} \begin{aligned} I &\stackrel{\rm def}{=} \phi \sup_{t\in[0,T]}\int_U |\nabla u(x,t)|^s \zeta^2(x,t) dx\\ &\le \phi \int_U |\nabla u_0(x)|^s \zeta^2(x,0) dx +C\chi_*^{2(3+a)}M_*^4\inttx{0}{T}{U} \mathcal K |\nabla u|^{s-2} \zeta^2 dx dt \\ &\quad + C \chi_*^{2(3+a)}(M_*+1)^2\inttx{0}{T}{U} \mathcal K |\nabla u|^s (\zeta^2 + |\nabla \zeta|^2) dx dt + C\inttx{0}{T}{U} |\nabla u|^s \zeta|\zeta_t| dx dt. \end{aligned} \end{equation} \begin{theorem}\label{high4} If $U'\Subset U$, then one has, for all $t\in[0,T]$, that \begin{align}\label{pwtall} \phi \int_{U'} |\nabla u(x,t)|^s dx &\le \phi \int_U |\nabla u_0(x)|^s dx \notag \\ &\quad +C\begin{cases} \chi_*^{4(4+a)}(M_*+1)^{6}{\mathcal N}_0 &\text{ if }s=2,\\ \chi_*^{(s+2)(4+a)}M_*^{s-2}(M_*+1)^{3s+2}{\mathcal N}_2 &\text{ if }2<s\le 4,\\ \chi_*^{(s+4)(4+a)}M_*^2 (M_*+1)^{4(s+1)}{\mathcal N}_{s-2} &\text{ if }s>4. 
\end{cases} \end{align} \end{theorem} \begin{proof} Denote $\displaystyle J=\phi \sup_{t\in[0,T]}\int_{U'} |\nabla u(x,t)|^s dx.$ Choose $\zeta$ to be the same function $\zeta(x)$ as in the proof of Theorem \ref{thm64}(i). Then we have the relation \begin{equation}\label{JI} J\le I. \end{equation} We then bound $I$ by using inequality \eqref{iterate2}, noticing that the last integral of this inequality vanishes, and that the integrand of the second term on its right-hand side can be bounded by \begin{align*} \mathcal K |\nabla u|^{s-2} \le \mathcal K (1 + |\nabla u|^s)\le 1+\mathcal K |\nabla u|^s. \end{align*} After this, combining the two constants for the integrals involving $\mathcal K |\nabla u|^{s}$, we obtain \begin{equation}\label{start7} \begin{aligned} J&\le \phi \int_U |\nabla u_0(x)|^s dx + C\chi_*^{2(3+a)}M_*^4 T\\ &\quad +C\chi_*^{2(3+a)}(M_*+1)^4 \inttx{0}{T}{U} \mathcal K |\nabla u|^s (\zeta^2 + |\nabla \zeta|^2) dx dt. \end{aligned} \end{equation} Consider $s=2$. Using \eqref{gradu4} to estimate the last integral in \eqref{start7}, we obtain \begin{align*} J&\le \phi \int_U |\nabla u_0(x)|^s dx + C\chi_*^{2(3+a)}M_*^4 T\\ &\quad +C\chi_*^{2(3+a)}(M_*+1)^4 \cdot \chi_*^{2(4+a)}(M_*+1)^2 {\mathcal N}_0. \end{align*} Using the generous bound $2(3+a)<2(4+a)$ for the first two exponents of $\chi_*$ above, we obtain the first estimate in \eqref{pwtall}. Consider $2< s\le 4$. Using \eqref{ab23} to estimate the last integral in \eqref{start7}, we obtain \begin{align*} J&\le \phi \int_U |\nabla u_0(x)|^s dx + C\chi_*^{2(3+a)}M_*^4 T\\ &\quad +C\chi_*^{2(3+a)}(M_*+1)^4 \cdot \chi_*^{(4+a)s}M_*^{s-2}(M_*+1)^{3s-2}{\mathcal N}_2. \end{align*} Then the second estimate in \eqref{pwtall} follows. Consider $s> 4$. Using \eqref{ih0} to estimate the last integral in \eqref{start7}, we have \begin{align*} J&\le \phi \int_U |\nabla u_0(x)|^s dx + C\chi_*^{2(3+a)}M_*^4 T\\ &\quad +C\chi_*^{2(3+a)}(M_*+1)^4 \cdot \chi_*^{(4+a)(s+2)}M_*^2 (M_*+1)^{4s} {\mathcal N}_{s-2}.
\end{align*} With simple manipulations, we obtain from this the third estimate in \eqref{pwtall}. \end{proof} \begin{theorem}\label{high5} Let $U'\Subset U$ and $T_0\in(0,T)$. Then it holds, for all $t\in[T_0,T]$, that \begin{align}\label{pwtnew} &\phi\int_{U'} |\nabla u(x,t)|^{s}dx \le C \notag \\ & \cdot \begin{cases} \chi_*^{(4+a)^2}(1+T_0^{-1})^{1+a}(M_*+1)^{2(3+a)}\big\{ M_*^{a} (M_*+1)^{2+a} {\mathcal N}_*+{\mathcal N}_0\big\} &\text{ if }s=2,\\ \chi_*^{(4+a)(s+a+2)} (1+T_0^{-1})^{s+a-1} M_*^{s-2} (M_*+1)^{3s+4a+2} {\mathcal N}_* &\text{ if }2< s\le 4-a,\\ \chi_*^{(4+a)(s+a+4)} (1+T_0^{-1})^{s+a+1} M_*^{s-2} (M_*+1)^{3s+4a+10} {\mathcal N}_* &\text{ if }4-a<s\le 4,\\ \chi_*^{(4+a)(s+a+4)}(1+T_0^{-1})^{s+a+1}M_*^2 (M_*+1)^{4s+4a+6} {\mathcal N}_* &\text{ if }s>4. \end{cases} \end{align} Consequently, one has, for all $s\ge 2$ and $t\in[T_0,T]$, that \begin{equation}\label{pwt6} \phi\int_{U'} |\nabla u(x,t)|^{s}dx \le C \chi_*^{(4+a)(s+a+4)}(1+T_0^{-1})^{s+a+1}(M_*+1)^{4(s+a+2)} {\mathcal N}_*. \end{equation} \end{theorem} \begin{proof} Choose $\zeta(x,t)$ to be the cut-off function in the proof of Theorem \ref{thm64}(ii) which satisfies additionally that $\zeta$ has compact support in $V\times [T_0/2,T]$, where $U'\Subset V\Subset U$. Let $J$ be the same as in Theorem \ref{high4}. Again, we have \eqref{JI}, and use \eqref{iterate2} to estimate $I$. Note, on the right-hand side of \eqref{iterate2}, that \begin{equation*} \mathcal K |\nabla u|^{s-2}\le 1+|\nabla u|^s,\quad \mathcal K |\nabla u|^s\le |\nabla u|^s. \end{equation*} Utilizing these properties as well as \eqref{zprop}, we have from \eqref{JI} and \eqref{iterate2} that \begin{equation}\label{JJ} J\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{2(3+a)}(M_*+1)^4(1+T_0^{-1}) \int_{T_0/2}^T\int_V |\nabla u|^s dx dt. 
\end{equation} Estimate the last integral in \eqref{JJ}, \begin{align*} &\int_{T_0/2}^{T}\int_V |\nabla u|^s dx dt = \int_{T_0/2}^{T}\int_V \mathcal K|\nabla u|^s \mathcal K^{-1}dx dt\\ &\le C\int_{T_0/2}^{T}\int_V \mathcal K(|\nabla u|^{s+a}+|\nabla u|^s\chi_*^{2a}(M_*+1)^{2a})dxdt\\ &= C\int_{T_0/2}^{T}\int_V \mathcal K|\nabla u|^{s+a}dxdt+C\chi_*^{2a}(M_*+1)^{2a} \int_{T_0/2}^{T}\int_V \mathcal K|\nabla u|^s dxdt. \end{align*} Denote by $I_1$ and $I_2$ the last two double integrals. We estimate them, in calculations below, by using inequalities \eqref{ab24} and \eqref{kug4} with $T_0:=T_0/2$ and $U':=V$. Case $s=2$. Applying \eqref{ab24} to $s:=2+a\in(2,4)$ to bound $I_1$, and applying \eqref{gradu4} to $s:=2$ to bound $I_2$ give \begin{align*} J &\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{2(3+a)}(M_*+1)^4(1+T_0^{-1})\\ &\quad \cdot\Big\{ (1+T_0^{-1})^a \chi_*^{(4+a)(2+a)}M_*^a(M_*+1)^{4+3a}{\mathcal N}_* + \chi_*^{2a}(M_*+1)^{2a} \cdot \chi_*^{2(4+a)}(M_*+1)^2 {\mathcal N}_0\Big\}\\ &\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{(4+a)^2}(1+T_0^{-1})^{1+a}M_*^{a} (M_*+1)^{8+3a} {\mathcal N}_*\\ &\quad + C \chi_*^{14+6a}(1+T_0^{-1})(M_*+1)^{6+2a} {\mathcal N}_0. \end{align*} We obtain the first estimate in \eqref{pwtnew}. Case $2<s\le 4-a$. Estimating $I_1$ by \eqref{ab24} applied to $s:=s+a$, and estimating $I_2$ by \eqref{ab24}, we have \begin{align*} J&\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{2(3+a)}(M_*+1)^4(1+T_0^{-1})\\ &\quad \cdot\Big\{ (1+T_0^{-1})^{s+a-2}\chi_*^{(4+a)(s+a)}M_*^{s+a-2}(M_*+1)^{3(s+a)-2}{\mathcal N}_* \\ &\quad + \chi_*^{2a}(M_*+1)^{2a} \cdot (1+T_0^{-1})^{s-2}\chi_*^{(4+a)s}M_*^{s-2}(M_*+1)^{3s-2}{\mathcal N}_*\Big\}\\ &\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{(4+a)(s+a+2)}(1+T_0^{-1})^{s+a-1} M_*^{s-2} (M_*+1)^{3s+4a+2}{\mathcal N}_*. \end{align*} We obtain the second estimate in \eqref{pwtnew}. Case $4-a<s\le 4$. 
Estimating $I_1$ by \eqref{kug4} applied to $s:=s+a$, and estimating $I_2$ by \eqref{ab24} yield \begin{align*} J&\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{2(3+a)}(M_*+1)^4(1+T_0^{-1})\\ &\quad \cdot\Big\{ (1+T_0^{-1})^{s+a} \chi_*^{(4+a)(s+a+2)}M_*^2 (M_*+1)^{4(s+a)+2}{\mathcal N}_* \\ &\quad + \chi_*^{2a}(M_*+1)^{2a} \cdot (1+T_0^{-1})^{s-2}\chi_*^{(4+a)s}M_*^{s-2}(M_*+1)^{3s-2}{\mathcal N}_*\Big\}\\ &\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{(4+a)(s+a+4)}(1+T_0^{-1})^{s+a+1}M_*^{s-2} (M_*+1)^{3s+4a+10} {\mathcal N}_*. \end{align*} We obtain the third estimate in \eqref{pwtnew}. Case $s>4$. Estimating $I_1$ by \eqref{kug4} for $s:=s+a$, and estimating $I_2$ by \eqref{kug4} result in \begin{align*} J&\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{2(3+a)}(M_*+1)^4(1+T_0^{-1})\\ &\quad \cdot\Big\{ (1+T_0^{-1})^{s+a} \chi_*^{(4+a)(s+a+2)}M_*^2 (M_*+1)^{4(s+a)+2}{\mathcal N}_* \\ &\quad + \chi_*^{2a}(M_*+1)^{2a} \cdot (1+T_0^{-1})^s \chi_*^{(4+a)(s+2)}M_*^2 (M_*+1)^{4s+2}{\mathcal N}_*\Big\}\\ &\le C\chi_*^{2(3+a)}M_*^4 T + C \chi_*^{(4+a)(s+a+4)}(1+T_0^{-1})^{s+a+1}M_*^2 (M_*+1)^{4s+4a+6} {\mathcal N}_*. \end{align*} We obtain the fourth estimate in \eqref{pwtnew}. Finally, one can easily unify the estimates in \eqref{pwtnew} for all $s>2$ with \eqref{pwt6}. This can also be done for the case $s=2$ by comparing ${\mathcal N}_0$ with ${\mathcal N}_*$ using the last relation in \eqref{NNN0}. \end{proof} \begin{remark}\label{smlrmk2} Similar to Remark \ref{smlrmk1}, when $u$, $u_0$, $\bar u_0$ are small in the necessary norms and ${\mathcal E}_*$ is small, then $M_*$ and ${\mathcal N}_0$ are small, which makes the right-hand sides of \eqref{pwtall} and \eqref{pwtnew} small. \end{remark} \bigskip \noindent\textbf{\large Acknowledgments.} The authors would like to thank Dat Cao, Akif Ibragimov and Tuoc Phan for very helpful discussions.
\subsection{\kern-0.6em}} \newcommand{\SideTitle}[1]{\medbreak{}\pdfbookmark[2]{\textbullet\quad#1}{\thesubsection{}}\noindent{\textbullet}\:\:\textsc{#1}\par\smallskip} \setlist[enumerate]{leftmargin=20pt, topsep=0pt} \setenumerate[1]{label=\upshape{(\arabic*)} \usepackage{needspace} \newcommand{\keep}[1]{\Needspace*{#1\baselineskip}} \newcommand{\nobreakdash}{\nobreakdash} \AtBeginDocument{% \setlength{\abovedisplayskip}{4pt plus 2pt minus 2pt} \setlength{\belowdisplayskip}{4pt plus 2pt minus 2pt} \setlength{\abovedisplayshortskip}{0pt plus 2pt} \belowdisplayshortskip=\belowdisplayskip } \newcommand{\textemdash\xspace}{\textemdash\xspace} \renewcommand{\qed}{~\hfill\qedsymbol \newcommand{\mathord{\sim}}{\mathord{\sim}} \newcommand{\mathord{\upharpoonright}}{\mathord{\upharpoonright}} \newcommand{\raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}}{\raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}} \newcommand{\raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cap}}}{\raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cap}}} \newcommand{\Medcap}[2]{\ensuremath{\raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cap}}_{#1}{#2}_{#1}}} \newcommand{\SEquiv}{\Leftrightarrow \newcommand{\Equiv}{\:\Leftrightarrow\: \newcommand{\LEquiv}{\:\:\mathrel{\Leftarrow\mkern-14mu\Rightarrow}\:\: \newcommand{\Implies}{\:\Rightarrow\: \newcommand{\subseteq}{\subseteq} \newcommand{\supseteq}{\supseteq} \newcommand{\times}{\times} \newcommand{\mathrel{{|}\mkern-3mu{\relbar}}}{\mathrel{{|}\mkern-3mu{\relbar}}} \let\@O\O \renewcommand{\O}{\ifmmode\varnothing\else\@O\fi} \newcommand{\twoheadrightarrow}{\twoheadrightarrow} \newcommand{\scalebox{1.3}{\ensuremath\sim}}{\scalebox{1.3}{\ensuremath\sim}} \newcommand{\stackrel{\mkern-5mu\longsim}{\smash\rightarrow}}{\stackrel{\mkern-5mu\scalebox{1.3}{\ensuremath\sim}}{\smash\rightarrow}} \newcommand{\,{\in}\,}{\,{\in}\,} 
\newcommand{\xbar}[1]{\def0.05em}{\fontdimen8\textfont3=.5pt\kern\klen\overline{\kern-\klen{#1}\kern-\klen}\kern\klen}{\vphantom{#1}}{0.05em}{\fontdimen8\textfont3=.5pt\kern0.05em}{\fontdimen8\textfont3=.5pt\kern\klen\overline{\kern-\klen{#1}\kern-\klen}\kern\klen}{\vphantom{#1}}\overline{\kern-0.05em}{\fontdimen8\textfont3=.5pt\kern\klen\overline{\kern-\klen{#1}\kern-\klen}\kern\klen}{\vphantom{#1}}{#1}\kern-0.05em}{\fontdimen8\textfont3=.5pt\kern\klen\overline{\kern-\klen{#1}\kern-\klen}\kern\klen}{\vphantom{#1}}}\kern0.05em}{\fontdimen8\textfont3=.5pt\kern\klen\overline{\kern-\klen{#1}\kern-\klen}\kern\klen}{\vphantom{#1}}}{\vphantom{#1}}} \newcommand{\ensuremath{\mathcal{C}}\xspace}{\ensuremath{\mathcal{C}}\xspace} \newcommand{\ensuremath{\mathcal{D}}\xspace}{\ensuremath{\mathcal{D}}\xspace} \newcommand{\ensuremath{\mathcal{F}}\xspace}{\ensuremath{\mathcal{F}}\xspace} \newcommand{\ensuremath{\mathcal{I}}\xspace}{\ensuremath{\mathcal{I}}\xspace} \newcommand{\ensuremath{\mathcal{N}}\xspace}{\ensuremath{\mathcal{N}}\xspace} \newcommand{\ensuremath{\mathcal{M}}\xspace}{\ensuremath{\mathcal{M}}\xspace} \newcommand{\ensuremath{\mathcal{O}}\xspace}{\ensuremath{\mathcal{O}}\xspace} \newcommand{\ensuremath{\mathcal{P}}\xspace}{\ensuremath{\mathcal{P}}\xspace} \newcommand{\ensuremath{\mathbb{L}}\xspace}{\ensuremath{\mathbb{L}}\xspace} \newcommand{\ensuremath{\mathbb{N}}\xspace}{\ensuremath{\mathbb{N}}\xspace} \newcommand{\ensuremath{\mathbb{V}}\xspace}{\ensuremath{\mathbb{V}}\xspace} \newcommand{\Ax}[1]{\textsc{#1}} \newcommand{\mathbb{O}\mathrm{n}}{\mathbb{O}\mathrm{n}} \newcommand{\ensuremath{\mathsf{Z}}\xspace}{\ensuremath{\mathsf{Z}}\xspace} \newcommand{\ensuremath{\mathsf{ZF}}\xspace}{\ensuremath{\mathsf{ZF}}\xspace} \newcommand{\ensuremath{\mathsf{ZFC}}\xspace}{\ensuremath{\mathsf{ZFC}}\xspace} \newcommand{\ensuremath{\mathsf{ZF}^-}\xspace}{\ensuremath{\mathsf{ZF}^-}\xspace} \newcommand{\ensuremath{\mathsf{KP}}\xspace}{\ensuremath{\mathsf{KP}}\xspace} 
\newcommand{\ensuremath{\mathsf{KP}_{\infty}}\xspace}{\ensuremath{\mathsf{KP}_{\infty}}\xspace} \newcommand{\ensuremath{\mathsf{KP}_{\infty}~+~(\mathbb{V}=\mathbb{L})}\xspace}{\ensuremath{\mathsf{KP}_{\infty}~+~(\mathbb{V}=\mathbb{L})}\xspace} \newcommand{\omega_{1}^{\scriptscriptstyle\mathsf{CK}}}{\omega_{1}^{\scriptscriptstyle\mathsf{CK}}} \DeclareMathOperator{\Def}{\operatorname{Def}} \DeclareMathOperator{\Cof}{\operatorname{Cof}} \DeclareMathOperator{\Dom}{\operatorname{Dom}} \DeclareMathOperator{\Img}{\operatorname{Im}} \DeclareMathOperator{\rg}{\operatorname{rg}} \DeclareMathOperator{\Card}{\operatorname{Card}} \DeclareMathOperator{\Th}{\operatorname{Th}} \DeclareMathOperator{\Cone}{\operatorname{Cone}} \newcommand{\hull}[1]{\ensuremath{\mathsf{H}^{\mathbb{L}_{#1}\mkern-1mu}}} \newcommand{\thull}[1]{\ensuremath{\xbar{\mathsf{H}}^{\mathbb{L}_{#1}}\mkern-1mu}} \newcommand{\seq}[1]{\bgroup{#1}^{< \omega}\egroup} \newcommand{\mathrel{\leqslant_{T}}}{\mathrel{\leqslant_{T}}} \newcommand{\mathrel{\leqslant_{h}}}{\mathrel{\leqslant_{h}}} \newcommand{\All}[1]{\forall{#1}\mkern.5mu} \newcommand{\Exists}[1]{\exists{#1}\mkern.5mu} \newcommand{\Set}[2]{\ensuremath{\{\mkern2mu {#1} \mid {#2} \mkern2mu\}}} \newcommand{\Det}[1]{\ensuremath{\operatorname{Det}(#1)}\xspace} \newcommand{\TDet}[1]{\ensuremath{\operatorname{Turing-Det}(#1)}\xspace} \newcommand{\HDet}[1]{\ensuremath{\operatorname{Hyp-Det}(#1)}\xspace} \newcommand{\HTDet}[1]{\ensuremath{\operatorname{Hyp-Turing-Det}(#1)}\xspace} \newcommand{\WTDet}[2]{\ensuremath{\operatorname{Weak-Turing-Det}_{#1}(#2)}\xspace} \newcommand{{^\ast\mkern-2mu}}{{^\ast\mkern-2mu}} \renewcommand{\*}[1]{\bgroup\boldsymbol{#1}\egroup} \newcommand{\ZFmAleph}[1]{\ensuremath{\ensuremath{\mathsf{ZF}^-}\xspace\! 
+ \textup{"}\aleph_{#1}\textup{ exists"}}\xspace} \newcommand{\ModelZFmAleph}[2]{\ensuremath{#1 \models \ZFmAleph{#2}}\xspace} \newcommand{$\omega$-model\xspace}{$\omega$-model\xspace} \newcommand{\surj}[1]{\gg^{\mkern-3mu\smash{#1}}} \newcommand{\pI}{\textup{I}\xspace \newcommand{\pII}{\textup{II}\xspace \newcommand{\SSigma}[2]{\ensuremath{\Sigma^{#1}_{\smash[b]{#2}}}} \newcommand{\bSigma}[2]{\ensuremath{\pmb{\Sigma}^{#1}_{\smash[b]{#2}}}} \newcommand{\PPi}[2]{\ensuremath{\Pi^{#1}_{\smash[b]{#2}}}} \newcommand{\bPi}[2]{\ensuremath{\pmb{\Pi}^{#1}_{\smash[b]{#2}}}} \newcommand{\DDelta}[2]{\ensuremath{\Delta^{#1}_{\smash[b]{#2}}}} \newcommand{\bDelta}[2]{\ensuremath{\pmb{\Delta}^{#1}_{\smash[b]{#2}}}} \renewcommand{\L}[1]{\bgroup\ensuremath{\mathbb{L}_{#1}}\egroup \newcommand{\LL}[2]{\bgroup\ensuremath{\mathbb{L}_{\smash[t]{#1}}^{\!{#2}}}\egroup} \newcommand{\Lcard}[1]{\L{#1}\nobreakdash-cardinal} \newcommand{\Aleph}[1]{\ensuremath{\aleph_{#1}}} \newcommand{\AlephL}[2]{\ensuremath{\aleph_{#1}^{\mathbb{L}_{#2}}}} \newcommand{\M}[1]{\bgroup\ensuremath{\mathcal{M}_{#1}}\egroup} \newcommand{\T}[1]{\ensuremath{\mathsf{T}_{#1}}} \renewcommand{\S}[1]{\ensuremath{\mathcal{S}_{#1}} \newcommand\@low[2]{_{#1{#2}} \newcommand\low[1]{\bgroup\mathpalette{\@low}{#1}\egroup} \let\@quote" \catcode`\"=\active \newcommand{"}{\ifmmode\textup{\char34}\else\@quote\fi} \renewcommand{\phi}{\varphi} \renewcommand{\theta}{\vartheta} \renewcommand{\le}{\leqslant} \renewcommand{\leq}{\leqslant} \renewcommand{\ge}{\geqslant} \renewcommand{\geq}{\geqslant} \makeatother \title {Variations on $\Delta^1_1$ Determinacy and $\aleph_{\omega_1}$} \thanks {Presented at the\,12th Panhellenic Logic Symposium -- Crete, June 2019.} \author {Ramez L. Sami} \subjclass[2010] {Primary: 03E60;\enskip{}Secondary: 03E15, 03E10.} \address {Department of Mathematics. 
Université Paris-Diderot, 75205 Paris, Cedex 13, France.} \email {sami@univ-paris-diderot.fr} \begin{document} \newgeometry{left=26mm, right=31mm \begin{abstract} \setlength\parindent{0pt} We consider a weaker form of $\Delta^1_1$ Turing determinacy. Let $2 \leqslant \rho < \omega_{1}^{\scriptscriptstyle{\mathsf{CK}}}$, $\textrm{Weak-Turing-Det}_{\rho}(\Delta^1_1)$ is the statement: \hspace*{1.2em}\emph{Every $\Delta^1_1$ set of reals cofinal in the Turing degrees contains two Turing distinct $\Delta^0_\rho$-equivalent reals.} We show in $\mathsf{ZF}^-$: \hspace*{1.2em}$\textrm{Weak-Turing-Det}_{\rho}(\Delta^1_1)$ implies for every $\nu < \omega_{1}^{\scriptscriptstyle{\mathsf{CK}}}$ there is a transitive model: $M \models \mathsf{ZF}^- + \textup{"}\aleph_\nu \textup{ exists"}$. As a corollary: \begin{itemize}[leftmargin=1.2em] \item[]If every cofinal $\Delta^1_1$ set of Turing degrees contains both a degree and its jump, then for every $\nu < \omega_{1}^{\scriptscriptstyle{\mathsf{CK}}}$, there is a transitive model: $M \models \mathsf{ZF}^- + \textup{"}\aleph_\nu \textup{ exists"}$. \end{itemize} \begin{itemize}[leftmargin=1em, topsep=2pt, label={$\scriptscriptstyle\bullet$}] \item{}With a simple proof, this improves upon a well-known result of Harvey Friedman on the strength of Borel determinacy (though not assessed level-by-level). \item{}Invoking Tony Martin's proof of Borel determinacy, $\textrm{Weak-Turing-Det}_{\rho}(\Delta^1_1)$ implies $\Delta^1_1$ determinacy. \item{}We show further that, assuming $\Delta^1_1$ Turing determinacy, or Borel Turing determinacy, as needed:\\ --\enskip{}Every cofinal $\Sigma^1_1$ set of Turing degrees contains a ``hyp-Turing cone'': $\{x \in \mathcal{D} \mid d_0 \leqslant_T x \leqslant_h d_0 \}$.\\ --\enskip{}For a sequence $(A_{k})_{k < \omega}$ of analytic sets of Turing degrees, cofinal in $\mathcal{D}$, $\bigcap_{k} A_{k}$ is cofinal in $\mathcal{D}$. 
\end{itemize} \end{abstract} \restoregeometry \vspace*{-10mm} \iffalse \begin{flushright} \vspace*{-2.5\baselineskip} [Presented at the \textbf{12th Panhellenic Logic Symposium} -- June 2019] \bigskip{} \end{flushright} \fi \pdfbookmark[1]{Variations on ∆₁¹ Determinacy and ℵ𝜔₁}{} \maketitle \section*{Introduction} A most important result in the study of infinite games is Harvey Friedman's~\cite{Friedman:Higher_ST}, where it is shown that a proof of determinacy, for Borel games, would require \Aleph1 iterations of the power set operation \textemdash\xspace and this is precisely what Tony Martin used in his landmark proof~\cite{Martin:Borel_Det}. Our focus here is on the Turing determinacy results of \cite{Friedman:Higher_ST}, concentrating on the theory \ensuremath{\mathsf{ZF}^-}\xspace rather than Zermelo's \ensuremath{\mathsf{Z}}\xspace. In the \DDelta11 realm, Friedman essentially shows that the determinacy of Turing closed \DDelta11 games [henceforth \TDet{\DDelta11}] implies the consistency of the theories \ZFmAleph{\nu}, for all recursive ordinals $\nu$. Friedman does produce a level-by-level analysis entailing, e.g., that the determinacy of Turing closed \SSigma{0}{n+\smash6} games implies the consistency of \ZFmAleph{n}.% \footnote{\,Improved by Martin to \SSigma{0}{n+\smash5}.}% \textsuperscript{,}% \footnote{\,In \cite{Montalban-Shore} Montalbán and Shore considerably refine the analysis of the proof theoretic strength of \Det{\Gamma}, for~classes $\Gamma$, where $\PPi03 \subseteq \Gamma \subseteq \DDelta04$.} Importantly, it was further observed by Friedman (unpublished) that these results extend to produce transitive models, rather than just consistency results. See Martin's forthcoming \cite{Martin:Det_Book} for details, see also Van~Wesep's~\cite{Van_Wesep}. 
\smallskip{} We forego in this paper the level-by-level analysis to provide, in §\ref{sec:Transitive-models}, a simple proof of the existence of transitive models of \ensuremath{\mathsf{ZF}^-}\xspace with uncountable cardinals, from \TDet{\DDelta11}. In so doing, we show that the full force of Turing determinacy isn't needed. The main result is Theorem\:\ref{thm:Models-with-Alephs}, with a simply stated corollary. \smallskip{} For context, by \textbf{Martin's Lemma} (see \ref{subsec:Turing-determinacy}), \TDet{\DDelta11} is equivalent to:\\ $\scriptstyle\bullet$\:\:\emph{Every cofinal \DDelta11 set of Turing degrees contains a cone of degrees} -- i.e., a set \Set{x \in \ensuremath{\mathcal{D}}\xspace}{d_0 \mathrel{\leqslant_{T}} x}. \begin{theorem*}[\ref{thm:Models-with-Alephs}] Let $2 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, and assume every \DDelta11 set of reals, cofinal in the Turing degrees, contains two Turing distinct, \DDelta{0}{\rho}-equivalent reals. For every $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, there is a transitive model: \ModelZFmAleph{M}{\nu}. \end{theorem*} \begin{corollary*}[\ref{cor:Models-with-Alephs}] If every cofinal \DDelta11 set of Turing degrees contains both a degree and its jump, then for every $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, there is a transitive model: \ModelZFmAleph{M}{\nu}. 
\end{corollary*} In §\ref{sec:Delta-Det-Properties-Sigma} several results are derived, showing that \TDet{\DDelta11} imparts weak determinacy properties to the class \SSigma11, such as [\ref{thm:Hyp-Turing-Det-Sigma}]\,:\\ $\scriptscriptstyle\bullet$\;\:\emph{Every cofinal \SSigma11 set of degrees includes a set \Set{x \in \ensuremath{\mathcal{D}}\xspace}{d_0 \mathrel{\leqslant_{T}} x \And x \mathrel{\leqslant_{h}} d_0}, for some~$d_0 \in \ensuremath{\mathcal{D}}\xspace$}.\\ Or, from Borel Turing determinacy, [\ref{thm:Intersection of Sigma cofinal}]\,:\\ $\scriptscriptstyle\bullet$\;\emph{If $(A_{k})_{k < \omega}$ is a sequence of analytic sets of degrees each cofinal in \ensuremath{\mathcal{D}}\xspace, then \Medcap{k}{A} is cofinal in \ensuremath{\mathcal{D}}\xspace}.\medskip{} I wish to thank Tony Martin for inspiring conversations on the present results. He provided the argument for Remark\:\ref{rem:Too-Weak}, below, and observed that my first proof of Theorem\:\ref{thm:Hyp-Det-Sigma} (used for an early version of the main result) was needlessly complex. Parts of §\ref{sec:Delta-Det-Properties-Sigma} go back to the author's dissertation \cite{Sami:Dissertation}; it is a pleasure to acknowledge Robert Solovay's direction. \section{Preliminaries and Notation} The effective descriptive set theory we shall need, as well as basic hyperarithmetic theory, is from Moschovakis' \cite{Moschovakis:DST}, whose terminology and notation we follow. For the theory of admissible sets, we refer to Barwise's~\cite{Barwise:Admissible_Sets}. Standard facts about the \ensuremath{\mathbb{L}}\xspace\nobreakdash-hierarchy are used without explicit mention: see Devlin's \cite{Devlin:Constructibility}, or Van~Wesep's~\cite{Van_Wesep}. \smallskip{} $\ensuremath{\mathcal{N}}\xspace = \omega^\omega = \ensuremath{\mathbb{N}}\xspace^\ensuremath{\mathbb{N}}\xspace$ denotes Baire's space (the set of \emph{reals}), and \ensuremath{\mathcal{D}}\xspace the set of Turing degrees.
Subsets of \ensuremath{\mathcal{D}}\xspace shall be identified with the corresponding (Turing closed) sets of reals. $\mathrel{\leqslant_{T}}$, $\mathrel{\leqslant_{h}}$, and $\equiv_T$, $\equiv_h$ denote, respectively, Turing and hyperarithmetic (or \DDelta11) reducibility, and equivalence. \subsection{Turing determinacy.} \label{subsec:Turing-determinacy} A set of reals $A \subseteq \ensuremath{\mathcal{N}}\xspace$ is said to be \emph{cofinal in the \textup[Turing\textup] degrees} if for~all $x \in \ensuremath{\mathcal{N}}\xspace$ there is $y \in A$, such that $x \mathrel{\leqslant_{T}} y$. For $c \in \ensuremath{\mathcal{N}}\xspace$, the \emph{Turing cone} with vertex $c$ is the set $\Cone(c) = \Set{x \in \ensuremath{\mathcal{N}}\xspace}{c \mathrel{\leqslant_{T}} x}$. For a class of sets of reals $\Gamma$, \Det{\Gamma} is the statement that infinite games $G_\omega(A)$ where $A \in \Gamma$ are determined, whereas \TDet{\Gamma} stands for the determinacy of games $G_\omega(A)$ restricted to Turing closed sets $A \in \Gamma$. Recall the following easy yet central: \begin{namedthm*}{Martin's Lemma \cite{Martin:Reduction_Princ}} For a Turing closed set $A \subseteq \ensuremath{\mathcal{N}}\xspace$, the infinite game $G_\omega(A)$ is determined if, and only if, $A$ or its complement contains a cone of Turing degrees. \end{namedthm*} \subsection{The ambient theories.} Our base theory is \ensuremath{\mathsf{ZF}^-}\xspace, \Ax{Zermelo-Fraenkel} set theory stripped of the Power Set axiom.% \footnote{\,All implicit instances of Choice, here, are \ensuremath{\mathsf{ZF}^-}\xspace-provable.} \ensuremath{\mathcal{N}}\xspace or \ensuremath{\mathcal{D}}\xspace may be proper classes in this context, yet speaking of their ``subsets'' (\DDelta11, \SSigma11, Borel or analytic) can be handled as usual, as these sets are codable by integers, or reals. 
Amenities such as \Aleph1 or \L{\omega_1} aren't available but, since our results here are global (i.e.,~\DDelta11) rather than local, the reader may use instead the more comfortable $\ensuremath{\mathsf{ZF}^-}\xspace + "\ensuremath{\mathcal{P}}\xspace^2(\omega) \textup{ exists}"$. \ensuremath{\mathsf{KP}_{\infty}}\xspace denotes the theory \Ax{Kripke-Platek + Infinity}. Much of the argumentation below takes place inside $\omega$-model\xspace{s} of \ensuremath{\mathsf{KP}_{\infty}}\xspace \textemdash\xspace familiarity with their properties is assumed. \subsection{Constructibility and condensation.} \label{subsec:Constructibility} For an ordinal $\lambda > 0$, and $X \subseteq \L{\lambda}$, $\hull{\lambda}(X)$ denotes the set of elements of \L{\lambda} definable from parameters in $X$, and $\thull{\lambda}(X)$ its transitive collapse. For $X = \O$, one simply writes \hull{\lambda} and \thull{\lambda}. Gödel's Condensation Lemma is the relevant tool here. Note that, since $\L{\lambda} = \thull{\lambda}(\lambda) = \hull{\lambda}(\lambda)$, all elements of \L{\lambda} are definable in \L{\lambda} from ordinal parameters. \subsection{Reflection.} The following reflection principle will be used a few times, to make for shorter proofs.% \footnote{\,Longer ones can always be produced using \DDelta11 selection + \SSigma11 separation.} A property $\Phi(X)$ of subsets $X \subseteq \ensuremath{\mathcal{N}}\xspace$ is said to be ``\PPi11 \emph{on} \SSigma11'' if, for any \SSigma11 relation $U \subseteq \ensuremath{\mathcal{N}}\xspace \times \ensuremath{\mathcal{N}}\xspace$, the set \Set{x \in \ensuremath{\mathcal{N}}\xspace}{\Phi(U_x)} is \PPi11. \begin{theorem*} Let $\Phi(X)$ be a \PPi11 on \SSigma11 property. For any \SSigma11 set $S \subseteq \ensuremath{\mathcal{N}}\xspace$ such that $\Phi(S)$ there is a \DDelta11 set $D \supseteq S$ such that $\Phi(D)$. 
\end{theorem*} \begin{proof} See Kechris'~\cite[§35.10]{Kechris:Classical_DST} for a boldface version, easily transcribed to lightface. \end{proof} \section{Weak Turing Determinacy} \label{sec:Weak-Turing-determinacy} Examining what's needed to derive the existence of transitive models from Turing determinacy hypotheses, it is possible to isolate a seemingly weaker statement. For $1 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, let $x \equiv_\rho y$ denote \DDelta{0}{\rho}-equivalence on \ensuremath{\mathcal{N}}\xspace, that is: $x \in \DDelta{0}{\rho}(y) \;\&\; y \in \DDelta{0}{\rho}(x)$. \,$\equiv_1$ is just Turing equivalence. \begin{definition} For a class $\Gamma$, and $2 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, \WTDet{\rho}{\Gamma} is the statement: \begin{itemize}[leftmargin=\parindent, topsep=0pt] \item[]\emph{For every set of reals $A \in \Gamma$ cofinal in the degrees, there are two Turing distinct $x, y \in A$ such that $x \equiv_\rho y$.} \end{itemize} \end{definition} For any recursive $\rho \ge 2$, \WTDet{\rho}{\DDelta11} will suffice to derive the existence of transitive models. The property lifts from \DDelta11 to \SSigma11 \textemdash\xspace note that it is, \emph{a priori}, asymmetric. \begin{theorem} \label{thm:WTuringDelta->WTuringSigma} Let $2 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$. \WTDet{\rho}{\DDelta11} implies \WTDet{\rho}{\SSigma11}. \end{theorem} \begin{proof} Assume \WTDet{\rho}{\DDelta11}. Let $S \in \SSigma11$ and suppose there are no Turing distinct $x, y \in S$ such that $x \equiv_\rho y$, that is \[ \All{x,y} (x, y \in S \And x \equiv_\rho y \Implies x \equiv_T y). \] This is a statement $\Phi(S)$, where $\Phi(X)$ is a \PPi11 on \SSigma11 property. Reflection yields a \DDelta11 set $D \supseteq S$ such that $\Phi(D)$. 
By \WTDet{\rho}{\DDelta11}, $D$ is not cofinal in the degrees; \emph{a fortiori}, $S$ isn't either.\qedhere \end{proof} \begin{remark} \label{rem:Too-Weak} One may be tempted to substitute for \WTDet{\rho}{\DDelta11} a simpler hypothesis: \\ \emph{Every \DDelta11 set of reals cofinal in the degrees contains two Turing distinct reals $x, y$ such that $x \equiv_h y$}. \\ It turns out to be too weak and, indeed, provable in Analysis. (Tony Martin, private communication: Building on his \cite{Martin:Friedman_Conj}, he shows that every uncountable \DDelta11 set of reals contains two Turing distinct reals, in every hyperdegree $\ge$ Kleene's \ensuremath{\mathcal{O}}\xspace.) The weaker statement does suffice however, when asserted about the class \SSigma11, see Theorem \ref{thm:Models-with-Alephs-from-Sigma}, below. \end{remark} \section{Transitive Models from Weak Turing Determinacy} \label{sec:Transitive-models} We now state the main result, and a simple special case. The proof is postponed towards the end of the present section. \begin{theorem} \label{thm:Models-with-Alephs} Let $2 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, and assume \WTDet{\rho}{\DDelta11}. For every $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, there is a transitive model: \ModelZFmAleph{M}{\nu}. \end{theorem} \begin{corollary} \label{cor:Models-with-Alephs} If every cofinal \DDelta11 set of Turing degrees contains both a degree and its jump, then for every $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, there is a transitive model: \ModelZFmAleph{M}{\nu}.\qed \end{corollary} \SideTitle{Term models.} Given a complete% \footnote{\,Complete extensions are always meant to be consistent.} theory $U \supseteq \ensuremath{\mathsf{KP}_{\infty}~+~(\mathbb{V}=\mathbb{L})}\xspace$, one constructs its term model. 
To be specific, owing to the presence of the axiom $\ensuremath{\mathbb{V}}\xspace = \ensuremath{\mathbb{L}}\xspace$, to every formula $\psi(v)$ is associated $\xbar\psi(v)$ such that ${U \mathrel{{|}\mkern-3mu{\relbar}} \Exists{v}\psi(v) \SEquiv \Exists{!v}\xbar\psi(v)}$. Just take for $\xbar\psi(v)$ the formula $\psi(v) \land (\All{w <_\ensuremath{\mathbb{L}}\xspace v}) \neg \psi(w)$. Let $(\phi_n(v))_{n < \omega}$ be a recursive in $U$ enumeration of the formulas $\phi(v)$, in the single free variable~$v$, having $U \mathrel{{|}\mkern-3mu{\relbar}} \Exists{!v} \phi(v)$. Using, as metalinguistic device, $(\iota v) \phi(v)$ for ``\emph{the unique $v$ such that $\phi(v)$}'', set: \[ M_U = \Set{n \in \omega}{\All{i < n,}~U \mathrel{{|}\mkern-3mu{\relbar}} (\iota v)\phi_n \neq (\iota v)\phi_i}, \] and define on $M_U$ the relation $\in_U$\,: \[ {\hspace{-7.7eM}} m \in_U n \Equiv U \mathrel{{|}\mkern-3mu{\relbar}} (\iota v)\phi_m \in (\iota v)\phi_n. \] $(M_U, \in_U)$ is a prime model of $U$ and, $U$ being complete, $(M_U, \in_U)$ is recursive in $U$. Using the canonical enumeration $\omega \to M_U$, substitute $\omega$ for $M_U$ and remap $\in_U$ accordingly. The resulting model $\M{U} = (\omega, \in^\M{U})$ shall be called the \emph{term model of} $U$. The function $U \mapsto \M{U}$ is recursive. Whenever \M{U} is an $\omega$-model\xspace, we say that $a \subseteq \omega$ is realized in \M{U} if there is $\mathring{a} \in \omega$ such that $a = \Set{k \in \omega}{k^\M{U} \in^\M{U} \mathring{a}}$. We state, for later reference, a couple of standard facts. \begin{proposition} \label{prop:Realized-reals} Let $U$ be as above. If \M{U} is an $\omega$-model\xspace, and $a \subseteq \omega$ realized in \M{U}, then: \begin{enumerate} \item{}For all $x \mathrel{\leqslant_{h}} a$\,\emph: $x$ is realized in \M{U}. \item{}$a \mathrel{\leqslant_{T}} U$. 
Hence $U$ is not realized in \M{U}, lest the jump $U'$ be realized in \M{U}, and ${U' \mathrel{\leqslant_{T}} U}$.\qed \end{enumerate} \end{proposition} Note that if $U = \Th(\L{\alpha})$, where $\alpha$ is admissible, then \M{U} is a copy of \hull{\alpha}. Hence $\M{U} \cong \L{\beta}$, for some $\beta \le \alpha$. The following easy proposition is quite familiar. \begin{proposition} \label{prop:Countable-admissible-term-model} Assume $\ensuremath{\mathbb{V}}\xspace = \ensuremath{\mathbb{L}}\xspace$. For cofinally many countable admissible $\alpha$'s, $\L{\alpha} = \hull{\alpha}$, equivalently, $\M{\Th(\L{\alpha})} \cong \L{\alpha}$. \end{proposition} \begin{proof} Suppose not. Let $\lambda$ be the sup of the admissible $\alpha$'s having $\L{\alpha} = \hull{\alpha}$, and let $\kappa > \lambda$ be the first admissible such that $\lambda$ is countable in \L{\kappa}. Since $\lambda$ is definable and countable in \L{\kappa}, $\lambda \cup \{\lambda\} \subseteq \hull{\kappa}$. It follows readily that $\L{\kappa} = \thull{\kappa} = \hull{\kappa}$\,: a contradiction. \end{proof} \SideTitle{Cardinality in the constructible levels.} Set theory within the confines of \L{\lambda}, $\lambda$ an arbitrary limit ordinal, imposes some contortions. For technical convenience, the notion of cardinal needs to be slightly twisted \textemdash\xspace for a time only. \begin{definition}\ \par \begin{enumerate} \item{}For an ordinal $\alpha$, $\Card(\alpha) = \min_{\xi \le \alpha}(\textit{there is a surjection \,} \xi \to \alpha)$. \item{}$\alpha$ is a cardinal if $\alpha = \Card(\alpha)$. \item{}$\Card_\lambda \subseteq \L{\lambda}$ is the class of infinite cardinals as computed in \L{\lambda}. 
\end{enumerate} \end{definition} \subsection{\kern-.6em} \label{subsec:Biject} Note that, for any limit $\lambda$, from a surjection $g \colon \gamma \to \alpha$ in \L{\lambda}, one can extract $a \subseteq \gamma$ and $r \subseteq \gamma \times \gamma$ such that ${g \mathord{\upharpoonright} a} \colon (a,r) \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} (\alpha, \in)$, and both $(a,r)$, $g \mathord{\upharpoonright} a$ are in \L{\lambda}. Further, if $\lambda$ is admissible, in \L{\lambda} the altered notion of cardinality coincides with the standard one. \begin{convention} \label{label:Convention} For simplicity's sake, the assertion ``\emph{\Aleph{\nu} exists in \L{\lambda}}'' should be understood as: \emph{There is an isomorphism $\nu+1 \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} I$, where $I$ is an initial segment of\/ $\Card_\lambda$}. \noindent{}Its negation is equivalent in \ensuremath{\mathsf{KP}_{\infty}}\xspace to: \emph{There is $\kappa \le \nu$ such that $\Card_\lambda \cong \kappa$}. The notation \AlephL{\nu}{\lambda} carries the obvious meaning. \end{convention} We need the following, presumably ``folklore'', result. A proof is provided in the Appendix, for lack of a convenient reference. \begin{proposition} \label{prop:Folklore} For $\lambda$ a limit ordinal, $\L{\lambda} \models "\*\mu > \omega \textup{ is a successor cardinal}"$ implies $\L{\mu} \models \ensuremath{\mathsf{ZF}^-}\xspace$. \end{proposition} \SideTitle{The theories \texorpdfstring{\T{\nu}}{T\_ν}.} For $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, pick $e_\nu \in \omega$ a recursive index for a wellordering $<_{e_\nu}$ of a subset of $\omega$, of length~$\nu$. Using $e_\nu$, statements about $\nu$ can tentatively be expressed in \ensuremath{\mathsf{KP}_{\infty}}\xspace. In an $\omega$-model\xspace \ensuremath{\mathcal{M}}\xspace of \ensuremath{\mathsf{KP}_{\infty}}\xspace, the~truth of such statements is independent of the choice of $e_\nu$. 
Indeed, $<_{e_\nu}$ is realized in \ensuremath{\mathcal{M}}\xspace, and its realization is isomorphic in \ensuremath{\mathcal{M}}\xspace to the \ensuremath{\mathcal{M}}\xspace\nobreakdash-ordinal of order-type $\nu$, to be denoted $\nu^\ensuremath{\mathcal{M}}\xspace$. \begin{definition} For $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, \T{\nu} is the theory \[ \ensuremath{\mathsf{KP}_{\infty}~+~(\mathbb{V}=\mathbb{L})}\xspace + "\text{For all limit $\lambda$,\;\Aleph{\nu+1} doesn't exist in \L{\lambda}}". \] \end{definition} This definition is clearly lacking: a recursive index $e_\nu$ coding the ordinal $\nu$ is not made explicit. This is immaterial, as we shall be interested only in $\omega$-model\xspace{s} of \T{\nu}. They possess the following rigidity property. \keep{4} \begin{lemma} \label{lem:Rigidity-Property} Let $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, and $\M1,\,\M2$ be $\omega$-model\xspace{s} of \T{\nu}. Let $u \in \mathbb{O}\mathrm{n}^\M1$, and $w,\,w^\low* \in \mathbb{O}\mathrm{n}^\M2$, for any two isomorphisms $f \colon \LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{w}{\M2}$ and $f{^\ast\mkern-2mu} \colon \LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{w^\low*}{\M2}$, $f = f{^\ast\mkern-2mu}$. \end{lemma} \begin{proof} By an easy reduction, it suffices to prove this for $u$, a limit \M1-ordinal. Set $\ensuremath{\mathcal{C}}\xspace_u = \Set{c <^\M1 \! u}{\M1 \models \*c \in \Card_\*u}$. The relevant point here is that $\ensuremath{\mathcal{C}}\xspace_u$ is wellordered by $<^\M1$. Indeed, since $\M1 \models "\Aleph{\nu+1} \text{ doesn't exist in } \L{\*u}"$, there is $\kappa \le \nu+1$ such that $\M1 \models {\Card_\*u \cong \kappa^\M1}$ (see~\ref{label:Convention}), and consequently $(\ensuremath{\mathcal{C}}\xspace_u, <^\M1) \cong (\kappa, \in)$. We check first that $f$ and $f{^\ast\mkern-2mu}$ agree on the \M1-ordinals $o <^\M1\! u$. Set $\kappa_u(o) = \Card(o)$, evaluated in \LL{u}{\M1}. 
Evidently $f(o)=f{^\ast\mkern-2mu}(o)$ for $o \le \omega$ in \M1; we next show, by transfinite induction on $c \in \ensuremath{\mathcal{C}}\xspace_u$\,: \[ \text{for all \:} o <^\M1 \! u : \enskip{} \kappa_u(o) \le^\M1 \! c \Implies f(o) = f{^\ast\mkern-2mu}(o). \] The inductive hypothesis yields, for all $o <^\M1 \! c$, $f(o) = f{^\ast\mkern-2mu}(o)$, hence $f(c) = f{^\ast\mkern-2mu}(c)$. Let now $o <^\M1 \! u$ have $\kappa_u(o) = c$. Inside \LL{u}{\M1}, $(o, \in)$ is isomorphic to an ordering $s = (a, r)$, where $a \subseteq c$ and $r \subseteq c \times c$ (see\:\ref{subsec:Biject}). Since $f$ and $f{^\ast\mkern-2mu}$ agree on the \M1-ordinals up to $c$, one readily gets $f(s) = f{^\ast\mkern-2mu}(s)$. In \M2, the ordering $f(s)$ is isomorphic to both the ordinals $f(o)$ and $f{^\ast\mkern-2mu}(o)$, hence $f(o) = f{^\ast\mkern-2mu}(o)$. This entails $w = w^\low*$. Now for $x \in \LL{u}{\M1}$, $x$ is definable in \LL{u}{\M1} from \M1-ordinals (see \ref{subsec:Constructibility}), thus $f(x)$ and $f{^\ast\mkern-2mu}(x)$ satisfy in \LL{w}{\M2} the same definition from equal parameters, hence ${f(x) = f{^\ast\mkern-2mu}(x)}$. \end{proof} \SideTitle{Pseudo-wellfounded models.} A relation $\mathord{\vartriangleleft} \subseteq \omega \times \omega$ is said to be \emph{pseudo-wellfounded} if every nonempty $\DDelta11(\vartriangleleft)$ subset of $\omega$ has a $\vartriangleleft$-minimal element. By the usual computation, this is a \SSigma11 property. \begin{definition} For $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, \S{\nu} is the set of theories: \[ \S{\nu} = \Set{U}{\text{$U$ is a complete extension of \T{\nu}, and \M{U} is pseudo-wellfounded} }. \] \end{definition} The first clause in the definition of \S{\nu} is arithmetical, while the ``pseudo-wellfounded'' clause can be written as: \[ \All{x \mathrel{\leqslant_{h}} \M{U}}(x \neq \O \Implies \Exists{k \,{\in}\, x}\All{m \,{\in}\, x}\neg(m \in^\M{U}\! k)).
\] Since \M{U} is uniformly $\DDelta11(U)$, \S{\nu} is \SSigma11. Further, for $U \in \S{\nu}$, \M{U} is an $\omega$-model\xspace. The sets \S{\nu} play the central role in the proof. They are sparse, in the following sense. \begin{proposition} For $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, no two distinct members of \S{\nu} have the same hyperdegree. \end{proposition} \begin{proof} Let $U_1,\,U_2 \in \S{\nu}$ have $U_1 \equiv_h U_2$, and let $\M1,\,\M2$ stand for $\M{U_1},\,\M{U_2}$. We will obtain $U_1 = U_2$ by showing $\M1 \cong \M2$. Define a relation between `ordinals' $u \in \M1$ and $w \in \M2$.% \begin{align*} u \simeq w & \Equiv \LL{u}{\M1} \cong \LL{w}{\M2}.\\ \intertext{% Set $I_1 = \Dom(\simeq)$ and $I_2 = \Img(\simeq)$. $I_1$ and $I_2$ are initial segments of $\mathbb{O}\mathrm{n}^\M1$ and $\mathbb{O}\mathrm{n}^\M2$, respectively. Using Lemma \ref{lem:Rigidity-Property}, the relation ``$u \simeq w$'' defines a bijection $I_1 \to I_2$ which is, indeed, the restriction of an isomorphism $F \colon \raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}_{u \in I_1}\LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}_{w \in I_2} \LL{w}{\M2}$. By the same lemma, it can be expressed as: }% u \simeq w & \Equiv \Exists{f} ( f \colon \LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{w}{\M2} )\\ & \Equiv \Exists{!f}( f \colon \LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{w}{\M2} ). \intertext{% The expression on the last RHS reads $\Exists{!f} \ensuremath{\mathcal{I}}\xspace(f, U_1, u, U_2, w)$, where \ensuremath{\mathcal{I}}\xspace is a \DDelta11 predicate, thus }% u \simeq w & \Equiv \Exists{f \mathrel{\leqslant_{h}} U_1 \oplus U_2}(f \colon \LL{u}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{w}{\M2}). \end{align*} By the usual computation, the relation ``$u \simeq w$'' is $\DDelta11(U_1 \oplus U_2)$ $[\:= \DDelta11(U_1) = \DDelta11(U_2)]$. 
Consequently, $I_1$ and $I_2$ are also $\DDelta11(U_1)$ $[\:= \DDelta11(U_2)]$. $\M1,\,\M2$ being pseudo-wellfounded, $\mathbb{O}\mathrm{n}^\M1 - I_1$ and $\mathbb{O}\mathrm{n}^\M2 - I_2$ each, if nonempty, has a minimum. Call $m_1,\,m_2$ the respective potential minima, and consider the~cases: \keep{2} \begin{itemize}[leftmargin=16pt, itemsep=3pt,label={--}] \item{}$\mathbb{O}\mathrm{n}^\M1 - I_1$ and $\mathbb{O}\mathrm{n}^\M2 - I_2$ both are nonempty. This isn't possible, as there would be an isomorphism $\LL{m_1}{\M1} \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \LL{m_2}{\M2}$, entailing $m_1 \in I_1$ and $m_2 \in I_2$. \item{}$I_1 = \mathbb{O}\mathrm{n}^\M1$ and $\mathbb{O}\mathrm{n}^\M2 - I_2 \ne \O$. Here $\M1 = \raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}_{u \in I_1} \LL{u}{\M1} \cong \LL{m_2}{\M2}$, and $U_1$ being now the \nohyphens{theory} of \LL{m_2}{\M2}, is realized in \M2. By hypothesis $U_2 \equiv_h U_1$ hence, by Proposition \ref{prop:Realized-reals}(1), $U_2$ is realized in \M2 (that's \M{U_2}). This contradicts (2) of the same proposition. \item{}The third case, symmetric of the last one, is equally impossible. \item{}The remaining case: $I_1 = \mathbb{O}\mathrm{n}^\M1$ and $I_2 = \mathbb{O}\mathrm{n}^\M2$. Here $\M1 = \raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}_{u \in I_1} \LL{u}{\M1}$ and ${\M2 = \raisebox{-.15ex}{\scalebox{1.5}{\ensuremath\cup}}_{w \in I_2} \LL{w}{\M2}}$, thus $F \colon \M1 \stackrel{\mkern-5mu\longsim}{\smash\rightarrow} \M2$ is the desired isomorphism.\qedhere \end{itemize} \end{proof} \begin{proof}[\textbf{Proof of Theorem\:\ref{thm:Models-with-Alephs}}] We may assume $\ensuremath{\mathbb{V}}\xspace = \ensuremath{\mathbb{L}}\xspace$. Fix $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$. \textbf{Claim.} There is a limit ordinal $\lambda$, such that: \Aleph{\nu+1} exists in \L{\lambda}. Suppose no such $\lambda$ exists. It follows that for all admissible $\alpha > \omega$, $\L{\alpha} \models \T{\nu}$. 
This entails that the \SSigma11 set \S{\nu} is cofinal in the degrees: indeed, using Proposition~\ref{prop:Countable-admissible-term-model}, given $x \subseteq \omega$ there is an ${\alpha > \omega}$, admissible, such that ${x \in \L{\alpha}}$ and $\M{\Th(\L{\alpha})} \cong \L{\alpha}$. Thus $x \mathrel{\leqslant_{T}} \Th(\L{\alpha})$ and, \M{\Th(\L{\alpha})} being wellfounded, $\Th(\L{\alpha}) \in \S{\nu}$. Invoking now \WTDet{\rho}{\DDelta11} and Theorem~\ref{thm:WTuringDelta->WTuringSigma}, \WTDet{\rho}{\SSigma11} holds. Hence, there are distinct $U_1,\,U_2 \in \S{\nu}$ such that $U_1 \equiv_\rho U_2$, contradicting the previous proposition.\qed\textsubscript{\textbf{Claim}} \smallskip{} Let $\lambda$ be as claimed, and set $\mu = \AlephL{\nu+1}{\lambda}$. In \L{\lambda}, $\mu$ is a successor cardinal hence, by Proposition~\ref{prop:Folklore}, $\L{\mu} \models \ensuremath{\mathsf{ZF}^-}\xspace$. Further, for all $\xi \le \nu$, $\AlephL{\xi}{\lambda} \in \L{\mu}$, and \AlephL{\xi}{\lambda} is an \Lcard{\mu} (now in the usual sense), hence ${\L{\mu} \models \ZFmAleph{\nu}}$. \end{proof} Note the following byproduct of the previous proposition, and the proof just given (substituting $U_1 \equiv_h U_2$ for $U_1 \equiv_\rho U_2$, in the proof) \textemdash\xspace in contradistinction to Remark\:\ref{rem:Too-Weak}. \begin{theorem} \label{thm:Models-with-Alephs-from-Sigma} Assume every \SSigma11 set of reals, cofinal in the degrees, contains two Turing distinct reals $x, y$, such that $x \equiv_h y$. For every $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, there is a transitive model:\,$M \models \ZFmAleph{\nu}$.\qed \end{theorem} An easy consequence of the main result: \WTDet{\rho}{\DDelta11} implies full \DDelta11 determinacy. The proof goes through Martin's Borel determinacy theorem: no direct argument is known for this sort of implication \textemdash\xspace apparently first observed by Friedman for \TDet{\DDelta11}.
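Schematically \textemdash\xspace as a reading aid only, anticipating the notation of the proof below \textemdash\xspace the argument runs: \WTDet{\rho}{\DDelta11} yields, via Theorem\:\ref{thm:Models-with-Alephs}, a transitive \ModelZFmAleph{M}{\nu}, and then
\[
M \models \text{``}\SSigma{0}{\nu} \text{ games are determined''} \Implies \text{``}G_\omega(A) \text{ is determined'' holds in } \ensuremath{\mathbb{V}}\xspace,
\]
the left-hand side by Martin's Borel determinacy applied inside $M$, the implication by \SSigma12 absoluteness.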
\begin{theorem} \label{thm:Full-Det} For $2 \le \rho < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$, \WTDet{\rho}{\DDelta11} implies \Det{\DDelta11}. \end{theorem} \begin{proof} Assume \WTDet{\rho}{\DDelta11}. Let $A \subseteq \ensuremath{\mathcal{N}}\xspace$ be \DDelta11, say $A \in \SSigma{0}{\nu}$ where $\nu < \omega_{1}^{\scriptscriptstyle\mathsf{CK}}$. Applying Theorem\:\ref{thm:Models-with-Alephs}, there is a transitive \ModelZFmAleph{M}{\nu}. Invoking (non-optimally) Martin's main result from \cite{Martin:Borel_Det} inside $M$, \SSigma{0}{\nu} games are determined. The statement ``\emph{the game $G_\omega(A)$ is determined}\,'' is \SSigma12. By Mostowski's absoluteness theorem, being true in $M$, it holds in the universe: $G_\omega(A)$ is indeed determined. \end{proof} \section{\texorpdfstring{\DDelta11 Determinacy and Properties of \SSigma11 Sets}{∆₁¹ Determinacy and Properties of Σ₁¹ Sets}} \label{sec:Delta-Det-Properties-Sigma} We proceed now to show that \DDelta11 determinacy imparts weak determinacy properties to the class~\SSigma11. In view of Theorem\:\ref{thm:Full-Det}, there is no point, here, in working from weaker hypotheses. \begin{definition} The hyp-Turing cone with vertex $d \in \ensuremath{\mathcal{D}}\xspace$ is the set of degrees \[ \Cone_{h}(d) = \Set{x \in \ensuremath{\mathcal{D}}\xspace}{d \mathrel{\leqslant_{T}} x \And x \mathrel{\leqslant_{h}} d} = \Cone(d) \cap \DDelta11(d). \] \HTDet{\Gamma} is the statement: \emph{Every cofinal set of degrees $A \in \Gamma$ contains a hyp-Turing~cone}. \end{definition} \keep{5} \begin{theorem} \label{thm:Sigma-seq-degrees} Assume \TDet{\DDelta11}. If $(S_{k})_{k < \omega}$ is a \SSigma11 sequence of sets of Turing degrees, each cofinal in \ensuremath{\mathcal{D}}\xspace, then $\Medcap{k}{S} \neq \O$ \textemdash\xspace and, indeed, \Medcap{k}{S} contains a hyp-Turing cone.
\end{theorem} \begin{proof} Let the $S_{k}$'s be given as the sections of a \SSigma11 relation $S \subseteq \omega \times \ensuremath{\mathcal{N}}\xspace$ and assume \Medcap{k}{S} contains no hyp-Turing cone, that is:% \[ \All{x \in \ensuremath{\mathcal{N}}\xspace} \Exists{y \mathrel{\leqslant_{h}} x} (x \mathrel{\leqslant_{T}} y \And y \notin \Medcap{k}{S}). \] This is a statement $\Phi(S)$, where $\Phi(X)$ is a \PPi11 on \SSigma11 property. Reflection yields a \DDelta11 relation $D \supseteq S$ such that $\Phi(D)$. By shrinking $D$, if needed, we may ensure that its sections $D_{k}$ are Turing closed, preserving $\Phi(D)$. Now, $D_{k} \supseteq S_{k}$ and \Medcap{k}{D} contains no hyp-Turing cone. A contradiction ensues using \TDet{\DDelta11} and Martin's Lemma: each $D_{k}$, being cofinal in the degrees, contains a Turing cone, hence so does the intersection \Medcap{k}{D}. \end{proof} The converse is immediate: if \TDet{\DDelta11} fails, using Martin's Lemma there is a \DDelta11 set ${A \subseteq \ensuremath{\mathcal{D}}\xspace}$, such that $A$ and $\mathord{\sim} A$ are both cofinal in \ensuremath{\mathcal{D}}\xspace, and the \DDelta11 ``sequence'' $\langle A, \mathord{\sim} A \rangle$ has empty intersection. Relativizing \ref{thm:Sigma-seq-degrees}, one readily gets: \begin{corollary} \label{thm:Intersection of Sigma cofinal} Assume Borel Turing determinacy. If $(A_{k})_{k < \omega}$ is a sequence of analytic sets of Turing degrees, each cofinal in \ensuremath{\mathcal{D}}\xspace, then \Medcap{k}{A} is cofinal in \ensuremath{\mathcal{D}}\xspace.\qed \end{corollary} An interesting degenerate case of \ref{thm:Sigma-seq-degrees} arises when the sequence $(S_{k})_{k < \omega}$ consists of a single \SSigma11 term. \begin{theorem} \label{thm:Hyp-Turing-Det-Sigma} \TDet{\DDelta11} implies \HTDet{\SSigma11}.\qed \end{theorem} In view of Theorem\:\ref{thm:Full-Det}, the implication is, of course, an equivalence. A similar result holds for full determinacy.
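For the reader's convenience, we recall \textemdash\xspace as a standard fact, stated here without proof \textemdash\xspace the form of Martin's Lemma invoked in the proof of Theorem\:\ref{thm:Sigma-seq-degrees}: under \TDet{\DDelta11}, every Turing closed \DDelta11 set either contains a Turing cone or is disjoint from one; in particular, for Turing closed $A \in \DDelta11$,
\[
A \text{ cofinal in } \ensuremath{\mathcal{D}}\xspace \Implies A \supseteq \Cone(d) = \Set{x \in \ensuremath{\mathcal{D}}\xspace}{d \mathrel{\leqslant_{T}} x}, \text{ for some } d \in \ensuremath{\mathcal{D}}\xspace,
\]
since a set disjoint from the cone above $d$ contains no degree $\ge_T d$, hence is not cofinal.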
\begin{definition} For a game $G_\omega(A)$, a strategy $\sigma$ for Player \pI is called a hyp-winning strategy if $\All{\tau \mathrel{\leqslant_{h}} \sigma}(\sigma{*}\tau \in A)$, i.e., applying $\sigma$, Player \pI wins against any $\DDelta11(\sigma)$ sequence of moves by Player~\pII. \end{definition} \begin{theorem} \label{thm:Hyp-Det-Sigma} Assume \Det{\DDelta11}. For all \SSigma11 sets $S$, one of the following holds for the game $G_\omega(S)$. \begin{enumerate} \item{}Player \pI has a hyp-winning strategy. \item{}Player \pII has a winning strategy. \end{enumerate} \end{theorem} \begin{proof} Let $S$ be \SSigma11, and assume Player \pI has no hyp-winning strategy for $G_\omega(S)$\,: $\All{\sigma} \Exists{\tau \mathrel{\leqslant_{h}} \sigma}$ $(\sigma{*}\tau \notin S)$. As above, Reflection yields a \DDelta11 set $D \supseteq S$ such that Player \pI has no hyp-winning strategy for $G_\omega(D)$, hence no winning strategy. Invoking \Det{\DDelta11}, Player \pII has a winning strategy for $G_\omega(D)$ which is, \emph{a fortiori}, winning for $G_\omega(S)$. \end{proof} \section*{Appendix} \stepcounter{section} The point of the present section is to sketch a proof of Proposition~\ref{prop:Folklore} (\ref{prop:Folklore2}, below), without dissecting the \ensuremath{\mathbb{L}}\xspace construction \textemdash\xspace albeit with a recourse to admissible sets. Finer results are certainly known. \ensuremath{\mathcal{F}}\xspace is the set of formulas, $\ensuremath{\mathcal{F}}\xspace \in \L{\omega+1}$. $\models_\L{\alpha}$ is the satisfaction relation for \L{\alpha}, \[ {\models_\L{\alpha}} (\phi,\vec{s}) \LEquiv \phi \in \ensuremath{\mathcal{F}}\xspace \And \vec{s} \in \LL{\alpha}{<\omega} \And \L{\alpha} \models \phi[\vec{s}]. \] Apart from the classic Condensation Lemma (see \ref{subsec:Constructibility}), we shall need the following familiar result. \emph{For a limit ordinal $\lambda > \omega$, and $\beta < \lambda$, $\models_\L{\beta} \in \L{\lambda}$}. 
See \cite[§7.1]{Van_Wesep}. \begin{notation*} $X \surj{\lambda} Y$ abbreviates $\Exists{f \,{\in}\, \L{\lambda}}(f \colon X \twoheadrightarrow Y)$, where `$\twoheadrightarrow$' stands for surjective map. \end{notation*} Recall: in the present context, ``$\mu$ is an \Lcard{\lambda}'' means ``there is no $\xi < \mu$ such that $\xi \surj{\lambda} \mu$''. \begin{lemma} Let $\lambda > \omega$ be a limit ordinal. For $0 < \alpha \le \gamma < \lambda$, if $\L{\beta} =\thull{\gamma}(\alpha)$, then $\seq{\alpha} \surj{\lambda} \beta$. \end{lemma} \begin{proof} Observe that $\L{\beta} = \hull{\beta}(\alpha)$, and $\beta \le \gamma < \lambda$. In \L{\beta}, every $\xi \in \beta$ is the unique solution of some formula $\phi(v, \vec{\*\eta})$, with parameters $\vec{\eta} \in \seq{\alpha}$. Thus, from the fact that $\models_\L{\beta} \in \L{\lambda}$ one readily derives $\ensuremath{\mathcal{F}}\xspace \times \seq{\alpha} \surj{\lambda} \beta$. Using an injection $\ensuremath{\mathcal{F}}\xspace \times \seq{\alpha} \to \seq{\alpha}$, in \L{\lambda}, one gets $\seq{\alpha} \surj{\lambda} \beta$. \end{proof} \begin{proposition} \label{prop:Appendix-prop} Let $\lambda > \omega$ be a limit ordinal, and $\mu < \lambda$ an \Lcard{\lambda}, $\mu > \omega$. \begin{enumerate} \item{}$\mu$ is admissible. \item{}For $\alpha < \mu$ and $\alpha \le \gamma < \lambda$, if $\L{\beta} = \thull{\gamma}(\alpha)$, then $\beta < \mu$. \textup{(A downward Löwenheim-Skolem property.)} \end{enumerate} \end{proposition} \begin{proof} We check (1) and (2) simultaneously, by induction on $\mu$. Set $\xbar\mu = \min_{\eta \le \mu}(\seq{\eta} \surj{\lambda} \mu)$. We claim that $\xbar\mu = \mu$. Easily, $\xbar\mu$ is an \Lcard{\lambda} and $\xbar\mu > \omega$. If $\mu$ is the first \Lcard{\lambda} $> \omega$, then $\xbar\mu = \mu$. Else, if $\xbar\mu < \mu$, then by the induction hypothesis $\xbar\mu$ is admissible, thus there is an \L{\xbar\mu}\nobreakdash-definable bijection $\xbar\mu \to \seq{\xbar\mu}$. 
Hence $\xbar\mu \surj{\lambda} \seq{\xbar\mu}$, and since $\seq{\xbar\mu} \surj{\lambda} \mu$, $\xbar\mu \surj{\lambda} \mu$, contradicting the fact that $\mu$ is an \Lcard{\lambda}. Now (2) holds for $\mu$. Indeed, if $0 < \alpha < \mu$, and $\alpha \le \gamma < \lambda$, and $\L{\beta} = \thull{\gamma}(\alpha)$, then by the previous lemma, $\seq{\alpha} \surj{\lambda} \beta$. Hence, since $\alpha < \mu = \xbar\mu$, $\beta < \mu$. \smallskip{} To complete the proof that $\mu$ is admissible, only $\Delta_0$ \Ax{Collection} needs checking. Say $\L{\mu} \models \All{x \,{\in}\, \*a} \Exists{y} \phi(x, y, \vec{\*p})$, where $\phi$ is a $\Delta_0$ formula, and $a, \vec{p} \in \L{\mu}$. Pick $\alpha < \mu$ with $a,\,\vec{p} \in \L{\alpha}$, and set ${\L{\beta} = \thull{\mu}(\alpha)}$. $\beta \ge \alpha$, and $\L{\beta} \models \All{x \,{\in}\, \*a} \Exists{y} \phi(x, y, \vec{\*p})$. Applying (2), $\beta < \mu$ and $b =^\text{def} \L{\beta} \in \L{\mu}$. By $\Delta_0$ absoluteness, $\L{\mu} \models \All{x \,{\in}\, \*a} \Exists{y \,{\in}\, \*b} \phi(x,y,\vec{\*p})$. \end{proof} \begin{proposition}[\ref{prop:Folklore}] \label{prop:Folklore2} For $\lambda$ a limit ordinal, $\L{\lambda} \models "\*\mu > \omega \textup{ is a successor cardinal}" \Implies \L{\mu} \models \ensuremath{\mathsf{ZF}^-}\xspace$. \end{proposition} \begin{proof} Let $\pi$ be the cardinal predecessor of $\mu$ in \L{\lambda}. We argue that $\pi$ is the largest cardinal in \L{\mu}. Indeed, for $\pi \le \eta < \mu$, pick $\gamma < \lambda$ such that $\Exists{f \in \L{\gamma}}(f \colon \pi \twoheadrightarrow \eta)$, and set $\L{\beta} = \thull{\gamma}(\eta+1)$. We~get $\Exists{f \in \L{\beta}}(f \colon \pi \twoheadrightarrow \eta)$ and, invoking Proposition~\ref{prop:Appendix-prop}(2), $\beta < \mu$. \smallskip{} Next, check that $\mu$ is regular inside \L{\lambda}. 
The standard \ensuremath{\mathsf{ZFC}}\xspace proof for the regularity of infinite successor cardinals goes through here: for each nonzero $\eta < \mu$, using the wellordering $<_\L{\mu}$, select $f_\eta \in \L{\mu}$, $f_\eta \colon \pi \twoheadrightarrow \eta$, and note that the sequence $(f_\eta)_{0 < \eta < \mu}$ is in $\L{\mu+1} \subseteq \L{\lambda}$, etc. \smallskip{} Finally, to show that $\L{\mu} \models \ensuremath{\mathsf{ZF}^-}\xspace$: Since by \ref{prop:Appendix-prop}(1) $\mu$ is admissible, using the definable bijection $\mu \to \L{\mu}$, it suffices to verify \Ax{Replacement} for ordinal class-functions. Let therefore $F \colon \mu \to \mu$ be definable over \L{\mu}, from parameters $\vec{p}$. Given $a \in \L{\mu}$, $a \subseteq \mu$, by regularity of $\mu$ in \L{\lambda}, $F[a]$ is bounded in $\mu$. Pick ${\alpha < \mu}$, with $F[a] \subseteq \alpha$ and $a,\,\vec{p} \in \L{\alpha}$. Set $\L{\beta} = \thull{\mu}(\alpha)$; applying Proposition~\ref{prop:Appendix-prop}(2), $\beta < \mu$. Since $F[a] \in \L{\mu+1}$, $F[a] \in \L{\beta+1} \subseteq \L{\mu}$. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} The electrical conductivity is one of the fundamental properties of solids and, therefore, of ongoing interest for both theory and experiment. In recent years, advances in experimental techniques have revealed the need to reconsider theoretical descriptions of the conductivity including interband coherence effects, that is, going beyond independent quasiparticles. Hall measurements in very high magnetic fields have led to new insights into the nonsuperconducting state of high-temperature superconductors \cite{Badoux2016,Laliberte2016,Collignon2017,Putzke2019}. Even at high magnetic field, the product of cyclotron frequency and lifetime was found to be small, $\omega_c\tau\ll 1$, suggesting a sizable scattering rate $\Gamma=1/2\tau$. Various theories assume the onset of an emergent order parameter $\Delta$ to explain the experimental findings \cite{Storey2016,Verret2017,Eberlein2016,Mitscherling2018,Sharma2018,Morice2017,Charlebois2017}. Due to a nonzero $\Gamma$, it is questionable whether the conductivity is correctly described if interband coherence effects are neglected. Indeed, it was shown for spiral antiferromagnetic spin density waves that interband coherence effects are negligible not due to a general argument comparing the energy scales of the scattering and the gap, $\Gamma/\Delta$, but only by numerical prefactors specific to the material in question \cite{Mitscherling2018}. Interband coherence effects are also the key to describing the intrinsic anomalous Hall effect, that is, a transverse current without an applied magnetic field that is not caused by (skew) scattering. In the last decades, theoretical progress has been made in identifying basic mechanisms, improving theoretical methods and revealing its deep relation to topology \cite{Nagaosa2010,Xiao2010}.
In recent years there has been increasing interest in transport properties of systems with topological properties in material science \cite{Culcer2020, Xu2020, Sun2020}, including Heusler compounds \cite{Kubler2014, Manna2018, Noky2020}, Weyl semimetals \cite{Destraz2020, Li2020, Nagaosa2020}, and graphene \cite{Sharpe2019, McIver2020}, and in other fields such as microcavities \cite{Gianfrate2020} and cold atoms \cite{Cooper2019}. The derivation of a formula for the conductivity of a given model is challenging. The broadly used and intuitive Boltzmann transport theory in its traditional formulation is not able to capture interband coherence effects \cite{Mahan2000} and, therefore, misses related phenomena. In order to describe the anomalous Hall effect, the Boltzmann approach was modified by identifying further contributions like the so-called anomalous velocity \cite{Karplus1954}, which has led to a consistent theoretical description \cite{Sinitsyn2008}. By contrast, microscopic approaches provide a systematic framework but are usually less transparent and harder to interpret. The combination of both approaches, that is, a systematic microscopic derivation combined with a Boltzmann-like physical interpretation, in order to find further relevant contributions, is still the subject of recent research \cite{Sinitsyn2007}. The established connection between the intrinsic anomalous Hall conductivity and the Berry curvature \cite{Thouless1982, Niu1985, Kohmoto1985, Onoda2002, Jungwirth2002} combined with {\it ab initio} band structure calculations \cite{Fang2003,Yao2004} has become a powerful tool for combining theoretical and experimental results and is state-of-the-art in recent studies \cite{Culcer2020, Xu2020, Sun2020, Kubler2014, Manna2018, Noky2020, Destraz2020, Li2020, Nagaosa2020, Sharpe2019, McIver2020, Gianfrate2020, Cooper2019}.
Common microscopic approaches to the anomalous Hall conductivity are based on the work of Bastin {\it et al.} and Streda \cite{Bastin1971, Streda1982, Crepieux2001}. Starting from Kubo's linear response theory \cite{Mahan2000} in a Matsubara Green's function formalism, Bastin {\it et al.} \cite{Bastin1971} expanded in the frequency $\omega$ of the external electric field ${\bf E}(\omega)$ after analytic continuation and obtained the DC conductivity $\sigma^{\alpha\beta}$, where $\alpha,\beta=x,y,z$ are the directions of the induced current and the electric field, respectively. Streda \cite{Streda1982} further decomposed this result into so-called {\it Fermi-surface} and {\it Fermi-sea contributions} $\sigma^{\alpha\beta,I}$ and $\sigma^{\alpha\beta,II}$, which are defined as the contributions containing the derivative of the Fermi function and the Fermi function itself, respectively. These or similar decompositions are common starting points of further theoretical investigations \cite{Nagaosa2010, Sinitsyn2007, Crepieux2001, Dugaev2005, Onoda2006, Yang2006, Kontani2007, Nunner2008, Onoda2008, Tanaka2008, Kovalev2009, Streda2010, Pandey2012, Burkov2014, Chadova2015, Kodderitzsch2015, Mizoguchi2016}. However, those decompositions are usually not unique and {\it a priori} not motivated by stringent mathematical or physical reasons. In this paper, we present a complete microscopic derivation of the longitudinal and the anomalous Hall conductivity for a general momentum-block-diagonal two-band model within a Matsubara Green's function formalism. We allow for finite temperature and a constant scattering rate $\Gamma$ that is diagonal and equal, but arbitrarily large, for both bands. Our derivation is combined with a systematic analysis of the underlying structure of the involved quantities, which allows us to identify criteria for a unique and physically motivated decomposition of the conductivity formulas into contributions with distinct properties.
In Sec.~\ref{sec:twobandsystem} we define the model and its coupling to electric and magnetic fields. In Sec.~\ref{sec:conductivity} we present the detailed derivation and close by giving the final formulas for the longitudinal and the anomalous Hall conductivity. The key ingredient of the conductivity is the Bloch Hamiltonian matrix $\lambda_{\bf p}$. Changing to the eigenbasis separates the momentum derivative of $\lambda_{\bf p}$, the generalized velocity, into a diagonal quasivelocity matrix and an off-diagonal Berry-connection-like matrix. The former leads to the so-called {\it intraband contribution} $\sigma^{\alpha\beta}_\text{intra}$ that involves only quasiparticle spectral functions of one band in each term. The latter mixes the quasiparticle spectral functions of both bands and leads to the so-called {\it interband contribution} $\sigma^{\alpha\beta}_\text{inter}$, which captures the interband coherence effects beyond independent quasiparticles. The conductivity depends on the direction of the current and the external electric field. We uniquely decompose the conductivity into its {\it symmetric}, $\sigma^{\alpha\beta,s}=\sigma^{\beta\alpha,s}$, and {\it antisymmetric}, $\sigma^{\alpha\beta,a}=-\sigma^{\beta\alpha,a}$, part. The intraband contribution is symmetric, but the interband contribution splits into a symmetric and an antisymmetric part. We obtain \begin{align} \sigma^{\alpha\beta}=\sigma^{\alpha\beta}_\text{intra}+\sigma^{\alpha\beta,s}_\text{inter}+\sigma^{\alpha\beta,a}_\text{inter} \, . \end{align} The intraband contribution recovers the result of Boltzmann transport theory \cite{Mahan2000}. The symmetric interband contribution is a correction only present for finite scattering rate $\Gamma$ and is shown to be controlled by the quantum metric.
The antisymmetric interband contribution involves the Berry curvature and generalizes previous formulas of the anomalous Hall conductivity \cite{Thouless1982, Niu1985, Kohmoto1985, Onoda2002, Jungwirth2002} to finite scattering rate $\Gamma$. We show that the effect of $\Gamma$ is captured entirely by the product of quasiparticle spectral functions specific for each contribution. For the anomalous Hall conductivity, this combination of spectral functions becomes independent of $\Gamma$, or ``dissipationless'' \cite{Nagaosa2010}, in the clean limit (small $\Gamma$). In Sec.~\ref{sec:discussion} we discuss the properties of the contributions and several aspects of the derivation in detail. We re-derive the Bastin and Streda formula \cite{Bastin1971, Streda1982, Crepieux2001} within our notation and give the relation to the decomposition presented above. We show that our derivation provides a strategy to drastically reduce the effort in performing the trace over the two subsystems, which otherwise may lead to numerous terms and, thus, make an analytic treatment tedious. Allowing a scattering rate $\Gamma$ of arbitrary size enables a detailed analysis of the clean and dirty limits. We draw the connection between our derivation and concepts of quantum geometry, identifying the quantum geometric tensor as the origin of the interband contributions. Finally, we comment on the Berry curvature as an effective magnetic field, the dependence on the coordinate system, and the possibility of quantization of the anomalous Hall conductivity \cite{Thouless1982, Niu1985, Kohmoto1985}. In Sec.~\ref{sec:examples} we apply our results to different examples. Within a simple model of a Chern insulator, we discuss the modification of the quantized Hall effect due to a finite scattering rate $\Gamma$.
We reconsider the ferromagnetic multi-d-orbital model by Kontani {\it et al.} \cite{Kontani2007} to discuss the scaling behavior of the scattering rate $\Gamma$ in the dirty limit. The result is in good qualitative and quantitative agreement with experimental results for ferromagnets (see Ref.~\onlinecite{Onoda2008} and references therein). We discuss the spiral spin density wave on a two-dimensional square lattice as an example of a model with broken translation symmetry but combined symmetry in translation and spin-rotation, which is captured by our general two-band system. In Sec.~\ref{sec:conclusion} we summarize our results. Some of the detailed calculations are presented in the Appendix. \section{General two-band system} \label{sec:twobandsystem} \subsection{Model} \label{sec:twobandsystem:model} We assume the quadratic momentum-block-diagonal tight-binding Hamiltonian \begin{align} \label{eqn:H} H=\sum_{\bf p} \Psi^\dagger_{\bf p} \lambda^{}_{\bf p} \Psi^{}_{\bf p} \, , \end{align} where $\lambda_{\bf p}$ is a hermitian $2\times2$ matrix, $\Psi_{\bf p}$ is a fermionic spinor and $\Psi^\dag_{\bf p}$ is its hermitian conjugate. Without loss of generality we parameterize $\lambda_{\bf p}$ as \begin{align} \label{eqn:lam} \lambda_{\bf p}=\begin{pmatrix} \epsilon_{{\bf p},A} && \Delta_{\bf p} \\[3mm] \Delta^*_{\bf p} && \epsilon_{{\bf p},B}\end{pmatrix} \,, \end{align} where $\epsilon_{{\bf p},\sigma}$ are two arbitrary (real) bands of the subsystems $\sigma=A,B$. The complex function $\Delta_{\bf p}$ describes the coupling between $A$ and $B$. The spinor $\Psi_{\bf p}$ consists of the annihilation operators $c_{{\bf p},\sigma}$ of the two subsystems, \begin{align} \label{eqn:spinor} \Psi^{}_{\bf p}=\begin{pmatrix} c^{}_{{\bf p}+{\bf Q}_A,A} \\[1mm] c^{}_{{\bf p}+{\bf Q}_B,B} \end{pmatrix} \, , \end{align} where ${\bf Q}_\sigma$ are arbitrary but fixed momentum offsets.
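As a minimal numerical sketch (not part of the paper; the dispersions and the coupling below are hypothetical illustrative choices), one can construct an instance of the Bloch Hamiltonian matrix $\lambda_{\bf p}$ of \eqref{eqn:lam} and check that hermiticity is its only structural requirement:

```python
import numpy as np

# Illustrative dispersions eps_A, eps_B and coupling Delta (hypothetical choices)
def lam(px, py, t=1.0, delta0=0.3):
    eps_A = -2.0 * t * (np.cos(px) + np.cos(py))     # band of subsystem A
    eps_B = +2.0 * t * (np.cos(px) + np.cos(py))     # band of subsystem B
    Delta = delta0 * (np.sin(px) - 1j * np.sin(py))  # coupling between A and B
    return np.array([[eps_A, Delta], [np.conj(Delta), eps_B]])

m = lam(0.7, -1.2)
assert np.allclose(m, m.conj().T)  # hermiticity, the only structural requirement
```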
The subsystems $A$ and $B$ can be further specified by a set of spatial and/or non-spatial quantum numbers such as spin or two atoms in one unit cell. We label the positions of the unit cells via the Bravais lattice vectors ${\bf R}_j$. If needed, we denote the spatial position of subsystem $\sigma$ within a unit cell as $\boldsymbol\rho_\sigma$. The Fourier transformations of the annihilation operator from momentum to real space and vice versa are given by \begin{align} \label{eqn:FourierC} &c^{}_{j,\sigma}=\frac{1}{\sqrt{L}}\sum_{\bf p}\,c^{}_{{\bf p},\sigma}\,e^{i{\bf p}\cdot({\bf R}_j+\boldsymbol\rho_\sigma)} \, , \\ \label{eqn:FourierCInv} &c^{}_{{\bf p},\sigma}=\frac{1}{\sqrt{L}}\sum_j\,c^{}_{j,\sigma}\,e^{-i{\bf p}\cdot({\bf R}_j+\boldsymbol\rho_\sigma)} \, , \end{align} where $L$ is the number of lattice sites. Choosing the unit of length such that a single unit cell has volume 1, $L$ is also the volume of the system. Note that we included the precise position ${\bf R}_j+\boldsymbol\rho_\sigma$ of the subsystem $\sigma$ in the Fourier transformation \cite{Tomczak2009, Nourafkan2018}. The considered momentum-block-diagonal Hamiltonian \eqref{eqn:H} is not necessarily (lattice-)translational invariant due to the ${\bf Q}_\sigma$ in \eqref{eqn:spinor}. The translational invariance is present only for ${\bf Q}_A={\bf Q}_B$, that is, no relative momentum shift within the spinor. In the case ${\bf Q}_A\neq{\bf Q}_B$, the Hamiltonian is invariant under combined translation and rotation in spinor space. This difference can be seen explicitly in the real-space hoppings presented in Appendix~\ref{appendix:peierls}. Using the corresponding symmetry operator one can map a spatially motivated model to \eqref{eqn:H} and, thus, obtain a physical interpretation of the parameters \cite{Sandratskii1998}.
\subsection{Coupling to electric and magnetic fields} \label{sec:twobandsystem:peierls} We couple the Hamiltonian \eqref{eqn:H} to external electric and magnetic fields ${\bf E}({\bf r},t)$ and ${\bf B}({\bf r},t)$ via the Peierls substitution, that is, a phase factor acquired during spatial motion, and neglect further couplings. The derivation in this and the following subsection generalizes the derivation performed in the context of spiral spin density waves \cite{Voruganti1992, Mitscherling2018}. We Fourier transform the Hamiltonian \eqref{eqn:H} via \eqref{eqn:FourierC} defining \begin{align} \label{eqn:FourierH} \sum_{\bf p}\Psi^\dagger_{\bf p}\lambda^{}_{\bf p}\Psi^{}_{\bf p}=\sum_{jj'}\Psi^\dagger_j\lambda^{}_{jj'}\Psi^{}_{j'} \, , \end{align} where the indices $j$ indicate the unit cell coordinates ${\bf R}_j$. We modify the entries of the real-space hopping matrix $\lambda_{jj'}=(t_{jj',\sigma\sigma'})$ by \begin{align} \label{eqn:Peierls} t_{jj',\sigma\sigma'}\rightarrow t_{jj',\sigma\sigma'}\,e^{-ie\int^{{\bf R}_j+\boldsymbol\rho_\sigma}_{{\bf R}_{j'}+\boldsymbol\rho_{\sigma'}}{\bf A}({\bf r},t)\cdot d{\bf r}} \, . \end{align} Here, ${\bf A}({\bf r},t)$ is the vector potential. We have set the speed of light $c=1$ and have chosen the coupling charge to be the electron charge $q=-e$. Note that we have included hopping inside the unit cell by using the precise position ${\bf R}_j+\boldsymbol\rho_\sigma$ of the subsystems $\sigma$ \cite{Tomczak2009, Nourafkan2018}. Neglecting $\boldsymbol\rho_\sigma$ would lead to unphysical results (see Refs.~\onlinecite{Tomczak2009,Nourafkan2018} and the example in Sec.~\ref{sec:examples:doubling}). The coupling \eqref{eqn:Peierls} does not include local processes induced by the vector potential, for instance, via $c^\dag_{j,A}c^{\phantom{\dag}}_{j,B}$ if $\boldsymbol\rho_A=\boldsymbol\rho_B$.
In the (incomplete) Weyl gauge, in which the scalar potential vanishes, the electric and magnetic fields are entirely described by the vector potential via ${\bf E}({\bf r},t)=-\partial_t {\bf A}({\bf r},t)$ and ${\bf B}({\bf r},t)=\nabla\times{\bf A}({\bf r},t)$. We are interested in the long-wavelength regime of the external fields, especially in the DC conductivity. Assuming that the vector potential ${\bf A}({\bf r},t)$ varies only slowly over the hopping ranges defined by nonzero $t_{jj',\sigma\sigma'}$ allows us to approximate the integral inside the exponential in \eqref{eqn:Peierls}. As shown in Appendix~\ref{appendix:peierls} we get \begin{align} \label{eqn:HA} H[{\bf A}]=\sum_{\bf p}\Psi^\dagger_{\bf p} \lambda^{}_{\bf p} \Psi^{}_{\bf p}+\sum_{{\bf p}\bp'}\Psi^\dagger_{{\bf p}}\mathscr{V}^{}_{{\bf p}\bp'}\Psi^{}_{{\bf p}'} \, . \end{align} The first term is the unperturbed Hamiltonian \eqref{eqn:H}. The second term involves the electromagnetic vertex $\mathscr{V}_{{\bf p}\bp'}$ that captures the effect of the vector potential and vanishes for zero potential, that is $\mathscr{V}_{{\bf p}\bp'}[{\bf A}=0]=0$. The Hamiltonian is no longer diagonal in momentum ${\bf p}$ due to the spatial modulation of the vector potential. The vertex is given by \begin{align} \label{eqn:Vpp'} \mathscr{V}_{{\bf p}\bp'}=\sum^\infty_{n=1} \frac{e^n}{n!}&\sum_{\substack{{\bf q}_1...{\bf q}_n \\ \alpha_1...\alpha_n}} \lambda^{\alpha_1...\alpha_n}_{\frac{{\bf p}+{\bf p}'}{2}}\nonumber\\&\times A^{\alpha_1}_{{\bf q}_1}(t)...A^{\alpha_n}_{{\bf q}_n}(t)\,\delta_{\sum_m {\bf q}_m,{\bf p}-{\bf p}'} \,. \end{align} The $n$-th order of the vertex is proportional to the product of $n$ modes ${\bf A}_{\bf q}(t)$ of the vector potential given by \begin{align} \label{eqn:FourierAq} {\bf A}({\bf r},t)=\sum_{\bf q} {\bf A}_{\bf q}(t)e^{i{\bf q}\cdot{\bf r}} \, . \end{align} $A^\alpha_{\bf q}$ denotes the $\alpha=x,y,z$ component of the mode.
The Kronecker delta ensures momentum conservation. Each order of the vertex is weighted by the $n$-th derivative of the bare Bloch Hamiltonian \eqref{eqn:lam}, that is \begin{align} \label{eqn:DlamDef} \lambda^{\alpha_1...\alpha_n}_{\frac{{\bf p}+{\bf p}'}{2}}=\left.\partial_{\alpha_1}...\partial_{\alpha_n}\lambda^{}_{\bf p} \right|_{{\bf p}=\frac{{\bf p}+{\bf p}'}{2}} \, , \end{align} where $\partial_\alpha=\partial/\partial p^\alpha$ is the momentum derivative in $\alpha$ direction. Note that both the use of the precise positions of the subsystems in the Fourier transformation \cite{Tomczak2009, Nourafkan2018} and the momentum-block-diagonal form of the Hamiltonian are crucial for this result. \subsection{Current and conductivity} We derive the current of Hamiltonian \eqref{eqn:HA} induced by the vector potential within an imaginary-time path-integral formalism in order to ensure consistency and establish notation. We sketch the steps in the following. Details of the derivation are given in Appendix~\ref{appendix:current}. We set $k_B=1$ and $\hbar=1$. We rotate the vector potential modes ${\bf A}_{\bf q}(t)$ in the vertex \eqref{eqn:Vpp'} to imaginary time $\tau=i t$ and Fourier transform ${\bf A}_{\bf q}(\tau)$ via \begin{align} {\bf A}_{\bf q}(\tau)=\sum_{q_0}{\bf A}_qe^{-iq_0\tau} \, , \end{align} where $q_0=2n\pi T$ are bosonic Matsubara frequencies for $n\in \mathds{Z}$ and temperature $T$. We combine these frequencies $q_0$ and the momenta ${\bf q}$ into four-vectors, $q=(iq_0,{\bf q})$, for shorter notation. The real-frequency result will be recovered by analytic continuation, $iq_0\rightarrow \omega+i0^+$, at the end of the calculation. After the steps above, the electromagnetic vertex $\mathscr{V}_{pp'}$ involving Matsubara frequencies $p$ and $p'$ is of the same form as \eqref{eqn:Vpp'} with momenta replaced by four-vectors. The Kronecker delta then ensures both frequency and momentum conservation.
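The structure of the vertex series can be illustrated with a scalar toy dispersion (a hypothetical example, not from the paper): for a spatially uniform vector potential the sum over derivative terms in \eqref{eqn:Vpp'} is simply a Taylor series in $e{\bf A}$, which resums to the dispersion at shifted momentum:

```python
import numpy as np
from math import factorial

# Toy 1D "dispersion" lam_of(p) = cos(p); its n-th derivatives cycle with period 4.
e, A, p = 1.0, 0.05, 0.7
lam_of = np.cos
derivs = [np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x), np.sin]

def dn(n, x):
    return derivs[n % 4](x)  # n-th derivative of cos

# Vertex-like series: sum_n e^n/n! * lam^(n)(p) * A^n, starting at n = 1
series = sum(e**n / factorial(n) * dn(n, p) * A**n for n in range(1, 12))
assert abs(series - (lam_of(p + e * A) - lam_of(p))) < 1e-12
```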
The (euclidean) action of \eqref{eqn:HA} reads \begin{align} \label{eqn:S} S[\Psi,\Psi^*]=-\sum_p \Psi^*_p \mathscr{G}^{-1}_p \Psi^{}_p+\sum_{pp'} \Psi^*_p \mathscr{V}^{}_{pp'} \Psi^{}_{p'} \, , \end{align} where $\Psi_p$ and $\Psi_p^*$ are (complex) Grassmann fields. The inverse (bare) Green's function is given by \begin{align} \label{eqn:Green} \mathscr{G}^{-1}_p=ip_0+\mu-\lambda_{\bf p} + i\Gamma\,\text{sign}\, p_0 \, . \end{align} We include the chemical potential $\mu$. $p_0=(2n+1)\pi T$ are fermionic Matsubara frequencies for $n\in \mathds{Z}$ and temperature $T$. We assume the simplest possible current-relaxation scattering rate $\Gamma>0$ as a constant imaginary part proportional to the identity matrix. The scattering rate $\Gamma$ is momentum- and frequency-independent as well as diagonal and equal for both subsystems $\sigma=A,B$. Such approximations on $\Gamma$ are common in the literature on multiband systems \cite{Mitscherling2018, Verret2017, Eberlein2016, Mizoguchi2016, Tanaka2008, Kontani2007}. A physical derivation of $\Gamma$, for instance, due to interaction or impurity scattering depends on details of the models, which we do not further specify in our general two-band system. We are aware that each generalization of $\Gamma$ may affect parts of the following derivations and conclusions. We do not assume that $\Gamma$ is small and derive the current for $\Gamma$ of arbitrary size. The current $j^\alpha_q$ in $\alpha$ direction that is induced by the external electric and magnetic fields is given by the functional derivative of the grand canonical potential $\Omega[{\bf A}]$ with respect to the vector potential, \begin{align} j^\alpha_q=-\frac{1}{L}\frac{\delta \Omega[{\bf A}]}{\delta A^\alpha_{-q}} \,. \end{align} We expand the current up to first order in the vector potential and define \begin{align} \label{eqn:defPi} j^\alpha_q=j^\alpha_\text{para}-\sum_\beta\Pi^{\alpha\beta}_q A^\beta_q +... \, .
\end{align} $j^\alpha_\text{para}$ is the paramagnetic current, that is, a current in the absence of an external field. It vanishes for $E^\pm({\bf p})=E^\pm(-{\bf p}-{\bf p}^\pm)$, where $E^\pm({\bf p})$ are the two quasiparticle bands and ${\bf p}^\pm$ are arbitrary but fixed momenta \cite{Voruganti1992}. Since this condition is usually fulfilled, for instance due to an inversion-symmetric dispersion, we omit $j^\alpha_\text{para}$ in the following. The polarization tensor reads \begin{align} \label{eqn:PiFull} \Pi^{\alpha\beta}_q\hspace{-0.5mm}=\hspace{-0.5mm}e^2\frac{T}{L}\sum_p \text{tr}\big[\mathscr{G}^{}_p\lambda^\alpha_{p+q/2}\mathscr{G}^{}_{p+q}\lambda^\beta_{p+q/2}\big]\hspace{-0.5mm}-\hspace{-0.5mm}(q\hspace{-0.5mm}=\hspace{-0.5mm}0) \, , \end{align} where the second term corresponds to the first term evaluated at $q=0$. We have $\Pi^{\alpha\beta}_{q=0}=0$ as required by gauge invariance of the vector potential. The matrix trace due to the two subsystems is denoted by $\text{tr}$. Note that the matrices do not commute in general. Thus, the order of the Green's functions and vertices is important. We assume that the electric field is constant in space and the magnetic field is constant in time. Then, the vector potential splits additively into two parts, ${\bf A}({\bf r},t)={\bf A}^E(t)+{\bf A}^B({\bf r})$, such that ${\bf E}(t)=-\partial_t {\bf A}^E(t)$ and ${\bf B}({\bf r})=\nabla\times{\bf A}^B({\bf r})$. Performing the rotation to imaginary time and Fourier transformation leads to \begin{align} {\bf A}_q={\bf A}^E_{iq_0}\delta_{{\bf q},0}+{\bf A}^B_{\bf q}\delta_{iq_0,0} \, , \end{align} which allows us to separate the effects of the electric and the magnetic field. We are interested in the current induced by an external electric field. Since we are not considering any external magnetic field in the following, we omit the momentum dependence ${\bf q}$.
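The building blocks entering the polarization tensor can be assembled directly; the following is a minimal sketch of the bare Green's function \eqref{eqn:Green} at one momentum and one Matsubara frequency, with placeholder values for $\lambda_{\bf p}$, $\mu$, $T$, and $\Gamma$:

```python
import numpy as np

# Placeholder parameters; lam_p stands for the Bloch Hamiltonian at one momentum
T, mu, Gamma = 0.1, 0.2, 0.05
lam_p = np.array([[0.3, 0.1 - 0.2j], [0.1 + 0.2j, -0.4]])  # hermitian 2x2
p0 = np.pi * T                                  # first fermionic frequency (n = 0)
Ginv = (1j * p0 + mu) * np.eye(2) - lam_p + 1j * Gamma * np.sign(p0) * np.eye(2)
G = np.linalg.inv(Ginv)                         # bare Matsubara Green's function
assert np.allclose(G @ Ginv, np.eye(2))
```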
In order to have a clear form of the relevant entities for the further calculations, we introduce the compact notation $\text{Tr}\big[\cdot\big]=e^2T L^{-1}\sum_p\text{tr}\big[\cdot-(iq_0=0)\big]$, which involves the prefactors, the summation over $p$, and the subtraction of the argument evaluated at $iq_0=0$. Then, the polarization tensor reads \begin{align} \label{eqn:PiFinal} \Pi^{\alpha\beta}_{iq_0}=\text{Tr}\big[\mathscr{G}^{}_{ip_0+iq_0,{\bf p}}\lambda^\beta_{\bf p} \mathscr{G}^{}_{ip_0,{\bf p}}\lambda^\alpha_{\bf p}\big] \, . \end{align} We permuted the matrices by using the cyclic property of the trace, so that the first Green's function involves the external Matsubara frequency $iq_0$. We are interested in the conductivity tensor $\sigma^{\alpha\beta}_\omega$, defined as the coefficient of the contribution to the current that is linear in the external electric field, that is \begin{align} j^\alpha_\omega=\sigma^{\alpha\beta}_\omega E^\beta_\omega+... \, . \end{align} The polarization tensor and the conductivity are related via analytic continuation, \begin{align} \label{eqn:sigmaPi} \sigma^{\alpha\beta}(\omega)=-\frac{1}{i\omega}\Pi^{\alpha\beta}_{iq_0\rightarrow\omega+i0^+} \,. \end{align} The DC conductivity (tensor) is the zero frequency limit of the conductivity tensor, $\sigma^{\alpha\beta}=\sigma^{\alpha\beta}(\omega\rightarrow 0)$. \section{Longitudinal and anomalous Hall conductivity} \label{sec:conductivity} For given $\lambda_{\bf p}$, $\mu$, $T$ and $\Gamma$ all quantities in the polarization tensor $\Pi^{\alpha\beta}_{iq_0}$ in \eqref{eqn:PiFinal} are known, so that a direct numerical evaluation is possible by performing the Matsubara summation explicitly. Furthermore, analytic continuation is straightforward, leading to a conductivity formula at real frequency $\omega$.
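Such an explicit Matsubara summation can be sketched in a few lines; the toy model, grid, and truncation below are hypothetical and only illustrate the structure of \eqref{eqn:PiFinal}, including the subtraction of the $iq_0=0$ term, which enforces $\Pi^{\alpha\beta}_{q=0}=0$:

```python
import numpy as np

# Hypothetical parameters and toy 1D two-band model (illustrative only)
T, mu, Gamma, e = 0.1, 0.0, 0.05, 1.0
ps = np.linspace(-np.pi, np.pi, 16, endpoint=False)   # coarse momentum grid

def lam(p):
    return np.array([[np.cos(p), 0.3], [0.3, -np.cos(p)]])

def G(ip0, p):
    # bare Green's function; the sign of the imaginary frequency sets the Gamma term
    return np.linalg.inv((ip0 + mu) * np.eye(2) - lam(p)
                         + 1j * Gamma * np.sign(ip0.imag) * np.eye(2))

def Pi(iq0, nmax=100):
    dp, total = 1e-5, 0.0
    for p in ps:
        dlam = (lam(p + dp) - lam(p - dp)) / (2 * dp)  # vertex lambda^alpha
        for n in range(-nmax, nmax):                   # truncated Matsubara sum
            ip0 = 1j * (2 * n + 1) * np.pi * T
            total += np.trace(G(ip0 + iq0, p) @ dlam @ G(ip0, p) @ dlam
                              - G(ip0, p) @ dlam @ G(ip0, p) @ dlam)
    return e**2 * T / len(ps) * total

val = Pi(1j * 2 * np.pi * T)   # Pi^{xx} at the first bosonic frequency
```

The subtraction makes the summand decay fast enough for a truncated sum to converge; analytic continuation \eqref{eqn:sigmaPi} would then be performed on the real-frequency expression rather than on these sampled values.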
Here, we combine this analytic derivation with an analysis of the underlying structure of $\Pi^{\alpha\beta}_{iq_0}$ in order to identify criteria for physically and mathematically motivated decompositions. \subsection{Spherical representation} The crucial quantity to evaluate \eqref{eqn:PiFinal} is the Bloch Hamiltonian matrix $\lambda_{\bf p}$, which is present both in the Green's function $\mathscr{G}_{ip_0,{\bf p}}$ and in the vertex $\lambda^\alpha_{\bf p}$. The basic property of the $2\times2$ matrix $\lambda_{\bf p}$ is its hermiticity, allowing us to expand it in the identity matrix $\mathds{1}$ and the three Pauli matrices \begin{align} \sigma_x=\begin{pmatrix} 0 && 1 \\ 1 && 0 \end{pmatrix}, \, \sigma_y=\begin{pmatrix} 0 && -i \\ i && 0 \end{pmatrix}, \, \sigma_z=\begin{pmatrix} 1 && 0 \\ 0 && -1 \end{pmatrix} \,, \end{align} which we combine into the Pauli vector $\boldsymbol{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$. The indexing $x,y,z$ must not be confused with the spatial directions. We get the compact notation \begin{align} \lambda_{\bf p}&=g_{\bf p}\,\mathds{1}+{\bf d}_{\bf p}\cdot\boldsymbol{\sigma} \end{align} with a momentum-dependent function $g_{\bf p}$ and a momentum-dependent vector field ${\bf d}_{\bf p}$ \cite{Gianfrate2020, Volovik1988, Dugaev2008, Asboth2016, Bleu2018}. The Bloch Hamiltonian $\lambda_{\bf p}$ can be understood as a four-dimensional vector field that assigns $(g_{\bf p},{\bf d}_{\bf p})$ to each momentum ${\bf p}$. In 2D, we can visualize $g_{\bf p}$ as a surface on top of which we indicate the vector ${\bf d}_{\bf p}$ by its length and direction. An example is shown in Fig.~\ref{fig:plotLam}. The velocity, which is the momentum derivative of $\lambda_{\bf p}$, is the modulation of these fields. \begin{figure} \centering \includegraphics[width=8cm]{fig1.pdf} \caption{We can represent the Bloch Hamiltonian $\lambda_{\bf p}$ by a number $g_{\bf p}$ and a vector ${\bf d}_{\bf p}$.
For a 2D system, we can visualize $g_{\bf p}$ as a surface on top of which we indicate the vector ${\bf d}_{\bf p}$ by its length $r_{\bf p}$ (color) and direction given by the angles $\Theta_{\bf p}$ and $\varphi_{\bf p}$. The (generalized) velocity and, thus, the conductivity is given by the modulation of these fields. Here, we show $\lambda_{\bf p}$ of the example in Sec.~\ref{sec:examples:Kontani}. \label{fig:plotLam}} \end{figure} It is very useful to represent the vector ${\bf d}$ via its length $r$ and the two angles $\Theta$ and $\varphi$ in spherical coordinates, ${\bf d}=(r\cos\varphi\sin\Theta,r\sin\varphi\sin\Theta,r\cos\Theta)$. The Bloch Hamiltonian matrix $\lambda_{\bf p}$ in spherical coordinates reads \begin{align} \label{eqn:lamPolar} \lambda_{\bf p}= \begin{pmatrix} g_{\bf p}+r_{\bf p}\cos\Theta_{\bf p} && r_{\bf p}\sin\Theta_{\bf p} e^{-i\varphi_{\bf p}} \\[2mm] r_{\bf p}\sin\Theta_{\bf p} e^{i\varphi_{\bf p}} && g_{\bf p}-r_{\bf p}\cos\Theta_{\bf p}\end{pmatrix} \, . \end{align} The forms \eqref{eqn:lam} and \eqref{eqn:lamPolar} are equivalent and impose no restriction on the Hamiltonian other than hermiticity. In the following, we exclusively use $\lambda_{\bf p}$ in spherical coordinates. For given $\epsilon_{{\bf p},A},\,\epsilon_{{\bf p},B}$ and $\Delta_{\bf p}$ in \eqref{eqn:lam} the construction of \eqref{eqn:lamPolar} is straightforward. We give the relations explicitly, since they may provide a better intuitive understanding of the involved quantities. We define the two functions $g_{\bf p}$ and $h_{\bf p}$ by \begin{align} \label{eqn:gh} &g_{\bf p}=\frac{1}{2}(\epsilon_{{\bf p},A}+\epsilon_{{\bf p},B}) \, , &h_{\bf p}=\frac{1}{2}(\epsilon_{{\bf p},A}-\epsilon_{{\bf p},B}) \, . \end{align} The radius $r_{\bf p}$ is given by $h_{\bf p}$ and the absolute value of $\Delta_{\bf p}$ via \begin{align} &r_{\bf p}=\sqrt{h^2_{\bf p}+|\Delta_{\bf p}|^2} \, .
\end{align} The angle $\Theta_{\bf p}$ describes the ratio between $h_{\bf p}$ and $|\Delta_{\bf p}|$. The angle $\varphi_{\bf p}$ is equal to the negative phase of $\Delta_{\bf p}$. They are given by \begin{align} &\cos\Theta_{\bf p} = \frac{h_{\bf p}}{r_{\bf p}} &&\sin\Theta_{\bf p} = \frac{|\Delta_{\bf p}|}{r_{\bf p}} \, , \\ \label{eqn:Phi} &\cos\varphi_{\bf p}=\text{Re}\frac{\Delta_{\bf p}}{|\Delta_{\bf p}|}&&\sin\varphi_{\bf p}=-\text{Im}\frac{\Delta_{\bf p}}{|\Delta_{\bf p}|} \, . \end{align} The advantage of the spherical form \eqref{eqn:lamPolar} is the simplicity of its eigenvalues and eigenvectors. We denote the eigensystem at momentum ${\bf p}$ as $\pm_{\bf p}$. The eigenenergies are \begin{align} E^\pm_{\bf p}=g^{}_{\bf p}\pm r^{}_{\bf p} \end{align} with corresponding eigenvectors \begin{align} &\label{eqn:+}|+_{\bf p}\rangle=e^{i\phi^+_{\bf p}}\begin{pmatrix} \cos \frac{1}{2}\Theta_{\bf p} \\[2mm] e^{i\varphi_{\bf p}}\,\sin \frac{1}{2}\Theta_{\bf p} \end{pmatrix}\,, \\[5mm] &\label{eqn:-}|-_{\bf p}\rangle=e^{i\phi^-_{\bf p}}\begin{pmatrix} -e^{-i\varphi_{\bf p}}\,\sin \frac{1}{2}\Theta_{\bf p} \\[2mm]\cos \frac{1}{2}\Theta_{\bf p} \end{pmatrix}\,. \end{align} These eigenvectors are normalized and orthogonal, $\langle +_{\bf p}|+_{\bf p}\rangle=\langle -_{\bf p}|-_{\bf p}\rangle=1$ and $\langle +_{\bf p}|-_{\bf p}\rangle=\langle -_{\bf p}|+_{\bf p}\rangle=0$. The two phases $\phi^\pm_{\bf p}$ reflect the freedom to choose a phase of the normalized eigenvectors when diagonalizing at fixed momentum ${\bf p}$, that is, a ``local'' $U(1)$ gauge symmetry. We include these phases explicitly for easier comparison with other gauge choices and to make gauge-dependent quantities more apparent in the following calculations. \subsection{Interband coherence effects} The polarization tensor $\Pi^{\alpha\beta}_{iq_0}$ in \eqref{eqn:PiFinal} is the trace of the product of Green's function matrices and vertex matrices.
A trace is invariant under unitary transformations (or, in general, similarity transformations) due to its cyclic property. We transform all matrices by the $2\times 2$ unitary transformation $U_{\bf p}=\begin{pmatrix}|+_{\bf p}\rangle & |-_{\bf p}\rangle\end{pmatrix}$, whose columns are composed of the eigenvectors $|\pm_{\bf p}\rangle$. The matrix $U_{\bf p}$ diagonalizes the Bloch Hamiltonian matrix \begin{align} \label{eqn:Ep} \mathcal{E}_{\bf p}=U^\dagger_{\bf p}\lambda^{}_{\bf p} U^{}_{\bf p}=\begin{pmatrix} E^+_{\bf p} && 0 \\[1mm] 0 && E^-_{\bf p} \end{pmatrix} \, , \end{align} where we defined the quasiparticle band matrix $\mathcal{E}_{\bf p}$. We transform the Green's function matrix in \eqref{eqn:Green} and get the diagonal Green's function \begin{align} \label{eqn:Gdiag} \mathcal{G}_{ip_0,{\bf p}}&=U^\dagger_{\bf p} \mathscr{G}^{}_{ip_0,{\bf p}}U^{}_{\bf p}\nonumber\\&= \big[ip_0+\mu-\mathcal{E}_{\bf p}+i\Gamma \,\text{sign}p_0\big]^{-1} \, . \end{align} Note that the assumption that $\Gamma$ is proportional to the identity matrix is crucial to obtain a diagonal Green's function matrix by this transformation. In general, the vertex matrix $\lambda^\alpha_{\bf p}$ will not be diagonal after unitary transformation with $U_{\bf p}$, since it involves the momentum derivative $\lambda^\alpha_{\bf p}=\partial_\alpha \lambda_{\bf p}$, which does not commute with the momentum-dependent $U_{\bf p}$. Expressing $\lambda_{\bf p}$ in terms of $\mathcal{E}_{\bf p}$ we get \begin{align} \label{eqn:UdagLamUDeriv} U^\dagger_{\bf p}\lambda^\alpha_{\bf p} U^{}_{\bf p}=U^\dagger_{\bf p}\big[\partial^{}_\alpha\lambda^{}_{\bf p}\big] U^{}_{\bf p}=U^\dagger_{\bf p}\big[\partial^{}_\alpha\big(U^{}_{\bf p}\mathcal{E}^{}_{\bf p}U^\dagger_{\bf p}\big)\big]U^{}_{\bf p} \,. \end{align} The derivative of $\mathcal{E}_{\bf p}$ leads to the eigenvelocities $\mathcal{E}^\alpha_{\bf p}=\partial^{}_\alpha \mathcal{E}^{}_{\bf p}$.
The two other terms from the derivative contain the momentum derivative of $U_{\bf p}$. Using the identity $\big(\partial^{}_\alpha U^\dagger_{\bf p}\big)U^{}_{\bf p}=-U^\dagger_{\bf p}\big(\partial^{}_\alpha U^{}_{\bf p}\big)$ of unitary matrices we end up with \begin{align} \label{eqn:UdagLamU} U^\dagger_{\bf p}\lambda^\alpha_{\bf p} U^{}_{\bf p}=\mathcal{E}_{\bf p}^\alpha+\mathcal{F}^\alpha_{\bf p}\, , \end{align} where we defined $\mathcal{F}^\alpha_{\bf p}=-i\big[\mathcal{A}_{\bf p}^\alpha,\mathcal{E}^{}_{\bf p}\big]$ with \begin{align} \label{eqn:BerryConnection} \mathcal{A}_{\bf p}^\alpha=iU^\dagger_{\bf p}\big(\partial^{}_\alpha U^{}_{\bf p}\big) \,. \end{align} Since $\mathcal{F}^\alpha_{\bf p}$ involves the commutator with the diagonal matrix $\mathcal{E}_{\bf p}$, $\mathcal{F}^\alpha_{\bf p}$ is a purely off-diagonal matrix. Thus, we see already at this stage that $\mathcal{F}^\alpha_{\bf p}$ causes the mixing of the two quasiparticle bands and, thus, captures exclusively the interband coherence effects. We refer to $\mathcal{F}^\alpha_{\bf p}$ as the ``(interband) coherence matrix''. Let us have a closer look at $\mathcal{A}^\alpha_{\bf p}$ defined in \eqref{eqn:BerryConnection}. The matrix $U_{\bf p}$ consists of the eigenvectors $|\pm_{\bf p}\rangle$. Its hermitian conjugate $U^\dagger_{\bf p}$ consists of the corresponding $\langle\pm_{\bf p}|$. Thus, we can identify the diagonal elements of $\mathcal{A}_{\bf p}^\alpha$ as the Berry connection of the eigenstates $|\pm_{\bf p}\rangle$, that is $\mathcal{A}^{\alpha,\pm}_{\bf p}=i\langle\pm_{\bf p}|\partial_\alpha\pm_{\bf p}\rangle$, where $|\partial_\alpha \pm_{\bf p}\rangle=\partial_\alpha |\pm_{\bf p}\rangle$ is the momentum derivative of the eigenstate \cite{Berry1984,Zak1989}. $\mathcal{A}_{\bf p}^\alpha$ is hermitian due to the unitarity of $U_{\bf p}$.
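The split of the transformed vertex into $\mathcal{E}^\alpha_{\bf p}$ and a purely off-diagonal $\mathcal{F}^\alpha_{\bf p}$ can be checked numerically; the sketch below uses a hypothetical gapped 1D two-band dispersion (not an example from the paper) and finite-difference momentum derivatives:

```python
import numpy as np

# Hypothetical 1D two-band model with a gap at every momentum (illustrative only)
def lam(p):
    return np.array([[np.cos(p), 0.3 - 0.2j * np.sin(p)],
                     [0.3 + 0.2j * np.sin(p), -np.cos(p)]])

p, dp = 0.4, 1e-6
E, U = np.linalg.eigh(lam(p))                        # eigenbasis at fixed momentum
dlam = (lam(p + dp) - lam(p - dp)) / (2 * dp)        # generalized velocity
vertex = U.conj().T @ dlam @ U                       # transformed vertex
dE = (np.linalg.eigvalsh(lam(p + dp))
      - np.linalg.eigvalsh(lam(p - dp))) / (2 * dp)  # quasiparticle velocities
F = vertex - np.diag(dE)                             # coherence matrix F^alpha
assert np.allclose(np.diag(F), 0, atol=1e-4)         # purely off-diagonal
```

The vanishing diagonal of `F` is the Hellmann-Feynman statement that the diagonal of $U^\dagger_{\bf p}\lambda^\alpha_{\bf p}U^{}_{\bf p}$ equals $\mathcal{E}^\alpha_{\bf p}$.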
This allows us to express it in terms of the identity and the Pauli matrices, $\mathcal{A}_{\bf p}^\alpha=\mathcal{I}^\alpha_{\bf p}+\mathcal{X}_{\bf p}^\alpha+\mathcal{Y}_{\bf p}^\alpha+\mathcal{Z}_{\bf p}^\alpha$, where \begin{align} \label{eqn:I} \mathcal{I}^\alpha_{\bf p}&= -\frac{1}{2}\big[\phi^{+,\alpha}_{\bf p}+\phi^{-,\alpha}_{\bf p}\big]\,\mathds{1} \, ,\\ \label{eqn:X} \mathcal{X}^\alpha_{\bf p}&=-\frac{1}{2}\big[\varphi^\alpha_{\bf p}\sin\Theta_{\bf p}\cos\tilde\varphi_{\bf p}+\Theta^\alpha_{\bf p}\sin\tilde\varphi_{\bf p}\big]\,\sigma_x \, , \\ \label{eqn:Y} \mathcal{Y}^\alpha_{\bf p}&=-\frac{1}{2}\big[\varphi^\alpha_{\bf p}\sin\Theta_{\bf p}\sin\tilde\varphi_{\bf p}-\Theta_{\bf p}^\alpha\cos\tilde\varphi_{\bf p}\big]\,\sigma_y \, ,\\ \label{eqn:Z} \mathcal{Z}^\alpha_{\bf p}&=-\frac{1}{2}\big[\phi^{+,\alpha}_{\bf p}-\phi^{-,\alpha}_{\bf p}+\varphi^\alpha_{\bf p}\big(1-\cos\Theta_{\bf p}\big)\big]\,\sigma_z \, , \end{align} and $\tilde\varphi_{\bf p}=\varphi_{\bf p}+\phi^+_{\bf p}-\phi^-_{\bf p}$. We calculated the prefactors by using \eqref{eqn:+} and \eqref{eqn:-} and used the short notation $\Theta^\alpha_{\bf p}=\partial_\alpha\Theta_{\bf p}$ and $\varphi^\alpha_{\bf p}=\partial_\alpha\varphi_{\bf p}$ for the momentum derivative in $\alpha$ direction. Each component of $\mathcal{A}^\alpha_{\bf p}$ is gauge dependent, as it involves $\phi^{\pm,\alpha}_{\bf p}=\partial^{}_\alpha \phi^\pm_{\bf p}$ or $\tilde\varphi^{}_{\bf p}$. The coherence matrix $\mathcal{F}^\alpha_{\bf p}$ involves only the off-diagonal matrices $\mathcal{X}^\alpha_{\bf p}$ and $\mathcal{Y}^\alpha_{\bf p}$, since the diagonal contributions $\mathcal{I}^\alpha_{\bf p}$ and $\mathcal{Z}^\alpha_{\bf p}$ vanish in the commutator with the diagonal matrix $\mathcal{E}_{\bf p}$. We see that the coherence matrix $\mathcal{F}^\alpha_{\bf p}$ is gauge dependent due to $\tilde\varphi_{\bf p}$.
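This gauge dependence can be made concrete with a small sketch (hypothetical numbers): a gauge change at fixed momentum rotates the off-diagonal element of the connection by a phase, which changes each $\mathcal{F}^\alpha_{\bf p}$ but, as one can verify directly, not the product $\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}$:

```python
import numpy as np

E = np.diag([0.9, -0.4])                   # quasiparticle band matrix (placeholder)

def F(a, chi=0.0):
    # off-diagonal connection with gauge phase chi applied to its element a
    A = np.array([[0, a * np.exp(1j * chi)],
                  [np.conj(a) * np.exp(-1j * chi), 0]])
    return -1j * (A @ E - E @ A)           # F = -i [A, E]

aX, aY = 0.3 - 0.1j, -0.2 + 0.4j           # placeholder connection elements
chi = 0.77
assert not np.allclose(F(aX, chi), F(aX))                    # each factor: gauge dependent
assert np.allclose(F(aX, chi) @ F(aY, chi), F(aX) @ F(aY))   # product: gauge independent
```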
However, the product $\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}$ is gauge independent as we can see by \begin{align} \label{eqn:FF} \mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}&\propto\big(\mathcal{X}^\alpha_{\bf p}+\mathcal{Y}^\alpha_{\bf p}\big)\big(\mathcal{X}^\beta_{\bf p}+\mathcal{Y}^\beta_{\bf p}\big)\nonumber \\[3mm] &\propto \begin{pmatrix} 0 & e^{-i\tilde\varphi_{\bf p}} \\ e^{i\tilde\varphi_{\bf p}} & 0\end{pmatrix} \begin{pmatrix} 0 & e^{-i\tilde\varphi_{\bf p}} \\ e^{i\tilde\varphi_{\bf p}} & 0 \end{pmatrix}\propto \mathds{1} \, , \end{align} where we dropped gauge-independent quantities in each step. The quasiparticle velocity $\mathcal{E}^\alpha_{\bf p}$ is also gauge independent. \subsection{Decomposition} With these remarks we evaluate the polarization tensor $\Pi^{\alpha\beta}_{iq_0}$ given in \eqref{eqn:PiFinal}. The unitary transformation by the eigenbasis $|\pm_{\bf p}\rangle$ leads to \begin{align} \Pi^{\alpha\beta}_{iq_0}=\text{Tr}\big[\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\big(\mathcal{E}^\beta_{\bf p}+\mathcal{F}^\beta_{\bf p}\big)\mathcal{G}^{}_{ip_0,{\bf p}}\big(\mathcal{E}^\alpha_{\bf p}+\mathcal{F}^\alpha_{\bf p}\big)\big] \, . \end{align} The Green's function matrices \eqref{eqn:Gdiag} are diagonal, whereas the vertices \eqref{eqn:UdagLamU} contain the diagonal matrix $\mathcal{E}^\alpha_{\bf p}$ and the off-diagonal matrix $\mathcal{F}^\alpha_{\bf p}$. The matrix trace only gives nonzero contribution if the product of the four matrices involves an even number of off-diagonal matrices, that is zero or two in this case. Thus, the mixed terms involving both $\mathcal{E}^\alpha_{\bf p}$ and $\mathcal{F}^\alpha_{\bf p}$ vanish. This leads to the decomposition of $\Pi^{\alpha\beta}_{iq_0}$ into an {\it intraband} and an {\it interband contribution}: \begin{align} \label{eqn:DecompIntraInter} \Pi^{\alpha\beta}_{iq_0}=\Pi^{\alpha\beta}_{iq_0,\text{intra}}+\Pi^{\alpha\beta}_{iq_0,\text{inter}} \,. 
\end{align} In the intraband contribution the two eigensystems $\pm_{\bf p}$ are not mixed, whereas they mix in the interband contribution due to the interband coherence matrix $\mathcal{F}^\alpha_{\bf p}$. The individual contributions in \eqref{eqn:DecompIntraInter} are gauge independent due to \eqref{eqn:FF} but not unique in a mathematical sense. For instance, we can use any similarity transformation and perform similar steps as discussed above. The sum of the contributions leads to the same final result, but the individual contributions may have less obvious physical interpretations. We discuss this point in Sec.~\ref{sec:discussion:basis} in more detail. The matrix trace $\text{tr}$ is invariant under transposition. For the product of several symmetric and antisymmetric (or skew-symmetric) matrices $A,\,B,\,C,\,D$ this leads to \begin{align} \label{eqn:TraceTrans} \text{tr}\big(ABCD\big)\hspace{-0.7mm}=\hspace{-0.6mm}\text{tr}\big(D^\text{T}C^\text{T}B^\text{T}A^\text{T}\big)\hspace{-0.7mm}=\hspace{-0.7mm}(-1)^{n}\,\hspace{-0.1mm}\text{tr}\big(DCBA\big) \end{align} with $A^\text{T}$ being the transposed matrix of $A$, and so on, and $n$ the number of antisymmetric matrices involved. We refer to the procedure in \eqref{eqn:TraceTrans} as ``trace transposition'' or ``reversing the matrix order under the trace'' in the following \cite{Mitscherling2018}. We call a trace with a positive sign after trace transposition {\it symmetric} and a trace with a negative sign after trace transposition {\it antisymmetric}. Every trace of arbitrary square matrices can be uniquely decomposed into a symmetric and an antisymmetric part in this way. We analyze the intra- and interband contributions in \eqref{eqn:DecompIntraInter} with respect to their behavior under trace transposition.
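The sign rule in \eqref{eqn:TraceTrans} is easily verified numerically. The following minimal sketch (Python with NumPy; our own illustration, not part of the derivation) checks it for random symmetric and antisymmetric $2\times 2$ matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(n=2):
    M = rng.standard_normal((n, n))
    return M + M.T          # symmetric: M^T = M

def antisym(n=2):
    M = rng.standard_normal((n, n))
    return M - M.T          # antisymmetric: M^T = -M

make = {"s": sym, "a": antisym}

# tr(ABCD) = tr(D^T C^T B^T A^T) = (-1)^n tr(DCBA),
# with n the number of antisymmetric factors
for pattern in ["ssss", "assa", "saaa", "aaaa", "asss"]:
    A, B, C, D = (make[c]() for c in pattern)
    n = pattern.count("a")
    lhs = np.trace(A @ B @ C @ D)
    rhs = (-1) ** n * np.trace(D @ C @ B @ A)
    assert np.isclose(lhs, rhs), pattern
```

The same check works for any matrix size, since only the transposition symmetry of the factors enters.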
The intraband contribution involves the quasiparticle velocities $\mathcal{E}^\alpha_{\bf p}$ and the Green's functions, that is \begin{align} \label{eqn:intra} \Pi^{\alpha\beta}_{iq_0,\text{intra}}=\text{Tr}\big[\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\mathcal{E}^\beta_{\bf p} \mathcal{G}^{}_{ip_0,{\bf p}}\mathcal{E}^\alpha_{\bf p}\big] \, . \end{align} All matrices are diagonal and, thus, symmetric. We see that the intraband contribution is symmetric under trace transposition. The interband contribution involves diagonal Green's functions and $\mathcal{F}^\alpha_{\bf p}$, which is neither symmetric nor antisymmetric. We decompose it into its symmetric and antisymmetric part \begin{align} &\mathcal{F}^{\alpha,s}_{\bf p}=\frac{1}{2}\big(\mathcal{F}^\alpha_{\bf p}+(\mathcal{F}^\alpha_{\bf p})^\text{T}\big)=-i\big[\mathcal{Y}^\alpha_{\bf p},\mathcal{E}_{\bf p}\big] \, ,\\ &\mathcal{F}^{\alpha,a}_{\bf p}=\frac{1}{2}\big(\mathcal{F}^\alpha_{\bf p}-(\mathcal{F}^\alpha_{\bf p})^\text{T}\big)=-i\big[\mathcal{X}^\alpha_{\bf p},\mathcal{E}_{\bf p}\big] \, . 
\end{align} By this, the interband contribution decomposes into a symmetric and antisymmetric contribution under trace transposition, \begin{align} \label{eqn:DecompSymAntisym} \Pi^{\alpha\beta}_{iq_0,\,\text{inter}}=\Pi^{\alpha\beta,\text{s}}_{iq_0,\,\text{inter}}+\Pi^{\alpha\beta,\text{a}}_{iq_0,\,\text{inter}} \,, \end{align} where \begin{align} \label{eqn:inter_sym} \Pi^{\alpha\beta,\text{s}}_{iq_0,\,\text{inter}}&=\text{Tr}\big[4r_{\bf p}^2\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\mathcal{X}^\beta_{\bf p} \mathcal{G}^{}_{ip_0,{\bf p}}\mathcal{X}^\alpha_{\bf p}\big]\nonumber\\[1mm] &+\text{Tr}\big[4r_{\bf p}^2\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\mathcal{Y}^\beta_{\bf p} \mathcal{G}^{}_{ip_0,{\bf p}}\mathcal{Y}^\alpha_{\bf p}\big]\,,\\[1mm] \label{eqn:inter_antisym} \Pi^{\alpha\beta,\text{a}}_{iq_0,\,\text{inter}}&=\text{Tr}\big[4r_{\bf p}^2\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\mathcal{X}^\beta_{\bf p} \mathcal{G}^{}_{ip_0,{\bf p}}\mathcal{Y}^\alpha_{\bf p}\big]\nonumber\\[1mm] &+\text{Tr}\big[4r_{\bf p}^2\mathcal{G}^{}_{ip_0+iq_0,{\bf p}}\mathcal{Y}^\beta_{\bf p} \mathcal{G}^{}_{ip_0,{\bf p}}\mathcal{X}^\alpha_{\bf p}\big]\,. \end{align} We used $\mathcal{E}_{\bf p}=g_{\bf p}+r_{\bf p}\sigma_z$ and performed the commutator explicitly. Interestingly, the symmetry under trace transposition, which is due to the multiband character, is connected to the symmetry of the polarization tensor or, equivalently, of the conductivity tensor $\sigma=(\sigma^{\alpha\beta})$ itself: Trace transposition of \eqref{eqn:intra}, \eqref{eqn:inter_sym} and \eqref{eqn:inter_antisym} is equal to the exchange of $\alpha\leftrightarrow\beta$, the directions of the current and the external electric field. In \eqref{eqn:FF} we showed that the product $\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}$ is gauge independent. However, this product is neither symmetric nor antisymmetric with respect to $\alpha\leftrightarrow\beta$. 
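The transposition properties of $\mathcal{F}^{\alpha,s}_{\bf p}$ and $\mathcal{F}^{\alpha,a}_{\bf p}$ stated above can be verified directly: with $\mathcal{E}_{\bf p}$ diagonal, $\mathcal{X}^\alpha_{\bf p}\propto\sigma_x$ symmetric and $\mathcal{Y}^\alpha_{\bf p}\propto\sigma_y$ antisymmetric, the commutator with a diagonal matrix exchanges the two symmetry types. A minimal numerical sketch (Python/NumPy; the numerical values are arbitrary, and we use $\mathcal{F}=-i[\mathcal{X}+\mathcal{Y},\mathcal{E}]$ as implied by the text):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

E = np.diag([0.7, -0.4]).astype(complex)   # diagonal quasiparticle matrix
X = 0.3 * sx                               # X^T = +X  (symmetric)
Y = -0.8 * sy                              # Y^T = -Y  (antisymmetric)

comm = lambda A, B: A @ B - B @ A
F = -1j * comm(X + Y, E)     # coherence matrix F = -i[X + Y, E]

Fs = (F + F.T) / 2           # plain transpose, no complex conjugation
Fa = (F - F.T) / 2

assert np.allclose(Fs, -1j * comm(Y, E))   # symmetric part stems from Y
assert np.allclose(Fa, -1j * comm(X, E))   # antisymmetric part stems from X
```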
Up to a prefactor, the symmetric and antisymmetric parts of this product read \begin{align} &\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}+\mathcal{F}^\beta_{\bf p}\mathcal{F}^\alpha_{\bf p}\propto\{\mathcal{X}^\alpha_{\bf p},\mathcal{X}^\beta_{\bf p}\}+\{\mathcal{Y}^\alpha_{\bf p},\mathcal{Y}^\beta_{\bf p}\}=\mathcal{C}^{\alpha\beta}_{\bf p}\,,\\[2mm] &\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}-\mathcal{F}^\beta_{\bf p}\mathcal{F}^\alpha_{\bf p}\propto [\mathcal{X}^\alpha_{\bf p},\mathcal{Y}^\beta_{\bf p}]+[\mathcal{Y}^\alpha_{\bf p},\mathcal{X}^\beta_{\bf p}]=-i\,\Omega^{\alpha\beta}_{\bf p}\,, \end{align} which define the symmetric function $\mathcal{C}^{\alpha\beta}_{\bf p}$ and the antisymmetric function $\Omega^{\alpha\beta}_{\bf p}$; both are real-valued diagonal matrices. Using \eqref{eqn:X} and \eqref{eqn:Y} we get \begin{align} \label{eqn:CM} \mathcal{C}^{\alpha\beta}_{\bf p}&=\frac{1}{2}\big(\Theta^\alpha_{\bf p}\Theta^\beta_{\bf p}+\varphi^\alpha_{\bf p}\varphi^\beta_{\bf p}\sin^2\Theta_{\bf p}\big)\mathds{1}\,,\\ \label{eqn:OmegaM} \Omega^{\alpha\beta}_{\bf p}&=\frac{1}{2}\big(\varphi^\alpha_{\bf p}\Theta^\beta_{\bf p}-\varphi^\beta_{\bf p}\Theta^\alpha_{\bf p}\big)\sin\Theta_{\bf p}\,\sigma_z\,. \end{align} We see explicitly that $\mathcal{C}^{\alpha\beta}_{\bf p}$ and $\Omega^{\alpha\beta}_{\bf p}$ are gauge independent. Note that $\mathcal{C}^{\alpha\beta}_{\bf p}$ involves equal contributions for both quasiparticle bands, whereas $\Omega^{\alpha\beta}_{\bf p}$ involves contributions of opposite sign for the two quasiparticle bands. Furthermore, one can check that $\Omega^{\alpha\beta}_{\bf p}=\partial_\alpha\mathcal{Z}^\beta_{\bf p}-\partial_\beta\mathcal{Z}^\alpha_{\bf p}$. Thus, $\Omega^{\alpha\beta}_{\bf p}$ is the Berry curvature of the eigenbasis $|\pm_{\bf p}\rangle$.
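The Pauli-matrix algebra behind \eqref{eqn:CM} and \eqref{eqn:OmegaM} can be machine-checked. The sketch below (Python/NumPy; angles and derivatives are random numbers, our own illustration) builds $\mathcal{X}^\alpha$ and $\mathcal{Y}^\alpha$ from \eqref{eqn:X} and \eqref{eqn:Y} and confirms the anticommutator and commutator identities:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

rng = np.random.default_rng(1)
Theta = rng.uniform(0, np.pi)        # Theta_p
phit = rng.uniform(0, 2 * np.pi)     # gauge-dependent tilde(varphi)_p
dT = rng.standard_normal(2)          # Theta^x, Theta^y
dphi = rng.standard_normal(2)        # varphi^x, varphi^y

def XY(a):
    """X^a and Y^a from Eqs. (eqn:X) and (eqn:Y)."""
    X = -0.5 * (dphi[a] * np.sin(Theta) * np.cos(phit)
                + dT[a] * np.sin(phit)) * sx
    Y = -0.5 * (dphi[a] * np.sin(Theta) * np.sin(phit)
                - dT[a] * np.cos(phit)) * sy
    return X, Y

Xa, Ya = XY(0)
Xb, Yb = XY(1)
anti = lambda A, B: A @ B + B @ A
comm = lambda A, B: A @ B - B @ A

C = anti(Xa, Xb) + anti(Ya, Yb)             # quantum metric factor matrix
Om = 1j * (comm(Xa, Yb) + comm(Ya, Xb))     # Berry curvature matrix

assert np.allclose(C, 0.5 * (dT[0]*dT[1]
                    + dphi[0]*dphi[1]*np.sin(Theta)**2) * I2)
assert np.allclose(Om, 0.5 * (dphi[0]*dT[1]
                    - dphi[1]*dT[0]) * np.sin(Theta) * sz)
```

Note that the random gauge-dependent angle $\tilde\varphi_{\bf p}$ drops out of both results, in line with the gauge independence stated in the text.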
In Sec.~\ref{sec:discussion:quantumgeometry} we will show that the product $\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}$ is proportional to the quantum geometric tensor $\mathcal{T}^{\alpha\beta,n}_{\bf p}$. The Berry curvature is proportional to the imaginary part of $\mathcal{T}^{\alpha\beta,n}_{\bf p}$, and the real part of $\mathcal{T}^{\alpha\beta,n}_{\bf p}$ is the quantum metric \cite{Provost1980, Anandan1991,Anandan1990, Cheng2013, Bleu2018}. We will show that the two components of $C^{\alpha\beta}_{\bf p}$ in \eqref{eqn:CM} are twice the quantum metric of the eigenbasis $|\pm_{\bf p}\rangle$, that is, $C^{\alpha\beta,\pm}_{\bf p}=2\,g^{\alpha\beta,\pm}_{\bf p}$, where the two metrics are equal, $g^{\alpha\beta,+}_{\bf p}=g^{\alpha\beta,-}_{\bf p}$, in our two-band system. This provides a new interpretation of $C^{\alpha\beta}_{\bf p}$, which has been labeled ``coherence factor'' previously \cite{Voruganti1992} and has been studied in detail in the context of the longitudinal conductivity for spiral spin-density waves \cite{Mitscherling2018}, where this relation was not noticed. We will refer to $C^{\alpha\beta,\pm}_{\bf p}$ as ``quantum metric factor'' in the following. \subsection{Matsubara summation} We continue by performing the Matsubara summations and the analytic continuation. The Matsubara sum in \eqref{eqn:intra}, \eqref{eqn:inter_sym} and \eqref{eqn:inter_antisym} is of the form \begin{align} \label{eqn:Matsum} I_{iq_0}=T\sum_{p_0}\text{tr}\big[(\mathcal{G}_n-\mathcal{G})M_1\mathcal{G} M_2\big] \end{align} with two matrices $M_1$ and $M_2$ that are symmetric and/or antisymmetric. We omit the momentum dependence for simplicity in this paragraph and shorten the notation of the Green's functions to $\mathcal{G} \equiv \mathcal{G}_{ip_0}$ and $\mathcal{G}_{\pm n}\equiv \mathcal{G}_{ip_0\pm iq_0}$. If $I_{iq_0}$ is symmetric under trace transposition, that is, for the intraband and the symmetric interband contribution, we split \eqref{eqn:Matsum} into two equal parts.
In the second part we reverse the matrix order under the trace and shift the Matsubara summation $ip_0\rightarrow ip_0-iq_0$. We get \begin{align} \label{eqn:Isq0} &I^\text{s}_{iq_0}=\frac{T}{2}\sum_{p_0}\text{tr}\big[\big((\mathcal{G}_n-\mathcal{G})+(\mathcal{G}_{-n}-\mathcal{G})\big)M_1\mathcal{G} M_2\big] \, . \end{align} If $I_{iq_0}$ is antisymmetric, that is, for the antisymmetric interband contribution, we obtain after the same steps \begin{align} \label{eqn:Iaq0} &I^{\text{a}}_{iq_0}=\frac{T}{2}\sum_{p_0}\text{tr}\big[(\mathcal{G}_n-\mathcal{G}_{-n})M_1\mathcal{G} M_2\big] \, . \end{align} We perform the Matsubara summation and the analytic continuation $iq_0\rightarrow \omega+i0^+$ of the external frequency, leading to $I^\text{s}_\omega$ and $I^\text{a}_\omega$. We are interested in the DC limit. The detailed Matsubara summation and the zero frequency limit are performed in Appendix~\ref{appendix:Matsubara}. We end up with \begin{align} \label{eqn:Isw} &\lim_{\omega\rightarrow 0}\frac{I^\text{s}_\omega}{i\omega}=\frac{\pi}{2}\hspace{-1mm}\int\hspace{-1mm}d\epsilon f_\epsilon'\hspace{0.5mm}\text{tr}\big[A_\epsilon M_1 A_\epsilon M_2+A_\epsilon M_2 A_\epsilon M_1\big] \, ,\\ \label{eqn:Iaw} &\lim_{\omega\rightarrow 0}\frac{I^\text{a}_\omega}{i\omega}=-i\hspace{-1mm}\int\hspace{-1.5mm} d\epsilon f_\epsilon\hspace{0.2mm} \text{tr}\big[P'_\epsilon M_1A_\epsilon M_2\hspace{-0.5mm}-\hspace{-0.5mm}P'_\epsilon M_2A_\epsilon M_1\big], \end{align} where $f_\epsilon = (e^{\epsilon/T}+1)^{-1}$ is the Fermi function and $f'_\epsilon$ its derivative. These expressions further involve the spectral function matrix $A_\epsilon=-(\mathcal{G}^R_\epsilon-\mathcal{G}^A_\epsilon)/2\pi i$ and the derivative of the principal-value function matrix $P'_\epsilon=\partial_\epsilon(\mathcal{G}^R_\epsilon+\mathcal{G}^A_\epsilon)/2$.
In \eqref{eqn:Isw} and \eqref{eqn:Iaw} we exclusively used the spectral function $A_\epsilon$ and the principal-value function $P_\epsilon$, which are both real-valued functions, and avoided the complex-valued retarded and advanced Green's functions. Since the DC conductivity is real-valued, the combination of $M_1$ and $M_2$ has to be purely real in \eqref{eqn:Isw} and purely imaginary in \eqref{eqn:Iaw}. The symmetric part \eqref{eqn:Isw} involves the derivative of the Fermi function $f'_\epsilon$, whereas the antisymmetric part \eqref{eqn:Iaw} involves the Fermi function $f_\epsilon$. This suggests calling the former one the {\it Fermi-surface contribution} and the latter one the {\it Fermi-sea contribution}. However, this distinction is not unique, since we can perform partial integration in the internal frequency $\epsilon$. For instance, the decomposition proposed by Streda \cite{Streda1982} is different. We will discuss this aspect in Sec.~\ref{sec:discussion:BastinStreda}. Using the explicit form of the Green's function in \eqref{eqn:Gdiag}, the spectral function matrix reads \begin{align} \label{eqn:AM} A_\epsilon = \begin{pmatrix} A^+_\epsilon & 0 \\ 0 & A^-_\epsilon \end{pmatrix} \end{align} with the spectral functions of the two quasiparticle bands \begin{align} \label{eqn:Apm} A^\pm_\epsilon=\frac{\Gamma/\pi}{(\epsilon+\mu-E^\pm_{\bf p})^2+\Gamma^2} \, . \end{align} For our specific choice of $\Gamma$ the spectral function is a Lorentzian, which is sharply peaked at $E^\pm_{\bf p}-\mu$ for small $\Gamma$. Using \eqref{eqn:Apm} the derivative of the principal-value function $P'_\epsilon$ can be rewritten in terms of the spectral function as \begin{align} \label{eqn:Pprime} P'_\epsilon=2\pi^2 A^2_\epsilon-\frac{\pi}{\Gamma}A_\epsilon \,. \end{align} When inserting this into \eqref{eqn:Iaw}, the second, linear term drops out due to the cyclic invariance of the trace. We see that \eqref{eqn:Isw} and \eqref{eqn:Iaw} can be completely expressed by combinations of quasiparticle spectral functions.
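Relation \eqref{eqn:Pprime} is easily checked for a single band. The following sketch (Python/NumPy; parameter values are arbitrary, our own illustration) builds $G^{R/A}$ with a constant $\Gamma$, forms the spectral and principal-value functions, and compares a numerical derivative of $P_\epsilon$ with the right-hand side of \eqref{eqn:Pprime}:

```python
import numpy as np

Gamma, E, mu = 0.3, 0.7, 0.1        # arbitrary test values
eps = np.linspace(-6.0, 6.0, 8001)
x = eps + mu - E

GR = 1.0 / (x + 1j * Gamma)         # retarded Green's function, constant Gamma
GA = 1.0 / (x - 1j * Gamma)         # advanced Green's function

A = (-(GR - GA) / (2j * np.pi)).real   # spectral function
P = ((GR + GA) / 2).real               # principal-value function

# A is the Lorentzian of Eq. (eqn:Apm)
assert np.allclose(A, (Gamma / np.pi) / (x**2 + Gamma**2))

# numerical derivative of P versus the right-hand side of Eq. (eqn:Pprime)
Pprime = np.gradient(P, eps)
assert np.allclose(Pprime, 2 * np.pi**2 * A**2 - (np.pi / Gamma) * A, atol=2e-3)
```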
Note that \eqref{eqn:Pprime} is valid only for a scattering rate $\Gamma$ that is frequency-independent as well as proportional to the identity matrix. We apply the result of the Matsubara summation \eqref{eqn:Isw} and \eqref{eqn:Iaw} to the symmetric and antisymmetric interband contributions \eqref{eqn:inter_sym} and \eqref{eqn:inter_antisym}. Since $M_1$ and $M_2$ are off-diagonal matrices in both cases, the commutation with the diagonal spectral function matrix $A_\epsilon$ simply flips its diagonal entries, that is $M_iA_\epsilon=\overline{A}_\epsilon M_i$ where $\overline{A}_\epsilon$ is given by \eqref{eqn:AM} with $A^+_\epsilon\leftrightarrow A^-_\epsilon$ exchanged. We collect the product of involved matrices and identify \begin{align} &A_\epsilon\big(\hspace{-0.5mm}\mathcal{X}^\beta\mathcal{X}^\alpha\hspace{-0.9mm}+\hspace{-0.7mm}\mathcal{X}^\alpha\mathcal{X}^\beta\hspace{-0.9mm}+\hspace{-0.7mm}\mathcal{Y}^\beta\mathcal{Y}^\alpha\hspace{-0.9mm}+\hspace{-0.7mm}\mathcal{Y}^\alpha\mathcal{Y}^\beta\big)\overline{A}_\epsilon \hspace{-0.8mm}=\hspace{-0.8mm}A_\epsilon C^{\alpha\beta}\overline{A}_\epsilon \, ,\\[2mm] &A^2_\epsilon\big(\hspace{-0.5mm}\mathcal{X}^\beta\mathcal{Y}^\alpha\hspace{-1.0mm}-\hspace{-0.9mm}\mathcal{Y}^\alpha\mathcal{X}^\beta\hspace{-1.0mm}+\hspace{-0.9mm}\mathcal{Y}^\beta\mathcal{X}^\alpha\hspace{-1.0mm}-\hspace{-0.9mm}\mathcal{X}^\alpha\mathcal{Y}^\beta\big)\overline{A}_\epsilon\hspace{-0.8mm} =\hspace{-0.8mm}iA^2_\epsilon\Omega^{\alpha\beta}\overline{A}_\epsilon \, , \end{align} where $\mathcal{C}^{\alpha\beta}_{\bf p}$ and $\Omega^{\alpha\beta}$ were defined in \eqref{eqn:CM} and \eqref{eqn:OmegaM}. \subsection{Formulas of the conductivity tensor} As the final step we combine all our results. The conductivity and the polarization tensor are related via \eqref{eqn:sigmaPi}. We write out the trace over the eigenstates explicitly. 
The DC conductivity $\sigma^{\alpha\beta}$ decomposes into five different contributions: \begin{align} \label{eqn:DecompSigma} \sigma^{\alpha\beta}&=\sigma^{\alpha\beta}_{\text{intra},+}+\sigma^{\alpha\beta}_{\text{intra},-}\nonumber\\[1mm]&+\sigma^{\alpha\beta,\text{s}}_\text{inter}\nonumber\\[1mm]&+\sigma^{\alpha\beta,\text{a}}_{\text{inter},+}+\sigma^{\alpha\beta,\text{a}}_{\text{inter},-} \, . \end{align} These contributions are distinguished by three criteria: (a) intra- or interband, (b) symmetric or antisymmetric with respect to $\alpha\leftrightarrow\beta$ (or, equivalently, with respect to trace transposition) and (c) quasiparticle band $\pm$. We do not distinguish the two quasiparticle band contributions of the symmetric interband contribution because they are equal in our two-band model. Each contribution consists of three essential parts: i) the Fermi function $f(\epsilon)$ or its derivative $f'(\epsilon)$, ii) a spectral weighting factor involving a specific combination of the quasiparticle spectral functions $A^n_{\bf p}(\epsilon)$ with $n=\pm$, that is \begin{align} \label{eqn:Wintra}&w^n_{{\bf p},\text{intra}}(\epsilon)=\pi\big(A^n_{\bf p}(\epsilon)\big)^2 \, ,\\ \label{eqn:Wsinter}&w^s_{{\bf p},\text{inter}}(\epsilon)=4\pi r^2_{\bf p} A^+_{\bf p}(\epsilon)A^-_{\bf p}(\epsilon) \, ,\\ \label{eqn:Wainter}&w^{a,n}_{{\bf p},\text{inter}}(\epsilon)=8\pi^2r^2_{\bf p} \big(A^n_{\bf p}(\epsilon)\big)^2A^{-n}_{\bf p}(\epsilon)\, , \end{align} and iii) a momentum-dependent weighting factor involving the changes in the scalar field $g_{\bf p}$ and the vector field ${\bf d}_{\bf p}$ in a specific form, that is the quasiparticle velocities $E^{\pm,\alpha}_{\bf p}$, the quantum metric factor $C^{\alpha\beta}_{\bf p}$ and the Berry curvatures $\Omega^{\alpha\beta,\pm}_{\bf p}$ given as \begin{align} &E^{\pm,\alpha}_{\bf p}=g^\alpha_{\bf p}\pm r^\alpha_{\bf p} \, ,\\ &C^{\alpha\beta}_{\bf p}=\frac{1}{2}\big(\Theta^\alpha_{\bf p}\Theta^\beta_{\bf p}+\varphi^\alpha_{\bf
p}\varphi^\beta_{\bf p}\sin^2\Theta_{\bf p}\big) \, ,\\ \label{eqn:Omega}&\Omega^{\alpha\beta,\pm}_{\bf p}=\pm \frac{1}{2}\big(\varphi^\alpha_{\bf p}\Theta^\beta_{\bf p}-\varphi^\beta_{\bf p}\Theta^\alpha_{\bf p}\big)\sin\Theta_{\bf p} \, , \end{align} where $g^\alpha_{\bf p}=\partial_\alpha g_{\bf p}$, $r^\alpha_{\bf p}=\partial_\alpha r_{\bf p}$, $\Theta^\alpha_{\bf p}=\partial_\alpha \Theta_{\bf p}$ and $\varphi^\alpha_{\bf p}=\partial_\alpha \varphi_{\bf p}$ with the momentum derivative in $\alpha$ direction $\partial_\alpha=\partial/\partial p^\alpha$. We write the conductivity in units of the conductance quantum $2\pi\sigma_0=e^2/\hbar=e^2$ for $\hbar=1$ and perform the thermodynamic limit by replacing $L^{-1}\sum_{\bf p}\rightarrow \int \frac{d^d{\bf p}}{(2\pi)^d}$, where $d$ is the dimension of the system. We end up with \begin{align} &\sigma^{\alpha\beta}_{\text{intra},n}\hspace{-1.0mm}=\hspace{-0mm}-\frac{e^2}{\hbar}\hspace{-1.5mm}\int\hspace{-1.9mm}\frac{d^d{\bf p}}{(2\pi)^d}\hspace{-1.7mm}\int\hspace{-1.5mm}d\epsilon \,f'(\epsilon) w^n_{{\bf p},\text{intra}}(\epsilon) E^{n,\alpha}_{\bf p} E^{n,\beta}_{\bf p}\hspace{-1mm}, \label{eqn:SintraN} \\ &\sigma^{\alpha\beta,\text{s}}_\text{inter}\hspace{1.5mm}=\hspace{-0.2mm}-\frac{e^2}{\hbar}\hspace{-1.5mm}\int\hspace{-1.9mm}\frac{d^d{\bf p}}{(2\pi)^d}\hspace{-1.7mm}\int\hspace{-1.5mm}d\epsilon\,f'(\epsilon)w^s_{{\bf p},\text{inter}}(\epsilon) \,C^{\alpha\beta}_{\bf p}\,, \label{eqn:SinterS} \\ &\sigma^{\alpha\beta,\text{a}}_{\text{inter},n}\hspace{-1mm}=\hspace{-0.2mm}-\frac{e^2}{\hbar}\hspace{-1.5mm}\int\hspace{-1.9mm}\frac{d^d{\bf p}}{(2\pi)^d}\hspace{-1.7mm}\int\hspace{-1.5mm}d\epsilon\,\,f(\epsilon)\, w^{a,n}_{{\bf p},\text{inter}}(\epsilon)\,\Omega^{\alpha\beta,n}_{\bf p}\,. \label{eqn:SinterAN} \end{align} If we restore SI units, the conductivity has units $1/\Omega\,\text{m}^{d-2}$ for dimension $d$. 
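To illustrate that the evaluation is indeed straightforward, the following sketch (Python/NumPy) evaluates the antisymmetric interband contribution \eqref{eqn:SinterAN} for a toy model of our own choosing (not from the text): a single massive Dirac cone, $g_{\bf p}=0$, ${\bf d}_{\bf p}=(p_x,p_y,m)$, assuming the standard spherical parametrization $\cos\Theta_{\bf p}=m/r_{\bf p}$. With $\mu$ in the gap and small $\Gamma$ the result should approach the well-known half-quantized intrinsic value $-\tfrac{1}{2}\,e^2/h$ for $m>0$ in this parametrization:

```python
import numpy as np

# Toy model (illustrative choice): massive Dirac cone, E^±_p = ±r_p,
# r_p = sqrt(p^2 + m^2); everything is rotationally symmetric, so the
# momentum integral reduces to a radial one.
m, mu, T, Gamma = 0.5, 0.0, 0.05, 0.05   # mu inside the gap, small Gamma

p = np.linspace(1e-3, 30.0, 400)         # radial grid with finite cutoff
dp = p[1] - p[0]
r = np.sqrt(p**2 + m**2)

# Omega^{xy,±} from Eq. (eqn:Omega); for this model one finds analytically
# (phi^x Theta^y - phi^y Theta^x) sin(Theta) = -m/r^3
Omega_p = -m / (2 * r**3)                # Omega^{xy,+}
Omega_m = -Omega_p                       # Omega^{xy,-}

f = lambda e: 0.5 * (1.0 - np.tanh(e / (2 * T)))   # Fermi function
eps = np.linspace(-32.0, 32.0, 3201)[:, None]
de = eps[1, 0] - eps[0, 0]
Ap = (Gamma / np.pi) / ((eps + mu - r)**2 + Gamma**2)   # A^+_p(eps)
Am = (Gamma / np.pi) / ((eps + mu + r)**2 + Gamma**2)   # A^-_p(eps)

wa_p = 8 * np.pi**2 * r**2 * Ap**2 * Am  # w^{a,+}, Eq. (eqn:Wainter)
wa_m = 8 * np.pi**2 * r**2 * Am**2 * Ap  # w^{a,-}

inner = (f(eps) * (wa_p * Omega_p + wa_m * Omega_m)).sum(axis=0) * de
# angular integral gives 2*pi by rotational symmetry; units e^2/hbar = 1
sigma_xy = -(p * inner).sum() * dp * 2 * np.pi / (2 * np.pi)**2

# half-quantized intrinsic anomalous Hall response, -e^2/2h = -1/(4 pi) here
assert np.isclose(sigma_xy, -1 / (4 * np.pi), rtol=0.08)
```

The residual deviation from $-e^2/2h$ stems from the finite momentum cutoff and the finite $\Gamma$; both corrections shrink as the cutoff grows and $\Gamma\rightarrow 0$.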
Note that we have $\sigma^{\alpha\beta}\propto e^2/h$ in a two-dimensional system and $\sigma^{\alpha\beta}\propto e^2/ha$ in a stacked quasi-two-dimensional system, where $a$ is the interlayer distance. For given $\lambda_{\bf p}$, $\mu$, $T$ and $\Gamma$ the evaluation of \eqref{eqn:SintraN}, \eqref{eqn:SinterS} and \eqref{eqn:SinterAN} is straightforward. The mapping of $\lambda_{\bf p}$ to spherical coordinates is given in \eqref{eqn:gh}-\eqref{eqn:Phi}. The spectral function $A^\pm_{\bf p}(\epsilon)$ is defined in \eqref{eqn:Apm}. \section{Discussion} \label{sec:discussion} \subsection{Relation to Bastin and Streda formula} \label{sec:discussion:BastinStreda} Microscopic approaches to the anomalous Hall conductivity are frequently based on the formulas of Bastin {\it et al.} \cite{Bastin1971} and Streda \cite{Streda1982}. A modern derivation is given by Cr\'epieux {\it et al.} \cite{Crepieux2001}. We present a re-derivation in our notation and discuss the relation to our results. We omit the momentum dependence for a simpler notation in this section. We start with the polarization tensor $\Pi^{\alpha\beta}_{iq_0}$ in \eqref{eqn:PiFinal} before analytic continuation. In contrast to our discussion, we perform the Matsubara sum and the analytic continuation in \eqref{eqn:sigmaPi} immediately and get \begin{align} \sigma^{\alpha\beta}_\omega=-\frac{1}{i\omega}\text{Tr}_{\epsilon,{\bf p}}&\big[f_\epsilon\, \big(\mathscr{A}^{}_\epsilon\lambda^\beta \mathscr{G}^A_{\epsilon-\omega}\lambda^\alpha+\mathscr{G}^R_{\epsilon+\omega}\lambda^\beta \mathscr{A}^{}_\epsilon \lambda^\alpha \nonumber \\&-\mathscr{A}_\epsilon \lambda^\beta \mathscr{P}_\epsilon \lambda^\alpha-\mathscr{P}_\epsilon\lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha\big)\big] \, . 
\end{align} We combined the prefactors, the summation over momenta and the frequency integration as well as the matrix trace in the short notation $\text{Tr}_{\epsilon,{\bf p}}\big[\cdot\big]=e^2L^{-1}\sum_{\bf p}\int d\epsilon\,\text{tr}\big[\cdot\big]$. The first and second lines are obtained from the argument explicitly given in \eqref{eqn:PiFinal} and from its $(iq_0=0)$ contribution, respectively. Details of the Matsubara summation and the analytic continuation are given in Appendix \ref{appendix:Matsubara}. $\mathscr{G}^R_\epsilon$ and $\mathscr{G}^A_\epsilon$ are the retarded and advanced Green's functions of \eqref{eqn:Green}, respectively. $\mathscr{A}_\epsilon=-(\mathscr{G}^R_\epsilon-\mathscr{G}^A_\epsilon)/2\pi i$ is the spectral function matrix and $\mathscr{P}_\epsilon=(\mathscr{G}^R_\epsilon+\mathscr{G}^A_\epsilon)/2$ is the principal-value function matrix. $f_\epsilon$ is the Fermi function. We derive the DC limit by expanding $\sigma^{\alpha\beta}_\omega$ in the frequency $\omega$ of the external electric field ${\bf E}(\omega)$. The diverging term $\propto 1/\omega$ vanishes, which can be checked by using $\mathscr{G}^R_\epsilon=\mathscr{P}_\epsilon-i\pi \mathscr{A}_\epsilon$ and $\mathscr{G}^A_\epsilon=\mathscr{P}_\epsilon+i\pi \mathscr{A}_\epsilon$. The constant term is \begin{align} \label{eqn:Bastin} \sigma^{\alpha\beta}_\text{Bastin}=i\,\text{Tr}_{\epsilon,{\bf p}}\big[f_\epsilon&\, \big(-\mathscr{A}_\epsilon\lambda^\beta (\mathscr{G}^A_\epsilon)'\lambda^\alpha+(\mathscr{G}^R_\epsilon)'\lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha\big)\big]\, , \end{align} which was derived by Bastin {\it et al.} \cite{Bastin1971}. The derivative with respect to the internal frequency $\epsilon$ is denoted by $\big(\cdot\big)'$. The expression in \eqref{eqn:Bastin} is written in the subsystem basis, in which we expressed the Bloch Hamiltonian $\lambda_{\bf p}$ in \eqref{eqn:H}.
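The cancellation of the diverging term happens already at the matrix level: inserting $\mathscr{G}^{R/A}_\epsilon=\mathscr{P}_\epsilon\mp i\pi\mathscr{A}_\epsilon$ into the $1/\omega$ coefficient makes it vanish identically, independently of the form of the matrices. A quick check with random matrices (Python/NumPy; our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A, P = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
la, lb = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))  # lambda^a/b

GR = P - 1j * np.pi * A      # G^R = P - i pi A
GA = P + 1j * np.pi * A      # G^A = P + i pi A

# coefficient of the diverging 1/omega term at omega = 0
div = A @ lb @ GA @ la + GR @ lb @ A @ la - A @ lb @ P @ la - P @ lb @ A @ la
assert np.allclose(div, 0)
```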
Due to the matrix trace, we can change to the diagonal basis via \eqref{eqn:Gdiag} and \eqref{eqn:UdagLamU}. In Sec.~\ref{sec:conductivity} we identified the symmetry under $\alpha\leftrightarrow\beta$ as a good criterion for a decomposition. The Bastin formula is neither symmetric nor antisymmetric in $\alpha\leftrightarrow\beta$. When we decompose $\sigma^{\alpha\beta}_\text{Bastin}$ into its symmetric and antisymmetric part, we can easily identify our result \eqref{eqn:DecompSigma}, that is \begin{align} &\frac{1}{2}\big(\sigma^{\alpha\beta}_\text{Bastin}+\sigma^{\beta\alpha}_\text{Bastin}\big)=\sigma^{\alpha\beta}_{\text{intra},+}+\sigma^{\alpha\beta}_{\text{intra},-}+\sigma^{\alpha\beta,s}_\text{inter} \, , \label{eqn:BastinSym}\\ &\frac{1}{2}\big(\sigma^{\alpha\beta}_\text{Bastin}-\sigma^{\beta\alpha}_\text{Bastin}\big)=\sigma^{\alpha\beta,a}_{\text{inter},+}+\sigma^{\alpha\beta,a}_{\text{inter},-} \label{eqn:BastinAntiSym} \, . \end{align} This identification is expected, as the decomposition into a symmetric and an antisymmetric part is unique. We note that this separation naturally leads to a Fermi-surface contribution \eqref{eqn:BastinSym} and a Fermi-sea contribution \eqref{eqn:BastinAntiSym} of the same form as defined in Sec.~\ref{sec:conductivity}. Based on our derivation, we argue that the symmetry under $\alpha\leftrightarrow\beta$, rather than the appearance of $f_\epsilon$ or $f'_\epsilon$, should be seen as the fundamental difference between \eqref{eqn:BastinSym} and \eqref{eqn:BastinAntiSym}. The Bastin formula \eqref{eqn:Bastin} is the starting point for the derivation of the Streda formula \cite{Streda1982,Crepieux2001}. We split $\sigma^{\alpha\beta}_\text{Bastin}$ into two equal parts and perform partial integration in $\epsilon$ on the latter one.
We obtain \begin{align} &\sigma^{\alpha\beta}_\text{Bastin}=\frac{i}{2}\text{Tr}_{\epsilon,{\bf p}}\big[ \, f_\epsilon\, \big(-\mathscr{A}_\epsilon\lambda^\beta (\mathscr{G}^A_\epsilon)'\lambda^\alpha+(\mathscr{G}^R_\epsilon)'\lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha\big)\big] \nonumber \\ &\hspace{6mm}-\frac{i}{2}\text{Tr}_{\epsilon,{\bf p}}\big[ \, f'_\epsilon\, \big(-\mathscr{A}_\epsilon\lambda^\beta \mathscr{G}^A_\epsilon\lambda^\alpha+\mathscr{G}^R_\epsilon\lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha\big)\big] \nonumber\\ &\hspace{6mm}-\frac{i}{2}\text{Tr}_{\epsilon,{\bf p}}\big[ \, f_\epsilon\, \big(-\mathscr{A}'_\epsilon\lambda^\beta \mathscr{G}^A_\epsilon\lambda^\alpha+\mathscr{G}^R_\epsilon\lambda^\beta \mathscr{A}'_\epsilon \lambda^\alpha\big)\big]. \end{align} We replace the spectral function by its definition $\mathscr{A}_\epsilon=-(\mathscr{G}^R_\epsilon-\mathscr{G}^A_\epsilon)/2\pi i$ and sort by $f_\epsilon$ and $f'_\epsilon$. By doing so, the Streda formula decomposes into two contributions, historically labeled as \begin{align} \label{eqn:DecompStreda} \sigma^{\alpha\beta}_\text{Streda}=\sigma^{\alpha\beta,I}_\text{Streda}+\sigma^{\alpha\beta,II}_\text{Streda} \end{align} with the ``Fermi-surface contribution'' \begin{align} &\sigma^{\alpha\beta,I}_\text{Streda}=\frac{1}{4\pi}\text{Tr}_{\epsilon,{\bf p}}\big[ \, f'_\epsilon\, \big(-(\mathscr{G}^R_\epsilon-\mathscr{G}^A_\epsilon)\lambda^\beta \mathscr{G}^A_\epsilon \lambda^\alpha\nonumber\\&\hspace{1.5cm}+\mathscr{G}^R_\epsilon\lambda^\beta (\mathscr{G}^R_\epsilon-\mathscr{G}^A_\epsilon) \lambda^\alpha\big)\big] \, ,\label{eqn:StredaI} \end{align} and the ``Fermi-sea contribution'' \begin{align} &\sigma^{\alpha\beta,II}_\text{Streda}=-\frac{1}{4\pi}\text{Tr}_{\epsilon,{\bf p}}\big[ \, f_\epsilon\, \big(\mathscr{G}^A_\epsilon\lambda^\beta (\mathscr{G}^A_\epsilon)'\lambda^\alpha-(\mathscr{G}^A_\epsilon)'\lambda^\beta \mathscr{G}^A_\epsilon
\lambda^\alpha\nonumber\\&\hspace{1.5cm}+(\mathscr{G}^R_\epsilon)'\lambda^\beta \mathscr{G}^R_\epsilon\lambda^\alpha-\mathscr{G}^R_\epsilon\lambda^\beta (\mathscr{G}^R_\epsilon)' \lambda^\alpha\big)\big] \, . \label{eqn:StredaII} \end{align} The decomposition \eqref{eqn:DecompStreda} explicitly shows the ambiguity in the definition of Fermi-sea and Fermi-surface contributions due to the possibility of partial integration in the internal frequency $\epsilon$. Following our distinction by the symmetry with respect to $\alpha\leftrightarrow\beta$, we notice that the second contribution \eqref{eqn:StredaII} is antisymmetric, whereas the first contribution \eqref{eqn:StredaI} is neither symmetric nor antisymmetric. If we decompose \eqref{eqn:StredaI} into its symmetric and antisymmetric part and combine the latter one with \eqref{eqn:StredaII}, we recover our findings \begin{align} &\frac{1}{2}\big(\sigma^{\alpha\beta,I}_\text{Streda}+\sigma^{\beta\alpha,I}_\text{Streda}\big)=\sigma^{\alpha\beta}_{\text{intra},+}+\sigma^{\alpha\beta}_{\text{intra},-}+\sigma^{\alpha\beta,s}_\text{inter} \,, \\ \label{eqn:StredaAntiSym} &\frac{1}{2}\big(\sigma^{\alpha\beta,I}_\text{Streda}-\sigma^{\beta\alpha,I}_\text{Streda}\big)+\sigma^{\alpha\beta,II}_\text{Streda}=\sigma^{\alpha\beta,a}_{\text{inter},+}\hspace{-0.3mm}+\hspace{-0.3mm}\sigma^{\alpha\beta,a}_{\text{inter},-}\, , \end{align} as expected by the uniqueness of this decomposition. We see that the antisymmetric interband contribution, which is responsible for the anomalous Hall effect, is given by parts of Streda's Fermi-surface and Fermi-sea contributions combined \cite{Kodderitzsch2015}. 
In the literature, different parts of \eqref{eqn:StredaI} and \eqref{eqn:StredaII} have been identified as relevant when treating disorder effects via quasiparticle lifetime broadening or beyond \cite{Nagaosa2010, Sinitsyn2007, Crepieux2001, Dugaev2005, Onoda2006, Yang2006, Kontani2007, Nunner2008, Onoda2008, Tanaka2008, Kovalev2009, Streda2010, Pandey2012, Burkov2014, Chadova2015, Kodderitzsch2015, Mizoguchi2016}. Due to the mathematical uniqueness and the clear physical interpretation, we propose \eqref{eqn:BastinAntiSym} or, equivalently, \eqref{eqn:StredaAntiSym} as a good starting point for further studies on the anomalous Hall conductivity. \subsection{Basis choice and subsystem basis} \label{sec:discussion:basis} The polarization tensor $\Pi^{\alpha\beta}_{iq_0}$ in \eqref{eqn:PiFinal} is the trace of a matrix and is, thus, invariant under unitary (or, more generally, similarity) transformations of this matrix. In other words, the conductivities can be expressed within a different basis than the eigenbasis, which we used for the final formulas in \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} in Sec.~\ref{sec:conductivity}. The obvious advantage of the eigenbasis is that we can easily identify terms with a clear physical interpretation, such as the quasiparticle spectral functions $A^\pm_{\bf p}(\epsilon)$, the quasiparticle velocities $E^{\pm,\alpha}_{\bf p}$, the quantum metric factor $C^{\alpha\beta}_{\bf p}$ and the Berry curvature $\Omega^{\alpha\beta,\pm}_{\bf p}$. In general, we can use any invertible matrix $U_{\bf p}$ and perform steps similar to those in our derivation: In analogy to \eqref{eqn:Ep} and \eqref{eqn:Gdiag} we obtain a transformed Bloch Hamiltonian matrix $\tilde \lambda^{}_{\bf p}=U^{-1}_{\bf p}\lambda^{}_{\bf p} U^{}_{\bf p}$ and a corresponding Green's function matrix.
Reconsidering the steps in \eqref{eqn:UdagLamUDeriv}, we obtain a new decomposition \eqref{eqn:UdagLamU} of the velocity matrix with an analogue of the Berry-connection-like matrix in \eqref{eqn:BerryConnection}. The following steps of decomposing the Berry-connection-like matrix, separating the involved matrices of the polarization tensor into their diagonal and off-diagonal parts and splitting the off-diagonal matrices into their symmetric and antisymmetric components under transposition remain possible but lengthy. A special case is $U_{\bf p}=\mathds{1}$, by which we express the conductivity in the subsystem basis, in which we defined the Bloch Hamiltonian $\lambda_{\bf p}$ in \eqref{eqn:lam}. Following the derivation in Sec.~\ref{sec:discussion:BastinStreda} we obtain \eqref{eqn:Bastin}, which we further decompose into its symmetric and antisymmetric part with respect to $\alpha\leftrightarrow\beta$, $\sigma^{\alpha\beta}=\sigma^{\alpha\beta,s}+\sigma^{\alpha\beta,a}$. This yields \begin{align} \label{eqn:SigmaSAB} &\sigma^{\alpha\beta,s}=-\pi\,\text{Tr}_{\epsilon,{\bf p}}\big[f'_\epsilon \mathscr{A}_\epsilon \lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha\big]\,,\\[1mm] \label{eqn:SigmaAAB} &\sigma^{\alpha\beta,a}=2\pi^2\, \text{Tr}_{\epsilon,{\bf p}}\big[f_\epsilon\big(\mathscr{A}^2_\epsilon \lambda^\beta \mathscr{A}_\epsilon \lambda^\alpha-\mathscr{A}_\epsilon \lambda^\beta \mathscr{A}^2_\epsilon \lambda^\alpha\big)\big]\, . \end{align} We replaced $\mathscr{P}_\epsilon'$ by using \eqref{eqn:Pprime}. These expressions still involve the matrix trace. An immediate evaluation of this trace without any further simplifications would produce very lengthy expressions. A major reduction of the effort to perform the matrix trace comes from the decomposition into symmetric and antisymmetric parts with respect to trace transposition, defined in \eqref{eqn:TraceTrans}.
We expand $\mathscr{A}_\epsilon$, $\lambda^\alpha$ and $\lambda^\beta$ into their diagonal and off-diagonal components, which we further decompose into parts proportional to $\sigma_x$ and $\sigma_y$. For instance, in \eqref{eqn:SigmaSAB} we obtain 81 combinations, many of which vanish because the trace of an off-diagonal matrix is zero. We get symmetric as well as antisymmetric contributions under trace transposition. However, the latter ones eventually vanish due to the antisymmetry in $\alpha\leftrightarrow\beta$. Similarly, the symmetric contributions under trace transposition drop out in \eqref{eqn:SigmaAAB}. By this analysis, we explicitly see that our approach discussed in Sec.~\ref{sec:conductivity} not only leads to a physically motivated decomposition but also drastically reduces the effort of performing the matrix trace and, thus, can be seen as a potential strategy to treat multiband systems beyond our two-band system analytically. \subsection{Limit of small and large scattering rate $\Gamma$ and the low temperature limit} \label{sec:discussion:limits} In our derivation in Sec.~\ref{sec:conductivity} we did not assume any restrictions on the size of the scattering rate $\Gamma$. Thus, the formulas \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} are valid for a scattering rate $\Gamma$ of arbitrary size. In the following we discuss both the clean limit (small $\Gamma$) and the dirty limit (large $\Gamma$) analytically. We are not only interested in the limiting behavior of the full conductivity $\sigma^{\alpha\beta}$ in \eqref{eqn:DecompSigma}, but also in the behavior of the individual contributions \eqref{eqn:SintraN}-\eqref{eqn:SinterAN}.
The dependence on $\Gamma$ is completely captured by the three different spectral weighting factors $w^n_{{\bf p},\text{intra}}$, $w^s_{{\bf p},\text{inter}}$ and $w^{a,n}_{{\bf p},\text{inter}}$, which involve a specific product of quasiparticle spectral functions and are defined in \eqref{eqn:Wintra}-\eqref{eqn:Wainter}. Parts of the clean limit were already discussed by the author and Metzner elsewhere \cite{Mitscherling2018}. We review it here for consistency and a complete overview within our notation. We further discuss the zero temperature limit. The spectral weighting factor of the intraband conductivities $w^n_{{\bf p},\text{intra}}$ in \eqref{eqn:Wintra} involves the square of the spectral function of the same band, $\big(A^n_{\bf p}(\epsilon)\big)^2$, and, thus, peaks at the corresponding quasiparticle Fermi surface defined by $E^n_{\bf p}-\mu=0$ for small $\Gamma$. If $\Gamma$ is so small that the quasiparticle velocities $E^{\pm,\alpha}_{\bf p}$ are almost constant in a momentum range in which the variation of $E^\pm_{\bf p}$ is of order $\Gamma$, we can approximate \begin{align} w^n_{{\bf p},\text{intra}}(\epsilon)\approx \frac{1}{2\Gamma}\delta(\epsilon+\mu-E^n_{\bf p})\sim \mathcal{O}(\Gamma^{-1})\,. \label{eqn:winG0} \end{align} Thus, the intraband conductivities $\sigma^{\alpha\beta}_{\text{intra},\pm}$ diverge as $1/\Gamma$, consistent with Boltzmann transport theory \cite{Mahan2000}. The spectral weighting factor of the symmetric interband conductivity $w^s_{{\bf p},\text{inter}}$ in \eqref{eqn:Wsinter} is the product of the spectral functions of the two bands, $A^+_{\bf p}(\epsilon)A^-_{\bf p}(\epsilon)$. For small $\Gamma$, $w^s_{{\bf p},\text{inter}}$ peaks equally at the Fermi surface of both bands. For increasing $\Gamma$, the gap starts to fill up until the peaks merge and form one broad peak at $(E^+_{\bf p}+E^-_{\bf p})/2-\mu=g_{\bf p}-\mu$. It decreases further for even larger $\Gamma$. 
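As a consistency check of \eqref{eqn:winG0}, the total spectral weight of $w^n_{{\bf p},\text{intra}}$ can be evaluated numerically. The short sketch below assumes a Lorentzian quasiparticle spectral function $A^n_{\bf p}(\epsilon)=(\Gamma/\pi)/[(\epsilon+\mu-E^n_{\bf p})^2+\Gamma^2]$ and the form $w^n_{{\bf p},\text{intra}}=\pi\big(A^n_{\bf p}\big)^2$; both are assumptions made for this illustration, chosen to be consistent with \eqref{eqn:winG0}, and are not taken verbatim from the text.

```python
import numpy as np

def spectral(eps, E, mu, gamma):
    """Lorentzian quasiparticle spectral function A(eps) with scattering rate gamma."""
    return (gamma / np.pi) / ((eps + mu - E) ** 2 + gamma ** 2)

def w_intra(eps, E, mu, gamma):
    """Assumed intraband spectral weighting factor w = pi * A(eps)^2."""
    return np.pi * spectral(eps, E, mu, gamma) ** 2

# Total weight of w_intra is exactly 1/(2 gamma) for a Lorentzian, i.e. the
# delta-peak approximation (eqn:winG0) carries the weight 1/(2 gamma).
eps, d_eps = np.linspace(-100.0, 100.0, 2_000_001, retstep=True)
for gamma in (0.5, 0.1, 0.02):
    weight = np.sum(w_intra(eps, E=1.0, mu=1.0, gamma=gamma)) * d_eps
    print(gamma, 2.0 * gamma * weight)  # -> close to 1
```

The printed products approach 1, confirming that the delta peak in \eqref{eqn:winG0} carries the full $1/(2\Gamma)$ weight, i.e. the intraband factor scales as $\mathcal{O}(\Gamma^{-1})$.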
Since each spectral function $A^n_{\bf p}(\epsilon)$ has a half width at half maximum of $\Gamma$, the relevant scale for the crossover is $2\Gamma=E^+_{\bf p}-E^-_{\bf p}=2r_{\bf p}$. We sketch $w^s_{{\bf p},\text{inter}}$ in Fig.~\ref{fig:WInter} for several choices of $\Gamma$. If the quantum metric factor $C^{\alpha\beta}_{\bf p}$ is almost constant in a momentum range in which the variation of $E^\pm_{\bf p}$ is of order $\Gamma$ and, furthermore, if $\Gamma\ll r_{\bf p}$ we can approximate \begin{align} w^s_{{\bf p},\text{inter}}(\epsilon)\approx\Gamma\sum_{n=\pm}\delta(\epsilon+\mu-E^n_{\bf p})\sim\mathcal{O}(\Gamma^1)\,. \label{eqn:wsG0} \end{align} We see that the symmetric interband conductivity $\sigma^{\alpha\beta,s}_\text{inter}$ scales linearly in $\Gamma$ and is suppressed by a factor $\Gamma^2$ compared to the intraband conductivities. \begin{figure} \centering \includegraphics[width=8cm]{fig2a.pdf}\\ \includegraphics[width=8cm]{fig2b.pdf} \caption{The spectral weighting factors $w^{s}_{{\bf p},\text{inter}}$ (top) and $w^{a,+}_{{\bf p},\text{inter}}$ (bottom, solid), and its primitive $W^{a,+}_{{\bf p},\text{inter}}$ (bottom, dashed) for different choices of $\Gamma$. \label{fig:WInter}} \end{figure} The spectral weighting factor of the antisymmetric interband conductivities $w^{a,n}_{{\bf p},\text{inter}}$ in \eqref{eqn:Wainter} is the square of the spectral function of one band multiplied by the spectral function of the other band, $\big(A^n_{\bf p}(\epsilon)\big)^2A^{-n}_{\bf p}(\epsilon)$. In the clean limit, it is dominated by a peak at $E^n_{\bf p}-\mu$. For increasing $\Gamma$, the peak becomes asymmetric due to the contribution of the spectral function of the other band at $E^{-n}_{\bf p}-\mu$ and develops a shoulder. For $2\Gamma\gg E^+_{\bf p}-E^-_{\bf p}=2r_{\bf p}$ it eventually becomes one broad peak close to $(E^+_{\bf p}+E^-_{\bf p})/2-\mu=g_{\bf p}-\mu$. 
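The crossover of $w^s_{{\bf p},\text{inter}}$ from two Fermi-surface peaks to a single broad peak can be illustrated numerically. The sketch below assumes Lorentzian spectral functions and the normalization $w^s_{{\bf p},\text{inter}}=4\pi r^2_{\bf p}A^+_{\bf p}A^-_{\bf p}$, chosen such that the clean limit reproduces \eqref{eqn:wsG0}; this normalization is our assumption for the illustration, not quoted from the text.

```python
import numpy as np

def w_s_inter(eps, r, gamma, g=0.0, mu=0.0):
    """Assumed symmetric interband factor w^s = 4 pi r^2 A^+ A^- with
    Lorentzian spectral functions and bands at g +/- r."""
    a_plus = (gamma / np.pi) / ((eps + mu - (g + r)) ** 2 + gamma ** 2)
    a_minus = (gamma / np.pi) / ((eps + mu - (g - r)) ** 2 + gamma ** 2)
    return 4.0 * np.pi * r ** 2 * a_plus * a_minus

def count_maxima(y):
    """Number of interior local maxima of a sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

eps = np.linspace(-6.0, 6.0, 100_001)
for gamma in (0.3, 2.0):
    n_peaks = count_maxima(w_s_inter(eps, r=1.0, gamma=gamma))
    print(gamma, n_peaks)  # two peaks for gamma << r, one for gamma >> r
```

Counting the local maxima confirms that the two peaks merge into one broad peak at the scale $\Gamma\sim r_{\bf p}$, as described above.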
We sketch $w^{a,+}_\text{inter}$ in Fig.~\ref{fig:WInter} for several choices of $\Gamma$. If the Berry curvature $\Omega^{\alpha\beta,n}_{\bf p}$ is almost constant in a momentum range in which the variation of $E^n_{\bf p}$ is of order $\Gamma$ and, furthermore, if $\Gamma\ll r_{\bf p}$ we can approximate \begin{align} w^{n,a}_{{\bf p},\text{inter}}(\epsilon)\approx \delta(\epsilon+\mu-E^n_{\bf p}) \sim \mathcal{O}(\Gamma^0)\, . \label{eqn:wanG0} \end{align} Thus, the antisymmetric interband conductivities $\sigma^{\alpha\beta,a}_{\text{inter},\pm}$ become $\Gamma$ independent, or ``dissipationless'' \cite{Nagaosa2010}. The symmetric interband conductivity is suppressed by a factor $\Gamma$ compared to the antisymmetric interband conductivities. The antisymmetric interband conductivities are suppressed by a factor $\Gamma$ compared to the intraband conductivities. However, note that the leading order might vanish, for instance, when integrating over momenta or due to zero Berry curvature. Using \eqref{eqn:winG0}, \eqref{eqn:wsG0} and \eqref{eqn:wanG0} we see that the intraband conductivities and the symmetric interband conductivity are proportional to $-f'(E^\pm_{\bf p}-\mu)$ whereas the antisymmetric interband conductivities involve the Fermi function $f(E^\pm_{\bf p}-\mu)$ in the clean limit. Thus, the former ones are restricted to the vicinity of the Fermi surface at low temperature $k_BT\ll 1$. In contrast, all occupied states contribute to the antisymmetric interband conductivities. The consistency with the Landau Fermi liquid picture was discussed by Haldane \cite{Haldane2004}. The Fermi function $f(\epsilon)$ and its derivative $f'(\epsilon)$ capture the temperature broadening effect in the different contributions \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} of the conductivity. In the following, we have a closer look at the low temperature limit. 
Since $f'(\epsilon)\rightarrow -\delta(\epsilon)$ for $k_BT\ll 1$, the spectral weighting factors of the intraband and the symmetric interband conductivity read $-w^{n}_{{\bf p},\text{intra}}(0)$ and $-w^{s}_{{\bf p},\text{inter}}(0)$, respectively, after frequency integration over $\epsilon$. The antisymmetric interband conductivities involve the Fermi function, which reduces to the Heaviside step function for $k_BT\ll 1$, that is $f(\epsilon)\rightarrow \Theta(-\epsilon)$. Thus, the frequency integration still has to be performed from $-\infty$ to $0$. In order to circumvent this complication, we define the primitive $(W^{n,a}_{{\bf p},\text{inter}}(\epsilon))'=w^{n,a}_{{\bf p},\text{inter}}(\epsilon)$ with the boundary condition $W^{n,a}_{{\bf p},\text{inter}}(-\infty)=0$. The zero temperature limit is then performed after partial integration in $\epsilon$ by \begin{align} \int\hspace{-1mm} d\epsilon f(\epsilon)\,w^{n,a}_{{\bf p},\text{inter}}(\epsilon)&=-\int\hspace{-1mm} d\epsilon f'(\epsilon)\,W^{n,a}_{{\bf p},\text{inter}}(\epsilon)\nonumber\\[1mm]&\approx W^{n,a}_{{\bf p},\text{inter}}(0)\,. \end{align} In Fig.~\ref{fig:WInter} we sketch $W^{n,a}_{{\bf p},\text{inter}}(\epsilon)$ for $\Gamma=0.3\,r_{\bf p}$. At finite $\Gamma$, it crosses over from zero to approximately one and approaches a step function at $E^n_{\bf p}-\mu$ for small $\Gamma$. At low temperature $k_BT\ll 1$, the occupied states with $E^n_{\bf p}-\mu<0$ contribute significantly to the antisymmetric interband conductivities, as expected. Note that $\int\hspace{-1mm}d\epsilon\, w^{n,a}_{{\bf p},\text{inter}}(\epsilon)=r^2_{\bf p}(r^2_{\bf p}+3\Gamma^2)/(r^2_{\bf p}+\Gamma^2)^2\approx 1+\Gamma^2/r_{\bf p}^2$, so that a step function of height 1 is only approached in the limit $\Gamma\rightarrow 0$. 
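The sum rule $\int d\epsilon\, w^{n,a}_{{\bf p},\text{inter}}(\epsilon)=r^2_{\bf p}(r^2_{\bf p}+3\Gamma^2)/(r^2_{\bf p}+\Gamma^2)^2$ quoted above can be verified numerically. The sketch below assumes Lorentzian spectral functions and the normalization $w^{a,n}_{{\bf p},\text{inter}}=8\pi^2 r^2_{\bf p}\big(A^n_{\bf p}\big)^2 A^{-n}_{\bf p}$, which reproduces the unit weight of \eqref{eqn:wanG0} in the clean limit; the normalization is an assumption of this illustration.

```python
import numpy as np

def lorentzian(eps, center, gamma):
    """Lorentzian spectral function of half width gamma centered at `center`."""
    return (gamma / np.pi) / ((eps - center) ** 2 + gamma ** 2)

def w_a_inter(eps, n, r, gamma):
    """Assumed antisymmetric interband factor w^{a,n} = 8 pi^2 r^2 (A^n)^2 A^{-n},
    with bands at +/- r (measuring eps from the band center g - mu)."""
    a_n = lorentzian(eps, n * r, gamma)
    a_other = lorentzian(eps, -n * r, gamma)
    return 8.0 * np.pi ** 2 * r ** 2 * a_n ** 2 * a_other

eps, d_eps = np.linspace(-50.0, 50.0, 1_000_001, retstep=True)
for r, gamma in ((1.0, 0.1), (1.0, 1.0), (2.0, 1.0)):
    total = np.sum(w_a_inter(eps, n=+1, r=r, gamma=gamma)) * d_eps
    exact = r ** 2 * (r ** 2 + 3 * gamma ** 2) / (r ** 2 + gamma ** 2) ** 2
    print(r, gamma, total, exact)  # total matches the quoted closed form
```

The numerical weight matches $r^2_{\bf p}(r^2_{\bf p}+3\Gamma^2)/(r^2_{\bf p}+\Gamma^2)^2$ for arbitrary $\Gamma$ and tends to 1 only for $\Gamma\rightarrow 0$, in line with the discussion of the step height above.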
In the following, we discuss the limiting cases of the spectral weighting factors $w^n_{{\bf p},\text{intra}}(0)$, $w^s_{{\bf p},\text{inter}}(0)$, and $W^{n,a}_{{\bf p},\text{inter}}(0)$, that is, in the low temperature limit. We start with the case of a band insulator in the clean limit and assume a chemical potential below, above or in between the two quasiparticle bands as well as a scattering rate much smaller than the gap, $\Gamma\ll|E^n_{\bf p}-\mu|$. Within this limit, we find very distinct behavior of the spectral weighting factors of the intraband conductivities and of the symmetric interband conductivity on the one hand and the spectral weighting factor of the antisymmetric interband conductivities on the other hand. The former ones scale like \begin{align} &w^n_{{\bf p},\text{intra}}(0)\approx \frac{\Gamma^2}{\pi(\mu-E^n_{\bf p})^4}\sim \mathcal{O}(\Gamma^2) \, ,\\ &w^s_{{\bf p},\text{inter}}(0)\approx \frac{4r^2_{\bf p}\Gamma^2}{\pi(\mu-E^+_{\bf p})^2(\mu-E^-_{\bf p})^2}\sim\mathcal{O}(\Gamma^2) \, . \end{align} We see that the intraband and the symmetric interband conductivity for filled or empty bands are only present due to a finite scattering rate. The spectral weighting factor of the antisymmetric interband conductivities behaves differently depending on whether the bands are all empty, all filled, or the chemical potential lies in between the two bands. By expanding $W^{n,a}_{{\bf p},\text{inter}}(0)$ we get \begin{align} W^{n,a}_{{\bf p},\text{inter}}(0)&= \frac{1}{2}\big[1+\text{sgn}(\mu-E^n_{\bf p})\big]\nonumber\\&+\big[2+\sum_{\nu=\pm}\text{sgn}(\mu-E^\nu_{\bf p})\big]\frac{\Gamma^2}{4r^2_{\bf p}}+\mathcal{O}(\Gamma^3) \, . 
\end{align} Note that a direct expansion of $w^{n,a}_{{\bf p},\text{inter}}(\epsilon)$ followed by the integration over $\epsilon$ from $-\infty$ to $0$ fails to capture the case of fully occupied bands, which shows that the regularization by a finite $\Gamma$ is crucial to avoid divergent integrals in the low temperature limit. For completely filled bands $\mu>E^+_{\bf p},E^-_{\bf p}$ we have $W^{n,a}_{{\bf p},\text{inter}}(0)\approx 1+\Gamma^2/r^2_{\bf p}$ in agreement with the discussion above. For completely empty bands $\mu<E^+_{\bf p},E^-_{\bf p}$ we have $W^{n,a}_{{\bf p},\text{inter}}(0)\propto \Gamma^3$. If the chemical potential lies in between both bands, $E^-_{\bf p}<\mu<E^+_{\bf p}$, we have $W^{-,a}_{{\bf p},\text{inter}}(0)=1+\Gamma^2/2r^2_{\bf p}$ and $W^{+,a}_{{\bf p},\text{inter}}(0)=\Gamma^2/2r^2_{\bf p}$. The antisymmetric interband conductivities involve the Berry curvature, which is equal for both bands up to a different sign, $\Omega^{\alpha\beta,+}=-\Omega^{\alpha\beta,-}$. Thus, the antisymmetric interband conductivity summed over both bands involves \begin{align} W^{+,a}_{{\bf p},\text{inter}}(0)-&W^{-,a}_{{\bf p},\text{inter}}(0)= \frac{1}{2}\big[\text{sgn}(\mu\hspace{-0.5mm}-\hspace{-0.5mm}E^+_{\bf p})\hspace{-0.5mm}-\hspace{-0.5mm}\text{sgn}(\mu\hspace{-0.5mm}-\hspace{-0.5mm}E^-_{\bf p})\big]\nonumber\\&-\frac{16r^3_{\bf p}\Gamma^3}{3\pi(\mu-E^+_{\bf p})^3(\mu-E^-_{\bf p})^3}+\mathcal{O}(\Gamma^5) \, . \end{align} We see that a scattering-independent or ``dissipationless'' term is only present for a chemical potential in between the two bands. The next order in $\Gamma$ is at least cubic. Note that different orders can vanish in the conductivities after the integration over momenta. Our formulas \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} are valid for an arbitrarily large scattering rate $\Gamma$. We study the dirty limit (large $\Gamma$) in the following. 
In contrast to the clean limit, it is crucial to distinguish the following two cases: fixed chemical potential and fixed particle number. The latter condition leads to a scattering-dependent chemical potential $\mu(\Gamma)$, which modifies the scaling of the spectral weighting factors. To see this, we calculate the total particle number per unit cell at low temperature and get \begin{align} n&=\sum_{\nu=\pm}\int\hspace{-1.5mm} d\epsilon \int\hspace{-1.5mm} \frac{d^d{\bf p}}{(2\pi)^d} A^\nu_{\bf p}(\epsilon) f(\epsilon)\nonumber\\ &\approx1-\sum_{\nu=\pm}\frac{1}{\pi}\int\hspace{-1.5mm} \frac{d^d{\bf p}}{(2\pi)^d}\arctan\frac{E^\nu_{\bf p}-\mu}{\Gamma}\nonumber\\ &\approx 1-\frac{2}{\pi}\arctan\frac{c-\mu}{\Gamma} \,. \label{eqn:muGamma} \end{align} In the last step we assumed that $\Gamma$ is much larger than the bandwidth $(E^+_\text{max}-E^-_\text{min})/2=W\ll \Gamma$, where $E^+_\text{max}$ is the maximum of the upper band and $E^-_\text{min}$ is the minimum of the lower band. We denote the center of the bands as $c=(E^+_\text{max}+E^-_\text{min})/2$. Solving for the chemical potential gives the linear dependence on $\Gamma$, $\mu(\Gamma)=c+\mu_\infty\Gamma$ with \begin{align} \mu_\infty=-\tan \frac{(1-n)\pi}{2} \,. \end{align} Note that at half filling, $n=1$, the chemical potential becomes scattering independent, $\mu_\infty=0$. At $n=0,2$ we have $\mu_\infty=\mp\infty$. We assume a scattering rate much larger than the bandwidth, $W\ll \Gamma$, in the following. In a first step, we consider the case of fixed particle number. We discuss the limiting cases of the spectral weighting factors $w^n_{{\bf p},\text{intra}}(0)$, $w^s_{{\bf p},\text{inter}}(0)$ and $W^{n,a}_{{\bf p},\text{inter}}(0)$ by expanding up to several orders in $1/\Gamma$. 
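The linear dependence $\mu(\Gamma)=c+\mu_\infty\Gamma$ can be checked directly against \eqref{eqn:muGamma}: inserting it reproduces the particle number $n$ for any $\Gamma$, since $\arctan$ and $\tan$ are mutually inverse on $(-\pi/2,\pi/2)$. A minimal numerical sketch, with an arbitrarily chosen band center $c$:

```python
import numpy as np

def filling(mu, c, gamma):
    """Particle number per unit cell in the dirty limit W << Gamma, eqn (muGamma)."""
    return 1.0 - (2.0 / np.pi) * np.arctan((c - mu) / gamma)

def mu_of_gamma(n, c, gamma):
    """Scattering-dependent chemical potential mu(Gamma) = c + mu_inf * Gamma."""
    mu_inf = -np.tan((1.0 - n) * np.pi / 2.0)
    return c + mu_inf * gamma

for n in (0.3, 1.0, 1.7):
    for gamma in (5.0, 50.0):
        mu = mu_of_gamma(n, c=0.2, gamma=gamma)
        print(n, gamma, filling(mu, 0.2, gamma))  # recovers the fixed n
```

At half filling, $n=1$, the sketch gives $\mu=c$ independently of $\Gamma$, reflecting $\mu_\infty=0$.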
The expansion of the spectral weighting factor of the intraband conductivities $w^n_{{\bf p},\text{intra}}(0)$ in \eqref{eqn:Wintra} reads \begin{align} &w^n_\text{intra}(0)\approx \frac{1}{(1+\mu^2_\infty)^2}\frac{1}{\pi \Gamma^2}+\frac{4\mu_\infty}{(1+\mu_\infty^2)^3}\frac{E^n_{\bf p}-c}{\pi \Gamma^3}\nonumber\\&\hspace{1.5cm}-\frac{2(1-5\mu_\infty^2)}{(1+\mu_\infty^2)^4}\frac{(E^n_{\bf p}-c)^2}{\pi\Gamma^4} \, . \label{eqn:WnInfty} \end{align} The prefactors involve $\mu_\infty$ at each order and an additional momentum-dependent prefactor at cubic and quartic order. The expansion of the spectral weighting factor of the symmetric interband conductivity $w^s_{{\bf p},\text{inter}}(0)$ in \eqref{eqn:Wsinter} reads \begin{align} &w^s_\text{inter}(0)\approx \frac{4}{(1+\mu^2_\infty)^2}\frac{r^2_{\bf p}}{\pi \Gamma^2}+\frac{16\mu_\infty}{(1+\mu_\infty^2)^3}\frac{r^2_{\bf p}(g_{\bf p}-c)}{\pi\Gamma^3}\nonumber\\&\hspace{0cm}-\big[\frac{8(1-5\mu_\infty^2)}{(1+\mu^2_\infty)^4}\frac{r^2_{\bf p} (g_{\bf p}-c)^2}{\pi\Gamma^4}+\frac{8(1-\mu_\infty^2)}{(1+\mu^2_\infty)^4}\frac{r^4_{\bf p}}{\pi\Gamma^4}\big] \, . \label{eqn:WsInfty} \end{align} Note that all orders involve a momentum-dependent prefactor. In both $w^n_\text{intra}(0)$ and $w^s_\text{inter}(0)$ the cubic order vanishes at half filling by $\mu_\infty=0$. The expansion of the spectral weighting factor of the antisymmetric interband conductivities $W^{n,a}_{{\bf p},\text{inter}}(0)$ in \eqref{eqn:Wainter} reads \begin{align} &W^{a,\pm}_\text{inter}(0)\approx \big[\frac{3\pi}{2}\hspace{-0.5mm}+\hspace{-0.5mm}3\arctan\mu_\infty\hspace{-0.5mm}+\hspace{-0.5mm}\frac{\mu_\infty(5+3\mu_\infty^2)}{(1+\mu_\infty^2)^2}\big]\frac{r^2_{\bf p}}{\pi\Gamma^2}\nonumber\\&\hspace{2cm}-\frac{8}{3(1+\mu_\infty^2)^3}\frac{3r^2_{\bf p} (g_{\bf p}-c)\pm r^3_{\bf p}}{\pi\Gamma^3} \, . 
\label{eqn:WaInfty} \end{align} Note that the expansion of $w^{a,\pm}_\text{inter}(\epsilon)$ with subsequent frequency integration from $-\infty$ to $0$ leads to divergences and predicts a wrong lowest order behavior. Due to the property of the Berry curvature, $\Omega^{\alpha\beta,+}_{\bf p}=-\Omega^{\alpha\beta,-}_{\bf p}$, the quadratic order drops out of the antisymmetric interband conductivity summed over the two bands, leading to \begin{align} &W^{a,+}_\text{inter}(0)-W^{a,-}_\text{inter}(0)\approx \nonumber\\&-\frac{16}{3(1+\mu_\infty^2)^3}\frac{r^3_{\bf p}}{\pi\Gamma^3}-\frac{32\mu_\infty}{(1+\mu^2_\infty)^4}\frac{r^3_{\bf p}(g_{\bf p}-c)}{\pi\Gamma^4}\nonumber\\[1mm]&+\big[\frac{16(1-7\mu^2_\infty)}{(1+\mu^2_\infty)^5}\frac{r^3_{\bf p} (g_{\bf p}-c)^2}{\pi\Gamma^5}+\frac{16(3-5\mu^2_\infty)}{5(1+\mu^2_\infty)^5}\frac{r^5_{\bf p}}{\pi\Gamma^5}\big] \, . \label{eqn:WaDiffInfty} \end{align} The antisymmetric interband conductivity summed over the two bands is at least cubic in $1/\Gamma$, in contrast to the intraband and the symmetric interband conductivity, which are at least quadratic. The integration over momenta in the conductivities can cause the cancellation of some orders or can reduce the numerical prefactor drastically, so that the crossover to lower orders takes place far beyond the numerically or physically accessible scales. By giving the exact prefactors above, this can be checked not only qualitatively but also quantitatively for a given model. If needed, the expansion to even higher orders is straightforward. For fixed chemical potential, the dirty limit does not involve the orders generated by the scattering dependence of $\mu(\Gamma)$; only the prefactors are modified by the constant $\mu$. The corresponding expansion of the different spectral weighting factors is obtained simply by setting $\mu_\infty=0$ and $c=\mu$ in \eqref{eqn:WnInfty}-\eqref{eqn:WaDiffInfty}. 
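The expansion \eqref{eqn:WnInfty} is straightforward to verify numerically against the closed form $w^n_{{\bf p},\text{intra}}(0)=\Gamma^2/\big\{\pi\big[(\mu-E^n_{\bf p})^2+\Gamma^2\big]^2\big\}$ that follows for a Lorentzian spectral function (our assumption). The sketch below inserts $\mu(\Gamma)=c+\mu_\infty\Gamma$ and checks that the relative deviation shrinks like $1/\Gamma^3$, i.e. as the first neglected order; the parameter values are arbitrary.

```python
import numpy as np

def w_intra_exact(E, c, gamma, mu_inf):
    """Exact w^n_intra(0) = Gamma^2 / (pi ((mu - E)^2 + Gamma^2)^2), assuming a
    Lorentzian spectral function and mu(Gamma) = c + mu_inf * Gamma."""
    mu = c + mu_inf * gamma
    return gamma ** 2 / (np.pi * ((mu - E) ** 2 + gamma ** 2) ** 2)

def w_intra_dirty(E, c, gamma, mu_inf):
    """Large-Gamma expansion (eqn:WnInfty) up to order 1/Gamma^4."""
    a = 1.0 + mu_inf ** 2
    u = E - c
    return (1.0 / (a ** 2 * np.pi * gamma ** 2)
            + 4.0 * mu_inf * u / (a ** 3 * np.pi * gamma ** 3)
            - 2.0 * (1.0 - 5.0 * mu_inf ** 2) * u ** 2 / (a ** 4 * np.pi * gamma ** 4))

mu_inf, E, c = -0.51, 1.0, 0.2
for gamma in (10.0, 100.0):
    exact = w_intra_exact(E, c, gamma, mu_inf)
    approx = w_intra_dirty(E, c, gamma, mu_inf)
    print(gamma, abs(approx / exact - 1.0))  # relative error shrinks with Gamma
```

Increasing $\Gamma$ by one decade reduces the relative deviation by roughly three decades, consistent with the first neglected order $1/\Gamma^5$ of the expansion.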
The scaling behavior $\sigma^{xx}\sim \Gamma^{-2}$ of the longitudinal conductivity and $\sigma^{xy}\sim \Gamma^{-3}$ of the anomalous Hall conductivity (for zero $\sigma^{xy}_{\text{intra},\pm}$) is consistent with Kontani {\it et al.} \cite{Kontani2007} and Tanaka {\it et al.} \cite{Tanaka2008}. We emphasize, however, that a scattering dependence of $\mu$ and the integration over momenta may modify the above scalings. Thus, the exponent in the scaling relation $\sigma^{xy}\propto (\sigma^{xx})^\nu$, which is useful in the analysis of experimental results (see, for instance, Ref.~\onlinecite{Onoda2008}), is not necessarily $\nu=1.5$ in the limit $W\ll\Gamma$ \cite{Tanaka2008}. We will show an example in Sec.~\ref{sec:examples:Kontani}. \subsection{Quantum geometric tensor} \label{sec:discussion:quantumgeometry} Besides the Green's function, the generalized velocity is the other key ingredient in the polarization tensor \eqref{eqn:PiFinal}. We showed that the phase gained by spatial motion in an electric field leads to a generalized velocity, which is given by the momentum derivative of the Bloch Hamiltonian matrix expressed in the subsystem basis. The momentum derivative of the Bloch Hamiltonian in another basis does not capture all relevant contributions and leads to incomplete or inconsistent results (see, for instance, \cite{Tomczak2009, Nourafkan2018} and the example in Sec.~\ref{sec:examples:doubling}). We presented the procedure for deriving these additional contributions after a basis change in Sec.~\ref{sec:conductivity}. As a consequence of the momentum dependence of the eigenbasis $|\pm_{\bf p}\rangle$ we derived the coherence matrix $\mathcal{F}^\alpha_{\bf p}$, which involves the Berry connection and, thus, suggests a deeper connection to topological and quantum geometrical concepts. We review these concepts and relate them to our results in a broader and more general perspective in the following. 
Expressing the velocity operator given by $\partial_\alpha \hat \lambda_{\bf p}$ of a general multiband (and not necessarily two-band) Bloch Hamiltonian $\hat \lambda_{\bf p}$ in its orthogonal and normalized eigenbasis $|n_{\bf p}\rangle$ with eigenvalues $E^n_{\bf p}$ naturally leads to intraband and interband contributions via \begin{align} \langle n_{\bf p}|(\partial_\alpha \hat \lambda_{\bf p})|m_{\bf p}\rangle &= \delta_{nm}\,E^{n,\alpha}_{\bf p}\nonumber\\[1mm]&+i(E^n_{\bf p}-E^m_{\bf p})\mathcal{A}^{\alpha,nm}_{\bf p} \end{align} after treating the momentum derivative and the momentum dependence of the eigenbasis carefully. The first line involves the quasiparticle velocities $E^{n,\alpha}_{\bf p}=\partial_\alpha E^n_{\bf p}$ and is only present for $n=m$. The second line involves the Berry connection $\mathcal{A}^{\alpha,nm}_{\bf p}=i\langle n_{\bf p}|\partial_\alpha m_{\bf p}\rangle$, where $|\partial_\alpha m_{\bf p}\rangle$ is the momentum derivative of the eigenstate $|m_{\bf p}\rangle$ \cite{Berry1984,Zak1989}, and is only present for $n\neq m$. In our two band model, the first term corresponds to $\mathcal{E}^\alpha_{\bf p}$ in \eqref{eqn:Ep}, the second term to $\mathcal{F}^\alpha_{\bf p}$ in \eqref{eqn:UdagLamU} and the $\mathcal{A}^{\alpha,nm}_{\bf p}$ are the elements of the matrix $\mathcal{A}_{\bf p}$ in \eqref{eqn:BerryConnection} with $n,m=\pm$, that is \begin{align} \mathcal{A}_{\bf p}=iU^\dagger_{\bf p}\partial^{}_\alpha U^{}_{\bf p}=\begin{pmatrix}\mathcal{A}^{\alpha,+}_{\bf p} & \mathcal{A}^{\alpha,+-}_{\bf p} \\[2mm] \mathcal{A}^{\alpha,-+}_{\bf p} & \mathcal{A}^{\alpha,-}_{\bf p} \end{pmatrix}\, . \end{align} We omitted the second $n$ of the diagonal elements $\mathcal{A}^{\alpha,nn}_{\bf p}$ for shorter notation. The diagonal elements $\mathcal{A}^{\alpha,+}_{\bf p}$ and $\mathcal{A}^{\alpha,-}_{\bf p}$ correspond to $\mathcal{I}^\alpha_{\bf p}+\mathcal{Z}^\alpha_{\bf p}$ in \eqref{eqn:I} and \eqref{eqn:Z}. 
The off-diagonal elements $\mathcal{A}^{\alpha,+-}_{\bf p}$ and $\mathcal{A}^{\alpha,-+}_{\bf p}$ correspond to $\mathcal{X}^\alpha_{\bf p}+\mathcal{Y}^\alpha_{\bf p}$ in \eqref{eqn:X} and \eqref{eqn:Y}. The Berry connection $\mathcal{A}^{\alpha,nm}_{\bf p}$ is not invariant under the ``local'' $U(1)$ gauge transformation $|n_{\bf p}\rangle\rightarrow e^{i\phi^n_{\bf p}}|n_{\bf p}\rangle$ and, thus, should not show up in physical quantities like the conductivity. In other words, not the Hilbert space but the projective Hilbert space is physically relevant \cite{Provost1980,Anandan1991,Anandan1990,Cheng2013}. For our two-band model, we discussed this aspect by allowing the phases $\phi^\pm_{\bf p}$ in \eqref{eqn:+} and \eqref{eqn:-} explicitly. In general, the transformation of the Berry connection with respect to the gauge transformation above reads \begin{align} \mathcal{A}^{\alpha,n}_{\bf p}&\rightarrow \mathcal{A}^{\alpha,n}_{\bf p}-\phi^{n,\alpha}_{\bf p} \,,\\[1mm] \mathcal{A}^{\alpha,nm}_{\bf p}&\rightarrow \mathcal{A}^{\alpha,nm}_{\bf p} e^{-i(\phi^n_{\bf p}-\phi^m_{\bf p})} \,, \end{align} with $\phi^{n,\alpha}_{\bf p}=\partial_\alpha \phi^n_{\bf p}$. Obviously, the combination \begin{align} \label{eqn:T} \mathcal{T}^{\alpha\beta,n}_{\bf p}=\sum_{m\neq n}\mathcal{A}^{\alpha,nm}_{\bf p}\mathcal{A}^{\beta,mn}_{\bf p} \end{align} is gauge independent. In our two-band model we used the same argument in \eqref{eqn:FF}. We rewrite \eqref{eqn:T} by using $\langle n_{\bf p}|\partial_\alpha m_{\bf p}\rangle=-\langle \partial_\alpha n_{\bf p}|m_{\bf p}\rangle$ and $\sum_{m\neq n} |m_{\bf p}\rangle\langle m_{\bf p}|=1-|n_{\bf p}\rangle\langle n_{\bf p}|$ and obtain \begin{align} \mathcal{T}^{\alpha\beta,n}_{\bf p}=\langle\partial_\alpha n_{\bf p}|\partial_\beta n_{\bf p}\rangle-\langle \partial_\alpha n_{\bf p}|n_{\bf p}\rangle\langle n_{\bf p}|\partial_\beta n_{\bf p}\rangle \, . 
\end{align} We have recovered the quantum geometric tensor, which is related to the Fubini-Study metric of the projective Hilbert space \cite{Provost1980,Anandan1991, Anandan1990, Cheng2013, Bleu2018}. In our two-band model, the (diagonal) elements of the product $\mathcal{F}^\alpha_{\bf p}\mathcal{F}^\beta_{\bf p}$ are proportional to the quantum geometric tensor $\mathcal{T}^{\alpha\beta,\pm}_{\bf p}$. Since the interband contribution \eqref{eqn:DecompSymAntisym}, which we decomposed into its symmetric and antisymmetric part with respect to $\alpha\leftrightarrow\beta$, is controlled by the quantum geometric tensor, this suggests splitting $\mathcal{T}^{\alpha\beta,n}_{\bf p}$ into its symmetric and antisymmetric part as well. Using the property of the Berry connection under complex conjugation in \eqref{eqn:T}, we see that the symmetric part of $\mathcal{T}^{\alpha\beta,n}_{\bf p}$ is its real part and the antisymmetric part is its imaginary part. We define the real-valued quantities $C^{\alpha\beta,n}_{\bf p}$ and $\Omega^{\alpha\beta,n}_{\bf p}$ via \begin{align} \mathcal{T}^{\alpha\beta,n}_{\bf p}=\frac{1}{2}\big(C^{\alpha\beta,n}_{\bf p}-i\Omega^{\alpha\beta,n}_{\bf p}\big) \end{align} with $C^{\alpha\beta,n}_{\bf p}=C^{\beta\alpha,n}_{\bf p}$ and $\Omega^{\alpha\beta,n}_{\bf p}=-\Omega^{\beta\alpha,n}_{\bf p}$. We have recovered the Berry curvature \begin{align} \label{eqn:OmegaRot} \Omega^{\alpha\beta,n}_{\bf p}&=-2\,\text{Im}\mathcal{T}^{\alpha\beta,n}_{\bf p}=\partial_\alpha\mathcal{A}^{\beta,n}_{\bf p}-\partial_\beta\mathcal{A}^{\alpha,n}_{\bf p} \, . \end{align} The Berry curvature is the curl of the Berry connection. Using \eqref{eqn:T} one can show that $\sum_n \Omega^{\alpha\beta,n}_{\bf p}=0$. 
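The relations above can be made concrete numerically. The sketch below evaluates $\mathcal{T}^{\alpha\beta,n}_{\bf p}$ in the manifestly gauge-invariant projector form $\mathcal{T}^{\alpha\beta,n}_{\bf p}=\text{Tr}\big[P_n(\partial_\alpha P_n)(1-P_n)(\partial_\beta P_n)\big]$ with $P_n=|n_{\bf p}\rangle\langle n_{\bf p}|$, which is equivalent to the expression derived here, and checks that $\text{Re}\,\mathcal{T}$ is symmetric and that $\sum_n\Omega^{\alpha\beta,n}_{\bf p}=0$. The two-band Bloch Hamiltonian (a Qi-Wu-Zhang-type model) is our illustrative choice, not a model from the text.

```python
import numpy as np

def bloch(p):
    """Example two-band Bloch Hamiltonian d(p).sigma (Qi-Wu-Zhang-type)."""
    px, py = p
    d = np.array([np.sin(px), np.sin(py), 1.0 + np.cos(px) + np.cos(py)])
    sig = [np.array([[0, 1], [1, 0]], complex),
           np.array([[0, -1j], [1j, 0]]),
           np.array([[1, 0], [0, -1]], complex)]
    return sum(di * si for di, si in zip(d, sig))

def projector(p, band):
    """Gauge-invariant projector |n><n| onto band 0 (lower) or 1 (upper)."""
    _, vecs = np.linalg.eigh(bloch(p))
    v = vecs[:, band]
    return np.outer(v, v.conj())

def qgt(p, band, h=1e-5):
    """Quantum geometric tensor T^{ab,n} = Tr[P dP_a (1-P) dP_b]."""
    P = projector(p, band)
    dP = [(projector(p + h * e, band) - projector(p - h * e, band)) / (2 * h)
          for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
    one = np.eye(2)
    return np.array([[np.trace(P @ dP[a] @ (one - P) @ dP[b])
                      for b in range(2)] for a in range(2)])

p0 = np.array([0.4, -0.7])
T_lo, T_hi = qgt(p0, 0), qgt(p0, 1)
print(np.allclose(T_lo.imag + T_hi.imag, 0, atol=1e-6))  # sum of Berry curvatures vanishes
print(np.allclose(T_lo.real, T_lo.real.T, atol=1e-8))    # metric part is symmetric
```

Since the projector is invariant under $|n_{\bf p}\rangle\rightarrow e^{i\phi^n_{\bf p}}|n_{\bf p}\rangle$, this construction sidesteps the gauge ambiguity of the eigenvectors entirely.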
In order to understand the meaning of the symmetric part $C^{\alpha\beta,n}_{\bf p}$ we consider the squared distance function \begin{align} \label{eqn:QuantumDistance} D\big(|n_{\bf p}\rangle,|n_{{\bf p}'}\rangle\big)^2&=1-|\langle n_{\bf p}|n_{{\bf p}'}\rangle|^2 \, , \end{align} where $|n_{\bf p}\rangle$ and $|n_{{\bf p}'}\rangle$ are two normalized eigenstates of the same band $E^n_{\bf p}$ at different momenta \cite{Provost1980, Anandan1991,Anandan1990, Cheng2013, Bleu2018}. The distance function is invariant under the gauge transformation $|n_{\bf p}\rangle\rightarrow e^{i\phi^n_{\bf p}}|n_{\bf p}\rangle$. It is maximal if the two states are orthogonal and zero if they differ only by a phase. We can understand the function in \eqref{eqn:QuantumDistance} as a distance on the projective Hilbert space, in the same manner as $||n_{\bf p}\rangle - |n_{{\bf p}'}\rangle|$ is the natural distance in the Hilbert space, which, in contrast, is not invariant under the above gauge transformation \cite{Provost1980}. If we expand the distance between the two eigenstates $|n_{\bf p}\rangle$ and $|n_{{\bf p}+d{\bf p}}\rangle$, whose momenta differ only by an infinitesimal momentum $d{\bf p}$, up to second order, we find a metric tensor $g^{\alpha\beta,n}_{\bf p}$ that is given by the real part of the quantum geometric tensor. We see that \begin{align} C^{\alpha\beta,n}_{\bf p}=2\,g^{\alpha\beta,n}_{\bf p}=2\,\text{Re}\mathcal{T}^{\alpha\beta,n}_{\bf p} \, . \end{align} In our two-band system, the metrics of the two bands are equal, that is $g^{\alpha\beta,+}_{\bf p}=g^{\alpha\beta,-}_{\bf p}$, and so is $C^{\alpha\beta}_{\bf p}$. 
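The second-order expansion of the squared distance \eqref{eqn:QuantumDistance} can be tested directly: for a small momentum step $d{\bf p}$, $1-|\langle n_{\bf p}|n_{{\bf p}+d{\bf p}}\rangle|^2$ should agree with $\sum_{\alpha\beta}g^{\alpha\beta,n}_{\bf p}\,dp_\alpha dp_\beta$ where $g^{\alpha\beta,n}_{\bf p}=\text{Re}\,\mathcal{T}^{\alpha\beta,n}_{\bf p}$. The sketch below uses an illustrative two-band Hamiltonian of our choosing and a finite-difference evaluation of the band projector.

```python
import numpy as np

def eigvec(p, band):
    """Normalized eigenvector |n_p> of an example two-band Bloch Hamiltonian."""
    px, py = p
    d = np.array([np.sin(px), np.sin(py), 1.0 + np.cos(px) + np.cos(py)])
    h = np.array([[d[2], d[0] - 1j * d[1]], [d[0] + 1j * d[1], -d[2]]])
    return np.linalg.eigh(h)[1][:, band]

def metric(p, band, h=1e-5):
    """Quantum metric g^{ab} = Re Tr[P dP_a (1-P) dP_b] via band projectors."""
    proj = lambda q: np.outer(eigvec(q, band), eigvec(q, band).conj())
    P = proj(p)
    dP = [(proj(p + h * e) - proj(p - h * e)) / (2 * h)
          for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
    return np.array([[np.trace(P @ dP[a] @ (np.eye(2) - P) @ dP[b]).real
                      for b in range(2)] for a in range(2)])

p0, dp = np.array([0.3, 0.5]), np.array([2e-3, -1e-3])
overlap = abs(np.vdot(eigvec(p0, 0), eigvec(p0 + dp, 0))) ** 2
d2 = 1.0 - overlap                      # squared quantum distance (eqn:QuantumDistance)
g = metric(p0, 0)
print(d2, dp @ g @ dp)                  # agree to leading order in |dp|
```

Both the overlap and the projector construction are insensitive to the arbitrary phases returned by the eigensolver, illustrating the gauge invariance emphasized above.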
We see that the interband conductivities $\sigma^{\alpha\beta,s}_\text{inter}$ and $\sigma^{\alpha\beta,a}_{\text{inter},n}$ in \eqref{eqn:SinterS} and \eqref{eqn:SinterAN} are controlled by the quantum geometric tensor $\mathcal{T}^{\alpha\beta,n}$ and, thus, caused by a nontrivial geometry of the Bloch state manifold $\{|n_{\bf p}\rangle\}$. We can specify this further by noticing that the symmetric interband conductivity \eqref{eqn:SinterS} is related to the quantum metric and the antisymmetric interband conductivities \eqref{eqn:SinterAN} are related to the Berry curvature. $\sigma^{\alpha\beta,s}_\text{inter}$ was recently studied in detail in the context of spiral magnetic order in application to Hall experiments on high-temperature superconductors \cite{Mitscherling2018, Bonetti2020}. The above analysis provides a new interpretation of these results. In order to highlight the connection to the quantum metric, we refer to the expression $C^{\alpha\beta,n}_{\bf p}$ as the ``quantum metric factor'', which is more precise than ``coherence term'' \cite{Voruganti1992}. Recently, there has been increasing interest in the quantum geometric tensor and the quantum metric in very different fields \cite{Gianfrate2020, Bleu2018, Zanardi2007, Gao2015, Peotta2015, Srivastava2015, Julku2016, Piechon2016, Liang2017}, including corrections to semiclassical equations of motion in the context of the anomalous Hall conductivity \cite{Gao2014,Bleu2018b} and the effect on the effective mass \cite{Iskin2019}. Based on our microscopic derivation, we emphasize that the precise way in which the quantum geometric tensor has to be included in transport phenomena is nontrivial. 
\subsection{Anomalous Hall effect, anisotropic longitudinal conductivity and quantization} \label{sec:discussion:anomalousHall} The Berry curvature tensor $\Omega^{\alpha\beta,n}_{\bf p}$ is antisymmetric in $\alpha\leftrightarrow\beta$ and, thus, has three independent components in a 3-dimensional system, which can be mapped to a Berry curvature vector $\mathbf{\Omega}^n_{\bf p}=\begin{pmatrix}\Omega^{yz,n}_{\bf p}, & -\Omega^{xz,n}_{\bf p}, & \Omega^{xy,n}_{\bf p}\end{pmatrix}$. In order to use the same notation in a 2-dimensional system we set the corresponding elements in $\mathbf{\Omega}^n_{\bf p}$ to zero, for instance, $\Omega^{yz,n}_{\bf p}=\Omega^{xz,n}_{\bf p}=0$ for a system in the x-y plane. By using the definition of the conductivity and our result \eqref{eqn:SinterAN} of the antisymmetric interband contribution we can write the current vector ${\bf j}^a_n$ of band $n=\pm$ induced by $\mathbf{\Omega}^n_{\bf p}$ as \begin{align} &{\bf j}^a_n\hspace{-0.5mm}=\hspace{-0.5mm}-\frac{e^2}{\hbar}\hspace{-1mm}\int\hspace{-1.5mm}\frac{d^d{\bf p}}{(2\pi)^d}\hspace{-1mm}\int\hspace{-1.5mm}d\epsilon\,\,f(\epsilon)\, w^{a,n}_{{\bf p},\text{inter}}(\epsilon)\,{\bf E}\times\mathbf{\Omega}^n_{\bf p}\,. \end{align} The Berry curvature vector $\mathbf{\Omega}^n_{\bf p}$ acts like an effective magnetic field \cite{Nagaosa2010,Xiao2010} in analogy to the Hall effect induced by an external magnetic field ${\bf B}$. We see that the antisymmetric interband contribution of the conductivity in \eqref{eqn:SinterAN} is responsible for the intrinsic anomalous Hall effect, that is, a Hall current without an external magnetic field that is not caused by (skew) scattering. In a $d$-dimensional system, the conductivity tensor is a $d\times d$ matrix $\sigma=(\sigma^{\alpha\beta})$. 
Besides its antisymmetric part, which describes the anomalous Hall effect, it also involves a symmetric part $\sigma_\text{sym}$ due to the intraband and the symmetric interband contributions \eqref{eqn:SintraN} and \eqref{eqn:SinterS}. We can diagonalize the, in general, non-diagonal matrix $\sigma_\text{sym}$ by a rotation $\mathcal{R}$ of the coordinate system, which we fixed to an orthogonal basis $\mathbf{e}_x,\,\mathbf{e}_y,\,\mathbf{e}_z$ when labeling $\alpha$ and $\beta$ in \eqref{eqn:defPi}. If the rotation $\mathcal{R}$ is chosen such that $\mathcal{R}^T\sigma_\text{sym}\mathcal{R}$ is diagonal, the antisymmetric part in the rotated basis is described by the rotated Berry curvature vector $\mathcal{R}^T\mathbf{\Omega}^n_{\bf p}$. We see that a rotation within the plane of a two-dimensional system does not affect $\mathbf{\Omega}^n_{\bf p}$, which highlights the expected isotropy of the anomalous Hall effect, consistent with the interpretation of $\mathbf{\Omega}^n_{\bf p}$ as an effective magnetic field perpendicular to the plane. The possibility to diagonalize the symmetric part $\sigma_\text{sym}$ shows that the diagonal and off-diagonal intraband and symmetric interband contributions in \eqref{eqn:SintraN} and \eqref{eqn:SinterS} are part of the (anisotropic) longitudinal conductivity in a rotated coordinate system. Finally, we discuss the possibility of quantization of the anomalous Hall conductivity. Let us assume a two-dimensional system lying in the x-y plane without loss of generality. The Chern number of band $n$ is given by \begin{align} \label{eqn:Chern} C_n=-\frac{1}{2\pi}\int_\text{BZ}\mathbf{\Omega}^n_{\bf p}\cdot d{\bf S}=-2\pi \int \frac{d^2 {\bf p}}{(2\pi)^2} \,\Omega^{xy,n}_{\bf p} \end{align} and is quantized to integer numbers \cite{Thouless1982, Xiao2010, Nagaosa2010}. 
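For a concrete two-band example, the quantized Chern number can be evaluated by integrating the Berry curvature over the Brillouin zone. For a Bloch Hamiltonian of the form $\lambda_{\bf p}=\vec d_{\bf p}\cdot\vec\sigma$, the band Berry curvature is proportional to $\hat d_{\bf p}\cdot(\partial_x\hat d_{\bf p}\times\partial_y\hat d_{\bf p})$ (sign conventions vary), so $|C_n|$ is the winding number of $\hat d_{\bf p}$ over the Brillouin zone. The $d$-vector below, a Qi-Wu-Zhang-type Chern insulator, is our illustrative choice and not a model from the text.

```python
import numpy as np

# Example Chern insulator: d = (sin px, sin py, m + cos px + cos py)
m = 1.0
N = 400
p = (np.arange(N) + 0.5) * 2 * np.pi / N - np.pi  # midpoint momentum grid
px, py = np.meshgrid(p, p, indexing="ij")

d = np.stack([np.sin(px), np.sin(py), m + np.cos(px) + np.cos(py)])   # (3, N, N)
dx = np.stack([np.cos(px), 0 * px, -np.sin(px)])                      # d(d)/dpx
dy = np.stack([0 * py, np.cos(py), -np.sin(py)])                      # d(d)/dpy

norm = np.sqrt((d ** 2).sum(axis=0))
dhat = d / norm
# derivative of the unit vector: dh = (dd - dhat (dhat . dd)) / |d|
dhx = (dx - dhat * (dhat * dx).sum(axis=0)) / norm
dhy = (dy - dhat * (dhat * dy).sum(axis=0)) / norm

triple = (dhat * np.cross(dhx, dhy, axis=0)).sum(axis=0)
chern = triple.sum() * (2 * np.pi / N) ** 2 / (4 * np.pi)
print(chern)  # winding number of dhat; |chern| = 1 for 0 < m < 2
```

On a $400\times 400$ momentum grid the Riemann sum of the smooth, periodic integrand converges to an integer to high accuracy; partial band filling, finite temperature or finite $\Gamma$ would spoil this quantization, as discussed below.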
We can define a generalized Chern number dependent on the temperature, the scattering rate and the chemical potential as \begin{align} & C_n(T,\Gamma,\mu) \hspace{-0.5mm}=\hspace{-0.5mm}-2\pi\hspace{-1mm}\int\hspace{-1.5mm}\frac{d^2{\bf p}}{(2\pi)^2}\hspace{-1mm}\int\hspace{-1.5mm}d\epsilon\,\,f(\epsilon)\, w^{a,n}_{{\bf p},\text{inter}}(\epsilon)\,\Omega^{xy,n}_{\bf p}\, , \end{align} which is weighted by the Fermi function as well as by the spectral weighting factor $w^{a,n}_{{\bf p},\text{inter}}(\epsilon)$ defined in \eqref{eqn:Wainter}. Thus, we include the effect of band occupation, temperature and finite scattering rate. The antisymmetric interband conductivity, that is the anomalous Hall conductivity, then reads \begin{align} \sigma^{xy,a}_{\text{inter},n}=\frac{e^2}{h} C_n(T,\Gamma,\mu) \, . \end{align} In the clean limit $\Gamma\ll 1$ we recover the broadly used result of Onoda {\it et al.} \cite{Onoda2002} and Jungwirth {\it et al.} \cite{Jungwirth2002}. If we further assume zero temperature $k_B T\ll 1$ and a completely filled band $n$, we recover the famous TKNN formula for the quantized anomalous Hall effect \cite{Thouless1982}, where the anomalous Hall conductivity is quantized to $\frac{e^2}{h}\nu$ due to the quantized integer Chern number $\nu=C_n$. Note that finite temperature, finite $\Gamma$ and partially filled bands break the quantization. Furthermore, we may be able to relate the antisymmetric interband conductivity to topological charges and, thus, obtain a quantized anomalous Hall conductivity. The Berry curvature $\mathbf{\Omega}^n_{\bf p}$ is the curl of the Berry connection ${\bf \mathcal{A}}^n_{\bf p}=\begin{pmatrix}\mathcal{A}^{x,n}_{\bf p}, & \mathcal{A}^{y,n}_{\bf p}, & \mathcal{A}^{z,n}_{\bf p}\end{pmatrix}$, see \eqref{eqn:OmegaRot}. Via Stokes' theorem, the integral over a two-dimensional surface within the Brillouin zone can be related to a closed line integral. 
This line integral may define a quantized topological charge, which leads to a quantized value of $\sigma^{\alpha\beta,a}_{\text{inter},n}$ integrated over this surface. For instance, this causes a quantized radial component of the current in a $\mathcal{P}\mathcal{T}$-symmetric Dirac nodal-line semimetal \cite{Rui2018}. \section{Examples} \label{sec:examples} We discuss several examples in this section. Each example includes a short physical motivation that leads to a Hamiltonian of the form \eqref{eqn:H} with specified quantum numbers $A$ and $B$ of the two subsystems. We emphasize that this step is necessary for a transparent justification of the coupling of the electric field to the physical system. \subsection{Artificial doubling of the unit cell} \label{sec:examples:doubling} In this short example we emphasize the importance of using the precise position ${\bf R}_i+\boldsymbol\rho_\sigma$ of the subsystem $\sigma=A,B$ in the Peierls substitution in Sec.~\ref{sec:twobandsystem:peierls} in order to obtain physically consistent results \cite{Tomczak2009,Nourafkan2018}. We compare the conductivity of a linear chain of atoms with interatomic distance $1$ and nearest-neighbor hopping $t$ with the conductivity that we calculate in an artificially doubled unit cell. We denote the (one-dimensional) momentum as $p$. The dispersion is $\epsilon_p=2t\cos p$ with Brillouin zone $p\in(-\pi,\pi]$. We artificially double the unit cell with sites $A$ and $B$. Thus, the distance between adjacent unit cells $j$ and $j'$ is ${\bf r}_{jj'}=2$. The subsystems are at positions $\boldsymbol\rho_A=0$ and $\boldsymbol\rho_B=1$ within a unit cell. The corresponding Brillouin zone is $p\in(-\pi/2,\pi/2]$ and the Bloch Hamiltonian reads \begin{align} \lambda_p=\begin{pmatrix} 0 & 2t\cos p \\ 2t\cos p & 0 \end{pmatrix} \, . 
\end{align} When mapping $\lambda_p$ to the spherical representation \eqref{eqn:lamPolar} we have $g_p=h_p=0$ and the two angles are $\Theta_p=\pi/2$ and $\varphi_p=0$. The two bands are $E^\pm_p=\pm 2t |\cos p|$. Since the angles are momentum-independent, we see that the interband contributions vanish, that is, $\sigma^{xx,s}_\text{inter}=\sigma^{xx,a}_{\text{inter},+}=\sigma^{xx,a}_{\text{inter},-}=0$, where $x$ labels the direction of the chain. The (intraband) conductivity is equal to that of the undoubled case, as physically expected. Note that a coupling between the two subsystems $A$ and $B$ does not necessarily lead to interband contributions to the conductivity. \subsection{Wilson fermion model} \label{sec:examples:Wilson} We discuss the Wilson fermion model, a two-dimensional lattice model of a Chern insulator \cite{Grushin2018}. We mainly focus on the quantized anomalous Hall effect due to a finite Chern number of the fully occupied band in order to illustrate our discussion in Sec.~\ref{sec:discussion:anomalousHall}. We motivate the Wilson fermion model via a tight-binding model presented by Nagaosa {\it et al.} \cite{Nagaosa2010}. We assume a two-dimensional square lattice with three orbitals $s,\,p_x,\,p_y$ and spin $\sigma$. The three orbitals are located at the same lattice site. We include hopping between these sites and a simplified spin-orbit interaction between the $z$ component of the spin and the orbital moment. We assume a ferromagnetic state with spin $\uparrow$ only. Due to spin-orbit interaction the $p$ orbitals are split into $p_x\pm ip_y$. The effective two-band low-energy model is of the form \eqref{eqn:H}. We identify the two subsystems as $A=(s,\uparrow)$ and $B=(p_x-ip_y,\uparrow)$. We have $\boldsymbol\rho_A=\boldsymbol\rho_B=0$ and ${\bf Q}_A={\bf Q}_B=0$. 
The Bloch Hamiltonian reads \begin{align} \lambda_{\bf p}=\begin{pmatrix} \epsilon_s\hspace{-0.5mm}-\hspace{-0.5mm}2t_s\big(\cos p_x\hspace{-0.5mm}+\hspace{-0.5mm}\cos p_y\big) & \sqrt{2}t_{sp} \big(i \sin p_x\hspace{-0.5mm}+\hspace{-0.5mm}\sin p_y\big) \\[1mm] \sqrt{2}t^*_{sp}\big(-i\sin p_x\hspace{-0.5mm}+\hspace{-0.5mm}\sin p_y\big) & \epsilon_p\hspace{-0.5mm}+\hspace{-0.5mm}t_p\big(\cos p_x\hspace{-0.5mm}+\hspace{-0.5mm}\cos p_y\big) \end{pmatrix} \, , \end{align} where $\epsilon_s$ and $\epsilon_p$ are the energy levels of the two orbitals. $t_s$ and $t_p$ describe the hopping within one orbital and $t_{sp}$ describes the hopping between the two orbitals. For a more detailed motivation we refer to Nagaosa {\it et al.} \cite{Nagaosa2010}. In the following we further reduce the number of parameters by setting $t_s=t$, $t_p/t=2$, $t_{sp}/t=1/\sqrt{2}$ and $\epsilon_s/t=-\epsilon_p/t=m$. We recover the two-dimensional Wilson fermion model \cite{Grushin2018} with only one free dimensionless parameter $m$. We discuss the conductivity of this model as a function of $m$ and the chemical potential $\mu$. We give some basic properties of the model. The quasiparticle dispersions are \begin{align} E^\pm_{\bf p}/t=\pm\sqrt{(m\hspace{-0.5mm}-\hspace{-0.5mm}2\cos p_x\hspace{-0.5mm}-\hspace{-0.5mm}2\cos p_y)^2\hspace{-0.5mm}+\hspace{-0.5mm}\sin^2 p_x\hspace{-0.5mm}+\hspace{-0.5mm}\sin^2 p_y} \,. \end{align} The gap closes in the form of a Dirac point at $(p_x,p_y)=(\pm\pi,\pm\pi)$ for $m=-4$, at $(0,\pm\pi)$ and $(\pm \pi,0)$ for $m=0$ and at $(0,0)$ for $m=4$. For instance, the linearized Hamiltonian for $m=4$ near the gap reads $\lambda_{\bf p}/t=p_y\sigma_x-p_x\sigma_y$. The Chern number of the lower band calculated by \eqref{eqn:Chern} is $C_-=-1$ for $-4<m<0$, $C_-=1$ for $0<m<4$ and $C_-=0$ for $|m|>4$. As expected, $C_+=-C_-$. The bandwidth is $W/t=4+|m|$. 
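The quoted Chern numbers can be reproduced with a minimal numerical sketch using the standard discretized-Brillouin-zone (lattice field-strength) method; the grid size and the overall sign convention are choices of this illustration, not of the paper:

```python
import numpy as np

def wilson_h(px, py, m):
    """Wilson fermion Bloch Hamiltonian in units of t (t_s=t, t_p=2t, t_sp=t/sqrt(2))."""
    dz = m - 2*np.cos(px) - 2*np.cos(py)
    off = 1j*np.sin(px) + np.sin(py)
    return np.array([[dz, off], [np.conj(off), -dz]])

def chern_lower(m, N=60):
    """Chern number of the lower band on an N x N momentum grid."""
    ks = -np.pi + 2*np.pi*np.arange(N)/N
    u = np.empty((N, N, 2), dtype=complex)
    for i, px in enumerate(ks):
        for j, py in enumerate(ks):
            _, v = np.linalg.eigh(wilson_h(px, py, m))
            u[i, j] = v[:, 0]          # eigh sorts ascending: column 0 = lower band
    # link variables between neighboring grid points (periodic via np.roll)
    Ux = np.sum(np.conj(u)*np.roll(u, -1, axis=0), axis=2)
    Uy = np.sum(np.conj(u)*np.roll(u, -1, axis=1), axis=2)
    # Berry flux through each plaquette; the total flux / 2*pi is an integer
    F = np.angle(Ux*np.roll(Uy, -1, axis=0)*np.conj(np.roll(Ux, -1, axis=1))*np.conj(Uy))
    return round(np.sum(F)/(2*np.pi))
```

For gapped parameter values the result converges to the exact integer already on coarse grids; the symmetry $C_-(-m)=-C_-(m)$ and the trivial phase for $|m|>4$ follow as stated above.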
\begin{figure} \centering \includegraphics[width=8cm]{fig3a.pdf}\\ \includegraphics[width=8cm]{fig3b.pdf} \caption{The longitudinal conductivity $\sigma^{xx}$ and anomalous Hall conductivity $\sigma^{xy}$ for different $\Gamma/t=0.1,\,0.5,\,1$ at $\mu=0$ and $T=0$. The vertical lines indicate the gap closings at $m=\pm4$ and $m=0$. \label{fig:3}} \end{figure} We calculate the diagonal conductivity $\sigma^{xx}$ and off-diagonal conductivity $\sigma^{xy}$ by using \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} in the zero temperature limit. The intraband and the symmetric interband contribution to the off-diagonal conductivity vanish after integrating over momenta, so that $\sigma^{xx}=\sigma^{yy}$ is the longitudinal conductivity and $\sigma^{xy}$ is the (antisymmetric) anomalous Hall conductivity. In Fig.~\ref{fig:3} we plot $\sigma^{xx}=\sigma^{xx}_{\text{intra},+}+\sigma^{xx}_{\text{intra},-}+\sigma^{xx,s}_\text{inter}$ (upper figure) and $\sigma^{xy}=\sigma^{xy,a}_{\text{inter},+}+\sigma^{xy,a}_{\text{inter},-}$ (lower figure) as a function of the parameter $m$ at half filling, $\mu=0$. For small scattering rate $\Gamma=0.1\,t$ we find peaks of high longitudinal conductivity (blue) only when the gap closes at $m=\pm 4$ and $m=0$, indicated by the vertical lines. For increasing scattering rate $\Gamma=0.5\,t$ (orange) the peaks are broadened and the conductivity inside the gap is nonzero. For even higher scattering rate $\Gamma=1\,t$ (green) the peak structure eventually disappears and a broad range of finite conductivity is present. The anomalous Hall conductivity $\sigma^{xy}$ is quantized to $e^2/h$ due to a nonzero Chern number of the fully occupied lower band for low scattering rate $\Gamma=0.1\,t$ (blue). At higher scattering rates $\Gamma=0.5\,t$ (orange) and $\Gamma=1\,t$ (green) the quantization is no longer present, most prominently near $m=\pm4$ and $m=0$, where the gap closes. 
\begin{figure} \centering \includegraphics[width=8cm]{fig4a.pdf}\\ \includegraphics[width=8cm]{fig4b.pdf} \caption{The different contributions to $\sigma^{xx}$ and $\sigma^{xy}$ as a function of the chemical potential $\mu$ for $m=2$, $\Gamma=0.5\,t$ and $T=0$. The vertical lines indicate the upper and lower end of the bands at $\mu/t=\pm6$ and the gap between $\mu/t=\pm1$. \label{fig:4}} \end{figure} In Fig.~\ref{fig:4} we show the different contributions to the longitudinal and anomalous Hall conductivity as a function of the chemical potential $\mu$ for $m=2$ and $\Gamma=0.5\,t$. The lower and upper band end at $\mu/t=-6$ and $\mu/t=+6$, respectively, and we have a gap of size $2\,t$ between $\mu/t=\pm 1$, both indicated by vertical lines. In the upper figure we show the longitudinal conductivity $\sigma^{xx}$ (blue) and its three contributions, the intraband conductivity of the lower band $\sigma^{xx}_{\text{intra},-}$ (green), the intraband conductivity of the upper band $\sigma^{xx}_{\text{intra},+}$ (orange) and the symmetric interband conductivity $\sigma^{xx,s}_\text{inter}$ (red). We see that for $-6<\mu/t<-1$ the conductivity is dominated by the lower band, whereas it is dominated by the upper band for $1<\mu/t<6$. Inside the gap $-1<\mu/t<1$ the main contribution is due to the symmetric interband conductivity. We further see smearing effects at $\mu/t=\pm6$ and $\mu/t=\pm 1$. In the lower figure we show the anomalous Hall conductivity $\sigma^{xy}$ (blue) as well as its two contributions, the antisymmetric interband conductivity of the lower band $\sigma^{xy,a}_{\text{inter},-}$ (green) and of the upper band $\sigma^{xy,a}_{\text{inter},+}$ (orange). Both contributions are essentially zero for $\mu/t<-1$. Inside the gap $-1<\mu/t<1$, only the contribution of the lower band rises to approximately $e^2/h$, whereas the contribution of the upper band remains close to zero. Thus, we obtain a nonzero anomalous Hall conductivity. 
For $\mu/t>1$ the contribution of the upper band compensates the contribution of the lower band. Due to this cancellation, a large anomalous Hall effect is only present for a chemical potential inside the band gap. We see that a finite scattering rate $\Gamma$ leads to a maximal value of the anomalous Hall conductivity of the two individual bands that is larger than $e^2/h$, as shown in Sec.~\ref{sec:discussion:limits}. Inside the gap the total anomalous Hall conductivity is reduced due to the nonzero contribution of the upper band. Around $\mu/t=\pm1$ we see smearing effects. \subsection{Ferromagnetic multi-d-orbital model} \label{sec:examples:Kontani} We discuss a quasi-two-dimensional ferromagnetic multi-d-orbital model with spin-orbit coupling based on the work of Kontani {\it et al.} \cite{Kontani2007}. Similar to the previous example, this model involves a nonzero Berry curvature and we expect a nonzero anomalous Hall conductivity, which is, by contrast, not quantized. We mainly focus on the scaling of the different contributions with respect to the scattering rate $\Gamma$, using our results of Sec.~\ref{sec:discussion:limits}. We comment on the consequences for analyzing experimental results in the dirty limit by determining the scaling behavior $\sigma^{xy}\propto (\sigma^{xx})^\nu$. We consider a square lattice tight-binding model with onsite $d_{xz}$ and $d_{yz}$ orbitals. We assume nearest-neighbor hopping $t$ between the $d_{xz}$ orbitals in $x$ direction and between the $d_{yz}$ orbitals in $y$ direction. Next-nearest-neighbor hopping $t'$ couples both types of orbitals. We assume a ferromagnetic material with magnetic moments in $z$ direction that is completely spin-polarized in the spin $\downarrow$ direction. The Hamiltonian is of the form \eqref{eqn:H} when we identify the two subsystems with quantum numbers $A=(d_{xz},\downarrow)$ and $B=(d_{yz},\downarrow)$. We have $\boldsymbol\rho_A=\boldsymbol\rho_B=0$ and ${\bf Q}_A={\bf Q}_B=0$. 
The Bloch Hamiltonian reads \begin{align} \lambda_{\bf p}=\begin{pmatrix} -2t\cos p_x & 4t'\sin p_x \sin p_y + i \lambda \\[1mm] 4t'\sin p_x \sin p_y - i \lambda & -2t\cos p_y \end{pmatrix} \, . \end{align} We include spin-orbit coupling $\lambda$. Further details and physical motivations can be found in Kontani {\it et al.} \cite{Kontani2007}. We take the same set of parameters, setting $t'/t=0.1$ and $\lambda/t=0.2$ as in Ref.~\onlinecite{Kontani2007}. We fix the particle density per unit cell to $n=0.4$ and adjust the chemical potential accordingly. We consider zero temperature. \begin{figure} \centering \includegraphics[width=8cm]{fig5.pdf} \caption{The (negative) chemical potential $\mu$ as a function of the scattering rate $\Gamma$ for $t'/t=0.1$ and $\lambda/t=0.2$ at $n=0.4$. The chemical potential $\mu$ is scattering-independent for $\Gamma/t\ll0.2$ and scales linearly, $\mu=\mu_\infty \Gamma$, for $\Gamma/t\gg2.2$ with $\mu_\infty=-1.376$ (dashed lines). \label{fig:5}} \end{figure} The chemical potential $\mu$ becomes a function of the scattering rate for fixed particle number $n$ according to \eqref{eqn:muGamma}. Whereas it is constant in the clean limit, its linear dependence on $\Gamma$ in the dirty limit is crucial and has to be taken into account carefully via a nonzero $\mu_\infty=-\tan\big((1-n)\pi/2\big)\approx -1.376$ for $n=0.4$. We have $c=(E^+_\text{max}+E^-_\text{min})/2=0$. In Fig.~\ref{fig:5} we plot the chemical potential $\mu/t$ as a function of $\Gamma/t$ obtained by inverting $n(\mu,\Gamma)=0.4$ numerically for fixed $\Gamma$. We find the expected limiting behavior in the clean and dirty limit, indicated by dashed lines. The vertical lines mark the values where $\Gamma/t$ equals the spin-orbit coupling $\lambda/t=0.2$, which is the minimal gap between the lower and the upper band $E^\pm_{\bf p}$, and the bandwidth $W/t=2.2$. Both scales give a rough estimate for the crossover region between constant and linear chemical potential. 
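The dirty-limit slope quoted above is a direct arithmetic consequence of $\mu_\infty=-\tan\big((1-n)\pi/2\big)$; a one-line check for $n=0.4$:

```python
import math

n = 0.4
# dirty-limit slope of mu(Gamma) for fixed particle number n
mu_inf = -math.tan((1 - n)*math.pi/2)   # mu_inf ≈ -1.376
```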
\begin{figure} \centering \includegraphics[width=8cm]{fig6a.pdf}\\ \includegraphics[width=8cm]{fig6b.pdf} \caption{The longitudinal (top) and anomalous Hall (bottom) conductivity and their nonzero contributions as a function of the scattering rate $\Gamma/t$ for $t'/t=0.1$ and $\lambda/t=0.2$ at $n=0.4$. For $\Gamma/t\ll 0.2$ we find the scaling of the clean limit given by \eqref{eqn:winG0}-\eqref{eqn:wanG0} (dashed lines). For $\Gamma/t\gg 2.2$ we find the scaling of the dirty limit given by \eqref{eqn:WnInfty}-\eqref{eqn:WaDiffInfty} with vanishing lowest order for $\sigma^{xy}$ (dashed lines). For $0.2<\Gamma/t<2.2$ we have a crossover regime. \label{fig:6}} \end{figure} We discuss the diagonal conductivity $\sigma^{xx}=\sigma^{yy}$ and the off-diagonal conductivity $\sigma^{xy}$ as a function of the scattering rate $\Gamma/t$. The off-diagonal symmetric contributions $\sigma^{xy}_{\text{intra},n}$ and $\sigma^{xy,s}_\text{inter}$ vanish upon integration over momenta. We calculate the longitudinal conductivity $\sigma^{xx}=\sigma^{xx}_{\text{intra},+}+\sigma^{xx}_{\text{intra},-}+\sigma^{xx,s}_\text{inter}$ and the (antisymmetric) anomalous Hall conductivity $\sigma^{xy}=\sigma^{xy,a}_{\text{inter},+}+\sigma^{xy,a}_{\text{inter},-}$ by using \eqref{eqn:SintraN}-\eqref{eqn:SinterAN} at zero temperature. In a stacked quasi-two-dimensional system, the conductivities are proportional to $e^2/ha$, where $a$ is the interlayer distance. When choosing $a\approx \SI{4}{\angstrom}$, we have $e^2/ha\approx\SI[parse-numbers=false]{10^{3}}{\ohm^{-1}\cm^{-1}}$. In this section we express the conductivities in SI units $1/\Omega\,\text{cm}$ for a simple comparison with experimental results on ferromagnets (see Ref.~\onlinecite{Onoda2008} and references therein). 
\begin{figure} \centering \includegraphics[width=8cm]{fig7.pdf}\\ \caption{The anomalous Hall conductivity $\sigma^{xy}$ as a function of the longitudinal conductivity $\sigma^{xx}$ for $t'/t=0.1$ and $\lambda/t=0.2$ at $n=0.4$. The vertical and horizontal lines indicate the corresponding values at $\Gamma/t=0.2$ and $\Gamma/t=2.2$ in Fig.~\ref{fig:6}. In the clean and dirty limit, we find $\sigma^{xy}\propto (\sigma^{xx})^0$ and $\sigma^{xy}\propto (\sigma^{xx})^2$, respectively, in agreement with the individual scaling in $\Gamma$ (gray dashed lines). The crossover regime can be approximated by a scaling $\sigma^{xy}\propto (\sigma^{xx})^{1.6}$ (red dashed line).\label{fig:7}} \end{figure} In Fig.~\ref{fig:6} we plot the longitudinal (top) and the anomalous Hall (bottom) conductivity (blue lines) and their nonzero contributions as a function of the scattering rate $\Gamma/t$. In the clean limit, $\Gamma/t\ll0.2$, we obtain the expected scaling \eqref{eqn:winG0}-\eqref{eqn:wanG0} indicated by dashed lines. The intraband contributions (orange and green lines in the upper figure) scale as $1/\Gamma$, whereas the symmetric interband contribution (red line) scales as $\Gamma$. The anomalous Hall conductivity becomes scattering-independent, $\propto\Gamma^0$, in the clean limit. On absolute scales both the longitudinal and the anomalous Hall conductivity are dominated by the lower band $E^-_{\bf p}$ (green lines), consistent with a filling of $n=0.4$. In the dirty limit, $\Gamma/t\gg2.2$, the intraband and the symmetric interband contributions of the longitudinal conductivity scale as $\Gamma^{-2}$, which is the lowest order in the expansions \eqref{eqn:WnInfty} and \eqref{eqn:WsInfty}. The anomalous Hall conductivities $\sigma^{xy,a}_{\text{inter},\pm}$ scale as $\Gamma^{-3}$ in agreement with \eqref{eqn:WaInfty}. The lowest order $\Gamma^{-2}$ in \eqref{eqn:WaInfty} vanishes after integration over momenta. 
We have $\sigma^{xy,a}_{\text{inter},+}=-\sigma^{xy,a}_{\text{inter},-}$, which leads to a $\Gamma^{-4}$ dependence of the anomalous Hall conductivity summed over both bands, different from what was expected previously \cite{Kontani2007, Tanaka2008}. The dashed lines in the dirty limit are explicitly calculated via our results in Sec.~\ref{sec:discussion:limits}. In the intermediate range $0.2<\Gamma/t<2.2$ we find a crossover between the different scalings. We could only reproduce results consistent with those of Kontani {\it et al.} \cite{Kontani2007} by assuming a constant chemical potential fixed to its value in the clean limit, that is, if we neglect the scattering dependence of the chemical potential \eqref{eqn:muGamma} for fixed particle number $n=0.4$ within our calculation. In Fig.~\ref{fig:7} we plot the anomalous Hall conductivity as a function of the longitudinal conductivity. This representation is useful for comparison with experimental results, where the scattering dependence is not known explicitly. The result is both qualitatively and quantitatively in good agreement with experimental results for ferromagnets (see Ref.~\onlinecite{Onoda2008} and references therein). We find three regimes: In the clean regime we get $\sigma^{xy}\propto (\sigma^{xx})^0$, since the anomalous Hall conductivity becomes scattering-independent. In the dirty regime we have $\sigma^{xy}\propto (\sigma^{xx})^2$, which can be easily understood from the scaling behavior shown in Fig.~\ref{fig:6}. The black dashed lines are calculated explicitly via \eqref{eqn:WnInfty}-\eqref{eqn:WaDiffInfty}. We indicate the regime boundaries by gray lines that correspond to the conductivities at $\Gamma/t=0.2$ and $\Gamma/t=2.2$. In the intermediate regime, which corresponds to the crossover between the different scalings in Fig.~\ref{fig:6}, we get a good agreement with a scaling $\sigma^{xy}\propto (\sigma^{xx})^{1.6}$ (red dashed line). 
The scaling behavior $\sigma^{xy}\propto (\sigma^{xx})^{1.6}$ has been observed experimentally and discussed theoretically in various publications in recent years (see \cite{Onoda2006, Miyasato2007, Onoda2008, Kovalev2009, Xiong2010, Lu2013, Zhang2015, Sheng2020} and references therein). Within our theory we clearly identify the intermediate regime, $\sigma^{xx}\approx 100\sim 5000\,(\Omega\,\text{cm})^{-1}$, as a crossover regime not related to a (proper) scaling behavior. This is most prominently seen when we show the logarithmic derivative of the anomalous Hall conductivity as a function of the longitudinal conductivity in Fig.~\ref{fig:8} for different particle numbers $n=0.2,0.4,0.6$, next-nearest-neighbor hoppings $t'/t=0.1,0.2$ and spin-orbit couplings $\lambda/t=0.1,0.2$. We see a clear crossover from $\sigma^{xy}\propto(\sigma^{xx})^0$ to $\sigma^{xy}\propto(\sigma^{xx})^2$ in a range of $\sigma^{xx}=10\sim 30000\,(\Omega\,\text{cm})^{-1}$ (red vertical lines), which is even larger than estimated by the scales $\Gamma=\lambda=0.2\,t$ and $\Gamma=W=2.2\,t$ indicated by the gray lines in Fig.~\ref{fig:7}. This crossover regime is insensitive to the parameters over a broad range. Interestingly, various experimental results are found within the range $10\sim 30000\,(\Omega\,\text{cm})^{-1}$ (see Fig.~12 in Ref.~\cite{Onoda2008} for a summary). We have checked that a smooth crossover similar to the presented curve in Fig.~\ref{fig:7} qualitatively agrees with these experimental results within their uncertainty. Following the seminal work of Onoda {\it et al.} \cite{Onoda2006,Onoda2008}, which treated intrinsic and extrinsic contributions on equal footing, the experimental and theoretical investigation of the scaling including, for instance, vertex corrections, electron localization and quantum corrections from Coulomb interaction is still ongoing research \cite{Kovalev2009, Xiong2010, Lu2013, Zhang2015, Sheng2020} and is beyond the scope of this paper. 
\begin{figure} \centering \includegraphics[width=8cm]{fig8.pdf} \caption{The logarithmic derivative of the anomalous Hall conductivity $\sigma^{xy}$ as a function of the longitudinal conductivity $\sigma^{xx}$ for different particle numbers $n$, next-nearest-neighbor hoppings $t'/t$ and spin-orbit couplings $\lambda/t$. Between $\sigma^{xx}=10\sim3\times 10^4\,\si{(\ohm\,\cm)^{-1}}$ (red lines) we have a crossover regime between the scaling $\sigma^{xy}\propto(\sigma^{xx})^0$ in the clean limit and $\sigma^{xy}\propto(\sigma^{xx})^2$ in the dirty limit (gray lines). The crossover range is insensitive to the model parameters.\label{fig:8}} \end{figure} \subsection{Spiral magnetic order} \label{sec:examples:spiral} A finite momentum difference ${\bf Q}={\bf Q}_A-{\bf Q}_B$ between the two subsystems in the spinor \eqref{eqn:spinor} described by quantum numbers $\sigma=A,B$ breaks the lattice-translation invariance of the Hamiltonian \eqref{eqn:H}. However, the Hamiltonian is still invariant under a combined translation and rotation inside the subsystems $A$ and $B$ \cite{Sandratskii1998}. We discuss spiral spin density waves as a physical realization \cite{Shraiman1989, Kotov2004, Schulz1990, Kato1990, Fresard1991, Chubukov1992, Chubukov1995, Raczkowski2006, Igoshev2010, Igoshev2015, Yamase2016, Bonetti2020, Eberlein2016, Mitscherling2018}. We assume a two-dimensional tight-binding model on a square lattice with spin. The two subsystems are the two spin degrees of freedom, that is, $A=\,\,\uparrow$ and $B=\,\,\downarrow$ located at the lattice sites ${\bf R}_i$ with $\boldsymbol\rho_A=\boldsymbol\rho_B=0$. 
We set ${\bf Q}_A={\bf Q}$ and ${\bf Q}_B=0$ and assume a Bloch Hamiltonian \begin{align} \label{eqn:spiralH} \lambda_{\bf p}=\begin{pmatrix} \epsilon_{{\bf p}+{\bf Q}} & -\Delta \\[2mm] -\Delta & \epsilon_{\bf p} \end{pmatrix} \,, \end{align} where $\epsilon_{\bf p}=-2t(\cos p_x+\cos p_y)-4t'\cos p_x \cos p_y$, which includes nearest- and next-nearest-neighbor hopping $t$ and $t'$, respectively. We assume a real onsite coupling $\Delta$ between $|{\bf p}+{\bf Q},\uparrow\rangle$ and $|{\bf p},\downarrow\rangle$. This coupling leads to a nonzero onsite magnetic moment $\langle {\bf S}_i\rangle=\frac{1}{2}\sum_{\sigma,\sigma'}\langle c^\dag_{i,\sigma}\boldsymbol{\sigma}^{}_{\sigma\sigma'}c^{}_{i,\sigma'}\rangle=m\, \mathbf{n}_i$. The direction $\mathbf{n}_i$ lies in the $x$-$y$ plane and is given by \begin{align} \mathbf{n}_i=\begin{pmatrix}\cos ({\bf Q}\cdot{\bf R}_i)\\[1mm]-\sin ({\bf Q}\cdot {\bf R}_i)\\[1mm]0 \end{pmatrix}\,. \end{align} The magnetization amplitude $m$ is uniform and controlled by the coupling via \begin{align} m=-\frac{\Delta}{L} \sum_{\bf p}\int d\epsilon f(\epsilon) \frac{A^+_{\bf p}(\epsilon)-A^-_{\bf p}(\epsilon)}{E^+_{\bf p}-E^-_{\bf p}}\,, \end{align} where $E^n_{\bf p}$ are the two quasiparticle bands and $A^n_{\bf p}(\epsilon)$ are the quasiparticle spectral functions. In Fig.~\ref{fig:9} we show magnetization patterns $\langle{\bf S}_i\rangle$ for different ${\bf Q}$ on a square lattice. The magnetic moment of the form $\langle {\bf S}_i\rangle=m\,\mathbf{n}_i$ is the defining character of a spiral spin density wave, in contrast to collinear spin density waves with magnetic moments of the form $\langle {\bf S}_i\rangle=m_i\,\mathbf{n}$, where the direction remains constant but the length is modulated. Collinear spin density waves are not invariant under combined translation and spin rotation. The two special cases ${\bf Q}=(0,0)$ and ${\bf Q}=(\pi,\pi)$ correspond to ferromagnetic and N\'eel-antiferromagnetic order, respectively. 
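The direction field $\mathbf{n}_i$ can be sketched directly from the formula above; the checks below verify the ferromagnetic, N\'eel and $90^\circ$-spiral special cases (illustrative code, not part of the calculation):

```python
import numpy as np

def n_dir(R, Q):
    """In-plane direction of the local moment at lattice vector R for ordering vector Q."""
    phase = float(np.dot(Q, R))
    return np.array([np.cos(phase), -np.sin(phase), 0.0])

Q_neel = np.array([np.pi, np.pi])          # Q=(pi,pi): neighboring moments antiparallel
Q_fm = np.zeros(2)                         # Q=(0,0): all moments identical
Q_spiral = np.array([np.pi/2, np.pi/2])    # Q=(pi/2,pi/2): 90 degree rotation per site
```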
Otherwise, we refer to the order as purely spiral. For instance, ${\bf Q}=(\pi/2,\pi/2)$ describes a $90^\circ$ rotation per lattice site in both $x$ and $y$ direction, as shown in Fig.~\ref{fig:9}~(c). Due to the invariance under combined translation and spin rotation, this case can be described via \eqref{eqn:spiralH} without considering a four-times larger unit cell. The above form of the Hamiltonian also captures ordering vectors ${\bf Q}$ that are incommensurate with the underlying lattice, for which enlarging the unit cell to any size does not restore translation symmetry \cite{Sandratskii1998}. In Fig.~\ref{fig:9}~(d) we show such an incommensurate order with ${\bf Q}=(\pi/\sqrt{2},\pi/\sqrt{2})$. Spiral order with ${\bf Q}=(\pi-2\pi \eta,\pi)$ or symmetry-related ordering vectors with $\eta>0$ is found in the two-dimensional $t$-$J$ model \cite{Shraiman1989, Kotov2004} and in the two-dimensional Hubbard model \cite{Schulz1990, Kato1990, Fresard1991, Chubukov1992, Chubukov1995, Raczkowski2006, Igoshev2010, Igoshev2015, Yamase2016, Bonetti2020} by various theoretical methods. The magnetization patterns for $\eta=0$ and $\eta=0.025$ are visualized in Fig.~\ref{fig:9}~(a) and (b), respectively. \begin{figure} \centering \includegraphics[width=7.5cm]{fig9.pdf} \caption{The magnetization patterns $\langle {\bf S}_i\rangle\propto\mathbf{n}_i$ for different ordering vectors (a) ${\bf Q}=(\pi,\pi)$, (b) ${\bf Q}=(0.95\pi,\pi)$, (c) ${\bf Q}=(\pi/2,\pi/2)$ and (d) ${\bf Q}=(\pi/\sqrt{2},\pi/\sqrt{2})$ on a square lattice. \label{fig:9}} \end{figure} The real and constant coupling $\Delta$ in \eqref{eqn:spiralH} results in a momentum-independent angle $\varphi_{\bf p}=\pi$ of the spherical representation \eqref{eqn:lamPolar}. As a consequence, the Berry curvature \eqref{eqn:Omega} and, thus, the antisymmetric interband contributions \eqref{eqn:SinterAN} are identically zero. 
We calculate the diagonal and the (symmetric) off-diagonal conductivities $\sigma^{\alpha\beta}=\sigma^{\alpha\beta}_{\text{intra},+}+\sigma^{\alpha\beta}_{\text{intra},-}+\sigma^{\alpha\beta,s}_\text{inter}$ with $\alpha,\beta=x,y$ in an orthogonal basis $e_x$ and $e_y$ aligned with the underlying square lattice (see Fig.~\ref{fig:9}). We calculate the different contributions via \eqref{eqn:SintraN} and \eqref{eqn:SinterS} at zero temperature. The formulas for the conductivity and the (ordinary) Hall conductivity of Hamiltonian \eqref{eqn:spiralH} under the same assumptions on the scattering rate $\Gamma$ were derived by the author and Metzner recently \cite{Mitscherling2018}. They discussed the relevance of the symmetric interband contributions $\sigma^{xx,s}_{\text{inter}}$ and $\sigma^{yy,s}_{\text{inter}}$ of the longitudinal conductivity in the context of high-temperature superconductors, where spiral magnetic order of the form ${\bf Q}=(\pi-2\pi \eta,\pi)$ and symmetry related may explain experimental findings \cite{Badoux2016, Laliberte2016, Collignon2017}. In this specific application the interband contributions, which are beyond the standard Boltzmann transport theory, are irrelevant not due to a general argument comparing energy scales, $\Gamma/\Delta$, but due to the numerical prefactors of the material in question. The off-diagonal conductivity $\sigma^{xy}$ for ${\bf Q}=(\pi-2\pi \eta,\pi)$ vanishes after integration over momenta. We take a closer look at the conditions under which the off-diagonal conductivity $\sigma^{xy}$ vanishes. The off-diagonal intraband conductivity $\sigma^{xy}_{\text{intra},n}$ of the band $n=\pm$ involves the product of the two quasiparticle velocities $E^{x,n}_{\bf p} E^{y,n}_{\bf p}$ in $x$ and $y$ direction. Besides the trivial case of a constant quasiparticle band, we expect a nonzero product for almost all momenta. Thus, in general, $\sigma^{xy}_{\text{intra},n}$ vanishes only upon integration over momenta. 
Let us consider the special cases ${\bf Q}=(Q,0)$ and ${\bf Q}=(Q,\pi)$, where we fixed the $y$ component to $0$ or $\pi$. The $x$ component is arbitrary. The following argument also holds for a fixed $x$ and arbitrary $y$ component. These two special cases include ferromagnetic $(0,0)$, N\'eel-antiferromagnetic $(\pi,\pi)$ and the order $(\pi-2\pi \eta,\pi)$ found in the Hubbard model. For the above ${\bf Q}$, the two quasiparticle bands $E^n_{\bf p}$ are symmetric under reflection on the $x$ axis, that is, $E^n(p_x,-p_y)=E^n(p_x,p_y)$. Thus, the momentum-resolved integrand of the off-diagonal conductivity is antisymmetric, $\sigma^{xy}(p_x,-p_y)=-\sigma^{xy}(p_x,p_y)$, which leads to a vanishing off-diagonal conductivity upon integration over momenta. \begin{figure} \centering \includegraphics[width=8cm]{fig10.pdf} \caption{The relative angle between the principal axes and the ordering vector ${\bf Q}\propto (\cos\Theta_{\bf Q},\sin \Theta_{\bf Q})$ as a function of $\Theta_{\bf Q}$ for $t'/t=0.1$, $\Delta/t=1$, $\Gamma/t=0.05$, $n=0.2$ and different lengths $|{\bf Q}|$. Both axes are aligned for $0^\circ$, $\pm 90^\circ$ and $\pm 180^\circ$, since $\sigma^{xy}$ vanishes, as well as for $\pm 45^\circ$ and $\pm 135^\circ$, since the diagonal elements $\sigma^{xx}=\sigma^{yy}$ are equal (vertical lines). \label{fig:10}} \end{figure} As discussed in Sec.~\ref{sec:discussion:anomalousHall}, a non-diagonal symmetric conductivity matrix $\sigma=(\sigma^{\alpha\beta})$ due to nonzero off-diagonal conductivities $\sigma^{xy}=\sigma^{yx}$ can be diagonalized by a rotation of the coordinate system. For instance, we considered the basis vectors $e_x$ and $e_y$ aligned with the underlying square lattice (see Fig.~\ref{fig:9}). In our two-dimensional case we describe the rotation of the basis by an angle $\Theta$. 
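The reflection symmetry $E^n(p_x,-p_y)=E^n(p_x,p_y)$ for ${\bf Q}=(Q,0)$ can be checked numerically for the Bloch Hamiltonian \eqref{eqn:spiralH}; the hopping and coupling values below are illustrative:

```python
import numpy as np

def spiral_bands(px, py, Q, t=1.0, tp=0.1, delta=1.0):
    """Quasiparticle bands of the spiral Bloch Hamiltonian with square-lattice dispersion."""
    def eps(kx, ky):
        return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
    lam = np.array([[eps(px + Q[0], py + Q[1]), -delta],
                    [-delta, eps(px, py)]])
    return np.linalg.eigvalsh(lam)   # ascending band energies E^-, E^+

# Q = (Q, 0): both diagonal entries are even in p_y, so the bands are reflection symmetric
Q_sym = np.array([0.7, 0.0])
```

For a generic ordering vector with nonzero $y$ component the symmetry is absent, and the antisymmetry argument for $\sigma^{xy}$ does not apply.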
In Fig.~\ref{fig:10} we plot the difference between the rotation angle $\Theta_\text{axis}$ that diagonalizes the conductivity matrix $\sigma$ and the direction of the ordering vector ${\bf Q}\propto (\cos\Theta_{\bf Q},\sin\Theta_{\bf Q})$ as a function of $\Theta_{\bf Q}$ for $t'/t=0.1$, $\Delta/t=1$, $\Gamma/t=0.05$ and $n=0.2$ at different lengths $|{\bf Q}|$. The chemical potential is adjusted accordingly. We see that both directions are close to each other but not necessarily equal, with a maximal deviation of a few degrees. The angles $\Theta_{\bf Q}=0^\circ,\,\pm 90^\circ,\,\pm 180^\circ$ correspond to the cases of vanishing $\sigma^{xy}$ discussed above, so that the rotated basis axes are parallel to the original $e_x$ and $e_y$ axes. At the angles $\Theta_{\bf Q}=\pm 45^\circ,\,\pm 135^\circ$ the ordering vector ${\bf Q}$ is of the form $(Q,Q)$. Thus, the $x$ and $y$ directions are equivalent, which results in equal diagonal conductivities $\sigma^{xx}=\sigma^{yy}$. A $2\times 2$ conductivity matrix $\sigma$ with equal diagonal elements is diagonalized by rotations with angles $\Theta_\text{axis}=\pm 45^\circ,\,\pm 135^\circ$ independent of the precise values of the entries and, thus, independent of the length of ${\bf Q}$. These angles are indicated by vertical lines. \begin{figure} \centering \includegraphics[width=7.5cm]{fig11.pdf} \caption{The off-diagonal conductivity $\sigma^{xy}$ as a function of ${\bf Q}=(Q,Q)$ for $t'/t=0.1$, $\Delta/t=1$, $\Gamma/t=0.05$ and different particle numbers $n=0.1,0.2,0.3$. \label{fig:11}} \end{figure} In the following, we focus on the special case of ordering vector ${\bf Q}=(Q,Q)$. The conductivity matrix is diagonal within the basis $(e_x\pm e_y)/\sqrt{2}$, which corresponds to both diagonal directions in Fig.~\ref{fig:9}. The longitudinal conductivities are $\sigma^{xx}\pm \sigma^{xy}$ with $\sigma^{xx}=\sigma^{yy}$. 
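That a symmetric $2\times2$ matrix with equal diagonal entries is diagonalized by a $45^\circ$ rotation, with eigenvalues $\sigma^{xx}\pm\sigma^{xy}$, is a two-line check (the matrix entries are illustrative values):

```python
import numpy as np

# symmetric conductivity matrix with equal diagonal entries, as for Q = (Q, Q)
sxx, sxy = 2.0, 0.7
sigma = np.array([[sxx, sxy],
                  [sxy, sxx]])

# rotation by 45 degrees onto the basis (e_x + e_y)/sqrt(2), (e_x - e_y)/sqrt(2)
th = np.pi/4
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
diag = R.T @ sigma @ R   # diag(sxx + sxy, sxx - sxy), independent of the entry values
```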
Thus, the presence of spiral magnetic order results in an anisotropy (or ``nematicity'') of the longitudinal conductivity as pointed out previously \cite{Mitscherling2018,Bonetti2020}. The strength of the anisotropy is given by $2\sigma^{xy}$ for ${\bf Q}=(Q,Q)$. In Fig.~\ref{fig:11} we show $\sigma^{xy}$ as a function of ${\bf Q}=(Q,Q)$ for $t'/t=0.1$, $\Delta/t=1$, $\Gamma/t=0.05$ and different particle numbers $n=0.1,\,0.2,\,0.3$. The chemical potential is adjusted accordingly. The values $|(\pi/\sqrt{2},\pi/\sqrt{2})|=\pi$ and $|(\pi/2,\pi/2)|=\pi/\sqrt{2}$ correspond to the cases presented in Fig.~\ref{fig:10}. We see that the anisotropy vanishes for ferromagnetic $(0,0)$ and N\'eel-antiferromagnetic $(\pi,\pi)$ order as expected. The largest anisotropy for the presented set of parameters occurs close to ${\bf Q}=(\pi/2,\pi/2)$. In Fig.~\ref{fig:9} (a), (c) and (d) we show the corresponding magnetization patterns. In Fig.~\ref{fig:12} we show the off-diagonal conductivity, that is the anisotropy, and its three different contributions as a function of the chemical potential $\mu/t$ for $t'/t=0.1$, ${\bf Q}=(\pi/\sqrt{2},\pi/\sqrt{2})$, $\Delta/t=2$ and $\Gamma/t=1$. The overall size is reduced compared to the previous examples by approximately one order of magnitude, as expected from the scaling $\sigma^{xy}\propto 1/\Gamma$. As we vary the chemical potential, we get nonzero conductivity within the bandwidth given by approximately $-4.9\,t$ to $4.2\,t$. Both the off-diagonal conductivity and its different contributions take positive and negative values in contrast to the diagonal conductivities. For $\Delta/t=2$ we have a band gap between $-0.3\,t$ and $0.1\,t$ with nonzero conductivities due to the large value of $\Gamma$. We see that for negative and positive chemical potential outside the gap, $\sigma^{xy}$ is mainly given by the contribution of the lower band $\sigma^{xy}_{\text{intra},-}$ or upper band $\sigma^{xy}_{\text{intra},+}$, respectively.
Inside the gap, we have both contributions of the two bands due to smearing effects and the symmetric interband contribution $\sigma^{xy,s}_\text{inter}$, which are all comparable in size. The overall behavior is very similar to the diagonal conductivity presented in Fig.~\ref{fig:4} for another model, since both results have the same origin in the intraband and the symmetric interband contributions of the conductivity. \begin{figure} \centering \includegraphics[width=7.5cm]{fig12.pdf} \caption{The off-diagonal conductivity $\sigma^{xy}$ and its nonzero contributions as a function of the chemical potential $\mu/t$ for $t'/t=0.1$, ${\bf Q}=(\pi/\sqrt{2},\pi/\sqrt{2})$, $\Delta/t=2$ and $\Gamma/t=1$. The vertical lines indicate the bandwidth and the band gap. \label{fig:12}} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{fig13.pdf} \caption{The diagonal (blue) and off-diagonal (orange) conductivity as a function of $\Gamma/t$ for $t'/t=0.1$, $\Delta/t=1$ and ${\bf Q}=(\pi/2,\pi/2)$ at $n=0.2$. The calculated limiting behaviors in the clean and dirty limit are indicated by dashed lines. \label{fig:13}} \end{figure} In Fig.~\ref{fig:13} we show the diagonal (blue) and off-diagonal (orange) conductivity as a function of the scattering rate $\Gamma/t$ for $t'/t=0.1$, $\Delta/t=1$ and ${\bf Q}=(\pi/2,\pi/2)$ at $n=0.2$. We fixed the particle number by calculating the chemical potential at each $\Gamma$. In the clean limit both $\sigma^{xx}$ and $\sigma^{xy}$ scale like $1/\Gamma$ as expected for intraband contributions \eqref{eqn:winG0} (dashed lines). In Sec.~\ref{sec:discussion:limits} we showed that both the diagonal and the off-diagonal conductivities scale like $\Gamma^{-2}$ at leading order due to their intraband character. However, for the considered parameters the diagonal conductivity $\sigma^{xx}$ scales like $\Gamma^{-2}$, whereas the off-diagonal conductivity $\sigma^{xy}$ scales like $\Gamma^{-4}$.
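This apparent change of the scaling exponent can be mimicked with a toy expansion $\sigma(\Gamma)=a\,\Gamma^{-2}+b\,\Gamma^{-4}$, where the tiny leading prefactor $a$ is an assumption standing in for the suppression discussed in the text: in a finite window of $\Gamma$ the fitted log-log slope is $-4$, while the true asymptotic power $-2$ only emerges far beyond that window.

```python
import numpy as np

# Toy dirty-limit expansion sigma(Gamma) = a/Gamma^2 + b/Gamma^4 with a
# deliberately tiny leading prefactor a (an assumption, mimicking the
# suppression of the Gamma^{-2} term discussed in the text).
a, b = 1e-8, 1.0

# numerically "accessible" window: apparent slope is -4
gamma = np.logspace(0.5, 1.5, 50)
sigma = a / gamma**2 + b / gamma**4
slope = np.polyfit(np.log(gamma), np.log(sigma), 1)[0]
assert abs(slope + 4.0) < 0.05

# far in the dirty limit the true asymptotic power -2 is recovered
gamma_far = np.logspace(5, 6, 50)
sigma_far = a / gamma_far**2 + b / gamma_far**4
slope_far = np.polyfit(np.log(gamma_far), np.log(sigma_far), 1)[0]
assert abs(slope_far + 2.0) < 0.05
```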
The dashed lines are calculated via \eqref{eqn:WnInfty} for the respective order. The off-diagonal conductivity eventually scales like $\Gamma^{-2}$ for $\Gamma$ far beyond the numerically accessible range due to very small prefactors in the expansion. We explicitly see that a precise analysis of the individual prefactors of the expansion in the dirty limit as discussed in Sec.~\ref{sec:discussion:limits} is useful for understanding this unexpected scaling behavior. \section{Conclusion} \label{sec:conclusion} We presented a complete microscopic derivation of the longitudinal conductivity and the anomalous Hall conductivity for a general momentum-block-diagonal two-band model. We performed our derivation for finite temperature and a constant scattering rate $\Gamma$ that is diagonal and equal for both bands, but may be arbitrarily large. The derivation was combined with a systematic analysis of the underlying structure of the involved quantities, which led to the identification of two fundamental criteria for a unique and physically motivated decomposition of the conductivity formulas. {\it Intraband} and {\it interband} contributions are defined by the involved quasiparticle spectral functions of one or both bands, respectively. {\it Symmetric} and {\it antisymmetric} contributions are defined by the symmetry under the exchange of the current and the electric field directions. We showed that the different contributions have distinct physical interpretations. The (symmetric) intraband contributions of the lower and the upper band \eqref{eqn:SintraN} capture the conductivity due to independent quasiparticles, which reduces to the result of standard Boltzmann transport theory \cite{Mahan2000} in the clean (small $\Gamma$) limit. Interband coherence effects beyond independent quasiparticles are described by the interband contributions.
The symmetric interband contribution \eqref{eqn:SinterS} is a correction due to finite $\Gamma$ and caused by a nontrivial quantum metric. The antisymmetric interband contributions of the lower and the upper band \eqref{eqn:SinterAN} are caused by the Berry curvature and describe the intrinsic anomalous Hall effect. They generalize the broadly used formula by Onoda {\it et al.} \cite{Onoda2006} and Jungwirth {\it et al.} \cite{Jungwirth2002} to finite $\Gamma$. We found that the interband contributions are controlled by the quantum geometric tensor of the underlying eigenbasis manifold. Thus, we provided the geometric interpretation of the symmetric interband contribution, which was analyzed in detail in the context of spiral magnetic order \cite{Mitscherling2018} but whose connection to the quantum metric was not noticed before. It might be an interesting question how those or further concepts of quantum geometry can be connected to transport phenomena. Our microscopic derivation suggests that the precise way in which those concepts have to be included in other transport quantities is nontrivial. By performing a derivation for $\Gamma$ of arbitrary size, we were able to discuss the clean (small $\Gamma$) and dirty limit (large $\Gamma$) analytically. The dependence on $\Gamma$ of each contribution was shown to be captured entirely by a specific product of the quasiparticle spectral functions of the lower and upper band. In the clean limit we recovered the expected $1/\Gamma$ scaling \cite{Mahan2000} of the intraband conductivities and the constant (or ``dissipationless'' \cite{Nagaosa2010}) limit of the intrinsic anomalous Hall conductivity. For large $\Gamma$, we showed that some orders of the conductivity contributions might vanish or be strongly suppressed when integrating over momenta. We provided the precise prefactors of the expansion, which might be helpful for the analysis of unexpected scaling behaviors.
We suggested a different definition of the Fermi-sea and Fermi-surface contributions of the conductivity than previously proposed by Streda \cite{Streda1982}. We based our definition on the symmetry under exchange of the current and the electric field directions. We found that the symmetric parts \eqref{eqn:SintraN} and \eqref{eqn:SinterS} and antisymmetric part \eqref{eqn:SinterAN} of the conductivity involve the derivative of the Fermi function and the Fermi function itself, respectively, when entirely expressed in terms of quasiparticle spectral functions. The same decomposition naturally arises when decomposing the Bastin formula \cite{Bastin1971} into its symmetric and antisymmetric part. The symmetry under exchange of the current and the electric field directions might also help to identify useful decompositions of the conductivity with distinct physical interpretation and properties beyond the scope of this paper. During the derivation, the conductivity involves the matrix trace over the two subsystems of the two-band model. In general, the evaluation of this matrix trace may lead to numerous terms and, thus, may make an analytical treatment tedious. We presented the analysis of the involved matrices with respect to their behavior under transposition as a useful strategy to reduce this computational effort. Thus, our derivation strategy may be useful for an analytical treatment of multiband systems beyond our two-band system or for higher expansions in electric and magnetic fields. We presented different examples capturing the broad application range of our general model. We discussed the quantized anomalous Hall conductivity for a Chern insulator at finite $\Gamma$ and showed that the quantization is no longer present for large $\Gamma$ due to the contribution of the former unoccupied band. 
We analyzed the scaling behavior of the anomalous Hall conductivity with respect to the longitudinal conductivity $\sigma^{xy}\propto (\sigma^{xx})^\nu$ for a ferromagnetic multi-d-orbital model. Our results are qualitatively and quantitatively in good agreement with experimental findings (see Ref.~\cite{Onoda2008} for an overview). Whereas there is a proper scaling of the anomalous Hall conductivity with $\nu=0$ and $\nu=2$ in the clean and dirty limit, respectively, we identified a crossover regime without a proper scaling behavior for intermediate conductivities $\sigma^{xx}\approx 10$--$30000\,(\Omega\,\text{cm})^{-1}$, in which various ferromagnets were found. The treatment of intrinsic and extrinsic contributions on equal footing as well as the experimental and theoretical investigation of the scaling including, for instance, vertex correction, electron localization and quantum corrections from Coulomb interaction is still ongoing research \cite{Onoda2006, Onoda2008, Kovalev2009, Xiong2010, Lu2013, Zhang2015, Sheng2020} and beyond the scope of this paper. We discussed spiral spin density waves as an example of a system with broken lattice-translation but combined lattice-translation and spin-rotation symmetry, which is captured by our general model. We showed that the presence of spiral magnetic order can lead to a (symmetric) off-diagonal conductivity in spite of the underlying square lattice, which results in an anisotropic longitudinal conductivity in a rotated coordinate system. \begin{acknowledgments} I would like to thank A.~Leonhardt, M.~M.~Hirschmann and W.~Metzner for various enlightening discussions and stimulating comments as well as for carefully proofreading the manuscript. I am grateful to K.~F\"ursich, R.~Nourafkan, V.~Scarola, A.~Schnyder, J.~S\'ykora, and A.-M.~Tremblay for helpful discussions at different stages of this work.
\end{acknowledgments} \begin{appendix} \section{Peierls substitution} \label{appendix:peierls} \subsection{Hopping in real space} The Peierls substitution adds a phase factor to the hoppings in real space. Thus, in order to apply \eqref{eqn:Peierls} we Fourier transform the diagonal elements $\epsilon_{{\bf p},\sigma}$ of the two subsystems $\sigma=A,B$ and the coupling between these two systems $\Delta_{\bf p}$ of Hamiltonian \eqref{eqn:H} to real space. The Fourier transformations between the creation operators $c^{}_{i,\sigma}$ and $c^{}_{{\bf p},\sigma}$ were defined in \eqref{eqn:FourierC} and \eqref{eqn:FourierCInv}. The intraband hopping $t_{jj',\sigma}\equiv t_{jj',\sigma\sigma}$ of one subsystem, which is defined by \begin{align} \sum_{\bf p} c^\dag_{{\bf p}+{\bf Q}_\sigma,\sigma}\epsilon^{}_{{\bf p},\sigma} c^{}_{{\bf p}+{\bf Q}_\sigma,\sigma}=\sum_{jj'}c^\dag_{j,\sigma}t^{}_{jj',\sigma}c^{}_{j',\sigma} \, , \end{align} is given by \begin{align} \label{eqn:tijsigma} t_{jj',\sigma}=\left(\frac{1}{L}\sum_{\bf p}\epsilon_{{\bf p},\sigma}\,e^{i{\bf r}_{jj'}\cdot{\bf p}}\right)\,e^{i{\bf r}_{jj'}\cdot {\bf Q}_\sigma}\, . \end{align} We see that the intraband hopping is only a function of the difference between unit cells ${\bf r}_{jj'}={\bf R}_j-{\bf R}_{j'}$. The fixed offset ${\bf Q}_\sigma$ leads to a phase shift. The interband hopping $t_{jj',AB}$ between the two subsystems, which is defined by \begin{align} \sum_{\bf p} c^\dag_{{\bf p}+{\bf Q}_A,A}\Delta^{}_{\bf p} c^{}_{{\bf p}+{\bf Q}_B,B}=\sum_{jj'}c^\dag_{j,A}t^{}_{jj',AB}c^{}_{j',B} \, , \end{align} is given by \begin{align} \label{eqn:tijAB} t_{jj',AB}=&\left(\frac{1}{L}\sum_{\bf p}\Delta_{\bf p}\,e^{i{\bf p}\cdot({\bf r}_{jj'}+\boldsymbol\rho_A-\boldsymbol\rho_B)}\right)\,e^{i{\bf r}_{jj'}\cdot({\bf Q}_A+{\bf Q}_B)/2}\nonumber\\ &\times e^{i{\bf R}_{jj'}\cdot({\bf Q}_A-{\bf Q}_B)}\,e^{i(\boldsymbol\rho_A\cdot{\bf Q}_A-\boldsymbol\rho_B\cdot{\bf Q}_B)} \, .
\end{align} We see that it is both a function of ${\bf r}_{jj'}$ and the mean position between unit cells ${\bf R}_{jj'}=({\bf R}_j+{\bf R}_{j'})/2$, which breaks translational invariance and is linked to unequal ordering vectors ${\bf Q}_A\neq{\bf Q}_B$. Similarly to \eqref{eqn:tijsigma} we have different phase shifts due to $\boldsymbol\rho_\sigma$ and ${\bf Q}_\sigma$. These phases are necessary to obtain a consistent result in the following. \subsection{Derivation of electromagnetic vertex $\mathscr{V}_{{\bf p}\bp'}$} We derive the Hamiltonian $H[{\bf A}]$ given in \eqref{eqn:HA} after Peierls substitution. We omit the time dependence of the vector potential ${\bf A}[{\bf r}]\equiv {\bf A}({\bf r},t)$ for a shorter notation in this section. The Peierls substitution \eqref{eqn:Peierls} of the diagonal and off-diagonal elements of $\lambda_{jj'}$ defined in \eqref{eqn:FourierH} and calculated in \eqref{eqn:tijsigma} and \eqref{eqn:tijAB} in the long-wavelength regime read \begin{align} \label{eqn:Peierls1} &t_{jj',\sigma}\rightarrow t_{jj',\sigma}\,e^{-ie{\bf A}[{\bf R}_{jj'}+\boldsymbol\rho_\sigma]\cdot {\bf r}_{jj'}} \, ,\\ &t_{jj',AB}\rightarrow t_{jj',AB}\,e^{-ie{\bf A}[{\bf R}_{jj'}+\frac{\boldsymbol\rho_A+\boldsymbol\rho_B}{2}]\cdot\big({\bf r}_{jj'}+\boldsymbol\rho_A-\boldsymbol\rho_B\big)} \, . \end{align} As a first step, we consider the diagonal elements. We expand the exponential of the hopping $t^{\bf A}_{jj',\sigma}$ after Peierls substitution \eqref{eqn:Peierls1} and Fourier transform the product of vector potentials $\big({\bf A}[{\bf R}_{jj'}+\boldsymbol\rho_\sigma]\cdot {\bf r}_{jj'}\big)^n$ via \eqref{eqn:FourierAq}. We get \begin{align} t^{\bf A}_{jj',\sigma}=&\sum_n\frac{(-i e)^n}{n!}\sum_{{\bf q}_1...{\bf q}_n}t_{jj',\sigma}\nonumber\\&\times e^{i\sum_m{\bf q}_m\cdot({\bf R}_{jj'}+\boldsymbol\rho_\sigma)}\prod^n_m{\bf r}_{jj'}\cdot{\bf A}_{{\bf q}_m}\,.
\end{align} After insertion of the hopping \eqref{eqn:tijsigma} we Fourier transform $t^{\bf A}_{jj',\sigma}$ back to momentum space defining $\epsilon^{\bf A}_{{\bf p}\bp',\sigma}$ via \begin{align} \sum_{jj'}c^\dag_{j,\sigma}t^{\bf A}_{jj',\sigma}c^{}_{j',\sigma}=\sum_{{\bf p}\bp'} c^\dag_{{\bf p}+{\bf Q}_\sigma,\sigma}\epsilon^{\bf A}_{{\bf p}\bp',\sigma} c^{}_{{\bf p}'+{\bf Q}_\sigma,\sigma} \, . \end{align} The summation over ${\bf R}_{jj'}$ leads to momentum conservation. The phase factor proportional to the position $\boldsymbol\rho_\sigma$ inside the unit cell cancels. During the calculation we can identify \begin{align} &-\frac{i}{L}\sum_{\bf p}\sum_{{\bf r}_{jj'}}\epsilon_{{\bf p},\sigma}\, e^{i{\bf r}_{jj'}\cdot({\bf p}-{\bf p}_0)}\,\big({\bf r}_{jj'}\cdot {\bf A}_{\bf q}\big)\nonumber\\ &=\sum_{\alpha=x,y,z}\left.\frac{\partial\epsilon_{{\bf p},\sigma}}{\partial p^\alpha}\right|_{{\bf p}={\bf p}_0}A^\alpha_{\bf q} \end{align} as the derivative of the band $\epsilon_{{\bf p},\sigma}$ at ${\bf p}_0=({\bf p}+{\bf p}')/2$. We continue with the off-diagonal element. The derivation of $\Delta^{\bf A}_{{\bf p}\bp'}$ is analogous to the derivation above. The phase factors in \eqref{eqn:tijAB} ensure that we can identify the derivative of the interband coupling $\Delta_{\bf p}$ via \begin{align} &-\frac{i}{L}\sum_{\bf p}\sum_{{\bf r}_{jj'}}\Delta_{\bf p}\, e^{i({\bf r}_{jj'}+\boldsymbol\rho_A-\boldsymbol\rho_B)\cdot({\bf p}-{\bf p}_0)}\nonumber\\&\times\big({\bf r}_{jj'}+\boldsymbol\rho_A-\boldsymbol\rho_B\big)\cdot {\bf A}_{\bf q} =\sum_{\alpha=x,y,z}\left.\frac{\partial\Delta_{\bf p}}{\partial p^\alpha}\right|_{{\bf p}={\bf p}_0}A^\alpha_{\bf q} \, . \end{align} As in the diagonal case, the summation over ${\bf R}_{jj'}$ leads to momentum conservation and additional phase factors drop out. Finally, we write the result in matrix form and separate the zeroth element of the exponential expansion.
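As a quick numerical sanity check of \eqref{eqn:tijsigma} in a simplified setting (a one-dimensional chain with ${\bf Q}_\sigma=0$ and a made-up dispersion, not the paper's two-dimensional model), Fourier transforming $\epsilon_p=-2t\cos p$ back to real space recovers nearest-neighbor hoppings $t_{r=\pm 1}=-t$ and nothing else:

```python
import numpy as np

# 1D toy version of the hopping formula (Q_sigma = 0, chain of L sites):
# the inverse Fourier transform of eps_p = -2 t cos(p) gives t_{r=+-1} = -t.
L, t = 64, 1.0
p = 2.0 * np.pi * np.arange(L) / L
eps = -2.0 * t * np.cos(p)

r = np.arange(-L // 2, L // 2)
# t_r = (1/L) sum_p eps_p e^{i r p}; no extra phase since Q_sigma = 0
t_r = np.array([np.sum(eps * np.exp(1j * ri * p)) / L for ri in r])
hop = dict(zip(r, np.round(t_r.real, 12)))

assert np.isclose(hop[1], -t) and np.isclose(hop[-1], -t)
assert np.isclose(hop[0], 0.0) and np.isclose(hop[2], 0.0)
```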
We end up with \eqref{eqn:HA} and electromagnetic vertex $\mathscr{V}_{{\bf p}\bp'}$ given in \eqref{eqn:Vpp'}. \section{Current} \label{appendix:current} \subsection{Derivation} Since the action $S[\Psi,\Psi^*]$ in \eqref{eqn:S} is quadratic in the Grassmann fields $\Psi$ and $\Psi^*$, the Gaussian path integral leads to the partition function $Z=\det\big(\mathscr{G}^{-1}-\mathscr{V}\big)$, where the Green's function $\mathscr{G}$ and the electromagnetic vertex $\mathscr{V}$ are understood as matrices in both Matsubara frequencies and momenta. The grand canonical potential $\Omega$ is related to the partition function via $\Omega=-T\ln Z$ with temperature $T$ and $k_B=1$. We factor out the part that is independent of the vector potential, that is $\Omega_0=-T\,\text{Tr} \ln \mathscr{G}^{-1}$, and expand the logarithm $\ln(1-x)=-\sum_n x^n/n$ of the remaining part. We obtain \begin{align} \label{eqn:OmegaExpansion} \Omega[{\bf A}]=\Omega_0+T\sum_{n=1}^\infty \frac{1}{n}\text{Tr}\big(\mathscr{G}\mathscr{V}\big)^n \, . \end{align} Using the definition of the Green's function and the vertex in \eqref{eqn:Vpp'} and \eqref{eqn:Green}, one can check explicitly that $\Omega[{\bf A}]$ is real at every order in $n$. The current $j^\alpha_q$ in direction $\alpha=x,y,z$ and Matsubara frequency and momentum $q=(iq_0,{\bf q})$ is given as the functional derivative of the grand canonical potential with respect to the vector potential, $j^\alpha_q=-1/L\, \delta\Omega[{\bf A}]/\delta A^\alpha_{-q}$. The Green's function $\mathscr{G}$ has no dependence on the vector potential. We denote the derivative of the electromagnetic vertex, i.e.\ the current vertex, by $\dot \mathscr{V}^\alpha_q = -1/L\,\delta \mathscr{V}/\delta A^\alpha_{-q}$.
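The trace-log expansion behind \eqref{eqn:OmegaExpansion} can be verified numerically with small random toy matrices (generic numbers, unrelated to the physical Green's function and vertex), using $\ln\det(\mathscr{G}^{-1}-\mathscr{V})=\ln\det\mathscr{G}^{-1}-\sum_n \text{Tr}\,(\mathscr{G}\mathscr{V})^n/n$, which converges for sufficiently small $\mathscr{V}$:

```python
import numpy as np

# Toy check of ln det(Ginv - V) = ln det(Ginv) - sum_n Tr[(G V)^n]/n,
# valid when the spectral radius of G V is below one. The matrices are
# random toy data, not the physical Green's function and vertex.
rng = np.random.default_rng(0)
dim = 4
Ginv = 2.0 * np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
V = 0.05 * rng.standard_normal((dim, dim))
G = np.linalg.inv(Ginv)

lhs = np.log(np.linalg.det(Ginv - V))

series = np.log(np.linalg.det(Ginv))
term = G @ V
for n in range(1, 30):
    series -= np.trace(term) / n
    term = term @ (G @ V)

assert abs(lhs - series) < 1e-10
```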
We expand $\Omega[{\bf A}]$ in \eqref{eqn:OmegaExpansion} up to second order and obtain \begin{align} \label{eqn:jexp} j^\alpha_q=T\,\text{Tr}\big(\mathscr{G}\dot \mathscr{V}^\alpha_q\big)+T\,\text{Tr}\big(\mathscr{G}\dot \mathscr{V}^\alpha_q \mathscr{G}\mathscr{V}\big)+... \, , \end{align} where we used the cyclic property of the trace to recombine the terms of second order. Both the electromagnetic vertex $\mathscr{V}$ and the current vertex $\dot \mathscr{V}^\alpha_q$ are series in the vector potential. We expand the current up to first order in the vector potential. The expansion of the electromagnetic vertex $\mathscr{V}$ is given in \eqref{eqn:Vpp'}. The expansion of the current vertex reads \begin{align} \dot \mathscr{V}^\alpha_{q,pp'}=&-\frac{e}{L}\sum^\infty_{n=0}\frac{e^n}{n!}\sum_{\substack{q_1...q_n \\ \alpha_1...\alpha_n}} \lambda^{\alpha\alpha_1...\alpha_n}_{\frac{p+p'}{2}} \nonumber\\&\times A^{\alpha_1}_{q_1}...A^{\alpha_n}_{q_n}\,\delta_{\sum_m q_m,p-p'+q} \,. \end{align} Note that the current vertex $\dot \mathscr{V}^\alpha_{q,pp'}$ has a zeroth order, which is independent of the vector potential, whereas the electromagnetic vertex $\mathscr{V}_{pp'}$ is at least linear in the vector potential. Thus, the first contribution in \eqref{eqn:jexp} leads to two contributions that are \begin{align} \label{eqn:jexp1} \text{Tr}\big(\mathscr{G}\dot \mathscr{V}^\alpha_q\big)=&-e\frac{T}{L}\sum_p \text{tr}\big[\mathscr{G}^{}_p\lambda^\alpha_p\big]\delta^{}_{q,0}\nonumber\\&-\sum_\beta e\frac{T}{L}\sum_p \text{tr}\big[\mathscr{G}^{}_p\lambda^{\alpha\beta}_p \big]A^\beta_q \, . \end{align} The first term is known as {\it paramagnetic current}, which is a current without any external field. The second term is known as {\it diamagnetic} contribution.
The second contribution in \eqref{eqn:jexp} up to linear order in the vector potential gives \begin{align} \label{eqn:jexp2} \text{Tr}\big(\mathscr{G}\dot \mathscr{V}^\alpha_q \mathscr{G}\mathscr{V}\big)=-e\sum_\beta\frac{T}{L}\sum_p \text{tr}\big[\mathscr{G}^{}_p\lambda^\alpha_{p+\frac{q}{2}}\mathscr{G}^{}_{p+q}\lambda^\beta_{p+\frac{q}{2}}\big]A^\beta_q \, . \end{align} This term is known as {\it paramagnetic} contribution. In a final step we combine the diamagnetic and paramagnetic contribution. In \eqref{eqn:jexp1} we use the definition $\lambda^{\alpha\beta}_{\bf p}=\partial_\alpha \lambda^\beta_{\bf p}$ in \eqref{eqn:DlamDef} and integrate by parts in the momentum integration. The derivative of the Green's function is $\partial^{}_\alpha \mathscr{G}^{}_{ip_0,{\bf p}}=\mathscr{G}^{}_{ip_0,{\bf p}}\,\lambda^\alpha_{\bf p} \,\mathscr{G}^{}_{ip_0,{\bf p}}$, which follows from \eqref{eqn:Green}. We see that the diamagnetic contribution is the $q=0$ contribution of \eqref{eqn:jexp2}. By defining $\Pi^{\alpha\beta}_q$ in \eqref{eqn:defPi} we can read off \eqref{eqn:PiFull}. \subsection{Absence of the paramagnetic current} The first term of \eqref{eqn:jexp1} is independent of the vector potential and, thus, describes a paramagnetic current \begin{align} j^\alpha_\text{para}=-e\frac{T}{L}\sum_p \text{tr}\big[\mathscr{G}^{}_p\lambda^\alpha_p\big]\delta^{}_{q,0} \end{align} without any external source. We perform the Matsubara summation and diagonalize the Bloch Hamiltonian $\lambda_{\bf p}$.
The paramagnetic current reads \begin{align} j^\alpha_{\text{para}}=-\frac{e}{L}\sum_{\bf p}\int d\epsilon\,f(\epsilon) \sum_{n=\pm} A^n_{\bf p}(\epsilon) E^{n,\alpha}_{\bf p} \, , \end{align} involving the Fermi function $f(\epsilon)$, the quasiparticle velocities $E^{n,\alpha}_{\bf p}=\partial_\alpha E^n_{\bf p}$ of the two quasiparticle bands $E^\pm_{\bf p}=\frac{1}{2}(\epsilon_{{\bf p},A}+\epsilon_{{\bf p},B})\pm\sqrt{\frac{1}{4}(\epsilon_{{\bf p},A}-\epsilon_{{\bf p},B})^2+|\Delta_{\bf p}|^2}$ and the spectral functions $A^\pm_{\bf p}(\epsilon)=\frac{\Gamma/\pi}{(\epsilon-E^\pm_{\bf p})^2+\Gamma^2}$. In general, the different contributions at fixed momentum ${\bf p}$ are nonzero. If the quasiparticle bands fulfill $E^\pm({\bf p})=E^\pm(-{\bf p}-{\bf p}^\pm)$ for a fixed momentum ${\bf p}^\pm$, we have $E^{\pm,\alpha}({\bf p})=-E^{\pm,\alpha}(-{\bf p}-{\bf p}^\pm)$. Thus, the paramagnetic current $j^\alpha_\text{para}$ vanishes by integrating over momenta \cite{Voruganti1992}. \section{Matsubara summation} \label{appendix:Matsubara} In this section we omit the momentum dependence for shorter notation. We can represent any Matsubara Green's function matrix $G_{ip_0}$ in the spectral representation as \begin{align} \label{eqn:Gip0} G_{ip_0}=\int\hspace{-1.5mm}d\epsilon \,\frac{A(\epsilon)}{ip_0-\epsilon} \end{align} with corresponding spectral function matrix $A(\epsilon)\equiv A_\epsilon$. The retarded and advanced Green's function matrices are \begin{align} &G^R_\epsilon=\int\hspace{-1.5mm}d\epsilon'\,\frac{A(\epsilon')}{\epsilon-\epsilon'+i0^+}\, ,\\[0.5mm] &G^A_\epsilon=\int\hspace{-1.5mm}d\epsilon'\,\frac{A(\epsilon')}{\epsilon-\epsilon'-i0^+} \, . \end{align} We define the principal-value matrix $P(\epsilon)\equiv P_\epsilon$ via \begin{align} P(\epsilon)=P.V.\hspace{-1mm}\int\hspace{-1.5mm}d\epsilon'\,\frac{A(\epsilon')}{\epsilon-\epsilon'} \, , \end{align} where $P.V.$ denotes the principal value of the integral.
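A scalar illustration of these spectral objects (with made-up parameters, not values from the model): for a single quasiparticle pole at $E^+_{\bf p}$ with broadening $\Gamma$, the retarded Green's function is $1/(\epsilon-E^+_{\bf p}+i\Gamma)$, its imaginary part reproduces the Lorentzian spectral function defined above, and the spectral weight integrates to one.

```python
import numpy as np

# Scalar illustration with made-up parameters: quasiparticle bands E^{+-}
# and a Lorentzian spectral function of width Gamma, normalized to one.
eps_A, eps_B, Delta, Gamma = -0.8, 0.5, 0.4, 0.05

avg = 0.5 * (eps_A + eps_B)
dev = np.sqrt(0.25 * (eps_A - eps_B) ** 2 + abs(Delta) ** 2)
E_plus, E_minus = avg + dev, avg - dev
assert E_plus > E_minus

eps = np.linspace(-50.0, 50.0, 200_001)
GR = 1.0 / (eps - E_plus + 1j * Gamma)   # retarded Green's function
A = -GR.imag / np.pi                      # spectral function
P = GR.real                               # principal-value part

# A equals the Lorentzian (Gamma/pi) / ((eps - E)^2 + Gamma^2) ...
assert np.allclose(A, (Gamma / np.pi) / ((eps - E_plus) ** 2 + Gamma ** 2))
# ... and carries unit spectral weight (up to small Lorentzian tails)
norm = np.sum(A) * (eps[1] - eps[0])
assert abs(norm - 1.0) < 2e-3
```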
Using the integral identity $\frac{1}{\epsilon-\epsilon'\pm i0^+}=P.V. \frac{1}{\epsilon-\epsilon'}\mp i\pi \,\delta(\epsilon-\epsilon')$ we have \begin{align} &A_\epsilon=-\frac{1}{\pi}\text{Im}\,G^R_\epsilon\equiv-\frac{1}{2\pi i}\big(G^R_\epsilon-G^A_\epsilon\big) \, , \\[1mm] &P_\epsilon=\text{Re}\,G^R_\epsilon\equiv\frac{1}{2}\big(G^R_\epsilon+G^A_\epsilon\big) \, . \end{align} Note that $A_\epsilon$ and $P_\epsilon$ are hermitian matrices. We perform the Matsubara summation of \eqref{eqn:Isq0} and \eqref{eqn:Iaq0}: We replace each Matsubara Green's function by its spectral representation \eqref{eqn:Gip0}. We perform the Matsubara summation on the product of single poles via the residue theorem by introducing the Fermi function $f(\epsilon)\equiv f_\epsilon$ and perform analytic continuation of the bosonic Matsubara frequency $iq_0\rightarrow \omega+i0^+$. Finally, one integration is performed by identifying $G^R_{\epsilon+\omega}$, $G^R_{\epsilon-\omega}$ or $P_\epsilon$. A more general application with a detailed description of this procedure can be found in Ref.~\onlinecite{Mitscherling2018}. In our application we have three distinct cases. The first case involves the Green's function matrix $G_{ip_0+iq_0}$ leading to \begin{align} T&\sum_{p_0}\left.\text{tr}\big[G_{ip_0+iq_0}M_1G_{ip_0}M_2\big]\right|_{iq_0\rightarrow \omega+i0^+}\nonumber\\ &\hspace{1.5mm}= \int\hspace{-1.5mm}d\epsilon \,f_\epsilon\,\text{tr}\big[A^{}_\epsilon M^{}_1G^A_{\epsilon-\omega}M^{}_2+G^R_{\epsilon+\omega}M^{}_1A^{}_\epsilon M^{}_2\big] \, . \end{align} The second case involves the Green's function matrix $G_{ip_0-iq_0}$ leading to \begin{align} T&\sum_{p_0}\left.\text{tr}\big[G_{ip_0-iq_0}M_1G_{ip_0}M_2\big]\right|_{iq_0\rightarrow \omega+i0^+}\nonumber\\ &\hspace{1.5mm}= \int\hspace{-1.5mm} d\epsilon\, f_\epsilon\,\text{tr}\big[A^{}_\epsilon M^{}_1G^R_{\epsilon+\omega}M^{}_2+G^A_{\epsilon-\omega}M^{}_1A^{}_\epsilon M^{}_2\big] \, .
\end{align} The third case involves no bosonic Matsubara frequency $iq_0$ and is given by \begin{align} T&\sum_{p_0}\left.\text{tr}\big[G_{ip_0}M_1G_{ip_0}M_2\big]\right|_{iq_0\rightarrow \omega+i0^+}\nonumber\\ &\hspace{1.5mm}= \int\hspace{-1.5mm} d\epsilon\, f_\epsilon\,\text{tr}\big[A_\epsilon M_1P_\epsilon M_2+P_\epsilon M_1A_\epsilon M_2\big] \, . \end{align} We can rewrite these three cases by using \begin{align} \label{eqn:GRexp}&G^R_\epsilon=P_\epsilon-i\pi A_\epsilon \, ,\\ \label{eqn:GAexp}&G^A_\epsilon=P_\epsilon+i\pi A_\epsilon \, , \, \end{align} in order to express all results only by the hermitian matrices $A_\epsilon$ and $P_\epsilon$. The Matsubara summation of \eqref{eqn:Isq0} after analytic continuation reads \begin{align} I^s_\omega&=\frac{1}{2}\int\hspace{-1.5mm} d\epsilon\, f_\epsilon \big[A_\epsilon M_1 \{(P_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\hspace{-0.5mm}+\hspace{-0.5mm}(P_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\}M_2\nonumber\\[1mm] &+\{(P_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\hspace{-0.5mm}+\hspace{-0.5mm}(P_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\} M_1 A_\epsilon M_2 \nonumber\\[1mm] &-i\pi A_\epsilon M_1 \{(A_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\hspace{-0.5mm}-\hspace{-0.5mm}(A_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\}M_2\nonumber\\[1mm] &-i\pi \{(A_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\hspace{-0.5mm}-\hspace{-0.5mm}(A_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\} M_1 A_\epsilon M_2\big] \, . \end{align} We divide by $i\omega$ and perform the zero frequency limit leading to the frequency derivatives $\lim_{\omega\rightarrow 0}(P_{\epsilon\pm\omega}-P_\epsilon)/\omega=\pm P'_\epsilon$ and $\lim_{\omega\rightarrow 0}(A_{\epsilon\pm\omega}-A_\epsilon)/\omega=\pm A'_\epsilon$, which we denote by $(\cdot)'$. The first two lines of the sum vanish. 
We get \begin{align} \lim_{\omega\rightarrow 0}\frac{I^s_\omega}{i\omega}&=-\pi\int\hspace{-1.5mm}d\epsilon\, f_\epsilon \big[A_\epsilon M_1 A'_\epsilon M_2\hspace{-0.5mm}+\hspace{-0.5mm}A'_\epsilon M_1 A_\epsilon M_2\big] . \end{align} We can apply the product rule and partial integration in $\epsilon$ and end up with \eqref{eqn:Isw}. The Matsubara summation of \eqref{eqn:Iaq0} after analytic continuation is \begin{align} I^a_\omega&=\frac{1}{2}\int\hspace{-0.5mm}d\epsilon\, f_\epsilon \big[-A_\epsilon M_1 \{(P_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\hspace{-0.5mm}-\hspace{-0.5mm}(P_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\}M_2\nonumber\\[1mm] &+\{(P_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\hspace{-0.5mm}-\hspace{-0.5mm}(P_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}P_\epsilon)\} M_1 A_\epsilon M_2 \nonumber\\[1mm] &+i\pi A_\epsilon M_1 \{(A_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\hspace{-0.5mm}+\hspace{-0.5mm}(A_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\}M_2\nonumber\\[1mm] &-i\pi \{(A_{\epsilon+\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\hspace{-0.5mm}+\hspace{-0.5mm}(A_{\epsilon-\omega}\hspace{-0.5mm}-\hspace{-0.5mm}A_\epsilon)\} M_1 A_\epsilon M_2\big]\, . \end{align} We divide by $i\omega$ and perform the zero frequency limit. The last two lines of the summation drop. We end up with \eqref{eqn:Iaw}. \end{appendix}
\section{Introduction} Understanding the spatial distribution of poverty is an important step before poverty can be eradicated worldwide. In recent decades, we have seen dramatic declines of poverty in many areas, including India, China, and much of East Asia, Latin America and Africa. There is much to celebrate in this decline, as billions of people have risen out of poverty. The poverty that remains – the roughly 1 billion individuals worldwide below the international poverty line of \$1.90 per day – is distributed non-uniformly across space, often in rural and urban ``pockets'' that are inaccessible and frequently changing. It is posited that these areas are unlikely to integrate with the path of the global economy unless policy measures are taken to ensure their integration. The first step to addressing this poverty is knowing with precision where it is located. Unfortunately, this has proven to be a non-trivial task. The standard method of generating a geographic distribution of poverty – a ``poverty map'' – involves combining a household consumption survey with a broader survey, typically a Census. While this method is accurate enough to produce official statistics (Elbers et al., 2003), it has several disadvantages. Censuses and consumption surveys are expensive, costing millions for countries to produce. The lag time between survey and production of poverty rates can be several years due to the time needed to collect and administer the surveys and to produce the statistics. Finally, because of security concerns and geographic remoteness, it is often infeasible to survey every area within a country. The combination of computer vision and satellite imagery holds much promise for the creation of frequently updated and accurate poverty maps. Several research groups have explored the capabilities of computer vision trained against satellite imagery to estimate poverty. Jean et al.
(2015) use a transfer learning approach that uses the penultimate layer of a CNN trained against night time lights as explanatory variables to estimate poverty. Engstrom et al. (2017) use intermediate features (cars, roofs, crops) identified through computer vision to estimate poverty. This paper takes the direct route and trains an end-to-end CNN to estimate poverty rates of urban and rural municipalities in Mexico. We complement these by incorporating land use estimates derived from Planet imagery. The results are modest but encouraging. The best models, which incorporate land use as a predictor, explain 57\% of poverty in a 10\% validation sample. However, looking at all MCS-ENIGH municipalities, the explanatory power drops to 44\%. We speculate as to why we see this decline out of the validation sample and suggest some possible improvements. \section{Data} \subsection{Mexican Survey Data} The CNN is trained using survey data from the 2014 MCS-ENIGH. Poverty benchmark data are created using a combination of two sources: the first is the 2014 MCS-ENIGH household survey, the second the 2015 Intercensus. The 2014 MCS-ENIGH survey covers 58,125 households, of which approximately 75\% are urban and 25\% are rural. The survey samples 896 municipalities out of roughly 2,500. The survey collects income per adult equivalent, which is the income metric used to calculate the official poverty rate. The 2015 Intercensus is a survey of households conducted every 5 years. For 2015 the Intercensus samples 229,622 households. The Intercensus contains only household labor income and transfer income, and not total household income. However, labor income and household income are strongly linearly correlated, with an $R^{2}$ value of approximately 0.9. We experimented with different samples from the Intercensus to determine whether the number of household data points on which the CNN is trained affects performance accuracy.
We considered two separate poverty lines: the minimum well-being poverty line and the well-being poverty line. These poverty lines differ for urban and rural areas. For each administrative unit we calculated the fraction of households living in poverty. Thus the end-to-end prediction task begins with satellite imagery and ends with a prediction, for each administrative area, of the distribution over three ``buckets'': below minimum well-being, between minimum well-being and well-being, and above well-being. \subsection{Satellite Imagery} We used satellite imagery provided by both Planet and Digital Globe, examples of which are shown in figure 1. Assessing the comparative tradeoffs between Planet and Digital Globe imagery was one of the goals of the project. Digital Globe imagery is of higher resolution, with a spatial resolution of 50 cm, and covers the years 2014-2015. Planet imagery varies in resolution between 3 and 5 m and ranges in date from late 2015 to early 2017. Digital Globe imagery is used only in urban areas, as its coverage of rural areas is sparse. Planet imagery is ``4-band'' and includes the near-infrared (NIR) band, while the Digital Globe imagery does not. We experimented with including the NIR band during the training process, but ultimately saw better results when this band was excluded. \begin{figure}[H] \centering{} \includegraphics[scale=0.4]{Imagery_DG_Planet}\label{planet_DG}\caption{Example Digital Globe (left) and Planet (right) imagery, Michoacan (Satellite images (c) 2017 Digital Globe, Inc., Planet)} \end{figure} \section{Technical Methodology} During the training process we experimented with two CNN architectures: GoogleNet (Szegedy et al., 2015) and a variant of VGG (Simonyan and Zisserman, 2014). We experimented with various solvers and weight initializations, which were evaluated against an internal development or ``dev'' set. According to tests using this dev set, the GoogleNet architecture outperformed the VGG architecture.
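Returning to the prediction targets defined in the Data section, the per-municipality distribution over the three poverty ``buckets'' is straightforward to construct from household incomes and the two poverty lines. The sketch below illustrates the computation; the function name, data layout, and numbers are ours for illustration and do not reflect the actual MCS-ENIGH format.

```python
# Illustrative sketch: share of households in each of the three poverty
# "buckets" (below minimum well-being, between the two lines, above
# well-being) for one administrative unit.

def bucket_shares(incomes, min_wellbeing_line, wellbeing_line):
    """Return (below, middle, above) as fractions of households."""
    n = len(incomes)
    below = sum(1 for y in incomes if y < min_wellbeing_line)
    middle = sum(1 for y in incomes
                 if min_wellbeing_line <= y < wellbeing_line)
    above = n - below - middle
    return (below / n, middle / n, above / n)
```

For example, with hypothetical incomes `[500, 1500, 3000, 4000]` and lines 1000 and 2500, the shares are `(0.25, 0.25, 0.5)`.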
We also experimented with fine-tuning the weights of the GoogleNet models, initializing them at the values of a model trained against ImageNet rather than training from scratch. Digital Globe and Planet imagery both include the three Red, Green and Blue (RGB) bands. Planet imagery includes a fourth band for near infrared. We experimented with training models that include this additional information. The ImageNet dataset consists only of RGB imagery, so it is non-trivial to fine-tune from an ImageNet-trained model to a model with a 4-band input. Therefore, for 4-band input we trained from scratch, and we attempted fine-tuning from ImageNet only for 3-band versions of the imagery. That is to say, ultimately the NIR band was dropped. \section{Results} Focusing on the urban subsample, table \ref{Urban_Enigh_Trained-1} presents the CNN predictions for urban areas using imagery from either Digital Globe or Planet, on the 10\% withheld validation sample. We present $R^{2}$ estimates that measure the correlation between predicted poverty and benchmark poverty as measured in the 2015 Intercensus. $R^{2}$ is estimated at 0.61 using the Digital Globe imagery and 0.54 using Planet imagery. Recall that we can compare only urban areas, due to the lack of Digital Globe coverage in rural areas. The drop in performance is modest but not severe, especially considering that Planet imagery offers daily revisit rates of the earth's landmass. Poverty estimates for urban areas in Mexico are mapped in figure 2. Table \ref{Urban_Enigh_Trained} shows model performance as the subsample is varied beyond the 10\% validation sample. In the 10\% validation sample, using CNN predictions, we estimate an $R^{2}$ value between predicted and true poverty of between 0.47 and 0.54. When adding landcover classification, estimated via Planet imagery, to the CNN predictions, we estimate an $R^{2}$ value between 0.57 and 0.64.
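The $R^{2}$ figures reported above are squared correlations between predicted and benchmark poverty rates across municipalities. A minimal pure-Python sketch of that computation follows; the input vectors are hypothetical, not our actual predictions.

```python
# Squared Pearson correlation between predicted and benchmark poverty
# rates across municipalities (illustrative sketch).

def r_squared(pred, actual):
    n = len(pred)
    mp = sum(pred) / n
    ma = sum(actual) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(pred, actual))
    var_p = sum((p - mp) ** 2 for p in pred)
    var_a = sum((a - ma) ** 2 for a in actual)
    return cov * cov / (var_p * var_a)
```

A perfectly linear relationship gives $R^{2}=1$ regardless of slope, which is why this metric measures association rather than calibration of the predicted levels.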
However, when we extend the comparison to all MCS-ENIGH areas, $R^{2}$ falls to 0.40 and 0.44 in urban and rural areas respectively. Outside of the 896 municipalities that comprise the MCS-ENIGH survey, explanatory power falls precipitously, to roughly 0.3. The poor performance outside of MCS-ENIGH municipalities is puzzling. This could be due to weighting tiles by geographic area instead of population. It could also be due to MCS-ENIGH municipalities differing systematically from non-MCS-ENIGH municipalities in characteristics such as population density or size. \begin{table}[t] \centering{}\caption{Comparison Digital Globe versus Planet Imagery, 10\% Validation Sample} \label{Urban_Enigh_Trained-1} {\tiny{} \begin{tabular}{>{\raggedright}b{0.08\paperwidth}>{\centering}m{0.17\paperwidth}>{\centering}m{0.15\paperwidth}>{\centering}b{0.07\paperwidth}} \textbf{\small{}Sample} & \textbf{\small{}$R^{2}$ CNN Predictions using Digital Globe imagery } & \textbf{\small{}$R^{2}$ CNN Predictions using Planet imagery } & \textbf{\small{}\# municipalities }\tabularnewline \hline Urban areas & 0.61 & 0.54 & 58\tabularnewline \hline \end{tabular} \end{table} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{UrbanPovMap} \par\end{centering} \centering{}\label{planet_DG-1}\caption{Poverty Estimates, Urban Municipalities} \end{figure} \begin{table}[t] \centering{}\caption{CNN Predictions In and Out of Sample} \label{Urban_Enigh_Trained} {\tiny{} \begin{tabular}{>{\raggedright}b{0.1\paperwidth}|>{\centering}m{0.05\paperwidth}>{\centering}m{0.08\paperwidth}>{\centering}m{0.08\paperwidth}>{\centering}m{0.08\paperwidth}>{\centering}b{0.07\paperwidth}} \multicolumn{1}{>{\raggedright}b{0.1\paperwidth}}{\textbf{\small{}Validation}} & \textbf{\small{}Sample} & \textbf{\small{}$R^{2}$ CNN Predictions} & \textbf{\small{}$R^{2}$ Landcover } & \textbf{\small{}$R^{2}$ Both} & \textbf{\small{}Areas }\tabularnewline \hline
\multirow{3}{0.1\paperwidth}{{\small{}10\% MCS-ENIGH Validation}} & {\small{}All } & {\small{}0.47} & {\small{}0.49} & {\small{}0.57} & {\small{}109}\tabularnewline & {\small{}Urban} & {\small{}0.54} & {\small{}0.52} & {\small{}0.64} & {\small{}58}\tabularnewline & {\small{}Rural} & {\small{}0.47} & {\small{}0.49} & {\small{}0.64} & {\small{}51}\tabularnewline \multicolumn{1}{>{\raggedright}b{0.1\paperwidth}}{} & & & & & \tabularnewline \multirow{3}{0.1\paperwidth}{{\small{}all MCS-ENIGH areas}} & {\small{}All } & {\small{}0.37} & {\small{}0.37} & {\small{}0.44} & {\small{}1115}\tabularnewline & {\small{}Urban} & {\small{}0.34} & {\small{}0.31} & {\small{}0.4} & {\small{}619}\tabularnewline & {\small{}Rural} & {\small{}0.38} & {\small{}0.34} & {\small{}0.44} & {\small{}496}\tabularnewline \multicolumn{1}{>{\raggedright}b{0.1\paperwidth}}{} & & & & & \tabularnewline \multirow{3}{0.1\paperwidth}{{\small{}non MCS-ENIGH areas}} & {\small{}All } & {\small{}0.15} & {\small{}0.23} & {\small{}0.28} & {\small{}2834}\tabularnewline & {\small{}Urban} & {\small{}0.06} & {\small{}0.19} & {\small{}0.21} & {\small{}944}\tabularnewline & {\small{}Rural} & {\small{}0.22} & {\small{}0.25} & {\small{}0.31} & {\small{}1890}\tabularnewline \hline \end{tabular} \end{table} \section*{References} {\small{}{[}1{]} Elbers, Chris, Jean O. Lanjouw, and Peter Lanjouw. \textquotedbl{}Micro–level estimation of poverty and inequality.\textquotedbl{} Econometrica 71, no. 1 (2003): 355-364.}{\small \par} {\small{}{[}2{]} Engstrom, R., Hersh, J., Newhouse, D. “Poverty from space: using high resolution satellite imagery for estimating economic well-being and geographic targeting.” (2016).}{\small \par} {\small{}{[}3{]} Jean, Neal, Marshall Burke, Michael Xie, W. Matthew Davis, David B. Lobell, and Stefano Ermon. \textquotedbl{}Combining satellite imagery and machine learning to predict poverty.\textquotedbl{} Science 353, no. 
6301 (2016): 790-794.}{\small \par} {\small{}{[}4{]} Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. \textquotedbl{}Going deeper with convolutions.\textquotedbl{} In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9. 2015.}{\small \par} {\small{}{[}5{]} Simonyan, Karen, and Andrew Zisserman. \textquotedbl{}Very deep convolutional networks for large-scale image recognition.\textquotedbl{} arXiv preprint arXiv:1409.1556 (2014).}{\small \par} \end{document}
\section{The surgery theorem applied to grid diagrams} \label {sec:Surgery} Our main goal here is to present the statement of Theorem~\ref{thm:Surgery} below, which expresses the Heegaard Floer homology $\HFm$ of an integral surgery on a link in terms of a grid diagram for the link (or, more precisely, in terms of holomorphic polygon counts on a symmetric product of the grid). We also state similar results for the cobordism maps on $\HFm$ induced by two-handle additions (Theorem~\ref{thm:Cobordisms}), for the other completed versions of Heegaard Floer homology (Theorem~\ref{thm:AllVersions}), and for the mixed invariants of four-manifolds (Proposition~\ref{prop:mixed}). The proofs of all the results from this section are given in \cite{LinkSurg}. \subsection {Hyperboxes of chain complexes} \label {sec:hyper} We start by summarizing some homological algebra from \cite[Section 3]{LinkSurg}. When $f$ is a function, we denote its $n\th$ iterate by $ f^{\circ n}$, i.e., $f^{\circ 0} = id, \ f^{\circ 1} = f, \ f^{\circ (n+1)} = f^{\circ n} \circ f$. For ${\mathbf {d}} = (d_1, \dots, d_n) \in (\Z_{\ge 0})^n$ a collection of nonnegative integers, we set $$ \mathbb{E}({\mathbf {d}}) = \{ \eps = (\eps_1, \dots, \eps_n) \ | \ \eps_i \in \{0,1, \dots, d_i\}, \ i=1, \dots, n \}. $$ In particular, $ \mathbb{E}_n = \mathbb{E}(1, \dots, 1) = \{0,1\}^n$ is the set of vertices of the $n$-dimensional unit hypercube. For $\eps = (\eps_1, \dots, \eps_n)\in \mathbb{E}({\mathbf {d}})$, set $$ \| \eps \| = \eps_1 + \dots + \eps_n.$$ We can view the elements of $\mathbb{E}({\mathbf {d}})$ as vectors in $\R^n$. There is a partial ordering on $\mathbb{E}({\mathbf {d}})$, given by $\eps' \leq \eps \iff \forall i, \ \eps'_i \leq \eps_i$. We write $\eps' < \eps$ if $\eps' \leq \eps$ and $\eps' \neq \eps$. We say that two multi-indices $\eps, \eps'$ with $\eps \leq \eps'$ are {\em neighbors} if $\eps' - \eps \in \mathbb{E}_n$, i.e., none of their coordinates differ by more than one. 
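These combinatorial definitions are elementary to make concrete. The following sketch (function names are ours, for illustration only) enumerates $\mathbb{E}({\mathbf {d}})$, the partial order, and the neighbor relation.

```python
# Enumerate the multi-index set E(d), the partial ordering, and the
# neighbor relation from the text (illustrative helper functions).
from itertools import product

def E(d):
    """All eps = (eps_1, ..., eps_n) with 0 <= eps_i <= d_i."""
    return list(product(*[range(di + 1) for di in d]))

def leq(e1, e2):
    """The partial order: e1 <= e2 iff e1_i <= e2_i for all i."""
    return all(a <= b for a, b in zip(e1, e2))

def neighbors(e1, e2):
    """e1 <= e2 and e2 - e1 lies in E_n = {0,1}^n."""
    return leq(e1, e2) and all(b - a <= 1 for a, b in zip(e1, e2))
```

In particular `E((1,) * n)` recovers the $2^n$ vertices of the unit hypercube $\mathbb{E}_n$.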
We define an {\em $n$-dimensional hyperbox of chain complexes} $\mathcal{H} = \bigl( (C^{\eps})_{\eps \in \mathbb{E}({\mathbf {d}})}, ({D}^{\eps})_{\eps \in \mathbb{E}_n} \bigr)$ of size ${\mathbf {d}} \in (\Z_{\ge 0})^n$ to consist of a collection of $\Z$-graded vector spaces $$ (C^{\eps})_{\eps \in \mathbb{E}({\mathbf {d}})}, \ \ C^{\eps} = \bigoplus_{* \in \Z} C^{\eps}_*,$$ together with a collection of linear maps $$ {D}^{\eps} : C_*^{\eps^0} \to C_{*+\| \eps \|-1 }^{\eps^0 + \eps},$$ defined for all $\eps^0 \in \mathbb{E}({\mathbf {d}})$ and $\eps \in \mathbb{E}_n $ such that $\eps^0 + \eps \in \mathbb{E}({\mathbf {d}})$ (i.e., the multi-indices of the domain and the target are neighbors). The maps ${D}^\eps$ are required to satisfy the relations \begin {equation} \label {eq:d2} \sum_{\eps' \leq \eps} {D}^{\eps - \eps'} \circ {D}^{\eps'} = 0, \end {equation} for all $\eps \in \mathbb{E}_n$. If ${\mathbf {d}} = (1, \dots, 1)$, we say that $\mathcal{H}$ is a {\em hypercube of chain complexes}. Note that ${D}^{\eps}$ in principle also depends on $\eps^0$, but we omit that from the notation for simplicity. Further, if we consider the total complex $$ C_* = \bigoplus_{\eps \in \mathbb{E}({\mathbf {d}})} C^{\eps}_{* + \|\eps\|}, $$ we can think of $ {D}^{\eps}$ as a map from $C_*$ to itself, by extending it by zero where it is not defined. Observe that ${D} = \sum {D}^{\eps}: C_* \to C_{*-1}$ is a chain map.
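For $n=1$ and ${\mathbf {d}}=(1)$, relation~\eqref{eq:d2} says that $D^{(0)}$ squares to zero on each complex and that $D^{(1)}$ is a chain map; the total differential is then that of a mapping cone and squares to zero (over $\mathbb{F}_2$ no signs appear). A toy verification with made-up matrices over $\mathbb{F}_2$:

```python
# Toy check of the n = 1 hypercube relations over F_2: if d^2 = 0 and
# d f + f d = 0 (equivalently d f = f d mod 2, i.e. f is a chain map),
# then the total differential D = [[d, 0], [f, d]] satisfies D^2 = 0.
# Matrices are tuples of rows with entries in {0, 1}.

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
                       for j in range(len(B[0]))) for i in range(len(A)))

def total(d, f):
    """Block matrix [[d, 0], [f, d]] acting on C^(0) + C^(1)."""
    n = len(d)
    top = [tuple(d[i]) + (0,) * n for i in range(n)]
    bot = [tuple(f[i]) + tuple(d[i]) for i in range(n)]
    return tuple(top + bot)
```

The example in the test takes $d = \begin{pmatrix}0&1\\0&0\end{pmatrix}$ and $f = \begin{pmatrix}1&1\\0&1\end{pmatrix}$, which commute mod 2.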
Along the two-dimensional faces we have chain homotopies between the two possible ways of composing the edge maps, and along higher-dimensional faces we have higher homotopies. There is a natural way of turning the hyperbox $\mathcal{H}$ into an $n$-dimensional hypercube $\hat \mathcal{H} = (\hat C^{\eps}, \hat {D}^{\eps})_{\eps \in \mathbb{E}_n}$, which we called the {\em compressed hypercube} of $\mathcal{H}$. The compressed hypercube has the property that along its $i\th$ edge we see the composition of all the $d_i$ edge maps on the $i\th$ axis of the hyperbox. In particular, if $n=1$, then $\mathcal{H}$ is a string of chain complexes and chain maps: $$ C^{(0)} \xrightarrow{{D}^{(1)}} C^{(1)} \xrightarrow{{D}^{(1)}} \cdots \xrightarrow{{D}^{(1)}} C^{(d)}, $$ and the compressed hypercube $\hat \mathcal{H}$ is $$ C^{(0)} \xrightarrow{\left({D}^{(1)}\right)^{\circ d}} C^{(d)}.$$ For general $n$ and ${\mathbf {d}} = (d_1, \dots, d_n)$, the compressed hypercube $\hat \mathcal{H}$ has at its vertices the same complexes as those at the vertices of the original hyperbox $\mathcal{H}$: $$ \hat C^{(\eps_1, \dots, \eps_n)} = C^{(\eps_1d_1, \dots, \eps_n d_n)}, \ \eps = (\eps_1, \dots, \eps_n) \in \mathbb{E}_n.$$ If along the $i\th$ coordinate axis in $\mathcal{H}$ we have the edge maps $f_i={D}^{(0, \dots, 0, 1, 0, \dots, 0)}$, then along the respective axis in $\hat \mathcal{H}$ we see $f_i^{\circ d_i}$. Given a two-dimensional face of $\mathcal{H}$ with edge maps $f_i$ and $f_j$ and chain homotopies $f_{\{i, j\}}$ between $f_i \circ f_j$ and $f_j \circ f_i$, to the respective compressed face in $\hat \mathcal{H}$ we assign the map $$\sum_{k_i=1}^{d_i} \sum_{k_j=1}^{d_j} f_i^{\circ(k_i-1)} \circ f_j^{\circ (k_j-1)} \circ f_{\{i,j\}} \circ f_j^{\circ (d_j - k_j)} \circ f_i^{\circ (d_i - k_i)},$$ which is a chain homotopy between $f_i^{\circ d_i} \circ f_j^{\circ d_j}$ and $f_j^{\circ d_j} \circ f_i^{\circ d_i}$. 
The formulas for what we assign to the higher-dimensional faces in $\hat \mathcal{H}$ are more complicated, but they always involve sums of compositions of maps in $\mathcal{H}$. Let $\leftexp{0}{\mathcal{H}}$ and $\leftexp{1}{\mathcal{H}}$ be two hyperboxes of chain complexes, of the same size ${\mathbf {d}} \in (\Z_{\ge 0})^n$. A {\em chain map} $F : \leftexp{0}{\mathcal{H}} \to \leftexp{1}{\mathcal{H}}$ is defined to be a collection of linear maps $$F_{\eps^0}^\eps: \leftexp{0}{C}^{\eps^0}_* \to \leftexp{1}{C}^{\eps^0 + \eps} _{*+ \| \eps \|},$$ satisfying $$\sum_{\eps' \leq \eps} \bigl( {D}^{\eps - \eps'}_{\eps^0 + \eps'} \circ F^{\eps'}_{\eps^0} + F^{\eps - \eps'}_{\eps^0 + \eps'} \circ {D}^{\eps'}_{\eps^0} \bigr) = 0,$$ for all $\eps^0 \in \mathbb{E}({\mathbf {d}}), \eps \in \mathbb{E}_n$ such that $\eps^0 + \eps \in \mathbb{E}({\mathbf {d}})$. In particular, $F$ gives an ordinary chain map between the total complexes $\leftexp{0}{C}$ and $\leftexp{1}{C}$. Starting from here, we can define chain homotopies and chain homotopy equivalences between hyperboxes by analogy with the usual notions between chain complexes. The construction of $\hat \mathcal{H}$ from $\mathcal{H}$ is natural in the following sense. Given a chain map $F : \leftexp{0}{\mathcal{H}} \to \leftexp{1}{\mathcal{H}}$, there is an induced chain map $\hat F : {}^{0}{\hat \mathcal{H}} \to {}^{1}{\hat \mathcal{H}}$ between the respective compressed hypercubes. Moreover, if $F$ is a chain homotopy equivalence, then so is $\hat F$. \subsection {Chain complexes from grid diagrams} \label {subsec:ags} Consider an oriented, $\ell$-component link $\orL \subset S^3$. We denote the components of $\orL$ by $\{ L_i\}^{\ell}_{i=1}$. Let $$ \mathbb{H}(L)_i = \frac{{\operatorname{lk}}(L_i, L - L_i)}{2} + \Z \subset \Q, \ \ \mathbb{H}(L) = \bigoplus_{i=1}^\ell \mathbb{H}(L)_i,$$ where ${\operatorname{lk}}$ denotes linking number.
Further, let $$ \bH(L)_i = \mathbb{H}(L)_i \cup \{-\infty, + \infty\}, \ \ \bH(L) = \bigoplus_{i=1}^{\ell} \bH(L)_i.$$ Let $G$ be a toroidal grid diagram representing the link $\orL$ and having at least one free marking, as in \cite[Section 12.1]{LinkSurg}. Precisely, $G$ consists of a torus ${\mathcal{T}}$, viewed as a square in the plane with the opposite sides identified, and split into $n$ annuli (called rows) by $n$ horizontal circles $\alpha_1, \dots, \alpha_n$, and into $n$ other annuli (called columns) by $n$ vertical circles $\beta_1, \dots, \beta_n$. Further, we are given some markings on the torus, of two types: $X$ and $O$, such that: \begin {itemize} \item each row and each column contains exactly one $O$ marking; \item each row and each column contains at most one $X$ marking; \item if the row of an $O$ marking contains no $X$ markings, then the column of that $O$ marking contains no $X$ markings either. An $O$ marking of this kind is called a {\em free marking}. We assume that the number $q$ of free markings is at least $1$. \end {itemize} Observe that $G$ contains exactly $n$ $O$ markings and exactly $n-q$ $X$ markings. A marking that is not free is called {\em linked}. The number $n$ is called the {\em grid number} or the {\em size} of $G$. We draw horizontal arcs between the linked markings in the same row (oriented to go from the $O$ to the $X$), and vertical arcs between the linked markings in the same column (oriented to go from the $X$ to the $O$). Letting the vertical arcs be overpasses whenever they intersect the horizontal arcs, we obtain a planar diagram for a link in $S^3$, which we ask to be the given link $\orL$. We let ${\mathbf{S}} = {\mathbf{S}}(G)$ be the set of matchings between the horizontal and vertical circles in $G$. 
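The conditions on the markings of a grid diagram with free markings are easy to encode. The following sketch checks them; the $3\times 3$ example (coordinates are (row, column) pairs of our own choosing) presents the unknot with one free marking and is made up for illustration, not taken from the text.

```python
# Illustrative validity check for a grid diagram with free markings:
# - each row and column contains exactly one O;
# - each row and column contains at most one X;
# - if an O's row contains no X, then its column contains no X either
#   (such an O is a free marking).

def check_grid(n, Os, Xs):
    assert sorted(r for r, c in Os) == list(range(n))   # one O per row
    assert sorted(c for r, c in Os) == list(range(n))   # one O per column
    assert len({r for r, c in Xs}) == len(Xs)           # <= one X per row
    assert len({c for r, c in Xs}) == len(Xs)           # <= one X per column
    x_rows = {r for r, c in Xs}
    x_cols = {c for r, c in Xs}
    free = []
    for r, c in Os:
        if r not in x_rows:
            assert c not in x_cols                      # free-marking rule
            free.append((r, c))
    return sorted(free)

# A 3x3 grid for the unknot with one free marking (made-up example):
Os = [(0, 0), (1, 1), (2, 2)]
Xs = [(0, 1), (1, 0)]
```

Here the horizontal and vertical arcs through the four linked markings close up into a single unknotted component, and the O in row 2 is free, so $n=3$ and $q=1$.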
Any $\mathbf x \in {\mathbf{S}}$ admits a Maslov grading $M(\mathbf x) \in \Z$ and an Alexander multi-grading $$A_i(\mathbf x) \in \mathbb{H}(L)_i, \ i \in \{1, \dots, \ell \}.$$ (For the precise formulas for $M$ and $A_i$, see \cite{MOST}, where they were written in the context of grid diagrams without free markings. However, the same formulas also apply to the present setting.) For $\mathbf x, \mathbf y \in {\mathbf{S}}$, we let $\EmptyRect(\mathbf x, \mathbf y)$ be the space of empty rectangles between $\mathbf x$ and $\mathbf y$, as in \cite{MOST}. Specifically, a {\em rectangle} from $\mathbf x$ to $\mathbf y$ is an embedded rectangle $r$ in the torus, whose lower left and upper right corners are coordinates of $\mathbf x$, and whose lower right and upper left corners are coordinates of $\mathbf y$; moreover, all the other coordinates of $\mathbf x$ must coincide with all the other coordinates of $\mathbf y$. We say that the rectangle is {\em empty} if its interior contains none of the coordinates of $\mathbf x$ (or $\mathbf y$). For $r \in \EmptyRect(\mathbf x, \mathbf y)$, we denote by $O_j(r)$ and $X_j (r) \in \{0,1\}$ the number of times $O_j$ (resp.\ $X_j$) appears in the interior of $r$. We can arrange so that the free markings are labeled $O_{n-q+1}, \dots, O_n$. For simplicity, we write \begin {equation} \label {eq:Fi} F_i(r) = O_{n-q+i}(r), \ i=1, \dots, q. \end {equation} Let $\mathbb O_i$ and $\Xs_i$ be the sets of $O$'s (resp.\ $X$'s) belonging to the $i\th$ component of the link. We fix orderings of the elements of $\mathbb O_i$ and $\Xs_i$, for all $i$. The $i\th$ coordinate of the Alexander multi-grading is characterized uniquely up to an overall additive constant by the property that $$ A_i(\mathbf x) - A_i(\mathbf y) = \sum_{j \in \Xs_i} X_j(r) - \sum_{j \in \mathbb O_i} O_j(r),$$ where $r$ is any rectangle from $\mathbf x$ to $\mathbf y$.
For $i \in \{1, \dots, \ell \}$ and $s \in \bH(L)_i$, we set $$ E^i_s(r) = \begin {cases} \sum_{j \in \mathbb O_i} O_j(r) & \text{ if } A_i(\mathbf x) \leq s, A_i(\mathbf y) \leq s \\ (s-A_i(\mathbf x)) + \sum_{j \in \Xs_i} X_j(r) & \text{ if } A_i(\mathbf x) \leq s, A_i(\mathbf y) \geq s \\ \sum_{j \in \Xs_i} X_j(r) & \text{ if } A_i(\mathbf x) \geq s, A_i(\mathbf y) \geq s \\ (A_i(\mathbf x) - s) + \sum_{j \in \mathbb O_i} O_j(r) & \text{ if } A_i(\mathbf x) \geq s, A_i(\mathbf y) \leq s. \end {cases} $$ Alternatively, we can write \begin {align} \label {eix} E^i_s(r) &= \max(s-A_i(\mathbf x), 0) - \max(s-A_i(\mathbf y), 0)+ \sum_{j \in \Xs_i} X_j(r) \\ \label {eio} &= \max(A_i(\mathbf x) -s, 0) - \max(A_i(\mathbf y) -s,0) + \sum_{j \in \mathbb O_i} O_j(r). \end {align} In particular, $E^i_{-\infty}(r) = \sum_{j \in \Xs_i} X_j(r)$ and $E^i_{+\infty}(r) = \sum_{j \in \mathbb O_i} O_j(r)$. Given $\mathbf s = (s_1, \dots, s_\ell) \in \bH(L)$, we define an associated {\em generalized Floer chain complex} $\mathfrak A^-(G, \mathbf s) = \mathfrak A^- (G, s_1, \dots, s_\ell)$ as follows. $\mathfrak A^- (G, \mathbf s)$ is the free module over $\mathcal R = \mathbb F[[U_1, \dots, U_{\ell+q}]]$ generated by ${\mathbf{S}}$, endowed with the differential: $$ \partial \mathbf x = \sum_{\mathbf y \in {\mathbf{S}}}\,\, \sum_{r\in \EmptyRect(\mathbf x, \mathbf y)} U_1^{E^1_{s_1}(r)} \cdots U_\ell^{E^\ell_{s_\ell}(r)} \cdot U_{\ell+1}^{F_1(r)} \cdots U_{\ell+q}^{F_q(r)} \mathbf y. $$ Note that we can view $\mathfrak A^-(G, \mathbf s)$ as a suitably modified Floer chain complex in $\mathrm{Sym}^n({\mathcal{T}})$, equipped with Lagrangian tori $\Ta = \alpha_1 \times \dots \times \alpha_n$, the product of horizontal circles, and $\Tb = \beta_1 \times \dots \times \beta_n$, the product of vertical circles. The notation $\mathbf s$ indicates the way we count powers of $U$'s.
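The agreement between the case-by-case definition of $E^i_s$ and the closed forms \eqref{eix} and \eqref{eio} rests on the rectangle identity $A_i(\mathbf x) - A_i(\mathbf y) = \sum X_j(r) - \sum O_j(r)$ from the previous paragraph. The following numerical spot-check illustrates this (with illustrative integer values; in general the Alexander gradings may be half-integers, but the differences appearing below are integers).

```python
# Spot-check: the case-by-case formula for E^i_s agrees with the two
# closed forms, whenever A_i(x) - A_i(y) = (sum of X's) - (sum of O's)
# over the rectangle.

def E_cases(ax, ay, s, sumX, sumO):
    if ax <= s and ay <= s:
        return sumO
    if ax <= s and ay >= s:
        return (s - ax) + sumX
    if ax >= s and ay >= s:
        return sumX
    return (ax - s) + sumO          # ax >= s >= ay

def E_via_X(ax, ay, s, sumX):       # formula (eix)
    return max(s - ax, 0) - max(s - ay, 0) + sumX

def E_via_O(ax, ay, s, sumO):       # formula (eio)
    return max(ax - s, 0) - max(ay - s, 0) + sumO
```

Note that the cases overlap when $A_i(\mathbf x) = s$ or $A_i(\mathbf y) = s$; the identity above is exactly what makes the overlapping branches agree.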
Empty rectangles are the same as holomorphic strips of index one in $\mathrm{Sym}^n({\mathcal{T}})$, compare \cite{MOS}. Each complex $\mathfrak A^- (G, \mathbf s)$ can be equipped with a ${\mathbb{Z}}$-grading $\mu_\mathbf s$ such that the differential ${\partial}$ decreases $\mu_\mathbf s$ by one. Indeed, when none of the values $s_i$ is $-\infty$, we can set the grading on generators to be \begin {equation} \label {eq:ms} \mu_\mathbf s(\mathbf x) = M(\mathbf x) - 2\sum_{i=1}^\ell \max(A_i(\mathbf x)-s_i, 0), \end {equation} and let each $U_i$ decrease the grading by $2$. When some of the values $s_i$ are $-\infty$, we replace the corresponding expressions $ \max(A_i(\mathbf x)-s_i, 0)$ by $A_i(\mathbf x)$ in Equation~\eqref{eq:ms}. \begin {remark} When $L=K$ is a knot, the complex $\mathfrak A^-(G, s)$ is a multi-basepoint version of the subcomplex $A_s^- = C\{\max(i, j-s) \leq 0\}$ of the knot Floer complex $\CFK^{\infty}(Y, K)$, in the notation of \cite{Knots}. A similar complex $A_s^+ = C\{\max(i, j-s) \geq 0\}$ appeared in the integer surgery formula in \cite{IntSurg}, stated there in terms of $\mathit{HF}^+$ rather than $\HFm$. \end {remark} \subsection {Summary of the construction} \label {sec:summary} For the benefit of the reader, before moving further we include here a short summary of Sections~\ref{sec:DestabSeveral}--\ref{sec:surgery} below. The aim of these sections is to be able to state the Surgery Theorem~\ref{thm:Surgery}, which expresses $\HFm$ of an integral surgery on a link $\orL \subset S^3$ (with framing $\Lambda$) as the homology of a certain chain complex ${\mathcal{C}}^-(G, \Lambda)$ associated to a grid diagram $G$ for~$\orL$ (with at least one free marking). Given a sublink $M \subseteq L$, we let $G^{L-M}$ be the grid diagram for $L-M$ obtained from $G$ by deleting all the rows and columns that support components of $M$. Roughly, the complex ${\mathcal{C}}^-(G, \Lambda)$ is built as an $\ell$-dimensional hypercube of complexes.
Each vertex corresponds to a sublink $ M \subseteq L$, and the chain complex at that vertex is the direct product of generalized Floer complexes $\mathfrak A^-(G^{L-M}, \mathbf s)$, over all possible values $\mathbf s$; the reader is encouraged to peek ahead at the expression~\eqref{eq:cgl}. The differential on ${\mathcal{C}}^-(G, \Lambda)$ is a sum of some maps denoted $\Phi^{\orM}_\mathbf s$, which are associated to oriented sublinks $\orM$: given a sublink $M$, we need to consider all its possible orientations, not just the one induced from the orientation of $\orL$. When $M$ has only one component, the maps $\Phi^{\orM}_\mathbf s$ are chain maps going from a generalized Floer complex of $G^{L'}$ (for $L'$ containing $M$) to one of $G^{L'-M}$. When $M$ has two components, $\Phi^{\orM}_\mathbf s$ are chain homotopies between different ways of composing the respective chain maps removing one component at a time; for more components of $M$ we get higher homotopies, etc. They fit together in a hypercube of chain complexes, as in Section~\ref{sec:hyper}. Each map $\Phi^{\orM}_\mathbf s$ will be a composition of three kinds of maps, see Equation~\eqref{eq:phims} below: an inclusion map $ {\mathcal{I}}^{\orM}_\mathbf s$, a ``destabilization map'' $ \hat{D}_{p^{\orM}(\mathbf s)}^{\orM}$, and an isomorphism $ \Psi_{p^{\orM}(\mathbf s)}^{\orM}$. (Here, $p^{\orM}(\mathbf s)$ refers to a natural projection from the set $\mathbb{H}(L)$ to itself, see Section~\ref{sec:inclusions}.) The inclusion maps $ {\mathcal{I}}^{\orM}_\mathbf s$ are defined in Section~\ref{sec:inclusions}, and go between different generalized Floer complexes for the same grid (but different values of $\mathbf s$). The destabilization maps $\hat {D}_{\mathbf s}^{\orM}$ are defined in Section~\ref{sec:desublink}, and go from a generalized Floer complex for a grid $G^{L'}$ to one associated to another diagram, which is obtained from the grid by handlesliding some beta curves over others. 
Finally, the isomorphisms $\Psi_\mathbf s^{\orM}$ relate these latter complexes to generalized Floer complexes for the smaller grid $G^{L'-M}$; these isomorphisms are defined in Section~\ref{subsec:ks}. The main difficulty lies in correctly defining the destabilization maps. When the link $M$ has one component, they are constructed by composing some simpler maps ${D}^{Z}_\mathbf s$. A map ${D}^Z_\mathbf s$ corresponds to a single handleslide, done so that the new beta curve encircles a marking $Z \in M$. The type ($X$ or~$O$) of the marking $Z$ is determined by the chosen orientation $\orM$ for $M$. More generally, we define destabilization maps corresponding to a whole set ${\mathcal{Z}}$ of markings in Section~\ref{sec:DestabSeveral} below. When ${\mathcal{Z}}$ has two markings, the respective maps ${D}^{\mathcal{Z}}_\mathbf s$ are chain homotopies; in general, the maps ${D}^{\mathcal{Z}}_\mathbf s$ fit into a hypercube of chain complexes. To construct the destabilization maps for a sublink (which we do in Section~\ref{sec:desublink}), we build a hyperbox out of maps of the form ${D}^{\mathcal{Z}}_\mathbf s$, and apply the compression procedure from Section~\ref{sec:hyper}. \subsection {Destabilization at a set of markings} \label {sec:DestabSeveral} Let $Z$ be one of the linked markings (of type $X$ or $O$) on the grid diagram $G$. We define a subset $J(Z) \subset \bH(L)$ as follows. If $Z \in \mathbb O_i$ for some component $L_i$, set $$ J(Z) = \{(s_1, \dots, s_\ell) \in \bH(L) \ | \ s_i = +\infty \}.$$ If $Z \in \Xs_i$, set $$ J(Z) = \{(s_1, \dots, s_\ell) \in \bH(L) \ | \ s_i = -\infty \}.$$ For $\mathbf s \in J(Z)$, note that when we compute powers of $U$ in the chain complex $\mathfrak A^-(G, \mathbf s)$, the other markings in the same column or row as $Z$ do not play any role. Next, consider a set of linked markings ${\mathcal{Z}}= \{Z_1, \dots, Z_k \}$. 
We say that ${\mathcal{Z}}$ is {\em consistent} if, for any $i$, at most one of the sets ${\mathcal{Z}} \cap \mathbb O_i$ and ${\mathcal{Z}} \cap \Xs_i$ is nonempty. If ${\mathcal{Z}}$ is consistent, we set $$ J({\mathcal{Z}}) = \bigcap_{i=1}^k J(Z_i).$$ If ${\mathcal{Z}}$ is a consistent set of linked markings, we define a new set of curves $\mathbb\beta^{\mathcal{Z}} = \{\beta_j^{{\mathcal{Z}}}\mid j=1, \dots, n\}$ on the torus ${\mathcal{T}}$, as follows. Let $j_i$ be the index corresponding to the vertical circle $\beta_{j_i}$ just to the left of a marking $Z_i \in {\mathcal{Z}}$. We let $\beta_{j_i}^{{\mathcal{Z}}}$ be a circle encircling $Z_i$ and intersecting $\beta_{j_i}$, as well as the $\alpha$ curve just below $Z_i$, in two points each; in other words, $\beta_{j_i}^{{\mathcal{Z}}}$ is obtained from $\beta_{j_i}$ by handlesliding it over the vertical curve just to the right of $Z_i$. For those $j$ that are not $j_i$ for any $Z_i \in {\mathcal{Z}}$, we let $\beta_j^{{\mathcal{Z}}}$ be a curve that is isotopic to $\beta_j$, intersects $\beta_j$ in two points, and does not intersect any of the other beta curves. See Figure~\ref{fig:betaZ}. (The hypothesis about the existence of at least one free marking is important here, because it ensures that $\mathbb\beta^{\mathcal{Z}}$ contains at least one vertical beta curve.) \begin{figure} \begin{center} \input{betaz.pstex_t} \end{center} \caption {{\bf A new collection of curves.} We show here a part of a grid diagram, with the horizontal segments lying on curves in $\mathbb\alpha$ and the straight vertical segments lying on curves in $\mathbb\beta$. The dashed curves (including the two circles) represent curves in $\mathbb\beta^{\mathcal{Z}}$, where ${\mathcal{Z}}$ consists of the two markings $Z_1$ and $Z_2$.
The maximal degree intersection point $\Theta^{\emptyset, {\mathcal{Z}}}$ is represented by the black dots.} \label{fig:betaZ} \end{figure} For any consistent collection ${\mathcal{Z}}$, we denote $$\mathbb{T}_{\beta}^{{\mathcal{Z}}} = \beta_1^{\mathcal{Z}} \times \dots \times \beta_n^{\mathcal{Z}} \subset \mathrm{Sym}^n({\mathcal{T}}).$$ The fact that $\mathbf s \in J({\mathcal{Z}})$ implies that there is a well-defined Floer chain complex $\mathfrak A^-(\Ta, \Tb^{\mathcal{Z}}, \mathbf s)$, where the differentials take powers of the $U_i$'s according to $\mathbf s$, generalizing the constructions $\mathfrak A^-(G,\mathbf s)$ in a natural manner. More precisely, $\mathfrak A^-(\Ta,\Tb^{{\mathcal{Z}}},\mathbf s)$ is generated over $\mathcal R$ by $\Ta\cap\Tb^{{\mathcal{Z}}}$, with differential given by $$ \partial \mathbf x = \sum_{\mathbf y \in \Ta\cap\Tb^{{\mathcal{Z}}}} \sum_{\{\phi\in \pi_2(\mathbf x, \mathbf y) | \Mas(\phi)=1\}} U_1^{E^1_{s_1}(\phi)} \cdots U_\ell^{E^\ell_{s_\ell}(\phi)} \cdot U_{\ell+1}^{F_1(\phi)} \cdots U_{\ell+q}^{F_q(\phi)} \mathbf y, $$ where $\pi_2(\mathbf x, \mathbf y)$ is the set of homology classes of Whitney disks from $\mathbf x$ to $\mathbf y$, compare \cite{HolDisk}, and the functions $E^i_s$ and $F_i$ are as defined in Equations \eqref{eq:Fi}, ~\eqref{eix} or~\eqref{eio}, except we apply them to the domain of $\phi$ (a two-chain on the torus), rather than to a rectangle. When we have two collections of linked markings ${\mathcal{Z}}, {\mathcal{Z}}'$ such that ${\mathcal{Z}} \cup {\mathcal{Z}}'$ is consistent, we require that $\beta_i^{\mathcal{Z}}$ and $\beta_i^{{\mathcal{Z}}'}$ intersect in exactly two points. As such, there is always a maximal degree intersection point $\Theta^{{\mathcal{Z}}, {\mathcal{Z}}'} \in \mathbb{T}_{\beta}^{{\mathcal{Z}}} \cap \mathbb{T}_{\beta}^{{\mathcal{Z}}'}$. 
For each consistent collection of linked markings ${\mathcal{Z}}= \{Z_1, \dots, Z_m \}$, and each $\mathbf s \in J({\mathcal{Z}})$, we define an $m$-dimensional hypercube of chain complexes $$\mathcal{H}^{\mathcal{Z}}_\mathbf s = \bigl( C^{{\mathcal{Z}}, \eps}_{\mathbf s}, {D}^{{\mathcal{Z}}, \eps}_\mathbf s \bigr)_{\eps \in \mathbb{E}_m}$$ as follows. For $\eps \in \mathbb{E}_m = \{0,1\}^m$, we let $${\mathcal{Z}}^\eps = \{Z_i \in {\mathcal{Z}} \mid \eps_i = 1\}$$ and set $$ C^{{\mathcal{Z}}, \eps}_{\mathbf s, *} = \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}^\eps}, \mathbf s).$$ For simplicity, we denote $\Theta^{{\mathcal{Z}}^\eps, {\mathcal{Z}}^{\eps'}}$ by $\Theta^{\mathcal{Z}}_{\eps, \eps'}$. We write $\eps \prec \eps'$ whenever $\eps, \eps' \in \mathbb{E}_m$ are immediate successors, i.e., $\eps < \eps'$ and $\|\eps' - \eps\|=1$. For a string of immediate successors $\eps= \eps^0 \prec \eps^1 \prec \cdots \prec \eps^k = \eps'$, we let \begin {equation} \label {eq:order} {D}^{{\mathcal{Z}}, \eps^0 \prec \eps^1 \prec \cdots \prec \eps^k }_\mathbf s : C^{{\mathcal{Z}}, \eps}_* \to C^{{\mathcal{Z}}, \eps'}_{* + k-1}, \end {equation} \begin {multline*} {D}^{{\mathcal{Z}}, \eps^0 \prec \eps^1 \prec \cdots \prec \eps^k }_\mathbf s (\mathbf x) = \\ \sum_{\mathbf y \in \Ta \cap \Tb^{{\mathcal{Z}}^{\eps^k}}} \sum_{\{\phi \in \pi_2(\mathbf x, \Theta^{{\mathcal{Z}}}_{\eps^0, \eps^1}, \dots, \Theta^{{\mathcal{Z}}}_{\eps^{k-1}, \eps^k}, \mathbf y)| \mu(\phi) =1-k\}} \bigl(\# \mathcal {M}(\phi)\bigr) \cdot U_1^{E^1_{s_1}(\phi)} \cdots U_\ell^{E^\ell_{s_\ell}(\phi)} \cdot U_{\ell+1}^{F_1(\phi)} \cdots U_{\ell+q}^{F_q(\phi)} \mathbf y \end {multline*} be the map defined by counting isolated pseudo-holomorphic polygons in the symmetric product $\mathrm{Sym}^n({\mathcal{T}})$.
Here, $$\pi_2(\mathbf x, \Theta^{{\mathcal{Z}}}_{\eps^0, \eps^1}, \dots, \Theta^{{\mathcal{Z}}}_{\eps^{k-1}, \eps^k}, \mathbf y)$$ denotes the set of homotopy classes of polygons with edges on $\Ta, \Tb^{{\mathcal{Z}}^{\eps^0}}, \dots, \Tb^{{\mathcal{Z}}^{\eps^k}}$, in this cyclic order, and with the specified vertices. The number $\mu(\phi) \in {\mathbb{Z}}$ is the Maslov index, and $\mathcal {M}(\phi)$ is the moduli space of holomorphic polygons in the class $\phi$. The Maslov index has to be $1-k$ for the expected dimension of the moduli space of polygons to be zero. This is because the moduli space of conformal structures on a disk with $k+2$ marked points has dimension $(k+2)-3=k-1$. Note that this definition of $\mu$ is different from the one in \cite[Section 4.2]{BrDCov}, where it denoted expected dimension. In the special case $k=0$, we need to divide $\mathcal {M}(\phi)$ by the action of ${\mathbb{R}}$ by translations; the resulting ${D}^{{\mathcal{Z}}, \eps^0}_\mathbf s$ is the usual differential ${\partial}$. Define \begin {equation} \label {eq:dezede} {D}^{{\mathcal{Z}}, \eps}_\mathbf s : C^{{\mathcal{Z}}, \eps^0}_{\mathbf s, *} \to C^{{\mathcal{Z}}, \eps^0+\eps}_{\mathbf s, * + \|\eps\|-1}, \ \ \ {D}^{{\mathcal{Z}}, \eps}_\mathbf s = \sum_{\eps^0 \prec \eps^1 \prec \cdots \prec \eps^k = \eps^0 + \eps} {D}^{{\mathcal{Z}}, \eps^0 \prec \eps^1 \prec \cdots \prec \eps^k }_\mathbf s . \end {equation} The following is a particular case of \cite[Lemma 6.12]{LinkSurg}; compare also \cite[Lemma 9.7]{HolDisk} and \cite[Lemma 4.3]{BrDCov}: \begin {lemma} \label {lemma:d2} For any consistent collection ${\mathcal{Z}}= \{Z_1, \dots, Z_m \}$ and $\mathbf s \in J({\mathcal{Z}})$, the resulting $\mathcal{H}^{\mathcal{Z}}_\mathbf s= (C^{{\mathcal{Z}}, \eps}_\mathbf s, {D}^{{\mathcal{Z}}, \eps}_\mathbf s)_{\eps \in \mathbb{E}_m}$ is a hypercube of chain complexes.
\end {lemma} \begin {remark} \label {rem:note} For future reference, it is helpful to introduce a different notation for the maps \eqref{eq:order} and \eqref{eq:dezede}. First, let us look at \eqref{eq:order} in the case $k=m, \ \eps^0 = (0, \dots, 0)$ and $\eps^m = (1, \dots, 1)$. A string of immediate successors $\eps^0 \prec \cdots \prec \eps^m$ is the same as a re-ordering $(Z_{\sigma(1)}, \dots, Z_{\sigma(m)})$ of ${\mathcal{Z}}=\{Z_1, \dots, Z_m\}$, according to the permutation $\sigma$ in the symmetric group $S_m$ such that $$ {\mathcal{Z}}^{\eps^i} = {\mathcal{Z}}^{\eps^{i-1}} \cup \{Z_{\sigma(i)}\}.$$ We then write: $$ {D}^{(Z_{\sigma(1)}, \dots, Z_{\sigma(m)})}_\mathbf s = {D}^{{\mathcal{Z}}, \eps^0 \prec \eps^1 \prec \cdots \prec \eps^m }_\mathbf s.$$ In particular, we let ${D}^{({\mathcal{Z}})} _\mathbf s= {D}^{(Z_1,\dots, Z_m)}_\mathbf s$ be the map corresponding to the identity permutation. Further, we write ${D}^{{\mathcal{Z}}}_\mathbf s$ for the longest map ${D}^{{\mathcal{Z}}, (1, \dots, 1)}_\mathbf s$ in the hypercube $\mathcal{H}^{\mathcal{Z}}_\mathbf s$, that is, \begin {equation} \label {eq:dezeds} {D}^{\mathcal{Z}}_\mathbf s = \sum_{\sigma \in S_m} {D}^{(Z_{\sigma(1)}, \dots, Z_{\sigma(m)})}_\mathbf s. \end {equation} Observe that ${D}^{\mathcal{Z}}_\mathbf s$, unlike ${D}^{({\mathcal{Z}})}_\mathbf s$, is independent of the ordering of ${\mathcal{Z}}$. Observe also that an arbitrary map ${D}^{{\mathcal{Z}}, \eps}_\mathbf s$ from $\mathcal{H}^{\mathcal{Z}}_\mathbf s$ is the same as the longest map ${D}^{{\mathcal{Z}}^\eps}_\mathbf s$ in a sub-hypercube of $\mathcal{H}^{\mathcal{Z}}_\mathbf s$.
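For instance, if ${\mathcal{Z}} = \{Z_1, Z_2\}$, then ${D}^{{\mathcal{Z}}}_\mathbf s = {D}^{(Z_1, Z_2)}_\mathbf s + {D}^{(Z_2, Z_1)}_\mathbf s$, and the hypercube relations of Lemma~\ref{lemma:d2} amount (with each map taken at the appropriate vertex, over our base ring of characteristic two) to $$ {\partial} \circ {D}^{\{Z_1, Z_2\}}_\mathbf s + {D}^{\{Z_1, Z_2\}}_\mathbf s \circ {\partial} = {D}^{\{Z_1\}}_\mathbf s \circ {D}^{\{Z_2\}}_\mathbf s + {D}^{\{Z_2\}}_\mathbf s \circ {D}^{\{Z_1\}}_\mathbf s, $$ so the longest map ${D}^{\{Z_1, Z_2\}}_\mathbf s$ is a chain homotopy between the two composites of one-marking destabilization maps.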
In this notation, the result of Lemma~\ref{lemma:d2} can be written as: $$ \sum_{{\mathcal{Z}}' \subseteq {\mathcal{Z}}} {D}^{{\mathcal{Z}} \setminus {\mathcal{Z}}'}_\mathbf s \circ {D}^{{\mathcal{Z}}'}_\mathbf s = 0.$$ See Figure~\ref{fig:destabs} for a picture of the hypercube corresponding to destabilization at a set ${\mathcal{Z}} = \{Z_1, Z_2\}$ of two linked markings. \begin{figure} \begin{center} \input{destabs.pstex_t} \end{center} \caption {{\bf Destabilization maps for two linked markings.} The straight lines represent chain maps corresponding to destabilization at one marking, and the curved map is a chain homotopy between the two compositions. Each chain map ${D}_\mathbf s^{(Z_i)}$ could also have been written as ${D}_\mathbf s^{\{Z_i\}}$.} \label{fig:destabs} \end{figure} \end {remark} \subsection {Sublinks and inclusion maps} \label {sec:inclusions} Suppose that $M\subseteq L$ is a sublink. We choose an orientation on $M$ (possibly different from the one induced from $\orL$), and denote the corresponding oriented link by $\orM$. We let $I_+(\orL, \orM)$ (resp.\ $I_-(\orL, \orM)$) be the set of indices $i$ such that the component $L_i$ is in $M$ and its orientation induced from $\orL$ is the same as (resp.\ opposite to) the one induced from $\orM$.
For $i \in \{1, \dots, \ell\}$, we define a map $p_i^{\orM} : \bH(L)_i \to \bH(L)_i $ by $$ p_i^{\orM}(s) = \begin{cases} +\infty & \text{ if } i \in I_+(\orL, \orM), \\ -\infty & \text{ if } i \in I_-(\orL, \orM), \\ s & \text{ otherwise.} \end {cases}$$ Then, for $\mathbf s = (s_1, \dots, s_\ell) \in \bH(L)$, we set $$p^{\orM} (\mathbf s)= \bigl(p_1^{\orM}(s_1), \dots, p_\ell^{\orM}(s_\ell)\bigr)$$ and define an inclusion map $$ {\mathcal{I}}^{\orM}_\mathbf s : \mathfrak A^-(G, \mathbf s) \to \mathfrak A^- (G, p^{\orM}(\mathbf s))$$ by \begin {equation} \label {eq:Proj} {\mathcal{I}}^{\orM}_\mathbf s \mathbf x = \prod_{i \in I_+(\orL, \orM)} U_i^{\max(A_i(\mathbf x) - s_i, 0)} \cdot \prod_{i \in I_-(\orL, \orM)} U_i^{\max(s_i - A_i(\mathbf x), 0)} \cdot \mathbf x, \end {equation} provided the exponents are finite, that is, $s_i \neq -\infty$ for all $i \in I_+(\orL, \orM)$, and $s_i \neq +\infty$ for all $i \in I_-(\orL, \orM)$. These conditions will always be satisfied when we consider inclusion maps in this paper. Equations~\eqref{eix} and \eqref{eio} imply that ${\mathcal{I}}^{\orM}_\mathbf s$ is a chain map. Let $N$ be the complement of the sublink $M$ in $L$. We define a reduction map \begin {equation} \label {eq:psic} \psi^{\orM} : \bH(L) \longrightarrow \bH(N) \end {equation} as follows. The map $\psi^{\orM}$ depends only on the summands $\bH(L)_i$ of $\bH(L)$ corresponding to $L_i \subseteq N$. Each of these $L_i$'s appears in $N$ with a (possibly different) index $j_i$, so there is a corresponding summand $\bH(N)_{j_i}$ of $\bH(N)$. We then set $$ \psi^{\orM}_i : \bH(L)_i \to \bH(N)_{j_i}, \ \ s_i \mapsto s_i - \frac{{\operatorname{lk}}(L_i, \orM)}{2},$$ where $L_i$ is considered with the orientation induced from $\orL$, while $\orM$ is taken with its own orientation. We then define $\psi^{\orM}$ to be the direct sum of the maps $\psi^{\orM}_i$, pre-composed with the projection to the relevant factors. Note that $\psi^{\orM} = \psi^{\orM} \circ p^{\orM}$.
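To illustrate these maps in the simplest case, suppose $L = L_1 \cup L_2$ is a two-component link and $M = L_1$, with $\orM$ carrying the orientation induced from $\orL$, so that $I_+(\orL, \orM) = \{1\}$ and $I_-(\orL, \orM) = \emptyset$. Then $$ p^{\orM}(s_1, s_2) = (+\infty, s_2), \ \ \ {\mathcal{I}}^{\orM}_\mathbf s \mathbf x = U_1^{\max(A_1(\mathbf x) - s_1, 0)} \cdot \mathbf x, $$ and the reduction map takes $\mathbf s = (s_1, s_2)$ to $$ \psi^{\orM}(\mathbf s) = s_2 - \frac{{\operatorname{lk}}(L_2, \orM)}{2} \in \bH(N), \ \ N = L_2. $$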
\subsection {Destabilized complexes} \label {subsec:ks} Let ${\mathcal{Z}} = \{Z_1, \dots, Z_m\}$ be a consistent set of linked markings, and pick $\mathbf s \in J({\mathcal{Z}})$. In Section~\ref{sec:DestabSeveral} we introduced a Floer complex $\mathfrak A^-(\Ta, \Tb^{\mathcal{Z}}, \mathbf s)$ based on counting holomorphic curves. Our next goal is to describe this complex combinatorially. Let $L({\mathcal{Z}}) \subseteq L$ be the sublink consisting of those components $L_i$ such that at least one of the markings on $L_i$ is in ${\mathcal{Z}}$. We orient $L({\mathcal{Z}})$ as $\orL({\mathcal{Z}})$, so that a component $L_i$ is given the orientation coming from $\orL$ when ${\mathcal{Z}} \cap \mathbb O_i \neq \emptyset$, and is given the opposite orientation when ${\mathcal{Z}} \cap \Xs_i \neq \emptyset$. Moreover, we let $L(({\mathcal{Z}})) \subseteq L$ be the sublink consisting of the components $L_i$ such that either all $X$ or all $O$ markings on $L_i$ are in ${\mathcal{Z}}$. Consider the grid diagram $G^{{\mathcal{Z}}}$ obtained from $G$ by deleting the markings in $\Xs_i$ when ${\mathcal{Z}} \cap \mathbb O_i \neq \emptyset$, deleting the markings in $\mathbb O_i$ when ${\mathcal{Z}} \cap \Xs_i \neq \emptyset$, and finally deleting the rows and columns containing the markings in ${\mathcal{Z}}$. The $O$ markings that were already free in $G$ remain free markings in $G^{{\mathcal{Z}}}$. However, in $G^{\mathcal{Z}}$ there may be some additional free markings, coming from linked $O$ or $X$ markings in $G$ that were not in ${\mathcal{Z}}$, but were on the same link component as a marking in ${\mathcal{Z}}$. To be consistent with our previous conventions, we relabel all the newly free $X$ markings as $O$'s. Thus, $G^{{\mathcal{Z}}}$ becomes a grid diagram (with free markings) for the link $L - L({\mathcal{Z}})$, with the orientation induced from $\orL$.
We define complexes $\mathfrak A^-(G^{\mathcal{Z}}, \mathbf s)$ for $\mathbf s \in J({\mathcal{Z}})$ as in Section~\ref{subsec:ags}, but rather than assigning a separate $U$ variable to each newly free marking, we keep track of these markings using the same $U$ variables they had in $G$. This way we lose the variables $U_i$ for $L_i \subseteq L(({\mathcal{Z}}))$, so the complex $\mathfrak A^-(G^{\mathcal{Z}}, \mathbf s)$ is naturally defined over the power series ring in the variables $U_i$ for $i=1, \dots, \ell$ with $L_i \not\subseteq L(({\mathcal{Z}}))$, as well as $U_{\ell+1}, \dots, U_{\ell+q}$. Note that holomorphic disks of index one in the symmetric product of $G^{\mathcal{Z}}$ are in one-to-one correspondence with rectangles on $G^{\mathcal{Z}}$. For $Z \in {\mathcal{Z}}$, let $j(Z) \in \{1, \dots, \ell\}$ be such that $L_{j(Z)}$ is the component of $L$ containing $Z$, and let $j(Z)'$ correspond to the component $L_{j(Z)'}$ containing the markings in the row exactly under the row through $Z$. We define a complex $$ \mathcal K({\mathcal{Z}}) = \bigotimes_{Z \in {\mathcal{Z}}} \Bigl( \mathcal R \xrightarrow{U_{j(Z)} - U_{j(Z)'} } \mathcal R \Bigr).$$ Using the argument in \cite[Proposition 12.2]{LinkSurg}, for a suitable choice of almost complex structure on the symmetric product $\mathrm{Sym}^n ({\mathcal{T}})$, we have an isomorphism \begin {equation} \label {eq:psiz} \Psi^{\mathcal{Z}}_\mathbf s : \mathfrak A^-(\Ta, \Tb^{\mathcal{Z}}, \mathbf s) \to \mathfrak A^-(G^{\mathcal{Z}}, \psi^{\orL({\mathcal{Z}})}(\mathbf s))[[\{U_i\}_{ L_i \subseteq L(({\mathcal{Z}})) } ]] \otimes_{\mathcal R} \mathcal K({\mathcal{Z}}). \end {equation} Here, the map $ \psi^{\orL({\mathcal{Z}})} : \bH(L) \longrightarrow \bH(L-L({\mathcal{Z}}))$ is as in Equation~\eqref{eq:psic}. To get an understanding of Equation~\eqref{eq:psiz}, note that there is a clear one-to-one correspondence between the generators of each side.
The differentials correspond likewise: on the left hand side, apart from rectangles, we also have, for example, annuli such as the one shown in Figure~\ref{fig:annuli}, which give rise to the factors in the complex $\mathcal K({\mathcal{Z}})$. For a generic almost complex structure on the symmetric product, we still have a map $\Psi^{\mathcal{Z}}_\mathbf s$, but this is a chain homotopy equivalence rather than an isomorphism. \begin{figure} \begin{center} \input{annuli.pstex_t} \end{center} \caption {{\bf The complex $\mathcal K({\mathcal{Z}})$.} The figure shows part of a grid diagram with some arcs on the $\alpha$ and $\beta^{{\mathcal{Z}}}$ curves drawn. There are two intersection points (marked as bullets) between the alpha curve below the marking $Z_1 \in {\mathcal{Z}}$, and the corresponding beta curve. There are two differentials going from the left generator to the right generator: a bigon containing $Z_1$ and an annulus containing $Z_2$, both drawn shaded in the diagram. This produces one of the factors in the definition of the complex $\mathcal K({\mathcal{Z}})$. } \label{fig:annuli} \end{figure} A particular instance of the discussion above appears when we have a sublink $M \subseteq L$, with some orientation $\orM$. We then set $${\mathcal{Z}}(\orM) = \bigcup_{i \in I_+(\orL, \orM)} \mathbb O_i \cup \bigcup_{i \in I_-(\orL, \orM)} \Xs_i. $$ Note that $\orL({\mathcal{Z}}(\orM)) = \orM$. In this setting, the destabilized grid diagram $G^{L - M}=G^{{\mathcal{Z}}(\orM)}$ is obtained from $G$ by eliminating all rows and columns on which $M$ is supported. It represents the link $L - M$.
For simplicity, we denote $$\mathcal K(\orM) = \mathcal K({\mathcal{Z}}(\orM)), \ \ J(\orM) = J({\mathcal{Z}}(\orM)),$$ and, for $\mathbf s \in J(\orM)$, \begin {equation} \label {eq:psim} \Psi^{\orM}_\mathbf s = \Psi^{{\mathcal{Z}}(\orM)}_\mathbf s: \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}(\orM)}, \mathbf s) \to \mathfrak A^-(G^{L-M}, \psi^{\orM}(\mathbf s))[[\{U_i \}_{L_i \subseteq M }]] \otimes_{\mathcal R} \mathcal K(\orM). \end {equation} \subsection {Destabilization of a sublink} \label {sec:desublink} Let $M \subseteq L$ be a sublink, endowed with an arbitrary orientation $\orM$. For any $\mathbf s \in J(\orM) = J({\mathcal{Z}}(\orM))$, we construct a hyperbox of chain complexes $\mathcal{H}^{\orL, \orM}_{ \mathbf s}$, as follows. Order the components of $M$ according to their ordering as components of $L$: $$ M = L_{i_1} \cup \dots \cup L_{i_{m}}, \ \ i_1 < \dots < i_{m}.$$ For $j=1, \dots, m$, let us denote $M_j = L_{i_j}$ for simplicity, and equip $M_j$ with the orientation $\orM_j$ induced from $\orM$. Then ${\mathcal{Z}}(\orM_j)$ is either $\mathbb O_{i_j}$ or $\Xs_{i_j}$. In either case, we have an ordering of its elements, so we can write $$ {\mathcal{Z}}(\orM_j) = \{Z^{\orM_j}_1, \dots, Z^{\orM_j}_{d_j} \},$$ where $d_j$ is the cardinality of ${\mathcal{Z}}(\orM_j)$. The hyperbox $\mathcal{H}^{\orL, \orM}_\mathbf s$ is $m$-dimensional, of size ${\mathbf {d}}^M = (d_1, \dots, d_m)$. For each multi-index $\eps = (\eps_1, \dots, \eps_m) \in \mathbb{E}({\mathbf {d}}^M)$, we let ${\mathcal{Z}}(\orM)^\eps \subseteq {\mathcal{Z}}(\orM)$ be the collection of markings $$ {\mathcal{Z}}(\orM)^{\eps} = \bigcup_{j=1}^m \{Z^{\orM_j}_1, \dots, Z^{\orM_j}_{\eps_j} \}. $$ We then let $$\mathbb\beta^{\eps} = \mathbb\beta^{{\mathcal{Z}}(\orM)^\eps}$$ be the collection of beta curves destabilized at the points of ${\mathcal{Z}}(\orM)^\eps$. 
For each $\eps$, consider the Heegaard diagram $\mathcal{H}^{\orL, \orM}_{\eps} = ({\mathcal{T}}, \mathbb\alpha, \mathbb\beta^\eps)$, with the $z$ basepoints being the markings in $\Xs_i$ for $L_i \not\subseteq M$, and the $w$ basepoints being the markings in ${\mathcal{Z}}(\orM)$, together with those in $\mathbb O_i$ for $L_i \not\subseteq M$. This diagram represents the link $\orL - M$. At each vertex $\eps \in \mathbb{E}({\mathbf {d}}^M)$ we place the Floer complex $$C_\mathbf s^{\orL, \orM, \eps} = \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}(\orM)^\eps}, \mathbf s),$$ and along the faces we have linear maps $${D}^{\orL, \orM, \eps'}_\mathbf s = {D}_\mathbf s^{{\mathcal{Z}}(\orM)^\eps, \eps'}, \ \eps'\in \mathbb{E}_m \subseteq \mathbb{E}({\mathbf {d}}^M),$$ where $ {D}_\mathbf s^{{\mathcal{Z}}(\orM)^\eps, \eps'}$ are as in~\eqref{eq:dezede} above. We compress the hyperbox of Floer complexes associated to $\mathcal{H}^{\orL, \orM}_{\mathbf s}$, cf.\ Section~\ref{sec:hyper}, and define \begin {equation} \label {eq:dezed} \hat {D}^{\orM}_\mathbf s : \mathfrak A^-(G, \mathbf s)\to \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}(\orM)}, \mathbf s) \end {equation} to be the longest diagonal map in the compressed hypercube $\hat \mathcal{H}^{\orL, \orM}_{\mathbf s}$. For example, when $M = L_i$ is a single component, the map $\hat {D}^{\orM}_\mathbf s$ is a composition of the triangle maps corresponding to handleslides over the basepoints in ${\mathcal{Z}}(\orM)$, in the given order. When $M$ has several components, it is a composition of more complicated polygon maps, corresponding to chain homotopies (of higher order) between compositions of the handleslide maps. Note that for each $\mathbf s \in \mathbb{H}(L)$, we have $p^{\orM}(\mathbf s) \in J({\mathcal{Z}}(\orM))$ by definition.
Therefore, by composing the maps \eqref{eq:Proj}, \eqref{eq:dezed} and \eqref{eq:psim} (the latter two taken with respect to $p^{\orM}(\mathbf s)$ rather than $\mathbf s$), we obtain a map \begin {equation} \begin{aligned} \Phi_\mathbf s^{\orM} &: \mathfrak A^- (G, \mathbf s) \longrightarrow \mathfrak A^-(G^{L-M}, \psi^{\orM}(\mathbf s))[[\{U_i \}_{L_i \subseteq M }]] \otimes_{\mathcal R} \mathcal K(\orM),\\ \Phi_\mathbf s^{\orM} &= \Psi_{p^{\orM}(\mathbf s)}^{\orM} \circ \hat{D}_{p^{\orM}(\mathbf s)}^{\orM} \circ {\mathcal{I}}^{\orM}_\mathbf s , \end{aligned} \label {eq:phims}\end {equation} defined for any $\mathbf s \in \bH(L)$. For simplicity, let us denote \begin {equation} \label {eq:dems} {D}^{\orM}_\mathbf s = \Psi_{\mathbf s}^{\orM} \circ \hat{D}_{\mathbf s}^{\orM} : \mathfrak A^-(G, \mathbf s) \to \mathfrak A^-(G^{L-M}, \psi^{\orM}(\mathbf s))[[\{U_i \}_{L_i \subseteq M }]] \otimes_{\mathcal R} \mathcal K(\orM). \end {equation} The following is a variant of \cite[Proposition 7.4]{LinkSurg}, discussed in the proof of \cite[Proposition 12.6]{LinkSurg}: \begin {lemma} \label {lemma:comm} Let $M_1 , M_2 \subseteq L$ be two disjoint sublinks, with orientations $\orM_1$ and $\orM_2$. For any $\mathbf s \in J(\orM_1)$, we have: \begin {equation} \label {eq:comm} {\mathcal{I}}^{\orM_2}_{\psi^{\orM_1}(\mathbf s)} \circ {D}^{\orM_1}_\mathbf s = {D}^{\orM_1}_{p^{\orM_2}(\mathbf s)} \circ {\mathcal{I}}^{\orM_2}_\mathbf s. \end {equation} \end {lemma} For any $\orM$ and $\mathbf s \in \bH(L)$, by applying Lemma~\ref{lemma:comm} and the properties of compression from Section~\ref{sec:hyper}, we get: \begin {equation} \label {eq:PhiPhi} \sum_{\orM_1 \amalg \orM_2 = \orM} \Phi^{\orM_2}_{\psi^{\orM_1} (\mathbf s)} \circ \Phi^{\orM_1}_{\mathbf s} = 0, \end {equation} where $\orM_1$ and $\orM_2$ are considered with the orientations induced from $\orM$. See \cite[Proposition 12.6]{LinkSurg}. 
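As a simple instance of Equation~\eqref{eq:PhiPhi}, suppose $\orM$ consists of a single oriented component. The only decompositions $\orM_1 \amalg \orM_2 = \orM$ are $(\emptyset, \orM)$ and $(\orM, \emptyset)$, and one can check that $\Phi^{\emptyset}_\mathbf s$ is simply the internal differential, so Equation~\eqref{eq:PhiPhi} reads $$ \Phi^{\orM}_{\mathbf s} \circ \Phi^{\emptyset}_{\mathbf s} + \Phi^{\emptyset}_{\psi^{\orM}(\mathbf s)} \circ \Phi^{\orM}_{\mathbf s} = 0, $$ that is, $\Phi^{\orM}_\mathbf s$ is a chain map.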
\subsection {The surgery theorem} \label {sec:surgery} Let us fix a framing $\Lambda$ for the link $\orL$. For a component $L_i$ of $L$, we let $\Lambda_i$ be its induced framing, thought of as an element in $H_1(S^3-L)$. This last group can be identified with ${\mathbb{Z}}^\ell$ using the basis of oriented meridians for the components. Under this identification, for $i \neq j$, the $j\th$ component of the vector $\Lambda_i$ is the linking number between $L_i$ and $L_j$. The $i\th$ component of $\Lambda_i$ is the homological framing coefficient of $L_i$ as a knot. Given a sublink $N \subseteq L$, we let $\Omega(N)$ be the set of all possible orientations on $N$. For $\orN \in \Omega(N)$, we let $$ \Lambda_{\orL, \orN} = \sum_{i \in I_-(\orL, \orN)} \Lambda_i \in {\mathbb{Z}}^\ell.$$ We consider the $\mathcal R$-module \begin {equation} \label {eq:cgl} {\mathcal{C}}^-(G, \Lambda) = \bigoplus_{M \subseteq L}\,\, \prod_{\mathbf s \in \mathbb{H}(L)} \Bigl( \mathfrak A^-(G^{L - M}, \psi^{M}(\mathbf s))[[\{ U_i\}_{ L_i \subseteq M}]] \Bigr) \otimes_\mathcal R \mathcal K(M), \end {equation} where $\psi^{M}$ simply means $\psi^{\orM}$ with $\orM$ being the orientation induced from the one on $\orL$. We equip ${\mathcal{C}}^-(G, \Lambda)$ with a boundary operator ${\mathcal{D}}^-$ as follows.
For $\mathbf s \in \mathbb{H}(L)$ and $\mathbf x \in \bigl( \mathfrak A^-(G^{L - M}, \psi^{M}(\mathbf s))[[\{ U_i\}_{ L_i \subseteq M}]] \bigr) \otimes_\mathcal R \mathcal K(M)$, we set \begin {align*} {\mathcal{D}}^-(\mathbf s, \mathbf x) &= \sum_{\vphantom{\orN}N \subseteq L - M}\,\, \sum_{\orN \in \Omega(N)} (\mathbf s + \Lambda_{\orL, \orN}, \Phi^{\orN}_\mathbf s(\mathbf x)) \\ &\in \bigoplus_{\vphantom{\orN}N \subseteq L - M} \,\,\bigoplus_{\orN \in \Omega(N)} \Bigl( \mathbf s+ \Lambda_{\orL, \orN}, \bigl( \mathfrak A^-(G^{L-M-N}, \psi^{M \cup \orN} (\mathbf s)) [[\{ U_i\}_{ L_i \subseteq M \cup N}]]\bigr) \otimes_\mathcal R \mathcal K(M \cup N) \Bigr) \\ &\subset {\mathcal{C}}^-(G, \Lambda). \end {align*} According to Equation~\eqref{eq:PhiPhi}, we have $({\mathcal{D}}^-)^2 = 0$, so ${\mathcal{C}}^-(G, \Lambda)$ is a chain complex. Let $H(L, \Lambda) \subseteq {\mathbb{Z}}^\ell$ be the lattice generated by $\Lambda_i, i=1, \dots, \ell$. The complex ${\mathcal{C}}^-(G, \Lambda)$ splits into a direct product of complexes ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$, according to equivalence classes ${\mathfrak{u}} \in \mathbb{H}(L)/H(L, \Lambda)$. (Note that $H(L, \Lambda)$ is not a subset of $\mathbb{H}(L)$, but it still acts on $\mathbb{H}(L)$ naturally by addition.) Further, the space of equivalence classes $\mathbb{H}(L)/H(L, \Lambda)$ can be canonically identified with the space of ${\operatorname{Spin^c}}$ structures on the three-manifold $S^3_\Lambda(L)$ obtained from $S^3$ by surgery along the framed link $(L, \Lambda)$. Given a ${\operatorname{Spin^c}}$ structure ${\mathfrak{u}}$ on $S^3_\Lambda(L)$, we set $$ {\mathfrak{d}}({\mathfrak{u}}) = \gcd_{\xi \in H_2(S^3_\Lambda(L); {\mathbb{Z}})} \langle c_1({\mathfrak{u}}), \xi\rangle,$$ where $c_1({\mathfrak{u}})$ is the first Chern class of the ${\operatorname{Spin^c}}$ structure.
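For example, if $L = K$ is a knot ($\ell = 1$) with framing $\Lambda = n \neq 0$, then $\mathbb{H}(L) \cong {\mathbb{Z}}$ and $H(L, \Lambda) = n{\mathbb{Z}}$, so $$ \mathbb{H}(L)/H(L, \Lambda) \cong {\mathbb{Z}}/n{\mathbb{Z}} \cong {\operatorname{Spin^c}}(S^3_n(K)), $$ in agreement with $|H^2(S^3_n(K); {\mathbb{Z}})| = |n|$. In this case $H_2(S^3_n(K); {\mathbb{Z}}) = 0$, so ${\mathfrak{d}}({\mathfrak{u}}) = 0$ for every ${\mathfrak{u}}$, and the relative grading constructed below is a ${\mathbb{Z}}$-grading.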
Thinking of ${\mathfrak{u}}$ as an equivalence class in $\mathbb{H}(L)$, we can find a function $\nu: {\mathfrak{u}} \to {\mathbb{Z}}/({\mathfrak{d}}({\mathfrak{u}}){\mathbb{Z}})$ with the property that $$ \nu(\mathbf s + \Lambda_i) \equiv \nu(\mathbf s) + 2s_i,$$ for any $i=1, \dots, \ell$ and $\mathbf s = (s_1, \dots, s_\ell) \in {\mathfrak{u}}$. The function $\nu$ is unique up to the addition of a constant. For $\mathbf s \in \mathbb{H}(L)$ and $\mathbf x \in \bigl( \mathfrak A^-(G^{L - M}, \psi^{M}(\mathbf s))[[\{ U_i\}_{ L_i \subseteq M}]] \bigr) \otimes_\mathcal R \mathcal K(M)$, let $$ \mu(\mathbf s, \mathbf x) = \mu^M_\mathbf s(\mathbf x) +\nu(\mathbf s) - |M|,$$ where $\mu^M_\mathbf s = \mu_{\psi^M(\mathbf s)}$ is as in Equation~\eqref{eq:ms}, and $|M|$ denotes the number of components of $M$. Then $\mu$ gives a relative ${\mathbb{Z}}/({\mathfrak{d}}({\mathfrak{u}}){\mathbb{Z}})$-grading on the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$. The differential ${\mathcal{D}}^-$ decreases $\mu$ by one modulo ${\mathfrak{d}}({\mathfrak{u}})$. The following is \cite[Theorem 12.7]{LinkSurg}: \begin {theorem} \label {thm:Surgery} Fix a grid diagram $G$ with $q\geq 1$ free markings, such that $G$ represents an oriented, $\ell$-component link $\orL$ in $S^3$. Fix also a framing $\Lambda$ of $L$. Then, for every ${\mathfrak{u}} \in {\operatorname{Spin^c}}(S^3_\Lambda(L))$, we have an isomorphism of relatively graded $\mathbb F[[U]]$-modules: \begin {equation} \label {eq:surgery} H_*({\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}), {\mathcal{D}}^-) \cong \HFm_{*}(S^3_\Lambda(L), {\mathfrak{u}}) \otimes_\mathbb F H_*(T^{n-q-\ell}), \end {equation} where $n$ is the grid number of $G$. \end {theorem} Observe that the left hand side of Equation~\eqref{eq:surgery} is a priori a module over $\mathcal R = {\mathbb{F}}[[U_1, \dots, U_{\ell+q}]]$. 
Part of the claim of Theorem~\ref{thm:Surgery} is that all the $U_i$'s act the same way, so that we have a $\mathbb F[[U]]$-module. \subsection {Maps induced by surgery} \label {sec:Maps} Let $L' \subseteq L$ be a sublink, with the orientation $\orL'$ induced from~$\orL$. Denote by $W=W_{\Lambda}(L', L)$ the cobordism from $S^3_{\Lambda|_{L'}}(L')$ to $S^3_{\Lambda}(L)$ given by surgery on $L - L'$, framed with the restriction of $\Lambda$. Let $H(L, \Lambda|_{L'}) \subseteq {\mathbb{Z}}^\ell$ be the sublattice generated by the framings $\Lambda_i$, for $L_i \subseteq L'$. There is an identification: $${\operatorname{Spin^c}} ( W_{\Lambda}(L', L)) \cong \mathbb{H}(L)/H(L, \Lambda|_{L'})$$ under which the natural projection $$\pi^{L, L'} : \bigl( \mathbb{H}(L)/H(L, \Lambda|_{L'})\bigr) \longrightarrow \bigl(\mathbb{H}(L)/H(L, \Lambda)\bigr)$$ corresponds to restricting the ${\operatorname{Spin^c}}$ structures to $S^3_{\Lambda}(L)$, while the map $$ \psi^{L-L'}: \bigl(\mathbb{H}(L)/H(L, \Lambda|_{L'})\bigr) \to \bigl(\mathbb{H}(L')/H(L', \Lambda|_{L'})\bigr) $$ corresponds to restricting them to $S^3_{\Lambda|_{L'}}(L')$. Observe that, for every equivalence class $\mathbf t \in \mathbb{H}(L)/H(L, \Lambda|_{L'})$, $$ {\mathcal{C}}^-(G, \Lambda)^{L', \mathbf t} = \bigoplus_{L - L' \subseteq M \subseteq L}\,\prod_{\{\mathbf s \in \mathbb{H}(L)|[\mathbf s] = \mathbf t\}} \Bigl( \mathfrak A^-(G^{L - M}, \psi^{M}(\mathbf s))[[\{ U_i\}_{ L_i \subseteq M}]] \Bigr) \otimes_\mathcal R \mathcal K(M)$$ is a subcomplex of ${\mathcal{C}}^-(G, \Lambda, \pi^{L, L'}(\mathbf t)) \subseteq {\mathcal{C}}^-(G, \Lambda)$. 
This subcomplex is chain homotopy equivalent to $${\mathcal{C}}^-(G^{L'}, \Lambda|_{L'}, \psi^{L-L'}(\mathbf t)) \otimes H_*(T^{(n-n')-(\ell-\ell')}),$$ where \begin{multline*} {\mathcal{C}}^-(G^{L'}, \Lambda|_{L'}, \psi^{L-L'}(\mathbf t)) =\\ \bigoplus_{M' \subseteq L'} \prod_{\{\mathbf s' \in \mathbb{H}(L')| [\mathbf s'] = \psi^{L-L'}(\mathbf t)\}} \Bigl( \mathfrak A^-(G^{L' - M'}, \psi^{M'}(\mathbf s'))[[\{ U_i\}_{ L_i \subseteq M'}]] \Bigr) \otimes_{\mathcal R'} \mathcal K(M') \end{multline*} and $\mathcal R'$ is the power series ring in the $U_i$ variables from $L'$. The chain homotopy equivalence is induced by taking $M$ to $M' = M - (L-L')$, $\mathbf s$ to $\mathbf s'=\psi^{\orL - \orL'}(\mathbf s)$, and getting rid of the $U_i$ variables from $L - L'$ via relations coming from $\mathcal K(L-L')$. Theorem~\ref{thm:Surgery} implies that the homology of ${\mathcal{C}}^-(G, \Lambda)^{L', \mathbf t}$ is isomorphic to $$ \HFm_{*}(S^3_{\Lambda|_{L'}}(L'), \mathbf t|_{S^3_{\Lambda|_{L'}}(L')}) \otimes H_*(T^{n-q-\ell}).$$ In \cite{HolDiskFour}, Ozsv\'ath and Szab\'o associated a map $F^-_{W,\mathbf t}$ to any cobordism $W$ between connected three-manifolds, together with a ${\operatorname{Spin^c}}$ structure $\mathbf t$ on that cobordism. In the case when the cobordism $W$ consists only of two-handles (i.e., is given by integral surgery on a link), the following theorem gives a way of understanding the map $F^-_{W, \mathbf t}$ in terms of grid diagrams: \begin {theorem} \label {thm:Cobordisms} Let $\orL \subset S^3$ be an $\ell$-component link, $L' \subseteq L$ a sublink, $G$ a grid diagram for $\orL$ of grid number $n$ and having $q \geq 1$ free markings, and $\Lambda$ a framing of $L$. Set $W=W_{\Lambda}(L', L)$.
Then, for any $\mathbf t \in {\operatorname{Spin^c}} ( W) \cong \mathbb{H}(L)/H(L, \Lambda|_{L'})$, the following diagram commutes: $$\begin {CD} H_*({\mathcal{C}}^-(G, \Lambda)^{L', \mathbf t}) @>{\phantom{F^-_{W, \mathbf t} \otimes {\operatorname{Id}}}}>> H_*({\mathcal{C}}^-(G, \Lambda, \pi^{L, L'}(\mathbf t))) \\ @V{\cong}VV @VV{\cong}V \\ \HFm_{*}(S^3_{\Lambda|_{L'}}(L'), \mathbf t|_{S^3_{\Lambda|_{L'}}(L')}) \otimes H_*(T^{n-q-\ell}) @>{F^-_{W, \mathbf t} \otimes {\operatorname{Id}}}>> \HFm_{*}(S^3_{\Lambda}(L), \mathbf t|_{S^3_{\Lambda}(L)}) \otimes H_*(T^{n-q-\ell}). \end {CD}$$ Here, the top horizontal map is induced from the inclusion of chain complexes, while the two vertical isomorphisms are the ones from Theorem~\ref{thm:Surgery}. \end {theorem} Theorem~\ref{thm:Cobordisms} is basically \cite[Theorem 11.1]{LinkSurg}, but stated here in the particular setting of grid diagrams. See \cite[Remark 12.8]{LinkSurg}. \subsection {Other versions} The chain complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ was constructed so that the version of Heegaard Floer homology appearing in Theorem~\ref{thm:Surgery} is $\HFm$. We now explain how one can construct similar chain complexes $\hat {\mathcal{C}}(G, \Lambda, {\mathfrak{u}}), {\mathcal{C}}^+(G, \Lambda, {\mathfrak{u}})$ and ${\mathcal{C}}^{\infty}(G, \Lambda, {\mathfrak{u}})$, corresponding to the theories $\widehat{\mathit{HF}}, \mathit{HF}^+$ and $\HFinf$. The chain complex $\hat {\mathcal{C}}(G, \Lambda, {\mathfrak{u}})$ is simply obtained from ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ by setting one of the variables $U_i$ equal to zero. Its homology computes $\widehat{\mathit{HF}}(S^3_{\Lambda}(L), {\mathfrak{u}}) \otimes H_*(T^{n-q-\ell})$. The chain complex ${\mathcal{C}}^\infty(G, \Lambda, {\mathfrak{u}})$ is obtained from ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ by inverting all the $U_i$ variables.
It is a module over the ring of semi-infinite Laurent polynomials $$\mathcal R^{\infty} = \mathbb F[[U_1, \dots, U_{\ell+q}; U_1^{-1},\dots, U_{\ell+q}^{-1}]]. $$ In other words, $\mathcal R^{\infty}$ consists of those power series in the $U_i$'s that are sums of monomials with degrees bounded from below. Note that ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ is a subcomplex of ${\mathcal{C}}^\infty(G, \Lambda, {\mathfrak{u}})$. We denote the quotient complex by ${\mathcal{C}}^+(G, \Lambda, {\mathfrak{u}})$. Theorems~\ref{thm:Surgery} and \ref{thm:Cobordisms} admit the following extension: \begin {theorem} \label {thm:AllVersions} Fix a grid diagram $G$ (of grid number $n$, and with $q \geq 1$ free markings) for an oriented, $\ell$-component link $\orL$ in $S^3$, and fix a framing $\Lambda$ of $L$. Choose also an ordering of the components of $\orL$, as well as of the $O$ and $X$ markings on the grid $G$. Set $V = H_*(T^{n-q-\ell})$. Then, for every ${\mathfrak{u}} \in {\operatorname{Spin^c}}(S^3_\Lambda(L)) \cong \mathbb{H}(L)/H(L, \Lambda)$, there are vertical isomorphisms and horizontal long exact sequences making the following diagram commute: $$\begin {CD} \cdots \to H_*({\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})) @>>> H_*({\mathcal{C}}^{\infty}(G, \Lambda, {\mathfrak{u}})) @>>> H_*({\mathcal{C}}^+(G, \Lambda, {\mathfrak{u}})) \to \cdots\\ @VV{\cong}V @VV{\cong}V @VV{\cong}V \\ \cdots \to \HFm_{*}(S^3_\Lambda(L), {\mathfrak{u}}) \otimes V @>>> \HFinf_{*}(S^3_\Lambda(L), {\mathfrak{u}}) \otimes V @>>> \mathit{HF}^+_{*}(S^3_\Lambda(L), {\mathfrak{u}}) \otimes V \to \cdots \end {CD}$$ Furthermore, the maps in these diagrams behave naturally with respect to cobordisms, in the sense that there are commutative diagrams analogous to those in Theorem~\ref{thm:Cobordisms}, involving the cobordism maps $F^-_{W, {\mathfrak{t}}}, F^{\infty}_{W, {\mathfrak{t}}}, F^+_{W, {\mathfrak{t}}}$. \end {theorem} Compare \cite[Theorem 11.2]{LinkSurg}.
\subsection {Mixed invariants of closed four-manifolds} We recall the definition of the closed four-manifold invariant from \cite{HolDiskFour}. Let $X$ be a closed, oriented four-manifold with $b_2^+(X) \geq 2$. By puncturing $X$ in two points we obtain a cobordism $W$ from $S^3$ to $S^3$. We can cut $W$ along a three-manifold $N$ so as to obtain two cobordisms $W_1, W_2$ with $b_2^+(W_i) > 0$; further, the manifold $N$ can be chosen such that $\delta H^1(N; \Z) \subset H^2(W; \Z)$ is trivial. (If this is the case, $N$ is called an {\em admissible cut}.) Let ${\mathfrak{t}}$ be a ${\operatorname{Spin^c}}$ structure on $X$ and ${\mathfrak{t}}_1, {\mathfrak{t}}_2$ its restrictions to $W_1, W_2$. In this situation, the cobordism maps $$ F^-_{W_1, {\mathfrak{t}}_1} : \HFm(S^3) \to \HFm(N, {\mathfrak{t}}|_N)$$ and $$ F^+_{W_2, {\mathfrak{t}}_2}: \mathit{HF}^+(N, {\mathfrak{t}}|_N) \to \mathit{HF}^+(S^3)$$ factor through $\mathit{HF}_{\operatorname{red}}(N, {\mathfrak{t}}|_N)$, where $$ \mathit{HF}_{\operatorname{red}} = {\operatorname{Coker}}(\HFinf \to \mathit{HF}^+) \cong {\operatorname{Ker}} (\HFm \to \HFinf).$$ By composing the maps to and from $\mathit{HF}_{\operatorname{red}}$ we obtain the mixed map $$ F^{\operatorname{mix}}_{W, {\mathfrak{t}}}: \HFm(S^3) \to \mathit{HF}^+(S^3),$$ which changes degree by the quantity $$ d({\mathfrak{t}}) = \frac{c_1({\mathfrak{t}})^2 - 2\chi(X) - 3\sigma(X)}{4}.$$ Let $\Theta_-$ be the maximal degree generator in $\HFm(S^3)$. Clearly the map $ F^{\operatorname{mix}}_{W, {\mathfrak{t}}}$ can be nonzero only when $d({\mathfrak{t}})$ is even and nonnegative. If this is the case, the value \begin {equation} \label {eq:mixedOS} \Phi_{X, {\mathfrak{t}}} = U^{d({\mathfrak{t}})/2} \cdot F^{\operatorname{mix}}_{W, {\mathfrak{t}}}(\Theta_-) \in \mathit{HF}^+_{0}(S^3) \cong {\mathbb{F}} \end {equation} is an invariant of the four-manifold $X$ and the ${\operatorname{Spin^c}}$ structure ${\mathfrak{t}}$. 
It is conjecturally the same as the Seiberg-Witten invariant of $(X, {\mathfrak{t}})$. \begin {definition} \label {def:lp} Let $X$ be a closed, oriented four-manifold with $b_2^+(X) \geq 2$. A {\em cut link presentation for $X$} consists of a link $L \subset S^3$, a decomposition of $L$ as a disjoint union $$ L = L_1 \amalg L_2 \amalg L_3,$$ and a framing $\Lambda$ for $L$ (with restrictions $\Lambda_i$ to $L_i, i=1, \dots, 3$) with the following properties: \begin {itemize} \item $S^3_{\Lambda_1}(L_1)$ is a connected sum of $m$ copies of $S^1 \times S^2$, for some $m \geq 0$. We denote by $W_1$ the cobordism from $S^3$ to $\#^m (S^1 \times S^2)$ given by $m$ one-handle attachments. \item $S^3_{\Lambda_1 \cup \Lambda_2 \cup \Lambda_3} (L_1 \cup L_2 \cup L_3)$ is a connected sum of $m'$ copies of $S^1 \times S^2$, for some $m' \geq 0$. We denote by $W_4$ the cobordism from $\#^{m'} (S^1 \times S^2)$ to $S^3$ given by $m'$ three-handle attachments. \item If we denote by $W_2$ resp.\ $W_3$ the cobordisms from $S^3_{\Lambda_1}(L_1)$ to $S^3_{\Lambda_1\cup \Lambda_2}(L_1 \cup L_2)$, resp.\ from $S^3_{\Lambda_1\cup \Lambda_2}(L_1 \cup L_2)$ to $S^3_{\Lambda_1 \cup \Lambda_2 \cup \Lambda_3} (L_1 \cup L_2 \cup L_3)$, given by surgery on $L_2$ resp.\ $L_3$ (i.e., consisting of two-handle additions), then $$ W = W_1 \cup W_2 \cup W_3 \cup W_4$$ is the cobordism from $S^3$ to $S^3$ obtained from $X$ by deleting two copies of $B^4$. \item The manifold $N=S^3_{\Lambda_1\cup \Lambda_2}(L_1 \cup L_2)$ is an admissible cut for $W$, i.e., $b_2^+(W_1 \cup W_2) > 0, b_2^+(W_3 \cup W_4) > 0$, and $\delta H^1(N) =0$ in $H^2(W)$. \end {itemize} \end {definition} It is proved in \cite[Lemma 8.7]{LinkSurg} that any closed, oriented four-manifold $X$ with $b_2^+(X) \geq 2$ admits a cut link presentation. \begin {definition} Let $X$ be a closed, oriented four-manifold with $b_2^+(X) \geq 2$.
A {\em grid presentation} $\Gamma$ for $X$ consists of a cut link presentation $(L = L_1 \cup L_2 \cup L_3, \Lambda)$ for $X$, together with a grid presentation for $L$. \end {definition} The four-manifold invariant $\Phi_{X, {\mathfrak{t}}}$ can be expressed in terms of a grid presentation $\Gamma$ for $X$ as follows. By combining the maps $F^-_{W_2, {\mathfrak{t}}|_{W_2}}$ and $F^+_{W_3, {\mathfrak{t}}|_{W_3}}$ using their factorization through $\mathit{HF}_{\operatorname{red}}$, we obtain a mixed map $$ F^{{\operatorname{mix}}}_{W_2 \cup W_3, {\mathfrak{t}}|_{W_2 \cup W_3}} : \HFm(\#^m (S^1 \times S^2)) \to \mathit{HF}^+(\#^{m'} (S^1 \times S^2)).$$ Using Theorem~\ref{thm:AllVersions}, we can express the maps $F^-_{W_2, {\mathfrak{t}}|_{W_2}}$ and $F^+_{W_3, {\mathfrak{t}}|_{W_3}}$ (or, more precisely, their tensor product with the identity on $V= H_*(T^{n-\ell})$) in terms of counts of holomorphic polygons on a symmetric product of the grid. Combining these polygon counts, we get a mixed map $$ F^{{\operatorname{mix}}}_{\Gamma, {\mathfrak{t}}}: H_*({\mathcal{C}}^-(G, \Lambda)^{L_1, {\mathfrak{t}}|_{W_2 \cup W_3}}) \to H_*({\mathcal{C}}^+(G, \Lambda)^{L_1 \cup L_2 \cup L_3, {\mathfrak{t}}|_{\#^{m'} (S^1 \times S^2)}}).$$ We conclude that $F^{{\operatorname{mix}}}_{\Gamma, {\mathfrak{t}}}$ is the same as $ F^{{\operatorname{mix}}}_{W_2 \cup W_3, {\mathfrak{t}}|_{W_2 \cup W_3}} \otimes {\operatorname{Id}}_V$, up to compositions with isomorphisms on both the domain and the target. Note, however, that at this point we do not know how to identify elements in the domains (or targets) of the two maps in a canonical way. For example, we know that there is an isomorphism \begin {equation} \label {eq:isoV} H_*({\mathcal{C}}^-(G, \Lambda)^{L_1, {\mathfrak{t}}|_{W_2 \cup W_3}}) \cong \HFm(\#^m (S^1 \times S^2)) \otimes V, \end {equation} but it is difficult to say exactly what the isomorphism is. 
Nevertheless, both $ \HFm(\#^m (S^1 \times S^2))$ and $V$ have unique maximal degree elements $\Theta_{\max}^m$ and $\Theta_V$, respectively. We can identify what $\Theta_{\max}^m \otimes \Theta_V$ corresponds to on the left hand side of~\eqref{eq:isoV} by simply computing degrees. Let us denote the respective element by $$\Theta_{\max}^\Gamma \in H_*({\mathcal{C}}^-(G, \Lambda)^{L_1, {\mathfrak{t}}|_{W_2 \cup W_3 }}). $$ The following proposition says that one can decide whether $\Phi_{X, {\mathfrak{t}}} \in {\mathbb{F}}$ is zero or one from information coming from a grid presentation $\Gamma$: \begin {proposition} \label {prop:mixed} Let $X$ be a closed, oriented four-manifold with $b_2^+(X) \geq 2$, and let ${\mathfrak{t}}$ be a ${\operatorname{Spin^c}}$ structure on $X$ with $d({\mathfrak{t}}) \geq 0$ even. Let $\Gamma$ be a grid presentation for $X$. Then $\Phi_{X, {\mathfrak{t}}} = 1$ if and only if $ U^{d({\mathfrak{t}})/2}\cdot F^{{\operatorname{mix}}}_{\Gamma, {\mathfrak{t}}} (\Theta_{\max}^\Gamma)$ is nonzero. \end {proposition} Compare \cite[Theorem 11.7]{LinkSurg}. \subsection{The link surgeries spectral sequence} \label {sec:spectral} We recall the construction from \cite[Section 4]{BrDCov}. Let $M = M_1 \cup \dots \cup M_\ell$ be a framed $\ell$-component link in a 3-manifold $Y$. For each $\eps = (\eps_1, \dots, \eps_\ell) \in \mathbb{E}_\ell = \{0,1\}^\ell$, we let $Y(\eps)$ be the $3$-manifold obtained from $Y$ by doing $\eps_i$-framed surgery on $M_i$ for $i=1, \dots, \ell$. When $\eps'$ is an immediate successor to $\eps$ (that is, when $\eps < \eps'$ and $\|\eps' - \eps\| = 1$), the two-handle addition from $Y(\eps)$ to $Y(\eps')$ induces a map on Heegaard Floer homology $$ F^-_{\eps < \eps'} : \HFm(Y(\eps)) \longrightarrow \HFm (Y(\eps')).
$$ The following is the link surgery spectral sequence (Theorem 4.1 in \cite{BrDCov}, but phrased here in terms of $\HFm$ rather than $\widehat{\mathit{HF}}$ or $\mathit{HF}^+$): \begin {theorem}[Ozsv\'ath-Szab\'o] \label {thm:OSspectral} There is a spectral sequence whose $E^1$ term is $\bigoplus_{\eps \in \mathbb{E}_\ell} \HFm(Y(\eps))$, whose $d_1$ differential is obtained by adding the maps $F^-_{\eps < \eps'}$ (for $\eps'$ an immediate successor of $\eps$), and which converges to $E^{\infty} \cong \HFm(Y)$. \end {theorem} \begin{remark} A special case of Theorem~\ref{thm:OSspectral} gives a spectral sequence relating the Khovanov homology of a link and the Heegaard Floer homology of its branched double cover. See \cite[Theorem~1.1]{BrDCov}. \end{remark} The spectral sequence in Theorem~\ref{thm:OSspectral} can be understood in terms of grid diagrams as follows. We represent $Y(0,\dots, 0)$ itself as surgery on a framed link $(L', \Lambda')$ inside $S^3$. Let $L'_1, \dots, L'_{\ell'}$ be the components of $L'$. There is another framed link $(L=L_1 \cup \dots \cup L_\ell, \Lambda)$ in $S^3$, disjoint from $L'$, such that surgery on each component $L_i$ (with the given framing) corresponds exactly to the 2-handle addition from $Y(0, \dots, 0)$ to $Y(0, \dots, 0, 1, 0, \dots, 0)$, where the $1$ is in position $i$. For $\eps \in \mathbb{E}_\ell$, we denote by $L^\eps$ the sublink of $L$ consisting of those components $L_i$ such that $\eps_i = 1$. Let $G$ be a toroidal grid diagram representing the link $L' \cup L \subset S^3$. As mentioned in Section~\ref{sec:Maps}, inside the surgery complex ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda)$ (which is an $(\ell'+\ell)$-dimensional hypercube of chain complexes) we have various subcomplexes which compute the Heegaard Floer homology of surgery on the sublinks on $L' \cup L$. 
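As an aside, the combinatorics of the hypercube $\mathbb{E}_\ell$ is elementary and easy to make explicit. The following Python sketch (our own illustration, not part of the construction) enumerates the vertices, the immediate-successor pairs that index the edge maps $F^-_{\eps < \eps'}$, and the quantity $-\|\eps\|$ by which the hypercube is filtered:

```python
from itertools import product

def immediate_successors(eps):
    """Immediate successors of eps in E_l = {0,1}^l: vertices eps' with
    eps < eps' and ||eps' - eps|| = 1, i.e. a single 0 flipped to a 1."""
    return [eps[:i] + (1,) + eps[i + 1:] for i, e in enumerate(eps) if e == 0]

def depth(eps):
    """Filtration level -||eps|| of the vertex eps."""
    return -sum(eps)

ell = 3
vertices = list(product((0, 1), repeat=ell))
# Each immediate-successor pair (eps, eps') indexes one edge map
# F^-_{eps < eps'}; the sum of these maps is the d_1 differential
# on the E^1 page of the link surgeries spectral sequence.
edges = [(e, s) for e in vertices for s in immediate_successors(e)]
```

Every edge decreases the depth $-\|\eps\|$ by exactly one, which is why the sum of the edge maps gives the $d_1$ differential of the associated spectral sequence.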
We will restrict our attention to those sublinks that contain $L'$, and use the respective subcomplexes to construct a new, $\ell$-dimensional hypercube of chain complexes ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$ as follows. At a vertex $\eps \in \mathbb{E}_\ell$ we put the complex $$ {\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)^\eps = {\mathcal{C}}^-(G^{L' \cup L^\eps}, \Lambda' \cup \Lambda|_{L^\eps}) \otimes H_*(T^{(n-n_\eps)-(\ell-\| \eps \|)}),$$ where $n_\eps$ is the size of the grid diagram $G^{L' \cup L^\eps}$. Consider now an edge from $\eps$ to $\eps' = \eps +\tau_i$ in the hypercube $\mathbb{E}_\ell$. The corresponding complex ${\mathcal{C}}^-(G^{L' \cup L^\eps}, \Lambda' \cup \Lambda|_{L^\eps})$ decomposes as a direct product over all ${\operatorname{Spin^c}}$ structures $\mathbf s$ on $Y(\eps) = S^3(L' \cup L^\eps, \Lambda' \cup \Lambda|_{L^\eps})$. As explained in Section~\ref{sec:Maps}, each factor ${\mathcal{C}}^-(G^{L' \cup L^\eps}, \Lambda' \cup \Lambda|_{L^\eps}, \mathbf s)$ (tensored with the appropriate homology of a torus) admits an inclusion into ${\mathcal{C}}^-(G^{L' \cup L^{\eps'}}, \Lambda' \cup \Lambda|_{L^{\eps'}})$ as a subcomplex. In fact, there are several such inclusion maps, one for each ${\operatorname{Spin^c}}$ structure $\mathbf t$ on the 2-handle cobordism from $Y(\eps)$ to $Y(\eps')$ such that $\mathbf t$ restricts to $\mathbf s$ on $Y(\eps)$. Adding up all the inclusion maps on each factor, one obtains a combined map $$G^-_{\eps < \eps'} : {\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)^\eps \longrightarrow {\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)^{\eps'}.$$ We take $G^-_{\eps < \eps'}$ to be the edge map in the hypercube of chain complexes ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$. Since the edge maps are just sums of inclusions of subcomplexes, they commute on the nose along each face of the hypercube. 
Therefore, in the hypercube ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$ we can take the diagonal maps to be zero, along all faces of dimension at least two. This completes the construction of ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$. As an $\ell$-dimensional hypercube of chain complexes, its total complex admits a filtration by $-\|\eps\|$, which induces a spectral sequence; we refer to the filtration by $-\|\eps\|$ as the {\em depth filtration} on ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$. The following is \cite[Theorem 11.9]{LinkSurg}, adapted here to the setting of grid diagrams: \begin {theorem} \label{thm:SpectralSequence} Fix a grid diagram $G$ with $q \geq 1$ free markings, such that $G$ represents an oriented link $\orL' \cup \orL$ in $S^3$. Fix also framings $\Lambda$ for $L$ and $\Lambda'$ for $L'$. Suppose $G$ has grid number $n$, and that $L$ has $\ell$ components $L_1, \dots, L_\ell$. Let $Y(0,\dots,0) = S^3_{\Lambda'}(L')$, and let $Y(\eps)$ be obtained from $Y(0,\dots,0)$ by surgery on the components $L_i \subseteq L$ with $\eps_i = 1$ (for any $\eps \in \mathbb{E}_\ell$). Then, there is an isomorphism between the link surgeries spectral sequence from Theorem~\ref{thm:OSspectral}, tensored with $V = H_*(T^{n-q-\ell})$, and the spectral sequence associated to the depth filtration on ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$. \end{theorem} \section {Enhanced domains of holomorphic polygons} \label {sec:enhanced} \subsection {Domains and shadows} \label {sec:shadows} In the construction of the complex ${\mathcal{C}}^-(G, \Lambda)$ in Section~\ref{sec:surgery}, the only non-combinatorial ingredients were the holomorphic polygon counts in the definition of the maps ${D}^{\mathcal{Z}}_\mathbf s$. (We use here the notation from Remark~\ref{rem:note}.) 
According to Equation~\eqref{eq:dezeds}, the maps ${D}^{\mathcal{Z}}_\mathbf s$ are in turn summations of maps of the form ${D}^{({\mathcal{Z}})}_\mathbf s$, where $({\mathcal{Z}})$ denotes an ordering of a consistent collection of markings ${\mathcal{Z}}$. Let $({\mathcal{Z}}) = (Z_1, \dots, Z_k)$. The maps $$ {D}^{({\mathcal{Z}})}_\mathbf s : \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}^{0}},\mathbf s) \to \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}^0 \cup {\mathcal{Z}}},\mathbf s)$$ correspond to destabilization (in the given order) at a set of markings ${\mathcal{Z}}$, starting with a diagram $G$ already destabilized at a base set of markings ${\mathcal{Z}}^{0}$. Our goal is to get as close as possible to a combinatorial description of these maps. In light of the isomorphisms~\eqref{eq:psiz}, we can assume without loss of generality that ${\mathcal{Z}}^0 = \emptyset$. The difficulty lies in understanding the counts of pseudo-holomorphic polygons $\# \mathcal {M}(\phi)$, where $$ \phi \in \pi_2(\mathbf x, \Theta^{\emptyset, \{Z_1\}}, \Theta^{\{Z_1\}, \{Z_1, Z_2\}}, \dots, \Theta^{\{Z_1, \dots, Z_{k-1}\}, \{Z_1, \dots, Z_{k}\} } , \mathbf y)$$ is a homotopy class of $(2+k)$-gons with edges on $$\Ta, \Tb, \Tb^{\{Z_1\}}, \Tb^{\{Z_1, Z_2\}}, \dots, \Tb^{\{Z_1, \dots, Z_k\}},$$ in this cyclic order. The Maslov index $\mu(\phi)$ is required to be $1-k$. As in \cite[Definition 2.13]{HolDisk}, every homotopy class $\phi$ has an associated {\em domain} $D(\phi)$ on the surface ${\mathcal{T}}$. The domain is a linear combination of regions, i.e., connected components of the complement in ${\mathcal{T}}$ of all the curves $\alpha, \beta, \beta^{\{Z_1\}}, \beta^{\{Z_1, Z_2\}}, \dots, \beta^{\{Z_1, \dots, Z_k\}}$. If $\phi$ admits a holomorphic representative, then $D(\phi)$ is a linear combination of regions, all appearing with nonnegative coefficients. Let us mark an asterisk in each square of the grid diagram $G$.
When we construct the new beta curves $\beta^{\{Z_1\}}, \beta^{\{Z_1, Z_2\}}, \dots, \beta^{\{Z_1, \dots, Z_k\}}$ (all obtained from the original $\beta$ curves by handleslides), we make sure that each beta curve encircling a marking does not include an asterisk, and also that whenever we isotope a beta curve to obtain a new beta curve intersecting it in two points, these isotopies do not cross the asterisks. Then the regions on ${\mathcal{T}}$ fall naturally into two types: {\em small} regions, which do not contain asterisks, and {\em large} regions, which do. We define the {\em shadow} ${\mathit{Sh}}(R)$ of a large region $R$ to be the square in $G$ containing the same asterisk as $R$; and the shadow of a small region to be the empty set. If $$D = \sum a_i R_i$$ is a linear combination of regions, with $a_i \in {\mathbb{Z}}$, we define its shadow to be $$ {\mathit{Sh}}(D) = \sum a_i {\mathit{Sh}}(R_i).$$ The {\em shadow} ${\mathit{Sh}}(\phi)$ of a homotopy class $\phi$ is defined as ${\mathit{Sh}}(D(\phi))$. See Figure~\ref{fig:shadow}. \begin{figure} \begin{center} \input{shadow.pstex_t} \end{center} \caption {{\bf The domain of a triangle and its shadow.} On the left, we show the domain $D(\phi)$ of a homotopy class $\phi$ of triangles in $\mathrm{Sym}^n({\mathcal{T}})$. The alpha and beta curves are the horizontal and vertical straight lines, respectively, while the curved arcs (including the one encircling $Z$) are part of the $\beta^{\{Z\}}$ curves. The black squares mark components of $\Theta^{\emptyset, \{Z\}} \in \Tb \cap \Tb^{\{Z\}}$. On the right, we show the shadow ${\mathit{Sh}}(\phi)$.} \label{fig:shadow} \end{figure} We will study homotopy classes $\phi$ by looking at their shadows, together with two additional pieces of data as follows. First, each homotopy class $\phi$ corresponds to a $(2+k)$-gon where one vertex is an intersection point $\mathbf y \in \Ta \cap \Tb^{{\mathcal{Z}}}$. 
In particular, $\mathbf y$ contains exactly one of the two intersection points between the alpha curve just below the marking $Z_i$ and the beta curve encircling $Z_i$, for $i=1, \dots, k$. Set $\epsilon_i = 0$ if that point is the left one, and $\epsilon_i = 1$ if it is the right one. Second, for $i=1, \dots, k$, let $\rho_i \in \Z$ be the multiplicity of the domain $D(\phi)$ at the marking $Z_i$. Then, we define the {\em enhanced domain} $E(\phi)$ associated to $\phi$ to be the triple $({\mathit{Sh}}(\phi), \epsilon(\phi), \rho(\phi))$ consisting of the shadow ${\mathit{Sh}}(\phi)$, the collection $\epsilon(\phi)=(\epsilon_1, \dots, \epsilon_k)$ corresponding to $\mathbf y$, and the set of multiplicities $\rho(\phi) = (\rho_1, \dots, \rho_k)$. \subsection {Grid diagrams marked for destabilization} \label {sec:uleft} We now turn to studying enhanced domains on their own. For simplicity, we will assume that all the markings $Z_1, \dots, Z_k$ relevant for our destabilization procedure are $O$'s. (The set of allowable domains will be described by the same procedure if some markings are $X$'s.) Consider a toroidal grid diagram $G$ of grid number $n$. We ignore the $X$'s, and consider the $n$ $O$'s with associated variables $U_1, \dots, U_n$. Note that, unlike in Section~\ref{subsec:ags}, here we use one variable for each $O$, rather than one for each link component. Some subset $$ {\mathcal{Z}} = \{O_{i_1}, \dots, O_{i_k} \} $$ of the $O$'s, corresponding to indices $i_1, \dots, i_k \in \{1, \dots, n\}$, is marked for destabilization. (These are the markings $Z_1, \dots, Z_k$ from the previous subsection.) We will assume that $k < n$. (This is always the case in our setting, when we start with at least one free basepoint.) The corresponding points on the lower left of each $O_{i_j}$ are denoted $p_j$ and called {\em destabilization points}.
Let $C^-(G)=\CFm(\Ta, \Tb)$ be the chain complex freely generated over ${\mathbb{F}}[[U_1, \dots, U_n]]$ by the $n!$ elements of ${\mathbf{S}}(G)$, namely the $n$-tuples of intersection points on the grid. Its differential ${\partial}^-$ counts rectangles, while keeping track of the $U$'s: $$ {\partial}^- \mathbf x = \sum_{\mathbf y \in {\mathbf{S}}(G)} \sum_{r \in \EmptyRect(\mathbf x, \mathbf y)} U_1^{O_1(r)} \cdots U_n^{O_n(r)} \mathbf y.$$ Each generator $\mathbf x \in {\mathbf{S}}(G)$ has a well-defined homological grading $M(\mathbf x) \in {\mathbb{Z}}$. Note that $M(U_i \cdot \mathbf x) = M(\mathbf x) -2$. Let $G^{\mathcal{Z}}$ be the destabilized diagram. There is a similar complex $C^-(G^{\mathcal{Z}})$. We identify ${\mathbf{S}}(G^{\mathcal{Z}})$ with a subset of ${\mathbf{S}}(G)$ by adjoining to an element of ${\mathbf{S}}(G^{\mathcal{Z}})$ the destabilization points. The set of {\em enhanced generators} ${\mathbf{ES}}(G, {\mathcal{Z}})$ consists of pairs $(\mathbf y, \epsilon)$, where $\mathbf y\in {\mathbf{S}}(G^{\mathcal{Z}})$ and $\epsilon = (\epsilon_1, \dots, \epsilon_k)$ is a collection of markings ($\epsilon_j = 0$ or $1$) at each destabilization point. We will also denote these markings as $L$ for a left marking ($\epsilon_j = 0$) or $R$ for a right marking ($\epsilon_j = 1$). The homological grading of an enhanced generator is \[ M(\mathbf y, \epsilon) = M(\mathbf y) + \sum_{j=1}^k \epsilon_j. \] We define an enhanced destabilized complex $\mathit{EC}^-(G, {\mathcal{Z}})$ whose generators are ${\mathbf{ES}}(G, {\mathcal{Z}})$. It is formed as the tensor product of $k$ mapping cones \begin {equation} \label {mapcone} C^-(G^{\mathcal{Z}}) \to C^-(G^{\mathcal{Z}}) \end {equation} given by the map $U_{i_j} - U_{i'_j}$ from $L$ to $R$, where $U_{i'_j}$ corresponds to the $O$ in the row directly below the destabilization point $p_j$.
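Since $\EmptyRect(\mathbf x, \mathbf y)$ and the multiplicities $O_i(r)$ are determined by elementary planar combinatorics, the coefficient of $\mathbf y$ in ${\partial}^- \mathbf x$ can be computed mechanically. The following Python sketch is our own illustration (not from the text): generators are encoded as permutations sending a column index to a row index, and we fix one standard convention for the two rectangles on the torus connecting generators that differ by a transposition.

```python
def cyc(a, b, n):
    """Half-open cyclic interval [a, b) in Z/n."""
    return [(a + t) % n for t in range((b - a) % n)]

def empty_rectangles(x, y, n):
    """Rectangles in Rect(x, y) whose interior misses x (hence also y).
    x, y are permutations of {0, ..., n-1}; x[i] is the row of the
    component of x on vertical circle i.  Rect(x, y) is nonempty only
    when x and y differ by a transposition, and then it has two
    elements, each possibly wrapping around the torus."""
    diff = [i for i in range(n) if x[i] != y[i]]
    if len(diff) != 2:
        return []
    i, j = diff
    if y[i] != x[j] or y[j] != x[i]:
        return []
    rects = []
    for (c0, c1, r0, r1) in [(i, j, x[i], x[j]), (j, i, x[j], x[i])]:
        cols, rows = cyc(c0, c1, n), cyc(r0, r1, n)
        # Empty: no component of x in the open interior of the rectangle.
        if all(x[c] not in rows[1:] for c in cols[1:]):
            rects.append((cols, rows))
    return rects

def d_coefficient(x, y, os):
    """One tuple of exponents (of U_1, ..., U_n) per empty rectangle
    contributing y to d^- x; os[i] = (col, row) is the square of O_{i+1},
    recorded by its lower-left corner."""
    n = len(x)
    return [tuple(1 if (c in cols and r in rows) else 0 for (c, r) in os)
            for cols, rows in empty_rectangles(x, y, n)]
```

In the $2 \times 2$ example below, the two empty rectangles connecting the two generators each cover one of the two $O$'s, so the coefficient of $\mathbf y$ is $U_1 + U_2$.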
For a suitable choice of almost-complex structure on the symmetric product, there is a natural isomorphism $$\mathit{EC}^-(G, {\mathcal{Z}}) \cong \CFm(\Ta, \Tb^{\{O_{i_1}, \dots, O_{i_k}\}}),$$ with the caveat that on the right hand side we count different $U$ variables than we did in Section~\ref{sec:DestabSeveral}. It was shown in \cite[Section~3.2]{MOST} that in the case $k=1$ there is a quasi-isomorphism \begin{equation} \label{eq:destab-chain-map} F: C^-(G) \to \mathit{EC}^-(G, {\mathcal{Z}}) \end{equation} given by counting snail-like domains as in Figure~\ref{fig:ULeft}. \begin{figure} \begin{center} \input{ULeft.pstex_t} \end{center} \caption {{\bf Domains for destabilization at one point.} We label initial points by dark circles, and terminal points by empty circles. The top row lists domains of type $L$ (i.e., ending in an enhanced generator with an $L$ marking), while the second row lists some of type $R$. Darker shading corresponds to higher local multiplicities. The domains in each row (top or bottom) are part of an infinite series, corresponding to increasing complexities. The series in the first row also contains the trivial domain of type~$L$, not shown here.} \label{fig:ULeft} \end{figure} \begin{definition} \label{def:EnhancedDomain} Given a domain $D$ on the $\alpha$-$\beta$ grid (that is, a linear combination of squares), we let $O_i(D)$ be the multiplicity of $D$ at $O_i$. We define an {\em enhanced domain} $(D, \epsilon, \rho)$ to consist of: \begin {itemize} \item A domain $D$ on the grid between points $\mathbf x \in {\mathbf{S}}(G)$ and $\mathbf y \in {\mathbf{S}}(G^{\mathcal{Z}})$ (in particular, the final configuration contains all destabilization points); \item A set of markings $\epsilon = (\epsilon_1, \dots, \epsilon_k)$ at each destabilization point (so that $(\mathbf y, \epsilon)$ is an enhanced generator); and \item A set of integers $\rho=(\rho_1, \dots, \rho_k)$, one for each destabilization point.
\end {itemize} \end{definition} We call $\rho_j$ the {\em real multiplicity} at $O_{i_j}$. The number $t_j = O_{i_j}(D)$ is called the {\em total multiplicity}, and the quantity $f_j = t_j - \rho_j$ is called the {\em fake multiplicity} at $O_{i_j}$. The reason for this terminology is that, if $D$ is the shadow of a holomorphic $(k+2)$-gon~$D'$, then the real multiplicity $\rho_j$ is the multiplicity of $D'$ at $O_{i_j}$. On the other hand, the fake multiplicity (if positive) is the multiplicity that appears in the shadow even though it does not come from the polygon. For example, in the domain pictured in Figure~\ref{fig:shadow}, the fake multiplicity at the $Z$ marking is one. Consider the full collection of real multiplicities $(N_1, \dots, N_n)$, where $N_{i_j} = \rho_j$ for $j \in \{1, \dots, k\}$, and $N_i = O_i(D)$ when $O_i$ is not one of the $O$'s used for destabilization. We say that the enhanced domain $(D, \epsilon, \rho)$ goes from the generator $\mathbf x$ to $U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)$. These are called the initial and final point of the enhanced domain, respectively. We define the \emph{index} of an enhanced domain to be \begin{equation} \label{eq:enhanced-index} \begin {aligned} I(D, \epsilon, \rho) &= M(\mathbf x) - M(U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)) \\ &= M(\mathbf x) - M(\mathbf y) - \sum_{j=1}^k \epsilon_j + 2\sum_{i=1}^n N_i \\ &= M(\mathbf x) - M(U_1^{O_1(D)}\cdots U_n^{O_n(D)}\cdot \mathbf y) - \sum_{j=1}^k (\epsilon_j + 2 f_j) \\ &= I(D) - \sum_{j=1}^k (\epsilon_j + 2 f_j). \end {aligned} \end{equation} Here $I(D)$ is the ordinary Maslov index of $D$, given by Lipshitz's formula \cite[Corollary 4.3]{LipshitzCyl}: \begin {equation} \label {eq:lipshitz} I(D) = \sum_{x \in \mathbf x} n_x(D) + \sum_{y \in \mathbf y} n_y(D), \end {equation} where $n_p(D)$ denotes the average multiplicity of $D$ in the four quadrants around the point $p$.
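For domains built out of rectangles, the formula \eqref{eq:lipshitz} and the index \eqref{eq:enhanced-index} can be evaluated directly from the multiplicity matrix of $D$. A small Python sketch, our own illustration (the encoding of points and squares is an assumption of the sketch, not a convention from the text):

```python
def n_p(D, p, n):
    """Average multiplicity of D in the four squares around the point
    p = (c, r); D[col][row] is the multiplicity on the torus, so indices
    are taken mod n."""
    c, r = p
    return (D[c % n][r % n] + D[(c - 1) % n][r % n]
            + D[c % n][(r - 1) % n] + D[(c - 1) % n][(r - 1) % n]) / 4

def lipshitz_index(D, x, y):
    """I(D) = sum over points of x and y of the average quadrant
    multiplicities; the Euler-measure term of Lipshitz's formula vanishes
    because our domains decompose into rectangles.  x, y are permutations,
    with the point on vertical circle i placed at (i, x[i])."""
    n = len(x)
    return (sum(n_p(D, (i, x[i]), n) for i in range(n))
            + sum(n_p(D, (i, y[i]), n) for i in range(n)))

def enhanced_index(D, x, y, eps, fake):
    """I(D, eps, rho) = I(D) - sum_j (eps_j + 2 f_j), where f_j = t_j - rho_j
    is the fake multiplicity at the j-th destabilized O."""
    return lipshitz_index(D, x, y) - sum(e + 2 * f for e, f in zip(eps, fake))
```

For instance, the domain of a single empty rectangle has $I(D) = 1$, and adding a full row to the domain changes $I(D)$ by $2$, consistent with the row- and column-subtraction arguments used later in this section.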
(Lipshitz's formula in the reference has an extra term, the Euler measure of $D$, but this is zero in our case because $D$ can be decomposed into rectangles.) For example, consider the domains in Figure~\ref{fig:ULeft}. We turn them into enhanced domains by adding the respective marking ($L$ or $R$) as well as choosing the real multiplicity at the destabilization point to be zero. Then all those domains have index zero, regardless of how many (non-destabilized) $O$'s they contain and with what multiplicities. \begin {lemma} Suppose that $(D, \epsilon, \rho)$ is the enhanced domain associated to a homotopy class $\phi$ of $(k+2)$-gons as in Section~\ref{sec:uleft}. Then the Maslov indices match: $\mu(\phi) = I(D, \epsilon, \rho)$. \end {lemma} \begin {proof} Note that $I(D,\epsilon, \rho)$ is the difference in Maslov indices between $\mathbf x$ (in the original grid diagram) and $\tilde \mathbf y = U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)$ (in the destabilized diagram), cf.\ Equation~\eqref{eq:enhanced-index}. Because $\mu(\phi)$ is additive with respect to pre- or post-composition with rectangles, we must have $\mu(\phi) = I(D, \epsilon, \rho)+C$, where $C$ is a constant that only depends on the grid. To compute $C$, consider the trivial enhanced domain with $D=0, \epsilon = (0, \dots, 0), \rho =(0, \dots, 0)$. This is associated to a class $\phi$ whose support is a disjoint union of $n$ polygons, all of which are $(k+2)$-gons with three acute angles and $k-1$ obtuse angles. See Figure~\ref{fig:local} for an example of a quadrilateral of this type. It is easy to check that $\mu(\phi)=0$, which implies $C=0$. \end {proof} Given an enhanced domain $(D, \epsilon, \rho)$, we denote by $a_j, b_j, c_j, d_j$ the multiplicities of $D$ in the four squares around $p_j$, as in Figure~\ref{fig:abcd}. In particular, $b_j = t_j$ is the total multiplicity there.
Note that, if $p_j \not \in \mathbf x$, then \begin{align} \label {eq:abcd1} a_j + d_j &= b_j + c_j +1,\\ \shortintertext{while if $p_j \in \mathbf x$ then} a_j + d_j &= b_j + c_j.\label {eq:abcd0} \end{align} \begin{figure} \begin{center} \input{abcd.pstex_t} \end{center} \caption {{\bf Local multiplicities around a destabilization point.} The marked point in the middle is the destabilization point ($p_j$ for left destabilization and $q_j$ for right destabilization). } \label{fig:abcd} \end{figure} \begin{definition} \label{def:PositiveEnhancedDomain} We say that the enhanced domain $(D, \epsilon, \rho)$ is {\em positive} if $D$ has nonnegative multiplicities everywhere in the grid, and, furthermore, for every $j \in \{1, \dots, k \}$, we have \begin {equation} \label {ineq:abcd1} \begin{aligned} a_j &\geq f_j,&b_j &\geq f_j,\\ c_j &\geq f_j + \epsilon_j -1,\qquad &d_j &\geq f_j + \epsilon_j. \end{aligned} \end {equation} \end{definition} Observe that the second of the above inequalities implies $b_j - f_j = \rho_j \geq 0$. Thus, in a positive enhanced domain, all real multiplicities $\rho_j$ are nonnegative. \begin {lemma} Suppose that $(D, \epsilon, \rho)$ is the enhanced domain associated to a homotopy class $\phi$ of $(k+2)$-gons as in Section~\ref{sec:uleft}, and that the homotopy class $\phi$ admits at least one holomorphic representative. Then $(D, \epsilon, \rho)$ is positive, in the sense of Definition~\ref{def:PositiveEnhancedDomain}. \end {lemma} \begin {proof} If a homotopy class $\phi$ has at least one holomorphic representative it has positive multiplicities (for suitable choices of almost-complex structure on the symmetric product, see for example Lemma~3.2 of~\cite{HolDisk}). It follows now that $D$ has nonnegative multiplicities everywhere in the grid and that $\rho_j = b_j - f_j \geq 0$. It remains to check the three other relations. For concreteness, let us consider the case $k=2$, with $\epsilon_j = 0$, cf.\ Figure~\ref{fig:local}. 
The two circles encircling the destabilization point in the middle are of the type $\beta^{\{Z_1\}}$ (the right one) and $\beta^{\{Z_1,Z_2\}}$ (the left one). \begin{figure} \begin{center} \input{local.pstex_t} \end{center} \caption {{\bf Local multiplicities in detail.} This is a more detailed version of Figure~\ref{fig:abcd}, in which we show two other types of beta curves: $\beta^{\{Z_1\}}$ and $\beta^{\{Z_1, Z_2\}}$. We mark the local multiplicity in each region. The shaded region is the domain of a quadrilateral of index zero.} \label{fig:local} \end{figure} If we are given the domain of a holomorphic quadrilateral (for instance, the shaded one in Figure~\ref{fig:local}), then at most of the intersection points the alternating sum of nearby multiplicities is zero. The only exceptions are the four bulleted points in the figure, which correspond to the vertices of the quadrilateral, and where the alternating sum of nearby multiplicities is $\pm 1$. (At the central intersection between an alpha and an original beta curve, there may not be a vertex and the alternating sum of the multiplicities may be~$0$.) If $u_j$, $v_j$, and $w_j$ are the multiplicities in the regions indicated in Figure~\ref{fig:local}, by adding up suitable such relations between local multiplicities, we obtain \begin{alignat*}{3} a_j-f_j &= a_j + \rho_j - b_j &&= u_j &&\!\geq 0\\ c_j -f_j + 1 &= c_j +\rho_j - b_j + 1 &&= v_j &&\!\geq 0\\ d_j -f_j &= d_j + \rho_j - b_j &&= w_j &&\!\geq 0, \end{alignat*} as desired. The proof in the $\epsilon_j = 1$ case, or for other values of $k$, is similar.
\end {proof} Note that an enhanced domain $E=(D, \epsilon, \rho)$ is determined by its initial point~$\mathbf x$ and final point $\tilde \mathbf y = U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)$ up to the addition to~$D$ of linear combinations of {\em periodic domains} of the following form: \begin {itemize} \item A column minus the row containing the same $O_i$; \item A column containing one of the $O_{i_j}$'s used for destabilization. \end {itemize} When we add a column of the latter type, the total multiplicity $t_j$ (and hence also the fake multiplicity $f_j$) increases by $1$. The real multiplicity $\rho_j$ does not change. \begin{definition} \label{def:PositivePair} We say that the pair $[\mathbf x, \tilde \mathbf y]$ is {\em positive} if there exists a positive enhanced domain with $\mathbf x$ and $\tilde \mathbf y$ as its initial and final points. \end{definition} \subsection {Positive domains of negative index} Figure~\ref{fig:ULeft2} shows some examples of positive enhanced domains of negative index, in a twice destabilized grid. We see that the index can get as negative as we want, even if we fix the number of destabilizations at two (but allow the size of the grid to change). \begin{figure} \begin{center} \input{ULeft2.pstex_t} \end{center} \caption {{\bf Positive domains of negative index.} We destabilize at two $O$'s, marked by the two larger circles in the picture. The top row shows two positive domains of type $LL$, the first of index $-1$, the second of index $-2$. The second row shows three positive domains of type $RL$, of indices $-1, -2$ and $-3$, respectively. Darker shading corresponds to higher local multiplicities. In each case, the real multiplicity $\rho_j$ is zero.} \label{fig:ULeft2} \end{figure} However, this phenomenon is impossible for one destabilization: \begin{proposition} \label{prop:oneO} Let $G$ be a toroidal grid diagram with only one $O$ marked for destabilization.
Then any positive enhanced domain has nonnegative index. Furthermore, if the enhanced domain is positive and has index zero, then it is of one of the types from the sequences in Figure~\ref{fig:ULeft}. \end{proposition} \begin {proof} Let us first consider a positive enhanced domain $(D, \epsilon, \rho)$ going between the generators $\mathbf x$ and $\tilde \mathbf y = U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)$. With the notations of Section~\ref{sec:uleft}, we have $k=1$, $\epsilon_1 \in \{0,1\}$ (corresponding to the type of the domain, $L$ or $R$), $a_1$, $b_1 = t_1$, $c_1$, $d_1$ are the local multiplicities around the destabilization point, and $\rho_1$ and $f_1 = t_1 - \rho_1$ are the real and fake multiplicities there. Note that $t_1 \geq f_1$. Without loss of generality we can assume that every row or column contains at least one square where the multiplicity of $D$ is zero; otherwise we could subtract that row or column and obtain another positive domain, whose index is at most as large. Indeed, if the row or column does not contain the destabilization point $p_1$ on its boundary, then subtracting it decreases the index by $2$. If we subtract the column just to the left of $p_1$, or the row just below $p_1$, by simultaneously increasing $\rho_1$ by one we can preserve the positivity conditions and leave the index the same. If we subtract the row or column through the $O$ marked for destabilization, while leaving $\rho$ unchanged, the positivity and the index are again unaffected. We say that a row or a column is \emph{special} if $\mathbf x$ and $\mathbf y$ intersect it in different points. Let $m$ be the number of special rows; it is the same as the number of special columns. Note that, as we move across a row (or column) curve, the multiplicity of the domain $D$ can only change by $\pm 1$, and it can do that only if the row (or column) is special. This is because of our assumption that we have zeros in every row and column.
Let us look at the column containing the destabilized $O$. The domain has multiplicity $d_1$ in the spot right below $O$, and it has multiplicity zero on some other spot on that column. We can move from $O$ to the multiplicity zero spot either by going up or down. As we go in either direction, we must encounter at least $d_1$ special rows. This means that the total number of special rows is at least $2d_1$. Using the fourth inequality in~\eqref{ineq:abcd1}, we get \begin {equation} \label {eq:mt} m \geq 2d_1 \geq 2(f_1+ \epsilon_1). \end {equation} The ordinary index of the domain $D$ is given by Equation~\eqref{eq:lipshitz}, involving the sums of the average local multiplicities of $D$ at the points of $\mathbf x$ and $\mathbf y$. One such point is the destabilization point $p_1$ which is part of $\mathbf y$. Using the relations in Equation~\eqref{ineq:abcd1}, we find that the average vertex multiplicity there is \begin {equation} \label {eq:f14} \frac{a_1+b_1 + c_1 + d_1}{4} \geq f_1 + \frac{2\epsilon_1 - 1}{4}. \end {equation} On the other hand, apart from the destabilization point, $\mathbf x$ and $\mathbf y$ together have either $2m-1$ (if $p_1 \not \in \mathbf x$) or $2m$ (if $p_1 \in \mathbf x$) corner vertices, where the average multiplicity has to be at least $1/4$. Together with Equations~\eqref{eq:mt} and \eqref{eq:f14}, this implies that \begin {equation} \label {eq:idf} I(D) \geq \frac{2m-1}{4} + f_1 + \frac{2\epsilon_1 - 1}{4}. \end {equation} Using the formula for the index of an enhanced domain, together with Equations~\eqref{eq:mt} and \eqref{eq:idf}, we obtain \begin {equation} \label {equation} I(D, \epsilon, \rho) = I(D) - \epsilon_1 - 2f_1 \geq \frac{m}{2} - f_1 - \frac{\epsilon_1}{2} - \frac{1}{2} \geq \frac{\epsilon_1-1}{2}. \end {equation} Since the index is an integer, we must have $I(D, \epsilon, \rho) \geq 0$. Equality happens only when $D$ has average vertex multiplicity $1/4$ at all its corners other than the destabilization point. 
An easy analysis shows that the domain must be of the required shape. \end {proof} \subsection {Holomorphic triangles on grid diagrams} \label {sec:triangles} \begin{lemma} \label{lemma:snail} Fix a snail-like domain $D$ (as in Figure~\ref{fig:ULeft}) for destabilization at one point. Then the count of holomorphic triangles in $\mathrm{Sym}^n({\mathcal{T}})$ with $D$ as their shadow is one mod $2$. \end{lemma} \begin {proof} The trivial domain $D$ of type $L$ corresponds to a homotopy class $\phi$ whose support is a disjoint union of triangles with $90^\circ$ angles. Hence, the corresponding holomorphic count is one. To establish the claim in general, note that the destabilization map at one point (given by counting holomorphic triangles) is a chain map (in fact, a quasi-isomorphism) from $C^-(G)$ to $\mathit{EC}^-(G, {\mathcal{Z}})$, compare \eqref{eq:destab-chain-map}. By Proposition~\ref{prop:oneO}, the only counts involved in this map are the ones corresponding to snail-like domains. If to each snail-like domain we assign the coefficient one when counting it in the map, the result is the chain map \eqref{eq:destab-chain-map}. By induction on complexity, along the lines of \cite[Lemma 3.5]{MOST}, we do indeed need to assign one to each snail-like domain in order for the result to be a chain map. \end {proof} Proposition~\ref{prop:oneO} and Lemma~\ref{lemma:snail} imply that one can count combinatorially (mod $2$) all the index zero holomorphic triangles in a grid diagram (with one point marked for destabilization) with fixed shadow. Indeed, if the shadow is a snail-like domain, then the count is one, and otherwise it is zero. \section {Formal complex structures and surgery} \label {sec:comps} Let $\orL \subset S^3$ be a link with framing $\Lambda$, and let ${\mathfrak{u}}$ be a ${\operatorname{Spin^c}}$ structure on the surgered manifold $S^3_{\Lambda}(L)$. 
Our goal in this section is to explain a combinatorial procedure for calculating the ranks of the groups $\HFm(S^3_{\Lambda}(L), {\mathfrak{u}})$. The procedure will be based on Theorem~\ref{thm:Surgery}. The algorithm is complicated by the fact that Proposition~\ref{prop:oneO}, and therefore Lemma~\ref{lemma:snail}, fail if there is more than one destabilization point; we thus have to make an appropriate choice of domains to count, a choice we call a ``formal complex structure''. \subsection {The complex of positive pairs} \label{sec:complex} Let $G$ be a grid diagram (of size $n$) marked for destabilization at a collection ${\mathcal{Z}}$ of some $O$'s, with $| {\mathcal{Z}}| = k< n$, as in Section~\ref{sec:uleft}. In that section we defined the (quasi-isomorphic) complexes $C^-(G)$ and $\mathit{EC}^-(G, {\mathcal{Z}})$. Let us consider the Hom complex $$ \Hom_\mathcal R (C^-(G), \mathit{EC}^-(G, {\mathcal{Z}})),$$ where $\mathcal R={\mathbb{F}}[[U_1, \dots, U_n]]$. Since $C^-(G)$ is a free $\mathcal R$-module, we can naturally identify it with its dual using the basis given by ${\mathbf{S}}(G)$. Thus, if we view the $\Hom$ complex as an ${\mathbb{F}}$-vector space, its generators (in the sense of direct products) are pairs $[\mathbf x, \tilde \mathbf y]$, where $\tilde \mathbf y$ is an enhanced generator possibly multiplied by some powers of $U$, i.e., $\tilde \mathbf y = U_1^{N_1} \cdots U_n^{N_n} \cdot (\mathbf y, \epsilon)$. The homological degree of a generator in the Hom complex is $M(\tilde \mathbf y) - M(\mathbf x)$. However, we would like to think of the generators as being enhanced domains (up to the addition of periodic domains), so in order to be consistent with the formula for the index of domains we set $$ I([\mathbf x, \tilde \mathbf y]) = M(\mathbf x) - M(\tilde \mathbf y)$$ and view the Hom complex as a cochain complex, with a differential $d$ that increases the grading.
It has the structure of an $\mathcal R$-module, where multiplication by a variable $U_i$ increases the grading by two: $U_i [\mathbf x, \tilde \mathbf y] = [\mathbf x, U_i \tilde \mathbf y]$. Note that the $\Hom$ complex is bounded from below with respect to the Maslov grading. There is a differential on the complex given by $$ d [\mathbf x, \tilde \mathbf y] = [{\partial}^* \mathbf x, \tilde \mathbf y] + [\mathbf x, {\partial} \tilde \mathbf y].$$ Thus, taking the differential of a domain consists in summing over the ways of pre- and post-composing the domain with a rectangle. If a pair $[\mathbf x, \tilde \mathbf y]$ is positive (as in Definition~\ref{def:PositivePair}), then by definition it represents a positive domain; adding a rectangle to it keeps it positive. Indeed, note that if the rectangle crosses an $O$ used for destabilization, then the real and total multiplicities increase by $1$, but the fake multiplicity stays the same, so the inequalities \eqref{ineq:abcd1} are still satisfied. Therefore, the positive pairs $[\mathbf x, \tilde \mathbf y]$ generate a subcomplex $\mathit{CP}^*(G, {\mathcal{Z}})$ of the Hom complex. For the moment, let us ignore its structure as an $\mathcal R$-module, and simply consider it as a cochain complex over ${\mathbb{F}}$. We denote its cohomology by $\mathit{HP}^*(G, {\mathcal{Z}})$. We make the following: \begin {conjecture} \label {conj} Let $G$ be a toroidal grid diagram of size $n$ with a collection ${\mathcal{Z}}$ of $O$'s marked for destabilization, such that $|{\mathcal{Z}}| < n$. Then $\mathit{HP}^{d}(G, {\mathcal{Z}}) = 0$ for $d < 0$. \end {conjecture} Proposition~\ref{prop:oneO} implies Conjecture~\ref{conj} in the case when only one $O$ is marked for destabilization. Indeed, in that case we have $\mathit{CP}^d(G, {\mathcal{Z}}) = 0$ for $d < 0$, so the homology is also zero. In the case of several destabilization points, we can prove only a weaker form of the conjecture, namely Theorem~\ref{thm:sparse} below. 
However, this will suffice for our application. \begin {definition} Let $G$ be a toroidal grid diagram of size $n > 1$, with a collection ${\mathcal{Z}}$ of some $O$'s marked for destabilization. If none of the $O$'s marked for destabilization sit in adjacent rows or adjacent columns, we say that the pair $(G, {\mathcal{Z}})$ is {\em sparse}. (Note that if $(G, {\mathcal{Z}})$ is sparse, we must have $|{\mathcal{Z}}| \leq (n+1)/2 < n$.) \end {definition} Recall that in the Introduction we gave a similar definition, which applies to grid diagrams $G$ (with free markings) representing links in $S^3$. Precisely, we said that $G$ is {\em sparse} if none of the linked markings sit in adjacent rows or in adjacent columns. Observe that, if $G$ is sparse, then the pair $(G, {\mathcal{Z}})$ is sparse for any collection ${\mathcal{Z}}$ of linked markings. \begin {theorem} \label {thm:sparse} If $(G, {\mathcal{Z}})$ is a sparse toroidal grid diagram with $k$ $O$'s marked for destabilization, then $\mathit{HP}^d(G, {\mathcal{Z}}) = 0$ for $d < \min \{0, 2-k\}$. \end {theorem} Once we show this, it will also follow that, for $(G, {\mathcal{Z}})$ sparse, the homology of $\mathit{CP}^*(G, {\mathcal{Z}}) \otimes M$ is zero in degrees $<\min \{0, 2-k\}$, where $M$ is an $\mathcal R$-module (a quotient of $\mathcal R$) obtained by setting some of the $U$ variables equal to each other. The proof of Theorem~\ref{thm:sparse} will be given in Section~\ref{sec:sparse}. \subsection {An extended complex of positive pairs} \label {sec:ext} Let us now return to the set-up of Section~\ref{sec:Surgery}, where $\orL$ is an oriented link with a grid presentation $G$ of grid number $n$, and with $q \geq 1$ free markings. 
When ${\mathcal{Z}}$ is a consistent set of linked markings ($X$'s or $O$'s) on $G$, we set $$ J_\infty({\mathcal{Z}}) = \{ \mathbf s=(s_1, \dots, s_\ell) \in J({\mathcal{Z}}) \mid s_i = \pm \infty \text{ for all } i \}.$$ Let ${\mathcal{Z}}_0$ and ${\mathcal{Z}}$ be two disjoint sets of linked markings on $G$ such that ${\mathcal{Z}}_0 \cup {\mathcal{Z}}$ is consistent. For $\mathbf s \in J_\infty({\mathcal{Z}}_0 \cup {\mathcal{Z}})$, we can define a cochain complex $$ \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s),$$ which is the subcomplex of \begin {equation} \label {eq:Hom} \Hom_{\mathcal R} \bigl(\mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}_0}, \mathbf s), \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}_0 \cup {\mathcal{Z}}}, \mathbf s)\bigr ) \end {equation} spanned by positive pairs. Here positivity of pairs has the same meaning as before: it is defined in terms of enhanced domains, by looking at the grid diagram $G^{{\mathcal{Z}}_0}$ destabilized at the points of ${\mathcal{Z}}_0$. \begin {lemma} \label {lemma:jinf} If ${\mathcal{Z}}_0 \amalg {\mathcal{Z}}$ is a consistent set of linked markings on a grid $G$ and $\mathbf s, \mathbf s' \in J_\infty({\mathcal{Z}}_0 \cup {\mathcal{Z}})$, then the complexes $\mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s)$ and $ \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s')$ are canonically isomorphic. \end {lemma} \begin {proof} It suffices to consider the case when $\mathbf s$ and $\mathbf s'$ differ in only one place, say $s_i \neq s_i'$ for a component $L_i \not \subseteq L({\mathcal{Z}}_0 \cup {\mathcal{Z}})$, where $L({\mathcal{Z}}_0 \cup {\mathcal{Z}})$ is as in Section~\ref{subsec:ks}. Suppose $s_i = - \infty$ while $s_i' = +\infty$. 
Then the desired isomorphism is \begin {equation} \label {eq:cbzed} \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s) \longrightarrow \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s'), \ \ \ [\mathbf x, \tilde \mathbf y] \mapsto U_i^{A_i(\mathbf x)- A_i(\mathbf y)} [\mathbf x, \tilde \mathbf y], \end {equation} where $\tilde \mathbf y = U_1^{N_1} \cdots U_\ell^{N_\ell} (\mathbf y,\epsilon)$ and $A_i$ is the component of the Alexander grading corresponding to $L_i$. To verify that this is a well-defined map, we proceed as follows. If $D$ is a positive domain representing the pair $[\mathbf x, \tilde \mathbf y]$, note that $$ A_i(\mathbf x) - A_i (\mathbf y) = \sum_{j \in \Xs_i} X_j(D) -\sum_{j \in \mathbb O_i} O_j(D) $$ and $N_i = \sum_{j \in \mathbb O_i} O_j(D)$, so $$A_i(\mathbf x) - A_i(\mathbf y) + N_i = \sum_{j\in\Xs_i} X_j(D) \geq 0.$$ This implies that $U_i^{A_i(\mathbf x)- A_i(\mathbf y)} [\mathbf x, \tilde \mathbf y]$ is an element of $\mathit{CP}^*(G;{\mathcal{Z}}_0,{\mathcal{Z}},\mathbf s')$, with the same domain $D$ as a positive representative. One can then easily check that the map from Equation~\eqref{eq:cbzed} is a chain map. The inverse of \eqref{eq:cbzed} is also a positivity-preserving chain map, this time given by multiplication with $ U_i^{A_i(\mathbf y)- A_i(\mathbf x)}$. Indeed, a similar argument applies: a positive domain $D$ representing a pair $[\mathbf x, \tilde \mathbf y] $ in $\mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s')$ has $N_i = \sum_{j \in \Xs_i} X_j(D)$, and we get $A_i(\mathbf y) - A_i(\mathbf x) + N_i \geq 0$. \end {proof} In view of Lemma~\ref{lemma:jinf}, we can drop $\mathbf s$ from the notation and refer to $ \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}, \mathbf s)$ simply as $ \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}})$.
Let $G^{{\mathcal{Z}}_0, {\mathcal{Z}}}$ be the grid diagram $G^{{\mathcal{Z}}_0}$ with all $X$ markings on components $L_i \not \subseteq L({\mathcal{Z}}_0 \cup {\mathcal{Z}})$ deleted and the remaining $X$ markings relabeled as $O$. Further, we mark for destabilization the markings in ${\mathcal{Z}}$. Then $(G^{{\mathcal{Z}}_0, {\mathcal{Z}}}, {\mathcal{Z}})$ is a grid diagram with only $O$'s, some of them marked for destabilization, as in Sections~\ref{sec:uleft} and \ref{sec:complex}. As such, it has a complex of positive pairs $\mathit{CP}^*(G^{{\mathcal{Z}}_0, {\mathcal{Z}}}, {\mathcal{Z}})$, cf.\ Section~\ref{sec:complex}. In light of the isomorphisms \eqref{eq:psiz}, we see that \begin {equation} \label {eq:gze} \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}) \cong \mathit{CP}^*(G^{{\mathcal{Z}}_0, {\mathcal{Z}}}, {\mathcal{Z}}) \otimes M, \end {equation} where $M$ is an $\mathcal R$-module obtained by setting some $U$ variables equal to each other. Let ${\mathcal{Z}}_0, {\mathcal{Z}}, {\mathcal{Z}}'$ be disjoint sets of linked markings on $G$, such that ${\mathcal{Z}}_0 \cup {\mathcal{Z}} \cup {\mathcal{Z}}'$ is consistent. There are natural composition maps \begin {equation} \label {eq:compose} \circ : \mathit{CP}^i(G; {\mathcal{Z}}_0 \cup {\mathcal{Z}}, {\mathcal{Z}}') \otimes \mathit{CP}^j(G; {\mathcal{Z}}_0, {\mathcal{Z}}) \to \mathit{CP}^{i+j}(G; {\mathcal{Z}}_0, {\mathcal{Z}} \cup {\mathcal{Z}}'), \end {equation} obtained from the respective $\Hom$ complexes by restriction. We define the {\em extended complex of positive pairs} associated to $G$ to be $$ \mathit{CE}^*(G) = \bigoplus_{{\mathcal{Z}}_0, {\mathcal{Z}}} \mathit{CP}^*(G; {\mathcal{Z}}_0, {\mathcal{Z}}), $$ where the direct sum is over all collections ${\mathcal{Z}}_0, {\mathcal{Z}}$ such that ${\mathcal{Z}}_0 \cup {\mathcal{Z}}$ is consistent. 
This breaks into a direct sum $$ \mathit{CE}^*(G) = \bigoplus_{k = 0}^{n-q} \mathit{CE}^*(G; k),$$ according to the cardinality $k$ of ${\mathcal{Z}}$, i.e., the number of points marked for destabilization. Putting together the maps \eqref{eq:compose}, we obtain global composition maps: \begin {equation} \label {eq:circ} \circ : \mathit{CE}^i(G; k) \otimes \mathit{CE}^j(G; l) \to \mathit{CE}^{i+j}(G; k+l), \end {equation} where the compositions are set to be zero when not a priori well-defined on the respective summands. These composition maps satisfy a Leibniz rule for the differential. The complex $\mathit{CE}^*(G)$ was mentioned in the Introduction. There we stated Conjecture~\ref{conj:extended}, which says that for any toroidal grid diagram $G$, we have $$ \mathit{HE}^d(G) = 0 \text{ when } d <0.$$ Observe that Conjecture~\ref{conj:extended} would be a direct consequence of Conjecture~\ref{conj}, because of Equation~\eqref{eq:gze}. We prove a weaker version of Conjecture~\ref{conj:extended}, which applies only to sparse grid diagrams (as defined in the Introduction). \begin {theorem} \label {thm:esparse} Let $G$ be a sparse toroidal grid diagram representing a link $L$. Then $\mathit{HE}^d(G; k) = 0$ whenever $d < \min \{0, 2-k\}$. \end {theorem} \begin {proof} This follows from Theorem~\ref{thm:sparse}, using Equation~\eqref{eq:gze}. The key observation is that all destabilizations of a sparse diagram at linked markings are also sparse. \end {proof} \subsection{Formal complex structures} Let $G$ be a grid presentation for the link $L$, such that $G$ has $q \geq 1$ free markings. Our goal is to find an algorithm for computing $\HFm_{*}(S^3_\Lambda(L), {\mathfrak{u}})$, where $\Lambda$ is a framing of $L$ and ${\mathfrak{u}}$ a ${\operatorname{Spin^c}}$ structure on $S^3_{\Lambda}(L)$. We will first describe how to do so assuming that Conjecture~\ref{conj:extended} is true, and using Theorem~\ref{thm:Surgery}. 
By Theorem~\ref{thm:Surgery}, we need to compute the homology of the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}) \subseteq {\mathcal{C}}^-(G, \Lambda)$, with its differential ${\mathcal{D}}^-$. In the definition of ${\mathcal{D}}^-$ we use the maps $\Phi^{\orM}_\mathbf s$, which in turn are based on destabilization maps $\hat {D}^{\orM}_\mathbf s$ of the kind constructed in Section~\ref{sec:desublink}. The maps $\hat {D}^{\orM}_\mathbf s$, obtained by compression, are themselves sums of compositions of maps of the type ${D}^{\mathcal{Z}}_\mathbf s$ (for unordered sets of markings ${\mathcal{Z}}$), as in Section~\ref{sec:DestabSeveral}. Consider the maps \begin {equation} \label {eq:dezz} {D}^{{\mathcal{Z}}}_\mathbf s: \mathfrak A^-(\Ta , \Tb^{{\mathcal{Z}}_0}, \mathbf s) \to \mathfrak A^-(\Ta, \Tb^{{\mathcal{Z}}_0 \cup {\mathcal{Z}}}, \mathbf s), \end {equation} for $\mathbf s \in J({\mathcal{Z}}_0 \cup {\mathcal{Z}})$, compare Remark~\ref{rem:note}. Each such map is defined by counting holomorphic $(k+2)$-gons of index $1-k$ in $\mathrm{Sym}^n({\mathcal{T}})$, where $k$ is the cardinality of ${\mathcal{Z}}$. The holomorphic polygon counts that appear are the same regardless of the value $\mathbf s \in J({\mathcal{Z}}_0 \cup {\mathcal{Z}})$. Hence, if we fix ${\mathcal{Z}}_0$ and ${\mathcal{Z}}$, the map ${D}^{{\mathcal{Z}}}_\mathbf s$ for one value of $\mathbf s$ determines those for all other $\mathbf s$. \begin {remark} One instance of this principle can be seen at the level of the maps ${D}^{\orM}_\mathbf s$ defined in Equation~\eqref{eq:dems}. Fixing $\orM$, the maps ${D}^{\orM}_\mathbf s$ are determined by the map for one value of $\mathbf s \in J(\orM)$, using Equation~\eqref{eq:comm}. Indeed, in that situation the respective inclusion maps ${\mathcal{I}}^{\orM}_\mathbf s$ are invertible, essentially because they satisfy Equation~\eqref{eq:comm}.
Hence, taking $\orM_1 = \orM$ and $\orM_2 = \orL-M$, we find that the map ${D}^{\orM}_\mathbf s$ is determined by ${D}^{\orM}_{p^{\orL-M}(\mathbf s)}$. \end {remark} Pick some $\mathbf s \in J_\infty({\mathcal{Z}}_0 \cup {\mathcal{Z}}) \subset J({\mathcal{Z}}_0 \cup {\mathcal{Z}})$. We will focus on the map \eqref{eq:dezz} for that value $\mathbf s$. We seek to understand the map ${D}^{\mathcal{Z}}_\mathbf s$ combinatorially. In the case $k=0$, we know that holomorphic bigons are the same as empty rectangles on ${\mathcal{T}}$, cf.~\cite{MOS}. For $k=1$, one can still count holomorphic triangles explicitly, cf.\ Section~\ref{sec:triangles}. Unfortunately, for $k \geq 2$, the count of holomorphic $(k+2)$-gons seems to depend on the almost complex structure on $\mathrm{Sym}^n({\mathcal{T}})$. The best we can hope for is not to calculate the maps ${D}^{{\mathcal{Z}}}$ explicitly, but to calculate them up to chain homotopy. In turn, this will give an algorithm for computing the chain complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ up to chain homotopy equivalence, which is enough to determine its homology. Recall that a complex structure $j$ on the torus ${\mathcal{T}}$ gives rise to a complex structure $\mathrm{Sym}^n(j)$ on the symmetric product $\mathrm{Sym}^n({\mathcal{T}})$. In \cite{HolDisk}, in order to define Floer homology, the authors used a certain class of perturbations of $\mathrm{Sym}^n(j)$, which are (time dependent) almost complex structures on $\mathrm{Sym}^n({\mathcal{T}})$. For each almost complex structure $J$ in this class, one can count $J$-holomorphic polygons for various destabilization maps.
In particular, for any $$ \mathbf x \in \Ta \cap \Tb^{{\mathcal{Z}}_0}, \ \ \ \mathbf y \in \Ta \cap \Tb^{{\mathcal{Z}}_0 \cup {\mathcal{Z}}}, \ \tilde \mathbf y = U_1^{i_1} \cdots U_{\ell+q}^{i_{\ell+q}} (\mathbf y,\epsilon)$$ there is a number $$ n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y) \in {\mathbb{F}} $$ such that the destabilization map~\eqref{eq:dezz} is given by $$ {D}^{{\mathcal{Z}}}_\mathbf s \mathbf x = \sum_{\tilde \mathbf y} n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y) \cdot \tilde \mathbf y. $$ In other words, $n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y)$ is the count of $J$-holomorphic $(k+2)$-gons between $\mathbf x$ and $\tilde \mathbf y$, in all possible homotopy classes $\phi$ from $\mathbf x$ to $\tilde \mathbf y$ with $\mu(\phi) = 1-k$, and coming from all possible orderings of the elements of ${\mathcal{Z}}$. (Recall that ${D}^{{\mathcal{Z}}}$ is a sum of maps of the form ${D}^{(Z_1,\dots , Z_k)}$, where $(Z_1,\dots , Z_k)$ is an ordering of ${\mathcal{Z}}$.) Observe that, in order for $n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y)$ to be nonzero, the pair $[\mathbf x, \tilde \mathbf y]$ has to be positive. Hence, the set of values $n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y)$ produces well-defined elements in the extended complex of positive pairs on $G$: $$ c_k(J) = \sum n_J^{{\mathcal{Z}}_0, {\mathcal{Z}}}(\mathbf x, \tilde \mathbf y) \cdot [\mathbf x, \tilde \mathbf y] \in \mathit{CP}^{1-k}(G; {\mathcal{Z}}_0, {\mathcal{Z}}) \subseteq \mathit{CE}^{1-k}(G; k), \ \ k \geq 1.$$ Lemma~\ref{lemma:d2} implies that the elements $c_k(J)$ satisfy the following compatibility conditions, with respect to the composition product~\eqref{eq:circ}: $$ dc_k(J) = \sum_{i=1}^{k-1} c_i(J) \circ c_{k-i}(J).$$ In particular, $dc_1(J) = 0$. Note that $c_1(J)$ is given by the count of snail-like domains, and therefore is independent of $J$.
We denote it by $$c_1^{{\operatorname{sn}}} \in \mathit{CE}^0(G; 1).$$ \begin {definition} \label{def:FormalComplexStructure} A {\em formal complex structure} $\mathfrak{c}$ on the grid diagram $G$ (of grid number $n$, and with $q \geq 1$ free markings) consists of a family of elements $$c_k \in \mathit{CE}^{1-k}(G; k),\quad k = 1, \dots, n-q,$$ satisfying $c_1 = c_1^{{\operatorname{sn}}}$ and the compatibility conditions: \begin {equation} \label {eq:compat} dc_k = \sum_{i=1}^{k-1} c_i \circ c_{k-i}. \end {equation} \end {definition} In particular, an (admissible) almost complex structure $J$ on $\mathrm{Sym}^n({\mathcal{T}})$ induces a formal complex structure $\mathfrak{c}(J)$ on $G$. \begin {remark} If we let $\mathfrak{c}=(c_1, c_2, \dots) \in \mathit{CE}^*(G)$, the relation~\eqref{eq:compat} is summarized by the equation \begin {equation} \label {eq:master} d\mathfrak{c} = \mathfrak{c} \circ \mathfrak{c}. \end {equation} \end {remark} \begin {definition} \label {def:homot} Two formal complex structures $\mathfrak{c} = (c_1, c_2, \dots), \mathfrak{c}'=(c_1', c_2', \dots)$ on a grid diagram $G$ (of grid number $n$, with $q \geq 1$ free markings) are called {\em homotopic} if there exists a sequence of elements $$ h_k \in \mathit{CE}^{-k}(G; k),\quad k = 1, \dots, n-q $$ satisfying $h_1 = 0$ and \begin {equation} \label {eq:homs} c_k - c_k' = dh_k + \sum_{i=1}^{k-1} \bigl( c'_i \circ h_{k-i} + h_i \circ c_{k-i} \bigr ). \end {equation} \end {definition} Observe that, if $J$ and $J'$ are (admissible) almost complex structures on $\mathrm{Sym}^n({\mathcal{T}})$, one can interpolate between them by a family of almost complex structures. The resulting counts of holomorphic $(2+k)$-gons of index $-k$ induce a homotopy between $\mathfrak{c}(J)$ and $\mathfrak{c}(J')$. There is therefore a canonical homotopy class of formal complex structures that come from actual almost complex structures. 
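Spelled out for small $k$, the compatibility conditions~\eqref{eq:compat} read (over ${\mathbb{F}} = {\mathbb{Z}}/2{\mathbb{Z}}$, so signs are immaterial):
\begin{align*}
dc_1 &= 0, \\
dc_2 &= c_1 \circ c_1, \\
dc_3 &= c_1 \circ c_2 + c_2 \circ c_1, \\
dc_4 &= c_1 \circ c_3 + c_2 \circ c_2 + c_3 \circ c_1.
\end{align*}
Thus $c_2$ is a homotopy exhibiting $c_1 \circ c_1$ as a coboundary, and the higher $c_k$ play the role of higher homotopies; Equation~\eqref{eq:master} packages all of these relations into a single Maurer--Cartan-type equation.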
\begin {lemma} \label {lemma:1k} Assume $\mathit{HE}^{1-k}(G; k) = 0$ for any $k = 2, \dots, n-q$. Then any two formal complex structures on $G$ are homotopic. \end {lemma} \begin {proof} Let $\mathfrak{c} = (c_1, c_2, \dots), \mathfrak{c}'=(c_1', c_2', \dots)$ be two formal complex structures on $G$. We prove the existence of the elements $h_k$ by induction on $k$. When $k=1$, we have $c_1 = c_1' = c_1^{{\operatorname{sn}}}$ so we can take $h_1 =0$. Assume we have constructed $h_i$ for $i < k$ satisfying~\eqref{eq:homs}, and we need to construct $h_k$. Since by hypothesis $\mathit{HE}^{1-k}(G; k) = 0$, it suffices to show that \begin{equation} \label{eq:ck-cycle} c_k - c_k' - \sum_{i=1}^{k-1} \bigl( c'_i \circ h_{k-i} + h_i \circ c_{k-i} \bigr ) \end{equation} is a cocycle. Indeed, we have \begin {multline*} d \Bigl( c_k - c_k' - \sum_{i=1}^{k-1} \bigl( c'_i \circ h_{k-i} + h_i \circ c_{k-i} \bigr ) \Bigr ) \\ \begin{aligned} &= \sum_{i=1}^{k-1} c_i \circ c_{k-i} - \sum_{i=1}^{k-1} c'_i \circ c'_{k-i} - \sum_{i=1}^{k-1} \bigl( dc'_i \circ h_{k-i} + c'_i \circ dh_{k-i} + dh_i \circ c_{k-i} + h_i \circ dc_{k-i} \bigr) \\ &= \sum_{i=1}^{k-1} (c_i - c_i' - dh_i) \circ c_{k-i} + \sum_{i=1}^{k-1} c_i' \circ (c_{k-i} - c'_{k-i} -dh_{k-i}) - \sum_{i=1}^{k-1} dc'_i \circ h_{k-i} - \sum_{i=1}^{k-1} h_i \circ dc_{k-i} \\ &= \sum(c_{\alpha}'h_{\beta} c_{\gamma} + h_{\alpha}c_{\beta} c_{\gamma}) + \sum(c_{\alpha}'h_{\beta} c_{\gamma} + c'_{\alpha}c'_{\beta} h_{\gamma}) - \sum c'_{\alpha}c'_{\beta} h_{\gamma} - \sum h_{\alpha}c_{\beta} c_{\gamma}\\ &= 0. \end{aligned} \end {multline*} In the second-to-last line the summations are over $\alpha, \beta, \gamma \geq 1$ with $\alpha + \beta + \gamma = k$, and we suppressed the composition symbols for simplicity.
\end {proof} \subsection {Combinatorial descriptions} Consider a formal complex structure $\mathfrak{c}$ on a grid $G$ (of grid number $n$, with $q \geq 1$ free markings), a framing $\Lambda$ for $L$, and an equivalence class ${\mathfrak{u}} \in \mathbb{H}(L)/H(L, \Lambda) \cong {\operatorname{Spin^c}}(S^3_\Lambda(L))$. We seek to define a complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$ analogous to the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ from Section~\ref{sec:surgery}, but defined using the elements $c_k$ instead of the holomorphic polygon counts. (In particular, if $\mathfrak{c} = \mathfrak{c}(J)$ for an actual almost complex structure $J$, we want to recover the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ from Section~\ref{sec:surgery}.) Let us explain the construction of ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$. Recall that in Section~\ref{sec:surgery} the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}})$ was built by combining hypercubes of the form $\mathcal{H}^{{\mathcal{Z}}}_\mathbf s$, for various consistent sets of linked markings ${\mathcal{Z}}$ and values $\mathbf s \in J({\mathcal{Z}})$. We can define analogous hypercubes $\mathcal{H}^{\mathcal{Z}}_\mathbf s(\mathfrak{c})$, by counting $(k+2)$-gons according to the coefficients of the enhanced domains that appear in $c_k$. Taking into account the naturality properties of compression discussed at the end of Section~\ref{sec:hyper}, the proof of the following lemma is straightforward: \begin {lemma} \label {lemma:xx} A homotopy between formal complex structures $\mathfrak{c}, \mathfrak{c}'$ on $G$ induces a chain homotopy equivalence between the complexes ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$ and ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c}')$. \end {lemma} With this in mind, we are ready to prove the two theorems advertised in the Introduction.
\begin{proof}[Proof of Theorem~\ref{thm:Three}] The algorithm to compute $\HFm$ of an integral surgery on a link $\orL$ goes as follows. First, choose a sparse grid diagram $G$ for $\orL$ (for example, by taking the sparse double of an ordinary grid diagram, as in Figure~\ref{fig:sparse}). Then, choose any formal complex structure on $G$, construct the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$, and take its homology. Let us explain why this algorithm is finite and gives the desired answer. Observe that $\mathit{CE}^*(G)$ is finite in each degree, and the number $k$ of destabilization points is bounded above by $n-q$, so the direct sum $\oplus_{k \geq 1} \mathit{CE}^{1-k}(G; k)$ is a finite set. Further, we know that a formal complex structure exists, because one is induced by an almost complex structure $J$. Thus, we can find a formal complex structure $\mathfrak{c}$ on $G$ by a brute force approach: go over all the finitely many sequences $ \mathfrak{c} = (c_1=c_1^{{\operatorname{sn}}}, c_2, c_3, \dots) \in \oplus_{k \geq 1} \mathit{CE}^{1-k}(G; k)$, and pick the first one that satisfies Equation~\eqref{eq:master}. Then, by Theorem~\ref{thm:esparse} and Lemma~\ref{lemma:1k} we know that all possible $\mathfrak{c}$'s are homotopic. Lemma~\ref{lemma:xx}, together with Theorem~\ref{thm:Surgery}, tells us that the homology of ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$ is indeed the right answer (after dividing out the factor corresponding to the homology of a torus). (We could alternately take a somewhat more efficient, step-by-step approach to finding $\mathfrak{c}$: start with $c_1=c_1^{{\operatorname{sn}}}$ and inductively find each $c_k$ for $k \ge 2$. Since the expression in~\eqref{eq:ck-cycle} is a cocycle, the obstruction to extending a partial formal complex structure to the next step vanishes.)
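Concretely, each step of the inductive search is linear algebra over ${\mathbb{F}} = {\mathbb{Z}}/2{\mathbb{Z}}$: once $c_1, \dots, c_{k-1}$ are fixed, Equation~\eqref{eq:compat} asks for $c_k$ solving a linear system whose matrix encodes the differential $d$ on $\mathit{CE}^{1-k}(G; k)$ in the basis of positive pairs. As an illustration only (the names `solve_mod2`, `A`, `b` are ours, and the toy matrix stands in for the much larger differential), here is a minimal sketch of such a solver by Gaussian elimination over ${\mathbb{F}}$:

```python
def solve_mod2(A, b):
    """Solve A x = b over the field F_2 = Z/2Z by Gaussian elimination.

    A is a list of m rows, each a list of n entries in {0, 1}; b is a
    list of m entries in {0, 1}.  Returns one solution x (a list of n
    entries), or None if the system is inconsistent.
    """
    m, n = len(A), len(A[0])
    # Row-reduce an augmented copy [A | b], leaving the inputs intact.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    pivots = []  # pivots[i] = column of the pivot in reduced row i
    r = 0
    for c in range(n):
        # Find a row at or below r with a 1 in column c.
        pr = next((i for i in range(r, m) if M[i][c]), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        # Clear column c in every other row; addition in F_2 is XOR.
        for i in range(m):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    # A zero row with a nonzero right-hand side means no solution.
    if any(not any(M[i][:n]) and M[i][n] for i in range(m)):
        return None
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = M[i][n]
    return x
```

In the step-by-step search, $b$ would encode the known element $\sum_{i=1}^{k-1} c_i \circ c_{k-i}$ and any solution $x$ a candidate for $c_k$; if no solution exists for some choice of earlier terms, one backtracks, exactly as in the brute-force enumeration above.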
Note that although this description is combinatorial in nature, the complex ${\mathcal{C}}^-(G, \Lambda, {\mathfrak{u}}, \mathfrak{c})$ is still infinite dimensional, being an infinite direct product of modules over a ring of power series. However, we can replace it by a quasi-isomorphic, finite dimensional complex over ${\mathbb{F}}={\mathbb{Z}}/2{\mathbb{Z}}$ using the vertical and horizontal truncation procedures from \cite[Section 6]{LinkSurg}. Taking the homology of the truncated complex is clearly an algorithmic task. One can calculate the other versions of Heegaard Floer homology ($\HFinf, \mathit{HF}^+$) in a similar way, using Theorem~\ref{thm:AllVersions}. \end {proof} \begin{proof}[Proof of Theorem~\ref{thm:Four}] One can calculate the maps induced by two-handle additions using Theorems~\ref{thm:Cobordisms} and \ref{thm:AllVersions}. Furthermore, we can calculate the mixed invariants of closed four-manifolds using Proposition~\ref{prop:mixed}. In all cases, one proceeds by choosing an arbitrary formal complex structure $\mathfrak{c}$ on the grid diagram $G$, and computing the respective groups or maps using polygon counts prescribed by $\mathfrak{c}$. \end {proof} We also have the following: \begin {theorem} \label {thm:SpectralComb} Fix a sparse grid diagram $G$ for an oriented link $\orL' \cup \orL$ in $S^3$. Fix also framings $\Lambda$ for $L$ and $\Lambda'$ for $L'$. Suppose that $L$ has $\ell$ components $L_1, \dots, L_\ell$. Let $Y(0,\dots,0) = S^3_{\Lambda'}(L')$, and, for any $\eps \in \mathbb{E}_\ell$, let $Y(\eps)$ be obtained from $Y(0,\dots,0)$ by surgery on the components $L_i \subseteq L$ with $\eps_i = 1$. Then, all the pages of the link surgeries spectral sequence from Theorem~\ref{thm:OSspectral} (with $E^1 = \oplus_{\eps \in \mathbb{E}_\ell} \CFm(Y(\eps))$ and coefficients in ${\mathbb{F}} ={\mathbb{Z}}/2{\mathbb{Z}}$) are algorithmically computable. 
\end {theorem} \begin {proof} Use the equivalent description of the spectral sequence given in Theorem~\ref{thm:SpectralSequence}. Choose a formal complex structure $\mathfrak{c}$ on the grid diagram $G$, and construct a complex ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda, \mathfrak{c} \! \sslash \! L)$ analogous to ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda \! \sslash \! L)$, but using the polygon counts given by $\mathfrak{c}$. Then compute the spectral sequence associated to the depth filtration on ${\mathcal{C}}^-(G, \Lambda' \cup \Lambda, \mathfrak{c} \! \sslash \! L)$. \end {proof} \begin {remark} Suppose a link $L$ has grid number $m$, that is, $m$ is the smallest number such that $L$ admits a grid presentation of that size. Our algorithms above are based on a sparse grid diagram for $L$, and such a diagram must have grid number at least $2m$. If Conjecture~\ref{conj:extended} were true, we would obtain more efficient algorithms, because we could start with a diagram of grid number only $m+1$ (by adding only one free marking to the minimal grid). \end {remark} \begin {remark} Our present techniques do not give a combinatorial procedure for finding the map $F^-_{W,{\mathfrak{t}}}$ associated to an arbitrary cobordism (in a given degree), or even its rank. However, suppose $W$ is a cobordism between connected three-manifolds $Y_1$ and $Y_2$ such that the induced maps $H_1(Y_1; {\mathbb{Z}})/{\operatorname{Tors}} \to H_1(W; {\mathbb{Z}})/{\operatorname{Tors}}$ and $H_1(Y_2; {\mathbb{Z}})/{\operatorname{Tors}} \to H_1(W; {\mathbb{Z}})/{\operatorname{Tors}}$ are surjective. Then the ranks of $F^-_{W, {\mathfrak{t}}}$ in fixed degrees can be computed using the same arguments as in \cite[Section 4]{LMW}. Indeed, they are determined by the ranks of the maps induced by the two-handle additions which are part of the cobordism $W$. \end {remark} \section{Sparse grid diagrams} \label {sec:sparse} This section is devoted to the proof of Theorem~\ref{thm:sparse}.
Let $(G, {\mathcal{Z}})$ be a sparse toroidal grid diagram with some $O$'s marked for destabilization, as in Section~\ref{sec:uleft}. We define a filtration ${\mathcal{F}}$ on $\mathit{CP}^*(G, {\mathcal{Z}})$ as follows. Let us mark one $X$ in each square of the grid with the property that neither its row nor its column contains an $O$ marked for destabilization. (Note that these $X$'s have nothing to do with the original set of $X$'s from the link.) See Figure~\ref{fig:ys}, where the squares marked by an $X$ are shown shaded. Given a pair $[\mathbf x, \tilde \mathbf y]$, let $(D, \epsilon, \rho)$ be an enhanced domain from $\mathbf x$ to $\tilde \mathbf y$. Let $X(D)$ be the number of $X$'s inside $D$, counted with multiplicity. Define \begin {equation} \label {eq:fex} {\mathcal{F}}([\mathbf x, \tilde \mathbf y]) = -X(D) - \sum_{j=1}^k \rho_{j}. \end {equation} It is easy to see that $X(D)$ does not change under addition of periodic domains. Since the same is true for the real multiplicities $\rho_{j}$, the value ${\mathcal{F}}([\mathbf x, \tilde \mathbf y])$ is well-defined. Furthermore, pre- or post-composing with a rectangle can only decrease ${\mathcal{F}}$, so ${\mathcal{F}}$ is indeed a filtration on $\mathit{CP}^*(G, {\mathcal{Z}})$. To show that $\mathit{HP}^d(G, {\mathcal{Z}}) = 0$ for $d < \min \{0, 2-k\}$, it suffices to check that the cohomology of the associated graded complex of ${\mathcal{F}}$ vanishes in those degrees. In the associated graded complex ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$, the differential only involves composing with rectangles $r$ such that $r$ is supported in a row or column going through some $O_{i_j} \in {\mathcal{Z}}$ marked for destabilization, but $r$ does not contain $O_{i_j}$. We cannot post-compose with such a rectangle, because it would move the destabilization corners in $\mathbf y$, and that is not allowed. Thus, the differential of ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$ only involves pre-composing with rectangles as above.
For each positive pair ${\mathbf{p}} = [\mathbf x, \tilde \mathbf y]$, we let $C{\mathcal{F}}^*(G,{\mathcal{Z}}, {\mathbf{p}})$ be the subcomplex of ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$ generated by ${\mathbf{p}}$ and those pairs related to ${\mathbf{p}}$ by a sequence of nonzero differentials in ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$. More precisely, for each complex over ${\mathbb{F}}$ freely generated by a set $S$, we can form an associated graph whose set of vertices is $S$ and with an edge from $\mathbf x$ to $\mathbf y$ ($\mathbf x, \mathbf y\in S$) whenever the coefficient of $\mathbf y$ in $d\mathbf x$ is one. Then the graph of $C{\mathcal{F}}^*(G,{\mathcal{Z}},{\mathbf{p}})$ is the connected component containing~${\mathbf{p}}$ of the graph of ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$ (with respect to the standard basis). \begin {definition} Let $(G, {\mathcal{Z}})$ be a toroidal grid diagram with ${\mathcal{Z}}= \{O_{i_j}| j=1, \dots, k\}$ marked for destabilization. The four corners of the square containing $O_{i_j}$ are called \emph{inner corners} at $O_{i_j}$. An element $\mathbf x \in {\mathbf{S}}(G)$ is called {\em inner} if, for each $j=1, \dots, k$, at least one of the inner corners at $O_{i_j}$ is part of $\mathbf x$. The element $\mathbf x$ is called {\em outer} otherwise. \end {definition} \begin {lemma} \label {lemma:inner} Let $(G, {\mathcal{Z}})$ be a sparse toroidal grid diagram with some $O$'s marked for destabilization, where $|{\mathcal{Z}}| = k \geq 2$. Let ${\mathbf{p}} = [\mathbf x, \tilde \mathbf y]$ be a positive pair such that $\mathbf x$ is inner. Then the index $I({\mathbf{p}})$ is at least $2-k$. \end {lemma} \begin {proof} Let $(D, \epsilon, \rho)$ be a positive enhanced domain going between $\mathbf x$ and $\tilde \mathbf y$. Then $$ I({\mathbf{p}}) = I(D, \epsilon, \rho) = I(D) - \sum_{j=1}^k (\epsilon_j + 2f_j)$$ by Equation~(\ref{eq:enhanced-index}). 
Just as in the proof of Proposition~\ref{prop:oneO}, without loss of generality we can assume that every row or column contains at least one square where the multiplicity of $D$ is zero; hence, as we move to an adjacent row (or column), the multiplicity of the domain~$D$ can only change by $0$ or $\pm 1$. According to Equation~\eqref{eq:lipshitz}, the usual index $I(D)$ is given by the sum of the average multiplicities of $D$ at the corners. For each $j =1, \dots, k$, define $n_j$ to be the sum of the average multiplicities at the corners situated on one of the following four lines: the two vertical lines bordering the column of $O_{i_j}$ and the two horizontal lines bordering the row of $O_{i_j}$; with the caveat that, if such a corner $c$ (with average multiplicity $a$ around it) appears on one of the four lines for $O_{i_j}$ and also on one of the four lines for $O_{i_l}$, $l \neq j$, then we let $c$ contribute $a/2$ to $n_j$ and $a/2$ to $n_l$. For an example, see Figure~\ref{fig:inner}. Note that, since the diagram is sparse, the average multiplicities at inner corners are only counted in one~$n_l$. \begin{figure} \begin{center} \input{inner.pstex_t} \end{center} \caption {{\bf An example of a positive enhanced domain with $\mathbf x$ inner.} We have $k=2$ and the two $O$'s marked for destabilization ($O_{i_j}, j=1,2$) are shown in the figure with the value of $j$ written inside. We view this as a domain of type $LL$, meaning that $\epsilon_1 = \epsilon_2 = 0$, and with the real multiplicities at the destabilization points equal to zero, so that $f_1 = f_2 = 1$. The domain has index $0$, and the quantities $n_1$ and $n_2$ both equal $3\cdot (1/4) + (3/4) + (1/4 + 3/4)/2 = 2$. } \label{fig:inner} \end{figure} We get \begin {equation} \label {eq:index1} I({\mathbf{p}}) \geq \sum_{j=1}^k (n_j - \epsilon_j - 2f_j). \end {equation} We will prove that \begin{equation} \label{eq:nj-bound} n_j \geq 2f_j + \epsilon_j/2. 
\end{equation} Indeed, the relations \eqref{ineq:abcd1} imply that the average multiplicity at the destabilization point $p_j$ for $O_{i_j}$ (which is part of $\mathbf y$) is $$\frac{a_j+b_j + c_j + d_j}{4} \geq f_j + \frac{2\epsilon_j-1}{4}. $$ Since $\mathbf x$ is inner, there is also one point of $\mathbf x$, call it $x$, in a corner of the square containing $O_{i_j}$. There are four cases, according to the position of $x$. We first consider the case when the marking at $O_{i_j}$ is $L$ (i.e., $\epsilon_j = 0$). \begin{enumerate} \item If $x$ is the lower left corner (which is the same as the destabilization point), we have $a_j + d_j = b_j +c_j \geq 2f_j$ there, so the average multiplicity at $x$ is at least $f_j$; since the corner counts in both $\mathbf x$ and $\mathbf y$, we get $n_j \geq 2f_j$, as desired. \item If $x$ is the lower right corner, since $b_j, d_j \geq f_j$ and the multiplicity of $D$ can change by at most $\pm 1$ as we pass a column, we find that the average multiplicity at $x$ is at least $f_j - 1/4$. Together with the contribution from $p_j \in \mathbf y$, this adds up to $2f_j - 1/2$. Suppose both the contributions from $p_j$ and $x$ are exactly $f_j-1/4$. Then if $r$ is a square other than $O_{i_j}$ in the row through $O_{i_j}$, and $s$ is the square directly below it, the local multiplicity at $r$ is one greater than it is at $s$. This contradicts the positivity assumption combined with the assumption that there is at least one square with multiplicity~$0$ in the row containing $O_{i_j}$. Hence, the contribution from $x$ and $p_j$ is at least $2f_j$, as desired. \item The case when $x$ is the upper left corner is similar to the lower right one, with the roles of the row and the column through $O_{i_j}$ swapped. \item Finally, if $x$ is the upper right corner, then the average multiplicity there is at least $f_j - 3/4$. Together with the contribution from $p_j \in \mathbf y$, we get a contribution of at least $2f_j - 1$.
There are two remaining corners on the vertical lines through $p_j$ and $x$; we call them $c_1 \in \mathbf x$ and $c_2 \in \mathbf y$, respectively. We claim that the contributions of the average multiplicities of $c_1$ and $c_2$ to $n_j$ sum up to at least $1/2$. Indeed, if at least one of these average multiplicities is $\geq 3/4$, their sum is $\geq 1$, which might be halved (because the contribution may be split with another $n_l$) to get at least $1/2$. If both of the average multiplicities are $1/4$ (i.e., both $c_1$ and $c_2$ are $90^\circ$ corners), they must lie on the same horizontal line, and therefore their contributions are not shared with any of the other $n_l$'s; so they still add up to $1/2$. A similar argument gives an additional contribution of at least $1/2$ from the two remaining corners on the row through $O_{i_j}$. Adding it all up, we get $n_j \geq (2f_j - 1) + 1/2 + 1/2 = 2f_j$. \end{enumerate} This completes the proof of Equation~\eqref{eq:nj-bound} when $D$ is of type~$L$ at $O_{i_j}$. When $D$ is of type~$R$ there (i.e., $\epsilon_j = 1$), the contribution of~$x$ to $n_j$ is at least $f_j - \frac{3}{4}$. Studying the four possible positions of $x$, just as in the $L$ case, gives additional contributions to $n_j$ of at least $1$, which proves Equation~\eqref{eq:nj-bound}. In fact, the contributions are typically strictly greater than $1$; the only situation in which we can have equality in~\eqref{eq:nj-bound} when $D$ is of type~$R$ is when $x$ is the upper right corner and the local multiplicities around $x$ and $p_j$ are exactly as in Figure~\ref{fig:equality}. \begin{figure} \begin{center} \input{equality.pstex_t} \end{center} \caption {{\bf Equality in Equation~\eqref{eq:nj-bound}.} On the left, we show the local multiplicities around $O_{i_j}$ in the case $\epsilon_j =1,n_j = 2f_j + 1/2$. (Note that the real multiplicity at $O_{i_j}$ is then zero.) On the right we picture a domain of this type, with $k=1$, $f_1 = 1$. 
There are two corners, marked by $\times$, whose contributions are not counted in $n_1$. As a consequence, Inequality~\eqref{eq:index1} is strict.} \label{fig:equality} \end{figure} Putting Equation~\eqref{eq:nj-bound} together with Inequality~\eqref{eq:index1}, we obtain \begin {equation} \label {eq:index2} I({\mathbf{p}}) \geq \sum_{j=1}^k (-\epsilon_j/2) \geq -k/2. \end {equation} Our goal was to show that $I({\mathbf{p}}) \geq 2-k$. This follows directly from~\eqref{eq:index2} for $k \geq 4$; it also follows when $k = 3$, by observing that $I({\mathbf{p}})$ is an integer. The only remaining case is $k=2$, when we want to show $I({\mathbf{p}}) \geq 0$, but Inequality~\eqref{eq:index2} only gives $I({\mathbf{p}}) \geq -1$. However, if $I({\mathbf{p}}) = -1$ we would have equality in all the inequalities that were used to arrive at Inequality~\eqref{eq:index2}. In particular, both destabilizations are of type $R$, the corresponding $x$'s are the upper right corners of the respective squares, and the local multiplicities there are as in Figure~\ref{fig:equality}. Observe that as we move down from the row above $O_{i_j}$ to the row containing $O_{i_j}$, the local multiplicity cannot decrease. The same is true as we move down from the row containing $O_{i_j}$ to the one just below. On the other hand, looking at the column of $O_{i_1}$, as we go down around the grid from the square just below $O_{i_1}$ (where the multiplicity is $f_1+1$) to the one just above $O_{i_1}$ (where the multiplicity is $f_1-1$), we must encounter at least two horizontal circles where the multiplicity decreases. By our observation above, neither of these circles can be one of the two that bound the row through $O_{i_2}$. However, one or two of them could be the circles of the two remaining corners on the column through $O_{i_1}$.
These corners only contribute to $n_1$, not to $n_2$, and since we had equality when we counted their contribution to be at least $1/2$, it must be the case that each of them is a $90^{\circ}$ corner, with a contribution of $1/4$. This means that they must lie on the same horizontal circle. Hence, there must be one other horizontal circle along which local multiplicities of our domain decrease as we cross it from above. On this circle there are some additional corners, with a nontrivial contribution to $I(D)$ unaccounted for; compare the right hand side of Figure~\ref{fig:equality}. (Note however that in that figure, we consider $k=1$, rather than $k=2$.) These vertices contribute extra to the vertex multiplicity, which means that $I({\mathbf{p}}) > -1$. \end {proof} \begin {remark} It may be possible to improve the inequality~\eqref{eq:index2} to $I({\mathbf{p}}) \geq 0$ for every $k > 0$ along the same lines, by doing a more careful analysis of the contributions to $I(D)$. \end {remark} \begin {lemma} \label {lemma:outer} Let $(G, {\mathcal{Z}})$ be a sparse toroidal grid diagram, and ${\mathbf{p}} = [\mathbf x, \tilde \mathbf y]$ a positive pair such that $\mathbf x$ is outer. Then the cohomology of $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$ is zero. \end {lemma} \begin {proof} First, observe that, since the differential of ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$ only involves pre-compositions, $\tilde \mathbf y$ is the same for all generators of $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$. Furthermore, since $\mathbf x$ is outer, all the other generators $[\mathbf x', \tilde \mathbf y]$ of $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$ are outer. Let $j$ be such that no corner of $O_{i_j}$ is in $\mathbf x$. Consider a new filtration ${\mathcal{G}}$ on $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$ given as follows. 
Let us mark one $Y$ in each square of the grid that lies in a column or row through an $O$ marked for destabilization, but does not lie in the column going through $O_{i_j}$ (where $j$ was chosen above), nor does it lie in the same square as one of the other $O_{i_l}$'s. Further, we mark $n-1$ copies of $Y$ in the square directly below the square of $O_{i_j}$. (Here $n$ is the size of the grid.) Finally, we mark one extra $Y$ in the square directly to the left of each $O_{i_l}$, $l \neq j$. See Figure~\ref{fig:ys} for an example. Observe that for every periodic domain equal to the row through some $O_{i_l}$ minus the column through $O_{i_l}$ (for some $l=1, \dots, k$, including $l=j$), the signed count of $Y$'s in that domain is zero. Consider now the squares in $G$ that do not lie in any row or column that goes through an $O$ marked for destabilization. We denote them by $s_{u,v}$, with $u, v \in \{1, \dots, n-k\}$, where the two indices $u$ and $v$ keep track of the (renumbered) row and column, respectively. Note that all these squares are already marked by an $X$, used to define the filtration ${\mathcal{F}}$. We will additionally mark them with several $Y$'s, where the exact number $\alpha_{u,v}$ of $Y$'s in $s_{u,v}$ is to be specified soon. \begin{figure} \begin{center} \input{ys.pstex_t} \end{center} \caption {{\bf The markings defining the filtrations.} We show here a grid diagram of grid number $6$, with two $O$'s marked for destabilization, namely $O_{i_1}=O_2$ and $O_{i_2}=O_5$. We draw small circles surrounding each of the two destabilization points. Each shaded square contains an $X$, and this defines the filtration ${\mathcal{F}}$. One could also imagine an $X$ marking in the upper right quadrant of each small destabilization disk, to account for the term $\rho_j$ in Equation~\eqref{eq:fex}. Given an outer generator $\mathbf x$, we need to choose some $j \in \{1,2\}$ such that no corner of $O_{i_j}$ is in $\mathbf x$. Suppose we chose $j=1$.
We then define a second filtration ${\mathcal{G}}$ on the components of the associated graded of ${\mathcal{F}}$, using the $Y$ markings as shown, plus five (invisible) $Y$ markings in the lower left corner of each small destabilization disk. We first choose the $Y$ markings in the unshaded squares, then mark the shaded squares so that every periodic domain has a total of zero markings, counted with multiplicities.} \label{fig:ys} \end{figure} Given one of the generators ${\mathbf{p}}' = [\mathbf x', \tilde \mathbf y]$ of $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$, choose an enhanced domain $(D, \epsilon, \rho)$ from $\mathbf x'$ to $\mathbf y$, and let $Y(D)$ be the number of $Y$'s inside $D$ (counted with multiplicity). Set \begin {equation} \label {eq:gp} {\mathcal{G}}({\mathbf{p}}') = - Y(D) - (n-1)\sum_{l=1}^k (c_l(D) - f_l(D) - \epsilon_l(D) +1). \end {equation} Here $c_l, f_l, \epsilon_l$ are as in Section~\ref{sec:uleft}. The second term in the formula above is chosen so that adding to $D$ a column through some $O_{i_l}$ does not change the value of ${\mathcal{G}}$; this term can be interpreted as marking $n-1$ additional $Y$'s in the lower left quadrant of the small destabilization disk around $O_{i_l}$. We require that the quantity ${\mathcal{G}}$ is well-defined, i.e., it should not depend on $D$, but only on the pair ${\mathbf{p}}'$. For this to be true, we need to ensure that the addition of a periodic domain to $D$ does not change ${\mathcal{G}}$. This can be arranged by a judicious choice of the quantities $\alpha_{u,v}$, for $u, v \in \{1, \dots, n-k\}$. Indeed, the only generators of the space of periodic domains that have not been already accounted for are those of the form: column minus row through some $O$ not marked for destabilization. Note that there is a permutation $\sigma$ of $\{1, \dots, n-k\}$ such that the unmarked $O$'s are in the squares $s_{u\sigma(u)}$.
There are $n-k$ conditions that we need to impose on $\alpha_{u,v}$, namely \begin {equation} \label {eq:linear} \sum_{v=1}^{n-k} \alpha_{uv} - \sum_{v=1}^{n-k} \alpha_{v\sigma(u)} = t_u, \end {equation} for $u=1, \dots, n-k$. Here $t_u$ are determined by the number of $Y$'s that we already marked in the respective row and column (as specified above, in squares not marked by $X$), with an extra contribution in the case of the row just below some $O_{i_l}$ and the column just to the left of $O_{i_l}$, to account for the term $c_l(D) - f_l(D) - \epsilon_l(D) +1$ from Equation~\eqref{eq:gp}. Note that $\sum_{u=1}^{n-k} t_u = 0$. We claim that there exists a solution (in rational numbers) to the linear system described in Equation~\eqref{eq:linear}. Indeed, the system is described by an $(n-k)$-by-$(n-k)^2$ matrix $A$, each of whose columns contains one $1$ entry, one $-1$ entry, and the rest just zeros. If a vector $(\beta_u)_{u=1, \dots, n-k}$ is in the kernel of the transpose $A^t$, it must have $\beta_u - \beta_v = 0$ for all $u, v$. In other words, ${\operatorname{Im}}(A)^\perp = {\operatorname{Ker}}(A^t)$ is the span of $(1, \dots, 1)$, so $(t_1, \dots, t_{n-k})$ must be in the image of $A$. By multiplying all the values in a rational solution of \eqref{eq:linear} by a large integer (and also multiplying the number of $Y$'s initially placed in the rows and columns of the destabilized $O$'s by the same integer), we can obtain a solution of \eqref{eq:linear} in integers. By adding a sufficiently large constant to the $\alpha_{uv}$ (but not to the number of $Y$'s initially placed), we can then obtain a solution in nonnegative integers, which we take to be our definition of $\alpha_{uv}$. We have now ensured that ${\mathcal{G}}$ is an invariant of ${\mathbf{p}}'$.
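As a numerical aside (ours, not part of the original argument), the solvability claim for the system \eqref{eq:linear} can be checked directly: each column of the matrix $A$ has one $+1$ and one $-1$, so $\operatorname{Ker}(A^t)$ is spanned by $(1,\dots,1)$, and any right-hand side with zero sum lies in the image. The sketch below builds $A$ for a random permutation $\sigma$; the variable names and the choice $m = n-k = 4$ are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                       # m stands for n - k, the number of renumbered rows
sigma = rng.permutation(m)  # permutation placing the unmarked O's
sigma_inv = np.argsort(sigma)

# Build the (m x m^2) matrix A of the system
#   sum_v alpha[u,v] - sum_v alpha[v, sigma(u)] = t_u.
# The column for the variable alpha[u,v] gets +1 in row u and -1 in the
# row w with sigma(w) = v (the two entries cancel when these coincide).
A = np.zeros((m, m * m))
for u in range(m):
    for v in range(m):
        col = u * m + v
        A[u, col] += 1.0
        A[sigma_inv[v], col] -= 1.0

# A right-hand side with zero total, as guaranteed in the text:
t = rng.standard_normal(m)
t -= t.mean()

alpha, *_ = np.linalg.lstsq(A, t, rcond=None)
assert np.allclose(A @ alpha, t)           # the system is consistent
assert np.allclose(A.T @ np.ones(m), 0.0)  # (1,...,1) spans Ker(A^t)
```

Since the least-squares residual vanishes exactly when $t \perp \operatorname{Ker}(A^t)$, the first assertion is the computational counterpart of the image argument above.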
Moreover, pre-composing with a rectangle can only decrease ${\mathcal{G}}$, and it keeps ${\mathcal{G}}$ the same only when the rectangle (which a priori has to be supported in one of the rows and columns through some $O_{i_l}$) is actually supported in the column through $O_{i_j}$, and does not contain the square right below $O_{i_l}$. It follows that ${\mathcal{G}}$ is indeed a filtration on $C{\mathcal{F}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$. We denote the connected components of the associated graded complex by $C{\mathcal{G}}^*(G,{\mathcal{Z}}, {\mathbf{p}}')$; it suffices to show that these have zero cohomology. Without loss of generality, we will focus on $C{\mathcal{G}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$. The complex $C{\mathcal{G}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$ can only contain pairs $[\mathbf x', \tilde \mathbf y]$ such that $\mathbf x'$ differs from $\mathbf x$ by either pre- or post-composition with a rectangle supported in the column through $O_{i_j}$. The condition that this rectangle does not contain the square right below $O_{i_l}$ is automatic, because $\mathbf x$ and $\mathbf x'$ are outer. We find that there can be at most two elements in $C{\mathcal{G}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$, namely ${\mathbf{p}} = [\mathbf x, \tilde \mathbf y]$ and ${\mathbf{p}}' = [\mathbf x', \tilde \mathbf y]$, where $\mathbf x'$ is obtained from $\mathbf x$ by switching the horizontal coordinates of the components of $\mathbf x$ in the two vertical circles bordering the column of $O_{i_j}$. Provided that ${\mathbf{p}}'$ is positive, the pairs ${\mathbf{p}}$ and ${\mathbf{p}}'$ are related by a differential, so the cohomology of $C{\mathcal{G}}^*(G, {\mathcal{Z}}, {\mathbf{p}})$ would indeed be zero. Therefore, the last thing to be checked is that ${\mathbf{p}}'$ is positive. We know that ${\mathbf{p}}$ is positive, so we can choose a positive enhanced domain $D$ representing ${\mathbf{p}}$. 
Recall that the destabilization point near $O_{i_j}$ is denoted $p_j$ and is part of $\mathbf y$. Draw a vertical segment $S$ going down the vertical circle from $p_j$ to a point of $\mathbf x$. There are three cases: \begin {enumerate} \item There is no point of $\mathbf x'$ on the segment $S$. Then there exists a rectangle going from $\mathbf x'$ to $\mathbf x$, and ${\mathbf{p}}'$ appears in $d {\mathbf{p}}$ by pre-composition. Adding the rectangle to $D$ preserves positivity. \item\label{case:del-rect} There is a point of $\mathbf x'$ on $S$, and the multiplicity of $D$ just to the right of $S$ is nonnegative. Then there is a rectangle, just to the right of $S$, going from $\mathbf x$ to $\mathbf x'$. We get a positive representative for ${\mathbf{p}}'$ by subtracting this rectangle from $D$. \item There is a point of $\mathbf x'$ on $S$, and the multiplicity of $D$ is zero somewhere just to the right of $S$. Note that as we cross the segment $S$ from left to right the drop in multiplicity is constant; since $D$ is positive, this drop must be nonnegative. In particular, $c_j \geq d_j$. Relation~\eqref{eq:abcd1} implies $a_j \geq b_j + 1$. Looking at the inequalities in~\eqref{ineq:abcd1}, we see that we can use two of them (the ones involving $b_j$ and $d_j$) to improve the other two: \begin {equation} \label {ineq:abcd2} a_j \geq b_j + 1 \geq f_j + 1, \qquad c_j \geq d_j \geq f_j + \epsilon_j. \end {equation} Let us add to $D$ the periodic domain given by the column through $O_{i_j}$. This increases $b_j, d_j$, and $f_j$ by $1$ while keeping $a_j$ and $c_j$ constant. Nevertheless, Inequality~\eqref{ineq:abcd2} shows that the inequalities~\eqref{ineq:abcd1} are still satisfied for the new domain $\tilde D$. Thus $\tilde D$ is positive, and its multiplicity just to the right of $S$ is everywhere nonnegative. We can then subtract a rectangle from $\tilde D$ to obtain a positive representative for ${\mathbf{p}}'$ as in case~\eqref{case:del-rect}. 
\end {enumerate} The three cases are pictured in Figure~\ref{fig:3cases}. \end {proof} \begin{figure} \begin{center} \input{3cases.pstex_t} \end{center} \caption {{\bf The three cases in Lemma~\ref{lemma:outer}.} On the left of each picture we show a positive $L$ domain representing ${\mathbf{p}}=[\mathbf x, \tilde \mathbf y]$. These domains have index $0$, $2$ and $2$, respectively (assuming that all the real multiplicities are zero). On the right of each picture we show the corresponding positive domain representing ${\mathbf{p}}' = [\mathbf x', \tilde \mathbf y]$. These domains have all index $1$. The components of $\mathbf x$ are the black dots, the components of $\mathbf x'$ the gray dots, and those of $\mathbf y$ the white dots. The segment $S$ is drawn thicker.} \label{fig:3cases} \end{figure} \begin {proof}[Proof of Theorem~\ref{thm:sparse}] The case when $k=0$ is trivial, and the case $k=1$ follows from Proposition~\ref{prop:oneO}. For $k \geq 2$, Lemma~\ref{lemma:inner} says that for any pair $[\mathbf x, \tilde \mathbf y]$ of index $< 2-k$, the generator $\mathbf x$ is outer. Lemma~\ref{lemma:outer} then shows that the homology of ${\operatorname{gr}_{\F}\CP^*(G, \zed) }$ is zero in the given range, which implies the same for $\mathit{HP}^*(G, {\mathcal{Z}})$. \end {proof} \bibliographystyle{custom}
\section{Introduction} One of the most remarkable effects predicted by quantum field theory is the Lamb shift, the shift of the energy levels of an atom which is caused by the coupling to the quantum vacuum. It is known that this level shift can be modified by external influences like a cavity \cite{Meschede92}, for example. Its presence alters the mode structure of the vacuum and leads to a Lamb shift which is different from its free-space value. In this paper, we study the effect of acceleration on radiative energy shifts. It may not seem obvious at first sight why acceleration should lead to a modification of the Lamb shift. To see this, one has to combine results from different subfields of physics. First we note that for a uniformly accelerated observer, the Minkowski vacuum appears as a thermal heat bath of so-called ``Rindler particles''. This is usually interpreted as a consequence of the non-equivalence of the particle concept in inertial and accelerated frames \cite{Fulling73,Unruh76,Birrell82}. The second ingredient we need is the fact that the presence of photons leads to additional energy shifts for atomic systems. This effect is called the AC Stark shift or light shift \cite{Cohen-Tannoudji92} and is connected with the virtual absorption and emission of real photons. In particular, a thermal photon field causes such an effect \cite{Barton72,Knight72,Farley81}. Consequently, taking these results together, we can gain a heuristic insight into why the Lamb shift of an atom is modified if the atom is uniformly accelerating. A corresponding effect is to be expected for other types of acceleration. The first aim of this paper is the calculation of radiative energy shifts. It can be carried out in an elegant manner using the formalism of Dalibard, Dupont-Roc, and Cohen-Tannoudji (DDC) \cite{Dalibard82,Dalibard84}. This approach has the advantage that it also allows one to separate the contributions of vacuum fluctuations and radiation reaction to the energy shifts.
The independent treatment of these two effects has a tradition in Heisenberg picture quantum electrodynamics \cite{Ackerhalt73,Senitzky73,Milonni73,Ackerhalt74,Milonni75} and beyond \cite{Barut90}. In a previous paper \cite{Audretsch94}, we have studied the influences of vacuum fluctuations and radiation reaction on the spontaneous transitions of a uniformly accelerated atom moving through the Minkowski vacuum. This leads to a modified value of the Einstein A coefficient for spontaneous emission. In addition we have shown how the lack of a balance between vacuum fluctuations and radiation reaction causes a spontaneous excitation from the ground state. This gives an interpretation of the physics underlying the Unruh effect. For the radiative shift of atomic levels, the separate discussion of vacuum fluctuations and radiation reaction may also be interesting from a conceptual point of view, since in heuristic pictures the Lamb shift has often been associated with the notion of vacuum fluctuations alone \cite{Welton48}. This discussion is the second aim of this paper. A fully realistic calculation of the influence of acceleration on the Lamb shift would require dealing with a multilevel atom coupled to the electromagnetic field. This will not be done in the present paper. To keep the discussion as clear and transparent as possible, we will restrict ourselves to the simplest nontrivial example, a two-level atom in interaction with a massless scalar quantum field. However, as we will see, most of the essential features of the full problem are also present in the simple model. The structure of the results can be seen clearly. Furthermore, we will discuss only the nonrelativistic contribution to the energy shift, i.e., we neglect the effects which are due to the quantum nature of the electron field. However, it is well known that the nonrelativistic part gives the dominant contribution to the Lamb shift.
Since we are only interested in the structure of the results, we will concentrate on this part of the problem. Our treatment applies the formalism of DDC to the case of a two-level atom coupled to a scalar quantum field and generalizes it to an arbitrary stationary trajectory of the atom. We will proceed as follows: We consider the time evolution of an arbitrary atomic observable $G$ as given by the Heisenberg equations of motion. Since we are only interested in atom variables, we trace out the field degrees of freedom in the part of the Heisenberg equations that is due to the coupling with the field. It is then possible to identify unambiguously in the resulting expressions the part that acts as an effective Hamiltonian with respect to the time evolution of $G$. The expectation value of this operator in an atom state $|b \rangle$ gives the radiative energy shift of this level. We must emphasize that the modification of radiative level shifts by acceleration which is considered here must be carefully distinguished from the direct influence of acceleration or curvature on the energy levels of an atom \cite{Parker80,Audretsch93}. These corrections are obtained in the simplest case by the inclusion of a term $amx$ into the Dirac equation of the atom. They are assumed to be already included in the otherwise unperturbed energies $\pm \frac{1}{2} \omega_0$ of the atom. In contrast to this, the effects considered in this paper are true radiative corrections caused by the interaction of the atom with the quantum field. The organization of the paper is the following: In Sec. 2, we define our model and set up the Heisenberg equations of motion, which are solved formally. We apply the formalism of DDC to our model in Sec. 3 and generalize it to an atom moving on an arbitrary stationary trajectory through the Minkowski vacuum. In Sec. 4, we calculate the level shift for a two-level atom at rest.
We find that for the symmetric operator ordering adopted here, the only contribution to relative energy shifts comes from vacuum fluctuations, while the effect of radiation reaction is the same for both levels. The results obtained for the scalar theory are then compared with the standard results for the electromagnetic field, and a similar structure is found. Sec. 5 deals with a uniformly accelerated atom. Because the analysis becomes more involved in this case, we use some methods from quantum field theory in accelerated frames, which are discussed in the Appendix. As a result, we find that the contribution of vacuum fluctuations to the level shift is altered by the appearance of a thermal term with the Unruh temperature $T= \hbar a/(2\pi c k)$. The contribution of radiation reaction is the same as for an atom at rest and does not contribute to relative shifts. Finally, we point out the similarity of the results to those obtained for the Lamb shift in a thermal heat bath \cite{Barton72,Knight72,Farley81}. \section{Two-level atom interacting with a massless scalar quantum field} To investigate how the radiative energy shifts of atoms are modified by acceleration, we choose the simplest nontrivial example: a two-level atom in interaction with a real massless scalar field. We consider an atom on a timelike Killing trajectory $x(\tau) =( t(\tau), {\vec x}(\tau))$, which is parametrized by the proper time $\tau$. It will be called a {\it stationary trajectory} \cite{Letaw81}. The important consequence of the stationarity is that the level spacing $\omega_0$ of the two states $|- \rangle$ and $|+ \rangle$ of the atom does not depend on $\tau$. The zero of energy is chosen so that the energies of the two stationary states are $-\frac{1}{2}\omega_0$ and $+\frac{1}{2}\omega_0$. As mentioned in the Introduction, a possible constant modification of the energy level spacing which is directly caused by the acceleration is assumed to be already included.
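As a numerical aside to the Unruh temperature $T = \hbar a/(2\pi c k)$ quoted above (ours, not part of the original text), the following sketch evaluates the formula in SI units; it illustrates why the effect is far too small to observe for everyday accelerations.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m / s
kB   = 1.380649e-23      # Boltzmann constant, J / K

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 * pi * c * kB) for proper acceleration a."""
    return hbar * a / (2.0 * math.pi * c * kB)

# Even at Earth's surface gravity the thermal term is minuscule:
T = unruh_temperature(9.81)
assert 3.9e-20 < T < 4.1e-20   # about 4 * 10^-20 kelvin
```

Reaching $T \sim 1\,\mathrm{K}$ requires accelerations of order $10^{20}\,\mathrm{m/s^2}$, which is why the thermal correction to the level shift is of theoretical rather than immediate experimental interest.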
The free Hamiltonian of the two-level atom which generates the atom's time evolution with respect to the proper time $\tau$ is given by \begin{equation} H_A (\tau) =\omega_0 R_3 (\tau) \label{eq1} \end{equation} where we have written $R_3 = \frac{1}{2} |+ \rangle \langle + | - \frac{1}{2} |- \rangle \langle - |$, following Dicke \cite{Dicke54}. The free Hamiltonian of the quantum field is \begin{equation} {\tilde H}_F (t) = \int d^3 k \, \omega_{\vec k}\, a^\dagger_{\vec k} a_{\vec k} , \label{eq2}\end{equation} where $a^\dagger_{\vec k}$, $a_{\vec k}$ are creation and annihilation operators of `photons' with momentum ${\vec k}$. The Hamiltonian (\ref{eq2}) governs the time evolution of the quantum field with respect to the time variable $t$ of the inertial frame. However, to derive the Heisenberg equations of motion of the coupled system, we have to choose a common time variable. It turns out that it is most reasonable to refer generally to the atom's proper time $\tau$. The free Hamiltonian of the field with respect to $\tau$ can be obtained by a simple change of the time variable in the Heisenberg equations: \begin{equation} H_F (\tau) = \int d^3 k\, \omega_{\vec k} \,a^\dagger_{\vec k} a_{\vec k} \frac{dt}{d \tau}. \label{eq3}\end{equation} We couple the atom and the quantum field by the interaction Hamiltonian \begin{equation} H_I = \mu R_2 (\tau) \phi ( x(\tau)) , \label{eq5}\end{equation} which is a scalar model of the electric dipole interaction. Here we have introduced $R_2 = \frac{1}{2} i ( R_- - R_+)$, where $R_+ = |+ \rangle \langle - |$ and $R_- = |- \rangle \langle +|$ are raising and lowering operators for the atom. $\mu$ is a small coupling constant. The field operator in (\ref{eq5}) is evaluated along the world line $x(\tau)$ of the atom.
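As an illustrative check (ours, not part of the original text), the Dicke operators $R_3$, $R_\pm$ defined above can be realized as $2\times 2$ matrices in the basis $\{|+\rangle, |-\rangle\}$ and their standard commutation relations verified numerically; all variable names below are our own.

```python
import numpy as np

# Basis vectors: |+> = (1, 0), |-> = (0, 1)
plus  = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

R_plus  = np.outer(plus, minus)   # |+><-|, the raising operator
R_minus = np.outer(minus, plus)   # |-><+|, the lowering operator
R3 = 0.5 * (np.outer(plus, plus) - np.outer(minus, minus))
R2 = 0.5j * (R_minus - R_plus)    # R_2 = (i/2)(R_- - R_+)

def comm(A, B):
    """Commutator [A, B]."""
    return A @ B - B @ A

# The su(2) relations used implicitly in the Heisenberg equations:
assert np.allclose(comm(R3, R_plus),  R_plus)
assert np.allclose(comm(R3, R_minus), -R_minus)
assert np.allclose(comm(R_plus, R_minus), 2.0 * R3)
```

In this representation $H_A = \omega_0 R_3$ is simply $\mathrm{diag}(\omega_0/2, -\omega_0/2)$, which matches the level energies $\pm\frac{1}{2}\omega_0$ stated above.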
In the solutions of the Heisenberg equations of the atom and field variables, two physically different contributions can be distinguished: (1) the {\it free part}, which is the part of the solution that goes back to the free Hamiltonians (\ref{eq1}) and (\ref{eq3}) and which is present even in the absence of the interaction, and (2) the {\it source part}, which is caused by the coupling between atom and field and represents their mutual influence: \begin{eqnarray} R_\pm (\tau) &=& R^f_\pm (\tau) + R^s_\pm (\tau), \\ R_3(\tau) &=& R^f_3 (\tau) + R_3^s (\tau), \\ \phi(x(\tau)) &=& \phi^f (x(\tau)) + \phi^s (x(\tau)).\label{eq5a}\end{eqnarray} Their explicit form has been given in \cite{Audretsch94}. \section{Radiative energy shifts: The contributions of vacuum fluctuations and radiation reaction} To determine the Lamb shift of an accelerated atom, we use the physically appealing formalism of Dalibard, Dupont-Roc, and Cohen-Tannoudji (DDC) \cite{Dalibard82,Dalibard84}, which we generalize to calculate level shifts for an atom on an arbitrary stationary world line $x(\tau)$. One attractive feature of the formalism is that it allows a separate discussion of the two physical mechanisms which both contribute to radiative energy shifts: the contributions of {\it vacuum fluctuations} and of {\it radiation reaction}. Let us first discuss how these two parts can be separated (for a more detailed discussion see \cite{Audretsch94}). Because of the coupling (\ref{eq5}), the field operator $\phi$ will appear in the Heisenberg equation of motion of an arbitrary atomic variable $G$. According to (\ref{eq5a}), $\phi$ can be split into its free and source part. The following physical mechanisms can be connected with these two contributions: (1) the part of the Heisenberg equation which contains $\phi^f$ represents the change in $G$ due to {\it vacuum fluctuations}. It is caused by the fluctuations of the field which are present even in the vacuum.
(2) The term containing $\phi^s$ represents the influence of the atom on the field. This part of the field in turn reacts back on the atom. This mechanism is called {\it radiation reaction}. Explicitly, the contributions of vacuum fluctuations and radiation reaction to $dG/d \tau$ are: \begin{equation} \left( {d G(\tau) \over d \tau} \right)_{vf} = {1\over 2}i \mu \Bigl( \phi^f (x(\tau)) [R_2 (\tau), G (\tau)] + [R_2 (\tau), G (\tau)] \phi^f (x(\tau)) \Bigr), \label{eq15}\end{equation} \begin{equation} \left( {d G(\tau) \over d \tau} \right)_{rr} = {1\over 2}i \mu \Bigl( \phi^s (x(\tau)) [R_2 (\tau), G (\tau)] + [R_2 (\tau), G (\tau)] \phi^s (x(\tau)) \Bigr), \label{eq16}\end{equation} where we followed DDC \cite{Dalibard82} in choosing a symmetric ordering between atom and field operators. Because we are interested only in observables of the atom, we take the average of (\ref{eq15}) and (\ref{eq16}) in the vacuum state of the quantum field. In a perturbative treatment, we take into account only terms up to second order in $\mu$. Proceeding in a similar manner as in Refs. \cite{Dalibard84,Audretsch94}, it is then possible to identify in Eqs. (\ref{eq15}) and (\ref{eq16}) an {\it effective Hamiltonian} for atomic observables that governs the time evolution with respect to $\tau$ in addition to the free Hamiltonian (\ref{eq1}). The averaged equations (\ref{eq15}) and (\ref{eq16}) can be written \begin{equation} \langle 0| \left( {d G (\tau) \over d\tau} \right)_{vf,rr} |0\rangle = i [H^{eff}_{vf,rr} (\tau), G(\tau)] + \hbox{non-Hamiltonian terms},
\label{eq26}\end{equation} where, in order $\mu^2$, \begin{equation} H_{vf}^{eff} (\tau) = {1\over 2} i \mu^2 \int_{\tau_0}^\tau d \tau' C^F(x(\tau),x(\tau')) [R_2^f(\tau'),R_2^f(\tau)], \label{eq33}\end{equation} \begin{equation} H_{rr}^{eff} (\tau) = -{1\over 2} i \mu^2 \int_{\tau_0}^\tau d \tau' \chi^F(x(\tau),x(\tau')) \{ R_2^f(\tau'),R_2^f(\tau)\}. \label{eq34}\end{equation} These effective Hamiltonians depend on the field variables only through the simple statistical functions \begin{equation} C^{F}(x(\tau),x(\tau')) = {1\over 2} \langle 0| \{ \phi^f (x(\tau)), \phi^f (x(\tau')) \} | 0 \rangle, \label{eq24} \end{equation} \begin{equation} \chi^F(x(\tau),x(\tau')) = {1\over 2} \langle 0| [ \phi^f (x(\tau)), \phi^f (x(\tau'))] | 0 \rangle, \label{eq25}\end{equation} which are called the {\it symmetric correlation function} and the {\it linear susceptibility of the field}. The non-Hamiltonian terms in (\ref{eq26}) describe the effects of relaxation. The expectation values of the effective Hamiltonians (\ref{eq33}) and (\ref{eq34}) in an atomic state $|b \rangle$ represent the energy shift of the atomic level $|b\rangle$ caused by the coupling to the quantum field. The total shift contains the contribution of vacuum fluctuations as well as the contribution of radiation reaction.
Taking the expectation values of (\ref{eq33}) and (\ref{eq34}), we obtain the radiative energy shifts of the level $|b\rangle$ due to vacuum fluctuations \begin{equation} (\delta E_b)_{vf} = - i \mu^2 \int_{\tau_0}^\tau d \tau' \, C^F(x(\tau),x(\tau')) \chi^A_b(\tau,\tau'), \label{eq35} \end{equation} and due to radiation reaction \begin{equation} (\delta E_b)_{rr} = - i \mu^2 \int_{\tau_0}^\tau d \tau' \, \chi^F(x(\tau),x(\tau')) C^A_b(\tau,\tau'), \label{eq36}\end{equation} where we have defined the {\it symmetric correlation function of the atom} \begin{equation} C^{A}_b(\tau,\tau') = {1\over 2} \langle b| \{ R_2^f (\tau), R_2^f (\tau')\} | b \rangle \label{eq38}\end{equation} and the {\it linear susceptibility of the atom} \begin{equation} \chi^A_b(\tau,\tau') = {1\over 2} \langle b| [ R_2^f (\tau), R_2^f (\tau')] | b \rangle. \label{eq37}\end{equation} As a result, we note that Eqs. (\ref{eq35}) and (\ref{eq36}) for the radiative energy shifts of atoms that move on an arbitrary stationary trajectory differ from the results for atoms at rest \cite{Dalibard82,Dalibard84} only in that the statistical functions of the field (\ref{eq24}) and (\ref{eq25}) are evaluated along the world line of the atom, which may now be an accelerated one. The statistical functions of the atom do not depend on the trajectory $x(\tau)$. Below we will need the explicit forms of the different statistical functions. Those referring to the atom can be written \begin{eqnarray} C^{A}_b(\tau,\tau') & =& {1\over 2} \sum_d |\langle b| R_2^f (0) | d \rangle |^2 \left( e^{i \omega_{bd}(\tau - \tau')} + e^{-i \omega_{bd} (\tau - \tau')} \right), \label{eq39}\\ \chi^A_b(\tau,\tau') & =& {1\over 2} \sum_d |\langle b| R_2^f (0) | d \rangle |^2 \left(e^{i \omega_{bd}(\tau - \tau')} - e^{-i \omega_{bd}(\tau - \tau')} \right), \label{eq40}\end{eqnarray} where $\omega_{bd}= \omega_b-\omega_d$ and the sum extends over a complete set of atomic states.
\section{Radiative energy shift for an atom at rest} Let us first reproduce the standard result for the Lamb shift for an atom at rest in the scalar theory. It can be compared afterwards with the corresponding result for a uniformly accelerated atom. The statistical functions of the field for the trajectory \begin{equation} t(\tau) =\tau, \qquad {\vec x} (\tau)=0 \label{eq41}\end{equation} can be easily evaluated. We obtain \begin{eqnarray} C^F(x(\tau),x(\tau')) &=& {1\over 8\pi^2}\int d \omega \, \omega \left( e^{-i \omega (\tau-\tau')} + e^{i \omega (\tau-\tau')}\right) \label{eq42} \\ \chi^F(x(\tau),x(\tau')) &=& {1\over 8\pi^2}\int d \omega\, \omega \left( e^{-i \omega (\tau-\tau')} - e^{i \omega (\tau-\tau')}\right). \label{eq43} \end{eqnarray} The contribution of vacuum fluctuations to the radiative shift of level $|b\rangle$ can be calculated from (\ref{eq35}) \begin{equation} (\delta E_b)_{vf} = {\mu^2\over 8\pi^2} \sum_d |\langle b|R_2^f(0)| d\rangle|^2 \int_0^\infty d\omega \,\omega \left({{\cal P}\over \omega + \omega_{bd}} - {{\cal P}\over \omega -\omega_{bd}} \right), \label{eq47}\end{equation} where $\cal P$ denotes the principal value. The integral is logarithmically divergent, as expected for a nonrelativistic calculation of radiative shifts \cite{Bethe47}. As is well known, the introduction of a cutoff is therefore necessary. The summation over $d$ displays the role of virtual transitions to other levels. The {\it relative} energy shift of the two levels due to vacuum fluctuations can be calculated by evaluating the $d$ summation for $|b\rangle = |\pm\rangle$. The resulting expression is \begin{eqnarray} \Delta_{vf} &=& (\delta E_+)_{vf} - (\delta E_-)_{vf} \\ &=& {\mu^2\over 16\pi^2} \int_0^\infty d\omega \,\omega \left({{\cal P}\over \omega +\omega_0} - {{\cal P}\over \omega -\omega_0} \right).
\label{eq48}\end{eqnarray} For the contribution of radiation reaction we obtain \begin{equation} (\delta E_b)_{rr} = -{\mu^2\over 8\pi^2} \sum_d |\langle b|R_2^f(0)| d\rangle|^2 \int_0^\infty d\omega \,\omega \left({{\cal P}\over \omega + \omega_{bd}} + {{\cal P}\over \omega -\omega_{bd}} \right). \label{eq49}\end{equation} This expression diverges linearly. However, as can be seen by explicitly evaluating the sum, radiation reaction does not give a contribution to the relative shift of the two levels: \begin{equation} \Delta_{rr} = (\delta E_+)_{rr} - (\delta E_-)_{rr} = 0. \label{eq50}\end{equation} Thus the {\it Lamb shift}, as the relative radiative energy shift of the two-level atom at rest, is caused entirely by vacuum fluctuations: \begin{eqnarray} \Delta_0 &\equiv& \Delta_{vf} + \Delta_{rr} \nonumber\\ &=& {\mu^2\over 16\pi^2} \int_0^\infty d\omega \,\omega \left({{\cal P}\over \omega +\omega_0} - {{\cal P}\over \omega -\omega_0} \right). \label{eq51}\end{eqnarray} This expression agrees structurally with the standard result for the Lamb shift of a two-level atom \cite{Ackerhalt73,Senitzky73,Milonni73,Ackerhalt74,Milonni75}. The modifications are caused by the differences between the electromagnetic and the scalar theory. We mention that Welton's picture of radiative shifts \cite{Welton48}, which interprets the Lamb shift solely in terms of vacuum fluctuations, conforms with the fact that the energy shift (\ref{eq51}) is caused solely by vacuum fluctuations. This feature is a peculiarity of the simple model of a two-level atom, however. For a real multilevel atom, the contribution of radiation reaction will be different for different levels, and a mass renormalization will become necessary. \section{Radiative energy shifts for a uniformly accelerated atom} Let us now consider the case of a uniformly accelerated two-level atom.
It moves on the trajectory \begin{equation} t(\tau)={1\over a}\sinh a \tau, \qquad z(\tau)={1\over a}\cosh a \tau, \label{eq52}\end{equation} $$ x(\tau)=y(\tau)=0, $$ where $a$ is the proper acceleration. The calculation of the statistical functions of the field turns out to be much more complicated for an accelerated atom than for an atom at rest. It is most convenient to employ methods from quantum field theory in accelerated frames. In order to keep the discussion transparent, we have put the calculation into the Appendix and simply quote the results here: \begin{eqnarray} C^F (x(\tau),x(\tau')) &=& {1\over 8\pi^2} \int_0^\infty d\omega' \,\omega' \coth\left({\pi\omega'\over a}\right)\left( e^{-i \omega' (\tau-\tau')} + e^{i \omega'(\tau-\tau')}\right), \label{eq53}\\ \chi^F (x(\tau),x(\tau')) &=& {1\over 8\pi^2} \int_0^\infty d\omega' \,\omega' \left( e^{-i \omega' (\tau-\tau')} - e^{i \omega'(\tau-\tau')}\right). \label{eq54}\end{eqnarray} We see that the expression (\ref{eq54}) for the linear susceptibility of the field is formally identical for an accelerated atom and an atom at rest. However, with regard to the cutoff prescription to be given below, it is important to note that $\omega'$ in (\ref{eq53}) and (\ref{eq54}) denotes the energy in the accelerated frame, i.e., as measured by an observer on the trajectory (\ref{eq52}). The remaining calculation is straightforward.
By substituting the statistical functions (\ref{eq39}), (\ref{eq40}), (\ref{eq53}), and (\ref{eq54}) into the general formulas (\ref{eq35}) and (\ref{eq36}) for the level shifts, we find \begin{eqnarray} (\delta E_b)_{vf} &=& {\mu^2\over 8\pi^2} \sum_d|\langle b|R^f_2 (0)|d \rangle|^2 \nonumber\\ &&\qquad\times \int_0^\infty d\omega'\,\omega' \coth\left({\pi\omega'\over a} \right) \hbox{Im}\left\{ \int_0^\infty du \left( e^{i(\omega' +\omega_{bd})u} - e^{i(\omega'-\omega_{bd})u}\right)\right\}, \label{eq55}\\ (\delta E_b)_{rr} &=& -{\mu^2\over 8\pi^2} \sum_d|\langle b|R^f_2 (0)|d \rangle|^2 \nonumber\\ &&\qquad\times\int_0^\infty d\omega'\,\omega' \hbox{Im}\left\{ \int_0^\infty du \left( e^{i(\omega' +\omega_{bd})u} + e^{i(\omega'-\omega_{bd})u}\right)\right\}. \label{eq56}\end{eqnarray} The result for the radiative energy shift for a uniformly accelerated atom can now be obtained by evaluating the integrals. The contribution of vacuum fluctuations is \begin{equation} (\delta E_b)_{vf} = {\mu^2\over 8\pi^2} \sum_d|\langle b|R^f_2 (0)|d \rangle|^2 \int_0^\infty d\omega'\,\omega'\left( 1 + {2\over e^{2\pi\omega'/a}-1}\right)\left({{\cal P}\over \omega'+\omega_{bd}} - {{\cal P}\over \omega'-\omega_{bd}}\right). \label{eq57}\end{equation} Comparing this formula to Eq. (\ref{eq47}) for an atom at rest, we note that the acceleration-caused correction is additive and contains a characteristic thermal term with the Unruh temperature $T = \hbar a/(2\pi kc)$. On the other hand, the contribution of radiation reaction, \begin{equation} (\delta E_b)_{rr} = -{\mu^2\over 8\pi^2} \sum_d|\langle b|R^f_2 (0)|d \rangle|^2 \int_0^\infty d\omega'\,\omega' \left({{\cal P}\over \omega'+\omega_{bd}} + {{\cal P}\over \omega'-\omega_{bd}}\right), \label{eq58}\end{equation} is exactly the same as for an atom at rest. The uniform acceleration has no effect on the shift caused by radiation reaction and we find again no relative energy shift: $\Delta_{rr}=0$.
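The cancellation $\Delta_{rr}=0$ and the doubling behind the relative shift follow from simple bookkeeping of the $d$ summation: for the two-level atom only the virtual transition to the other level contributes, with $|\langle +|R_2|-\rangle|^2 = |\langle -|R_2|+\rangle|^2 = {1\over 4}$ and $\omega_{+-} = -\omega_{-+} = \omega_0$. This can be verified symbolically; the following sketch (our helper names) treats the principal-value integrands simply as rational functions of $\omega$:

```python
# Formal check of the level-shift algebra for the two-level atom.
import sympy as sp

w, w0 = sp.symbols('omega omega_0', positive=True)

def f_vf(wbd):   # integrand factor of the vacuum-fluctuation shift
    return w * (1/(w + wbd) - 1/(w - wbd))

def f_rr(wbd):   # integrand factor of the radiation-reaction shift
    return w * (1/(w + wbd) + 1/(w - wbd))

# relative shifts: b = |+> has omega_bd = +omega_0, b = |-> has -omega_0
delta_rr = sp.simplify(f_rr(w0) - f_rr(-w0))   # even in omega_bd -> 0
delta_vf = sp.simplify(f_vf(w0) - f_vf(-w0))   # odd in omega_bd -> 2 f_vf

print(delta_rr, sp.simplify(delta_vf - 2*f_vf(w0)))
```

The factor of $2$, combined with the matrix element $1\over 4$, turns the prefactor $\mu^2/8\pi^2$ of the single-level shifts into the $\mu^2/16\pi^2$ of the relative shift.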
Thus the {\it Lamb shift} $\Delta$ can be obtained from the contribution of vacuum fluctuations (\ref{eq57}) by evaluating the summation over $d$ for each of the two levels separately: \begin{eqnarray} \Delta &=& \Delta_{vf}= (\delta E_+)_{vf} - (\delta E_-)_{vf}\nonumber\\ &=& {\mu^2\over 16\pi^2}\int_0^\infty d \omega'\, \omega' \left[ 1 + {2\over e^{2\pi\omega'/a}-1}\right]\left({{\cal P}\over \omega'+\omega_0} - {{\cal P}\over \omega'-\omega_0}\right). \label{eq59}\end{eqnarray} From the two terms in square brackets, we can distinguish two contributions in (\ref{eq59}). The first one has the same functional form as $\Delta_0$ of (\ref{eq51}) for an atom at rest. Because it is logarithmically divergent, the introduction of a cutoff is necessary. The only sensible way to do this is to impose the cutoff frequency in the rest frame of the atom. Since $\omega'$ in (\ref{eq59}) is the frequency in the accelerated system, this can be done quite naturally in the present formalism. The second contribution in (\ref{eq59}) represents the modification of the Lamb shift caused by the acceleration and contains the thermal term. Inspection of the integral shows that this correction is finite. Equation (\ref{eq59}) can thus be written \begin{equation} \Delta = \Delta_0 + {\omega_0\mu^2 \over 16\pi^2} \,G \left({a\over 2\pi \omega_0} \right), \label{eq60}\end{equation} where $\Delta_0$ is the Lamb shift (\ref{eq51}) for $a=0$ and $G(u)$ is defined by \begin{equation} G(u) = u \int_0^\infty dx\, {x\over e^x -1} \left({{\cal P}\over x+u^{-1}} - {{\cal P}\over x-u^{-1}} \right). \label{eq61}\end{equation} The evaluation of the integral must be done numerically. The result is shown in Fig. 1. In the limit of small $u=a/(2\pi \omega_0)$, (\ref{eq60}) can be approximated by \begin{equation} \Delta = \Delta_0 + {a^2\mu^2 \over 192 \pi^2}{1 \over \omega_0}. \label{eq62}\end{equation} The dependence on $a$ can be read off from Fig. 1.
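As a cross-check of Fig.~1, the integral (\ref{eq61}) can be evaluated with standard principal-value quadrature. The following sketch (our helper names, assuming scipy is available) uses the Cauchy weight of \texttt{scipy.integrate.quad} for the singular part; for small $u$ the correction term in the brackets of (\ref{eq61}) behaves like $2u$, so $G(u)\to (\pi^2/3)\,u^2$, which reproduces the $a^2\mu^2/(192\pi^2\omega_0)$ term in (\ref{eq62}):

```python
# Numerical evaluation of G(u) from eq. (61); principal value via the
# 'cauchy' weight of scipy.integrate.quad (a sketch, helper names are ours).
import math
from scipy.integrate import quad

def g(x):
    """Planck-type factor x/(e^x - 1), with its limit value 1 at x = 0."""
    if x == 0.0:
        return 1.0
    if x > 700.0:        # avoid overflow in expm1; the tail is negligible
        return 0.0
    return x / math.expm1(x)

def G(u):
    c = 1.0 / u          # principal-value singularity at x = c = 1/u
    regular, _ = quad(lambda x: g(x) / (x + c), 0.0, float('inf'))
    # P int_0^{2c} g(x)/(x - c) dx via the Cauchy weight, plus the smooth tail
    pv, _ = quad(g, 0.0, 2.0 * c, weight='cauchy', wvar=c)
    tail, _ = quad(lambda x: g(x) / (x - c), 2.0 * c, float('inf'))
    return u * (regular - pv - tail)

# small-u check against the expansion behind eq. (62): G(u) -> (pi^2/3) u^2
u = 0.05
print(G(u), math.pi**2 * u**2 / 3.0)
```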
To estimate the order of magnitude of the acceleration needed for an appreciable effect, we demand $(\Delta -\Delta_0)/\omega_0 = \mu^2/(16 \pi^2)$ and therefore $G(u)=1$. This amounts to $a \approx c \omega_0$, which gives $a \approx 10^{24} \,\hbox{m/s}^2$ for a 1 eV transition. \begin{figure}[t] \epsfysize=8cm \hspace{1.5cm} \epsffile{shift.eps} \caption{ Plot of the function $G(u)$ defined in (\protect\ref{eq61}). The inset shows the behaviour at small values of $u$ (cf. (\protect\ref{eq62})).} \end{figure} Finally, we note the structural similarity of the equations (\ref{eq59}) and (\ref{eq60}) to the corresponding expressions obtained for the Lamb shift in a thermal heat bath \cite{Barton72,Knight72,Farley81}. They have in common the appearance of the factor in square brackets under the integral in (\ref{eq59}) with $T= a/(2\pi)$. A heuristic connection between the two physical situations has already been given in the Introduction. The differences are due to the properties of the scalar theory and the fact that we considered a two-level atom instead of a real atom. \section{Conclusion} In this paper, we have discussed two main points. First, we studied the influence of acceleration on the radiative energy shifts of atoms. The formalism can be applied to an atom on an arbitrary stationary trajectory. We considered here the special case of uniformly accelerated motion. The resulting expression for the Lamb shift of a uniformly accelerated two-level atom is given in Eqs. (\ref{eq59}) and (\ref{eq60}). Comparison with the corresponding formula (\ref{eq51}) for an atom at rest shows that the correction consists in the appearance of the factor in square brackets, which contains the thermal term. This modification is structurally the same as for the Lamb shift in a thermal heat bath. The second main goal of the paper has been the identification of the two physical mechanisms responsible for the radiative shifts.
Using the formalism of DDC, we were able to discuss the contributions of vacuum fluctuations and radiation reaction separately. The effect of radiation reaction is the same for a uniformly accelerated atom as for an atom at rest. It does not affect relative energy shifts. It is interesting to note that also in the case of other radiative phenomena, such as the Unruh effect and spontaneous emission, radiation reaction is not influenced by the acceleration \cite{Audretsch94}. The contribution of vacuum fluctuations, however, is modified by the thermal terms and accounts for the total effect.
\section{Introduction} One of the most prominent concerns of wireless communication systems is to ensure the security of the users against eavesdropping attacks. In recent years, wireless physical layer security has gained a lot of attention from the research community. Compared to traditional cryptographic techniques, physical layer security is performed with less complexity and is more convenient for the emerging ad-hoc and decentralized networks and the next wave of innovative systems known as the Internet of Things. Information theoretic security was first introduced by Shannon in \cite{shannon01}. Many years later, Wyner proposed a new model for the wiretap channel \cite{wyner01}. In his seminal work, Wyner presented the degraded wiretap channel, where a source exploits the structure of the medium channel to transmit a message reliably to the intended receiver while leaking asymptotically no information to the eavesdropper. Later works generalized Wyner's result to the case of non-degraded channels \cite{csiszar01}, Gaussian channels \cite{leung11}, and fading channels \cite{gopala01,liang01}. Recent research interest has been devoted to analyzing the secrecy capacity of multi-user systems. For the broadcast multi-user scenario, the secrecy capacity of parallel and fading channels, assuming perfect main CSI at the transmitter, was considered in~\cite{ashish01} and~\cite{ulukus1}. The case of imperfect main channel estimation was elaborated in~\cite{hyadi2}. For the multiple access scenario, the authors in~\cite{tekin01} and \cite{ulukus3} investigated the secrecy capacity of degraded wiretap channels. The problem of analyzing the secrecy capacity of multiple antenna systems has also been of great interest. The secrecy capacity for the multiple-input single-output (MISO) wiretap Gaussian channel was established in \cite{ashish04} and \cite{li2}.
Another work \cite{ashish02} characterized the secrecy capacity for the MISO case, with a multiple-antenna eavesdropper, when the main and the eavesdropping channels are known to all terminals. The secrecy capacity of the multiple-input multiple-output (MIMO) transmission with a multiple-antenna eavesdropper was considered in \cite{oggier11} and \cite{ashish11} when the channel matrices are fixed and known to all terminals. The secrecy capacity region of the Gaussian MIMO broadcast wiretap channel was derived in \cite{ulukus2}. Taking full advantage of the ability of the physical layer to secure wireless transmissions requires complete knowledge of the channel state information (CSI) at the transmitter (CSIT), which is difficult to obtain in practical scenarios. One way to overcome this challenge is by using feedback \cite{heath01,love11}. This is a natural setting, as CSI feedback is incorporated in most if not all communication standards. For the case of a single-user transmission, upper and lower bounds on the secrecy capacity were presented in \cite{rezki11} for block-fading channels with finite rate feedback. For the MIMO case, the works in \cite{lin11} and \cite{liu01} evaluated the impact of quantized channel information on the achievable secrecy rate, for multiple-antenna wiretap channels, using artificial noise. In this paper, we investigate the ergodic secrecy capacity of a broadcast wiretap channel where a source transmits to multiple legitimate receivers in the presence of an eavesdropper. In particular, we analyze the case of block-fading channels with limited CSI feedback. More specifically, we consider that the transmitter is unaware of the channel gains to the legitimate receivers and to the eavesdropper and is only provided a finite-rate CSI feedback. This feedback is sent by the legitimate receivers through error-free links with limited capacity.
Both the common message transmission, where the same message is broadcasted to all the legitimate receivers, and the independent messages transmission, where the source broadcasts multiple independent messages, are considered. Assuming an average power constraint at the transmitter, we provide upper and lower bounds on the ergodic secrecy capacity for the common message case, and upper and lower bounds on the secrecy sum-rate for the independent messages case. For the particular case of infinite feedback, we prove that our bounds coincide. The paper is organized as follows. Section~\ref{model} describes the system model. The main results along with the corresponding proofs are introduced in section~\ref{BCM} for the common message transmission and in section~\ref{BIM} for the independent messages case. Finally, selected simulation results are presented in section~\ref{NR}, while section~\ref{conclusion} concludes the paper. \textit{Notations:} Throughout the paper, we use the following notational conventions. The expectation operation is denoted by $\mathbb{E}[.]$, the modulus of a scalar $x$ is expressed as $|x|$, and we define $\{\nu\}^+{=}\max (0,\nu)$. The entropy of a discrete random variable $X$ is denoted by $H(X)$, and the mutual information between random variables $X$ and $Y$ is denoted by $I(X;Y)$. A sequence of length $n$ is denoted by $X^n$, $X(k)$ represents the $k$-th element of $X$, and $X(l,k)$ the $k$-th element of $X$ in the $l$-th fading block. \section{System Model}\label{model} We consider a broadcast wiretap channel where a transmitter~$\text{T}$ communicates with $K$ legitimate receivers $(\text{R}_1,\cdots,\text{R}_K)$ in the presence of an eavesdropper $\text{E}$ as depicted in Fig.~\ref{fig:model}. Each terminal is equipped with a single antenna for transmission and reception.
The respective received signals at each legitimate receiver $\text{R}_k; k\in\{1,\cdots,K\},$ and the eavesdropper, at fading block $l$, $l{=}1,{\cdots ,}L$, are given by \begin{equation}\label{Sys_Out} \begin{aligned} &Y_k(l,j)=h_k(l)X(l,j)+v_k(l,j)\\ &Y_\text{e}(l,j)\hspace{0.06cm}=h_\text{e}(l)X(l,j)\hspace{0.05cm}+w_\text{e}(l,j), \end{aligned} \end{equation} where $j{=}1,{\cdots ,}\kappa$, with $\kappa$ representing the length of each fading block, $X(l,j)$ is the $j$-th transmitted symbol in the $l$-th fading block, $h_k(l)\!\in\!\mathbb{C}$, $h_\text{e}(l)\!\in\!\mathbb{C}$ are the complex Gaussian channel gains corresponding to each legitimate channel and the eavesdropper's channel, respectively, and $v_k(l,j)\!\in\!\mathbb{C}$, $w_\text{e}(l,j)\!\in\!\mathbb{C}$ represent zero-mean, unit-variance circularly symmetric white Gaussian noises at $\text{R}_k$ and~$\text{E}$, respectively. We consider a block-fading channel where the channel gains remain constant within a fading block, i.e., $h_k(\kappa l)=h_k(\kappa l-1)=\cdots =h_k(\kappa l-\kappa +1)=h_k(l)$ and $h_\text{e}(\kappa l)=h_\text{e}(\kappa l-1)=\cdots =h_\text{e}(\kappa l-\kappa +1)=h_\text{e}(l)$. We assume that the channel encoding and decoding frames span a large number of fading blocks, i.e., $L$ is large, and that the channel gains change independently from one fading block to another. An average transmit power constraint is imposed at the transmitter such that \begin{equation}\label{pavg} \frac{1}{n}\sum_{t=1}^n\mathbb{E}\left[|X(t)|^2\right]\leq P_{\text{avg}}, \end{equation} with $n{=}\kappa L$, and where the expectation is over the input distribution. The channel gains $h_k$ and $h_\text{e}$ are independent, ergodic and stationary with bounded and continuous probability density functions (PDFs). In the rest of this paper, we denote $|h_k|^2$ and $|h_\text{e}|^2$ by $\gamma_k$ and $\gamma_\text{e}$, respectively. We assume perfect CSI at the receiving nodes.
That is, each legitimate receiver is instantaneously aware of its channel gain $h_k(l)$, and the eavesdropper knows $h_\text{e}(l)$, with $l{=}1,\cdots ,L$. The statistics of the main and the eavesdropping channels are available to all nodes. Further, we assume that the transmitter is not aware of the instantaneous channel realizations of either channel. However, each legitimate receiver provides the transmitter with $b$-bits CSI feedback through an error-free orthogonal channel with limited capacity. This feedback is transmitted at the beginning of each fading block and is also tracked by the other legitimate receivers. The eavesdropper knows all the channels and also tracks the feedback links, so that the feedback links are not a source of secrecy. The adopted feedback strategy consists of partitioning the main channel gain support into $Q$ intervals $[\tau_1, \tau_2), \cdots , [\tau_q, \tau_{q+1}), \cdots , [\tau_Q, \infty)$, where $Q{=}2^b$. That is, during each fading block, each legitimate receiver $\text{R}_k$ determines in which interval, $[\tau_q, \tau_{q+1})$ with $q{=}1,\cdots ,Q$, its channel gain $\gamma_k$ lies and feeds back the associated index $q$ to the transmitter. At the transmitter side, each received index $q$ corresponds to a power transmission strategy $P_q$ satisfying the average power constraint. We assume that all nodes are aware of the main channel gain partition intervals $[\tau_1, \tau_2), \cdots , [\tau_q, \tau_{q+1}), \cdots , [\tau_Q, \infty)$, and of the corresponding power transmission strategies $\{P_1, \cdots , P_Q\}$. \begin{figure}[t] \psfrag{t}[l][l][1.9]{$\text{T}$} \psfrag{e}[l][l][1.9]{$\text{E}$} \psfrag{r1}[l][l][1.9]{$\text{R}_1$} \psfrag{r2}[l][l][1.9]{$\text{R}_k$} \psfrag{r3}[l][l][1.9]{$\text{R}_K$} \psfrag{h1}[l][l][1.5]{$h_1$} \psfrag{h2}[l][l][1.5]{$h_k$} \psfrag{h3}[l][l][1.5]{$h_K$} \psfrag{g}[l][l][1.5]{\hspace{-0.1cm}$h_\text{e}$} \psfrag{feedback}[l][l][1.5]{\hspace{0.3cm}$b$-bits Feedback Link} \psfrag{l}[l][l][2]{$\left.
\rule{0pt}{2.2cm} \right\}$} \psfrag{transmitter}[l][l][1.7]{Transmitter} \psfrag{eavesdropper}[l][l][1.7]{Eavesdropper} \psfrag{receivers}[l][l][1.7]{$\hspace{-0.1cm}K$ Legitimate Receivers} \begin{center} \scalebox{0.4}{\includegraphics{system_v1.eps}} \end{center} \caption{Broadcast wiretap channel with limited CSI feedback.}\vspace{-0.2cm} \label{fig:model} \end{figure} The transmitter wishes to send some secret information to the legitimate receivers. In the case of common message transmission, a $(2^{n\mathcal{R}_\text{s}}, n)$ code consists of the following elements: \begin{itemize} \item A message set $\mathcal{W}=\left\lbrace 1,2,{\cdots},2^{n\mathcal{R}_\text{s}}\right\rbrace$ with the messages $W\in\mathcal{W}$ independent and uniformly distributed over $\mathcal{W}$; \item A stochastic encoder $f: \mathcal{W}\rightarrow\mathcal{X}^n$ that maps each message $w$ to a codeword $x^n\in\mathcal{X}^n$; \item A decoder at each legitimate receiver $g_k: \mathcal{Y}_k^n\rightarrow\mathcal{W}$ that maps a received sequence $y_k^n\in\mathcal{Y}_k^n$ to a message $\hat{w}_k\in\mathcal{W}$. \end{itemize} A rate $\mathcal{R}_\text{s}$ is an \textit{achievable secrecy rate} if there exists a sequence of $(2^{n\mathcal{R}_\text{s}}, n)$ codes such that both the average error probability at each legitimate receiver \begin{equation} P_{\text{e}_k}=\frac{1}{2^{n\mathcal{R}_\text{s}}}\sum_{w=1}^{2^{n\mathcal{R}_\text{s}}}\text{Pr}\left[W\neq\hat{W}_k\big|W=w\right], \end{equation} and the leakage rate at the eavesdropper \begin{equation} \frac{1}{n} I(W;Y_e^n,h_e^L,h_1^L,\cdots,h_K^L,q_1^L,\cdots,q_K^L), \end{equation} go to zero as $n$ goes to infinity. The \textit{secrecy capacity} $\mathcal{C}_\text{s}$ is defined as the maximum achievable secrecy rate, i.e., $\displaystyle{\mathcal{C}_\text{s}\triangleq\sup\mathcal{R}_\text{s},}$ where the supremum is over all achievable secrecy rates.
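The $b$-bits feedback strategy described in the system model can be sketched in a few lines of code (the threshold and power values below are purely illustrative, not part of the model):

```python
# b-bit CSI quantizer at a legitimate receiver: return q such that
# tau_q <= gamma_k < tau_{q+1}; gains below tau_1 are mapped to index 0.
import numpy as np

def feedback_index(gamma_k, tau):
    """tau = [tau_1, ..., tau_Q] (sorted, Q = 2**b); index 0 denotes the
    interval [0, tau_1) in which no positive rate is guaranteed."""
    return int(np.searchsorted(tau, gamma_k, side='right'))

b = 2
tau = [0.5, 1.0, 2.0, 4.0]       # Q = 2**b = 4 thresholds (assumed values)
P = [0.0, 0.6, 1.0, 1.4, 2.0]    # transmit power per index, P[0] for outage

q = feedback_index(1.7, tau)     # gain lies in [1.0, 2.0)
print(q, P[q])
```

At the transmitter side, the received index simply selects the power policy `P[q]`, in line with the mapping from indices to power transmission strategies described above.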
In the case of independent messages transmission, the achievable secrecy rate tuple $\left(\mathcal{R}_1,\mathcal{R}_2,\cdots,\mathcal{R}_K\right)$ can be analogously defined. \section{Broadcasting a Common Message}\label{BCM} In this section, we examine the case of common message transmission, where a unique confidential message is broadcasted to all the legitimate receivers in the presence of an eavesdropper. Taking into account the adopted system model, we present upper and lower bounds on the ergodic secrecy capacity. \subsection{Main Results}\label{MR1} \textit{Theorem 1:} The common message secrecy capacity, $\mathcal{C}_{\text{s}}$, of a block-fading broadcast channel, with an error-free $b$-bits CSI feedback sent by each legitimate receiver at the beginning of each fading block, is bounded by \begin{equation}\label{th1} \mathcal{C}_{\text{s}}^-\leq \mathcal{C}_{\text{s}}\leq \mathcal{C}_{\text{s}}^+, \end{equation} where\vspace{-0.3cm} \begin{subequations} \begin{align}\label{th1_a} \mathcal{C}_{\text{s}}^-{=}\hspace{-0.2cm}\min_{1\leq k\leq K}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=1}^Q\Theta_{\tau_q}^{\gamma_k}\underset{\gamma_\text{e}}{\mathbb{E}}\!\left[\left\lbrace\log\!\left(\frac{1{+}\tau_qP_q}{1{+}\gamma_\text{e}P_q}\right)\right\rbrace^{\hspace{-0.1cm}+}\right], \end{align} and\vspace{-0.3cm} \begin{align}\label{th1_b} \mathcal{C}_{\text{s}}^+&{=}\hspace{-0.1cm}\min_{1\leq k\leq K}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=0}^Q\Theta_{\tau_q}^{\gamma_k}\nonumber\\ &\qquad\times\underset{\gamma_\text{e},\gamma_k}{\mathbb{E}}\!\left[\left\lbrace\!\log\!\left(\frac{1{+}\gamma_kP_q}{1{+}\gamma_\text{e}P_q}\right)\!\right\rbrace^+\!\Bigg|\tau_q\!\leq\!\gamma_k\!<\!\tau_{q+1}\right], \end{align} where $Q{=}2^b$, $\{\tau_q~\!|~\!0{=}\tau_0{<}\tau_1{<}\cdots{<}\tau_Q\}_{q=1}^Q$ represent the reconstruction points describing the support of $\gamma_k$ with $\tau_{Q+1}{=}\infty$ for convenience, $\{P_q\}_{q=1}^Q$ are the power transmission strategies
satisfying the average power constraint, and $\displaystyle{\Theta_{\tau_q}^{\gamma_k}{=}\text{Pr}\left[\tau_q{\leq}\gamma_k{<}\tau_{q+1}\right]}$ for all $q\in\{1,{\cdots},Q\}$. \end{subequations}\vspace{0.2cm} \textit{Proof:} A detailed proof of Theorem 1 is provided in the following subsection.\vspace{0.2cm} \noindent It is worth mentioning that the main difference between the two bounds is that, for the achievable secrecy rate, the transmission scheme uses the feedback information to adapt both the rate and the power, in such a way that the transmission rate is fixed during each fading block. Also, Theorem~1 states that even with 1-bit feedback, sent by each legitimate receiver at the beginning of each fading block, a positive secrecy rate can still be achieved. \textit{Corollary 1:} The common message secrecy capacity of a block-fading broadcast channel with perfect main CSIT, and the average power constraint in \eqref{pavg}, is given by \begin{align}\label{cr1} \mathcal{C}_{\text{s}}=\min_{1\leq k\leq K}\max_{P(\gamma_k)}\underset{\gamma_k,\gamma_\text{e}}{\mathbb{E}}\!\left[\left\lbrace\log\!\left(\frac{1{+}\gamma_kP(\gamma_k)}{1{+}\gamma_\text{e}P(\gamma_k)}\right)\right\rbrace^+\right], \end{align} with $\mathbb{E} [P(\gamma_k)]\leq P_\text{avg}$. \vspace{0.3cm} \textit{Proof:} Corollary 1 results directly from the expressions of the achievable rate in \eqref{th1_a} and the upper bound in \eqref{th1_b}, by letting $\displaystyle{\Theta_{\tau_q}^{\gamma_k}=\frac{1}{Q}}$ and taking into consideration that, as $Q\rightarrow\infty$, the set of reconstruction points, $\{\tau_1,{\cdots},\tau_Q\}$, becomes infinite and each legitimate receiver $\text{R}_k$ is essentially forwarding $\gamma_k$ to the transmitter. $\hfill \square$ To the best of our knowledge, this result has not been reported in earlier works. For the special case of single-user transmission, the secrecy capacity in Corollary 1 coincides with the result in Theorem~2 from reference \cite{gopala01}.
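As a numerical illustration of the remark on 1-bit feedback, the achievable rate \eqref{th1_a} can be estimated by Monte-Carlo simulation. The sketch below assumes unit-mean Rayleigh fading (exponential $\gamma_k$, $\gamma_\text{e}$) and uses illustrative, non-optimized thresholds and powers; rates are in bits per channel use, and with i.i.d. users the minimum over $k$ reduces to a single user's statistics:

```python
# Monte-Carlo estimate of the achievable common-message secrecy rate for
# b = 1 feedback bit (illustrative parameters, not an optimized bound).
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
tau = [0.5, 1.5]           # Q = 2**1 = 2 thresholds (assumed values)
P = [1.0, 3.0]             # per-interval powers (assumed to meet P_avg)

gamma_k = rng.exponential(1.0, N)   # legitimate user's gain
gamma_e = rng.exponential(1.0, N)   # eavesdropper's gain

rate = 0.0
for q in range(len(tau)):
    upper = tau[q + 1] if q + 1 < len(tau) else np.inf
    theta = np.mean((gamma_k >= tau[q]) & (gamma_k < upper))  # Theta_{tau_q}
    inner = np.maximum(
        np.log2((1 + tau[q] * P[q]) / (1 + gamma_e * P[q])), 0.0)
    rate += theta * inner.mean()

print(rate)   # strictly positive: 1-bit feedback already buys secrecy
```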
\subsection{Ergodic Capacity Analysis}\label{ECA1} In this subsection, we prove the results on the ergodic secrecy capacity presented in the previous subsection. \subsubsection{Proof of Achievability in Theorem 1}\label{PLB_CM} Since the transmission is controlled by the feedback information, we consider that, during each fading block, if the channel gain of the receiver with the weakest main channel falls within the interval $[\tau_q,\tau_{q+1})$, $q\in\{1,{\cdots},Q\}$, the transmitter conveys the codewords at rate $\mathcal{R}_q=\log\left(1{+}\tau_qP_q\right).$ Rate $\mathcal{R}_q$ changes only periodically and is held constant over the duration of a fading block. This setup guarantees that when $\gamma_\text{e}{>}\tau_q$, the mutual information between the transmitter and the eavesdropper is upper bounded by $\mathcal{R}_q$. Otherwise, this mutual information is $\log\left(1{+}\gamma_\text{e}P_q\right)$. Besides, we adopt a probabilistic transmission model where the communication is constrained by the quality of the legitimate channels. Given the reconstruction points, $\tau_1{<}\tau_2{<}\cdots{<}\tau_Q{<}\tau_{Q+1}{=}\infty$, describing the support of each channel gain $\gamma_k; k\in\{1,\cdots ,K\}$, and since the channel gains of the $K$ receivers are independent, there are $M{=}Q^K$ different states for the received feedback information. Each of these states, $\mathcal{J}_m; m\in\{1,\cdots ,M\},$ represents one subchannel. The transmission scheme consists of transmitting an independent codeword, on each of the $M$ subchannels, with a fixed rate.
We define the following rates $$\displaystyle{\mathcal{R}_\text{s}^-=\sum_{m=1}^M\text{Pr}\left[\mathcal{J}_m\right]\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_m^\text{min}P_m}{1{+}\gamma_\text{e}P_m}\right)\right\rbrace^+\right],}$$ and $\displaystyle{\mathcal{R}_{\text{e},m}=\underset{\gamma_\text{e}}{\mathbb{E}}\left[\log\left(1{+}\gamma_\text{e}P_m\right)\right],}$ where $\tau_m^\text{min}$ is the quantized channel gain corresponding to the weakest receiver in state $\mathcal{J}_m$ and $P_m$ is the associated power policy satisfying the average power constraint. \textit{Codebook Generation:} We construct $M$ independent codebooks $\mathcal{C}_1$, $\cdots$, $\mathcal{C}_M$, one for each subchannel, built similarly to standard wiretap codes. Each codebook $\mathcal{C}_m$ is a $(n,2^{n\mathcal{R}_\text{s}^-})$ code with $2^{n(\mathcal{R}_\text{s}^-{+}\mathcal{R}_{\text{e},m})}$ codewords randomly partitioned into~$2^{n\mathcal{R}_\text{s}^-}$ bins. \textit{Encoding and Decoding:} Given a particular common message $w{\in}\{1,2,{\cdots},2^{n\mathcal{R}_\text{s}^-}\}$ to be transmitted, the encoder selects $M$ codewords, one for each subchannel. More specifically, if the message to be sent is $w$, then for each subchannel $m$, the encoder randomly selects one of the codewords $U_m^n$ from the $w$th bin in $\mathcal{C}_m$. During each fading block of length $\kappa$, the transmitter experiences one of the events $\mathcal{J}_m$. Depending on the encountered channel state, the transmitter broadcasts $\kappa\mathcal{R}_q$ information bits of $U_m^{n}$ using a Gaussian codebook. By the weak law of large numbers, when the total number of fading blocks $L$ is large, all binary sequences are transmitted with high probability. To decode, each legitimate receiver considers the observations corresponding to all $M$ subchannels.
Since $\log\left(1{+}\tau_m^\text{min}P_q\right){<}\log\left(1{+}\gamma_kP_q\right)$ holds for all fading blocks, the receivers can recover all codewords, with high probability, and hence recover message~$w$. The expression of $\mathcal{R}_\text{s}^-$ can then be reformulated as \begin{align} &\mathcal{R}_\text{s}^-=\sum_{m=1}^M\text{Pr}\left[\mathcal{J}_m\right]\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_m^\text{min}P_m}{1{+}\gamma_\text{e}P_m}\right)\right\rbrace^+\right]\\ &=\hspace{-0.2cm}\min_{1\leq k\leq K}\sum_{m=1}^M\text{Pr}\left[\mathcal{J}_m\right]\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_{k,m}P_m}{1{+}\gamma_\text{e}P_m}\right)\right\rbrace^+\right]\label{stp1}\\ &=\hspace{-0.2cm}\min_{1\leq k\leq K}\sum_{m=1}^M\sum_{q=1}^Q\text{Pr}\left[\mathcal{J}_m,\tau_{k,m}{=}\tau_q\right]\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_qP_q}{1{+}\gamma_\text{e}P_q}\!\right)\!\right\rbrace^{\hspace{-0.1cm}+}\right]\label{stp2}\\ &=\hspace{-0.2cm}\min_{1\leq k\leq K}\sum_{q=1}^Q\text{Pr}\left[\tau_q{\leq}\gamma_k{<}\tau_{q+1}\right]\underset{\gamma_\text{e}}{\mathbb{E}}\!\left[\!\left\lbrace\!\log\!\left(\!\frac{1{+}\tau_qP_q}{1{+}\gamma_\text{e}P_q}\!\right)\!\right\rbrace^{\hspace{-0.1cm}+}\right],\label{stp3} \end{align} where \eqref{stp1} holds since the logarithm function is monotonic and the sum and the expectation are taken over positive terms, \eqref{stp2} is obtained by noting that $\tau_m^\text{min}\in\{\tau_1,{\cdots},\tau_Q\}$ and applying the total probability theorem, and \eqref{stp3} comes from the fact that $\sum_{m=1}^M\text{Pr}\left[\mathcal{J}_m,\tau_{k,m}{=}\tau_q\right]{=}\text{Pr}\left[\tau_q{\leq}\gamma_k{<}\tau_{q+1}\right]$.
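The quantity $\tau_m^\text{min}$ is the minimum of the per-receiver quantized gains; because the floor-type quantizer defined by the reconstruction points is monotone, this equals quantizing the weakest receiver's gain directly. A minimal numerical check of this identity (hypothetical helper name; unit-mean Rayleigh gains assumed, not taken from the paper):

```python
import numpy as np

def quantize(g, taus):
    """Map each gain to the largest reconstruction point tau_q <= g,
    with tau_0 = 0 below tau_1 (floor-type quantizer)."""
    edges = np.concatenate(([0.0], taus))
    return edges[np.searchsorted(edges, g, side='right') - 1]

rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=(10_000, 3))   # K = 3 receivers
taus = np.array([0.5, 1.0, 2.0, 4.0])

# tau_m^min: quantizing the weakest receiver == min of the quantized gains
lhs = quantize(gains.min(axis=1), taus)
rhs = quantize(gains, taus).min(axis=1)
print(np.array_equal(lhs, rhs))  # True: the quantizer is monotone
```

The same monotonicity is what lets the minimum over $k$ move through the expectation in step \eqref{stp1}.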
Since each user gets to know the feedback information of the other legitimate receivers, our proof is also valid when the reconstruction points $\{\tau_q\}_{q=1}^Q$ and the transmission strategies $\{P_q\}_{q=1}^Q$, associated with each legitimate receiver, are different. That is, we can choose these quantization parameters to satisfy~\eqref{th1_a}. \textit{Secrecy Analysis:} We need to prove that the equivocation rate satisfies $\mathcal{R}_\text{e}\geq\mathcal{R}_\text{s}^--\epsilon$. Let $\Gamma^L{=}\left\{\gamma_1^L,\gamma_2^L,{\cdots},\gamma_K^L\right\}$ and $F^L{=}\left\{F_1^L,F_2^L,{\cdots},F_K^L\right\}$, with $F_k(l)\in\{\tau_1,{\cdots},\tau_Q\}$ being the feedback information sent by receiver $k$ in the $l$-th fading block. We have\vspace{-0.1cm} \begin{align} &n\mathcal{R}_\text{e}=H(W|Y_\text{e}^n,\gamma_\text{e}^L,\Gamma^L,F^L)\\ &\geq I(W;X^n|Y_\text{e}^n,\gamma_\text{e}^L,\Gamma^L,F^L)\\ &=H(X^n|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\Gamma^L\!,\!F^L){-}H(X^n|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\Gamma^L\!,\!F^L\!,\!W).\label{lb1_step0} \end{align} \vspace{-0.1cm} On one hand, we can write \allowdisplaybreaks \begin{align} &H(X^n|Y_\text{e}^n,\gamma_\text{e}^L,\Gamma^L,F^L)\nonumber\\ &=\sum_{l=1}^L\hspace{-0.1cm}\resizebox{.38\textwidth}{!}{$\displaystyle{H(X^\kappa(l)|Y_\text{e}^\kappa(l)\!,\!\gamma_\text{e}(l)\!,\!\gamma_1(l)\!,\!{\cdots}\!,\!\gamma_K(l)\!,\!F_1(l)\!,\!{\cdots}\!,\!F_K(l))}$}\label{lb1_step1}\\ &\geq\!\sum_{l\in\mathcal{D}_L}\hspace{-0.15cm}\resizebox{.38\textwidth}{!}{$\displaystyle{H(X^\kappa(l)|Y_\text{e}^\kappa(l)\!,\!\gamma_\text{e}(l)\!,\!\gamma_1(l)\!,\!{\cdots}\!,\!\gamma_K(l)\!,\!F_1(l)\!,\!{\cdots}\!,\!F_K(l))}$}\label{lb1_step2}\\ &\geq \!\sum_{l\in\mathcal{D}_L}\hspace{-0.15cm}\kappa\!\resizebox{.37\textwidth}{!}{$\displaystyle{\left(\min_{1\leq k\leq K}\sum_{q=1}^Q\Theta_{\tau_q}^{\gamma_k(l)}\left(\mathcal{R}_q{-}\log\left(1{+}\gamma_\text{e}(l)P_q\right)\right){-}\epsilon'\!\right)}$}\\ 
&=\!\sum_{l=1}^L\hspace{-0.1cm}\kappa\!\left(\!\min_{1\leq k\leq K}\sum_{q=1}^Q\hspace{-0.1cm}\Theta_{\tau_q}^{\gamma_k(l)}\!\left\lbrace \!\mathcal{R}_q\!-\!\log\!\left(1{+}\gamma_\text{e}(l)P_q\right)\!\right\rbrace^{+}\hspace{-0.1cm}\!-\!\epsilon'\!\right)\\ &=n\min_{1\leq k\leq K}\sum_{q=1}^Q\Theta_{\tau_q}^{\gamma_k}~\!\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace \mathcal{R}_q{-}\log\left(1{+}\gamma_\text{e}P_q\right)\right\rbrace^+\right]-n\epsilon'\label{lb1_step3}\\ &=n\mathcal{R}_\text{s}^- -n\epsilon',\label{lb1_step3p1} \end{align} where \eqref{lb1_step1} results from the memoryless property of the channel and the independence of the $X^\kappa(l)$'s, \eqref{lb1_step2} is obtained by removing all the terms corresponding to the fading blocks $l\not\in\mathcal{D}_L$, with $\mathcal{D}_L=\cup_{k\in\{1,{\cdots},K\}}\left\lbrace l\in\{1,{\cdots},L\}:F_k(l)>\gamma_\text{e}(l)\right\rbrace$, $\displaystyle{\Theta_{\tau_q}^{\gamma_k(l)}{=}\text{Pr}\left[\tau_q{\leq}\gamma_k(l){<}\tau_{q+1}\right]}$, and \eqref{lb1_step3} follows from the ergodicity of the channel as $L\rightarrow\infty$. On the other hand, using a list-decoding argument at the eavesdropper's side and applying Fano's inequality~\cite{gopala01}, $\frac{1}{n}H(X^n|Y_\text{e}^n,\gamma_\text{e}^L,\Gamma^L,F^L,W)$ vanishes as $n\rightarrow\infty$ and we can write \begin{equation}\label{lb1_step4} H(X^n|Y_\text{e}^n,\gamma_\text{e}^L,\Gamma^L,F^L,W)\leq n\epsilon''. \end{equation} Substituting \eqref{lb1_step3p1} and \eqref{lb1_step4} in \eqref{lb1_step0}, we get $\mathcal{R}_\text{e}\geq \mathcal{R}_\text{s}^- -\epsilon$, with $\epsilon=\epsilon'+\epsilon''$, where $\epsilon'$ and $\epsilon''$ can be made arbitrarily small.
This concludes the proof.$\hfill \square$ \subsubsection{Proof of the Upper Bound in Theorem 1}\label{PUB_CM} To establish the upper bound on the common message secrecy capacity in~(\ref{th1_b}), we start by supposing that the transmitter sends message $w$ to only one legitimate receiver $\text{R}_k$. Using the result in \cite{rezki11}, for single-user transmission with limited CSI feedback, the secrecy capacity of our system can be upper bounded as \begin{equation}\label{UB_CM_stp1} \mathcal{C}_{\text{s}}\!\leq\!\hspace{-0.2cm}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=0}^Q\Theta_{\tau_q}^{\gamma_k}\underset{\gamma_\text{e},\gamma_k}{\mathbb{E}}\!\left[\!\left\lbrace\!\log\!\left(\!\frac{1{+}\gamma_kP_q}{1{+}\gamma_\text{e}P_q}\!\right)\!\right\rbrace^{\hspace{-0.1cm}+}\!\Bigg|\tau_q\!\leq\!\gamma_k\!<\!\tau_{q+1}\!\right], \end{equation} with $\displaystyle{\Theta_{\tau_q}^{\gamma_k}{=}\text{Pr}\left[\tau_q{\leq}\gamma_k{<}\tau_{q+1}\right]}$. \noindent Since the choice of the receiver to transmit to is arbitrary, we tighten this upper bound by choosing the legitimate receiver $\text{R}_k$ that minimizes this quantity, yielding \begin{align}\label{UB_CM_stp2} \mathcal{C}_{\text{s}}^+&=\min_{1\leq k\leq K}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=0}^Q\Theta_{\tau_q}^{\gamma_k}\\ &\qquad\quad\times\underset{\gamma_\text{e},\gamma_k}{\mathbb{E}}\!\left[\!\left\lbrace\!\log\!\left(\!\frac{1{+}\gamma_kP_q}{1{+}\gamma_\text{e}P_q}\!\right)\!\right\rbrace^{\hspace{-0.1cm}+}\!\Bigg|\tau_q\!\leq\!\gamma_k\!<\!\tau_{q+1}\!\right].\qquad \square\nonumber \end{align} \section{Broadcasting Independent Messages}\label{BIM} In this section, we consider the independent messages case, where multiple confidential messages are broadcast to the legitimate receivers in the presence of an eavesdropper. Taking into account the adopted system model, we present upper and lower bounds on the ergodic secrecy sum-capacity.
\subsection{Main Results}\label{MR2} \textit{Theorem 2:} The secrecy sum-capacity, $\widetilde{\mathcal{C}}_{\text{s}}$, of a block fading broadcast channel, with error-free $b$-bit CSI feedback sent by each legitimate receiver at the beginning of each fading block, is bounded by \begin{equation}\label{th2} \widetilde{\mathcal{C}}_{\text{s}}^-\leq \widetilde{\mathcal{C}}_{\text{s}}\leq \widetilde{\mathcal{C}}_{\text{s}}^+, \end{equation} where\vspace{-0.3cm} \begin{subequations} \begin{align}\label{th2_a} \widetilde{\mathcal{C}}_{\text{s}}^-{=}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=1}^Q\Theta_{\tau_q}^{\gamma_\text{max}}\hspace{0.2cm}\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_qP_q}{1{+}\gamma_\text{e}P_q}\right)\right\rbrace^+\right], \end{align} and\vspace{-0.3cm} \begin{align}\label{th2_b} &\widetilde{\mathcal{C}}_{\text{s}}^+{=}\max_{\{\tau_q;P_q\}_{q=1}^Q}\sum_{q=0}^Q\Theta_{\tau_q}^{\gamma_\text{max}}\nonumber\\ &\times\underset{\gamma_\text{e},\gamma_\text{max}}{\mathbb{E}}\!\left[\left\lbrace\!\log\!\left(\frac{1{+}\gamma_\text{max}P_q}{1{+}\gamma_\text{e}P_q}\right)\!\right\rbrace^+\Bigg|\tau_q\!\leq\!\gamma_\text{max}\!<\!\tau_{q+1}\right], \end{align} where $\displaystyle{\gamma_\text{max}{=}\max_{1\leq k\leq K}\gamma_k}$, $Q{=}2^b$, $\{\tau_q~\!|~\!0{=}\tau_0{<}\tau_1{<}\cdots{<}\tau_Q\}_{q=1}^Q$ represent the reconstruction points describing the support of $\gamma_\text{max}$, with $\tau_{Q+1}{=}\infty$ for convenience, $\{P_q\}_{q=1}^Q$ are the power transmission strategies satisfying the average power constraint, and $\Theta_{\tau_q}^{\gamma_\text{max}}{=}\text{Pr}\left[\tau_q{\leq}\gamma_\text{max}{<}\tau_{q+1}\right]$ for all $q\in\{1,{\cdots},Q\}$. \end{subequations} \textit{Proof:} A detailed proof of Theorem 2 is provided in the following subsection.
Theorem~2 states that even with 1-bit feedback, sent by the strongest legitimate receiver at the beginning of each fading block, a positive secrecy rate can still be achieved.\vspace{0.2cm} \textit{Remarks:} \begin{itemize} \item The presented results, for both common message and independent messages transmissions, remain valid when multiple non-colluding eavesdroppers conduct the attack. When the eavesdroppers collude, the results can be extended by replacing the term $\gamma_\text{e}$ with the squared norm of the vector of channel gains of the colluding eavesdroppers. \item In the analyzed system, we assumed unit variance Gaussian noises at all receiving nodes. The results can be easily extended to a general setup where the noise variances are different. \end{itemize} \textit{Corollary 2:} The secrecy sum-capacity of a block fading broadcast channel with perfect main CSIT, and the average power constraint in \eqref{pavg}, is given by \begin{align}\label{cr2} \widetilde{\mathcal{C}}_{\text{s}}=\max_{P(\gamma_\text{max})}\underset{\gamma_\text{max},\gamma_\text{e}}{\mathbb{E}}\!\left[\left\lbrace\log\!\left(\frac{1{+}\gamma_\text{max}P(\gamma_\text{max})}{1{+}\gamma_\text{e}P(\gamma_\text{max})}\right)\right\rbrace^+\right], \end{align} with $\gamma_\text{max}{=}\max_{1\leq k\leq K}\gamma_k$, and $\mathbb{E}[P(\gamma_\text{max})]\leq P_\text{avg}$. \vspace{0.3cm} \textit{Proof:} Corollary 2 follows directly by reasoning similar to the proof of Corollary 1. $\hfill \square$ To the best of our knowledge, this result has not been reported in earlier works. For the special case of single-user transmission, the secrecy sum-capacity in Corollary 2 coincides with the result in Theorem~2 of \cite{gopala01}.
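Since the bounds of Theorem~2 depend on the channel gains only through $\gamma_\text{max}$, adding legitimate receivers can only increase the achievable sum-rate. The sketch below is illustrative only (it uses a fixed hypothetical quantizer and constant power in place of the optimized $\{\tau_q;P_q\}$, and unit-mean Rayleigh gains); it estimates \eqref{th2_a} by Monte Carlo and compares $K{=}3$ against $K{=}6$:

```python
import numpy as np

def lower_bound_sum(taus, powers, gamma_max, gamma_e):
    """Monte Carlo estimate of the sum-rate lower bound (th2_a):
    same form as (th1_a) but driven by gamma_max = max_k gamma_k."""
    edges = np.concatenate((taus, [np.inf]))
    rate = 0.0
    for q in range(len(taus)):
        theta = np.mean((gamma_max >= edges[q]) & (gamma_max < edges[q + 1]))
        inner = np.log((1.0 + taus[q] * powers[q]) / (1.0 + gamma_e * powers[q]))
        rate += theta * np.mean(np.maximum(inner, 0.0))
    return rate

rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=(100_000, 6))  # 6 i.i.d. Rayleigh receivers
gamma_e = rng.exponential(1.0, 100_000)
taus, powers = np.array([0.5, 1.0, 2.0, 4.0]), np.full(4, 5.0)

r3 = lower_bound_sum(taus, powers, gains[:, :3].max(axis=1), gamma_e)
r6 = lower_bound_sum(taus, powers, gains.max(axis=1), gamma_e)
print(r6 >= r3 > 0)  # True: more receivers, larger gamma_max, larger bound
```

This matches the behavior seen later in the numerical results, where the sum-rate grows with the number of legitimate receivers.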
\subsection{Ergodic Capacity Analysis}\label{ECA2} In this subsection, we prove the results on the ergodic secrecy sum-capacity. \subsubsection{Proof of Achievability in Theorem 2}\label{PLB_IM} The lower bound on the secrecy sum-capacity, presented in \eqref{th2_a}, is achieved using a time division multiplexing scheme that periodically selects one receiver to transmit to. More specifically, we consider that, during each fading block, the source transmits only to the legitimate receiver with the highest $\tau_q$; if more than one receiver attains it, one of them is chosen at random. Since we are transmitting to only one legitimate receiver at a time, the achievability scheme consists of using independent standard single-user Gaussian wiretap codebooks. During each fading block, the transmitter receives~$K$ feedback messages describing the CSI of the legitimate receivers. Since the channel gains of the $K$ receivers are independent, there are $M{=}Q^K$ different states for the received feedback information, as discussed in the proof of achievability of Theorem 1. Each of these states, $\mathcal{J}_m; m\in\{1,\cdots ,M\},$ represents one subchannel. The transmission scheme consists of sending an independent message, intended for the receiver with the highest $\tau_q$, on each of the $M$ subchannels, with a fixed rate. Let $\tau_m^\text{max}$ be the maximum received feedback information on channel $m$.
The overall achievable secrecy sum-rate can be written as \begin{align} &\widetilde{\mathcal{R}}_\text{s}^-=\sum_{m=1}^M\text{Pr}[\mathcal{J}_m]~\!\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_m^\text{max}P(\tau_m^\text{max})}{1{+}\gamma_\text{e}P(\tau_m^\text{max})}\right)\right\rbrace^+\right]\label{Rs_IM_stp1}\\ &=\sum_{q=1}^Q\text{Pr}[\tau_q{\leq}\gamma _\text{max}{<}\tau_{q+1}]~\!\underset{\gamma_\text{e}}{\mathbb{E}}\left[\left\lbrace\log\left(\frac{1{+}\tau_qP_q}{1{+}\gamma_\text{e}P_q}\right)\right\rbrace^+\right],\label{Rs_IM_stp2} \end{align} where \eqref{Rs_IM_stp1} is obtained by using a Gaussian codebook with power $P(\tau_m^\text{max})$, satisfying the average power constraint, on each subchannel $m$ \cite{gopala01}, and \eqref{Rs_IM_stp2} follows by using the fact that $\tau_m^\text{max}\in\{\tau_1,\cdots ,\tau_Q\}$ and rewriting the summation over these indices. Also, we note that the probability of adapting the transmission with $\tau_q$ corresponds to the probability of having $\tau_q{\leq}\gamma _\text{max}{<}\tau_{q+1}$, with $\gamma_\text{max}=\max_{1\leq k\leq K}\gamma_k$. Maximizing over the main channel gain reconstruction points $\tau_q$ and the associated power transmission strategies $P_q$, for each $q\in\{1,\cdots ,Q\}$, concludes the proof.$\hfill \square$ \subsubsection{Proof of the Upper Bound in Theorem 2}\label{PUB_IM} To prove that $\widetilde{\mathcal{C}}_{\text{s}}^+$, presented in \eqref{th2_b}, is an upper bound on the secrecy sum-capacity, we consider a new genie-aided channel whose capacity upper bounds the capacity of the $K$-receivers channel with limited CSI feedback. The new channel has only one receiver that observes the output of the strongest main channel. 
The output signal of the genie-aided receiver is given by $Y_\text{max}(t)=h_\text{max}(t)X(t)+v(t)$, at time instant $t$, with $h_\text{max}$ being the channel gain of the best legitimate channel, i.e., $|h_\text{max}|^2{=}\gamma_\text{max}$ and $\gamma_\text{max}{=}\max_{1\leq k\leq K}\gamma_k$. Let $\tau_q; q\in\{1,{\cdots},Q\}$ be the feedback information sent by the new receiver to the transmitter about its channel gain, i.e., $\tau_q$ is fed back when $\tau_q{\leq}\gamma_\text{max}{<}\tau_{q+1}$. First, we need to prove that the secrecy capacity of this new channel upper bounds the secrecy sum-capacity of the $K$-receivers channel with limited CSI. To this end, it is sufficient to show that if a secrecy rate point $(\mathcal{R}_1,\mathcal{R}_2,{\cdots},\mathcal{R}_K)$ is achievable on the $K$-receivers channel with limited CSI feedback, then a secrecy sum-rate $\sum_{k=1}^K\mathcal{R}_k$ is achievable on the new channel. Let $(W_1,\!W_2,\!{\cdots},\!W_K)$ be the independent transmitted messages corresponding to the rates $(\mathcal{R}_1,\!\mathcal{R}_2,\!{\cdots},\!\mathcal{R}_K)$, and $(\hat{W}_1,\hat{W}_2,{\cdots},\hat{W}_K)$ the decoded messages. Thus, for any $\epsilon{>}0$ and $n$ large enough, there exists a code of length $n$ such that $\text{Pr}[\hat{W}_k\!\neq\!W_k]\!\leq\!\epsilon$ at each of the $K$ receivers, and \begin{equation} \frac{1}{n}H\!(W_k|W_1\!,\!\cdots\!,W_{k\!-\!1}\!,\!W_{k\!+\!1}\!,\!\cdots\!,\!W_K\!,\!Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\!F^L)\!\geq\!\mathcal{R}_k\!-\!\epsilon ,\label{sc23} \end{equation} with $F^L{=}\{F_1^L,F_2^L,{\cdots},F_K^L\}$, and $F_k(l)~\!{\in}~\!\{\tau_1,{\cdots},\tau_Q\}$ is the feedback information sent by receiver $k$ in the $l$-th fading block. Now, we consider the transmission of message $W{=}(W_1\!,\!W_2\!,\!\cdots\!,\!W_K)$ to the genie-aided receiver~using the same encoding scheme as for the $K$-receivers case.
Adopting a decoding scheme similar to the one used at each~of the $K$ legitimate receivers, it is clear that the genie-aided~receiver can decode message $W$ with a negligible probability of error, i.e., $\text{Pr}(\hat{W}\!\neq\!W)\!\leq\!\epsilon$. For the secrecy condition, we have \begin{align} &\frac{1}{n}H\!(W|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L,\!F_\text{max}^L)\nonumber\\ &=\frac{1}{n}H\!(W_1\!,\!W_2\!,\!\cdots\!,\!W_K|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L,\!F_\text{max}^L)\\ &\geq\hspace{-0.1cm}\sum_{k=1}^K\!\frac{1}{n}H\!(W_k|W_1,\!\cdots\!,W_{k\!-\!1}\!,\!W_{k\!+\!1}\!,\!\cdots\!,\!W_K\!,\!Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L,\!F_\text{max}^L)\\ &\geq\hspace{-0.1cm}\sum_{k=1}^K\!\frac{1}{n}H\!(W_k|W_1,\!\cdots\!,W_{k\!-\!1}\!,\!W_{k\!+\!1}\!,\!\cdots\!,\!W_K\!,\!Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L,\!F^L)\label{UB_IM_v0} \end{align}\vspace{-0.4cm} \begin{equation} \geq\hspace{-0.1cm}\sum_{k=1}^K\mathcal{R}_k{-}K\epsilon ,\label{UB_IM_v01}\hspace{5.5cm} \end{equation} where $F_\text{max}^L{=}\{F_\text{max}(1),{\cdots},F_\text{max}(L)\}$ and $F_\text{max}(l)$ is the feedback information sent by the genie-aided receiver in the $l$-th fading block, \eqref{UB_IM_v0} follows from the fact that $F_\text{max}{\in}\{F_1,{\cdots},F_K\}$ and that conditioning reduces the entropy, and where \eqref{UB_IM_v01} follows from the secrecy constraint \eqref{sc23}. Now, we need to prove that $\widetilde{\mathcal{C}}_\text{s}^+$ upper bounds the secrecy capacity of the genie-aided channel. Let $\widetilde{\mathcal{R}}_\text{e}$ be the equivocation rate of the new channel. 
We have \allowdisplaybreaks \begin{align} &n\widetilde{\mathcal{R}}_\text{e}=H(W|Y_\text{e}^n,\gamma_\text{e}^L,\gamma_\text{max}^L,F_\text{max}^L)\label{Re_step1}\\ &=I(W\!;\!Y_\text{max}^n|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L\!,\!F_\text{max}^L){+}H(W|Y_\text{max}^n\!,\!Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L\!,\!F_\text{max}^L)\\ &\leq I(W;Y_\text{max}^n|Y_\text{e}^n,\gamma_\text{e}^L,\gamma_\text{max}^L,F_\text{max}^L){+} n\epsilon \label{Re_step2}\\ &=\sum_{l=1}^L\sum_{k=1}^\kappa\hspace{-0.1cm}I(W\!;\!Y_\text{max}(l,k)|Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L,F_\text{max}^L,Y_\text{max}^{\kappa(l\!-\!1)\!+\!(k\!-\!1)}){+}n\epsilon \\ &=\sum_{l=1}^L\sum_{k=1}^\kappa H(Y_\text{max}(l,k)|Y_\text{e}^n,\gamma_\text{e}^L,\gamma_\text{max}^L,F_\text{max}^L,Y_\text{max}^{\kappa(l\!-\!1)\!+\!(k\!-\!1)})\nonumber\\ &-H(Y_\text{max}(l,k)|W\!,\!Y_\text{e}^n\!,\!\gamma_\text{e}^L\!,\!\gamma_\text{max}^L\!,\!F_\text{max}^L\!,\!Y_\text{max}^{\kappa(l\!-\!1)\!+\!(k\!-\!1)}){+} n\epsilon \\ &\leq \sum_{l=1}^L\sum_{k=1}^\kappa H(Y_\text{max}(l,k)|Y_\text{e}(l,k),\gamma_\text{e}(l),\gamma_\text{max}(l),F_\text{max}^l)\\ &-H(Y_\text{max}(l,k)|W\!,\!X(l,k)\!,\!Y_\text{e}^n,\gamma_\text{e}^L\!,\!\gamma_\text{max}^L\!,\!F_\text{max}^L\!,\!Y_\text{max}^{\kappa(l\!-\!1)\!+\!(k\!-\!1)}){+} n\epsilon\nonumber\\ &=\sum_{l=1}^L\sum_{k=1}^\kappa H(Y_\text{max}(l,k)|Y_\text{e}(l,k),\gamma_\text{e}(l),\gamma_\text{max}(l),F_\text{max}^l)\\ &-H(Y_\text{max}(l,k)|X(l,k),Y_\text{e}(l,k),\gamma_\text{e}(l),\gamma_\text{max}(l),F_\text{max}^l){+} n\epsilon\nonumber\\ &= \sum_{l=1}^L\sum_{k=1}^\kappa\hspace{-0.1cm}I(X(l\!,\!k);\!Y_\text{max}(l\!,\!k)|Y_\text{e}(l,k)\!,\!\gamma_\text{e}(l)\!,\!\gamma_\text{max}(l)\!,\!F_\text{max}^l)\!+\!n\epsilon \end{align}\vspace{-0.8cm} \begin{align} &\leq \sum_{l=1}^L\sum_{k=1}^\kappa \left\lbrace I(X(l,k);Y_\text{max}(l,k)|\gamma_\text{max}(l),F_\text{max}^l)\right.\hspace{2cm}\nonumber\\ 
&\left.\hspace{1.6cm}-I(X(l,k);Y_\text{e}(l,k)|\gamma_\text{e}(l),F_\text{max}^l)\right\rbrace^+ {+} n\epsilon \label{Re_step3}\\ &= \sum_{l=1}^L\kappa \left\lbrace I(X(l);Y_\text{max}(l)|\gamma_\text{max}(l),F_\text{max}^l)\right.\nonumber\\ &\left.\hspace{1.6cm}-I(X(l);Y_\text{e}(l)|\gamma_\text{e}(l),F_\text{max}^l)\right\rbrace^+ {+} n\epsilon ,\label{Re_step3p1} \end{align} where inequality \eqref{Re_step2} follows from the fact that $H(W|Y_\text{max}^n,Y_\text{e}^n,\gamma_\text{e}^L,\gamma_\text{max}^L,F_\text{max}^L){\leq}H(W|Y_\text{max}^n,\gamma_\text{max}^L,F_\text{max}^L),$ and Fano's inequality $H(W|Y_\text{max}^n,\gamma_\text{max}^L,F_\text{max}^L){\leq}n\epsilon ,$ and \eqref{Re_step3} holds true by selecting the appropriate value for the noise correlation to form the Markov chain $X(l){\rightarrow}Y_\text{max}(l){\rightarrow}Y_\text{e}(l)$ if $\gamma_\text{max}(l){>}\gamma_\text{e}(l)$ or $X(l){\rightarrow}Y_\text{e}(l){\rightarrow}Y_\text{max}(l)$ if $\gamma_\text{max}(l){\leq}\gamma_\text{e}(l)$, as explained in \cite{liang01}. The right-hand side of \eqref{Re_step3p1} is maximized by a Gaussian input. 
That is, taking $X(l){\sim}\mathcal{CN}\left(0,\omega_l^{1/2}(F_\text{max}^l)\right)$, with the power policy $\omega_l(F_\text{max}^l)$ satisfying the average power constraint, we can write \begin{align} &n\widetilde{\mathcal{R}}_\text{e}\leq\kappa\sum_{l=1}^L\mathbb{E}\!\left[\!\left\lbrace\log\left(\frac{1{+}\gamma_\text{max}(l)\omega_l(F_\text{max}^l)}{1{+}\gamma_\text{e}(l)\omega_l(F_\text{max}^l)}\right)\right\rbrace^+\right]{+}n\epsilon \\ &{=}\kappa\sum_{l=1}^L\mathbb{E}\!\left[\!\mathbb{E}\!\resizebox{.35\textwidth}{!}{$\displaystyle{\left[\!\left\lbrace\!\log\!\left(\!\frac{1{+}\gamma_\text{max}(l)\omega_l(F_\text{max}^l)}{1{+}\gamma_\text{e}(l)\omega_l(F_\text{max}^l)}\!\right)\!\right\rbrace^{\hspace{-0.15cm}+}\!\bigg|F_\text{max}(l),\gamma_\text{max}(l),\gamma_\text{e}(l)\right]}$}\right]\hspace{-0.15cm}{+}n\epsilon \\ &{\leq}\kappa\sum_{l=1}^L\mathbb{E}\!\left[\resizebox{.36\textwidth}{!}{$\displaystyle{\!\left\lbrace\log\!\left(\!\frac{1{+}\gamma_\text{max}(l)\mathbb{E}\!\left[\omega_l\scriptstyle{(F_\text{max}^l)|F_\text{max}(l),\gamma_\text{max}(l),\gamma_\text{e}(l)}\right]}{1{+}\gamma_\text{e}(l)\mathbb{E}\!\left[\omega_l\scriptstyle{(F_\text{max}^l)|F_\text{max}(l),\gamma_\text{max}(l),\gamma_\text{e}(l)}\right]}\right)\!\right\rbrace^{\hspace{-0.15cm}+}}$}\right]\hspace{-0.15cm}{+}n\epsilon \label{Re_step4} \\ &=\kappa\sum_{l=1}^L\mathbb{E}\!\left[\left\lbrace\log\left(\frac{1{+}\gamma_\text{max}(l)\Omega_l(F_\text{max}(l))}{1{+}\gamma_\text{e}(l)\Omega_l(F_\text{max}(l))}\right)\!\right\rbrace^{\hspace{-0.15cm}+}\right]\hspace{-0.1cm}{+}n\epsilon \label{Re_step5} \\ &=\kappa\sum_{l=1}^L\mathbb{E}\left[\left\lbrace\log\left(\frac{1{+}\gamma_\text{max}\Omega_l(F_\text{max})}{1{+}\gamma_\text{e}\Omega_l(F_\text{max})}\right)\right\rbrace^+\right]\!+\!n\epsilon ,\label{Re_step6} \end{align} where \eqref{Re_step4} is obtained using Jensen's inequality, $\Omega_l(F_\text{max}(l))$ in \eqref{Re_step5} is defined as 
$\displaystyle{\Omega_l(F_\text{max}(l)){=}\mathbb{E}\left[\omega_l(F_\text{max}^l)|F_\text{max}(l),\gamma_\text{max}(l),\gamma_\text{e}(l)\right],}$ and where \eqref{Re_step6} follows from the ergodicity and the stationarity of the channel gains, i.e., the expectation in \eqref{Re_step5} does not depend on the block fading index. Thus, we have \begin{align} &\widetilde{\mathcal{R}}_\text{e}\leq\frac{1}{L}\sum_{l=1}^L\mathbb{E}\!\left[\left\lbrace\log\left(\frac{1{+}\gamma_\text{max}\Omega_l(F_\text{max})}{1{+}\gamma_\text{e}\Omega_l(F_\text{max})}\right)\right\rbrace^+\right]\!+\!\epsilon\\ &\leq\mathbb{E}\left[\left\lbrace\log\left(\frac{1{+}\gamma_\text{max}\Omega(F_\text{max})}{1{+}\gamma_\text{e}\Omega(F_\text{max})}\right)\right\rbrace^+\right]+\epsilon ,\label{Re_step7} \end{align} where \eqref{Re_step7} comes from applying Jensen's inequality once again, with $\displaystyle{\Omega(F_\text{max}){=}\frac{1}{L}\sum_{l=1}^L\Omega_l(F_\text{max}).}$ Maximizing over the main channel gain reconstruction points $\tau_q$ and the associated power transmission strategies $P_q$, for each $q\in\{1,\cdots ,Q\}$, concludes the proof.$\hfill \square$ \section{Numerical Results}\label{NR} In this section, we provide selected simulation results for the case of Rayleigh fading channels with $\mathbb{E}\left[\gamma_\text{e}\right]{=}\mathbb{E}\left[\gamma_k\right]{=}1$; $k~\!{\in}~\!\{1,{\cdots},K\}$. Figure~\ref{fig:fig1} illustrates the common message achievable secrecy rate $\mathcal{C}_\text{s}^-$, in nats per channel use (npcu), with $K{=}3$ and various feedback resolutions, $b{=}6,4,2,1$. The secrecy capacity $\mathcal{C}_\text{s}$, from Corollary 1, is also presented as a benchmark. It represents the secrecy capacity with full main CSI at the transmitter. We can see that, as the capacity of the feedback link grows, i.e., the number of bits $b$ increases, the achievable rate grows toward the secrecy capacity $\mathcal{C}_\text{s}$.
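The trend of the achievable rate approaching the perfect-CSIT capacity as $b$ grows can be reproduced with nested quantile thresholds, for which $\Theta_{\tau_q}^{\gamma_k}{=}1/Q$ as in the proof of Corollary~1. The sketch below is an illustration under stated assumptions only ($K{=}1$, unit-mean Rayleigh fading so that $F^{-1}(p){=}-\ln(1-p)$, constant power instead of the optimized strategies):

```python
import numpy as np

def rate_b(b, P, gamma_k, gamma_e):
    """Lower bound (th1_a) for K = 1 with Q = 2^b nested quantile
    thresholds (each bin has probability 1/Q, as in the proof of
    Corollary 1) and constant power P. Rayleigh fading assumed."""
    Q = 2 ** b
    taus = -np.log(1.0 - np.arange(Q) / Q)      # tau_1 = 0, ..., tau_Q
    edges = np.concatenate((taus, [np.inf]))
    rate = 0.0
    for q in range(Q):
        theta = np.mean((gamma_k >= edges[q]) & (gamma_k < edges[q + 1]))
        inner = np.maximum(np.log((1 + taus[q] * P) / (1 + gamma_e * P)), 0.0)
        rate += theta * np.mean(inner)
    return rate

rng = np.random.default_rng(0)
gamma_k = rng.exponential(1.0, 200_000)
gamma_e = rng.exponential(1.0, 200_000)
rates = [rate_b(b, 5.0, gamma_k, gamma_e) for b in (1, 2, 4, 6)]
print(all(r2 >= r1 for r1, r2 in zip(rates, rates[1:])))  # rate grows with b
```

Because the quantile grids are nested, refining the quantizer can only raise the quantized gain of each sample, so the estimated rate is nondecreasing in $b$, mirroring the figure.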
The independent messages transmission case is illustrated in Figure~\ref{fig:fig2}. Two scenarios are considered: the transmission of three independent messages to three legitimate receivers, $K{=}3$, and the transmission of six independent messages with $K{=}6$. Both the achievable secrecy sum-rate $\widetilde{\mathcal{C}}_\text{s}^-$, with 4-bit CSI feedback, and the secrecy capacity $\widetilde{\mathcal{C}}_\text{s}$, with perfect main CSI, are depicted. The curves are presented in npcu. From this figure, we can see that the secrecy throughput of the system, when broadcasting multiple messages, increases with the number of legitimate receivers $K$. \begin{figure}[t] \psfrag{}[l][l][1]{} \psfrag{rate}[l][l][1.3]{\hspace{-1cm}Secrecy Rate (npcu)} \psfrag{snr}[l][l][1.3]{SNR (dB)} \psfrag{Secrecy Capacity with Perfect Main CSI--------}[l][l][1.2]{Secrecy Capacity $\mathcal{C}_\text{s}$ with Perfect Main CSI} \psfrag{Secrecy Rate with 6-bits CSI Feedback}[l][l][1.2]{Secrecy Rate $\mathcal{C}_\text{s}^-$ with 6-bits CSI Feedback} \psfrag{Secrecy Rate with 4-bits CSI Feedback}[l][l][1.2]{Secrecy Rate $\mathcal{C}_\text{s}^-$ with 4-bits CSI Feedback} \psfrag{Secrecy Rate with 2-bits CSI Feedback}[l][l][1.2]{Secrecy Rate $\mathcal{C}_\text{s}^-$ with 2-bits CSI Feedback} \psfrag{Secrecy Rate with 1-bit CSI Feedback}[l][l][1.2]{Secrecy Rate $\mathcal{C}_\text{s}^-$ with 1-bit CSI Feedback} \vspace{-0.7cm} \begin{center}% \scalebox{0.55}{\includegraphics{CM_Pavg_Q.eps}} \end{center} \vspace{-0.7cm} \caption{Common message secrecy rate, for Rayleigh fading channels, with $K{=}3$ and various feedback resolutions, $b{=}6,4,2,1$.} \label{fig:fig1} \end{figure} \begin{figure}[ht] \psfrag{}[l][l][1]{} \psfrag{rate}[l][l][1.3]{\hspace{-1.3cm}Secrecy Sum-Rate (npcu)} \psfrag{snr}[l][l][1.3]{SNR (dB)} \psfrag{Secrecy Capacity with Perfect Main CSI and k=3--------}[l][l][1.15]{Secrecy Capacity $\widetilde{\mathcal{C}}_\text{s}$ with Perfect Main CSI and $K{=}3$} \psfrag{Secrecy
Capacity with Perfect Main CSI and k=6}[l][l][1.15]{Secrecy Capacity $\widetilde{\mathcal{C}}_\text{s}$ with Perfect Main CSI and $K{=}6$} \psfrag{Secrecy Rate with 4-bits CSI Feedback and k=3}[l][l][1.15]{Secrecy Rate $\widetilde{\mathcal{C}}_\text{s}^-$ with 4-bits CSI Feedback and $K{=}3$} \psfrag{Secrecy Rate with 4-bits CSI Feedback and k=6}[l][l][1.15]{Secrecy Rate $\widetilde{\mathcal{C}}_\text{s}^-$ with 4-bits CSI Feedback and $K{=}6$} \vspace{-0.55cm} \begin{center}% \scalebox{0.55}{\includegraphics{IM_Pavg_K.eps}} \end{center} \vspace{-0.7cm} \caption{Independent messages secrecy sum-rate, for Rayleigh fading channels, with $b{=}4$ and two different values of the total number of legitimate receivers, $K{=}3$ and $K{=}6$.}\vspace{-0.4cm} \label{fig:fig2} \end{figure} \section{Conclusion}\label{conclusion} In this work, we analyzed the ergodic secrecy capacity of a broadcast block-fading wiretap channel with limited CSI feedback. Assuming full CSI on the receivers' side and an average power constraint at the transmitter, we presented lower and upper bounds on the ergodic secrecy capacity and the sum-capacity when the feedback link is limited to $b$ bits per fading block. The feedback bits are provided to the transmitter by each legitimate receiver, at the beginning of each coherence block, through error-free public links with limited capacity. The obtained results show that the secrecy rate when broadcasting a common message is limited by the legitimate receiver having, on average, the worst main channel link, i.e., the legitimate receiver with the lowest average SNR. For the independent messages case, we proved that the achievable secrecy sum-rate is governed by the legitimate receiver with the best instantaneous channel link. Furthermore, we showed that the presented bounds coincide, asymptotically, as the capacity of the feedback links becomes large, i.e.,
$b\rightarrow\infty$; hence, fully characterizing the secrecy capacity and the sum-capacity in this case. As an extension of this work, it would be of interest to examine the behavior of the secrecy capacity bounds in the low and the high SNR regimes. It would also be interesting to look at the scaling laws of the system as the number of legitimate receivers increases. \bibliographystyle{IEEEtran}
\section{Introduction} \label{PSPINTRO} Parker Solar Probe \cite[{\emph{PSP}};][]{2016SSRv..204....7F,doi:10.1063/PT.3.5120} is flying closer to the Sun than any previous spacecraft (S/C). Launched on 12 Aug. 2018, {\emph{PSP}} had, as of 6 Dec. 2022, completed 14 of its 24 scheduled perihelion encounters (Encs.)\footnote{The PSP solar Enc. is defined as the orbit section where the S/C is below 0.25~AU from the Sun's center.} around the Sun over the 7-year nominal mission duration. The S/C flew by Venus for the fifth time on 16 Oct. 2021, followed by the closest perihelion of 13.28 solar radii (\rst) on 21 Nov. 2021. The S/C will remain on the same orbit for a total of seven solar Encs. After Enc.~16, {\emph{PSP}} is scheduled to fly by Venus for the sixth time to lower the perihelion to 11.44~$R_\odot$ for another five orbits. The seventh and last Venus gravity assist (VGA) of the primary mission is scheduled for 6 Nov. 2024. This gravity assist will set {\emph{PSP}} for its last three orbits of the primary mission. The perihelia of orbits 22, 23, and 24 of 9.86~$R_\odot$ will be on 24 Dec. 2024, 22 Mar. 2025, and 19 Jun. 2025, respectively. The mission’s overarching science objective is to determine the structure and dynamics of the Sun’s coronal magnetic field and to understand how the corona is heated, how the solar wind is accelerated, and how energetic particles are produced and their distributions evolve. The {\emph{PSP}} mission targets processes and dynamics that characterize the Sun’s expanding corona and solar wind, enabling the measurement of coronal conditions leading to the nascent solar wind and eruptive transients that create space weather. {\emph{PSP}} is sampling the solar corona and solar wind to reveal how the corona is heated and how the solar wind and solar energetic particles (SEPs) are accelerated.
To achieve this, {\emph{PSP}} measurements will be used to address the following three science goals: (1) Trace the flow of energy that heats the solar corona and accelerates the solar wind; (2) Determine the structure and dynamics of the plasma and magnetic fields at the sources of the solar wind; and (3) Explore mechanisms that accelerate and transport energetic particles. Understanding these phenomena has been a top science goal for over six decades. {\emph{PSP}} is primarily an exploration mission that is flying through one of the last unvisited and most challenging regions of space within our solar system, and the potential for discovery is huge. The returned science data is a treasure trove yielding insights into the nature of the young solar wind and its evolution as it propagates away from the Sun. Numerous discoveries have been made over the first four years of the prime mission, most notably the ubiquitous magnetic field switchbacks closer to the Sun, the dust-free zone (DFZ), novel kinetic aspects in the young solar wind, excessive tangential flows beyond the Alfv\'en critical point, dust $\beta$-streams resulting from collisions of the Geminids meteoroid stream with the zodiacal dust cloud (ZDC), and the shortest wavelength thermal emission from the Venusian surface. Since 28 Apr. 2021 ({\emph{i.e.}}, perihelion of Enc.~8), the S/C has been sampling the solar wind plasma within the magnetically-dominated corona, {\emph{i.e.}}, sub-Alfv\'enic solar wind, marking the beginning of a critical phase of the {\emph{PSP}} mission. In this region solar wind physics changes because of the multi-directionality of wave propagation (waves moving sunward and anti-sunward can affect the local dynamics including the turbulent evolution, heating and acceleration of the plasma). 
This is also the region where velocity gradients between the fast and slow speed streams develop, forming the initial conditions for the formation, further out, of corotating interaction regions (CIRs). The science data return ({\emph{i.e.}}, data volume) from {\emph{PSP}} exceeded the pre-launch estimates by a factor of over four. Since the second orbit, orbital coverage extended from the nominal perihelion Enc. to over 70\% of the following orbit duration. We expect this to continue throughout the mission. The {\emph{PSP}} team is also looking into ways to extend the orbital coverage to the whole orbit duration. This will allow sampling the solar wind and SEPs over a large range of heliodistances. The {\emph{PSP}} science payload comprises four instrument suites: \begin{enumerate} \item{}FIELDS investigation makes measurements of the electric and magnetic fields and waves, S/C floating potential, density fluctuations, and radio emissions over 20 MHz of bandwidth and 140 dB of dynamic range. It comprises \begin{itemize} \item Four electric antennas (V1-V4) mounted at the base of the S/C thermal protection system (TPS). The electric preamplifiers connected to each antenna provide outputs to the Radio Frequency Spectrometer (RFS), the Time Domain Sampler (TDS), and the Antenna Electronics Board (AEB) and Digital Fields Board (DFB). The V1-V4 antennas are exposed to the full solar environment. \item A fifth antenna (V5) provides low (LF) and medium (MF) frequency outputs. \item Two fluxgate magnetometers (MAGs) provide data with bandwidth of $\sim140$~Hz and at 292.97 Samples/sec over a dynamic range of $\pm65,536$~nT with a resolution of 16 bits. \item A search coil magnetometer (SCM) measures the AC magnetic signature of solar wind fluctuations, from 10 Hz up to 1 MHz. \end{itemize} V5, the MAGs, and the SCM are all mounted on the boom in the shade of the TPS. For further details, see \citet{2016SSRv..204...49B}. 
\item{}The Solar Wind Electrons Alphas and Protons (SWEAP) Investigation measures the thermal solar wind, {\emph{i.e.}}, electrons, protons, and alpha particles. SWEAP measures the velocity distribution functions (VDFs) of ions and electrons with high energy and angular resolution. It consists of the Solar Probe Cup (SPC), the Solar Probe Analyzers (SPAN-A and SPAN-B), and the SWEAP Electronics Module (SWEM): \begin{itemize} \item SPC is fully exposed to the solar environment as it looks directly at the Sun and measures ion and electron fluxes and flow angles as a function of energy. \item SPAN-A is mounted on the ram side and comprises ion and electron electrostatic analyzers (SPAN-i and SPAN-e, respectively). \item SPAN-B is an electron electrostatic analyzer on the anti-ram side of the S/C. \item The SWEM manages the suite by distributing power, formatting onboard data products, and serving as a single electrical interface to the S/C. \end{itemize} The SPANs and the SWEM reside on the S/C bus behind the TPS. See \citet{2016SSRv..204..131K} for more information. \item{} The Integrated Science Investigation of the Sun (IS$\odot$IS) investigation measures energetic particles over a very broad energy range (10s of keV to 100 MeV). IS$\odot$IS is mounted on the ram side of the S/C bus. It comprises two Energetic Particle Instruments (EPI) to measure low (EPI-Lo) and high (EPI-Hi) energy: \begin{itemize} \item EPI-Lo is a time-of-flight (TOF) mass spectrometer that measures electrons from $\sim25–1000$~keV, protons from $\sim0.04–7$~MeV, and heavy ions from $\sim0.02–2$~MeV/nuc. EPI-Lo has 80 apertures distributed over eight wedges. Their combined fields-of-view (FOVs) cover nearly an entire hemisphere. \item EPI-Hi measures electrons from $\sim0.5–6$~MeV and ions from $\sim1–200$ MeV/nuc. EPI-Hi consists of three telescopes: a high energy telescope (HET; double ended) and two low energy telescopes LET1 (double ended) and LET2 (single ended).
\end{itemize} See \citet{2016SSRv..204..187M} for a full description of the IS$\odot$IS investigation. \item{} The Wide-Field Imager for Solar PRobe (WISPR) is the only remote-sensing instrument suite on the S/C. WISPR is a white-light imager providing observations of flows and transients in the solar wind over a $95^\circ\times58^\circ$ (radial and transverse, respectively) FOV covering elongation angles from $13.5^\circ$ to $108^\circ$. It comprises two telescopes: \begin{itemize} \item WISPR-i covers the inner part of the FOV ($40^\circ\times40^\circ$). \item WISPR-o covers the outer part of the FOV ($58^\circ\times58^\circ$). \end{itemize} See \citet{2016SSRv..204...83V} for further details. \end{enumerate} Before tackling the {\emph{PSP}} achievements during the first four years of the prime mission, a brief historical context is given in \S\ref{HistoricalContext}. \S\ref{PSPMSTAT} provides a brief summary of the {\emph{PSP}} mission status. \S\S\ref{MagSBs}-\ref{PSPVENUS} describe the {\emph{PSP}} discoveries during the first four years of operations: switchbacks, solar wind sources, kinetic physics, turbulence, large-scale structures, energetic particles, dust, and Venus, respectively. The conclusions and discussion are given in \S\ref{SUMCONC}. {\emph{Although sections 3-12 may have some overlap and cross-referencing, each section can be read independently from the rest of the paper.}} \section{Historical Context: {\emph{Mariner~2}}, {\emph{Helios}}, and {\emph{Ulysses}}} \label{HistoricalContext} Before {\emph{PSP}}, several space missions shaped our understanding of the solar wind for decades. Three stand out as trailblazers, {\emph{i.e.}}, {\emph{Mariner~2}}, {\emph{Helios}} \citep{Marsch1990}, and {\emph{Ulysses}} \citep{1992AAS...92..207W}. {\emph{Mariner~2}}, launched on 27 Aug. 1962, was the first successful mission to a planet other than the Earth ({\emph{i.e.}}, Venus).
Its measurements of the solar wind are a first and among the most significant discoveries of the mission \citep[see ][]{1962Sci...138.1095N}. Although the mission returned data for only a few months, the measurements showed the highly variable nature and complexity of the plasma flow expanding anti-sunward \citep{1965ASSL....3...67S}. However, before the launch of {\emph{PSP}}, almost everything we knew about the inner interplanetary medium was due to the double {\emph{Helios}} mission. This mission set the stage for an initial understanding of the major physical processes occurring in the inner heliosphere. It greatly helped the development and tailoring of instruments onboard subsequent missions. The two {\emph{Helios}} probes were launched on 10 Dec. 1974 and 15 Jan. 1976 and placed in orbit in the ecliptic plane. Their distance from the Sun varied between 0.29 and 1 astronomical unit (AU) with an orbital period of about six months. The payload of the two {\emph{Helios}} probes comprised several instruments: \begin{itemize} \item Proton, electron, and alpha particle analyzers; \item Two DC magnetometers; \item A search coil magnetometer; \item A radio wave experiment; \item Two cosmic ray experiments; \item Three photometers for the zodiacal light; and \item A dust particle detector. \end{itemize} Here we provide a very brief overview of some of the scientific goals achieved by {\emph{Helios}} to make the reader aware of the importance that this mission has had in the study of the solar wind and beyond. {\emph{Helios}} established the mechanisms which generate dust particles at the origin of the zodiacal light (ZL), their relationship with micrometeorites and comets, and the radial dependence of dust density \citep{1976BAAS....8R.457L}. Micrometeorite impacts on the dust particle sensors allowed the study of asymmetries with respect to (hereafter w.r.t.)
the ecliptic plane and the different origins related to stone meteorites or iron meteorites and suggested that many particles run on hyperbolic orbits aiming out of the solar system \citep{1980P&SS...28..333G}. {\emph{Helios}}' plasma wave experiment was the first to confirm that the generation of type~III radio bursts is a two-step process, as theoretically predicted by \citet{1958SvA.....2..653G}, and revealed enhanced levels of ion acoustic wave turbulence in the solar wind. In addition, the radial excursion of {\emph{Helios}} made it possible to prove that the frequency of the radio emission increases with decreasing distance from the Sun, owing to the consequent increase of plasma density \citep{1979JGR....84.2029G,1986A&A...169..329K}. The radio astronomy experiments onboard both S/C were the first to provide ``three-dimensional (3D) direction finding'' in space, making it possible to follow the source of type~III radio bursts during its propagation along the interplanetary magnetic field lines. In practice, they provided a significant extension of our knowledge of the large-scale structure of the interplanetary medium via remote sensing \citep{1984GeCAS......111K}. The galactic and cosmic ray experiment studied the energy spectra, charge composition, and flow patterns of both solar and galactic cosmic rays (GCRs). {\emph{Helios}} was highly relevant as part of a large network of cosmic ray experiments onboard S/C located between 0.3 and 10~AU. It contributed significantly to confirming the role of the solar wind suprathermal particles as seed particles injected into interplanetary shocks to be eventually accelerated \citep{1976ApJ...203L.149M}. Coupling observations by {\emph{Helios}} and other S/C at 1~AU allowed the problem of particle transport to be studied by performing measurements under different conditions of magnetic connectivity and radial distance from the source region.
Moreover, joint measurements between {\emph{Helios}} and {\emph{Pioneer}}~10 gave important results about the modulation of cosmic rays in the heliosphere \citep{1978cosm.conf...73K,1984GeCAS......124K}. The solar wind plasma experiment and the magnetic field experiments allowed us to investigate the interplanetary medium from the large-scale structure to spatial scales of the order of the proton Larmor radius for more than an entire 11-year solar cycle. The varying vantage point due to a highly elliptic orbit allowed us to reach an unprecedented description of the solar wind's large-scale structure and the dynamical processes that take place during the expansion into the interplanetary medium \citep{1981sowi.conf..126S}. {\emph{Helios}}' continuous plasma and magnetic field observations allowed new insights into the study of magneto-hydrodynamic (MHD) turbulence, opening Pandora's box in our understanding of this phenomenon of astrophysical relevance \citep[see reviews by][]{1995Sci...269.1124T,2013LRSP...10....2B}. Similarly, detailed observations of the 3D velocity distribution of protons, alphas, and electrons not only revealed the presence of anisotropies w.r.t. the local magnetic field but also the presence of proton and alpha beams as well as electron strahl. Moreover, these observations allowed us to study the variability and evolution of these kinetic features with heliocentric distance and different Alfv\'enic turbulence levels at fluid scales \citep[see the review by][]{1995Sci...269.1124T}. Up to the launch of {\emph{Ulysses}} on 6 Oct. 1990, solar wind exploration was limited to measurements within the ecliptic plane. As with {\emph{PSP}}, the idea of flying a mission to explore the solar polar regions dates back to the 1959 report of the Simpson Committee. Using a Jupiter gravity assist, {\emph{Ulysses}} was slung out of the ecliptic to fly above the solar poles and provide unique measurements.
During its three solar passes in 1994-95, 2000-01, and 2005, {\emph{Ulysses}} covered two minima and one maximum of the solar sunspot cycle, revealing phenomena unknown to us before \citep[see][]{2008GeoRL..3518103M}. All measurements were, however, at heliodistances beyond 1~AU and only {\emph{in~situ}}, as there were no remote-sensing instruments onboard. \section{Mission Status} \label{PSPMSTAT} After a decade in the making, {\emph{PSP}} began its 7-year journey into the Sun’s corona on 12 Aug. 2018 \citep{9172703}. About six weeks after launch, the S/C flew by Venus for the first of seven gravity assists to target the initial perihelion of 35.6~$R_\odot$. As the S/C continues to perform VGAs, the perihelion has been decreased to 13.28~$R_\odot$ after the fifth VGA, with the anticipation of a final perihelion of 9.86~$R_\odot$ in the last three orbits. Fig.~\ref{Fig_PSPStatus} shows the change in perihelia as the S/C has successfully completed the VGAs and the anticipated performance in future orbits. Following the seventh VGA, the aphelion is below Venus’ orbit. So, no more VGAs will be possible, and the orbit perihelion will remain the same for a potential extended mission. As shown in Fig.~\ref{Fig_PSPStatus}, the S/C had completed 13 orbits by Oct. 2022, with an additional 11 orbits remaining in the primary mission. As designed, these orbits are separated into a solar Enc. phase and a cruise phase. Solar Encs. are dedicated to taking the data that characterize the near-Sun environment and the corona. The cruise phase of each orbit is devoted to a mix of science data downlink, S/C operations and maintenance, and science in regions further away from the Sun.
\begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{PSP_Mission_Status_SSRv2022.pdf} \includegraphics[width=1\columnwidth]{PSP_Mission_Status_SSRv2022_WhereIsPSP12212022.pdf} \caption{(Top-blue) PSP's perihelion distance is decreased by performing gravity assists using Venus (VGAs). After seven close flybys of Venus, the final perihelion is anticipated to be 9.86~$R_\odot$ from the Sun's center. (Top-orange) The modeled temperature of the TPS sunward face at each perihelion. The thermal sensors on the S/C (behind the TPS) confirm the TPS thermal model. It is noteworthy that there are no thermal sensors on the TPS itself. (Bottom) The trajectory of PSP during the 7-year primary mission phase as a function of days after the launch on 12 Aug. 2018. The green (red) color indicates the completed (future) part of the PSP orbit. The green dot shows the PSP heliodistance on 21 Dec. 2022.} \label{Fig_PSPStatus} \end{center} \end{figure} The major engineering challenge for the mission before launch was to design and build a TPS that would keep the bulk of the S/C at comfortable temperatures during each solar Enc. period. Fig.~\ref{Fig_PSPStatus} also shows the temperature of the TPS' sunward face at each perihelion, the maximum temperature in each orbit. Given the anticipated temperature at the final perihelion of nearly 1000$^\circ$C, the TPS does not include sensors for the direct measurement of the TPS temperature. However, the S/C has other sensors, such as the barrier blanket temperature sensor and monitoring of the cooling system, with which the S/C's overall thermal model has been validated. Through orbit 13, the thermal model and measured temperatures agree very well, though actual temperatures are slightly lower as the model included conservative assumptions for inputs such as surface properties. This good agreement holds throughout the orbits, including aphelion. 
For the early orbits, the reduced solar illumination when the S/C is further away from the Sun raised concerns before launch that the cooling system might freeze unless extra energy was provided to the S/C by tilting the system to expose more surface area to the Sun near aphelion. This design worked as expected, and temperatures near aphelion have been comfortably above the point where freezing might occur. The mission was designed to collect science data during solar Encs. ({\emph{i.e.}}, inside 0.25~AU) and return that data to Earth during the cruise phase, when the S/C is further away from the Sun. The system was designed to do this using a Ka-band telecommunications link, one of the first uses of this technology for APL\footnote{The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland.}, with the requirement of returning an average of 85 Gbits of science data per orbit. While the pre-launch operations plan comfortably exceeded this, the mission has returned over three times the planned data volume through the first 13 orbits, with increased data return expected through the remaining orbits. The increased data return is mainly due to better than expected performance of the Ka-band telecommunications system. It has resulted in the ability to measure and return data throughout the orbit, not just in solar Encs., to characterize the solar environment fully. Another major engineering challenge before launch was the ability of the system to detect and recover from faults and to maintain attitude control of the S/C to prevent unintended exposure to solar illumination. The fault management is, by necessity, autonomous, since the S/C spends a significant amount of time out of communication with Mission Operations during the solar Enc. periods in each orbit. A more detailed discussion of the design and operation of the S/C autonomy system is found in \citet{9172703}. 
Through 13 orbits, the S/C has successfully executed each orbit and operated as expected in this harsh environment. We have seen some unanticipated issues, associated mainly with the larger-than-expected dust environment, that have affected the S/C. However, the autonomy system has successfully detected and recovered from all of these events. The robust design of the autonomy system has kept the S/C safe, and we expect this to continue through the primary mission. Generally, the S/C has performed well within the expectations of the engineering team, who used conservative design and robust, redundant systems to build the highly capable {\emph{PSP}}. Along with this, a major factor in the mission's success so far is the tight coupling between the engineering and operations teams and the science team. Before launch, this interaction gave the engineering team insight into this unexplored near-Sun environment, resulting in designs that were conservative. After launch, the operations and science teams have worked together to exploit this conservatism to achieve results far beyond expectations. \section{Magnetic Field Switchbacks} \label{MagSBs} Abrupt and significant changes of the interplanetary magnetic field direction were reported as early as the mid-1960s \citep[see][]{1966JGR....71.3315M}. The cosmic ray anisotropy remained well aligned with the field. \citet{1967JGR....72.1917M} also reported increases in the radial solar wind speed accompanying the magnetic field deviations from the Parker spiral. Using {\emph{Ulysses}}’ data recorded above the solar poles at heliodistances $\ge1$~AU, \citet{1999GeoRL..26..631B} analyzed the propagation direction of waves to show that these rotations in the magnetic field of $90^\circ$ w.r.t. the Parker spiral are magnetic field line folds rather than opposite polarity flux tubes originating at the Sun.
Magnetic field inversions were observed at 1~AU by the International Sun-Earth Explorer-3 ({\emph{ISEE}}-3 [\citealt{1979NCimC...2..722D}]; \citealt{1996JGR...10124373K}) and the Advanced Composition Explorer ({\emph{ACE}} [\citealt{1998SSRv...86....1S}]; \citealt{2009ApJ...695L.213G,2016JGRA..12110728L}). Inside 1~AU, the magnetic field reversals were also observed in the {\emph{Helios}} \citep{1981ESASP.164...43P} solar wind measurements as close as 0.3~AU from the Sun's center \citep{2016JGRA..121.5055B,2018MNRAS.478.1980H}. The magnetic field switchbacks took center stage recently owing to their prominence and ubiquitousness in the {\emph{PSP}} measurements inside 0.2~AU. \subsection{What is a switchback?} Switchbacks are short magnetic field rotations that are ubiquitously observed in the solar wind. They are consistent with local folds in the magnetic field rather than changes in the magnetic connectivity to solar source regions. This interpretation is supported by the observation of suprathermal electrons \citep{1996JGR...10124373K}, the differential streaming of alpha particles \citep{2004JGRA..109.3104Y} and proton beams \citep{2013AIPC.1539...46N}, and the directionality of Alfv\'en waves (AWs) \citep{1999GeoRL..26..631B}. Because of the intrinsic Alfv\'enic nature of these structures -- implying a high degree of correlation between magnetic and velocity fluctuations in all field components -- the magnetic field fold has a distinct velocity counterpart. Moreover, the so-called \emph{one-sided} aspect of solar wind fluctuations during Alfv\'enic streams \citep{2009ApJ...695L.213G}, which is a consequence of the approximate constancy of the magnetic field strength $\Bm=|\B|$ during these intervals, has a direct impact on the distribution of $B_R$ and $V_R$ in switchbacks.
Under such conditions (constant $B$ and Alfv\'enic fluctuations), large magnetic field rotations, and switchbacks in particular, always lead to bulk speed enhancements \citep{2014GeoRL..41..259M}, resulting in a spiky solar wind velocity profile during Alfv\'enic periods. Since the amplitude of the velocity spikes associated with switchbacks is proportional to the local Alfv\'en speed $\va$, the speed modulation is particularly intense in fast-solar-wind streams observed inside 1~AU, where $\va$ is larger, and it was suggested that velocity spikes could be even larger closer in \citep{2018MNRAS.478.1980H}. Despite the previous knowledge of switchbacks in the solar wind community and some expectations that they could have played some role closer to the Sun, our consideration of these structures has been totally changed by {\emph{PSP}} since its first observations inside 0.3~AU \citep{2019Natur.576..228K,2019Natur.576..237B}. The switchback occurrence rate, morphology, and amplitude as observed by {\emph{PSP}}, as well as the fact that they are ubiquitously observed also in slow, though mostly Alfv\'enic, solar wind, made them one of the most interesting and intriguing aspects of the first {\emph{PSP}} Encs. In this section, we summarize recent findings about switchbacks from the first {\emph{PSP}} orbits. In Section \ref{SB_obs} we provide an overview of the main observational properties of these structures in terms of size, shape, radial evolution, and internal and boundary properties; in Section \ref{sec: theory switchbacks} we present current theories for the generation and evolution of switchbacks, presenting different types of models based on their generation at the solar surface or {\emph{in situ}} in the wind. \S\ref{SB_discussion} contains a final discussion of the state-of-the-art of switchbacks' observational and theoretical studies and a list of current open questions to be answered by {\emph{PSP}} in future Encs.
\subsection{Observational properties of switchbacks}\label{SB_obs} \subsubsection{Velocity increase inside switchbacks}\label{sub:obs: velocity increase} At first order, switchbacks can be considered as strong rotations of the magnetic-field vector $\B$, with no change in the magnetic field intensity $\Bm=|\B|$. Geometrically, this corresponds to a rotation of $\B$ with its tip constrained on a sphere of constant radius $\Bm$. Such excursions are well represented by following the direction of $\B$ in the RT plane during the time series of a large amplitude switchback, as in the left panels of Fig.~\ref{fig:big_sb}. The top left panel represents the typical $\B$ pattern observed since Enc.~1 \citep{2020MNRAS.498.5524W}: the background magnetic field, initially almost aligned with the radial ($B_R<0$) in the near-Sun regions observed by {\emph{PSP}}, makes a significant rotation in the RT plane, locally inverting its polarity ($B_R>0$). All this occurs while keeping $\Bm\sim\mathrm{const.}$, and points follow a circle of approximately constant radius during the rotation; as a consequence, the transverse component of $\B$ increases significantly, and $B_T\gg B_R$ as the rotation approaches $90^\circ$. Due to the high Alfv\'enicity of the fluctuations in near-Sun streams sampled by {\emph{PSP}}, the same pattern is observed for the velocity vector, with similar and proportional variations in $V_R$ and $V_T$ (bottom left panel). While the magnetic field is frame-invariant, the circular pattern seen for the velocity vector is not, and its center identifies the so-called de Hoffman-Teller frame (dHT): the frame in which the motional electric field associated with the fluctuations is zero and where the switchback's magnetic structure can be considered at rest. This frame is typically traveling at the local Alfv\'en speed ahead of the solar wind protons, along the magnetic field.
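The local Alfv\'en speed that sets this dHT offset follows directly from the field strength and proton density, $\va = B/\sqrt{\mu_0\, n_p\, m_p}$. A minimal sketch (the input values are illustrative near-Sun numbers assumed for this example, not measurements from the events discussed here):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]
M_P = 1.6726e-27       # proton mass [kg]

def alfven_speed(B_nT, n_cc):
    """Alfven speed v_A = B / sqrt(mu0 * n_p * m_p), returned in km/s.

    B_nT : magnetic field magnitude in nT
    n_cc : proton number density in cm^-3
    """
    B = B_nT * 1e-9    # nT  -> T
    n = n_cc * 1e6     # cm^-3 -> m^-3
    return B / math.sqrt(MU0 * n * M_P) / 1e3

# Assumed illustrative values: B ~ 100 nT, n_p ~ 200 cm^-3
# give v_A of order 150 km/s.
print(alfven_speed(100.0, 200.0))
```

Speeds of this order, tens of km~s$^{-1}$ near 0.2~AU up to $\sim150$~km~s$^{-1}$ deeper in, are what separate the dHT frame from the proton bulk flow.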
This is consistent with the velocity measurements in the bottom left panel of Fig.~\ref{fig:big_sb}, where the local $\va$ is of the order of $\sim50$~km~s$^{-1}$ and agrees well with the location of the centre of the circle, which is roughly 50~km~s$^{-1}$ ahead of the minimum $V_R$ seen at the beginning of the interval. Because of the geometrical property above, there is a direct relation between the $\B$ excursion and the resulting modulation of the flow speed in switchbacks. Remarkably, switchbacks always lead to speed increases, characterized by a spiky, one-sided profile of $V_R$, independent of the underlying magnetic field polarity; {\emph{i.e.}}, regardless of whether $\B$ rotates from $0^\circ$ towards $180^\circ$ or vice-versa \citep{2014GeoRL..41..259M}. As a consequence, it is possible to derive a simple phenomenological relation that links the instantaneous proton radial velocity $V_R$ to the magnetic field angle w.r.t. the radial $\theta_{BR}$, where $\cos\theta_{BR}=B_R/\Bm$. Moreover, since the solar wind speed is typically dominated by its radial component, this can be considered an approximate expression for the proton bulk speed within switchbacks \citep{2015ApJ...802...11M}: \begin{equation} V_p=V_0+\va[1\pm\cos{\theta_{BR}}],\label{eq_v_in_sb} \end{equation} where $V_0$ is the background solar wind speed and the sign in front of the cosine takes into account the underlying Parker spiral polarity ($-\cos{\theta_{BR}}$ if $B_R>0$, $+\cos{\theta_{BR}}$ otherwise). As apparent from Eq.~(\ref{eq_v_in_sb}), the speed increase inside a switchback with constant $\Bm$ has a maximum amplitude of $2\times\va$. This corresponds to magnetic field rotations that are full reversals; for moderate deflections, the speed increase is smaller, typically of the order of $\sim{\va}$ for a $90^\circ$ deflection. Also, because the increase in $V_p$ is proportional to the local Alfv\'en speed, larger enhancements are expected closer to the Sun.
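The limiting cases of Eq.~(\ref{eq_v_in_sb}) are easy to check numerically. A short sketch, with the polarity-dependent sign convention made explicit ($V_0$ and $\va$ are illustrative values only):

```python
import math

def proton_speed(theta_deg, v0, v_a, B_R_positive=False):
    """Proton bulk speed inside a switchback, per the relation
    V_p = V_0 + v_A * (1 +/- cos(theta_BR)):
    the '-' sign applies for an outward background polarity (B_R > 0),
    the '+' sign otherwise.  Speeds in km/s, angle in degrees.
    """
    c = math.cos(math.radians(theta_deg))
    sign = -1.0 if B_R_positive else 1.0
    return v0 + v_a * (1.0 + sign * c)

v0, va = 300.0, 150.0   # illustrative background speed and Alfven speed
# Outward polarity: no deflection -> V_0; 90 deg -> V_0 + v_A;
# full reversal (180 deg) -> V_0 + 2*v_A.
print(proton_speed(0.0,   v0, va, B_R_positive=True))   # 300.0
print(proton_speed(90.0,  v0, va, B_R_positive=True))   # ~450
print(proton_speed(180.0, v0, va, B_R_positive=True))   # 600.0
```

With either sign choice, the background field direction gives $V_p=V_0$ and a full reversal gives $V_p=V_0+2\va$, reproducing the one-sided, spiky $V_R$ profile described above.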
\begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig_big_sb.jpg} \caption{ {\it Left:} Magnetic field and velocity vector rotations during a large amplitude switchback during {\emph{PSP}} Enc.~1 \citep{2020MNRAS.498.5524W}. {\it Right:} An example of a switchback observed by {\emph{PSP}} during Enc.~6. The top panel shows the almost complete magnetic field reversal of $B_R$ (black), while the magnetic field intensity $|B|$ (red) remains almost constant through the whole structure. The bottom panel shows the associated jump in the radial velocity $V_R$. In a full switchback the bulk speed of the solar wind protons can increase by up to twice the Alfv\'en speed $\va$; as a consequence, we observe a jump from $\sim300$~km~s$^{-1}$ to $\sim600$~km~s$^{-1}$ in the speed during this interval ($\va\sim150$~km~s$^{-1}$).} \label{fig:big_sb} \end{center} \end{figure} The right panels of Fig.~\ref{fig:big_sb} show one of the most striking examples of switchbacks observed by {\emph{PSP}} during Enc.~6. This corresponds to an almost full reversal of $B_R$, from approximately $100$ to ${-100}$ nT, with the magnetic field intensity remaining remarkably constant during the vector $\B$ rotation. As a consequence, the background bulk flow proton velocity ($\sim300$~km~s$^{-1}$) goes up by almost $2~\va$, leading to a speed enhancement up to 600~km~s$^{-1}$ inside the structure ($\va\sim150$~km~s$^{-1}$). This has the impressive effect of turning the ambient slow solar wind into fast wind for the duration of the crossing, without a change in the connection to the source. It is an open question whether even larger velocity jumps could be observed closer in, when $\va$ approaches $200-300$~km~s$^{-1}$ and becomes comparable to the bulk flow itself, and what the consequences would be for the overall flow energy and dynamics. Finally, it is worth emphasising that the velocity enhancements discussed above relate only to the main proton core population in the solar wind plasma.
Other species, like proton beams and alpha particles, react differently to switchbacks and may or may not partake in the Alfv\'enic motion associated with these structures, depending on their relative drift w.r.t. the proton core. In fact, alpha particles typically stream faster than protons along the magnetic field in Alfv\'enic streams, with a drift speed that in the inner heliosphere can be quite close to $\va$. As a consequence they sit close to the zero electric field reference frame (dHT) and display much smaller oscillations and speed variations in switchbacks (if they stream exactly at the same speed as the phase velocity of the switchback, they are totally unaffected and do not feel any fold in the field \citep[see {\emph{e.g.}},][]{2015ApJ...802...11M}). Similarly, proton beams have drift speeds that exceed the local Alfv\'en speed close to the Sun and therefore, because they stream faster than the dHT frame, they are observed to oscillate out of phase with the main proton core \citep[{\emph{i.e.}}, they get slower inside switchbacks and the core-beam structure of the proton VDF is locally reversed where $B_R$ flips sign;][]{2013AIPC.1539...46N}. The same happens for the electron strahl, leading to an inversion in the electron heat-flux. \subsubsection{Characteristic Scales, Size and Shape}\label{sub:obs: shapes and sizes} Ideally, switchbacks would be imaged from a range of angles, providing a straightforward method to visualize their shape. However, as mentioned above, these structures are Alfv\'enic and have little change in plasma density, which is what line of sight (LOS) images from remote sensing instruments rely on. We must instead rely on the {\emph{in~situ}} observations from a single S/C, which are fundamentally local measurements. Therefore, it is important to understand the relationship between the true physical structure of a switchback and the data measured by a S/C, as this can influence the way in which we think about and study them.
For example, a small duration switchback in the {\emph{PSP}} time series may be due to a physically smaller switchback, or because {\emph{PSP}} clipped the edge of a larger switchback. This ambiguity also applies to a series of multiple switchbacks, which may truly be several closely spaced switchbacks or in fact one larger, more degraded switchback \citep{2021ApJ...915...68F}. \citet{2020ApJS..246...39D} provided the first detailed statistics on switchbacks for {\emph{PSP}}'s first Enc. They showed that switchback duration could vary from a few seconds to over an hour, with no characteristic timescale. Through studying the waiting time (the time between each switchback) statistics, they found that switchbacks exhibited long term memory, and tended to aggregate, which they took as evidence for a similar coronal origin. Many authors define switchbacks as deflections, above some threshold, away from the Parker spiral. The direction of this deflection, {\emph{i.e.}}, towards +T, is also interesting as it could act as a testable prediction of switchback origin theories \citep{2021ApJ...909...95S}. For Enc.~1 at least, \citet{2020ApJS..246...39D} showed that deflections were isotropic about the Parker spiral direction, although they did note that the longest switchbacks displayed a weak preference for deflections in +T. \citet{2020ApJS..246...45H} also found that switchbacks displayed a slight preference to deflect in T rather than N, although there was no distinction between -T or +T. The authors refer to the clock angle in an attempt to quantify the direction of switchback deflection. This is defined as the ``angle of the vector projected onto a plane perpendicular to the Parker spiral that also contains N'', where 0$^{\circ}$, 90$^{\circ}$, 180$^{\circ}$ and 270$^{\circ}$ refer to +N, +T, -N, -T directions respectively.
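The clock-angle convention quoted above can be encoded compactly; this is an illustrative sketch of the angle convention only, and omits the projection onto the plane perpendicular to the Parker spiral:

```python
import math

def clock_angle(b_T, b_N):
    """Clock angle of a deflection, following the quoted convention:
    0, 90, 180, 270 degrees correspond to +N, +T, -N, -T respectively.

    b_T, b_N are the components of the (already projected) field vector in
    the plane perpendicular to the Parker spiral that contains N.
    """
    return math.degrees(math.atan2(b_T, b_N)) % 360.0

print(round(clock_angle(0, 1)))   # 0   -> +N
print(round(clock_angle(1, 0)))   # 90  -> +T
print(round(clock_angle(0, -1)))  # 180 -> -N
print(round(clock_angle(-1, 0)))  # 270 -> -T
```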
Unlike the entire switchback population, the longest switchbacks did show a preference for deflection direction, which often displayed clustering about a certain direction that was not correlated with the solar wind flow direction. Crucially, \citet{2020ApJS..246...45H} demonstrated a correlation between the duration of a switchback and the direction of deflection. They then asserted that the duration of a switchback was related to the way in which {\emph{PSP}} cut through the true physical shape. Since switchbacks are Alfv\'enic, the direction of the magnetic field deflection also creates a flow deflection. This, when combined with the S/C velocity (which had a maximum tangential component of +90~km~s$^{-1}$ during the first Enc.), sets the direction at which {\emph{PSP}} travels through a switchback. As a first attempt, they assumed the switchbacks were aligned with the radial direction or dHT, allowing for the angle of {\emph{PSP}} w.r.t. the flow to be calculated. The authors then demonstrated that as the angle to the flow decreased, the switchback duration increased, implying that these structures were long and thin along the flow direction, with transverse scales around $10^{4}$~km. This idea was extended by \citet{2021AA...650A...1L} to more solar wind streams across the first two Encs. Instead of assuming a flow direction, they started with the idea that the structures were long and thin, and attempted to measure their orientation and dimensions. Allowing the average switchback width and aspect ratio to be free parameters, they fit an expected model to the distribution of switchback durations, w.r.t. the S/C cutting angle. They applied this method while varying the switchback orientation, finding the orientation that was most consistent with the long, thin model. Switchbacks were found to be aligned away from the radial direction, towards the Parker spiral.
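The geometric reasoning above, that cuts more closely aligned with a long, thin structure yield longer durations, can be sketched with an idealized crossing model; the width, relative speed, and angles below are assumed illustrative values, not fit results from the cited studies:

```python
import math

def crossing_duration_s(width_km, v_rel_kms, angle_deg):
    """Duration for a S/C to cross a long, thin structure of transverse
    width `width_km`, moving at relative speed `v_rel_kms` at `angle_deg`
    to the structure's long axis. The path length through the structure
    is ~ width / sin(angle), so small angles (nearly axis-aligned cuts)
    give long durations."""
    return width_km / (v_rel_kms * math.sin(math.radians(angle_deg)))

# Assumed numbers: a 1e4 km wide structure crossed at 300 km/s
print(round(crossing_duration_s(1e4, 300.0, 90.0)))  # 33  (s, perpendicular cut)
print(round(crossing_duration_s(1e4, 300.0, 5.0)))   # 382 (s, nearly aligned cut)
```

Inverting this relation, given a measured duration distribution and the S/C cutting angles, is the essence of the fitting approach described above.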
The statistical average switchback width was around $50,000$~km, with an aspect ratio of the order of 10, although there was a large variation. \citet{2021AA...650A...1L} again emphasized that the duration of a switchback is a function of how the S/C cut through the structure, which is in turn related to the switchback deflection, dimensions, orientation and S/C velocity. A similar conclusion was also reached by \citet{2020MNRAS.494.3642M}, who argued that the direction of {\emph{Helios}} w.r.t. switchbacks could influence the statistics seen in the data. Unlike the previous studies that relied on large statistics, \citet{2020ApJ...893...93K} analyzed several case study switchbacks during the first Enc., finding currents at the boundaries. They argued that these currents flowed along the switchback surface, and also pictured switchbacks as cylindrical. Analyzing the flow deflections relative to the S/C for three switchbacks, they found a transverse scale of $7,000$~km and $50,000$~km for a compressive and Alfv\'enic switchback, respectively. A similar method was applied to a larger set of switchbacks by \citet{2021AA...650A...3L}, who used minimum variance analysis (MVA) to find the normal directions of the leading and trailing edges. After calculating the width of the edges, an average normal velocity was multiplied by the switchback duration to give a final width. They found that the transverse switchback scale varied from several thousand km to a solar radius ($695,000$~km), with the mode value lying between $10^{4}$~km and $10^{5}$~km. A novel approach to probe the internal structure of switchbacks was provided by \citet{2021AA...650L...4B}, who studied the behavior of energetic particles during switchback periods in the first five {\emph{PSP}} Encs.
Energetic particles (80-200 MeV/nucleon) continued to stream anti-sunward during a switchback in 86\% of cases, implying that the radius of magnetic field curvature inside switchbacks was smaller than or comparable to the ion gyroradius. Using typical solar wind parameters ($\Bm\sim50$ nT, ion energy $100$~eV), this sets an upper limit of $\sim4000$~km for the radius of curvature inside a switchback. Assuming a typical S-shaped curve envisaged by \citet{2019Natur.576..228K}, this would constrain the switchback width to be less than $\sim16,000$~km. A summary of the results is displayed in Table \ref{tab:shape_size}, which exhibits a large variation but a general consensus that the switchback transverse scale ranges from $10^{3}$~km to $10^{5}$~km. Future areas of study should focus on how the switchback shape and size varies with distance from the Sun. However, a robust method for determining how {\emph{PSP}} cut through the switchback must be found for progress to be made in this area. For example, an increased current density or wave activity at the boundary may be used as a signature of when {\emph{PSP}} is clipping the edge of a switchback. Estimates of switchback transverse scale, like those of \citet{2021AA...650A...3L}, could be constrained with the use of energetic particle data \citep{2021AA...650L...4B} on a case-by-case basis, improving the link between the duration measured by a S/C and the true physical size of the switchback.
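The gyroradius argument above can be made concrete with a simple non-relativistic estimate; the 100 keV particle energy used below is an assumed illustrative value for the sketch, not a parameter taken from the cited study:

```python
import math

# Physical constants (SI)
M_P = 1.6726e-27   # proton mass, kg
Q_E = 1.6022e-19   # elementary charge, C

def proton_gyroradius_km(energy_eV, B_nT):
    """Non-relativistic proton gyroradius r_g = m v / (q B)."""
    v = math.sqrt(2.0 * energy_eV * Q_E / M_P)   # speed, m/s
    return M_P * v / (Q_E * B_nT * 1e-9) / 1e3   # radius, km

# Assumed illustrative case: a 100 keV proton in a 50 nT field has a
# gyroradius of order 1e3 km; field folds with curvature radii well below
# the gyroradius of a given particle population would not deflect it.
print(round(proton_gyroradius_km(100e3, 50.0)))  # 914
```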
\begin{table}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|l|l} {\bf{Study}} & {\bf{Enc.}} & {\bf{Transverse Scale (km)}} & {\bf{Aspect}} \\ \hline \citet{2020ApJS..246...45H} & 1 & $10^{4}$ & - \\ \hline \citet{2021AA...650A...1L} & 1,\,2 & $50,000$ & $\sim 10$ \\ \hline \citet{2020ApJ...893...93K} & 1 & $7000$ for compressive & - \\ & & $50,000$ for Alfv\'enic & \\ \hline \citet{2021AA...650A...3L} & 1 & $10^{3}$--$10^{5}$ & - \\ \hline \citet{2021AA...650L...4B} & 1-5 & $< 16,000$* & - \\ \hline \end{tabular} } \caption{Summary of the results regarding switchback shape and size, including which {\emph{PSP}} Encs. were used in the analysis. *assuming an S-shaped structure. }\label{tab:shape_size} \end{table} \subsubsection{Occurrence and radial evolution in the solar wind}\label{sub:obs: occurrence} \begin{figure} \begin{center} \includegraphics[width=.49\columnwidth]{rates_all30.png} \includegraphics[width=.49\columnwidth]{rates_all3.png} \caption{Cumulative counts of switchbacks as a function of radial distance, from {\emph{PSP}}, {\emph{Helios}} and two polar passes of {\emph{Ulysses}} (in 1994 and 2006). The left plot shows counts per km of switchbacks of duration up to 30 minutes, while the right plot shows the same quantity but for switchbacks of duration up to 3 hours. {\emph{PSP}} data (43 in total) were binned in intervals of width $\Delta R=0.05$~AU. The error bars denote the range of data points in each bin \citep{2021ApJ...919L..31T}.} \label{fig_rates} \end{center} \end{figure} Understanding how switchbacks evolve with radial distance is one of the key elements not only to determine their origin, but also to understand if switchbacks may contribute to the evolution of the turbulent cascade in the solar wind and to the solar wind energy budget.
Simulations (\S\ref{sub: theory SB propagation}) and observations (\S\ref{sec: obs: boundaries}) suggest that switchbacks may decay and disrupt as they propagate in the inner heliosphere. As a consequence, it is expected that the occurrence of switchbacks decreases with radial distance in the absence of an ongoing driver capable of reforming switchbacks {\emph{in~situ}}. On the contrary, the presence of an efficient driving mechanism is expected to lead to an increase, or to a steady state, of the occurrence of switchbacks with heliocentric distance. Based on this idea, \citet{2021ApJ...919...60M} analyzed the occurrence rate (counts per hour) of switchbacks with radial distance using data from Encs.~3 through 7 of {\emph{PSP}}. The authors conclude that the occurrence rate depends on the wind speed, with higher count rates for higher wind speed, and that it does not depend on the radial distance. Based on this result, \citet{2021ApJ...919...60M} exclude {\emph{in~situ}} generation mechanisms. However, it is interesting to note that counts of switchbacks observed by {\emph{PSP}} are highly scattered with radial distance, likely due to the mixing of different streams \citep{2021ApJ...919...60M}. \citet{2021ApJ...919L..31T} also report highly scattered counts of switchbacks with radial distance, although they argue that the presence of decaying and reforming switchbacks might also contribute to such an effect. \citet{2021ApJ...919L..31T} analyzed the count rates (counts per km) of switchbacks by complementing {\emph{PSP}} data with {\emph{Helios}} and {\emph{Ulysses}}. Their analysis shows that the occurrence of switchbacks is scale-dependent, a trend that is particularly clear in {\emph{Helios}} and {\emph{Ulysses}} data. In particular, they found that the fraction of switchbacks of duration of a few tens of seconds and longer increases with radial distance and that the fraction of those of duration below a few tens of seconds instead decreases.
The overall cumulative counts per km, two examples of which are shown in Fig.~\ref{fig_rates}, show such a trend. Results from this analysis led \citet{2021ApJ...919L..31T} to conclude that switchbacks can decay and reform in the expanding solar wind, with {\emph{in~situ}} generation being more efficient at the larger scales. They also found that the mean radial amplitude of switchbacks decays faster than the overall turbulent fluctuations, in a way that is consistent with the radial decrease of the mean radial field. They argued that this could be the result of a saturation of amplitudes and may be a signature of decay processes of switchbacks as they evolve and propagate in the inner heliosphere. \subsubsection{Thermodynamics and energetics}\label{sub: obs: thermodynamics} An important question about switchbacks is whether the plasma inside these structures differs from the surrounding background plasma. We have seen already that switchbacks exhibit a bulk speed enhancement in the main core proton population. As this increase in speed corresponds to a net acceleration in the center of mass frame, the plasma kinetic energy is therefore larger in switchbacks than in the background solar wind. This result suggests these structures carry a significant amount of energy with them as the solar wind flows out into the inner heliosphere. A question that directly follows is whether the plasma is also hotter inside than outside. Attempting to answer this important question with SPC is non-trivial, since the measurements are restricted to a radial cut of the full 3D ion VDF \citep{2016SSRv..204..131K, 2020ApJS..246...43C}. While the magnetic field rotation in switchbacks enables the sampling of many different angular cuts as the S/C encounters
these structures, the cuts are not directly comparable as they represent different combinations of $T_\perp$ and $T_\|$ \citep[see for example,][]{2020ApJS..246...70H}: \begin{equation} \label{SPCtemp} w_{r}=\sqrt{w^{2}_{\parallel }\left( \hat{r} \cdot \hat{b} \right)^{2} +w^{2}_{\perp }\left[ 1-\left( \hat{r} \cdot \hat{b} \right)^{2} \right]}, \end{equation} \noindent where $w_r$ is the measured thermal speed of the ions, related to temperature by $w=\sqrt{2 k_B T/m}$, and $\hat{\bm{b}}=\B/\Bm$. Therefore, SPC measurements of temperature outside switchbacks, where the magnetic field is typically radial, sample the proton parallel temperature, $T_{p\|}$. In contrast, as $\B$ rotates towards $90^\circ$ within a switchback, the SPC cut typically provides a better estimate of $T_{p\perp}$. To overcome this, \citet{2020ApJS..246...70H} investigated the proton temperature anisotropy statistically. They assumed that the proton VDF does not vary significantly over the SPC sampling time as $\B$ deflects away from the radial direction, and then solved Eq.~\ref{SPCtemp} for both $w_\parallel$ and $w_\perp$. While this method does reveal some information about the underlying temperature anisotropy, this approach is not suitable for the comparison of anisotropy within a single switchback since it assumes, \textit{a priori}, that the anisotropy is fixed compared to the background plasma. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig_SPC_sb_Woolley.jpg} \caption{{\it Left}: SPC measurements of the core proton radial temperature during a large amplitude switchback shown in the bottom panel. The measured core proton temperature (upper panel) is modulated by the magnetic field angle and is maximum when measuring $T_{p\perp}$ at roughly $90^\circ$, consistent with a dominant $T_{p\perp}>T_{p\|}$ anisotropy in the background plasma.
{\it Right}: cuts of the ion VDF made by SPC at different angles: antiparallel (anti-radial), orthogonal and parallel (radial) to $\B$. The fit of the proton core is shown in pink. The bottom panel compares the radial and anti-radial VDFs, where the latter has been flipped to account for the field reversal inside the switchback. Figure adapted from \citet{2020MNRAS.498.5524W}.} \label{fig_sb_spc} \end{center} \end{figure} Another possibility is to investigate switchbacks that exhibit a reversal in the sign of $B_R$, in other words, $\theta_{BR}\simeq180^\circ$ inside the switchback for a radial background field. This technique provides two estimates of $T_{p\|}$: outside the switchback, when the field is close to (anti-)radial, and inside, when $B_R$ is reversed. This is the only way to compare the same radial SPC cut of the VDF inside and outside switchbacks, leading to a direct comparison between the two resulting $T_{p\|}$ values. \citet{2020MNRAS.498.5524W} first attempted this approach, and a summary of their results is presented in Fig.~\ref{fig_sb_spc}. A switchback with an almost complete reversal in the field direction is tracked in the left panels; the bottom panel shows the angle of the magnetic field, from almost anti-radial to radial and back again. The measured core proton temperature, $T_{cp\|}$ (upper left panel), increases with angle, $\theta_{BR}$, and reaches a maximum at $\theta_{BR}\simeq 90^\circ$, consistent with a dominant $T_{p\perp}>T_{p\|}$ anisotropy in the background plasma. On the other hand, when the SPC sampling direction is (anti-)parallel to $\B$ (approximately $0^\circ$ and $180^\circ$), \citet{2020MNRAS.498.5524W} find the same value for $T_{cp\|}$. Therefore, they concluded that the plasma inside switchbacks is not significantly hotter than the background plasma. The right panels show radial cuts of the ion VDF made by SPC at different angles: anti-parallel (anti-radial), orthogonal and parallel (radial) to $\B$.
The fit of the proton core is shown in pink. The bottom panel compares the measurement in the radial and anti-radial direction, once the latter has been flipped to account for the field reversal inside the switchback; the two distributions fall on top of each other, suggesting that core protons undergo a rigid rotation in velocity space inside the switchback, without a significant deformation of the VDF. The comparison in the panels also shows that the core temperature is larger for oblique angles (larger $T_{cp\perp}$) and that the proton beam switches sides during the reversal, as discussed in \citet{2013AIPC.1539...46N}. They conclude that plasma inside switchbacks, at least those with the largest angular deflections, exhibits a negligible difference in the parallel temperature compared to the background, and therefore, the speed enhancement of the proton core inside these structures does not follow the expected $T$-$V$ relation \citep[{\emph{e.g.}}, see][]{2019MNRAS.488.2380P}. This scenario is consistent with studies of turbulent properties and associated heating inside and outside switchbacks \citep{2020ApJ...904L..30B, 2021ApJ...912...28M}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig_Tpatches_woodham.pdf} \caption{Overview of plasma properties inside a group of switchback patches. The bottom panel shows the core proton parallel and perpendicular temperatures measured by SPAN. The colours in $T_{\|}$ encode the deflection of $\B$ from the radial direction. Patches (grey sectors) exhibit systematically higher $T_{\|}$ than quiet periods, while $T_{\perp}$ is mostly uniform throughout the interval. Figure adapted from \citet{2021A&A...650L...1W}.} \label{fig_sb_span2} \end{center} \end{figure} On the other hand, SPAN measurements of the core proton parallel and perpendicular temperatures show a large-scale modulation by patches of switchbacks \citep{2021A&A...650L...1W}.
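The statistical inversion of Eq.~(\ref{SPCtemp}) described earlier amounts to solving a linear system for $w_\parallel^2$ and $w_\perp^2$ from radial cuts taken at two different field angles, under the assumption that the underlying VDF does not change between the two samples. A minimal sketch with made-up thermal speeds:

```python
import math

def solve_anisotropy(w_r1, theta1_deg, w_r2, theta2_deg):
    """Invert w_r^2 = w_par^2 cos^2(theta) + w_perp^2 sin^2(theta)
    from two radial-cut measurements at different field angles."""
    c1 = math.cos(math.radians(theta1_deg)) ** 2
    c2 = math.cos(math.radians(theta2_deg)) ** 2
    # Linear system in x = w_par^2, y = w_perp^2:
    #   w_r1^2 = c1*x + (1 - c1)*y
    #   w_r2^2 = c2*x + (1 - c2)*y
    det = c1 - c2  # = c1*(1 - c2) - c2*(1 - c1); needs theta1 != theta2
    x = (w_r1**2 * (1 - c2) - w_r2**2 * (1 - c1)) / det
    y = (c1 * w_r2**2 - c2 * w_r1**2) / det
    return math.sqrt(x), math.sqrt(y)

# Round-trip check with made-up thermal speeds (km/s): w_par=40, w_perp=60
w1 = math.sqrt(40**2 * math.cos(math.radians(10))**2 + 60**2 * math.sin(math.radians(10))**2)
w2 = math.sqrt(40**2 * math.cos(math.radians(80))**2 + 60**2 * math.sin(math.radians(80))**2)
w_par, w_perp = solve_anisotropy(w1, 10, w2, 80)
print(round(w_par), round(w_perp))  # 40 60
```

The inversion degrades as the two angles approach each other (the determinant $\cos^2\theta_1-\cos^2\theta_2$ vanishes), which is why large field deflections are the most useful samples.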
Fig.~\ref{fig_sb_span2} shows an overview of magnetic field and plasma properties through an interval that contains a series of switchback patches and quiet radial periods during Enc.~2. The bottom panel highlights the behavior of $T_{\perp}$ and $T_{\|}$ through the structures. The former is approximately constant throughout the interval, consistent with an equally roughly constant solar wind speed explained by the well-known speed-temperature relationship in the solar wind \citep[for example, see][and references therein]{2006JGRA..11110103M}. In contrast, the latter shows large variations, especially during patches when a systematically larger $T_{\|}$ is observed. As a consequence, increases in $T_{\|}$ are also correlated with deflections in the magnetic field direction (colors refer to the instantaneous angle $\theta_{BR}$). The origin of such a correlation between $\theta_{BR}$ and $T_{\|}$ is not fully understood yet, although the large-scale enhancement of the parallel temperature within patches could be a signature of some preferential heating of the plasma closer to the Sun ({\emph{e.g.}}, by interchange reconnection), supporting a coronal origin for these structures. \subsubsection{Switchback boundaries and small-scale waves}\label{sec: obs: boundaries} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ssr2021_switchbacks_figure1a.png} \caption{The magnetic field dynamics for a typical deflection (switchback) of the magnetic field observed at a heliocentric distance of $35.6~R_\odot$ during {\emph{PSP}}’s first solar Enc., on 4 Nov. 2018 (left) and at a heliocentric distance of $\sim50~R_\odot$ on 10 Nov. 2018 (right). The radial component of the magnetic field (red curve in panel (a)) exhibits an almost complete inversion at the switchback boundary and becomes positive (anti-sunward). The transverse components are shown in blue (T, in the ecliptic plane) and in green (N, the normal component, transverse to the ecliptic plane).
The magnetic field magnitude is shown in black. Panel (b) represents plasma bulk velocity components (with a separate scale for the radial component $V_z$ shown in red) with the same color scheme as in panel (a). Panels (c) and (d) represent the proton density and temperature. Panel (e) presents the magnetic field waveforms from SCM (with the instrumental power cut-off below 3 Hz). The dynamic spectrum of these waveforms is shown in Panel (f), in which the red-dashed curve indicates the local lower hybrid ($f_{LH}$) frequency. Panels (g-j) represent the magnetic and electric field perturbations around the switchback leading boundary, the wavelet spectrum of the magnetic field perturbation, and the radial component of the Poynting flux (blue color indicates propagation from the Sun and red sunward propagation). The same parameters for the trailing boundary are presented in panels (k-n).} \label{fig:icx1} \end{center} \end{figure} Switchback boundaries are plasma discontinuities, which separate the plasma inside the structure from that outside; the two move with different velocities and may have different temperatures and densities. Fig.~\ref{fig:icx1} shows a “typical” switchback, highlighting: (1) the sharp rotation of magnetic field as well as the dropouts in field intensity on the boundaries (Fig.~\ref{fig:icx1}a), in agreement with \citet{2020ApJS..249...28F}; (2) the increase of radial velocity showing the Alfv\'enicity (Fig.~\ref{fig:icx1}b); (3) the plasma density enhancements at the boundaries of the switchback (Fig.~\ref{fig:icx1}c), from 300~cm$^{-3}$ to $\sim500$ and 400~cm$^{-3}$ at the leading and trailing edges, respectively, with some decrease of plasma density inside the structure \citep{2020ApJS..249...28F} down to $250-280$~cm$^{-3}$; and (4) enhanced wave activity inside the switchback and at the boundaries (Fig.~\ref{fig:icx1}d) predominantly below $f_{LH}$ with the higher amplitude wave bursts at the boundaries.
The detailed superimposed epoch analysis of plasma and magnetic field parameters presented in \citet{2020ApJS..249...28F} showed that magnetic field magnitude dips and plasma density enhancement are the characteristic features associated with switchback boundaries. It is further shown that wave activity decays with heliocentric distance. Together with the activity inside switchbacks, the boundaries also relax during propagation \citep{2020ApJS..246...68M, 2021ApJ...915...68F, 2021A&A...650A...4A}, suggesting that the switchback boundary formation process is dynamic and evolving, even occurring near the {\emph{PSP}} observation point inside of $40~R_\odot$ \citep{2021ApJ...915...68F}. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Akhavan-Tafti_etal_SB_DiscType.pdf} \caption{(a) Discontinuity classification of 273 magnetic switchbacks. Scatter plot of the relative normal component of the magnetic field of the upstream, pristine solar wind and the relative variation in magnetic field intensity across switchbacks’ leading (QL-to-SPIKE) transition regions. The color shading indicates the switchbacks’ distance from the Sun. (b) Scatter plot of the ratio of the number of RD events to that of ED as a function of distance from the Sun. The histogram of event count per radial distance (bin width = $1~R_\odot$) is provided on the right y-axis in blue for reference.
(c) Stacked bar plots of the relative ratios of RD:TD:ED:ND discontinuities at 0.2~AU \citep[{\emph{PSP}};][]{2021A&A...650A...4A}, 1.0~AU \citep[{\emph{ISEE}};][]{1984JGR....89.5395N}, and $1.63-3.73$~AU \citep[{\emph{Ulysses}};][]{2002GeoRL..29.1383Y}.} \label{fig:icx2} \end{center} \end{figure} The analysis of MHD discontinuity types was performed by \citet{2021AA...650A...3L}, who found that $32\%$ of switchbacks may be attributed to rotational discontinuities (RD), $17\%$ to tangential discontinuities (TD), about $42\%$ to the group of discontinuities that are difficult to unambiguously define (ED), and $9\%$ that do not belong to any of these groups (ND). Similarly, as shown in Fig.~\ref{fig:icx2}, a recent study by \citet{2021A&A...650A...4A} reported that the relative occurrence rate of RD-type switchbacks goes down with heliocentric distance (Fig.~\ref{fig:icx2}b), suggesting that RD-type switchbacks may fully disappear past 0.3~AU. However, RD-type switchbacks have been observed at both Earth \citep[1~AU;][]{1984JGR....89.5395N} and near Jupiter \citep[2.5~AU;][]{2002GeoRL..29.1383Y}, though at smaller rates of occurrence (Fig.~\ref{fig:icx2}c) than those measured by {\emph{PSP}}. Future investigations are needed to examine (1) the mechanisms via which switchbacks may evolve, and (2) whether the dominant switchback evolution mechanism changes with heliocentric distance. Various studies have also investigated wave activity on switchback boundaries \citep{2020ApJS..246...68M, 2020ApJ...891L..20A,2021AA...650A...3L}: the boundary surface MHD wave (observed at the leading edge of the switchback in Fig.~\ref{fig:icx1} and highlighted in panels (g-h)) and the localized whistler bursts in the magnetic dip (observed at the trailing edge of the switchback in Fig.~\ref{fig:icx1} and highlighted in panels (k-n)).
The whistler wave burst in Fig.~\ref{fig:icx1}(k-n) had a Poynting flux directed toward the Sun, which led to a significant Doppler downshift of the wave frequency in the S/C frame \citep{2020ApJ...891L..20A}. Because of their sunward propagation, these whistler waves can efficiently scatter the strahl electron population. These waves are often observed in the magnetic field magnitude minima at the switchback boundaries, {\emph{i.e.}}, they can be considered a regular feature of switchbacks. Lastly, features related to reconnection are occasionally observed at switchback boundaries, albeit only in about $1\%$ of the observed events \citep{2021A&A...650A...5F,2020ApJS..246...34P}. If it occurs, reconnection between the boundary of switchbacks and the solar wind magnetic field may lead to the disappearance of some switchbacks \citep{2020AGUFMSH034..06D}. Surprisingly, there has been no evidence of reconnection on switchback boundaries at distances greater than $50~R_\odot$. \citet{2020ApJS..246...34P} explained that the absence of reconnection at these boundaries may be due to (a) large, albeit sub-Alfv\'enic, velocity shears at switchback boundaries which can suppress reconnection \citep{2003JGRA..108.1218S}, or that (b) switchback boundaries, commonly characterized as Alfv\'enic current sheets, are isolated RD-type discontinuities that do not undergo local reconnection. \citet{2021A&A...650A...4A} similarly showed that switchback boundaries theoretically favor magnetic reconnection based on their plasma beta and magnetic shear angle characteristics \citep{2003JGRA..108.1218S}. However, the authors concluded that negligible magnetic curvature, {\emph{i.e.}}, highly stretched magnetic field lines \citep{2019JGRA..124.5376A, 2019GeoRL..4612654A}, at switchback boundaries may inhibit magnetic reconnection. Further investigations are needed to explore whether and how magnetic curvature evolves with heliocentric distance.
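The RD/TD/ED/ND classification referenced above follows the standard scheme of \citet{1984JGR....89.5395N}, which compares the normal field fraction and the jump in field magnitude against fixed thresholds. A sketch of that threshold logic (the 0.4 and 0.2 values are assumed typical choices, and real analyses compute $B_n$ from MVA or similar):

```python
def classify_discontinuity(bn_over_b, dB_over_B, thr_bn=0.4, thr_db=0.2):
    """Classify a solar wind discontinuity from
    |B_n|/|B| (normal field fraction) and |dB|/|B| (relative jump in
    field magnitude), compared against threshold values."""
    large_bn = bn_over_b > thr_bn
    large_db = dB_over_B > thr_db
    if large_bn and not large_db:
        return "RD"   # rotational discontinuity
    if not large_bn and large_db:
        return "TD"   # tangential discontinuity
    if not large_bn and not large_db:
        return "ED"   # "either": too ambiguous to distinguish RD from TD
    return "ND"       # "neither": fits no ideal MHD discontinuity type

print(classify_discontinuity(0.6, 0.05))  # RD
print(classify_discontinuity(0.1, 0.5))   # TD
```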
\subsection{Theoretical models}\label{sec: theory switchbacks} In this section, we outline the collection of theoretical models that have been formulated to explain observations of switchbacks. These are based on a variety of physical effects, and there is, as yet, no consensus about the key ingredients needed to explain observations. In the following we discuss each model and related works in turn, organized by the primary physical effect that is assumed to drive switchback formation. These are (i) Interchange reconnection (\S\ref{sub: theory interchange }), (ii) Other solar-surface processes (\S\ref{sub: theory coronal jets}), (iii) Interactions between solar-wind streams (\S\ref{sub: theory stream interactions}), and (iv) Expanding AWs and turbulence (\S\ref{sub: theory alfven waves }). Within each of these broad categories, we discuss the various theories and models, some of which differ in important ways. In addition, some models naturally involve multiple physical effects, which we try to note as appropriate. The primary motivation for understanding the origin of switchbacks is to understand their relevance to the heating and acceleration of the solar wind. As discussed in, {\emph{e.g.}}, \citet{2009LRSP....6....3C}, magnetically driven wind models fall into the two broad classes of wave/turbulence-driven (WTD) and reconnection/loop-opening (RLO) models. A natural question is how switchbacks relate to the heating mechanism and what clues they provide as to the importance of different forms of heating in different types of wind.
With this in mind, it is helpful to further, more broadly, categorize the mechanisms discussed above into ``{\emph{ex situ}}'' mechanisms (covering interchange reconnection and other solar-surface processes) -- in which switchbacks result from transient, impulsive events near the surface of the Sun -- and ``{\emph{in situ}}'' mechanisms (covering stream interactions and AWs), in which switchbacks result from processes within the solar wind as it propagates outwards. An {\emph{ex situ}} switchback formation model, with its focus on impulsive events, naturally ties into an RLO heating scenario; an {\emph{in situ}} formation process, by focusing on local processes in the extended solar wind, naturally ties into a WTD scenario. This is particularly true given the significant energy content of switchbacks in some {\emph{PSP}} observations (see \S\ref{sub:obs: velocity increase}), although there are also important caveats in some of the models. Thus, understanding the origin of switchbacks is key to understanding the origin of the solar wind itself. How predictions from different models hold up when compared to observations may provide us with important clues. This is discussed in more detail in the summary of the implications of different models and how they compare to observations in \S\ref{sub: sb summary theory}. \subsubsection{Interchange reconnection}\label{sub: theory interchange } \begin{figure} \begin{center} \includegraphics[width=.90\columnwidth]{modelsfigure} \caption{Graphical overview covering most of the various proposed switchback-generation mechanisms, reprinted from {\footnotesize \texttt{https://www.nasa.gov/feature/goddard/2021/} \texttt{switchbacks-science-explaining-parker-solar-probe-s-magnetic-puzzle}}.
The mechanisms are classified into those that form switchbacks (1) directly through interchange reconnection ({\emph{e.g.}}, \citealt{2020ApJ...894L...4F,2021ApJ...913L..14H,2020ApJ...896L..18S}); (2) through ejection of flux ropes by interchange reconnection \citep{2021A&A...650A...2D,2022ApJ...925..213A}; (3) from expanding/growing AWs and/or Alfv\'enic turbulence \citep{2020ApJ...891L...2S,2021ApJ...918...62M,2021ApJ...915...52S}; (4) due to roll-up from nonlinear Kelvin-Helmholtz instabilities \citep{2020ApJ...902...94R}; and (5) through magnetic field lines that stretch between sources of slower and faster wind (\citealp{2021ApJ...909...95S}; see also \citealt{2006GeoRL..3314101L}).} \label{fig:icx} \end{center} \end{figure} Interchange reconnection refers to the process whereby open magnetic-field lines reconnect with a closed magnetic loop \citep{2005ApJ...626..563F}. Since this process is expected to be explosive and suddenly change the shape and topology of the field, it is a good candidate for the origin of switchbacks and has been considered by several authors. The basic scenario is shown in Fig.~\ref{fig:icx}. \citet{2020ApJ...894L...4F} first pointed out the general applicability of interchange reconnection to the {\emph{PSP}} observations (the possible relevance to earlier {\emph{Ulysses}} observations had also been discussed in \citealt{2004JGRA..109.3104Y}). They focus on the large transverse flows measured by {\emph{PSP}} as evidence for the global circulation of open flux enabled by the interchange reconnection process \citep{2001ApJ...560..425F,2005ApJ...626..563F}. Given that switchbacks tend to deflect preferentially in the transverse direction (see \S\ref{sub:obs: shapes and sizes}; \citealp{2020ApJS..246...45H}), they argue that these two observations are suggestively compatible: an interchange reconnection event that enables the transverse transport of open flux would naturally create a transverse switchback.
Other authors have focused more on the plasma-physics process of switchback formation, including the reconnection itself and the type of perturbation it creates. \citet{2021A&A...650A...2D} used two-dimensional (2D) particle-in-cell (PIC) simulations to study the hypothesis that switchbacks are flux-rope structures that are ejected by bursty interchange reconnection. They present two 2D simulations, the first focusing on the interchange reconnection itself and the second on the structure and evolution of a flux rope in the solar wind. They reach generally positive conclusions: flux ropes with radial-field reversals, nearly constant $\Bm$, and temperature enhancements are naturally generated by interchange reconnection; and flux-rope initial conditions relax into structures that match {\emph{PSP}} observations reasonably well. Further discussion of the evolution of such structures, in particular how they merge with radius, is given in \citet{2022ApJ...925..213A} (see also \S\ref{sub: theory SB propagation}), who also argue that the complex internal structure of observed switchbacks is consistent with the merging process. A challenge for this scenario is to reproduce the high Alfv\'enicity ($\delta \B\propto \delta \bm{v}$) of {\emph{PSP}} observations, although the merging process of \citet{2022ApJ...925..213A} naturally halts once Alfv\'enic structures develop, suggesting we may be observing this end result at {\emph{PSP}} altitudes. A somewhat modified reconnection geometry has been explored with 2D MHD simulations by \citet{2021ApJ...913L..14H}. They introduce an interchange reconnection process between open and closed regions with discontinuous guide fields, which is enabled by footpoint shearing motions and favors the emission of AWs from the reconnection site.
They find quasi-periodic, intermittent emission of MHD waves, classifying the open-flux regions as ``un-reconnected,'' ``newly reconnected,'' and ``post-reconnected.'' Impulsive AWs, which can resemble switchbacks, robustly propagate outwards in both the newly and post-reconnected regions. While both regions have enhanced temperatures, the newly-reconnected regions have more slow-mode activity and the post-reconnected regions have lower densities, features of the model that may be observable at higher altitudes by {\emph{PSP}}. They also see that flux ropes, which are ejected into the open field lines, rapidly disappear after the secondary magnetic reconnection between the impacting flux rope and the impacted open field lines; it is unclear whether this difference from \citet{2021A&A...650A...2D} is a consequence of the MHD model or the different geometry. Finally, \cite{2020ApJ...903....1Z} focus more on the evolution of magnetic-field structures generated by the reconnection process, which would often be clustered in time as numerous open and closed loops reconnect over a short period. They argue that the strong radial-magnetic-field perturbations associated with switchbacks imply that their complex structures should propagate at the fast magnetosonic speed (but see also \S\ref{sub: theory alfven waves } below), deriving an equation from WKB ({\emph{i.e.}}, the Wentzel, Kramers, and Brillouin approximation) theory for how the structures evolve as they propagate outwards from a reconnection site to {\emph{PSP}} altitudes. The model is compared to data in more detail in \citet{2021ApJ...917..110L}, who use a Markov Chain Monte Carlo technique to fit the six free parameters of the model ({\emph{e.g.}}, wave angles and the initial perturbation) to seven observed variables taken from {\emph{PSP}} time-series data for individual switchbacks. They find reasonable agreement, with around half of the observed switchbacks accepted as good fits to the model.
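The polarization distinction underlying this fast-mode interpretation can be made explicit with a schematic, small-amplitude sketch (stated here for orientation, not taken from the cited works). For a plane-wave-like perturbation $\delta\B\,e^{i\bm{k}\cdot\bm{x}}$ on a uniform background $\B_0$, divergence-free magnetic fields require
\begin{equation}
\nabla\cdot\B=0 \quad\Longrightarrow\quad \bm{k}\cdot\delta\B = 0,
\end{equation}
so a structure that is elongated in the radial direction has a small radial wavenumber and, with $\B_0$ nearly radial near the Sun, a mostly perpendicular $\bm{k}$. The fast mode with $\bm{k}\perp\B_0$ then has $\delta\B\parallel\B_0$ and is maximally compressive, $\delta\Bm\simeq|\delta\B|$, whereas an Alfv\'enic perturbation with $\delta\B\perp\B_0$ gives only $\delta\Bm\simeq|\delta\B|^{2}/(2|\B_0|)$ at small amplitude. In this sense, radial elongation and large $\Bm$ variation go together for fast-mode structures but not for AWs.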
The WKB evolution equation of \cite{2020ApJ...903....1Z} implies that $|\delta \B|/|\B_0|$ grows out to $\sim50~R_\odot$ (whereupon it starts decaying again), and the shape of the proposed structures implies that switchbacks should often be observed as closely spaced double-humped structures. Their assumed fast-mode polarization implies that switchbacks that are more elongated in the radial direction will also exhibit larger variation in $\Bm$, because radial elongation, combined with $\nabla\cdot \B=0$, implies a mostly perpendicular wavenumber. This could be tested directly (see \S\ref{sub:obs: shapes and sizes}) and is a distinguishing feature between the fast-mode and AW-based models (which generically predict $\Bm\sim{\rm const}$; \S\ref{sub: theory alfven waves }). Overall, we see that the various flavors of interchange-reconnection-based models have a number of attractive features, in particular their natural explanation of the likely preferred tangential deflections of large switchbacks (\S\ref{sub:obs: shapes and sizes}; \citealp{2020ApJS..246...45H,2022MNRAS.517.1001L}), along with the bulk tangential flow \citep{2019Natur.576..228K}, and of the possible observed temperature enhancements (\S\ref{sub: obs: thermodynamics}; although to our knowledge, there are not yet clear predictions for separate $T_\perp$ and $T_\|$ dynamics). However, a number of features remain unclear, including (depending on the model in question) the Alfv\'enicity of the structures that are produced and how they survive and evolve as they propagate to {\emph{PSP}} altitudes (see \S\ref{sub: theory SB propagation}). \subsubsection{Other solar-surface processes}\label{sub: theory coronal jets} \cite{2020ApJ...896L..18S} present a phenomenological model for how switchbacks might form from the same process that creates coronal jets, which are small-scale filament eruptions observed in X-ray and extreme ultraviolet (EUV).
Their jet model (proposed in \citealt{2015Natur.523..437S}) involves jets originating as erupting-flux-rope ejections through a combination of internal and interchange reconnection (thus this model would also naturally belong to \S\ref{sub: theory interchange } above). Observations that suggest jets originate around regions of magnetic-flux cancellation ({\emph{e.g.}}, \citealt{2016ApJ...832L...7P}) support this concept. \cite{2020ApJ...896L..18S} propose that the process can also produce a magnetic-field twist that propagates outwards as an AW packet that eventually evolves into a switchback. Although there is good evidence for equatorial jets reaching the outer corona (thus allowing the switchback propagation into the solar wind), their relation to switchbacks is somewhat circumstantial at the present time; further studies of this mechanism could, for instance, attempt to correlate switchback and jet occurrences by field-line mapping. Using 3D MHD simulations, \cite{2021ApJ...911...75M} examined how photospheric motions at the base of a magnetic flux tube might excite motions that resemble switchbacks. They introduced perturbations at the lower boundary of a pressure-balanced magnetic-field solution, considering either a field-aligned, jet-like flow or transverse, vortical flows. Switchback-like fluctuations evolve in both cases: from the jet, a Rayleigh-Taylor-like instability that causes the field to form rolls; from the vortical perturbations, large-amplitude AWs that steepen nonlinearly. However, they also conclude that such perturbations are unlikely to enter the corona: the roll-ups fall back downwards due to gravity and the torsional waves unwind as the background field straightens. They conclude that while such structures are likely to be present in the chromosphere, it is unclear whether they are related to switchbacks as observed by {\emph{PSP}}, since propagation effects will clearly play a dominant role (see \S\ref{sub: theory SB propagation}).
\subsubsection{Interactions between wind streams}\label{sub: theory stream interactions} There exist several models that relate the formation of switchbacks in some way to the interaction between neighbouring solar-wind streams with different speeds. These could be either large-scale variations between separate slow- and fast-wind regions, or smaller-scale ``micro-streams,'' which seem to be observed ubiquitously in imaging studies of the low corona \citep{2018ApJ...862...18D} as well as in {\emph{in~situ}} data \citep{2021ApJ...923..174B,2021ApJ...919...96F}.\footnote{We also caution, however, that switchbacks themselves create large radial velocity perturbations (see \S\ref{sub:obs: velocity increase}), which clearly could not be a cause of switchbacks.} Because these models require the stream shear to overwhelm the magnetic tension, they generically predict that switchbacks start forming primarily outside the Alfv\'en critical zone, once $V_R\gtrsim \va$, and/or once $\beta\gtrsim1$. However, the mechanism of switchback formation differs significantly between the models. \citet{2006GeoRL..3314101L} presented an early proposal of this form to explain {\emph{Ulysses}} observations. Using 2D MHD simulations, they studied the evolution of a large-amplitude parallel (circularly polarized) AW propagating in a region that also includes strong flow shear from a central smaller-scale velocity stream. They find that large-magnitude field reversals develop across the stream due to the stretching of the field. However, the reversals are also associated with large compressive fluctuations in the thermal pressure, $\Bm$, and plasma $\beta$. Although these match various {\emph{Ulysses}} datasets quite well, they are much less Alfv\'enic than most switchbacks observed by {\emph{PSP}}. \cite{2020ApJ...902...94R} consider the scenario where nonlinear Kelvin-Helmholtz instabilities develop across micro-stream boundaries, with the resulting strong turbulence producing switchbacks.
This is motivated in part by the Solar TErrestrial RElations Observatory \citep[{\emph{STEREO}};][]{2008SSRv..136....5K} observations of the transition between ``striated'' (radially elongated) and ``flocculated'' (more isotropic) structures \citep{2016ApJ...828...66D} around the surface where $\beta\approx1$, which is around the Alfv\'en critical zone. Since this region is where the velocity shear starts to be able to overwhelm the stabilizing effect of the magnetic field, it is natural to imagine that the instabilities that develop will contribute to the change in fluctuation structure and the generation of switchbacks. Comparing {\emph{PSP}} and {\emph{ex~situ}} observations with theoretical arguments and numerical simulations, \cite{2020ApJ...902...94R} argue that this scenario can account for a range of solar-wind properties, and that the conditions -- {\emph{e.g.}}, the observed Alfv\'en speed and prevalence of small-scale velocity shears -- are conducive to causing shear-driven turbulence. Their 3D MHD simulations of shear-driven turbulence generate a significant reversed-field fraction that is comparable to {\emph{PSP}} observations, with the distributions of $\Bm$, radial field, and tangential flows having a promising general shape. However, it remains unclear whether turbulence generated in this way is sufficiently Alfv\'enic to explain observations, since they see somewhat larger variation in $\Bm$ than observed in many {\emph{PSP}} intervals (but see \citealt{2021ApJ...923..158R}). A key prediction of this model is that switchback activity should generally increase with distance from the Sun, since the turbulence that creates the switchbacks should continue to be driven so long as there remains sufficient velocity shear between streams. This feature is a marked contrast to models that invoke switchback generation through interchange reconnection or other solar-surface processes.
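The requirement that shear overwhelm magnetic tension, which underlies all of these shear-driven scenarios, can be illustrated with the textbook incompressible vortex-sheet criterion (a standard MHD result quoted here for orientation, not a calculation from the cited papers): a perturbation with wavevector $\bm{k}$ on a sharp shear layer between regions $1$ and $2$ is Kelvin-Helmholtz unstable when
\begin{equation}
\rho_1\rho_2\left[\bm{k}\cdot(\bm{V}_1-\bm{V}_2)\right]^2 \;>\; \frac{(\rho_1+\rho_2)\left[(\bm{k}\cdot\B_1)^2+(\bm{k}\cdot\B_2)^2\right]}{4\pi}.
\end{equation}
For equal densities and equal field-aligned fields on the two sides, this reduces to $|\Delta V|>2\va$, which is why such models generically place the onset of switchback formation around and beyond the Alfv\'en critical zone, where stream-to-stream shears can first exceed the local Alfv\'en speed.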
\cite{2021ApJ...909...95S} consider a simpler geometric explanation -- that switchbacks result from global magnetic-field lines that stretch across streams with different speeds, rather than due to waves or turbulence generation. This situation is argued to naturally result from the global transport of magnetic flux as magnetic-field footpoints move between sources of wind with different speeds, with the footpoint motions sustained by interchange reconnection to conserve magnetic flux \citep{2001ApJ...560..425F}. A field line that moves from a source of slower wind into faster wind (thus traversing faster to slower wind as it moves radially outwards) will naturally reverse its radial field across the boundary due to the stretching by velocity shear. This explanation focuses on the observed asymmetry of the switchbacks -- as discussed in \S\ref{SB_obs}, the larger switchback deflections seem to show a preference to be tangential and particularly in the +T (Parker-spiral) direction, which is indeed the direction expected from the global transport of flux through interchange reconnection.\footnote{Note, however, that \citealt{2020ApJS..246...45H} argue that the global tangential flow asymmetry is not a consequence of the switchbacks themselves.} Field reversals are argued to develop their Alfv\'enic characteristics beyond the Alfv\'en point, since the field kink produced by a coherent velocity shear does not directly produce $\delta \B\propto \delta \bm{v}$ or $\Bm\sim{\rm const}$ (as also seen in the simulations of \citealt{2006GeoRL..3314101L}). \subsubsection{Expanding Alfv\'en waves and turbulence}\label{sub: theory alfven waves } The final class of models relate to perhaps the simplest explanation: that switchbacks are spherically polarized ($\Bm={\rm const}$) AWs (or Alfv\'enic turbulence) that have reached amplitudes $|\delta \B|/|\B_0|\gtrsim 1$ (where $\B_0$ is the background field). 
The idea follows from the realisation \citep{1974sowi.conf..385G,1974JGR....79.2302B} that an Alfv\'enic perturbation -- one with $\delta \bm{v}=\delta\B/\sqrt{4\pi\rho}$ and $\Bm$, $\rho$, and the thermal pressure all constant -- is an exact nonlinear solution to the MHD equations that propagates at the Alfv\'en velocity. This is true no matter the amplitude of the perturbation compared to $\B_0$, a property that seems unique among the zoo of hydrodynamic and hydromagnetic waves (other waves generally steepen into shocks at large amplitudes). Once $|\delta \B|\gtrsim |\B_0|$ such states will often reverse the magnetic field in order to maintain their spherical polarization (they involve a perturbation $\delta \B$ parallel to $\B_0$). Moreover, as they propagate in an inhomogeneous medium, nonlinear AWs behave just like small-amplitude waves \citep{1974JGR....79.1539H,1974JGR....79.2302B}; this implies that in the expanding solar wind, where the decreasing Alfv\'en speed causes $|\delta \B|/ |\B_0|$ to increase, waves propagating outwards from the inner heliosphere can grow to $|\delta \B|\gtrsim |\B_0|$, feasibly forming switchbacks from initially small-amplitude waves. In the process, they may develop the sharp discontinuities characteristic of {\emph{PSP}} observations if, as they grow, the constraint of constant $\Bm$ becomes incompatible with smooth $\delta \B$ perturbations. However, in the more realistic scenario where there exists a spectrum of waves, this wave growth competes with the dissipation of the large-scale fluctuations due to turbulence induced by wave reflection \citep[see, {\emph{e.g.}},][]{1989PhRvL..63.1807V,2007ApJ...662..669V,2009ApJ...707.1659C,2022PhPl...29g2902J} or other effects \citep[{\emph{e.g.}},][]{1992JGR....9717115R}. If dissipation is too fast, it will stop the formation of switchbacks; so, in this formation scenario turbulence and switchbacks are inextricably linked (as is also the case in the scenario of \citealt{2020ApJ...902...94R}).
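The two ingredients of this scenario can be stated compactly (a schematic summary with a standard WKB estimate, neglecting reflection and turbulent dissipation). The Alfv\'enic state
\begin{equation}
\delta\bm{v} = \pm\frac{\delta\B}{\sqrt{4\pi\rho}}, \qquad \Bm=|\B_0+\delta\B| = {\rm const}, \qquad \rho,\ p = {\rm const},
\end{equation}
is an exact solution at any amplitude, and constancy of $\Bm$, {\emph{i.e.}}, $2\B_0\cdot\delta\B+|\delta\B|^2={\rm const}$, forces a perturbation component along $\B_0$, which reverses the field once $|\delta\B|\gtrsim|\B_0|$. Meanwhile, for outward-propagating waves in a super-Alfv\'enic, spherically expanding wind ($\rho\propto r^{-2}$, radial $\B_0\propto r^{-2}$), conservation of wave action gives $|\delta\B|\propto r^{-3/2}$, so the normalized amplitude grows as the Alfv\'en speed decreases,
\begin{equation}
\frac{|\delta\B|}{|\B_0|} \propto r^{1/2} \propto \va^{-1/2}.
\end{equation}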
Thus, understanding switchbacks will require understanding and accurately modelling the turbulence properties, evolution, and amplitude \citep{2018ApJ...865...25U,2019ApJS..241...11C,2013ApJ...776..124P}. Several recent papers have explored the general scenario from different standpoints, finding broadly consistent results. \citet{2020ApJ...891L...2S} and \citet{2022PhPl...29g2902J} used expanding-box MHD simulations to understand how AWs grow in amplitude and develop turbulence. The basic setup involves starting from a purely outwards propagating (fully imbalanced) population of moderate-amplitude randomly phased waves and expanding the box to grow the waves to larger amplitudes. Switchbacks form organically as the waves grow, with their strength ({\emph{e.g.}}, the strength and proportion of field reversals) and properties ({\emph{e.g.}}, the extent to which $\Bm$ is constant across switchbacks) depending on the expansion rate and the wave spectrum. While promising, these simulations are highly idealized -- {\emph{e.g.}}, the expanding box applies only outside the Alfv\'en point, and the equation of state is isothermal. While this has hindered comparison with some observational properties, there are also some promising agreements \citep{2022PhPl...29g2902J}. Similar results were found by \cite{2021ApJ...915...52S} using more comprehensive and realistic simulations that capture the full evolution of the solar wind from coronal base to {\emph{PSP}} altitudes. Their simulation matches well the bulk properties of the slow-Alfv\'enic wind seen by {\emph{PSP}} and develops strong switchbacks beyond $\sim10-20~R_\odot$ (where the amplitude of the turbulence becomes comparable to the mean field). They find switchbacks that are radially elongated, as observed, although the proportion of field reversals is significantly lower than observed (this was also the case in \citealt{2020ApJ...891L...2S}).
It is unclear whether this discrepancy is simply due to insufficient numerical resolution or a more fundamental issue with the AW scenario. \cite{2021ApJ...915...52S} do not see a significant correlation between switchbacks and density perturbations (see \S\ref{sub: obs: thermodynamics} for discussion), while more complex correlations with kinetic thermal properties \citep{2020MNRAS.498.5524W} cannot be addressed in either this model or the simpler local ones \citep{2020ApJ...891L...2S,2022PhPl...29g2902J}. \citet{2021ApJ...918...62M} consider a complementary, analytic approach to the problem, studying how one-dimensional, large-amplitude AWs grow and change shape in an expanding plasma. This shows that expansion necessarily generates small compressive perturbations in order to maintain the wave's spherical polarization as it grows to large amplitudes, providing specific $\beta$-dependent predictions for magnetic and plasma compressibility. The model has been extended to include the Parker spiral by \citet{2022PhPl...29k2903S}, who find that the interaction with a rotating background field causes the development of tangential asymmetries in the switchback deflection directions. These, and the compressive predictions of \citet{2021ApJ...918...62M}, seem to explain various aspects of simulations \citep{2022PhPl...29g2902J}. Overall, these analyses provide simple, geometric explanations for various switchback properties, most importantly the preference for switchbacks to be radially elongated (\S\ref{sub:obs: shapes and sizes} and Table~\ref{tab:shape_size}); however, they are clearly highly idealised, particularly concerning the neglect of turbulence. The models also struggle to reproduce the extremely sharp switchback boundaries seen in {\emph{PSP}} data, which is likely a consequence of considering one-dimensional (1D) waves, since much sharper structures evolve in similar calculations with 2D or 3D structure \citep{2022arXiv220607447S}.
Overall, AW models naturally recover the Alfv\'enicity ($\delta \B\propto\delta\bm{v}$ and nearly constant $\Bm$) and radial elongation of switchbacks seen in {\emph{PSP}} observations, but may struggle with some other features. It remains unclear whether detailed aspects of the preferred tangential deflections of large switchbacks can be recovered \citep{2022A&A...663A.109F,2022MNRAS.517.1001L}, although large-amplitude AWs do develop tangential asymmetries as a consequence of expansion and the rotating Parker spiral \citep{2022PhPl...29g2902J,2022PhPl...29k2903S}. Similarly, further work is needed to understand the compressive properties, in particular in a kinetic plasma.\footnote{Note, however, that AW models do not predict an \emph{absence} of compressive features in switchbacks. Indeed, compressive flows are necessary to maintain spherical polarization as they grow in amplitude due to expansion \citep{2021ApJ...918...62M}.} AW models naturally predict an increase in switchback occurrence with radial distance out to some maximum (whereupon it may decrease again), although the details depend on low-coronal conditions and the influence of turbulence, which remain poorly understood \citep{2022PhPl...29g2902J}. Computational models have also struggled to reproduce the very high switchback fractions observed by {\emph{PSP}}; whether this is due to numerical resolution or more fundamental issues remains poorly understood. \subsubsection{Propagation and evolution of switchbacks}\label{sub: theory SB propagation} A final issue to consider, particularly for understanding the distinction between {\emph{ex~situ}} and {\emph{in~situ}} generation mechanisms, is how a hypothetical switchback evolves as it propagates and is advected outwards in the solar wind. In particular, if switchbacks are to be formed at the solar surface, they must be able to propagate a long way without disrupting or dissolving.
Further, different formation scenarios predict different occurrence rates and size statistics as a function of heliocentric radius (\S\ref{sub:obs: occurrence}), and it is important to understand how they change shape and amplitude in order to assess what solar-wind observations could tell us about coronal conditions. Various studies have focused on large-amplitude Alfv\'enic initial conditions, thus probing the scenario where Alfv\'enic switchback progenitors are released in the low corona ({\emph{e.g.}}, due to reconnection), perhaps with subsequent evolution resulting from the AW/turbulence effects considered in \S\ref{sub: theory alfven waves }. Using 2D MHD simulations, these studies start from an analytic initial condition with a magnetic perturbation that is large enough to reverse the mean field and an Alfv\'enic velocity $\delta \bm{v} = \pm \delta \B/\sqrt{4\pi \rho}$. While \citet{2005ESASP.592..785L} showed that such structures rapidly dissolve if $\Bm$ is not constant across the wave, \citet{2020ApJS..246...32T} reached the opposite conclusion for switchbacks with constant $\Bm$ (as relevant to observations), with their initial conditions propagating unchanged for hundreds of Alfv\'en times before eventually decaying due to parametric instability. They concluded that even relatively short-wavelength switchbacks can in principle survive propagating out to tens of solar radii. Using the same initial conditions, \citet{2021ApJ...914....8M} extended the analysis to include switchbacks propagating through a radially stratified environment. They considered a fixed, near-exponential density profile and a background magnetic field with different degrees of expansion, which changes the radial profile of $\va$ in accordance with different possible conditions in the low corona.
Their basic results are consistent with the expanding-AW theory discussed above (\S\ref{sub: theory alfven waves }), with switchbacks in super-radially expanding background fields maintaining strong field deflections, while those in radially expanding or non-expanding backgrounds unfold as they propagate outwards. The study also reveals a number of non-WKB effects from stratification, such as a gravitational damping from plasma entrained in the switchback. More generally, they point out that after propagating any significant distance in a radially stratified environment, a switchback will have deformed significantly compared to the results from \citet{2020ApJS..246...32T}, either changing shape or unfolding depending on the background profile. This blurs the line between {\emph{ex~situ}} and {\emph{in~situ}} formation scenarios. The above studies, by fixing $\delta \bm{v} = \pm \delta \B/\sqrt{4\pi \rho}$ and $\Bm={\rm const}$, effectively assume that switchbacks are Alfv\'enic. \citet{2022ApJ...925..213A} have considered the evolution and merging of flux ropes that are ejected from interchange reconnection sites in the scenario of \citet{2021A&A...650A...2D}. They show that while flux ropes are likely to form initially with an aspect ratio of near unity, merging of ropes through slow reconnection of the wrapping magnetic field is energetically favorable. This merging continues until the axial flows inside the flux ropes increase to near Alfv\'enic values, at which point the process becomes energetically unfavorable. This process also causes flux ropes to become increasingly radially elongated with distance from the Sun, which is observationally testable (see \S\ref{sub:obs: shapes and sizes}) and may be the opposite of the prediction from AW-based models (since the wave vector rotates towards the parallel direction with expansion). \citet{2021A&A...650A...2D} also argue that the complex, inner structure of observed switchbacks is consistent with this merging process.
The WKB fast-mode-like calculation of \citet{2020ApJ...903....1Z} produces somewhat modified scalings (which nonetheless predict a switchback amplitude that increases with radius), but does not address the stability or robustness of the structures. Considerations relating to the long-time stability of switchbacks are less relevant to the shear-driven models of \citet{2020ApJ...902...94R,2021ApJ...909...95S}, in which switchbacks are generated in the Alfv\'en zone and beyond (where $V_R\gtrsim \va$), so will not have propagated a significant distance before reaching {\emph{PSP}} altitudes. Overall, we see that Alfv\'enic switchbacks are expected to be relatively robust, as are flux-rope structures, although they evolve significantly through merging. This suggests that source characteristics could be retained (albeit with significant changes in shape) as they propagate through the solar wind. If indeed switchbacks are of low-coronal origin, this is encouraging for the general program of using switchbacks to learn about the important processes that heat and accelerate the solar wind. \begin{figure} \begin{center} \includegraphics[width=.87\columnwidth]{fig_bale_fargette_1.jpg} \includegraphics[width=.87\columnwidth]{fig_bale_fargette_2.jpg} \caption{{\it Top:} Variation of plasma properties observed across patches of switchbacks during Enc.~6. The second panel shows that the helium abundance is modulated with the patch profile too, with enhanced fractional density inside patches of switchbacks. The periodicity is consistent with the crossing of funnels emerging from the solar atmosphere. Figure adapted from \cite{2021ApJ...923..174B}. {\it Bottom:} Cartoon showing the association between switchback patches and their periodicity with supergranular and granular structure in the corona.
Figure adapted from \cite{2021ApJ...919...96F}.} \label{fig_sb_span} \end{center} \end{figure} \subsection{Outlook and open questions}\label{SB_discussion} \subsubsection{Connection to Solar sources}\label{sub:solar sources} Because of their persistence in the solar wind, switchbacks can also be considered as tracers of processes occurring in the Solar atmosphere and therefore can be used to identify wind sources at the Sun. Recent work by \citet{2021MNRAS.508..236W} compared the properties of two switchback patches during different Encs. and suggested that patches could be linked to coronal hole boundary regions at the solar surface. \citet{2021MNRAS.508..236W} also showed that these periods, which had bulk velocities of $\sim300$~km~s$^{-1}$, could sometimes be characterized by a particularly low alpha abundance. The cause of this low alpha abundance is not known, but it could be related to the processes and mechanisms governing the release of the solar wind at the surface. Moreover, local modulation in the alpha fraction observed {\emph{in situ}}, and crucially during switchback patches, could be a direct signature of spatial modulation in solar sources; \cite{2021ApJ...923..174B} have identified funnels as a possible source for these structures. Such an interpretation is consistent with the finding presented by \cite{2021ApJ...919...96F}, who have identified some periodicity in patches that is consistent with that expected from Solar supergranulation, and also some smaller scale signatures potentially related to granular structures inside funnels (see Fig.~\ref{fig_sb_span}). Another interesting interpretation has been proposed by \citet{2021ApJ...920L..31N}, who suggest that patches of switchbacks observed in the inner heliosphere by {\emph{PSP}} could then evolve with radial distance into structures with an overall higher bulk velocity, like the microstreams observed by {\emph{Ulysses}} in the polar wind \citep{1995JGR...10023389N}.
According to the authors, microstreams might be the result of accumulated and persistent velocity enhancements resulting from a series of switchbacks or patches. At the same time, \cite{2021ApJ...920L..31N} also propose that the individual switchbacks inside the patches could be generated by minifilament/flux rope eruptions that cause coronal jets \citep{2020ApJ...896L..18S}, so that microstreams are a consequence of a series of such jet-driven switchbacks occurring in close succession. \subsubsection{Implications for our understanding of the solar wind}\label{sub: sb summary theory} Switchback observations hold promise as a way to better constrain our understanding of the solar wind itself. In particular, most of the theoretical models discussed in \S\ref{sec: theory switchbacks} also suggest broader implications for coronal and solar-wind heating, and thus the origin of the solar wind. Given this, although there is currently no consensus about the key ingredients that form switchbacks, if a particular model does gain further observational support, this may lead to more profound shifts in our understanding of the heliosphere. Here, we attempt to broadly characterize the implications of the different formation scenarios in order to highlight the general importance of understanding switchbacks. Further understanding will require constant collaboration between theory and observations, whereby theorists attempt to provide the most constraining and unique tests possible of their proposed mechanisms in order to better narrow down the possibilities. Such a program is strongly supported by the recent observations of switchback modulation on solar supergranulation scales, which suggest a direct connection to solar-surface processes (see above \S\ref{sub:solar sources}).
Above (\S\ref{sec: theory switchbacks}), we broadly categorized models into ``{\emph{ex~situ}}'' and ``{\emph{in~situ}},'' which involved switchbacks being formed on or near the solar surface, or in the bulk solar wind, respectively. These classes then naturally tied into RLO models of coronal heating for {\emph{ex~situ}} models, or to WTD coronal-heating theories for {\emph{in~situ}} switchback formation (with some modifications for specific models). But the significant differences between models narrow down the correspondence even further. Let us consider some of the main proposals and what they would imply, if correct. In the discussion below, we consider some of the same models as discussed in \S\ref{sec: theory switchbacks}, but grouped by their consequences for the solar wind as opposed to the switchback formation mechanism. \citet{2020ApJ...894L...4F} and \citet{2021ApJ...909...95S} propose that switchbacks are intimately related to the global transport of open magnetic flux caused by interchange reconnection, either through the ejection of waves or due to the interaction between streams. This global circulation would have profound consequences more generally, {\emph{e.g.}}, for coronal and solar-wind heating \citep{2003JGRA..108.1157F} or the magnetic-field structure of the solar wind (the sub- and super-Parker spiral; \citealp{2021ApJ...909...95S}).
Some other interchange reconnection or impulsive jet mechanisms -- in particular, the ejection of flux ropes \citep{2021A&A...650A...2D,2022ApJ...925..213A}, or MHD waves \citep{2020ApJ...903....1Z,2021ApJ...913L..14H,2020ApJ...896L..18S} from the reconnection site -- do not necessarily involve the same open-flux-transport mechanism, but clearly favor an RLO-based coronal heating scenario, and have more specific consequences for each individual model; for example, the importance of flux-rope structures to the energetics of the inner heliosphere for \citet{2021A&A...650A...2D}, magnetosonic perturbations in \citet{2020ApJ...903....1Z}, or specific photospheric/chromospheric motions for \citet{2021ApJ...913L..14H,2020ApJ...896L..18S}. The model of \citet{2020ApJ...902...94R} suggests a very different scenario, whereby shear-driven instabilities play a crucial role outside of the Alfv\'en critical zone (where $V_R\gtrsim\va$), where they would set the properties of the turbulent cascade and change the energy budget by boosting heating and acceleration of slower regions. Finally, in the Alfv\'enic turbulence/waves scenario, switchbacks result from the evolution of turbulence to very large amplitudes $|\delta \B|\gtrsim |\B_0|$ \citep{2020ApJ...891L...2S,2021ApJ...915...52S,2021ApJ...918...62M}. In turn, given the low plasma $\beta$, this implies that the energy contained in Alfv\'enic fluctuations is significant even at {\emph{PSP}} altitudes (at least in switchback-filled regions), by which point they should already have contributed significantly to heating. Combined with low-coronal observations \citep[{\emph{e.g.}},][]{2007Sci...318.1574D}, this is an important constraint on wave-heating models \citep[{\emph{e.g.}},][]{2007ApJS..171..520C}. Still, despite the significant differences between models highlighted above, it is also worth noting some similarities. In particular, features of various models are likely to coexist and/or feed into one another.
For example, some of the explosive, {\emph{ex~situ}} scenarios (\S\ref{sub: theory interchange }--\ref{sub: theory coronal jets}) propose that such events release AWs, which then clearly ties into the AW/expansion scenario of \S\ref{sub: theory alfven waves }. Indeed, as pointed out by \citet{2021ApJ...914....8M}, the subsequent evolution of switchbacks as they propagate in the solar wind cannot be \emph{avoided}, which muddies the {\emph{in~situ}}/{\emph{ex~situ}} distinction. In this case, the distinction between {\emph{in~situ}} and {\emph{ex~situ}} scenarios would be more related to the relevance of distinct, impulsive events to wave launching, as opposed to slower, quasi-continuous stirring of motions. Similarly, \citet{2020ApJ...902...94R} discuss how waves propagating upwards from the low corona could intermix and contribute to the shear-driven dynamics that form the basis for their model. These interrelationships, and the coexistence of different mechanisms, should be kept in mind moving forward as we attempt to distinguish observationally between different mechanisms. \subsubsection{Open Questions and Future Encs.} Given their predominance in the solar wind plasma close to the Sun and because each switchback is associated with a net increase in the bulk kinetic energy of the flow -- as they imply a net motion of the centre of mass of the plasma -- it is legitimate to consider whether switchbacks play any dynamical role in the acceleration of the flow and its evolution in interplanetary space. Moreover, it is an open question whether the kinetic energy and Poynting flux carried by these structures have an impact on the overall energy budget of the wind. To summarize, these are some of the main open questions about these structures and their role in solar wind dynamics: \begin{itemize} \item Do switchbacks play any role in solar wind acceleration?
\item Does energy transported by switchbacks constitute an important possible source for plasma heating during expansion? \item Do switchbacks play an active role in driving and maintaining the turbulent cascade? \item Are switchbacks distinct plasma parcels with different properties from the surrounding plasma? \item Do switchbacks continuously decay and reform during expansion? \item Are switchbacks signatures of key processes in the Solar atmosphere and tracers of specific types of solar wind sources (open-field regions vs. streamers)? \item Can switchback-like magnetic-field reversals exist close to the Sun, inside the Alfv\'en radius? \end{itemize} Answering these questions requires taking measurements even closer to the Sun and accumulating more statistics for switchbacks in different types of streams. These should include the fast solar wind, which has so far been seldom encountered by {\emph{PSP}} because of solar minimum and a particularly flat heliospheric current sheet (HCS; which implies a very slow wind close to the ecliptic). Crucially, future {\emph{PSP}} Encs. will provide the ideal conditions for answering these open questions, as the S/C will approach the solar atmosphere further, likely crossing the Alfv\'en radius. Further, this phase will coincide with increasing solar activity, making it possible to sample coronal hole sources of fast wind directly. \section{Solar Wind Sources and Associated Signatures} \label{SWSAS} A major outstanding research question in solar and heliophysics is establishing causal relationships between properties of the solar wind and the source of that same plasma in the solar atmosphere.
Indeed, investigating these connections is one of the major science goals of {\emph{PSP}}, which aims to ``\textit{determine the structure and dynamics of the plasma and magnetic fields at the sources of the solar wind}" \cite[Science Goal \#2;][]{2016SSRv..204....7F} by making {\emph{in situ}} measurements closer to the solar corona than ever before. In this section, we outline the major outcomes, and methods used to relate {\emph{PSP}}’s measurements to specific origins on the Sun. In \S\ref{SWSAS:modeling} we review modeling efforts and their capability to identify source regions. In \S\ref{SWSAS:slowlafv} we outline how {\emph{PSP}} has identified significant contributions to the slow solar wind from coronal hole origins with high alfv\'enicity. In \S\ref{SWSAS:strmblt} we review {\emph{PSP}}’s measurements of streams associated with the streamer belt and slow solar wind more similar to 1~AU measurements. In \S\ref{SWSAS:fstwnd} we examine {\emph{PSP}}’s limited exposure to fast solar wind, and the diagnostic information about its coronal origin carried by electron pitch angle distributions (PADs). In \S\ref{SWSAS:actvrgn} we recap {\emph{PSP}}’s measurements of energetic particles associated with solar activity and impulsive events, as well as how they can disentangle magnetic topology and identify pathways by which the Sun’s plasma can escape. Finally, in \S\ref{SWSAS:sbs} we briefly discuss clues to the solar origin of streams in which {\emph{PSP}} observes switchbacks and refer the reader to \S\ref{MagSBs} for much more detail. \subsection{Modeling and Verifying Connectivity} \label{SWSAS:modeling} Association of solar wind sources with specific streams of plasma measured {\emph{in situ}} in the inner heliosphere requires establishing a connection between the Sun and S/C along which solar wind flows and evolves. 
One of the primary tools for this analysis is combined coronal and heliospheric modeling, particularly of the solar and interplanetary magnetic field, which typically governs the large scale flow streamlines for the solar wind. In support of {\emph{PSP}}, there has been a broad range of such modeling efforts with goals including making advance predictions to test and improve the models, as well as making detailed connectivity estimates informed by {\emph{in situ}} measurements after the fact. The physics contained in coronal/heliospheric models lies on a continuum mediated by computational difficulty, ranging from high computational tractability and minimal physics (Potential Field Source Surface [PFSS] models; \citealp{1969SoPh....9..131A,1969SoPh....6..442S,1992ApJ...392..310W}) to comprehensive (but usually still time-independent) fluid plasma physics \citep[MHD, {\emph{e.g.}},][]{1996AIPC..382..104M,2012JASTP..83....1R} but requiring longer computational run times. Despite these very different overall model properties, in terms of coronal magnetic structure they often agree very well with each other, especially at solar minimum \citep{2006ApJ...653.1510R}. In advance of {\emph{PSP}}’s first solar Enc. in Nov. 2018, \cite{2019ApJ...874L..15R} and \cite{2019ApJ...872L..18V} both ran MHD simulations. They utilized photospheric observations from the Carrington rotation (CR) prior to the Enc. and model parameters which were not informed by any {\emph{in situ}} measurements. Both studies successfully predicted {\emph{PSP}} would lie in negative polarity solar wind during perihelion and cross the HCS shortly after (see Fig.~\ref{FIG:Badman2020}). Via field line tracing, \cite{2019ApJ...874L..15R} in particular pointed to a series of equatorial coronal holes as the likely sources of this negative polarity and subsequent HCS crossing.
This first source prediction was subsequently confirmed with careful comparison of {\emph{in~situ}} measurements of the heliospheric magnetic field and tuning of model parameters. This was done with PFSS modeling \citep{2019Natur.576..237B,2020ApJS..246...23B,2020ApJS..246...54P}, Wang-Sheeley-Arge (WSA) PFSS $+$ Current Sheet modeling \citep{2020ApJS..246...47S}, and MHD modeling \citep{2020ApJS..246...24R,2020ApJS..246...40K,2021A&A...650A..19R}, all pointing to a distinct equatorial coronal hole at perihelion as the dominant solar wind source. As discussed in \S\ref{SWSAS:slowlafv}, the predominant solar wind at this time was slow but with high alfv\'enicity. This first Enc. has proved quite unique in how distinctive its source was, with subsequent Encs. typically connecting to polar coronal hole boundaries and a flatter HCS such that the {\emph{PSP}} trajectory skirts along it much more closely \citep{2020ApJS..246...40K, 2021A&A...650L...3C, 2021A&A...650A..19R}. It is worth discussing how these different modeling efforts made comparisons with {\emph{in situ}} data in order to determine their connectivity. The most common comparison is the timing and heliocentric location of current sheet crossings measured {\emph{in situ}}, which can be compared to the advected polarity inversion line (PIL) predicted by the various models \citep[{\emph{e.g.}},][and Fig.~\ref{FIG:Badman2020}]{2020ApJS..246...47S}. By ensuring a given model predicts when these crossings occur as accurately as possible, the coronal magnetic geometry can be well constrained and provide good evidence that field line tracing through the model is reliable. Further, models can be tuned in order to produce the best agreement possible. This tuning process was used to constrain models using {\emph{PSP}} data to more accurately find sources.
For example, \citet{2020ApJS..246...54P} found evidence that different source surface heights (the primary free parameter of PFSS models) were more appropriate at different times during {\emph{PSP}}’s first two Encs., implying a non-spherical surface where the coronal field becomes approximately radial, and that a higher source surface height was more appropriate for {\emph{PSP}}’s second Enc. compared to its first. This procedure can also be used to distinguish between choices of photospheric magnetic field data \citep[{\emph{e.g.}},][]{2020ApJS..246...23B}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Badman2020.png} \caption{Illustration of mapping procedure and model validation. A PFSS model is run using a source surface height of 2.0~$R_\odot$ and a Global Oscillation Network Group \citep[{\emph{GONG}};][]{1988AdSpR...8k.117H} ZQS magnetogram from 6 Nov. 2018 (the date of the first {\emph{PSP}} perihelion). A black solid line shows the resulting PIL. Running across the plot is {\emph{PSP}}’s first solar Enc. trajectory ballistically mapped to the model outer boundary. The color (red or blue) indicates the magnetic polarity measured {\emph{in situ}}, which compares well to the predicted crossings of the PIL. The resulting mapped field lines are shown as curves connecting {\emph{PSP}}’s trajectory to the Sun. A background map combining EUV images from the {\emph{STEREO}}-A Extreme Ultraviolet Imager \citep[EUVI -- 195~{\AA};][]{2004SPIE.5171..111W} and the Atmospheric Imaging Assembly \citep[AIA -- 193~{\AA};][]{2012SoPh..275...17L} on the Solar Dynamics Observatory \citep[{\emph{SDO}};][]{2012SoPh..275....3P} shows the mapped locations correspond to coronal holes (dark regions), implying the locations of open magnetic field in the model are physical. Figure adapted from \citet{2020ApJS..246...23B}.} \label{FIG:Badman2020} \end{figure*} Further validation of source estimates for {\emph{PSP}} has been made in different ways.
For example, if a given source estimate indicates a specific coronal hole, or polar coronal hole extension or boundary, the model can be used to produce contours of the open field, which can be compared with EUV observations of coronal holes to check whether the modeled source is empirically present and, if so, whether its size and shape are accurately captured \citep[{\emph{e.g.}},][]{2011SoPh..269..367L,2020ApJS..246...23B}. Other {\emph{in situ}} properties besides magnetic polarity have been compared in novel ways: \cite{2020ApJS..246...37R} showed for {\emph{PSP}}’s second Enc. a distinct trend in {\emph{in situ}} density depending on whether the S/C mapped to the streamer belt or outside (see \S\ref{SWSAS:strmblt} for more details). MHD models \citep[{\emph{e.g.}},][]{2020ApJS..246...24R,2020ApJS..246...40K,2021A&A...650A..19R} have also allowed direct timeseries predictions of other {\emph{in situ}} quantities at the location of {\emph{PSP}} which can be compared directly to the measured timeseries. Kinetic physics such as plasma distributions have provided additional clues: \cite{2020ApJ...892...88B} showed cooler electron strahl temperatures for solar wind mapping to a larger coronal hole during a fast wind stream, and hotter strahl temperatures for solar wind mapping to the boundaries of a smaller coronal hole during a slow solar wind stream. This suggests a connection between the strahl temperature and coronal origin (see \S\ref{SWSAS:fstwnd} for more details). \cite{2021A&A...650L...2S} showed further {\emph{in situ}} connections with source type, observing an increase in mass flux (with {\emph{PSP}} and {\emph{Wind}} data) associated with increasing temperature of the coronal source, including variation across coronal holes and active regions (ARs). Mapping sources with coronal modeling for {\emph{PSP}}’s early Encs. has nonetheless highlighted challenges yet to be addressed.
The total amount of open flux predicted by modeling to escape the corona has been observed to underestimate that measured {\emph{in situ}} in the solar wind at 1~AU \citep{2017ApJ...848...70L}, and this disconnect persists at least down to 0.13~AU \citep{2021A&A...650A..18B}, suggesting there exist solar wind sources which are not yet captured accurately by modeling. Additionally, due to its orbital plane lying very close to the solar equator, the solar minimum configuration of a near-flat HCS and streamer belt means {\emph{PSP}} spends a lot of time skirting the HCS \citep{2021A&A...650L...3C}, which limits the types of sources that can be sampled. For example, {\emph{PSP}} has yet to take a significant sample of fast solar wind from deep inside a coronal hole, instead sampling primarily streamer belt wind and coronal hole boundaries. Finally, connectivity modeling is typically time-static, either due to physical model constraints or computational tractability. However, the coronal magnetic field is far from static, with processes such as interchange reconnection potentially allowing previously closed structures to contribute to the solar wind \citep{2020ApJ...894L...4F,2020JGRA..12526005V}, as well as disconnection at the tips of the streamer belt \citep{2020ApJ...894L..19L,2021A&A...650A..30N}. Such transient processes are not captured by the static modeling techniques discussed in this section, but have still been probed with {\emph{PSP}} particularly in the context of streamer blowouts (SBOs; \S\ref{SWSAS:strmblt}) via combining remote observations and {\emph{in situ}} observations, typically requiring multi-S/C collaboration and minimizing modeling uncertainty.
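The open-flux comparison mentioned above rests on a simple estimate: assuming the unsigned radial field $|B_r|$ is independent of latitude at a given heliocentric distance (as supported by {\emph{Ulysses}} observations), a single-point measurement yields the total unsigned open flux $\Phi_{\rm open} = 4\pi r^2 \langle |B_r| \rangle$. A minimal sketch of this estimate, with illustrative numbers rather than actual {\emph{PSP}} data:

```python
import numpy as np

AU_M = 1.496e11  # meters per AU

def open_flux_estimate(br_nT, r_au):
    """Estimate the unsigned open heliospheric flux (in Wb) from in situ
    radial-field samples, assuming |B_r| is uniform over a sphere of
    radius r:  Phi = 4 * pi * r^2 * <|B_r|>.

    br_nT : sequence of radial magnetic field samples [nT]
    r_au  : heliocentric distance of the samples [AU]
    """
    br = np.asarray(br_nT, dtype=float) * 1e-9  # nT -> T
    r = r_au * AU_M                             # AU -> m
    return 4.0 * np.pi * r**2 * np.mean(np.abs(br))

# Illustrative values only (not PSP measurements): |B_r| ~ 3 nT at 1 AU
phi = open_flux_estimate([3.1, -2.9, 3.0, -3.2], 1.0)  # ~8.6e14 Wb
```

Comparing such an {\emph{in situ}} estimate against the open flux integrated over the modeled coronal-hole area is the essence of the "open flux problem" discussed above.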
Such collaborations are rapidly becoming more and more possible, especially with the recent launch of Solar Orbiter \citep[{\emph{SolO}}; ][]{2020AA...642A...1M,2020A&A...642A...4V}, recently yielding for the first time detailed imaging of the outflow of a plasma parcel in the mid corona by {\emph{SolO}} followed by near immediate {\emph{in situ}} measurements by {\emph{PSP}} \citep{2021ApJ...920L..14T}. \subsection{Sources of Slow Alfv\'enic Solar Wind} \label{SWSAS:slowlafv} {\emph{PSP}}'s orbit has a very small inclination angle w.r.t. the ecliptic plane. It was therefore not surprising to find that over the first Encs. the solar wind streams were observed, with few exceptions, to have slower velocities than those expected for the high-speed streams (HSSs) typically originating in polar coronal holes around solar minimum. What has been a surprise, however, is that the slow solar wind streams were seen to have turbulence and fluctuation properties, including the presence of large amplitude folds in the magnetic field, i.e., switchbacks, typical of Alfv\'enic fluctuations usually associated with HSSs. Further out from the Sun, the general dichotomy between fast Alfv\'enic solar wind and slow, non-Alfv\'enic solar wind \citep{1991AnGeo...9..416G} is broken by the so-called slow Alfv\'enic streams, first noticed in the {\emph{Helios}} data acquired at 0.3~AU \citep{1981JGR....86.9199M}. That slow wind interval appeared to have many of the same characteristics as the fast wind, including the presence of predominantly outwards Alfv\'enic fluctuations, except for the overall speed. These were observed at solar maximum, while {\emph{PSP}}’s observations over the first four years have covered solar minimum and the initial appearance of activity of the new solar cycle.
Alfv\'enic slow wind streams have also been observed at 1~AU \citep{2011JASTP..73..653D}, and have been extensively studied in their composition, thermodynamic, and turbulent characteristics \citep{2015ApJ...805...84D, 2019MNRAS.483.4665D}. The results of these investigations point to a similar origin \citep{2015ApJ...805...84D} for fast and Alfv\'enic slow wind streams. Instances of slow Alfv\'enic wind at solar minimum were found by re-examining the {\emph{Helios}} data collected in the inner heliosphere \citep{2019MNRAS.482.1706S, 2020MNRAS.492...39S, 2020A&A...633A.166P}, again supporting a similar origin -- coronal holes -- for fast and slow Alfv\'enic wind streams. Reconstruction of the magnetic sources of the wind seen by {\emph{PSP}} during the first perihelion clearly showed the wind origin to be associated with a small isolated coronal hole. Both ballistic backwards projection in conjunction with the PFSS method \citep{2020ApJS..246...54P, 2020ApJS..246...23B} and global MHD models showed {\emph{PSP}} connected to a negative-polarity equatorial coronal hole, within which it remained for the entire Enc. \citep{2019ApJ...874L..15R, 2020ApJS..246...24R}. The {\emph{in situ}} plasma associated with the small equatorial coronal hole was a highly Alfv\'enic slow wind stream, parts of which were also seen near Earth at L1 \citep{2019Natur.576..237B, 2020ApJS..246...54P}. The relatively high intermittency, low compressibility \citep{2020A&A...633A.166P}, increased turbulent energy level \citep{2020ApJS..246...53C}, and the radial dependence of the spectral break are similar to the fast wind \citep{2020ApJS..246...55D}, while particle distribution functions are also more anisotropic than in non-Alfv\'enic slow wind \citep{2020ApJS..246...70H}.
Whether Alfv\'enic slow wind always originates from small isolated or quasi-isolated coronal holes ({\emph{e.g.}}, narrow equatorward extensions of polar coronal holes), with large expansion factors within the subsonic/supersonic critical point, or whether the boundaries of large, polar coronal holes might also produce Alfv\'enic slow streams, is still unclear. There is, however, one possible implication of the overall high Alfv\'enicity observed by {\emph{PSP}} in the deep inner heliosphere: that all of the solar wind might be born Alfv\'enic, or rather, that Alfv\'enic fluctuations are a universal initial condition of solar wind outflow. Whether this is borne out by {\emph{PSP}} measurements closer to the Sun remains to be seen. \subsection{Near Streamer Belt Wind} \label{SWSAS:strmblt} As already discussed in part in \S\ref{SWSAS:slowlafv}, the slow solar wind exhibits at least two states. One state has similar properties to the fast wind: it is highly Alfv\'enic, has low densities and a low source temperature (low charge states), and appears to originate from inside coronal holes \citep[see for instance the review by][]{2021JGRA..12628996D}. The other state of the slow wind displays greater variability, higher densities and more elevated source temperatures. The proximity of {\emph{PSP}} to the Sun and the extensive range of longitudes scanned by the probe during an Enc. make it inevitable that at least one of the many S/C (the Solar and Heliospheric Observatory [{\emph{SOHO}}; \citealt{1995SSRv...72...81D}], {\emph{STEREO}}, {\emph{SDO}}, \& {\emph{SolO}}) currently orbiting the Sun will image the solar wind measured {\emph{in situ}} by {\emph{PSP}}.
Since coronagraph and heliospheric imagers tend to image plasma located preferentially (but not only) in the vicinity of the so-called Thomson sphere (very close to the sky plane for a coronagraph), the connection between a feature observed in an image and its counterpart measured {\emph{in situ}} is most likely to happen when {\emph{PSP}} crosses the Thomson sphere of the imaging instrument. A first study exploited such orbital configurations that occurred during Enc.~2 when {\emph{PSP}} crossed the Thomson spheres of the {\emph{SOHO}} and {\emph{STEREO}}-A imagers \citep{2020ApJS..246...37R}. In this study, the proton speed measured by SWEAP was used to ballistically trace back the source locations of the solar wind and identify them in coronagraphic observations. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rouillard2020.png} \caption{A zoomed-in view of a Carrington map built from LASCO-C3 bands of image pixels extracted at 8~R$_{\odot}$. The {\emph{PSP}} path corresponds to the points of magnetic connectivity traced back to the radial distance of the map (8~\textit{R}$_\odot$). The connectivity is estimated by assuming the magnetic field follows a Parker spiral calculated from the speed of the solar wind measured {\emph{in situ}} at {\emph{PSP}}. The color coding is defined by the density ($N\times r^2$) measured {\emph{in situ}} by {\emph{PSP}} with red corresponding to high densities and blue to low densities. Figure adapted from \cite{2020ApJS..246...37R}.} \label{FIG:Rouillard2020} \end{figure*} Fig.~\ref{FIG:Rouillard2020} presents, in a latitude versus longitude format, a comparison between the brightness of the solar corona observed by the Large Angle and Spectrometric COronagraph \citep[LASCO;][]{1995SoPh..162..357B} on {\emph{SOHO}} and the density of the solar wind measured {\emph{in~situ}} by {\emph{PSP}} and color-coded along its trajectory.
The Figure shows that as long as the probe remained inside the bright streamers, the density of the solar wind was high, but as soon as it exited the streamers (due to the orbital trajectory of {\emph{PSP}}), the solar wind density suddenly dropped by a factor of four, while the solar wind speed remained roughly constant at around 300~km~s$^{-1}$ \citep{2020ApJS..246...37R}. \cite{2021ApJ...910...63G} exploited numerical models and ultraviolet imaging to show that as {\emph{PSP}} exited streamer flows it sampled slow solar wind released from deeper inside an isolated coronal hole. These measurements illustrate nicely the transitions that can occur between two slow solar wind types over a short time period. While switchbacks were observed in both flows, the patches of switchbacks were also different between the two types of slow wind, with more intense switchback patches measured in the streamer flows \citep{2020ApJS..246...37R}; this is also seen in the spectral power of switchbacks \citep{2021A&A...650A..11F}. \cite{2021ApJ...910...63G} show that both types of solar winds can exhibit very quiet solar wind conditions (with no switchback occurrence). These quiet periods are at odds with theories that relate the formation of the slow wind to a continual reconfiguration of the coronal magnetic field lines due to footpoint exchange, since this should drive strong wind variability continually \citep[{\emph{e.g.}},][]{1996JGR...10115547F}. \cite{2021A&A...650L...3C} also measured a distinct transition between streamer belt and non-streamer belt wind by looking at turbulence properties during the fourth {\emph{PSP}} perihelion when the HCS was flat and {\emph{PSP}} skirted the boundary for an extended period of time.
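The ballistic back-mapping used in these connectivity studies amounts to shifting the measured Carrington longitude by the solar rotation accumulated during the wind's constant-speed radial transit from a chosen inner boundary (e.g., a PFSS source surface) to the S/C. A minimal sketch with illustrative numbers; the function name and the $2.5~R_\odot$ default inner radius are our own choices, not values from the cited studies:

```python
import numpy as np

OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)  # sidereal Carrington rotation rate [rad/s]
RSUN_M = 6.957e8                             # solar radius [m]

def ballistic_footpoint_lon(lon_sc_deg, r_sc_rs, v_sw_kms, r_src_rs=2.5):
    """Ballistically map a spacecraft Carrington longitude back to a
    source longitude at radius r_src, assuming a constant radial wind
    speed (Parker-spiral geometry):

        lon_src = lon_sc + Omega_sun * (r_sc - r_src) / V_sw

    Longitudes in degrees, radii in solar radii, speed in km/s.
    """
    travel_time = (r_sc_rs - r_src_rs) * RSUN_M / (v_sw_kms * 1e3)  # [s]
    return (lon_sc_deg + np.degrees(OMEGA_SUN * travel_time)) % 360.0

# Illustrative: a 300 km/s parcel observed at 35 Rs maps back by ~12 deg
lon_src = ballistic_footpoint_lon(100.0, 35.0, 300.0)
```

The mapped longitude can then be compared with coronagraph features or handed to a PFSS/MHD field-line trace, as in the studies above; slower wind accumulates a larger longitude shift.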
They associated lower turbulence amplitude, higher magnetic compressibility, a steeper turbulence spectrum, lower Alfv\'enicity, and a lower-frequency spectral break with proximity to the HCS, showing that at {\emph{PSP}}'s perihelion distances {\emph{in situ}} data allow an indirect distinction between solar wind sources. Finally, in addition to steady state streamer belt and HCS connectivity, remote sensing and {\emph{in situ}} data on board {\emph{PSP}} have also been used to track transient solar phenomena erupting or breaking off from the streamer belt. \cite{2020ApJS..246...69K} detected the passage of a SBO coronal mass ejection (SBO-CME) during {\emph{PSP}} Enc.~1 and via imaging from {\emph{STEREO}}-A at 1~AU and coronal modeling with WSA associated it with a specific helmet streamer. Similarly, \cite{2020ApJ...897..134L} associated a SBO with a CME measured at {\emph{PSP}} during the second Enc. and via stereographic observations from 1~AU and arrival time analysis modeled the flux rope underlying the structure. \subsection{Fast Solar Wind Sources} \label{SWSAS:fstwnd} During the first eight Encs. of {\emph{PSP}}, there have been only a few observations of fast solar wind, {\emph{e.g.}}, 9 Nov. 2018 and 11 Jan. 2021. Most of the time, {\emph{PSP}} was inside slow wind streams. Thus, exploration of the sources of fast wind remains future work. The first observed fast wind interval was included in the study by \cite{2020ApJ...892...88B}, who investigated the relation between the suprathermal solar wind electron population called the strahl and the shape of the electron VDF in the solar corona. Combining {\emph{PSP}} and {\emph{Helios}} observations, they found that the strahl parallel temperature ($T_{s\parallel}$) does not vary with radial distance and is anticorrelated with the solar wind velocity, which indicates that $T_{s\parallel}$ is a good proxy for the electron coronal temperature.
Fig.~\ref{FIG:Bercic2020} shows the evolution of $T_{s\parallel}$ along a part of the first {\emph{PSP}} orbit trajectory ballistically projected down to the corona to produce sub-S/C points. A PFSS model was used to predict the magnetic connectivity of the sub-S/C points to the solar surface. The observed fast solar wind originates from the equatorial coronal hole \citep{2020ApJS..246...23B}, and is marked by low $T_{s\parallel}$ ($<75$~eV). These values are in excellent agreement with the coronal hole electron temperatures obtained via spectroscopic techniques \citep{1998A&A...336L..90D,2002SSRv..101..229C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Bercic2020.png} \caption{The evolution of $T_{s\parallel}$ with part of the {\emph{PSP}} orbit 1 between 30 Oct. 2018, 00:30~UT (Universal Time) and 23 Nov. 2018, 17:30~UT. The {\emph{PSP}} trajectory is ballistically projected down to the corona ($2~R_\odot$) to produce sub-S/C points. The colored lines denote the magnetic field lines mapped from the sub-S/C points to the solar surface as predicted by the PFSS model with source surface height $2~R_\odot$, the same as used in \cite{2019Natur.576..237B} and \cite{2020ApJS..246...23B}. The white line shows the PFSS neutral line. The points and magnetic field lines are colored w.r.t. hour-long averages of $T_{s\parallel}$. The corresponding image of the Sun is a synoptic map of the 193~{\AA} emission synthesized from {\emph{STEREO}}/EUVI and {\emph{SDO}}/AIA for CR~2210, identical to the one used by \cite{2020ApJS..246...23B} in their Figs. 5 and 9. } \label{FIG:Bercic2020} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Shi2020.png} \caption{ Normalized cross helicity $\sigma_c$ of wave periods of 112--56~s as a function of the radial distance to the Sun $R$ (horizontal axis; $R_S\equiv R_\odot$) and radial speed of the solar wind $V_r$ (vertical axis). The colors of each block represent the median values of the binned data.
Text on each block shows the median value of the block and, in brackets, the number of data points in the block. Figure adapted from \citet{2021A&A...650A..21S}.} \label{FIG:Shi2020} \end{figure*} \cite{2021A&A...650A..21S} analyzed data from the first five Encs. of {\emph{PSP}} and showed how the Alfv\'enicity varies with the solar wind speed. Fig.~\ref{FIG:Shi2020} shows the statistical result of the normalized cross helicity $\sigma_c$ for waves with periods between 112 and 56 seconds, well inside the inertial range of the MHD turbulence. Although the result may be affected by certain individual wind streams due to the limited volume of data, overall, a positive $\sigma_c-V_r$ correlation is observed, indicating that the faster wind is generally more Alfv\'enic than the slower wind. We should emphasize that the result is acquired mostly from measurements of the slow solar wind as the fast wind was rarely observed by {\emph{PSP}}. Thus, the result implies that even for the slow solar wind, the faster stream is more Alfv\'enic than the slow stream. This could be a result of the turbulence having undergone a shorter nonlinear evolution in the faster stream, since nonlinear evolution leads to a decrease of the Alfv\'enicity. \cite{2020ApJS..246...54P} and \cite{2021A&A...650A..21S} showed that the slow wind originating from the equatorial pseudostreamers is Alfv\'enic while that originating from the boundary of the polar coronal holes is low-Alfv\'enic. Thus, this speed-dependence of Alfv\'enicity could also be related to the different sources of the slow wind streams.
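The normalized cross helicity used in this analysis is $\sigma_c = 2\langle \delta\mathbf{v}\cdot\delta\mathbf{b}\rangle / (\langle|\delta\mathbf{v}|^2\rangle + \langle|\delta\mathbf{b}|^2\rangle)$, where $\delta\mathbf{b} = \delta\mathbf{B}/\sqrt{\mu_0\rho}$ is the magnetic fluctuation in Alfv\'en (velocity) units, so that $\sigma_c = \pm1$ for pure Alfv\'enic states. A minimal sketch on synthetic data (the sign convention relative to outward propagation depends on the background field polarity and is not handled here):

```python
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability [H/m]
M_P = 1.6726e-27         # proton mass [kg]

def normalized_cross_helicity(dv_kms, dB_nT, n_cm3):
    """sigma_c = 2 <dv . db> / (<|dv|^2> + <|db|^2>), with
    db = dB / sqrt(mu0 * rho) the field fluctuation in Alfven units.

    dv_kms : (N, 3) velocity fluctuations [km/s]
    dB_nT  : (N, 3) magnetic field fluctuations [nT]
    n_cm3  : mean proton number density [cm^-3]
    """
    dv = np.asarray(dv_kms) * 1e3                       # km/s -> m/s
    rho = n_cm3 * 1e6 * M_P                             # mass density [kg/m^3]
    db = np.asarray(dB_nT) * 1e-9 / np.sqrt(MU0 * rho)  # nT -> m/s (Alfven units)
    num = 2.0 * np.mean(np.sum(dv * db, axis=1))
    den = np.mean(np.sum(dv**2, axis=1)) + np.mean(np.sum(db**2, axis=1))
    return num / den

# Perfectly correlated fluctuations give sigma_c = 1 (anti-correlated: -1)
rng = np.random.default_rng(0)
dv = rng.standard_normal((1000, 3)) * 30.0        # km/s
n = 300.0                                          # cm^-3, PSP-like density
alf = np.sqrt(MU0 * n * 1e6 * M_P)                 # T per (m/s)
dB = dv * 1e3 * alf * 1e9                          # nT, chosen so db == dv
sigma = normalized_cross_helicity(dv, dB, n)
```

In practice $\delta\mathbf{v}$ and $\delta\mathbf{B}$ are band-pass fluctuations about a running mean at the quoted periods, and the resulting $\sigma_c$ values are binned by distance and wind speed as in Fig.~\ref{FIG:Shi2020}.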
\subsection{Active Region Sources} \label{SWSAS:actvrgn} The magnetic structure of the corona shapes the solar wind outflow in at least two different ways: closed coronal field lines provide a geometrical backbone that determines the expansion rate of neighboring open field lines, and the dynamics of the boundaries between closed and open fields provide time-dependent heating and acceleration mechanisms, as occurs with emerging ARs. Emerging ARs reconfigure the local coronal field, often leading to the formation of new coronal holes or coronal hole corridors at their periphery; depending on the latitude, emergence may lead to the formation of large pseudostreamers or the reconfiguration of helmet streamers, thereby changing solar wind distributions. Such reconfiguration is also accompanied by radio bursts and energetic particle acceleration, with at least one energetic particle event seen by IS$\odot$IS, on 4 Apr. 2019, attributed to this type of process \citep{2020ApJS..246...35L,2020ApJ...899..107K}. The event seen by {\emph{PSP}} was very small, with peak 1~MeV proton intensities of $\sim 0.3$ particles~cm$^{-2}$~sr$^{-1}$~s$^{-1}$~MeV$^{-1}$. Temporal association between particle increases and small brightness surges in the EUV observed by {\emph{STEREO}}, which were also accompanied by type III radio emission seen by the Electromagnetic Fields Investigation on {\emph{PSP}}, provided evidence that the source of this event was an AR nearly $80^\circ$ east of the nominal {\emph{PSP}} magnetic footpoint, suggesting field lines expanding over a wide longitudinal range between the AR in the photosphere and the corona. \cite{2021A&A...650A...6C} and \cite{2021A&A...650A...7H} further studied the ARs from these times with remote sensing and {\emph{in situ}} data, including the type III bursts, further associating these escaping electron beams with AR dynamics and open field lines indicated by the type III radiation.
The fractional contribution of ARs to the solar wind is negligible at solar minimum and typically around $40\% \mbox{--} 60\%$ at solar maximum, scaling with sunspot number \citep{2021SoPh..296..116S}. The latitudinal extent of AR solar wind is highly variable between different solar cycles, varying from a band of about $\pm30^\circ$ to $\pm60^\circ$ around the equator. As the solar cycle activity increases, {\emph{PSP}} is expected to measure more wind associated with ARs. Contemporaneous measurements by multiple instruments and opportunities for quadratures and conjunctions with {\emph{SolO}} and {\emph{STEREO}} abound, and should shed light on the detailed wind types originating from AR sources. \subsection{Switchback Stream Sources} \label{SWSAS:sbs} As discussed in previous sections, large-amplitude fluctuations with the characteristics of AWs propagating away from the Sun are ubiquitous in many solar wind streams. Though such features are most frequently found within fast solar wind streams at solar minimum, there are also episodes of Alfv\'enic slow wind visible both at solar minimum and maximum at 0.3~AU and beyond \citep[in the {\emph{Helios}} and {\emph{Wind}} data;][]{2020SoPh..295...46D,2021JGRA..12628996D}. A remarkable aspect of {\emph{PSP}} measurements has been the fact that Alfv\'enic fluctuations also tend to dominate the slow solar wind in the inner heliosphere. Part and parcel of these turbulent regimes are the switchback patches seen throughout the solar Encs. by {\emph{PSP}}, with the possible exception of Enc.~3. As the Probe perihelia get closer to the Sun, there are indications that the clustering of switchbacks into patches remains a prominent feature, though their amplitude decreases w.r.t. the underlying average magnetic field.
The sources of such switchback patches appear to be open-field coronal hole regions, of which at least a few have been identified as isolated equatorial coronal holes (this was the case of the {\emph{PSP}} connection to the Sun throughout the first perihelion), while streams originating at the boundaries of polar coronal holes, although also permeated by switchbacks, appear to be globally less Alfv\'enic. The absence of well-defined patches of switchbacks in measurements at 1~AU or in other S/C data, together with the association of patches with scales similar to supergranulation when projected backwards onto the Sun, are indications that switchback patches are a signature of solar wind source structure. {\emph{PSP}} measurements near the Sun provide compelling evidence for the switchback patches being the remnants of magnetic funnels and supergranules \citep{2021ApJ...923..174B,2021ApJ...919...96F}. \section{Kinetic Physics and Instabilities in the Young Solar Wind} \label{KPIYSW} In addition to the observation of switchbacks, the ubiquity of ion- and electron-scale waves, the deformation of the particle VDF from an isotropic Maxwellian, and the kinetic processes connecting the waves and VDFs have been topics of focused study. The presence of these waves and departures from thermodynamic equilibrium was not wholly unexpected, given previous inner heliospheric observations by {\emph{Helios}} \citep{2012SSRv..172...23M}, but the observations by {\emph{PSP}} at previously unrealized distances have helped to clarify the role they play in the thermodynamics of the young solar wind. In addition, the intensity and large variety of plasma waves in the near-Sun solar wind has offered new insight into the kinetic physics of plasma wave growth.
\subsection{Ion-Scale Waves \& Structures} \label{KPIYSW.ion} The prevalence of electromagnetic ion-scale waves in the inner heliosphere was first revealed by {\emph{PSP}} during Enc.~1 at $36-54~R_\odot$ by \cite{2019Natur.576..237B} and studied in more detail in \cite{2020ApJ...899...74B}; these studies suggested that kinetic plasma instabilities may play a role in ion-scale wave generation. A statistical study by \cite{2020ApJS..246...66B} showed that a radial magnetic field is a favorable condition for these waves: $30\%-50\%$ of the circularly polarized waves were present in a quiet, radial magnetic field configuration. However, single-point S/C measurements limit the ability to answer definitively whether the ion-scale waves still exist in non-radial fields, merely hidden by turbulent fluctuations perpendicular to the magnetic field. Large-amplitude electrostatic ion-acoustic waves are also frequently observed and have been conjectured to be driven by ion-ion and ion-electron drift instabilities \citep{2020ApJ...901..107M,2021ApJ...911...89M}. The ubiquity of these ion-scale waves strongly suggests that they play a role in the dynamics of the expanding young solar wind. The direction of ion-scale wave propagation, however, is ambiguous, and the procedure for Doppler-shifting the wave frequencies from the S/C to the plasma frame is nontrivial. A complementary analysis of the electric field measurements is required \cite[see][for a discussion of initially calibrated DC and low frequency electric field measurements from FIELDS]{2020JGRA..12527980M}. These electric field measurements enabled \cite{2020ApJS..246...66B} to constrain permissible wave polarizations in the plasma frame by Doppler-shifting the cold plasma dispersion relation and comparing to the S/C frame measurements.
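The frame transformation at issue is the standard Doppler relation between plasma-frame and S/C-frame frequencies. A minimal kinematic sketch (the numbers are illustrative, and the actual analysis matches the full cold-plasma dispersion relation rather than a single assumed plane wave):

```python
import numpy as np

def spacecraft_frame_frequency(f_plasma, k_vec, v_sw):
    """Doppler relation f_sc = |f_plasma + k . V_sw / (2 pi)| for a plane
    wave with plasma-frame frequency f_plasma [Hz] and wave vector k_vec
    [rad/m], observed from a S/C embedded in the solar wind flow v_sw [m/s]
    (S/C velocity neglected)."""
    return abs(f_plasma + np.dot(k_vec, v_sw) / (2.0 * np.pi))

# Illustrative: a sunward-propagating wave (negative radial k) in an
# anti-sunward 300 km/s flow is Doppler-shifted down in the S/C frame.
k_sunward = np.array([-2.0e-6, 0.0, 0.0])   # rad/m
v_flow = np.array([3.0e5, 0.0, 0.0])        # m/s
print(spacecraft_frame_frequency(5.0, k_sunward, v_flow))
```

The propagation-direction ambiguity arises because a single measured S/C-frame frequency is consistent with more than one $(f_{\rm plasma}, \mathbf{k})$ pair; matching against the dispersion relation is what breaks the degeneracy.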
They found that a majority of the observed ion-scale waves propagate away from the Sun, suggesting that both left-handed and right-handed wave polarizations are plausible. The question of the origin of these waves and their role in cosmic energy flow remains a topic of fervent investigation; {\emph{cf.}} reviews in \cite{2012SSRv..172..373M,2019LRSP...16....5V}. An inquiry into the plasma measurements during these wave storms is a natural one, given that ion VDFs are capable of driving ion-scale waves after sufficient deviation from local thermodynamic equilibrium \citep[LTE;][]{1993tspm.book.....G}. Common examples of such non-LTE features are relatively drifting components, {\emph{e.g.}}, a secondary proton beam, temperature anisotropies along and transverse to the local magnetic field, and temperature disequilibrium between components. Comprehensive statistical analyses of these VDFs have been performed using {\emph{in situ}} observations from {\emph{Helios}} at 0.3~AU \citep{2012SSRv..172...23M} and at 1~AU \citep[{\emph{e.g.}}, see the review of {\emph{Wind}} observations in][]{2021RvGeo..5900714W}. Many studies employing linear Vlasov theory combined with the observed non-thermal VDFs have implied that the observed structure can drive instabilities leading to wave growth. The question of which modes may dominate, {\emph{e.g.}}, right-handed magnetosonic waves or left-handed ion-cyclotron waves, and under what conditions, remains open, but {\emph{PSP}} is making progress toward solving this mystery. \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1.0\textwidth]{Verniero2020_fig1_v0.pdf}} \caption{Example event on 5 Apr. 2019 (Event \#1) featuring a strong correlation between a proton beam and an ion-scale wave storm. Shown is the (a) radial magnetic field component, (b) angle of wave propagation w.r.t.
B, (c) wavelet transform of B, (d) perpendicular polarization of B, (e) SPAN-i measured moment of differential energy flux, (f) SPAN-i measured moments of temperature anisotropy. In panels (c) and (d), the white dashed–dotted line represents the local $f_{cp}$. Figure adapted from \cite{2020ApJS..248....5V}. \label{fig:verniero2020f1}} \end{figure} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.95\textwidth]{Verniero2020_fig2_v0.pdf}} \caption{Beam evolution for times indicated by the dashed black lines in Fig.~\ref{fig:verniero2020f1}. Left: Proton VDFs, where each line refers to an energy sweep at different elevation angles. Middle: VDF contour elevations that are summed and collapsed onto the $\theta$-plane. Right: VDF contour elevations that are summed and collapsed onto the azimuthal plane. The black arrow represents the magnetic field direction in SPAN-i coordinates, where the head is at the solar wind velocity (measured by SPC) and the length is the Alfv\'en speed. Figure adapted from \cite{2020ApJS..248....5V}. \label{fig:verniero2020f2}} \end{figure} During Enc.~2, {\emph{PSP}} witnessed intense secondary proton beams simultaneous with ion-scale waves at $\sim36~R_\odot$, using measurements from both SWEAP and FIELDS \citep{2020ApJS..248....5V}. The particle instrument suite, SWEAP, comprises a Faraday cup, called the Solar Probe Cup \citep[SPC;][]{2020ApJS..246...43C}, and top-hat electrostatic analyzers, called the Solar Probe ANalyzers, that measure electrons \citep[SPAN-e;][]{2020ApJS..246...74W} and ions \citep[SPAN-i;][]{10.1002/essoar.10508651.1}. The SPANs are partially obstructed by {\emph{PSP}}'s heat shield, leading to measurements of partial moments of the solar wind plasma, but full sky coverage can be leveraged using SPC. The placement of SPAN-i on the S/C was optimal for detecting proton beams, both during initial and ongoing Encs.
The time-of-flight capabilities of SPAN-i can separate protons from other minor ions, such as alpha particles. The instrument measures particle VDFs in 3D $(E,\theta,\phi)$ energy-angle phase-space. These particle VDFs were showcased in \cite{2020ApJS..248....5V}, where they displayed two events featuring the evolution of an intense proton beam simultaneous with ion-scale wave storms. The first of these, shown in Fig.~\ref{fig:verniero2020f1}, involved left-handed circularly polarized waves propagating parallel to a quiet, nearly radial magnetic field; the frequencies of these waves were near the proton gyrofrequency ($f_{cp}$). Analysis of the FIELDS magnetometer data shows in Fig.~\ref{fig:verniero2020f1}a the steady $B_r/|B|$; Fig.~\ref{fig:verniero2020f1}b shows, from MVA, the wave traveling nearly parallel to $\mathbf{B}$; Fig.~\ref{fig:verniero2020f1}c shows the wavelet transform of $\mathbf{B}$ over a narrow frequency range about $f_{cp}$, indicated by the white dashed horizontal line; and Fig.~\ref{fig:verniero2020f1}d represents the wave polarization, where blue is left-handed in the S/C frame and red is right-handed. The SPAN-i moments of differential energy flux are displayed in Fig.~\ref{fig:verniero2020f1}e, and the temperature anisotropy, extracted from the temperature tensor, in Fig.~\ref{fig:verniero2020f1}f. The evolution of proton VDFs reported in \cite{2020ApJS..248....5V} during this event (at the times indicated by the black dashed vertical lines in Fig.~\ref{fig:verniero2020f1}) is displayed in Fig.~\ref{fig:verniero2020f2}. The left column represents the proton VDF in 3D phase-space, where each line represents a different energy sweep at a different elevation angle. The middle column represents contours of the VDF in SPAN-i instrument coordinates $v_r$-$v_z$, summed and collapsed onto the $\theta$-plane.
The right column represents the VDF in the azimuthal plane, where one can notice the portion of the VDF that is obstructed by the heat shield. During this period of time, the proton core was over 50\% in the SPAN-i FOV, and the event was therefore deemed suitable for analysis. During both wave-particle interaction events described in \cite{2020ApJS..248....5V}, 1D fits were applied to the SPAN-i VDFs and input to a kinetic instability solver. Linear Vlasov analysis revealed many wave modes with positive growth rates, and showed that the proton beam was the main driver of the unstable plasma during these times. \cite{2021ApJ...909....7K} further investigated the nature of proton-beam driven kinetic instabilities by using 3D fits of the proton beam and core populations during Enc.~4. Using the plasma instability solver, PLUMAGE \citep{2017JGRA..122.9815K}, they found significant differences in wave-particle energy transfer when comparing results from modeling the VDF as either one or two components. The differences between the waves predicted by the one- and two-component fits were not universal; in some instances, properly accounting for the beam simply enhanced the growth rate of the instabilities predicted by the one-component model, while for other intervals, entirely different sets of waves were predicted to be generated. \vspace{.5 in} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.6\textwidth]{Verniero2021_fig1_v1.pdf}} \caption{Example of a proton VDF displaying a ``hammerhead'' feature. The VDF was transformed from the SPAN-i $\theta$-plane to the plasma frame in magnetic field-aligned coordinates. The black arrow represents the magnetic field, where the head is placed at the solar wind speed and the length is the Alfv\'en speed. The particles diffuse along contours predicted by quasilinear theory, shown by the dashed black curves. Figure adapted from \cite{2022ApJ...924..112V}.
\label{fig:verniero2021f1}} \end{figure} \vspace{.5 in} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1.0\textwidth]{Verniero2021_fig2_v1.pdf}} \caption{The proton VDF displays a ``hammerhead'' feature throughout this interval of enhanced right-handed wave power. The proton VDF at the time indicated by the vertical dashed black line is shown in Fig.~\ref{fig:verniero2021f1}. Figure adapted from \cite{2022ApJ...924..112V}. \label{fig:verniero2021f2}} \end{figure} During Encs. 4 and 5, {\emph{PSP}} observed a series of proton beams in which the proton VDF was strongly broadened in directions perpendicular to the magnetic field, but only at $|v_\parallel| \gtrsim 2-3 v_{\rm A}$, where $v_\parallel$ is the proton velocity parallel to the magnetic field relative to the peak of the proton VDF. At $|v_\parallel | \lesssim 2-3 v_{\rm A}$, the beam protons' velocities were much more aligned with the magnetic-field direction. The resulting level contours of the proton VDF exhibited a ``hammerhead'' shape \citep{2022ApJ...924..112V}. An example VDF is given in Fig.~\ref{fig:verniero2021f1}, at the time indicated by the vertical dashed line in Fig.~\ref{fig:verniero2021f2}. These new complex distributions were quantified by modeling the proton VDF as a sum of three bi-Maxwellians and using the temperature anisotropy of the third population as a proxy for the high-energy asymmetries. In addition, the observations substantiate the need for multi-component proton VDF fitting to more accurately characterize the plasma conditions at kinetic scales, as discussed in \cite{2021ApJ...909....7K}. \cite{2022ApJ...924..112V} found that these hammerhead distributions tended to occur at the same time as intense, right circularly polarized plasma waves at wavelengths comparable to the proton inertial length.
Moreover, the level contours of the VDF within the hammerhead region aligned with the velocity-space diffusion contours that arise in quasilinear theory when protons resonantly interact with parallel-propagating, right circularly polarized, fast-magnetosonic/whistler (FM/W) waves. These findings suggest that the hammerhead distributions occur when field-aligned proton beams excite FM/W waves and subsequently scatter off these waves. Resonant interactions between protons and parallel-propagating FM/W waves occur only when the protons' parallel velocities exceed $\simeq 2.6\ v_{\rm A}$, consistent with the observation that the hammerheads undergo substantial perpendicular velocity-space diffusion only at parallel velocities $\gtrsim 2.6\ v_{\rm A}$ \citep{2022ApJ...924..112V}. Initial studies of the transfer of energy between the ion-scale waves and the proton distribution were performed in \cite{2021A&A...650A..10V}. During an Enc.~3 interval where an ion cyclotron wave (ICW) was observed in the FIELDS magnetometer data and SPC was operating in a mode where it rapidly measures a single energy bin, rather than scanning over the entire range of velocities, the work done by the perpendicular electric field on the measured region of the proton VDF was calculated. The energy transferred between the fields and particles was consistent with the damping of an ICW with a parallel wave-vector of order the inverse thermal proton gyroradius. \subsection{Electron-Scale Waves \& Structures} \label{KPIYSW.electron} Researchers in the 1970s recognized that electron distributions should change dramatically as solar wind electrons propagated away from the Sun \citep{1971ApJ...163..383C,hollweg1974electron,feldman1975solar}.
As satellites sampled regions from $\sim0.3$~AU to outside 5~AU, studies showed that the relative fractions of the field-aligned strahl and quasi-isotropic halo electrons change with radial distance in a manner that is inconsistent with adiabatic motion \citep{maksimovic2005radial,vstverak2009radial,2017JGRA..122.3858G}. The changes in heat flux, which is carried predominantly by the strahl \citep{scime1994regulation, vstverak2015electron}, are also not consistent with adiabatic expansion. Research assessing the relative roles of Coulomb collisions and wave-particle interactions in these changes has often concluded that wave-particle interactions are necessary \citep{phillips1990radial, scudder1979theory, 2019MNRAS.489.3412B, 2013ApJ...769L..22B}. The ambipolar electric field is another mechanism that shapes the electron distributions \citep{lemaire1973kinetic,scudder2019steady}. {\emph{PSP}} has provided new insights into electrons in the young solar wind, and into the role of waves and the ambipolar electric field in their evolution. \cite{2020ApJS..246...22H}, in a study of the first two Encs., found that radial trends inside $\sim0.3$~AU were mostly consistent with earlier studies. The halo density, however, decreases close to the Sun, resulting in a large increase in the strahl-to-halo ratio. In addition, unlike what is seen at 1~AU, the core electron temperature is anti-correlated with solar wind speed \citep{2020ApJS..246...22H,2020ApJS..246...62M}. The core temperature may thus reflect the coronal source region, as also discussed for the strahl parallel temperature in \S4.4 \citep{2020ApJ...892...88B}. \cite{2022ApJ...931..118A} confirmed the small halo density, showing that it continued to decrease in measurements closer to the Sun, and also found that the suprathermal population (halo plus strahl) comprised only 1\% of the solar wind electrons at the closest distances sampled.
The electron heat flux, carried primarily by the strahl \citep{2020ApJ...892...88B, 2021A&A...650A..15H}, is also anticorrelated with solar wind speed \citep{2020ApJS..246...22H}. Closer to the Sun (from 0.125 to 0.25~AU), the normalized electron heat flux is also anti-correlated with beta \citep{2021A&A...650A..15H}. This beta dependence is not consistent with a purely collisional scattering mechanism. The signature of the ambipolar electric field has also been clearly revealed in electron data \citep{2021ApJ...916...16H, 2021ApJ...921...83B} as a deficit of electrons moving back towards the Sun. This deficit occurs more than 60\% of the time inside 0.2~AU. The angular dependence of the deficit is not consistent with Coulomb scattering, and the deficit disappears over the same range of radial distances as the increase in the halo. There is also a decrease in the normalized heat flux. Both observations provide additional support for the essential role of waves. The role of whistler-mode waves in the evolution of solar wind electrons and the regulation of heat flux has long been a topic of interest. Instability mechanisms including temperature anisotropy and several heat flux instabilities have been proposed \citep{1994JGR....9923391G,1996JGR...10110749G,2019ApJ...871L..29V}. Wave studies near 1~AU utilizing data from {\emph{Wind}}, {\emph{STEREO}}, {\emph{Cluster}} \citep{1997SSRv...79...11E} and the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction \citep[{\emph{ARTEMIS}};][]{2011SSRv..165....3A} missions provided evidence for both low-amplitude parallel-propagating whistlers \citep{2014ApJ...796....5L,2019ApJ...878...41T} and large-amplitude highly oblique waves \citep{2010JGRA..115.8104B,2020ApJ...897..126C}.
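The ``normalized'' heat flux in the beta-dependence studies above refers to the parallel heat flux divided by a free-streaming (saturation) value built from the local density and thermal speed. A minimal sketch of that normalization; note that the $3/2$ prefactor is one common convention and an assumption here, as the exact choice varies between papers:

```python
import numpy as np

M_E = 9.109e-31   # electron mass [kg]
K_B = 1.381e-23   # Boltzmann constant [J/K]

def normalized_heat_flux(q_par, n_e, T_e):
    """Heat flux normalized to the free-streaming value
    q_0 = 1.5 * n_e * k_B * T_e * v_th, with v_th = sqrt(2 k_B T_e / m_e).
    q_par [W/m^2], n_e [m^-3], T_e [K]."""
    v_th = np.sqrt(2.0 * K_B * T_e / M_E)
    q0 = 1.5 * n_e * K_B * T_e * v_th
    return q_par / q0
```

Instability thresholds such as the whistler heat flux fan instability are usually expressed as curves in the (normalized heat flux, parallel electron beta) plane, which is why the anti-correlation with beta is diagnostic of the scattering mechanism.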
The FIELDS instrument on {\emph{PSP}}, with waveform capture, spectral, and bandpass filter datasets from both electric field and search coil magnetic field sensors, has enabled the study of whistler-mode waves over a wide range of distances from the Sun and in association with different large-scale structures. This research, in concert with studies of solar wind electrons, has provided critical new evidence for the role of whistler-mode waves in the regulation of electron heat flux and the scattering of strahl electrons to produce the halo. Observational papers have motivated a number of theoretical studies to further elucidate the physics \citep{2020ApJ...903L..23M,2021ApJ...919...42M,2021ApJ...914L..33C,vo2022stochastic}. Enc.~1 waveform data provided the first definitive evidence for the existence of sunward-propagating whistler mode waves \citep{2020ApJ...891L..20A}, an important observation because, if the waves have wavevectors parallel to the background magnetic field, only sunward-propagating waves can interact with the anti-sunward propagating strahl. The whistlers observed near the Sun occur with a range of wave angles from parallel to highly oblique \citep{2020ApJ...891L..20A,2021A&A...650A...8C,2022ApJ...924L..33C,Dudok2022_scm}. Because the oblique whistler waves are elliptically polarized (a mixture of left- and right-hand polarization), oblique waves moving anti-sunward can interact with electrons moving away from the Sun. Particle tracing simulations \citep{2021ApJ...914L..33C,vo2022stochastic} and PIC simulations \citep{2020ApJ...903L..23M,2021ApJ...919...42M,2019ApJ...887..190R} have revealed details of wave-electron interactions. Several studies have examined the association of whistlers with large-scale solar wind structures. There is a strong correlation between large-amplitude waves and the boundaries of switchbacks \citep{2020ApJ...891L..20A}, and smaller waves can fill switchbacks \citep{2021A&A...650A...8C}.
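The directionality argument above follows directly from the first-order cyclotron resonance condition. For a whistler of frequency $\omega < \omega_{ce}$ propagating parallel to $\mathbf{B}$, the resonant parallel electron velocity is

```latex
\omega - k_\parallel v_\parallel = -\,\omega_{ce}
\quad\Longrightarrow\quad
v_\parallel = \frac{\omega - \omega_{ce}}{k_\parallel},
```

so $v_\parallel$ and $k_\parallel$ have opposite signs: a parallel-propagating whistler can only cyclotron-resonate with electrons streaming against it, which is why sunward waves are required to scatter the anti-sunward strahl, while oblique waves, with their mixed polarization, relax this constraint through higher-order resonances.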
The whistlers are also seen primarily in the slow solar wind \citep{2021A&A...650A...9J,2022ApJ...924L..33C}, as initially observed near 1~AU \citep{2014ApJ...796....5L} and in recent studies using the {\emph{Helios}} data between 0.3 and 1~AU \citep{2020ApJ...897..118J}. Several studies have found differences in the evolution of the non-thermal electron distributions between the slow and fast wind, suggesting that different scattering mechanisms are active in the fast and slow wind \citep{pagel2005understanding,vstverak2009radial}. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{SSR2021_figure_Whistlers-eps-converted-to.pdf} \caption{Enlargement of the trailing edge of the switchback of Fig.~\ref{fig:icx1}. Panel (a) shows the magnetic field from MAG with the same color code as in Fig.~\ref{fig:icx1}. Panels (b) and (c) show magnetic and electric field wave perturbations, respectively. Panel (d) displays the dynamic spectrum of magnetic field perturbations $B_w$. The dashed curves in panels (d-f) represent the $f_{LH}$ frequency (bottom curve) and 0.1$f_{ce}$ (upper curve). Panel (e) displays the signed radial component of the Poynting flux. Red colors correspond to sunward propagation. Panel (f) shows the wave normal angle relative to the direction of the background magnetic field.} \label{fig:mag} \end{figure} Fig.~\ref{fig:mag} shows a zoom-in of the trailing edge of the switchback displayed in Fig.~\ref{fig:icx1}. Fig.~\ref{fig:mag}a emphasizes that the local dip in the magnetic field is essentially caused by a decrease of its radial component. This dip coincides with an increase of the ratio between the electron plasma frequency ($f_{pe}$) and the electron gyrofrequency ($f_{ce}$) from 120 to $\approx500$. A polarization analysis reveals a right-handed circular polarization of the magnetic field and an elliptical polarization of the electric field perturbations with a $\pi/2$ phase shift.
The dynamic spectrum in Fig.~\ref{fig:mag}d shows a complex inner structure of the wave packet, which consists of a series of bursts. The phase shift of the magnetic and electric field components transverse to the radial direction attests to the sunward propagation of the observed waves. The sign of the radial component of the Poynting vector (Fig.~\ref{fig:mag}e) changes from positive (sunward) at high frequencies to negative (anti-sunward) at lower frequencies. The frequencies of these wave packets fall between $f_{LH}$ (lower dashed curve in Fig.~\ref{fig:mag}d) and one tenth of $f_{ce}$ (upper dashed curve). This suggests that the observed frequency range of the whistler waves is shifted down by the Doppler effect, as the whistler phase velocity ($300-500$~km~s$^{-1}$) is comparable to the plasma bulk velocity. The observed whistlers are found to have a wide range of wave normal angle values, from quasi-parallel propagation to quasi-electrostatic propagation close to the resonance cone (Fig.~\ref{fig:mag}f), corresponding to the complex structure of the dynamic spectrum and further supporting the idea that the complex wave packet consists of a series of distinct and narrowband wave bursts. The wave frequency in the solar wind plasma frame, as reconstructed from the Doppler shift and the local plasma parameters, is found to be in the range of \SIrange{100}{350}{Hz}, which corresponds to $0.2-0.5\ f_{ce}$. Incidentally, the reconstructed wave frequency can be used to accurately calibrate the electric field measurements and determine the effective length of the electric field antennas, which was found to be approximately \SIrange{3.5}{4.5}{m} in the \SIrange{20}{100}{Hz} frequency range \citep{2020ApJ...891L..20A, 2020ApJ...901..107M}. More details can be found in \cite{2020ApJ...891L..20A}.
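The calibration idea is that, once the plasma-frame frequency and hence the phase speed of a quasi-parallel electromagnetic wave packet are known from the dispersion relation, the expected wave electric field follows from $E \simeq v_{\rm ph} B$, and comparing it with the measured differential voltage yields the effective antenna length. A minimal sketch with illustrative numbers (not those of the cited calibration, which uses the full Doppler-shifted dispersion relation):

```python
def effective_antenna_length(delta_v_volts, b_wave_tesla, v_phase):
    """For a quasi-parallel electromagnetic wave, |E| ~ v_ph * |B|,
    so the effective antenna length is L_eff ~ dV / (v_ph * B)."""
    return delta_v_volts / (v_phase * b_wave_tesla)

# Illustrative: 1 mV differential voltage, 0.6 nT wave amplitude,
# 400 km/s phase speed
print(effective_antenna_length(1.0e-3, 0.6e-9, 4.0e5))  # ~4.2 m
```

With numbers of this order, the estimate falls in the few-meter range quoted in the text, far shorter than the physical antenna, as expected for electrically short antennas in a plasma.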
The population of such sunward-propagating whistlers can efficiently interact with the energetic particles of the solar wind and affect the strahl population, spreading its field-aligned PAD through pitch-angle scattering. For sunward-propagating whistlers around \SIrange{100}{300}{Hz}, the resonance conditions occur for electrons with energies from approximately \SI{50}{eV} to \SI{1}{keV}. This energy range coincides with that of the observed strahl \citep{2020ApJS..246...22H}, potentially leading to efficient scattering of the strahl electrons. Such an interaction can be even more efficient taking into account that some of the observed waves are oblique. For these waves, the effective scattering is strongly enhanced at higher-order resonances \citep{2020ApJ...891L..20A,2021ApJ...911L..29C}, which leads to a regulation of the heat flux, as shown by \cite{2019ApJ...887..190R}, and to an increase in the fraction of the halo distribution with distance from the Sun. {\emph{PSP}} has provided direct evidence for the scattering of strahl into halo by narrowband whistler-mode waves \citep{2020ApJ...891L..20A, 2021ApJ...911L..29C, 2021A&A...650A...9J}. The increase in halo occurs in the same beta range and radial distance range as the whistlers, consistent with the production of halo by whistler scattering. Comparison of waveform capture data and electron distributions from Enc.~1 \citep{2021ApJ...911L..29C} showed that the narrowest strahl occurred when there were either no whistlers or very intermittent low-amplitude waves, whereas the broadest distributions were associated with intense, long-duration waves. Features consistent with an energy-dependent scattering mechanism were observed in approximately half of the broadened strahl distributions, as was also suggested by features in electrons displaying the signature of the ambipolar field \citep{2021ApJ...916...16H}. In a study of bandpass filtered data from Encs.
1 through 9, \cite{2022ApJ...924L..33C} have shown that the narrowband whistler-mode waves that scatter strahl electrons and regulate heat flux are not observed inside approximately 30~$R_\odot$. Instead, large-amplitude electrostatic waves (up to 200~mV/m) in the same frequency range (from the $f_{LH}$ frequency up to a few tenths of $f_{ce}$) are ubiquitous, as shown in Fig.~\ref{fig:ESradial}. The peak amplitudes of the electrostatic (ES) waves ($\sim220$~mV/m) are an order of magnitude larger than those of the whistlers ($\sim40$~mV/m). Within the same region where the whistlers disappear, the deficit of sunward electrons is seen, and the density of the halo relative to the total density decreases. When the deficit was observed, the normalized heat flux--parallel electron beta relationship was not consistent with the whistler heat flux fan instability, which is itself consistent with the loss of whistlers. The differences in the functional form of electron distributions due to this deficit very likely result in changes in the instabilities excited \citep{2021ApJ...916...16H, 2021A&A...656A..31B}. Theoretical studies have examined how changes in the distributions and background plasma properties might change which modes are destabilized. \cite{lopez2020alternative} examined the dependence on beta and on the ratio of the strahl speed to the electron Alfv\'en speed for different electromagnetic and electrostatic instabilities. This ratio decreases close to the Sun. Other studies \citep{2021ApJ...919...42M, 2019ApJ...887..190R,schroeder2021stability} have concluded that multiple instabilities operate sequentially and/or at different distances from the Sun. \begin{figure} \centering {\includegraphics[width=0.8\textwidth]{Cattell2022.png}} \caption{Statistics of whistler-mode waves and ES waves identified in bandpass filter data.
Left hand column, from top to bottom: the number of BBF samples identified as ES waves versus radial distance, the number of BBF samples identified as whistler-mode waves versus radial distance, and the total number of BBF samples in Enc.~1 through Enc.~9 versus radial distance. The right hand column: the electrostatic wave occurrence rate and the whistler-mode wave occurrence rate, both versus radial distance. Note that the drop-off outside 75~$R_\odot$ (0.3~AU) is associated with the impact of the decrease in frequency with radial distance on the algorithm used to identify waves. Figure adapted from \cite{2022ApJ...924L..33C}. \label{fig:ESradial}} \end{figure} Closer to the Sun, other scattering mechanisms must operate, associated with the large narrowband ES waves and the nonlinear waves. Note that in some cases these ES waves have been identified as ion acoustic waves \citep{2021ApJ...919L...2M}. \cite{dum1983electrostatic} discussed anomalous transport and heating associated with ES waves in the solar wind. The lack of narrowband whistler-mode waves close to the Sun and in regions of either low ($<1$) or high ($>10$) parallel electron beta may also be significant for understanding and modeling the evolution of flare-accelerated electrons, other stellar winds, the interstellar medium, and the intra-galaxy cluster medium. {\emph{PSP}} data have been instrumental in advancing the study of electron-resonant plasma waves other than whistler-mode waves. \citet{Larosa2022} presented the first unambiguous observations of the Langmuir z-mode in the solar wind (Langmuir-slow extraordinary mode) using high frequency magnetic field data. This wave mode is thought to play a key role in the conversion of electron-beam driven Langmuir waves into type~III and type~II solar radio emission \citep[{\emph{e.g.}},][and references therein]{Cairns2018}.
However, progress in understanding the detailed kinetic physics powering this interaction has been slowed by a lack of direct z-mode observations in the solar wind. Z-mode wave occurrence was found to be highly impacted by the presence of solar wind density fluctuations, confirming long-held theoretical assumptions. {\emph{PSP}} data also revealed the presence of unidentified electrostatic plasma waves near $f_{ce}$ in the solar wind (Fig.~\ref{fig:nearfce}). \citet{Malaspina2020_waves} showed that these waves occur frequently during solar Encs., but only when fluctuations in the ambient solar wind magnetic field become exceptionally small. \citet{2022ApJ...936....7T} identified that a necessary condition for the existence of these waves is the direction of the ambient solar wind magnetic field vector. They demonstrated that these waves occur for a preferential range of magnetic field orientations, and concluded their study by suggesting that these waves may be generated by the S/C interaction with the solar wind. \citet{Malaspina2021_fce} explored high-cadence observations of these waves, demonstrating that they are composed of many simultaneously present modes, one of which was identified as the electron Bernstein mode. The other wave modes have not been conclusively identified. \citet{Shi2022_waves} and \citet{Ma2021_waves} explored the possibility that these waves are created by nonlinear wave-wave interactions. Identifying the exact wave mode, the origin of these waves near $f_{ce}$, and their impact on the solar wind remain areas of active study. {\emph{PSP}} data from ever decreasing perihelion distances are expected to enable new progress. \begin{figure} \centering {\includegraphics[width=0.8\textwidth]{Malaspina2020_nearfce_example.png}} \caption{The left column shows a spectrogram of electric field data, with $f_{ce}$ indicated by the white line.
The near-cyclotron waves are present at the center of the interval, where fluctuations in the ambient magnetic field (b), solar wind velocity (c), plasma density (d), and electron temperature (e) show decreased variability. The right column shows a high-cadence observation of near-cyclotron waves, where three distinct wave types (A, B, C) are identified. Type A was identified as an electron Bernstein wave. Figure adapted from \citet{Malaspina2021_waves} and \citet{Malaspina2020_waves}. \label{fig:nearfce}} \end{figure} \section{Turbulence} \label{TRBLCE} Turbulence refers to a class of phenomena that characteristically occurs in fluids and plasmas when nonlinear effects are dominant. Nonlinearity creates complexity, the involvement of many degrees of freedom, and an effective lack of predictability. In contrast, linear effects such as viscous damping or waves in uniform media tend to operate more predictably on just a few coordinates or degrees of freedom. Statistical properties in both the space and time domains are crucial to fully understanding turbulence \citep{2004RvMP...76.1015Z, 2015RSPTA.37340154M}. The relative independence of spatial and temporal effects in turbulence presents particular challenges to single-S/C missions such as {\emph{PSP}}. Nevertheless, various methods, ranging from Taylor's frozen-in hypothesis \citep{2015ApJ...801L..18K,2019ApJS..242...12C,2021A&A...650A..22P} to more elaborate ensemble methods \citep{2019ApJ...879L..16B,2020PhRvR...2b3357P,2020ApJS..246...53C,2021ApJ...923...89C}, have been brought to bear to reveal fundamental properties related to the turbulent dynamics of the newly explored regions of the solar atmosphere. This line of research is of specific importance to the {\emph{PSP}} mission in that turbulence is a likely contributor to heating of the corona and the origin of the solar wind -- two of the major goals of the {\emph{PSP}} mission.
In this section we will review the literature related to {\emph{PSP}} observations of turbulence that has appeared in the first four years of the mission. \subsection{Energy Range and Large-Scale (Correlation Scale) Properties}\label{sec:5_large_scale} Turbulence is often described by a cascade process, which in simplest terms describes the transfer of energy across scales, from the largest to the smallest. The largest relevant scales act as reservoirs, and the smallest scales generally contribute most of the dissipation of the turbulent fluctuations. The drivers of turbulence are notionally the large ``energy-containing eddies,'' which may be initially populated by a supply of fluctuations at the boundaries, or by injection by ``stirring'' in the interior of the plasma. In the solar wind, the dynamics at the photosphere is generally believed to inject a population of fluctuations, usually described as Alfv\'enic fluctuations. These propagate upwards into the corona, triggering subsequent turbulent activity \citep{1999ApJ...523L..93M,2009ApJ...707.1659C,2011ApJ...736....3V,2013ApJ...776..124P}. However, large-scale organized shears in the magnetic and velocity fields also exist in the solar atmosphere. While these are not initially in a turbulent state, they may become so at a later time, and eventually participate by enhancing the supply of cascading energy. Stream interaction regions (SIRs) are an example of the latter. Still further stirring mechanisms are possible, such as injection of waves due to scattering of reflected particles upstream of shocks, or by newly ionized particles associated with pickup of interstellar neutrals \citep{2020ApJ...900...94P}. We will not be concerned with this latter class of energy injection processes here. Among the earliest reports from {\emph{PSP}}, \citet{2020ApJS..246...53C} described a number of observations of relevance to the energy range.
In particular, the large-scale edge of the power-law inertial range (see \S\ref{sec:turbulence_inertial_range}) indicates a transition, moving towards larger scales, into the energy range. \citet{2020ApJS..246...53C} report the presence of a shallower ``$1/f$'' range at these larger scales, a feature that is familiar from 1~AU observations \citep{1986PhRvL..57..495M}. It is important to recognize that in general turbulence theory provides no specific expectation for spectral forms at energy-containing scales, as these may be highly situation dependent. Indeed, the implied range of scales at which $1/f$ is observed {\emph{in~situ}} corresponds rather closely to the scales and frequencies at which features are observed in the photospheric magnetic field \citep{2007ApJ...657L.121M} and in the deep corona \citep{2008ApJ...677L.137B}. This correspondence strongly hints that the $1/f$ signal is a vestige of dynamical processes very close to the Sun, possibly in the dynamo. Still, \citet{2020ApJS..246...53C} point out that the measured nonlinear times at the {\it break scale} between inertial and energy ranges suggest that there is sufficient time in transit for the source region to develop nonlinear correlations at that scale. This lends some credence to theories \citep[{\emph{e.g.}},][]{1989PhRvL..63.1807V,2018ApJ...869L..32M} offering an explanation for the local dynamical emergence of $1/f$ signals. This issue remains to be fully elucidated. An important length scale that describes the energy range is the correlation scale, which is nominally the scale of the energy-containing eddies \citep{2012JFM...697..296W}. The notion of a unique scale of this kind is elusive, given that a multiplicity of such scales exists, {\emph{e.g.}}, for MHD and plasmas. Even in incompressible MHD, one deals with one length for each of the two Elsasser energies, as well as separate correlation scales for magnetic field and velocity field fluctuations.
Accounting for compressibility, the fluctuations of density \citep{2020ApJS..246...58P} also become relevant, and when remote sensing ({\emph{e.g.}}, radio) techniques are used to probe density fluctuations observed by {\emph{PSP}} \citep{2020ApJS..246...57K}, the notion of an {\it{effective turbulence scale}} is introduced as a nominal characteristic scale. The correlation lengths themselves are formally defined as integrals computed from correlation functions, and are therefore sensitive to energy range properties. But this definition is notoriously sensitive to low frequency power \citep[or length of data intervals; see][]{2015JGRA..120..868I}. For this reason, simplified methods for estimating correlation lengths are often adopted. \cite{2020ApJS..246...53C} examined two of these in {\emph{PSP}} data -- the so-called {\bf{``$1/e$''}} method \citep[see also][]{2020ApJS..246...58P} and the break point method alluded to above. As expected based on analytical spectral models \citep[{\emph{e.g.}},][]{2007ApJ...667..956M}, correlation scales based on the break point and on $1/e$ are quite similar. A number of researchers have suggested that measured correlation scales near {\emph{PSP}} perihelia are somewhat shorter than what may be expected based on extrapolations of 1~AU measurements \citep{2020AGUFMSH0160013C}. It is possible that this is partly explained as a geometric effect, noting that the standard Parker magnetic field is close to radial in the inner heliosphere, while it subtends increasing angles relative to radial at larger distances. One approach to explaining these observations is based on a 1D turbulence transport code \citep{2020ApJS..246...38A} that distinguishes ``slab'' and ``2D'' correlation scales, a parameterization of correlation anisotropy \citep{1994ApJ...420..294B}.
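As an illustration of the ``$1/e$'' estimator discussed above, the following sketch recovers a known correlation time from a synthetic signal. This is not code from the cited works: the signal (an Ornstein-Uhlenbeck process standing in for a single field component), the cadence, and the correlation time are all hypothetical.

```python
import numpy as np

# Synthetic series with a known correlation time tau_true; its
# autocorrelation is exp(-lag/tau_true), so the 1/e crossing should
# recover roughly tau_true. All parameters are illustrative.
rng = np.random.default_rng(0)
dt = 1.0            # sampling cadence [s] (hypothetical)
tau_true = 500.0    # imposed correlation time [s]
n = 200_000
phi = np.exp(-dt / tau_true)
noise = rng.standard_normal(n)
b = np.empty(n)
b[0] = noise[0]
for i in range(1, n):
    b[i] = phi * b[i - 1] + np.sqrt(1.0 - phi**2) * noise[i]

def correlation_time_1e(x, dt, max_lag):
    """Lag at which the normalized autocorrelation first drops below 1/e."""
    x = x - x.mean()
    var = np.mean(x * x)
    for lag in range(1, max_lag):
        if np.mean(x[:-lag] * x[lag:]) / var < 1.0 / np.e:
            return lag * dt
    return max_lag * dt

tau_est = correlation_time_1e(b, dt, max_lag=5000)  # expected near tau_true
```

The break-point method mentioned in the text instead infers a comparable scale from the frequency at which the measured spectrum departs from its inertial-range power law.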
Solutions of these equations \citep{2020ApJS..246...38A} were able to account for the shorter correlation lengths seen in selected {\emph{PSP}} intervals with nearly radial mean magnetic fields. This represents a partial explanation but leaves open the question of what sets the value of the slab (parallel) correlation scale closer to the Sun. The parallel and perpendicular correlation scales, measured relative to the ambient magnetic field direction, have obvious fundamental importance in parameterizing interplanetary turbulence. These scales also enter into expressions for the decay rates of energy and related quantities in von Karman similarity theory and its extensions \citep{1938RSPSA.164..192D,2012JFM...697..296W}. These length scales, or a subset of them, then enter into essentially all macroscopic theories of turbulence decay in the solar wind \citep{2008JGRA..113.8105B,2010ApJ...708L.116V,2014ApJ...782...81V,2014ApJ...796..111L,2017ApJ...851..117A,2018ApJ...865...25U,2019JPlPh..85d9009C}. While the subject is complex and too lengthy for full exposition here, a few words are in order. First, the perpendicular scale is likely set by the supergranulation scale in the photosphere. A reasonably well accepted number is 35,000~km, although smaller values are sometimes favored. The perpendicular scale is often adopted as a controlling parameter in the cascade, in that the cascade is known to be anisotropic relative to the ambient field direction, favoring perpendicular spectral transfer \citep{1983JPlPh..29..525S,1995ApJ...438..763G,1999PhRvL..82.3444M}. The parallel correlation scale appears to be less well constrained in general, and may be regulated (at least initially in the photosphere) by the correlation {\it{time}} of magnetic field footpoint motions \citep[see, {\emph{e.g.}},][]{2006ApJ...641L..61G}.
Its value at the innermost boundary remains not well determined, even as {\emph{PSP}} observations provide ever better determinations at lower perihelia, where the field direction is often radial. One interesting possibility is that shear-driven nonlinear Kelvin-Helmholtz-like rollups \citep{2006GeoRL..3314101L,2018MNRAS.478.1980H} drive solar wind fluctuations towards a state of isotropy, as reported prior to {\emph{PSP}} based on remote sensing observations \citep{2016ApJ...828...66D,1929ApJ....69...49R}. Shear-induced rollups of this type would tap energy in large-scale shear flows, enhancing the energy range fluctuations and supplementing pre-existing driving of the inertial range \citep{2020ApJ...902...94R}. Such interactions are likely candidates for inducing a mixing, or averaging, between the parallel and perpendicular turbulence length scales in regions of Kelvin-Helmholtz-like rollups \citep{2020ApJ...902...94R}. In general, multi-orbit {\emph{PSP}} observations \citep{2020ApJS..246...53C,2021ApJ...923...89C,2020ApJS..246...38A} provide better determination of not only length scales but also other parameters that characterize the energy-containing scales of turbulence. Knowledge of energy range parameters impacts practical issues such as the selection of appropriate averaging times for describing local bulk properties such as the mean density, a quantity that enters into computations of cross helicity and other measures of the Alfv\'enicity of observed fluctuations \citep{2020ApJS..246...58P}. Possibly the most impactful consequence of energy range parameters is their potential control over the cascade rate, and therefore over the plasma dissipation and heating, whatever the details of those processes may be. One approach is to estimate the energy supply into the smaller scales from the energy-containing range by examining the evolution of the break point between the $1/f$ range and the inertial range \citep{2020ApJ...904L...8W}.
This approach involves assumptions about the relationship of the $1/f$ range to the inertial range. Using three orbits of {\emph{PSP}} data, these authors compared the energy supply rate estimated from the radial evolution of the break point with the estimated perpendicular proton heating rate, finding a reasonable level of agreement in fast and slow wind. Another approach to estimating heating rates in {\emph{PSP}} observations \citep{2020ApJS..246...30M} makes use of approximate connections between the heating rate and the radial gradient of temperature \citep{2007JGRA..112.7101V,2013ApJ...774...96B}, along with theoretical estimates from a form of stochastic heating \citep{2010ApJ...720..503C}. Again, reasonable levels of correspondence are found. Both of these approaches \citep{2020ApJ...904L...8W,2020ApJS..246...30M} derive interesting conclusions based in part on assumptions about the transport theory of temperature, or the transport of turbulent fluctuations. An alternative approach is based entirely on turbulence theory extended to the solar wind plasma and applied locally to {\emph{PSP}} data \citep{2020ApJS..246...48B}. In this case two evaluations are made -- an energy range estimate adapted from von Karman decay theory \citep{2012JFM...697..296W} and a third-order Yaglom-like law \citep{1998GeoRL..25..273P} applied to the inertial range. Cascade rates about 100 times those observed at 1~AU are deduced. The consistency of the estimates of cascade rates obtained from {\emph{PSP}} observations employing these diverse methods suggests a robust interpretation of interplanetary turbulence and of the role of the energy-containing eddies in exerting control over the cascade.
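Schematically, the two estimates just mentioned take the following simplified forms (written here in a common compact notation; the precise prefactors and the anisotropic generalizations used in the cited works differ):
\[
\varepsilon^{\pm}_{\rm vK} \sim \alpha\,\frac{(Z^{\pm})^{2}\,Z^{\mp}}{L^{\mp}},
\qquad
\left\langle \delta z_{\ell}^{\mp}\,\big|\delta \mathbf{z}_{\ell}^{\pm}\big|^{2} \right\rangle
= -\frac{4}{3}\,\varepsilon^{\pm}\,\ell ,
\]
where $Z^{\pm}$ are the rms Elsasser amplitudes, $L^{\mp}$ the corresponding similarity (correlation) scales, $\alpha$ a decay constant of order unity, $\delta z_{\ell}^{\mp}$ the longitudinal Elsasser increment at separation $\ell$, and $\varepsilon^{\pm}$ the cascade rates. The first (von Karman-like) relation is evaluated from energy-range quantities, while the second (third-order, Yaglom-like) law is evaluated directly in the inertial range, which is why agreement between the two is a nontrivial consistency check.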
\subsection{Inertial Range}\label{sec:turbulence_inertial_range} During solar minimum, fast solar wind streams originate near the poles from open magnetic flux tubes within coronal holes, while slow wind streams originate in the streamer belt within a few tens of degrees from the solar equator \citep{2008GeoRL..3518103M}. Because plasma can easily escape along open flux tubes, fast wind is typically observed to be relatively less dense, more homogeneous, and characteristically more Alfv\'enic than slow streams, which are believed to arise from a number of sources, such as the tips of helmet streamers prevalent in ARs \citep{1999JGR...104..521E,2005ApJ...624.1049L}, the opening of coronal loops via interchange reconnection with adjacent open flux tubes \citep{2001ApJ...560..425F}, or rapidly expanding coronal holes \citep{2009LRSP....6....3C}. The first two {\emph{PSP}} close Encs. with the Sun, which occurred during Nov. 2018 (Enc.~1) and Apr. 2019 (Enc.~2), were not only much closer than any S/C before, but also remained at approximately the same longitude w.r.t. the rotating Sun, allowing {\emph{PSP}} to continuously sample a number of solar wind streams rooted in a small equatorial coronal hole \citep{2019Natur.576..237B,2019Natur.576..228K}. Measurements of velocity and magnetic field during these two Encs. reveal a highly structured and dynamic slow solar wind consisting of a quiet and highly Alfv\'enic background with frequent impulsive magnetic field polarity reversals, which are associated with so-called switchbacks \citep[see also][]{2020ApJS..246...39D,2020ApJS..246...45H,2020ApJS..246...67M}.
The 30~min averaged trace magnetic spectra for both quiet and switchback regions exhibit a dual power-law, with an inertial-range Kolmogorov spectral index of $-5/3$ at larger heliocentric distances (as observed in previous observations near 1~AU) and an Iroshnikov-Kraichnan index of $-3/2$ near 0.17~AU, consistent with theoretical predictions from MHD turbulence \citep{2016JPlPh..82f5302C}. These findings indicate that Alfv\'enic turbulence is already developed at $0.17$~AU. Moreover, the turbulent dissipation rate, estimated using the Politano-Pouquet third-order law and the von Karman decay law, increases from $5\times 10^4~{\rm J~kg}^{-1}{\rm s}^{-1}$ at $0.25$~AU to $2\times 10^5~{\rm J~kg}^{-1}{\rm s}^{-1}$ at $0.17$~AU, up to 100 times larger at Perihelion 1 than its measured value at $1$~AU \citep{2020ApJS..246...48B} and in agreement with some MHD turbulent transport models \citep{2018ApJ...865...25U}. \citet{2021ApJ...916...49S} estimated the energy cascade rate at each scale in the inertial range, based on exact scaling laws derived for isentropic turbulent flows in three particular MHD closures corresponding to incompressible, isothermal, and polytropic equations of state, and found the energy cascade rates to be nearly constant in the inertial range at approximately the same value of $2\times10^5~{\rm{J~kg}}^{-1}~{\rm{s}}^{-1}$ obtained by \citet{2018ApJ...865...25U} at $0.17$~AU, independent of the closure assumption. \citet{2020ApJS..246...71C} performed an analysis to decompose {\emph{PSP}} measurements from the first two orbits into MHD modes. The analysis used solar wind intervals between switchbacks to reveal the presence of a broad spectrum of shear Alfv\'en modes, an important fraction of slow modes, and a small fraction of fast modes.
The analysis of the Poynting flux reveals that while most of the energy is propagating outward from the Sun, inversions in the Poynting flux are observed and are consistent with outward-propagating waves along kinked magnetic field lines. These inversions of the energy flux also correlate with the large rotations of the magnetic field. An observed increase of the spectral energy density of inward-propagating waves with increasing frequency suggests back-scattering of outward-propagating waves off of magnetic field reversals and associated inhomogeneities in the background plasma. Wave-mode composition, propagation, and polarization within 0.3~AU were also investigated by~\citet{2020ApJ...901L...3Z} through the probability distribution functions (PDFs) of wave-vectors, with two main findings: (1) wave-vectors cluster nearly parallel and antiparallel to the local background magnetic field for $kd_i<0.02$ and shift to nearly perpendicular for $kd_i>0.02$; and (2) outward-propagating Alfv\'en waves (AWs) dominate over all scales and heliocentric distances, the energy fraction of the inward- and outward-propagating slow-mode components increases with heliocentric distance, while the corresponding fraction of fast modes decreases. \cite{2020ApJS..246...53C} investigated the radial dependence of the trace magnetic field spectra, between 0.17~AU and about 0.6~AU, using MAG data from the FIELDS suite \citep{2016SSRv..204...49B} for each 24~h interval during the first two {\emph{PSP}} orbits. Their analysis shows that the trace spectra of magnetic fluctuations at each radius consist of a dual power-law, only this time involving the $1/f$ range followed by an MHD inertial-range with a spectral index varying from approximately $-5/3$ near 0.6~AU to about $-3/2$ at perihelion (0.17~AU).
Velocity measurements obtained from SWEAP/SPC \citep{2016SSRv..204..131K} were used for the 24~h interval around Perihelion 1 to obtain the trace spectra of velocity and Elsasser field fluctuations, all of which show a power law with a spectral index closer to $-3/2$. The normalized cross-helicity and residual energy, which were also measured for each 24~h interval, show that the turbulence becomes more imbalanced closer to the Sun, {\emph{i.e.}}, the dominance of outward-propagating fluctuations increases with decreasing heliocentric distance. Additional measures of the Alfv\'enicity of velocity and magnetic fluctuations as a function of their scale-size \citep{2020ApJS..246...58P} showed that both the normalized cross-helicity and the scale-dependent angle between velocity and magnetic field decrease with the scale-size in the inertial-range, as suggested by some MHD turbulence models \citep{2006PhRvL..96k5002B,2009PhRvL.102b5003P,2010ApJ...718.1151P}, followed by a sharp decline in the kinetic range, consistent with observations at 1~AU \citep{2018PhRvL.121z5101P}. The transition from a spectral index of $-5/3$ to $-3/2$ with a concurrent increase in cross helicity is consistent with previous observations at 1~AU in which a spectral index of $-3/2$ for the magnetic field is prevalent in imbalanced streams \citep{2010PhPl...17k2905P,2013ApJ...770..125C,2013PhRvL.110b5003W}, as well as with models and simulations of steadily-driven, homogeneous imbalanced Alfv\'enic turbulence \citep{2009PhRvL.102b5003P,2010ApJ...718.1151P}, and reflection-driven (inhomogeneous) Alfv\'en turbulence \citep{2013ApJ...776..124P,2019JPlPh..85d9009C}.
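The normalized cross helicity $\sigma_c$ and residual energy $\sigma_r$ used in these studies are standard combinations of the kinetic and magnetic fluctuation energies, with the magnetic field converted to Alfv\'en units. A minimal sketch follows; the function, the synthetic perfectly anti-correlated input, and the density value are illustrative, not a {\emph{PSP}} pipeline.

```python
import numpy as np

mu0 = 4e-7 * np.pi        # vacuum permeability [SI]
mp = 1.67e-27             # proton mass [kg]

def alfvenicity(v, b, n_p):
    """sigma_c and sigma_r from fluctuation series v [km/s] and b [nT]
    (arrays of shape (N, 3)), given a mean proton density n_p [cm^-3].
    sigma_c = 2<dv.db>/(<dv^2>+<db^2>), sigma_r = (<dv^2>-<db^2>)/(same),
    with b expressed in Alfven (velocity) units."""
    rho = n_p * 1e6 * mp                           # mass density [kg m^-3]
    va = (b * 1e-9) / np.sqrt(mu0 * rho) / 1e3     # b in Alfven units [km/s]
    dv = v - v.mean(axis=0)
    db = va - va.mean(axis=0)
    ev = np.mean(np.sum(dv**2, axis=1))
    eb = np.mean(np.sum(db**2, axis=1))
    evb = np.mean(np.sum(dv * db, axis=1))
    return 2 * evb / (ev + eb), (ev - eb) / (ev + eb)

# Perfectly anti-correlated (Alfvenic) fluctuations: |sigma_c| -> 1,
# sigma_r -> 0. Numbers below are hypothetical.
rng = np.random.default_rng(1)
v_fluct = rng.standard_normal((4096, 3)) * 20.0    # km/s
n_p = 300.0                                        # cm^-3
rho = n_p * 1e6 * mp
b_fluct = -v_fluct * 1e3 * np.sqrt(mu0 * rho) * 1e9  # nT, Alfven-balanced
sc, sr = alfvenicity(v_fluct, b_fluct, n_p)
```

Whether $\sigma_c=\pm1$ corresponds to outward or inward propagation depends on the local magnetic polarity, which is why published analyses typically rectify the sign using the background field direction.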
A similar transition was also found by \citet{2020ApJ...902...84A}, where the Hilbert-Huang Transform (HHT) was used to investigate scaling properties of magnetic-field fluctuations as a function of heliocentric distance, showing that magnetic fluctuations exhibit multifractal scaling properties far from the Sun, with an $f^{-5/3}$ power spectrum, while closer to the Sun the corresponding scaling becomes monofractal, with an $f^{-3/2}$ power spectrum. Assuming ballistic propagation, \citet{2021ApJ...912L..21T} identified two 1.5~h intervals corresponding to the same plasma parcel traveling from 0.1 to 1~AU during the first {\emph{PSP}} and {\emph{SolO}} radial alignment, and showed that the solar wind evolved from a highly Alfv\'enic, less developed turbulent state near the Sun to a more fully developed and intermittent state near 1~AU. \citet{2021A&A...650A..21S} performed a statistical analysis to investigate how the turbulence properties at MHD scales depend on the type of solar wind stream and on the distance from the Sun. Their results show that the spectrum of magnetic field fluctuations steepens with distance from the Sun while the velocity spectrum remains unchanged. Faster solar wind is found to be more Alfv\'enic, dominated by outward-propagating waves (imbalanced), and with low residual energy. Energy imbalance (cross helicity) and residual energy decrease with heliocentric distance. Turbulent properties can also vary among different streams with similar speeds, possibly indicating a different origin. For example, slow wind emanating from a small coronal hole has much higher Alfv\'enicity than a slow wind that originates from the streamer belt. \citet{2021A&A...650L...3C} investigated the turbulence and acceleration properties of the streamer-belt solar wind, near the HCS, using measurements from {\emph{PSP}}'s fourth orbit.
During this close Enc., the properties of the solar wind measured inbound were substantially different from those measured outbound. In the latter, the solar wind was observed to have smaller turbulent amplitudes, higher magnetic compressibility, a steeper magnetic spectrum (closer to $-5/3$ than to $-3/2$), lower Alfv\'enicity, and a $1/f$ break at much smaller frequencies. The transition from a more Alfv\'enic (inbound) wind to a non-Alfv\'enic wind occurred at an angular distance of about $4^\circ$ from the HCS. As opposed to the inbound Alfv\'enic wind, in which the measured energy fluxes are consistent with reflection-driven turbulence models~\citep{2013ApJ...776..124P,2019JPlPh..85d9009C}, the streamer-belt fluxes are significantly lower than those predicted by the same models, suggesting the streamer-belt wind may be subject to additional acceleration mechanisms. Interpretations of the spectral analysis of temporal signals to investigate scaling laws in the inertial range have thus far relied on the validity of Taylor's Hypothesis (TH). \citet{2021A&A...650A..22P} investigated the applicability of TH in the first four orbits based on a recent model of the space-time correlation function of Alfv\'enic turbulence \citep[incompressible MHD;][]{2018ApJ...858L..20B,2020PhRvR...2b3357P}. In this model, the temporal decorrelation of the turbulence is dominated by hydrodynamic sweeping, under the assumption that the turbulence is strongly anisotropic and Alfv\'enic. The only parameter in the model that controls the validity of TH is $\epsilon=\delta u_0/V_\perp$, where $\delta u_0$ is the rms velocity of the large-scale velocity field and $V_\perp$ is the velocity of the S/C in the plasma frame perpendicular to the local field. The spatial energy spectrum of turbulent fluctuations is recovered using conditional statistics based on nearly perpendicular sampling.
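In practical terms, TH maps S/C-frame frequencies onto wavenumbers. Schematically (a simplified statement of the standard hypothesis, not of the sweeping model itself),
\[
k \simeq \frac{2\pi f_{\rm sc}}{V},
\qquad
P(k)\,\Big|_{k = 2\pi f_{\rm sc}/V} \;\propto\; P_{\rm sc}(f_{\rm sc}),
\]
where $f_{\rm sc}$ is the S/C-frame frequency, $V$ is the flow speed in the S/C frame, and $P_{\rm sc}$ is the measured frequency spectrum. The mapping is reliable when the sweeping parameter $\epsilon=\delta u_0/V_\perp$ defined above is small, a condition that becomes progressively harder to satisfy at {\emph{PSP}}'s lowest perihelia, where the S/C speed no longer greatly exceeds the fluctuation and wave speeds.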
The analysis is performed on four selected 24~h intervals from {\emph{PSP}} during the first four perihelia. TH is observed to still be marginally applicable, and both the new analysis and the standard TH assumption lead to similar results. A general prescription to obtain the energy spectrum when TH is violated is summarized, and is expected to be relevant as {\emph{PSP}} gets closer to the Sun. \citet{2021ApJ...915L...8D} investigated the anisotropy of the slow Alfv\'enic solar wind in the kinetic range from {\emph{PSP}} measurements. The magnetic compressibility is consistent with kinetic Alfv\'en wave (KAW) turbulence at sub-ion scales. A steepened transition range is found between the (MHD) inertial and kinetic ranges in all directions relative to the background magnetic field. Strong power spectrum anisotropy is observed in the kinetic range, and a smaller anisotropy in the transition range. Scaling exponents for the power spectrum in the transition range are found to be $\alpha_{t\|}=-5.7\pm 1.0$ and $\alpha_{t\perp}=-3.7\pm0.3$, while for the kinetic range the same exponents are $\alpha_{k\|}=-3.12\pm0.22$ and $\alpha_{k\perp}=-2.57\pm0.09$. The wavevector anisotropies in the transition and kinetic ranges follow the scalings $k_\|\sim k_\perp^{0.71\pm0.17}$ and $k_\|\sim k_\perp^{0.38\pm0.09}$, respectively. \subsection{Kinetic Range, Dissipation, Heating and Implications} {\emph{In~situ}} measurements have revealed that the solar wind is not adiabatically cooling, as both the ion and electron temperatures decay at considerably slower rates than the adiabatic cooling rates induced by the radial expansion of the solar wind \citep[{\emph{e.g.}},][]{2020ApJS..246...70H,2020ApJS..246...62M}. Thus, some heating mechanisms must exist in the solar wind. As the solar wind is nearly collisionless, viscosity, resistivity, and thermal conduction are unlikely to contribute much to its heating.
Hence, turbulence is believed to be the fundamental process that heats the solar wind plasma. In the MHD inertial range, the turbulence energy cascades from large scales to small scales without dissipation. Near or below the ion kinetic scale, various kinetic processes, such as wave-particle interactions through cyclotron resonance or Landau damping, and the stochastic heating of the particles, become important. These kinetic processes effectively dissipate the turbulence energy and heat the plasma. As already discussed in \S\ref{sec:5_large_scale}, \citet{2020ApJ...904L...8W} show that the estimated energy supply rate at the large scales agrees well with the estimated perpendicular heating rate of the solar wind, implying that turbulence is the major heating source of the solar wind. However, to fully understand how the turbulence energy eventually converts to the internal energy of the plasma, we must analyze the magnetic field and plasma data at and below the ion scales. \begin{figure} \centering \includegraphics[width=\hsize]{Bowen2020PRLfigure1.PNG} \caption{(a,b) Examples of {\emph{PSP}}/FIELDS magnetic field spectra with 3PL (three spectral range continuous power-law fit, blue) and 2PL (two spectral range continuous power-law fit, orange) fits. Vertical lines show 3PL spectral breaks. (c,d) Spectral indices for the data (black), 3PL (blue), and 2PL (orange) fits. Horizontal lines are shown corresponding to spectral indices of $-8/3$ (teal) and $-4$ (purple). The top interval has statistically significant spectral steepening, while the bottom interval is sufficiently fit by 2PL. Figure adapted from \cite{2020PhRvL.125b5102B}.} \label{fig:Bowen2020PRLFigure1} \end{figure} \citet{2020PhRvL.125b5102B} analyze data from the MAG and SCM onboard {\emph{PSP}} during Enc.~1, when a slow Alfv\'enic wind originating from an equatorial coronal hole was measured.
They show that a very steep transition range with a spectral slope close to $-4$ is observed in the magnetic field power spectrum around the ion gyroscale ($k_\perp \rho_i \sim 1$, where $k_\perp$ is the perpendicular wave number and $\rho_i$ is the ion thermal gyroradius) (Fig.~\ref{fig:Bowen2020PRLFigure1}). This transition range is steeper than both the inertial range ($k_\perp \rho_i \ll 1$) and the deep kinetic range ($k_\perp \rho_i \gg 1$). \citet{2020PhRvL.125b5102B} estimate that if the steep spectrum corresponds to some dissipation mechanism, then more than 50\% of the turbulence energy is dissipated in the ion-scale transition range, which is a sufficient energy source for solar wind heating. \citet{2020ApJS..246...55D} conduct a statistical study of the magnetic field spectrum based on {\emph{PSP}} data from Enc.~2 and show that the break frequency between the inertial range and the transition range decreases with radial distance to the Sun and is on the order of the ion cyclotron resonance frequency. \citet{2020ApJ...897L...3H} find that, in a one-day interval during Enc.~1, the slow wind observed by {\emph{PSP}} contains both right-handed polarized KAWs and left-handed polarized Alfv\'en ion cyclotron waves (ACWs) at sub-ion scales. Many previous observations have shown that at 1~AU KAWs dominate the turbulence at sub-ion scales \citep[{\emph{e.g.}},][]{2010PhRvL.105m1101S}, and KAWs can heat the ions through Landau damping of their parallel electric field. However, the results of \citet{2020ApJ...897L...3H} imply a possible role of ACWs, at least in the very young solar wind, in heating the ions through cyclotron resonance.
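Spectral indices of the kind shown in Fig.~\ref{fig:Bowen2020PRLFigure1} can be estimated by log-log regression within each subrange. The toy sketch below fits each subrange independently on a synthetic spectrum; it is not the continuous 3PL fitting procedure of the cited work, and the break frequency and indices used are illustrative only.

```python
import numpy as np

def spectral_slope(f, p, f_lo, f_hi):
    """Least-squares power-law index of spectrum p(f) over [f_lo, f_hi]."""
    m = (f >= f_lo) & (f <= f_hi)
    return np.polyfit(np.log10(f[m]), np.log10(p[m]), 1)[0]

# Synthetic spectrum: a -5/3 inertial range breaking to -4 above 1 Hz
# (hypothetical break frequency, chosen only for illustration).
f = np.logspace(-3, 1, 400)
p = np.where(f < 1.0, f ** (-5.0 / 3.0), f ** (-4.0))
s_inertial = spectral_slope(f, p, 1e-3, 0.3)    # expect about -5/3
s_transition = spectral_slope(f, p, 2.0, 10.0)  # expect about -4
```

On real data the fit windows must avoid the break itself, and the statistical significance of a steepened segment has to be tested against the simpler two-range model, which is the point of the 3PL versus 2PL comparison in the figure.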
\citet{2021ApJ...915L...8D}, by binning the Enc.~1 data by the $\mathbf{V-B}$ angle, show that the magnetic field spectrum is anisotropic in both the transition range and the kinetic range, with $k_\perp \gg k_\parallel$, and that the anisotropy scaling relation between $k_\perp$ and $k_\parallel$ is consistent with the critically-balanced KAW model at sub-ion scales (see \S\ref{sec:turbulence_inertial_range}). Another heating mechanism is stochastic heating \citep{2010ApJ...720..503C}, which becomes important when the fluctuation of the magnetic field at the ion gyroscale is large enough that the magnetic moment of the particles is no longer conserved. \citet{2020ApJS..246...30M} calculate the stochastic heating rate using data from the first two Encs. and show that the stochastic heating rate $Q_\perp$ decays with radial distance as $Q_\perp \propto r^{-2.5}$. Their result reveals that stochastic heating may be more important in the solar wind closer to the Sun. Last, it is known that the development of turbulence leads to the formation of intermittency (see \S\ref{sec:5_intermit} for more detailed discussions). In plasma turbulence, intermittent structures such as current sheets are generated around the ion kinetic scale. \citet{2020ApJS..246...46Q} adopt the partial variance of increments (PVI) technique to identify intermittent magnetic structures using {\emph{PSP}} data from the first Enc. They show that statistically there is a positive correlation between the ion temperature and the PVI, indicating that intermittent structures may contribute to the heating of the young solar wind through processes like magnetic reconnection in intermittent current sheets. At the end of this subsection, it is worthwhile to make several comments. First, the Faraday Cup onboard {\emph{PSP}} (SPC) has a flux-angle operation mode, which allows measurements of the ion phase space density fluctuations at a cadence of 293~Hz \citep{2020ApJS..246...52V}.
Thus, combination of the flux-angle measurements with the magnetic field data will greatly help us understand the kinetic turbulence. Second, more studies are necessary to understand behaviors of electrons in the turbulence, though direct measurement of the electron-scale fluctuations in the solar wind is difficult with current plasma instruments. \citet{2021A&A...650A..16A} show that by distributing the turbulence heating properly among ions and electrons in a turbulence-coupled solar wind model, differential radial evolutions of ion and electron temperatures can be reproduced. However, how and why the turbulence energy is distributed unevenly among ions and electrons are not fully understood yet and need future studies. Third, during the Venus flybys, {\emph{PSP}} traveled through Venus's induced magnetosphere and made high-cadence measurements. Thus, analysis of the turbulence properties around Venus, {\emph{e.g.}}, inside its magnetosheath \citep{2021GeoRL..4890783B}, will also be helpful in understanding the kinetic turbulence (see \S\ref{PSPVENUS}). Last, \citet{2021ApJ...912...28M} compare the turbulence properties inside and outside the magnetic switchbacks using the {\emph{PSP}} data from the first two Encs. They find that the stochastic heating rates and spectral slopes are similar but other properties, such as the intermittency level, are different inside and outside the switchbacks, indicating that some processes near the edges of switchbacks, such as the velocity shear, considerably affect the turbulence evolution inside the switchbacks (see \S\ref{SB_obs}). \subsection{Intermittency and Small-scale Structure} \label{sec:5_intermit} In the modern era of turbulence research, {\textit{intermittency}} has been established as a fundamental feature of turbulent flows \citep[{\emph{e.g.}},][]{1997AnRFM..29..435S}.
Nonlinearly interacting fluctuations are expected to give rise to structure in space and time, which is characterized by sharp gradients, inhomogeneities, and departures from Gaussian statistics. In a plasma such as the solar wind, such ``bursty'' or ``patchy'' structures include current sheets, vortices, and flux tubes. These structures have been associated with enhanced plasma dissipation and heating \citep{2015RSPTA.37340154M}, and with acceleration of energetic particles \citep{2013ApJ...776L...8T}. Studies of intermittency may therefore provide insights into dissipation and heating mechanisms active in the weakly-collisional solar wind plasma. Intermittency also has implications for turbulence theory -- a familiar example is the evolution of hydrodynamic inertial range theory from the classical Kolmogorov \citeyearpar[K41;][]{1941DoSSR..30..301K} theory to the so-called refined similarity hypothesis \citep[][]{1962JFM....13...82K}; the former assumed a uniform energy dissipation rate while the latter allowed for fluctuations and inhomogeneities in this fundamental quantity. Standard diagnostics of intermittency in a turbulent field include PDFs of increments, kurtosis (or flatness; fourth-order moment), and structure functions \citep[{\emph{e.g.}},][]{2015RSPTA.37340154M}. Observational studies tend to focus on the magnetic field due to the availability of higher-quality measurements compared to plasma observations. In well-developed turbulence, one finds that the tails of PDFs of increments become wider (super-Gaussian) and large values of kurtosis are obtained at small scales within the inertial range. A description in terms of \textit{fractality} is also employed -- monofractality is associated with structure that is non space-filling but lacking a preferred scale ({\emph{i.e.}}, scale-invariance).
In contrast, multifractality also implies non space-filling structure but with at least one preferred scale \citep{1995tlan.book.....F}.\footnote{In hydrodynamic turbulence intermittency is often described in terms of the scaling of the structure functions $S^{(p)}_\ell \equiv \langle \delta u_\ell^p \rangle \propto \ell^{p/3 + \mu(p)}$, where \(\delta u_\ell = \bm{\hat{\ell}} \cdot [ \bm{u} (\bm{x} + \bm{\ell}) - \bm{u}(\bm{x}) ]\) is the longitudinal velocity increment, \(\bm{\ell}\) is a spatial lag, and \(\langle\dots\rangle\) is an appropriate averaging operator. In K41 (homogeneous turbulence) the intermittency parameter \(\mu(p)\) vanishes. With intermittency one has non-zero \(\mu(p)\), and the scaling exponents \(\zeta (p) = p/3 + \mu(p) \) can be linear or nonlinear functions of \(p\), corresponding to monofractal or multifractal scaling, respectively \citep{1995tlan.book.....F}. The scale-dependent kurtosis \(\kappa\) can be defined in terms of the structure functions: \(\kappa (\ell)\equiv S^{(4)}_\ell/ \left[S^{(2)}_\ell\right]^2\).} Intermittency properties of solar wind turbulence have been extensively studied using observations from several S/C \citep[see, {\emph{e.g.}}, review by][]{2019E&SS....6..656B}. High-resolution measurements from the {\emph{Cluster}} and the Magnetospheric Multiscale \citep[{\emph{MMS;}}][]{2014cosp...40E.433B} missions have enabled such investigations within the terrestrial magnetosheath as well \citep[{\emph{e.g.}},][]{2009PhRvL.103g5006K,2018JGRA..123.9941C}, including kinetic-scale studies. {\emph{PSP}}'s trajectory allows us to extend these studies to the near-Sun environment and to examine the helioradial evolution of intermittency in the inner heliosphere. An advantage offered by {\emph{PSP}} is that the observations are not affected by foreshock wave activity that is often found near Earth's magnetosheath \citep[see, {\emph{e.g.}},][]{2012ApJ...744..171W}.
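The diagnostics defined in the footnote are simple to estimate from a uniformly sampled series. A minimal sketch (function names are ours; a Brownian path is used only as a convenient signal with Gaussian increments, for which \(\kappa\to 3\) at every lag):

```python
import numpy as np

def structure_function(x, lag, p):
    """p-th order structure function <|x(t+lag) - x(t)|^p> at a given lag."""
    dx = x[lag:] - x[:-lag]
    return np.mean(np.abs(dx) ** p)

def kurtosis(x, lag):
    """Scale-dependent kurtosis kappa(lag) = S4 / S2^2; equals 3 for Gaussian increments."""
    return structure_function(x, lag, 4) / structure_function(x, lag, 2) ** 2

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(200_000))   # Gaussian (non-intermittent) increments
print(kurtosis(walk, 10))                        # close to 3 for Gaussian increments
```

An intermittent signal instead shows \(\kappa\) growing toward small lags; the exponents \(\zeta(p)\) can likewise be estimated from log-log fits of \(S^{(p)}_\ell\) versus \(\ell\).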
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{Alberti2020ApJ.pdf} \includegraphics[width=0.68\textwidth]{Chhiber2021ApJL.pdf} \caption{\textit{Top}: Scaling exponents \(\zeta_q\) (see text) of the radial magnetic field for different orders \(q\), observed by {\emph{PSP}} at different helioradii \(r\). A transition from monofractal linear scaling to multifractal scaling is observed for \(r>0.4\). Figure adapted from \cite{2020ApJ...902...84A}. \textit{Bottom}: Scale-dependent kurtosis of the magnetic field, as a function of lag. A transition from a multifractal inertial range to a monofractal kinetic range occurs near \(20\ d_\text{i}\) \citep{2021ApJ...911L...7C}.} \label{fig:Alberti_Chhib} \end{figure} The radial evolution of intermittency in inertial-range magnetic fluctuations measured by {\emph{PSP}} was investigated by \cite{2020ApJ...902...84A}, who found monofractal, self-similar scaling at distances below 0.4~AU. At larger distances, a transition to multifractal scaling characteristic of strongly intermittent turbulence was observed (see top panel of Fig.~\ref{fig:Alberti_Chhib}). A similar trend was observed by \citet{2021ApJ...912L..21T}, who used measurements during the first radial alignment of {\emph{PSP}} and {\emph{SolO}}. Stronger intermittency was found in {\emph{SolO}} observations near 1~AU compared to {\emph{PSP}} observations near 0.1~AU, suggesting an evolution from highly Alfv\'enic and under-developed turbulence to fully-developed turbulence with increasing radial distance. Note that several prior studies have found that inertial-range magnetic turbulence at 1~AU is characterized by multifractal intermittency \citep[][]{2019E&SS....6..656B}. It is also worth noting (as in \S\ref{sec:5_large_scale}) that {\emph{PSP}} observations near the Sun may be statistically biased towards sampling variations in a lag direction that is (quasi-)parallel to the mean magnetic field \citep{2021PhPl...28h0501Z,2021ApJ...923...89C}.
Future studies could separately examine parallel and perpendicular intervals \citep[{\emph{e.g.}},][]{2011JGRA..11610102R}, which would clarify the extent to which this geometrical sampling bias affects the measured radial evolution of intermittency. A comparison of inertial range vs. kinetic-scale intermittency in near-Sun turbulence was carried out by \cite{2021ApJ...911L...7C} using the publicly available SCaM data product \citep{2020JGRA..12527813B}, which merges MAG and SCM measurements to obtain an optimal signal-to-noise ratio across a wide range of frequencies. They observed multifractal scaling in the inertial range, supported by a steadily increasing kurtosis with decreasing scale down to \(\sim 20\ d_\text{i}\). The level of intermittency was somewhat weaker in intervals without switchbacks compared to intervals with switchbacks, consistent with PVI-based analyses by \cite{2021ApJ...912...28M} (see also \S\ref{sec:5_switchback}). At scales below \(20\ d_\text{i}\), \cite{2021ApJ...911L...7C} found that the kurtosis flattened (bottom panel of Fig.~\ref{fig:Alberti_Chhib}), and their analysis suggested a monofractal and scale-invariant (but still intermittent and non-Gaussian) kinetic range down to the electron inertial scale, a finding consistent with near-Earth observations \citep{2009PhRvL.103g5006K} and some kinetic simulations \citep{2016PhPl...23d2307W}. From these results, they tentatively infer the existence of a scale-invariant distribution of current sheets between proton and electron scales. The SCaM data product was also used by \cite{2021GeoRL..4890783B} to observe strong intermittency at subproton scales in the Venusian magnetosheath. \cite{2020ApJ...905..142P} examined coherent structures at ion scales in intervals with varying switchback activity, observed during {\emph{PSP}}'s first Enc.
Using a wavelet-based approach, they found that current sheets dominated intervals with prominent switchbacks, while wave packets were most common in quiet intervals without significant fluctuations. A mixture of vortex structures and wave packets was observed in periods characterized by strong fluctuations without magnetic reversals. A series of studies used the PVI approach \citep{2018SSRv..214....1G} with {\emph{PSP}} data to identify intermittent structures and examine associated effects. \cite{2020ApJS..246...31C} found that the waiting-time distributions of intermittent magnetic and flow structures followed a power law\footnote{Waiting times between magnetic switchbacks, which may also be considered intermittent structures, followed power-law distributions as well \citep{2020ApJS..246...39D}.} for events separated by less than a correlation scale, suggesting a high degree of correlation that may originate in a clustering process. Waiting times longer than a correlation time exhibited an exponential distribution characteristic of a Poisson process. \cite{2020ApJS..246...61B} studied the association of SEP events with intermittent magnetic structures, finding a suggestion of positive correlation (see also \S\ref{sec:5_SEPs}). The association of intermittency with enhanced plasma heating (measured via ion temperature) was studied by \cite{2020ApJS..246...46Q}; their results support the notion that coherent non-homogeneous magnetic structures play a role in plasma heating mechanisms. This series of studies indicates that intermittent structures in the young solar wind observed by {\emph{PSP}} have certain properties that are similar to those observed in near-Earth turbulence. In addition to the statistical properties described in the previous paragraphs, some attention has also been given to the identification of structures associated with intermittency, such as magnetic flux tubes and ropes.
\cite{2020ApJS..246...26Z} used wavelet analysis to evaluate magnetic helicity, cross helicity, and residual energy in {\emph{PSP}} observations between 22 Oct. 2018 and 21 Nov. 2018. Based on these parameters they identified 40 flux ropes with durations ranging from 10 to 300 minutes. \cite{2020ApJ...903...76C} used a Grad-Shafranov approach to identify flux ropes during the first two {\emph{PSP}} Encs., and compared this method with the \cite{2020ApJS..246...26Z} approach, finding some consistency. \cite{2021A&A...650A..20P} developed a novel method for flux rope detection based on a real-space evaluation of magnetic helicity, and, focusing on the first {\emph{PSP}} orbit, found some consistency with the previously mentioned approaches. A subsequent work applied this method to orbit 5, presenting evidence that flux tubes act as transport boundaries for energetic particles \citep{2021MNRAS.508.2114P}. The characteristics of so-called interplanetary discontinuities (IDs) observed during {\emph{PSP}}'s $4^{\mathrm{th}}$ and $5^{\mathrm{th}}$ orbits were studied by \cite{2021ApJ...916...65L}, who found that the occurrence rate of IDs decreases from 200 events per day at 0.13~AU to 1 event per day at 0.9~AU, with a sharper decrease observed in rotational discontinuities (RDs) as compared with tangential discontinuities (TDs). While the general decrease in occurrence rate may be attributed to radial wind expansion and discontinuity thickening, the authors infer that the sharper decrease in RDs implies a decay channel for these in the young solar wind. We close this section by noting that the studies reviewed above employ a remarkable variety of intermittency diagnostics, including occurrence rate of structures, intensities of the associated gradients, and their fractal properties. The richness of the dataset that {\emph{PSP}} is expected to accumulate over its lifetime will offer unprecedented opportunities to probe the relationships between these various diagnostics and their evolution in the inner heliosphere.
\subsection{Interaction Between Turbulence and Other Processes} \subsubsection{Turbulence Characteristics Within Switchbacks}\label{sec:5_switchback} The precise definition of magnetic field reversals (switchbacks) is still ambiguous in the heliophysics community, but the common picture of switchbacks is that they embody the S-shaped folding of the magnetic field. Switchbacks are found to be accompanied by Alfv\'enic fluctuations (and often by strahl electrons) and they are associated with an increase of the solar wind bulk velocity as observed recently by {\emph{PSP}} near 0.16~AU \citep{2019Natur.576..228K,2019Natur.576..237B,2020ApJS..246...39D,2020ApJS..246...67M,2020ApJ...904L..30B,2021ApJ...911...73W,2020ApJS..246...74W}. Switchbacks have been previously observed in the fast solar-wind streams near 0.3~AU \citep{2018MNRAS.478.1980H} and near or beyond 1~AU \citep{1999GeoRL..26..631B}. The switchback intervals are found to carry turbulence, and the characteristics of that turbulence have been investigated in a number of works using {\emph{PSP}} data. \citet{2020ApJS..246...39D} studied the spectral properties of inertial range turbulence within intervals containing switchback structures during the first {\emph{PSP}} Enc. In their analysis they introduced the normalized parameter $z=\frac{1}{2}(1-\cos{\alpha})$ (where $\alpha$ is the angle between the instantaneous magnetic field, {\bf B}, and the prevalent field $\langle {\bf B} \rangle$) to determine the deflection of the field. Switchbacks were defined as a magnetic field deflection that exceeds the threshold value of $z=0.05$. They estimated the power spectral density of the radial component $B_R$ of the magnetic field for quiescent ($z < 0.05$) and active (all $z$) regimes.
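The deflection measure and threshold described above translate directly into code. A minimal sketch (the function names and toy vectors are ours, not from \citet{2020ApJS..246...39D}):

```python
import numpy as np

def deflection_z(b, b0):
    """z = (1 - cos alpha)/2, with alpha the angle between each row of b and b0."""
    cos_a = (b @ b0) / (np.linalg.norm(b, axis=1) * np.linalg.norm(b0))
    return 0.5 * (1.0 - cos_a)

def quiescent_mask(b, b0, threshold=0.05):
    """True where the field deflection stays below the switchback threshold."""
    return deflection_z(b, b0) < threshold

b0 = np.array([1.0, 0.0, 0.0])            # prevalent (mean) field direction
b = np.array([[1.0, 0.0, 0.0],            # aligned:   z = 0
              [0.0, 1.0, 0.0],            # 90 deg:    z = 0.5
              [-1.0, 0.0, 0.0]])          # reversal:  z = 1
print(deflection_z(b, b0))                # z values 0, 0.5, 1
print(quiescent_mask(b, b0))              # only the aligned sample is quiescent
```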
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{dudok_spectra_20.png} \caption{Power spectrum of radial magnetic field fluctuations for quiescent times ($z<0.05$) and for the entire interval (all $z$), during a period near perihelion of Enc.~1. The quiescent times show a lower overall amplitude, and a $1/f$ break at higher frequencies, suggestive of a less evolved turbulence. Figure adapted from \citet{2020ApJS..246...39D}.} \label{fig:Dudock20} \end{figure} Fig.~\ref{fig:Dudock20} shows the results of both power spectra. They found that the properties in active conditions are typical for MHD turbulence, with an inertial range whose spectral index is close to $-3/2$ and preceded by a $1/f$ range (consistent with the observations of \citet{2020ApJS..246...53C}). Also, the break frequency (at the lower frequency part) was found to be located around 0.001~Hz. For the quiescent power spectrum, the break frequency moves up to 0.05~Hz, showing a difference from the active power spectrum, although both power spectra have similar spectral index (about $-3/2$) between 0.05~Hz and 1~Hz. The authors suggest that in the quiescent regime the turbulent cascade has only had time to develop a short inertial range, showing the signature of a more pristine solar wind. \citet{2020ApJS..246...67M} have studied the turbulent quantities such as the normalized residual energy, $\sigma_r(s, t)$, and cross helicity, $\sigma_c(s, t)$, during one day of {\emph{PSP}}'s first Enc. as a function of wavelet scale $s$. Their study encompasses switchback field intervals. Overall, they found that the observed features of these turbulent quantities are similar to previous observations made by {\emph{Helios}} in slow solar wind \citep{2007PSS...55.2233B}, namely highly-correlated and Alfv\'enic fluctuations with $\sigma_c\sim 0.9$ and $\sigma_r\sim -0.3$.
However, a negative normalized cross helicity is found within switchback intervals, indicating that MHD fluctuations are following the local magnetic field inside switchbacks, {\emph{i.e.}}, the predominantly outward propagating Alfv\'enic fluctuations outside the switchback intervals become inward propagating during the field reversal. This signature is interpreted as evidence that the field reversals are local kinks in the magnetic field and not due to small regions of opposite polarity of the field. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{bourouaine20_2.png} \caption{Power spectra of Elsasser variables (upper panels) and velocity/magnetic fluctuations (lower panels) for periods of switchbacks (SB; left panels) and periods not within switchbacks (NSB; right panels). Magnetic spectra are multiplied by a factor of 10. Power law fits are also indicated. There are notable differences in both the amplitudes and shape of the spectra between SB and NSB intervals. Figure adapted from \citet{2020ApJ...904L..30B}.} \label{fig:bourouaine20} \end{figure} The turbulence characteristics associated with switchbacks have also been studied by \cite{2020ApJ...904L..30B} using 10 days of {\emph{PSP}} data during the first Enc. The authors used a technique based on conditioned correlation functions to investigate the correlation times and the power spectra of a generic field $q$ representing the magnetic field ${\bf B}$, the fluid velocity ${\bf V}$ and the Elsasser fields ${\bf z^\pm}$, inside and outside switchback intervals. In their study, the authors defined switchbacks as field reversals that are deflected by angles that are larger than $90^\circ$ w.r.t. the Parker spiral (the prevalent magnetic field). This work confirms that the dominant Alfv\'enic fluctuations follow the field reversal.
Moreover, the authors found that, in switchback intervals, the correlation time is about 2 minutes for all fields, but in non-switchback intervals, the correlation time of the sunward propagating Alfv\'enic fluctuations (the minor Elsasser field) is about 3~hr, longer than those of the other fields. This result seems to be consistent with previous 1~AU measurements \citep{2013ApJ...770..125C,2018ApJ...865...45B}. Furthermore, the authors estimated the power spectra of the corresponding fields (Fig.~\ref{fig:bourouaine20}), and found that the magnetic power spectrum in switchback intervals is steeper (with spectral index close to $-5/3$) than in non-switchback intervals, which have a spectral index close to $-3/2$. The analysis also shows that the turbulence is found to be less imbalanced with higher residual energy in switchback intervals. \citet{2021ApJ...911...73W} investigated the turbulent quantities such as the normalized cross helicity and the residual energy in switchbacks using the first four Encs. of {\emph{PSP}} data. In their analysis they considered separate intervals of 100~s duration that satisfy $B_R>0$ ({\emph{i.e.}}, a locally reversed radial field) for the switchbacks and a field--radial angle greater than $160^\circ$ for the non-switchbacks. Although the analysis focuses on the time scale of 100~s, their findings seem to be consistent with the results of \cite{2020ApJ...904L..30B} (for that time scale), {\emph{i.e.}}, the switchback intervals and non-switchback intervals have distinct residual energy and similar normalized cross helicity, suggesting that switchbacks have different Alfv\'enic characteristics. In another study, \citet{2021ApJ...912...28M} have investigated the spectral index and the stochastic heating rate at the inertial range inside and outside switchback intervals and found fairly similar behavior in both intervals.
However, at the kinetic range, the kinetic properties, such as the characteristic break scale (frequency that separates the inertial and the dissipation ranges) and the level of intermittency, differ inside and outside switchback intervals. The authors found that inside the switchbacks the level of intermittency is higher, which might be a signature of magnetic field and velocity shears observed at the edges. \subsubsection{Impact of Turbulence and Switchbacks on Energetic Particles and CMEs}\label{sec:5_SEPs} Strahl electrons are observed to follow the reversed field within the switchbacks; however, it is not yet understood whether higher-energy energetic particles reverse at the switchbacks as well. Recently, \cite{2021AA...650L...4B} examined the radial anisotropy of the energetic particles measured by the EPI-Lo (instrument of the IS$\odot$IS suite) in connection to magnetic switchbacks. The authors investigated switchback intervals with $|\sigma_c|>0.5$ and $z\le 0.5$. The ratio $r= (F_{\mbox{away}}-F_{\mbox{toward}})/(F_{\mbox{away}}+F_{\mbox{toward}})$ has been used to determine the dominant flux direction of the energetic particles. Here ``$\mbox{away}$'' and ``$\mbox{toward}$'' refer to the direction of the measured radial particle fluxes ($F$) in the selected energy range. Fig.~3 of \citet{2021AA...650L...4B} displays the scatter points corresponding to the measurements of the first five Encs. plotted as a function of the $z$ parameter and the ratio $r$. The analysis shows that 80–200~keV energetic ions almost never reverse direction when the magnetic field polarity reverses in switchbacks. One reason is that particles with smaller gyroradii, such as strahl electrons, can reverse direction by following the magnetic field in switchbacks, while particles with larger gyroradii likely cannot.
Therefore, from this analysis one can expect that particles with higher energies than those detectable by EPI-Lo will likely not get reversed in switchbacks. \cite{2020ApJS..246...48B} studied the connection between enhancements of the energetic particle population (measured using IS$\odot$IS) and intermittent structures (identified using FIELDS/MAG) near the Sun using {\emph{PSP}} data. Intermittent structures are generated naturally by turbulence, and the PVI method was proposed previously to identify these structures \citep{2018SSRv..214....1G}. For single S/C measurements, this method relies on the evaluation of the temporal increment in the magnetic field, $|\Delta {\bf B}(\tau,t)|=|{\bf B}(t+\tau)-{\bf B}(t)|$, and the PVI index at time $t$, for a given lag $\tau$, is defined as \begin{equation} \mathrm{PVI}(t)=\sqrt{\frac{|\Delta {\bf B}(\tau,t)|^2}{\langle |\Delta {\bf B}(\tau,t)|^2\rangle}} \end{equation} where $\langle...\rangle$ denotes a time average computed along the time series. The analysis given in \citet{2020ApJS..246...48B} examined the conditionally averaged energetic-particle count rates and their connection to the intermittent structures using the PVI method. The results from the first two {\emph{PSP}} orbits seem to support the idea that SEPs are likely correlated with coherent magnetic structures. The outcomes from this analysis may suggest that energetic particles are concentrated near magnetic flux tube boundaries. Magnetic field line topology and flux tube structure may influence energetic particles and their transport in other ways. A consequence of the tendency of particles to follow field lines is that when turbulence is present the particle paths can be influenced by field line {\it random walk}.
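Returning to the PVI definition above, the computation for a vector magnetic field series reduces to a few lines. A minimal sketch (the names and the synthetic series are ours; in practice the average $\langle\cdot\rangle$ is taken over a window spanning many correlation times):

```python
import numpy as np

def pvi(b, lag):
    """Partial variance of increments for a vector series b of shape (n, 3)."""
    db = b[lag:] - b[:-lag]                # vector increment Delta B at this lag
    mag2 = np.sum(db ** 2, axis=1)         # |Delta B|^2
    return np.sqrt(mag2 / np.mean(mag2))   # normalize by <|Delta B|^2>

rng = np.random.default_rng(2)
b = np.cumsum(rng.standard_normal((100_000, 3)), axis=0)  # smooth random walk
b[50_000] += 50.0                          # embed a sharp, current-sheet-like jump
series = pvi(b, lag=1)
print(series.max() > 5.0)                  # prints True: the jump stands out
```

Samples with PVI above a threshold of a few are then treated as candidate coherent structures, as in the conditional-averaging analyses cited above.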
While this idea is familiar in the context of perpendicular diffusive transport, a recent study of SEP path lengths observed by {\emph{PSP}} \citep{2021A&A...650A..26C} suggested that random walking of field lines or flux tubes may account for the apparently increased SEP path lengths. This is further discussed in \S\ref{SEPs}. The inertial-range turbulent properties such as the normalized cross helicity, $\sigma_c$, and residual energy, $\sigma_r$, have been examined in magnetic clouds (MCs) using {\emph{PSP}} data by \citet{2020ApJ...900L..32G}. MCs are considered to be large-scale transient twisted magnetic structures that propagate in the solar wind having features of low plasma $\beta$ and low-amplitude magnetic field fluctuations. The analysis presented in \citet{2020ApJ...900L..32G} shows low $|\sigma_c|$ values in the cloud core while the cloud's outer layers display higher $|\sigma_c|$ and small residual energy. This study indicates that more balanced turbulence resides in the cloud core, and large-amplitude Alfv\'enic fluctuations characterize the cloud's outer layers. These obtained properties suggest that low $|\sigma_c|$ is likely a common feature of magnetic clouds that have typical closed field structures. \subsection{Implications for Large-Scale and Global Dynamics} As well as providing information about the fundamental nature of turbulence, and its interaction with the various structures in the solar wind, {\emph{PSP}} has allowed us to study how turbulence contributes to the solar wind at the largest scales. Some of the main goals of {\emph{PSP}} are to understand how the solar wind is accelerated to the high speeds observed and how it is heated to the high temperatures seen, both close to the Sun and further out \citep{2016SSRv..204....7F}. {\emph{PSP}}'s orbits getting increasingly closer to the Sun are allowing us to measure the radial trends of turbulence properties, and directly test models of solar wind heating and acceleration.
To test the basic physics of a turbulence-driven wind, \citet{2020ApJS..246...53C} compared {\emph{PSP}} measurements from the first two orbits to the 1D model of \citet{2011ApJ...743..197C}. In particular, they calculated the ratio of energy flux in the Alfv\'enic turbulence to the bulk kinetic energy flux of the solar wind (Fig.~\ref{FIG:Chen2020}). This ratio was found to increase towards the Sun, as expected, reaching about 10\% at 0.17~AU. The radial variation of this ratio was also found to be consistent with the model, leading to the conclusion that the wind during these first orbits could be explained by a scenario in which the Sun drives AWs that reflect from the Alfv\'en speed gradient, driving a turbulence cascade that heats and accelerates the wind. Consistent with this picture, \citet{2020ApJS..246...53C} also found that the inward Alfv\'enic fluctuation component grew at a rate consistent with the measured Alfv\'en speed gradient. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Chen2020-eps-converted-to.pdf} \caption{(a) Ratio of outward-propagating Alfv\'enic energy flux, $F_\mathrm{A}$, to solar wind bulk kinetic energy flux, $F_\mathrm{k}$, as a function of heliocentric distance, $r$. (b) The same ratio as a function of solar wind radial Alfv\'en Mach number, $M_\mathrm{A}$. In both plots, the black solid line is a power law fit, the red/green dashed lines are solutions to the \citet{2011ApJ...743..197C} model, the data points are colored by solar wind speed, $v$, and crosses mark times during connection to the coronal hole in Enc.~1. Figure adapted from \citet{2020ApJS..246...53C}.} \label{FIG:Chen2020} \end{figure*} To see if such physics can explain the 3D structure of the solar wind, the {\emph{PSP}} observations have also been compared to 3D turbulence-driven solar wind models.
\citet{2020ApJS..246...48B} calculated the turbulent heating rates from {\emph{PSP}} using two methods: from the third-order laws for energy transfer through the MHD inertial range and from the von K\'arm\'an decay laws based on correlation scale quantities. These were both found to increase going closer to the Sun, taking values at 0.17~AU about 100 times higher than typical values at 1~AU. These were compared to those from the model of \citet{2018ApJ...865...25U}, under two different inner boundary conditions -- an untilted dipole and a magnetogram from the time of the Enc. The heating rates from both models were found to be in broad agreement with those determined from the {\emph{PSP}} measurements, although the magnetogram version provided a slightly better fit overall. \citet{2021ApJ...923...89C} later performed a comparison of the first five orbits to a similar 3D turbulence solar wind model (which captures the coupling between the solar wind flow and small-scale fluctuations), examining both mean-flow parameters such as density, temperature and wind speed, as well as turbulence properties such as fluctuation amplitudes, correlation lengths and cross-helicity. In general, the mean flow properties displayed better agreement with {\emph{PSP}} observations than the turbulence parameters, indicating that aspects of the turbulent heating were possibly being captured, even if some details of the turbulence were not fully present in the model. A comparison between the model and observations for orbit 1 is shown in Fig.~\ref{FIG:Chhiber2021}. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{Chhiber2021.pdf} \caption{Blue `$+$' symbols show {\emph{PSP}} data from orbit 1, plotted at 1-hour cadence except \(\lambda\), for which daily values are shown. Red curve shows results from the model, sampled along a synthetic {\emph{PSP}} trajectory.
Quantities shown are mean radial velocity of ions (\(V_R\)), mean radial magnetic field \(B_R\), mean ion density \(n_p\), mean ion temperature \(T_p\), mean turbulence energy \(Z^2\), correlation length of magnetic fluctuations \(\lambda\), and normalized cross helicity \(\sigma_c\). The shading in the top four panels marks an envelope obtained by adding and subtracting the local turbulence amplitude from the model to the mean value from the model (see the text for details). The vertical black line marks perihelion. The model uses an ADAPT map with central meridian time 6 Nov. 2018, at 12:00 UT (Run I). Minor ticks on the time axis correspond to 1 day. Figure adapted from \citet{2021ApJ...923...89C}.} \label{FIG:Chhiber2021} \end{figure*} \citet{2020ApJS..246...38A} first compared the {\emph{PSP}} plasma observations to results from their turbulence transport model based on the nearly incompressible MHD description. They concluded that there was generally a good match for quantities such as fluctuating kinetic energy, correlation lengths, density fluctuations, and proton temperature. Later, \citet{2020ApJ...901..102A} developed a model that couples equations for the large scale dynamics to the turbulence transport equations to produce a turbulence-driven solar wind. Again, they found generally good agreement, and additionally found the heating rate of the quasi-2D component of the turbulence to be dominant, and to be sufficient to provide the necessary heating at the coronal base. Overall, these studies indicate a picture that is consistent with turbulence, driven ultimately by motions at the surface of the Sun, providing the energy necessary to heat the corona and accelerate the solar wind in a way that matches the {\emph{in situ}} measurements made by {\emph{PSP}}. Future work will involve adding even more realistic turbulence physics into these models, and testing them under a wider variety of solar wind conditions.
One recent study in this direction is \citet{2021A&A...650L...3C}, which examined the turbulence energy fluxes as a function of distance to the HCS during Enc.~4. They found that the turbulence properties changed when {\emph{PSP}} was within 4$^\circ$ of the HCS, resembling more the standard slow solar wind seen at 1~AU, and suggesting this as the angular width of the streamer belt wind at these distances. Also, within this streamer belt wind, the turbulence fluxes were significantly lower, being on average 3 times smaller than required for consistency with the \citet{2011ApJ...743..197C} solar wind model. \citet{2021A&A...650L...3C} concluded, therefore, that additional mechanisms not in these models are required to explain the solar wind acceleration in the streamer belt wind near the HCS. The coming years, with both {\emph{PSP}} moving even closer to the Sun and the solar cycle coming into its maximum, will provide even better opportunities to further understand the role that turbulence plays in the heating and acceleration of the different types of solar wind, and how this shapes the large-scale structure of our heliosphere. \section{Large-Scale Structures in the Solar Wind} \label{LSSSW} During these four years of the mission (within the ascending phase of the solar cycle), {\emph{PSP}} crossed the HCS several times and also observed structures (both remotely and {\emph{in situ}}) with similar features to the internal magnetic flux ropes (MFRs) associated with the interplanetary coronal mass ejections (ICMEs). This section focuses on {\emph{PSP}} observations of large-scale structures, {\emph{i.e.}}, HCS crossings, ICMEs, and CMEs. Smaller heliospheric flux ropes are also included in this section because of their similarity to larger ICME flux ropes. The comparison of the internal structure of large- and small-scale flux ropes (LFRs and SFRs, respectively) is revealing. They can both store and transport magnetic energy.
Their properties at different heliodistances provide insights into the energy transport in the inner heliosphere. {\emph{PSP}} brings a unique opportunity for understanding the role of MFRs in solar wind formation and evolution, and thus for connecting scales in the heliosphere. \subsection{The Heliospheric Current Sheet} \label{LSSSWHCS} The HCS separates the two heliospheric magnetic hemispheres: one with a magnetic polarity pointing away from the Sun and another toward the Sun. {\emph{PSP}} crossed the HCS multiple times in each orbit due to its low heliographic latitude orbit. Fig.~\ref{Orbit5_HCS} shows a comprehensive set of measurements with periods of HCS crossing identified as gray regions. These crossings are particularly evident in the magnetic field azimuth angle ($\phi_B$) and the PAD of suprathermal electrons. In the Radial-Tangential-Normal (RTN) coordinates used here, the outward and inward polarities have a near-zero degree and $180^{\circ}$ azimuth angle, respectively. Since the electron heat flux always streams away from the Sun, the streaming direction is parallel (antiparallel) to the magnetic field in the regions of the outward (inward) magnetic polarity, resulting in a magnetic pitch angle of $0^{\circ}$ ($180^{\circ}$). \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Encounter5_HCS.png} \caption{{\emph{PSP}} solar wind measurements during solar Enc.~5, 10 May 2020 (day of the year [DOY] 130) to 1 Jun. 2020 (DOY 182). The panels from top to bottom are the PAD of 314 eV suprathermal electrons; the normalized PAD of the 314 eV suprathermal electrons; the magnetic field magnitude; the azimuth angle of the magnetic field in the TN plane; the elevation angle of the magnetic field; the solar wind proton number density; the RTN components of the solar wind proton bulk speed; the thermal speed of the solar wind protons; and the S/C radial distance from the Sun.
The gray bars mark periods of HCS crossings.} \label{Orbit5_HCS} \end{figure*} Comparing the observed locations of HCS crossings with PFSS model predictions yielded good agreement \citep{2020ApJS..246...47S, 2021A&A...652A.105L}. Lowering the source surface radius to 2~$R_\odot$ or even below would minimize the timing disagreements, though this would increase the amount of open magnetic flux to unreasonable values. The likely resolution is that the appropriate source surface radius is not a constant value but varies depending on the solar surface structures below. Other sources of disagreement between the model predictions and observations are the emergence of ARs not included in the photospheric magnetic maps used by the PFSS models and the presence of solar wind transients ({\emph{e.g.}}, ICMEs). \citet{2021A&A...652A.105L} also found that while the PFSS model predicted a relatively flat HCS, the observed current sheets had a much steeper orientation, suggesting significant local corrugation. \citet{2020ApJS..246...47S} also compared the observed HCS crossing times to global MHD model predictions with similar results. The internal structure of the HCS near the Sun is very complex \citep{2020ApJS..246...47S, 2020ApJ...894L..19L, 2021A&A...650A..13P}. \citet{2020ApJS..246...47S} identified structures within the HCS region featuring magnetic field magnitude depressions, increased solar wind proton bulk speeds, and associated suprathermal electron strahl dropouts. These might evolve into the small density enhancements observed by \citet{2020ApJ...894L..19L}, which likely indicate magnetic disconnection. In addition, small flux ropes were also identified inside or just outside the HCS, often associated with plasma jets indicating recent magnetic reconnection \citep{2020ApJS..246...47S, 2020ApJ...894L..19L, 2021A&A...650A..13P, 2021A&A...650A..12Z}.
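Such jets are commonly interpreted as reconnection exhausts when the observed flow enhancement is comparable to the local Alfv\'en speed (a Wal\'en-type consistency check),
\[
\Delta v \approx v_A = \frac{B}{\sqrt{\mu_0\, n_p\, m_p}},
\]
where $n_p$ is the proton number density and $m_p$ the proton mass; this is the standard order-of-magnitude criterion, not necessarily the specific threshold adopted in the studies cited above.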
The near-Sun HCS is much thicker than expected; thus, it is surprising that it is the site of frequent magnetic reconnection \citep{2021A&A...650A..13P}. Moreover, 1~AU observations of the HCS reveal significantly different magnetic and plasma signatures, implying that the near-Sun HCS is the location of active evolution of the internal structures \citep{2020ApJS..246...47S}. The HCS also appears to organize the nearby, low-latitude solar wind. \citet{2021A&A...650L...3C} observed lower amplitude turbulence, higher magnetic compressibility, a steeper magnetic spectrum, lower Alfv\'enicity, and a $1/f$ break at much lower frequencies within $4^\circ$ of the HCS compared to the rest of the solar wind, possibly implying a different solar source of the HCS environment. \subsection{Interplanetary Coronal Mass Ejections} \label{sec:icme} The accurate identification and characterization of the physical processes associated with the evolution of ICMEs require as many measurements of the magnetic field and plasma conditions as possible \citep[see][and references therein]{2003JGRA..108.1156C,2006SSRv..123...31Z}. Our knowledge of the transition from CME to ICME has been limited to the {\emph{in~situ}} data collected at 1~AU and remote-sensing observations from space-based observatories. {\emph{PSP}} provides a unique opportunity to link both views through valuable observations that will allow us to identify evidence of the early transition from CME to ICME. Due to its highly elliptical orbit, {\emph{PSP}} measures the plasma conditions of the solar wind at different heliospheric distances. Synergies with other space missions and ground observatories allow building a complete picture of the phenomena from the genesis at the Sun to the inner heliosphere.
In general, magnetic structures in the solar wind are MFRs, a subset of which are MCs \citep[][]{1988JGR....93.7217B}, and are characterized by enhanced magnetic fields where the field rotates slowly through a large angle. MCs are of great interest as their coherent magnetic field configuration and plasma properties drive space weather and are related to major geomagnetic storms \citep{2000JGR...105.7491W}. Therefore, understanding their origin, evolution, propagation, and how they can interact with other transients traveling through space and planetary systems is of great interest. ICMEs are structures that travel throughout the heliosphere and transfer energy to the outer edge of the solar system and perhaps beyond. \paragraph{{\textbf{Event of 11-12 Nov. 2018: Enc. 1 $-$ {\emph{PSP}} at (0.25~AU, -178$^{\circ}$)}}} $\\$ During the first orbit, {\emph{PSP}} collected {\emph{in~situ}} measurements of the thermal solar wind plasma as close as $35.6~R_\odot$ from the Sun. In this new environment, {\emph{PSP}} recorded the signatures of SBO-CMEs: the first on 31 Oct. 2018, at 03:36 UT as it entered the Enc. and the second on 11 Nov. 2018, at 23:50 UT as it exited the Enc. The signature of the second SBO-CME crossing the S/C was a magnetic field enhancement ({\emph{i.e.}}, maximum strength 97~nT). The event was seen by {\emph{STEREO}}-A but was not visible from L1 or Earth-orbiting S/C as the event was directed away from Earth. The signature and characteristics of this event were the focus of several studies \citep[see][]{2020ApJS..246...69K,2020ApJS..246...63N,2020ApJS..246...29G}. SBO-CMEs \citep{2008JGRA..113.9105C} are ICMEs that fulfill the following criteria in coronagraph data: (1) slow speed, ranging from 300 to 500~km~s$^{-1}$; (2) no identifiable surface or low coronal signatures (in this case from the Earth point of view); (3) characterized by a gradual swelling of the overlying streamer (blowout type); and (4) following the tilt of the HCS.
The source location was determined using remote sensing and {\emph{in situ}} observations, the WSA model \citep{2000JGR...10510465A}, and the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model \citep{2004JASTP..66.1295A}. Hydrodynamical analytical and numerical simulations were also utilized to predict the CME arrival time at {\emph{PSP}}. Using a CME propagation model, \cite{2020ApJS..246...69K} and \cite{2020ApJS..246...63N} explored the characteristics of the CME using {\emph{in situ}} data recorded closest to the Sun, as well as the implications for CME propagation from the coronal source to {\emph{PSP}} and for space weather. The CME was traveling at an average speed of $\sim391$~km~s$^{-1}$ embedded in an ambient solar wind flow of $\sim395$~km~s$^{-1}$ and a magnetic field of 37~nT. The small difference in speed relative to the ambient solar wind suggests that drag forces drive the SBO-CME. The internal magnetic structure associated with the SBO displayed flux-rope signatures, with a low proton plasma beta and a drop in the proton temperature, but the rotation of the magnetic field direction deviated from the smooth change expected for a flux-rope-like configuration. A detailed analysis of the internal magnetic properties suggested a highly complex topology, with deviations from an ideal 3D flux rope. Reconstructions of the magnetic field configuration revealed a highly distorted structure consistent with the highly elongated ``bubble'' observed remotely. A double-ring substructure observed in the FOV of the COR2 coronagraph of the {\emph{STEREO}}-A Sun-Earth Connection Coronal and Heliospheric Investigation \citep[SECCHI;][]{2008SSRv..136...67H} may also indicate a double internal flux rope. Another possible scenario is described as a mixed topology of a closed flux rope combined with a magnetically open structure, justified by the flux dropout observed in the measurements of the electron PAD.
In any case, the plethora of structures observed by the EUV imager (SECCHI-EUVI) in the hours preceding the SBO evacuation indicated that the complexity might be related to the formation processes \citep{2020ApJS..246...63N}. Applying a wavelet analysis technique to the {\emph{in situ}} data from {\emph{PSP}}, \citet{2020ApJS..246...26Z} also identified the related flux rope. They inferred the reduced magnetic helicity, cross helicity, and residual energy. They also showed that, after the ICME crossing, both the plasma velocity and the magnetic field fluctuate rapidly and are positively correlated with each other, indicating that Alfv\'enic fluctuations are generated in the region downstream of the ICME. Finally, \citet{2020ApJS..246...29G} also discussed the SBO-CME as the driver of a weak shock when the ICME was at 7.4 R$_{\odot}$, accelerating energetic particles. {\emph{PSP}}/IS$\odot$IS observed the SEP event (see Fig.~\ref{Giacalone_2020}). Five hours later, {\emph{PSP}}/FIELDS and {\emph{PSP}}/SWEAP detected the passage of the ICME (see \S\ref{EPsRad} for a detailed discussion). \paragraph{{\textbf{Event of 15 Mar. 2019: Enc. 2 $-$ {\emph{PSP}} at (0.547~AU, 161$^{\circ}$)}}} $\\$ An SBO-CME was observed by {\emph{STEREO}}-A and {\emph{SOHO}} coronagraphs and measured {\emph{in situ}} by {\emph{PSP}} at 0.547~AU on 15 Mar. 2019 from 12:14 UT to 17:45 UT. The event was studied in detail by \citet{2020ApJ...897..134L}. The ICME was preceded by two interplanetary shock waves, registered at 08:56:01 UT and 09:00:07 UT (see Fig.~\ref{Lario2020Fig1} in \S\ref{EPsRad}). The authors of this study proposed that the shocks were associated with the interaction between the SBO-CME and a HSS. The analysis of the shocks' characteristics indicated that, despite their weakness, the successive structures caused the acceleration of energetic particles.
This study aimed to demonstrate that although SBO-CMEs are usually slow close to the Sun, subsequent evolution in interplanetary space might drive shocks that can accelerate particles in the inner heliosphere. The event is discussed in more detail in \S\ref{EPsRad}, showing that the time of arrival of energetic particles at {\emph{PSP}} (Fig.~\ref{Lario2020Fig2}) is consistent with the arrival of the ICME predicted by MHD simulations. With the simulations, \citet{2020ApJ...897..134L} determined when the magnetic connection was established between {\emph{PSP}} and the shocks, potentially driven by the ICME. \paragraph{{\textbf{Event of 13 Oct. 2019: Enc. 3 $-$ {\emph{PSP}} at (0.81~AU, 75$^{\circ}$)}}} $\\$ The event observed during Enc.~3 was reported by \citet{2021ApJ...916...94W}. The ICME is associated with a stealth CME evacuation on 10 Oct. 2019, at 00:48 UT. It was characterized by an angular width of 19$^{\circ}$, a position angle of 82$^{\circ}$, no signatures in {\emph{SDO}}/AIA and EUVI-A images, and a speed of 282~km~s$^{-1}$ at 20 R$_{\odot}$. At the time of the eruption, two coronal holes were identified from EUV images and extrapolations of the coronal magnetic field topology computed using the PFSS model \citep{1992ApJ...392..310W}, suggesting that the stealth CME evolved between two HSSs originating at the coronal holes. The first HSS enabled the ICME to travel at an almost constant speed (minimum interaction time $\sim2.5$ days), while the second overtook the ICME in the later stages of evolution. The event occurred when {\emph{PSP}} was not taking plasma measurements due to its proximity to aphelion, and only the magnetic field measurements from the {\emph{PSP}}/FIELDS instrument are reliable. Even with these observational limitations, this event is of particular interest as {\emph{STEREO}}-A was located only 0.15~AU in radial distance, $<1^{\circ}$ in latitude, and $-7.7^{\circ}$ in longitude away from {\emph{PSP}}.
The ICME arrival is characterized by a fast-forward shock observed by {\emph{PSP}} on 13 Oct. 2019, at 19:03 UT and by {\emph{STEREO}}-A on 14 Oct. 2019, at 07:44 UT. Both S/C observed the same main features in the magnetic field components (exhibiting flux rope-like signatures) except for the HSS, which {\emph{STEREO}}-A observed as an increasing speed profile and a shorter ICME duration. To show the similarity of the main magnetic field features and the effect of the ICME compression due to its interaction with the HSS, the magnetic field and plasma parameters of {\emph{STEREO}}-A were plotted (shown in Fig.~\ref{fig:Winslow2020}) and overlaid with the {\emph{PSP}} magnetic field measurements, scaled by a factor of $1.235$ and time-shifted to obtain the same ICME duration as observed by {\emph{STEREO}}-A. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Winslow2021.png} \caption{Overlay of the {\emph{in situ}} measurements by {\emph{STEREO}}-A (black) and {\emph{PSP}} (red). From top to bottom: the magnetic field strength and the radial (B$_R$), tangential (B$_T$) and normal (B$_N$) components, and for {\emph{STEREO}}-A only: the proton density (N), temperature (T), and velocity (V). The {\emph{PSP}} data are scaled (by a factor of 1.235) and time-shifted to obtain the same ICME duration, delimited by the two dashed vertical red lines. The solid red vertical line marks the fast-forward shock at {\emph{PSP}}, while the solid black vertical line marks it at {\emph{STEREO}}-A. Figure adapted from \citet{2021ApJ...916...94W}.} \label{fig:Winslow2020} \end{figure} \paragraph{{\textbf{Event of 20 Jan. 2020: Enc. 4 $-$ {\emph{PSP}} at (0.32~AU, 80$^{\circ}$)}}} $\\$ \citet{2021A&A...651A...2J} reported a CME event observed by {\emph{PSP}} on 18 Jan. 2020, at 05:30 UT. The event was classified as a stealth CME since no clear eruption signatures were identified on the Sun's surface.
Coronal {\emph{SDO}}/AIA observations indicated the emission of a set of magnetic substructures or loops followed by the evacuation of the magnetic structure on 18 Jan. 2020, at 14:00 UT. The signatures of a few dispersed brightenings and dimmings observed in EUVI-A 195~{\AA} were identified as the source region \citep{2021A&A...651A...2J}. The ICME arrived at {\emph{PSP}} on 20 Jan. 2020, at 19:00 UT, with a clear magnetic obstacle and rotation in the magnetic field direction but no sign of an IP shock wave. The event was also associated with a significant enhancement of SEPs. {\emph{PSP}} and {\emph{STEREO}}-A were almost aligned (separated by 5$^{\circ}$ in longitude). The ICME flew by both S/C, allowing for the examination of the evolution of the associated SEPs. Interestingly, this event established a scenario in which weaker structures can also accelerate SEPs. Thus, the presence of SEPs in the absence of a shock was interpreted as {\emph{PSP}} crossing the magnetic structure's flank, although no dense feature was observed in coronagraph images propagating in that direction \citep{2002ApJ...573..845G}. In \S\ref{EPsRad}, the event is discussed in detail, including the associated {\emph{PSP}} observations of SEPs. \paragraph{{\textbf{Event of 25 Jun. 2020: Enc. 5 $-$ {\emph{PSP}} at (0.5~AU, 20$^{\circ}$)}}} $\\$ \citet{2021ApJ...920...65P}, \citet{2022SpWea..2002914K}, and \citet{2022ApJ...924L...6M} studied and modeled the event of 25 Jun. 2020, which occurred during Enc.~5. The lack of clear signatures on the solar surface and low corona led to the interpretation of this event as an SBO-CME and was the primary motivation for these studies. The models were tested extensively to determine their capabilities to predict the coronal features and their counterparts in space. \citet{2021ApJ...920...65P} focused on predictions of the location of its source and the magnetic field configuration expected to be measured by {\emph{PSP}}.
The {\emph{SDO}}/AIA and EUVI-A observations were used to determine the source location of the event. The increase in the solar activity around the source region was followed by a small eruption in the northern hemisphere on 21 Jun. 2020, at 02:00 UT (Fig.~\ref{fig:Palmerio2021}-left). This led to the outbreak of the SBO-CME on 23 Jun. 2020, at 00:54 UT. Using the PFSS model, the authors found that the SBO-CME was triggered by the interaction between the small eruption and the neighboring helmet streamer. The SBO-CME geometry and kinematic aspects were obtained by applying the graduated cylindrical shell \citep[GCS;][]{2011ApJS..194...33T} model to the series of coronagraph images, resulting in an estimated average speed of 200~km~s$^{-1}$. The magnetic field configuration from the Sun to {\emph{PSP}} was obtained by modeling the event with the OSPREI suite \citep{2022SpWea..2002914K}. The arrival at {\emph{PSP}} was predicted to be on 25 Jun. 2020, at 15:50~UT (9 minutes before the actual arrival). {\emph{PSP}} was located at 0.5~AU and 20$^{\circ}$ west of the Sun-Earth line. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Palmerio2021.png} \includegraphics[width=0.45\textwidth]{Mostl2021.png} \caption{Left: {\emph{PSP}} {\emph{in situ}} magnetic field and plasma measurements of the 21 Jun. 2020 stealth CME. From top to bottom: magnetic field strength and components (B$_R$, B$_T$ and B$_N$), the $\theta_B$ and $\phi_B$ magnetic field angles, wind speed (V$_P$), proton number density (N$_P$), and proton temperature (T$_P$). The flux rope interval is shaded in grey. Figure adapted from \citet{2021ApJ...920...65P}. Right: {\emph{in situ}} magnetic field data at {\emph{PSP}}, {\emph{BepiColombo}} and {\emph{Wind}}. Solid vertical lines indicate ICME start times, and dashed lines the boundaries of the magnetic obstacle. Figure adapted from \citet{2022ApJ...924L...6M}.
} \label{fig:Palmerio2021} \end{figure} \citet[see Fig.~\ref{fig:Palmerio2021}-right]{2022ApJ...924L...6M} studied the same event using multipoint measurements from {\emph{SolO}}, {\emph{BepiColombo}} \citep{2021SSRv..217...90B} (0.88~AU, -3.9$^{\circ}$), {\emph{PSP}}, {\emph{Wind}} (1.007~AU, -0.2$^{\circ}$), and {\emph{STEREO}}-A (0.96~AU, -69.6$^{\circ}$). The WSA/THUX \citep{2020ApJ...891..165R}, HELCATS, and 3DCORE \citep{2021ApJS..252....9W} models were used to infer the background solar wind in the solar equatorial plane, the height-time plots, and the flux rope, respectively. With the multi-S/C observations, the authors attempted to explain the differences in the {\emph{in~situ}} signatures observed at different locations of a single CME. To accomplish this goal, they modeled the evolution of a circular front shape propagating at a constant speed. The {\emph{in situ}} arrival ICME speeds at {\emph{PSP}} and {\emph{Wind}} were 290~km~s$^{-1}$ and 326~km~s$^{-1}$, respectively. The arrival speed at {\emph{STEREO}}-A was computed using the SSEF30 model described by \citet{2012ApJ...750...23D}. The discrepancies between the observed and predicted arrival times ranged from $-11$ to $+18$ hrs. The authors attributed this to a strong ICME deformation. \paragraph{{\textbf{Event of 29 Nov. 2020: Enc. 6 $-$ {\emph{PSP}} at (0.81~AU, 104$^{\circ}$)}}} $\\$ The CME event of 29 Nov. 2020 has been widely studied and identified as the largest widespread SEP event of solar cycle 25 and the first direct {\emph{PSP}} observation of the interaction of two successive ICMEs \citep{2021A&A...656A..29C, 2021ApJ...919..119M, 2021A&A...656A..20K, 2021ApJ...920..123L, 2021A&A...656L..12M, 2022ApJ...924L...6M, 2022ApJ...930...88N}. During this event, {\emph{PSP}}, {\emph{SolO}}, and {\emph{STEREO}}-A were located at respective radial distances of 0.81~AU, 0.87~AU, and $\sim1$~AU.
As seen from the Earth, they were at longitudinal angular positions of 104$^{\circ}$ east, 115$^{\circ}$ west, and 65$^{\circ}$ east, respectively. The remote sensing observations show that at least four successive CMEs were observed during 24-29 Nov. 2020, although only two were directed toward {\emph{PSP}}. During the SEP event, the particles spread over more than 230$^{\circ}$ in longitude close to 1~AU. \citet{2021A&A...656A..20K} compared the timing of when the EUV wave intersects the estimated magnetic foot-points of the different S/C with the particle release times from time-shift and velocity dispersion analyses. They found that there was no EUV wave related to the event. The PAD and first-order anisotropy studies at the {\emph{SolO}}, {\emph{STEREO}}-A, and {\emph{Wind}} S/C suggest that diffusive propagation processes were involved. \citet{2022ApJ...930...88N} analyzed multi-S/C observations and included different models and techniques, focusing on creating the heliospheric scenario of the CMEs' evolution and propagation and the impact on their internal structure at {\emph{PSP}}. The observations by {\emph{PSP}}, {\emph{STEREO}}-A, and {\emph{Wind}} of type II and III radio burst emissions indicate a significant left-handed polarization, which had never before been detected in that frequency range. The authors identified the period when the interaction/collision between the CMEs took place using the results of reconstructing the event back at the Sun and simulating it with the WSA-ENLIL+Cone and DBM models. They concluded that both ICMEs interacted elastically while flying by {\emph{PSP}}. The impact of such interaction on the internal magnetic structure of the ICMEs was also considered. Both ICMEs were fully characterized and 3D-reconstructed with the GCS, elliptical-cylindrical (EC), and circular-cylindrical (CC) models. The aging and expansion effects were implemented to evaluate the consequences of the interaction on the internal structure.
\citet{2021ApJ...919..119M} investigated key characteristics of the SEP event, such as the time profile and anisotropy distribution of near-relativistic electrons measured by IS${\odot}$IS/EPI-Lo. They observed a brief PAD peak between 40$^{\circ}$ and 90$^{\circ}$, supporting the idea of shock-drift acceleration, and noted that the electron count rate peaks at the time of the shock driven by the faster of the two ICMEs. They concluded that the ICME shock caused the acceleration of electrons and also discussed that the ICMEs show significant electron anisotropies, indicative of the ICMEs' topology and connectivity to the Sun. \citet{2021ApJ...920..123L} studied two characteristics of the shock and their impact on the SEP event intensity: (1) the influence of unrelated solar wind structures, and (2) the role of the sheath region behind the shock. The authors found that, on arrival at {\emph{PSP}}, the SEP event was preceded by an intervening ICME that modified the low-energy ion intensity-time profile and energy spectra. The low-energy ($\lesssim$220~keV) protons accelerated by the shock were excluded from the first ICME, resulting in the observation of inverted energy spectra during its passage. \citet{2021A&A...656A..29C} analyzed the ion spectra during both the decay of the event (where the data are the most complete for H and He) and integrated over the entire event (for O and Fe). They found that the spectra follow a power law multiplied by an exponential, with roll-over energies that decrease with the species' increasing rigidities. These signatures are typically found in SEP events where the dominant source is a CME-driven shock, supported by the He/H and Fe/O composition ratios. They also identified signatures in the electron spectrum that may suggest the presence of a population trapped between the ICMEs, and pointed out the possibility that the ICMEs were interacting at the time of observation by noting a local ion population with energies up to $\sim1$~MeV.
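The spectral shape referred to here, a power law with an exponential rollover, is commonly written in the Ellison-Ramaty form
\[
\frac{dJ}{dE} = C\, E^{-\gamma}\, \exp\!\left(-E/E_{0}\right),
\]
where $\gamma$ is the power-law index and $E_{0}$ the rollover energy; in this event, $E_{0}$ was found to decrease with the increasing rigidity of the species, as expected for shock-associated SEP spectra.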
The SEP intensities dropped significantly during the passage of the MFR and returned to high values once {\emph{PSP}} crossed out of the magnetic structure. \citet{2021A&A...656L..12M} compared detailed measurements of heavy ion intensities, time dependence, fluences, and spectral slopes with 41 events surveyed by \citet{2017ApJ...843..132C} from previous solar cycles. They concluded that an interplanetary shock passage could explain the observed signatures. The observed Fe/O ratios dropped sharply above $\sim$1~MeV~nucleon$^{-1}$ to values much lower than the SEP survey average up to a few MeV~nucleon$^{-1}$, and the $^3$He/$^4$He ratio was $<0.03$\% at {\emph{ACE}} and $<1$\% at {\emph{SolO}}. For further details on this SEP event, see the discussion in \S\ref{EPsCMENov}. The second ICME hitting {\emph{PSP}} was also analyzed by \citet{2022ApJ...924L...6M}. The authors combined coronagraph images from {\emph{SOHO}} and {\emph{STEREO}} and applied the GCS model to obtain the ICME geometric and kinematic parameters, computing an average speed of 1637~km~s$^{-1}$ at heliodistances ranging from 6 to 14~$R_{\odot}$. The ICME arrived at {\emph{PSP}} (0.80~AU and $-96.8^{\circ}$) on 1 Dec. 2020, at 02:22 UT and at {\emph{STEREO}}-A (0.95~AU and $-57.6^{\circ}$) on 1 Dec. 2020, at 07:28 UT. They also considered this event an excellent example of the background wind's influence on the possible deformation and evolution of a fast CME and on the longitudinal extension of a high-inclination flux rope. \subsection{Magnetic Flux Ropes}\label{7_mfr} The {\emph{in situ}} solar wind measurements show coherent and clear rotations of the magnetic field components at different time scales. These magnetic structures are well known as MFRs. According to their durations and sizes, MFRs are categorized as LFRs \citep[few hours to few days;][]{2014SoPh..289.2633J} and SFRs \citep[tens of minutes to a few hours;][]{2000GeoRL..27...57M}.
At 1~AU, it has been found that 30\% to 60\% of the large-scale MFRs are related to CMEs \citep[][]{1990GMS....58..343G,2010SoPh..264..189R}. This subset of MFRs is known as MCs \citep{1988JGR....93.7217B}. On the other hand, the SFRs' origin is not well understood. Several studies proposed that SFRs are produced in the near vicinity of the Sun, while others consider turbulence a potential SFR source \citep[{\emph{e.g.}},][]{2019ApJ...881L..11P} or propose that SFRs are related to, and originate from, SBO-CMEs. It is worth noting that observations suggest that SBO-CMEs last a few hours, a time scale that falls in the SFR category. To identify SFRs, \citet{2020ApJS..246...26Z} analyzed the magnetic field and plasma data from {\emph{PSP}}'s first orbit from 22 Oct. to 21 Nov. 2018. They identified 40 SFRs by following the method described by \citet{2012ApJ...751...19T}. They applied a Morlet wavelet analysis technique and estimated SFR durations ranging from 8 to 300 minutes. This statistical analysis suggests that the SFRs are primarily found in the slow solar wind and that their possible source is MHD turbulence. For the third and fourth orbits, they identified a total of 21 and 34 SFRs, respectively \citep{2021A&A...650A..12Z}, and examined their relation to the streamer belt and HCS crossings. Alternatively, \citet{2020ApJ...903...76C} identified 44 SFRs by implementing an automated detection method based on the Grad-Shafranov reconstruction technique \citep{2001GeoRL..28..467H,2002JGRA..107.1142H} over the {\emph{in situ}} measurements at a 28-second cadence. They looked for the double-folding pattern in the relation between the transverse pressure and the axial component of the magnetic vector potential and removed highly Alfv\'enic structures with a threshold condition on a Wal\'en test. The SFRs were identified during the first two {\emph{PSP}} Encs. over the periods 31 Oct. $-$ 19 Dec. 2018 ($\sim0.26-0.81$~AU) and 7 Mar.
$-$ 15 May 2019 ($\sim0.66-0.78$~AU), with durations ranging from 5.6 to 276.3 min. They found that the monthly counts at {\emph{PSP}} (27 per month) are notably lower than the average monthly counts at {\emph{Wind}} (294 at 1~AU). The authors also noticed that some of the detected SFRs are related to magnetic reconnection processes \citep[two reported by][]{2020ApJS..246...34P} and the HCS \citep[three reported by][]{2020ApJS..246...47S}. They argue that the SFR occurrence rate (being far less than at 1~AU) and a power-law tendency of the size scales point towards an SFR origin in MHD turbulence, but note that the number of events analyzed is not sufficient to yield a statistically significant result. Twelve SFRs were also identified with the method proposed by \citet{2020ApJS..246...26Z}, with similar durations and two cases of opposite helicity. \subsection{Remote Sensing}\label{7_rs} \subsubsection{Introduction} \label{InstIntro} {\emph{PSP}}/WISPR is a suite of two white-light telescopes akin to the heliospheric imagers \citep[HI-1 and HI-2;][]{2009SoPh..254..387E} of {\emph{STEREO}}/SECCHI \citep{2008SSRv..136....5K}. The WISPR telescopes look off to the ram side of the S/C ({\emph{i.e.}}, in the direction of motion of {\emph{PSP}} in its counter-clockwise orbit about the Sun). When {\emph{PSP}} is in its nominal attitude ({\emph{i.e.}}, Sun-pointed and ``unrolled''), their combined FOV covers the interplanetary medium on the west side of the Sun, starting at a radial elongation of about $13.5^\circ$ from the Sun and extending up to about $108^\circ$. The FOV of WISPR-i extends up to $53.5^\circ$, while the FOV of WISPR-o starts at $50^\circ$ elongation, both telescopes covering about $40^\circ$ in latitude. Since the angular size of the Sun increases as {\emph{PSP}} gets closer to the Sun, the radial offset from Sun center represents different distances in units of solar radii. For example, on 24 Dec.
2024, at the closest approach of {\emph{PSP}} of 0.046~AU ($9.86~R_\odot$), the offset of $13.5^\circ$ will correspond to $\sim2.3~R_\odot$. \subsubsection{Streamer Imaging with WISPR} \citet{2019Natur.576..232H} reported on the first imaging of the solar corona taken by WISPR during {\emph{PSP}}'s first two solar Encs. ($0.16-0.25$~AU). The imaging revealed that both the large- and small-scale topology of streamers can be resolved by WISPR and that the temporal variability of the streamers can be clearly isolated from spatial effects when {\emph{PSP}} is corotating with the Sun. \citet{2020ApJS..246...60P}, by exploiting synoptic maps based on sequential WISPR images, revealed the presence of multiple substructures (individual rays) inside streamers and pseudostreamers. This substructure of the streamers was noted in other studies \citep{2006ApJ...642..523T, 2020ApJ...893...57M}. Noteworthy in the WISPR synoptic maps was the identification of a bright and narrow set of streamer rays located at the core of the streamer belt \citep{2020ApJS..246...60P,2020ApJS..246...25H}. The thickness of this bright region matches the thickness of the heliospheric plasma sheet (HPS) measured in the solar wind (up to 300~Mm) at times of sector boundary crossings \citep{1994JGR....99.6667W}. Thus, WISPR may offer the first clear-cut connection between coronal imaging of streamers and the {\emph{in situ}} measurements of the rather narrow HPS. Global PFSS and MHD models of the solar corona during the {\emph{PSP}} Encs. generally agree with the large-scale structure inferred from remote sensing observations \citep[{\emph{e.g.}},][]{2019ApJ...874L..15R,2020ApJS..246...60P}. As noted above, they have been used to interpret streamer sub-structure \citep{2020ApJS..246...60P} observed in WISPR observations, as well as during eclipses \citep{2018NatAs...2..913M}.
Equally importantly, they have been used to connect remote solar observations with their {\emph{in~situ}} counterparts (\S\ref{LSSSWHCS}). Comparisons with white-light and, more importantly, emission images provide crucial constraints for models that include realistic energy transport processes \citep{2019ApJ...872L..18V,2019ApJ...874L..15R}. They have already led to the improvement of coronal heating models \citep{2021A&A...650A..19R}, resulting in better matches with {\emph{in~situ}} measurements during multiple {\emph{PSP}} Encs. Images taken from a vantage point situated much closer to the Sun provide more detailed information on the population of transient structures released continually by helmet streamers. The fine-scale structure of streamer blobs is better resolved by WISPR than previous generations of heliospheric imagers. In addition, the WISPR images have revealed that small-scale transients, with aspects that are reminiscent of magnetic islands and/or twisted 3D magnetic fields, are emitted at scales smaller than those of streamer blobs \citep{2019Natur.576..232H}. These very small flux ropes were identified {\emph{in situ}} as common structures during crossings of the HPS \citep{2019ApJ...882...51S} and more recently at {\emph{PSP}} \citep{2020ApJ...894L..19L}. They may also relate to the quasi-periodic structures detected by \citet{2015ApJ...807..176V} -- on-going research is evaluating this hypothesis. Recent MHD simulations have shown that the flux ropes observed in blobs and the magnetic islands between quasi-periodic increases in density could result from a single process known as the tearing-mode instability as the HCS is stretched by the adjacent out-flowing solar wind \citep{2020ApJ...895L..20R}. \subsubsection{Coronal Mass Ejection Imaging with WISPR} \begin{figure*} \centering \includegraphics[width=\textwidth]{RemoteSensing_hess.png} \caption{Multi-S/C observations of the first CME imaged by WISPR, on 1 Nov. 2018.
(a) {\emph{SDO}}/AIA imaging of the CME. Different features are visible in each panel, including the dark, circular cavity (193 \AA; 1.6 MK and 20 MK), the bright trailing edge (131 \AA; 0.4 MK and 10 MK), a bright blob that is co-spatial with the cavity (171 \AA; 0.6 MK) and the prominence at the base of the eruption (304 \AA; 0.05 MK). The black line in the 193 \AA \ frame was used to calculate the size of the cavity. The white line in the 171 \AA \ frame is the approximate direction of motion and was used to measure the height and calculate the velocity of the cavity in AIA. (b) The CME as seen by the {\emph{SOHO}}/LASCO-C2 and -C3 coronagraphs. (c) The CME as seen by both {\emph{PSP}}/WISPR telescopes. The white line denotes the solar equatorial plane. The curvature of the line in WISPR-o is due to the distortion of the detector. Figure adapted from \citet{2020ApJS..246...25H}.} \label{FIG_NOV1} \end{figure*} Within a few hours of being turned on in preparation for the first {\emph{PSP}} perihelion passage, the WISPR imager observed its first CME. The WISPR cameras began taking science data at 00:00 UT on 1 Nov. 2018. By 11:00 UT a CME was visible in the inner telescope \citep{2020ApJS..246...25H}. Over the course of the next two days, the CME propagated along the solar equatorial plane through the FOVs of both WISPR telescopes, spanning $13.5^{\circ}-108.5^{\circ}$, with a speed of about 300~km~s$^{-1}$, consistent with SBO-CMEs \citep{2018ApJ...861..103V}. The WISPR observations are included in Fig.~\ref{FIG_NOV1}. The CME was also observed from the Earth perspective by the {\emph{SDO}}/AIA EUV imager and the {\emph{SOHO}}/LASCO coronagraphs. In AIA, a small prominence was observed beneath a cavity, which slowly rose from the west limb in a non-radial direction. The cavity and prominence are both visible in the left panel of Fig.~\ref{FIG_NOV1}.
As this structure enters the LASCO-C2 FOV, the cavity remains visible, as does a bright claw-like structure at its base, as seen throughout the middle panel of Fig.~\ref{FIG_NOV1}. The non-radial motion continues until the CME reaches the boundary of an overlying helmet streamer, at which point the CME is deflected out through the streamer along the solar equatorial plane. Because of the alignment of the S/C at the time of the eruption, WISPR was able to see the CME from a similar perspective to that of LASCO, but from a quarter of the distance. The inner FOV from WISPR was within the LASCO-C3 FOV, meaning that for a brief time WISPR and C3 observations were directly comparable. These direct comparisons demonstrate the improved resolution possible, even in a weaker event, from a closer observational position. This can be seen directly in Fig.~\ref{FIG_NOV1} in the LASCO frame at 17:16 UT and the WISPR-i frame at 17:15~UT. The clarity of the observations of the CME cavity in WISPR allowed for tracking of the cavity out to $40~R_\odot$, as well as detailed modeling of the internal magnetic field of the CME \citep{2020ApJS..246...72R}. Both studies would have been impossible without the details provided by WISPR imaging.
The red circles indicate the location of {\emph{PSP}} at these two times, and the size of the Sun is to scale. Figure adapted from \citet{2020ApJS..246...28W}.} \label{Wood2020Fig1} \end{figure*} Another transient observed by WISPR during the first {\emph{PSP}} perihelion passage was a small eruption seen by the WISPR-i detector on 5 Nov. 2018, only a day before {\emph{PSP}}'s first close perihelion passage \citep{2020ApJS..246...28W}. As shown in Fig.~\ref{Wood2020Fig1}(a), the LASCO/C3 coronagraph on board {\emph{SOHO}} observed two small jet-like eruptions on that day, with the northern of the two (red arrow) corresponding to the one observed by WISPR. The appearance of the event from 1~AU is very consistent with the class of small transients called ``streamer blobs'' \citep{1997ApJ...484..472S,1998ApJ...498L.165W}, although it is also listed in catalogs of CMEs compiled from {\emph{SOHO}}/LASCO data, and so could also be described as a small CME. At the time of the CME, {\emph{PSP}} was located just off the right side of the LASCO/C3 image in Fig.~\ref{Wood2020Fig1}a, lying almost perfectly in the C3 image plane. The transient's appearance in WISPR images is very different from that provided by the LASCO/C3 perspective, as WISPR was so much closer to both the Sun and the event itself. This is the first close-up image of a streamer blob. In the WISPR images in Fig.~\ref{Wood2020Fig1}b, the transient is not jet-like at all. Instead, it looks very much like a flux rope, with two legs stretching back toward the Sun, although one of the legs of the flux rope mostly lies above the WISPR FOV. This leg basically passes over {\emph{PSP}} as the transient moves outward.
A 3D reconstruction of the flux rope morphology of the transient is shown in Fig.~\ref{Wood2020Fig1}c, based not only on the LASCO/C3 and {\emph{PSP}}/WISPR data, but also on images from the COR2 coronagraph on {\emph{STEREO}}-A, making this the first CME reconstruction performed based on images from three different perspectives that include one very close to the Sun. Although the event is typical of streamer blobs in appearance, a kinematic analysis of the 5 Nov. 2018 event reveals that it has a more impulsive acceleration than previously studied blobs. \subsubsection{Analysis of WISPR Coronal Mass Ejections} \label{intro} The rapid, elliptical orbit of {\emph{PSP}} presents new challenges for the analysis of the white light images from WISPR due to the changing distance from the Sun. While the FOVs of WISPR’s two telescopes are fixed in angular size, the physical size of the coronal region imaged changes dramatically, as discussed in \citet{2019SoPh..294...93L}. In addition, because of {\emph{PSP}}’s rapid motion in solar longitude, the projected latitude of a feature changes, as seen by WISPR, even if the feature has a constant heliocentric latitude. Because of these effects, techniques used in the past for studying the kinematics of solar ejecta may no longer be sufficient. The motion observed in the images is now a combination of the motion of the ejecta and of the S/C. On the other hand, the rapid motion gives multiple viewpoints of coronal features, and this can be exploited using triangulation. Prior to launch, synthetic white light WISPR images, created using ray-tracing software \citep{2009SoPh..256..111T}, were used to develop new techniques for analyzing observed motions of ejecta. \citet{2020SoPh..295...63N} performed extensive studies of the evolution of the brightness due to the motion of both the S/C and the feature.
They concluded that the total brightness evolution could be exploited to obtain a more precise triangulation of the observed features than might be possible otherwise. \begin{figure} \includegraphics[width=\textwidth]{RemoteSensing_PCL1.png} \caption{WISPR-i running-difference images at two times for the CME of 2 Apr. 2019 showing the tracked feature, the lower dark ``eye'' (marked with red X's). The image covers approximately $13.5^{\circ} - 53.0^{\circ}$ elongation from the Sun center. The streaks seen in the images are due to reflections off debris created by dust impacts on the {\emph{PSP}} S/C.} \label{figPCL1} \end{figure} \smallskip \paragraph{{\textbf{Tracking and Fitting Technique for Trajectory Determination}}} $\\$ \citet{2020SoPh..295..140L} developed a technique for determining the trajectories of CMEs and other ejecta that takes into account the rapid motion of {\emph{PSP}}. The technique assumes that the ejecta, treated as a point, moves radially at a constant velocity. It builds on techniques developed for the analysis of J-maps \citep{1999JGR...10424739S} created from LASCO and SECCHI white light images. For ejecta moving radially at a constant velocity in a heliocentric frame, there are four trajectory parameters: longitude, latitude, velocity and radius (distance from the Sun) at some time $t_0$. Viewed from the S/C, the ejecta is seen to change position in a time sequence of images. The position in the image can be defined by two angles that specify the telescope’s LOS at that pixel location. We use a projective Cartesian observer-centric frame of reference that is defined by the Sun-{\emph{PSP}} vector and the {\emph{PSP}} orbit plane. One angle ($\gamma$) measures the angle from the Sun parallel to the {\emph{PSP}} orbit plane and the second angle ($\beta$) measures the angle out of the orbit plane. We call this coordinate system the {\emph{PSP}} orbit frame.
Using basic trigonometry, two equations were derived relating the coordinates in the heliocentric frame to those measured in the S/C frame ($\gamma$, $\beta$) as a function of time. The geometry relating the ejecta's coordinates in the two frames is shown in Fig.~1 of \citet{2020SoPh..295..140L} for the case with the inclination of {\emph{PSP}}’s orbit plane w.r.t. the solar equatorial plane neglected. The coordinates of the S/C are $[r_1, \phi_1, 0]$, and the coordinates of the ejecta are $[r_2, \phi_2, \delta_2]$. The two equations are \begin{equation} \frac{\tan\beta (t)}{\sin\gamma (t)} = \frac{\tan\delta_2}{\sin[\phi_2 - \phi_1 (t)]}, \end{equation} \begin{equation} \cot\gamma(t) = \frac{r_1(t) - r_2(t)\cos\delta_2 \cos[\phi_2 - \phi_1(t)]}{r_2(t)\cos\delta_2 \sin[\phi_2 - \phi_1(t)]}. \end{equation} By tracking the point ejecta in a time sequence of WISPR images, we generate a set of angular coordinates $[\gamma(t_i), \beta(t_i)]$ for the ejecta in the {\emph{PSP}} orbit frame. In principle, ejecta coordinates in the S/C frame are only needed at two times to solve the above two equations for the four unknown trajectory parameters. However, we obtain more accurate results by tracking the ejecta in considerably more than two images. The ejecta trajectory parameters in the heliocentric frame are determined by fitting the above equations to the tracking data points $[\gamma(t_i), \beta(t_i)]$. Our fitting technique is described in \citet{2020SoPh..295..140L}, which also gives the equations with the corrections for the inclination of {\emph{PSP}}’s orbit w.r.t. the solar equatorial plane. \begin{figure} \includegraphics[width=0.66\textwidth]{RemoteSensing_PCL2.png} \caption{Trajectory solution for the 2 Apr. 2019 flux rope (magenta arrow) shown in relation to {\emph{PSP}}, {\emph{STEREO}}-A, and Earth at 18:09 UT. The fine solid lines indicate the fields-of-view of the telescopes on {\emph{PSP}} and {\emph{STEREO}}-A.
The CME direction was found to be HCI longitude $= 67^{\circ} \pm 1^{\circ}$. Note that the arrow only indicates the direction of the CME, and it is not meant to indicate the distance from the Sun. The HCI longitudes of the Earth and {\emph{STEREO}}-A are 117$^{\circ}$ and 19$^{\circ}$, respectively. The blue dashed ellipse is {\emph{PSP}}'s orbit. The plot is in the Heliocentric Earth Ecliptic (HEE) reference frame and distances are in AU.} \label{figPCL2} \end{figure} \begin{figure} \includegraphics[width=.66\textwidth]{RemoteSensing_PCL3.png} \caption{Trajectory of the flux rope of 2 Apr. 2019, found from the WISPR data using the tracking and fitting technique, projected to images from {\emph{STEREO}}-A/HI-1 at 18:09 UT. The trajectory from tracking and fitting ({\textcolor{red}{\bf{+}}} signs) is shown from 12:09 to 18:09 UT in hourly increments, as seen from {\emph{STEREO}}-A. The location of the prediction for the last time (18:09 UT) is in good agreement with the location of the tracked feature seen in the HI-1A image, thus verifying the trajectory. The grid lines are the coordinate lines of the WCS frame specified in the HI-1A FITS header. The size and location of the Sun (yellow globe) are shown to scale.} \label{figPCL3} \end{figure} The tracking and fitting technique was applied to a small CME seen by WISPR in the second solar Enc. on 1-2 Apr. 2019 \citep{2020SoPh..295..140L}. Fig.~\ref{figPCL1} shows two of the WISPR images used in the tracking; the feature tracked, an eye-like dark oval, is shown as the red X. The direction of the trajectory found for this CME is indicated with an arrow in Fig.~\ref{figPCL2}, which also shows the location and fields of view of {\emph{STEREO}}-A and {\emph{PSP}}. The trajectory solution in heliocentric inertial (HCI) coordinates was longitude $= 67^{\circ} \pm 1^{\circ}$, latitude $= 6.0 \pm 0.3^{\circ}$, $V = 333 \pm 1$~km~s$^{-1}$, and $r_{2}(t_0) = 13.38 \pm 0.01$\,R$_{\odot}$, where $t_0$ = 12:09 UT on 2 Apr. 2019.
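As an illustrative sketch of this tracking-and-fitting approach (not the published code), the forward model given by Eqs.~(1) and (2) can be inverted for the four trajectory parameters by nonlinear least squares; all orbit and ejecta numbers below are synthetic, noise-free assumptions chosen only to resemble the 2 Apr. 2019 solution:

```python
import numpy as np
from scipy.optimize import least_squares

RSUN_KM, AU_KM = 6.957e5, 1.496e8

def sc_frame_angles(r1, phi1, r2, phi2, delta2):
    """Eqs. (1)-(2): ejecta direction (gamma, beta) in the PSP orbit frame.
    r1, phi1: S/C radius (km) and longitude (rad); r2, phi2, delta2:
    ejecta radius (km), longitude and latitude (rad)."""
    dphi = phi2 - phi1
    # Eq. (2), solved for gamma
    gamma = np.arctan2(r2 * np.cos(delta2) * np.sin(dphi),
                       r1 - r2 * np.cos(delta2) * np.cos(dphi))
    # Eq. (1), solved for beta
    beta = np.arctan(np.tan(delta2) * np.sin(gamma) / np.sin(dphi))
    return gamma, beta

# Synthetic tracking: constant-velocity radial ejecta seen from a moving S/C.
t = np.linspace(0.0, 6.0, 13) * 3600.0            # s since t0
r1 = 0.17 * AU_KM + 20.0 * t                      # assumed S/C radius, km
phi1 = np.deg2rad(60.0) + 2e-6 * t                # assumed S/C longitude, rad

def model(p):
    """p = [r2(t0) in km, v in km/s, phi2 in rad, delta2 in rad]."""
    r2 = p[0] + p[1] * t
    g, b = sc_frame_angles(r1, phi1, r2, p[2], p[3])
    return np.concatenate([g, b])

p_true = np.array([13.38 * RSUN_KM, 333.0,
                   np.deg2rad(67.0), np.deg2rad(6.0)])
obs = model(p_true)                               # the "tracked" angles

# Recover the four trajectory parameters from the angular data alone.
p0 = np.array([12.0 * RSUN_KM, 300.0, np.deg2rad(65.0), np.deg2rad(4.0)])
fit = least_squares(lambda p: model(p) - obs, p0, x_scale='jac')
```

With noise-free synthetic angles the fit recovers the input velocity and direction, illustrating why tracking the feature in many more than the minimal two images stabilizes the solution.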
There were simultaneous observations of the CME from {\emph{STEREO}}-A and {\emph{PSP}}, which enabled us to use the second viewpoint observation of {\emph{STEREO}}-A/HI-1 to verify the technique and the results. This was done by generating a set of 3D trajectory points using the fitting solution above that included the time of the {\emph{STEREO}}-A/HI-1 observations. These trajectory points were then projected onto an image from HI-1A using the World Coordinate System (WCS) information in the Flexible Image Transport System (FITS) image header. This is illustrated in Fig.~\ref{figPCL3}, which shows the trajectory points generated from the solutions from 12:09 to 18:09 UT in hourly increments projected onto the HI-1A image at 18:09 UT on 2 Apr. 2019. Note that the last point, corresponding to the time of the HI-1A image, falls quite near the same feature of the CME that was tracked in the WISPR images (Fig.~\ref{figPCL1}). Thus, the trajectory determined from the WISPR data agrees with the {\emph{STEREO}}-A observations from a second viewpoint. This technique was also applied to the first CME seen by WISPR on 2 Nov. 2018. Details of the tracking and the results are in \citet{2020SoPh..295..140L}, and independent analyses of the CME kinematics and trajectory for the 2 Apr. 2019 event were carried out by \citet{2021A&A...650A..31B} and \citet{2021ApJ...922..234W}, with similar results. \begin{figure} \includegraphics[width=.66\textwidth]{RemoteSensing_PCL4.png} \caption{ Trajectory of the 26-27 Jan. 2020 CME (magenta arrow) in relation to {\emph{PSP}}, {\emph{STEREO}}-A, and Earth, projected in the HEE reference frame on 26 Jan. 2020, at 20:49 UT. The fields-of-view of the WISPR-i and WISPR-o telescopes on {\emph{PSP}} and COR2 and HI-1 on {\emph{STEREO}}-A are indicated by solid lines.
The plot is in the HEE coordinate frame and distances are in AU.} \label{figPCL4} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{RemoteSensing_PCL5.png} \caption{Image pair used to determine the location of the 26-27 Jan. 2020 CME by triangulation. Left: WISPR-o image of 26 Jan. 2020, at 20:49 UT with the selected feature circled in red. Right: Simultaneous HI-1 image showing the location of the same feature identified in the WISPR-o image. The location of the CME found by triangulation of this feature was in excellent agreement with the trajectory found from tracking and fitting (see main text). The HI-1 image is projected in the Helioprojective Cartesian (HPC) system (red and blue grid lines), with the Sun (yellow globe) drawn to scale.} \label{figPCL5} \end{figure} \smallskip \paragraph{{\textbf{WISPR and {\emph{STEREO}} Observations of the Evolution of a Streamer Blowout CME}}} $\\$ WISPR obtained detailed images of the flux rope structure of a CME on 26-27 Jan. 2020. The tracking and fitting procedure was also used here to determine the trajectory. The direction of the trajectory is shown in Fig.~\ref{figPCL4}, along with locations and FOVs of {\emph{STEREO}}-A and WISPR. The trajectory solution parameters are HCI longitude and latitude = ($65^{\circ} \pm 2^{\circ}$, $2^{\circ} \pm 2^{\circ}$), $v = 248 \pm 16$~km~s$^{-1}$ and $r_2(t_0)/R_\odot = 30.3 \pm 0.3$ at $t_0$ = 20:04 UT on 26 Jan. 2020. The CME was also observed by {\emph{STEREO}}-A/COR2 starting on 25 Jan. 2020. The brightening of the streamer before the eruption, the lack of a bright leading edge, and the outflow following the ejecta led to its identification as an SBO-CME \citep{2021A&A...650A..32L}. Data from {\emph{STEREO}}-A/EUVI suggested that this CME originated as a rising flux rope on 23 Jan. 2020, which was constrained in the corona until its eruption on 25 Jan. 2020.
The details of the observations and the supporting data from AIA and the Helioseismic and Magnetic Imager \citep[HMI;][]{2012SoPh..275..207S} on {\emph{SDO}} can be found in \citet{2021A&A...650A..32L}. The direction determined from the tracking and fitting was consistent with this interpretation; the direction was approximately $30^{\circ}$ west of a new AR, which was also a possible source. To verify the CME’s trajectory determined by tracking and fitting, we again made use of simultaneous observations from {\emph{STEREO}}-A, but in this case we used triangulation to determine the 3D location of the CME at the time of a simultaneous image pair. This was only possible because details of the structure of the CME were evident from both viewpoints so that the same feature could be located in both images. Fig.~\ref{figPCL5} shows the simultaneous WISPR-o and HI-1A images of the CME on 26 Jan. 2020 at 20:49 UT, when the S/C were separated by 46$^{\circ}$. The red X in each image marks what we identify as the same feature in both images (a dark spot behind the bright V). Applying a triangulation technique to this image pair, \citet{2021A&A...650A..32L} obtained a distance from the Sun of $r/R_\odot = 31 \pm 2$ and HCI longitude and latitude = ($66^{\circ} \pm 3^{\circ}$, $-2^{\circ} \pm 2^{\circ}$). These angles are in excellent agreement with those found by tracking and fitting given above. The distance from the Sun is also in excellent agreement with the predicted distance at this time of $r_2/R_\odot = 31.2 \pm 0.3$, validating our trajectory solution. Thus the trajectory was confirmed, which further supported our interpretation of the evolution of this slowly evolving SBO-CME. \section{Solar Radio Emission} \label{SRadE} At low frequencies, below $\sim10-20$ MHz, radio emission cannot be observed well from ground-based observatories due to the terrestrial ionosphere.
Solar radio emission at these frequencies consists of radio bursts, which are signatures of the acceleration and propagation of non-thermal electrons. Type II and type III radio bursts are commonly observed, with the former resulting from electrons accelerated at shock fronts associated with CMEs, and the latter from electron beams accelerated by solar flares (see Fig.~\ref{Wiedenbeck2020Fig}C). Solar radio bursts offer information on the kinematics of the propagating source, and are a remote probe of the properties of the local plasma through which the source is propagating. Radio observations on {\emph{PSP}} are made by the FIELDS RFS, which measures electric fields from 10 kHz to 19.2 MHz \citep{2017JGRA..122.2836P}. At frequencies below and near $f_{pe}$, the RFS measurements are dominated by the quasi-thermal noise (QTN). {\emph{PSP}} launched at solar minimum, when the occurrence rate of solar radio bursts is relatively low. Several {\emph{PSP}} Encs. (Enc.~1, Enc.~3, Enc.~4) near the start of the mission were very quiet in radio, containing only a few small type III bursts. The second {\emph{PSP}} Enc. (Enc.~2), in Apr. 2019, was a notable exception, featuring multiple strong type III radio bursts and a type III storm \citep{2020ApJS..246...49P}. As solar activity began rising in late 2020 and 2021, with Encs.~5 and beyond, the occurrence of radio bursts has also increased. Taking advantage of the quiet Encs. near the start of the mission, \citet{ChhabraThesis} applied a correlation technique developed by \citet{2013ApJ...771..115V} for imaging data to RFS light curves, searching for evidence of heating of the corona by small-scale nanoflares which are too faint to appear to the eye in RFS spectrograms. During {\emph{PSP}} Encs., the cadence of RFS spectra is typically 3.5 or 7~s, higher than the typical cadence of radio spectra available from previous S/C such as {\emph{STEREO}} and {\emph{Wind}}.
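Both the ground-based observing limit and the QTN-dominated regime of RFS trace back to the electron plasma frequency, for which the standard rule of thumb is $f_{pe}\,[\mathrm{kHz}] \approx 8.98\sqrt{n_e\,[\mathrm{cm^{-3}}]}$. A short numerical check (the density values are round-number assumptions for illustration, not measurements):

```python
import math

def f_pe_khz(n_e_cm3):
    """Electron plasma frequency in kHz, f_pe ~ 8.98 * sqrt(n_e [cm^-3])."""
    return 8.98 * math.sqrt(n_e_cm3)

# Assumed ionospheric F-layer peak density of ~1e6 cm^-3: cutoff near
# 9 MHz, consistent with the ~10-20 MHz ground-based observing limit.
print(f"{f_pe_khz(1e6) / 1e3:.2f} MHz")

# An assumed solar wind density of ~300 cm^-3 near 0.17 AU puts f_pe
# around 150 kHz, well inside the RFS band (10 kHz - 19.2 MHz), where
# measurements below and near f_pe are dominated by QTN.
print(f"{f_pe_khz(300.0):.0f} kHz")
```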
The relatively high cadence of RFS data is particularly useful in the study of type III radio bursts above 1 MHz (in the RFS High Frequency Receiver [HFR] range), which typically last $\lesssim1$~minute at these frequencies. Using the HFR data enabled \citet{2020ApJS..246...49P} to measure circular polarization near the start of several type III bursts in Enc.~2. \citet{2020ApJS..246...57K} characterized the decay times of type III radio bursts up to 10 MHz, observing increased decay times above 1 MHz compared to extrapolation using previous measurements from {\emph{STEREO}}. Modeling suggests that these decay times may correspond to increased density fluctuations near the Alfv\'en point. Recent studies have used RFS data to investigate basic properties of type III bursts, in comparison to previous observations and theories. \cite{2021ApJ...915L..22C} examined a single Enc.~2 type IIIb burst featuring fine structure (striae) in detail. They found consistency between RFS observations and results of a model with emission generated via the electron cyclotron maser instability \citep{2004ApJ...605..503W}, over the several-MHz frequency range corresponding to solar distances where $f_{ce} > f_{pe}$. \cite{2021ApJ...913L...1M} performed a statistical survey of the lower cutoff frequency of type III bursts using the first five {\emph{PSP}} Encs., finding a higher average cutoff frequency than previous observations from {\emph{Ulysses}} and {\emph{Wind}}. They proposed several explanations for this discrepancy, including solar cycle and event selection effects. The launch of {\emph{SolO}} in Feb. 2020 marked the first time four S/C ({\emph{Wind}}, {\emph{STEREO}}-A, {\emph{PSP}}, and {\emph{SolO}}) with radio instrumentation were operational in the inner heliosphere.
\cite{2021A&A...656A..34M} combined observations from these four S/C along with a model of the burst emission to determine the directivity of individual type III radio bursts, a measurement previously only possible using statistical analysis of large numbers of bursts. \section{Energetic Particles} \label{EPsRad} The first four years of the {\emph{PSP}} mission have provided key insights into the acceleration and transport of energetic particles in the inner heliosphere and have enabled a comprehensive understanding of the variability of solar radio emission. {\emph{PSP}} observed a multitude of solar radio emissions, SEP events, CMEs, CIRs and SIRs, inner heliospheric anomalous cosmic rays (ACRs), and energetic electron events, all of which are critical for exploring the fundamental physics of particle acceleration and transport in the near-Sun environment and throughout the universe. \subsection{Solar Energetic Particles} \label{SEPs} On 2 and 4 Apr. 2019, {\emph{PSP}} observed two small SEP events \citep{2020ApJS..246...35L, 2020ApJ...899..107K, 2020ApJ...898...16Z} while at $\sim0.17$~AU (Fig.~\ref{Leske2020Fig}). The event on 4 Apr. 2019 was associated with both a type III radio emission seen by {\emph{PSP}} and surges in the EUV observed by {\emph{STEREO}}-A, all of which determined the source was an AR $\sim80^{\circ}$ east of the {\emph{PSP}} footpoint \citep{2020ApJS..246...35L}. To better understand the origin of these SEP events, \citet{2020ApJ...899..107K} conducted a series of simulations constrained by remote sensing observations from {\emph{SDO}}/AIA, {\emph{STEREO}}-A/EUVI and COR2, {\emph{SOHO}}/LASCO, and {\emph{PSP}}/WISPR to determine the magnetic connectivity of {\emph{PSP}}, model the 3D structure and evolution of the EUV waves, investigate possible shock formation, and connect these simulations to the SEP observations. This robust simulation work suggests that the SEP events were from multiple ejections from AR 12738. The 2 Apr.
2019 event likely originated from two ejections that formed a shock in the lower corona \citep{2020ApJ...899..107K}. Meanwhile, the 4 Apr. 2019 event was likely the result of a slow SBO, which reconfigured the global magnetic topology to be conducive for transport of solar particles away from the AR toward {\emph{PSP}}. Interestingly, however, \citet{2020ApJS..246...35L} did not observe \textsuperscript{3}He for this event, as would be expected from flare-related SEPs. \citet{2020ApJ...898...16Z} explained the gradual rise of the 4 Apr. 2019 low-energy H\textsuperscript{+} event compared to the more energetic enhancement on 2 Apr. 2019 as being indicative of different diffusion conditions. \begin{figure*} \centering \includegraphics[width=\textwidth]{Leske2020Fig.jpg} \caption{IS$\odot$IS/EPI-Lo time-of-flight measurements for the two SEP events on 2 Apr. 2019 (DOY 92) and 4 Apr. 2019 (DOY 94) are shown in green, blue, and red for the stated energies. IS$\odot$IS/EPI-Hi/LET1 observations are shown in black. Figure adapted from \citet{2020ApJS..246...35L}.} \label{Leske2020Fig} \end{figure*} The same AR (AR 12738) was later responsible for a \textsuperscript{3}He-rich SEP event on 20-21 Apr. 2019 observed by {\emph{PSP}} at $\sim0.46$~AU that was also measured by {\emph{SOHO}} at $\sim1$~AU (shown in Fig.~\ref{Wiedenbeck2020Fig}) \citep{2020ApJS..246...42W}. This SEP event was observed along with type III radio bursts and helical jets. The \textsuperscript{3}He/\textsuperscript{4}He ratios at {\emph{PSP}} and {\emph{SOHO}} were $\sim250$ times the nominal solar wind ratio; such large enhancements are often seen in impulsive SEP events. This event demonstrated the utility of IS$\odot$IS/EPI-Hi to contribute to our understanding of the radial evolution of \textsuperscript{3}He-rich SEP events, which can help constrain studies of potential limits on the amount of \textsuperscript{3}He that can be accelerated by an AR \citep[{\emph{e.g.}},][]{2005ApJ...621L.141H}.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Wiedenbeck2020Fig.jpg} \caption{Remote and {\emph{in situ}} observations for the 20-21 Apr. 2019 \textsuperscript{3}He-rich SEP event. (a) Jet onset times and CME release times as reported by \citet{2020ApJS..246...33S}, (b) 5-min 0.05-0.4 nm (blue) and $0.1-0.8$ nm (red) X-ray flux from the Geostationary Operational Environmental Satellite ({\emph{GOES}}), (c) radio emissions from {\emph{Wind}}/WAVES \citep{1995SSRv...71..231B}, (d) electron fluxes for 53 (black), 79 (red), and 133 (blue) keV from the {\emph{ACE}} Electron Proton Alpha Monitor \citep[EPAM;][]{1998SSRv...86..541G}, (e) velocity dispersion with red line indicating the dispersion slope from the {\emph{ACE}} Ultra Low Energy Isotope Spectrometer \citep[ULEIS;][]{1998SSRv...86..409M}, (f) {\emph{ACE}}/ULEIS 1 MeV He flux, (g) {\emph{ACE}}/ULEIS He mass vs. time, and (h) {\emph{PSP}}/IS$\odot$IS/EPI-Hi mass vs. time. Grey boxes in panel (h) indicate times without IS$\odot$IS observations. Figure adapted from \citet{2020ApJS..246...42W}.} \label{Wiedenbeck2020Fig} \end{figure*} \citet{2021A&A...650A..23C} later investigated the helium content of six SEP events from May to Jun. 2020 during the fifth orbit of {\emph{PSP}}. These events demonstrated that SEP events, even from the same AR, can have significantly different \textsuperscript{3}He/\textsuperscript{4}He and He/H ratios. Additionally, EUV and coronagraph observations of these events suggest that the SEPs were accelerated very low in the corona. Using velocity-dispersion analysis, \citet{2021A&A...650A..26C} concluded that the path length of these SEP events to the source was $\sim0.625$~AU, greatly exceeding that of a simple Parker spiral. 
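The velocity-dispersion analysis behind such path-length estimates amounts to a linear fit of onset time against inverse particle speed, $t_{\rm onset}(E) = t_{\rm rel} + L/v(E)$, where the slope is the path length $L$ and the intercept the release time. A minimal sketch with synthetic, noise-free onsets (the energy channels and setup are illustrative assumptions, not the published data):

```python
import numpy as np

AU_KM = 1.496e8
MP_MEV = 938.272                 # proton rest energy, MeV

def speed_kms(E_MeV):
    """Relativistic proton speed (km/s) for kinetic energy E in MeV."""
    gamma = 1.0 + E_MeV / MP_MEV
    return 2.998e5 * np.sqrt(1.0 - 1.0 / gamma**2)

# Synthetic onset times for an assumed 0.625 AU path length:
E = np.array([1.0, 2.0, 5.0, 10.0, 30.0])        # MeV, illustrative channels
L_true = 0.625 * AU_KM                            # km
t_onset = L_true / speed_kms(E)                   # s after release

# Linear fit of onset time vs. inverse speed: slope = path length.
slope, intercept = np.polyfit(1.0 / speed_kms(E), t_onset, 1)
print(slope / AU_KM)             # recovers ~0.625 AU
```

With real data the scatter of onsets about this line, and any energy dependence of the effective path, limit the precision of both the path length and the inferred release time.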
To explain the large path length of these particles, \citet{2021A&A...650A..26C} developed an approach to estimate how the random-walk of magnetic field lines could affect the particle path length, which explained well the path length computed from the velocity-dispersion analysis. During the first orbit of {\emph{PSP}}, shortly after the first perihelion pass, a CME was observed locally at {\emph{PSP}}, which was preceded by a significant enhancement in SEPs with energies below a few hundred keV/nuc \citep{2020ApJS..246...29G,2020ApJS..246...59M}. The CME was observed to cross {\emph{PSP}} on 12 Nov. 2018 (DOY 316), and at this time, {\emph{PSP}} was approximately 0.23~AU from the Sun. The CME was observed remotely by {\emph{STEREO}}-A which was in a position to observe the CME erupting from the east limb of the Sun (w.r.t. {\emph{STEREO}}-A) and moving directly towards {\emph{PSP}}. {\emph{PSP}} was on the opposite side of the Sun relative to Earth. Through analysis of {\emph{STEREO}}-A coronagraph images, the speed of the CME was determined to be 360~km~s$^{-1}$, which is slower than typical SEP-producing CMEs seen by S/C near 1~AU. Moreover, in the few days that preceded the CME, there were very few energetic particles observed, representing a very quiet period. Thus, this represented a unique observation of energetic particles associated with a slow CME near the Sun. Fig.~\ref{Giacalone_2020} shows a multi-instrument, multi-panel plot of data collected during this slow-CME/SEP event. Fig.~\ref{Giacalone_2020}a shows the position of the CME as a function of time based on {\emph{STEREO}}-A observations as well as {\emph{PSP}} (the cyan point), while the lower panels (Fig.~\ref{Giacalone_2020}e-f) show $30-300$~keV energetic particles from the IS$\odot$IS/EPI-Lo instrument.
The CME was observed to erupt and move away from the Sun well before the start of the SEP event, but the SEP intensities rose from the background, peaked, and then decayed before the CME crossed {\emph{PSP}}. There was no shock observed locally at {\emph{PSP}}, and there is no clear evidence of local acceleration of SEPs at the CME crossing. It was suggested by \citet{2020ApJS..246...29G} that the CME briefly drove either a weak shock or a plasma compression when it was closer to the Sun, and was capable of accelerating particles which then propagated ahead of the CME and were observed by {\emph{PSP}}. In fact, modeling of the CME and local plasma parameters, also presented in this paper, suggested there may have been a weak shock over parts of the (modeled) CME-shock surface, but it is not clear whether {\emph{PSP}} was magnetically connected to these locations. The energetic particle event was characterized by a clear velocity dispersion in which higher-energy particles arrived well before the lower-energy particles. Moreover, the time-intensity profiles at specific energies, seen in Fig.~\ref{Giacalone_2020}e, show a relatively smooth rise from the background to the peak, and a gradual decay. The particles were observed to be initially anisotropic, moving radially away from the Sun, but at the peak of the event were observed to be more isotropic. \citet{2020ApJS..246...29G} interpreted this in terms of the diffusive transport of particles accelerated by the CME starting about the time it was at 7.5~$R_\odot$ and continuing with time but with a decreasing intensity. They used a diffusive-transport model and fit the observed time-intensity profiles, which gave values for the scattering mean-free path, parallel to the magnetic field, of $0.04-0.09$~AU for $30-100$~keV protons at the location of {\emph{PSP}}.
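The diffusive-transport picture invoked here can be illustrated with the impulsive point-injection solution of the diffusion equation, $n(r,t)\propto t^{-3/2}\exp[-r^2/(4\kappa t)]$, where the diffusion coefficient is related to the scattering mean-free path by $\kappa=\lambda v/3$. The sketch below uses hypothetical numbers within the quoted mean-free-path range, not the fitted model of the study:

```python
import numpy as np

def diffusive_profile(t_s, r_km, kappa_km2_s):
    """Intensity vs. time at distance r for an impulsive point injection
    undergoing isotropic spatial diffusion: n ~ t^{-3/2} exp(-r^2/4*kappa*t)."""
    t = np.asarray(t_s, dtype=float)
    return t**-1.5 * np.exp(-r_km**2 / (4.0 * kappa_km2_s * t))

# Illustrative numbers (not the fitted values of the study): a 0.06 AU
# parallel mean free path for ~50 keV protons observed at r = 0.23 AU.
au = 1.495978707e8            # km
lam = 0.06 * au               # scattering mean-free path, km
v = 3094.0                    # ~50 keV proton speed, km/s
kappa = lam * v / 3.0         # parallel diffusion coefficient, km^2/s
r = 0.23 * au
t = np.linspace(1e3, 4e5, 100000)
profile = diffusive_profile(t, r, kappa)
t_peak = t[np.argmax(profile)]
# For this solution the peak occurs at t_peak = r^2 / (6 kappa), so an
# earlier, sharper peak corresponds to a longer mean-free path.
```

This reproduces the qualitative behavior described in the text: a smooth rise from background to a peak followed by a gradual decay.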
\begin{figure*} \centering \includegraphics[width=\textwidth]{Giacalone_etal_Fig1.pdf} \caption{Multi-instrument data for a CME and SEP event observed by {\emph{PSP}} and {\emph{STEREO}}-A. (a) shows the heliocentric distance of the CME, (b-c) show solar wind density and solar wind speed from the SWEAP instrument, (d) shows the vector and magnitude of the magnetic field from the FIELDS instrument, and (e-f) show data from the IS$\odot$IS/EPI-Lo instrument. Figure adapted from \citet{2020ApJS..246...29G} where further details are provided.} \label{Giacalone_2020} \end{figure*} Another important feature of this event was the generally steep energy spectrum of the low-energy ions. This suggested a very weak event. In the comparison between the model used by \citet{2020ApJS..246...29G} and the observations, it was found that a source spectrum, assumed to be at the CME when it was close to the Sun, had an approximately $E^{-5.5}$ power-law dependence. At the time, this was the closest to the Sun that a CME-related SEP event had been observed {\emph{in situ}}. \citet{2020ApJS..246...29G} used their diffusive transport model to estimate the total fluence that this event would have had at 1~AU, in order to compare with previous observations of CME-related SEP events. It was determined that this event would have been so weak that it would not even appear on a figure showing a wide range of values of the SEP fluence as a function of CME speed produced by \citet{2001JGR...10620947K}. \citet[][]{2020ApJS..246...59M} presented a separate analysis of this same CME-SEP event that suggested an alternative acceleration mechanism. They noted that since {\emph{PSP}} did not observe a shock locally, and modeling of the CME suggested it may not have ever driven a shock, the acceleration mechanism was not likely diffusive shock acceleration.
Instead, they suggested it may be similar to that associated with aurora in planetary magnetospheres \cite[{\emph{e.g.}},][and references therein]{2009JGRA..114.2212M}. This study focused on two important observed aspects: the velocity dispersion profile and the composition of the SEP event. In the proposed mechanism, which was referred to as ``the pressure cooker'' \cite[{\emph{e.g.}},][]{1985JGR....90.4205G}, energetic particles are confined below the CME in the solar corona in a region bound by an electric potential above and strong magnetic fields below. The electric field is the result of strong field-aligned electric currents associated with distorted magnetic fields and plasma flow, perhaps associated with magnetic reconnection, between the CME and corona during its eruption. Particles are confined in this region until their energy is sufficient to overcome the electric potential. There are two key results from this process. One is that the highest-energy particles will overcome this barrier earlier, and, hence, will arrive at {\emph{PSP}} earlier than low-energy particles, which are presumably released much later when the CME has erupted from the Sun. The other is that the mechanism produces a maximum energy that depends on the charge of the species. Although the event was quite weak, there were sufficient counts of He, O, and Fe that, when combined with assumptions about the composition of these species in the corona, agreed with the observed high-energy cut-off as a function of particle species. {\emph{PSP}} was in a fortunate location, during a fortuitously quiet period, and provided a unique opportunity to study energetic particles accelerated by a very slow and weak CME closer to the Sun than had been seen {\emph{in situ}} previously. On the one hand, the observations suggest that very weak shocks, or even non-shock plasma compressions driven by a slow CME, are capable of accelerating particles.
On the other hand, the pressure-cooker mechanism provides an interesting parallel with processes that occur in planetary ionospheres and magnetospheres. Moreover, the observation of the SEP event provided the opportunity to determine the parallel mean-free path of the particles, at 0.23~AU, as the particles were transported from their source to {\emph{PSP}}. In Mar. 2019, {\emph{PSP}} encountered a SBO-CME with unique properties which was analyzed by \citet{2020ApJ...897..134L}. SBO-CMEs are generally well-structured, slow CMEs that emerge from the streamer belt in extended PILs outside of ARs. Fig.~\ref{Lario2020Fig1} shows an overview of the plasma, magnetic field, electron and energetic particle conditions associated with the CME. Despite the relatively low speed of the SBO-CME close to the Sun determined by remote observation from {\emph{SOHO}} and {\emph{STEREO}}-A ($\sim311$~km~s$^{-1}$), the transit time to {\emph{PSP}} indicated a faster speed, and two shocks were observed at {\emph{PSP}} prior to the arrival of the CME. The low initial speed of the SBO-CME makes it unlikely that it would have driven a shock in the corona, and \citet{2020ApJ...897..134L} proposed that the formation of the shocks farther from the Sun was likely caused by compression effects of a HSS that followed the CME and that the formation of the two-shock structure may have been caused by distortions in the CME resulting from the HSS. This demonstrates the importance of the surrounding plasma conditions on the viability of energetic particle acceleration in CME events, though the associated energetic particle event in this case was limited to low energies $<$100~keV/nuc. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Lario2020Fig1.jpg} \caption{Overview of plasma (measured by SWEAP), magnetic field (measured by FIELDS), electron (SWEAP), and energetic particle (measured by IS$\odot$IS) conditions during the Mar. 2019 SBO-CME event observed by {\emph{PSP}}.
From top to bottom: (a) radial velocity, (b) tangential velocity component, and (c) normal velocity components of the solar wind proton velocity in RTN coordinates, (d) solar wind proton density, (e) solar wind proton temperature, magnetic field (f) magnitude, (g) elevation and (h) azimuth angles in RTN coordinates, (i) proton plasma beta, (j) the sum of the magnetic and thermal pressures, (k) ram pressure, (l) 314 eV electron PADs, (m) normalized 314 eV electron PADs, and (n) $\sim30-500$ keV TOF-only ion intensities measured by IS$\odot$IS/EPI-Lo. The vertical solid line indicates the passing of the two shocks associated with the CME which are too close in time to be separately resolved here, the vertical dashed lines indicate the boundaries of the CME, and the blue arrow indicates the eruption time of the SBO-CME at the Sun. Figure adapted from \citet{2020ApJ...897..134L}.} \label{Lario2020Fig1} \end{figure*} In order to determine the point at which {\emph{PSP}} would have been magnetically connected to the CME, \citet{2020ApJ...897..134L} ran two ENLIL simulations, one with just ambient solar wind conditions and another including the CME. By evaluating the solar wind speed along the magnetic field line connecting {\emph{PSP}} to the Sun, they found the point at which the solar wind speed in the CME simulation exceeds that of the ambient simulation, which establishes the point at which {\emph{PSP}} is connected to the CME, termed the ``Connecting with the OBserving'' point or ``cobpoint'' \citep{1995ApJ...445..497H}. Fig.~\ref{Lario2020Fig2} shows the coordinates of the cobpoint determined by this analysis alongside energetic particle anisotropy measurements. The energetic particle event is shown to be highly anisotropic, with enhanced particle intensities seen in the sunward-facing sensors of the instrument; the onset of energetic particles coincided with the establishment of the cobpoint, connecting the CME to {\emph{PSP}}.
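The cobpoint criterion described here reduces to locating the first time at which the solar wind speed along the connecting field line in the CME simulation exceeds that of the ambient run. A toy sketch of that comparison (hypothetical time series, not ENLIL output):

```python
import numpy as np

def cobpoint_onset(times, v_ambient, v_cme_run, threshold=1.0):
    """Return the first time at which the CME simulation's solar wind
    speed along the S/C-connected field line exceeds the ambient run,
    i.e. when the S/C becomes magnetically connected to the CME front.
    `threshold` (km/s) guards against numerical noise between the runs."""
    excess = np.asarray(v_cme_run) - np.asarray(v_ambient)
    idx = np.flatnonzero(excess > threshold)
    return (times[idx[0]], idx[0]) if idx.size else (None, None)

# Toy time series (hypothetical, not ENLIL output): a 360 km/s ambient
# wind, with the CME run departing from it after hour 30.
hours = np.arange(0, 72)
v_amb = np.full(hours.size, 360.0)
v_cme = v_amb + np.where(hours > 30, 40.0 * np.exp(-(hours - 30) / 20.0), 0.0)
t_connect, _ = cobpoint_onset(hours, v_amb, v_cme)
```

In the study the same comparison is made along the {\emph{PSP}}-connected field line at each time step of the two ENLIL runs.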
Also notable is that the increase in the speed jump at the cobpoint coincides with an increase in the measured energetic particle intensities prior to shock arrival. This analysis demonstrates the importance of energetic particle measurements made by IS$\odot$IS in constraining modeling of large-scale magnetic field structures such as CMEs. \begin{figure*} \centering \includegraphics[width=\textwidth]{Lario2020Fig2.jpg} \caption{Cobpoint characteristics as determined from ENLIL simulations and TOF-only energetic ion data measured by IS$\odot$IS/EPI-Lo. From top to bottom: (a) heliocentric radial distance of the {\emph{PSP}} cobpoint, (b) Heliocentric Earth Equatorial (HEEQ) longitude of the {\emph{PSP}} cobpoint, (c) speed jump ratio measured at the {\emph{PSP}} cobpoint by comparing the ENLIL background simulation to the simulation including the CME, (d) speed of the cobpoint, (e) ion intensities ($\sim20-500$~keV), (f) ion intensities measured in the Sun-facing wedges of EPI-Lo, (g) ion intensities measured in the EPI-Lo wedges facing away from the Sun, and (h) ion intensities measured in the transverse wedges of EPI-Lo. The vertical solid line indicates the passage of the two shocks, the vertical dotted lines show the boundaries of the CME, the vertical purple dashed line indicates the time when {\emph{PSP}} became connected to the compression region in front of the CME, and the purple vertical arrows indicate the time that the SBO-CME was accelerated at the Sun. Figure adapted from \citet{2020ApJ...897..134L}.} \label{Lario2020Fig2} \end{figure*} \citet{2021A&A...651A...2J} analyzed a CME that was measured by {\emph{PSP}} on 20 Jan. 2020, when the S/C was 0.32~AU from the Sun. The eruption of the CME was well imaged by both {\emph{STEREO}}-A and {\emph{SOHO}} and was observed to have a speed of $\sim380$~km~s$^{-1}$, consistent with the transit time to {\emph{PSP}}, and possessed characteristics indicative of a stealth-type CME.
Fig.~\ref{Joyce2021CMEFig1} shows a unique evolution of the energetic particle anisotropy during this event, with changes in the anisotropy seeming to coincide with changes of the normal component of the magnetic field ($B_N$). Of particular interest is a period where $B_N$ is close to zero and the dominant anisotropy is of energetic particles propagating toward the Sun (highlighted in yellow in Fig.~\ref{Joyce2021CMEFig1}), as well as two periods when $B_N$ spikes northward and there is a near complete dropout in energetic particle flux (highlighted in orange). The period dominated by particles propagating toward the Sun has the highest fluxes extending to the highest energies in the event, and \citet{2021A&A...651A...2J} argued that this may be evidence that {\emph{PSP}}, located on the western flank of the CME throughout the event, may have briefly been connected to a region of stronger energetic particle acceleration, likely closer to the nose of the CME where the compression is likely strongest. \begin{figure*} \centering \includegraphics[width=\textwidth]{Joyce2021CMEFig1.png} \caption{Overview of energetic particle anisotropy and magnetic field conditions during the Jan. 2020 CME. Energetic particle measurements are from the TOF-only channel of IS$\odot$IS/EPI-Lo and magnetic field data are from FIELDS. From top to bottom: omnidirectional ion spectrogram, ion spectrogram away from the Sun ($0-60^\circ$ from nominal Parker spiral direction), ion spectrogram toward the Sun ($120-180^\circ$), ion spectrogram in the transverse direction ($60-120^\circ$), and the magnetic field vector in RTN coordinates, with the magnetic field magnitude in black. The period highlighted in yellow shows a strong influx of particles propagating toward the Sun, while periods of energetic particle dropouts are highlighted in orange.
Figure adapted from \citet{2021A&A...651A...2J}.} \label{Joyce2021CMEFig1} \end{figure*} {\emph{STEREO}}-A was well-aligned radially with {\emph{PSP}} during this time period and observed the same CME also from the western flank. Fig.~\ref{Joyce2021CMEFig2} shows the comparison between energetic particle spectrograms and magnetic field vectors measured by both S/C. Particularly striking is the remarkable similarity of the magnetic field vector measured by both S/C, suggesting that they both encountered a very similar region of the magnetic structure, contrasted with the dissimilarity of the energetic particle observations, with those at {\emph{STEREO}}-A lacking the fine detail and abrupt changes in anisotropy (not shown here) that are seen closer to the Sun. This is likely due to transport effects such as scattering and diffusion which have created a much more uniform distribution of energetic particles by the time the CME has reached 1~AU. This demonstrates the importance of measurements of such events close to the Sun, made possible by {\emph{PSP}}/IS$\odot$IS, when it is still possible to distinguish between different acceleration mechanisms and source regions that contribute to energetic particle populations before these fine distinctions are washed out by transport effects. Such detailed measurements will be critical in determining which mechanisms play an important role in the acceleration of energetic particles close to the Sun. \begin{figure*} \centering \includegraphics[width=\textwidth]{Joyce2021CMEFig2.png} \caption{Comparison of energetic particle and magnetic field measurements of the same CME event observed at both {\emph{PSP}} and {\emph{STEREO}}-A. The data have been aligned by the CME arrival time, and the {\emph{PSP}} data have been stretched in time by a factor of 1.3 to match the magnetic field features seen by both S/C.
Gray dotted lines indicate reference points used to line up the measurements from both S/C.} \label{Joyce2021CMEFig2} \end{figure*} \subsubsection{The Widespread CME Event on 29 Nov. 2020} \label{EPsCMENov} The beginning of solar cycle 25 was marked by a significant SEP event in late Nov. 2020. The event has gained substantial attention and study, not only as one of the largest SEP events in several quiet years, but also because it was a circumsolar event spanning at least 230$^{\circ}$ in longitude and observed by four S/C positioned at or inside of 1~AU (see Fig.~\ref{Kollhoff2021Fig}). Among those S/C were {\emph{PSP}} and {\emph{SolO}}, providing a first glimpse of coordinated studies that will be possible between the two missions. The solar source was AR 12790 and the associated M4.4 class flare (as observed by {\emph{GOES}} at 12:34~UT on 29 Nov.) was at (E99$^{\circ}$,S23$^{\circ}$) (as viewed from Earth), 2$^{\circ}$ east of {\emph{PSP}}’s solar longitude. A CME traveling at $\sim1700$~km~s$^{-1}$ was well observed by {\emph{SOHO}}/LASCO and {\emph{STEREO}}-A/COR2, both positioned west of {\emph{PSP}} \citep{2021A&A...656A..29C}. {\emph{STEREO}}-A/EUVI also observed an EUV wave propagating away from the source at $\sim500$~km~s$^{-1}$, lasting about an hour and traversing much of the visible disk \citep{2021A&A...656A..20K}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Kollhoff2021Fig.png} \caption{Overview of the widespread CME event on 29 Nov. 2020. Counterclockwise from the top right: {\emph{SolO}}, {\emph{PSP}}, {\emph{STEREO}}-A, and {\emph{ACE}} energetic particle observations are shown along with the relative location of all S/C. The direction of the CME is given by the black arrow, while curved lines in the orbit plot indicate nominal Parker spiral magnetic field lines each S/C would be connected to.
Figure adapted from \citet{2021A&A...656A..20K}.} \label{Kollhoff2021Fig} \end{figure*} Protons at energies $>50$~MeV and $>1$~MeV electrons were observed by {\emph{PSP}}, {\emph{STEREO}}-A, {\emph{SOHO}}, and {\emph{SolO}} with onsets and time profiles that were generally organized by the S/C’s longitude relative to the source region, as has been seen in multi-S/C events from previous solar cycles \citep{2021A&A...656A..20K}. However, it was clear that intervening solar wind structures such as a slower preceding CME and SIRs affected the temporal evolution of the particle intensities. Analysis of the onset times of the protons and electrons observed at the four S/C yielded solar release times that were compared to the EUV wave propagation. The results were inconsistent with a simplistic scenario of particles being released when the EUV wave arrived at the various S/C magnetic footpoints, suggesting more complex particle transport and/or acceleration processes. Heavy ions, including He, O and Fe, were observed by {\emph{PSP}}, {\emph{STEREO}}-A, {\emph{ACE}} and {\emph{SolO}}, and their event-integrated fluences had longitudinal spreads similar to those obtained from three-S/C events observed in cycle 24 \citep{2021A&A...656L..12M}. The spectra were all well described by power-laws at low energies followed by an exponential roll-over at higher energies (Fig.~\ref{Mason2021CMEFig}). The roll-over energy was element dependent such that Fe/O and He/H ratios decreased with increasing energy; a signature of shock-acceleration that is commonly seen in SEP events \citep{2021A&A...656A..29C, 2021A&A...656L..12M}. The overall composition (relative to O) at $0.32-0.45$~MeV/nuc was fairly typical of events this size, with the exception of Fe/O at {\emph{PSP}} and {\emph{ACE}}, where it was depleted by a factor of $\sim2$ \citep{2021A&A...656L..12M}.
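The quoted spectral form, a power law with an exponential roll-over $f(E)=A\,E^{-\gamma}\,e^{-E/E_0}$, is linear in $(\ln A,\,\gamma,\,1/E_0)$ after taking logarithms, so the parameters can be recovered with an ordinary least-squares solve. A sketch on synthetic fluences (hypothetical parameter values, not the published fits):

```python
import numpy as np

def fit_rollover_spectrum(E, f):
    """Fit f(E) = A * E^-gamma * exp(-E/E0), the power law with an
    exponential roll-over describing the event fluence spectra.
    In log space the model is linear in (ln A, gamma, 1/E0)."""
    X = np.column_stack([np.ones_like(E), -np.log(E), -E])
    coef, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
    ln_A, gamma, inv_E0 = coef
    return np.exp(ln_A), gamma, 1.0 / inv_E0

# Synthetic fluences with hypothetical parameters: gamma = 1.8 and a
# roll-over energy E0 = 2 MeV/nuc.
E = np.logspace(-1.5, 1, 25)        # MeV/nuc
f = 1e4 * E**-1.8 * np.exp(-E / 2.0)
A, gamma, E0 = fit_rollover_spectrum(E, f)
```

An element-dependent $E_0$ in this form is what produces the decreasing Fe/O and He/H ratios with increasing energy noted above.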
\begin{figure*} \centering \includegraphics[width=\textwidth]{Mason_aa41310-21_spectra.jpg} \caption{Multi-species fluence spectra of the 29 Nov. 2020 event from (a) {\emph{PSP}}, (b) {\emph{STEREO}}-A, (c) {\emph{ACE}}, and (d) {\emph{SolO}}. Figure adapted from \citet{2021A&A...656L..12M}.} \label{Mason2021CMEFig} \end{figure*} Due to the relative positioning of the source region and {\emph{PSP}}, the CME passed directly over the S/C. It was traveling fast enough to overtake a preceding, slower CME in close proximity to {\emph{PSP}}, creating a dynamic and evolving shock measured by FIELDS \citep{2021ApJ...921..102G, 2022ApJ...930...88N}. Coincident with this shock, IS$\odot$IS observed a substantial increase in protons up to at least 1 MeV, likely due to local acceleration. More surprisingly, an increase in energetic electrons was also measured at the shock passage (Fig.~\ref{Kollhoff2021Fig}). Acceleration of electrons by interplanetary shocks is rare \citep{2016A&A...588A..17D}, thus it is more likely this increase is a consequence of a trapped electron distribution, perhaps caused by the narrowing region between the two CMEs \citep{2021A&A...656A..29C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{cohen_cloud_fig.png} \caption{Time profile of energetic protons stopping in the third and fifth detector of HET (top panel, upper and lower traces, respectively) and electrons stopping in the third and fourth detector of HET (middle panel, upper and lower traces, respectively), with the magnetic field in the bottom panel, for the 29 Nov. 2020 CME. The overplotted vertical lines illustrate that the variations in the particle count rates occur at the same time as changes in the magnetic field. See \citet{2021A&A...656A..29C} for more information.} \label{Cohen2021Fig} \end{figure*} The MC of the fast CME followed the shock and sheath region, with a clear rotation seen in the magnetic field components measured by FIELDS (Fig.~\ref{Cohen2021Fig}).
At the onset of the cloud, the particle intensities dropped, as is often seen, due to particles being unable to cross into the magnetic structure (Fig.~\ref{Cohen2021Fig}). During this period there was a 30-minute interval in which all the particle intensities increased briefly to approximately their pre-cloud levels. This was likely the result of {\emph{PSP}} exiting the MC, observing the surrounding environment populated with SEPs, and then returning to the interior of the MC. Although several of the properties of the SEP event are consistent with those seen in previous studies, the 29 Nov. 2020 event is noteworthy as being observed by four S/C over 230$^{\circ}$ of longitude and as the first significant cycle 25 event. The details of many aspects of the event (both from individual S/C and multi-S/C observations) remain to be studied more closely. In addition, modeling of the event has only just begun and will likely yield significant insights regarding the evolution of the CME-associated shock wave \citep{2022A&A...660A..84K} and the acceleration and transport of the SEPs throughout the inner heliosphere \citep{2021ApJ...915...44C}. \subsection{Energetic Electrons} \label{EE} The first observations of energetic electrons by {\emph{PSP}}/IS$\odot$IS were reported by \citet{2020ApJ...902...20M}, who analyzed a series of energetic electron enhancements observed during {\emph{PSP}}’s second Enc. period, which reached a perihelion of 0.17~AU. Fig.~\ref{Mitchell2020Fig} shows a series of four electron events that were observed at approximately 03:00, 05:00, 09:00, and 15:40~UT on 2 Apr. 2019. The events are small compared with the background and are only observable due to the small heliocentric distance of {\emph{PSP}} during this time.
Background subtraction is applied to the electron rate data to help resolve the electron enhancements, and a second-degree Savitzky–Golay smoothing filter over 7 minutes is applied to reduce random statistical fluctuations \citep{1964AnaCh..36.1627S}. While the statistics for these events are very low, the fact that they are observed concurrently in both EPI-Hi and EPI-Lo and that they either coincide with abrupt changes in the magnetic field vector or can plausibly be connected to type III radio bursts observed by the FIELDS instrument which extend down to $f_{pe}$ suggests that these are real electron events. These are the first energetic electron events which have been observed within 0.2~AU of the Sun and suggest that such small and short-duration electron events may be a common feature close to the Sun that was not previously appreciated, since it would not be possible to observe such events farther out from the Sun. This is consistent with previous observations by {\emph{Helios}} between 0.3 and 1~AU \citep{2006ApJ...650.1199W}. More observations and further analysis are needed to determine what physical acceleration mechanisms may be able to produce such events. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2020Fig.jpg} \caption{Overview of {\emph{PSP}} observations during 2 Apr. 2019.
Panels show the following: (a) EPI-Hi electron count rate (0.5–6 MeV) with a background subtraction and 7-minute Savitzky–Golay smoothing applied and with a dashed line to indicate 2$\sigma$ deviation from the mean, (b) EPI-Lo electron count rate (50–500 keV) with a background subtraction and 7-minute Savitzky–Golay smoothing applied and with a dashed line to indicate 2$\sigma$ deviation from the mean, (c) FIELDS high-frequency radio measurements (1.3–19.2 MHz), (d) FIELDS low-frequency radio measurements (10.5 kHz–1.7 MHz), (e) SWEAP solar wind ion density ($\sim5$ measurements per second), (f) SWEAP radial solar wind speed ($\sim5$ measurements per second), (g) FIELDS 1-minute magnetic field vector in RTN coordinates (with magnetic field strength denoted by the black line). A series of electron events are observed (in the top two panels), occurring at approximately 03:00, 05:00, 09:00, and 15:40~UT, as well as a series of strong type III radio bursts (seen in panels c and d). Figure adapted from \citet{2020ApJ...902...20M}.} \label{Mitchell2020Fig} \end{figure*} In late Nov. 2020, {\emph{PSP}} measured an SEP event associated with two CME eruptions, when the S/C was at approximately 0.8~AU. This event is the largest SEP event observed during the first 8 orbits of {\emph{PSP}}, producing the highest ion fluxes yet observed by IS$\odot$IS \citep{2021A&A...656A..29C,2021ApJ...921..102G,2021A&A...656A..20K,2021A&A...656L..12M}, and also produced the first energetic electron events capable of producing statistics sufficient to register significant anisotropy measurements by IS$\odot$IS, as reported by \citet{2021ApJ...919..119M}. Fig.~\ref{Mitchell2021Fig1} shows an overview of the electron observations during this period along with magnetic field data to provide context.
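The background subtraction and second-degree, 7-minute Savitzky–Golay smoothing used to pull the weak 2 Apr. 2019 electron enhancements out of the background can be sketched as follows (hypothetical 1-minute count rates, not flight data):

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical 1-minute electron count rates: a flat background with a
# small Gaussian enhancement, plus Poisson counting noise.
rng = np.random.default_rng(42)
t = np.arange(600.0)                                      # minutes
signal = 5.0 + 3.0 * np.exp(-((t - 300.0) / 30.0) ** 2)   # counts/min
counts = rng.poisson(signal).astype(float)

# Estimate the background from a quiet interval and subtract it, then
# apply a second-degree Savitzky-Golay filter over a 7-point (7-minute)
# window, as in the study.
background = np.median(counts[:120])
smoothed = savgol_filter(counts - background, window_length=7, polyorder=2)
```

For a 7-point quadratic window the filter reduces the standard deviation of uncorrelated counting noise by roughly a factor of $\sqrt{3}$ while leaving an enhancement that is smooth on that timescale largely intact.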
Notable in these observations is the peaking of the EPI-Lo electron count rate at the passage of the second CME shock, which is quite rare, though not unheard of, due to the inefficiency of CME-driven shocks in accelerating electrons. This may indicate the importance of quasi-perpendicular shock acceleration in this event, which has been shown to be a more efficient accelerator of electrons \citep{1984JGR....89.8857W,1989JGR....9415089K,1983ApJ...267..837H,2010ApJ...715..406G,2012ApJ...753...28G,2007ApJ...660..336J}. The notable dip in the EPI-Hi electron count rate at this time is an artifact associated with dynamic threshold mode changes of the EPI-Hi instrument during this time \citep[for details, see][]{2021A&A...656A..29C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2021Fig1.png} \caption{Overview of electron observations associated with the 29 Nov. CMEs. Panels are as follows: (a) shows the EPI-Hi electron count rates summed in all 5 apertures ($\sim0.5-2$~MeV), (b) shows the EPI-Lo electron count rate from wedges 3 and 7 ($\sim57-870$~keV), (c) shows the FIELDS magnetic field vector in RTN coordinates, and (d) and (e) show the magnetic field vector angles. Vertical lines show the eruption of the second CME and the passage of the shocks associated with both CMEs. Flux-rope like structures are indicated by the shaded grey regions. The decrease in the EPI-Hi electron count rate seen at the passage of the second CME and the overall flat profile are artifacts caused by EPI-Hi dynamic threshold mode changes \citep[explained in detail by][]{2021A&A...656A..29C}. Figure adapted from \citet{2021ApJ...919..119M}.} \label{Mitchell2021Fig1} \end{figure*} Fig.~\ref{Mitchell2021Fig2} shows the electron and magnetic field measurements during a three-hour period around the shock crossing associated with the second CME, including the electron PAD.
Because of the off-nominal pointing of the S/C during this time, the pitch angle coverage is somewhat limited; however, the available data show the highest intensities to be in the range of $\sim40-90^{\circ}$ at the time of the shock crossing. Distributions with peak intensities at pitch angles of around $90^{\circ}$ may be indicative of the shock-drift acceleration mechanism that occurs at quasi-perpendicular shocks \citep[{\emph{e.g.}},][]{2007ApJ...660..336J,1974JGR....79.4157S,2003AdSpR..32..525M}. This, along with the peak electron intensities seen at the shock crossing, further supports the proposition that electrons may be efficiently accelerated by quasi-perpendicular shocks associated with CMEs. Other possible explanations are that the peak intensities may be a result of an enhanced electron seed population produced by the preceding CME \citep[similar to observations by][]{2016A&A...588A..17D}, that energetic electrons may be accelerated as a result of being trapped between the shocks driven by the two CMEs \citep[a mechanism proposed by][]{2018A&A...613A..21D}, and that enhanced magnetic fluctuations and turbulence created upstream of the shock by the first CME may increase the efficiency of electron acceleration in the shock \citep[as proposed by][]{2015ApJ...802...97G}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2021Fig2.png} \caption{Electron and magnetic field measurements around the time of the second CME shock crossing (indicated by the vertical red dashed line).
Panels show the following: (a) EPI-Lo electron measurements ($\sim130-870$ keV) in each of its 8 wedges, (b) FIELDS magnetic field vector in RTN coordinates, (c) azimuthal angle of the magnetic field, (d) polar angle of the magnetic field, (e) pitch angle of the geometric center of each EPI-Lo wedge (each with a width of $\sim30^{\circ}$), and (f) pitch angle time series for each EPI-Lo wedge ($80-870$~keV) showing the fraction at each time bin relative to the total count rate over the entire interval. Figure adapted from \citet{2021ApJ...919..119M}.} \label{Mitchell2021Fig2} \end{figure*} While observations of energetic electron events thus far in the {\emph{PSP}} mission have been few, the measurements that have been made have shown IS$\odot$IS to be quite capable of characterizing energetic electron populations. Because of its close proximity to the Sun during {\emph{PSP}}’s Enc. phases, IS$\odot$IS has been shown to be able to measure small, subtle events which are not measurable farther from the Sun, but which may provide new insights into electron acceleration close to the Sun. The demonstrated ability to provide detailed electron anisotropy analyses is also critical for determining the acceleration mechanisms for electrons (particularly close to the Sun where transport effects have not yet influenced these populations) and for providing insight into the magnetic topology of magnetic structures associated with SEP events. As the current trend of increasing solar activity continues, we can expect many more unique observations and discoveries related to energetic electron events in the inner heliosphere from {\emph{PSP}}/IS$\odot$IS. \subsection{Corotating/Stream Interaction Region-Associated Energetic Particles} \label{CIREPs} SIRs form where HSSs from coronal holes expand into slower solar wind \citep{1971JGR....76.3534B}. As the coronal hole structure corotates on the Sun, the SIR corotates as well, becoming a CIR after one complete solar rotation.
As the HSS flows radially outward, both forward and reverse shocks can form along the SIR/CIR, often at distances beyond 1~AU \citep[{\emph{e.g.}},][]{2006SoPh..239..337J, 2008SoPh..250..375J, 1978JGR....83.5563P, 1976GeoRL...3..137S}, which act as an important source of energetic ions, particularly during solar minimum. Once accelerated at an SIR/CIR-associated shock, energetic particles can propagate back towards the inner heliosphere along magnetic field lines and are subject to adiabatic deceleration and scattering \citep{1980ApJ...237..620F}. The expected result of these transport effects is a softening of the energetic particle spectra and a hardening of the lower-energy suprathermal spectra \citep[see][]{1999SSRv...89...77M}. This spectral variation, however, has not always been captured in observations, motivating the formulation of various other SIR/CIR-associated acceleration processes such as compressive, non-shock related acceleration \citep[{\emph{e.g.}},][]{2002ApJ...573..845G,2012ApJ...749...73E,2015JGRA..120.9269C}, which can accelerate ions into the suprathermal range at smaller heliocentric distances. Additionally, footpoint motion and interchange reconnection near the coronal hole boundary have been proposed to lead to more radial magnetic field lines on the HSS side of the SIR/CIR, resulting in more direct access, and less modulation, of energetic particles \citep{2002GeoRL..29.2066M, 2002GeoRL..29.1663S, 2005GeoRL..32.3112S}. Observations within 1~AU, by {\emph{PSP}}, are therefore particularly well suited to disentangle these acceleration and transport effects, as the SIR/CIR-associated suprathermal to energetic ion populations are further from the shock-associated acceleration sites that are usually beyond 1~AU. \begin{figure*} \centering \includegraphics[width=\textwidth]{Allen_2020.jpg} \caption{Overview of four months around the first perihelion (6 Nov. 2018).
Panels show (a) the heliographic distance of {\emph{PSP}}; bulk proton (b) radial velocity, (c) density, (d) temperature, and (e) entropy; (f) summation of the magnetic and bulk proton thermal plasma pressure; (g) magnitude of the magnetic field, (h) $\Theta$, and (i) $\Phi$ angles of the magnetic field; and (j) EPI-Lo ion time-of-flight count rate for energies from 30 to 586 keV. Simulated quantities from two simulations are shown by the yellow and blue lines \citep[see][for more information]{2020ApJS..246...36A}. The four HSSs investigated in \citet{2020ApJS..246...36A} are indicated by the grey shaded regions, while pink shaded regions denote CMEs. Figure adapted from \citet{2020ApJS..246...36A}.} \label{Allen_2020} \end{figure*} During the first orbit of {\emph{PSP}}, \citet{2020ApJS..246...36A} reported on four HSSs observed by {\emph{PSP}}, illustrated in Fig.~\ref{Allen_2020}, and compared these to observations of the streams at 1~AU using observations from L1 ({\emph{ACE}} and {\emph{Wind}}) and {\emph{STEREO}}-A. Many of these nascent SIR/CIRs were associated with energetic particle enhancements that were offset from the interface of the SIR/CIR. One of the events also had evidence of local compressive acceleration, which was previously noted by \citet{2019Natur.576..223M}. \citet{2020ApJS..246...20C} further analyzed energetic particle increases associated with SIR/CIRs during the first two orbits of {\emph{PSP}} (Fig.~\ref{Cohen_2020}). They found He/H abundance ratios similar to previous observations of SIR/CIRs at 1~AU with fast solar wind under 600~km~s$^{-1}$; however, the proton spectral power laws, with indices ranging from $-4.3$ to $-6.5$, were softer than those often observed at 1~AU. Finally, \citet{2020ApJS..246...56D} investigated the suprathermal-to-energetic ($\sim0.03-3$ MeV/nuc) He ions associated with these SIR/CIRs from the first two orbits.
They found that the higher-energy He ions extended farther into the rarefaction region than the lower-energy ions. The He spectra behaved as flat power laws modulated by exponential rollovers with an e-folding at energies of $\sim0.4$ MeV/nuc, suggesting acceleration at shocks farther out in the heliosphere. \citet{2020ApJS..246...56D} interpreted the tendency for the suprathermal ion peak to be within the rarefaction regions, with acceleration farther out in the heliosphere, as evidence that the rarefaction regions allowed easier access for particles than other regions in the SIR/CIR structure. \begin{figure*} \centering \includegraphics[width=\textwidth]{Cohen_2020.jpg} \caption{Summary of EPI-Hi LET $\sim1-2$ MeV proton observations from the first two orbits of {\emph{PSP}}. SIR-associated energetic particle events studied by \citet{2020ApJS..246...20C} are denoted by the numbered circles. Figure adapted from \citet{2020ApJS..246...20C}.} \label{Cohen_2020} \end{figure*} One fortuitous CIR passed {\emph{PSP}} on 19 Sep. 2019, during the third orbit of {\emph{PSP}}, when {\emph{PSP}} and {\emph{STEREO}}-A were nearly radially aligned and $\sim0.5$~AU apart \citep{2021A&A...650A..25A, 2021GeoRL..4891376A}. As shown in Fig.~\ref{Allen_2021}, while the bulk plasma and magnetic field observations between the two S/C followed expected radial dependencies, the CIR-associated suprathermal ion enhancements were observed at {\emph{PSP}} for a longer duration in time than at {\emph{STEREO}}-A \citep{2021GeoRL..4891376A}. Additionally, the suprathermal ion spectral slopes between {\emph{STEREO}}-A total ions and {\emph{PSP}} H\textsuperscript{+} were nearly identical, while the flux at {\emph{PSP}} was much smaller, suggesting little to no spectral modulation from transport.
\citet{2021GeoRL..4891376A} concluded that the time difference in the CIR-associated suprathermal ion enhancement might be related to the magnetic topology between the slow speed stream ahead of the CIR interface, where the enhancement was first observed, and the HSS rarefaction region, where the suprathermal ions returned to background levels. \citet{2021ApJ...908L..26W} furthered this investigation by simulating the {\emph{PSP}} and {\emph{STEREO}}-A observations using the European Heliospheric FORecasting Information Asset \citep[EUHFORIA;][]{2018JSWSC...8A..35P} model and the Particle Radiation Asset Directed at Interplanetary Space Exploration \citep[PARADISE;][]{2019AA...622A..28W, 2020AA...634A..82W} model, suggesting that this event provides evidence that CIR-associated acceleration does not always require shock waves. \begin{figure*} \centering \includegraphics[width=\textwidth]{Allen_2021.jpg} \caption{Comparison of {\emph{PSP}} observations (black) and time-shifted and radially corrected {\emph{STEREO}}-A observations (blue) for the CIR that passed over {\emph{PSP}} on 19 Sep. 2019. While the bulk solar wind and magnetic field observations match well after typical scaling factors are applied \citep[a-h, see][for more information]{2021GeoRL..4891376A}, the energetic particle fluxes are elevated at {\emph{PSP}} (i) for longer than at {\emph{STEREO}}-A (j). Figure adapted from \citet{2021GeoRL..4891376A}.} \label{Allen_2021} \end{figure*} An SIR passed over {\emph{PSP}} on 15 Nov. 2018, when the S/C was $\sim0.32$~AU from the Sun, providing insight into energetic particle acceleration by SIRs in the inner heliosphere and the importance of the magnetic field structures connecting the observer to the acceleration region.
Fig.~\ref{SSR_SIR_Fig} shows an overview of the energetic particle, plasma, and magnetic field conditions during the passage of the SIR and the energetic particle event that followed it, which started about a day after the passage of the compression and lasted for about four days. The spectral analysis provided by \citet{2021A&A...650L...5J} showed that for the first day of the event, the spectra resembled a simple power law, which is commonly associated with local acceleration, despite the S/C being well out of the compression region by that point. The spectrum for the remaining three days of the event was shown to be fairly constant, a finding that is inconsistent with the traditional model of SIR energetic particle acceleration provided by \citet{1980ApJ...237..620F}, which models energetic particle acceleration at distant regions where SIRs have steepened into shocks and predicts changes in the spectral shape with distance from the source region. Within this paradigm, we would expect the distance along the magnetic field connecting to the source region to increase during the event and the observed spectrum to change accordingly. This, combined with the simple power-law spectrum observed on the first day, seems to indicate that the source region is much closer to the observer than is typically thought, as we do not see the expected transport effects, and that acceleration all along the compression, not only in the distant regions where the SIR may steepen into shocks, may play an important role in energetic particle acceleration associated with SIRs (consistent with previous studies by \citealt{2000JGR...10523107C} and \citealt{2015JGRA..120.9269C}). \begin{figure*} \centering \includegraphics[width=\textwidth]{SSR_SIR_Fig.png} \caption{Overview of energetic particle observations associated with the SIR that passed over {\emph{PSP}} on 15 Nov. 2018. Plasma data are provided by the SWEAP instrument and magnetic field data by the FIELDS instrument.
The compression associated with the passage of the SIR is highlighted in yellow. Figure is updated from figures shown in \citet{2021A&A...650A..24S} and \citet{2021A&A...650L...5J}.} \label{SSR_SIR_Fig} \end{figure*} \citet{2021A&A...650A..24S} analyzed the same event, also noting that the long duration of the energetic particle event following the passage of the SIR suggests a non-Parker spiral orientation of the magnetic field, and proposed that the observations may be explained by a sub-Parker magnetic field structure~\citep{2002GeoRL..29.2066M,2002GeoRL..29.1663S,2005GeoRL..32.3112S,2005JGRA..110.4104S}. The sub-Parker spiral structure forms when magnetic footpoints on the Sun move across coronal hole boundaries, threading the magnetic field between the fast and slow solar wind streams that form the compression and creating a magnetic field structure that is significantly more radial than a nominal Parker spiral. Fig.~\ref{Schwadron2021SIRFig}a shows a diagram of the sub-Parker spiral, and Fig.~\ref{Schwadron2021SIRFig}b shows a comparison between the energetic particle fluxes measured by IS$\odot$IS in two different energy regimes and modeled fluxes for both the Parker and sub-Parker spiral magnetic field orientations. The modeling includes an analytic solution of the distribution function at the SIR reverse compression/shock and numerical modeling of the propagation of the particles back to the S/C \citep[details in][]{2021A&A...650A..24S}. The modeled fluxes for the sub-Parker spiral match the observed fluxes much better than the nominal Parker spiral, demonstrating that the sub-Parker spiral structure is essential for explaining the extended duration of the energetic particle event associated with the SIR. The sub-Parker spiral is often seen in rarefaction regions, such as those that form behind SIRs, and thus is likely to play a significant role in the observed energetic particle profiles associated with such events.
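The contrast between the nominal Parker spiral and the more radial sub-Parker spiral can be made concrete with the standard Parker winding-angle estimate, $\tan\psi = \Omega_\odot r / v_{sw}$. The sketch below is illustrative only (the rotation rate and wind speeds are assumed round numbers, not values from the cited studies); it evaluates the baseline winding angle at {\emph{PSP}}'s $\sim0.32$~AU location for a fast and a slow stream:

```python
import math

OMEGA_SUN = 2.7e-6   # solar sidereal rotation rate [rad/s] (assumed value)
AU = 1.495978707e11  # astronomical unit [m]

def parker_spiral_angle_deg(r_au, v_sw_kms):
    """Angle between the nominal Parker-spiral field and the radial
    direction at heliocentric distance r: tan(psi) = Omega * r / v_sw
    (footpoint source-surface radius neglected)."""
    return math.degrees(math.atan(OMEGA_SUN * r_au * AU / (v_sw_kms * 1e3)))

# At ~0.32 AU a fast stream is wound far less than a slow stream,
# and the sub-Parker spiral is more radial still.
psi_fast = parker_spiral_angle_deg(0.32, 700.0)  # ~10 deg from radial
psi_slow = parker_spiral_angle_deg(0.32, 350.0)  # ~20 deg from radial
```

Because the winding angle grows with distance and shrinks with wind speed, any process that makes the field more radial than this baseline (as the sub-Parker spiral does) shortens the magnetic connection to a distant acceleration site.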
Both \citet{2021A&A...650L...5J} and \citet{2021A&A...650A..24S} demonstrate the importance of IS$\odot$IS observations of SIRs in understanding the large-scale structure of the magnetic field in the inner heliosphere, the motion of magnetic footpoints on the Sun, and the propagation of energetic particles, helping us to understand the variability of energetic particles and providing insight into the source of the solar wind. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{Schwadron2021SIRFig.png} \caption{(a) shows the magnetic field structure associated with an SIR, with the red dashed lines representing the compression region where the fast solar wind overtakes the slow solar wind, the blue lines representing the nominal Parker spiral configuration, and the black lines representing the sub-Parker spiral field lines that are threaded between the fast and slow solar wind streams as a result of footpoint motion across the coronal hole boundary. (b) shows a comparison between IS$\odot$IS energetic particle fluxes in two energy ranges (blue data points) and modeled energetic particle fluxes for both the Parker spiral (blue lines) and sub-Parker spiral (black lines) magnetic field configurations. Figure adapted from \citet{2021A&A...650A..24S}.} \label{Schwadron2021SIRFig} \end{figure*} \subsection{Inner Heliospheric Anomalous Cosmic Rays} \label{ACRs} The ability of {\emph{PSP}} to measure the energetic particle content at unprecedentedly close radial distances during a deep solar minimum has also allowed for detailed investigations into the radial dependence of ACRs in the inner heliosphere. ACRs are mainly composed of singly ionized hydrogen, helium, nitrogen, oxygen, neon, and argon, with energies of $\sim5-50$ MeV/nuc \citep[{\emph{e.g.}},][]{1973ApJ...182L..81G, 1973PhRvL..31..650H, 1974IAUS...57..415M, 1988ApJ...334L..77C, 1998SSRv...83..259K, 2002ApJ...578..194C, 2002ApJ...581.1413C, 2013SSRv..176..165P}.
The source of these particles is neutral interstellar atoms that enter the heliosphere as part of the interstellar wind \citep{2015ApJS..220...22M} before becoming ionized near the Sun \citep{1974ApJ...190L..35F}. Once the particles become ionized, they are picked up by the solar wind convective electric field and are convected away from the Sun as pick-up ions. A small portion of these pick-up ions can become accelerated to high energies (tens to hundreds of MeV) farther out in the heliosphere before returning to the inner heliosphere, thus becoming ACRs \citep{1996ApJ...466L..47J, 1996ApJ...466L..43M, 2000AIPC..528..337B, 2012SSRv..173..283G}. While the acceleration of ACRs is primarily thought to occur at the termination shock \citep{1981ApJ...246L..85P, 1992ApJ...393L..41J}, neither {\emph{Voyager}}~1 nor {\emph{Voyager}}~2 \citep{1977SSRv...21...77K} observed a peak in ACR intensity when crossing the termination shock \citep{2005Sci...309.2017S, 2008Natur.454...71S}. As a result, numerous theories have been proposed to explain this, including a ``blunt'' termination shock geometry in which the ACR acceleration occurs preferentially along the termination shock flanks and tail \citep{2006GeoRL..33.4102M, 2008ApJ...675.1584S}, away from the region the {\emph{Voyager}} S/C crossed; magnetic reconnection at the heliopause \citep{2010ApJ...709..963D}; heliosheath compressive turbulence \citep{2009AdSpR..43.1471F}; and second-order Fermi processes \citep{2010JGRA..11512111S}. After being accelerated, ACR particles penetrate back into the heliosphere, where their intensities decrease due to solar modulation \citep[{\emph{e.g.}},][]{1999AdSpR..23..521K, 2002ApJ...578..194C, 2006GeoRL..33.4102M}.
The radial gradients of ACRs in the heliosphere have primarily been studied from 1~AU outward by comparing observations from {\emph{IMP-8}}\footnote{The Interplanetary Monitoring Platform-8} at 1~AU to observations from {\emph{Pioneer}}~10, {\emph{Pioneer}}~11, {\emph{Voyager}}~1, and {\emph{Voyager}}~2 in the outer heliosphere. These comparisons revealed that the helium ACR intensity varied as $r^{-0.67}$ from 1 to $\sim41$~AU \citep{1990ICRC....6..206C}. Understanding this modulation provides insight into the various processes that govern global cosmic ray drift paths throughout the heliosphere. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rankin_2021_1.jpg} \caption{Helium spectra over the first three orbits of {\emph{PSP}} after removing transient events \citep[see][for more information]{2021ApJ...912..139R} at {\emph{PSP}} (red and blue) and at {\emph{SOHO}} (green). A simulated GCR spectrum at 1~AU is included (black) from HelMOD (version 4.0.1, 2021 January; www.helmod.org). Figure adapted from \citet{2021ApJ...912..139R}.} \label{Rankin_2021_1} \end{figure*} The orbit of {\emph{PSP}} is well suited to investigate ACR radial variations due to its sampling of a large range of radial distances near the ecliptic. Additionally, {\emph{PSP}} enables investigations into the ACR populations at distances closer to the Sun than previously measured. \citet{2021ApJ...912..139R} utilized the {\emph{PSP}}/IS$\odot$IS/EPI-Hi instrument to study the radial variation of the helium ACR content down to $35.6~R_\odot$ (0.166~AU) and compared these observations to ACR observations at 1~AU measured by the {\emph{SOHO}} mission. To ensure that the particles included in the comparisons were ACRs, rather than SEPs, only ``quiet-time'' periods were used \citep[see the Appendix in][]{2021ApJ...912..139R}. The resulting quiet-time EPI-Hi and {\emph{SOHO}} spectra over the first three orbits of {\emph{PSP}} are shown in Fig.~\ref{Rankin_2021_1}.
The ACR intensity was observed to increase over energies from $\sim5$ to $\sim40$ MeV/nuc, a characteristic feature of ACR spectra. Figs.~\ref{Rankin_2021_2}a and \ref{Rankin_2021_2}b show normalized ACR fluxes from the {\emph{SOHO}} Electron Proton Helium INstrument \citep[EPHIN;][]{1988sohi.rept...75K} and {\emph{PSP}}/IS$\odot$IS/EPI-Hi, respectively. The ratio of the ACR fluxes (Fig.~\ref{Rankin_2021_2}c) correlates well with the heliographic radial distance of {\emph{PSP}} (Fig.~\ref{Rankin_2021_2}d). This presents clear evidence of radial-dependent modulation, as expected. However, the observed radial gradient is stronger ($\sim25\pm5$\%~AU$^{-1}$) than that observed beyond 1~AU. A better understanding of the radial gradients of ACRs in the inner heliosphere may provide needed constraints on drift transport and cross-field diffusion models, as cross-field diffusion will become more dominant in the inner heliosphere \citep{2010JGRA..11512111S}. Future studies will also be aided by the addition of ACR measurements by {\emph{SolO}}, such as those reported in \citet{2021A&A...656L...5M}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rankin_2021_2.jpg} \caption{ACR normalized flux at (a) 1~AU averaged over 27.27 days and (b) {\emph{PSP}} averaged over Carrington longitude. The ratio of intensities (c) has a clear dependence on the radial distance of {\emph{PSP}} (d). Figure adapted from \citet{2021ApJ...912..139R}.} \label{Rankin_2021_2} \end{figure*} \subsection{Open Questions and Future Collaborations} \label{EPsRadOQ} Over the first four years of the {\emph{PSP}} prime mission, large advances have been made regarding our understanding of inner heliospheric energetic particles and solar radio emissions.
Looking forward, as the solar cycle ascends out of solar minimum, and as additional observatories such as {\emph{SolO}} enter into their science phase and provide robust energetic particle measurements \citep[see][]{2021A&A...656A..22W}, many new opportunities to study energetic particle populations and dynamics will present themselves. For example, while {\emph{PSP}} has begun exploring the radial evolution of SIRs and associated energization and transport of particles, future measurements will explore the cause of known solar cycle dependencies of the SIR/CIR-associated suprathermal ion composition \citep[{\emph{e.g.}},][]{ 2008ApJ...678.1458M, 2012ApJ...748L..31M, 2019ApJ...883L..10A}. Additionally, future {\emph{PSP}} observations of SIR/CIR-associated ions will be a crucial contribution to studies on the radial gradient of energetic ions \citep[{\emph{e.g.}},][]{1978JGR....83.4723V, Allen2021_CIR}. As {\emph{SolO}} begins to return off-ecliptic observations, the combination of {\emph{PSP}} and {\emph{SolO}} at different latitudes will enable needed insight into the latitudinal structuring of SIR/CIRs and associated particle acceleration. As the solar cycle progresses, solar activity will increase. This will provide many new observations of CMEs and SEP events at various intensities and radial distances in the inner heliosphere, particularly for low energy SEP events that are not measurable at 1~AU \citep[{\emph{e.g.}},][]{2020ApJS..246...65H}. These {\emph{PSP}} observations will further our understanding of CME-associated shock acceleration and how the energetic content of CMEs evolves with heliographic distance. The current and future Heliophysics System Observatory (HSO) should also provide additional opportunities to not only study the radial evolution of CMEs \citep[{\emph{e.g.}},][]{2021AA...656A...1F}, but also the longitudinal variations of these structures, as was done for the 29 Nov. 
2020 event \citep[{\emph{e.g.}},][]{2021A&A...656A..29C,2021A&A...656A..20K,2021A&A...656L..12M}. As discussed in \S\ref{SEPs}, {\emph{PSP}} has already expanded our understanding of SEP events in the inner heliosphere. Because {\emph{SolO}}, which also returns observations of \textsuperscript{3}He-rich SEP events \citep[{\emph{e.g.}},][]{2021A&A...656L...1M,2021A&A...656L..11B}, will soon be taking measurements of SEP events at different latitudes than {\emph{PSP}}, the combination of these missions will enable exploration of latitudinal variations in SEP content. Similarly, energetic electron measurements on {\emph{SolO}} \citep[{\emph{e.g.}},][]{2021A&A...656L...3G}, soon to be taken off-ecliptic, will enable future studies into the latitudinal variations of electron events. In addition to radio observations using multiple S/C, space-based and ground-based multi-wavelength observations enable new types of coordinated analysis of solar activity. \cite{2021A&A...650A...7H} combined {\emph{Hinode}}, {\emph{SDO}}/AIA, and RFS observations in a joint analysis of a non-flaring AR and a type III storm observed during {\emph{PSP}} Enc.~2, identifying the source of electron beams associated with the storm and using radio measurements to show the evolution of the peak emission height throughout the storm. \cite{2021A&A...650A...6C} studied a different storm occurring slightly after Enc.~2 using radio observations from {\emph{PSP}} and {\emph{Wind}}, and solar observations from {\emph{SDO}}/AIA, {\emph{SDO}}/HMI, and the Nuclear Spectroscopic Telescope ARray \citep[{\emph{NuSTAR}};][]{2013ApJ...770..103H}, finding correlated periodic oscillations in the EUV and radio data indicative of small impulsive electron acceleration. 
Additionally, the continuation of the {\emph{PSP}} project science team's close relationship with the Whole Heliosphere and Planetary Interactions (WHPI\footnote{https://whpi.hao.ucar.edu/}) international initiative, the successor of the Whole Sun Month \citep{1999JGR...104.9673G} and Whole Heliosphere Interval \citep{2011SoPh..274....5G, 2011SoPh..274...29T, 2011SoPh..274....1B} campaigns, will allow for multifaceted studies that incorporate ground-based and space-based observatories providing contextual information for the {\emph{PSP}} measurements. Many of these studies are beginning now and should propel our fundamental understanding of the connection of the solar surface to interplanetary space and beyond. \section{Dust} \label{PSPDUST} \subsection{Dust Populations in the Inner Heliosphere} The ZDC is one of the largest structures in the heliosphere. It is composed of cosmic dust particles sourced from comets and asteroids, with most of the material located in the ecliptic plane, where the majority of these dust sources reside. Grains gravitationally bound to the Sun, termed ``\amsn'', lose angular momentum to Poynting-Robertson and solar wind drag \citep[{\emph{e.g.}},][]{1979Icar...40....1B}, and their orbits subsequently circularize and spiral toward the Sun. Due to this inward transport of zodiacal material, the dust spatial density increases as the grains approach the Sun \citep[{\emph{e.g.}},][]{1981A&A...103..177L}, until they are ultimately collisionally fragmented or sublimated into smaller grains \citep[{\emph{e.g.}},][]{2004SSRv..110..269M}. Dust-dust collisions within the cloud are responsible for generating a significant portion of the population of smaller grains. Additionally, local sources of dust particles very near the Sun are the near-Sun comets: ``Sunskirters'', which pass the Sun within half of Mercury's perihelion distance, and sungrazers, which reach perihelion within the fluid Roche limit \citep{2018SSRv..214...20J}.
Because these comets are in elongated orbits, their dust remains in the vicinity of the Sun only for a short time \citep{2004SSRv..110..269M,2018A&A...617A..43C}. Sub-micron sized grains, with radii on the order of a few hundred nm, are most susceptible to outward radiation pressure. The orbital characteristics of these submicron-sized ``\bmsn'' are set by the ratio of solar radiation pressure to gravitational force, $\beta = F_{R}/F_{G}$, which depends on both grain size and composition \citep{1979Icar...40....1B}. Grains with $\beta$ above a critical value, dependent on their orbital elements, have positive orbital energy and follow hyperbolic trajectories escaping the heliosphere. This population of grains represents the highest number flux of micrometeoroids at 1~AU \citep{1985Icar...62..244G}. For the smallest nanograins ($\lesssim50$~nm), electromagnetic forces play an important role in their dynamics \citep[{\emph{e.g.}},][]{1986ASSL..123..455M}, and a certain population of grains can become electromagnetically trapped very near the Sun \citep{2010ApJ...714...89C}. Fig.~\ref{fig:dust_overview} summarizes these various processes and dust populations. \begin{figure} \includegraphics[width=4.5in]{mann_2019_fig1.png} \caption{The dust environment near the Sun \citep{2019AnGeo..37.1121M}.\label{fig:dust_overview} } \end{figure} When dust particles approach very near to the Sun, they can sublimate rapidly, leaving a region near the Sun relatively devoid of dust. Different estimates of this DFZ have been made based on Fraunhofer-corona (F-corona) observations and model calculations, predicting a DFZ within 2 to 20 solar radii and possibly flattened radial profiles before its beginning \citep{2004SSRv..110..269M}. These estimates are based on dust sublimation; however, an additional destruction process recognized in the innermost parts of the solar system is sputtering by solar wind particles.
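As an aside on the escape criterion set by $\beta = F_{R}/F_{G}$: because radiation pressure effectively reduces solar gravity by the factor $(1-\beta)$, a simple specific-energy argument determines whether a grain is bound. The sketch below is an illustrative calculation (not from the cited works); it recovers the classic result that a fragment released at its parent's local Keplerian circular speed escapes when $\beta > 1/2$:

```python
import math

GM_SUN = 1.32712440018e20  # solar gravitational parameter [m^3 s^-2]
AU = 1.495978707e11        # astronomical unit [m]

def is_unbound(r_m, v_ms, beta):
    """A grain feels solar gravity reduced by (1 - beta); its orbit is
    hyperbolic (escaping) when the specific orbital energy is positive."""
    energy = 0.5 * v_ms**2 - GM_SUN * (1.0 - beta) / r_m
    return energy > 0.0

# Fragment released at 0.1 AU with its parent's Keplerian circular speed:
r = 0.1 * AU
v_circ = math.sqrt(GM_SUN / r)
escapes_low = is_unbound(r, v_circ, 0.49)   # just below 1/2: still bound
escapes_high = is_unbound(r, v_circ, 0.51)  # just above 1/2: escapes
```

For release from an eccentric parent orbit the critical $\beta$ shifts with the orbital elements, which is why the text above states the threshold depends on them.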
\citet{2020AnGeo..38..919B} showed that sputtering is more effective during a CME event than during other solar wind conditions and suggested that multiple CMEs can lead to an extension of the DFZ. Dust destruction near the Sun releases molecules and atoms, where photoionization, electron-impact ionization, and charge exchange quickly ionize the atoms and molecules. This process contributes to a population of pickup-ions in the solar wind and provides a seed population for energetic particles \citep{2000JGR...105.7465S,2005ApJ...621L..73M}. The inner heliosphere's dust populations within a few~AU have been observed both with {\emph{in~situ}} dust impact detections and remotely via scattered light observations. Due to their higher number densities, dust grains with radii on the order of $\sim\mu$m and smaller can be observed with {\emph{in~situ}} impact measurements. Dedicated dust measurements within this size range have been taken in the inner solar system with {\emph{Pioneers}} 8 and 9 \citep{1973spre.conf.1047B}, {\emph{HEOS-2}}\footnote{The Highly Eccentric Orbit Satellite 2} \citep{1975P&SS...23..985H, 1975P&SS...23..215H}, {\emph{Helios}} \citep{1978A&A....64..119L, 1981A&A...103..177L, 1980P&SS...28..333G, 2006A&A...448..243A, 2020A&A...643A..96K}, {\emph{Ulysses}} \citep{1999A&A...341..296W,2004A&A...419.1169W,2003JGRA..108.8030L, 2015ApJ...812..141S, 2019A&A...621A..54S}, and {\emph{Galileo}} \citep{1997Icar..129..270G}. These observations identified three populations of dust: \amsn, \bmsn, and interstellar grains transiting the solar system \citep{1993Natur.362..428G}. Before {\emph{PSP}}, the innermost dust measurements were made by {\emph{Helios}} as close as 0.3~AU from the Sun. For grains on the order of several $\mu$m and larger, astronomical observations of the F-corona and ZL \citep{1981A&A...103..177L} provide important constraints on their density distributions. 
Unlike the broader zodiacal cloud (ZC) structure, which is most concentrated near the ecliptic plane, the solar F-corona has a more spherical shape, with the transition from one to the other following a super-ellipsoidal shape according to the radial variation of a flattening index derived from observations with the {\emph{STEREO}}/SECCHI instrument \citep{2018ApJ...864...29S}. Measurements from {\emph{Helios}}~1 and {\emph{Helios}}~2 at locations between 0.3 and 1~AU showed that the brightness profile at the symmetry axis of the ZL falls off as a power law of solar distance with exponent 2.3 \citep{1981A&A...103..177L}, which is consistent with a derived dust density profile of the form $n(r) = n_0\ r^{-1.3}$. This dust density dependence is well reproduced by the dust produced by Jupiter-family comets \citep{2019ApJ...873L..16P}. Additionally, there were a number of discussions on the influence of excess dust in circumsolar rings near the Sun \citep[][]{1998EP&S...50..493K} on the observed F-corona brightness \citep[][]{1998P&SS...46..911K}. Later on, \cite[][]{2004SSRv..110..269M} showed that no prominent dust rings exist near the Sun. More recently, \citet{2021SoPh..296...76L} analyzed images obtained with {\emph{SOHO}}/LASCO-C3 between 1996 and 2019. Based on a polarimetric analysis of the LASCO-C3 images, they separated the F- and K-corona components and derived the electron-density distribution. In addition, they reported a likely increase of the polarization of the F-corona with increasing solar elongation. They do not discuss, however, the dust distribution near the Sun. They further discuss the properties of the F-corona in detail in \cite{2022SSRv..218...53L}. To date, our understanding of the near-Sun dust environment is built on both {\emph{in~situ}} and remote measurements outside 0.3~AU, or 65 $\rs$.
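The link between the observed brightness exponent (2.3) and the inferred density exponent (1.3) follows from a line-of-sight integration: with $n(r)\propto r^{-\nu}$ and solar illumination falling as $r^{-2}$, the path length scales linearly with observer distance, so the integrated brightness scales as $r_{\rm obs}^{-(\nu+1)}$. A minimal numerical check of this scaling (an assumed simplified geometry at $90^{\circ}$ elongation, not the actual {\emph{Helios}} photometric reduction):

```python
import math

def los_brightness(r_obs, nu=1.3, n_steps=20000):
    """Integrate n(r) * F_sun(r) ~ r^(-nu) * r^(-2) along a line of
    sight at 90 degrees elongation from an observer at r_obs; the
    closest approach of the line of sight to the Sun is r_obs itself."""
    s_max = 50.0 * r_obs          # integration length scales with r_obs
    ds = s_max / n_steps
    total = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds        # midpoint rule along the line of sight
        r = math.hypot(r_obs, s)  # heliocentric distance of the segment
        total += r ** (-(nu + 2.0)) * ds
    return total

# Brightness ratio between observers at 0.3 and 1.0 AU gives the slope:
slope = math.log(los_brightness(0.3) / los_brightness(1.0)) / math.log(0.3)
# slope comes out as -(nu + 1) = -2.3, matching the Helios ZL profile
```

The same argument run in reverse is how a measured brightness exponent of 2.3 implies the $r^{-1.3}$ density profile quoted above.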
{\emph{PSP}}, with its eccentric and progressively lower perihelion orbit, provides the only {\emph{in~situ}} measurements and remote sensing observations of interplanetary dust in the near-Sun environment inside 0.3~AU. In the first six orbits alone, {\emph{PSP}} has transited as close as 20 $\rs$ from the center of the Sun, offering an unprecedented opportunity to understand heliospheric dust in the densest part of the ZC and provide critical insight into long-standing fundamental science questions concerning the stellar processing of dust debris discs. Key questions {\emph{PSP}} is well poised to address include: How is the ZC eroded in the very near-Sun environment? Which populations of material are able to survive in this intense environment? How do the near-Sun dust populations interact with the solar wind? \subsection{Dust Detection on {\emph{PSP}}} A number of sensors on {\emph{PSP}} are capable of detecting interplanetary dust in the inner heliosphere, each by a different mechanism. The FIELDS instrument detects perturbations to the S/C potential that result from transient plasma clouds formed when dust grains strike the S/C at high velocities, vaporizing and ionizing the impacting grain and some fraction of the S/C surface \citep{2020ApJS..246...27S, Page2020, Malaspina2020_dust}. WISPR detects solar photons scattered by dust in the ZC \citep{2019Natur.576..232H, 2021A&A...650A..28S}, and IS$\odot$IS is sensitive to dust grains penetrating its collimator foils \citep{2020ApJS..246...27S}. Dust detection by these mechanisms has led to advances in our understanding of the structure and evolution of the ZDC, and we describe these observations in the context of {\emph{in~situ}} and remote-based measurements separately below.
\subsection{{\emph{In~Situ}} Impact Detection} \subsubsection{FIELDS} As {\emph{PSP}} traverses the inner heliosphere, its orbital trajectory results in high relative velocities between the S/C and interplanetary dust grains. Relative velocities can approach 100~km~s$^{-1}$ for $\alpha$-meteoroids and exceed several hundred km~s$^{-1}$ for $\beta$-meteoroids and retrograde impactors \citep{2020ApJS..246...27S}. The impact-generated plasma cloud perturbs the S/C surface potential, creating a signal detectable by the FIELDS electric field sensors. This method of {\emph{in~situ}} dust detection was first demonstrated on the {\emph{Voyager}} S/C by \citet{Gurnett1983} and has subsequently been reported on numerous other S/C; see the review by \citet{2019AnGeo..37.1121M} and references therein. While there is agreement that electric field sensors detect impact ionization of dust, the physical mechanism by which potential perturbations are generated continues to be an active topic of research, with a range of competing theories \citep{Oberc1996, Zaslavsky2015, Kellogg2016, MeyerVernet2017, 2019AnGeo..37.1121M, 2021JGRA..12628965S}, and rapidly advancing lines of inquiry using controlled laboratory experiments \citep[{\emph{e.g.}},][]{2014JGRA..119.6019C, 2015JGRA..120.5298C, 2016JGRA..121.8182C, Nouzak2018, 2021JGRA..12629645S}. On {\emph{PSP}}, the vast majority of dust impact ionization events produce high-amplitude ($>10$~mV), brief ($\mu$s to ms) voltage spikes. These can be detected in various FIELDS data products, including peak detector data, bandpass filter data, high cadence time-series burst data, and lower cadence continuous time-series data \citep{2016SSRv..204...49B, Malaspina2016_DFB}. Impact plasma clouds often produce asymmetric responses on electric field antennas \citep[{\emph{e.g.}},][]{Malaspina2014_dust}. By comparing the relative dust signal amplitude on each FIELDS antenna for a given impact, the location of the impact on the S/C body can be inferred.
From the impact location, and constraints imposed by dust population dynamics, one can deduce the pre-impact directionality of the dust that struck the S/C \citep{Malaspina2020_dust, Pusack2021}. {\emph{PSP}} data have revealed new physical processes active in the impact ionization of dust. \citet{Dudok2022_scm} presented the first observations of magnetic signatures associated with the escape of electrons during dust impact ionization. \citet{Malaspina2022_dust} demonstrated a strong connection between the plasma signatures of dust impact ionization and subsequent debris clouds observed by WISPR and the {\emph{PSP}} star trackers. This study also demonstrated that long-duration S/C potential perturbations, which follow some dust impacts, are consistent with theoretical expectations for clouds of S/C debris that electrostatically charge in the solar wind \citep{2021JGRA..12629645S}. These perturbations can persist up to 60 seconds, much longer than the brief ($\mu$s to ms) voltage spikes generated by the vast majority of dust impacts. \subsubsection{Data-Model Comparisons} \begin{figure}[ht] \centering \includegraphics[width=4.5in]{overview_rates_v7_50mv.pdf} \caption{Daily averaged impact rates as a function of radial distance for orbits $1-6$, separated by inbound (a) and outbound (b). (c) Impact rates overlaid on the {\emph{PSP}} trajectory in the ecliptic J2000 frame, averaged over orbits $1-3$, $4-5$, and individually shown for orbit 6. Color and width of the color strip represent the impact rate. Figure adapted from \citet{szalay:21a}. \label{fig:dust_rates}} \end{figure} Since FIELDS can detect impacts over the entire S/C surface area, in the range of $4-7$ m$^2$ \citep{Page2020}, {\emph{PSP}} provides a robust observation of the total impact rate to the S/C. Fig.~\ref{fig:dust_rates} shows the impact rates as a function of heliocentric distance and in ecliptic J2000 coordinates \citep{szalay:21a}.
There are a number of features that have been observed in the impact rate profiles. For the first three orbits, which followed very similar trajectories, a single pre-perihelion peak was observed. For the subsequent two orbit groups, orbits $4-5$ and orbit 6, a post-perihelion peak was also observed, with a local minimum in impact rate near perihelion. As shown in Fig.~\ref{fig:dust_rates}c, the substructure in observed impact rate occurs inside the previous inner limit of {\emph{in~situ}} dust detections by {\emph{Helios}}. While {\emph{PSP}} registers a large number of impacts due to its effective area, determining impactor speed, mass, and directionality is not straightforward. Translating these impact rates into meaningful conclusions about inner zodiacal dust therefore requires data-model comparisons. Analysis of {\emph{PSP}} dust impact data from the first three orbits found that the orbital variation in dust count rates detected by FIELDS during the first three solar Encs. was consistent with primarily sub-micron $\beta$-meteoroids \citep{2020ApJS..246...27S,Page2020,Malaspina2020_dust}. From the first three orbits, it was determined that the flux of \bms varies by approximately 50\%, suggesting the inner solar system's collisional environment varies on timescales of hundreds of days \citep{Malaspina2020_dust}. Additionally, nanograins with radii below 100 nm were not found to appreciably contribute to the observed impact rates from these first orbits \citep{2021A&A...650A..29M}. Subsequent analysis which included the first six orbits \citep{szalay:21a} compared {\emph{PSP}} data to a two-component analytic dust model to conclude that {\emph{PSP}}'s dust impact rates are consistent with at least three distinct populations: ($\alpha$) bound zodiacal \ams on elliptic orbits, ($\beta$) unbound \bms on hyperbolic orbits, and a distinct third population of impactors.
Unlike the first three orbits, whose dust impact data were dominated by escaping \bmsn, during orbits $4-6$ larger grains have been inferred to dominate FIELDS detections for sections of each orbit \citep{szalay:21a}. Data-model comparisons from the first six orbits have already provided important insight into the near-Sun dust environment. First, they constrained the zodiacal collisional erosion rate to greater than 100~kg~s$^{-1}$. This material, in the form of outgoing \bmsn, was found to be predominantly produced within $10-20~\rs$. It was also determined that \bms are unlikely to be the inner source of pickup ions, instead suggesting the population of trapped nanograins \citep{2010ApJ...714...89C} with radii $\lesssim 50$ nm is likely this source. The flux of \bms at 1~AU was also estimated to be in the range of $0.4-0.8 \times 10^{-4}$ m$^{-2}$ s$^{-1}$. \begin{figure}[ht] \centering \includegraphics[width=4.5in]{pusack_2021_fig5.png} \caption{(a) Orbit 4 count rates vs. time for antennas V1, V2, V3, and V4 using the $50-1000$~mV amplitude window, with darker gray lines corresponding to the ram direction of the S/C and lighter gray lines corresponding to the anti-ram direction. (b) Count rates vs. time for each amplitude window on all four planar antennas. Gray shaded region indicates the anomaly duration. Panel (b) uses the same tone gradation as (a), with each color family corresponding to a different amplitude window: blues for $50-100$~mV, orange-yellows for $100-200$~mV, pinks for $200-400$~mV, and greens for $400-1000$~mV. (c) Amplitude window ram/anti-ram rates vs. time with the same color families as (b). Figure adapted from \citet{Pusack2021}.
\label{fig:pusack}} \end{figure} From the data-model comparisons, orbits $4-6$ exhibited a post-perihelion peak in the impact rate profile that was not well described by the two-component model of nominal \ams and \bms \citep{szalay:21a}. Two hypotheses were provided to explain these post-perihelion impact rate enhancements: (a) {\emph{PSP}} directly transited and observed grains within a meteoroid stream, or (b) {\emph{PSP}} flew through the collisional by-products produced when a meteoroid stream collides with the nominal ZC, termed a \bsn. The timing and total flux observed during this period favor the latter explanation; more specifically, a \bs from the Geminids meteoroid stream was suggested as the most likely candidate \citep{szalay:21a}. A separate analysis focusing on the directionality and amplitude distribution during the orbit 4 post-perihelion impact rate enhancement also supports the Geminids \bs hypothesis \citep{Pusack2021}. Fig.~\ref{fig:pusack} shows the amplitude and directionality trends observed during orbit 4, where the two impact rate peaks exhibit different behaviors. For the pre-perihelion peak, predicted by the two-component model, impact rates for multiple separate amplitude ranges all peak at similar times (Fig.~\ref{fig:pusack}b) and impact the S/C from similar locations (Fig.~\ref{fig:pusack}c). The post-perihelion peak exhibits a clear amplitude dispersion, where impacts producing smaller amplitudes peak in rate ahead of the impacts that produce larger amplitudes. Additionally, the ram/anti-ram ratio is significantly different from the pre-perihelion peak. As further described in \citet{Pusack2021}, these differences are also suggestive of a Geminids \bsn.
We note that grains that are smaller than the detected \bms and affected by electromagnetic forces have a much larger flux close to the orbital perihelia than at other parts of the orbit \citep{2021A&A...650A..29M}, yet their detection is difficult with {\emph{PSP}}/FIELDS due to the low expected impact charge generated by such small-mass grains \citep{szalay:21a}. Fig.~\ref{fig:dust_overview_PSP} summarizes the dust populations {\emph{PSP}} is likely directly encountering. From the data-model comparisons, the relative fluxes and densities of bound \ams and unbound \bms have been quantitatively constrained. {\emph{PSP}}'s dust impact measurements have provided direct constraints on the intense near-Sun dust environment. Furthermore, the existence of a third dust population suggests collisions between material along asteroid or cometary orbits can be a significant source of near-Sun collisional grinding and \bmm production in the form of \bs \citep{szalay:21a}, a fundamental physical process occurring in circumstellar dust clouds generally. \begin{figure}[ht] \centering \includegraphics[width=4.5in]{summary_v6_50mv.pdf} \caption{{\emph{PSP}} detects impacts due to \amsn, \bmsn, and likely from discrete meteoroid streams. {\it Left}: Impact rates and model fits from orbit 3 (inbound) and orbit 4 (outbound). {\it Right}: Sources for the multiple populations observed by {\emph{PSP}}. Figure adapted from \citet{szalay:21a}. \label{fig:dust_overview_PSP}} \end{figure} \subsection{Remote Sensing} \subsubsection{Near-sun dust density radial dependence} \label{dddZ} \begin{figure} \includegraphics[scale=0.35, clip=true]{FigDustDepletion.pdf} \caption{(a) Left panel: Sample of radial brightness gradients along the symmetry axis of the F-corona (black) and data fit with an empirical model (red dashed line). The linear portion of the model is delineated with the light-blue dashed line. The inset shows the percentage departure of the empirical model from the linear trend.
Upper right panel: Comparison of the empirical model (in black) and the forward modeling of the ZL brightness along the symmetry axis considering a DDZ between $2-20~R_\odot$ (in green) and $3-19~R_\odot$ (in red). The inset shows the relative error of the simulations compared to the empirical model (same color code). Bottom right panel: Dust density model used in the simulations relative to the outermost edge of the DDZ between $3-19~R_\odot$, assuming a linear decrease in the multiplier of the nominal density. The red dashed vertical line indicates the innermost distance of the WISPR FOV in this study. Figure adapted from \cite{2021A&A...650A..28S}.} \label{fig:depl} \end{figure} In anticipation of the {\emph{PSP}} observations, several studies of the ZL/F-corona based on observations from the {\emph{STEREO}}/SECCHI instrument were carried out \citep{2017ApJ...839...68S, 2017ApJ...848...57S, 2018ApJ...862..168S,2018ApJ...864...29S}. These studies established a baseline of F-corona properties from 1 AU to help identify any differences that may arise due to the varying heliocentric distance of the corresponding WISPR observations. The question of whether a DFZ \citep[][]{1929ApJ....69...49R} exists close to the Sun is long-standing and has not been answered by pre-{\emph{PSP}} observations of the ZL/F-corona. White light observations obtained from distances between $0.3-1$~AU \citep[{\emph{e.g.}},][]{2018ApJ...862..168S,1981A&A...103..177L} do not reveal any break in the radial gradient of the brightness along the symmetry plane of the ZDC, which was found to follow a power law $I(r) \sim r^{-2.3}$ down at least to the theoretically predicted start of the DFZ at $\approx 4-5~R_\odot$. The WISPR instrument has recorded the intensities of the ZL/F-corona from ever decreasing observing distances, down to about 0.074~AU ($\sim15.9~R_\odot$) at the last executed perihelion (at the time of this writing).
This unprecedented observer distance corresponds to an inner limit of the FOV of WISPR-i of about 3.7~$R_\odot$ (0.017~AU). A striking result from the WISPR observations obtained during the first five orbits was the departure of the radial dependence of the F-corona brightness profile along the symmetry axis of the ZDC from the previously established power law \citep[][hereafter referred to as HS]{2019Natur.576..232H,2021A&A...650A..28S}. In the left panel of Fig.~\ref{fig:depl}, we show a sample of WISPR brightness profiles along the symmetry axis obtained during orbits 1, 2, 4, and 5 (in black), along with the fitting of an empirical model comprising a linear and an exponential function (red dashed line). The linear portion of the empirical model is delineated with the light-blue dashed line. We note that the linear behavior ({\emph{i.e.}}, constant gradient) continues down to $\sim20~R_\odot$ with the same slope as observed in former studies ({\emph{i.e.}}, $\propto r^{-2.3}$). Below that elongation distance, the radial brightness gradient becomes less steep. The modeled brightness measurements depart by about 35\% at the inner limit of $7.65~R_\odot$ (0.036~AU) from the extrapolation of the linear part of the model (see inset in Fig.~\ref{fig:depl}). The brightness decrease is quite smooth down to $7.65~R_\odot$, {\emph{i.e.}}, it does not show discrete brightness enhancements due to sublimation effects of any particular dust species \citep[][]{2009Icar..201..395K}. The brightness profile was forward-modeled using RAYTRACE \citep{2006ApJ...642..523T}\footnote{RAYTRACE is available in the {\emph{SOHO}} Solarsoft library, http://www.lmsal.com/solarsoft/. }, which was adapted to integrate the product of an empirical volume scattering function (VSF) and a dust density at each point along any given LOS.
The VSF was given by \cite{1986A&A...163..269L}, which condenses all the physics of the scattering process into an empirical function of a single parameter, the scattering angle. The dust density along the symmetry axis of the ZDC was taken from \cite{1981A&A...103..177L} ($n(r) \propto r^{-1.3}$). The intensity decrease observed in WISPR results was ascribed to the presence of a dust depletion zone (DDZ), which appears to begin somewhere between 19 and 20~$R_\odot$ and extends sunward to the beginning of the DFZ. To model the density decrease in the DDZ, we used a multiplier in the density model defined as a linear factor that varied between 1 at the outer boundary of the DDZ and 0 at the inner boundary. The extent of the DDZ was determined empirically by matching the RAYTRACE calculation with the empirical model of the radial brightness profile. In the upper right panel of Fig.~\ref{fig:depl} we show in green and red colors (the red is fully underneath the green) the forward modeling of the brightness with two different boundaries for the DDZ along with the empirical model (in black). The inner boundary was a free parameter to give the best match to the empirical model. The $3-19~R_\odot$ range (green) for the DDZ yields a slightly better fit to the observation than the $2-20~R_\odot$ range (red). Note that the behavior below the observational limit of 7.65~$R_\odot$ is only an extrapolation. The inset shows the difference between the two forward models. We thus chose $19~R_\odot$ as the upper limit of the DDZ, although the depletion could start farther out without causing a noticeable change in the intensities until about $19~R_\odot$. In the bottom right panel of Fig.~\ref{fig:depl}, we show the radial profile of the dust density relative to the density at $19~R_\odot$ for the best fit to the intensity profile. Note that from $\sim10$ to $19~R_\odot$, the density appears to be approximately constant.
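The forward-modeling procedure described above can be sketched numerically: an $r^{-1.3}$ density law is multiplied by a linear ramp between the DDZ boundaries, and the brightness at each elongation is obtained by integrating density times a scattering weight along the LOS. The sketch below is illustrative only; it replaces the empirical VSF of \cite{1986A&A...163..269L} with a simple $1/r^2$ illumination weight, and all function names and numerical choices are ours, not those of the RAYTRACE code.

```python
import math

def density(r_sun, r_in=3.0, r_out=19.0):
    """Zodiacal dust density ~ r^-1.3, ramped linearly to zero across
    an assumed dust depletion zone (DDZ) between r_in and r_out (Rsun)."""
    mult = min(1.0, max(0.0, (r_sun - r_in) / (r_out - r_in)))
    return r_sun**-1.3 * mult

def los_brightness(elong_deg, obs_dist=15.9, n_steps=4000, s_max=60.0):
    """Integrate density x (1/r^2 illumination) along a line of sight at
    elongation `elong_deg`, for an observer `obs_dist` Rsun from the Sun.
    Crude stand-in for the VSF integration described in the text."""
    eps = math.radians(elong_deg)
    ds = s_max / n_steps
    total = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds  # distance along the LOS, in Rsun
        # heliocentric distance of the LOS point (law of cosines)
        r = math.sqrt(obs_dist**2 + s**2 - 2.0 * obs_dist * s * math.cos(eps))
        total += density(r) / r**2 * ds
    return total

# Brightness falls with elongation; the DDZ flattens the innermost profile
profile = [los_brightness(e) for e in (10.0, 20.0, 40.0, 60.0)]
```

Matching the shape of such a synthetic profile against the observed one is what fixes the DDZ boundaries in the actual analysis; here the boundaries are simply assumed.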
In future orbits, WISPR will observe the corona down to 2.3~$R_\odot$, which will help establish more accurately the actual limit of the DFZ. \subsubsection{Implications for collisions and/or sublimation} The smooth behavior of the radial brightness profile of the F-corona along its symmetry axis from 35~$R_\odot$ down to 7.65~$R_\odot$ is suggestive of a smooth and continuous process of dust removal. No evidence is seen of dust depletion at a particular distance due to the sublimation of a particular species. Thus the dust remaining at these distances is probably similar to quartz or obsidian, which are fairly resistant to sublimation \citep[{\emph{e.g.}},][]{2004SSRv..110..269M}. \subsubsection{Dust density enhancement along the inner planets' orbits} \label{VdustRing} In addition to measurements of the broad ZC structure, discrete dust structures have also been observed by WISPR. A dust density enhancement near Earth's orbit was theoretically predicted in the late 1980s by \cite{1989Natur.337..629J} and observationally confirmed by \cite{1994Natur.369..719D} using observations from the Infrared Astronomy Satellite \citep[{\emph{IRAS}};][]{1984ApJ...278L...1N}. \cite{1995Natur.374..521R} confirmed the predicted structure of the dust ring near Earth using observations from the Diffuse Infrared Background Experiment \citep[{\emph{DIRBE}};][]{1993SPIE.2019..180S} on the Cosmic Background Explorer mission \citep[{\emph{COBE}};][]{1992ApJ...397..420B}. More recently, in a reanalysis of white light observations from the {\emph{Helios}} mission \citep{1981ESASP.164...43P}, \cite{2007A&A...472..335L} found evidence of a brightness enhancement near Venus' orbit, which was later confirmed by \cite{2013Sci...342..960J,2017Icar..288..172J} using {\emph{STEREO}}/SECCHI observations.
Finally, in spite of the lack of a theoretical prediction, a very faint, circumsolar dust ring associated with Mercury was indirectly inferred from 6+ years of white-light observations \citep{2018ApJ...868...74S} obtained with the {\emph{STEREO}}-Ahead/HI-1 instrument. In all the observational cases mentioned above, only particular viewing geometries allowed the detection of just a small portion of the dust rings. The {\emph{Helios}} measurements were carried out with the $90^\circ$ photometer of the Zodiacal Light Experiment \citep[{\emph{ZLE}};][]{1975RF.....19..264L}, which looked perpendicular to the ecliptic plane. The observations reported a 2\% increase in brightness as {\emph{Helios}} crossed just outside of Venus's orbit \citep{2007A&A...472..335L}. On the other hand, the {\emph{STEREO}} observations were obtained with the SECCHI/HI-2 telescopes, which image the interplanetary medium about $\pm20^\circ$ above and below the ecliptic plane. In the latter case, the brightness enhancements were detected only when the viewing geometry was tangent to the orbit of Venus. The findings were interpreted, via theoretical modeling, as due to the presence of a resonant dust ring slightly beyond Venus' orbit \citep{2013Sci...342..960J,2017Icar..288..172J}. However, in a more recent work, the dust environment near Venus' orbit was modeled by following the orbital paths of more than 10,000,000 dust particles of different provenance under the influence of gravitational and non-gravitational forces \citep{2019ApJ...873L..16P}. According to this model, a hypothetical population of dust particles released by Venus co-orbital asteroids could be stable enough to produce sufficient signal to match the observations. So far, twilight telescopic surveys have not found any long-term stable Venus co-orbital asteroids \citep{2020PSJ.....1...47P}; however, their existence cannot be ruled out.
At visible wavelengths, the high density and scattering properties of the dust particles in the ZC \citep[{\emph{e.g.}},][]{1986A&A...163..269L} make it difficult to detect localized density structures embedded in it from 1~AU. However, as shown in \cite{2021ApJ...910..157S}, the {\emph{PSP}} mission, traveling through regions never before visited by any man-made probe, allows the comprehensive visualization of discrete dust density enhancements in the ZDC. As with other white-light heliospheric imagers, the scene recorded in WISPR observations is dominated by the ZL \citep[or F-corona close to the Sun; see, {\emph{e.g.}},][]{2019Natur.576..232H}. To reveal discrete, stationary F-corona features in the FOV of the WISPR instrument, it is necessary to estimate the F-corona background component (for its subsequent removal from the images) with images where the stationary feature is present at a different location in the FOV. By exploiting the different rolls while the S/C was between 0.5 and 0.25~AU, \cite{2021ApJ...910..157S} revealed the first comprehensive, white light observation of a brightness enhancement across a $345^\circ$ longitudinal extension along Venus' orbit. \begin{figure} \includegraphics[scale=0.33, clip=true, trim=0.0cm -0.5cm 0.0cm 0.0cm]{Fig_Ring.png} \caption{Combined WISPR observations of a circumsolar dust ring near Venus's orbit on 25 Aug. 2019. Images are projected onto the surface of a sphere with the observer at the center ({\emph{PSP}} S/C) and radius equal to the heliocentric distance of the observer. The Sun is not to scale. The gray areas surrounding the bright point-like objects (Mercury, Venus, and Earth) are artifacts of the image processing due to the saturation caused by their excessive brightness. The odd oval-shaped object and its surrounding area are caused by reflections of the very bright Venus in the optics.
The red dots delineate Venus's orbital path, the dashed orange line the ecliptic, and the yellow dotted line the invariable plane. Figure adapted from \cite{2021ApJ...910..157S}.} \label{fig:DustRing} \end{figure} Fig.~\ref{fig:DustRing} shows a composite panorama of the Venusian dust ring in WISPR images acquired during the inbound segment of orbit 3 while the {\emph{PSP}} S/C was performing roll maneuvers \citep[as extracted from][]{2021ApJ...910..157S}. The study showed that the latitudinal extension of the brightness enhancement corresponds to a dust ring extending 0.043~AU $\pm$ 0.004~AU, co-spatial with Venus' orbital path. Likewise, the median excess brightness of the band w.r.t. the background (about 1\%) was shown to correspond to a dust density enhancement of about 10\% relative to the local density of the ZC. Both the latitudinal extension and the density estimate are in general agreement with the findings of \cite{2007A&A...472..335L} and \cite{2013Sci...342..960J,2017Icar..288..172J}. The viewing geometry only allowed a measure of the inclination and projected height of the ring, not of its radial position or extent. Therefore, no detailed information on the distance of the dust ring from the orbit of Venus could be extracted. \subsubsection{Dust Trail of 3200 Phaethon} \label{Phaethon} Discovered in 1983 \citep{1983IAUC.3878....1G}, asteroid (3200) Phaethon is one of the most widely studied inner solar system minor bodies, by virtue of a 1.434-year orbit, its large size for a near-Earth object \citep[6~km in diameter,][]{2019P&SS..167....1T}, and a low 0.0196~AU Earth minimum orbit intersection distance (MOID) favorable to ground-based optical and radar observations \citep{1991ASSL..167...19J,2010AJ....140.1519J}.
Phaethon is recognized as the parent of the Geminid meteor shower and is associated with the Phaethon-Geminid meteoroid stream complex, including likely relationships with asteroids 2005 UD and 1999 YC \citep[{\emph{e.g.}},][]{1983IAUC.3881....1W,1989A&A...225..533G,1993MNRAS.262..231W,2006A&A...450L..25O,2008M&PSA..43.5055O}. Due to a small 0.14~AU perihelion distance, observations of Phaethon near the Sun are impossible from traditional ground-based telescopes. The first detections of Phaethon at perihelion were made by {\emph{STEREO}}/SECCHI \citep{2013ApJ...771L..36J}. While Phaethon is active near perihelion and experiences an intense impact environment near the Sun, the mass-loss rates from cometary-like activity \citep{2013ApJ...771L..36J} and impact ejecta \citep{2019P&SS..165..194S} were both found to be orders of magnitude too low to sustain the Geminids. \begin{figure}[ht!] \centering \includegraphics[scale=0.35]{FigPhaethon.pdf} \caption{WISPR-i observations recorded on 5 Nov. 2018, 14:14~UT, 6 Nov. 1:43~UT and 6 Nov. 14:54~UT. Plotted symbols indicate the imaginary position of Phaethon along the orbit in 60-minute increments, both pre-perihelion (blue) and post-perihelion (white). Symbols are excluded in the region where the trail is most easily visible. The white diamond indicates the perihelion position of the orbit in the FOV. Figure adapted from \cite{2020ApJS..246...64B}. \label{fig:trail-evolution}} \end{figure} As presented in \cite{2020ApJS..246...64B}, an unexpected white-light feature revealed in the WISPR background-corrected data was the presence of a faint extended dust trail following the orbit of Phaethon. In Fig.~\ref{fig:trail-evolution}, we show three WISPR-i telescope observations that highlight the most visible portion of this dust trail as detected during {\emph{PSP}} Enc.~1, which follows the projection of Phaethon's orbital path precisely.
Despite the dust trail being close to the instrument noise floor, the mean brightness along the trail was determined to be 8.2$\times10^{-15} B_\odot$ (where $B_\odot$ is the mean solar brightness), which equates to a visual magnitude of 15.8$\pm$0.3 per pixel. This result, coupled with the 70~arcsec per pixel resolution of WISPR-i, yields an estimated surface brightness of 25.0~mag~arcsec$^{-2}$ for the dust trail, which in turn is shown to yield a total mass of dust in the entire trail of $\sim(0.4-1.3){\times}10^{12}$~kg. This mass estimate is inconsistent with dust released by Phaethon at perihelion, but is plausibly in line with (though slightly below) mass estimates of the Geminids. The difference is attributed primarily to the faintness of the detection. This detection highlights the remarkable sensitivity of WISPR to white-light dust structures. Recent ground- and space-based surveys have failed to detect a dust trail in the orbit of 3200 Phaethon \citep{2018ApJ...864L...9Y}. The WISPR observations explain this: when factors such as heliocentric distance and orbital spreading/clustering of the dust are considered, the surface brightness of the trail as seen from a terrestrial viewpoint is fainter than 30~mag~arcsec$^{-2}$, which constitutes an extremely challenging target even for deep-sky surveys. The Phaethon dust trail continues to be clearly observed in the WISPR data in every {\emph{PSP}} orbit, and remains under continued investigation. The dust trail of comet 2P/Encke is also quite clearly visible in the WISPR data, again highlighting the instrument's ability to detect faint dust features. The inner solar system is rich with fragmenting comets and comet families, yielding the potential for the discovery of additional dust features as the mission orbit evolves.
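The surface brightness quoted for the Phaethon trail follows directly from the per-pixel magnitude and the pixel scale: spreading one pixel's flux over its solid angle makes it fainter by $2.5\log_{10}(A_{\rm pix}/{\rm arcsec}^2)$ magnitudes. A quick check with the values from the text (the helper name is ours):

```python
import math

def per_pixel_mag_to_sb(m_pix, pix_scale_arcsec):
    """Convert a per-pixel magnitude to surface brightness (mag/arcsec^2)
    by spreading the pixel's flux over its solid angle."""
    area_arcsec2 = pix_scale_arcsec**2  # pixel solid angle in arcsec^2
    return m_pix + 2.5 * math.log10(area_arcsec2)

# WISPR-i trail values from the text: 15.8 mag/pixel at 70 arcsec/pixel
sb = per_pixel_mag_to_sb(15.8, 70.0)  # ~25.0 mag arcsec^-2
```

The same relation, run in reverse for a terrestrial pixel scale and the spread-out trail, underlies the $>30$~mag~arcsec$^{-2}$ estimate that makes the trail so hard to detect from Earth.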
\subsubsection{Mass Loading of the Solar Wind by Charged Interplanetary Dust} If charged dust grains reach sufficient density, they are theoretically capable of impacting solar wind plasma dynamics, primarily through mass-loading the wind \citep[{\emph{e.g.}},][]{Rasca2014a}. As the solar wind flows over charged dust grains, the Lorentz force attempts to accelerate these grains up to the solar wind velocity. The resulting momentum exchange can slow the solar wind and distort solar wind magnetic fields \citep{Lai2015}. In practice, a high enough density of dust grains with sufficiently large charge-to-mass ratio to distort the solar wind flow is most likely to be found near localized dust sources, like comets \citep{Rasca2014b}. The Solar Probe data so far have yielded one such potential comet-solar wind interaction, and a study of this event was inconclusive with regard to whether mass loading created an observable impact on the solar wind \citep{He2021_comet}. \subsection{Summary of Dust Observations and Future Prospects for {\emph{PSP}} Dust Measurements} Summarizing our understanding of the inner heliosphere's dust environment after four years of {\emph{PSP}} dust data: \begin{itemize} \item[1.] Impact rates from the first six orbits are produced by three dust sources: \ams on bound elliptic orbits, \bms on unbound, hyperbolic orbits, and a third dust source likely related to meteoroid streams. \item[2.] The flux of \bms varies by at least 50\% on year-long timescales. \item[3.] Directionality analysis and data-model comparisons suggest the third source detected during {\emph{PSP}}'s first six orbits is a \bsn. \item[4.] A zodiacal erosion rate of at least $\sim100$~kg s$^{-1}$ is consistent with observed impact rates. \item[5.] The flux of \bms at 1~AU is estimated to be in the range of $0.4-0.8 \times 10^{-4}$ m$^{-2}$ s$^{-1}$. \item[6.] The majority of zodiacal collisions producing \bms occur in a region from $\sim10-20~\rs$. \item[7.]
If the inner source of pickup ions is due to dust, it must be from nanograins with radii $\lesssim50$~nm. \item[8.] The zodiacal dust density is expected to maintain a constant value in the range of $10-19~\rs$. \item[9.] A dust ring along Venus' orbit has been directly observed. \item[10.] Multiple meteoroid streams have been directly observed, including the Geminids meteoroid stream. \end{itemize} There are a number of open questions in the {\emph{PSP}} era of ZC exploration. For example, it is not yet determined why the FIELDS dust count rate rises within each orbital group. Increases among orbital groups are expected because, as the S/C moves closer to the Sun, its relative velocity to zodiacal dust populations increases and the zodiacal dust density increases closer to the Sun \citep{2020ApJS..246...27S}. While this effect is observed, it is also observed \citep{Pusack2021, szalay:21a} that successive orbits with the same perihelion distance show increasing dust count rates ({\emph{e.g.}}, higher dust count rates on orbit 5 compared to orbit 4). Additionally, can FIELDS dust detections be used to differentiate between existing theories for the generation of voltage spikes by impact-generated plasma? {\emph{PSP}} traverses a wide range of thermal plasma, photon flux, magnetic field, and dust impact velocity conditions, enabling new tests of impact-plasma behavior as a function of these parameters. The WISPR remote measurements have provided an unparalleled look at the dust environment within a few tens of $\rs$. Upcoming orbits will reveal whether the DFZ indicated by WISPR data \citep{2019Natur.576..232H, 2021A&A...650A..28S} will be directly observable {\emph{in~situ}} with FIELDS, and if the observed trend from WISPR for larger grains holds in the micron-sized regime.
While {\emph{PSP}} will directly transit the region of constant radial density profile inferred by WISPR in the range of $10-19~\rs$, the decrease of this profile towards a DFZ occurs inside 10~$\rs$, where {\emph{PSP}} will not transit. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{beta_fluxes_v0.pdf} \caption{\bmm fluxes observed by multiple S/C and detection schemes. \label{fig:beta_fluxes}} \end{figure} Finally, {\emph{PSP}}'s long mission duration will enable it to be a long-term observation platform for \bmm fluxes inside 1~AU. The flux of \bms directly encodes the collisional erosion occurring in the inner heliosphere; therefore, a determination of their flux provides an important window into the dynamics and evolution of the ZC. Furthermore, \bms constitute the largest impactor source by number flux at 1~AU, and may be responsible for sustaining a significant portion of the Moon's impact ejecta cloud \citep{2020ApJ...890L..11S}. Hence, \bms may play a more important role in space weathering airless bodies than previously considered, and constraining their fluxes and temporal variations can provide key insight into the weathering of bodies that transit inside 1~AU. Fig.~\ref{fig:beta_fluxes} highlights the multiple \bmm flux estimates from dedicated dust instruments onboard {\emph{Pioneers}} 8 \& 9 \citep{1973spre.conf.1047B} and {\emph{Ulysses}} \citep{1999A&A...341..296W,2004A&A...419.1169W}, as well as electric field-based observations from {\emph{STEREO}} \citep{2012JGRA..117.5102Z}, {\emph{SolO}} \citep{2021A&A...656A..30Z}, and {\emph{PSP}} \citep{2020ApJS..246...27S,szalay:21a,2021A&A...650A..29M}. As shown in this figure, the dedicated dust observations indicated a higher flux of \bms than the more recent estimates derived from electric field observations taken decades later.
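For a sense of scale, the 1~AU \bmm fluxes discussed above translate into only a few tens of impacts per day on a {\emph{PSP}}-sized collecting area. The snippet below uses mid-range values from the text; the variable names and the choice of mid-range values are ours:

```python
# Beta-meteoroid number flux at 1 AU (text: 0.4-0.8e-4 m^-2 s^-1); mid-range
flux_beta = 0.6e-4          # m^-2 s^-1
# PSP effective collecting area (text: 4-7 m^2); mid-range
area_psp = 5.5              # m^2
counts_per_day = flux_beta * area_psp * 86400.0
# A few tens of impacts per day at 1 AU; rates near perihelion are far
# higher, since both dust density and relative impact speed grow sunward
```

This back-of-the-envelope rate is why long-duration monitoring, rather than any single orbit, is needed to track secular changes in the \bmm flux.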
The extent to which the flux of \bms varies over time is a quantity {\emph{PSP}} will be uniquely poised to answer in its many years of upcoming operations. \section{Venus} \label{PSPVENUS} Putting {\emph{PSP}} into an orbit that reaches within 10~R$_{\odot}$ of the Sun requires a series of VGA flybys to push the orbital perihelion closer and closer to the Sun. A total of seven such flybys are planned, five of which have already occurred as of this writing. These visits to Venus naturally provide an opportunity for {\emph{PSP}} to study Venus and its interactions with the solar wind. In this section, we review results of observations made during these flybys. Direct images of Venus have been obtained by the WISPR imagers on board {\emph{PSP}}. The first attempt to image Venus with WISPR was during the third flyby (VGA3) on 11 Jul. 2020. The dayside of Venus is much too bright for WISPR to image. With no shutter mechanism, the effective minimum exposure time with WISPR is the image readout time of about 2~s, much too long for Venus to be anything other than highly overexposed in WISPR images made close to the planet. Furthermore, the VGA3 sequence of images demonstrated that if any part of the dayside of Venus is in the FOV, not only is the planet highly overexposed, but there are scattered light artifacts that contaminate the entire image. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{Wood2022Fig1.png} \caption{(a) WISPR-i image of the nightside of Venus from VGA3, showing thermal emission from the surface on the disk and O$_2$ nightglow emission at the limb. (b) Topographical map from Magellan, using an inverse black and white scale to match the WISPR image, with bright regions being low elevation and dark regions being high elevation. (c) WISPR-i and -o images of Venus from VGA4. The same part of the Venusian surface is observed as in (a). Red numbers in all panels mark common features for ease of reference.
Figure adapted from \citet{2022GeoRL..4996302W}.} \label{Wood2022Fig1} \end{figure*} Fortunately, there were a couple of VGA3 images that contained only the nightside of Venus in the FOV, and these images proved surprisingly revelatory. One of these images is shown in Fig.~\ref{Wood2022Fig1}a \citep{2022GeoRL..4996302W}. Structure is clearly seen on the disk. Furthermore, comparison with a topographical map of Venus from the Magellan mission (see Fig.~\ref{Wood2022Fig1}b) makes it clear that we are actually seeing the surface of the planet. This was unexpected, as the surface of Venus had never before been imaged at optical wavelengths. Viewing the planetary surface is impossible on the dayside due to the blinding presence of scattered sunlight from the very thick Venusian atmosphere. However, on the nightside there are windows in the near infrared (NIR) where the surface of the planet had been imaged before, particularly by the {\emph{Venus Express}} \citep[{\emph{VEX}};][]{2006CosRe..44..334T} and {\emph{AKATSUKI}} \citep{2011JIEEJ.131..220N} missions \citep{2008GeoRL..3511201H,nm08,ni18}. This is not reflected light but thermal emission from the surface, which even on the nightside of Venus is at about 735~K. The WISPR imagers are sensitive enough to detect this thermal emission within their optical bandpass. Because surface temperature decreases with altitude on Venus, as it does on Earth, dark areas in the {\emph{PSP}}/WISPR images correspond to cooler highland areas while bright areas correspond to hotter lowland regions. The dark oval-shaped region dominating the WISPR image near the equator is the Ovda Regio plateau at the western end of Aphrodite Terra, the largest highland region on Venus. In addition to the thermal emission from the disk of the planet, a bright rim of emission is seen at the limb of the planet.
This is O$_2$ nightglow emission from the upper atmosphere of the planet, which had been observed by previous missions, particularly {\emph{VEX}} \citep{2009JGRE..11412002G,2013GeoRL..40.2539G}. This emission is believed to be excited by winds of material flowing in the upper atmosphere from the dayside to the nightside. The experience with the VGA3 images allowed for better planning for VGA4, and during this fourth flyby on 21 Feb. 2021 a much more extensive series of images was taken of the Venusian nightside, using both the WISPR-i and WISPR-o imagers. Fig.~\ref{Wood2022Fig1}c shows a view from VGA4, combining the WISPR-i and WISPR-o images. It so happened that VGA4 occurred with essentially the same part of Venus on the nightside as VGA3, so the VGA3 and VGA4 images in Figs.~\ref{Wood2022Fig1}a and \ref{Wood2022Fig1}c are of roughly the same part of the planet, with the Ovda Regio area dominating both. An initial analysis of the WISPR images has been presented by \citet{2022GeoRL..4996302W}. A model spectrum of the surface thermal emission was computed, propagated through a model Venusian atmosphere. This model, assuming a 735~K surface temperature, was able to reproduce the count rates observed by WISPR. A long-term goal will be to compare the WISPR observations with NIR images. Ratios of the two could potentially provide a diagnostic for surface composition. However, before such mineralogy can be done, a more detailed analysis of the WISPR data must be performed to correct the images for scattered light, disk O$_2$ nightglow, and the effects of spatially variable atmospheric opacity. Finally, additional WISPR observations should be coming in the future. Although the Enc. geometry of VGA5 was not favorable for nightside imaging, and VGA6 will likewise be unfavorable, the final flyby (VGA7) on 6 Nov. 2024 should provide an opportunity for new images to be made. Furthermore, for VGA7 we will be viewing the side of Venus not observed in VGA3 and VGA4. 
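The detectability of the 735~K surface emission in WISPR's optical bandpass can be illustrated with a back-of-the-envelope estimate (our sketch, not part of the cited analysis): Wien's displacement law places the peak of the surface Planck spectrum well into the NIR,

```latex
% Order-of-magnitude sketch (not from the cited analysis):
% Wien's displacement law for a blackbody at the Venusian surface temperature
\[
  \lambda_{\max} \;=\; \frac{b}{T}
  \;\approx\; \frac{2898~\mu\mathrm{m\,K}}{735~\mathrm{K}}
  \;\approx\; 3.9~\mu\mathrm{m},
\]
```

so WISPR's optical bandpass samples only the steeply falling short-wavelength (Wien) tail of the emission, which is why the signal is detectable only on the dark nightside, free of scattered sunlight.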
{\emph{PSP}} also made extensive particle and fields measurements during the Venus flybys. Such measurements are rare at Venus, particularly high cadence electric and magnetic field measurements \citep{Futaana2017}. Therefore, {\emph{PSP}} data recorded near Venus has the potential to yield new physical insights. Several studies examined the interaction between the induced magnetosphere of Venus and the solar wind. \citet{2021GeoRL..4890783B} explored kinetic-scale turbulence in the Venusian magnetosheath, quantifying properties of the shock and demonstrating developed sub-ion kinetic turbulence. \citet{Malaspina2020_Venus} identified kinetic-scale electric field structures in the Venusian bow shock, including electron phase space holes and double layers. The occurrence rate of double layers was suggested to be greater than at Earth's bow shock, hinting at a potential significant difference in the kinetic properties of bow shocks at induced magnetospheres vs. intrinsic magnetospheres. \citet{Goodrich2020} identified subproton scale magnetic holes in the Venusian magnetosheath, one of the few observations of such structures beyond Earth's magnetosphere. Other studies used the closest portions of the flybys to examine the structure and properties of the Venusian ionosphere. \citet{Collinson2021} examined the ionospheric density at 1,100 km altitude, demonstrating consistency with solar cycle predictions for ionospheric variability. \citet{2022GeoRL..4996485C} used {\emph{PSP}} observations of cold plasma filaments extending from the Venus ionosphere (tail rays) to reconcile previously inconsistent observations of tail rays by {\emph{Pioneer}}~12 (also named Pioneer Venus Orbiter) and {\emph{VEX}}. Finally, \citet{Pulupa2021} examined radio frequency data recorded during {\emph{PSP}} Venus flybys, searching for evidence of lightning-generated plasma waves. No such waves were found, supporting results from earlier Cassini flyby observations \citep{2001Natur.409..313G}. 
\section{Summary and Conclusions} \label{SUMCONC} {\emph{PSP}} has completed 13 of its 24 scheduled orbits around the Sun over a 7-year nominal mission duration. The S/C flew by Venus for the fifth time on 16 Oct. 2021, followed by the closest perihelion of $13.28~R_\odot$. Generally, the S/C has performed well within expectations. The science data returned is a true treasure trove, revealing new aspects of the young solar wind and phenomena that we did not know much about. The following is a summary of the findings of the {\emph{PSP}} mission during its four years of operations. We, however, refer the readers to the corresponding sections for more details. \paragraph{\textbf{Switchbacks ---}} The magnetic field switchbacks observed by {\emph{PSP}} are a fundamental phenomenon of the young solar wind. SBs show an impressive effect: they turn the ambient slow solar wind into fast wind for the duration of the crossing without changing the connection to the source. These structures are Alfv\'enic, show little change in density, and display a slight preference to deflect in the tangential direction. The duration of the observed switchbacks is related to how the S/C crosses the structure, which is in turn associated with the deflection, dimensions, orientation, and the S/C velocity. Most studies imply that these structures are long and thin along the flow direction. SB patches show local modulation of the alpha-particle fraction observed in-situ, which could be a direct signature of spatial modulation in solar sources. They also show large-scale heating of protons in the direction parallel to the magnetic field, indicating preferential heating of the plasma inside the switchbacks. Observations suggest that switchbacks may be relevant to the heating and acceleration of the solar wind. It is therefore essential to understand their generation and propagation mechanisms.
Some aspects of these features point toward ex-situ processes ({\emph{e.g.}}, interchange reconnection and other solar-surface processes) and others toward in-situ mechanisms (covering stream interactions, AWs, and turbulence) in which switchbacks result from processes within the solar wind as it propagates outwards. The various flavors of interchange-reconnection-based models have several attractive features, in particular their natural explanation of the likely preferred tangential deflections of large switchbacks, the bulk tangential flow, and the possible observed temperature enhancements. However, some important features remain unclear, such as the Alfv\'enicity of the structures and how they evolve as they propagate to {\emph{PSP}} altitudes. AW models, on the other hand, naturally recover the Alfv\'enicity and radial elongation of switchbacks seen in {\emph{PSP}} observations but can struggle with some other features. In particular, it remains unclear whether the preferred tangential deflections of large switchbacks can be recovered, and such models also struggle to reproduce the high switchback fractions observed by {\emph{PSP}}. When a radially stratified environment is considered in AW models, studies showed that before propagating any significant distance, a switchback will have deformed significantly, either changing shape or unfolding depending on the background profile. This blurs the line between ex-situ and in-situ formation scenarios. Some of the proposed models are interrelated, and different mechanisms may coexist; moving forward, we must keep all the models in mind as we attempt to distinguish observationally between different mechanisms. Further understanding of switchback formation will require constant collaboration between observers and theorists. \paragraph{\textbf{Solar Wind Sources ---}} A central question in heliophysics is connecting the solar wind to its sources.
A broad range of coronal and heliospheric modeling efforts have supported all the {\emph{PSP}} Encs. {\emph{PSP}} has mainly observed slow solar wind, with a few exceptions. The first Enc. proved unique in that all models pointed to a distinct equatorial coronal hole at perihelion as the dominant solar wind source. The flow was predominantly slow and highly Alfv\'enic. During the subsequent Encs., the S/C was connected to polar coronal hole boundaries and a flatter HCS. However, what has been a surprise is that the slow solar wind streams were seen to have turbulence and fluctuation properties, including the presence of SBs, typical of the Alfv\'enic fluctuations usually associated with HSSs. That slow wind interval appeared to have much of the same characteristics as the fast wind, including the presence of predominantly outward Alfv\'enic fluctuations, except for the overall speed. The consensus is that the slow Alfv\'enic solar wind observed by {\emph{PSP}} originates from coronal holes or coronal hole boundaries. It is still unclear how the Alfv\'enic slow wind emerges: does it always arise from small isolated coronal holes with large expansion factors within the subsonic/supersonic critical point, or is it born at the boundaries of large, polar coronal holes? There is, however, one possible implication of the overall high Alfv\'enicity observed by {\emph{PSP}} in the deep inner heliosphere: all solar wind might be born Alfv\'enic, or rather, Alfv\'enic fluctuations might be a universal initial condition of solar wind outflow. Whether this is borne out by {\emph{PSP}} measurements closer to the Sun remains to be seen. Quiet periods typically separate the SB-dominated patches. These quiet periods are at odds with theories that relate slow wind formation to continual reconfiguration of the coronal magnetic field lines due to footpoint exchange, which should drive strong wind variability continually \citep[{\emph{e.g.}},][]{1996JGR...10115547F}.
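The Alfv\'enicity invoked above is commonly quantified by the normalized cross helicity, stated here for reference (a standard diagnostic; sign conventions vary between authors, and $\delta$ denotes fluctuations about the mean):

```latex
% Normalized cross helicity; delta b is the magnetic fluctuation
% in Alfv\'en (velocity) units
\[
  \sigma_c \;=\;
  \frac{2\,\langle \delta\mathbf{v}\cdot\delta\mathbf{b} \rangle}
       {\langle |\delta\mathbf{v}|^{2} \rangle
        + \langle |\delta\mathbf{b}|^{2} \rangle},
  \qquad
  \delta\mathbf{b} \;=\; \frac{\delta\mathbf{B}}{\sqrt{\mu_{0}\,\rho}},
\]
```

with $|\sigma_c|\to 1$ for unidirectionally propagating Alfv\'en waves; in a suitable convention, values near $+1$ correspond to the predominantly outward fluctuations described above.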
Another interesting finding from the {\emph{PSP}} data is that the well-known open flux problem persists down to 0.13~AU, suggesting that there exist solar wind sources which are not yet captured accurately by modeling. \paragraph{\textbf{Kinetic Physics ---}} {\emph{PSP}} measurements show interesting kinetic physics phenomena. The plasma data reveal, for the first time, the prevalence of electromagnetic ion-scale waves in the inner heliosphere. The statistical analysis of these waves shows that a near-radial magnetic field is favorable for their observation and that they mainly propagate anti-sunward. {\emph{PSP}} observed for the first time a series of proton beams with so-called hammerhead velocity distributions, i.e., VDFs excessively broadened in the direction perpendicular to the mean magnetic field vector. These distributions coincide with intense, circularly polarized, FM/W waves. These findings suggest that the hammerhead distributions arise when field-aligned proton beams excite FM/W waves and subsequently scatter off these waves. {\emph{PSP}} waveform data have also provided the first definitive evidence of sunward-propagating whistler-mode waves. This is an important discovery because sunward-propagating waves can interact with the anti-sunward propagating strahl only if the wave vector is parallel to the background magnetic field. \paragraph{\textbf{Turbulence ---}} Turbulence often refers to the energy cascade process that describes the energy transfer across scales. In solar wind turbulence, the energy is presumably injected at a very large scale ({\emph{e.g.}}, with a period of a few days). It then cascades down to smaller scales until it dissipates at scales near the ion and electron scales. The intermediate range of scales between the injection scale and the dissipation (or kinetic) range is known as the inertial range.
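The mapping from spacecraft-frame timescales to the spatial scales of these ranges rests on the frozen-in-flow (Taylor) hypothesis, sketched here with standard notation as a reminder (our summary, not a result of the cited studies):

```latex
% Taylor hypothesis: structures are advected past the S/C faster than
% they evolve, so a measured timescale tau maps to a length scale ell
\[
  \ell \;\simeq\; V_{\mathrm{sw}}\,\tau ,
  \qquad
  f_{\mathrm{sc}} \;\approx\; \frac{k\,V_{\mathrm{sw}}}{2\pi}
  \quad \text{(valid for } V_{\mathrm{sw}} \gg V_{\mathrm{A}}\text{)},
\]
```

where $f_{\mathrm{sc}}$ is the spacecraft-frame frequency and $k$ the wavenumber. Near and inside the Alfv\'en surface, $V_{\mathrm{A}} \gtrsim V_{\mathrm{sw}}$ and fluctuations propagate at speeds comparable to the flow, so modified forms of the hypothesis are required.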
The {\emph{PSP}} observations shed light on the properties of the turbulence at various scales ({\emph{i.e.}}, the outer scale, inertial-range scales, and kinetic scales) at the closest distances to the Sun. This includes the sub-Alfv\'enic region, where the solar wind speed becomes smaller than the typical Alfv\'en speed. Several recent studies using {\emph{PSP}} data reveal the significance of solar wind turbulence for the overall heating and acceleration of the solar wind plasma. For instance, magnetic field switchbacks are associated with turbulent structures, which mainly follow the field's kink. Turbulence features such as the intermittency, the Alfv\'enicity, and the compressibility have also been investigated. Overall, the data show that solar wind turbulence is mostly highly Alfv\'enic, with a low degree of compressibility even in the slow solar wind. Other studies used {\emph{PSP}} measurements to examine the typical plasma scale at which the energy spectrum breaks. However, it remains challenging to interpret the appropriate plasma scales corresponding to the empirical timescales using the standard frozen-in-flow Taylor hypothesis as the solar wind speed and the local Alfv\'en speed become comparable. \paragraph{\textbf{Large Scale ---}} Due to its low-heliographic-latitude orbit, {\emph{PSP}} crossed the HCS multiple times in each Enc. and observed many LFRs and SFRs. The observed locations of HCS crossings were compared with PFSS model predictions. An irregular source surface with a variable radius is utilized to minimize the timing and location differences. The internal structure of the HCS near the Sun is very complex, comprising structures with magnetic field magnitude depressions, increased solar wind proton bulk speeds, and associated suprathermal electron strahl dropouts, likely indicating magnetic disconnections.
In addition, small flux ropes were also identified inside or just outside the HCS, often associated with plasma jets indicating recent magnetic reconnection. {\emph{PSP}} measurements also show that, despite being the site of frequent magnetic reconnection, the near-Sun HCS is much thicker than expected. HCS observations at 1~AU reveal significantly different magnetic and plasma signatures, implying that the near-Sun HCS is the location of active evolution of its internal structures. In addition, our knowledge of the transition from CME to ICME has been limited to the in-situ data collected at 1~AU and remote-sensing observations from space-based observatories. {\emph{PSP}} provides a unique opportunity to link both views by supplying information that allows us to identify evidence of the early transition from CME to ICME. {\emph{PSP}} has also observed a multitude of events, both large- and small-scale, connected to flux ropes. For instance, at least one SBO event showed a flux rope characterized by changes that deviated from the expected smooth change in the magnetic field direction (flux rope-like configuration), low proton plasma beta, and a drop in the proton temperature. {\emph{PSP}} also observed a significant number of SFRs. Several tens of SFRs were analyzed, suggesting that SFRs are primarily found in the slow solar wind and that their possible source is MHD turbulence. Other SFRs seem to be the result of magnetic reconnection. From WISPR imaging data, the most striking features (in addition to CMEs) are the small-scale features observed when the S/C crosses the HCS. The imaging of the young solar wind plasma is revealing. The internal structure of CMEs is observed in ways not accessible before the {\emph{PSP}} era. Also, features such as the fine structure of coronal streamers indicate the highly dynamic nature of the solar wind close to the Sun.
An excellent example of the features identified by WISPR is the bright, narrow streamer rays located at the core of the streamer belt. \paragraph{\textbf{Radio Emissions and Energetic Particles ---}} The first four years of the {\emph{PSP}} mission enabled an essential understanding of the variability of solar radio emissions and provided critical insights into the acceleration and transport of energetic particles in the inner heliosphere. {\emph{PSP}} observed many solar radio emissions, SEP events, CMEs, CIRs and SIRs, inner heliospheric ACRs, and energetic electron events, which are critical to exploring the fundamental physics of particle acceleration and transport in the near-Sun environment. The {\emph{PSP}}/FIELDS RFS measures electric fields from 10 kHz to 19.2 MHz, enabling radio observations. During the first four Encs., only Enc.~2 featured multiple strong type III radio bursts and a type III storm. As the solar activity began rising with Encs.~5 and beyond, the occurrence of radio bursts has also increased. The {\emph{PSP}} radio measurements enabled several critical studies, {\emph{e.g.}}: (1) searching for evidence of heating of the corona by small-scale nanoflares; (2) measuring the circular polarization near the start of several type III bursts in Enc.~2; (3) characterizing the decay times of type III radio bursts up to 10 MHz, observing increased decay times above 1 MHz compared to extrapolation using previous measurements from {\emph{STEREO}}; (4) finding evidence for emission generated via the electron cyclotron maser instability over the several-MHz frequency range corresponding to solar distances where $f_{ce}>f_{pe}$; and (5) determining the directivity of individual type III radio bursts using data from other missions, which previously was only possible through statistical analysis of large numbers of bursts. {\emph{PSP}} observed many SEP events from different sources ({\emph{e.g.}}, SBOs, jets, surges, CMEs, flares, etc.)
and with various properties that are key to characterizing the acceleration and transport of particles in the inner heliosphere. \citet{2021A&A...650A..23C} investigated the helium content of six SEP events from May to Jun. 2020 during the fifth orbit. At least three of these six events originated from the same AR. Yet, they have significantly different $^3$He/$^4$He and He/H ratios. In addition, \citet{2021A&A...650A..26C} found that the path length of these events greatly exceeded that of the Parker spiral. They attributed this to the random walk of magnetic field lines. Most of the CMEs observed by {\emph{PSP}} were slow and did not produce clear events at 1~AU. They nonetheless produced particle events that were observed by {\emph{PSP}} closer to the Sun. \citet{2020ApJS..246...29G} and \citet{2020ApJS..246...59M} reported on a particular CME event observed by {\emph{PSP}} shortly after the first perihelion pass that produced a significant enhancement in SEPs with energies below a few hundred keV/nuc, which also showed a clear velocity dispersion. The {\emph{PSP}} plasma measurements did not show any evidence of a shock, and the particle flux decayed before the CME crossed the S/C. Two different interpretations were proposed for this event. \citet{2020ApJS..246...29G} suggested diffusive transport of particles accelerated by the CME starting from about the time it was at 7.5~$R_\odot$, as observations indicate that very weak shocks, or even non-shock plasma compressions driven by a slow CME, are capable of accelerating particles. \citet{2020ApJS..246...59M} proposed an alternative based on the ``pressure cooker'' mechanism observed in the magnetosphere, in which energetic particles are confined below the CME in the solar corona, in a region bounded by an electric potential above and strong magnetic fields below.
The highest-energy particles overcome this barrier and arrive at the S/C earlier than low-energy particles, which are presumably released much later, once the CME has erupted from the Sun. The other interesting aspect is that the ``pressure cooker'' mechanism produces a maximum energy that depends on the charge of the species. Although the event was relatively weak, there were sufficient counts of He, O, and Fe that, when combined with assumptions about the composition of these species in the corona, agreed with the observed high-energy cut-off as a function of particle species. SIRs/CIRs are known to be regions where energetic particles are accelerated. Therefore, {\emph{PSP}} observations within 1~AU are particularly well suited to disentangle these acceleration and transport effects, as the SIR/CIR-associated suprathermal to energetic ion populations are farther from shock-associated acceleration sites, which are usually beyond 1~AU. Many of these nascent SIRs/CIRs were associated with energetic particle enhancements offset from the SIR/CIR interface. At least one of these events showed evidence of local compressive acceleration, suggesting that CIR-associated acceleration does not always require shock waves. {\emph{PSP}} also observed ACRs with intensities increasing over energies from $\sim5$ to $\sim40$ MeV/nuc, a characteristic feature of ACR spectra. However, the observed radial gradient is stronger ($\sim25\pm5$\%/AU) than observed beyond 1~AU. Understanding the radial gradients of ACRs in the inner heliosphere provides constraints on drift transport and cross-field diffusion models. \paragraph{\textbf{Dust in the Inner Heliosphere ---}} The zodiacal dust cloud is one of the most significant structures in the heliosphere. To date, our understanding of the near-Sun dust environment is built on both in-situ and remote measurements outside 0.3~AU.
{\emph{PSP}} provides the only in-situ measurements and remote-sensing observations of interplanetary dust in the near-Sun environment inside 0.3~AU. {\emph{PSP}} provides measurements of the total dust impact rate on the S/C. The FIELDS instrument detects perturbations to the S/C potential that result from transient plasma clouds formed when dust grains strike the S/C at high velocities, vaporizing and ionizing the impacting grain and some fraction of the S/C surface. Several features have been observed in the impact rate profiles. For the first three orbits, a single pre-perihelion peak was observed. The subsequent orbits are marked by an additional post-perihelion peak. Between these two peaks, a local minimum in the impact rate was present near perihelion. Comparing the {\emph{PSP}} data to a two-component analytic dust model shows that {\emph{PSP}}'s dust impact rates are consistent with at least three distinct populations: bound zodiacal $\alpha$-meteoroids on elliptic orbits; unbound $\beta$-meteoroids on hyperbolic orbits; and a distinct third population of impactors. The data-model comparison indicates that the $\beta$-meteoroids are predominantly produced within $10-20~R_\odot$ and are unlikely to be the inner source of pickup ions, instead suggesting that the population of trapped nanograins is likely this source. The post-perihelion peak is likely the result of {\emph{PSP}} flying through the collisional by-products produced when a meteoroid stream ({\emph{i.e.}}, the Geminids meteoroid stream) collides with the nominal zodiacal cloud. At about 19~$R_\odot$, WISPR white-light observations revealed a lower increase in F-corona brightness compared to observations obtained between 0.3~AU and 1~AU. This marks the outer boundary of the DDZ. The radius of the DFZ itself is found to be about 4~$R_\odot$. The {\emph{PSP}} imaging observations confirm a nine-decade-old prediction of stellar DFZs by \citet{1929ApJ....69...49R}.
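The $\alpha$/$\beta$-meteoroid distinction above can be made quantitative via the standard radiation-pressure parameter of a grain (stated here for reference; not a result of the cited comparison):

```latex
% Ratio of radiation pressure to solar gravity for a spherical grain;
% both forces scale as r^{-2}, so beta is independent of distance
\[
  \beta \;\equiv\; \frac{F_{\mathrm{rad}}}{F_{\mathrm{grav}}}
  \;\propto\; \frac{Q_{\mathrm{pr}}}{\rho_{\mathrm{g}}\,s},
\]
```

where $Q_{\mathrm{pr}}$ is the radiation-pressure efficiency, $\rho_{\mathrm{g}}$ the grain density, and $s$ the grain radius. Small collisional fragments have large $\beta$; fragments released from parents on circular orbits become gravitationally unbound ($\beta$-meteoroids) for $\beta > 0.5$.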
\paragraph{\textbf{Venus ---}} WISPR images of the Venusian nightside during VGAs 3 and 4 proved surprisingly revelatory, clearly showing structures on the disk. This was unexpected, as the surface of Venus had never before been imaged at optical wavelengths. The WISPR imagers are sensitive enough to detect the surface thermal emission within their optical bandpass. The WISPR images show the Ovda Regio plateau at the western end of Aphrodite Terra, the most extensive highland region on Venus. In addition to the thermal emission from the planet's disk, the data show O$_2$ nightglow emission from the planet's upper atmosphere, which previous missions had observed. Another important planetary discovery is that of the dust ring along the orbit of Venus (see \S\ref{VdustRing}). {\emph{PSP}} is over four years into its prime mission. It has uncovered numerous previously unknown phenomena, all during the minimum of the solar cycle, when the Sun is not very active. The activity level is rising as we approach the maximum of solar cycle 25. We will undoubtedly discover other aspects of the solar corona and inner heliosphere. For instance, we hope for the S/C to fly through many of the most violent solar explosions and tell us how particles are accelerated to extreme energies.
\section{List of Abbreviations} {\small \begin{longtable}{ p{.10\textwidth} p{.90\textwidth} } \multicolumn{2}{l}{\textbf{Space Agencies}} \\ ESA & European Space Agency \\ JAXA & Japan Aerospace Exploration Agency \\ NASA & National Aeronautics and Space Administration \\ \end{longtable} } {\small \begin{longtable}{ p{.15\textwidth} p{.5\textwidth} p{.35\textwidth} } \multicolumn{3}{l}{\textbf{Missions, Observatories, and Instruments}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endfirsthead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endhead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot {\emph{\textbf{ACE}}} & The Advanced Composition Explorer mission & \citet{1998SSRv...86....1S} \\ \hspace{0.15cm} {\emph{EPAM}} & The Electron, Proton, and Alpha Monitor instrument & \citet{1998SSRv...86..541G} \\ \hspace{0.15cm} {\emph{ULEIS}} & The Ultra Low Energy Isotope Spectrometer instrument & \citet{1998SSRv...86..409M} \\ {\emph{\textbf{AKATSUKI}}} & The AKATSUKI/PLANET-C mission & \citet{2011JIEEJ.131..220N} \\ {\emph{\textbf{ARTEMIS}}} & The Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun mission & \citet{2011SSRv..165....3A} \\ {\emph{\textbf{BepiColombo}}} & The BepiColombo mission & \citet{2021SSRv..217...90B} \\ {\emph{\textbf{Cluster}}} & The Cluster mission & \citet{1997SSRv...79...11E} \\ {\emph{\textbf{COBE}}} & The Cosmic Background Explorer mission & \citet{1992ApJ...397..420B} \\ \hspace{0.15cm} DIRBE & The Diffuse Infrared Background Experiment & \citet{1993SPIE.2019..180S} \\ {\emph{\textbf{Galileo}}} & The Galileo mission & \citet{1992SSRv...60....3J} \\ {\emph{\textbf{GOES}}} & The Geostationary Operational
Environmental Satellite program & \url{https://www.nasa.gov/content/goes-overview/index.html} \\ {\emph{\textbf{GONG}}} & The Global Oscillation Network Group & \citet{1988AdSpR...8k.117H} \\ {\emph{\textbf{Helios}}} & The Helios (1 \& 2) mission & \citet{Marsch1990} \\ \hspace{0.15cm} ZLE & The Zodiacal Light Experiment & \citet{1975RF.....19..264L} \\ {\emph{\textbf{HEOS-2}}} & The Highly Eccentric Orbit Satellite-2 & \url{https://nssdc.gsfc.nasa.gov/nmc/spacecraft/display.action?id=1972-005A} \\ {\emph{\textbf{IMP-8}}} & The Interplanetary Monitoring Platform-8 & \url{https://science.nasa.gov/missions/imp-8} \\ {\emph{\textbf{IRAS}}} & The Infrared Astronomy Satellite & \citet{1984ApJ...278L...1N} \\ {\emph{\textbf{ISEE}}} & The International Sun-Earth Explorer & \cite{1979NCimC...2..722D} \\ {\emph{\textbf{Mariner~2}}} & The Mariner~2 mission & \url{https://www.jpl.nasa.gov/missions/mariner-2} \\ {\emph{\textbf{MMS}}} & The Magnetospheric Multiscale mission & \citet{2014cosp...40E.433B} \\ {\emph{\textbf{NuSTAR}}} & The Nuclear Spectroscopic Telescope ARray & \citet{2013ApJ...770..103H} \\ {\emph{\textbf{Pioneer}}} & The Pioneer mission & \url{https://www.nasa.gov/centers/ames/missions/archive/pioneer.html} \\ {\emph{\textbf{PSP}}} & The Parker Solar Probe mission & \citet{2016SSRv..204....7F} \\ & & \citet{doi:10.1063/PT.3.5120} \\ \hspace{0.15cm} FIELDS & The FIELDS investigation & \citet{2016SSRv..204...49B} \\ \hspace{0.3cm} AEB & The Antenna Electronics Board & --- \\ \hspace{0.3cm} DFB & The Digital Field Board & --- \\ \hspace{0.3cm} HFR & The High Frequency Receiver & --- \\ \hspace{0.3cm} MAG(s) & The Fluxgate magnetometer(s) & --- \\ \hspace{0.3cm} RFS & The Radio Frequency Spectrometer & \citet{2017JGRA..122.2836P} \\ \hspace{0.3cm} SCM & The Search Coil Magnetometer & --- \\ \hspace{0.3cm} TDS & The Time Domain Sampler & --- \\ \hspace{0.15cm} SWEAP & The Solar Wind Electrons Alphas and Protons Investigation & \citet{2016SSRv..204..131K} \\ 
\hspace{0.3cm} SPAN & Solar Probe ANalyzers (A \& B) & \citet{2020ApJS..246...74W} \\ \hspace{0.3cm} SPAN-e & Solar Probe ANalyzers-electrons & \citet{2020ApJS..246...74W} \\ \hspace{0.3cm} SPAN-i & Solar Probe ANalyzers-ions & \citet{10.1002/essoar.10508651.1} \\ \hspace{0.3cm} SPC & Solar Probe Cup & \citet{2020ApJS..246...43C} \\ \hspace{0.3cm} SWEM & SWEAP Electronics Module & --- \\ \hspace{0.15cm} {\emph{IS$\odot$IS}} & The Integrated Science Investigation of the Sun & \citet{2016SSRv..204..187M} \\ \hspace{0.3cm} EPI-Hi & Energetic Particle Instrument-High & --- \\ \hspace{0.45cm} HET & High Energy Telescope & --- \\ \hspace{0.45cm} LET & Low Energy Telescope & --- \\ \hspace{0.3cm} EPI-Lo & Energetic Particle Instrument-Low & --- \\ \hspace{0.15cm} {\emph{WISPR}} & The Wide-field Imager for Solar PRobe & \citet{2016SSRv..204...83V} \\ \hspace{0.3cm} {\emph{WISPR-i}} & WISPR inner telescope & \citet{2016SSRv..204...83V} \\ \hspace{0.3cm} {\emph{WISPR-o}} & WISPR outer telescope & \citet{2016SSRv..204...83V} \\ \hspace{0.15cm} {\emph{TPS}} & The Thermal Protection System & --- \\ {\emph{\textbf{SDO}}} & The Solar Dynamics Observatory & \citet{2012SoPh..275....3P} \\ \hspace{0.15cm} AIA & The Atmospheric Imaging Assembly & \citet{2012SoPh..275...17L} \\ \hspace{0.15cm} HMI & The Helioseismic and Magnetic Imager & \citet{2012SoPh..275..207S} \\ {\emph{\textbf{SOHO}}} & The Solar and Heliospheric Observatory & \citet{1995SSRv...72...81D} \\ \hspace{0.15cm} {\emph{EPHIN}} & Electron Proton Helium INstrument & \citet[EPHIN;][]{1988sohi.rept...75K} \\ \hspace{0.15cm} {\emph{LASCO}} & Large Angle and Spectrometric COronagraph & \citet{1995SoPh..162..357B} \\ {\emph{\textbf{SolO}}} & The Solar Orbiter mission & \citet{2020AA...642A...1M} \\ {\emph{\textbf{STEREO}}} & The Solar TErrestrial RElations Observatory & \citet{2008SSRv..136....5K} \\ \hspace{0.15cm} SECCHI & Sun-Earth Connection Coronal and Heliospheric Investigation & \citet{2008SSRv..136...67H} \\ \hspace{0.3cm}
COR2 & Coronagraph 2 & --- \\ \hspace{0.3cm} EUVI & Extreme Ultraviolet Imager & \citet{2004SPIE.5171..111W} \\ \hspace{0.3cm} HI & Heliospheric Imager (1 \& 2) & \citet{2009SoPh..254..387E} \\ {\emph{\textbf{Ulysses}}} & Ulysses & \citet{1992AAS...92..207W} \\ {\emph{\textbf{VEX}}} & Venus Express & \citet{2006CosRe..44..334T} \\ {\emph{\textbf{Voyager}}} & Voyager (1 \& 2) & \citet{1977SSRv...21...77K} \\ {\emph{\textbf{Wind}}} & Wind & \url{https://wind.nasa.gov} \\ \hspace{0.15cm} {\emph{WAVES}} & WAVES & \citet{1995SSRv...71..231B} \end{longtable} } {\small \begin{longtable}{ p{.15\textwidth} p{.4\textwidth} p{.35\textwidth} } \multicolumn{3}{l}{\textbf{Models}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endfirsthead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endhead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot 3DCORE & 3D Coronal Rope Ejection model & \citet{2021ApJS..252....9W} \\ ADAPT & Air Force Data Assimilative Photospheric Flux Transport model & \citet{2004JASTP..66.1295A} \\ CC & Circular Cylindrical model & --- \\ EC & Elliptical-Cylindrical model & --- \\ EUHFORIA & European Heliospheric FORecasting Information Asset & \citet{2018JSWSC...8A..35P} \\ GCS & Graduated Cylindrical Shell model & \citet{2011ApJS..194...33T} \\ HELCATS & Heliospheric Cataloguing, Analysis and Techniques Service model & \citet{2014AGUFMSH43B4214B} \\ HelMOD & Heliospheric Modulation model & \citet{2018AdSpR..62.2859B} \\ OSPREI & Open Solar Physics Rapid Ensemble Information model & \citet{2022SpWea..2002914K} \\ PARADISE & Particle Radiation Asset Directed at Interplanetary Space Exploration model & \citet{2019AA...622A..28W,2020AA...634A..82W} \\ PFSS &
Potential-Field Source-Surface model & \citet{1969SoPh....9..131A,1969SoPh....6..442S} \\ PIC & Particle-In-Cell & --- \\ SSEF30 & The Self-Similar Expansion Fitting (30) model & \citet{2012ApJ...750...23D} \\ WSA & Wang-Sheeley-Arge (PFSS) model & \citet{2000JGR...10510465A} \\ WSA/THUX & WSA/Tunable HUX model & \citet{2020ApJ...891..165R} \end{longtable} } {\small \begin{longtable}{ p{.20\textwidth} p{.80\textwidth} } \multicolumn{2}{l}{\textbf{Acronyms and Symbols}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} \\ \endfirsthead \multicolumn{2}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} \\ \endhead \multicolumn{2}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot 1D & One-dimensional \\ 2D & Two-dimensional \\ 2PL & Two spectral range continuous power-law fit \\ 3D & Three-dimensional \\ 3PL & Three spectral range continuous power-law fit \\ ACR(s) & Anomalous cosmic ray(s) \\ ACW(s) & Alfv\'en ion cyclotron wave(s) \\ AR(s) & Active region(s) \\ AU & Astronomical unit \\ AW(s) & Alfv\'en wave(s) \\ CIR(s) & Corotating interaction region(s) \\ CME(s) & Coronal mass ejection(s) \\ cobpoint & ``Connecting with the OBserving'' point \\ CR & Carrington rotation \\ DC & Direct current \\ DDZ & Dust depletion zone \\ DFZ & Dust-free zone \\ dHT & de Hoffman-Teller frame \\ DOY & Day of the year \\ ED & ``Either'' discontinuity \\ Enc. / Encs. 
& Encounter(s) \\ ES (waves) & Electrostatic waves \\ EUV & Extreme ultraviolet \\ $f_{ce}$ & Electron gyrofrequency or electron cyclotron frequency \\ $f_{cp}$ & Proton gyrofrequency or proton cyclotron frequency \\ $f_{pe}$ & Electron plasma frequency \\ $f_{LH}$ & Lower hybrid frequency \\ F-corona & Fraunhofer-corona \\ $F_\mathrm{A}$ & Alfv\'enic energy flux \\ $F_\mathrm{K}$ & Bulk kinetic energy flux \\ FITS & Flexible Image Transport System \\ FM/W & Fast-magnetosonic/whistler \\ FOV & Field of view \\ GCR(s) & Galactic cosmic ray(s) \\ HCI & Heliocentric inertial coordinate system \\ HCS & Heliospheric current sheet \\ HEE & Heliocentric Earth Ecliptic system \\ HEEQ & Heliocentric Earth Equatorial system \\ HFR & High frequency receiver \\ HHT & Hilbert-Huang transform \\ HPC & Helioprojective Cartesian system \\ HPS & Heliospheric plasma sheet \\ HSO & Heliophysics System Observatory \\ HSS(s) & High-speed stream(s) \\ ICME(s) & Interplanetary coronal mass ejection(s) \\ ICW & Ion cyclotron wave \\ ID(s) & Interplanetary discontinuity(ies) \\ KAW(s) & Kinetic Alfv\'en wave(s) \\ LFR(s) & Large-scale flux rope(s) \\ LTE & Local thermal equilibrium \\ LOS & Line of sight \\ $M_{A}$ & Alfv\'enic Mach number \\ MAG(s) & Fluxgate magnetometer(s) \\ MC(s) & Magnetic cloud(s) \\ MFR(s) & Magnetic flux rope(s) \\ MHD & Magnetohydrodynamic \\ MOID & Earth Minimum Orbit Intersection Distance \\ MVA & Minimum variance analysis \\ ND & ``Neither'' discontinuity \\ NIR & Near infrared \\ PAD(s) & Pitch angle distribution(s) \\ PDF(s) & Probability distribution function(s) \\ PIC & Particle-in-cell \\ PIL(s) & Polarity inversion line(s) \\ PVI & Partial variance of increments \\ QTN & Quasi-thermal noise \\ RD(s) & Rotational discontinuity(ies) \\ RLO & Reconnection/Loop-Opening \\ RTN & Radial-Tangential-Normal frame \\ $R_\odot$ & Solar radius \\ SBO(s) & Streamer blowout(s) \\ SBO-CME(s) & Streamer blowout CME(s) \\ S/C & Spacecraft \\ SEP(s) & Solar energetic particle 
event(s) \\ SFR(s) & Small-scale flux rope(s) \\ SIR(s) & Stream interaction region(s) \\ TD(s) & Tangential discontinuity(ies) \\ TH & Taylor's hypothesis \\ TOF & Time of flight \\ TPS & Thermal Protection System \\ UT & Universal Time \\ VDF(s) & Velocity distribution function(s) \\ VGA(s) & Venus gravity assist(s) \\ VSF & Volume scattering function \\ WCS & World Coordinate System \\ WHPI & Whole Heliosphere and Planetary Interactions \\ WKB & The Wentzel, Kramers, and Brillouin approximation \\ w.r.t. & With respect to \\ WTD & Wave/Turbulence-Driven \\ ZC & Zodiacal cloud \\ ZDC & Zodiacal dust cloud \\ ZL & Zodiacal light \\ \end{longtable} \begin{acknowledgements} Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA’s Living with a Star (LWS) program (contract NNN06AA01C). Support from the LWS management and technical team has played a critical role in the success of the Parker Solar Probe mission. \smallskip The FIELDS instrument suite was designed and built and is operated by a consortium of institutions including the University of California, Berkeley, University of Minnesota, University of Colorado, Boulder, NASA/GSFC, CNRS/LPC2E, University of New Hampshire, University of Maryland, UCLA, IFRU, Observatoire de Meudon, Imperial College, London and Queen Mary University London. \smallskip The SWEAP Investigation is a multi-institution project led by the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. Other members of the SWEAP team come from the University of Michigan, University of California, Berkeley Space Sciences Laboratory, The NASA Marshall Space Flight Center, The University of Alabama Huntsville, the Massachusetts Institute of Technology, Los Alamos National Laboratory, Draper Laboratory, JHU's Applied Physics Laboratory, and NASA Goddard Space Flight Center. 
\smallskip The Integrated Science Investigation of the Sun (IS$\odot$IS) Investigation is a multi-institution project led by Princeton University with contributions from Johns Hopkins/APL, Caltech, GSFC, JPL, SwRI, University of New Hampshire, University of Delaware, and University of Arizona. \smallskip The Wide-Field Imager for Parker Solar Probe (WISPR) instrument was designed, built, and is now operated by the US Naval Research Laboratory in collaboration with Johns Hopkins University/Applied Physics Laboratory, California Institute of Technology/Jet Propulsion Laboratory, University of G\"{o}ttingen, Germany, Centre Spatial de Li\`{e}ge, Belgium and University of Toulouse/Research Institute in Astrophysics and Planetology. \smallskip IM is supported by the Research Council of Norway (grant number 262941). OVA was supported by NASA grants 80NNSC19K0848, 80NSSC22K0417, 80NSSC21K1770, and NSF grant 1914670. \end{acknowledgements} \section*{Compliance with Ethical Standards} {\bf{Disclosure of potential conflicts of interest:}} There are no conflicts of interest (financial or non-financial) for any of the co-authors of this article. \\ {\bf{Research involving Human Participants and/or Animals:}} The results reported in this article do not involve Human Participants and/or Animals in any way. \\ {\bf{Informed consent:}} The authors agree with sharing the information reported in this article with whoever needs to access it. \bibliographystyle{aa} \section{Introduction} \label{PSPINTRO} Parker Solar Probe \cite[{\emph{PSP}};][]{2016SSRv..204....7F,doi:10.1063/PT.3.5120} is flying closer to the Sun than any previous spacecraft (S/C). Launched on 12 Aug. 2018, on 6 Dec. 2022 {\emph{PSP}} had completed 14 of its 24 scheduled perihelion encounters (Encs.)\footnote{The PSP solar Enc. is defined as the orbit section where the S/C is below 0.25~AU from the Sun's center.} around the Sun over the 7-year nominal mission duration. The S/C flew by Venus for the fifth time on 16 Oct. 
2021, followed by the closest perihelion of 13.28 solar radii ($R_\odot$) on 21 Nov. 2021. The S/C will remain on the same orbit for a total of seven solar Encs. After Enc.~16, {\emph{PSP}} is scheduled to fly by Venus for the sixth time to lower the perihelion to 11.44~$R_\odot$ for another five orbits. The seventh and last Venus gravity assist (VGA) of the primary mission is scheduled for 6 Nov. 2024. This gravity assist will set {\emph{PSP}} for its last three orbits of the primary mission. The perihelia of orbits 22, 23, and 24 of 9.86~$R_\odot$ will be on 24 Dec. 2024, 22 Mar. 2025, and 19 Jun. 2025, respectively. The mission’s overarching science objective is to determine the structure and dynamics of the Sun’s coronal magnetic field and to understand how the corona is heated, how the solar wind is accelerated, and how energetic particles are produced and their distributions evolve. The {\emph{PSP}} mission targets processes and dynamics that characterize the Sun’s expanding corona and solar wind, enabling the measurement of coronal conditions leading to the nascent solar wind and eruptive transients that create space weather. {\emph{PSP}} is sampling the solar corona and solar wind to reveal how the corona is heated and how the solar wind and solar energetic particles (SEPs) are accelerated. To achieve this, {\emph{PSP}} measurements will be used to address the following three science goals: (1) Trace the flow of energy that heats the solar corona and accelerates the solar wind; (2) Determine the structure and dynamics of the plasma and magnetic fields at the sources of the solar wind; and (3) Explore mechanisms that accelerate and transport energetic particles. Understanding these phenomena has been a top science goal for over six decades. {\emph{PSP}} is primarily an exploration mission that is flying through one of the last unvisited and most challenging regions of space within our solar system, and the potential for discovery is huge. 
The returned science data is a treasure trove yielding insights into the nature of the young solar wind and its evolution as it propagates away from the Sun. Numerous discoveries have been made over the first four years of the prime mission, most notably the ubiquitous magnetic field switchbacks closer to the Sun, the dust-free zone (DFZ), novel kinetic aspects in the young solar wind, excessive tangential flows beyond the Alfv\'en critical point, dust $\beta$-streams resulting from collisions of the Geminids meteoroid stream with the zodiacal dust cloud (ZDC), and the shortest wavelength thermal emission from the Venusian surface. Since 28 Apr. 2021 ({\emph{i.e.}}, perihelion of Enc.~8), the S/C has been sampling the solar wind plasma within the magnetically-dominated corona, {\emph{i.e.}}, sub-Alfv\'enic solar wind, marking the beginning of a critical phase of the {\emph{PSP}} mission. In this region solar wind physics changes because of the multi-directionality of wave propagation (waves moving sunward and anti-sunward can affect the local dynamics including the turbulent evolution, heating and acceleration of the plasma). This is also the region where velocity gradients between the fast and slow speed streams develop, forming the initial conditions for the formation, further out, of corotating interaction regions (CIRs). The science data return ({\emph{i.e.}}, data volume) from {\emph{PSP}} exceeded the pre-launch estimates by a factor of over four. Since the second orbit, orbital coverage extended from the nominal perihelion Enc. to over 70\% of the following orbit duration. We expect this to continue throughout the mission. The {\emph{PSP}} team is also looking into ways to extend the orbital coverage to the whole orbit duration. This will allow sampling the solar wind and SEPs over a large range of heliodistances. 
The {\emph{PSP}} science payload comprises four instrument suites: \begin{enumerate} \item{}The FIELDS investigation makes measurements of the electric and magnetic fields and waves, S/C floating potential, density fluctuations, and radio emissions over 20 MHz of bandwidth and 140 dB of dynamic range. It comprises \begin{itemize} \item Four electric antennas (V1-V4) mounted at the base of the S/C thermal protection system (TPS). The electric preamplifiers connected to each antenna provide outputs to the Radio Frequency Spectrometer (RFS), the Time Domain Sampler (TDS), and the Antenna Electronics Board (AEB) and Digital Fields Board (DFB). The V1-V4 antennas are exposed to the full solar environment. \item A fifth antenna (V5) provides low (LF) and medium (MF) frequency outputs. \item Two fluxgate magnetometers (MAGs) provide data with a bandwidth of $\sim140$~Hz at 292.97 samples/sec over a dynamic range of $\pm65,536$~nT with a resolution of 16 bits. \item A search coil magnetometer (SCM) measures the AC magnetic signature of solar wind fluctuations, from 10 Hz up to 1 MHz. \end{itemize} V5, the MAGs, and the SCM are all mounted on the boom in the shade of the TPS. For further details, see \citet{2016SSRv..204...49B}. \item{}The Solar Wind Electrons Alphas and Protons (SWEAP) Investigation measures the thermal solar wind, {\emph{i.e.}}, electrons, protons and alpha particles. SWEAP measures the velocity distribution functions (VDFs) of ions and electrons with high energy and angular resolution. It consists of the Solar Probe Cup (SPC), the Solar Probe Analyzers (SPAN-A and SPAN-B), and the SWEAP Electronics Module (SWEM): \begin{itemize} \item SPC is fully exposed to the solar environment as it looks directly at the Sun and measures ion and electron fluxes and flow angles as a function of energy. \item SPAN-A is mounted on the ram side and comprises ion and electron electrostatic analyzers (SPAN-i and SPAN-e, respectively). 
\item SPAN-B is an electron electrostatic analyzer on the anti-ram side of the S/C. \item The SWEM manages the suite by distributing power, formatting onboard data products, and serving as a single electrical interface to the S/C. \end{itemize} The SPANs and the SWEM reside on the S/C bus behind the TPS. See \citet{2016SSRv..204..131K} for more information. \item{} The Integrated Science Investigation of the Sun (IS$\odot$IS) investigation measures energetic particles over a very broad energy range (10s of keV to 100 MeV). IS$\odot$IS is mounted on the ram side of the S/C bus. It comprises two Energetic Particle Instruments (EPI) to measure low (EPI-Lo) and high (EPI-Hi) energy: \begin{itemize} \item EPI-Lo is a time-of-flight (TOF) mass spectrometer that measures electrons from $\sim$25--1000~keV, protons from $\sim$0.04--7~MeV, and heavy ions from $\sim$0.02--2~MeV/nuc. EPI-Lo has 80 apertures distributed over eight wedges. Their combined fields-of-view (FOVs) cover nearly an entire hemisphere. \item EPI-Hi measures electrons from $\sim$0.5--6~MeV and ions from $\sim$1--200~MeV/nuc. EPI-Hi consists of three telescopes: a high energy telescope (HET; double ended) and two low energy telescopes, LET1 (double ended) and LET2 (single ended). \end{itemize} See \citet{2016SSRv..204..187M} for a full description of the IS$\odot$IS investigation. \item{} The Wide-Field Imager for Solar PRobe (WISPR) is the only remote-sensing instrument suite on the S/C. WISPR is a white-light imager providing observations of flows and transients in the solar wind over a $95^\circ\times58^\circ$ (radial and transverse, respectively) FOV covering elongation angles from $13.5^\circ$ to $108^\circ$. It comprises two telescopes: \begin{itemize} \item WISPR-i covers the inner part of the FOV ($40^\circ\times40^\circ$). \item WISPR-o covers the outer part of the FOV ($58^\circ\times58^\circ$). \end{itemize} See \citet{2016SSRv..204...83V} for further details. 
\end{enumerate} Before tackling the {\emph{PSP}} achievements during the first four years of the prime mission, a brief historical context is given in \S\ref{HistoricalContext}. \S\ref{PSPMSTAT} provides a brief summary of the {\emph{PSP}} mission status. \S\S\ref{MagSBs}-\ref{PSPVENUS} describe the {\emph{PSP}} discoveries during the first four years of operations: switchbacks, solar wind sources, kinetic physics, turbulence, large-scale structures, energetic particles, dust, and Venus, respectively. The conclusions and discussion are given in \S\ref{SUMCONC}. {\emph{Although sections 3-12 may have some overlap and cross-referencing, each section can be read independently from the rest of the paper.}} \section{Historical Context: {\emph{Mariner~2}}, {\emph{Helios}}, and {\emph{Ulysses}}} \label{HistoricalContext} Before {\emph{PSP}}, several space missions shaped our understanding of the solar wind for decades. Three stand out as trailblazers: {\emph{Mariner~2}}, {\emph{Helios}} \citep{Marsch1990}, and {\emph{Ulysses}} \citep{1992AAS...92..207W}. {\emph{Mariner~2}}, launched on 27 Aug. 1962, was the first successful mission to a planet other than the Earth ({\emph{i.e.}}, Venus). Its solar wind measurements were a first and are among the most significant discoveries of the mission \citep[see][]{1962Sci...138.1095N}. Although the mission returned data for only a few months, the measurements showed the highly variable nature and complexity of the plasma flow expanding anti-sunward \citep{1965ASSL....3...67S}. However, before the launch of {\emph{PSP}}, almost everything we knew about the inner interplanetary medium was due to the double {\emph{Helios}} mission. This mission set the stage for an initial understanding of the major physical processes occurring in the inner heliosphere. It greatly helped the development and tailoring of instruments onboard subsequent missions. The two {\emph{Helios}} probes were launched on 10 Dec. 1974 and 15 Jan. 
1976 and placed in orbit in the ecliptic plane. Their distance from the Sun varied between 0.29 and 1 astronomical unit (AU) with an orbital period of about six months. The payload of the two {\emph{Helios}} comprised several instruments: \begin{itemize} \item Proton, electron, and alpha particle analyzers; \item Two DC magnetometers; \item A search coil magnetometer; \item A radio wave experiment; \item Two cosmic ray experiments; \item Three photometers for the zodiacal light; and \item A dust particle detector \end{itemize} Here we provide a very brief overview of some of the scientific goals achieved by {\emph{Helios}} to make the reader aware of the importance that this mission has had in the study of the solar wind and beyond. {\emph{Helios}} established the mechanisms which generate the dust particles at the origin of the zodiacal light (ZL), their relationship with micrometeorites and comets, and the radial dependence of dust density \citep{1976BAAS....8R.457L}. Micrometeorite impacts on the dust particle sensors allowed the study of asymmetries with respect to (hereafter w.r.t.) the ecliptic plane and of the different origins related to stone meteorites or iron meteorites, and suggested that many particles move on hyperbolic orbits out of the solar system \citep{1980P&SS...28..333G}. {\emph{Helios}}' plasma wave experiment first confirmed that the generation of type~III radio bursts is a two-step process, as theoretically predicted by \citet{1958SvA.....2..653G}, and revealed enhanced levels of ion acoustic wave turbulence in the solar wind. In addition, the radial excursion of {\emph{Helios}} made it possible to prove that the frequency of the radio emission increases with decreasing distance from the Sun, owing to the consequent increase of plasma density \citep{1979JGR....84.2029G,1986A&A...169..329K}. 
The radio astronomy experiments onboard both S/C were the first to provide ``three-dimensional (3D) direction finding'' in space, making it possible to follow the source of a type~III radio burst during its propagation along the interplanetary magnetic field lines. In practice, they provided a significant extension of our knowledge of the large-scale structure of the interplanetary medium via remote sensing \citep{1984GeCAS......111K}. The galactic and cosmic ray experiment studied the energy spectra, charge composition, and flow patterns of both solar and galactic cosmic rays (GCRs). {\emph{Helios}} was highly relevant as part of a large network of cosmic ray experiments onboard S/C located between 0.3 and 10~AU. It contributed significantly to confirming the role of solar wind suprathermal particles as seed particles that are injected into interplanetary shocks and eventually accelerated \citep{1976ApJ...203L.149M}. Coupling observations by {\emph{Helios}} and other S/C at 1~AU allowed the study of particle transport by performing measurements under different conditions of magnetic connectivity and radial distance from the source region. Moreover, joint measurements between {\emph{Helios}} and {\emph{Pioneer}}~10 gave important results about the modulation of cosmic rays in the heliosphere \citep{1978cosm.conf...73K,1984GeCAS......124K}. The solar wind plasma experiment and the magnetic field experiments allowed us to investigate the interplanetary medium from the large-scale structure down to spatial scales of the order of the proton Larmor radius for more than an entire 11-year solar cycle. The varying vantage point due to a highly elliptic orbit allowed us to reach an unprecedented description of the solar wind's large-scale structure and the dynamical processes that take place during the expansion into the interplanetary medium \citep{1981sowi.conf..126S}. 
{\emph{Helios}}' continuous plasma and magnetic field observations allowed new insights into the study of magnetohydrodynamic (MHD) turbulence, opening Pandora's box in our understanding of this phenomenon of astrophysical relevance \citep[see reviews by][]{1995Sci...269.1124T,2013LRSP...10....2B}. Similarly, detailed observations of the 3D velocity distributions of protons, alphas, and electrons not only revealed the presence of anisotropies w.r.t. the local magnetic field but also the presence of proton and alpha beams as well as the electron strahl. Moreover, these observations allowed us to study the variability and evolution of these kinetic features with heliocentric distance and with different levels of Alfv\'enic turbulence at fluid scales \citep[see the review by][]{1995Sci...269.1124T}. Up to the launch of {\emph{Ulysses}} on 6 Oct. 1990, solar wind exploration was limited to measurements within the ecliptic plane. As with {\emph{PSP}}, the idea of flying a mission to explore the solar polar region dates back to the 1959 Simpson Committee report. Using a Jupiter gravity assist, {\emph{Ulysses}} slingshotted out of the ecliptic to fly above the solar poles and provide unique measurements. During its three solar passes in 1994-95, 2000-01, and 2005, {\emph{Ulysses}} covered two minima and one maximum of the solar sunspot cycle, revealing phenomena unknown to us before \citep[see][]{2008GeoRL..3518103M}. All measurements were, however, at heliodistances beyond 1~AU and only {\emph{in~situ}}, as there were no remote-sensing instruments onboard. \section{Mission Status} \label{PSPMSTAT} After a decade in the making, {\emph{PSP}} began its 7-year journey into the Sun’s corona on 12 Aug. 2018 \citep{9172703}. Following the launch, about six weeks later, the S/C flew by Venus for the first of seven gravity assists to target the initial perihelion of 35.6~$R_\odot$. 
As the S/C continues to perform VGAs, the perihelion has been decreased to 13.28~$R_\odot$ after the fifth VGA, with the anticipation of a final perihelion of 9.86~$R_\odot$ in the last three orbits. Fig.~\ref{Fig_PSPStatus} shows the change in perihelia as the S/C has successfully completed the VGAs and the anticipated performance in future orbits. Following the seventh VGA, the aphelion will be below Venus’ orbit, so no more VGAs will be possible, and the orbit perihelion will remain the same for a potential extended mission. As shown in Fig.~\ref{Fig_PSPStatus}, the S/C had completed 13 orbits by Oct. 2022, with an additional 11 orbits remaining in the primary mission. As designed, these orbits are separated into a solar Enc. phase and a cruise phase. Solar Encs. are dedicated to taking the data that characterize the near-Sun environment and the corona. The cruise phase of each orbit is devoted to a mix of science data downlink, S/C operations and maintenance, and science in regions farther away from the Sun. \begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{PSP_Mission_Status_SSRv2022.pdf} \includegraphics[width=1\columnwidth]{PSP_Mission_Status_SSRv2022_WhereIsPSP12212022.pdf} \caption{(Top-blue) PSP's perihelion distance is decreased by performing gravity assists using Venus (VGAs). After seven close flybys of Venus, the final perihelion is anticipated to be 9.86~$R_\odot$ from the Sun's center. (Top-orange) The modeled temperature of the TPS sunward face at each perihelion. The thermal sensors on the S/C (behind the TPS) confirm the TPS thermal model. It is noteworthy that there are no thermal sensors on the TPS itself. (Bottom) The trajectory of PSP during the 7-year primary mission phase as a function of days after the launch on 12 Aug. 2018. The green (red) color indicates the completed (future) part of the PSP orbit. The green dot shows the PSP heliodistance on 21 Dec. 
2022.} \label{Fig_PSPStatus} \end{center} \end{figure} The major engineering challenge for the mission before launch was to design and build a TPS that would keep the bulk of the S/C at comfortable temperatures during each solar Enc. period. Fig.~\ref{Fig_PSPStatus} also shows the temperature of the TPS' sunward face at each perihelion, the maximum temperature in each orbit. Given the anticipated temperature at the final perihelion of nearly 1000$^\circ$C, the TPS does not include sensors for the direct measurement of the TPS temperature. However, the S/C has other sensors, such as the barrier blanket temperature sensor and monitoring of the cooling system, with which the S/C's overall thermal model has been validated. Through orbit 13, the thermal model and measured temperatures agree very well, though actual temperatures are slightly lower as the model included conservative assumptions for inputs such as surface properties. This good agreement holds throughout the orbits, including aphelion. For the early orbits, the reduced solar illumination when the S/C is further away from the Sun raised concerns before launch that the cooling system might freeze unless extra energy was provided to the S/C by tilting the system to expose more surface area to the Sun near aphelion. This design worked as expected, and temperatures near aphelion have been comfortably above the point where freezing might occur. The mission was designed to collect science data during solar Encs. ({\emph{i.e.}}, inside 0.25~AU) and return that data to Earth during the cruise phase, when the S/C is further away from the Sun. The system was designed to do this using a Ka-band telecommunications link, one of the first uses of this technology for APL\footnote{The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland.}, with the requirement of returning an average of 85 Gbits of science data per orbit. 
While the pre-launch operations plan comfortably exceeded this, the mission has returned over three times the planned data volume through the first 13 orbits, with increased data return expected through the remaining orbits. The increased data return is mainly due to better than expected performance of the Ka-band telecommunications system. It has resulted in the ability to measure and return data throughout the orbit, not just in solar Encs., to characterize the solar environment fully. Another major engineering challenge before launch was the ability of the system to detect and recover from faults and to maintain attitude control of the S/C to prevent unintended exposure to solar illumination. The fault management is, by necessity, autonomous, since the S/C spends a significant amount of time out of communication with Mission Operations during the solar Enc. periods in each orbit. A more detailed discussion of the design and operation of the S/C autonomy system is found in \citet{9172703}. Through 13 orbits, the S/C has successfully executed each orbit and operated as expected in this harsh environment. We have seen some unanticipated issues, associated mainly with the larger-than-expected dust environment, that have affected the S/C. However, the autonomy system has successfully detected and recovered from all of these events. The robust design of the autonomy system has kept the S/C safe, and we expect this to continue through the primary mission. Generally, the S/C has performed well within the expectations of the engineering team, who used conservative design and robust, redundant systems to build the highly capable {\emph{PSP}}. Along with this, a major factor in the mission's success so far is the tight coupling between the engineering and operations teams and the science team. Before launch, this interaction gave the engineering team insight into this unexplored near-Sun environment, resulting in designs that were conservative. 
After launch, the operations and science teams have worked together to exploit this conservatism to achieve results far beyond expectations. \section{Magnetic Field Switchbacks} \label{MagSBs} Abrupt and significant changes of the interplanetary magnetic field direction were reported as early as the mid-1960s \citep[see][]{1966JGR....71.3315M}. The cosmic ray anisotropy, however, remained well aligned with the field, indicating that the field had locally folded back on itself. \citet{1967JGR....72.1917M} also reported increases in the radial solar wind speed accompanying the magnetic field deviations from the Parker spiral. Using {\emph{Ulysses}}’ data recorded above the solar poles at heliodistances $\ge1$~AU, \citet{1999GeoRL..26..631B} analyzed the propagation direction of waves to show that these rotations of the magnetic field by $90^\circ$ w.r.t. the Parker spiral are magnetic field line folds rather than opposite polarity flux tubes originating at the Sun. Magnetic field inversions were observed at 1~AU by the International Sun-Earth Explorer-3 ({\emph{ISEE}}-3 [\citealt{1979NCimC...2..722D}]; \citealt{1996JGR...10124373K}) and the Advanced Composition Explorer ({\emph{ACE}} [\citealt{1998SSRv...86....1S}]; \citealt{2009ApJ...695L.213G,2016JGRA..12110728L}). Inside 1~AU, magnetic field reversals were also observed in the {\emph{Helios}} \citep{1981ESASP.164...43P} solar wind measurements as close as 0.3~AU from the Sun's center \citep{2016JGRA..121.5055B,2018MNRAS.478.1980H}. Magnetic field switchbacks took center stage recently owing to their prominence and ubiquitousness in the {\emph{PSP}} measurements inside 0.2~AU. \subsection{What is a switchback?} Switchbacks are short magnetic field rotations that are ubiquitously observed in the solar wind. They are consistent with local folds in the magnetic field rather than changes in the magnetic connectivity to solar source regions. 
This interpretation is supported by the observation of suprathermal electrons \citep{1996JGR...10124373K}, the differential streaming of alpha particles \citep{2004JGRA..109.3104Y} and proton beams \citep{2013AIPC.1539...46N}, and the directionality of Alfv\'en waves (AWs) \citep{1999GeoRL..26..631B}. Because of the intrinsic Alfv\'enic nature of these structures $-$ implying a high degree of correlation between magnetic and velocity fluctuations in all field components $-$ the magnetic field fold has a distinct velocity counterpart. Moreover, the so-called \emph{one-sided} aspect of solar wind fluctuations during Alfv\'enic streams \citep{2009ApJ...695L.213G}, which is a consequence of the approximate constancy of the magnetic field strength $\Bm=|\B|$ during these intervals, has a direct impact on the distribution of $B_R$ and $V_R$ in switchbacks. Under such conditions (constant $\Bm$ and Alfv\'enic fluctuations), large magnetic field rotations, and switchbacks in particular, always lead to bulk speed enhancements \citep{2014GeoRL..41..259M}, resulting in a spiky solar wind velocity profile during Alfv\'enic periods. Since the amplitude of the velocity spikes associated with switchbacks is proportional to the local Alfv\'en speed $\va$, the speed modulation is particularly intense in fast-solar-wind streams observed inside 1~AU, where $\va$ is larger, and it was suggested that velocity spikes could be even larger closer in \citep{2018MNRAS.478.1980H}. Despite the previous knowledge of switchbacks in the solar wind community, and some expectation that they could play a role closer to the Sun, our view of these structures has been changed completely by {\emph{PSP}} since its first observations inside 0.3~AU \citep{2019Natur.576..228K,2019Natur.576..237B}. 
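For scale, the Alfv\'enic speed modulation discussed above can be quantified with a back-of-the-envelope estimate (the field and density values below are assumed, illustrative of near-Sun streams, and are not taken from a specific interval):
\[
\va=\frac{B}{\sqrt{\mu_0\, n_p m_p}}\approx 50~\mathrm{km~s^{-1}}
\quad\mathrm{for}\quad B\approx40~\mathrm{nT},\;\; n_p\approx300~\mathrm{cm^{-3}},
\]
so that velocity spikes of several tens of km~s$^{-1}$ are expected, with the modulation growing closer to the Sun, where $\va$ is larger.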
The switchback occurrence rate, morphology, and amplitude as observed by {\emph{PSP}}, as well as the fact that they are ubiquitously observed also in slow, though mostly Alfv\'enic, solar wind, made them one of the most interesting and intriguing aspects of the first {\emph{PSP}} Encs. In this section, we summarize recent findings about switchbacks from the first {\emph{PSP}} orbits. In Section \ref{SB_obs} we provide an overview of the main observational properties of these structures in terms of size, shape, radial evolution, and internal and boundary properties; in Section \ref{sec: theory switchbacks} we present current theories for the generation and evolution of switchbacks, covering different classes of models based on generation either at the solar surface or {\emph{in situ}} in the wind. \S\ref{SB_discussion} contains a final discussion of the state of the art of observational and theoretical switchback studies and a list of current open questions to be answered by {\emph{PSP}} in future Encs. \subsection{Observational properties of switchbacks}\label{SB_obs} \subsubsection{Velocity increase inside switchbacks}\label{sub:obs: velocity increase} To first order, switchbacks can be considered as strong rotations of the magnetic-field vector $\B$, with no change in the magnetic field intensity $\Bm=|\B|$. Geometrically, this corresponds to a rotation of $\B$ with its tip constrained on a sphere of constant radius $\Bm$. Such excursions are well represented by following the direction of $\B$ in the RT plane during the time series of a large-amplitude switchback, as in the left panels of Fig.~\ref{fig:big_sb}. The top left panel represents the typical $\B$ pattern observed since Enc.~1 \citep{2020MNRAS.498.5524W}: the background magnetic field, initially almost aligned with the radial direction ($B_R<0$) in the near-Sun regions observed by {\emph{PSP}}, makes a significant rotation in the RT plane, locally inverting its polarity ($B_R>0$).
All this occurs while keeping $\Bm\sim\mathrm{const.}$, so that the tip of $\B$ follows a circle of approximately constant radius during the rotation; as a consequence, the transverse component of $\B$ increases significantly and $B_T\gg B_R$ as the rotation approaches $90^\circ$. Due to the high Alfv\'enicity of the fluctuations in near-Sun streams sampled by {\emph{PSP}}, the same pattern is observed for the velocity vector, with similar and proportional variations in $V_R$ and $V_T$ (bottom left panel). While the magnetic field is frame-invariant, the circular pattern seen for the velocity vector is not, and its center identifies the so-called de Hoffmann-Teller frame (dHT): the frame in which the motional electric field associated with the fluctuations is zero and where the switchback magnetic structure can be considered at rest. This frame is typically traveling at the local Alfv\'en speed ahead of the solar wind protons, along the magnetic field. This is consistent with the velocity measurements in the bottom left panel of Fig.~\ref{fig:big_sb}, where the local $\va$ is of the order of $\sim50$~km~s$^{-1}$ and agrees well with the location of the centre of the circle, which is roughly 50~km~s$^{-1}$ ahead of the minimum $V_R$ seen at the beginning of the interval. Because of the geometrical property above, there is a direct relation between the $\B$ excursion and the resulting modulation of the flow speed in switchbacks. Remarkably, switchbacks always lead to speed increases, characterized by a spiky, one-sided profile of $V_R$, independent of the underlying magnetic field polarity; {\emph{i.e.}}, regardless of whether $\B$ rotates from $0^\circ$ towards $180^\circ$, or vice-versa \citep{2014GeoRL..41..259M}. As a consequence, it is possible to derive a simple phenomenological relation that links the instantaneous proton radial velocity $V_R$ to the magnetic field angle w.r.t. the radial direction, $\theta_{BR}$, where $\cos\theta_{BR}=B_R/\Bm$.
Moreover, since the solar wind speed is typically dominated by its radial component, this can be considered an approximate expression for the proton bulk speed within switchbacks \citep{2015ApJ...802...11M}: \begin{equation} V_p=V_0+\va[1\pm\cos{\theta_{BR}}],\label{eq_v_in_sb} \end{equation} where $V_0$ is the background solar wind speed and the sign in front of the cosine takes into account the underlying Parker spiral polarity ($-\cos{\theta_{BR}}$ if $B_R>0$, $+\cos{\theta_{BR}}$ otherwise). As apparent from Eq.~(\ref{eq_v_in_sb}), the speed increase inside a switchback with constant $\Bm$ has a maximum amplitude of $2\times\va$. This corresponds to magnetic field rotations that are full reversals; for moderate deflections, the speed increase is smaller, typically of the order of $\sim{\va}$ for a $90^\circ$ deflection. Also, because the increase in $V_p$ is proportional to the local Alfv\'en speed, larger enhancements are expected closer to the Sun. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig_big_sb.jpg} \caption{ {\it Left:} Magnetic field and velocity vector rotations during a large-amplitude switchback observed during {\emph{PSP}} Enc.~1 \citep{2020MNRAS.498.5524W}. {\it Right:} An example of a switchback observed by {\emph{PSP}} during Enc.~6. The top panel shows the almost complete reversal of $B_R$ (black), while the magnetic field intensity $|B|$ (red) remains almost constant through the whole structure. The bottom panel shows the associated jump in the radial velocity $V_R$. In a full switchback the bulk speed of the solar wind protons can increase by up to twice the Alfv\'en speed $\va$; as a consequence, we observe a jump from $\sim300$~km~s$^{-1}$ to $\sim600$~km~s$^{-1}$ in the speed during this interval ($\va\sim150$~km~s$^{-1}$).} \label{fig:big_sb} \end{center} \end{figure} The right panels of Fig.~\ref{fig:big_sb} show one of the most striking examples of switchbacks observed by {\emph{PSP}} during Enc.~6.
This corresponds to an almost full reversal of $B_R$, from approximately $100$ to ${-100}$~nT, with the magnetic field intensity remaining remarkably constant during the rotation of the vector $\B$. As a consequence, the background bulk flow proton velocity ($\sim300$~km~s$^{-1}$) goes up by almost $2\va$, leading to a speed enhancement up to 600~km~s$^{-1}$ inside the structure ($\va\sim150$~km~s$^{-1}$). This has the impressive effect of turning the ambient slow solar wind into fast wind for the duration of the crossing, without a change in the connection to the source. It is an open question whether even larger velocity jumps could be observed closer in, when $\va$ approaches $200-300$~km~s$^{-1}$ and becomes comparable to the bulk flow itself, and what the consequences would be for the overall flow energy and dynamics. Finally, it is worth emphasizing that the velocity enhancements discussed above relate only to the main proton core population in the solar wind plasma. Other species, like proton beams and alpha particles, react differently to switchbacks and may or may not partake in the Alfv\'enic motion associated with these structures, depending on their relative drift w.r.t. the proton core. In fact, alpha particles typically stream faster than protons along the magnetic field in Alfv\'enic streams, with a drift speed that in the inner heliosphere can be quite close to $\va$. As a consequence, they sit close to the zero electric field reference frame (dHT) and display much smaller oscillations and speed variations in switchbacks; in the case they stream exactly at the phase velocity of the switchback, they are totally unaffected and do not feel any fold in the field \citep[see {\emph{e.g.}},][]{2015ApJ...802...11M}.
Similarly, proton beams have drift speeds that exceed the local Alfv\'en speed close to the Sun and therefore, because they stream faster than the dHT, they are observed to oscillate out of phase with the main proton core \citep[{\emph{i.e.}}, they get slower inside switchbacks and the core-beam structure of the proton VDF is locally reversed where $B_R$ flips sign;][]{2013AIPC.1539...46N}. The same happens for the electron strahl, leading to an inversion in the electron heat-flux. \subsubsection{Characteristic Scales, Size and Shape}\label{sub:obs: shapes and sizes} Ideally, switchbacks would be imaged from a range of angles, providing a straightforward method to visualize their shape. However, as mentioned above, these structures are Alfv\'enic and have little change in plasma density, which is essential for line-of-sight (LOS) imaging by remote sensing instruments. We must instead rely on the {\emph{in~situ}} observations from a single S/C, which are fundamentally local measurements. Therefore, it is important to understand the relationship between the true physical structure of a switchback and the data measured by a S/C, as this can influence the way in which we think about and study them. For example, a short-duration switchback in the {\emph{PSP}} time series may be due to a physically smaller switchback, or because {\emph{PSP}} clipped the edge of a larger switchback. This ambiguity also applies to a series of multiple switchbacks, which may truly be several closely spaced switchbacks or in fact one larger, more degraded switchback \citep{2021ApJ...915...68F}. \citet{2020ApJS..246...39D} provided the first detailed statistics on switchbacks for {\emph{PSP}}'s first Enc. They showed that switchback duration could vary from a few seconds to over an hour, with no characteristic timescale.
Through studying the waiting time (the time between each switchback) statistics, they found that switchbacks exhibited long term memory and tended to aggregate, which they take as evidence for a similar coronal origin. Many authors define switchbacks as deflections, above some threshold, away from the Parker spiral. The direction of this deflection, {\emph{e.g.}}, towards +T, is also interesting as it could act as a testable prediction of switchback origin theories \citep{2021ApJ...909...95S}. For Enc.~1 at least, \citet{2020ApJS..246...39D} showed that deflections were isotropic about the Parker spiral direction, although they did note that the longest switchbacks displayed a weak preference for deflections in +T. \citet{2020ApJS..246...45H} also found that switchbacks displayed a slight preference to deflect in T rather than N, although there was no distinction between -T or +T. The authors refer to the clock angle in an attempt to quantify the direction of switchback deflection. This is defined as the ``angle of the vector projected onto a plane perpendicular to the Parker spiral that also contains N'', where 0$^{\circ}$, 90$^{\circ}$, 180$^{\circ}$ and 270$^{\circ}$ refer to the +N, +T, -N and -T directions, respectively. Unlike the entire switchback population, the longest switchbacks did show a preference for deflection direction, often clustering about a certain direction that was not correlated with the solar wind flow direction. Crucially, \citet{2020ApJS..246...45H} demonstrated a correlation between the duration of a switchback and the direction of deflection. They then asserted that the duration of a switchback was related to the way in which {\emph{PSP}} cut through the true physical shape. Since switchbacks are Alfv\'enic, the direction of the magnetic field deflection also creates a flow deflection.
This, when combined with the S/C velocity (which had a maximum tangential component of +90~km~s$^{-1}$ during the first Enc.), sets the direction at which {\emph{PSP}} travels through a switchback. As a first attempt, they assumed the switchbacks were aligned with the radial direction or the dHT, allowing the angle of {\emph{PSP}} w.r.t. the flow to be calculated. The authors then demonstrated that as the angle to the flow decreased, the switchback duration increased, implying that these structures were long and thin along the flow direction, with transverse scales around $10^{4}$~km. This idea was extended by \citet{2021AA...650A...1L} to more solar wind streams across the first two Encs. Instead of assuming a flow direction, they started from the idea that the structures were long and thin, and attempted to measure their orientation and dimensions. Allowing the average switchback width and aspect ratio to be free parameters, they fitted an expected model to the distribution of switchback durations w.r.t. the S/C cutting angle. They applied this method while varying the switchback orientation, finding the orientation that was most consistent with the long, thin model. Switchbacks were found to be aligned away from the radial direction, towards the Parker spiral. The statistical average switchback width was around $50,000$~km, with an aspect ratio of the order of 10, although there was a large variation. \citet{2021AA...650A...1L} again emphasized that the duration of a switchback is a function of how the S/C cuts through the structure, which is in turn related to the switchback deflection, dimensions, orientation and S/C velocity. A similar conclusion was also reached by \citet{2020MNRAS.494.3642M}, who argued that the direction of {\emph{Helios}} w.r.t. switchbacks could influence the statistics seen in the data.
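The duration-versus-cutting-angle argument can be illustrated with a toy geometric model (a hypothetical construction for illustration only, not the actual fitting procedure of the studies cited above): treat the switchback cross-section as a long, thin ellipse and compute the chord traversed by the S/C as a function of the angle between its trajectory and the switchback axis.

```python
import math

def chord_through_center(L, W, alpha_deg):
    """Chord length for a crossing through the center of an ellipse with
    semi-axes L (along the switchback axis) and W (transverse), at angle
    alpha to the axis. Toy geometry, not a fitted observational model."""
    a = math.radians(alpha_deg)
    return 2.0 / math.sqrt((math.cos(a) / L) ** 2 + (math.sin(a) / W) ** 2)

# Assumed dimensions motivated by the reported statistics:
# width ~ 5e4 km and aspect ratio ~ 10 give a length ~ 5e5 km.
W, L = 2.5e4, 2.5e5          # semi-axes [km]
v_rel = 100.0                # assumed S/C-structure relative speed [km/s]

for alpha in (90, 45, 10):   # angle between trajectory and switchback axis
    dur = chord_through_center(L, W, alpha) / v_rel
    print(f"alpha = {alpha:2d} deg -> duration ~ {dur / 60:.1f} min")
```

A perpendicular crossing samples only the $\sim10^4$~km width, while a nearly axis-aligned crossing samples a chord an order of magnitude longer, reproducing the qualitative trend that smaller flow-cutting angles yield longer observed durations.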
Unlike the previous studies that relied on large statistics, \citet{2020ApJ...893...93K} analyzed several case-study switchbacks during the first Enc., finding currents at the boundaries. They argued that these currents flowed along the switchback surface, and envisaged switchbacks as cylindrical. Analyzing the flow deflections relative to the S/C for three switchbacks, they found transverse scales of $7,000$~km and $50,000$~km for a compressive and an Alfv\'enic switchback, respectively. A similar method was applied to a larger set of switchbacks by \citet{2021AA...650A...3L}, who used minimum variance analysis (MVA) to find the normal directions of the leading and trailing edges. After calculating the width of the edges, an average normal velocity was multiplied by the switchback duration to give a final width. They found that the transverse switchback scale varied from several thousand km to a solar radius ($695,000$~km), with the mode value lying between $10^{4}$~km and $10^{5}$~km. A novel approach to probe the internal structure of switchbacks was provided by \citet{2021AA...650L...4B}, who studied the behavior of energetic particles during switchback periods in the first five {\emph{PSP}} Encs. Energetic particles (80-200 MeV/nucleus) continued to stream anti-sunward during a switchback in 86\% of cases, implying that the radius of magnetic field curvature inside switchbacks was smaller than or comparable to the ion gyroradius. Using typical solar wind parameters ($\Bm\sim50$ nT, ion energy $100$~eV), this sets an upper limit of $\sim4000$~km for the radius of curvature inside a switchback. Assuming the typical S-shaped curve envisaged by \citet{2019Natur.576..228K}, this would constrain the switchback width to be less than $\sim16,000$~km. A summary of the results is displayed in Table \ref{tab:shape_size}, which exhibits a large variation but a general consensus that the switchback transverse scale ranges from $10^{3}$~km to $10^{5}$~km.
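For context, the gyroradius argument can be checked with a simple non-relativistic estimate. The formula is standard; the input energies below are assumptions chosen to bracket the problem, not the exact parameters of the cited study, and the resulting bound depends strongly on the assumed particle energy:

```python
import math

Q = 1.602176634e-19    # elementary charge [C]
M_P = 1.67262192e-27   # proton mass [kg]

def proton_gyroradius_km(E_eV, B_nT, pitch_deg=90.0):
    """Non-relativistic proton gyroradius r = m v_perp / (q B)
    for kinetic energy E_eV and magnetic field strength B_nT."""
    v = math.sqrt(2.0 * E_eV * Q / M_P)            # speed [m/s]
    v_perp = v * math.sin(math.radians(pitch_deg))
    return M_P * v_perp / (Q * B_nT * 1e-9) / 1e3  # radius [km]

# Illustrative (assumed) energies in a 50 nT field:
for E in (1e2, 1e5, 1e6):   # 100 eV, 100 keV, 1 MeV
    print(f"E = {E:.0e} eV -> r_g ~ {proton_gyroradius_km(E, 50):.0f} km")
```

In a 50~nT field, a 100~eV proton gyrates on a $\sim30$~km circle, while a $\sim$MeV proton reaches a few thousand km, so any curvature-radius bound derived this way is only as good as the assumed particle energy and pitch angle.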
Future areas of study should focus on how the switchback shape and size vary with distance from the Sun. However, a robust method for determining how {\emph{PSP}} cut through a switchback must be found for progress to be made in this area. For example, an increased current density or wave activity at the boundary may be used as a signature of when {\emph{PSP}} is clipping the edge of a switchback. Estimates of the switchback transverse scale, like \citet{2021AA...650A...3L}, could be constrained with the use of energetic particle data \citep{2021AA...650L...4B} on a case-by-case basis, improving the link between the duration measured by a S/C and the true physical size of the switchback. \begin{table}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|l|l} {\bf{Study}} & {\bf{Enc.}} & {\bf{Transverse Scale}} & {\bf{Aspect}} \\ \hline \citet{2020ApJS..246...45H} & 1 & $10^{4}$ km & - \\ \hline \citet{2021AA...650A...1L} & 1,\,2 & $50,000$ km & $\sim 10$ \\ \hline \citet{2020ApJ...893...93K} & 1 & $7000$ km for compressive & - \\ & & $50,000$ km for Alfv\'enic & \\ \hline \citet{2021AA...650A...3L} & 1 & $10^{3}$ km - $10^{5}$ km & - \\ \hline \citet{2021AA...650L...4B} & 1-5 & $< 16,000$ km* & - \\ \hline \end{tabular} } \caption{Summary of the results regarding switchback shape and size, including which {\emph{PSP}} Encs. were used in the analysis. *Assuming an S-shaped structure. }\label{tab:shape_size} \end{table} \subsubsection{Occurrence and radial evolution in the solar wind}\label{sub:obs: occurrence} \begin{figure} \begin{center} \includegraphics[width=.49\columnwidth]{rates_all30.png} \includegraphics[width=.49\columnwidth]{rates_all3.png} \caption{Cumulative counts of switchbacks as a function of radial distance, from {\emph{PSP}}, {\emph{Helios}} and two polar passes of {\emph{Ulysses}} (in 1994 and 2006).
The left plot shows counts per km of switchbacks of duration up to 30 minutes, while the right plot shows the same quantity for switchbacks of duration up to 3 hours. {\emph{PSP}} data (43 in total) were binned in intervals of width $\Delta R=0.05$~AU. The error bars denote the range of data points in each bin \citep{2021ApJ...919L..31T}.} \label{fig_rates} \end{center} \end{figure} Understanding how switchbacks evolve with radial distance is one of the key elements not only to determine their origin, but also to understand whether switchbacks may contribute to the evolution of the turbulent cascade in the solar wind and to the solar wind energy budget. Simulations (\S\ref{sub: theory SB propagation}) and observations (\S\ref{sec: obs: boundaries}) suggest that switchbacks may decay and disrupt as they propagate in the inner heliosphere. As a consequence, it is expected that the occurrence of switchbacks decreases with radial distance in the absence of an ongoing driver capable of reforming switchbacks {\emph{in~situ}}. On the contrary, the presence of an efficient driving mechanism is expected to lead to an increase, or to a steady state, of the occurrence of switchbacks with heliocentric distance. Based on this idea, \citet{2021ApJ...919...60M} analyzed the occurrence rate (counts per hour) of switchbacks with radial distance using data from Encs.~3 through 7 of {\emph{PSP}}. The authors conclude that the occurrence rate depends on the wind speed, with higher count rates for higher wind speed, and that it does not depend on the radial distance. Based on this result, \citet{2021ApJ...919...60M} exclude {\emph{in~situ}} generation mechanisms. However, it is interesting to note that counts of switchbacks observed by {\emph{PSP}} are highly scattered with radial distance, likely due to the mixing of different streams \citep{2021ApJ...919...60M}.
\citet{2021ApJ...919L..31T} also report highly scattered counts of switchbacks with radial distance, although they argue that the presence of decaying and reforming switchbacks might also contribute to such an effect. \citet{2021ApJ...919L..31T} analyzed the count rates (counts per km) of switchbacks by complementing {\emph{PSP}} data with {\emph{Helios}} and {\emph{Ulysses}} measurements. Their analysis shows that the occurrence of switchbacks is scale-dependent, a trend that is particularly clear in {\emph{Helios}} and {\emph{Ulysses}} data. In particular, they found that the fraction of switchbacks of duration of a few tens of seconds and longer increases with radial distance, while the fraction of those with duration below a few tens of seconds decreases. The overall cumulative counts per km, two examples of which are shown in Fig.~\ref{fig_rates}, show such a trend. Results from this analysis led \citet{2021ApJ...919L..31T} to conclude that switchbacks can decay and reform in the expanding solar wind, with {\emph{in~situ}} generation being more efficient at the larger scales. They also found that the mean radial amplitude of switchbacks decays faster than the overall turbulent fluctuations, in a way that is consistent with the radial decrease of the mean radial field. They argued that this could be the result of a saturation of amplitudes and may be a signature of decay processes of switchbacks as they evolve and propagate in the inner heliosphere. \subsubsection{Thermodynamics and energetics}\label{sub: obs: thermodynamics} An important question about switchbacks is whether the plasma inside these structures differs from the surrounding background plasma. We have already seen that switchbacks exhibit a bulk speed enhancement in the main core proton population.
As this increase in speed corresponds to a net acceleration in the center-of-mass frame, the plasma kinetic energy is larger in switchbacks than in the background solar wind. This result suggests these structures carry a significant amount of energy with them as the solar wind flows out into the inner heliosphere. A question that directly follows is whether the plasma is also hotter inside the structures than outside. Attempting to answer this important question with SPC is non-trivial, since the measurements are restricted to a radial cut of the full 3D ion VDF \citep{2016SSRv..204..131K, 2020ApJS..246...43C}. While the magnetic field rotation in switchbacks enables the sampling of many different angular cuts as the S/C encounters these structures, the cuts are not directly comparable as they represent different combinations of $T_\perp$ and $T_\|$ \citep[see, for example,][]{2020ApJS..246...70H}: \begin{equation} \label{SPCtemp} w_{r}=\sqrt{w^{2}_{\parallel }\left( \hat{r} \cdot \hat{b} \right)^{2} +w^{2}_{\perp }\left[ 1-\left( \hat{r} \cdot \hat{b} \right)^{2} \right]}, \end{equation} \noindent where $w_r$ is the measured thermal speed of the ions, related to temperature by $w=\sqrt{2 k_B T/m}$, and $\hat{\bm{b}}=\B/\Bm$. Therefore, SPC measurements of temperature outside switchbacks, where the magnetic field is typically radial, sample the proton parallel temperature, $T_{p\|}$. In contrast, as $\B$ rotates towards $90^\circ$ within a switchback, the SPC cut typically provides a better estimate of $T_{p\perp}$. To overcome this, \citet{2020ApJS..246...70H} investigated the proton temperature anisotropy statistically. They assumed that the proton VDF does not vary significantly over the SPC sampling time as $\B$ deflects away from the radial direction, and then solved Eq.~\ref{SPCtemp} for both $w_\parallel$ and $w_\perp$.
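The angular dependence in Eq.~(\ref{SPCtemp}), and the two-angle inversion it permits, can be sketched numerically. With $\hat{r}\cdot\hat{b}=\cos\theta_{BR}$, two measurements at different known angles suffice to recover both thermal speeds; the input values below are assumed, for illustration only:

```python
import math

def w_radial(w_par, w_perp, theta_deg):
    """Radial thermal speed sampled by a radially-looking instrument,
    w_r^2 = w_par^2 cos^2(theta) + w_perp^2 [1 - cos^2(theta)],
    where theta is the field angle from the radial direction."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return math.sqrt(w_par ** 2 * c2 + w_perp ** 2 * (1.0 - c2))

w_par, w_perp = 40.0, 60.0    # assumed thermal speeds [km/s]

print(w_radial(w_par, w_perp, 0))    # radial field: samples w_par
print(w_radial(w_par, w_perp, 90))   # perpendicular field: samples w_perp
print(w_radial(w_par, w_perp, 180))  # reversed field: samples w_par again

# Two measurements at known angles invert for both components:
m1, m2 = w_radial(w_par, w_perp, 20), w_radial(w_par, w_perp, 70)
c1 = math.cos(math.radians(20)) ** 2
c2 = math.cos(math.radians(70)) ** 2
wper2 = (m2 ** 2 * c1 - m1 ** 2 * c2) / (c1 - c2)
wpar2 = (m1 ** 2 - wper2 * (1.0 - c1)) / c1
print(round(math.sqrt(wpar2)), round(math.sqrt(wper2)))  # -> 40 60
```

The first three lines also show why a full $B_R$ reversal is so useful: at $0^\circ$ and $180^\circ$ the instrument samples the same parallel component, making the inside/outside comparison direct.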
While this method does reveal some information about the underlying temperature anisotropy, this approach is not suitable for comparing the anisotropy within a single switchback, since it assumes, {\emph{a priori}}, that the anisotropy is fixed compared to the background plasma. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Fig_SPC_sb_Woolley.jpg} \caption{{\it Left}: SPC measurements of the core proton radial temperature during the large-amplitude switchback shown in the bottom panel. The measured core proton temperature (upper panel) is modulated by the $\B$ angle and is maximum when measuring $T_{p\perp}$ at roughly $90^\circ$, consistent with a dominant $T_{p\perp}>T_{p\|}$ anisotropy in the background plasma. {\it Right}: cuts of the ion VDF made by SPC at different angles: antiparallel (anti-radial), orthogonal and parallel (radial) to $\B$. The fit of the proton core is shown in pink. The bottom panel compares the radial and anti-radial VDFs, where the latter has been flipped to account for the field reversal inside the switchback. Figure adapted from \citet{2020MNRAS.498.5524W}.} \label{fig_sb_spc} \end{center} \end{figure} Another possibility is to investigate switchbacks that exhibit a reversal in the sign of $B_R$; in other words, $\theta_{BR}\simeq180^\circ$ inside the switchback for a radial background field. This technique provides two estimates of $T_{p\|}$: outside the switchback, when the field is close to (anti-)radial, and inside, when $B_R$ is reversed. This is the only way to compare the same radial SPC cut of the VDF inside and outside switchbacks, leading to a direct comparison between the two resulting $T_{p\|}$ values. \citet{2020MNRAS.498.5524W} first attempted this approach, and a summary of their results is presented in Fig.~\ref{fig_sb_spc}.
A switchback with an almost complete reversal in the field direction is tracked in the left panels; the bottom panel shows the angle of the magnetic field, going from almost anti-radial to radial and back again. The measured core proton temperature, $T_{cp\|}$ (upper left panel), increases with the angle $\theta_{BR}$ and reaches a maximum at $\theta_{BR}\simeq 90^\circ$, consistent with a dominant $T_{p\perp}>T_{p\|}$ anisotropy in the background plasma. On the other hand, when the SPC sampling direction is (anti-)parallel to $\B$ (approximately $0^\circ$ and $180^\circ$), \citet{2020MNRAS.498.5524W} find the same value for $T_{cp\|}$. Therefore, they concluded that the plasma inside switchbacks is not significantly hotter than the background plasma. The right panels show radial cuts of the ion VDF made by SPC at different angles: anti-parallel (anti-radial), orthogonal and parallel (radial) to $\B$. The fit of the proton core is shown in pink. The bottom panel compares the measurements in the radial and anti-radial directions, once the latter has been flipped to account for the field reversal inside the switchback; the two distributions fall on top of each other, suggesting that core protons undergo a rigid rotation in velocity space inside the switchback, without a significant deformation of the VDF. The comparison in the panels also shows that the core temperature is larger for oblique angles (larger $T_{cp\perp}$) and that the proton beam switches sides during the reversal, as discussed in \citet{2013AIPC.1539...46N}. They conclude that plasma inside switchbacks, at least those with the largest angular deflections, exhibits a negligible difference in the parallel temperature compared to the background, and therefore, the speed enhancement of the proton core inside these structures does not follow the expected $T$-$V$ relation \citep[{\emph{e.g.}}, see][]{2019MNRAS.488.2380P}.
This scenario is consistent with studies of turbulent properties and associated heating inside and outside switchbacks \citep{2020ApJ...904L..30B, 2021ApJ...912...28M}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig_Tpatches_woodham.pdf} \caption{Overview of plasma properties inside a group of switchback patches. The bottom panel shows the core proton parallel and perpendicular temperatures measured by SPAN. The colours in $T_{\|}$ encode the deflection of $\B$ from the radial direction. Patches (grey sectors) exhibit systematically higher $T_{\|}$ than quiet periods, while $T_{\perp}$ is mostly uniform throughout the interval. Figure adapted from \citet{2021A&A...650L...1W}.} \label{fig_sb_span2} \end{center} \end{figure} On the other hand, SPAN measurements of the core proton parallel and perpendicular temperatures show a large-scale modulation by patches of switchbacks \citep{2021A&A...650L...1W}. Fig.~\ref{fig_sb_span2} shows an overview of magnetic field and plasma properties through an interval that contains a series of switchback patches and quiet radial periods during Enc.~2. The bottom panel highlights the behavior of $T_{\perp}$ and $T_{\|}$ through the structures. The former is approximately constant throughout the interval, consistent with the roughly constant solar wind speed, as expected from the well-known speed-temperature relationship in the solar wind \citep[for example, see][and references therein]{2006JGRA..11110103M}. In contrast, the latter shows large variations, especially during patches, where systematically larger $T_{\|}$ is observed. As a consequence, increases in $T_{\|}$ are also correlated with deflections in the magnetic field direction (colors refer to the instantaneous angle $\theta_{BR}$).
The origin of such a correlation between $\theta_{BR}$ and $T_{\|}$ is not fully understood yet, although the large-scale enhancement of the parallel temperature within patches could be a signature of some preferential heating of the plasma closer to the Sun ({\emph{e.g.}}, by interchange reconnection), supporting a coronal origin for these structures. \subsubsection{Switchback boundaries and small-scale waves}\label{sec: obs: boundaries} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{ssr2021_switchbacks_figure1a.png} \caption{The magnetic field dynamics for a typical deflection (switchback) of the magnetic field observed at a heliocentric distance of $35.6~R_\odot$ during {\emph{PSP}}'s first solar Enc., on 4 Nov. 2018 (left), and at a heliocentric distance of $\sim50~R_\odot$ on 10 Nov. 2018 (right). The radial component of the magnetic field (red curve in panel (a)) exhibits an almost complete inversion at the switchback boundary and becomes positive (anti-sunward). The transverse components are shown in blue (T, in the ecliptic plane) and in green (N, the normal component, transverse to the ecliptic plane). The magnetic field magnitude is shown in black. Panel (b) represents the plasma bulk velocity components (with a separate scale for the radial component $V_z$ shown in red) with the same color scheme as in panel (a). Panels (c) and (d) represent the proton density and temperature. Panel (e) presents the magnetic field waveforms from SCM (with the instrumental power cut-off below 3 Hz). The dynamic spectrum of these waveforms is shown in panel (f), in which the red-dashed curve indicates the local lower hybrid frequency ($f_{LH}$). Panels (g-j) represent the magnetic and electric field perturbations around the switchback leading boundary, the wavelet spectrum of the magnetic field perturbation, and the radial component of the Poynting flux (blue color indicates propagation from the Sun and red sunward propagation).
The same parameters for the trailing boundary are presented in panels (k-n).} \label{fig:icx1} \end{center} \end{figure} Switchback boundaries are plasma discontinuities, which separate the plasmas inside and outside the structure, moving with different velocities and possibly having different temperatures and densities. Fig.~\ref{fig:icx1} shows a ``typical'' switchback, highlighting: (1) the sharp rotation of the magnetic field as well as the dropouts in field intensity on the boundaries (Fig.~\ref{fig:icx1}a), in agreement with \citet{2020ApJS..249...28F}; (2) the increase of radial velocity showing the Alfv\'enicity (Fig.~\ref{fig:icx1}b); (3) the plasma density enhancements at the boundaries of the switchback (Fig.~\ref{fig:icx1}c), from 300~cm$^{-3}$ to $\sim500$ and 400~cm$^{-3}$ at the leading and trailing edges respectively, with some decrease of plasma density inside the structure \citep{2020ApJS..249...28F} down to $250-280$~cm$^{-3}$; and (4) enhanced wave activity inside the switchback and at the boundaries (Fig.~\ref{fig:icx1}e,f), predominantly below $f_{LH}$, with the higher amplitude wave bursts at the boundaries. The detailed superposed epoch analysis of plasma and magnetic field parameters presented in \citet{2020ApJS..249...28F} showed that magnetic field magnitude dips and plasma density enhancements are characteristic features associated with switchback boundaries. It is further shown that wave activity decays with heliocentric distance. Together with the activity inside switchbacks, the boundaries also relax during propagation \citep{2020ApJS..246...68M, 2021ApJ...915...68F, 2021A&A...650A...4A}, suggesting that the switchback boundary formation process is dynamic and evolving, even occurring near the {\emph{PSP}} observation point inside of $40~R_\odot$ \citep{2021ApJ...915...68F}. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Akhavan-Tafti_etal_SB_DiscType.pdf} \caption{(a) Discontinuity classification of 273 magnetic switchbacks.
Scatter plot of relative normal component of magnetic field of upstream, pristine solar wind and relative variation in magnetic field intensity across switchbacks’ leading (QL-to-SPIKE) transition regions. The color shading indicates the switchbacks’ distance from the Sun. (b) Scatter plot of the ratio of number of RD events to that of ED as a function of distance from the Sun. The histogram of event count per radial distance (bin width = $1~R_\odot$) is provided on the right y-axis in blue for reference. (c) Stacked bar plots of the relative ratios of RD:TD:ED:ND discontinuities at 0.2~AU \citep[{\emph{PSP}};][]{2021A&A...650A...4A}, 1.0~AU \citep[{\emph{ISEE}};][]{1984JGR....89.5395N}, and $1.63-3.73$~AU \citep[{\emph{Ulysses}};][]{2002GeoRL..29.1383Y}.} \label{fig:icx2} \end{center} \end{figure} The analysis of MHD discontinuity types was performed by \citet{2021AA...650A...3L} who found that $32\%$ of switchbacks may be attributed to rotational discontinuities (RD), $17\%$ to tangential discontinuities (TD), about $42\%$ to the group of discontinuities that are difficult to unambiguously define (ED), and $9\%$ that do not belong to any of these groups (ND). Similarly, as shown in {\bf{Fig.~\ref{fig:icx2}}}, a recent study by \citet{2021A&A...650A...4A} reported that the relative occurrence rate of RD-type switchbacks goes down with heliocentric distance (Fig.~\ref{fig:icx2}b), suggesting that RD-type switchbacks may fully disappear past 0.3~AU. However, RD-type switchbacks have been observed at both Earth \citep[1~AU;][]{1984JGR....89.5395N} and near Jupiter \citep[2.5~AU;][]{2002GeoRL..29.1383Y}, though at smaller rates of occurrence (Fig.~\ref{fig:icx2}c) than that measured by {\emph{PSP}}. Future investigations are needed to examine (1) the mechanisms via which switchbacks may evolve, and (2) whether the dominant switchback evolution mechanism changes with heliocentric distance. 
Various studies have also investigated wave activity on switchback boundaries \citep{2020ApJS..246...68M, 2020ApJ...891L..20A,2021AA...650A...3L}: the boundary surface MHD wave (observed at the leading edge of the switchback in Fig.~\ref{fig:icx1} and highlighted in panels (g-h)) and the localized whistler bursts in the magnetic dip (observed at the trailing edge of the switchback in Fig.~\ref{fig:icx1} and highlighted in panels (k-n)). The whistler wave burst in Fig.~\ref{fig:icx1}(k-n) had a Poynting flux directed toward the Sun, which led to a significant Doppler downshift of the wave frequency in the S/C frame \citep{2020ApJ...891L..20A}. Because of their sunward propagation, these whistler waves can efficiently scatter the strahl electron population. These waves are often observed in the magnetic field magnitude minima at the switchback boundaries and can thus be considered a regular feature associated with switchbacks. Lastly, features related to reconnection are occasionally observed at switchback boundaries, albeit only in about $1\%$ of the observed events \citep{2021A&A...650A...5F,2020ApJS..246...34P}. If occurring, reconnection on the boundary of switchbacks with the solar wind magnetic field may lead to the disappearance of some switchbacks \citep{2020AGUFMSH034..06D}. Surprisingly, there has been no evidence of reconnection on switchback boundaries at distances greater than $50~R_\odot$. \citet{2020ApJS..246...34P} explained that the absence of reconnection at these boundaries may be due to (a) large, albeit sub-Alfv\'enic, velocity shears at switchback boundaries, which can suppress reconnection \citep{2003JGRA..108.1218S}, or (b) the possibility that switchback boundaries, commonly characterized as Alfv\'enic current sheets, are isolated RD-type discontinuities that do not undergo local reconnection.
\citet{2021A&A...650A...4A} similarly showed that switchback boundaries theoretically favor magnetic reconnection based on their plasma beta and magnetic shear angle characteristics \citep{2003JGRA..108.1218S}. However, the authors concluded that the negligible magnetic curvature, {\emph{i.e.}}, highly stretched magnetic field lines \citep{2019JGRA..124.5376A, 2019GeoRL..4612654A}, at switchback boundaries may inhibit magnetic reconnection. Further investigations are needed to explore whether and how magnetic curvature evolves with heliocentric distance. \subsection{Theoretical models}\label{sec: theory switchbacks} In this section, we outline the collection of theoretical models that have been formulated to explain observations of switchbacks. These are based on a variety of physical effects, and there is, as of yet, no consensus about the key ingredients needed to explain observations. In the following we discuss each model and related works in turn, organized by the primary physical effect that is assumed to drive switchback formation. These are (i) interchange reconnection (\S\ref{sub: theory interchange }), (ii) other solar-surface processes (\S\ref{sub: theory coronal jets}), (iii) interactions between solar-wind streams (\S\ref{sub: theory stream interactions}), and (iv) expanding AWs and turbulence (\S\ref{sub: theory alfven waves }). Within each of these broad categories, we discuss the various theories and models, some of which differ in important ways. In addition, some models naturally involve multiple physical effects, which we try to note as appropriate. The primary motivation for understanding the origin of switchbacks is to understand their relevance to the heating and acceleration of the solar wind. As discussed in, {\emph{e.g.}}, \citet{2009LRSP....6....3C}, magnetically driven wind models fall into the two broad classes of wave/turbulence-driven (WTD) and reconnection/loop-opening (RLO) models.
A natural question is how switchbacks relate to the heating mechanism and what clues they provide as to the importance of different forms of heating in different types of wind. With this in mind, it is helpful to further, more broadly, categorize the mechanisms discussed above into ``{\emph{ex situ}}'' mechanisms (covering interchange reconnection and other solar-surface processes) -- in which switchbacks result from transient, impulsive events near the surface of the sun -- and ``{\emph{in situ}}'' mechanisms (covering stream interactions and AWs), in which switchbacks result from processes within the solar wind as it propagates outwards. An {\emph{ex situ}} switchback formation model, with its focus on impulsive events, naturally ties into an RLO heating scenario; an {\emph{in situ}} formation process, by focusing on local processes in the extended solar wind, naturally ties into a WTD scenario. This is particularly true given the significant energy content of switchbacks in some {\emph{PSP}} observations (see \S\ref{sub:obs: velocity increase}), although there are also important caveats in some of the models. Thus, understanding the origin of switchbacks is key to understanding the origin of the solar wind itself. How predictions from different models hold up when compared to observations may provide us with important clues. This is discussed in more detail in the summary of the implications of different models and how they compare to observations in \S\ref{sub: sb summary theory}. \subsubsection{Interchange reconnection}\label{sub: theory interchange } \begin{figure} \begin{center} \includegraphics[width=.90\columnwidth]{modelsfigure} \caption{Graphical overview covering most of the various proposed switchback-generation mechanisms, reprinted from {\footnotesize \texttt{https://www.nasa.gov/feature/goddard/2021/} \texttt{switchbacks-science-explaining-parker-solar-probe-s-magnetic-puzzle}}. 
The mechanisms are classified into those that form switchbacks (1) directly through interchange reconnection ({\emph{e.g.}}, \citealt{2020ApJ...894L...4F,2021ApJ...913L..14H,2020ApJ...896L..18S}); (2) through ejection of flux ropes by interchange reconnection \citep{2021A&A...650A...2D,2022ApJ...925..213A}; (3) from expanding/growing AWs and/or Alfv\'enic turbulence \citep{2020ApJ...891L...2S,2021ApJ...918...62M,2021ApJ...915...52S}; (4) due to roll up from nonlinear Kelvin-Helmholtz instabilities \citep{2020ApJ...902...94R}; and (5) through magnetic field lines that stretch between sources of slower and faster wind (\citealp{2021ApJ...909...95S}; see also \citealt{2006GeoRL..3314101L}).} \label{fig:icx} \end{center} \end{figure} Interchange reconnection refers to the process whereby a region of open magnetic-field lines reconnect with a closed magnetic loop \citep{2005ApJ...626..563F}. Since this process is expected to be explosive and suddenly change the shape and topology of the field, it is a good candidate for the origin of switchbacks and has been considered by several authors. The basic scenario is shown in Fig.~\ref{fig:icx}. \citet{2020ApJ...894L...4F} first pointed out the general applicability of interchange reconnection to the {\emph{PSP}} observations (the possible relevance to earlier {\emph{Ulysses}} observations had also been discussed in \citealt{2004JGRA..109.3104Y}). They focus on the large transverse flows measured by {\emph{PSP}} as evidence for the global circulation of open flux enabled by the interchange reconnection process \citep{2001ApJ...560..425F,2005ApJ...626..563F}. Given that switchbacks tend to deflect preferentially in the transverse direction (see \S\ref{sub:obs: shapes and sizes}; \citealp{2020ApJS..246...45H}), they argue that these two observations are suggestively compatible: an interchange reconnection event that enables the transverse transport of open flux would naturally create a transverse switchback. 
Other authors have focused more on the plasma-physics process of switchback formation, including the reconnection itself and the type of perturbation it creates. \citet{2021A&A...650A...2D} used two-dimensional (2D) particle-in-cell (PIC) simulations to study the hypothesis that switchbacks are flux-rope structures that are ejected by bursty interchange reconnection. They present two 2D simulations, the first focusing on the interchange reconnection itself and the second on the structure and evolution of a flux rope in the solar wind. They find generally positive conclusions: flux ropes with radial-field reversals, nearly constant $\Bm$, and temperature enhancements are naturally generated by interchange reconnection; and, flux-rope initial conditions relax into structures that match {\emph{PSP}} observations reasonably well. Further discussion of the evolution of such structures, in particular how they evolve and merge with radius, is given in \citet{2022ApJ...925..213A} (see also \S\ref{sub: theory SB propagation}) who also argue that the complex internal structure of observed switchbacks is consistent with the merging process. A challenge of the scenario is to reproduce the high Alfv\'enicity ($\delta \B\propto \delta \bm{v}$) of {\emph{PSP}} observations, although the merging process of \citet{2022ApJ...925..213A} naturally halts once Alfv\'enic structures develop, suggesting we may be observing this end result at {\emph{PSP}} altitudes. A somewhat modified reconnection geometry has been explored with 2D MHD simulations by \citet{2021ApJ...913L..14H}. They introduce an interchange reconnection process between open and closed regions with discontinuous guide fields, which is enabled by footpoint shearing motions and favors the emission of AWs from the reconnection site. 
They find quasi-periodic, intermittent emission of MHD waves, classifying the open-flux regions as ``un-reconnected,'' ``newly reconnected,'' and ``post-reconnected.'' Impulsive AWs, which can resemble switchbacks, robustly propagate outwards in both the newly and post-reconnected regions. While both regions have enhanced temperatures, the newly-reconnected regions have more slow-mode activity and the post-reconnected regions have lower densities, features of the model that may be observable at higher altitudes by {\emph{PSP}}. They also see that flux ropes, which are ejected into the open field lines, rapidly disappear after the secondary magnetic reconnection between the impacting flux rope and the impacted open field lines; it is unclear whether this difference with \citet{2021A&A...650A...2D} is a consequence of the MHD model or the different geometry. Finally, \cite{2020ApJ...903....1Z} focus more on the evolution of magnetic-field structures generated by the reconnection process, which would often be clustered in time as numerous open and closed loops reconnect over a short period. They argue that the strong radial-magnetic-field perturbations associated with switchbacks imply that their complex structures should propagate at the fast magnetosonic speed (but see also \S\ref{sub: theory alfven waves } below), deriving an equation from WKB ({\emph{i.e.}}, the Wentzel, Kramers, and Brillouin approximation) theory for how the structures evolve as they propagate outwards from a reconnection site to {\emph{PSP}} altitudes. The model is compared to data in more detail in \citet{2021ApJ...917..110L}, who use a Markov Chain Monte Carlo technique to fit the six free parameters of the model ({\emph{e.g.}}, wave angles and the initial perturbation) to seven observed variables taken from {\emph{PSP}} time-series data for individual switchbacks. They find reasonable agreement, with around half of the observed switchbacks accepted as good fits to the model.
\cite{2020ApJ...903....1Z}'s WKB evolution equation implies that $|\delta \B|/|\B_0|$ grows in amplitude out to $\sim50~R_\odot$ (whereupon it starts decaying again), and the shape of the proposed structures implies that switchbacks should often be observed as closely spaced double-humped structures. Their assumed fast-mode polarization implies that switchbacks that are more elongated in the radial direction will also exhibit larger variation in $\Bm$, because radial elongation, combined with $\nabla\cdot \B=0$, implies a mostly perpendicular wavenumber. This could be tested directly (see \S\ref{sub:obs: shapes and sizes}) and is a distinguishing feature between the fast-mode and AW based models (which generically predict $\Bm\sim{\rm const}$; \S\ref{sub: theory alfven waves }). Overall, we see that the various flavors of interchange-reconnection based models have a number of attractive features, in particular their natural explanation of the likely preferred tangential deflections of large switchbacks (\S\ref{sub:obs: shapes and sizes}; \citealp{2020ApJS..246...45H,2022MNRAS.517.1001L}), along with the bulk tangential flow \citep{2019Natur.576..228K}, and of the possible observed temperature enhancements (\S\ref{sub: obs: thermodynamics}; although to our knowledge, there are not yet clear predictions for separate $T_\perp$ and $T_\|$ dynamics). However, a number of features remain unclear, including (depending on the model in question) the Alfv\'enicity of the structures that are produced and how they survive and evolve as they propagate to {\emph{PSP}} altitudes (see \S\ref{sub: theory SB propagation}). \subsubsection{Other solar-surface processes}\label{sub: theory coronal jets} \cite{2020ApJ...896L..18S} present a phenomenological model for how switchbacks might form from the same process that creates coronal jets, which are small-scale filament eruptions observed in X-ray and extreme ultraviolet (EUV). 
Their jet model (proposed in \citealt{2015Natur.523..437S}) involves jets originating as erupting-flux-rope ejections through a combination of internal and interchange reconnection (thus this model would also naturally belong to \S\ref{sub: theory interchange } above). Observations that suggest jets originate around regions of magnetic-flux cancellation ({\emph{e.g.}}, \citealt{2016ApJ...832L...7P}) support this concept. \cite{2020ApJ...896L..18S} propose that the process can also produce a magnetic-field twist that propagates outwards as an AW packet that eventually evolves into a switchback. Although there is good evidence for equatorial jets reaching the outer corona (thus allowing the switchback propagation into the solar wind), their relation to switchbacks is somewhat circumstantial at the present time; further studies of this mechanism could, for instance, attempt to correlate switchback and jet occurrences by field-line mapping. Using 3D MHD simulations, \cite{2021ApJ...911...75M} examined how photospheric motions at the base of a magnetic flux tube might excite motions that resemble switchbacks. They introduced perturbations at the lower boundary of a pressure-balanced magnetic-field solution, considering either a field-aligned, jet-like flow or transverse, vortical flows. Switchback-like fluctuations evolve in both cases: from the jet, a Rayleigh-Taylor-like instability that causes the field to form rolls; from the vortical perturbations, large-amplitude AWs that steepen nonlinearly. However, they also conclude that such perturbations are unlikely to enter the corona: the roll-ups fall back downwards due to gravity and the torsional waves unwind as the background field straightens. They conclude that while such structures are likely to be present in the chromosphere, it is unclear whether they are related to switchbacks as observed by {\emph{PSP}}, since propagation effects will clearly play a dominant role (see \S\ref{sub: theory SB propagation}).
\subsubsection{Interactions between wind streams}\label{sub: theory stream interactions} There exist several models that relate the formation of switchbacks in some way to the interaction between neighbouring solar-wind streams with different speeds. These could be either large-scale variations between separate slow- and fast-wind regions, or smaller-scale ``micro-streams,'' which seem to be observed ubiquitously in imaging studies of the low corona \citep{2018ApJ...862...18D} as well as in {\emph{in~situ}} data \citep{2021ApJ...923..174B,2021ApJ...919...96F}.\footnote{We also caution, however, that switchbacks themselves create large radial velocity perturbations (see \S\ref{sub:obs: velocity increase}), which clearly could not be a cause of switchbacks.} Because these models require the stream shear to overwhelm the magnetic tension, they generically predict that switchbacks start forming primarily outside the Alfv\'en critical zone, once $V_R\gtrsim \va$, and/or once $\beta\gtrsim1$. However, the mechanism of switchback formation differs significantly between the models. \citet{2006GeoRL..3314101L} presented an early proposal of this form to explain {\emph{Ulysses}} observations. Using 2D MHD simulations, they studied the evolution of a large-amplitude parallel (circularly polarized) AW propagating in a region that also includes strong flow shear from a central smaller-scale velocity stream. They find that large-magnitude field reversals develop across the stream due to the stretching of the field. However, the reversals are also associated with large compressive fluctuations in the thermal pressure, $\Bm$, and plasma $\beta$. Although these match various {\emph{Ulysses}} datasets quite well, they are much less Alfv\'enic than most switchbacks observed by {\emph{PSP}}. \cite{2020ApJ...902...94R} consider the scenario where nonlinear Kelvin-Helmholtz instabilities develop across micro-stream boundaries, with the resulting strong turbulence producing switchbacks.
This is motivated in part by the Solar TErrestrial RElations Observatory \citep[{\emph{STEREO}};][]{2008SSRv..136....5K} observations of the transition between ``striated'' (radially elongated) and ``flocculated'' (more isotropic) structures \citep{2016ApJ...828...66D} around the surface where $\beta\approx1$, which is around the Alfv\'en critical zone. Since this region is where the velocity shear starts to be able to overwhelm the stabilizing effect of the magnetic field, it is natural to imagine that the instabilities that develop will contribute to the change in fluctuation structure and the generation of switchbacks. Comparing {\emph{PSP}} and {\emph{ex~situ}} observations with theoretical arguments and numerical simulations, \cite{2020ApJ...902...94R} argue that this scenario can account for a range of solar-wind properties, and that the conditions -- {\emph{e.g.}}, the observed Alfv\'en speed and prevalence of small-scale velocity shears -- are conducive to causing shear-driven turbulence. Their 3D MHD simulations of shear-driven turbulence generate a significant reversed-field fraction that is comparable to {\emph{PSP}} observations, with the distributions of $\Bm$, radial field, and tangential flows having a promising general shape. However, it remains unclear whether turbulence generated in this way is sufficiently Alfv\'enic to explain observations, since they see somewhat larger variation in $\Bm$ than observed in many {\emph{PSP}} intervals (but see \citealt{2021ApJ...923..158R}). A key prediction of this model is that switchback activity should generally increase with distance from the Sun, since the turbulence that creates the switchbacks should continue to be driven so long as there remains sufficient velocity shear between streams. This feature is a marked contrast to models that invoke switchback generation through interchange reconnection or other solar-surface processes.
\cite{2021ApJ...909...95S} consider a simpler geometric explanation -- that switchbacks result from global magnetic-field lines that stretch across streams with different speeds, rather than due to waves or turbulence generation. This situation is argued to naturally result from the global transport of magnetic flux as magnetic-field footpoints move between sources of wind with different speeds, with the footpoint motions sustained by interchange reconnection to conserve magnetic flux \citep{2001ApJ...560..425F}. A field line that moves from a source of slower wind into faster wind (thus traversing faster to slower wind as it moves radially outwards) will naturally reverse its radial field across the boundary due to the stretching by velocity shear. This explanation focuses on the observed asymmetry of the switchbacks -- as discussed in \S\ref{SB_obs}, the larger switchback deflections seem to show a preference to be tangential and particularly in the +T (Parker-spiral) direction, which is indeed the direction expected from the global transport of flux through interchange reconnection.\footnote{Note, however, that \citealt{2020ApJS..246...45H} argue that the global tangential flow asymmetry is not a consequence of the switchbacks themselves.} Field reversals are argued to develop their Alfv\'enic characteristics beyond the Alfv\'en point, since the field kink produced by a coherent velocity shear does not directly produce $\delta \B\propto \delta \bm{v}$ or $\Bm\sim{\rm const}$ (as also seen in the simulations of \citealt{2006GeoRL..3314101L}). \subsubsection{Expanding Alfv\'en waves and turbulence}\label{sub: theory alfven waves } The final class of models relate to perhaps the simplest explanation: that switchbacks are spherically polarized ($\Bm={\rm const}$) AWs (or Alfv\'enic turbulence) that have reached amplitudes $|\delta \B|/|\B_0|\gtrsim 1$ (where $\B_0$ is the background field). 
The idea follows from the realisation \citep{1974sowi.conf..385G,1974JGR....79.2302B} that an Alfv\'enic perturbation -- one with $\delta \bm{v}=\delta \B/\sqrt{4\pi\rho}$ and $\Bm$, $\rho$, and the thermal pressure all constant -- is an exact nonlinear solution to the MHD equations that propagates at the Alfv\'en velocity. This is true no matter the amplitude of the perturbation compared to $\B_0$, a property that seems unique among the zoo of hydrodynamic and hydromagnetic waves (other waves generally form into shocks at large amplitudes). Once $|\delta \B|\gtrsim |\B_0|$, such states will often reverse the magnetic field in order to maintain their spherical polarization (they involve a perturbation $\delta \B$ with a component parallel to $\B_0$). Moreover, as they propagate in an inhomogeneous medium, nonlinear AWs behave just like small-amplitude waves \citep{1974JGR....79.1539H,1974JGR....79.2302B}; this implies that in the expanding solar wind, where the decreasing Alfv\'en speed causes $|\delta \B|/ |\B_0|$ to increase, waves propagating outwards from the inner heliosphere can grow to $|\delta \B|\gtrsim |\B_0|$, feasibly forming switchbacks from initially small-amplitude waves. In the process, they may develop the sharp discontinuities characteristic of {\emph{PSP}} observations if, as they grow, the constraint of constant $\Bm$ becomes incompatible with smooth $\delta \B$ perturbations. However, in the more realistic scenario where there exists a spectrum of waves, this wave growth competes with the dissipation of the large-scale fluctuations due to turbulence induced by wave reflection \citep[see, {\emph{e.g.}},][]{1989PhRvL..63.1807V,2007ApJ...662..669V,2009ApJ...707.1659C,2022PhPl...29g2902J} or other effects \citep[{\emph{e.g.}},][]{1992JGR....9717115R}. If dissipation is too fast, it will stop the formation of switchbacks; so, in this formation scenario, turbulence and switchbacks are inextricably linked (as is also the case in the scenario of \citealt{2020ApJ...902...94R}).
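To make this picture concrete, the constant-magnitude Alfv\'enic state and its growth with radius can be sketched as follows (an illustrative, textbook-style WKB estimate assuming a radial background field, a constant wind speed with $\rho\propto r^{-2}$, and evaluation well outside the Alfv\'en point; it is not specific to any one of the models cited here):
\begin{equation*}
\B = \B_0 + \delta \B, \qquad \bm{v} = \bm{v}_0 \mp \frac{\delta \B}{\sqrt{4\pi\rho}}, \qquad \Bm = {\rm const},
\end{equation*}
which solves the ideal MHD equations exactly at any amplitude (the upper sign for propagation along $\B_0$). Wave-action conservation then gives $|\delta \B|\propto \rho^{3/4}\propto r^{-3/2}$, while the radial background field decays as $|\B_0|\propto r^{-2}$, so that
\begin{equation*}
\frac{|\delta \B|}{|\B_0|} \propto r^{1/2},
\end{equation*}
and an initially small-amplitude wave can eventually reach $|\delta \B|\gtrsim |\B_0|$, at which point maintaining $\Bm={\rm const}$ requires local reversals of the radial field.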
Thus, understanding switchbacks will require understanding and accurately modelling the turbulence properties, evolution, and amplitude \citep{2018ApJ...865...25U,2019ApJS..241...11C,2013ApJ...776..124P}. Several recent papers have explored the general scenario from different standpoints, finding broadly consistent results. \citet{2020ApJ...891L...2S} and \citet{2022PhPl...29g2902J} used expanding-box MHD simulations to understand how AWs grow in amplitude and develop turbulence. The basic setup involves starting from a purely outwards propagating (fully imbalanced) population of moderate-amplitude randomly phased waves and expanding the box to grow the waves to larger amplitudes. Switchbacks form organically as the waves grow, with their strength ({\emph{e.g.}}, the strength and proportion of field reversals) and properties ({\emph{e.g.}}, the extent to which $\Bm$ is constant across switchbacks) depending on the expansion rate and the wave spectrum. While promising, these simulations are highly idealized -- {\emph{e.g.}}, the expanding box applies only outside the Alfv\'en point, and the equation of state was taken as isothermal. While this has hindered the comparison to some observational properties, there are also some promising agreements \citep{2022PhPl...29g2902J}. Similar results were found by \cite{2021ApJ...915...52S} using more comprehensive and realistic simulations that capture the full evolution of the solar wind from the coronal base to {\emph{PSP}} altitudes. Their simulation matches well the bulk properties of the slow-Alfv\'enic wind seen by {\emph{PSP}} and develops strong switchbacks beyond $\sim10-20~R_\odot$ (where the amplitude of the turbulence becomes comparable to the mean field). They find switchbacks that are radially elongated, as observed, although the proportion of field reversals is significantly lower than observed (this was also the case in \citealt{2020ApJ...891L...2S}).
It is unclear whether this discrepancy is simply due to insufficient numerical resolution or a more fundamental issue with the AW scenario. \cite{2021ApJ...915...52S} do not see a significant correlation between switchbacks and density perturbations (see \S\ref{sub: obs: thermodynamics} for discussion), while more complex correlations with kinetic thermal properties \citep{2020MNRAS.498.5524W} cannot be addressed in either this model or the simpler local ones \citep{2020ApJ...891L...2S,2022PhPl...29g2902J}. \citet{2021ApJ...918...62M} consider a complementary, analytic approach to the problem, studying how one-dimensional, large-amplitude AWs grow and change shape in an expanding plasma. This shows that expansion necessarily generates small compressive perturbations in order to maintain the wave's spherical polarization as it grows to large amplitudes, providing specific $\beta$-dependent predictions for magnetic and plasma compressibility. The model has been extended to include the Parker spiral by \citet{2022PhPl...29k2903S}, who find that the interaction with a rotating background field causes the development of tangential asymmetries in the switchback deflection directions. These, and the compressive predictions of \citet{2021ApJ...918...62M}, seem to explain various aspects of simulations \citep{2022PhPl...29g2902J}. Overall, these analyses provide simple, geometric explanations for various switchback properties, most importantly the preference for switchbacks to be radially elongated (\S\ref{sub:obs: shapes and sizes} and Table~\ref{tab:shape_size}); however, they are clearly highly idealised, particularly concerning the neglect of turbulence. The models also struggle to reproduce the extremely sharp switchback boundaries seen in {\emph{PSP}} data, which is likely a consequence of considering one-dimensional (1D) waves, since much sharper structures evolve in similar calculations with 2D or 3D structure \citep{2022arXiv220607447S}.
Overall, AW models naturally recover the Alfv\'enicity ($\delta \B\propto\delta \bm{v}$ and nearly constant $\Bm$) and radial elongation of switchbacks seen in {\emph{PSP}} observations, but may struggle with some other features. It remains unclear whether detailed aspects of the preferred tangential deflections of large switchbacks can be recovered \citep{2022A&A...663A.109F,2022MNRAS.517.1001L}, although large-amplitude AWs do develop tangential asymmetries as a consequence of expansion and the rotating Parker spiral \citep{2022PhPl...29g2902J,2022PhPl...29k2903S}. Similarly, further work is needed to understand the compressive properties, in particular in a kinetic plasma.\footnote{Note, however, that AW models do not predict an \emph{absence} of compressive features in switchbacks. Indeed, compressive flows are necessary to maintain spherical polarization as the waves grow in amplitude due to expansion \citep{2021ApJ...918...62M}.} AW models naturally predict an increase in switchback occurrence with radial distance out to some maximum (whereupon it may decrease again), although the details depend on low-coronal conditions and the influence of turbulence, which remain poorly understood \citep{2022PhPl...29g2902J}. Computational models have also struggled to reproduce the very high switchback fractions observed by {\emph{PSP}}; whether this is due to numerical resolution or more fundamental issues remains poorly understood. \subsubsection{Propagation and evolution of switchbacks}\label{sub: theory SB propagation} A final issue to consider, particularly for understanding the distinction between {\emph{ex~situ}} and {\emph{in~situ}} generation mechanisms, is how a hypothetical switchback evolves as it propagates and is advected outwards in the solar wind. In particular, if switchbacks are to be formed at the solar surface, they must be able to propagate a long way without disrupting or dissolving.
Further, different formation scenarios predict different occurrence rates and size statistics as a function of heliocentric radius (\S\ref{sub:obs: occurrence}), and it is important to understand how switchbacks change shape and amplitude in order to understand what solar-wind observations could tell us about coronal conditions. Various studies have focused on large-amplitude Alfv\'enic initial conditions, thus probing the scenario where Alfv\'enic switchback progenitors are released in the low corona ({\emph{e.g.}}, due to reconnection), perhaps with subsequent evolution resulting from the AW/turbulence effects considered in \S\ref{sub: theory alfven waves }. Using 2D MHD simulations, these studies start from an analytic initial condition with a magnetic perturbation that is large enough to reverse the mean field and an Alfv\'enic velocity $\delta \bm{v} = \pm \delta \B/\sqrt{4\pi \rho}$. While \citet{2005ESASP.592..785L} showed that such structures rapidly dissolve if $\Bm$ is not constant across the wave, \citet{2020ApJS..246...32T} reached the opposite conclusion for switchbacks with constant $\Bm$ (as relevant to observations), with their initial conditions propagating unchanged for hundreds of Alfv\'en times before eventually decaying due to parametric instability. They concluded that even relatively short wavelength switchbacks can in principle survive propagating out to tens of solar radii. Using the same initial conditions, \citet{2021ApJ...914....8M} extended the analysis to include switchbacks propagating through a radially stratified environment. They considered a fixed, near-exponential density profile and a background magnetic field with different degrees of expansion, which changes the radial profile of $\va$ in accordance with different possible conditions in the low corona.
Their basic results are consistent with the expanding-AW theory discussed above (\S\ref{sub: theory alfven waves }), with switchbacks in super-radially expanding background fields maintaining strong field deflections, while those in radially expanding or non-expanding backgrounds unfold as they propagate outwards. The study also reveals a number of non-WKB effects from stratification, such as a gravitational damping from plasma entrained in the switchback. More generally, they point out that after propagating any significant distance in a radially stratified environment, a switchback will have deformed significantly compared to the results from \citet{2020ApJS..246...32T}, either changing shape or unfolding depending on the background profile. This blurs the line between {\emph{ex~situ}} and {\emph{in~situ}} formation scenarios. The above studies, by fixing $\delta \bm{v} = \pm \delta \B/\sqrt{4\pi \rho}$ and $\Bm={\rm const}$, effectively assume that switchbacks are Alfv\'enic. \citet{2022ApJ...925..213A} have considered the evolution and merging of flux ropes that are ejected from interchange reconnection sites in the scenario of \citet{2021A&A...650A...2D}. They show that while flux ropes are likely to form initially with an aspect ratio of near unity, merging of ropes through slow reconnection of the wrapping magnetic field is energetically favorable. This merging continues until the axial flows inside the flux ropes increase to near-Alfv\'enic values, at which point the process becomes energetically unfavorable. This process also causes flux ropes to become increasingly radially elongated with distance from the Sun, which is observationally testable (see \S\ref{sub:obs: shapes and sizes}) and may be the opposite prediction to AW-based models (since the wave vector rotates towards the parallel direction with expansion). \citet{2021A&A...650A...2D} also argue that the complex inner structure of observed switchbacks is consistent with this merging process.
The WKB fast-mode-like calculation of \citet{2020ApJ...903....1Z} produces somewhat modified scalings (which nonetheless predict a switchback amplitude that increases with radius), but does not address the stability or robustness of the structures. Considerations relating to the long-time stability of switchbacks are less relevant to the shear-driven models of \citet{2020ApJ...902...94R,2021ApJ...909...95S}, in which switchbacks are generated in the Alfv\'en zone and beyond (where $V_R\gtrsim \va$), so will not have propagated a significant distance before reaching {\emph{PSP}} altitudes. Overall, we see that Alfv\'enic switchbacks are expected to be relatively robust, as are flux-rope structures, although the latter evolve significantly through merging. This suggests that source characteristics could be retained (albeit with significant changes in shape) as switchbacks propagate through the solar wind. If indeed switchbacks are of low-coronal origin, this is encouraging for the general program of using switchbacks to learn about the important processes that heat and accelerate the solar wind. \begin{figure} \begin{center} \includegraphics[width=.87\columnwidth]{fig_bale_fargette_1.jpg} \includegraphics[width=.87\columnwidth]{fig_bale_fargette_2.jpg} \caption{{\it Top:} Variation of plasma properties observed during patches of switchbacks during Enc.~6. The second panel shows that the helium abundance is modulated with the patch profile too, with enhanced fractional density inside patches of switchbacks. The periodicity is consistent with the crossing of funnels emerging from the Solar atmosphere. Figure adapted from \cite{2021ApJ...923..174B}. {\it Bottom:} Cartoon showing the association between switchback patches and their periodicity with supergranular and granular structure in the corona.
Figure adapted from \cite{2021ApJ...919...96F}.} \label{fig_sb_span} \end{center} \end{figure} \subsection{Outlook and open questions}\label{SB_discussion} \subsubsection{Connection to Solar sources}\label{sub:solar sources} Because of their persistence in the solar wind, switchbacks can also be considered as tracers of processes occurring in the Solar atmosphere and therefore can be used to identify wind sources at the Sun. Recent work by \citet{2021MNRAS.508..236W} compared the properties of two switchback patches during different Encs. and suggested that patches could be linked to coronal hole boundary regions at the solar surface. \citet{2021MNRAS.508..236W} also showed that these periods, which had bulk velocities of $\sim300$~km~s$^{-1}$, could sometimes be characterized by a particularly low alpha abundance. The cause of this low alpha abundance is not known, but it could be related to the processes and mechanisms governing the release of the solar wind at the surface. Moreover, local modulation in the alpha fraction observed {\emph{in situ}}, crucially during switchback patches, could be a direct signature of spatial modulation in solar sources; \cite{2021ApJ...923..174B} have identified funnels as a possible source for these structures. Such an interpretation is consistent with the finding presented by \cite{2021ApJ...919...96F}, who have identified some periodicity in patches that is consistent with that expected from Solar supergranulation, and also some smaller-scale signatures potentially related to granular structures inside funnels (see Fig.~\ref{fig_sb_span}). Another interesting interpretation has been proposed by \citet{2021ApJ...920L..31N}, who suggest that patches of switchbacks observed in the inner heliosphere by {\emph{PSP}} could then evolve with radial distance into structures with an overall higher bulk velocity, like the microstreams observed by {\emph{Ulysses}} in the polar wind \citep{1995JGR...10023389N}.
According to the authors, microstreams might then be the result of accumulated and persistent velocity enhancements resulting from a series of switchbacks or patches. At the same time, \cite{2021ApJ...920L..31N} also propose that the individual switchbacks inside the patches could be generated by minifilament/flux rope eruptions that cause coronal jets \citep{2020ApJ...896L..18S}, so that microstreams are a consequence of a series of such jet-driven switchbacks occurring in close succession. \subsubsection{Implications for our understanding of the solar wind}\label{sub: sb summary theory} Switchback observations hold promise as a way to better constrain our understanding of the solar wind itself. In particular, most of the theoretical models discussed in \S\ref{sec: theory switchbacks} also suggest broader implications for coronal and solar-wind heating, and thus the origin of the solar wind. Given this, although there is currently no consensus about the key ingredients that form switchbacks, if a particular model does gain further observational support, this may lead to more profound shifts in our understanding of the heliosphere. Here, we attempt to broadly characterize the implications of the different formation scenarios in order to highlight the general importance of understanding switchbacks. Further understanding will require constant collaboration between theory and observations, whereby theorists attempt to provide the most constraining and distinctive tests possible of their proposed mechanisms in order to better narrow down the possibilities. Such a program is strongly supported by the recent observations of switchback modulation on solar supergranulation scales, which suggest a direct connection to solar-surface processes (see above, \S\ref{sub:solar sources}).
Above (\S\ref{sec: theory switchbacks}), we broadly categorized models into ``{\emph{ex~situ}}'' and ``{\emph{in~situ}},'' which involve switchbacks forming on or near the solar surface, or in the bulk solar wind, respectively. These classes tie naturally into RLO models of coronal heating for {\emph{ex~situ}} models, or into WTD coronal-heating theories for {\emph{in~situ}} switchback formation (with some modifications for specific models). But the significant differences between individual models narrow down the correspondence further than this. Let us consider some of the main proposals and what they would imply, if correct. In the discussion below, we consider some of the same models as discussed in \S\ref{sec: theory switchbacks}, but grouped by their consequences for the solar wind as opposed to the switchback formation mechanism. \citet{2020ApJ...894L...4F} and \citet{2021ApJ...909...95S} propose that switchbacks are intimately related to the global transport of open magnetic flux caused by interchange reconnection, either through the ejection of waves or due to the interaction between streams. This global circulation would have profound consequences more generally, {\emph{e.g.}}, for coronal and solar-wind heating \citep{2003JGRA..108.1157F} or the magnetic-field structure of the solar wind (the sub- and super-Parker spiral; \citealp{2021ApJ...909...95S}).
Some other interchange reconnection or impulsive jet mechanisms -- in particular, the ejection of flux ropes \citep{2021A&A...650A...2D,2022ApJ...925..213A} or MHD waves \citep{2020ApJ...903....1Z,2021ApJ...913L..14H,2020ApJ...896L..18S} from the reconnection site -- do not necessarily involve the same open-flux-transport mechanism, but clearly favor an RLO-based coronal heating scenario, and have more specific consequences for each individual model; for example, the importance of flux-rope structures to the energetics of the inner heliosphere for \citet{2021A&A...650A...2D}, magnetosonic perturbations in \citet{2020ApJ...903....1Z}, or specific photospheric/chromospheric motions for \citet{2021ApJ...913L..14H,2020ApJ...896L..18S}. The model of \citet{2020ApJ...902...94R} suggests a very different scenario, whereby shear-driven instabilities play a crucial role outside of the Alfv\'en critical zone (where $V_R\gtrsim\va$), where they would set the properties of the turbulent cascade and change the energy budget by boosting heating and acceleration of slower regions. Finally, in the Alfv\'enic turbulence/waves scenario, switchbacks result from the evolution of turbulence to very large amplitudes $|\delta \B|\gtrsim |\B_0|$ \citep{2020ApJ...891L...2S,2021ApJ...915...52S,2021ApJ...918...62M}. In turn, given the low plasma $\beta$, this implies that the energy contained in Alfv\'enic fluctuations is significant even at {\emph{PSP}} altitudes (at least in switchback-filled regions), by which point they should already have contributed significantly to heating. Combined with low-coronal observations \citep[{\emph{e.g.}},][]{2007Sci...318.1574D}, this is an important constraint on wave-heating models \citep[{\emph{e.g.}},][]{2007ApJS..171..520C}. Despite the significant differences between models highlighted above, it is also worth noting some similarities. In particular, features of various models are likely to coexist and/or feed into one another.
For example, some of the explosive, {\emph{ex~situ}} scenarios (\S\ref{sub: theory interchange }--\ref{sub: theory coronal jets}) propose that such events release AWs, which then clearly ties into the AW/expansion scenario of \S\ref{sub: theory alfven waves }. Indeed, as pointed out by \citet{2021ApJ...914....8M}, the subsequent evolution of switchbacks as they propagate in the solar wind cannot be \emph{avoided}, which muddies the {\emph{in~situ}}/{\emph{ex~situ}} distinction. In this case, the distinction between {\emph{in~situ}} and {\emph{ex~situ}} scenarios would be more related to the relevance of distinct, impulsive events to wave launching, as opposed to slower, quasi-continuous stirring of motions. Similarly, \citet{2020ApJ...902...94R} discuss how waves propagating upwards from the low corona could intermix and contribute to the shear-driven dynamics that form the basis for their model. These interrelationships, and the coexistence of different mechanisms, should be kept in mind moving forward as we attempt to distinguish observationally between different mechanisms. \subsubsection{Open Questions and Future Encs.} Given their predominance in the solar wind plasma close to the Sun, and because each switchback is associated with an increase in the bulk kinetic energy of the flow -- as they imply a net motion of the centre of mass of the plasma -- it is legitimate to consider whether switchbacks play any dynamical role in the acceleration of the flow and its evolution in interplanetary space. Moreover, it is an open question whether the kinetic energy and Poynting flux carried by these structures have an impact on the overall energy budget of the wind. To summarize, these are some of the main currently open questions about these structures and their role in the solar wind dynamics: \begin{itemize} \item Do switchbacks play any role in solar wind acceleration?
\item Does the energy transported by switchbacks constitute an important possible source for plasma heating during expansion? \item Do switchbacks play an active role in driving and maintaining the turbulent cascade? \item Are switchbacks distinct plasma parcels with different properties than the surrounding plasma? \item Do switchbacks continuously decay and reform during expansion? \item Are switchbacks signatures of key processes in the Solar atmosphere and tracers of specific types of solar wind sources? (Open-field regions vs. streamers) \item Can switchback-like magnetic-field reversals exist close to the Sun, inside the Alfv\'en radius? \end{itemize} Answering these questions requires taking measurements even closer to the Sun and accumulating more statistics for switchbacks in different types of streams. These should include the fast solar wind, which has so far been seldom encountered by {\emph{PSP}} because of solar minimum and then a particularly flat heliospheric current sheet (HCS; which implies a very slow wind close to the ecliptic). Crucially, future {\emph{PSP}} Encs. will provide the ideal conditions for answering these open questions, as the S/C will approach the solar atmosphere further, likely crossing the Alfv\'en radius. Further, this phase will coincide with increasing solar activity, making it possible to sample coronal hole sources of fast wind directly. \section{Solar Wind Sources and Associated Signatures} \label{SWSAS} A major outstanding research question in solar and heliophysics is establishing causal relationships between properties of the solar wind and the source of that same plasma in the solar atmosphere.
Indeed, investigating these connections is one of the major science goals of {\emph{PSP}}, which aims to ``\textit{determine the structure and dynamics of the plasma and magnetic fields at the sources of the solar wind}'' \cite[Science Goal \#2;][]{2016SSRv..204....7F} by making {\emph{in situ}} measurements closer to the solar corona than ever before. In this section, we outline the major outcomes and the methods used to relate {\emph{PSP}}’s measurements to specific origins on the Sun. In \S\ref{SWSAS:modeling} we review modeling efforts and their capability to identify source regions. In \S\ref{SWSAS:slowlafv} we outline how {\emph{PSP}} has identified significant contributions to the slow solar wind from coronal hole origins with high alfv\'enicity. In \S\ref{SWSAS:strmblt} we review {\emph{PSP}}’s measurements of streams associated with the streamer belt and slow solar wind more similar to 1~AU measurements. In \S\ref{SWSAS:fstwnd} we examine {\emph{PSP}}’s limited exposure to fast solar wind, and the diagnostic information about its coronal origin carried by electron pitch angle distributions (PADs). In \S\ref{SWSAS:actvrgn} we recap {\emph{PSP}}’s measurements of energetic particles associated with solar activity and impulsive events, as well as how they can disentangle magnetic topology and identify pathways by which the Sun’s plasma can escape. Finally, in \S\ref{SWSAS:sbs} we briefly discuss clues to the solar origin of streams in which {\emph{PSP}} observes switchbacks and refer the reader to \S\ref{MagSBs} for much more detail. \subsection{Modeling and Verifying Connectivity} \label{SWSAS:modeling} Association of solar wind sources with specific streams of plasma measured {\emph{in situ}} in the inner heliosphere requires establishing a connection between the Sun and S/C along which solar wind flows and evolves.
One of the primary tools for this analysis is combined coronal and heliospheric modeling, particularly of the solar and interplanetary magnetic field, which typically governs the large-scale flow streamlines of the solar wind. In support of {\emph{PSP}}, there has been a broad range of such modeling efforts, with goals including making advance predictions to test and improve the models, as well as making detailed connectivity estimates informed by {\emph{in situ}} measurements after the fact. The physics contained in coronal/heliospheric models lies on a continuum mediated by computational difficulty, ranging from high computational tractability and minimal physics (Potential Field Source Surface [PFSS] models; \citealp{1969SoPh....9..131A,1969SoPh....6..442S,1992ApJ...392..310W}) to comprehensive (but usually still time-independent) fluid plasma physics \citep[MHD, {\emph{e.g.}},][]{1996AIPC..382..104M,2012JASTP..83....1R} requiring longer computational run times. Despite these very different overall model properties, in terms of coronal magnetic structure they often agree very well with each other, especially at solar minimum \citep{2006ApJ...653.1510R}. In advance of {\emph{PSP}}’s first solar Enc. in Nov. 2018, \cite{2019ApJ...874L..15R} and \cite{2019ApJ...872L..18V} both ran MHD simulations. They utilized photospheric observations from the Carrington rotation (CR) prior to the Enc. and model parameters which were not informed by any {\emph{in situ}} measurements. Both studies successfully predicted that {\emph{PSP}} would lie in negative-polarity solar wind during perihelion and cross the HCS shortly after (see Fig.~\ref{FIG:Badman2020}). Via field line tracing, \cite{2019ApJ...874L..15R} in particular pointed to a series of equatorial coronal holes as the likely sources of this negative polarity and subsequent HCS crossing.
This first source prediction was subsequently confirmed through careful comparison with {\emph{in~situ}} measurements of the heliospheric magnetic field and tuning of model parameters. This was done with PFSS modeling \citep{2019Natur.576..237B,2020ApJS..246...23B,2020ApJS..246...54P}, Wang-Sheeley-Arge (WSA) PFSS $+$ Current Sheet modeling \citep{2020ApJS..246...47S}, and MHD modeling \citep{2020ApJS..246...24R,2020ApJS..246...40K,2021A&A...650A..19R}, all pointing to a distinct equatorial coronal hole at perihelion as the dominant solar wind source. As discussed in \S\ref{SWSAS:slowlafv}, the predominant solar wind at this time was slow but with high alfv\'enicity. This first Enc. has proved quite unique in how distinctive its source was, with subsequent Encs. typically connecting to polar coronal hole boundaries and a flatter HCS such that the {\emph{PSP}} trajectory skirts along it much more closely \citep{2020ApJS..246...40K, 2021A&A...650L...3C, 2021A&A...650A..19R}. It is worth discussing how these different modeling efforts made comparisons with {\emph{in situ}} data in order to determine their connectivity. The most common comparison is the timing and heliocentric location of current sheet crossings measured {\emph{in situ}}, which can be compared to the advected polarity inversion line (PIL) predicted by the various models \citep[{\emph{e.g.}},][and Fig.~\ref{FIG:Badman2020}]{2020ApJS..246...47S}. By ensuring a given model predicts when these crossings occur as accurately as possible, the coronal magnetic geometry can be well constrained, providing good evidence that field line tracing through the model is reliable. Further, models can be tuned in order to produce the best agreement possible. This tuning process was used to constrain models using {\emph{PSP}} data to more accurately find sources.
For example, \citet{2020ApJS..246...54P} found evidence that different source surface heights (the primary free parameter of PFSS models) were more appropriate at different times during {\emph{PSP}}’s first two Encs., implying a non-spherical surface where the coronal field becomes approximately radial, and that a higher source surface height was more appropriate for {\emph{PSP}}’s second Enc. compared to its first. This procedure can also be used to distinguish between choices of photospheric magnetic field data \citep[{\emph{e.g.}},][]{2020ApJS..246...23B}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Badman2020.png} \caption{Illustration of the mapping procedure and model validation. A PFSS model is run using a source surface height of 2.0~$R_\odot$ and a Global Oscillation Network Group \citep[{\emph{GONG}};][]{1988AdSpR...8k.117H} ZQS magnetogram from 6 Nov. 2018 (the date of the first {\emph{PSP}} perihelion). A black solid line shows the resulting PIL. Running across the plot is {\emph{PSP}}’s first solar Enc. trajectory ballistically mapped to the model outer boundary. The color (red or blue) indicates the magnetic polarity measured {\emph{in situ}}, which compares well to the predicted crossings of the PIL. The resulting mapped field lines are shown as curves connecting {\emph{PSP}}’s trajectory to the Sun. A background map combining EUV images from the {\emph{STEREO}}-A Extreme Ultraviolet Imager \citep[EUVI -- 195~{\AA};][]{2004SPIE.5171..111W} and the Advanced Imaging Assembly \citep[AIA -- 193~{\AA};][]{2012SoPh..275...17L} on the Solar Dynamics Observatory \citep[{\emph{SDO}};][]{2012SoPh..275....3P} shows that the mapped locations correspond to coronal holes (dark regions), implying the locations of open magnetic field in the model are physical. Figure adapted from \citet{2020ApJS..246...23B}.} \label{FIG:Badman2020} \end{figure*} Further validation of source estimates for {\emph{PSP}} has been made in different ways.
For example, if a given source estimate indicates a specific coronal hole, or a polar coronal hole extension or boundary, the model can be used to produce contours of the open field and compared with EUV observations of coronal holes to test whether the modeled source is empirically present and, if so, whether its size and shape are accurately captured \citep[{\emph{e.g.}},][]{2011SoPh..269..367L,2020ApJS..246...23B}. Other {\emph{in situ}} properties besides magnetic polarity have been compared in novel ways: \cite{2020ApJS..246...37R} showed for {\emph{PSP}}’s second Enc. a distinct trend in {\emph{in situ}} density depending on whether the S/C mapped to the streamer belt or outside it (see \S\ref{SWSAS:strmblt} for more details). MHD models \citep[{\emph{e.g.}},][]{2020ApJS..246...24R,2020ApJS..246...40K,2021A&A...650A..19R} have also allowed direct timeseries predictions of other {\emph{in situ}} quantities at the location of {\emph{PSP}}, which can be compared directly to the measured timeseries. Kinetic physics such as plasma distributions have provided additional clues: \cite{2020ApJ...892...88B} showed cooler electron strahl temperatures for solar wind mapping to a larger coronal hole during a fast wind stream, and hotter solar wind mapping to the boundaries of a smaller coronal hole during a slow solar wind stream. This suggests a connection between the strahl temperature and coronal origin (see \S\ref{SWSAS:fstwnd} for more details). \cite{2021A&A...650L...2S} showed further {\emph{in situ}} connections with source type, observing an increase in mass flux (with {\emph{PSP}} and {\emph{Wind}} data) associated with increasing temperature of the coronal source, including variation across coronal holes and active regions (ARs). Mapping sources with coronal modeling for {\emph{PSP}}’s early Encs. has nonetheless highlighted challenges yet to be addressed.
The total amount of open flux predicted by modeling to escape the corona has been observed to underestimate that measured {\emph{in situ}} in the solar wind at 1~AU \citep{2017ApJ...848...70L}, and this disconnect persists at least down to 0.13~AU \citep{2021A&A...650A..18B}, suggesting there exist solar wind sources which are not yet captured accurately by modeling. Additionally, due to its orbital plane lying very close to the solar equator, the solar minimum configuration of a near-flat HCS and streamer belt means {\emph{PSP}} spends a lot of time skirting the HCS \citep{2021A&A...650L...3C}, limiting the types of sources that can be sampled. For example, {\emph{PSP}} has yet to take a significant sample of fast solar wind from deep inside a coronal hole, instead sampling primarily streamer belt wind and coronal hole boundaries. Finally, connectivity modeling is typically time-static, either due to physical model constraints or computational tractability. However, the coronal magnetic field is far from static, with processes such as interchange reconnection potentially allowing previously closed structures to contribute to the solar wind \citep{2020ApJ...894L...4F,2020JGRA..12526005V}, as well as disconnection at the tips of the streamer belt \citep{2020ApJ...894L..19L,2021A&A...650A..30N}. Such transient processes are not captured by the static modeling techniques discussed in this section, but have still been probed with {\emph{PSP}}, particularly in the context of streamer blowouts (SBOs; \S\ref{SWSAS:strmblt}), by combining remote observations and {\emph{in situ}} observations, typically requiring multi-S/C collaboration to minimize modeling uncertainty.
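The open-flux comparison behind this discrepancy reduces to a simple estimate: if $|B_r|$ is approximately uniform over a sphere of radius $r$, the unsigned open flux is $\Phi_{\rm open} = 4\pi r^2 \langle|B_r|\rangle$, which can then be evaluated and compared at different heliocentric distances. A minimal sketch with round illustrative numbers (our own, not the actual measurements of the cited studies):

```python
import numpy as np

# Unsigned open-flux estimate from single-point |B_r| measurements, assuming
# |B_r| is roughly uniform over the sphere at radius r. The numbers below are
# round illustrative values, not mission data.
AU = 1.496e11  # meters

def open_flux(mean_abs_br_nT, r_au):
    """Phi_open = 4 pi r^2 <|B_r|>, returned in Weber."""
    r = r_au * AU
    return 4.0 * np.pi * r**2 * mean_abs_br_nT * 1e-9

# If |B_r| scales exactly as r^-2, estimates at different radii must agree:
phi_1au = open_flux(2.5, 1.0)               # ~2.5 nT of |B_r| at 1 AU
phi_013 = open_flux(2.5 / 0.13**2, 0.13)    # same flux evaluated at 0.13 AU
print(f"{phi_1au:.2e} Wb vs {phi_013:.2e} Wb")
assert np.isclose(phi_1au, phi_013)
```

Deviations of the measured ratio from unity at different radii, or a persistent excess of the measured value over the model prediction, are what the cited studies quantify.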
Such collaborations are rapidly becoming more and more possible, especially with the recent launch of Solar Orbiter \citep[{\emph{SolO}}; ][]{2020AA...642A...1M,2020A&A...642A...4V}, recently yielding for the first time detailed imaging of the outflow of a plasma parcel in the mid corona by {\emph{SolO}} followed by near-immediate {\emph{in situ}} measurements by {\emph{PSP}} \citep{2021ApJ...920L..14T}. \subsection{Sources of Slow Alfv\'enic Solar Wind} \label{SWSAS:slowlafv} {\emph{PSP}}'s orbit has a very small inclination angle w.r.t. the ecliptic plane. It was therefore not surprising to find that over the first Encs. the solar wind streams were observed, with few exceptions, to have slower velocities than those expected for the high-speed streams (HSSs) typically originating in polar coronal holes around solar minimum. What has been a surprise, however, is that the slow solar wind streams were seen to have turbulence and fluctuation properties, including the presence of large-amplitude folds in the magnetic field, i.e., switchbacks, typical of the Alfv\'enic fluctuations usually associated with HSSs. Further out from the Sun, the general dichotomy between fast Alfv\'enic solar wind and slow, non-Alfv\'enic solar wind \citep{1991AnGeo...9..416G} is broken by the so-called slow Alfv\'enic streams, first noticed in the {\emph{Helios}} data acquired at 0.3~AU \citep{1981JGR....86.9199M}. That slow wind interval appeared to have many of the same characteristics as the fast wind, including the presence of predominantly outwards Alfv\'enic fluctuations, except for the overall speed. These were observed at solar maximum, while {\emph{PSP}}'s observations over the first four years have covered solar minimum and the initial appearance of activity of the new solar cycle.
Alfv\'enic slow wind streams have also been observed at 1~AU \citep{2011JASTP..73..653D}, and have been extensively studied in their composition, thermodynamic, and turbulent characteristics \citep{2015ApJ...805...84D, 2019MNRAS.483.4665D}. The results of these investigations point to a similar origin \citep{2015ApJ...805...84D} for fast and Alfv\'enic slow wind streams. Instances of slow Alfv\'enic wind at solar minimum were found by re-examining the {\emph{Helios}} data collected in the inner heliosphere \citep{2019MNRAS.482.1706S, 2020MNRAS.492...39S, 2020A&A...633A.166P}, again supporting a similar origin -- coronal holes -- for fast and slow Alfv\'enic wind streams. Reconstruction of the magnetic sources of the wind seen by {\emph{PSP}} during the first perihelion clearly showed the wind origin to be associated with a small isolated coronal hole. Both ballistic backwards projection in conjunction with the PFSS method \citep{2020ApJS..246...54P, 2020ApJS..246...23B} and global MHD models showed {\emph{PSP}} connected to a negative-polarity equatorial coronal hole, within which it remained for the entire Enc. \citep{2019ApJ...874L..15R, 2020ApJS..246...24R}. The {\emph{in situ}} plasma associated with the small equatorial coronal hole was a highly Alfv\'enic slow wind stream, parts of which were also seen near Earth at L1 \citep{2019Natur.576..237B, 2020ApJS..246...54P}. The relatively high intermittency and low compressibility \citep{2020A&A...633A.166P}, increased turbulent energy level \citep{2020ApJS..246...53C}, and spectral break radial dependence are similar to the fast wind \citep{2020ApJS..246...55D}, while particle distribution functions are also more anisotropic than in non-Alfv\'enic slow wind \citep{2020ApJS..246...70H}.
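The Alfv\'enicity invoked throughout this discussion is commonly quantified by the normalized cross helicity $\sigma_c = 2\langle\delta\bm{v}\cdot\bm{b}\rangle/(\langle|\delta\bm{v}|^2\rangle + \langle|\bm{b}|^2\rangle)$, where $\bm{b} = \delta\B/\sqrt{4\pi\rho}$ is the magnetic fluctuation in Alfv\'en units. A short sketch on synthetic data (our own illustration, not any mission pipeline):

```python
import numpy as np

# Normalized cross helicity, a standard measure of Alfvenicity. The time
# series below are synthetic, for illustration only; b is the magnetic
# fluctuation expressed in Alfven units, dB / sqrt(4 pi rho).
def cross_helicity(dv, b):
    """sigma_c = 2<dv.b> / (<|dv|^2> + <|b|^2>); +/-1 means purely Alfvenic."""
    num = 2.0 * np.mean(np.sum(dv * b, axis=1))
    den = np.mean(np.sum(dv**2, axis=1)) + np.mean(np.sum(b**2, axis=1))
    return num / den

rng = np.random.default_rng(0)
b = rng.standard_normal((10000, 3))          # synthetic fluctuation series
sigma_aligned = cross_helicity(b.copy(), b)  # dv = b: purely Alfvenic
sigma_random = cross_helicity(rng.standard_normal((10000, 3)), b)
print(sigma_aligned, sigma_random)           # ~1 and ~0, respectively
```

Highly Alfv\'enic streams, fast or slow, have $|\sigma_c|$ close to unity over the fluctuation range, which is the sense in which the slow Alfv\'enic wind resembles the fast wind.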
Whether Alfv\'enic slow wind always originates from small isolated or quasi-isolated coronal holes ({\emph{e.g.}}, narrow equatorward extensions of polar coronal holes), with large expansion factors within the subsonic/supersonic critical point, or whether the boundaries of large, polar coronal holes might also produce Alfv\'enic slow streams, is still unclear. There is however one possible implication of the overall high Alfv\'enicity observed by {\emph{PSP}} in the deep inner heliosphere: that all of the solar wind might be born Alfv\'enic, or rather, that Alfv\'enic fluctuations may be a universal initial condition of solar wind outflow. Whether this is borne out by {\emph{PSP}} measurements closer to the Sun remains to be seen. \subsection{Near Streamer Belt Wind} \label{SWSAS:strmblt} As already discussed in part in \S\ref{SWSAS:slowlafv}, the slow solar wind exhibits at least two states. One state has properties similar to the fast wind: it is highly Alfv\'enic, has low densities and a low source temperature (low charge state), and appears to originate from inside coronal holes \citep[see for instance the review by][]{2021JGRA..12628996D}. The other state of the slow wind displays greater variability, higher densities, and more elevated source temperatures. The proximity of {\emph{PSP}} to the Sun and the extensive range of longitudes scanned by the probe during an Enc. make it inevitable that at least one of the many S/C (the Solar and Heliospheric Observatory [{\emph{SOHO}}; \citealt{1995SSRv...72...81D}], {\emph{STEREO}}, {\emph{SDO}}, \& {\emph{SolO}}) currently orbiting the Sun will image the solar wind measured {\emph{in situ}} by {\emph{PSP}}.
Since coronagraphs and heliospheric imagers tend to image plasma located preferentially (but not only) in the vicinity of the so-called Thomson sphere (very close to the sky plane for a coronagraph), the connection between a feature observed in an image and its counterpart measured {\emph{in situ}} is most likely to happen when {\emph{PSP}} crosses the Thomson sphere of the imaging instrument. A first study exploited such orbital configurations that occurred during Enc.~2, when {\emph{PSP}} crossed the Thomson spheres of the {\emph{SOHO}} and {\emph{STEREO}}-A imagers \citep{2020ApJS..246...37R}. In this study, the proton speed measured by SWEAP was used to ballistically trace back the source locations of the solar wind to identify the source in coronagraphic observations. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rouillard2020.png} \caption{A zoomed-in view of a Carrington map built from LASCO-C3 bands of image pixels extracted at 8~R$_{\odot}$. The {\emph{PSP}} path corresponds to the points of magnetic connectivity traced back to the radial distance of the map (8~\textit{R}$_\odot$). The connectivity is estimated by assuming the magnetic field follows a Parker spiral calculated from the speed of the solar wind measured {\emph{in situ}} at {\emph{PSP}}. The color coding is defined by the density ($N\times r^2$) measured {\emph{in situ}} by {\emph{PSP}}, with red corresponding to high densities and blue to low densities. Figure adapted from \cite{2020ApJS..246...37R}.} \label{FIG:Rouillard2020} \end{figure*} Fig.~\ref{FIG:Rouillard2020} presents, in a latitude versus longitude format, a comparison between the brightness of the solar corona observed by the Large Angle and Spectrometric COronagraph \citep[LASCO;][]{1995SoPh..162..357B} on {\emph{SOHO}} and the density of the solar wind measured {\emph{in~situ}} by {\emph{PSP}} and color-coded along its trajectory.
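The ballistic back-mapping used in such studies amounts to constant-speed radial propagation plus a Parker-spiral longitude shift, $\Delta\phi = \Omega_\odot (r_{\rm sc} - r_{\rm src})/v_{\rm sw}$. A minimal Python sketch (the function name and the example numbers are our own illustrative choices, not values from the cited work):

```python
import numpy as np

# Ballistic back-mapping sketch: assume a constant-speed radial flow, so the
# source Carrington longitude leads the s/c longitude by the spiral angle
# dphi = Omega_sun * (r_sc - r_src) / v_sw. Illustrative values only.
OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)  # sidereal Carrington rate [rad/s]
RSUN = 6.957e8                                # solar radius [m]

def backmap_longitude(lon_sc_deg, r_sc_rsun, v_sw_kms, r_src_rsun=8.0):
    """Carrington longitude of the solar-wind source, mapped back to r_src."""
    dr = (r_sc_rsun - r_src_rsun) * RSUN      # radial distance travelled [m]
    dphi = OMEGA_SUN * dr / (v_sw_kms * 1e3)  # spiral angle accumulated [rad]
    return (lon_sc_deg + np.degrees(dphi)) % 360.0

# e.g., s/c at 35 Rsun in 300 km/s slow wind, mapped back to an 8 Rsun map:
print(f"{backmap_longitude(100.0, 35.0, 300.0):.1f} deg")  # -> 110.3 deg
```

Slower wind accumulates a larger spiral angle over the same radial distance, which is why the {\emph{in situ}} proton speed enters the connectivity estimate directly.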
The figure shows that as long as the probe remained inside the bright streamers, the density of the solar wind was high, but as soon as it exited the streamers (due to the orbital trajectory of {\emph{PSP}}), the solar wind density suddenly dropped by a factor of four while the solar wind speed remained roughly constant around 300~km~s$^{-1}$ \citep{2020ApJS..246...37R}. \cite{2021ApJ...910...63G} exploited numerical models and ultraviolet imaging to show that as {\emph{PSP}} exited streamer flows it sampled slow solar wind released from deeper inside an isolated coronal hole. These measurements nicely illustrate the transitions that can occur between two slow solar wind types over a short time period. While switchbacks were observed in both flows, the patches of switchbacks also differed between the two types of slow wind, with more intense switchback patches measured in the streamer flows \citep{2020ApJS..246...37R}; this is also seen in the spectral power of switchbacks \citep{2021A&A...650A..11F}. \cite{2021ApJ...910...63G} showed that both types of solar wind can exhibit very quiet solar wind conditions (with no switchback occurrence). These quiet periods are at odds with theories that relate the formation of the slow wind to a continual reconfiguration of the coronal magnetic field lines due to footpoint exchange, since this should drive strong wind variability continually \citep[{\emph{e.g.}},][]{1996JGR...10115547F}. \cite{2021A&A...650L...3C} also measured a distinct transition between streamer belt and non-streamer belt wind by looking at turbulence properties during the fourth {\emph{PSP}} perihelion, when the HCS was flat and {\emph{PSP}} skirted the boundary for an extended period of time.
They associated lower turbulence amplitude, higher magnetic compressibility, a steeper turbulence spectrum, lower Alfv\'enicity and a lower frequency spectral break with proximity to the HCS, showing that at {\emph{PSP}}'s perihelia distances {\emph{in situ}} data allow an indirect distinction between solar wind sources. Finally, in addition to steady state streamer belt and HCS connectivity, remote sensing and {\emph{in situ}} data from {\emph{PSP}} have also been used to track transient solar phenomena erupting or breaking off from the streamer belt. \cite{2020ApJS..246...69K} detected the passage of a SBO coronal mass ejection (SBO-CME) during {\emph{PSP}} Enc.~1 and, via imaging from {\emph{STEREO}}-A at 1~AU and coronal modeling with WSA, associated it with a specific helmet streamer. Similarly, \cite{2020ApJ...897..134L} associated a SBO with a CME measured at {\emph{PSP}} during the second Enc. and, via stereographic observations from 1~AU and arrival time analysis, modeled the underlying flux rope structure. \subsection{Fast Solar Wind Sources} \label{SWSAS:fstwnd} During the first eight Encs. of {\emph{PSP}}, there were only very few observations of fast solar wind, {\emph{e.g.}}, on 9 Nov. 2018 and 11 Jan. 2021. Most of the time, {\emph{PSP}} was inside slow wind streams. Thus, exploration of the sources of the fast wind remains future work. The first observed fast wind interval was included in the study by \cite{2020ApJ...892...88B}, who investigated the relation between the suprathermal solar wind electron population called the strahl and the shape of the electron VDF in the solar corona. Combining {\emph{PSP}} and {\emph{Helios}} observations, they found that the strahl parallel temperature ($T_{s\parallel}$) does not vary with radial distance and is anticorrelated with the solar wind velocity, which indicates that $T_{s\parallel}$ is a good proxy for the electron coronal temperature.
Fig.~\ref{FIG:Bercic2020} shows the evolution of $T_{s\parallel}$ along a part of the first {\emph{PSP}} orbit trajectory ballistically projected down to the corona to produce sub-S/C points. A PFSS model was used to predict the magnetic connectivity of the sub-S/C points to the solar surface. The observed fast solar wind originates from an equatorial coronal hole \citep{2020ApJS..246...23B} and is marked by low $T_{s\parallel}$ ($<75$~eV). These values are in excellent agreement with the coronal hole electron temperatures obtained via spectroscopic techniques \citep{1998A&A...336L..90D,2002SSRv..101..229C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Bercic2020.png} \caption{The evolution of $T_{s\parallel}$ along part of {\emph{PSP}} orbit 1 between 30 Oct. 2018, 00:30~UT (Universal Time) and 23 Nov. 2018, 17:30~UT. The {\emph{PSP}} trajectory is ballistically projected down to the corona ($2~R_\odot$) to produce sub-S/C points. The colored lines denote the magnetic field lines mapped from the sub-S/C points to the solar surface as predicted by the PFSS model with source surface height $2~R_\odot$, the same as used in \cite{2019Natur.576..237B} and \cite{2020ApJS..246...23B}. The white line shows the PFSS neutral line. The points and magnetic field lines are colored w.r.t. hour-long averages of $T_{s\parallel}$. The corresponding image of the Sun is a synoptic map of the 193~Å emission synthesized from {\emph{STEREO}}/EUVI and {\emph{SDO}}/AIA for CR~2210, identical to the one used by \cite{2020ApJS..246...23B} in their Figs.~5 and 9. } \label{FIG:Bercic2020} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Shi2020.png} \caption{ Normalized cross helicity $\sigma_c$ of wave periods 112~s$-$56~s as a function of the radial distance to the Sun R (horizontal axis; $R_S\equiv R_\odot$) and radial speed of the solar wind $V_r$ (vertical axis). The colors of each block represent the median values of the binned data.
Text on each block shows the value of the block and the number of data points (bracketed) in the block. Figure adapted from \citet{2021A&A...650A..21S}.} \label{FIG:Shi2020} \end{figure*} \cite{2021A&A...650A..21S} analyzed data from the first five Encs. of {\emph{PSP}} and showed how the Alfv\'enicity varies with the solar wind speed. Fig.~\ref{FIG:Shi2020} shows the statistical result of the normalized cross helicity $\sigma_c$ for waves with periods between 112 and 56 seconds, well inside the inertial range of the MHD turbulence. Although the result may be affected by certain individual wind streams due to the limited volume of data, overall a positive $\sigma_c-V_r$ correlation is observed, indicating that the faster wind is generally more Alfv\'enic than the slower wind. We should emphasize that the result is acquired mostly from measurements of the slow solar wind, as the fast wind was rarely observed by {\emph{PSP}}. Thus, the result implies that even within the slow solar wind, the faster streams are more Alfv\'enic than the slower streams. This could be a result of the shorter nonlinear evolution of the turbulence in the faster streams, since nonlinear evolution decreases the Alfv\'enicity. \cite{2020ApJS..246...54P} and \cite{2021A&A...650A..21S} showed that the slow wind originating from the equatorial pseudostreamers is Alfv\'enic while that originating from the boundaries of the polar coronal holes is low-Alfv\'enic. Thus, this speed dependence of the Alfv\'enicity could also be related to the different sources of the slow wind streams.
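The normalized cross helicity used in such analyses is conventionally computed from velocity and magnetic field fluctuations, with the latter converted to Alfv\'en units, as $\sigma_c = 2\langle \delta v \cdot \delta b\rangle / (\langle|\delta v|^2\rangle + \langle|\delta b|^2\rangle)$. A schematic implementation of this standard definition (illustrative, not the cited authors' code):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]
M_P = 1.673e-27      # proton mass [kg]

def normalized_cross_helicity(dv, db, n_p):
    """sigma_c = 2<dv . b> / (<|dv|^2> + <|b|^2>), b = dB / sqrt(mu0 rho).

    dv  : (N, 3) velocity fluctuations [m/s]
    db  : (N, 3) magnetic field fluctuations [T]
    n_p : proton number density [m^-3] (rho ~ m_p * n_p)
    """
    b = db / np.sqrt(MU0 * M_P * n_p)              # Alfven units [m/s]
    num = 2.0 * np.mean(np.sum(dv * b, axis=1))
    den = np.mean(np.sum(dv**2, axis=1)) + np.mean(np.sum(b**2, axis=1))
    return num / den
```

A pure Alfv\'en wave gives $\sigma_c = \pm 1$ depending on the sense of propagation relative to the mean field, while balanced or non-Alfv\'enic fluctuations give $|\sigma_c| \ll 1$.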
\subsection{Active Region Sources} \label{SWSAS:actvrgn} The magnetic structure of the corona is important in determining solar wind outflow in at least two different ways: closed coronal field lines provide a geometrical backbone that determines the expansion rate of neighboring open field lines, while the dynamics of the boundaries between closed and open fields provide time-dependent heating and acceleration mechanisms, as occurs with emerging ARs. Emerging ARs reconfigure the local coronal field, leading to the formation of new coronal holes or coronal hole corridors, often at their periphery; depending on the latitude, emergence may lead to the formation of large pseudostreamers or the reconfiguration of helmet streamers, thereby changing solar wind distributions. Such reconfiguration is also accompanied by radio bursts and energetic particle acceleration, with at least one energetic particle event seen by IS$\odot$IS, on 4 Apr. 2019, attributed to this type of process \citep{2020ApJS..246...35L,2020ApJ...899..107K}. The event seen by {\emph{PSP}} was very small, with peak 1~MeV proton intensities of $\sim 0.3$ particles~cm$^{-2}$~sr$^{-1}$~s$^{-1}$~MeV$^{-1}$. Temporal association between particle increases and small brightness surges in the EUV observed by {\emph{STEREO}}, which were also accompanied by type III radio emission seen by the Electromagnetic Fields Investigation on {\emph{PSP}}, provided evidence that the source of this event was an AR nearly $80^\circ$ east of the nominal {\emph{PSP}} magnetic footpoint, suggesting field lines expanding over a wide longitudinal range between the AR in the photosphere and the corona. \cite{2021A&A...650A...6C} and \cite{2021A&A...650A...7H} further studied the ARs from these times with remote sensing and {\emph{in situ}} data, including the type III bursts, associating these escaping electron beams with AR dynamics and the open field lines indicated by the type III radiation.
The fractional contribution of ARs to the solar wind is negligible at solar minimum and typically around $40\% \mbox{--} 60\%$ at solar maximum, scaling with sunspot number \citep{2021SoPh..296..116S}. The latitudinal extent of AR solar wind is highly variable between different solar cycles, varying from a band of about $\pm30^\circ$ to $\pm60^\circ$ around the equator. As the solar cycle activity increases, {\emph{PSP}} is expected to measure more wind associated with ARs. Contemporaneous measurements by multiple instruments and opportunities for quadratures and conjunctions with {\emph{SolO}} and {\emph{STEREO}} abound, and should shed light on the detailed wind types originating from AR sources. \subsection{Switchback Stream Sources} \label{SWSAS:sbs} As discussed in previous sections, large-amplitude fluctuations with the characteristics of AWs propagating away from the Sun are ubiquitous in many solar wind streams. Though such features are most frequently found within fast solar wind streams at solar minimum, there are also episodes of Alfv\'enic slow wind visible both at solar minimum and maximum at 0.3~AU and beyond \citep[in the {\emph{Helios}} and {\emph{Wind}} data;][]{2020SoPh..295...46D,2021JGRA..12628996D}. A remarkable aspect of {\emph{PSP}} measurements has been the fact that Alfv\'enic fluctuations also tend to dominate the slow solar wind in the inner heliosphere. Part and parcel of this turbulent regime are the switchback patches seen throughout the solar Encs. by {\emph{PSP}}, with the possible exception of Enc.~3. As the Probe perihelia get closer to the Sun, there are indications that the clustering of switchbacks into patches remains a prominent feature, though their amplitude decreases w.r.t. the underlying average magnetic field.
The sources of such switchback patches appear to be open-field coronal hole regions, of which at least a few have been identified as small isolated or equatorial coronal holes (this was the case for the {\emph{PSP}} connection to the Sun throughout the first perihelion), while streams originating at the boundaries of polar coronal holes, although also permeated by switchbacks, appear to be globally less Alfv\'enic. The absence of well-defined patches of switchbacks in measurements at 1~AU or in other S/C data, together with the association of patches with scales similar to supergranulation when projected backwards onto the Sun, are indications that switchback patches are a signature of solar wind source structure. {\emph{PSP}} measurements near the Sun provide compelling evidence for the switchback patches being the remnants of magnetic funnels and supergranules \citep{2021ApJ...923..174B,2021ApJ...919...96F}. \section{Kinetic Physics and Instabilities in the Young Solar Wind} \label{KPIYSW} In addition to the observation of switchbacks, the ubiquity of ion- and electron-scale waves, the deformation of the particle VDF from an isotropic Maxwellian, and the kinetic processes connecting the waves and VDFs have been topics of focused study. The presence of these waves and departures from thermodynamic equilibrium was not wholly unexpected, given previous inner heliospheric observations by {\emph{Helios}} \citep{2012SSRv..172...23M}, but the observations by {\emph{PSP}} at previously unexplored distances have helped to clarify the role they play in the thermodynamics of the young solar wind. In addition, the intensity and large variety of plasma waves in the near-Sun solar wind have offered new insight into the kinetic physics of plasma wave growth.
\subsection{Ion-Scale Waves \& Structures} \label{KPIYSW.ion} The prevalence of electromagnetic ion-scale waves in the inner heliosphere was first revealed by {\emph{PSP}} during Enc.~1 at $36-54~R_\odot$ by \cite{2019Natur.576..237B} and studied in more detail by \cite{2020ApJ...899...74B}, who suggested that kinetic plasma instabilities may play a role in ion-scale wave generation. A statistical study by \cite{2020ApJS..246...66B} showed that a radial magnetic field is a favorable condition for these waves, namely that $30\%-50\%$ of the circularly polarized waves were present in a quiet, radial magnetic field configuration. However, single-point S/C measurements cannot answer definitively whether the ion-scale waves still exist in non-radial fields, merely hidden by turbulent fluctuations perpendicular to the magnetic field. Large-amplitude electrostatic ion-acoustic waves are also frequently observed, and have been conjectured to be driven by ion-ion and ion-electron drift instabilities \citep{2020ApJ...901..107M,2021ApJ...911...89M}. These ubiquitously observed ion-scale waves strongly suggest that they play a role in the dynamics of the expanding young solar wind. The direction of ion-scale wave propagation, however, is ambiguous, and the procedure for Doppler-shifting the wave frequencies from the S/C to the plasma frame is nontrivial. A complementary analysis of the electric field measurements is required \citep[see][for a discussion of initially calibrated DC and low frequency electric field measurements from FIELDS]{2020JGRA..12527980M}. These electric field measurements enabled \cite{2020ApJS..246...66B} to constrain permissible wave polarizations in the plasma frame by Doppler-shifting the cold plasma dispersion relation and comparing to the S/C frame measurements.
They found that a majority of the observed ion-scale waves propagate away from the Sun, suggesting that both left-handed and right-handed wave polarizations are plausible. The question of the origin of these waves and their role in cosmic energy flow remains a topic of fervent investigation; {\emph{cf.}} the reviews in \cite{2012SSRv..172..373M,2019LRSP...16....5V}. An inquiry into the plasma measurements during these wave storms is a natural one, given that ion VDFs are capable of driving ion-scale waves after sufficient deviation from local thermal equilibrium \citep[LTE;][]{1993tspm.book.....G}. Common examples of such non-LTE features are relatively drifting components, {\emph{e.g.}}, a secondary proton beam, temperature anisotropies along and transverse to the local magnetic field, and temperature disequilibrium between components. Comprehensive statistical analyses of these VDFs have been performed using {\emph{in situ}} observations from {\emph{Helios}} at 0.3~AU \citep{2012SSRv..172...23M} and at 1~AU \citep[{\emph{e.g.}}, see the review of {\emph{Wind}} observations in][]{2021RvGeo..5900714W}. Many studies employing linear Vlasov theory combined with the observed non-thermal VDFs have implied that the observed structure can drive instabilities leading to wave growth. The question of which modes may dominate, {\emph{e.g.}}, right-handed magnetosonic waves or left-handed ion-cyclotron waves, and under what conditions, remains open, but {\emph{PSP}} is making progress toward solving this mystery. \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1.0\textwidth]{Verniero2020_fig1_v0.pdf}} \caption{Example event on 5 Apr. 2019 (Event \#1) featuring a strong correlation between a proton beam and an ion-scale wave storm. Shown is the (a) radial magnetic field component, (b) angle of wave propagation w.r.t.
B, (c) wavelet transform of B, (d) perpendicular polarization of B, (e) SPAN-i measured moment of differential energy flux, (f) SPAN-i measured moments of temperature anisotropy. In panels (c) and (d), the white dashed-dotted line represents the local $f_{cp}$. Figure adapted from \cite{2020ApJS..248....5V}. \label{fig:verniero2020f1}} \end{figure} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.95\textwidth]{Verniero2020_fig2_v0.pdf}} \caption{Beam evolution for times indicated by the dashed black lines in Fig.~\ref{fig:verniero2020f1}. Left: Proton VDFs, where each line refers to an energy sweep at a different elevation angle. Middle: VDF contour elevations that are summed and collapsed onto the $\theta$-plane. Right: VDF contour elevations that are summed and collapsed onto the azimuthal plane. The black arrow represents the magnetic field direction in SPAN-i coordinates, where the head is at the solar wind velocity (measured by SPC) and the length is the Alfv\'en speed. Figure adapted from \cite{2020ApJS..248....5V}. \label{fig:verniero2020f2}} \end{figure} During Enc.~2, {\emph{PSP}} witnessed intense secondary proton beams simultaneous with ion-scale waves at $\sim36~R_\odot$, using measurements from both SWEAP and FIELDS \citep{2020ApJS..248....5V}. The particle instrument suite, SWEAP, comprises a Faraday cup, called the Solar Probe Cup \citep[SPC;][]{2020ApJS..246...43C}, and top-hat electrostatic analyzers, called the Solar Probe ANalyzers, that measure electrons \citep[SPAN-e;][]{2020ApJS..246...74W} and ions \citep[SPAN-i;][]{10.1002/essoar.10508651.1}. The SPANs are partially obstructed by {\emph{PSP}}'s heat shield, leading to measurements of partial moments of the solar wind plasma, but full sky coverage can be leveraged using SPC. The placement of SPAN-i on the S/C was optimal for detecting proton beams, both during initial and ongoing Encs.
The time-of-flight capabilities of SPAN-i can separate protons from other minor ions, such as alpha particles. The instrument measures particle VDFs in 3D $(E,\theta,\phi)$ energy-angle phase-space. These particle VDFs were showcased in \cite{2020ApJS..248....5V}, where two events featuring the evolution of an intense proton beam simultaneous with ion-scale wave storms were displayed. The first of these, shown in Fig.~\ref{fig:verniero2020f1}, involved left-handed circularly polarized waves propagating parallel to a quiet, nearly radial magnetic field; the frequencies of these waves were near the proton gyrofrequency ($f_{cp}$). From the FIELDS magnetometer data, Fig.~\ref{fig:verniero2020f1}a shows the steady $B_r/|B|$; Fig.~\ref{fig:verniero2020f1}b shows, from MVA, the wave traveling nearly parallel to $\mathbf{B}$; Fig.~\ref{fig:verniero2020f1}c shows the wavelet transform of $\mathbf{B}$ over a narrow frequency range about $f_{cp}$, indicated by the white dashed horizontal line; and Fig.~\ref{fig:verniero2020f1}d represents the wave polarization, where blue is left-handed in the S/C frame and red is right-handed. The SPAN-i moments of differential energy flux are displayed in Fig.~\ref{fig:verniero2020f1}e, and the temperature anisotropy, extracted from the temperature tensor, in Fig.~\ref{fig:verniero2020f1}f. The evolution of the proton VDFs reported in \cite{2020ApJS..248....5V} during this event (at the times indicated by the black dashed vertical lines in Fig.~\ref{fig:verniero2020f1}) is displayed in Fig.~\ref{fig:verniero2020f2}. The left column represents the proton VDF in 3D phase-space, where each line represents a different energy sweep at a different elevation angle. The middle column represents contours of the VDF in SPAN-i instrument coordinates $v_r$-$v_z$, summed and collapsed onto the $\theta$-plane.
The right column represents the VDF in the azimuthal plane, where one can notice the portion of the VDF that is obstructed by the heat shield. During this period of time, over 50\% of the proton core was in the SPAN-i FOV, so the event was deemed suitable for analysis. During both wave-particle interaction events described in \cite{2020ApJS..248....5V}, 1D fits were applied to the SPAN-i VDFs and used as input to a kinetic instability solver. Linear Vlasov analysis revealed many wave modes with positive growth rates, and showed that the proton beam was the main driver of the unstable plasma during these times. \cite{2021ApJ...909....7K} further investigated the nature of proton-beam driven kinetic instabilities by using 3D fits of the proton beam and core populations during Enc.~4. Using the plasma instability solver PLUMAGE \citep{2017JGRA..122.9815K}, they found significant differences in wave-particle energy transfer when comparing results from modeling the VDF as either one or two components. The differences between the waves predicted by the one- and two-component fits were not universal; in some instances, properly accounting for the beam simply enhanced the growth rate of the instabilities predicted by the one-component model, while for other intervals entirely different sets of waves were predicted to be generated. \vspace{.5 in} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=0.6\textwidth]{Verniero2021_fig1_v1.pdf}} \caption{Example of a proton VDF displaying a ``hammerhead'' feature. The VDF was transformed from the SPAN-i $\theta$-plane to the plasma frame in magnetic field-aligned coordinates. The black arrow represents the magnetic field, where the head is placed at the solar wind speed and the length is the Alfv\'en speed. The particles diffuse along contours predicted by quasilinear theory, shown as dashed black curves. Figure adapted from \cite{2022ApJ...924..112V}.
\label{fig:verniero2021f1}} \end{figure} \vspace{.5 in} \begin{figure} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1.0\textwidth]{Verniero2021_fig2_v1.pdf}} \caption{The proton VDF displays a ``hammerhead'' feature throughout this interval of enhanced right-handed wave power. The proton VDF at the time indicated by the vertical dashed black line is shown in Fig.~\ref{fig:verniero2021f1}. Figure adapted from \cite{2022ApJ...924..112V}. \label{fig:verniero2021f2}} \end{figure} During Encs.~4 and 5, {\emph{PSP}} observed a series of proton beams in which the proton VDF was strongly broadened in directions perpendicular to the magnetic field, but only at $|v_\parallel| \gtrsim 2-3 v_{\rm A}$, where $v_\parallel$ is the proton velocity parallel to the magnetic field relative to the peak of the proton VDF. At $|v_\parallel | \lesssim 2-3 v_{\rm A}$, the beam protons' velocities were much more closely aligned with the magnetic-field direction. The resulting level contours of the proton VDF exhibited a ``hammerhead'' shape \citep{2022ApJ...924..112V}. An example VDF is given in Fig.~\ref{fig:verniero2021f1}, at the time indicated by the vertical dashed line in Fig.~\ref{fig:verniero2021f2}. These new complex distributions were quantified by modeling the proton VDF as a sum of three bi-Maxwellians and using the temperature anisotropy of the third population as a proxy for the high-energy asymmetries. In addition, the observations substantiate the need for multi-component proton VDF fitting to more accurately characterize the plasma conditions at kinetic scales, as discussed in \cite{2021ApJ...909....7K}. \cite{2022ApJ...924..112V} found that these hammerhead distributions tended to occur at the same time as intense, right circularly polarized plasma waves at wavelengths comparable to the proton inertial length.
Moreover, the level contours of the VDF within the hammerhead region aligned with the velocity-space diffusion contours that arise in quasilinear theory when protons resonantly interact with parallel-propagating, right circularly polarized, fast-magnetosonic/whistler (FM/W) waves. These findings suggest that the hammerhead distributions occur when field-aligned proton beams excite FM/W waves and subsequently scatter off these waves. Resonant interactions between protons and parallel-propagating FM/W waves occur only when the protons' parallel velocities exceed $\simeq 2.6\ v_{\rm A}$, consistent with the observation that the hammerheads undergo substantial perpendicular velocity-space diffusion only at parallel velocities $\gtrsim 2.6\ v_{\rm A}$ \citep{2022ApJ...924..112V}. Initial studies of the transfer of energy between the ion-scale waves and the proton distribution were performed by \cite{2021A&A...650A..10V}. During an Enc.~3 interval in which an ion cyclotron wave (ICW) was observed in the FIELDS magnetometer data and SPC was operating in a mode where it rapidly measures a single energy bin, rather than scanning over the entire range of velocities, the work done by the perpendicular electric field on the measured region of the proton VDF was calculated. The energy transferred between the fields and particles was consistent with the damping of an ICW with a parallel wave-vector of the order of the inverse thermal proton gyroradius. \subsection{Electron-Scale Waves \& Structures} \label{KPIYSW.electron} Researchers in the 1970s recognized that electron distributions should change dramatically as solar wind electrons propagate away from the Sun \citep{1971ApJ...163..383C,hollweg1974electron,feldman1975solar}.
As satellites sampled regions from $\sim0.3$~AU to outside 5~AU, studies showed that the relative fractions of the field-aligned strahl and quasi-isotropic halo electrons change with radial distance in a manner that is inconsistent with adiabatic motion \citep{maksimovic2005radial,vstverak2009radial,2017JGRA..122.3858G}. The changes in heat flux, which is carried predominantly by the strahl \citep{scime1994regulation, vstverak2015electron} are also not consistent with adiabatic expansion. Research assessing the relative roles of Coulomb collisions and wave-particle interactions in these changes has often concluded that wave-particle interactions are necessary \citep{phillips1990radial, scudder1979theory, 2019MNRAS.489.3412B, 2013ApJ...769L..22B}. The ambipolar electric field is another mechanism that shapes the electron distributions \citep{lemaire1973kinetic,scudder2019steady}. {\emph{PSP}} has provided new insights into electrons in the young solar wind, and the role of waves and the ambipolar electric field in their evolution. \cite{2020ApJS..246...22H}, in a study of the first two Encs., found that radial trends inside $\sim0.3$~AU were mostly consistent with earlier studies. The halo density, however, decreases close to the Sun, resulting in a large increase in the strahl to halo ratio. In addition, unlike what is seen at 1~AU, the core electron temperature is anti-correlated with solar wind speed \citep{2020ApJS..246...22H,2020ApJS..246...62M}. The core temperature may thus reflect the coronal source region, as also discussed for the strahl parallel temperature in \S4.4 \citep{2020ApJ...892...88B}. \cite{2022ApJ...931..118A} confirmed the small halo density, showing that it continued to decrease in measurements closer to the Sun, and also found that the suprathermal population (halo plus strahl) comprised only 1\% of the solar wind electrons at the closest distances sampled. 
The electron heat flux, carried primarily by the strahl \citep{2020ApJ...892...88B, 2021A&A...650A..15H}, is also anticorrelated with solar wind speed \citep{2020ApJS..246...22H}. Closer to the Sun (from 0.125 to 0.25~AU), the normalized electron heat flux is also anti-correlated with beta \citep{2021A&A...650A..15H}. This beta dependence is not consistent with a purely collisional scattering mechanism. The signature of the ambipolar electric field has also been clearly revealed in electron data \citep{2021ApJ...916...16H, 2021ApJ...921...83B} as a deficit of electrons moving back towards the Sun. This deficit occurs more than 60\% of the time inside 0.2~AU. The angular dependence of the deficit is not consistent with Coulomb scattering, and the deficit disappears over the same radial distances as the increase in the halo. There is also a decrease in the normalized heat flux. Both observations provide additional support for the essential role of waves. The role of whistler-mode waves in the evolution of solar wind electrons and the regulation of heat flux has long been a topic of interest. Instability mechanisms including temperature anisotropy and several heat flux instabilities have been proposed \citep{1994JGR....9923391G,1996JGR...10110749G,2019ApJ...871L..29V}. Wave studies near 1~AU utilizing data from {\emph{Wind}}, {\emph{STEREO}}, {\emph{Cluster}} \citep{1997SSRv...79...11E} and the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction \citep[{\emph{ARTEMIS}};][]{2011SSRv..165....3A} missions provided evidence for both low amplitude parallel propagating whistlers \citep{2014ApJ...796....5L,2019ApJ...878...41T} and large amplitude highly oblique waves \citep{2010JGRA..115.8104B,2020ApJ...897..126C}.
The FIELDS instrument on {\emph{PSP}}, through waveform capture, spectral, and bandpass-filter datasets of both electric fields and search coil magnetic fields, has enabled the study of whistler-mode waves over a wide range of distances from the Sun and in association with different large-scale structures. This research, in concert with studies of solar wind electrons, has provided critical new evidence for the role of whistler-mode waves in the regulation of electron heat flux and the scattering of strahl electrons to produce the halo. Observational papers have motivated a number of theoretical studies to further elucidate the physics \citep{2020ApJ...903L..23M,2021ApJ...919...42M,2021ApJ...914L..33C,vo2022stochastic}. Enc.~1 waveform data provided the first definitive evidence for the existence of sunward propagating whistler mode waves \citep{2020ApJ...891L..20A}, an important observation because, if the waves have wavevectors parallel to the background magnetic field, only sunward-propagating waves can interact with the anti-sunward propagating strahl. The whistlers observed near the Sun occur with a range of wave angles from parallel to highly oblique \citep{2020ApJ...891L..20A,2021A&A...650A...8C,2022ApJ...924L..33C,Dudok2022_scm}. Because oblique whistler waves are elliptically polarized (a mixture of left- and right-hand polarization), oblique waves moving anti-sunward can also interact with electrons moving away from the Sun. Particle tracing simulations \citep{2021ApJ...914L..33C,vo2022stochastic} and PIC simulations \citep{2020ApJ...903L..23M,2021ApJ...919...42M,2019ApJ...887..190R} have revealed details of wave-electron interactions. Several studies have examined the association of whistlers with large-scale solar wind structures. There is a strong correlation between large amplitude waves and the boundaries of switchbacks \citep{2020ApJ...891L..20A}, and smaller waves can fill switchbacks \citep{2021A&A...650A...8C}.
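The electron energies affected by parallel whistlers can be estimated from the cold-plasma dispersion relation together with the first-order cyclotron resonance condition $v_\parallel = (\omega - \omega_{ce})/k_\parallel$. A rough numerical sketch, using illustrative near-Sun parameters ($B\approx50$~nT, $n_e\approx300$~cm$^{-3}$ are assumed values for the example, not taken from the cited studies):

```python
import math

# physical constants (SI)
ME, QE, C, EPS0 = 9.109e-31, 1.602e-19, 2.998e8, 8.854e-12

def resonant_energy_ev(f_wave, b_nt, n_cm3):
    """Electron energy cyclotron-resonant with a parallel whistler.

    Cold-plasma parallel whistler branch (valid for f_wave < f_ce):
        k^2 c^2 = w^2 + w * wpe^2 / (wce - w)
    First-order resonance for counter-streaming electrons:
        v_par = (w - wce) / k.
    """
    w = 2 * math.pi * f_wave
    wce = QE * (b_nt * 1e-9) / ME                      # electron gyrofrequency
    wpe = math.sqrt((n_cm3 * 1e6) * QE**2 / (EPS0 * ME))  # plasma frequency
    k = math.sqrt(w**2 + w * wpe**2 / (wce - w)) / C   # wavenumber [rad/m]
    v_res = (w - wce) / k                              # resonant velocity [m/s]
    return 0.5 * ME * v_res**2 / QE                    # energy [eV]
```

With these assumed parameters a 300~Hz plasma-frame wave resonates with $\sim$50~eV electrons, and lower-frequency whistlers resonate with progressively more energetic electrons, consistent with the strahl energy range discussed in this section.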
The whistlers are also seen primarily in the slow solar wind \citep{2021A&A...650A...9J,2022ApJ...924L..33C}, as initially observed near 1~AU \citep{2014ApJ...796....5L} and in recent studies using {\emph{Helios}} data between 0.3 and 1~AU \citep{2020ApJ...897..118J}. Several studies have found differences in the evolution of the non-thermal electron distributions between the slow and fast wind, suggesting that different scattering mechanisms are active in the fast and slow wind \citep{pagel2005understanding,vstverak2009radial}. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{SSR2021_figure_Whistlers-eps-converted-to.pdf} \caption{ Enlargement of the trailing edge of the switchback of Fig.~\ref{fig:icx1}. Panel (a) shows the magnetic field from MAG with the same color code as in Fig.~\ref{fig:icx1}. Panels (b) and (c) show magnetic and electric field wave perturbations, respectively. Panel (d) displays the dynamic spectrum of magnetic field perturbations $B_w$. The dashed curves in panels (d-f) represent the $f_{LH}$ frequency (bottom curve) and 0.1$f_{ce}$ (upper curve). Panel (e) displays the signed radial component of the Poynting flux, where red corresponds to sunward propagation. Panel (f) shows the wave normal angle relative to the direction of the background magnetic field.} \label{fig:mag} \end{figure} Fig.~\ref{fig:mag} shows a zoom-in of the trailing edge of the switchback displayed in Fig.~\ref{fig:icx1}. Fig.~\ref{fig:mag}a emphasizes that the local dip in the magnetic field is essentially caused by a decrease of its radial component. This dip coincides with an increase of the ratio between the electron plasma frequency ($f_{pe}$) and the electron gyrofrequency ($f_{ce}$) from 120 to $\approx500$. A polarization analysis reveals a right-handed circular polarization of the magnetic field and an elliptical polarization of the electric field perturbations, with a $\pi/2$ phase shift.
The dynamic spectrum in Fig.~\ref{fig:mag}d shows a complex inner structure of the wave packet, which consists of a series of bursts. The phase shift of the magnetic and electric field components transverse to the radial direction attests to sunward propagation of the observed waves. The sign of the radial component of the Poynting vector (Fig.~\ref{fig:mag}e) changes from positive (sunward) at high frequencies to negative (anti-sunward) at lower frequencies. The frequencies of these wave packets fall between $f_{LH}$ (lower dashed curves in Figs.~\ref{fig:mag}d-f) and one tenth of $f_{ce}$ (upper dashed curves). This suggests that the observed frequency range of the whistler waves is shifted down by the Doppler effect, as the whistler phase velocity ($300-500$~km~s$^{-1}$) is comparable to the plasma bulk velocity. The observed whistlers are found to have a wide range of wave normal angle values, from quasi-parallel propagation to quasi-electrostatic propagation close to the resonance cone, corresponding to the complex structure of the dynamic spectrum (Fig.~\ref{fig:mag}d). The wave normal angle panel (Fig.~\ref{fig:mag}f) thereby further supports the idea that the complex wave packet consists of a bunch of distinct and narrowband wave bursts. The wave frequency in the solar wind plasma frame, as reconstructed from the Doppler shift and the local plasma parameters, is found to be in the range of \SIrange{100}{350}{Hz}, which corresponds to $0.2-0.5\ f_{ce}$ (Fig.~\ref{fig:mag}d). Incidentally, the reconstructed wave frequency can be used to accurately calibrate the electric field measurements and determine the effective length of the electric field antennas, which was found to be approximately \SIrange{3.5}{4.5}{m} in the \SIrange{20}{100}{Hz} frequency range \citep{2020ApJ...891L..20A, 2020ApJ...901..107M}. More details can be found in \cite{2020ApJ...891L..20A}.
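The frequency reconstruction and the resonance estimates discussed here rest on two standard relations, sketched below in simplified (parallel-propagation, non-relativistic) form; this is a generic textbook statement rather than the specific fitting procedure of the cited papers. The S/C-frame frequency $f_{sc}$ is Doppler shifted from the plasma-frame frequency $f_{pl}$ by the bulk flow,
\begin{equation*}
2\pi f_{sc} \;=\; 2\pi f_{pl} \;+\; \mathbf{k}\cdot\mathbf{V}_{sw}
\;=\; 2\pi f_{pl}\left(1 + \frac{V_{sw}\cos\theta_{kV}}{v_{ph}}\right),
\end{equation*}
where $v_{ph}=2\pi f_{pl}/k$ is the whistler phase speed and $\theta_{kV}$ the angle between $\mathbf{k}$ and $\mathbf{V}_{sw}$; for a sunward-propagating wave $\cos\theta_{kV}<0$, so with $v_{ph}$ comparable to $V_{sw}$ the observed frequency is strongly downshifted, as noted above. The electron cyclotron resonance condition for parallel whistlers reads
\begin{equation*}
\omega - k_\parallel v_\parallel = n\,\Omega_{ce}, \qquad n=1 \;\Rightarrow\;
v_\parallel = \frac{\omega - \Omega_{ce}}{k_\parallel},
\end{equation*}
which, since $\omega<\Omega_{ce}$, selects electrons moving opposite to the wave; this is why sunward-propagating whistlers can resonate with the anti-sunward strahl, while oblique waves additionally open the Landau ($n=0$) and higher-order ($|n|\geq 2$) resonances.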
The population of such sunward propagating whistlers can efficiently interact with the energetic particles of the solar wind and affect the strahl population, broadening its field-aligned PAD through pitch-angle scattering. For sunward propagating whistlers around \SIrange{100}{300}{Hz}, the resonance conditions occur for electrons with energies from approximately \SI{50}{eV} to \SI{1}{keV}. This energy range coincides with that of the observed strahl \citep{2020ApJS..246...22H} and potentially leads to efficient scattering of the strahl electrons. Such an interaction can be even more efficient considering that some of the observed waves are oblique. For these waves, the effective scattering is strongly enhanced at higher-order resonances \citep{2020ApJ...891L..20A,2021ApJ...911L..29C}, which leads to a regulation of the heat flux as shown by \cite{2019ApJ...887..190R}, and to an increase in the fraction of the halo distribution with distance from the Sun. {\emph{PSP}} has provided direct evidence for scattering of strahl into halo by narrowband whistler-mode waves \citep{2020ApJ...891L..20A, 2021ApJ...911L..29C, 2021A&A...650A...9J}. The increase in halo occurs in the same beta range and radial distance range as the whistlers, consistent with production of halo by whistler scattering. Comparison of waveform capture data and electron distributions from Enc.~1 \citep{2021ApJ...911L..29C} showed that the narrowest strahl occurred when there were either no whistlers or very intermittent low amplitude waves, whereas the broadest distributions were associated with intense, long duration waves. Features consistent with an energy dependent scattering mechanism were observed in approximately half the broadened strahl distributions, as was also suggested by features in electrons displaying the signature of the ambipolar field \citep{2021ApJ...916...16H}. In a study of bandpass filtered data from Encs.
1 through 9, \cite{2022ApJ...924L..33C} have shown that the narrowband whistler-mode waves that scatter strahl electrons and regulate heat flux are not observed inside approximately 30~$R_\odot$. Instead, large amplitude electrostatic (up to 200~mV/m) waves in the same frequency range (from the $f_{LH}$ frequency up to a few tenths of $f_{ce}$) are ubiquitous, as shown in Fig.~\ref{fig:ESradial}. The peak amplitudes of the electrostatic (ES) waves ($\sim220$~mV/m) are an order of magnitude larger than those of the whistlers ($\sim40$~mV/m). Within the same region where the whistlers disappear, a deficit of sunward electrons is seen, and the density of halo relative to the total density decreases. When the deficit was observed, the relationship between normalized heat flux and parallel electron beta was not consistent with the whistler heat flux fan instability, a finding consistent with the loss of whistlers. The differences in the functional form of electron distributions due to this deficit very likely result in changes in the instabilities excited \citep{2021ApJ...916...16H, 2021A&A...656A..31B}. Theoretical studies have examined how changes in the distributions and background plasma properties might change which modes are destabilized. \cite{lopez2020alternative} examined the dependence on beta and on the ratio of the strahl speed to the electron Alfv\'en speed for different electromagnetic and electrostatic instabilities. This ratio decreases close to the Sun. Other studies \citep{2021ApJ...919...42M, 2019ApJ...887..190R,schroeder2021stability} have concluded that multiple instabilities operate sequentially and/or at different distances from the Sun. \begin{figure} \centering {\includegraphics[width=0.8\textwidth]{Cattell2022.png}} \caption{ Statistics of whistler-mode waves and ES waves identified in bandpass filter data.
Left hand column, from top to bottom: the number of BBF samples identified as ES waves versus radial distance, the number of BBF samples identified as whistler-mode waves versus radial distance, and the total number of BBF samples in Enc.~1 through Enc.~9 versus radial distance. Right hand column: the electrostatic wave occurrence rate and the whistler-mode wave occurrence rate, both versus radial distance. Note that the drop off outside 75~$R_\odot$ (0.3~AU) is associated with the impact of the decrease in frequency with radial distance on the algorithm used to identify waves. Figure adapted from \cite{2022ApJ...924L..33C}. \label{fig:ESradial}} \end{figure} Closer to the Sun, other scattering mechanisms must operate, associated with the large narrowband ES waves and the nonlinear waves. Note that in some cases these ES waves have been identified as ion acoustic waves \citep{2021ApJ...919L...2M}. \cite{dum1983electrostatic} discussed anomalous transport and heating associated with ES waves in the solar wind. The lack of narrowband whistler-mode waves close to the Sun and in regions of either low ($<1$) or high ($>10$) parallel electron beta may also be significant for understanding and modeling the evolution of flare-accelerated electrons, other stellar winds, the interstellar medium, and the intra-galaxy cluster medium. {\emph{PSP}} data have been instrumental in advancing the study of electron-resonant plasma waves other than whistler-mode waves. \citet{Larosa2022} presented the first unambiguous observations of the Langmuir z-mode in the solar wind (Langmuir-slow extraordinary mode) using high frequency magnetic field data. This wave mode is thought to play a key role in the conversion of electron-beam driven Langmuir waves into type~III and type~II solar radio emission \citep[{\emph{e.g.}},][and references therein]{Cairns2018}.
However, progress in understanding the detailed kinetic physics powering this interaction has been slowed by a lack of direct z-mode observations in the solar wind. Z-mode wave occurrence was found to be highly impacted by the presence of solar wind density fluctuations, confirming long-held theoretical assumptions. {\emph{PSP}} data also revealed the presence of unidentified electrostatic plasma waves near $f_{ce}$ in the solar wind (Fig.~\ref{fig:nearfce}). \citet{Malaspina2020_waves} showed that these waves occur frequently during solar Encs., but only when fluctuations in the ambient solar wind magnetic field become exceptionally small. \citet{2022ApJ...936....7T} identified that a necessary condition for the existence of these waves is the direction of the ambient solar wind magnetic field vector. They demonstrated that these waves occur for a preferential range of magnetic field orientations, and concluded their study by suggesting that these waves may be generated by the S/C interaction with the solar wind. \citet{Malaspina2021_fce} explored high-cadence observations of these waves, demonstrating that they are composed of many simultaneously present modes, one of which was identified as the electron Bernstein mode. The other wave modes remain inconclusively identified. \citet{Shi2022_waves} and \citet{Ma2021_waves} explored the possibility that these waves are created by nonlinear wave-wave interactions. Identifying the exact wave mode, the origin of these waves near $f_{ce}$, and their impact on the solar wind remain areas of active study. {\emph{PSP}} data from ever decreasing perihelion distances are expected to enable new progress. \begin{figure} \centering {\includegraphics[width=0.8\textwidth]{Malaspina2020_nearfce_example.png}} \caption{The left column shows a spectrogram of electric field data, with $f_{ce}$ indicated by the white line.
The near-cyclotron waves are present at the center of the interval, where fluctuations in the ambient magnetic field (b), solar wind velocity (c), plasma density (d), and electron temperature (e) show decreased variability. The right column shows a high-cadence observation of near-cyclotron waves, where three distinct wave types are identified (A, B, C). Type A was identified as an electron Bernstein wave. Figure adapted from \citet{Malaspina2021_waves} and \citet{Malaspina2020_waves}. \label{fig:nearfce}} \end{figure} \section{Turbulence} \label{TRBLCE} Turbulence refers to a class of phenomena that characteristically occurs in fluids and plasmas when nonlinear effects are dominant. Nonlinearity creates complexity, the involvement of many degrees of freedom, and an effective lack of predictability. In contrast, linear effects such as viscous damping or waves in uniform media tend to operate more predictably on just a few coordinates or degrees of freedom. Statistical properties in both the space and time domains are crucial to fully understanding turbulence \citep{2004RvMP...76.1015Z, 2015RSPTA.37340154M}. The relative independence of spatial and temporal effects in turbulence presents particular challenges to single-S/C missions such as {\emph{PSP}}. Nevertheless, various methods, ranging from Taylor's frozen-in hypothesis \citep{2015ApJ...801L..18K,2019ApJS..242...12C,2021A&A...650A..22P} to more elaborate ensemble methods \citep{2019ApJ...879L..16B,2020PhRvR...2b3357P,2020ApJS..246...53C,2021ApJ...923...89C}, have been brought to bear to reveal fundamental properties related to the turbulent dynamics of the newly explored regions of the solar atmosphere. This line of research is of specific importance to the {\emph{PSP}} mission in that turbulence is a likely contributor to heating of the corona and the origin of the solar wind -- two of the major goals of the {\emph{PSP}} mission.
In this section we review the literature related to {\emph{PSP}} observations of turbulence that has appeared in the first four years of the mission. \subsection{Energy Range and Large-Scale (Correlation Scale) Properties}\label{sec:5_large_scale} Turbulence is often described by a cascade process, which in simplest terms describes the transfer of energy across scales, from the largest to the smallest. The largest relevant scales act as reservoirs, and the smallest scales generally contribute most of the dissipation of the turbulent fluctuations. The drivers of turbulence are notionally the large ``energy-containing eddies'', which may be initially populated by a supply of fluctuations at the boundaries, or by injection by ``stirring'' in the interior of the plasma. In the solar wind the dynamics at the photosphere is generally believed to inject a population of fluctuations, usually described as Alfv\'enic fluctuations. These propagate upwards into the corona, triggering subsequent turbulent activity \citep{1999ApJ...523L..93M,2009ApJ...707.1659C,2011ApJ...736....3V,2013ApJ...776..124P}. However, large-scale organized shears in the magnetic and velocity fields also exist in the solar atmosphere. While these are not initially in a turbulent state, they may become so at a later time, and eventually participate by enhancing the supply of cascading energy. Stream interaction regions (SIRs) are an example of the latter. Still further stirring mechanisms are possible, such as injection of waves due to scattering of reflected particles upstream of shocks, or by newly ionized particles associated with pickup of interstellar neutrals \citep{2020ApJ...900...94P}. We will not be concerned with this latter class of energy injection processes here. Among the earliest reports from {\emph{PSP}}, \citet{2020ApJS..246...53C} described a number of observations of relevance to the energy range.
In particular, the large-scale edge of the power-law inertial range (see \S\ref{sec:turbulence_inertial_range}) indicates a transition, moving towards larger scales, into the energy range. \citet{2020ApJS..246...53C} report the presence of a shallower ``$1/f$'' range at these larger scales, a feature that is familiar from 1~AU observations \citep{1986PhRvL..57..495M}. It is important to recognize that in general turbulence theory provides no specific expectation for spectral forms at energy-containing scales, as these may be highly situation dependent. Indeed, the implied range of scales at which $1/f$ is observed {\emph{in~situ}} corresponds rather closely to the scales and frequencies at which features are observed in the photospheric magnetic field \citep{2007ApJ...657L.121M} and in the deep corona \citep{2008ApJ...677L.137B}. This correspondence strongly hints that the $1/f$ signal is a vestige of dynamical processes very close to the Sun, possibly in the dynamo. Still, \citet{2020ApJS..246...53C} point out that the measured nonlinear times at the {\it break scale} between the inertial and energy ranges suggest that there is sufficient time in transit for the source region to develop nonlinear correlations at that scale. This lends some credence to theories \citep[{\emph{e.g.}},][]{1989PhRvL..63.1807V,2018ApJ...869L..32M} offering explanations for the local dynamical emergence of $1/f$ signals. This issue remains to be fully elucidated. An important length scale that describes the energy range is the correlation scale, which is nominally the scale of the energy containing eddies \citep{2012JFM...697..296W}. The notion of a unique scale of this kind is elusive, given that a multiplicity of such scales exists, {\emph{e.g.}}, for MHD and plasmas. Even in incompressible MHD, one deals with one length for each of the two Elsasser energies, as well as separate correlation scales for magnetic field and velocity field fluctuations.
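For reference, these correlation scales are conventionally defined from the two-point correlation function; the definitions below are the standard ones and are not specific to any one of the analyses cited here. For the magnetic fluctuations $\mathbf{b}$, for instance,
\begin{equation*}
R(r) = \left\langle \mathbf{b}(\mathbf{x})\cdot\mathbf{b}(\mathbf{x}+r\hat{\mathbf{e}})\right\rangle,
\qquad
\lambda_c = \frac{1}{R(0)}\int_0^\infty R(r)\, dr,
\end{equation*}
with analogous definitions for the velocity and Elsasser fields. The simplified ``$1/e$'' estimate takes the lag at which $R(r)/R(0)$ first falls to $1/e$, which coincides with $\lambda_c$ exactly when $R$ is a pure exponential; the sensitivity of the integral definition to low frequency power is what motivates such surrogates.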
Accounting for compressibility, the fluctuations of density \citep{2020ApJS..246...58P} also become relevant, and when remote sensing ({\emph{e.g.}}, radio) techniques are used to probe density fluctuations observed by {\emph{PSP}} \citep{2020ApJS..246...57K}, the notion of an {\it{effective turbulence scale}} is introduced as a nominal characteristic scale. The correlation lengths themselves are formally defined as integrals computed from correlation functions, and are therefore sensitive to energy range properties. But this definition is notoriously sensitive to low frequency power \citep[or length of data intervals; see][]{2015JGRA..120..868I}. For this reason, simplified methods for estimating correlation lengths are often adopted. \cite{2020ApJS..246...53C} examined two of these in {\emph{PSP}} data -- the so-called {\bf{``$1/e$''}} method \citep[see also][]{2020ApJS..246...58P} and the break point method alluded to above. As expected based on analytical spectral models \citep[{\emph{e.g.}},][]{2007ApJ...667..956M}, correlation scales based on the break point and on $1/e$ are quite similar. A number of researchers have suggested that measured correlation scales near {\emph{PSP}} perihelia are somewhat shorter than what may be expected based on extrapolations of 1~AU measurements \citep{2020AGUFMSH0160013C}. It is possible that this is partly explained as a geometric effect, noting that the standard Parker magnetic field is close to radial in the inner heliosphere, while it subtends increasing angles relative to radial at larger distances. One approach to explaining these observations is based on a 1D turbulence transport code \citep{2020ApJS..246...38A} that distinguishes ``slab'' and ``2D'' correlation scales, a parameterization of correlation anisotropy \citep{1994ApJ...420..294B}.
Solutions of these equations \citep{2020ApJS..246...38A} were able to account for the shorter correlation lengths seen in selected {\emph{PSP}} intervals with nearly radial mean magnetic fields. This represents a partial explanation, but leaves open the question of what sets the value of the slab (parallel) correlation scale closer to the Sun. The parallel and perpendicular correlation scales, measured relative to the ambient magnetic field direction, have obvious fundamental importance in parameterizing interplanetary turbulence. These scales also enter into expressions for the decay rates of energy and related quantities in von Karman similarity theory and its extensions \citep{1938RSPSA.164..192D,2012JFM...697..296W}. These length scales, or a subset of them, then enter into essentially all macroscopic theories of turbulence decay in the solar wind \citep{2008JGRA..113.8105B,2010ApJ...708L.116V,2014ApJ...782...81V,2014ApJ...796..111L,2017ApJ...851..117A,2018ApJ...865...25U,2019JPlPh..85d9009C}. While the subject is complex and too lengthy for full exposition here, a few words are in order. First, the perpendicular scale is likely set by the supergranulation scale in the photosphere. A reasonably well accepted number is 35,000~km, although smaller values are sometimes favored. The perpendicular scale is often adopted as a controlling parameter in the cascade, in that the cascade is known to be anisotropic relative to the ambient field direction, favoring perpendicular spectral transfer \citep{1983JPlPh..29..525S,1995ApJ...438..763G,1999PhRvL..82.3444M}. The parallel correlation scale appears to be less well constrained in general, and may be regulated (at least initially in the photosphere) by the correlation {\it{time}} of magnetic field footpoint motions \citep[see, {\emph{e.g.}},][]{2006ApJ...641L..61G}.
Its value at the innermost boundary remains poorly determined, even as {\emph{PSP}} observations provide ever better determinations at lower perihelia, where the field direction is often radial. One interesting possibility is that shear-driven nonlinear Kelvin-Helmholtz-like rollups \citep{2006GeoRL..3314101L,2018MNRAS.478.1980H} drive solar wind fluctuations towards a state of isotropy, as reported prior to {\emph{PSP}} based on remote sensing observations \citep{2016ApJ...828...66D,1929ApJ....69...49R}. Shear induced rollups of this type would tap energy in large-scale shear flows, enhancing the energy range fluctuations and supplementing pre-existing driving of the inertial range \citep{2020ApJ...902...94R}. Such interactions are likely candidates for inducing a mixing, or averaging, between the parallel and perpendicular turbulence length scales in regions of Kelvin-Helmholtz-like rollups \citep{2020ApJ...902...94R}. In general, multi-orbit {\emph{PSP}} observations \citep{2020ApJS..246...53C,2021ApJ...923...89C,2020ApJS..246...38A} provide better determination of not only length scales but also other parameters that characterize the energy containing scales of turbulence. Knowledge of energy range parameters impacts practical issues such as the selection of appropriate times for describing local bulk properties such as mean density, a quantity that enters into computations of cross helicity and other measures of the Alfv\'enicity of observed fluctuations \citep{2020ApJS..246...58P}. Possibly the most impactful consequence of energy range parameters is their potential control over the cascade rate, and therefore over the plasma dissipation and heating, whatever the details of those processes may be. One approach is to estimate the energy supply into the smaller scales from the energy containing range by examining the evolution of the break point between the $1/f$ range and the inertial range \citep{2020ApJ...904L...8W}.
This approach involves assumptions about the relationship of the $1/f$ range to the inertial range. Using three orbits of {\emph{PSP}} data, these authors compared the energy supply rate estimated from the radial break point evolution with the estimated perpendicular proton heating rate, finding a reasonable level of heating in both fast and slow wind. Another approach to estimating heating rates in {\emph{PSP}} observations \citep{2020ApJS..246...30M} makes use of approximate connections between the heating rate and the radial gradient of temperature \citep{2007JGRA..112.7101V,2013ApJ...774...96B}, along with theoretical estimates from a form of stochastic heating \citep{2010ApJ...720..503C}. Again, reasonable levels of correspondence are found. Both of these approaches \citep{2020ApJ...904L...8W,2020ApJS..246...30M} derive interesting conclusions based in part on assumptions about the transport theory of temperature, or the transport of turbulent fluctuations. An alternative approach is based entirely on turbulence theory extended to the solar wind plasma and applied locally to {\emph{PSP}} data \citep{2020ApJS..246...48B}. In this case two evaluations are made -- an energy range estimate adapted from von Karman decay theory \citep{2012JFM...697..296W} and a third order Yaglom-like law \citep{1998GeoRL..25..273P} applied to the inertial range. Cascade rates about 100 times those observed at 1~AU are deduced. The consistency of the estimates of cascade rates obtained from {\emph{PSP}} observations employing these diverse methods suggests a robust interpretation of interplanetary turbulence and the role of the energy containing eddies in exerting control over the cascade.
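For orientation, the two evaluations just mentioned can be written schematically as follows; these are the standard forms, while the cited analyses include additional numerical factors and cross-helicity corrections. The von Karman-style energy range estimate expresses the decay rates in terms of the Elsasser amplitudes $Z^\pm$ and a similarity (correlation) scale $\lambda$,
\begin{equation*}
\epsilon^{\pm} \;\sim\; \frac{\left(Z^{\pm}\right)^{2} Z^{\mp}}{\lambda},
\end{equation*}
while the third order Yaglom-like (Politano-Pouquet) law determines the inertial-range cascade rate from mixed third-order structure functions,
\begin{equation*}
\left\langle \delta z^{\mp}_{\ell}\, \big|\delta\mathbf{z}^{\pm}_{\ell}\big|^{2} \right\rangle
\;=\; -\tfrac{4}{3}\,\epsilon^{\pm}\,\ell,
\end{equation*}
where $\delta\mathbf{z}^{\pm}_{\ell}$ is the Elsasser increment at longitudinal lag $\ell$. Agreement between the two estimates, obtained from entirely different ranges of scale, is what underlies the consistency noted above.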
\subsection{Inertial Range}\label{sec:turbulence_inertial_range} During solar minimum, fast solar wind streams originate near the poles from open magnetic flux tubes within coronal holes, while slow wind streams originate in the streamer belt within a few tens of degrees of the solar equator \citep{2008GeoRL..3518103M}. Because plasma can easily escape along open flux tubes, fast wind is typically observed to be relatively less dense, more homogeneous, and characteristically more Alfv\'enic than slow streams, which are believed to arise from a number of sources, such as the tips of helmet streamers prevalent in ARs \citep{1999JGR...104..521E,2005ApJ...624.1049L}, the opening of coronal loops via interchange reconnection with adjacent open flux tubes \citep{2001ApJ...560..425F}, or rapidly expanding coronal holes \citep{2009LRSP....6....3C}. The first two {\emph{PSP}} close Encs. with the Sun, which occurred during Nov. 2018 (Enc.~1) and Apr. 2019 (Enc.~2), were not only much closer than any S/C before, but also remained at approximately the same longitude w.r.t. the rotating Sun, allowing {\emph{PSP}} to continuously sample a number of solar wind streams rooted in a small equatorial coronal hole \citep{2019Natur.576..237B,2019Natur.576..228K}. Measurements of velocity and magnetic field during these two Encs. reveal a highly structured and dynamic slow solar wind consisting of a quiet and highly Alfv\'enic background with frequent impulsive magnetic field polarity reversals, which are associated with so-called switchbacks \citep[see also][]{2020ApJS..246...39D,2020ApJS..246...45H,2020ApJS..246...67M}.
The 30~min averaged trace magnetic spectra for both quiet and switchback regions exhibit a dual power-law, with an inertial-range Kolmogorov spectral index of $-5/3$ at larger heliocentric distances (as in previous observations near 1~AU) and an Iroshnikov-Kraichnan index of $-3/2$ near 0.17~AU, consistent with theoretical predictions from MHD turbulence \citep{2016JPlPh..82f5302C}. These findings indicate that Alfv\'enic turbulence is already developed at $0.17$~AU. Moreover, the turbulent dissipation rate, estimated using the Politano-Pouquet third order law and the von Karman decay law, increases from $5\times 10^4~{\rm J~kg}^{-1}{\rm s}^{-1}$ at $0.25$~AU to $2\times 10^5~{\rm J~kg}^{-1}{\rm s}^{-1}$ at $0.17$~AU, up to 100 times larger at Perihelion 1 than its measured value at $1$~AU \citep{2020ApJS..246...48B}, in agreement with some MHD turbulent transport models \citep{2018ApJ...865...25U}. \citet{2021ApJ...916...49S} estimated the energy cascade rate at each scale in the inertial range, based on exact scaling laws derived for isentropic turbulent flows in three particular MHD closures corresponding to incompressible, isothermal, and polytropic equations of state, and found the energy cascade rates to be nearly constant in the inertial range, at approximately the same value of $2\times10^5~{\rm{J~kg}}^{-1}~{\rm{s}}^{-1}$ obtained by \citet{2018ApJ...865...25U} at $0.17$~AU, independent of the closure assumption. \citet{2020ApJS..246...71C} performed an analysis to decompose {\emph{PSP}} measurements from the first two orbits into MHD modes. The analysis used solar wind intervals between switchbacks to reveal the presence of a broad spectrum of shear Alfv\'en modes, an important fraction of slow modes, and a small fraction of fast modes.
The analysis of the Poynting flux reveals that while most of the energy is propagating outward from the Sun, inversions in the Poynting flux are observed and are consistent with outward-propagating waves along kinked magnetic field lines. These inversions of the energy flux also correlate with the large rotations of the magnetic field. An observed increase of the spectral energy density of inward-propagating waves with increasing frequency suggests back-scattering of outward-propagating waves off of magnetic field reversals and associated inhomogeneities in the background plasma. Wave-mode composition, propagation, and polarization within 0.3~AU were also investigated by~\citet{2020ApJ...901L...3Z} through the probability distribution functions (PDFs) of wave-vectors, with two main findings: (1) wave-vectors cluster nearly parallel and antiparallel to the local background magnetic field for $kd_i<0.02$ and shift to nearly perpendicular for $kd_i>0.02$; and (2) outward-propagating Alfv\'en waves dominate over all scales and heliocentric distances, while the energy fraction of the inward and outward slow mode components increases with heliocentric distance and the corresponding fraction of fast modes decreases. \cite{2020ApJS..246...53C} investigated the radial dependence of the trace magnetic field spectra, between 0.17~AU and about 0.6~AU, using MAG data from the FIELDS suite \citep{2016SSRv..204...49B} for each 24~h interval during the first two {\emph{PSP}} orbits. Their analysis shows that the trace spectrum of magnetic fluctuations at each radius consists of a dual power-law, this time involving the $1/f$ range followed by an MHD inertial-range with a spectral index varying from approximately $-5/3$ near 0.6~AU to about $-3/2$ at perihelion (0.17~AU).
Velocity measurements obtained from SWEAP/SPC \citep{2016SSRv..204..131K} were used for the 24~h interval around Perihelion 1 to obtain the trace spectra of velocity and Elsasser field fluctuations, all of which show a power law with a spectral index closer to $-3/2$. The normalized cross-helicity and residual energy, which were also measured for each 24~h interval, show that the turbulence becomes more imbalanced closer to the Sun, {\emph{i.e.}}, the dominance of outward-propagating fluctuations increases with decreasing heliocentric distance. Additional measures of the Alfv\'enicity of velocity and magnetic fluctuations as a function of their scale-size \citep{2020ApJS..246...58P} showed that both the normalized cross-helicity and the scale-dependent angle between velocity and magnetic field decrease with scale-size in the inertial-range, as suggested by some MHD turbulence models \citep{2006PhRvL..96k5002B,2009PhRvL.102b5003P,2010ApJ...718.1151P}, followed by a sharp decline in the kinetic range, consistent with observations at 1~AU \citep{2018PhRvL.121z5101P}. The transition from a spectral index of $-5/3$ to $-3/2$ with a concurrent increase in cross helicity is consistent with previous observations at 1~AU, in which a spectral index of $-3/2$ for the magnetic field is prevalent in imbalanced streams \citep{2010PhPl...17k2905P,2013ApJ...770..125C,2013PhRvL.110b5003W}, as well as with models and simulations of steadily-driven, homogeneous imbalanced Alfv\'enic turbulence \citep{2009PhRvL.102b5003P,2010ApJ...718.1151P} and reflection-driven (inhomogeneous) Alfv\'en turbulence \citep{2013ApJ...776..124P,2019JPlPh..85d9009C}.
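The quantities used in these comparisons are the standard dimensionless measures built from the Elsasser variables $\mathbf{z}^{\pm} = \mathbf{v} \pm \mathbf{b}$ (with $\mathbf{b}$ in Alfv\'en speed units); the definitions below are generic, not specific to any one of the cited analyses:
\begin{equation*}
\sigma_c = \frac{\langle|\mathbf{z}^{+}|^{2}\rangle - \langle|\mathbf{z}^{-}|^{2}\rangle}
{\langle|\mathbf{z}^{+}|^{2}\rangle + \langle|\mathbf{z}^{-}|^{2}\rangle},
\qquad
\sigma_r = \frac{\langle|\mathbf{v}|^{2}\rangle - \langle|\mathbf{b}|^{2}\rangle}
{\langle|\mathbf{v}|^{2}\rangle + \langle|\mathbf{b}|^{2}\rangle}.
\end{equation*}
Here $\sigma_c \to \pm 1$ for unidirectionally propagating Alfv\'enic fluctuations (the sign assigned to outward propagation depends on the magnetic sector), and $\sigma_r<0$ indicates magnetically dominated fluctuations; increasing imbalance toward the Sun thus corresponds to $|\sigma_c|$ approaching unity.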
A similar transition was also found by \citet{2020ApJ...902...84A}, where the Hilbert-Huang transform (HHT) was used to investigate scaling properties of magnetic-field fluctuations as a function of heliocentric distance, showing that magnetic fluctuations exhibit multifractal scaling properties far from the Sun, with an $f^{-5/3}$ power spectrum, while closer to the Sun the corresponding scaling becomes monofractal, with an $f^{-3/2}$ power spectrum. Assuming ballistic propagation, \citet{2021ApJ...912L..21T} identified two 1.5~h intervals corresponding to the same plasma parcel traveling from 0.1 to 1~AU during the first {\emph{PSP}} and {\emph{SolO}} radial alignment, and showed that the solar wind evolved from a highly Alfv\'enic, less developed turbulent state near the Sun to a more fully developed and intermittent state near 1~AU. \citet{2021A&A...650A..21S} performed a statistical analysis to investigate how the turbulence properties at MHD scales depend on the type of solar wind stream and the distance from the Sun. Their results show that the spectrum of magnetic field fluctuations steepens with distance from the Sun, while the velocity spectrum remains unchanged. Faster solar wind is found to be more Alfv\'enic, dominated by outward-propagating waves (imbalanced), and with low residual energy. Energy imbalance (cross helicity) and residual energy decrease with heliocentric distance. Turbulent properties can also vary among different streams with similar speeds, possibly indicating a different origin. For example, slow wind emanating from a small coronal hole has much higher Alfv\'enicity than slow wind that originates from the streamer belt. \citet{2021A&A...650L...3C} investigated the turbulence and acceleration properties of the streamer-belt solar wind, near the HCS, using measurements from {\emph{PSP}}'s fourth orbit.
During this close Enc., the properties of the solar wind from the inbound measurements are substantially different from those of the outbound measurements. In the latter, the solar wind was observed to have smaller turbulent amplitudes, higher magnetic compressibility, a steeper magnetic spectrum (closer to $-5/3$ than to $-3/2$), lower Alfv\'enicity, and a $1/f$ break at much smaller frequencies. The transition from a more Alfv\'enic (inbound) wind to a non-Alfv\'enic wind occurred at an angular distance of about $4^\circ$ from the HCS. As opposed to the inbound Alfv\'enic wind, in which the measured energy fluxes are consistent with reflection-driven turbulence models~\citep{2013ApJ...776..124P,2019JPlPh..85d9009C}, the streamer belt fluxes are significantly lower than those predicted by the same models, suggesting the streamer-belt wind may be subject to additional acceleration mechanisms. Interpretation of the spectral analysis of temporal signals to investigate scaling laws in the inertial range has thus far relied on the validity of Taylor's hypothesis (TH). \citet{2021A&A...650A..22P} investigated the applicability of TH in the first four orbits based on a recent model of the space-time correlation function of Alfv\'enic turbulence \citep[incompressible MHD;][]{2018ApJ...858L..20B,2020PhRvR...2b3357P}. In this model, the temporal decorrelation of the turbulence is dominated by hydrodynamic sweeping, under the assumption that the turbulence is strongly anisotropic and Alfv\'enic. The only parameter in the model that controls the validity of TH is $\epsilon=\delta u_0/V_\perp$, where $\delta u_0$ is the rms velocity of the large-scale velocity field and $V_\perp$ is the velocity of the S/C in the plasma frame perpendicular to the local field. The spatial energy spectrum of turbulent fluctuations is recovered using conditional statistics based on nearly perpendicular sampling.
The analysis is performed on four selected 24~h intervals from {\emph{PSP}} during the first four perihelia. TH is observed to still be marginally applicable, and both the new analysis and the standard TH assumption lead to similar results. A general prescription to obtain the energy spectrum when TH is violated is summarized and expected to be relevant when {\emph{PSP}} gets closer to the Sun. \citet{2021ApJ...915L...8D} investigated the anisotropy of slow Alfv\'enic solar wind in the kinetic range from {\emph{PSP}} measurements. Magnetic compressibility is consistent with kinetic Alfv\'en wave (KAW) turbulence at sub-ion scales. A steepened transition range is found between the (MHD) inertial and kinetic ranges in all directions relative to the background magnetic field. Strong power spectrum anisotropy is observed in the kinetic range and a smaller anisotropy in the transition range. Scaling exponents for the power spectrum in the transition range are found to be $\alpha_{t\|}=-5.7\pm 1.0$ and $\alpha_{t\perp}=-3.7\pm0.3$, while for the kinetic range the corresponding exponents are $\alpha_{k\|}=-3.12\pm0.22$ and $\alpha_{k\perp}=-2.57\pm0.09$. The wavevector anisotropy in the transition and kinetic ranges follows the scalings $k_\|\sim k_\perp^{0.71\pm0.17}$ and $k_\|\sim k_\perp^{0.38\pm0.09}$, respectively. \subsection{Kinetic Range, Dissipation, Heating and Implications} In-situ measurements have revealed that the solar wind is not adiabatically cooling, as both the ion and electron temperatures decay at considerably slower rates than the adiabatic cooling rates induced by the radial expansion of the solar wind \citep[{\emph{e.g.}},][]{2020ApJS..246...70H,2020ApJS..246...62M}. Thus, some heating mechanisms must exist in the solar wind. As the solar wind is nearly collisionless, viscosity, resistivity, and thermal conduction are unlikely to contribute much to its heating.
Hence, turbulence is believed to be the fundamental process that heats the solar wind plasma. In the MHD inertial range, the turbulence energy cascades from large scales to small scales without dissipation. Near or below the ion kinetic scale, various kinetic processes, such as wave-particle interactions through cyclotron resonance or Landau damping, and the stochastic heating of the particles, become important. These kinetic processes effectively dissipate the turbulence energy and heat the plasma. As already discussed in \S\ref{sec:5_large_scale}, \citet{2020ApJ...904L...8W} show that the estimated energy supply rate at the large scales agrees well with the estimated perpendicular heating rate of the solar wind, implying that turbulence is the major heating source of the solar wind. However, to fully understand how the turbulence energy eventually converts to the internal energy of the plasma, we must analyze the magnetic field and plasma data at and below the ion scales. \begin{figure} \centering \includegraphics[width=\hsize]{Bowen2020PRLfigure1.PNG} \caption{(a,b) Examples of {\emph{PSP}}/FIELDS magnetic field spectra with 3PL (three spectral range continuous power-law fit, blue) and 2PL (two spectral range continuous power-law fit, orange) fits. Vertical lines show 3PL spectral breaks. (c,d) Spectral indices for data (black), 3PL (blue) and 2PL fits (orange). Horizontal lines are shown corresponding to spectral indices of $-8/3$ (teal) and $-4$ (purple). The top interval has statistically significant spectral steepening, while the bottom interval is sufficiently fit by 2PL. Figure adapted from \cite{2020PhRvL.125b5102B}.} \label{fig:Bowen2020PRLFigure1} \end{figure} \citet{2020PhRvL.125b5102B} analyze data from the MAG and SCM onboard {\emph{PSP}} during Enc.~1, when a slow Alfv\'enic wind originating from an equatorial coronal hole was measured.
They show that a very steep transition range in the magnetic field power spectrum with a spectral slope close to $-4$ is observed around the ion gyroscale ($k_\perp \rho_i \sim 1$, where $k_\perp$ is the perpendicular wave number and $\rho_i$ is the ion thermal gyroradius) (Fig.~\ref{fig:Bowen2020PRLFigure1}). This transition range is steeper than both the inertial range ($k_\perp \rho_i \ll 1$) and the deep kinetic range ($k_\perp \rho_i \gg 1$). \citet{2020PhRvL.125b5102B} estimate that if the steep spectrum corresponds to some dissipation mechanism, then more than 50\% of the turbulence energy is dissipated at the ion-scale transition range, which is a sufficient energy source for solar wind heating. \citet{2020ApJS..246...55D} conduct a statistical study of the magnetic field spectrum based on {\emph{PSP}} data from Enc.~2 and show that the break frequency between the inertial range and the transition range decreases with the radial distance to the Sun and is on the order of the ion cyclotron resonance frequency. \citet{2020ApJ...897L...3H} find that, in a one-day interval during Enc.~1, the slow wind observed by {\emph{PSP}} contains both right-handed polarized KAWs and left-handed polarized Alfv\'en ion cyclotron waves (ACWs) at sub-ion scales. Many previous observations have shown that at 1~AU KAWs dominate the turbulence at sub-ion scales \citep[{\emph{e.g.}},][]{2010PhRvL.105m1101S}, and KAWs can heat the ions through Landau damping of their parallel electric field. However, the results of \citet{2020ApJ...897L...3H} imply a possible role of ACWs, at least in the very young solar wind, in heating the ions through cyclotron resonance.
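The spectral slopes and break frequencies quoted above come from power-law fits to measured spectra. As a rough illustration only (not the continuous 2PL/3PL fitting procedures of the cited studies), the sketch below fits power-law indices in two frequency bands of a synthetic spectrum with an ion-scale break; all numbers are illustrative:

```python
# Sketch: estimating spectral indices in two frequency bands of a
# magnetic-field power spectrum. The cited studies use more careful
# continuous piecewise power-law (2PL/3PL) fits; this is a toy version.
import numpy as np

def fit_slope(f, psd, fmin, fmax):
    """Least-squares power-law index of psd ~ f^alpha over [fmin, fmax]."""
    m = (f >= fmin) & (f <= fmax)
    alpha, _ = np.polyfit(np.log10(f[m]), np.log10(psd[m]), 1)
    return alpha

# Synthetic spectrum: -5/3 inertial range breaking to -4 above f_b = 0.3 Hz,
# mimicking the steep ion-scale transition range reported by Bowen et al.
f = np.logspace(-3, 1, 500)          # frequency grid, Hz
f_b = 0.3                            # illustrative break frequency
psd = np.where(f < f_b, f**(-5/3), f_b**(-5/3) * (f / f_b)**(-4.0))

print(fit_slope(f, psd, 1e-3, 1e-1))   # ~ -1.67 (inertial range)
print(fit_slope(f, psd, 1.0, 10.0))    # ~ -4.0  (transition range)
```

Real analyses would first estimate the spectrum from the time series (e.g., with Welch's method) and fit the break frequency itself rather than fixing the bands by hand.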
\citet{2021ApJ...915L...8D}, by binning the Enc.~1 data with different $\mathbf{V-B}$ angles, show that the magnetic field spectrum is anisotropic in both the transition range and kinetic range with $k_\perp \gg k_\parallel$, and the anisotropy scaling relation between $k_\perp$ and $k_\parallel$ is consistent with the critical-balanced KAW model at sub-ion scales (see \S\ref{sec:turbulence_inertial_range}). Another heating mechanism is stochastic heating \citep{2010ApJ...720..503C}, which becomes important when the fluctuation of the magnetic field at the ion gyroscale is large enough that the magnetic moment of the particles is no longer conserved. \citet{2020ApJS..246...30M} calculate the stochastic heating rate using data from the first two Encs. and show that the stochastic heating rate $Q_\perp$ decays with the radial distance as $Q_\perp \propto r^{-2.5}$. Their result reveals that stochastic heating may be more important in the solar wind closer to the Sun. Last, it is known that the development of turbulence leads to the formation of intermittency (see \S\ref{sec:5_intermit} for more detailed discussions). In plasma turbulence, intermittent structures such as current sheets are generated around the ion kinetic scale. \citet{2020ApJS..246...46Q} adopt the partial variance of increments (PVI) technique to identify intermittent magnetic structures using {\emph{PSP}} data from the first Enc. They show that statistically there is a positive correlation between the ion temperature and the PVI, indicating that the intermittent structures may contribute to the heating of the young solar wind through processes like magnetic reconnection in the intermittent current sheets. At the end of this subsection, it is worthwhile to make several comments. First, the Faraday Cup onboard {\emph{PSP}} (SPC) has a flux-angle operation mode, which allows measurements of the ion phase space density fluctuations at a cadence of 293~Hz \citep{2020ApJS..246...52V}.
Thus, combining the flux-angle measurements with the magnetic field data will greatly help us understand the kinetic turbulence. Second, more studies are necessary to understand the behavior of electrons in the turbulence, though direct measurement of the electron-scale fluctuations in the solar wind is difficult with current plasma instruments. \citet{2021A&A...650A..16A} show that by distributing the turbulence heating properly among ions and electrons in a turbulence-coupled solar wind model, the differential radial evolutions of ion and electron temperatures can be reproduced. However, how and why the turbulence energy is distributed unevenly among ions and electrons is not fully understood yet and needs future study. Third, during the Venus flybys, {\emph{PSP}} traveled through Venus’s magnetosphere and made high-cadence measurements. Thus, analysis of the turbulence properties around Venus, {\emph{e.g.}}, inside its magnetosheath \citep{2021GeoRL..4890783B}, will also be helpful in understanding the kinetic turbulence (see \S\ref{PSPVENUS}). Last, \citet{2021ApJ...912...28M} compare the turbulence properties inside and outside the magnetic switchbacks using the {\emph{PSP}} data from the first two Encs. They find that the stochastic heating rates and spectral slopes are similar, but other properties such as the intermittency level are different inside and outside the switchbacks, indicating that some processes near the edges of switchbacks, such as the velocity shear, considerably affect the turbulence evolution inside the switchbacks (see \S\ref{SB_obs}). \subsection{Intermittency and Small-scale Structure} \label{sec:5_intermit} In the modern era of turbulence research, {\textit{intermittency}} has been established as a fundamental feature of turbulent flows \citep[{\emph{e.g.}},][]{1997AnRFM..29..435S}.
Nonlinearly interacting fluctuations are expected to give rise to structure in space and time, which is characterized by sharp gradients, inhomogeneities, and departures from Gaussian statistics. In a plasma such as the solar wind, such ``bursty'' or ``patchy'' structures include current sheets, vortices, and flux tubes. These structures have been associated with enhanced plasma dissipation and heating \citep{2015RSPTA.37340154M}, and with acceleration of energetic particles \citep{2013ApJ...776L...8T}. Studies of intermittency may therefore provide insights into dissipation and heating mechanisms active in the weakly-collisional solar wind plasma. Intermittency also has implications for turbulence theory -- a familiar example is the evolution of hydrodynamic inertial range theory from the classical Kolmogorov \citeyearpar[K41;][]{1941DoSSR..30..301K} theory to the so-called refined similarity hypothesis \citep[][]{1962JFM....13...82K}; the former assumed a uniform energy dissipation rate while the latter allowed for fluctuations and inhomogeneities in this fundamental quantity. Standard diagnostics of intermittency in a turbulent field include PDFs of increments, kurtosis (or flatness; fourth-order moment), and structure functions \cite[{\emph{e.g.}},][]{2015RSPTA.37340154M}. Observational studies tend to focus on the magnetic field due to the availability of higher-quality measurements compared to plasma observations. In well-developed turbulence, one finds that the tails of PDFs of increments become wider (super-Gaussian) and large values of kurtosis are obtained at small scales within the inertial-range. A description in terms of \textit{fractality} is also employed -- monofractality is associated with structure that is non space-filling but lacking a preferred scale ({\emph{i.e.}}, scale-invariance). 
In contrast, multifractality also implies non space-filling structure but with at least one preferred scale \citep{1995tlan.book.....F}.\footnote{In hydrodynamic turbulence intermittency is often described in terms of the scaling of the structure functions $S^{(p)}_\ell \equiv \langle \delta u_\ell^p \rangle \propto \ell^{p/3 + \mu(p)}$, where \(\delta u_\ell = \bm{\hat{\ell}} \cdot [ \bm{u} (\bm{x} + \bm{\ell}) - \bm{u}(\bm{x}) ]\) is the longitudinal velocity increment, \(\bm{\ell}\) is a spatial lag, and \(\langle\dots\rangle\) is an appropriate averaging operator. In K41 (homogeneous turbulence) the intermittency parameter \(\mu(p)\) vanishes. With intermittency one has non-zero \(\mu(p)\), and the scaling exponents \(\zeta (p) = p/3 + \mu(p) \) can be linear or nonlinear functions of \(p\), corresponding to monofractal or multifractal scaling, respectively \citep{1995tlan.book.....F}. The scale-dependent kurtosis \(\kappa\) can be defined in terms of the structure functions: \(\kappa (\ell)\equiv S^{(4)}_\ell/ \left[S^{(2)}_\ell\right]^2\).} Intermittency properties of solar wind turbulence have been extensively studied using observations from several S/C \citep[see, {\emph{e.g.}}, review by][]{2019E&SS....6..656B}. High-resolution measurements from the {\emph{Cluster}} and the Magnetospheric Multiscale \citep[{\emph{MMS;}}][]{2014cosp...40E.433B} missions have enabled such investigations within the terrestrial magnetosheath as well \citep[{\emph{e.g.}},][]{2009PhRvL.103g5006K,2018JGRA..123.9941C}, including kinetic-scale studies. {\emph{PSP}}'s trajectory allows us to extend these studies to the near-Sun environment and to examine the helioradial evolution of intermittency in the inner heliosphere. An advantage offered by {\emph{PSP}} is that the observations are not affected by the foreshock wave activity that is often found near Earth's magnetosheath \citep[see, {\emph{e.g.}},][]{2012ApJ...744..171W}.
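The structure-function and kurtosis diagnostics in the footnote are straightforward to compute from a time series. A minimal sketch using a seeded Gaussian surrogate (a non-intermittent reference for which \(\kappa \approx 3\) at all lags; real analyses use magnetic-field data and vector increments):

```python
# Sketch of the footnote's diagnostics: structure functions S_p(lag) and
# the scale-dependent kurtosis kappa = S_4 / S_2^2, from increments of a
# scalar series. For a Gaussian signal kappa ~ 3 at every lag; intermittent
# turbulence instead shows kappa growing as the lag decreases.
import numpy as np

def structure_function(x, lag, p):
    dx = x[lag:] - x[:-lag]           # increments at this lag
    return np.mean(np.abs(dx)**p)

def kurtosis(x, lag):
    return structure_function(x, lag, 4) / structure_function(x, lag, 2)**2

rng = np.random.default_rng(0)
b = rng.standard_normal(200_000)      # non-intermittent Gaussian surrogate
for lag in (1, 10, 100):
    print(lag, kurtosis(b, lag))      # ~ 3.0 for every lag
```

Applied to, e.g., the radial magnetic field, a kurtosis that rises steadily toward small lags is the signature of inertial-range intermittency discussed in this section.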
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{Alberti2020ApJ.pdf} \includegraphics[width=0.68\textwidth]{Chhiber2021ApJL.pdf} \caption{\textit{Top}: Scaling exponents \(\zeta_q\) (see text) of the radial magnetic field for different orders \(q\), observed by {\emph{PSP}} at different helioradii \(r\). A transition from monofractal linear scaling to multifractal scaling is observed for \(r>0.4\). Figure adapted from \cite{2020ApJ...902...84A}. \textit{Bottom}: Scale-dependent kurtosis of the magnetic field, as a function of lag. A transition from a multifractal inertial range to a monofractal kinetic range occurs near \(20\ d_\text{i}\) \citep{2021ApJ...911L...7C}.} \label{fig:Alberti_Chhib} \end{figure} The radial evolution of intermittency in inertial-range magnetic fluctuations measured by {\emph{PSP}} was investigated by \cite{2020ApJ...902...84A}, who found monofractal, self-similar scaling at distances below 0.4~AU. At larger distances, a transition to multifractal scaling characteristic of strongly intermittent turbulence was observed (see top panel of Fig.~\ref{fig:Alberti_Chhib}). A similar trend was observed by \citet{2021ApJ...912L..21T}, who used measurements during the first radial alignment of {\emph{PSP}} and {\emph{SolO}}. Stronger intermittency was obtained in {\emph{SolO}} observations near 1~AU compared to {\emph{PSP}} observations near 0.1~AU, suggesting an evolution from highly Alfv\'enic and under-developed turbulence to fully-developed turbulence with increasing radial distance. Note that several prior studies have found that inertial-range magnetic turbulence at 1~AU is characterized by multifractal intermittency \citep[][]{2019E&SS....6..656B}. It is also worth noting (as in \S\ref{sec:5_large_scale}) that {\emph{PSP}} observations near the Sun may be statistically biased towards sampling variations in a lag direction that is (quasi-)parallel to the mean magnetic field \citep{2021PhPl...28h0501Z,2021ApJ...923...89C}.
Future studies could separately examine parallel and perpendicular intervals \citep[{\emph{e.g.}},][]{2011JGRA..11610102R}, which would clarify the extent to which this geometrical sampling bias affects the measured radial evolution of intermittency. A comparison of inertial-range vs. kinetic-scale intermittency in near-Sun turbulence was carried out by \cite{2021ApJ...911L...7C} using the publicly available SCaM data product \citep{2020JGRA..12527813B}, which merges MAG and SCM measurements to obtain an optimal signal-to-noise ratio across a wide range of frequencies. They observed multifractal scaling in the inertial range, supported by a steadily increasing kurtosis with decreasing scale down to \(\sim 20\ d_\text{i}\). The level of intermittency was somewhat weaker in intervals without switchbacks compared to intervals with switchbacks, consistent with PVI-based analyses by \cite{2021ApJ...912...28M} (see also \S\ref{sec:5_switchback}). At scales below \(20\ d_\text{i}\), \cite{2021ApJ...911L...7C} found that the kurtosis flattened (bottom panel of Fig.~\ref{fig:Alberti_Chhib}), and their analysis suggested a monofractal and scale-invariant (but still intermittent and non-Gaussian) kinetic range down to the electron inertial scale, a finding consistent with near-Earth observations \citep{2009PhRvL.103g5006K} and some kinetic simulations \citep{2016PhPl...23d2307W}. From these results, they tentatively infer the existence of a scale-invariant distribution of current sheets between proton and electron scales. The SCaM data product was also used by \cite{2021GeoRL..4890783B} to observe strong intermittency at subproton scales in the Venusian magnetosheath. \cite{2020ApJ...905..142P} examined coherent structures at ion scales in intervals with varying switchback activity, observed during {\emph{PSP}}'s first Enc.
Using a wavelet-based approach, they found that current sheets dominated intervals with prominent switchbacks, while wave packets were most common in quiet intervals without significant fluctuations. A mixture of vortex structures and wave packets was observed in periods characterized by strong fluctuations without magnetic reversals. A series of studies used the PVI approach \citep{2018SSRv..214....1G} with {\emph{PSP}} data to identify intermittent structures and examine associated effects. \cite{2020ApJS..246...31C} found that the waiting-time distributions of intermittent magnetic and flow structures followed a power law\footnote{Waiting times between magnetic switchbacks, which may also be considered intermittent structures, followed power-law distributions as well \citep{2020ApJS..246...39D}.} for events separated by less than a correlation scale, suggesting a high degree of correlation that may originate in a clustering process. Waiting times longer than a correlation time exhibited an exponential distribution characteristic of a Poisson process. \cite{2020ApJS..246...61B} studied the association of SEP events with intermittent magnetic structures, finding a suggestion of positive correlation (see also \S\ref{sec:5_SEPs}). The association of intermittency with enhanced plasma heating (measured via ion temperature) was studied by \cite{2020ApJS..246...46Q}; their results support the notion that coherent non-homogeneous magnetic structures play a role in plasma heating mechanisms. This series of studies indicates that intermittent structures in the young solar wind observed by {\emph{PSP}} have certain properties that are similar to those observed in near-Earth turbulence. In addition to the statistical properties described in the previous paragraphs, some attention has also been given to the identification of structures associated with intermittency, such as magnetic flux tubes and ropes.
\cite{2020ApJS..246...26Z} used wavelet analysis to evaluate magnetic helicity, cross helicity, and residual energy in {\emph{PSP}} observations between 22 Oct. 2018 and 21 Nov. 2018. Based on these parameters they identified 40 flux ropes with durations ranging from 10 to 300 minutes. \cite{2020ApJ...903...76C} used a Grad-Shafranov approach to identify flux ropes during the first two {\emph{PSP}} Encs., and compared this method with the \cite{2020ApJS..246...26Z} approach, finding some consistency. \cite{2021A&A...650A..20P} developed a novel method for flux rope detection based on a real-space evaluation of magnetic helicity, and, focusing on the first {\emph{PSP}} orbit, found some consistency with the previously mentioned approaches. A subsequent work applied this method to orbit 5, presenting evidence that flux tubes act as transport boundaries for energetic particles \citep{2021MNRAS.508.2114P}. The characteristics of so-called interplanetary discontinuities (IDs) observed during {\emph{PSP}}'s $4^{\mathrm{th}}$ and $5^{\mathrm{th}}$ orbits were studied by \cite{2021ApJ...916...65L}, who found that the occurrence rate of IDs decreases from 200 events per day at 0.13~AU to 1 event per day at 0.9~AU, with a sharper decrease observed in rotational discontinuities (RDs) as compared with tangential discontinuities (TDs). While the general decrease in occurrence rate may be attributed to radial wind expansion and discontinuity thickening, the authors infer that the sharper decrease in RDs implies a decay channel for these in the young solar wind. We close this section by noting that the studies reviewed above employ a remarkable variety of intermittency diagnostics, including the occurrence rate of structures, the intensities of the associated gradients, and their fractal properties. The richness of the dataset that {\emph{PSP}} is expected to accumulate over its lifetime will offer unprecedented opportunities to probe the relationships between these various diagnostics and their evolution in the inner heliosphere.
\subsection{Interaction Between Turbulence and Other Processes} \subsubsection{Turbulence Characteristics Within Switchbacks}\label{sec:5_switchback} The precise definition of magnetic field reversals (switchbacks) is still ambiguous in the heliophysics community, but the common picture of switchbacks is that they embody an S-shaped folding of the magnetic field. Switchbacks are found to be accompanied by Alfv\'enic fluctuations (and often by strahl electrons) and are associated with an increase of the solar wind bulk velocity, as observed recently by {\emph{PSP}} near 0.16~AU \citep{2019Natur.576..228K,2019Natur.576..237B,2020ApJS..246...39D,2020ApJS..246...67M,2020ApJ...904L..30B,2021ApJ...911...73W,2020ApJS..246...74W}. Switchbacks have been previously observed in fast solar-wind streams near 0.3~AU \citep{2018MNRAS.478.1980H} and near or beyond 1~AU \citep{1999GeoRL..26..631B}. The switchback intervals are found to carry turbulence, and the characteristics of that turbulence have been investigated in a number of works using {\emph{PSP}} data. \citet{2020ApJS..246...39D} studied the spectral properties of inertial-range turbulence within intervals containing switchback structures in the first {\emph{PSP}} Enc. In their analysis they introduced the normalized parameter $z=\frac{1}{2}(1-\cos{\alpha})$ (where $\alpha$ is the angle between the instantaneous magnetic field, {\bf B}, and the prevalent field $\langle {\bf B} \rangle$) to determine the deflection of the field. Switchbacks were defined as magnetic field deflections that exceed the threshold value of $z=0.05$. They estimated the power spectral density of the radial component $B_R$ of the magnetic field for quiescent ($z < 0.05$) and active (all $z$) regimes.
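The $z$ diagnostic and the $z=0.05$ threshold can be applied sample by sample; a minimal sketch with made-up field vectors (the prevalent field would in practice be a running or interval mean):

```python
# Sketch of the normalized deflection measure z = (1 - cos(alpha)) / 2,
# where alpha is the angle between the instantaneous field B and the
# prevalent field <B>. Samples with z > 0.05 are treated as deflected
# ("active"); the (N, 3) array B below is made up for illustration.
import numpy as np

def deflection_z(B, B0):
    """z in [0, 1]: 0 = aligned with the prevalent field B0, 1 = fully reversed."""
    cos_alpha = (B @ B0) / (np.linalg.norm(B, axis=1) * np.linalg.norm(B0))
    return 0.5 * (1.0 - cos_alpha)

B0 = np.array([1.0, 0.0, 0.0])            # prevalent (mean) field direction
B = np.array([[1.0, 0.1, 0.0],            # slightly deflected sample
              [-1.0, 0.2, 0.0],           # reversed: switchback-like sample
              [0.9, 0.05, -0.02]])
z = deflection_z(B, B0)
quiescent = z < 0.05                      # mask used for the "quiescent" spectrum
print(z.round(3), quiescent)
```

With such a mask, the quiescent-regime spectrum is simply the spectrum of $B_R$ restricted to the `quiescent` samples, as in the analysis described above.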
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{dudok_spectra_20.png} \caption{Power spectrum of radial magnetic field fluctuations for quiescent times ($z<0.05$) and for the entire interval (all $z$), during a period near perihelion of Enc.~1. The quiescent times show a lower overall amplitude, and a $1/f$ break at higher frequencies, suggestive of a less evolved turbulence. Figure adapted from \citet{2020ApJS..246...39D}.} \label{fig:Dudock20} \end{figure} Fig.~\ref{fig:Dudock20} shows the results of both power spectra. They found that the properties in active conditions are typical for MHD turbulence, with an inertial range whose spectral index is close to $-3/2$, preceded by a $1/f$ range (consistent with the observation by \citet{2020ApJS..246...53C}). Also, the break frequency (at the lower frequency part) was found to be located around 0.001~Hz. For the quiescent power spectrum, the break frequency moves up to 0.05~Hz, showing a difference from the active power spectrum, although both power spectra have a similar spectral index (about $-3/2$) between 0.05~Hz and 1~Hz. The authors suggest that in the quiescent regime the turbulent cascade has only had time to develop a short inertial range, showing the signature of a more pristine solar wind. \citet{2020ApJS..246...67M} have studied turbulent quantities such as the normalized residual energy, $\sigma_r(s, t)$, and cross helicity, $\sigma_c(s, t)$, during one day of {\emph{PSP}}'s first Enc. as a function of wavelet scale $s$. Their study encompasses switchback field intervals. Overall, they found that the observed features of these turbulent quantities are similar to previous observations made by {\emph{Helios}} in the slow solar wind \citep{2007PSS...55.2233B}, namely highly-correlated and Alfv\'enic fluctuations with $\sigma_c\sim 0.9$ and $\sigma_r\sim -0.3$.
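The wavelet-based $\sigma_c(s,t)$ and $\sigma_r(s,t)$ have standard interval-averaged counterparts that can be computed directly from velocity and magnetic fluctuations via the Elsasser variables. A minimal sketch with synthetic fluctuations (note that which Elsasser field is "outward" depends on the background field polarity):

```python
# Sketch: interval-averaged normalized cross helicity sigma_c and residual
# energy sigma_r from Elsasser variables z+/- = du +/- db, with db already
# in Alfven units. Fluctuation arrays here are synthetic (N, 3) surrogates.
import numpy as np

def sigma_c_r(du, db):
    """Return (sigma_c, sigma_r) from velocity and magnetic fluctuations."""
    zp = du + db                               # one Elsasser field
    zm = du - db                               # the other Elsasser field
    ep = np.mean(np.sum(zp**2, axis=1))        # Elsasser energies
    em = np.mean(np.sum(zm**2, axis=1))
    eu = np.mean(np.sum(du**2, axis=1))        # kinetic energy
    eb = np.mean(np.sum(db**2, axis=1))        # magnetic energy
    return (ep - em) / (ep + em), (eu - eb) / (eu + eb)

rng = np.random.default_rng(1)
du = rng.standard_normal((10_000, 3))
db = 0.8 * du + 0.2 * rng.standard_normal((10_000, 3))  # mostly correlated
sc, sr = sigma_c_r(du, db)
print(round(sc, 2), round(sr, 2))   # high sigma_c, modest positive sigma_r
```

Highly correlated, velocity-dominated fluctuations give $\sigma_c$ near $\pm1$ and $\sigma_r>0$; the Alfv\'enic slow wind discussed above instead shows $\sigma_c\sim 0.9$ with slightly magnetically dominated $\sigma_r\sim -0.3$.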
However, a negative normalized cross helicity is found within switchback intervals, indicating that MHD fluctuations are following the local magnetic field inside switchbacks, {\emph{i.e.}}, the predominantly outward propagating Alfv\'enic fluctuations outside the switchback intervals become inward propagating during the field reversal. This signature is interpreted as evidence that the field reversals are local kinks in the magnetic field rather than small regions of opposite polarity of the field. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{bourouaine20_2.png} \caption{Power spectra of Elsasser variables (upper panels) and velocity/magnetic fluctuations (lower panels) for periods of switchbacks (SB; left panels) and periods not within switchbacks (NSB; right panels). Magnetic spectra are multiplied by a factor of 10. Power law fits are also indicated. There are notable differences in both the amplitudes and shape of the spectra between SB and NSB intervals. Figure adapted from \citet{2020ApJ...904L..30B}.} \label{fig:bourouaine20} \end{figure} The turbulence characteristics associated with switchbacks have also been studied by \cite{2020ApJ...904L..30B} using 10 days of {\emph{PSP}} data during the first Enc. The authors used a technique based on conditioned correlation functions to investigate the correlation times and the power spectra of the field $q$ that represents the magnetic field ${\bf B}$, the fluid velocity ${\bf V}$ and the Elsasser fields ${\bf z^\pm}$, inside and outside switchback intervals. In their study, the authors defined switchbacks as field reversals that are deflected by angles larger than $90^\circ$ w.r.t. the Parker spiral (the prevalent magnetic field). This work confirms that the dominant Alfv\'enic fluctuations follow the field reversal.
Moreover, the authors found that, in switchback intervals, the correlation time is about 2 minutes for all fields, but in non-switchback intervals, the correlation time of the sunward propagating Alfv\'enic fluctuations (the minor Elsasser field) is about 3~hr, longer than those of the other fields. This result seems to be consistent with previous 1~AU measurements \citep{2013ApJ...770..125C,2018ApJ...865...45B}. Furthermore, the authors estimated the power spectra of the corresponding fields (Fig.~\ref{fig:bourouaine20}), and found that the magnetic power spectrum in switchback intervals is steeper (with spectral index close to $-5/3$) than in non-switchback intervals, which have a spectral index close to $-3/2$. The analysis also shows that the turbulence is less imbalanced, with higher residual energy, in switchback intervals. \citet{2021ApJ...911...73W} have investigated turbulent quantities such as the normalized cross helicity and the residual energy in switchbacks using the first four Encs. of {\emph{PSP}} data. In their analysis they considered separate intervals of 100~s duration, classified as switchbacks or non-switchbacks according to threshold conditions on the magnetic field deflection angle. Although the analysis focuses on the time scale of 100~s, their findings seem to be consistent with the results of \cite{2020ApJ...904L..30B} (for that time scale), {\emph{i.e.}}, the switchback and non-switchback intervals have distinct residual energy and similar normalized cross helicity, suggesting that switchbacks have different Alfv\'enic characteristics. In another study, \citet{2021ApJ...912...28M} have investigated the spectral index and the stochastic heating rate in the inertial range inside and outside switchback intervals and found fairly similar behavior in both intervals.
However, in the kinetic range, kinetic properties such as the characteristic break scale (the frequency that separates the inertial and dissipation ranges) and the level of intermittency differ inside and outside switchback intervals. The authors found that inside the switchbacks the level of intermittency is higher, which might be a signature of the magnetic field and velocity shears observed at the edges. \subsubsection{Impact of Turbulence and Switchbacks on Energetic Particles and CMEs}\label{sec:5_SEPs} Strahl electrons are observed to follow the reversed field within the switchbacks; however, it is not yet understood whether higher-energy energetic particles reverse at the switchbacks as well. Recently, \cite{2021AA...650L...4B} examined the radial anisotropy of the energetic particles measured by EPI-Lo (an instrument of the IS$\odot$IS suite) in connection with magnetic switchbacks. The authors investigated switchback intervals selected by $|\sigma_c|>0.5$ and $z\le 0.5$. The ratio $r= (F_{\mbox{away}}-F_{\mbox{toward}})/(F_{\mbox{away}}+F_{\mbox{toward}})$ has been used to determine the dominant flux direction of the energetic particles. Here ``$\mbox{away}$'' and ``$\mbox{toward}$'' refer to the direction of the measured radial particle fluxes ($F$) in the selected energy range. Fig.~3 of \citet{2021AA...650L...4B} displays the scattering points that correspond to the measurements of the first five Encs. plotted as a function of the $z$ parameter and the ratio $r$. The analysis shows that 80–200~keV energetic ions almost never reverse direction when the magnetic field polarity reverses in switchbacks. The likely reason is that particles with smaller gyroradii, such as strahl electrons, can reverse direction by following the magnetic field in switchbacks, whereas particles with larger gyroradii cannot.
Therefore, from this analysis one can expect that particles with energies higher than those detectable by EPI-Lo will likely not be reversed in switchbacks. \cite{2020ApJS..246...48B} studied the connection between enhancements in the energetic particle population (measured using IS$\odot$IS) and intermittent structures (using FIELDS/MAG) near the Sun using {\emph{PSP}} data. Intermittent structures are generated naturally by turbulence, and the PVI method was proposed previously to identify these structures \citep{2018SSRv..214....1G}. For single-S/C measurements, this method relies on the evaluation of the temporal increment in the magnetic field, $|\Delta B(\tau,t)|=|B(t+\tau)-B(t)|$, from which the PVI at time $t$ for a given lag $\tau$ is defined as \begin{equation} \mathrm{PVI}(t)=\sqrt{\frac{|\Delta B(\tau,t)|^2}{\langle |\Delta B(\tau,t)|^2\rangle}} \end{equation} where $\langle...\rangle$ denotes a time average computed along the time series. The analysis given in \citet{2020ApJS..246...48B} examined the conditionally averaged energetic-particle count rates and their connection to the intermittent structures using the PVI method. The results from the first two {\emph{PSP}} orbits seem to support the idea that SEPs are likely correlated with coherent magnetic structures. The outcomes from this analysis may suggest that energetic particles are concentrated near magnetic flux tube boundaries. Magnetic field line topology and flux tube structure may influence energetic particles and their transport in other ways. A consequence of the tendency of particles to follow field lines is that, when turbulence is present, the particle paths can be influenced by field line {\it random walk}.
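The PVI definition above is simple to evaluate numerically; a minimal sketch applied to a synthetic random-walk field with one embedded sharp discontinuity (all data made up for illustration):

```python
# Sketch of the PVI diagnostic: PVI(t) = |dB| / sqrt(<|dB|^2>), where dB is
# the vector field increment at a fixed lag and <...> is an average over
# the interval. Large PVI values flag sharp, intermittent structures.
import numpy as np

def pvi(B, lag):
    """PVI series for an (N, 3) magnetic field time series at a given lag."""
    dB = B[lag:] - B[:-lag]
    mag = np.linalg.norm(dB, axis=1)
    return mag / np.sqrt(np.mean(mag**2))

rng = np.random.default_rng(2)
B = np.cumsum(rng.standard_normal((5000, 3)), axis=0)   # random-walk surrogate
B[2500] += 50.0                                          # embedded sharp jump
p = pvi(B, lag=1)
print(p.argmax(), p.max())   # spike located near index 2500
```

In practice, thresholds on PVI (e.g., PVI above a few) are used to select candidate current sheets, which are then conditionally correlated with, for instance, energetic-particle count rates as in the study above.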
While this idea is familiar in the context of perpendicular diffusive transport, a recent study of SEP path lengths observed by {\emph{PSP}} \citep{2021A&A...650A..26C} suggested that the random walking of field lines or flux tubes may account for the apparently increased SEP path lengths. This is further discussed in \S\ref{SEPs}. The inertial-range turbulent properties such as the normalized cross helicity, $\sigma_c$, and residual energy, $\sigma_r$, have been examined in magnetic clouds (MCs) using {\emph{PSP}} data by \citet{2020ApJ...900L..32G}. MCs are considered to be large-scale transient twisted magnetic structures that propagate in the solar wind, featuring low plasma $\beta$ and low-amplitude magnetic field fluctuations. The analysis presented in \citet{2020ApJ...900L..32G} shows low $|\sigma_c|$ values in the cloud core while the cloud’s outer layers display higher $|\sigma_c|$ and small residual energy. This study indicates that more balanced turbulence resides in the cloud core, while large-amplitude Alfv\'enic fluctuations characterize the cloud’s outer layers. These properties suggest that low $|\sigma_c|$ is likely a common feature of magnetic clouds that have typical closed-field structures. \subsection{Implications for Large-Scale and Global Dynamics} As well as providing information about the fundamental nature of turbulence, and its interaction with the various structures in the solar wind, {\emph{PSP}} has allowed us to study how turbulence contributes to the solar wind at the largest scales. Some of the main goals of {\emph{PSP}} are to understand how the solar wind is accelerated to the high speeds observed and how it is heated to the high temperatures seen, both close to the Sun and further out \citep{2016SSRv..204....7F}. {\emph{PSP}}'s orbits, getting increasingly closer to the Sun, are allowing us to measure the radial trends of turbulence properties, and directly test models of solar wind heating and acceleration.
To test the basic physics of a turbulence driven wind, \citet{2020ApJS..246...53C} compared {\emph{PSP}} measurements from the first two orbits to the 1D model of \citet{2011ApJ...743..197C}. In particular, they calculated the ratio of energy flux in the Alfv\'enic turbulence to the bulk kinetic energy flux of the solar wind (Fig.~\ref{FIG:Chen2020}). This ratio was found to increase towards the Sun, as expected, reaching about 10\% at 0.17~AU. The radial variation of this ratio was also found to be consistent with the model, leading to the conclusion that the wind during these first orbits could be explained by a scenario in which the Sun drives AWs that reflect from the Alfv\'en speed gradient, driving a turbulence cascade that heats and accelerates the wind. Consistent with this picture, \citet{2020ApJS..246...53C} also found that the inward Alfv\'enic fluctuation component grew at a rate consistent with the measured Alfv\'en speed gradient. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Chen2020-eps-converted-to.pdf} \caption{(a) Ratio of outward-propagating Alfv\'enic energy flux, $F_\mathrm{A}$, to solar wind bulk kinetic energy flux, $F_\mathrm{k}$, as a function of heliocentric distance, $r$. (b) The same ratio as a function of solar wind radial Alfv\'en Mach number, $M_\mathrm{A}$. In both plots, the black solid line is a power law fit, the red/green dashed lines are solutions to the \citet{2011ApJ...743..197C} model, the data points are colored by solar wind speed, $v$, and crosses mark times during connection to the coronal hole in Enc.~1. Figure adapted from \citet{2020ApJS..246...53C}.} \label{FIG:Chen2020} \end{figure*} To see if such physics can explain the 3D structure of the solar wind, the {\emph{PSP}} observations have also been compared to 3D turbulence-driven solar wind models. 
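The Alfv\'enic-to-kinetic energy flux ratio examined above can be estimated from bulk plasma parameters. The sketch below uses one common scaling for the outward wave energy flux; the exact prefactors vary between studies, so this is an order-of-magnitude illustration only (all names are assumptions):

```python
def alfven_to_kinetic_flux_ratio(rho, v, va, z_plus_sq):
    """Rough estimate of F_A / F_k for an outward-propagating wave flux.

    rho       : mass density [kg/m^3]
    v         : solar wind bulk speed [m/s]
    va        : Alfven speed [m/s]
    z_plus_sq : mean square outward Elsasser amplitude [m^2/s^2]
    """
    f_a = 0.5 * rho * z_plus_sq * (v + va)  # wave energy advected at v + vA
    f_k = 0.5 * rho * v**3                  # bulk kinetic energy flux
    return f_a / f_k
```

Because $v_A/v$ grows toward the Sun while the wave amplitudes also increase, this ratio rises with decreasing heliocentric distance, consistent with the trend in Fig.~\ref{FIG:Chen2020}.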
\citet{2020ApJS..246...48B} calculated the turbulent heating rates from {\emph{PSP}} using two methods: from the third-order laws for energy transfer through the MHD inertial range and from the von K\'arm\'an decay laws based on correlation-scale quantities. Both were found to increase closer to the Sun, taking values at 0.17~AU about 100 times higher than typical values at 1~AU. These were compared to those from the model of \citet{2018ApJ...865...25U}, under two different inner boundary conditions -- an untilted dipole and a magnetogram from the time of the observations. The heating rates from both models were found to be in broad agreement with those determined from the {\emph{PSP}} measurements, although the magnetogram version provided a slightly better fit overall. \citet{2021ApJ...923...89C} later performed a comparison of the first five orbits to a similar 3D turbulence solar wind model (which captures the coupling between the solar wind flow and small-scale fluctuations), examining both mean-flow parameters, such as density, temperature, and wind speed, and turbulence properties, such as fluctuation amplitudes, correlation lengths, and cross helicity. In general, the mean-flow properties displayed better agreement with {\emph{PSP}} observations than the turbulence parameters, indicating that aspects of the turbulent heating were possibly being captured, even if some details of the turbulence were not fully present in the model. A comparison between the model and observations for orbit 1 is shown in Fig.~\ref{FIG:Chhiber2021}. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{Chhiber2021.pdf} \caption{Blue `$+$' symbols show {\emph{PSP}} data from orbit 1, plotted at 1-hour cadence except \(\lambda\), for which daily values are shown. Red curve shows results from the model, sampled along a synthetic {\emph{PSP}} trajectory.
Quantities shown are mean radial velocity of ions (\(V_R\)), mean radial magnetic field \(B_R\), mean ion density \(n_p\), mean ion temperature \(T_p\), mean turbulence energy \(Z^2\), correlation length of magnetic fluctuations \(\lambda\), and normalized cross helicity \(\sigma_c\). The shading in the top four panels marks an envelope obtained by adding and subtracting the local turbulence amplitude from the model to the mean value from the model (see the text for details). The vertical black line marks perihelion. The model uses an ADAPT map with central meridian time 6 Nov. 2018, at 12:00 UT (Run I). Minor ticks on the time axis correspond to 1 day. Figure adapted from \citet{2021ApJ...923...89C}.} \label{FIG:Chhiber2021} \end{figure*} \citet{2020ApJS..246...38A} first compared the {\emph{PSP}} plasma observations to results from their turbulence transport model based on the nearly incompressible MHD description. They concluded that there was generally a good match for quantities such as fluctuating kinetic energy, correlation lengths, density fluctuations, and proton temperature. Later, \citet{2020ApJ...901..102A} developed a model that couples equations for the large-scale dynamics to the turbulence transport equations to produce a turbulence-driven solar wind. Again, they found generally good agreement, and additionally found the heating rate of the quasi-2D component of the turbulence to be dominant and sufficient to provide the necessary heating at the coronal base. Overall, these studies indicate a picture that is consistent with turbulence, driven ultimately by motions at the surface of the Sun, providing the energy necessary to heat the corona and accelerate the solar wind in a way that matches the {\emph{in situ}} measurements made by {\emph{PSP}}. Future work will involve adding even more realistic turbulence physics into these models and testing them under a wider variety of solar wind conditions.
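The von K\'arm\'an decay-law estimate mentioned above (a heating rate built from correlation-scale quantities) can be sketched as follows. The K\'arm\'an--Taylor constant and the precise combination of Els\"asser amplitudes and similarity scales differ between studies, so this is an order-of-magnitude illustration under assumed conventions, not the formula of any one cited paper:

```python
def von_karman_heating_rate(z_plus, z_minus, lam_plus, lam_minus, alpha=0.03):
    """Order-of-magnitude von Karman decay heating rate per unit mass [W/kg].

    z_plus, z_minus     : rms Elsasser amplitudes [m/s]
    lam_plus, lam_minus : corresponding similarity (correlation) scales [m]
    alpha               : empirical Karman-Taylor constant (illustrative)
    """
    # decay of each Elsasser energy, nonlinearly mediated by the other field
    eps_plus = alpha * z_minus * z_plus**2 / lam_plus
    eps_minus = alpha * z_plus * z_minus**2 / lam_minus
    return 0.5 * (eps_plus + eps_minus)
```

For equal amplitudes and scales this reduces to the familiar hydrodynamic-like scaling $\varepsilon \sim \alpha Z^3/\lambda$, the form used for rough comparisons between heliocentric distances.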
One recent study in this direction is \citet{2021A&A...650L...3C}, which examined the turbulence energy fluxes as a function of distance to the HCS during Enc.~4. They found that the turbulence properties changed when {\emph{PSP}} was within 4$^\circ$ of the HCS, resembling more the standard slow solar wind seen at 1~AU, suggesting this as the angular width of the streamer belt wind at these distances. Also, within this streamer belt wind, the turbulence fluxes were significantly lower, being on average 3 times smaller than required for consistency with the \citet{2011ApJ...743..197C} solar wind model. \citet{2021A&A...650L...3C} concluded, therefore, that additional mechanisms not in these models are required to explain the solar wind acceleration in the streamer belt wind near the HCS. The coming years, with both {\emph{PSP}} moving even closer to the Sun and the solar cycle approaching its maximum, will provide even better opportunities to further understand the role that turbulence plays in the heating and acceleration of the different types of solar wind, and how this shapes the large-scale structure of our heliosphere. \section{Large-Scale Structures in the Solar Wind} \label{LSSSW} During these four years of the mission (within the ascending phase of the solar cycle), {\emph{PSP}} crossed the HCS several times and also observed structures (both remotely and {\emph{in situ}}) with features similar to the internal magnetic flux ropes (MFRs) associated with interplanetary coronal mass ejections (ICMEs). This section focuses on {\emph{PSP}} observations of large-scale structures, {\emph{i.e.}}, HCS crossings, ICMEs, and CMEs. Smaller heliospheric flux ropes are also included in this section because of their similarity to larger ICME flux ropes. The comparison of the internal structure of large- and small-scale flux ropes (LFRs and SFRs, respectively) is revealing: they can both store and transport magnetic energy.
Their properties at different heliodistances provide insights into the energy transport in the inner heliosphere. {\emph{PSP}} brings a unique opportunity for understanding the role of MFRs in solar wind formation and evolution, and thus for connecting scales in the heliosphere. \subsection{The Heliospheric Current Sheet} \label{LSSSWHCS} The HCS separates the two heliospheric magnetic hemispheres: one with a magnetic polarity pointing away from the Sun and another toward the Sun. {\emph{PSP}} crossed the HCS multiple times in each orbit due to its low heliographic latitude orbit. Fig.~\ref{Orbit5_HCS} shows a comprehensive set of measurements with periods of HCS crossings identified as gray regions. These crossings are particularly evident in the magnetic field azimuth angle ($\phi_B$) and the PAD of suprathermal electrons. In the Radial-Tangential-Normal (RTN) coordinates used here, the outward and inward polarities have a near-zero degree and $180^{\circ}$ azimuth angle, respectively. Since the electron heat flux always streams away from the Sun, the streaming direction is parallel (antiparallel) to the magnetic field in regions of outward (inward) magnetic polarity, resulting in a pitch angle of $0^{\circ}$ ($180^{\circ}$). \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Encounter5_HCS.png} \caption{{\emph{PSP}} solar wind measurements during solar Enc.~5, 10 May 2020 (day of the year [DOY] 130) to 1 Jun. 2020 (DOY 182). The panels from top to bottom are: the PAD of 314 eV suprathermal electrons; the normalized PAD of the 314 eV suprathermal electrons; the magnetic field magnitude; the azimuth angle of the magnetic field in the TN plane; the elevation angle of the magnetic field; the solar wind proton number density; the RTN components of the solar wind proton bulk speed; the thermal speed of the solar wind protons; and the S/C radial distance from the Sun.
The gray bars mark periods of HCS crossings.} \label{Orbit5_HCS} \end{figure*} Comparing the observed locations of HCS crossings with PFSS model predictions yielded good agreement \citep{2020ApJS..246...47S, 2021A&A...652A.105L}. Lowering the source surface radius to 2~$R_\odot$ or even below would minimize the timing disagreements, though it would increase the amount of open magnetic flux to unreasonable values. The likely resolution is that the appropriate source surface radius is not a constant value but varies depending on the solar surface structures below. Other sources of disagreement between the model predictions and observations are the emergence of ARs not included in the photospheric magnetic maps used by the PFSS models and the presence of solar wind transients ({\emph{e.g.}}, ICMEs). \citet{2021A&A...652A.105L} also found that while the PFSS model predicted a relatively flat HCS, the observed current sheets had a much steeper orientation, suggesting significant local corrugation. \citet{2020ApJS..246...47S} also compared the observed HCS crossing times to global MHD model predictions, with similar results. The internal structure of the HCS near the Sun is very complex \citep{2020ApJS..246...47S, 2020ApJ...894L..19L, 2021A&A...650A..13P}. \citet{2020ApJS..246...47S} identified structures within the HCS region with magnetic field magnitude depressions, increased solar wind proton bulk speeds, and associated suprathermal electron strahl dropouts. These might evolve into the small density enhancements observed by \citet{2020ApJ...894L..19L}, likely showing magnetic disconnections. In addition, small flux ropes were also identified inside or just outside the HCS, often associated with plasma jets indicating recent magnetic reconnection \citep{2020ApJS..246...47S, 2020ApJ...894L..19L, 2021A&A...650A..13P, 2021A&A...650A..12Z}.
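The sector-polarity identification described earlier in this subsection (combining the RTN field azimuth with the suprathermal-electron strahl pitch angle) can be sketched as a simple classifier; the thresholds and names below are illustrative assumptions, not the criteria of any cited study:

```python
import numpy as np

def sector_polarity(phi_b_deg, strahl_pa_deg, tol=45.0):
    """Classify magnetic sector polarity from the RTN field azimuth and
    the suprathermal-electron strahl pitch angle.

    Returns +1 (outward), -1 (inward), or 0 (ambiguous / HCS candidate).
    """
    phi = np.mod(np.asarray(phi_b_deg, dtype=float), 360.0)
    pa = np.asarray(strahl_pa_deg, dtype=float)
    outward_b = (phi < tol) | (phi > 360.0 - tol)   # azimuth near 0 deg
    inward_b = np.abs(phi - 180.0) < tol            # azimuth near 180 deg
    parallel = pa < tol                             # strahl along B
    antiparallel = pa > 180.0 - tol                 # strahl against B
    pol = np.zeros_like(phi)
    pol[outward_b & parallel] = 1                   # heat flux outward, B outward
    pol[inward_b & antiparallel] = -1               # heat flux outward, B inward
    return pol
```

Samples classified as 0 (field azimuth and strahl direction inconsistent or intermediate) are natural candidates for HCS crossings and strahl dropouts like those discussed above.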
The near-Sun HCS is much thicker than expected; it is thus surprising that it is the site of frequent magnetic reconnection \citep{2021A&A...650A..13P}. Moreover, 1~AU observations of the HCS reveal significantly different magnetic and plasma signatures, implying that the near-Sun HCS is the location of active evolution of its internal structures \citep{2020ApJS..246...47S}. The HCS also appears to organize the nearby, low-latitude solar wind. \citet{2021A&A...650L...3C} observed lower amplitude turbulence, higher magnetic compressibility, a steeper magnetic spectrum, lower Alfv\'enicity, and a $1/f$ break at much lower frequencies within $4^\circ$ of the HCS compared to the rest of the solar wind, possibly implying a different solar source for the HCS environment. \subsection{Interplanetary Coronal Mass Ejections} \label{sec:icme} The accurate identification and characterization of the physical processes associated with the evolution of ICMEs require as many measurements of the magnetic field and plasma conditions as possible \citep[see][and references therein]{2003JGRA..108.1156C,2006SSRv..123...31Z}. Our knowledge of the transition from CME to ICME has been limited to the {\emph{in~situ}} data collected at 1~AU and remote-sensing observations from space-based observatories. {\emph{PSP}} provides a unique opportunity to link both views through valuable observations that allow us to identify evidence of the early transition from CME to ICME. Due to its highly elliptical orbit, {\emph{PSP}} measures the plasma conditions of the solar wind at different heliospheric distances. Synergies with other space missions and ground observatories allow building a complete picture of the phenomena from their genesis at the Sun to the inner heliosphere.
In general, magnetic structures in the solar wind are MFRs, a subset of which are MCs \citep[][]{1988JGR....93.7217B}, characterized by enhanced magnetic fields in which the field rotates slowly through a large angle. MCs are of great interest as their coherent magnetic field configuration and plasma properties drive space weather and are related to major geomagnetic storms \citep{2000JGR...105.7491W}. Therefore, understanding their origin, evolution, propagation, and how they can interact with other transients traveling through space and planetary systems is of great interest. ICMEs are structures that travel throughout the heliosphere and transfer energy to the outer edge of the solar system and perhaps beyond. \paragraph{{\textbf{Event of 11-12 Nov. 2018: Enc. 1 $-$ {\emph{PSP}} at (0.25~AU, -178$^{\circ}$)}}} $\\$ During the first orbit, {\emph{PSP}} collected {\emph{in~situ}} measurements of the thermal solar wind plasma as close as $35.6~R_\odot$ from the Sun. In this new environment, {\emph{PSP}} recorded the signatures of two SBO-CMEs: the first on 31 Oct. 2018, at 03:36 UT as it entered the Enc., and the second on 11 Nov. 2018, at 23:50 UT as it exited the Enc. The signature of the second SBO-CME crossing the S/C was a magnetic field enhancement ({\emph{i.e.}}, maximum strength 97~nT). The event was seen by {\emph{STEREO}}-A but was not visible from L1 or Earth-orbiting S/C, as it was directed away from Earth. The signature and characteristics of this event were the focus of several studies \citep[see][]{2020ApJS..246...69K,2020ApJS..246...63N,2020ApJS..246...29G}. SBO-CMEs \citep{2008JGRA..113.9105C} are ICMEs that fulfill the following criteria in coronagraph data: (1) slow speeds, ranging from 300 to 500~km~s$^{-1}$; (2) no identifiable surface or low coronal signatures (in this case from the Earth's point of view); (3) a gradual swelling of the overlying streamer (blowout type); and (4) following the tilt of the HCS.
The source location was determined using remote-sensing and {\emph{in situ}} observations, the WSA model \citep{2000JGR...10510465A}, and the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model \citep{2004JASTP..66.1295A}. Hydrodynamical analytical and numerical simulations were also utilized to predict the CME arrival time at {\emph{PSP}}. Using a CME propagation model, \cite{2020ApJS..246...69K} and \cite{2020ApJS..246...63N} explored the characteristics of the CME using {\emph{in situ}} data recorded closest to the Sun, as well as the implications for CME propagation from the coronal source to {\emph{PSP}} and for space weather. The CME was traveling at an average speed of $\sim391$~km~s$^{-1}$, embedded in an ambient solar wind flow of $\sim395$~km~s$^{-1}$ and a magnetic field of 37~nT. The small difference in speed with respect to the ambient solar wind suggests that drag forces drive the SBO-CME. The internal magnetic structure associated with the SBO displayed flux rope signatures but was characterized by changes that deviated from the expected smooth rotation of the magnetic field direction (flux rope-like configuration), low proton plasma beta, and a drop in the proton temperature. A detailed analysis of the internal magnetic properties suggested high complexity in deviations from an ideal flux rope 3D topology. Reconstructions of the magnetic field configuration revealed a highly distorted structure consistent with the highly elongated ``bubble'' observed remotely. A double-ring substructure observed in the FOV of the COR2 coronagraph on the {\emph{STEREO}-A Sun-Earth Connection Coronal and Heliospheric Investigation \citep[SECCHI;][]{2008SSRv..136...67H}} may also indicate a double internal flux rope. Another possible scenario is a mixed topology of a closed flux rope combined with a magnetically open structure, justified by the flux dropout observed in the measurements of the electron PAD.
In any case, the plethora of structures observed by the EUV imager (SECCHI-EUVI) in the hours preceding the SBO evacuation indicated that the complexity might be related to the formation processes \citep{2020ApJS..246...63N}. Applying a wavelet analysis technique to the {\emph{in situ}} data from {\emph{PSP}}, \citet{2020ApJS..246...26Z} also identified the related flux rope and inferred its reduced magnetic helicity, cross helicity, and residual energy. They also showed that, after the ICME crossing, both the plasma velocity and the magnetic field fluctuate rapidly and are positively correlated with each other, indicating that Alfv\'enic fluctuations are generated in the region downstream of the ICME. Finally, \citet{2020ApJS..246...29G} discussed the SBO-CME as the driver of a weak shock that accelerated energetic particles when the ICME was at 7.4~R$_{\odot}$. {\emph{PSP}}/IS$\odot$IS observed the SEP event (see Fig.~\ref{Giacalone_2020}). Five hours later, {\emph{PSP}}/FIELDS and {\emph{PSP}}/SWEAP detected the passage of the ICME (see \S\ref{EPsRad} for a detailed discussion). \paragraph{{\textbf{Event of 15 Mar. 2019: Enc. 2 $-$ {\emph{PSP}} at (0.547~AU, 161$^{\circ}$)}}} $\\$ An SBO-CME was observed by {\emph{STEREO}}-A and {\emph{SOHO}} coronagraphs and measured {\emph{in situ}} by {\emph{PSP}} at 0.547~AU on 15 Mar. 2019 from 12:14 UT to 17:45 UT. The event was studied in detail by \citet{2020ApJ...897..134L}. The ICME was preceded by two interplanetary shock waves, registered at 08:56:01 UT and 09:00:07 UT (see Fig.~\ref{Lario2020Fig1} in \S\ref{EPsRad}). The study's authors proposed that the shocks were associated with the interaction between the SBO-CME and a HSS. The analysis of the shocks' characteristics indicated that, despite their weak strength, the successive structures caused the acceleration of energetic particles.
This study aimed to demonstrate that although SBO-CMEs are usually slow close to the Sun, their subsequent evolution in interplanetary space might drive shocks that can accelerate particles in the inner heliosphere. The event is discussed in more detail in \S\ref{EPsRad}, showing that the time of arrival of energetic particles at {\emph{PSP}} (Fig.~\ref{Lario2020Fig2}) is consistent with the arrival of the ICME predicted by MHD simulations. With the simulations, \citet{2020ApJ...897..134L} determined when the magnetic connection was established between {\emph{PSP}} and the shocks, potentially driven by the ICME. \paragraph{{\textbf{Event of 13 Oct. 2019: Enc. 3 $-$ {\emph{PSP}} at (0.81~AU, 75$^{\circ}$)}}} $\\$ The event observed during Enc.~3 was reported by \citet{2021ApJ...916...94W}. The ICME is associated with a stealth CME evacuation on 10 Oct. 2019, at 00:48 UT. It was characterized by an angular width of 19$^{\circ}$, a position angle of 82$^{\circ}$, no signatures in {\emph{SDO}}/AIA and EUVI-A images, and a speed of 282~km~s$^{-1}$ at 20~R$_{\odot}$. At the time of the eruption, two coronal holes were identified from EUV images and extrapolations of the coronal magnetic field topology computed using the PFSS model \citep{1992ApJ...392..310W}, suggesting that the stealth CME evolved between two HSSs originating at the coronal holes. The first HSS enabled the ICME to travel at an almost constant speed (minimum interaction time $\sim2.5$ days), while the second overtook the ICME in the later stages of its evolution. The event was measured when {\emph{PSP}} was not taking plasma measurements due to its proximity to aphelion, so there are only reliable magnetic field measurements by the {\emph{PSP}}/FIELDS instrument. Even with these observational limitations, this event is of particular interest, as {\emph{STEREO}}-A was located 0.15~AU in radial distance, $<1^{\circ}$ in latitude, and $-7.7^{\circ}$ in longitude away from {\emph{PSP}}.
The ICME arrival is characterized by a fast-forward shock observed by {\emph{PSP}} on 13 Oct. 2019, at 19:03 UT and by {\emph{STEREO}}-A on 14 Oct. 2019, at 07:44 UT. Both S/C observed the same main features in the magnetic field components (exhibiting flux rope-like signatures), except for the HSS, which {\emph{STEREO}}-A observed as an increasing speed profile and a shorter ICME duration. To show the similarity of the main magnetic field features and the effect of the ICME compression due to its interaction with the HSS, the magnetic field and plasma parameters of {\emph{STEREO}}-A were plotted (Fig.~\ref{fig:Winslow2020}) and overlaid with the {\emph{PSP}} magnetic field measurements, scaled by a factor of $1.235$ and shifted to give the same ICME duration as observed by {\emph{STEREO}}-A. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Winslow2021.png} \caption{Overlay of the {\emph{in situ}} measurements by {\emph{STEREO}}-A (black) and {\emph{PSP}} (red). From top to bottom: the magnetic field strength and the radial (B$_R$), tangential (B$_T$) and normal (B$_N$) components, and for {\emph{STEREO}}-A only: the proton density (N), temperature (T), and velocity (V). The {\emph{PSP}} data are scaled (by a factor of 1.235) and time-shifted to obtain the same ICME duration, delimited by the two dashed vertical red lines. The red vertical solid line marks the fast-forward shock at {\emph{PSP}}; the black vertical solid line marks it at {\emph{STEREO}}-A. Figure adapted from \citet{2021ApJ...916...94W}.} \label{fig:Winslow2020} \end{figure} \paragraph{{\textbf{Event of 20 Jan. 2020: Enc. 4 $-$ {\emph{PSP}} at (0.32~AU, 80$^{\circ}$)}}} $\\$ \citet{2021A&A...651A...2J} reported a CME event observed by {\emph{PSP}} on 18 Jan. 2020, at 05:30 UT. The event was classified as a stealth CME since no clear eruption signatures were identified on the Sun's surface.
Coronal {\emph{SDO}}/AIA observations indicated the emission of a set of magnetic substructures or loops, followed by the evacuation of the magnetic structure on 18 Jan. 2020, at 14:00 UT. The signatures of a few dispersed brightenings and dimmings observed in EUVI-A 195~{\AA} were identified as the source region \citep{2021A&A...651A...2J}. The ICME arrived at {\emph{PSP}} on 20 Jan. 2020, at 19:00 UT, with a clear magnetic obstacle and rotation in the magnetic field direction but no sign of an IP shock wave. The event was also associated with a significant enhancement of SEPs. {\emph{PSP}} and {\emph{STEREO}}-A were almost aligned (separated by 5$^{\circ}$ in longitude). The ICME flew by both S/C, allowing for the examination of the evolution of the associated SEPs. Interestingly, this event established a scenario in which weaker structures can also accelerate SEPs. Thus, the presence of SEPs in the absence of a shock was interpreted as {\emph{PSP}} crossing the magnetic structure's flank, although no dense feature was observed in coronagraph images propagating in that direction \citep{2002ApJ...573..845G}. In \S\ref{EPsRad}, the event is discussed in detail, including the associated {\emph{PSP}} observations of SEPs. \paragraph{{\textbf{Event of 25 Jun. 2020: Enc. 5 $-$ {\emph{PSP}} at (0.5~AU, 20$^{\circ}$)}}} $\\$ \citet{2021ApJ...920...65P,2022SpWea..2002914K}, and \citet{2022ApJ...924L...6M} studied and modeled the event of 25 Jun. 2020, which occurred during Enc.~5. The lack of clear signatures on the solar surface and in the low corona led to the interpretation of this event as an SBO-CME and was the primary motivation for these studies. The models were tested extensively to determine their capabilities to predict the coronal features and their counterparts in space. \citet{2021ApJ...920...65P} focused on predictions of the location of its source and the magnetic field configuration expected to be measured by {\emph{PSP}}.
The {\emph{SDO}}/AIA and EUVI-A observations were used to determine the source location of the event. The increase in solar activity around the source region was followed by a small eruption in the northern hemisphere on 21 Jun. 2020, at 02:00 UT (Fig.~\ref{fig:Palmerio2021}-left). This led to the outbreak of the SBO-CME on 23 Jun. 2020, at 00:54 UT. Using the PFSS model, the authors found that the SBO-CME was triggered by the interaction between the small eruption and the neighboring helmet streamer. The SBO-CME geometry and kinematic aspects were obtained by applying the graduated cylindrical shell \citep[GCS;][]{2011ApJS..194...33T} model to the series of coronagraph images, resulting in an estimated average speed of 200~km~s$^{-1}$. The magnetic field configuration from the Sun to {\emph{PSP}} was obtained by modeling the event with the OSPREI suite \citep{2022SpWea..2002914K}. The arrival at {\emph{PSP}} was predicted to be on 25 Jun. 2020, at 15:50~UT (9 minutes before the actual arrival). {\emph{PSP}} was located at 0.5~AU and 20$^{\circ}$ west of the Sun-Earth line. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Palmerio2021.png} \includegraphics[width=0.45\textwidth]{Mostl2021.png} \caption{Left: {\emph{PSP}} {\emph{in situ}} magnetic field and plasma measurements of the 21 Jun. 2020 stealth CME. From top to bottom: magnetic field strength and components (B$_R$, B$_T$ and B$_N$), $\theta_B$ and $\phi_B$ magnetic field angles, wind speed (V$_P$), proton number density (N$_P$), and proton temperature (T$_P$). The flux rope interval is shaded in grey. Figure adapted from \citet{2021ApJ...920...65P}. Right: {\emph{in situ}} magnetic field data at {\emph{PSP}}, {\emph{BepiColombo}} and {\emph{Wind}}. Solid vertical lines indicate ICME start times, and dashed lines the boundaries of the magnetic obstacle. Figure adapted from \citet{2022ApJ...924L...6M}.
} \label{fig:Palmerio2021} \end{figure} \citet{2022ApJ...924L...6M} (Fig.~\ref{fig:Palmerio2021}-right) studied the same event using multipoint measurements from {\emph{SolO}}, {\emph{BepiColombo}} \citep{2021SSRv..217...90B} (0.88~AU, -3.9$^{\circ}$), {\emph{PSP}}, {\emph{Wind}} (1.007~AU, -0.2$^{\circ}$), and {\emph{STEREO}}-A (0.96~AU, -69.6$^{\circ}$). The WSA/THUX \citep{2020ApJ...891..165R}, HELCATS, and 3DCORE \citep{2021ApJS..252....9W} models were used to infer the background solar wind in the solar equatorial plane, the height-time plots, and the flux rope, respectively. With the multi-S/C observations, the authors attempted to explain the differences in the {\emph{in~situ}} signatures of a single CME observed at different locations. To accomplish this goal, they modeled the evolution of a circular front shape propagating at a constant speed. The {\emph{in situ}} ICME arrival speeds at {\emph{PSP}} and {\emph{Wind}} were 290~km~s$^{-1}$ and 326~km~s$^{-1}$, respectively. The arrival speed at {\emph{STEREO}}-A was computed using the SSEF30 model described by \citet{2012ApJ...750...23D}. The discrepancies between the observed and predicted arrival times ranged from $-11$ to $+18$ hrs, which the authors attributed to a strong ICME deformation. \paragraph{{\textbf{Event of 29 Nov. 2020: Enc. 6 $-$ {\emph{PSP}} at (0.81~AU, 104$^{\circ}$)}}} $\\$ The CME event of 29 Nov. 2020 has been widely studied and identified as the largest widespread SEP event of solar cycle 25 and the first direct {\emph{PSP}} observation of the interaction of two successive ICMEs \citep{2021A&A...656A..29C, 2021ApJ...919..119M, 2021A&A...656A..20K, 2021ApJ...920..123L, 2021A&A...656L..12M, 2022ApJ...924L...6M, 2022ApJ...930...88N}. During this event, {\emph{PSP}}, {\emph{SolO}}, and {\emph{STEREO}}-A were located at respective radial distances of 0.81~AU, 0.87~AU, and $\sim1$~AU.
As seen from the Earth, they were at longitudinal angular positions of 104$^{\circ}$ east, 115$^{\circ}$ west, and 65$^{\circ}$ east, respectively. The remote-sensing observations show that at least four successive CMEs were observed during 24-29 Nov. 2020, although only two were directed toward {\emph{PSP}}. During the SEP event, the particles spread over more than 230$^{\circ}$ in longitude close to 1~AU. \citet{2021A&A...656A..20K} compared the timing of when the EUV wave intersects the estimated magnetic footpoints of the different S/C with the particle release times from time-shift and velocity dispersion analyses. They found that there was no EUV wave related to the event. The PAD and first-order anisotropy studies at the {\emph{SolO}}, {\emph{STEREO}}-A, and {\emph{Wind}} S/C suggest that diffusive propagation processes were involved. \citet{2022ApJ...930...88N} analyzed multi-S/C observations and included different models and techniques, focusing on building the heliospheric scenario of the CMEs' evolution and propagation and the impact on their internal structure at {\emph{PSP}}. The observations by {\emph{PSP}}, {\emph{STEREO}}-A, and {\emph{Wind}} of type II and III radio burst emissions indicate a significant left-handed polarization, which had not previously been detected in that frequency range. The authors identified the period when the interaction/collision between the CMEs took place by reconstructing the event back at the Sun and simulating it with the WSA-ENLIL+Cone and DBM models. They concluded that both ICMEs interacted elastically while flying by {\emph{PSP}}. The impact of such an interaction on the internal magnetic structure of the ICMEs was also considered. Both ICMEs were fully characterized and 3D-reconstructed with the GCS, elliptical-cylindrical (EC), and circular-cylindrical (CC) models. Aging and expansion effects were included to evaluate the consequences of the interaction on the internal structure.
\citet{2021ApJ...919..119M} investigated key characteristics of the SEP event, such as the time profile and anisotropy distribution of near-relativistic electrons measured by IS${\odot}$IS/EPI-Lo. They observed a brief PAD peak between 40$^{\circ}$ and 90$^{\circ}$, supporting the idea of shock-drift acceleration, and noted that the electron count rate peaks at the time of the shock driven by the faster of the two ICMEs. They concluded that the ICME shock caused the acceleration of the electrons and discussed that the ICMEs show significant electron anisotropies, reflecting the ICMEs' topology and connectivity to the Sun. \citet{2021ApJ...920..123L} studied two characteristics of the shock and their impact on the SEP event intensity: (1) the influence of unrelated solar wind structures, and (2) the role of the sheath region behind the shock. The authors found that, on arrival at {\emph{PSP}}, the SEP event was preceded by an intervening ICME that modified the low-energy ion intensity-time profile and energy spectra. The low-energy ($\lesssim$220~keV) protons accelerated by the shock were excluded from the first ICME, resulting in the observation of inverted energy spectra during its passage. \citet{2021A&A...656A..29C} analyzed the ion spectra during both the decay of the event (where the data are the most complete for H and He) and integrated over the entire event (for O and Fe). They found that the spectra follow a power law multiplied by an exponential, with roll-over energies that decrease with the species' increasing rigidities. These signatures are typically found in SEP events where the dominant source is a CME-driven shock, supported here by the He/H and Fe/O composition ratios. They also identified signatures in the electron spectrum that may suggest the presence of a population trapped between the ICMEs, and pointed out the possibility that the ICMEs were interacting at the time of observation by noting a local ion population with energies up to $\sim1$~MeV.
The SEP intensities dropped significantly during the passage of the MFR and returned to high values once {\emph{PSP}} crossed out of the magnetic structure. \citet{2021A&A...656L..12M} compared detailed measurements of heavy ion intensities, time dependence, fluences, and spectral slopes with 41 events surveyed by \citet{2017ApJ...843..132C} from previous solar cycles. They concluded that an interplanetary shock passage could explain the observed signatures. The observed Fe/O ratios dropped sharply above $\sim$1~MeV nucleon$^{-1}$ to values much lower than the averaged SEP survey. Maximum energies were a few MeV nucleon$^{-1}$, and $^3$He/$^4$He was $<0.03$\% at {\emph{ACE}} and $<1$\% at {\emph{SolO}}. For further details on this SEP event, see the discussion in \S\ref{EPsCMENov}. The second ICME hitting {\emph{PSP}} was also analyzed by \citet{2022ApJ...924L...6M}. The authors combined coronagraph images from {\emph{SOHO}} and {\emph{STEREO}} and applied the GCS model to obtain the ICME geometric and kinematic parameters, computing an average speed of 1637~km~s$^{-1}$ at a heliodistance ranging from 6 to 14~$R_{\odot}$. The ICME arrived at {\emph{PSP}} (0.80~AU and $-96.8^{\circ}$) on 1 Dec. 2020, at 02:22 UT and at {\emph{STEREO}}-A (0.95~AU and $-57.6^{\circ}$) on 1 Dec. 2020, at 07:28 UT. They also considered this event an excellent example of the background wind's influence on the possible deformation and evolution of a fast CME and the longitudinal extension of a high-inclination flux rope. \subsection{Magnetic Flux Ropes}\label{7_mfr} The {\emph{in situ}} solar wind measurements show coherent and clear rotations of the magnetic field components at different time scales. These magnetic structures are well known as MFRs. According to their durations and sizes, MFRs are categorized as LFRs \citep[few hours to few days;][]{2014SoPh..289.2633J} and SFRs \citep[tens of minutes to a few hours;][]{2000GeoRL..27...57M}.
At 1~AU, it has been found that 30\% to 60\% of the large-scale MFRs are related to CMEs \citep[][]{1990GMS....58..343G,2010SoPh..264..189R}. This subset of MFRs is known as MCs \citep{1988JGR....93.7217B}. On the other hand, the SFRs' origin is not well understood. Several studies proposed that SFRs are produced in the near vicinity of the Sun, while others consider turbulence a potential SFR source \citep[{\emph{e.g.}},][]{2019ApJ...881L..11P} or propose that SFRs are related to, and originate from, SBO-CMEs. It is worth noting that observations suggest that SBO-CMEs last a few hours, a time scale that falls in the SFR category. To identify SFRs, \citet{2020ApJS..246...26Z} analyzed the magnetic field and plasma data from {\emph{PSP}}'s first orbit from 22 Oct. to 21 Nov. 2018. They identified 40 SFRs by following the method described by \citet{2012ApJ...751...19T}. They applied a Morlet wavelet analysis technique to estimate SFR durations, which range from 8 to 300 minutes. This statistical analysis suggests that the SFRs are primarily found in the slow solar wind, and their possible source is MHD turbulence. For the third and fourth orbits, they identified a total of 21 and 34 SFRs, respectively \citep{2021A&A...650A..12Z}, including their relation to the streamer belt and HCS crossings. Alternatively, \citet{2020ApJ...903...76C} identified 44 SFRs by implementing an automated detection method based on the Grad-Shafranov reconstruction technique \citep{2001GeoRL..28..467H,2002JGRA..107.1142H} over the {\emph{in situ}} measurements at a 28-second cadence. They looked for the double-folding pattern in the relation between the transverse pressure and the magnetic vector potential axial component and removed highly Alfv\'enic structures with a threshold condition on a Wal\'en test. The SFRs were identified during the first two {\emph{PSP}} Encs. over the periods 31 Oct. $-$ 19 Dec. 2018 ($\sim0.26-0.81$~AU), and 7 Mar.
$-$ 15 May 2019 ($\sim0.66-0.78$~AU) with durations ranging from 5.6 to 276.3 min. They found that the monthly counts at {\emph{PSP}} (27 per month) are notably lower than the average monthly counts at {\emph{Wind}} (294 at 1~AU). The authors also noticed that some of the detected SFRs are related to magnetic reconnection processes \citep[two reported by][]{2020ApJS..246...34P} and HCS \citep[three reported by][]{2020ApJS..246...47S}. They argue that the SFR occurrence rate (being far less than at 1~AU) and a power-law tendency of the size-scales point towards an SFR origin from MHD turbulence, but note that the number of events analyzed is not sufficient to yield a statistically significant result. Twelve SFRs were also identified with the method proposed by \citet{2020ApJS..246...26Z}, with similar durations and two cases with opposite helicity. \subsection{Remote Sensing}\label{7_rs} \subsubsection{Introduction} \label{InstIntro} {\emph{PSP}}/WISPR is a suite of two white light telescopes akin to the heliospheric imagers \citep[HI-1 and HI-2;][]{2009SoPh..254..387E} of {\emph{STEREO}}/SECCHI \citep{2008SSRv..136....5K}. The WISPR telescopes look off to the ram side of the S/C ({\emph{i.e.}}, in the direction of motion of {\emph{PSP}} in its counter-clockwise orbit about the Sun). When {\emph{PSP}} is in its nominal attitude ({\emph{i.e.}}, Sun-pointed and ``unrolled''), their combined FOV covers the interplanetary medium on the West side of the Sun, starting at a radial elongation of about $13.5^\circ$ from the Sun and extending up to about $108^\circ$. The FOV of WISPR-i extends up to $53.5^\circ$, while the FOV of WISPR-o starts at $50^\circ$ elongation, both telescopes covering about $40^\circ$ in latitude. Since the angular size of the Sun increases as {\emph{PSP}} gets closer to the Sun, the radial offset from Sun center represents different distances in units of solar radii. For example, on 24 Dec.
2024, at the closest approach of {\emph{PSP}} of 0.046~AU ($9.86~R_\odot$), the offset of $13.5^\circ$ will correspond to $\sim2.3~R_\odot$. \subsubsection{Streamer Imaging with WISPR} \citet{2019Natur.576..232H} reported on the first imaging of the solar corona taken by WISPR during {\emph{PSP}}’s first two solar Encs. ($0.16-0.25$~AU). The imaging revealed that both the large and small scale topology of streamers can be resolved by WISPR and that the temporal variability of the streamers can be clearly isolated from spatial effects when {\emph{PSP}} is corotating with the Sun. \citet{2020ApJS..246...60P}, by exploiting synoptic maps based on sequential WISPR images, revealed the presence of multiple substructures (individual rays) inside streamers and pseudostreamers. This substructure of the streamers was noted in other studies \citep{2006ApJ...642..523T, 2020ApJ...893...57M}. Noteworthy in the WISPR synoptic maps was the identification of a bright and narrow set of streamer rays located at the core of the streamer belt \citep{2020ApJS..246...60P,2020ApJS..246...25H}. The thickness of this bright region matches the thickness of the heliospheric plasma sheet (HPS) measured in the solar wind (up to 300~Mm) at times of sector boundary crossings \citep{1994JGR....99.6667W}. Thus, WISPR may offer the first clear-cut connection between coronal imaging of streamers and the {\emph{in situ}} measurements of the rather narrow HPS. Global PFSS and MHD models of the solar corona during the {\emph{PSP}} Encs. generally agree with the large-scale structure inferred from remote sensing observations \citep[{\emph{e.g.}},][]{2019ApJ...874L..15R,2020ApJS..246...60P}. As noted above, they have been used to interpret streamer sub-structure \citep{2020ApJS..246...60P} observed in WISPR observations, as well as during eclipses \citep{2018NatAs...2..913M}.
Equally importantly, they have been used to connect remote solar observations with their {\emph{in~situ}} counterparts (\S\ref{LSSSWHCS}). Comparisons with white-light and, more importantly, emission images provide crucial constraints for models that include realistic energy transport processes \citep{2019ApJ...872L..18V,2019ApJ...874L..15R}. They have already led to the improvement of coronal heating models \citep{2021A&A...650A..19R}, resulting in better matches with {\emph{in~situ}} measurements during multiple {\emph{PSP}} Encs. Images taken from a vantage point situated much closer to the Sun provide more detailed information on the population of transient structures released continually by helmet streamers. The fine-scale structure of streamer blobs is better resolved by WISPR than by previous generations of heliospheric imagers. In addition, the WISPR images have revealed that small-scale transients, with aspects that are reminiscent of magnetic islands and/or twisted 3D magnetic fields, are emitted at scales smaller than those of streamer blobs \citep{2019Natur.576..232H}. These very small flux ropes were identified {\emph{in situ}} as common structures during crossings of the HPS \citep{2019ApJ...882...51S} and more recently at {\emph{PSP}} \citep{2020ApJ...894L..19L}. They may also relate to the quasi-periodic structures detected by \citet{2015ApJ...807..176V} -- on-going research is evaluating this hypothesis. Recent MHD simulations have shown that the flux ropes observed in blobs and the magnetic islands between quasi-periodic increases in density could result from a single process known as the tearing-mode instability as the HCS is stretched by the adjacent out-flowing solar wind \citep{2020ApJ...895L..20R}. \subsubsection{Coronal Mass Ejection Imaging with WISPR} \begin{figure*} \centering \includegraphics[width=\textwidth]{RemoteSensing_hess.png} \caption{Multi-S/C observations of the first CME imaged by WISPR, on 1 Nov. 2018.
(a) {\emph{SDO}}/AIA imaging of the CME. Different features are visible in each panel, including the dark, circular cavity (193 \AA; 1.6 MK and 20 MK), the bright trailing edge (131 \AA; 0.4 MK and 10 MK), a bright blob that is co-spatial with the cavity (171 \AA; 0.6 MK) and the prominence at the base of the eruption (304 \AA; 0.05 MK). The black line in the 193 \AA \ frame was used to calculate the size of the cavity. The white line in the 171 \AA \ frame is the approximate direction of motion and was used to measure the height and calculate the velocity of the cavity in AIA. (b) The CME as seen by the {\emph{SOHO}}/LASCO-C2 and -C3 coronagraphs. (c) The CME as seen by both {\emph{PSP}}/WISPR telescopes. The white line denotes the solar equatorial plane. The curvature of the line in WISPR-o is due to the distortion of the detector. Figure adapted from \citet{2020ApJS..246...25H}.} \label{FIG_NOV1} \end{figure*} Within a few hours of being turned on in preparation for the first {\emph{PSP}} perihelion passage, the WISPR imager observed its first CME. The WISPR cameras began taking science data at 00:00 UT on 1 Nov. 2018. By 11:00 UT, a CME was visible in the inner telescope \citep{2020ApJS..246...25H}. Over the course of the next two days, the CME propagated along the solar equatorial plane throughout both WISPR telescopes, spanning $13.5^{\circ}-108.5^{\circ}$, with a speed of about 300~km~s$^{-1}$, consistent with SBO-CMEs \citep{2018ApJ...861..103V}. The WISPR observations are included in Fig.~\ref{FIG_NOV1}. The CME was also observed from the Earth perspective by the {\emph{SDO}}/AIA EUV imager and the {\emph{SOHO}}/LASCO coronagraphs. In AIA, a small prominence was observed beneath a cavity, which slowly rose from the west limb in a non-radial direction. The cavity and prominence are both visible in the left panel of Fig.~\ref{FIG_NOV1}.
As this structure enters the LASCO-C2 FOV, the cavity remains visible, as does a bright claw-like structure at its base, as seen throughout the middle panel of Fig.~\ref{FIG_NOV1}. The non-radial motion continues until the CME reaches the boundary of an overlying helmet streamer, at which point the CME is deflected out through the streamer along the solar equatorial plane. Because of the alignment of the S/C at the time of the eruption, WISPR was able to see the CME from a similar perspective as LASCO, but from a quarter of the distance. The inner FOV from WISPR was within the LASCO-C3 FOV, meaning that for a brief time WISPR and C3 observations were directly comparable. These direct comparisons demonstrate the improved resolution possible, even in a weaker event, from a closer observational position. This can be seen directly in Fig.~\ref{FIG_NOV1} in the LASCO frame at 17:16 UT and the WISPR-i frame at 17:15~UT. The clarity of the observations of the CME cavity in WISPR allowed for tracking of the cavity out to $40~R_\odot$, as well as detailed modeling of the internal magnetic field of the CME \citep{2020ApJS..246...72R}. Both studies would have been impossible without the details provided by WISPR imaging. \begin{figure*} \centering \includegraphics[width=\textwidth]{RemoteSensing_Wood.png} \caption{(a) A LASCO/C3 image from 5 Nov. 2018 showing two small streamer blobs marked by the two arrows. The northern blob (red arrow) is observed by {\emph{PSP}}/WISPR during {\emph{PSP}}'s first perihelion passage. (b) The upper panels are a sequence of four images from WISPR's inner detector of the northern streamer blob eruption from (a), with dotted lines outlining the transient. Synthetic images are shown below the real images, based on the 3D reconstruction of the event. (c) Reconstructed 3D flux rope structure of the streamer blob. The flux rope is shown at two times, t1 and t2, corresponding to 03:48 UT and 09:40 UT, respectively.
The red circles indicate the location of {\emph{PSP}} at these two times, and the size of the Sun is to scale. Figure adapted from \citet{2020ApJS..246...28W}.} \label{Wood2020Fig1} \end{figure*} Another transient observed by WISPR during the first {\emph{PSP}} perihelion passage was a small eruption seen by the WISPR-i detector on 5 Nov. 2018, only a day before {\emph{PSP}}'s first close perihelion passage \citep{2020ApJS..246...28W}. As shown in Fig.~\ref{Wood2020Fig1}(a), the LASCO/C3 coronagraph on board {\emph{SOHO}} observed two small jet-like eruptions on that day, with the northern of the two (red arrow) corresponding to the one observed by WISPR. The appearance of the event from 1~AU is very consistent with the class of small transients called ``streamer blobs'' \citep{1997ApJ...484..472S,1998ApJ...498L.165W}, although it is also listed in catalogs of CMEs compiled from {\emph{SOHO}}/LASCO data, and so could also be described as a small CME. At the time of the CME, {\emph{PSP}} was located just off the right side of the LASCO/C3 image in Fig.~\ref{Wood2020Fig1}a, lying almost perfectly in the C3 image plane. The transient's appearance in WISPR images is very different from that provided by the LASCO/C3 perspective, being so much closer to both the Sun and the event itself. This is the first close-up image of a streamer blob. In the WISPR images in Fig.~\ref{Wood2020Fig1}b, the transient is not jet-like at all. Instead, it looks very much like a flux rope, with two legs stretching back toward the Sun, although one of the legs of the flux rope mostly lies above the WISPR FOV. This leg basically passes over {\emph{PSP}} as the transient moves outward.
A 3D reconstruction of the flux rope morphology of the transient is shown in Fig.~\ref{Wood2020Fig1}c, based not only on the LASCO/C3 and {\emph{PSP}}/WISPR data, but also on images from the COR2 coronagraph on {\emph{STEREO}}-A, making this the first CME reconstruction performed based on images from three different perspectives that include one very close to the Sun. Although typical of streamer blobs in appearance, a kinematic analysis of the 5 Nov. 2018 event reveals that it has a more impulsive acceleration than previously studied blobs. \subsubsection{Analysis of WISPR Coronal Mass Ejections} \label{intro} The rapid, elliptical orbit of {\emph{PSP}} presents new challenges for the analysis of the white light images from WISPR due to the changing distance from the Sun. While the FOVs of WISPR’s two telescopes are fixed in angular size, the physical size of the coronal region imaged changes dramatically, as discussed in \citet{2019SoPh..294...93L}. In addition, because of {\emph{PSP}}’s rapid motion in solar longitude, the projected latitude of a feature changes, as seen by WISPR, even if the feature has a constant heliocentric latitude. Because of these effects, techniques used in the past for studying the kinematics of solar ejecta may no longer be sufficient. The motion observed in the images is now a combination of the motion of the ejecta and that of the S/C. On the other hand, the rapid motion gives multiple view points of coronal features, and this can be exploited using triangulation. Prior to launch, synthetic white light WISPR images, created using sophisticated ray-tracing software \citep{2009SoPh..256..111T}, were used to develop new techniques for analyzing observed motions of ejecta. \citet{2020SoPh..295...63N} performed extensive studies of the evolution of the brightness due to the motion of both the S/C and the feature.
They concluded that the total brightness evolution could be exploited to obtain a more precise triangulation of the observed features than might be possible otherwise. \begin{figure} \includegraphics[width=\textwidth]{RemoteSensing_PCL1.png} \caption{WISPR-i running-difference images at two times for the CME of 2 Apr. 2019 showing the tracked feature, the lower dark ``eye'' (marked with red X's). The image covers approximately $13.5^{\circ} - 53.0^{\circ}$ elongation from the Sun center. The streaks seen in the images are due to reflections off debris created by dust impacts on the {\emph{PSP}} S/C.} \label{figPCL1} \end{figure} \smallskip \paragraph{{\textbf{Tracking and Fitting Technique for Trajectory Determination}}} $\\$ \citet{2020SoPh..295..140L} developed a technique for determining the trajectories of CMEs and other ejecta that takes into account the rapid motion of {\emph{PSP}}. The technique assumes that the ejecta, treated as a point, moves radially at a constant velocity. This technique builds on techniques developed for the analysis of J-maps \citep{1999JGR...10424739S} created from LASCO and SECCHI white light images. For ejecta moving radially at a constant velocity in a heliocentric frame, there are four trajectory parameters: longitude, latitude, velocity, and radius (distance from the Sun) at some time $t_0$. Viewed from the S/C, the ejecta is seen to change position in a time sequence of images. The position in the image can be defined by two angles that specify the telescope’s LOS at that pixel location. We use a projective cartesian observer-centric frame of reference that is defined by the Sun-{\emph{PSP}} vector and the {\emph{PSP}} orbit plane. One angle ($\gamma$) measures the angle from the Sun parallel to the {\emph{PSP}} orbit plane, and the second angle ($\beta$) measures the angle out of the orbit plane. We call this coordinate system the {\emph{PSP}} orbit frame.
Using basic trigonometry, two equations were derived relating the coordinates in the heliocentric frame to those measured in the S/C frame ($\gamma$, $\beta$) as a function of time. The geometry relating the ejecta's coordinates in the two frames is shown in Fig.~1 of \citet{2020SoPh..295..140L} for the case with the inclination of {\emph{PSP}}’s orbit plane w.r.t. the solar equatorial plane neglected. The coordinates of the S/C are $[r_1, \phi_1, 0]$, and the coordinates of the ejecta are $[r_2, \phi_2, \delta_2]$. The two equations are \begin{equation} \frac{\tan\beta (t)}{\sin\gamma (t)} = \frac{\tan\delta_2}{\sin[\phi_2 - \phi_1 (t)]}, \end{equation} \begin{equation} \cot\gamma(t) = \frac{r_1(t) - r_2(t)\cos\delta_2 \cos[\phi_2 - \phi_1(t)]}{r_2(t)\cos\delta_2 \sin[\phi_2 - \phi_1(t)]}. \end{equation} By tracking the point ejecta in a time sequence of WISPR images, we generate a set of angular coordinates $[\gamma(t_i), \beta(t_i)]$ for the ejecta in the {\emph{PSP}} orbit frame. In principle, ejecta coordinates in the S/C frame are only needed at two times to solve the above two equations for the four unknown trajectory parameters. However, we obtain more accurate results by tracking the ejecta in considerably more than two images. The ejecta trajectory parameters in the heliocentric frame are determined by fitting the above equations to the tracking data points $[\gamma(t_i), \beta(t_i)]$. Our fitting technique is described in \citet{2020SoPh..295..140L}, which also gives the equations with the corrections for the inclination of {\emph{PSP}}’s orbit w.r.t. the solar equatorial plane. \begin{figure} \includegraphics[width=0.66\textwidth]{RemoteSensing_PCL2.png} \caption{Trajectory solution for the 2 Apr. 2019 flux rope (magenta arrow) shown in relation to {\emph{PSP}}, {\emph{STEREO}}-A, and Earth at 18:09 UT. The fine solid lines indicate the fields-of-view of the telescopes on {\emph{PSP}} and {\emph{STEREO-A}}. 
The CME direction was found to be HCI longitude $= 67^{\circ} \pm 1^{\circ}$. Note that the arrow only indicates the direction of the CME; it is not meant to indicate the distance from the Sun. The HCI longitudes of the Earth and {\emph{STEREO}}-A are 117$^{\circ}$ and 19$^{\circ}$, respectively. The blue dashed ellipse is {\emph{PSP}}'s orbit. The plot is in the Heliocentric Earth Ecliptic (HEE) reference frame and distances are in AU.} \label{figPCL2} \end{figure} \begin{figure} \includegraphics[width=.66\textwidth]{RemoteSensing_PCL3.png} \caption{Trajectory of the flux rope of 2 Apr. 2019, found from the WISPR data using the tracking and fitting technique, projected to images from {\emph{STEREO}}-A/HI-1 at 18:09 UT. The trajectory from tracking and fitting ({\textcolor{red}{\bf{+}}} signs) is shown from 12:09 to 18:09 UT in hourly increments, as seen from {\emph{STEREO}}-A. The location of the prediction for the last time (18:09 UT) is in good agreement with the location of the tracked feature seen in the HI-1A image, thus verifying the trajectory. The grid lines are the coordinate lines of the WCS frame specified in the HI-1A FITS header. The size and location of the Sun (yellow globe) are shown to scale.} \label{figPCL3} \end{figure} The tracking and fitting technique was applied to a small CME seen by WISPR in the second solar Enc. on 1-2 Apr. 2019 \citep{2020SoPh..295..140L}. Fig.~\ref{figPCL1} shows two of the WISPR images used in the tracking; the feature tracked, an eye-like dark oval, is marked with red X's. The direction of the trajectory found for this CME is indicated with an arrow in Fig.~\ref{figPCL2}, which also shows the location and fields of view of {\emph{STEREO}}-A and {\emph{PSP}}. The trajectory solution in heliocentric inertial (HCI) coordinates was longitude $= 67 \pm 1^{\circ}$, latitude $= 6.0 \pm 0.3^{\circ}$, $V = 333 \pm 1$~km~s$^{-1}$, and $r_{2}(t_0) = 13.38 \pm 0.01$\,R$_{\odot}$, where $t_0$ = 12:09 UT on 2 Apr. 2019.
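The fitting step can be illustrated with a minimal numerical sketch. All numbers below (spacecraft ephemeris, trajectory parameters, starting guess) are synthetic values chosen for illustration, not the published analysis: noiseless tracking angles $[\gamma(t_i),\beta(t_i)]$ are generated from the two equations above for a known trajectory, and a least-squares fit recovers the four parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def predict_angles(params, t, r1, phi1):
    """Predicted (gamma, beta) for a point ejecta moving radially at constant
    speed, from the two frame-transformation equations above.
    params = [phi2, delta2, v, r20]: heliocentric longitude and latitude of
    the ejecta [rad], radial speed, and distance at t = 0 (same units as r1)."""
    phi2, delta2, v, r20 = params
    r2 = r20 + v * t                              # radial motion, constant speed
    dphi = phi2 - phi1
    # cot(gamma) = (r1 - r2 cos(delta2) cos(dphi)) / (r2 cos(delta2) sin(dphi))
    gamma = np.arctan2(r2 * np.cos(delta2) * np.sin(dphi),
                       r1 - r2 * np.cos(delta2) * np.cos(dphi))
    # tan(beta) / sin(gamma) = tan(delta2) / sin(dphi)
    beta = np.arctan(np.tan(delta2) * np.sin(gamma) / np.sin(dphi))
    return gamma, beta

def residuals(params, t, r1, phi1, g_obs, b_obs):
    g, b = predict_angles(params, t, r1, phi1)
    return np.concatenate([g - g_obs, b - b_obs])

# Synthetic spacecraft ephemeris (distances in R_sun, angles in rad, t in hours)
t = np.linspace(0.0, 6.0, 13)
r1 = 35.0 - 0.2 * t                               # S/C moving sunward
phi1 = np.deg2rad(10.0 + 1.5 * t)                 # rapid drift in longitude
truth = np.array([np.deg2rad(67.0), np.deg2rad(6.0), 1.7, 13.4])
g_obs, b_obs = predict_angles(truth, t, r1, phi1)  # synthetic "tracking data"

# Fit the four trajectory parameters from the tracked angles
fit = least_squares(residuals, x0=[np.deg2rad(50.0), 0.1, 1.0, 10.0],
                    args=(t, r1, phi1, g_obs, b_obs))
print(np.degrees(fit.x[:2]), fit.x[2:])
```

With more than two observation times the system is overdetermined, which is why, as noted above, tracking the ejecta in considerably more than two images gives more accurate results than the minimal two-image solution when the measured angles are noisy.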
There were simultaneous observations of the CME from {\emph{STEREO}}-A and {\emph{PSP}}, which enabled us to use the second viewpoint observation of {\emph{STEREO}}-A/HI-1 to verify the technique and the results. This was done by generating a set of 3D trajectory points using the fitting solution above that included the time of the {\emph{STEREO}}-A/HI-1 observations. These trajectory points were then projected onto an image from HI-1A using the World Coordinate System (WCS) information in the Flexible Image Transport System (FITS) image header. This is illustrated in Fig.~\ref{figPCL3}, which shows the trajectory points generated from the solutions from 12:09 to 18:09 UT in hourly increments projected onto the HI-1A image at 18:09 UT on 2 Apr. 2019. Note that the last point, corresponding to the time of the HI-1A image, falls quite near a similar feature on the CME as was tracked in the WISPR images (Fig.~\ref{figPCL1}). Thus, the trajectory determined from the WISPR data agrees with the {\emph{STEREO}}-A observations from a second view point. This technique was also applied to the first CME seen by WISPR on 2 Nov. 2018. Details of the tracking and the results are in \citet{2020SoPh..295..140L}, and independent analyses of the CME kinematics and trajectory for the 2 Apr. 2019 event were carried out by \citet{2021A&A...650A..31B} and \citet{2021ApJ...922..234W}, with similar results. \begin{figure} \includegraphics[width=.66\textwidth]{RemoteSensing_PCL4.png} \caption{ Trajectory of the 26-27 Jan. 2020 CME (magenta arrow) in relation to {\emph{PSP}}, {\emph{STEREO}}-A, and Earth, projected in the HEE reference frame on 26 Jan. 2020, at 20:49 UT. The fields-of-view of the WISPR-i and WISPR-o telescopes on {\emph{PSP}} and COR2 and HI-1 on {\emph{STEREO}}-A are indicated by solid lines. 
The plot is in the HEE coordinate frame and distances are in AU.} \label{figPCL4} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{RemoteSensing_PCL5.png} \caption{Image pair used to determine the location of the 26-27 Jan. 2020 CME by triangulation. Left: WISPR-o image of 26 Jan. 2020, at 20:49 UT with the selected feature circled in red. Right: Simultaneous HI-1 image showing the location of the same feature identified in the WISPR-o image. The location of the CME found by triangulation of this feature was in excellent agreement with the trajectory found from tracking and fitting (see main text). The HI-1 image is projected in the Helioprojective Cartesian (HPC) system (red and blue grid lines) with the Sun (yellow globe) drawn to scale.} \label{figPCL5} \end{figure} \smallskip \paragraph{{\textbf{WISPR and {\emph{STEREO}} Observations of the Evolution of a Streamer Blowout CME}}} $\\$ WISPR obtained detailed images of the flux rope structure of a CME on 26-27 Jan. 2020. The tracking and fitting procedure was also used here to determine the trajectory. The direction of the trajectory is shown in Fig.~\ref{figPCL4}, along with the locations and FOVs of {\emph{STEREO}}-A and WISPR. The trajectory solution parameters are HCI longitude and latitude = ($65^{\circ} \pm 2^{\circ}$, $2^{\circ} \pm 2^{\circ}$), $v = 248 \pm 16$~km~s$^{-1}$, and $r_{2}(t_0)/R_{\odot} = 30.3 \pm 0.3$ at $t_0$ = 20:04 UT on 26 Jan. 2020. The CME was also observed by {\emph{STEREO}}-A/COR2 starting on 25 Jan. 2020. The brightening of the streamer before the eruption, the lack of a bright leading edge, and the outflow following the ejecta led to its identification as an SBO-CME \citep{2021A&A...650A..32L}. Data from {\emph{STEREO}}-A/EUVI suggested that this CME originated as a rising flux rope on 23 Jan. 2020, which was constrained in the corona until its eruption on 25 Jan. 2020.
The detail of the observations and the supporting data from AIA and the Helioseismic and Magnetic Imager \citep[HMI;][]{2012SoPh..275..207S} on {\emph{SDO}} can be found in \citet{2021A&A...650A..32L}. The direction determined from the tracking and fitting was consistent with this interpretation; the direction was approximately $30^{\circ}$ west of a new AR, which was also a possible source. To verify the CME’s trajectory determined by tracking and fitting, we again made use of simultaneous observations from {\emph{STEREO}}-A, but in this case we used triangulation to determine the 3D location of the CME at the time of a simultaneous image pair. This was only possible because details of the structure of the CME were evident from both viewpoints, so that the same feature could be located in both images. Fig.~\ref{figPCL5} shows the simultaneous WISPR-o and HI-1A images of the CME on 26 Jan. 2020, at 20:49 UT, when the S/C were separated by 46$^{\circ}$. The red X in each image marks what we identify as the same feature in both images (a dark spot behind the bright V). Applying a triangulation technique to this image pair, \citet{2021A&A...650A..32L} obtained a distance from the Sun of $r/R_{\odot} = 31 \pm 2$ and HCI longitude and latitude = ($66^{\circ}\pm 3^{\circ}$, $-2^{\circ} \pm 2^{\circ}$). These angles are in excellent agreement with the longitude found by tracking and fitting given above. The distance from the Sun is also in excellent agreement with the predicted distance at this time of $r_2/R_{\odot} = 31.2 \pm 0.3$, validating our trajectory solution. Thus the trajectory was confirmed, which further supported our interpretation of the evolution of this slowly evolving SBO-CME. \section{Solar Radio Emission} \label{SRadE} At low frequencies, below $\sim10-20$ MHz, radio emission cannot be observed well from ground-based observatories due to the terrestrial ionosphere.
Solar radio emission at these frequencies consists of radio bursts, which are signatures of the acceleration and propagation of non-thermal electrons. Type II and type III radio bursts are commonly observed, with the former resulting from electrons accelerated at shock fronts associated with CMEs, and the latter from electron beams accelerated by solar flares (see Fig.~\ref{Wiedenbeck2020Fig}C). Solar radio bursts offer information on the kinematics of the propagating source, and are a remote probe of the properties of the local plasma through which the source is propagating. Radio observations on {\emph{PSP}} are made by the FIELDS RFS, which measures electric fields from 10 kHz to 19.2 MHz \citep{2017JGRA..122.2836P}. At frequencies below and near $f_{pe}$, the RFS measurements are dominated by the quasi-thermal noise (QTN). {\emph{PSP}} launched at solar minimum, when the occurrence rate of solar radio bursts is relatively low. Several {\emph{PSP}} Encs. (Enc.~1, Enc.~3, Enc.~4) near the start of the mission were very quiet in radio, containing only a few small type III bursts. The second {\emph{PSP}} Enc. (Enc.~2), in Apr. 2019, was a notable exception, featuring multiple strong type III radio bursts and a type III storm \citep{2020ApJS..246...49P}. As solar activity began rising in late 2020 and 2021, with Encs.~5 and beyond, the occurrence of radio bursts also increased. Taking advantage of the quiet Encs. near the start of the mission, \citet{ChhabraThesis} applied a correlation technique developed by \citet{2013ApJ...771..115V} for imaging data to RFS light curves, searching for evidence of heating of the corona by small-scale nanoflares that are too faint to appear to the eye in RFS spectrograms. During {\emph{PSP}} Encs., the cadence of RFS spectra is typically 3.5 or 7 seconds, which is higher than the typical cadence of radio spectra available from previous S/C such as {\emph{STEREO}} and {\emph{Wind}}.
The relatively high cadence of RFS data is particularly useful in the study of type III radio bursts above 1 MHz (in the RFS High Frequency Receiver [HFR] range), which typically last $\lesssim1$~minute at these frequencies. Using the HFR data enabled \citet{2020ApJS..246...49P} to measure circular polarization near the start of several type III bursts in Enc.~2. \citet{2020ApJS..246...57K} characterized the decay times of type III radio bursts up to 10 MHz, observing increased decay times above 1 MHz compared to extrapolation using previous measurements from {\emph{STEREO}}. Modeling suggests that these decay times may correspond to increased density fluctuations near the Alfv\'en point. Recent studies have used RFS data to investigate basic properties of type III bursts, in comparison to previous observations and theories. \citet{2021ApJ...915L..22C} examined in detail a single Enc.~2 type IIIb burst featuring fine structure (striae). They found consistency between the RFS observations and the results of a model with emission generated via the electron cyclotron maser instability \citep{2004ApJ...605..503W}, over the several-MHz frequency range corresponding to solar distances where $f_{ce} > f_{pe}$. \citet{2021ApJ...913L...1M} performed a statistical survey of the lower cutoff frequency of type III bursts using the first five {\emph{PSP}} Encs., finding a higher average cutoff frequency than previous observations from {\emph{Ulysses}} and {\emph{Wind}}, and proposed several explanations for this discrepancy, including solar cycle and event selection effects. The launch of {\emph{SolO}} in Feb. 2020 marked the first time four S/C ({\emph{Wind}}, {\emph{STEREO}}-A, {\emph{PSP}}, and {\emph{SolO}}) with radio instrumentation were operational in the inner heliosphere.
\cite{2021A&A...656A..34M} combined observations from these four S/C along with a model of the burst emission to determine the directivity of individual type III radio bursts, a measurement previously only possible using statistical analysis of large numbers of bursts. \section{Energetic Particles} \label{EPsRad} The first four years of the {\emph{PSP}} mission have provided key insights into the acceleration and transport of energetic particles in the inner heliosphere and have enabled a comprehensive understanding of the variability of solar radio emission. {\emph{PSP}} observed a multitude of solar radio emissions, SEP events, CMEs, CIRs and SIRs, inner heliospheric anomalous cosmic rays (ACRs), and energetic electron events; all of which are critical to explore the fundamental physics of particle acceleration and transport in the near-Sun environment and throughout the universe. \subsection{Solar Energetic Particles} \label{SEPs} On 2 and 4 Apr. 2019, {\emph{PSP}} observed two small SEP events \citep{2020ApJS..246...35L, 2020ApJ...899..107K, 2020ApJ...898...16Z} while at $\sim0.17$~AU (Fig.~\ref{Leske2020Fig}). The event on 4 Apr. 2019 was associated with both a type III radio emission seen by {\emph{PSP}} and surges in the EUV observed by {\emph{STEREO}}-A, all of which indicated that the source was an AR $\sim80^{\circ}$ east of the {\emph{PSP}} footpoint \citep{2020ApJS..246...35L}. To better understand the origin of these SEP events, \citet{2020ApJ...899..107K} conducted a series of simulations constrained by remote sensing observations from {\emph{SDO}}/AIA, {\emph{STEREO}}-A/EUVI and COR2, {\emph{SOHO}}/LASCO, and {\emph{PSP}}/WISPR to determine the magnetic connectivity of {\emph{PSP}}, model the 3D structure and evolution of the EUV waves, investigate possible shock formation, and connect these simulations to the SEP observations. This robust simulation work suggests that the SEP events were from multiple ejections from AR 12738. The 2 Apr.
2019 event likely originated from two ejections that formed a shock in the lower corona \citep{2020ApJ...899..107K}. Meanwhile, the 4 Apr. 2019 event was likely the result of a slow SBO, which reconfigured the global magnetic topology to be conducive for transport of solar particles away from the AR toward {\emph{PSP}}. Interestingly, however, \citet{2020ApJS..246...35L} did not observe \textsuperscript{3}He for this event, as would be expected from flare-related SEPs. \citet{2020ApJ...898...16Z} explained the gradual rise of the 4 Apr. 2019 low-energy H\textsuperscript{+} event compared to the more energetic enhancement on 2 Apr. 2019 as being indicative of different diffusion conditions. \begin{figure*} \centering \includegraphics[width=\textwidth]{Leske2020Fig.jpg} \caption{IS$\odot$IS/EPI-Lo time-of-flight measurements for the two SEP events on 2 Apr. 2019 (DOY 92) and 4 Apr. 2019 (DOY 94) are shown in green, blue, and red for the stated energies. IS$\odot$IS/EPI-Hi/LET1 observations are shown in black. Figure adapted from \citet{2020ApJS..246...35L}.} \label{Leske2020Fig} \end{figure*} The same AR (AR 12738) was later responsible for a \textsuperscript{3}He-rich SEP event on 20-21 Apr. 2019 observed by {\emph{PSP}} at $\sim0.46$~AU that was also measured by {\emph{SOHO}} at $\sim1$~AU (shown in Fig.~\ref{Wiedenbeck2020Fig}) \citep{2020ApJS..246...42W}. This SEP event was observed along with type III radio bursts and helical jets. The \textsuperscript{3}He/\textsuperscript{4}He ratios at {\emph{PSP}} and {\emph{SOHO}} were $\sim250$ times the nominal solar wind ratio; such large enhancements are often seen in impulsive SEP events. This event demonstrated the utility of IS$\odot$IS/EPI-Hi to contribute to our understanding of the radial evolution of \textsuperscript{3}He-rich SEP events, which can help constrain studies of potential limits on the amount of \textsuperscript{3}He that can be accelerated by an AR \citep[{\emph{e.g.}},][]{2005ApJ...621L.141H}.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Wiedenbeck2020Fig.jpg} \caption{Remote and {\emph{in situ}} observations for the 20-21 Apr. 2019 \textsuperscript{3}He-rich SEP event. (a) Jet onset times and CME release times as reported by \citet{2020ApJS..246...33S}, (b) 5-min $0.05-0.4$~nm (blue) and $0.1-0.8$~nm (red) X-ray flux from the Geostationary Operational Environmental Satellite ({\emph{GOES}}), (c) radio emissions from {\emph{Wind}}/WAVES \citep{1995SSRv...71..231B}, (d) electron fluxes for 53 (black), 79 (red), and 133 (blue) keV from the {\emph{ACE}} Electron Proton Alpha Monitor \citep[EPAM;][]{1998SSRv...86..541G}, (e) velocity dispersion with red line indicating the dispersion slope from the {\emph{ACE}} Ultra Low Energy Isotope Spectrometer \citep[ULEIS;][]{1998SSRv...86..409M}, (f) {\emph{ACE}}/ULEIS 1 MeV He flux, (g) {\emph{ACE}}/ULEIS He mass vs. time, and (h) {\emph{PSP}}/IS$\odot$IS/EPI-Hi mass vs. time. Grey boxes in panel (h) indicate times without IS$\odot$IS observations. Figure adapted from \citet{2020ApJS..246...42W}.} \label{Wiedenbeck2020Fig} \end{figure*} \citet{2021A&A...650A..23C} later investigated the helium content of six SEP events from May to Jun. 2020 during the fifth orbit of {\emph{PSP}}. These events demonstrated that SEP events, even from the same AR, can have significantly different \textsuperscript{3}He/\textsuperscript{4}He and He/H ratios. Additionally, EUV and coronagraph observations of these events suggest that the SEPs were accelerated very low in the corona. Using velocity-dispersion analysis, \citet{2021A&A...650A..26C} concluded that the path length traveled by these SEPs from their source was $\sim0.625$~AU, greatly exceeding the length of a simple Parker spiral.
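Velocity-dispersion analysis rests on a simple relation: a particle of speed $v$ released at time $t_0$ that travels a field-line path length $L$ arrives at $t(v) = t_0 + L/v$, so a linear fit of onset time against $1/v$ gives $L$ as the slope and $t_0$ as the intercept. A minimal sketch (synthetic onset times; the particle speeds and release time below are hypothetical, chosen only to recover the quoted $\sim0.625$~AU):

```python
import numpy as np

AU_KM = 1.495979e8  # km in one AU

def vda_path_length(onset_times_s, speeds_km_s):
    """Fit onset time vs. 1/v: t(v) = t0 + L/v.
    Returns (path length in AU, release time t0 in s)."""
    inv_v = 1.0 / np.asarray(speeds_km_s, dtype=float)
    L_km, t0 = np.polyfit(inv_v, np.asarray(onset_times_s, dtype=float), 1)
    return L_km / AU_KM, t0

# Synthetic check: particles released at t0 = 100 s along a 0.625 AU path
L_true_km = 0.625 * AU_KM
speeds = np.array([1.0e5, 7.5e4, 5.0e4, 2.5e4])  # km/s (hypothetical)
onsets = 100.0 + L_true_km / speeds
L_au, t0 = vda_path_length(onsets, speeds)       # recovers 0.625 AU and 100 s
```

In practice the onset times carry instrumental and statistical uncertainties, so the fitted path length is only as good as the onset determination at each energy.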
To explain the large path length of these particles, \citet{2021A&A...650A..26C} developed an approach to estimate how the random walk of magnetic field lines could affect particle path length, which reproduced well the path length computed from the velocity-dispersion analysis. During the first orbit of {\emph{PSP}}, shortly after the first perihelion pass, a CME was observed locally at {\emph{PSP}}, which was preceded by a significant enhancement in SEPs with energies below a few hundred keV/nuc \citep{2020ApJS..246...29G,2020ApJS..246...59M}. The CME was observed to cross {\emph{PSP}} on 12 Nov. 2018 (DOY 316), and at this time, {\emph{PSP}} was approximately 0.23~AU from the Sun. The CME was observed remotely by {\emph{STEREO}}-A, which was in a position to observe the CME erupting from the east limb of the Sun (w.r.t. {\emph{STEREO}}-A) and moving directly towards {\emph{PSP}}. {\emph{PSP}} was on the opposite side of the Sun relative to Earth. Through analysis of {\emph{STEREO}}-A coronagraph images, the speed of the CME was determined to be 360~km~s$^{-1}$, slower than that of typical SEP-producing CMEs seen by S/C near 1~AU. Moreover, in the few days that preceded the CME, there were very few energetic particles observed, representing a very quiet period. Thus, this represented a unique observation of energetic particles associated with a slow CME near the Sun. Fig.~\ref{Giacalone_2020} shows a multi-instrument, multi-panel plot of data collected during this slow-CME/SEP event. Fig.~\ref{Giacalone_2020}a shows the position of the CME as a function of time based on {\emph{STEREO}}-A observations as well as {\emph{PSP}} (the cyan point), while the lower panels, Fig.~\ref{Giacalone_2020}e-f, show $30-300$~keV energetic particles from the IS$\odot$IS/EPI-Lo instrument.
The CME was observed to erupt and move away from the Sun well before the start of the SEP event, but the SEP intensities rose from the background, peaked, and then decayed before the CME crossed {\emph{PSP}}. There was no shock observed locally at {\emph{PSP}}, and there is no clear evidence of local acceleration of SEPs at the CME crossing. It was suggested by \citet{2020ApJS..246...29G} that the CME briefly drove either a weak shock or a plasma compression when it was closer to the Sun, and was capable of accelerating particles which then propagated ahead of the CME and were observed by {\emph{PSP}}. In fact, modeling of the CME and local plasma parameters, also presented in this paper, suggested there may have been a weak shock over parts of the (modeled) CME-shock surface, but it is not clear whether {\emph{PSP}} was magnetically connected to these locations. The energetic particle event was characterized by a clear velocity dispersion in which higher-energy particles arrived well before the lower-energy particles. Moreover, the time-intensity profiles at specific energies, seen in Fig.~\ref{Giacalone_2020}e, show a relatively smooth rise from the background to the peak, and a gradual decay. The particles were observed to be initially anisotropic, moving radially away from the Sun, but at the peak of the event were observed to be more isotropic. \citet{2020ApJS..246...29G} interpreted this in terms of the diffusive transport of particles accelerated by the CME, starting at about the time the CME was at 7.5~$R_\odot$ and continuing with time but with a decreasing intensity. They used a diffusive-transport model to fit the observed time-intensity profiles, which yielded values of the scattering mean free path parallel to the magnetic field, for 30~keV to 100~keV protons, of $0.04-0.09$~AU at the location of {\emph{PSP}}.
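The shape of such diffusive time-intensity profiles can be illustrated with the impulsive point-source solution of the 1D diffusion equation, in which the intensity at distance $L$ varies as $t^{-3/2}\exp[-L^2/(4\kappa t)]$ with $\kappa = \lambda v/3$, peaking at $t_{\rm peak} = L^2/(6\kappa)$. The sketch below is not the model of \citet{2020ApJS..246...29G}; the mean free path and proton speed are merely representative values consistent with the quoted $0.04-0.09$~AU range:

```python
import numpy as np

AU_KM = 1.495979e8   # km per AU

def diffusion_profile(t_s, L_au, lam_au, v_km_s):
    """Impulsive point-source solution of the 1D diffusion equation:
    I(t) ~ t**-1.5 * exp(-L**2 / (4*kappa*t)), with kappa = lam * v / 3."""
    kappa = (lam_au * AU_KM) * v_km_s / 3.0      # diffusion coefficient, km^2/s
    L = L_au * AU_KM                             # source-observer distance, km
    t = np.asarray(t_s, dtype=float)
    return t**-1.5 * np.exp(-L**2 / (4.0 * kappa * t))

# Representative (not fitted) values: lambda ~ 0.06 AU, ~100 keV protons
# (v ~ 4.4e3 km/s), observer at 0.23 AU
t = np.linspace(1.0e3, 1.0e5, 20000)             # s
profile = diffusion_profile(t, 0.23, 0.06, 4.4e3)
t_peak = t[np.argmax(profile)]                   # analytically L**2 / (6*kappa)
```

Fitting a curve of this form to the observed rise and decay is, in essence, how a mean free path is extracted from a time-intensity profile.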
\begin{figure*} \centering \includegraphics[width=\textwidth]{Giacalone_etal_Fig1.pdf} \caption{Multi-instrument data for a CME and SEP event observed by {\emph{PSP}} and {\emph{STEREO}}-A. (a) shows the heliocentric distance of the CME, (b-c) show solar wind density and solar wind speed from the SWEAP instrument, (d) shows the vector and magnitude of the magnetic field from the FIELDS instrument, and (e-f) show data from the IS$\odot$IS/EPI-Lo instrument. Figure adapted from \citet{2020ApJS..246...29G} where further details are provided.} \label{Giacalone_2020} \end{figure*} Another important feature of this event was the generally steep energy spectrum of the low-energy ions. This suggested a very weak event. In the comparison between the model used by \citet{2020ApJS..246...29G} and the observations, it was found that a source spectrum, assumed to be at the CME when it was close to the Sun, had an approximately $E^{-5.5}$ power-law dependence. At the time, this was the closest to the Sun that a CME-related SEP event had been observed {\emph{in situ}}. \citet{2020ApJS..246...29G} used their diffusive transport model to estimate the total fluence that this event would have had at 1~AU, in order to compare with previous observations of CME-related SEP events. It was determined that this event would have been so weak that it would not even have appeared on a figure showing a wide range of values of the SEP fluence as a function of CME speed produced by \citet{2001JGR...10620947K}. \citet{2020ApJS..246...59M} presented a separate analysis of this same CME-SEP event that suggested an alternative acceleration mechanism. They noted that since {\emph{PSP}} did not observe a shock locally, and modeling of the CME suggested it may not have ever driven a shock, the acceleration mechanism was not likely diffusive shock acceleration.
Instead, they suggested it may be similar to that associated with aurora in planetary magnetospheres \cite[{\emph{e.g.}},][and references therein]{2009JGRA..114.2212M}. This study focused on two important observed aspects: the velocity dispersion profile and the composition of the SEP event. In the proposed mechanism, which was referred to as ``the pressure cooker'' \cite[{\emph{e.g.}},][]{1985JGR....90.4205G}, energetic particles are confined below the CME in the solar corona in a region bound by an electric potential above and strong magnetic fields below. The electric field is the result of strong field-aligned electric currents associated with distorted magnetic fields and plasma flow, perhaps associated with magnetic reconnection, between the CME and corona during its eruption. Particles are confined in this region until their energy is sufficient to overcome the electric potential. There are two key results from this process. One is that the highest-energy particles will overcome this barrier earlier and, hence, will arrive at {\emph{PSP}} earlier than low-energy particles, which are presumably released much later when the CME has erupted from the Sun. The other is that the mechanism produces a maximum energy that depends on the charge of the species. Although the event was quite weak, there were sufficient counts of He, O, and Fe that, when combined with assumptions about the composition of these species in the corona, agreed with the observed high-energy cut-off as a function of particle species. {\emph{PSP}} was in a fortunate location, during a fortuitously quiet period, and provided a unique opportunity to study energetic particles accelerated by a very slow and weak CME closer to the Sun than had been seen {\emph{in situ}} previously. On the one hand, the observations suggest that very weak shocks, or even non-shock plasma compressions driven by a slow CME, are capable of accelerating particles.
On the other hand, the pressure cooker method provides an interesting parallel with processes that occur in planetary ionospheres and magnetospheres. Moreover, the observation of the SEP event provided the opportunity to determine the parallel mean-free path of the particles, at 0.23~AU, as the particles were transported from their source to {\emph{PSP}}. In Mar. 2019, {\emph{PSP}} encountered a SBO-CME with unique properties which was analyzed by \citet{2020ApJ...897..134L}. SBO-CMEs are generally well-structured, slow CMEs that emerge from the streamer belt in extended PILs outside of ARs. Fig.~\ref{Lario2020Fig1} shows an overview of the plasma, magnetic field, electron and energetic particle conditions associated with the CME. Despite the relatively low speed of the SBO-CME close to the Sun determined by remote observation from {\emph{SOHO}} and {\emph{STEREO}}-A ($\sim311$~km~s$^{-1}$), the transit time to {\emph{PSP}} indicated a faster speed, and two shocks were observed at {\emph{PSP}} prior to the arrival of the CME. The low initial speed of the SBO-CME makes it unlikely that it would have driven a shock in the corona, and \citet{2020ApJ...897..134L} proposed that the formation of the shocks farther from the Sun was likely caused by compression effects of a HSS that followed the CME, and that the formation of the two-shock structure may have been caused by distortions in the CME resulting from the HSS. This demonstrates the importance of the surrounding plasma conditions on the viability of energetic particle acceleration in CME events, though the associated energetic particle event in this case was limited to low energies ($<$100~keV/nuc). \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Lario2020Fig1.jpg} \caption{Overview of plasma (measured by SWEAP), magnetic field (measured by FIELDS), electron (SWEAP) and energetic particle (measured by IS$\odot$IS) conditions during the Mar. 2019 SBO-CME event observed by {\emph{PSP}}.
From top to bottom: (a) radial, (b) tangential, and (c) normal components of the solar wind proton velocity in RTN coordinates, (d) solar wind proton density, (e) solar wind proton temperature, magnetic field (f) magnitude, (g) elevation and (h) azimuth angles in RTN coordinates, (i) proton plasma beta, (j) the sum of the magnetic and thermal pressures, (k) ram pressure, (l) 314 eV electron PADs, (m) normalized 314 eV electron PADs, and (n) $\sim30-500$ keV TOF-only ion intensities measured by IS$\odot$IS/EPI-Lo. The vertical solid line indicates the passing of the two shocks associated with the CME, which are too close in time to be separately resolved here, the vertical dashed lines indicate the boundaries of the CME, and the blue arrow indicates the eruption time of the SBO-CME at the Sun. Figure adapted from \citet{2020ApJ...897..134L}.} \label{Lario2020Fig1} \end{figure*} In order to determine the point at which {\emph{PSP}} would have been magnetically connected to the CME, \citet{2020ApJ...897..134L} ran two ENLIL simulations, one with just ambient solar wind conditions and another including the CME. By evaluating the solar wind speed along the magnetic field line connecting {\emph{PSP}} to the Sun, they found the point at which the solar wind speed in the CME simulation exceeds that of the ambient simulation, which establishes the point at which {\emph{PSP}} is connected to the CME, termed the ``Connecting with the OBserving'' point, or ``cobpoint'' \citep{1995ApJ...445..497H}. Fig.~\ref{Lario2020Fig2} shows the coordinates of the cobpoint determined by this analysis alongside energetic particle anisotropy measurements. The energetic particle event is shown to be highly anisotropic, with enhanced particle intensities seen in the sunward-facing sensors of the instrument; the onset of energetic particles coincided with the establishment of the cobpoint, connecting the CME to {\emph{PSP}}.
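The cobpoint determination can be sketched numerically: given the solar wind speed along the {\emph{PSP}}-connected field line from the ambient and CME-inclusive ENLIL runs, the connection is established at the first time the CME-run speed exceeds the ambient-run speed. The time grid, speeds, and threshold below are hypothetical, not the values of \citet{2020ApJ...897..134L}:

```python
import numpy as np

def connection_time(times, v_ambient, v_cme_run, ratio_threshold=1.1):
    """First time at which the speed along the S/C's field line in the CME run
    exceeds the ambient run by `ratio_threshold` (ratio), i.e. when the field
    line first connects to the CME-driven disturbance (the cobpoint)."""
    ratio = np.asarray(v_cme_run, dtype=float) / np.asarray(v_ambient, dtype=float)
    idx = int(np.argmax(ratio > ratio_threshold))
    if ratio[idx] <= ratio_threshold:
        return None   # the field line never connects within the simulated span
    return float(times[idx])

t_hr = np.arange(0.0, 48.0, 1.0)                    # hours (hypothetical grid)
v_amb = np.full_like(t_hr, 350.0)                   # ambient-run speed, km/s
v_cme = np.where(t_hr >= 20.0, 350.0 + 15.0 * (t_hr - 20.0), 350.0)
t_connect = connection_time(t_hr, v_amb, v_cme)     # 23.0 hr on this grid
```

The same comparison, done along the time-evolving field line in the full 3D simulation, is what yields the cobpoint trajectory plotted in Fig.~\ref{Lario2020Fig2}.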
Also notable is that the increase in the speed jump at the cobpoint coincides with an increase in the measured energetic particle intensities prior to shock arrival. This analysis demonstrates the importance of energetic particle measurements made by IS$\odot$IS in constraining modelling of large-scale magnetic field structures such as CMEs. \begin{figure*} \centering \includegraphics[width=\textwidth]{Lario2020Fig2.jpg} \caption{Cobpoint characteristics as determined from ENLIL simulations and TOF-only energetic ion data measured by IS$\odot$IS/EPI-Lo. From top to bottom: (a) heliocentric radial distance of the {\emph{PSP}} cobpoint, (b) Heliocentric Earth Equatorial (HEEQ) longitude of the {\emph{PSP}} cobpoint, (c) speed jump ratio measured at the {\emph{PSP}} cobpoint by comparing the ENLIL background simulation to the simulation including the CME, (d) speed of the cobpoint, (e) ion intensities at $\sim20-500$~keV, (f) ion intensities measured in the Sun-facing wedges of EPI-Lo, (g) ion intensities measured in the EPI-Lo wedges facing away from the Sun, and (h) ion intensities measured in the transverse wedges of EPI-Lo. The vertical solid line indicates the passage of the two shocks, the vertical dotted lines show the boundaries of the CME, the vertical purple dashed line indicates the time when {\emph{PSP}} became connected to the compression region in front of the CME, and the purple vertical arrows indicate the time that the SBO-CME was accelerated at the Sun. Figure adapted from \citet{2020ApJ...897..134L}.} \label{Lario2020Fig2} \end{figure*} \citet{2021A&A...651A...2J} analyzed a CME that was measured by {\emph{PSP}} on 20 Jan. 2020, when the S/C was 0.32~AU from the Sun. The eruption of the CME was well imaged by both {\emph{STEREO}}-A and {\emph{SOHO}} and was observed to have a speed of $\sim380$~km~s$^{-1}$, consistent with the transit time to {\emph{PSP}}, and possessed characteristics indicative of a stealth-type CME.
Fig.~\ref{Joyce2021CMEFig1} shows a unique evolution of the energetic particle anisotropy during this event, with changes in the anisotropy seeming to coincide with changes of the normal component of the magnetic field ($B_N$). Of particular interest is a period where $B_N$ is close to zero and the dominant anisotropy is of energetic particles propagating toward the Sun (highlighted in yellow in Fig.~\ref{Joyce2021CMEFig1}), as well as two periods when $B_N$ spikes northward and there is a near-complete dropout in energetic particle flux (highlighted in orange). The period dominated by particles propagating toward the Sun has the highest fluxes extending to the highest energies in the event, and \citet{2021A&A...651A...2J} argued that this may be evidence that {\emph{PSP}}, located on the western flank of the CME throughout the event, may have briefly been connected to a region of stronger energetic particle acceleration, likely closer to the nose of the CME where the compression is likely strongest. \begin{figure*} \centering \includegraphics[width=\textwidth]{Joyce2021CMEFig1.png} \caption{Overview of energetic particle anisotropy and magnetic field conditions during the Jan. 2020 CME. Energetic particle measurements are from the TOF-only channel of IS$\odot$IS/EPI-Lo and magnetic field data are from FIELDS. From top to bottom: omnidirectional ion spectrogram, ion spectrogram away from the Sun ($0-60^\circ$ from nominal Parker spiral direction), ion spectrogram toward the Sun ($120-180^\circ$), ion spectrogram in the transverse direction ($60-120^\circ$), and the magnetic field vector in RTN coordinates, with the magnetic field magnitude in black. The period highlighted in yellow shows a strong influx of particles propagating toward the Sun, while periods of energetic particle dropouts are highlighted in orange.
Figure adapted from \citet{2021A&A...651A...2J}.} \label{Joyce2021CMEFig1} \end{figure*} {\emph{STEREO}}-A was well-aligned radially with {\emph{PSP}} during this time period and observed the same CME also from the western flank. Fig.~\ref{Joyce2021CMEFig2} shows the comparison between energetic particle spectrograms and magnetic field vectors measured by both instruments. Particularly striking is the remarkable similarity of the magnetic field vector measured by both S/C, suggesting that they both encountered a very similar region of the magnetic structure, contrasted with the dissimilarity of the energetic particle observations, with those at {\emph{STEREO}}-A lacking the fine detail and abrupt changes in anisotropy (not shown here) that are seen closer to the Sun. This is likely due to transport effects such as scattering and diffusion, which have created a much more uniform distribution of energetic particles by the time the CME has reached 1~AU. This demonstrates the importance of measurements of such events close to the Sun, made possible by {\emph{PSP}}/IS$\odot$IS, when it is still possible to distinguish between different acceleration mechanisms and source regions that contribute to energetic particle populations, before these fine distinctions are washed out by transport effects. Such detailed measurements will be critical in determining which mechanisms play an important role in the acceleration of energetic particles close to the Sun. \begin{figure*} \centering \includegraphics[width=\textwidth]{Joyce2021CMEFig2.png} \caption{Comparison of energetic particle and magnetic field measurements of the same CME event observed at both {\emph{PSP}} and {\emph{STEREO}}-A. The data have been lined up by the arrival time of the CME, and the {\emph{PSP}} data have been stretched in time by a factor of 1.3 to match the magnetic field features seen by both S/C.
Gray dotted lines indicate reference points used to line up the measurements from both S/C.} \label{Joyce2021CMEFig2} \end{figure*} \subsubsection{The Widespread CME Event on 29 Nov. 2020} \label{EPsCMENov} The beginning of solar cycle 25 was marked by a significant SEP event in late Nov. 2020. The event has gained substantial attention and study, not only as one of the largest SEP events in several quiet years, but also because it was a circumsolar event spanning at least 230$^{\circ}$ in longitude and observed by four S/C positioned at or inside of 1~AU (see Fig.~\ref{Kollhoff2021Fig}). Among those S/C were {\emph{PSP}} and {\emph{SolO}}, providing a first glimpse of the coordinated studies that will be possible between the two missions. The solar source was AR 12790, and the associated M4.4 class flare (as observed by {\emph{GOES}} at 12:34~UT on 29 Nov.) was at (E99$^{\circ}$,S23$^{\circ}$) (as viewed from Earth), 2$^{\circ}$ east of {\emph{PSP}}’s solar longitude. A CME traveling at $\sim1700$~km~s$^{-1}$ was well observed by {\emph{SOHO}}/LASCO and {\emph{STEREO}}-A/COR2, both positioned west of {\emph{PSP}} \citep{2021A&A...656A..29C}. {\emph{STEREO}}-A/EUVI also observed an EUV wave propagating away from the source at $\sim500$~km~s$^{-1}$, lasting about an hour and traversing much of the visible disk \citep{2021A&A...656A..20K}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Kollhoff2021Fig.png} \caption{Overview of the widespread CME event on 29 Nov. 2020. Counterclockwise from the top right: {\emph{SolO}}, {\emph{PSP}}, {\emph{STEREO}}-A, and {\emph{ACE}} energetic particle observations are shown along with the relative location of all S/C. The direction of the CME is given by the black arrow, while curved lines in the orbit plot indicate the nominal Parker spiral magnetic field lines each S/C would be connected to.
Figure adapted from \citet{2021A&A...656A..20K}.} \label{Kollhoff2021Fig} \end{figure*} Protons at energies $>50$~MeV and $>1$~MeV electrons were observed by {\emph{PSP}}, {\emph{STEREO}}-A, {\emph{SOHO}}, and {\emph{SolO}} with onsets and time profiles that were generally organized by the S/C’s longitude relative to the source region, as has been seen in multi-S/C events from previous solar cycles \citep{2021A&A...656A..20K}. However, it was clear that intervening solar wind structures such as a slower preceding CME and SIRs affected the temporal evolution of the particle intensities. Analysis of the onset times of the protons and electrons observed at the four S/C yielded solar release times that were compared to the EUV wave propagation. The results were inconsistent with a simplistic scenario of particles being released when the EUV wave arrived at the various S/C magnetic footpoints, suggesting more complex particle transport and/or acceleration processes. Heavy ions, including He, O and Fe, were observed by {\emph{PSP}}, {\emph{STEREO}}-A, {\emph{ACE}} and {\emph{SolO}}, and their event-integrated fluences had longitudinal spreads similar to those obtained from three-S/C events observed in cycle 24 \citep{2021A&A...656L..12M}. The spectra were all well described by power laws at low energies followed by an exponential roll-over at higher energies (Fig.~\ref{Mason2021CMEFig}). The roll-over energy was element dependent, such that Fe/O and He/H ratios decreased with increasing energy; a signature of shock acceleration that is commonly seen in SEP events \citep{2021A&A...656A..29C, 2021A&A...656L..12M}. The overall composition (relative to O) at $0.32-0.45$~MeV/nuc was fairly typical of events this size, with the exception of Fe/O at {\emph{PSP}} and {\emph{ACE}}, where it was depleted by a factor of $\sim2$ \citep{2021A&A...656L..12M}.
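The quoted spectral shape — a low-energy power law rolling over exponentially, with a species-dependent roll-over energy — is commonly written $dJ/dE = C\,E^{-\gamma}\exp(-E/E_0)$. A brief sketch of why a lower $E_0$ for Fe than for O makes the Fe/O ratio fall with energy (all parameter values below are hypothetical, not fits to this event):

```python
import numpy as np

def fluence_spectrum(E, C, gamma, E0):
    """Power law with exponential roll-over: dJ/dE = C * E**-gamma * exp(-E/E0)."""
    E = np.asarray(E, dtype=float)
    return C * E**-gamma * np.exp(-E / E0)

E = np.array([0.3, 1.0, 3.0, 10.0])   # MeV/nuc (hypothetical grid)
# Hypothetical parameters: equal power-law indices, but Fe rolls over
# at a lower energy/nuc than O
J_O = fluence_spectrum(E, 1.0, 1.5, 5.0)
J_Fe = fluence_spectrum(E, 0.1, 1.5, 2.0)
fe_o = J_Fe / J_O   # decreases monotonically with energy
```

With equal indices the ratio reduces to $\propto\exp[-E(1/E_0^{\rm Fe}-1/E_0^{\rm O})]$, so any difference in roll-over energy maps directly onto an energy-dependent abundance ratio.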
\begin{figure*} \centering \includegraphics[width=\textwidth]{Mason_aa41310-21_spectra.jpg} \caption{Multi-species fluence spectra of the 29 Nov. 2020 event from (a) {\emph{PSP}}, (b) {\emph{STEREO}}-A, (c) {\emph{ACE}}, and (d) {\emph{SolO}}. Figure adapted from \citet{2021A&A...656L..12M}.} \label{Mason2021CMEFig} \end{figure*} Due to the relative positioning of the source region and {\emph{PSP}}, the CME passed directly over the S/C. It was traveling fast enough to overtake a preceding, slower CME in close proximity to {\emph{PSP}}, creating a dynamic and evolving shock measured by FIELDS \citep{2021ApJ...921..102G, 2022ApJ...930...88N}. Coincident with this shock, IS$\odot$IS observed a substantial increase in protons up to at least 1 MeV, likely due to local acceleration. More surprisingly, an increase in energetic electrons was also measured at the shock passage (Fig.~\ref{Kollhoff2021Fig}). Acceleration of electrons by interplanetary shocks is rare \citep{2016A&A...588A..17D}, thus it is more likely that this increase is a consequence of a trapped electron distribution, perhaps caused by the narrowing region between the two CMEs \citep{2021A&A...656A..29C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{cohen_cloud_fig.png} \caption{Time profile of energetic protons stopping in the third and fifth detectors of HET (top panel, upper and lower traces, respectively) and electrons stopping in the third and fourth detectors of HET (middle panel, upper and lower traces, respectively), with the magnetic field in the bottom panel, for the 29 Nov. 2020 CME. The overplotted vertical lines illustrate that the variations in the particle count rates occur at the same time as changes in the magnetic field. See \citet{2021A&A...656A..29C} for more information.} \label{Cohen2021Fig} \end{figure*} The MC of the fast CME followed the shock and sheath region, with a clear rotation seen in the magnetic field components measured by FIELDS (Fig.~\ref{Cohen2021Fig}).
At the onset of the cloud, the particle intensities dropped, as is often seen, due to particles being unable to cross into the magnetic structure (Fig.~\ref{Cohen2021Fig}). During this period there was a 30-min interval in which all the particle intensities increased briefly to approximately their pre-cloud levels. This was likely the result of {\emph{PSP}} exiting the MC, observing the surrounding environment populated with SEPs, and then returning to the interior of the MC. Although several of the properties of the SEP event are consistent with those seen in previous studies, the 29 Nov. 2020 event is noteworthy as being observed by four S/C over 230$^{\circ}$ in longitude and as the first significant cycle 25 event. The details of many aspects of the event (both from individual S/C and multi-S/C observations) remain to be studied more closely. In addition, modeling of the event has only just begun and will likely yield significant insights regarding the evolution of the CME-associated shock wave \citep{2022A&A...660A..84K} and the acceleration and transport of the SEPs throughout the inner heliosphere \citep{2021ApJ...915...44C}. \subsection{Energetic Electrons} \label{EE} The first observations of energetic electrons by {\emph{PSP}}/IS$\odot$IS were reported by \citet{2020ApJ...902...20M}, who analyzed a series of energetic electron enhancements observed during {\emph{PSP}}’s second Enc. period, which reached a perihelion of 0.17~AU. Fig.~\ref{Mitchell2020Fig} shows a series of four electron events that were observed at approximately 03:00, 05:00, 09:00, and 15:40~UT on 2 Apr. 2019. The events are small compared with the background and are only observable due to the small heliocentric distance of {\emph{PSP}} during this time.
Background subtraction is applied to the electron rate data to help resolve the electron enhancements, and a second-degree Savitzky–Golay smoothing filter spanning 7 min is applied to reduce random statistical fluctuations \citep{1964AnaCh..36.1627S}. While the statistics for these events are very low, the fact that they are observed concurrently in both EPI-Hi and EPI-Lo, and that they either coincide with abrupt changes in the magnetic field vector or can plausibly be connected to type III radio bursts observed by the FIELDS instrument which extend down to $f_{pe}$, suggests that these are real electron events. These are the first energetic electron events observed within 0.2~AU of the Sun, and they suggest that such small, short-duration electron events may be a common feature close to the Sun that was not previously appreciated, since such events cannot be observed farther from the Sun. This is consistent with previous observations by {\emph{Helios}} between 0.3 and 1~AU \citep{2006ApJ...650.1199W}. More observations and further analysis are needed to determine what physical acceleration mechanisms may be able to produce such events. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2020Fig.jpg} \caption{Overview of {\emph{PSP}} observations during 2 Apr. 2019.
Panels show the following: (a) EPI-Hi electron count rate (0.5–6 MeV) with background subtraction and 7-min Savitzky–Golay smoothing applied, and with a dashed line to indicate the 2$\sigma$ deviation from the mean, (b) EPI-Lo electron count rate (50–500 keV) with background subtraction and 7-min Savitzky–Golay smoothing applied, and with a dashed line to indicate the 2$\sigma$ deviation from the mean, (c) FIELDS high-frequency radio measurements (1.3–19.2 MHz), (d) FIELDS low-frequency radio measurements (10.5 kHz–1.7 MHz), (e) SWEAP solar wind ion density ($\sim5$ measurements per second), (f) SWEAP radial solar wind speed ($\sim5$ measurements per second), (g) FIELDS 1-min magnetic field vector in RTN coordinates (with magnetic field strength denoted by the black line). A series of electron events are observed (in the top two panels), occurring at approximately 03:00, 05:00, 09:00, and 15:40~UT, as well as a series of strong type III radio bursts (seen in panels c and d). Figure adapted from \citet{2020ApJ...902...20M}.} \label{Mitchell2020Fig} \end{figure*} In late Nov. 2020, {\emph{PSP}} measured an SEP event associated with two CME eruptions, when the S/C was at approximately 0.8~AU. This event is the largest SEP event observed during the first 8 orbits of {\emph{PSP}}, producing the highest ion fluxes yet observed by IS$\odot$IS \citep{2021A&A...656A..29C,2021ApJ...921..102G,2021A&A...656A..20K,2021A&A...656L..12M}, and it also produced the first energetic electron events with statistics sufficient for significant anisotropy measurements by IS$\odot$IS, as reported by \citet{2021ApJ...919..119M}. Fig.~\ref{Mitchell2021Fig1} shows an overview of the electron observations during this period along with magnetic field data to provide context.
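The background subtraction and second-degree, 7-min Savitzky–Golay smoothing used for the 2 Apr. 2019 electron rates can be sketched as follows. This is a minimal illustration on synthetic count rates at a hypothetical 1-min cadence; the actual processing in \citet{2020ApJ...902...20M} may differ in detail:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_rates(counts, background, cadence_min=1.0, window_min=7.0, order=2):
    """Background-subtract a count-rate series, then apply a second-degree
    Savitzky-Golay filter spanning `window_min` minutes."""
    window = int(round(window_min / cadence_min))
    if window % 2 == 0:
        window += 1              # savgol_filter requires an odd window length
    return savgol_filter(np.asarray(counts, dtype=float) - background, window, order)

rng = np.random.default_rng(1)
t = np.arange(0.0, 240.0, 1.0)       # minutes, hypothetical 1-min cadence
background = 10.0                    # hypothetical steady background level
burst = 5.0 * np.exp(-0.5 * ((t - 120.0) / 10.0) ** 2)
rate = background + burst + rng.normal(0.0, 1.0, t.size)
smoothed = smooth_rates(rate, background)   # burst stands out near t = 120 min
```

The Savitzky–Golay filter preserves the shape and amplitude of a smooth enhancement better than a simple boxcar average, which is why it suits small events sitting just above the counting-statistics noise.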
Notable in these observations is the peaking of the EPI-Lo electron count rate at the passage of the second CME shock, which is quite rare, though not unheard of, due to the inefficiency of CME-driven shocks in accelerating electrons. This may indicate the importance of quasi-perpendicular shock acceleration in this event, which has been shown to be a more efficient accelerator of electrons \citep{1984JGR....89.8857W,1989JGR....9415089K,1983ApJ...267..837H,2010ApJ...715..406G,2012ApJ...753...28G,2007ApJ...660..336J}. The notable dip in the EPI-Hi electron count rate at this time is an artifact associated with dynamic threshold mode changes of the EPI-Hi instrument \citep[for details, see][]{2021A&A...656A..29C}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2021Fig1.png} \caption{Overview of electron observations associated with the 29 Nov. 2020 CMEs. Panels are as follows: (a) shows the EPI-Hi electron count rates summed in all 5 apertures ($\sim0.5-2$~MeV), (b) shows the EPI-Lo electron count rate from wedges 3 and 7 ($\sim57-870$~keV), (c) shows the FIELDS magnetic field vector in RTN coordinates, and (d) and (e) show the magnetic field vector angles. Vertical lines show the eruption of the second CME and the passage of the shocks associated with both CMEs. Flux-rope-like structures are indicated by the shaded grey regions. The decrease in the EPI-Hi electron count rate seen at the passage of the second CME and the overall flat profile are artifacts caused by EPI-Hi dynamic threshold mode changes \citep[explained in detail by][]{2021A&A...656A..29C}. Figure adapted from \citet{2021ApJ...919..119M}.} \label{Mitchell2021Fig1} \end{figure*} Fig.~\ref{Mitchell2021Fig2} shows the electron and magnetic field measurements during a three-hour period around the shock crossing associated with the second CME, including the electron PAD.
Because of the off-nominal pointing of the S/C during this time, the pitch angle coverage is somewhat limited; however, the available data show the highest intensities to be in the range of $\sim40-90^{\circ}$ at the time of the shock crossing. Distributions with peak intensities at pitch angles of around $90^{\circ}$ may be indicative of the shock-drift acceleration mechanism that occurs at quasi-perpendicular shocks \citep[{\emph{e.g.}},][]{2007ApJ...660..336J,1974JGR....79.4157S,2003AdSpR..32..525M}. This, along with the peak electron intensities seen at the shock crossing, further supports the proposition that electrons may be efficiently accelerated by quasi-perpendicular shocks associated with CMEs. Other possible explanations are that the peak intensities may be a result of an enhanced electron seed population produced by the preceding CME \citep[similar to observations by][]{2016A&A...588A..17D}, that energetic electrons may be accelerated as a result of being trapped between the shocks driven by the two CMEs \citep[a mechanism proposed by][]{2018A&A...613A..21D}, and that enhanced magnetic fluctuations and turbulence created upstream of the shock by the first CME may increase the efficiency of electron acceleration in the shock \citep[as proposed by][]{2015ApJ...802...97G}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Mitchell2021Fig2.png} \caption{Electron and magnetic field measurements around the time of the second CME shock crossing (indicated by the vertical red dashed line).
Panels show the following: (a) EPI-Lo electron measurements ($\sim130-870$ keV) in each of its 8 wedges, (b) FIELDS magnetic field vector in RTN coordinates, (c) azimuthal angle of the magnetic field, (d) polar angle of the magnetic field, (e) pitch angle of the geometric center of each EPI-Lo wedge (each with a width of $\sim30^{\circ}$), and (f) pitch angle time series for each EPI-Lo wedge ($80-870$~keV) showing, at each time bin, the fraction of the total count rate over the entire interval. Figure adapted from \citet{2021ApJ...919..119M}.} \label{Mitchell2021Fig2} \end{figure*} While observations of energetic electron events thus far in the {\emph{PSP}} mission have been few, the measurements that have been made have shown IS$\odot$IS to be quite capable of characterizing energetic electron populations. Because of its close proximity to the Sun during {\emph{PSP}}’s Enc. phases, IS$\odot$IS has been shown to be able to measure small, subtle events which are not measurable farther from the Sun, but which may provide new insights into electron acceleration close to the Sun. The demonstrated ability to provide detailed electron anisotropy analyses is also critical for determining the acceleration mechanisms for electrons (particularly close to the Sun where transport effects have not yet influenced these populations) and for providing insight into the magnetic topology of magnetic structures associated with SEP events. As the current trend of increasing solar activity continues, we can expect many more unique observations and discoveries related to energetic electron events in the inner heliosphere from {\emph{PSP}}/IS$\odot$IS. \subsection{Corotating/Stream Interaction Region-Associated Energetic Particles} \label{CIREPs} SIRs form where HSSs from coronal holes expand into slower solar wind \citep{1971JGR....76.3534B}. As the coronal hole structure corotates with the Sun, the SIR corotates as well, becoming a CIR after one complete solar rotation.
As the HSS flows radially outward, both forward and reverse shocks can form along the SIR/CIR, often at distances beyond 1~AU \citep[{\emph{e.g.}},][]{2006SoPh..239..337J, 2008SoPh..250..375J, 1978JGR....83.5563P, 1976GeoRL...3..137S}, which act as an important source of energetic ions, particularly during solar minimum. Once accelerated at an SIR/CIR-associated shock, energetic particles can propagate back towards the inner heliosphere along magnetic field lines and are subject to adiabatic deceleration and scattering \citep{1980ApJ...237..620F}. The expected result of these transport effects is a softening of the energetic particle spectra and a hardening of the lower-energy suprathermal spectra \citep[see][]{1999SSRv...89...77M}. This spectral variation, however, has not always been captured in observations, motivating the formulation of various other SIR/CIR-associated acceleration processes such as compressive, non-shock related acceleration \citep[{\emph{e.g.}},][]{2002ApJ...573..845G,2012ApJ...749...73E,2015JGRA..120.9269C} which can accelerate ions into the suprathermal range at lower heliospheric distances. Additionally, footpoint motion and interchange reconnection near the coronal hole boundary has been proposed to lead to more radial magnetic field lines on the HSS side of the SIR/CIR, resulting in more direct access, and less modulation, of energetic particles \citep{2002GeoRL..29.2066M, 2002GeoRL..29.1663S, 2005GeoRL..32.3112S}. Observations within 1~AU by {\emph{PSP}} are therefore particularly well suited to disentangle these acceleration and transport effects, as the SIR/CIR-associated suprathermal to energetic ion populations are farther from the shock-associated acceleration sites that usually lie beyond 1~AU. \begin{figure*} \centering \includegraphics[width=\textwidth]{Allen_2020.jpg} \caption{Overview of four months around the first perihelion (6 Nov. 2018).
Panels show (a) the heliographic distance of {\emph{PSP}}; bulk proton (b) radial velocity, (c) density, (d) temperature, and (e) entropy; (f) summation of the magnetic and bulk proton thermal plasma pressure; (g) magnitude of the magnetic field, (h) $\Theta$, and (i) $\Phi$ angles of the magnetic field; and (j) EPI-Lo ion time-of-flight count rate for energies from 30 to 586 keV. Simulated quantities from two simulations are shown by the yellow and blue lines \citep[see][for more information]{2020ApJS..246...36A}. The four HSSs investigated in \citet{2020ApJS..246...36A} are indicated by the grey shaded regions, while pink shaded regions denote CMEs. Figure adapted from \citet{2020ApJS..246...36A}.} \label{Allen_2020} \end{figure*} During the first orbit of {\emph{PSP}}, \citet{2020ApJS..246...36A} reported on four HSSs observed by {\emph{PSP}}, illustrated in Fig.~\ref{Allen_2020}, and compared these to observations of the streams at 1~AU using observations from L1 ({\emph{ACE}} and {\emph{Wind}}) and {\emph{STEREO}}-A. Many of these nascent SIR/CIRs were associated with energetic particle enhancements that were offset from the interface of the SIR/CIR. One of the events also had evidence of local compressive acceleration, which was previously noted by \citet{2019Natur.576..223M}. \citet{2020ApJS..246...20C} further analyzed energetic particle increases associated with SIR/CIRs during the first two orbits of {\emph{PSP}} (Fig.~\ref{Cohen_2020}). They found He/H abundance ratios similar to previous observations of SIR/CIRs at 1~AU with fast solar wind under 600~km~s$^{-1}$; however, the proton spectral power laws, with indices ranging from $-4.3$ to $-6.5$, were softer than those often observed at 1~AU. Finally, \citet{2020ApJS..246...56D} investigated the suprathermal-to-energetic ($\sim0.03-3$ MeV/nuc) He ions associated with these SIR/CIRs from the first two orbits.
They found that the higher-energy He ions arrive farther into the rarefaction region than the lower-energy ions. The He spectra behaved as flat power laws modulated by exponential rollovers with e-folding energies of $\sim0.4$ MeV/nuc, suggesting acceleration at shocks farther out in the heliosphere. \citet{2020ApJS..246...56D} interpreted the tendency for the suprathermal ion peak to be within the rarefaction regions, with acceleration farther out in the heliosphere, as evidence that the rarefaction regions allowed easier access for particles than other regions in the SIR/CIR structure. \begin{figure*} \centering \includegraphics[width=\textwidth]{Cohen_2020.jpg} \caption{Summary of EPI-Hi LET $\sim1-2$ MeV proton observations from the first two orbits of {\emph{PSP}}. SIR-associated energetic particle events studied by \citet{2020ApJS..246...20C} are denoted by the numbered circles. Figure adapted from \citet{2020ApJS..246...20C}.} \label{Cohen_2020} \end{figure*} One fortuitous CIR passed {\emph{PSP}} on 19 Sep. 2019, during the third orbit of {\emph{PSP}}, when {\emph{PSP}} and {\emph{STEREO}}-A were nearly radially aligned and $\sim0.5$~AU apart \citep{2021A&A...650A..25A, 2021GeoRL..4891376A}. As shown in Fig.~\ref{Allen_2021}, while the bulk plasma and magnetic field observations between the two S/C followed expected radial dependencies, the CIR-associated suprathermal ion enhancements were observed at {\emph{PSP}} for a longer duration in time than at {\emph{STEREO}}-A \citep{2021GeoRL..4891376A}. Additionally, the suprathermal ion spectral slopes between {\emph{STEREO}}-A total ions and {\emph{PSP}} H\textsuperscript{+} were nearly identical, while the flux at {\emph{PSP}} was much smaller, suggesting little to no spectral modulation from transport.
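The rollover spectra described above can be written as $j(E) = A\,E^{-\gamma}\exp(-E/E_0)$ with e-folding energy $E_0 \sim 0.4$ MeV/nuc. Since $\ln j$ is linear in $(\ln A,\ \gamma,\ 1/E_0)$, such a form can be recovered by ordinary least squares; the amplitude and power-law index below are arbitrary illustrative values, not fits to the published data.

```python
import numpy as np

# Model: j(E) = A * E**(-gamma) * exp(-E / E0), with E in MeV/nuc.
# In log space, log j = log A - gamma*log E - E/E0 is linear in the
# parameters (log A, gamma, 1/E0), so np.linalg.lstsq suffices.
E = np.geomspace(0.03, 3.0, 25)            # suprathermal-to-energetic range from the text
A_true, gamma_true, E0_true = 10.0, 1.5, 0.4   # A and gamma are assumed values
j = A_true * E**(-gamma_true) * np.exp(-E / E0_true)

# Design matrix for log j = [1, -log E, -E] . [log A, gamma, 1/E0]
M = np.column_stack([np.ones_like(E), -np.log(E), -E])
coef, *_ = np.linalg.lstsq(M, np.log(j), rcond=None)
A_fit, gamma_fit, E0_fit = np.exp(coef[0]), coef[1], 1.0 / coef[2]
```

With real (noisy) count spectra one would weight the fit by the measurement uncertainties, but the linearization is the same.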
\citet{2021GeoRL..4891376A} concluded that the time difference in the CIR-associated suprathermal ion enhancement might be related to the magnetic topology between the slow speed stream ahead of the CIR interface, where the enhancement was first observed, and the HSS rarefaction region, where the suprathermal ions returned to background levels. \citet{2021ApJ...908L..26W} furthered this investigation by simulating the {\emph{PSP}} and {\emph{STEREO}}-A observations using the European Heliospheric FORecasting Information Asset \citep[EUHFORIA;][]{2018JSWSC...8A..35P} model and the Particle Radiation Asset Directed at Interplanetary Space Exploration \citep[PARADISE;][]{2019AA...622A..28W, 2020AA...634A..82W} model, suggesting that this event provides evidence that CIR-associated acceleration does not always require shock waves. \begin{figure*} \centering \includegraphics[width=\textwidth]{Allen_2021.jpg} \caption{Comparison of {\emph{PSP}} observations (black) and time-shifted and radially corrected {\emph{STEREO}}-A observations (blue) for the CIR that passed over {\emph{PSP}} on 19 Sep. 2019. While the bulk solar wind and magnetic field observations match well after typical scaling factors are applied \citep[a-h, see][for more information]{2021GeoRL..4891376A}, the energetic particles are elevated at {\emph{PSP}} (i) for longer than at {\emph{STEREO}}-A (j). Figure adapted from \citet{2021GeoRL..4891376A}.} \label{Allen_2021} \end{figure*} An SIR passed over {\emph{PSP}} on 15 Nov. 2018, when the S/C was $\sim0.32$~AU from the Sun, providing insight into energetic particle acceleration by SIRs in the inner heliosphere and the importance of the magnetic field structures connecting the observer to the acceleration region.
Fig.~\ref{SSR_SIR_Fig} shows an overview of the energetic particle, plasma, and magnetic field conditions during the passage of the SIR and the energetic particle event that followed it, which started about a day after the passage of the compression and lasted for about four days. The spectral analysis provided by \citet{2021A&A...650L...5J} showed that for the first day of the event, the spectra resembled a simple power law, which is commonly associated with local acceleration, despite being well out of the compression region by that point. The spectrum for the remaining three days of the event was shown to be fairly constant, a finding that is inconsistent with the traditional model of SIR energetic particle acceleration provided by \citet{1980ApJ...237..620F}, which models energetic particle acceleration at distant regions where SIRs have steepened into shocks and predicts changes in the spectral shape with distance from the source region. Within this paradigm, we would expect that the distance along the magnetic field connecting to the source region would increase during the event and that the observed spectrum would change. This, combined with the simple power-law spectrum observed on the first day, seems to indicate that the source region is much closer to the observer than is typically thought, as we do not see the expected transport effects, and that acceleration all along the compression, not only in the distant regions where the SIR may steepen into shocks, may play an important role in energetic particle acceleration associated with SIRs (consistent with previous studies by \citealt{2000JGR...10523107C} and \citealt{2015JGRA..120.9269C}). \begin{figure*} \centering \includegraphics[width=\textwidth]{SSR_SIR_Fig.png} \caption{Overview of energetic particle observations associated with the SIR that passed over {\emph{PSP}} on 15 Nov. 2018. Plasma data is provided by the SWEAP instrument and magnetic field data by the FIELDS instrument.
The compression associated with the passage of the SIR is highlighted in yellow. Figure is updated from figures shown in \citet{2021A&A...650A..24S} and \citet{2021A&A...650L...5J}.} \label{SSR_SIR_Fig} \end{figure*} \citet{2021A&A...650A..24S} analyzed the same event, also noting that the long duration of the energetic particle event following the passage of the SIR suggests a non-Parker spiral orientation of the magnetic field, and proposed that the observations may be explained by a sub-Parker magnetic field structure~\citep{2002GeoRL..29.2066M,2002GeoRL..29.1663S,2005GeoRL..32.3112S,2005JGRA..110.4104S}. The sub-Parker spiral structure forms when magnetic footpoints on the Sun move across coronal hole boundaries, threading the magnetic field between the fast and slow solar wind streams that form the compression and creating a magnetic field structure that is significantly more radial than a nominal Parker spiral. Fig.~\ref{Schwadron2021SIRFig}a shows a diagram of the sub-Parker spiral and Fig.~\ref{Schwadron2021SIRFig}b shows a comparison of the energetic particle fluxes measured by IS$\odot$IS in two different energy regimes with modeled fluxes for both the Parker and sub-Parker spiral magnetic field orientations. The modeling includes an analytic solution of the distribution function at the SIR reverse compression/shock and numerical modeling of the propagation of the particles back to the S/C \citep[details in][]{2021A&A...650A..24S}. The modeled fluxes for the sub-Parker spiral match the observed fluxes much better than the nominal Parker spiral, demonstrating that the sub-Parker spiral structure is essential for explaining the extended duration of the energetic particle event associated with the SIR. The sub-Parker spiral is often seen in rarefaction regions, such as those that form behind SIRs, and thus is likely to play a significant role in the observed energetic particle profiles associated with such events.
Both the \citet{2021A&A...650L...5J} and \citet{2021A&A...650A..24S} studies demonstrate the importance of IS$\odot$IS observations of SIRs in understanding the large-scale structure of the magnetic field in the inner heliosphere, the motion of magnetic footpoints on the Sun, and the propagation of energetic particles, helping us to understand the variability of energetic particles and providing insight into the source of the solar wind. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{Schwadron2021SIRFig.png} \caption{(a) shows the magnetic field structure associated with an SIR, with the red dashed lines representing the compression region where the fast solar wind overtakes the slow solar wind, the blue lines representing the nominal Parker spiral configuration, and the black lines representing the sub-Parker spiral field lines that are threaded between the fast and slow solar wind streams as a result of footpoint motion across the coronal hole boundary. (b) shows a comparison between IS$\odot$IS energetic particle fluxes in two energy ranges (blue data points) and modeled energetic particle fluxes for both the Parker spiral (blue lines) and sub-Parker spiral (black lines) magnetic field configurations. Figure adapted from \citet{2021A&A...650A..24S}.} \label{Schwadron2021SIRFig} \end{figure*} \subsection{Inner Heliospheric Anomalous Cosmic Rays} \label{ACRs} The ability of {\emph{PSP}} to measure the energetic particle content at unprecedentedly close radial distances during a deep solar minimum has also allowed for detailed investigations into the radial dependence of ACRs in the inner heliosphere. ACRs are mainly comprised of singly ionized hydrogen, helium, nitrogen, oxygen, neon, and argon, with energies of $\sim5-50$ MeV/nuc \citep[{\emph{e.g.}},][]{1973ApJ...182L..81G, 1973PhRvL..31..650H, 1974IAUS...57..415M, 1988ApJ...334L..77C, 1998SSRv...83..259K, 2002ApJ...578..194C, 2002ApJ...581.1413C, 2013SSRv..176..165P}.
The source of these particles is neutral interstellar particles that are part of the interstellar wind \citep{2015ApJS..220...22M} before becoming ionized near the Sun \citep{1974ApJ...190L..35F}. Once ionized, the particles are picked up by the solar wind convective electric field and convected away from the Sun as pick-up ions. A small portion of these pick-up ions can become accelerated to high energies (tens to hundreds of MeV) farther out in the heliosphere before returning into the inner heliosphere, thus becoming ACRs \citep{1996ApJ...466L..47J, 1996ApJ...466L..43M, 2000AIPC..528..337B, 2012SSRv..173..283G}. While the acceleration of ACRs is primarily thought to occur at the termination shock \citep{1981ApJ...246L..85P, 1992ApJ...393L..41J}, neither {\emph{Voyager}}~1 nor {\emph{Voyager}}~2 \citep{1977SSRv...21...77K} observed a peak in ACR intensity when crossing the termination shock \citep{2005Sci...309.2017S, 2008Natur.454...71S}. As a result, numerous theories have been proposed to explain this, including a ``blunt'' termination shock geometry in which the ACR acceleration occurs preferentially along the termination shock flanks and tail \citep{2006GeoRL..33.4102M, 2008ApJ...675.1584S} away from the region the {\emph{Voyager}} S/C crossed, magnetic reconnection at the heliopause \citep{2010ApJ...709..963D}, heliosheath compressive turbulence \citep{2009AdSpR..43.1471F}, and second-order Fermi processes \citep{2010JGRA..11512111S}. After being accelerated, ACR particles penetrate back into the heliosphere, where their intensities decrease due to solar modulation \citep[{\emph{e.g.}},][]{1999AdSpR..23..521K, 2002ApJ...578..194C, 2006GeoRL..33.4102M}.
The radial gradients of ACRs in the heliosphere have primarily been studied from 1~AU outward through comparing observations from {\emph{IMP-8}}\footnote{The Interplanetary Monitoring Platform-8} at 1~AU to observations from {\emph{Pioneer}}~10, {\emph{Pioneer}}~11, {\emph{Voyager}}~1, and {\emph{Voyager}}~2 in the outer heliosphere. These comparisons revealed that the helium ACR intensity varied as $r^{-0.67}$ from 1 to $\sim41$~AU \citep{1990ICRC....6..206C}. Understanding this modulation provides insight into the various processes that govern global cosmic ray drift paths throughout the heliosphere. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rankin_2021_1.jpg} \caption{Helium spectra over the first three orbits of {\emph{PSP}} after removing transient events \citep[see][for more information]{2021ApJ...912..139R} at {\emph{PSP}} (red and blue) and at {\emph{SOHO}} (green). A simulated GCR spectrum at 1~AU is included (black) from HelMOD (version 4.0.1, 2021 January; www.helmod.org). Figure adapted from \citet{2021ApJ...912..139R}.} \label{Rankin_2021_1} \end{figure*} The orbit of {\emph{PSP}} is well suited to investigate ACR radial variations due to its sampling of a large range of radial distances near the ecliptic. Additionally, {\emph{PSP}} enables investigations into the ACR populations at distances closer to the Sun than previously measured. \citet{2021ApJ...912..139R} utilized the {\emph{PSP}}/IS$\odot$IS/EPI-Hi instrument to study the radial variation of the helium ACR content down to $35.6~R_\odot$ (0.166~AU) and compare these observations to ACR observations at 1~AU measured by the {\emph{SOHO}} mission. To ensure that the particles included in the comparisons were ACRs, rather than SEPs, only ``quiet-time'' periods were used \citep[see the Appendix in][]{2021ApJ...912..139R}. The resulting quiet-time EPI-Hi and {\emph{SOHO}} spectra over the first three orbits of {\emph{PSP}} are shown in Fig.~\ref{Rankin_2021_1}.
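Radial dependence is quoted in this subsection in two forms: as a power law $j \propto r^{-k}$ (e.g. $k = 0.67$ for helium ACRs from 1 to $\sim41$~AU above) and as a local fractional gradient in \% per AU. For a pure power law the two are linked by the identity $G(r) = (1/j)\,\mathrm{d}j/\mathrm{d}r = -k/r$, so the gradient steepens toward the Sun even for a fixed exponent. The distances in the sketch below are illustrative, not measurements.

```python
# For j(r) ∝ r**(-k), the local fractional gradient per AU is
#   G(r) = (1/j) dj/dr = -k / r,
# which converts a power-law exponent into a %/AU gradient at any r.
def local_gradient_percent_per_au(k, r_au):
    """Fractional gradient (%/AU) at heliocentric distance r_au for j ∝ r**(-k)."""
    return -100.0 * k / r_au

# Illustrative distances (assumed): the same k = 0.67 exponent implies a
# much steeper local gradient at 0.3 AU than at 10 AU.
g_outer = local_gradient_percent_per_au(0.67, 10.0)
g_inner = local_gradient_percent_per_au(0.67, 0.3)
```

This kinematic conversion is why inner-heliosphere gradient measurements are not directly comparable to exponents fitted over the outer heliosphere without specifying the distance at which the gradient is evaluated.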
The ACR intensity was observed to increase over energies from $\sim5$ to $\sim40$ MeV/nuc, a characteristic feature of ACR spectra. Figs.~\ref{Rankin_2021_2}a and \ref{Rankin_2021_2}b show normalized ACR fluxes from the {\emph{SOHO}} Electron Proton Helium INstrument \citep[EPHIN;][]{1988sohi.rept...75K} and {\emph{PSP}}/IS$\odot$IS/EPI-Hi, respectively. The ratio of the ACR fluxes (Fig.~\ref{Rankin_2021_2}c) correlates well with the heliographic radial distance of {\emph{PSP}} (Fig.~\ref{Rankin_2021_2}d). This presents clear evidence of radial-dependent modulation, as expected. However, the observed radial gradient is stronger ($\sim25\pm5$\%~AU$^{-1}$) than observed beyond 1~AU. Better understanding the radial gradients of ACRs in the inner heliosphere may provide needed constraints on drift transport and cross-field diffusion models, as cross-field diffusion will become more dominant in the inner heliosphere \citep{2010JGRA..11512111S}. Future studies will also be aided by the addition of ACR measurements by {\emph{SolO}}, such as those reported in \citet{2021A&A...656L...5M}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Rankin_2021_2.jpg} \caption{ACR normalized flux at (a) 1~AU averaged over 27.27 days and (b) {\emph{PSP}} averaged over Carrington longitude. The ratio of intensities (c) has a clear dependence on the radial distance of {\emph{PSP}} (d). Figure adapted from \citet{2021ApJ...912..139R}.} \label{Rankin_2021_2} \end{figure*} \subsection{Open Questions and Future Collaborations} \label{EPsRadOQ} Over the first four years of the {\emph{PSP}} prime mission, large advances have been made regarding our understanding of inner heliospheric energetic particles and solar radio emissions.
Looking forward, as the solar cycle ascends out of solar minimum, and as additional observatories such as {\emph{SolO}} enter into their science phase and provide robust energetic particle measurements \citep[see][]{2021A&A...656A..22W}, many new opportunities to study energetic particle populations and dynamics will present themselves. For example, while {\emph{PSP}} has begun exploring the radial evolution of SIRs and associated energization and transport of particles, future measurements will explore the cause of known solar cycle dependencies of the SIR/CIR-associated suprathermal ion composition \citep[{\emph{e.g.}},][]{ 2008ApJ...678.1458M, 2012ApJ...748L..31M, 2019ApJ...883L..10A}. Additionally, future {\emph{PSP}} observations of SIR/CIR-associated ions will be a crucial contribution to studies on the radial gradient of energetic ions \citep[{\emph{e.g.}},][]{1978JGR....83.4723V, Allen2021_CIR}. As {\emph{SolO}} begins to return off-ecliptic observations, the combination of {\emph{PSP}} and {\emph{SolO}} at different latitudes will enable needed insight into the latitudinal structuring of SIR/CIRs and associated particle acceleration. As the solar cycle progresses, solar activity will increase. This will provide many new observations of CMEs and SEP events at various intensities and radial distances in the inner heliosphere, particularly for low energy SEP events that are not measurable at 1~AU \citep[{\emph{e.g.}},][]{2020ApJS..246...65H}. These {\emph{PSP}} observations will further our understanding of CME-associated shock acceleration and how the energetic content of CMEs evolves with heliographic distance. The current and future Heliophysics System Observatory (HSO) should also provide additional opportunities to not only study the radial evolution of CMEs \citep[{\emph{e.g.}},][]{2021AA...656A...1F}, but also the longitudinal variations of these structures, as was done for the 29 Nov. 
2020 event \citep[{\emph{e.g.}},][]{2021A&A...656A..29C,2021A&A...656A..20K,2021A&A...656L..12M}. As discussed in \S\ref{SEPs}, {\emph{PSP}} has already expanded our understanding of SEP events in the inner heliosphere. Because {\emph{SolO}}, which also returns observations of \textsuperscript{3}He-rich SEP events \citep[{\emph{e.g.}},][]{2021A&A...656L...1M,2021A&A...656L..11B}, will soon be taking measurements of SEP events at different latitudes than {\emph{PSP}}, the combination of these missions will enable exploration of latitudinal variations in SEP content. Similarly, energetic electron measurements on {\emph{SolO}} \citep[{\emph{e.g.}},][]{2021A&A...656L...3G}, soon to be taken off-ecliptic, will enable future studies into the latitudinal variations of electron events. In addition to radio observations using multiple S/C, space-based and ground-based multi-wavelength observations enable new types of coordinated analysis of solar activity. \cite{2021A&A...650A...7H} combined {\emph{Hinode}}, {\emph{SDO}}/AIA, and RFS observations in a joint analysis of a non-flaring AR and a type III storm observed during {\emph{PSP}} Enc.~2, identifying the source of electron beams associated with the storm and using radio measurements to show the evolution of the peak emission height throughout the storm. \cite{2021A&A...650A...6C} studied a different storm occurring slightly after Enc.~2 using radio observations from {\emph{PSP}} and {\emph{Wind}}, and solar observations from {\emph{SDO}}/AIA, {\emph{SDO}}/HMI, and the Nuclear Spectroscopic Telescope ARray \citep[{\emph{NuSTAR}};][]{2013ApJ...770..103H}, finding correlated periodic oscillations in the EUV and radio data indicative of small impulsive electron acceleration. 
Additionally, the continuation of the {\emph{PSP}} project science team’s close relationship with the Whole Heliosphere and Planetary Interactions (WHPI\footnote{https://whpi.hao.ucar.edu/}) international initiative, the successor of Whole Sun Month \citep{1999JGR...104.9673G} and Whole Heliosphere Interval \citep{2011SoPh..274....5G, 2011SoPh..274...29T, 2011SoPh..274....1B}, will allow for multifaceted studies that incorporate ground-based and space-based observatories providing contextual information for the {\emph{PSP}} measurements. Many of these studies are beginning now, and should propel our fundamental understanding of the connection of the solar surface to interplanetary space and beyond. \section{Dust} \label{PSPDUST} \subsection{Dust Populations in the Inner Heliosphere} The ZDC is one of the largest structures in the heliosphere. It comprises cosmic dust particles sourced from comets and asteroids, with most of the material located in the ecliptic plane where the majority of these dust sources reside. The orbits of grains gravitationally bound to the Sun, termed ``\amsn'', lose angular momentum from Poynting-Robertson and solar wind drag \citep[{\emph{e.g.}},][]{1979Icar...40....1B} and subsequently circularize and spiral toward the Sun. Due to the inward transport of zodiacal material, the dust spatial density increases as these grains get closer to the Sun \citep[{\emph{e.g.}},][]{1981A&A...103..177L}, until they are ultimately collisionally fragmented or sublimated into smaller grains \citep[{\emph{e.g.}},][]{2004SSRv..110..269M}. Dust-dust collisions within the cloud are responsible for generating a significant portion of the population of smaller grains. Additionally, a local source of dust particles very near the Sun is the near-Sun comets: ``Sunskirters'' that pass the Sun within half of Mercury’s perihelion distance, and sungrazers that reach perihelion within the fluid Roche limit \citep{2018SSRv..214...20J}.
Because these comets are in elongated orbits, their dust remains in the vicinity of the Sun only for a short time \citep{2004SSRv..110..269M,2018A&A...617A..43C}. Sub-micron sized grains, with radii on the order of a few hundred nm, are most susceptible to outward radiation pressure. The orbital characteristics of these submicron-sized ``\bmsn'' are set by the ratio of solar radiation pressure to gravitational force, $\beta = F_{R}/F_{G}$, dependent on both grain size and composition \citep{1979Icar...40....1B}. Grains with $\beta$ above a critical value dependent on their orbital elements have positive orbital energy and follow hyperbolic trajectories escaping the heliosphere. This population of grains represents the highest number flux of micrometeoroids at 1~AU \citep{1985Icar...62..244G}. For the smallest nanograins ($\lesssim50$~nm), electromagnetic forces play an important role in their dynamics \citep[{\emph{e.g.}},][]{1986ASSL..123..455M}, where a certain population of grains can become electromagnetically trapped very near the Sun \citep{2010ApJ...714...89C}. Fig.~\ref{fig:dust_overview} summarizes these various processes and dust populations. \begin{figure} \includegraphics[width=4.5in]{mann_2019_fig1.png} \caption{The dust environment near the Sun \citep{2019AnGeo..37.1121M}.\label{fig:dust_overview} } \end{figure} When dust particles approach very near to the Sun, they can sublimate rapidly, leaving a region near the Sun relatively devoid of dust. Different estimates of this DFZ have been made based on Fraunhofer-corona (F-corona) observations and model calculations, predicting a DFZ within 2 to 20 solar radii and possible flattened radial profiles before its beginning \citep{2004SSRv..110..269M}. These estimates are based on dust sublimation; however, an additional destruction process recognized in the innermost parts of the solar system is sputtering by solar wind particles.
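Since both $F_R$ and $F_G$ fall off as $r^{-2}$, the ratio $\beta$ defined above is independent of heliocentric distance, and for spherical grains it reduces to $\beta \approx 0.57\,Q_{pr}/(\rho s)$ with $\rho$ in g~cm$^{-3}$ and grain radius $s$ in $\mu$m \citep{1979Icar...40....1B}. The density and radiation-pressure efficiency used below are illustrative assumptions.

```python
# beta = F_R / F_G for a spherical grain (Burns et al. 1979):
#   beta ≈ 0.57 * Q_pr / (rho * s),  rho in g/cm^3, radius s in microns.
# Q_pr is the radiation-pressure efficiency (~1 for grains larger than
# the wavelength of sunlight); rho = 2.5 g/cm^3 is an assumed density.
def beta(radius_um, density=2.5, q_pr=1.0):
    """Ratio of solar radiation pressure to gravity for a spherical grain."""
    return 0.57 * q_pr / (density * radius_um)

# A ~0.2-micron grain has beta > 1 and is ejected on a hyperbolic orbit
# (a beta-meteoroid), while a 10-micron grain has beta << 1 and remains
# gravitationally bound (an alpha-meteoroid).
```

This inverse-size scaling is what concentrates the radiation-pressure effect in the few-hundred-nm range discussed above: well below that, $Q_{pr}$ drops and electromagnetic forces take over for nanograins.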
\citet{2020AnGeo..38..919B} showed that sputtering is more effective during a CME event than during other solar wind conditions and suggested that multiple CMEs can lead to an extension of the DFZ. Dust destruction near the Sun releases molecules and atoms, where photoionization, electron-impact ionization, and charge exchange quickly ionize the atoms and molecules. This process contributes to a population of pickup ions in the solar wind and provides a seed population for energetic particles \citep{2000JGR...105.7465S,2005ApJ...621L..73M}. The inner heliosphere's dust populations within a few~AU have been observed both with {\emph{in~situ}} dust impact detections and remotely via scattered light observations. Due to their higher number densities, dust grains with radii on the order of $\sim\mu$m and smaller can be observed with {\emph{in~situ}} impact measurements. Dedicated dust measurements within this size range have been taken in the inner solar system with {\emph{Pioneers}} 8 and 9 \citep{1973spre.conf.1047B}, {\emph{HEOS-2}}\footnote{The Highly Eccentric Orbit Satellite 2} \citep{1975P&SS...23..985H, 1975P&SS...23..215H}, {\emph{Helios}} \citep{1978A&A....64..119L, 1981A&A...103..177L, 1980P&SS...28..333G, 2006A&A...448..243A, 2020A&A...643A..96K}, {\emph{Ulysses}} \citep{1999A&A...341..296W,2004A&A...419.1169W,2003JGRA..108.8030L, 2015ApJ...812..141S, 2019A&A...621A..54S}, and {\emph{Galileo}} \citep{1997Icar..129..270G}. These observations identified three populations of dust: \amsn, \bmsn, and interstellar grains transiting the solar system \citep{1993Natur.362..428G}. Before {\emph{PSP}}, the innermost dust measurements were made by {\emph{Helios}} as close as 0.3~AU from the Sun. For grains on the order of several $\mu$m and larger, astronomical observations of the F-corona and ZL \citep{1981A&A...103..177L} provide important constraints on their density distributions.
Unlike the broader zodiacal cloud (ZC) structure, which is most concentrated near the ecliptic plane, the solar F-corona has a more spherical shape, with the transition from one to the other following a super-ellipsoidal shape according to the radial variation of a flattening index derived from observations with the {\emph{STEREO}}/SECCHI instrument \citep{2018ApJ...864...29S}. Measurements from {\emph{Helios}}~1 and {\emph{Helios}}~2 at locations between 0.3 to 1~AU showed that the brightness profile at the symmetry axis of the ZL falls off as a power law of solar distance, with exponent 2.3 \citep{1981A&A...103..177L}, which is consistent with a derived dust density profile of the form $n(r) = n_0\ r^{-1.3}$. This dust density dependence is well reproduced by the dust produced by Jupiter-family comets \citep{2019ApJ...873L..16P}. Additionally, there were a number of discussions on the influence of excess dust in circumsolar rings near the Sun \citep[][]{1998EP&S...50..493K} on the observed F-corona brightness \citep[][]{1998P&SS...46..911K}. Later, \citet{2004SSRv..110..269M} showed that no prominent dust rings exist near the Sun. More recently, \citet{2021SoPh..296...76L} analyzed images obtained with {\emph{SOHO}}/LASCO-C3 between 1996 and 2019. Based on a polarimetric analysis of the LASCO-C3 images, they separated the F- and K-corona components and derived the electron-density distribution. In addition, they reported a likely increase of the polarization of the F-corona with increasing solar elongation. They did not, however, discuss the dust distribution near the Sun. The properties of the F-corona are discussed further in \citet{2022SSRv..218...53L}. To date, our understanding of the near-Sun dust environment is built on both {\emph{in~situ}} and remote measurements outside 0.3~AU, or 65 $\rs$.
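The two exponents quoted above are mutually consistent: schematically, the line-of-sight (LOS) integration of scattered light removes one power of $r$, so that for a density $n(r)\propto r^{-\nu}$,
\begin{equation}
I(r_0) \propto \int_{\rm LOS} n(r)\,\Phi(\theta)\,\frac{{\rm d}\ell}{r^2} \sim n(r_0)\, r_0^{-2}\, r_0 \propto r_0^{-(\nu+1)},
\end{equation}
where $r_0$ is the closest solar distance along the LOS and $\Phi(\theta)$ the scattering function; $\nu = 1.3$ thus reproduces the observed brightness exponent of 2.3. This is only an order-of-magnitude sketch that neglects the detailed elongation dependence of $\Phi(\theta)$.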
{\emph{PSP}}, with its eccentric and progressively lower perihelion orbit, provides the only {\emph{in~situ}} measurements and remote-sensing observations of interplanetary dust in the near-Sun environment inside 0.3~AU. In the first six orbits alone, {\emph{PSP}} has transited as close as 20 $\rs$ from the center of the Sun, offering an unprecedented opportunity to understand heliospheric dust in the densest part of the ZC and provide critical insight into long-standing fundamental science questions concerning the stellar processing of dust debris discs. Key questions {\emph{PSP}} is well positioned to address include: How is the ZC eroded in the very near-Sun environment? Which populations of material are able to survive in this intense environment? How do the near-Sun dust populations interact with the solar wind? \subsection{Dust Detection on {\emph{PSP}}} A number of sensors on {\emph{PSP}} are capable of detecting interplanetary dust in the inner heliosphere, each by a different mechanism. The FIELDS instrument detects perturbations to the S/C potential that result from transient plasma clouds formed when dust grains strike the S/C at high velocities, vaporizing and ionizing the impacting grain and some fraction of the S/C surface \citep{2020ApJS..246...27S, Page2020, Malaspina2020_dust}. WISPR detects solar photons scattered by dust in the ZC \citep{2019Natur.576..232H, 2021A&A...650A..28S}, and IS$\odot$IS is sensitive to dust penetration of its collimator foils \citep{2020ApJS..246...27S}. Dust detection by these mechanisms has led to advances in our understanding of the structure and evolution of the zodiacal dust cloud (ZDC), and we describe these observations in the context of {\emph{in~situ}} and remote measurements separately below.
\subsection{{\emph{In~situ}} Impact Detection} \subsubsection{FIELDS} As {\emph{PSP}} traverses the inner heliosphere, its orbital trajectory results in high relative velocities between the S/C and interplanetary dust grains. Relative velocities can approach 100~km~s$^{-1}$ for $\alpha$-meteoroids and can exceed several hundred km~s$^{-1}$ for $\beta$-meteoroids and retrograde impactors \citep{2020ApJS..246...27S}. The impact-generated plasma cloud perturbs the S/C surface potential, creating a signal detectable by the FIELDS electric field sensors. This method of {\emph{in~situ}} dust detection was first demonstrated on the {\emph{Voyager}} S/C by \citet{Gurnett1983} and has subsequently been reported on numerous other S/C; see the review by \citet{2019AnGeo..37.1121M} and references therein. While there is agreement that electric field sensors detect impact ionization of dust, the physical mechanism by which potential perturbations are generated continues to be an active topic of research, with a range of competing theories \citep{Oberc1996, Zaslavsky2015, Kellogg2016, MeyerVernet2017, 2019AnGeo..37.1121M, 2021JGRA..12628965S}, and rapidly advancing lines of inquiry using controlled laboratory experiments \citep[{\emph{e.g.}},][]{2014JGRA..119.6019C, 2015JGRA..120.5298C, 2016JGRA..121.8182C, Nouzak2018, 2021JGRA..12629645S}. On {\emph{PSP}}, the vast majority of dust impact ionization events produce high-amplitude ($>10$~mV), brief ($\mu$s to ms) voltage spikes. These can be detected in various FIELDS data products, including peak detector data, bandpass filter data, high-cadence time-series burst data, and lower-cadence continuous time-series data \citep{2016SSRv..204...49B, Malaspina2016_DFB}. Impact plasma clouds often produce asymmetric responses on electric field antennas \citep[{\emph{e.g.}},][]{Malaspina2014_dust}. By comparing the relative dust signal amplitude on each FIELDS antenna for a given impact, the location of the impact on the S/C body can be inferred.
From the impact location, and constraints imposed by dust population dynamics, one can deduce the pre-impact directionality of the dust that struck the S/C \citep{Malaspina2020_dust, Pusack2021}. {\emph{PSP}} data have revealed new physical processes active in the impact ionization of dust. \citet{Dudok2022_scm} presented the first observations of magnetic signatures associated with the escape of electrons during dust impact ionization. \citet{Malaspina2022_dust} demonstrated a strong connection between the plasma signatures of dust impact ionization and subsequent debris clouds observed by WISPR and the {\emph{PSP}} star trackers. This study also demonstrated that long-duration S/C potential perturbations, which follow some dust impacts, are consistent with theoretical expectations for clouds of S/C debris that electrostatically charge in the solar wind \citep{2021JGRA..12629645S}. These perturbations can persist up to 60 seconds, much longer than the brief ($\mu$s to ms) voltage spikes generated by the vast majority of dust impacts. \subsubsection{Data-Model Comparisons} \begin{figure}[ht] \centering \includegraphics[width=4.5in]{overview_rates_v7_50mv.pdf} \caption{Daily averaged impact rates as a function of radial distance for orbits $1-6$, separated by inbound (a) and outbound (b). (c) Impact rates overlaid on the {\emph{PSP}} trajectory in the ecliptic J2000 frame, averaged over orbits $1-3$ and $4-5$, and shown individually for orbit 6. Color and width of the color strip represent the impact rate. Figure adapted from \citet{szalay:21a}. \label{fig:dust_rates}} \end{figure} Since FIELDS can detect impacts over the entire S/C surface area, in the range of $4-7$ m$^2$ \citep{Page2020}, {\emph{PSP}} provides a robust observation of the total impact rate on the S/C. Fig.~\ref{fig:dust_rates} shows the impact rates as a function of heliocentric distance and in ecliptic J2000 coordinates \citep{szalay:21a}.
There are a number of features observed in the impact rate profiles. For the first three orbits, which had very similar trajectories, a single pre-perihelion peak was observed. For the subsequent two orbit groups, orbits $4-5$ and orbit 6, a post-perihelion peak was also observed, with a local minimum in impact rate near perihelion. As shown in Fig.~\ref{fig:dust_rates}c, the substructure in the observed impact rate occurs inside the previous inner limit of {\emph{in~situ}} dust detections by {\emph{Helios}}. While {\emph{PSP}} registers a large number of impacts due to its effective area, determining impactor speed, mass, and directionality is not straightforward. Translating these impact rates into meaningful conclusions about inner zodiacal dust therefore requires data-model comparisons. Analysis of {\emph{PSP}} dust impact data from the first three orbits found that the orbital variation in dust count rates detected by FIELDS during the first three solar Encs. was consistent with primarily sub-micron $\beta$-meteoroids \citep{2020ApJS..246...27S,Page2020,Malaspina2020_dust}. From the first three orbits, it was determined that the flux of \bms varies by approximately 50\%, suggesting the inner solar system's collisional environment varies on timescales of hundreds of days \citep{Malaspina2020_dust}. Additionally, nanograins with radii below 100 nm were not found to appreciably contribute to the observed impact rates from these first orbits \citep{2021A&A...650A..29M}. Subsequent analysis including the first six orbits \citep{szalay:21a} compared {\emph{PSP}} data to a two-component analytic dust model and concluded that {\emph{PSP}}'s dust impact rates are consistent with at least three distinct populations: ($\alpha$) bound zodiacal \ams on elliptic orbits, ($\beta$) unbound \bms on hyperbolic orbits, and a distinct third population of impactors.
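The flavor of such a data-model comparison can be sketched with a toy two-component rate estimate; the normalizations, density exponents, and the assumed $\beta$-meteoroid outflow speed below are illustrative assumptions, not the calibrated model of \citet{szalay:21a}:

```python
import math

# Toy two-component impact-rate sketch (hypothetical normalizations):
# bound alpha-meteoroids on near-circular Kepler orbits plus unbound
# beta-meteoroids streaming radially outward.

AU = 1.496e11          # m
GM_SUN = 1.327e20      # m^3 s^-2

def kepler_speed(r_au):
    """Circular Keplerian speed at heliocentric distance r (km/s)."""
    return math.sqrt(GM_SUN / (r_au * AU)) / 1e3

def impact_rate(r_au, v_sc_kms, area_m2=5.0):
    """Crude relative impact rate: n(r) * v_rel * A for each component,
    with n ~ r^-1.3 for the bound (alpha) component and an assumed
    r^-2 falloff for the outward-streaming beta component."""
    n_alpha = r_au ** -1.3                         # relative density, alpha
    n_beta = 0.1 * r_au ** -2.0                    # assumed relative density, beta
    v_alpha = abs(v_sc_kms - kepler_speed(r_au))   # S/C speed vs bound dust
    v_beta = math.hypot(v_sc_kms, 100.0)           # ~radial betas at ~100 km/s
    return area_m2 * (n_alpha * v_alpha + n_beta * v_beta)
```

Even this crude scaling reproduces the qualitative rise of impact rates toward perihelion; disentangling the components in practice requires the directionality and amplitude information described above.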
Unlike the first three orbits, where the dust impact data were dominated by escaping \bmsn, during orbits $4-6$ larger grains have been inferred to dominate FIELDS detections for sections of each orbit \citep{szalay:21a}. Data-model comparisons from the first six orbits have already provided important insight on the near-Sun dust environment. First, they constrained the zodiacal collisional erosion rate to be greater than 100 kg s$^{-1}$. This material, in the form of outgoing \bmsn, was found to be predominantly produced within $10-20~\rs$. It was also determined that \bms are unlikely to be the inner source of pickup ions, instead suggesting the population of trapped nanograins \citep{2010ApJ...714...89C} with radii $\lesssim 50$ nm is likely this source. The flux of \bms at 1~AU was also estimated to be in the range of $0.4-0.8 \times 10^{-4}$ m$^{-2}$ s$^{-1}$. \begin{figure}[ht] \centering \includegraphics[width=4.5in]{pusack_2021_fig5.png} \caption{(a) Orbit 4 count rates vs. time for antennas V1, V2, V3, and V4 using the $50-1000$~mV amplitude window, with darker gray lines corresponding to the ram direction of the S/C and lighter gray lines corresponding to the anti-ram direction. (b) Count rates vs. time for each amplitude window on all four planar antennas. The gray shaded region indicates the anomaly duration. Panel (b) uses the same tone gradation as (a) within each color family, where each color family corresponds to a different amplitude window: blues for $50-100$~mV, orange-yellows for $100-200$~mV, pinks for $200-400$~mV, and greens for $400-1000$~mV. (c) Amplitude-window ram/anti-ram rates vs. time with the same color families as (b). Figure adapted from \citet{Pusack2021}.
\label{fig:pusack}} \end{figure} From the data-model comparisons, orbits $4-6$ exhibited a post-perihelion peak in the impact rate profile that was not well described by the two-component model of nominal \ams and \bms \citep{szalay:21a}. Two hypotheses were proposed to explain this post-perihelion impact rate enhancement: (a) {\emph{PSP}} directly transited and observed grains within a meteoroid stream, or (b) {\emph{PSP}} flew through the collisional by-products produced when a meteoroid stream collides with the nominal ZC, termed a \bsn. The timing and total flux observed during this time favor the latter explanation, and more specifically, a \bs from the Geminids meteoroid stream was suggested to be the most likely candidate \citep{szalay:21a}. A separate analysis focusing on the directionality and amplitude distribution during the orbit 4 post-perihelion impact rate enhancement also supports the Geminids \bs hypothesis \citep{Pusack2021}. Fig.~\ref{fig:pusack} shows the amplitude and directionality trends observed during orbit 4, where the two impact rate peaks exhibit different behaviors. For the pre-perihelion peak, predicted by the two-component model, impact rates for multiple separate amplitude ranges all peak at similar times (Fig.~\ref{fig:pusack}b) and impact the S/C from similar locations (Fig.~\ref{fig:pusack}c). The post-perihelion peak exhibits a clear amplitude dispersion, where impacts producing smaller amplitudes peak in rate ahead of the impacts that produce larger amplitudes. Additionally, the ram/anti-ram ratio is significantly different from that of the pre-perihelion peak. As further described in \citet{Pusack2021}, these differences are also suggestive of a Geminids \bsn.
We note that grains smaller than the detected \bms and affected by electromagnetic forces have a much larger flux close to the orbital perihelia than at other parts of the orbit \citep{2021A&A...650A..29M}, yet their detection is difficult with {\emph{PSP}}/FIELDS due to the low expected impact charge generated by such small-mass grains \citep{szalay:21a}. Fig.~\ref{fig:dust_overview_PSP} summarizes the dust populations {\emph{PSP}} is likely directly encountering. From the data-model comparisons, the relative fluxes and densities of bound \ams and unbound \bms have been quantitatively constrained. {\emph{PSP}}'s dust impact measurements have been able to directly inform on the intense near-Sun dust environment. Furthermore, the existence of a third dust population suggests collisions between material along asteroid or cometary orbits can be a significant source of near-Sun collisional grinding and \bmm production in the form of \bs \citep{szalay:21a}, and that this is likely a fundamental physical process occurring in all circumstellar dust clouds. \begin{figure}[ht] \centering \includegraphics[width=4.5in]{summary_v6_50mv.pdf} \caption{{\emph{PSP}} detects impacts due to \amsn, \bmsn, and likely from discrete meteoroid streams. {\it Left}: Impact rates and model fits from orbit 3 (inbound) and orbit 4 (outbound). {\it Right}: Sources for the multiple populations observed by {\emph{PSP}}. Figure adapted from \citet{szalay:21a}. \label{fig:dust_overview_PSP}} \end{figure} \subsection{Remote Sensing} \subsubsection{Near-Sun dust density radial dependence} \label{dddZ} \begin{figure} \includegraphics[scale=0.35, clip=true]{FigDustDepletion.pdf} \caption{(a) Left panel: Sample of radial brightness gradients along the symmetry axis of the F-corona (black) and data fit with an empirical model (red dashed line). The linear portion of the model is delineated with the light-blue dashed line. The inset shows the percentage departure of the empirical model from the linear trend.
Upper Right panel: Comparison of the empirical model (in black) and the forward modeling of the ZL brightness along the symmetry axis considering a DDZ between $2-20~R_\odot$ (in green) and $3-19~R_\odot$ (in red). The inset shows the relative error of the simulations compared to the empirical model (same color code). Bottom Right panel: Dust density model used in the simulations relative to the outermost edge of the DDZ between $3-19~R_\odot$ assuming a linear decrease in the multiplier of the nominal density. The dashed, vertical red line indicates the innermost distance of the WISPR FOV in this study. Figure adapted from \cite{2021A&A...650A..28S}.} \label{fig:depl} \end{figure} In anticipation of the {\emph{PSP}} observations, several studies of the ZL/F-corona based on observations from the {\emph{STEREO}}/SECCHI instrument were carried out \citep{2017ApJ...839...68S, 2017ApJ...848...57S, 2018ApJ...862..168S,2018ApJ...864...29S}. These studies established a baseline of F-corona properties from 1 AU to help identify any differences that may arise due to the varying heliocentric distance of the corresponding WISPR observations. The question of whether a DFZ \citep[][]{1929ApJ....69...49R} exists close to the Sun is long-standing and was not answered by pre-{\emph{PSP}} observations of the ZL/F-corona. White-light observations obtained from distances between $0.3-1$~AU \citep[{\emph{e.g.}},][]{2018ApJ...862..168S,1981A&A...103..177L} do not reveal any break in the radial gradient of the brightness along the symmetry plane of the ZDC, which was found to follow a power law $I(r) \sim r^{-2.3}$ down to at least the theoretically predicted start of the DFZ at $\approx 4-5 R_\odot$. The WISPR instrument has recorded the intensities of the ZL/F-corona from ever-decreasing observing distances, down to about 0.074~AU ($\sim15.9~R_\odot$) at the last executed perihelion (at the time of this writing).
This unprecedented observer distance corresponds to an inner limit of the FOV of WISPR-i of about 3.7~$R_\odot$ (0.017~AU). A striking result from the WISPR observations obtained during the first five orbits was the departure of the radial dependence of the F-corona brightness profile along the symmetry axis of the ZDC from the previously established power law \citep[][hereafter referred to as HS]{2019Natur.576..232H,2021A&A...650A..28S}. In the left panel of Fig.~\ref{fig:depl}, we show a sample of WISPR brightness profiles along the symmetry axis obtained during orbits 1, 2, 4, and 5 (in black), along with the fit of an empirical model comprising a linear and an exponential function (red dashed line). The linear portion of the empirical model is delineated with the light-blue dashed line. We note that the linear behavior ({\emph{i.e.}}, constant gradient) continues down to $\sim20~R_\odot$ with the same slope as observed in former studies ({\emph{i.e.}}, $\propto r^{-2.3}$). Below that elongation distance, the radial brightness gradient becomes less steep. The modeled brightness measurements depart by about 35\% at the inner limit of $7.65~R_\odot$ (0.036~AU) from the extrapolation of the linear part of the model (see inset in Fig.~\ref{fig:depl}). The brightness decrease is quite smooth down to $7.65~R_\odot$, {\emph{i.e.}}, it does not show discrete brightness enhancements due to sublimation effects of any particular dust species \citep[][]{2009Icar..201..395K}. The brightness profile was forward-modeled using RAYTRACE \citep{2006ApJ...642..523T}\footnote{RAYTRACE is available in the {\emph{SOHO}} Solarsoft library, http://www.lmsal.com/solarsoft/. }, which was adapted to integrate the product of an empirical volume scattering function (VSF) and a dust density at each point along any given LOS.
The VSF was given by \cite{1986A&A...163..269L}, which condenses all the physics of the scattering process into an empirical function of a single parameter, the scattering angle. The dust density along the symmetry axis of the ZDC was taken from \cite{1981A&A...103..177L} ($n(r) \propto r^{-1.3}$). The intensity decrease observed in WISPR results was ascribed to the presence of a dust depletion zone (DDZ), which appears to begin somewhere between 19 and 20~$R_\odot$ and extends sunward to the beginning of the DFZ. To model the density decrease in the DDZ, we used a multiplier in the density model defined as a linear factor that varied between 1 at the outer boundary of the DDZ and 0 at the inner boundary. The extent of the DDZ was determined empirically by matching the RAYTRACE calculation with the empirical model of the radial brightness profile. In the upper right panel of Fig.~\ref{fig:depl} we show in green and red (the red is fully underneath the green) the forward modeling of the brightness with two different boundaries for the DDZ, along with the empirical model (in black). The inner boundary was a free parameter chosen to give the best match to the empirical model. The $3-19~R_\odot$ range (green) for the DDZ yields a slightly better fit to the observation than the $2-20~R_\odot$ range (red). Note that the behavior below the observational limit of 7.65~$R_\odot$ is only an extrapolation. The inset shows the difference between the two forward models. We thus choose $19~R_\odot$ as the upper limit of the DDZ, although depletion could start beyond $19~R_\odot$ without causing a noticeable change in the intensities until about $19~R_\odot$. In the bottom right panel of Fig.~\ref{fig:depl}, we show the radial profile of the dust density relative to the density at $19~R_\odot$ for the best fit to the intensity profile. Note that from $\sim10$ to $19~R_\odot$, the density appears to be approximately constant.
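The depletion model described above amounts to multiplying the nominal density power law by a piecewise-linear weight,
\begin{equation}
n(r) = w(r)\, n_0\, r^{-1.3}, \qquad
w(r) =
\begin{cases}
0, & r \le r_{\rm in}, \\
\frac{r - r_{\rm in}}{r_{\rm out} - r_{\rm in}}, & r_{\rm in} < r < r_{\rm out}, \\
1, & r \ge r_{\rm out},
\end{cases}
\end{equation}
with $(r_{\rm in}, r_{\rm out}) = (3, 19)~R_\odot$ for the best-fit case shown in green in Fig.~\ref{fig:depl}.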
In future orbits, WISPR will observe the corona down to 2.3~$R_\odot$, which will help establish more accurately the actual limit of the DFZ. \subsubsection{Implications for collisions and/or sublimation} The smooth behavior of the radial brightness profile of the F-corona along its symmetry axis from 35~$R_\odot$ down to 7.65~$R_\odot$ is suggestive of a smooth and continuous process of dust removal. No evidence is seen of dust depletion at a particular distance due to the sublimation of a particular species. Thus the dust remaining at these distances is probably similar to quartz or obsidian, which are fairly resistant to sublimation \citep[{\emph{e.g.}},][]{2004SSRv..110..269M}. \subsubsection{Dust density enhancement along the inner planets' orbits} \label{VdustRing} In addition to measurements of the broad ZC structure, discrete dust structures have also been observed by WISPR. A dust density enhancement near Earth's orbit was theoretically predicted in the late 1980s by \cite{1989Natur.337..629J} and observationally confirmed by \cite{1994Natur.369..719D} using observations from the Infrared Astronomy Satellite \citep[{\emph{IRAS}};][]{1984ApJ...278L...1N}. \cite{1995Natur.374..521R} confirmed the predicted structure of the dust ring near Earth using observations from the Diffuse Infrared Background Experiment \citep[{\emph{DIRBE}};][]{1993SPIE.2019..180S} on the Cosmic Background Explorer mission \citep[{\emph{COBE}};][]{1992ApJ...397..420B}. More recently, in a reanalysis of white-light observations from the {\emph{Helios}} mission \citep{1981ESASP.164...43P}, \cite{2007A&A...472..335L} found evidence of a brightness enhancement near Venus' orbit, which was later confirmed by \cite{2013Sci...342..960J,2017Icar..288..172J} using {\emph{STEREO}}/SECCHI observations.
Finally, in spite of the lack of a theoretical prediction, a very faint circumsolar dust ring associated with Mercury was indirectly inferred from 6+ years of white-light observations \citep{2018ApJ...868...74S} obtained with the {\emph{STEREO}}-Ahead/HI-1 instrument. In all the observational cases mentioned above, only particular viewing geometries allowed the detection of just a small portion of the dust rings. The {\emph{Helios}} measurements were carried out with the $90^\circ$ photometer of the Zodiacal Light Experiment \citep[{\emph{ZLE}};][]{1975RF.....19..264L}, which looked perpendicular to the ecliptic plane. The observations reported a 2\% increase in brightness as {\emph{Helios}} crossed just outside of Venus' orbit \citep{2007A&A...472..335L}. On the other hand, the {\emph{STEREO}} observations were obtained with the SECCHI/HI-2 telescopes, which image the interplanetary medium about $\pm20^\circ$ above and below the ecliptic plane. In the latter case, the brightness enhancements were detected only when the viewing geometry was tangent to the orbit of Venus. The findings were interpreted, via theoretical modeling, as due to the presence of a resonant dust ring slightly beyond Venus' orbit \citep{2013Sci...342..960J,2017Icar..288..172J}. However, in a more recent work, the dust environment near Venus' orbit was modeled by following the orbital paths of more than 10,000,000 dust particles of different provenance under the influence of gravitational and non-gravitational forces \citep{2019ApJ...873L..16P}. According to this model, a hypothetical population of dust particles released by Venus co-orbital asteroids could be stable enough to produce the signal needed to match the observations. So far, twilight telescopic surveys have not found any long-term stable Venus co-orbital asteroids \citep{2020PSJ.....1...47P}; however, their existence cannot be ruled out.
At visible wavelengths, the high density and scattering properties of the dust particles in the ZC \citep[{\emph{e.g.}},][]{1986A&A...163..269L} make it difficult to detect localized density structures embedded in it from 1~AU. However, as shown in \cite{2021ApJ...910..157S}, the {\emph{PSP}} mission, traveling through regions never before visited by any man-made probe, allows the comprehensive visualization of discrete dust density enhancements in the ZDC. As with other white-light heliospheric imagers, the scene recorded in WISPR observations is dominated by the ZL \citep[or F-corona close to the Sun; see, {\emph{e.g.}},][]{2019Natur.576..232H}. To reveal discrete, stationary F-corona features in the FOV of the WISPR instrument, it is necessary to estimate the F-corona background component (for its subsequent removal from the images) with images where the stationary feature is present at a different location in the FOV. By exploiting the different rolls while the S/C was between 0.5 and 0.25~AU, \cite{2021ApJ...910..157S} revealed the first comprehensive white-light observation of a brightness enhancement across a $345^\circ$ longitudinal extension along Venus' orbit. \begin{figure} \includegraphics[scale=0.33, clip=true, trim=0.0cm -0.5cm 0.0cm 0.0cm]{Fig_Ring.png} \caption{Combined WISPR observations of a circumsolar dust ring near Venus' orbit on 25 Aug. 2019. Images are projected onto the surface of a sphere with the observer ({\emph{PSP}} S/C) at the center and radius equal to the heliocentric distance of the observer. The Sun is not to scale. The gray areas surrounding the bright point-like objects (Mercury, Venus, and Earth) are artifacts of the image processing due to the saturation caused by their excessive brightness. The odd oval-shaped object and its surrounding area are caused by reflections of the very bright Venus in the optics.
The red dots delineate Venus' orbital path, the dashed orange line the ecliptic, and the yellow dotted line the invariable plane. Figure adapted from \cite{2021ApJ...910..157S}.} \label{fig:DustRing} \end{figure} Fig.~\ref{fig:DustRing} shows a composite panorama of the Venusian dust ring in WISPR images acquired during the inbound segment of orbit 3 while the {\emph{PSP}} S/C was performing roll maneuvers \citep[as extracted from][]{2021ApJ...910..157S}. The study showed that the latitudinal extension of the brightness enhancement corresponds to a dust ring extending 0.043~AU $\pm$ 0.004~AU, co-spatial with Venus' orbital path. Likewise, the median excess brightness of the band w.r.t. the background (of about 1\%) was shown to correspond to a dust density enhancement relative to the local density of the ZC of about 10\%. Both the latitudinal extension and the density estimate are in general agreement with the findings of \cite{2007A&A...472..335L} and \cite{2013Sci...342..960J,2017Icar..288..172J}. The viewing geometry only allowed a measure of the inclination and projected height of the ring, not of its radial position or extent. Therefore, no detailed information on the distance of the dust ring from the orbit of Venus could be extracted. \subsubsection{Dust Trail of 3200 Phaethon} \label{Phaethon} Discovered in 1983 \citep{1983IAUC.3878....1G}, asteroid (3200) Phaethon is one of the most widely studied inner solar system minor bodies, by virtue of its 1.434-year orbit, its large size for a near-Earth object \citep[6~km in diameter,][]{2019P&SS..167....1T}, and a low 0.0196~AU Earth minimum orbit intersection distance (MOID) favorable to ground-based optical and radar observations \citep{1991ASSL..167...19J,2010AJ....140.1519J}.
Phaethon is recognized as the parent of the Geminid meteor shower and is associated with the Phaethon-Geminid meteoroid stream complex, including likely relationships with asteroids 2005 UD and 1999 YC \citep[{\emph{e.g.}},][]{1983IAUC.3881....1W,1989A&A...225..533G,1993MNRAS.262..231W,2006A&A...450L..25O,2008M&PSA..43.5055O}. Due to its small 0.14~AU perihelion distance, observations of Phaethon near the Sun are impossible from traditional ground-based telescopes. The first detections of Phaethon at perihelion were made by {\emph{STEREO}}/SECCHI \citep{2013ApJ...771L..36J}. While Phaethon is active near perihelion and experiences an intense impact environment near the Sun, the mass-loss rates from cometary-like activity \citep{2013ApJ...771L..36J} and impact ejecta \citep{2019P&SS..165..194S} were both found to be orders of magnitude too low to sustain the Geminids. \begin{figure}[ht!] \centering \includegraphics[scale=0.35]{FigPhaethon.pdf} \caption{WISPR-i observations recorded on 5 Nov. 2018, 14:14~UT, 6 Nov. 1:43~UT, and 6 Nov. 14:54~UT. Plotted symbols indicate the projected position of Phaethon along its orbit in 60-minute increments, both pre-perihelion (blue) and post-perihelion (white). Symbols are excluded in the region where the trail is most easily visible. The white diamond indicates the perihelion position of the orbit in the FOV. Figure adapted from \cite{2020ApJS..246...64B}. \label{fig:trail-evolution}} \end{figure} As presented in \cite{2020ApJS..246...64B}, an unexpected white-light feature revealed in the WISPR background-corrected data was the presence of a faint extended dust trail following the orbit of Phaethon. In Fig.~\ref{fig:trail-evolution}, we show three WISPR-i telescope observations that highlight the most visible portion of this dust trail as detected during {\emph{PSP}} Enc.~1, which is seen to follow the projection of Phaethon's orbital path precisely.
Despite the dust trail being close to the instrument noise floor, the mean brightness along the trail was determined to be 8.2$\times10^{-15} B_\odot$ (where $B_\odot$ is the mean solar brightness), which equates to a visual magnitude of 15.8$\pm$0.3 per pixel. This result, coupled with the 70~arcsec per pixel resolution of WISPR-i, yields an estimated surface brightness of 25.0~mag~arcsec$^{-2}$ for the dust trail, which in turn is shown to yield a total mass of dust in the entire trail of $\sim(0.4-1.3){\times}10^{12}$~kg. This mass estimate is inconsistent with dust released by Phaethon at perihelion, but is plausibly in line with (though slightly below) mass estimates of the Geminids. The difference is attributed primarily to the faintness of the detection. This detection highlights the remarkable sensitivity of WISPR to white-light dust structures. Recent ground- and space-based surveys have failed to detect a dust trail in the orbit of 3200 Phaethon \citep{2018ApJ...864L...9Y}. The WISPR observations explain this: when factors such as heliocentric distance and orbital spreading/clustering of the dust are considered, the surface brightness of the trail as seen from a terrestrial viewpoint is fainter than 30~mag~arcsec$^{-2}$, which constitutes an extremely challenging target even for deep-sky surveys. The Phaethon dust trail continues to be clearly observed in the WISPR data in every {\emph{PSP}} orbit and remains under continued investigation. The dust trail of comet 2P/Encke is also quite clearly visible in the WISPR data, again highlighting the instrument's ability to detect faint dust features. The inner solar system is rich with fragmenting comets and comet families, yielding the potential for the discovery of additional dust features as the mission orbit evolves.
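The conversion from the quoted per-pixel magnitude to the quoted surface brightness can be checked in a few lines; this is a simple consistency check using the numbers given above, not the full photometric analysis of \citet{2020ApJS..246...64B}:

```python
import math

# Consistency check of the WISPR trail surface brightness: convert a
# per-pixel visual magnitude to mag/arcsec^2 using the ~70 arcsec
# per pixel WISPR-i plate scale quoted in the text.

def per_pixel_to_surface_brightness(m_pixel, pixel_scale_arcsec):
    """Surface brightness (mag/arcsec^2) from magnitude per pixel.
    Spreading one pixel's flux over its area dims the per-arcsec^2
    brightness by 2.5*log10(area)."""
    pixel_area = pixel_scale_arcsec ** 2      # arcsec^2 per pixel
    return m_pixel + 2.5 * math.log10(pixel_area)

mu = per_pixel_to_surface_brightness(15.8, 70.0)  # ~25.0 mag/arcsec^2
```

With the quoted 15.8~mag per pixel and the 70~arcsec pixel scale, this returns $\approx25.0$~mag~arcsec$^{-2}$, matching the value given above.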
\subsubsection{Mass Loading of the Solar Wind by Charged Interplanetary Dust} If charged dust grains reach sufficient density, they are theoretically capable of affecting solar wind plasma dynamics, primarily through mass-loading the wind \citep[{\emph{e.g.}},][]{Rasca2014a}. As the solar wind flows over charged dust grains, the Lorentz force acts to accelerate these grains up to the solar wind velocity. The resulting momentum exchange can slow the solar wind and distort solar wind magnetic fields \citep{Lai2015}. In practice, a density of dust grains with a sufficiently large charge-to-mass ratio high enough to distort the solar wind flow is most likely to be found near localized dust sources, like comets \citep{Rasca2014b}. The {\emph{PSP}} data so far have yielded one such potential comet-solar wind interaction, and a study of this event was inconclusive with regard to whether mass loading created an observable impact on the solar wind \citep{He2021_comet}. \subsection{Summary of Dust Observations and Future Prospects for {\emph{PSP}} Dust Measurements} Summarizing our understanding of the inner heliosphere's dust environment after four years of {\emph{PSP}} dust data: \begin{itemize} \item[1.] Impact rates from the first six orbits are produced by three dust sources: \ams on bound elliptic orbits, \bms on unbound hyperbolic orbits, and a third dust source likely related to meteoroid streams. \item[2.] The flux of \bms varies by at least 50\% on year-long timescales. \item[3.] Directionality analysis and data-model comparisons suggest the third source detected during {\emph{PSP}}'s first six orbits is a \bsn. \item[4.] A zodiacal erosion rate of at least $\sim100$~kg s$^{-1}$ is consistent with observed impact rates. \item[5.] The flux of \bms at 1~AU is estimated to be in the range of $0.4-0.8 \times 10^{-4}$ m$^{-2}$ s$^{-1}$. \item[6.] The majority of zodiacal collisions producing \bms occur in a region from $\sim10-20~\rs$. \item[7.]
If the inner source of pickup ions is due to dust, it must be from nanograins with radii $\lesssim50$~nm. \item[8.] The zodiacal dust density is expected to maintain a constant value in the range of $10-19~\rs$. \item[9.] A dust ring along the orbit of Venus has been directly observed. \item[10.] Multiple meteoroid streams have been directly observed, including the Geminids meteoroid stream. \end{itemize} There are a number of ongoing and newly opened questions in the {\emph{PSP}} era of ZC exploration. For example, it is not yet determined why the FIELDS dust count rate rises within each orbital group. Increases among orbital groups are expected because, as the S/C moves closer to the Sun, its relative velocity with respect to zodiacal dust populations increases and the zodiacal dust density increases closer to the Sun \citep{2020ApJS..246...27S}. While this effect is observed, it is also observed \citep{Pusack2021, szalay:21a} that successive orbits with the same perihelion distance show increasing dust count rates ({\emph{e.g.}}, higher dust count rates on orbit 5 compared to orbit 4). Additionally, can FIELDS dust detections be used to differentiate between existing theories for the generation of voltage spikes by impact-generated plasma? {\emph{PSP}} traverses a wide range of thermal plasma, photon flux, magnetic field, and dust impact velocity conditions, enabling new tests of impact-plasma behavior as a function of these parameters. The WISPR remote measurements have provided an unparalleled look at the dust environment within a few tens of $\rs$. Upcoming orbits will reveal whether the DFZ indicated by WISPR data \citep{2019Natur.576..232H, 2021A&A...650A..28S} will be directly observable {\emph{in~situ}} with FIELDS, and whether the trend observed by WISPR for larger grains holds in the micron-sized regime.
While {\emph{PSP}} will directly transit the region of constant radial density profile inferred by WISPR in the range of $10-19~\rs$, the decrease of this profile towards a DFZ occurs inside 10~$\rs$, where {\emph{PSP}} will not transit. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{beta_fluxes_v0.pdf} \caption{\bmm fluxes observed by multiple S/C and detection schemes. \label{fig:beta_fluxes}} \end{figure} Finally, {\emph{PSP}}'s long mission duration will enable it to be a long-term observation platform for \bmm fluxes inside 1~AU. The flux of \bms directly encodes the collisional erosion occurring in the inner heliosphere; therefore, a determination of their flux provides an important window into the dynamics and evolution of the ZC. Furthermore, \bms constitute the largest impactor source by number flux at 1~AU and may be responsible for sustaining a significant portion of the Moon's impact ejecta cloud \citep{2020ApJ...890L..11S}. Hence, \bms may play a more important role in the space weathering of airless bodies than previously considered, and constraining their fluxes and time variations can provide key insight into the space weathering of airless bodies that transit inside 1~AU. Fig.~\ref{fig:beta_fluxes} highlights the multiple \bmm flux estimates from dedicated dust instruments onboard {\emph{Pioneers}} 8 \& 9 \citep{1973spre.conf.1047B} and {\emph{Ulysses}} \citep{1999A&A...341..296W,2004A&A...419.1169W}, as well as electric field-based observations from {\emph{STEREO}} \citep{2012JGRA..117.5102Z}, {\emph{SolO}} \citep{2021A&A...656A..30Z}, and {\emph{PSP}} \citep{2020ApJS..246...27S,szalay:21a,2021A&A...650A..29M}. As shown in this figure, the dedicated dust observations indicated a higher flux of \bms than the more recent estimates derived from electric field observations taken decades later.
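To give a feel for what the quoted $\beta$-meteoroid number fluxes imply for an impact detector, a back-of-the-envelope count-rate estimate follows; the $\sim$6~m$^2$ effective collecting area is an illustrative assumption, not a published instrument value:

```python
def impacts_per_day(flux_per_m2_s, area_m2):
    """Expected impact count per day: flux x collecting area x 86400 s."""
    return flux_per_m2_s * area_m2 * 86400.0

# beta-meteoroid flux range at 1 AU quoted in the text
flux_lo, flux_hi = 0.4e-4, 0.8e-4  # m^-2 s^-1
area = 6.0  # m^2, assumed effective S/C cross-section (hypothetical)

print(impacts_per_day(flux_lo, area), impacts_per_day(flux_hi, area))
# -> roughly 20 to 40 impacts per day at 1 AU
```

Such day-scale counting statistics are why long-duration monitoring, rather than any single orbit, is needed to resolve the $\geq$50\% year-scale flux variations noted above.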
The extent to which the flux of \bms varies over time is a quantity {\emph{PSP}} will be uniquely poised to answer in its many years of upcoming operations. \section{Venus} \label{PSPVENUS} Putting {\emph{PSP}} into an orbit that reaches within 10~R$_{\odot}$ of the Sun requires a series of VGA flybys to push the orbital perihelion closer and closer to the Sun. A total of seven such flybys are planned, five of which have already occurred as of this writing. These visits to Venus naturally provide an opportunity for {\emph{PSP}} to study Venus and its interactions with the solar wind. In this section, we review results of observations made during these flybys. Direct images of Venus have been obtained by the WISPR imagers on board {\emph{PSP}}. The first attempt to image Venus with WISPR was during the third flyby (VGA3) on 11 Jul. 2020. The dayside of Venus is much too bright for WISPR to image. With no shutter mechanism, the effective minimum exposure time with WISPR is the image readout time of about 2~s, much too long for Venus to be anything other than highly overexposed in WISPR images made close to the planet. Furthermore, the VGA3 sequence of images demonstrated that if any part of dayside Venus is in the FOV, not only is the planet highly overexposed, but scattered-light artifacts contaminate the entire image. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{Wood2022Fig1.png} \caption{(a) WISPR-i image of the nightside of Venus from VGA3, showing thermal emission from the surface on the disk and O$_2$ nightglow emission at the limb. (b) Topographical map from Magellan, using an inverse black and white scale to match the WISPR image, with bright regions being low elevation and dark regions being high elevation. (c) WISPR-i and -o images of Venus from VGA4. The same part of the Venusian surface is observed as in (a). Red numbers in all panels mark common features for ease of reference.
Figure adapted from \citet{2022GeoRL..4996302W}.} \label{Wood2022Fig1} \end{figure*} Fortunately, a couple of VGA3 images contained only nightside Venus in the FOV, and these images proved surprisingly revelatory. One of these images is shown in Fig.~\ref{Wood2022Fig1}a \citep{2022GeoRL..4996302W}. Structure is clearly seen on the disk. Furthermore, comparison with a topographical map of Venus from the Magellan mission (see Fig.~\ref{Wood2022Fig1}b) makes it clear that we are actually seeing the surface of the planet. This was unexpected, as the surface of Venus had never before been imaged at optical wavelengths. Viewing the planetary surface is impossible on the dayside due to the blinding presence of sunlight scattered by the very thick Venusian atmosphere. However, on the nightside there are windows in the near infrared (NIR) where the surface of the planet had been imaged before, particularly by the {\emph{Venus Express}} \citep[{\emph{VEX}};][]{2006CosRe..44..334T} and {\emph{AKATSUKI}} \citep{2011JIEEJ.131..220N} missions \citep{2008GeoRL..3511201H,nm08,ni18}. This is not reflected light but thermal emission from the surface, which remains at about 735~K even on the nightside of Venus. The WISPR imagers are sensitive enough to detect this thermal emission within their optical bandpass. Because surface temperature decreases with altitude on Venus, as it does on Earth, dark areas in the {\emph{PSP}}/WISPR images correspond to cooler highland areas while bright areas correspond to hotter lowland regions. The dark oval-shaped region dominating the WISPR image near the equator is the Ovda Regio plateau at the western end of Aphrodite Terra, the largest highland region on Venus. In addition to the thermal emission from the disk of the planet, a bright rim of emission is seen at the limb of the planet.
This is O$_2$ nightglow emission from the upper atmosphere of the planet, which had been observed by previous missions, particularly {\emph{VEX}} \citep{2009JGRE..11412002G,2013GeoRL..40.2539G}. This emission is believed to be excited by winds of material flowing in the upper atmosphere from the dayside to the nightside. The experience with the VGA3 images allowed for better planning for VGA4, and during this fourth flyby on 21 Feb. 2021 a much more extensive series of images was taken of the Venusian nightside, using both the WISPR-i and WISPR-o imagers. Fig.~\ref{Wood2022Fig1}c shows a view from VGA4, combining the WISPR-i and WISPR-o images. It so happened that VGA4 occurred with essentially the same part of Venus on the nightside as VGA3, so the VGA3 and VGA4 images in Figs.~\ref{Wood2022Fig1}a and \ref{Wood2022Fig1}c are of roughly the same part of the planet, with the Ovda Regio area dominating both. An initial analysis of the WISPR images has been presented by \citet{2022GeoRL..4996302W}. A model spectrum of the surface thermal emission was computed and propagated through a model Venusian atmosphere. This model, assuming a 735~K surface temperature, was able to reproduce the count rates observed by WISPR. A long-term goal will be to compare the WISPR observations with NIR images. Ratios of the two could potentially provide a diagnostic for surface composition. However, before such mineralogy can be done, a more detailed analysis of the WISPR data must be performed to correct the images for scattered light, disk O$_2$ nightglow, and the effects of spatially variable atmospheric opacity. Finally, additional WISPR observations should be coming in the future. Although the Enc. geometry of VGA5 was not favorable for nightside imaging, and VGA6 will likewise be unfavorable, the final flyby (VGA7) on 6 Nov. 2024 should provide an opportunity for new images to be made. Furthermore, during VGA7 we will be viewing the side of Venus not observed in VGA3 and VGA4.
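The marginal detectability of 735~K thermal emission in an optical bandpass can be gauged directly from the Planck function: at these temperatures the emission rises extremely steeply toward the red end of the band. A minimal sketch (the 600 and 800~nm wavelengths are chosen for illustration, not as WISPR bandpass edges):

```python
import math

# physical constants (SI)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda(T), in W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2.0 * H * C**2 / wavelength_m**5 / math.expm1(x)

T = 735.0  # nightside surface temperature quoted in the text, K
ratio = planck(800e-9, T) / planck(600e-9, T)
print(ratio)  # hundreds of times brighter at 800 nm than at 600 nm
```

This steep Wien-tail slope is why the detected signal is dominated by the long-wavelength edge of the imagers' sensitivity, while the bulk of the 735~K emission lies in the NIR windows used by {\emph{VEX}} and {\emph{AKATSUKI}}.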
{\emph{PSP}} also made extensive particle and fields measurements during the Venus flybys. Such measurements are rare at Venus, particularly high cadence electric and magnetic field measurements \citep{Futaana2017}. Therefore, {\emph{PSP}} data recorded near Venus have the potential to yield new physical insights. Several studies examined the interaction between the induced magnetosphere of Venus and the solar wind. \citet{2021GeoRL..4890783B} explored kinetic-scale turbulence in the Venusian magnetosheath, quantifying properties of the shock and demonstrating developed sub-ion kinetic turbulence. \citet{Malaspina2020_Venus} identified kinetic-scale electric field structures in the Venusian bow shock, including electron phase space holes and double layers. The occurrence rate of double layers was suggested to be greater than at Earth's bow shock, hinting at a potentially significant difference in the kinetic properties of bow shocks at induced magnetospheres versus intrinsic magnetospheres. \citet{Goodrich2020} identified subproton-scale magnetic holes in the Venusian magnetosheath, one of the few observations of such structures beyond Earth's magnetosphere. Other studies used the closest portions of the flybys to examine the structure and properties of the Venusian ionosphere. \citet{Collinson2021} examined the ionospheric density at 1,100~km altitude, demonstrating consistency with solar cycle predictions for ionospheric variability. \citet{2022GeoRL..4996485C} used {\emph{PSP}} observations of cold plasma filaments extending from the Venus ionosphere (tail rays) to reconcile previously inconsistent observations of tail rays by {\emph{Pioneer}}~12 (also named the Pioneer Venus Orbiter) and {\emph{VEX}}. Finally, \citet{Pulupa2021} examined radio frequency data recorded during {\emph{PSP}} Venus flybys, searching for evidence of lightning-generated plasma waves. No such waves were found, supporting results from earlier Cassini flyby observations \citep{2001Natur.409..313G}.
\section{Summary and Conclusions} \label{SUMCONC} {\emph{PSP}} has completed 13 of its 24 scheduled orbits around the Sun over a 7-year nominal mission duration. The S/C flew by Venus for the fifth time on 16 Oct. 2021, followed by the closest perihelion to date of $13.28~R_\odot$. Generally, the S/C has performed well within expectations. The science data returned is a true treasure trove, revealing new aspects of the young solar wind and phenomena that we did not know much about. The following is a summary of the findings of the {\emph{PSP}} mission during its four years of operations; we refer the readers to the corresponding sections for more details. \paragraph{\textbf{Switchbacks ---}} The magnetic field switchbacks observed by {\emph{PSP}} are a fundamental phenomenon of the young solar wind. SBs show an impressive effect: they turn the ambient slow solar wind into fast wind for the duration of the crossing without changing the connection to the source. These structures are Alfv\'enic, show little change in density, and display a slight preference to deflect in the tangential direction. The duration of an observed switchback is related to how the S/C crosses the structure, which is in turn associated with the deflection, dimensions, and orientation of the structure and with the S/C velocity. Most studies implied that these structures are long and thin along the flow direction. SB patches show local modulation in the alpha-particle fraction observed in~situ, which could be a direct signature of spatial modulation in solar sources. They also show large-scale heating of protons in the direction parallel to the magnetic field, indicating preferential heating of the plasma inside the switchbacks. Observations provide a clue that switchbacks might be relevant to the heating and acceleration of the solar wind. It is therefore essential to understand their generation and propagation mechanisms.
Some aspects of these features point toward ex-situ processes ({\emph{e.g.}}, interchange reconnection and other solar-surface processes) and others toward in-situ mechanisms (covering stream interactions, AWs, and turbulence) in which switchbacks result from processes within the solar wind as it propagates outwards. The various flavors of interchange-reconnection-based models have several attractive features, in particular their natural explanation of the likely preferred tangential deflections of large switchbacks, the bulk tangential flow, and the possible observed temperature enhancements. However, some important features remain unclear, such as the Alfv\'enicity of the structures and how they evolve as they propagate to {\emph{PSP}} altitudes. AW models, on the other hand, naturally recover the Alfv\'enicity and radial elongation of switchbacks seen in {\emph{PSP}} observations, but struggle with some other features. In particular, it remains unclear whether such models can recover the preferred tangential deflections of large switchbacks, and they also struggle to reproduce the high switchback fractions observed by {\emph{PSP}}. When a radially stratified environment is considered in AW models, studies showed that before propagating any significant distance, a switchback will have deformed significantly, either changing shape or unfolding depending on the background profile. This blurs the line between ex-situ and in-situ formation scenarios. There are interrelationships among, and the coexistence of, different mechanisms in some of the proposed models; moving forward, we must keep all the models in mind as we attempt to distinguish observationally between the different mechanisms. Further understanding of switchback formation will require constant collaboration between observers and theorists. \paragraph{\textbf{Solar Wind Sources ---}} A central question in heliophysics is connecting the solar wind to its sources.
A broad range of coronal and heliospheric modeling efforts have supported all the {\emph{PSP}} Encs. {\emph{PSP}} has mainly observed only slow solar wind, with a few exceptions. The first Enc. proved unique, with all models pointing to a distinct equatorial coronal hole at perihelion as the dominant solar wind source. The flow was predominantly slow and highly Alfv\'enic. During the subsequent Encs., the S/C was connected to polar coronal hole boundaries and a flatter HCS. However, what has been a surprise is that the slow solar wind streams were seen to have turbulence and fluctuation properties, including the presence of SBs, typical of the Alfv\'enic fluctuations usually associated with HSSs. That slow wind interval appeared to have much the same characteristics as the fast wind, including the presence of predominantly outward Alfv\'enic fluctuations, except for the overall speed. The consensus is that the slow Alfv\'enic solar wind observed by {\emph{PSP}} originates from coronal holes or coronal hole boundaries. It is still unclear how the Alfv\'enic slow wind emerges: does it always arise from small isolated coronal holes with large expansion factors within the subsonic/supersonic critical point, or is it born at the boundaries of large, polar coronal holes? There is, however, one possible implication of the overall high Alfv\'enicity observed by {\emph{PSP}} in the deep inner heliosphere: all solar wind might be born Alfv\'enic, or rather, Alfv\'enic fluctuations might be a universal initial condition of solar wind outflow. Whether this is borne out by {\emph{PSP}} measurements closer to the Sun remains to be seen. Quiet periods typically separate the SB-dominated patches. These quiet periods are at odds with theories relating slow wind formation to continual reconfiguration of the coronal magnetic field lines due to footpoint exchange, which should continually drive strong wind variability \citep[{\emph{e.g.}},][]{1996JGR...10115547F}.
Another interesting finding from the {\emph{PSP}} data is that the well-known open flux problem persists down to 0.13~AU, suggesting that there exist solar wind sources which are not yet captured accurately by modeling. \paragraph{\textbf{Kinetic Physics ---}} {\emph{PSP}} measurements show interesting kinetic physics phenomena. The plasma data reveal the prevalence of electromagnetic ion-scale waves in the inner heliosphere for the first time. The statistical analysis of these waves shows that a near-radial magnetic field is favorable for their observation and that they mainly propagate anti-sunward. {\emph{PSP}} observed for the first time a series of proton beams with so-called hammerhead velocity distributions, that is, VDFs excessively broadened in the direction perpendicular to the mean magnetic field vector. These distributions coincide with intense, circularly polarized FM/W waves. These findings suggest that the hammerhead distributions arise when field-aligned proton beams excite FM/W waves and subsequently scatter off these waves. {\emph{PSP}} waveform data have also provided the first definitive evidence of sunward-propagating whistler-mode waves. This is an important discovery because sunward-propagating waves can interact with the anti-sunward-propagating strahl only if the wave vector is parallel to the background magnetic field. \paragraph{\textbf{Turbulence ---}} Turbulence often refers to the energy cascade process that describes the energy transfer across scales. In solar wind turbulence, the energy is presumably injected at a very large scale ({\emph{e.g.}}, with a period of a few days). It then cascades down to smaller scales until it dissipates at scales near the ion and electron scales. The intermediate range between the injection scale and the dissipation (or kinetic) range is known as the inertial range.
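In practice, these ranges are identified in spacecraft-frame frequency spectra and mapped to spatial scales with the frozen-in-flow (Taylor) hypothesis; a minimal sketch of that mapping, with illustrative numbers rather than {\emph{PSP}} measurements:

```python
def taylor_scale_km(freq_hz, v_sw_km_s):
    """Frozen-in-flow (Taylor) hypothesis: a fluctuation seen at
    spacecraft-frame frequency f is assumed to be advected past the
    S/C by the wind, so its spatial scale is l = V_sw / f."""
    return v_sw_km_s / freq_hz

# illustrative: a 0.1 Hz fluctuation in a 300 km/s slow wind
print(taylor_scale_km(0.1, 300.0))  # -> 3000.0 km

def taylor_valid(v_sw_km_s, v_alfven_km_s, tol=0.3):
    """Rough validity check: the hypothesis assumes fluctuations
    propagate (at ~V_A in the plasma frame) much more slowly than
    they are advected, i.e. V_A << V_sw. The 0.3 threshold here is
    an arbitrary illustrative choice."""
    return v_alfven_km_s / v_sw_km_s < tol
```

Near the Sun, where the Alfv\'en and wind speeds become comparable, this simple mapping fails, which is precisely the interpretive difficulty discussed below.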
The {\emph{PSP}} observations shed light on the properties of the turbulence at various scales ({\emph{i.e.}}, the outer scale, inertial-range scales, and kinetic scales) at the closest distances to the Sun. This includes the sub-Alfv\'enic region, where the solar wind speed becomes smaller than the typical Alfv\'en speed. Several recent studies using {\emph{PSP}} data reveal the significance of solar wind turbulence for the overall heating and acceleration of the solar wind plasma. For instance, magnetic field switchbacks are associated with turbulent structures, which mainly follow the field's kink. Turbulence features such as the intermittency, the Alfv\'enicity, and the compressibility have also been investigated. Overall, the data show that solar wind turbulence is mostly highly Alfv\'enic, with a low degree of compressibility even in the slow solar wind. Other studies used {\emph{PSP}} measurements to examine the typical plasma scale at which the energy spectrum breaks. However, it remains challenging to interpret the appropriate plasma scales corresponding to the empirical timescales using the standard frozen-in-flow Taylor hypothesis, as the solar wind speed and the local Alfv\'en speed become comparable. \paragraph{\textbf{Large Scale ---}} Due to its low heliographic latitude orbit, {\emph{PSP}} crossed the HCS multiple times in each Enc. and observed many LFRs and SFRs. The observed locations of HCS crossings were compared with PFSS model predictions. An irregular source surface with a variable radius is utilized to minimize the timing and location differences. The internal structure of the HCS near the Sun is very complex, comprising structures with magnetic field magnitude depressions, increased solar wind proton bulk speeds, and associated suprathermal electron strahl dropouts, likely indicating magnetic disconnections.
In addition, small flux ropes were identified inside or just outside the HCS, often associated with plasma jets indicating recent magnetic reconnection. {\emph{PSP}} measurements also show that, despite being the site of frequent magnetic reconnection, the near-Sun HCS is much thicker than expected. HCS observations at 1~AU reveal significantly different magnetic and plasma signatures, implying that the near-Sun HCS is the location of active evolution of its internal structures. In addition, our knowledge of the transition from CME to ICME has been limited to in-situ data collected at 1~AU and remote-sensing observations from space-based observatories. {\emph{PSP}} provides a unique opportunity to link both views by providing valuable information that will allow us to identify evidence of the early transition from CME to ICME. {\emph{PSP}} has also observed a multitude of events, both large- and small-scale, connected to flux ropes. For instance, at least one SBO event showed a flux rope characterized by changes that deviated from the expected smooth change in the magnetic field direction (flux rope-like configuration), low proton plasma beta, and a drop in the proton temperature. {\emph{PSP}} also observed a significant number of SFRs. Several tens of SFRs were analyzed, suggesting that SFRs are primarily found in the slow solar wind and that their possible source is MHD turbulence. Other SFRs seem to be the result of magnetic reconnection. From the WISPR imaging data, the most striking features (in addition to CMEs) are the small-scale features observed when the S/C crosses the HCS. The imaging of the young solar wind plasma is revealing. The internal structure of CMEs is observed in ways not accessible before the {\emph{PSP}} era. Also, features such as the fine structure of coronal streamers indicate the highly dynamic nature of the solar wind close to the Sun.
An excellent example of the features identified by WISPR is the bright and narrow streamer rays located at the core of the streamer belt. \paragraph{\textbf{Radio Emissions and Energetic Particles ---}} The first four years of the {\emph{PSP}} mission enabled an essential understanding of the variability of solar radio emissions and provided critical insights into the acceleration and transport of energetic particles in the inner heliosphere. {\emph{PSP}} observed many solar radio emissions, SEP events, CMEs, CIRs and SIRs, inner heliospheric ACRs, and energetic electron events, which are critical to exploring the fundamental physics of particle acceleration and transport in the near-Sun environment. The {\emph{PSP}}/FIELDS RFS measures electric fields from 10 kHz to 19.2 MHz, enabling radio observations. During the first four Encs., only Enc.~2 featured multiple strong type III radio bursts and a type III storm. As solar activity began rising with Enc.~5 and beyond, the occurrence of radio bursts has also increased. The {\emph{PSP}} radio measurements enabled several critical studies, {\emph{e.g.}}: (1) Searching for evidence of heating of the corona by small-scale nanoflares; (2) Measuring the circular polarization near the start of several type III bursts in Enc.~2; (3) Characterizing the decay times of type III radio bursts up to 10 MHz, observing increased decay times above 1 MHz compared to extrapolation using previous measurements from {\emph{STEREO}}; (4) Finding evidence for emission generated via the electron cyclotron maser instability over the several-MHz frequency range corresponding to solar distances where $f_{ce}>f_{pe}$; and (5) Determining the directivity of individual type III radio bursts using data from other missions, which previously was only possible through statistical analysis of large numbers of bursts. {\emph{PSP}} observed many SEP events from different sources ({\emph{e.g.}}, SBOs, jets, surges, CMEs, flares, etc.)
and with various properties that are key to characterizing the acceleration and transport of particles in the inner heliosphere. \citet{2021A&A...650A..23C} investigated the helium content of six SEP events from May to Jun. 2020, during the fifth orbit. At least three of these six events originated from the same AR, yet they have significantly different $^3$He/$^4$He and He/H ratios. In addition, \citet{2021A&A...650A..26C} found that the path length of these events greatly exceeded that of the Parker spiral, which they attributed to the random walk of magnetic field lines. Most of the CMEs observed by {\emph{PSP}} were slow and did not produce clear events at 1~AU. They nonetheless produced particle events that were observed by {\emph{PSP}} closer to the Sun. \citet{2020ApJS..246...29G} and \citet{2020ApJS..246...59M} reported on a particular CME event observed by {\emph{PSP}} shortly after the first perihelion pass that produced a significant enhancement in SEPs with energies below a few hundred keV/nuc, which also showed a clear velocity dispersion. The {\emph{PSP}} plasma measurements did not show any shock evidence, and the particle flux decayed before the CME crossed the S/C. Two different interpretations were proposed for this event. \citet{2020ApJS..246...29G} suggested diffusive transport of particles accelerated by the CME starting from about the time it was at 7.5~$R_\odot$, as observations suggest that very weak shocks, or even non-shock plasma compressions driven by a slow CME, are capable of accelerating particles. \citet{2020ApJS..246...59M} proposed an alternative based on the ``pressure cooker'' mechanism observed in the magnetosphere, where energetic particles are confined below the CME in the solar corona, in a region bounded by an electric potential above and strong magnetic fields below.
The highest-energy particles overcome this barrier earlier and arrive at the S/C earlier than the low-energy particles, which are presumably released much later, when the CME has erupted from the Sun. Another interesting aspect is that the ``pressure cooker'' mechanism produces a maximum energy that depends on the charge of the species. Although the event was relatively weak, there were sufficient counts of He, O, and Fe that, when combined with assumptions about the composition of these species in the corona, agreed with the observed high-energy cut-off as a function of particle species. SIRs/CIRs are known to be regions where energetic particles are accelerated. {\emph{PSP}} observations within 1~AU are therefore particularly well suited to disentangling these acceleration and transport effects, as the SIR/CIR-associated suprathermal to energetic ion populations are observed farther from the shock-associated acceleration sites, which usually lie beyond 1~AU. Many of these nascent SIRs/CIRs were associated with energetic particle enhancements offset from the SIR/CIR interface. At least one of these events showed evidence of local compressive acceleration, suggesting that CIR-associated acceleration does not always require shock waves. {\emph{PSP}} also observed ACRs with intensities increasing over energies from $\sim5$ to $\sim40$ MeV/nuc, a characteristic feature of ACR spectra. However, the observed radial gradient is stronger ($\sim25\pm5$\%~AU$^{-1}$) than that observed beyond 1~AU. Understanding the radial gradients of ACRs in the inner heliosphere provides constraints on drift transport and cross-field diffusion models. \paragraph{\textbf{Dust in the Inner Heliosphere ---}} The zodiacal dust cloud is one of the most significant structures in the heliosphere. To date, our understanding of the near-Sun dust environment has been built on both in-situ and remote measurements outside 0.3~AU.
{\emph{PSP}} provides the only in-situ measurements and remote-sensing observations of interplanetary dust in the near-Sun environment inside 0.3~AU. {\emph{PSP}} provides the total dust impact rate to the S/C. The FIELDS instrument detects perturbations to the S/C potential that result from transient plasma clouds formed when dust grains strike the S/C at high velocities, vaporizing and ionizing the impacting grain and some fraction of the S/C surface. Several features have been observed in the impact rate profiles. For the first three orbits, a single pre-perihelion peak was observed. The subsequent orbits are also marked by a second, post-perihelion peak. Between these two peaks, a local minimum in impact rate was present near perihelion. Comparing the {\emph{PSP}} data to a two-component analytic dust model shows that {\emph{PSP}}'s dust impact rates are consistent with at least three distinct populations: bound zodiacal $\alpha$-meteoroids on elliptic orbits; unbound $\beta$-meteoroids on hyperbolic orbits; and a distinct third population of impactors. The data-model comparison indicates that the $\beta$-meteoroids are predominantly produced within $10-20~R_\odot$ and are unlikely to be the inner source of pickup ions, instead suggesting that the population of trapped nanograins is likely this source. The post-perihelion peak is likely the result of {\emph{PSP}} flying through the collisional by-products produced when a meteoroid stream ({\emph{i.e.}}, the Geminids meteoroid stream) collides with the nominal zodiacal cloud. At about 19~$R_\odot$, WISPR white-light observations revealed a shallower increase in F-corona brightness compared to observations obtained between 0.3~AU and 1~AU. This marks the outer boundary of the DDZ. The radius of the DFZ itself is found to be about 4~$R_\odot$. The {\emph{PSP}} imaging observations confirm a nine-decade-old prediction of stellar DFZs by \citet{1929ApJ....69...49R}.
\paragraph{\textbf{Venus ---}} WISPR images of the Venusian nightside during VGAs 3 and 4 proved surprisingly revelatory, clearly showing structures on the disk. This was unexpected, as the surface of Venus had never before been imaged at optical wavelengths. The WISPR imagers are sensitive enough to detect the surface thermal emission within their optical bandpass. The WISPR images show the Ovda Regio plateau at the western end of Aphrodite Terra, the largest highland region on Venus. In addition to the thermal emission from the planet's disk, the data show O$_2$ nightglow emission from the planet's upper atmosphere, which previous missions had observed. Another important planetary discovery is that of the dust ring along the orbit of Venus (see \S\ref{VdustRing}). {\emph{PSP}} is over four years into its prime mission. It has uncovered numerous previously unknown phenomena, all during the minimum of the solar cycle, when the Sun is not very active. The activity level is rising as we approach the maximum of solar cycle 25, and we will undoubtedly discover other aspects of the solar corona and inner heliosphere. For instance, we long for the S/C to fly through many of the most violent solar explosions and tell us how particles are accelerated to extreme levels.
\section{List of Abbreviations} {\small \begin{longtable}{ p{.10\textwidth} p{.90\textwidth} } \multicolumn{2}{l}{\textbf{Space Agencies}} \\ ESA & European Space Agency \\ JAXA & Japan Aerospace Exploration Agency \\ NASA & National Aeronautics and Space Administration \\ \end{longtable} } {\small \begin{longtable}{ p{.15\textwidth} p{.5\textwidth} p{.35\textwidth} } \multicolumn{3}{l}{\textbf{Missions, Observatories, and Instruments}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endfirsthead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endhead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot {\emph{\textbf{ACE}}} & The Advanced Composition Explorer mission & \citet{1998SSRv...86....1S} \\ \hspace{0.15cm} {\emph{EPAM}} & The Electron, Proton, and Alpha Monitor instrument & \citet{1998SSRv...86..541G} \\ \hspace{0.15cm} {\emph{ULEIS}} & The Ultra Low Energy Isotope Spectrometer instrument & \citet{1998SSRv...86..409M} \\ {\emph{\textbf{AKATSUKI}}} & The AKATSUKI/PLANET-C mission & \citet{2011JIEEJ.131..220N} \\ {\emph{\textbf{ARTEMIS}}} & The Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun mission & \citet{2011SSRv..165....3A} \\ {\emph{\textbf{BepiColombo}}} & The BepiColombo mission & \citet{2021SSRv..217...90B} \\ {\emph{\textbf{Cluster}}} & The Cluster mission & \citet{1997SSRv...79...11E} \\ {\emph{\textbf{COBE}}} & The Cosmic Background Explorer mission & \citet{1992ApJ...397..420B} \\ \hspace{0.15cm} DIRBE & The Diffuse Infrared Background Experiment & \citet{1993SPIE.2019..180S} \\ {\emph{\textbf{Galileo}}} & The Galileo mission & \citet{1992SSRv...60....3J} \\ {\emph{\textbf{GOES}}} & The Geostationary Operational
Environmental Satellite program & \url{https://www.nasa.gov/content/goes-overview/index.html} \\ {\emph{\textbf{GONG}}} & The Global Oscillation Network Group & \citet{1988AdSpR...8k.117H} \\ {\emph{\textbf{Helios}}} & The Helios (1 \& 2) mission & \citet{Marsch1990} \\ \hspace{0.15cm} ZLE & The Zodiacal Light Experiment & \citet{1975RF.....19..264L} \\ {\emph{\textbf{HEOS-2}}} & The Highly Eccentric Orbit Satellite-2 & \url{https://nssdc.gsfc.nasa.gov/nmc/spacecraft/display.action?id=1972-005A} \\ {\emph{\textbf{IMP-8}}} & The Interplanetary Monitoring Platform-8 & \url{https://science.nasa.gov/missions/imp-8} \\ {\emph{\textbf{IRAS}}} & The Infrared Astronomical Satellite & \citet{1984ApJ...278L...1N} \\ {\emph{\textbf{ISEE}}} & The International Sun-Earth Explorer & \citet{1979NCimC...2..722D} \\ {\emph{\textbf{Mariner~2}}} & The Mariner~2 mission & \url{https://www.jpl.nasa.gov/missions/mariner-2} \\ {\emph{\textbf{MMS}}} & The Magnetospheric Multiscale mission & \citet{2014cosp...40E.433B} \\ {\emph{\textbf{NuSTAR}}} & The Nuclear Spectroscopic Telescope ARray & \citet{2013ApJ...770..103H} \\ {\emph{\textbf{Pioneer}}} & The Pioneer mission & \url{https://www.nasa.gov/centers/ames/missions/archive/pioneer.html} \\ {\emph{\textbf{PSP}}} & The Parker Solar Probe mission & \citet{2016SSRv..204....7F} \\ & & \citet{doi:10.1063/PT.3.5120} \\ \hspace{0.15cm} FIELDS & The FIELDS investigation & \citet{2016SSRv..204...49B} \\ \hspace{0.3cm} AEB & The Antenna Electronics Board & --- \\ \hspace{0.3cm} DFB & The Digital Field Board & --- \\ \hspace{0.3cm} HFR & The High Frequency Receiver & --- \\ \hspace{0.3cm} MAG(s) & The Fluxgate magnetometer(s) & --- \\ \hspace{0.3cm} RFS & The Radio Frequency Spectrometer & \citet{2017JGRA..122.2836P} \\ \hspace{0.3cm} SCM & The Search Coil Magnetometer & --- \\ \hspace{0.3cm} TDS & The Time Domain Sampler & --- \\ \hspace{0.15cm} SWEAP & The Solar Wind Electrons Alphas and Protons Investigation & \citet{2016SSRv..204..131K} \\ 
\hspace{0.3cm} SPAN & Solar Probe ANalyzers (A \& B) & \citet{2020ApJS..246...74W} \\ \hspace{0.3cm} SPAN-e & Solar Probe ANalyzers-electrons & \citet{2020ApJS..246...74W} \\ \hspace{0.3cm} SPAN-i & Solar Probe ANalyzers-ions & \citet{10.1002/essoar.10508651.1} \\ \hspace{0.3cm} SPC & Solar Probe Cup & \citet{2020ApJS..246...43C} \\ \hspace{0.3cm} SWEM & SWEAP Electronics Module & --- \\ \hspace{0.15cm} {\emph{IS$\odot$IS}} & The Integrated Science Investigation of the Sun & \citet{2016SSRv..204..187M} \\ \hspace{0.3cm} EPI-Hi & Energetic Particle Instrument-High & --- \\ \hspace{0.45cm} HET & High Energy Telescope & --- \\ \hspace{0.45cm} LET & Low Energy Telescope & --- \\ \hspace{0.3cm} EPI-Lo & Energetic Particle Instrument-Low & --- \\ \hspace{0.15cm} {\emph{WISPR}} & The Wide-field Imager for Solar PRobe & \citet{2016SSRv..204...83V} \\ \hspace{0.3cm} {\emph{WISPR-i}} & WISPR inner telescope & \citet{2016SSRv..204...83V} \\ \hspace{0.3cm} {\emph{WISPR-o}} & WISPR outer telescope & \citet{2016SSRv..204...83V} \\ \hspace{0.15cm} {\emph{TPS}} & The Thermal Protection System & --- \\ {\emph{\textbf{SDO}}} & The Solar Dynamics Observatory & \citet{2012SoPh..275....3P} \\ \hspace{0.15cm} AIA & The Atmospheric Imaging Assembly & \citet{2012SoPh..275...17L} \\ \hspace{0.15cm} HMI & The Helioseismic and Magnetic Imager & \citet{2012SoPh..275..207S} \\ {\emph{\textbf{SOHO}}} & The Solar and Heliospheric Observatory & \citet{1995SSRv...72...81D} \\ \hspace{0.15cm} {\emph{EPHIN}} & Electron Proton Helium INstrument & \citet[EPHIN;][]{1988sohi.rept...75K} \\ \hspace{0.15cm} {\emph{LASCO}} & Large Angle and Spectrometric COronagraph & \citet{1995SoPh..162..357B} \\ {\emph{\textbf{SolO}}} & The Solar Orbiter mission & \citet{2020AA...642A...1M} \\ {\emph{\textbf{STEREO}}} & The Solar TErrestrial RElations Observatory & \citet{2008SSRv..136....5K} \\ \hspace{0.15cm} SECCHI & Sun-Earth Connection Coronal and Heliospheric Investigation & \citet{2008SSRv..136...67H} \\ \hspace{0.3cm} 
COR2 & Coronagraph 2 & --- \\ \hspace{0.3cm} EUVI & Extreme Ultraviolet Imager & \citet{2004SPIE.5171..111W} \\ \hspace{0.3cm} HI & Heliospheric Imager (1 \& 2) & \citet{2009SoPh..254..387E} \\ {\emph{\textbf{Ulysses}}} & Ulysses & \citet{1992AAS...92..207W} \\ {\emph{\textbf{VEX}}} & Venus Express & \citet{2006CosRe..44..334T} \\ {\emph{\textbf{Voyager}}} & Voyager (1 \& 2) & \citet{1977SSRv...21...77K} \\ {\emph{\textbf{Wind}}} & Wind & \url{https://wind.nasa.gov} \\ \hspace{0.15cm} {\emph{WAVES}} & WAVES & \citet{1995SSRv...71..231B} \end{longtable} } {\small \begin{longtable}{ p{.15\textwidth} p{.4\textwidth} p{.35\textwidth} } \multicolumn{3}{l}{\textbf{Models}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endfirsthead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} & \multicolumn{1}{l}{\textbf{References}} \\ \endhead \multicolumn{3}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot 3DCORE & 3D Coronal Rope Ejection model & \citet{2021ApJS..252....9W} \\ ADAPT & Air Force Data Assimilative Photospheric Flux Transport model & \citet{2004JASTP..66.1295A} \\ CC & Circular Cylindrical model & --- \\ EC & Elliptical-Cylindrical model & --- \\ EUHFORIA & European Heliospheric FORecasting Information Asset & \citet{2018JSWSC...8A..35P} \\ GCS & Graduated Cylindrical Shell model & \citet{2011ApJS..194...33T} \\ HELCATS & Heliospheric Cataloguing, Analysis and Techniques Service model & \citet{2014AGUFMSH43B4214B} \\ HelMOD & Heliospheric Modulation model & \citet{2018AdSpR..62.2859B} \\ OSPREI & Open Solar Physics Rapid Ensemble Information model & \citet{2022SpWea..2002914K} \\ PARADISE & Particle Radiation Asset Directed at Interplanetary Space Exploration model & \citep{2019AA...622A..28W,2020AA...634A..82W} \\ PFSS & 
Potential-Field Source-Surface model & \citet{1969SoPh....9..131A,1969SoPh....6..442S} \\ PIC & Particle-In-Cell & --- \\ SSEF30 & The Self-Similar Expansion Fitting (30) model & \citet{2012ApJ...750...23D} \\ WSA & Wang-Sheeley-Arge (PFSS) model & \citet{2000JGR...10510465A} \\ WSA/THUX & WSA/Tunable HUX model & \citet{2020ApJ...891..165R} \end{longtable} } {\small \begin{longtable}{ p{.20\textwidth} p{.80\textwidth} } \multicolumn{2}{l}{\textbf{Acronyms and Symbols}} \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} \\ \endfirsthead \multicolumn{2}{p{\textwidth}}{ --- {\it{continued from previous page}} --- } \\ \multicolumn{1}{l}{\textbf{Acronym}} & \multicolumn{1}{l}{\textbf{Expanded Form}} \\ \endhead \multicolumn{2}{p{\textwidth}}{ --- {\it{continued on next page}} ---} \\ \endfoot \endlastfoot 1D & One-dimensional \\ 2D & Two-dimensional \\ 2PL & Two spectral range continuous power-law fit \\ 3D & Three-dimensional \\ 3PL & Three spectral range continuous power-law fit \\ ACR(s) & Anomalous cosmic ray(s) \\ ACW(s) & Alfv\'en ion cyclotron wave(s) \\ AR(s) & Active region(s) \\ AU & Astronomical unit \\ AW(s) & Alfv\'en wave(s) \\ CIR(s) & Corotating interaction region(s) \\ CME(s) & Coronal mass ejection(s) \\ cobpoint & ``Connecting with the OBserving'' point \\ CR & Carrington rotation \\ DC & Direct current \\ DDZ & Dust depletion zone \\ DFZ & Dust-free zone \\ dHT & de Hoffmann-Teller frame \\ DOY & Day of the year \\ ED & ``Either'' discontinuity \\ Enc. / Encs. 
& Encounter(s) \\ ES (waves) & Electrostatic waves \\ EUV & Extreme ultraviolet \\ $f_{ce}$ & Electron gyrofrequency or electron cyclotron frequency \\ $f_{cp}$ & Proton gyrofrequency or proton cyclotron frequency \\ $f_{pe}$ & Electron plasma frequency \\ $f_{LH}$ & Lower hybrid frequency \\ F-corona & Fraunhofer-corona \\ $F_\mathrm{A}$ & Alfv\'enic energy flux \\ $F_\mathrm{K}$ & Bulk kinetic energy flux \\ FITS & Flexible Image Transport System \\ FM/W & Fast-magnetosonic/whistler \\ FOV & Field of view \\ GCR(s) & Galactic cosmic ray(s) \\ HCI & Heliocentric inertial coordinate system \\ HCS & Heliospheric current sheet \\ HEE & Heliocentric Earth Ecliptic system \\ HEEQ & Heliocentric Earth Equatorial system \\ HFR & High frequency receiver \\ HHT & Hilbert-Huang transform \\ HPC & Helioprojective Cartesian system \\ HPS & Heliospheric plasma sheet \\ HSO & Heliophysics System Observatory \\ HSS(s) & High-speed stream(s) \\ ICME(s) & Interplanetary coronal mass ejection(s) \\ ICW & Ion cyclotron wave \\ ID(s) & Interplanetary discontinuity(ies) \\ KAW(s) & Kinetic Alfv\'en wave(s) \\ LFR(s) & Large-scale flux rope(s) \\ LTE & Local thermal equilibrium \\ LOS & Line of sight \\ $M_{A}$ & Alfv\'enic Mach number \\ MAG(s) & Fluxgate magnetometer(s) \\ MC(s) & Magnetic cloud(s) \\ MFR(s) & Magnetic flux rope(s) \\ MHD & Magneto-HydroDynamic \\ MOID & Earth Minimum Orbit Intersection Distance \\ MVA & Minimum variance analysis \\ ND & ``Neither'' discontinuity \\ NIR & Near Infrared \\ PAD(s) & Pitch angle distribution(s) \\ PDF(s) & Probability distribution function(s) \\ PIC & Particle-in-cell \\ PIL(s) & Polarity inversion line(s) \\ PVI & Partial variance of increments \\ QTN & Quasi-thermal noise \\ RD(s) & Rotational discontinuity(ies) \\ RLO & Reconnection/Loop-Opening \\ RTN & Radial-Tangential-Normal frame \\ $R_\odot$ & Solar radius \\ SBO(s) & Streamer blowout(s) \\ SBO-CME(s) & Streamer blowout CME(s) \\ S/C & Spacecraft \\ SEP(s) & Solar energetic particle 
event(s) \\ SFR(s) & Small-scale flux rope(s) \\ SIR(s) & Stream interaction region(s) \\ TD(s) & Tangential discontinuity(ies) \\ TH & Taylor's hypothesis \\ TOF & Time of flight \\ TPS & Thermal Protection System \\ UT & Universal Time \\ VDF(s) & Velocity distribution function(s) \\ VGA(s) & Venus gravity assist(s) \\ VSF & Volume scattering function \\ WCS & World Coordinate System \\ WHPI & Whole Heliosphere and Planetary Interactions \\ WKB & The Wentzel, Kramers, and Brillouin approximation \\ w.r.t. & With respect to \\ WTD & Wave/Turbulence-Driven \\ ZC & Zodiacal cloud \\ ZDC & Zodiacal dust cloud \\ ZL & Zodiacal light \\ \end{longtable} \begin{acknowledgements} Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA’s Living with a Star (LWS) program (contract NNN06AA01C). Support from the LWS management and technical team has played a critical role in the success of the Parker Solar Probe mission. \smallskip The FIELDS instrument suite was designed and built and is operated by a consortium of institutions including the University of California, Berkeley, University of Minnesota, University of Colorado, Boulder, NASA/GSFC, CNRS/LPC2E, University of New Hampshire, University of Maryland, UCLA, IFRU, Observatoire de Meudon, Imperial College, London and Queen Mary University London. \smallskip The SWEAP Investigation is a multi-institution project led by the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. Other members of the SWEAP team come from the University of Michigan, University of California, Berkeley Space Sciences Laboratory, The NASA Marshall Space Flight Center, The University of Alabama Huntsville, the Massachusetts Institute of Technology, Los Alamos National Laboratory, Draper Laboratory, JHU's Applied Physics Laboratory, and NASA Goddard Space Flight Center. 
\smallskip The Integrated Science Investigation of the Sun (IS$\odot$IS) Investigation is a multi-institution project led by Princeton University with contributions from Johns Hopkins/APL, Caltech, GSFC, JPL, SwRI, University of New Hampshire, University of Delaware, and University of Arizona. \smallskip The Wide-Field Imager for Parker Solar Probe (WISPR) instrument was designed, built, and is now operated by the US Naval Research Laboratory in collaboration with Johns Hopkins University/Applied Physics Laboratory, California Institute of Technology/Jet Propulsion Laboratory, University of Gottingen, Germany, Centre Spatiale de Liege, Belgium and University of Toulouse/Research Institute in Astrophysics and Planetology. \smallskip IM is supported by the Research Council of Norway (grant number 262941). OVA was supported by NASA grants 80NNSC19K0848, 80NSSC22K0417, 80NSSC21K1770, and NSF grant 1914670. \end{acknowledgements} \section*{Compliance with Ethical Standards} {\bf{Disclosure of potential conflicts of interest:}} There are no conflicts of interest (financial or non-financial) for any of the co-authors of this article. \\ {\bf{Research involving Human Participants and/or Animals:}} The results reported in this article do not involve Human Participants and/or Animals in any way. \\ {\bf{Informed consent:}} The authors agree with sharing the information reported in this article with whoever needs to access it. \bibliographystyle{aa}
\section{Introduction} Helicoidal surfaces in $3$-dimensional space forms arise as a natural generalization of rotational surfaces in such spaces. These surfaces are invariant under a subgroup of the group of isometries of the ambient space, called the helicoidal group, whose elements can be seen as a composition of a translation with a rotation about a given axis. In the Euclidean space $\mathbb{R}^3$, do Carmo and Dajczer \cite{docarmo} describe the space of all helicoidal surfaces that have constant mean curvature or constant Gaussian curvature. This space behaves as a circular cylinder, where a given generator corresponds to the rotational surfaces and each parallel corresponds to a periodic family of helicoidal surfaces. Helicoidal surfaces with prescribed mean or Gaussian curvature are obtained by Baikoussis and Koufogiorgos \cite{BK}. More precisely, they obtain a closed form of such a surface by integrating the second-order ordinary differential equation satisfied by the generating curve of the surface. Helicoidal surfaces in $\mathbb{R}^3$ are also considered by Perdomo \cite{P1} in the context of minimal surfaces, and by Palmer and Perdomo \cite{PP2}, where the mean curvature is related to the distance to the $z$-axis. In the context of constant mean curvature, helicoidal surfaces are considered by Solomon and Edelen in \cite{edelen1}. In the $3$-dimensional hyperbolic space $\mathbb{H}^3$, Mart\'inez, the second author and Tenenblat \cite{MST} give a complete classification of the helicoidal flat surfaces in terms of meromorphic data, which extends the results obtained by Kokubu, Umehara and Yamada \cite{KUY} for rotational flat surfaces. Moreover, the classification is also given by means of linear harmonic functions, characterizing the flat fronts in $\mathbb{H}^3$ that correspond to linear harmonic functions. 
Namely, it is well known that for flat surfaces in $\mathbb{H}^3$, on a neighbourhood of a non-umbilical point, there is a curvature line parametrization such that the first and second fundamental forms are given by \begin{equation} \begin{array}{rcl} I &=& \cosh^2 \phi(u,v) (du)^2 + \sinh^2 \phi(u,v) (dv)^2, \\ II &=& \sinh \phi(u,v) \cosh \phi(u,v) \left((du)^2 + (dv)^2 \right), \end{array} \label{firstff} \end{equation} where $\phi$ is a harmonic function, i.e., $\phi_{uu}+\phi_{vv}=0$. In this context, the main result states that a surface in $\mathbb{H}^3$, parametrized by curvature lines, with fundamental forms as in \eqref{firstff} and $\phi(u,v)$ linear, i.e., $\phi(u,v) = a u + b v + c$, is flat if and only if the surface is a helicoidal surface or a {\em peach front}, where the latter is associated with the case $(a,b,c) = (0,\pm1,0)$. Helicoidal minimal surfaces were studied by Ripoll \cite{ripoll}, and helicoidal constant mean curvature surfaces in $\mathbb{H}^3$ are considered by Edelen \cite{edelen2}, as well as the cases where such invariant surfaces belong to $\mathbb{R}^3$ and $\mathbb{S}^3$. Similarly to the hyperbolic space, for a given flat surface in the $3$-dimensional sphere $\mathbb{S}^3$, there exists a parametrization by asymptotic lines, where the first and the second fundamental forms are given by \begin{equation} \begin{array}{rcl} I &=& du^2 + 2 \cos \omega du dv + dv^2, \\ II &=& 2 \sin \omega du dv \label{primeiraffs} \end{array} \end{equation} for a smooth function $\omega$, called the {\em angle function}, that satisfies the homogeneous wave equation $\omega_{uv} = 0$. Therefore, one can ask which surfaces are related to linear solutions of this equation. 
The aim of this paper is to give a complete classification of helicoidal flat surfaces in $\mathbb{S}^3$, established in Theorems 1 and 2, by means of asymptotic lines coordinates, with first and second fundamental forms given by \eqref{primeiraffs}, where the angle function is linear. In order to do this, one uses the Bianchi-Spivak construction for flat surfaces in $\mathbb{S}^3$. This construction and the Kitagawa representation \cite{kitagawa1} are important tools in recent developments of flat surface theory. Examples of applications of such representations can be seen in \cite{galvezmira1} and \cite{aledogalvezmira}. Our classification also makes use of a representation for constant angle surfaces in $\mathbb{S}^3$, which comes from a characterization of constant angle surfaces in the Berger spheres obtained by Montaldo and Onnis \cite{MO}. This paper is organized as follows. In Section 2 we give a brief description of helicoidal surfaces in $\mathbb{S}^3$, as well as an ordinary differential equation that characterizes those with zero intrinsic curvature. In Section 3, the Bianchi-Spivak construction is introduced. It will be used to prove Theorem 1, which states that a flat surface in $\mathbb{S}^3$, with asymptotic parameters and linear angle function, is invariant under helicoidal motions. In Section 4, Theorem 2 establishes the converse of Theorem 1, that is, a helicoidal flat surface admits a local parametrization by asymptotic parameters where the angle function is linear. Such a local parametrization is obtained by using a characterization of constant angle surfaces in Berger spheres, which is a consequence of the fact that a helicoidal flat surface is a constant angle surface in $\mathbb{S}^3$, i.e., it has a unit normal that makes a constant angle with the Hopf vector field. In Section 5 we present an application to conformally flat hypersurfaces in $\mathbb{R}^4$. 
The classification result obtained is used to give a geometric characterization for special conformally flat hypersurfaces in $4$-dimensional space forms. It is known that conformally flat hypersurfaces in $4$-dimensional space forms are associated with solutions of a system of equations, known as Lam\'e's system (see \cite{Jeromin1} and \cite{santos} for details). In \cite{santos}, Tenenblat and the second author obtained invariant solutions under symmetry groups of Lam\'e's system. A class of those solutions is related to flat surfaces in $\mathbb{S}^3$, parametrized by asymptotic lines with linear angle function. Thus a geometric description of the corresponding conformally flat hypersurfaces is given in terms of helicoidal flat surfaces in $\mathbb{S}^3$. \section{Helicoidal flat surfaces} Given any $\beta\in\mathbb{R}$, let $\{\varphi_\beta(t)\}$ be the one-parameter subgroup of isometries of $\mathbb{S}^3$ given by \[ \varphi_\beta(t) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos \beta t & -\sin \beta t \\ 0 & 0 & \sin \beta t & \cos \beta t \\ \end{array} \right). \] When $\beta\neq0$, this group fixes the set $l=\{(z,0)\in\mathbb{S}^3\}$, which is a great circle, called the {\em axis of rotation}. In this case, the orbits are circles centered on $l$, i.e., $\{\varphi_\beta(t)\}$ consists of rotations around $l$. Given another number $\alpha\in\mathbb{R}$, consider now the translations $\{\psi_\alpha(t)\}$ along $l$, \[ \psi_\alpha(t)=\left( \begin{array}{cccc} \cos \alpha t & -\sin \alpha t & 0 & 0 \\ \sin \alpha t & \cos \alpha t & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right). 
\] \begin{defi}\label{def:helicoidal} {\em A {\em helicoidal} surface in $\mathbb{S}^3$ is a surface invariant under the action of the helicoidal 1-parameter group of isometries \begin{equation}\label{eq:movhel} \phi_{\alpha,\beta}(t)=\psi_\alpha(t)\circ\varphi_\beta(t)= \left( \begin{array}{cccc} \cos \alpha t & -\sin \alpha t & 0 & 0 \\ \sin \alpha t & \cos \alpha t & 0 & 0 \\ 0 & 0 & \cos \beta t & -\sin \beta t \\ 0 & 0 & \sin \beta t & \cos \beta t \\ \end{array} \right), \end{equation}} {\em given by a composition of a translation $\psi_\alpha(t)$ and a rotation $\varphi_\beta(t)$ in $\mathbb{S}^3$. } \end{defi} \begin{rem} {\em When $\alpha=\beta$, these isometries are usually called {\em Clifford translations}. In this case, the orbits are all great circles, and they are equidistant from each other. In fact, the orbits of this action coincide with the fibers of the Hopf fibration $h:\mathbb{S}^3\to\mathbb{S}^2$. We note that, when $\alpha=-\beta$, these isometries are also, up to a rotation in $\mathbb{S}^3$, Clifford translations. For this reason we will consider in this paper only the cases $\alpha\neq\pm\beta$.} \end{rem} With these basic properties in mind, a helicoidal surface can be locally parametrized by \begin{equation}\label{eq:param-helicoidal} X(t,s) = \phi_{\alpha,\beta}(t)\cdot\gamma(s), \end{equation} where $\gamma:I\subset\mathbb{R}\to\mathbb{S}^2_+$ is a curve parametrized by the arc length, called the {\em profile curve} of the parametrization $X$. Here, $\mathbb{S}^2_+$ is the half totally geodesic sphere of $\mathbb{S}^3$ given by \[ \mathbb{S}^2_+ = \left\{(x_1, x_2, x_3, 0)\in\mathbb{S}^3: x_3>0\right\}. \] Then we have \[ \begin{array}{rcl} X_t &=& \phi_{\alpha,\beta}(t)\cdot(-\alpha x_2,\alpha x_1, 0,\beta x_3), \\ X_s &=& \phi_{\alpha,\beta}(t)\cdot\gamma'(s). 
\end{array} \] Moreover, a unit normal vector field associated with the parametrization $X$ is given by $N=\tilde N/ \|\tilde N\|$, where $\tilde N$ is explicitly given by \begin{equation}\label{normal-field} \tilde{N} = \phi_{\alpha,\beta}(t)\cdot \big(\beta x_3(x_2'x_3-x_2 x_3'),\beta x_3(x_1x_3'-x_1'x_3), \beta x_3 (x_1'x_2-x_1 x_2'),-\alpha x_3' \big). \end{equation} Let us now consider a parametrization by the arc length of $\gamma$ given by \begin{equation} \label{eq:param-gamma} \gamma(s) = \big(\cos\varphi(s)\cos\theta(s),\cos\varphi(s)\sin\theta(s), \sin\varphi(s),0\big). \end{equation} We will finish this section by discussing the flatness of helicoidal surfaces in $\mathbb{S}^3$. Recall that a simple way to obtain flat surfaces in $\mathbb{S}^3$ is by means of the Hopf fibration $h:\mathbb{S}^3\to\mathbb{S}^2$. More precisely, if $c$ is a regular curve in $\mathbb{S}^2$, then $h^{-1}(c)$ is a flat surface in $\mathbb{S}^3$ (cf. \cite{spivak}). Such surfaces are called {\em Hopf cylinders}. The next result provides a necessary and sufficient condition for a helicoidal surface, parametrized as in \eqref{eq:param-helicoidal}, to be flat. \begin{prop}\label{prop:HSF} A helicoidal surface locally parametrized as in \eqref{eq:param-helicoidal}, where $\gamma$ is given by \eqref{eq:param-gamma}, is a flat surface if and only if the following equation \begin{equation} \beta^2\varphi''\sin^3\varphi\cos\varphi - \beta^2(\varphi')^2 \sin^4\varphi +\alpha^2(\varphi')^4 \cos^4 \varphi = 0 \label{ode-helicoidal} \end{equation} \label{prop-ode-helicoidal} is satisfied. \end{prop} \begin{proof} Since $\phi_{\alpha,\beta}(t)\in O(4)$ and $\gamma$ is parametrized by the arc length, the coefficients of the first fundamental form are given by \[ \begin{array}{rcl} E &=& \alpha^2\cos^2\varphi + \beta^2\sin^2\varphi, \\ F &=& \alpha\theta'\cos^2\varphi,\\ G &=& (\varphi')^2 + (\theta')^2 \cos^2 \varphi =1. 
\end{array} \] Moreover, the Gauss curvature $K$ is given by \[ \begin{array}{rcl} 4(EG - F^2)^2 K &=& E \left[ E_s G_s - 2 F_t G_s + (G_t)^2 \right] + G \left[E_t G_t - 2 E_t F_s + (E_s)^2 \right] \\ &&+ F (E_t G_s - E_s G_t - 2 E_s F_s + 4 F_t F_s - 2 F_t G_t ) \\ && - 2 (EG-F^2)(E_{ss} - 2 F_{st} + G_{tt} ). \end{array} \] Thus, it follows from the expression of $K$ and from the coefficients of the first fundamental form that the surface is flat if, and only if, \begin{equation}\label{eq:gauss} E_s (EG - F^2)_s - 2 (EG-F^2)E_{ss}=0. \end{equation} When $\alpha=\pm\beta$, the equation \eqref{eq:gauss} is trivially satisfied, regardless of the chosen curve $\gamma$. For the case $\alpha\neq\pm\beta$, since \[ EG-F^2 =\beta^2\sin^2\varphi+\alpha^2(\varphi')^2\cos^2\varphi, \] a straightforward computation shows that the equation \eqref{eq:gauss} is equivalent to \[ (\beta^2-\alpha^2)\big(\beta^2\varphi''\sin^3\varphi\cos\varphi -\beta^2(\varphi')^2\sin^4\varphi+\alpha^2(\varphi')^4\cos^4\varphi\big) = 0, \] and this concludes the proof. \end{proof} \section{The Bianchi-Spivak construction} A nice way to understand the fundamental equations of a flat surface $M$ in $\mathbb{S}^3$ is by means of parameters whose coordinate curves are asymptotic curves on the surface. As $M$ is flat, its intrinsic curvature vanishes identically. Thus, by the Gauss equation, the extrinsic curvature of $M$ is constant and equal to $-1$. In this case, as the extrinsic curvature is negative, it is well known that there exist Tschebycheff coordinates around every point. This means that we can choose local coordinates $(u,v)$ such that the coordinate curves are asymptotic curves of $M$ and these curves are parametrized by the arc length. 
In this case, the first and second fundamental forms are given by \begin{eqnarray}\label{eq:forms} \begin{aligned} I &= du^2 + 2 \cos \omega dudv + dv^2, \\ II& = 2 \sin \omega du dv, \end{aligned} \end{eqnarray} for a certain smooth function $\omega$, usually called the {\em angle function}. This function $\omega$ has two basic properties. The first one is that, as $I$ is regular, we must have $0<\omega<\pi$. Secondly, it follows from the Gauss equation that $\omega_{uv}=0$. In other words, $\omega$ satisfies the homogeneous wave equation, and thus it can be locally decomposed as $\omega(u,v) = \omega_1(u) + \omega_2(v)$, where $\omega_1$ and $\omega_2$ are smooth real functions (cf. \cite{galvez1} and \cite{spivak} for further details). \vspace{.2cm} Given a flat isometric immersion $f:M\to\mathbb{S}^3$ and a local smooth unit normal vector field $N$ along $f$, let us consider coordinates $(u,v)$ such that the first and the second fundamental forms of $M$ are given as in \eqref{eq:forms}. The aim of this work is to characterize the flat surfaces when the angle function $\omega$ is linear, i.e., when $\omega=\omega_1+\omega_2$ is given by \begin{eqnarray}\label{eq:linear} \omega_1(u) + \omega_2 (v) = \lambda_1 u + \lambda_2 v + \lambda_3 \end{eqnarray} where $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}$. In order to do this, let us first construct flat surfaces in $\mathbb{S}^3$ whose first and second fundamental forms are given by \eqref{eq:forms} and with linear angle function. This construction is due to Bianchi \cite{bianchi} and Spivak \cite{spivak}. \vspace{.2cm} We will use here the division algebra of the quaternions, a very useful approach to describe explicitly flat surfaces in $\mathbb{S}^3$. More precisely, we identify the sphere $\mathbb{S}^3$ with the set of the unit quaternions $\{q\in\mathrm{H}: q\overline q=1\}$ and $\mathbb{S}^2$ with the unit sphere in the subspace of $\mathrm{H}$ spanned by $1$, $i$ and $j$. 
\begin{prop}[Bianchi-Spivak representation]\label{teo:BS} Let $c_a,c_b:I\subset\mathbb{R}\to\mathbb{S}^3$ be two curves parametrized by the arc length, with curvatures $\kappa_a$ and $\kappa_b$, and whose torsions are given by $\tau_a=1$ and $\tau_b=-1$. Suppose that $0 \in I$, $c_a(0)=c_b(0)=(1,0,0,0)$ and $c_a'(0) \wedge c_b'(0) \neq 0$. Then the map \[ X(u,v) = c_a (u) \cdot c_b (v) \] is a local parametrization of a flat surface in $\mathbb{S}^3$, whose first and second fundamental forms are given as in \eqref{eq:forms}, where the angle function satisfies $\omega_1'(u) = -\kappa_a (u)$ and $\omega_2'(v) =\kappa_b (v)$. \end{prop} Since the goal here is to find a parametrization such that $\omega$ can be written as in \eqref{eq:linear}, it follows from Proposition \ref{teo:BS} that the curves of the representation must have constant curvatures. Therefore, we will use the Frenet-Serret formulas in order to obtain curves with torsion $\pm 1$ and with constant curvatures. \vspace{.2cm} Given a real number $r>1$, let us consider the curve $\gamma_r:\mathbb{R}\to\mathbb{S}^3$ given by \begin{equation}\label{eq:base} \gamma_r(u) = \frac{1}{\sqrt{1+r^2}} \left(r\cos\frac{u}{r},r\sin\frac{u}{r}, \cos ru,\sin ru\right). \end{equation} A straightforward computation shows that $\gamma_r(u)$ is parametrized by the arc length, has constant curvature $\kappa=\frac{r^2-1}{r}$ and its torsion $\tau$ satisfies $\tau^2=1$. Observe that $\gamma_r(u)$ is periodic if and only if $r^2\in\mathbb Q$. When $r$ is a positive integer, $\gamma_r(u)$ is a closed curve of period $2\pi r$. A curve $\gamma$ as in \eqref{eq:base} will be called a {\em base curve}. \vspace{.2cm} Now we just have to apply rigid motions to a base curve in order to satisfy the remaining requirements of the Bianchi-Spivak construction. 
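The stated properties of the base curve \eqref{eq:base} can be checked symbolically. The following sketch (ours, not part of the paper's argument; it uses \texttt{sympy}) verifies the arc-length parametrization and the curvature $\kappa=\frac{r^2-1}{r}$, computed as the norm of $\gamma_r''+\gamma_r$, the projection of the acceleration onto the tangent space of $\mathbb{S}^3$:

```python
# Symbolic sanity check of the base curve gamma_r from eq. (eq:base).
import sympy as sp

u, r = sp.symbols('u r', positive=True)
c = 1 / sp.sqrt(1 + r**2)
gamma = sp.Matrix([r*sp.cos(u/r), r*sp.sin(u/r), sp.cos(r*u), sp.sin(r*u)]) * c

d1 = gamma.diff(u)
# |gamma'|^2 = 1: arc-length parametrization.
speed2 = sp.simplify(d1.dot(d1))
# In S^3 the covariant derivative of gamma' along gamma is gamma'' + gamma
# (tangential projection of the ambient acceleration); its norm is kappa.
acc = sp.simplify(gamma.diff(u, 2) + gamma)
kappa2 = sp.simplify(acc.dot(acc))

print(speed2)                                   # 1
print(sp.simplify(kappa2 - ((r**2 - 1)/r)**2))  # 0
```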
It is easy to verify that the curves \begin{eqnarray}\label{eq:curves-condition} \begin{array}{rcl} c_a(u) &=& \dfrac{1}{\sqrt{1+a^2}}(a,0,-1,0) \cdot \gamma_a (u), \\ c_b(v) &=& \dfrac{1}{\sqrt{1+b^2}}T ( \gamma_b (v)) \cdot (b,0,0,-1), \end{array} \end{eqnarray} are base curves, and satisfy $c_a(0)=c_b(0)=(1,0,0,0)$ and $c_a'(0)\wedge c_b'(0)\neq0$, where \begin{equation} \begin{array}{rcl} T &=& \left( \begin{array}{cccc} 1&0&0 & 0\\ 0&1&0& 0 \\ 0&0&0&1\\ 0&0&1&0 \end{array} \right). \\ \end{array} \end{equation} Therefore we can establish our first main result: \begin{thm}\label{teo:main1} The map $X:U\subset\mathbb{R}^2\to\mathbb{S}^3$ given by \[ X(u,v) = c_a(u) \cdot c_b(v), \] where $c_a$ and $c_b$ are the curves given in \eqref{eq:curves-condition}, is a parametrization of a flat surface in $\mathbb{S}^3$, whose first and second fundamental forms are given by \[ \begin{array}{rcl} I &=& du^2 +2 \cos \left( \left( \frac{1-a^2}{a} \right) u + \left( \frac{b^2-1}{b} \right) v + c \right) du dv + dv^2, \\ II &=& 2 \sin \left(\left( \frac{1-a^2}{a} \right) u + \left( \frac{b^2-1}{b} \right) v + c \right) du dv, \end{array} \] where $c$ is a constant. Moreover, up to rigid motions, $X$ is invariant under helicoidal motions. \end{thm} \begin{proof} The statement about the fundamental forms follows directly from the Bianchi-Spivak construction. For the second statement, note that the parametrization $X(u,v)$ can be written as \[ X(u,v) = g_a \cdot Y(u,v) \cdot g_b, \] where \[ \begin{array}{rcl} g_a &=& \dfrac{1}{\sqrt{1+a^2}}(a,0,-1,0), \\ g_b &=& \dfrac{1}{\sqrt{1+b^2}}(b,0,0,-1), \end{array} \] and \[ Y(u,v) = \gamma_a(u) \cdot T(\gamma_b(v)). \] To conclude the proof, it suffices to show that $Y(u,v)$ is invariant under helicoidal motions. To do this, we have to find $\alpha$ and $\beta$ such that \[ \phi_{\alpha,\beta}(t)\cdot Y(u,v)=Y\big(u(t),v(t)\big), \] where $u(t)$ and $v(t)$ are smooth functions. 
Observe that $Y(u,v)$ can be written as \begin{eqnarray}\label{eq:expY} Y(u,v) = \dfrac{1}{\sqrt{(1+a^2)(1+b^2)}} (y_1, y_2, y_3, y_4), \end{eqnarray} where \begin{equation*} \begin{array}{rcl} y_1(u,v) &=& ab \cos \left(\dfrac{u}{a} + \dfrac{v}{b} \right) - \sin (au+ bv), \\ y_2(u,v) &=& ab \sin \left(\dfrac{u}{a} + \dfrac{v}{b} \right) + \cos (au+ bv), \\ y_3(u,v) &=& b \cos \left(au- \dfrac{v}{b} \right) - a\sin\left(\dfrac{u}{a}-bv\right), \\ y_4(u,v) &=& b\sin\left(au-\dfrac{v}{b} \right)+a\cos\left(\dfrac{u}{a}-bv\right). \\ \end{array} \label{parametrizacao-y} \end{equation*} A straightforward computation shows that if $\phi_{\alpha,\beta}(t)$ is given by \eqref{eq:movhel}, we have \[ u(t) =u+z(t) \quad\textrm{and}\quad v(t)=v+w(t), \] where \begin{eqnarray}\label{eq:z(t)w(t)} z(t)=\frac{a(b^2-1)}{a^2b^2-1}\beta t \quad\text{and}\quad w(t)=\frac{b(1-a^2)}{a^2b^2-1}\beta t, \end{eqnarray} with \begin{eqnarray}\label{eq:alpha-beta} \alpha=\dfrac{b^2-a^2}{a^2b^2-1}\beta, \end{eqnarray} showing that $Y(u,v)$ is invariant under helicoidal motions. Observe that when $a=\pm b$ we have $\alpha=0$, i.e., $X$ is a rotational surface in $\mathbb{S}^3$. \end{proof} \begin{rem} {\em It is important to note that the constants $a$ and $b$ in \eqref{eq:curves-condition} were considered in $(1,+\infty)$ in order to obtain non-zero constant curvatures with well-defined torsions, and then to apply the Bianchi-Spivak construction. This is not a strong restriction since the curvature function $\kappa(t)=\frac{t^2-1}{t}$ assumes all values in $\mathbb{R} \setminus\{0\}$ when $t \in (1, + \infty)$. 
However, by taking $a=1$ and $b>1$ in \eqref{eq:curves-condition}, a long but straightforward computation gives a unit normal vector field \[ N(u,v) = \dfrac{1}{\sqrt{2(1+b^2)}} (n_1, n_2, n_3, n_4), \] where \[ \begin{array}{rcl} n_1(u,v) &=& -b \sin \left(u + \dfrac{v}{b} \right) + \cos (u+ bv), \\ n_2(u,v) &=& b \cos \left(u + \dfrac{v}{b} \right) + \sin (u+ bv), \\ n_3(u,v) &=& b \sin \left(u- \dfrac{v}{b} \right) - \cos\left(u-bv\right), \\ n_4(u,v) &=& -b\cos \left(u-\dfrac{v}{b} \right) - \sin\left(u-bv\right). \\ \end{array} \] Therefore, one checks that this is also a parametrization by asymptotic lines, whose angle function is given by $\omega(u,v) =\frac{1-b^2}{b}v -\frac{\pi}{2}$. Moreover, this is a parametrization of a Hopf cylinder, since the unit normal vector field $N$ makes a constant angle with the Hopf vector field (see section 4).} \end{rem} We will use the parametrization $Y(u,v)$ given in \eqref{eq:expY}, composed with the stereographic projection onto $\mathbb{R}^3$, to visualize some examples for various values of the constants $a$ and $b$.
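The helicoidal invariance in the proof of Theorem \ref{teo:main1} reduces to checking that the phase shifts $z(t)$, $w(t)$ of \eqref{eq:z(t)w(t)}, together with \eqref{eq:alpha-beta}, move each of the four trigonometric arguments appearing in \eqref{eq:expY} by $\alpha t$ or $\beta t$. A minimal symbolic check (our own sketch, in Python with sympy):

```python
import sympy as sp

a, b, beta, t = sp.symbols('a b beta t', positive=True)

# Phase shifts z(t), w(t) and the compatibility relation for alpha
z = a*(b**2 - 1)/(a**2*b**2 - 1)*beta*t
w = b*(1 - a**2)/(a**2*b**2 - 1)*beta*t
alpha = (b**2 - a**2)/(a**2*b**2 - 1)*beta

# The arguments u/a + v/b and a*u + b*v must shift by alpha*t,
# while a*u - v/b and u/a - b*v must shift by beta*t.
checks = [z/a + w/b - alpha*t,
          a*z + b*w - alpha*t,
          a*z - w/b - beta*t,
          z/a - b*w - beta*t]
assert all(sp.simplify(c) == 0 for c in checks)
```

All four differences simplify to zero, in agreement with \eqref{eq:z(t)w(t)} and \eqref{eq:alpha-beta}.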
\begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura1.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura2.jpg} \end{minipage} \caption{$a=2$ and $b=3$.} \end{figure} \begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura3.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura4.jpg} \end{minipage} \caption{$a=\sqrt2$ and $b=3$.} \end{figure} \begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura5.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura6.jpg} \end{minipage} \caption{$a=\sqrt3$ and $b=\sqrt2$.} \end{figure} \section{Constant angle surfaces} In this section we will complete our classification of helicoidal flat surfaces in $\mathbb{S}^3$, by establishing our second main theorem, which can be seen as a converse of Theorem \ref{teo:main1}. It is well known that the Hopf map $h:\mathbb{S}^3\to\mathbb{S}^2$ is a Riemannian submersion and the standard orthogonal basis of $\mathbb{S}^3$ \[ E_1(z,w)=i(z,w), \ \ E_2(z,w)=i(-\overline w,\overline z), \ \ E_3(z,w)=(-\overline w,\overline z) \] has the property that $E_1$ is vertical and $E_2$, $E_3$ are horizontal. The vector field $E_1$, usually called the Hopf vector field, is a unit Killing vector field. \vspace{.2cm} Constant angle surfaces in $\mathbb{S}^3$ are those surfaces whose unit normal vector field makes a constant angle with the Hopf vector field $E_1$. The next result states that, for a helicoidal surface in $\mathbb{S}^3$, flatness is equivalent to being a constant angle surface. \begin{prop}\label{prop:CAS} A helicoidal surface in $\mathbb{S}^3$, locally parametrized by \eqref{eq:param-helicoidal} and with the profile curve $\gamma$ parametrized by \eqref{eq:param-gamma}, is a flat surface if and only if it is a constant angle surface.
\end{prop} \begin{proof} Let us consider the Hopf vector field \[ E_1(x_1, x_2, x_3, x_4) = (-x_2, x_1, -x_4, x_3), \] and let us denote by $\nu$ the angle between $E_1$ and the normal vector field $N$ along the surface given in \eqref{normal-field}. Along the parametrization \eqref{eq:param-helicoidal}, we can write the vector field $E_1$ as \[ E_1(X(t,s)) = \phi_{\alpha,\beta}(t)(-x_2, x_1,0,x_3). \] Then, since $\phi_{\alpha,\beta}(t)\in O(4)$, we have \[ \langle N,E_1\rangle(t,s)=\langle N,E_1\rangle(s)= (\beta-\alpha) \frac{x_3 x_3'}{\sqrt{\beta^2x_3^2 + \alpha^2 (x_3')^2}}. \] By considering the parametrization \eqref{eq:param-gamma} for the profile curve $\gamma$, the angle $\nu=\nu(s)$ between $N$ and $E_1$ is given by \begin{eqnarray}\label{eq:cosnu} \cos \nu (s) = (\beta-\alpha) \frac{\varphi' \sin\varphi \cos\varphi} {\sqrt{\beta^2\sin^2 \varphi + \alpha^2 (\varphi')^2 \cos^2 \varphi}}. \end{eqnarray} By taking the derivative in \eqref{eq:cosnu}, we have \[ \frac{d}{ds}(\cos\nu(s))=\frac{(\beta-\alpha)\big(\beta^2\varphi''\sin^3 \varphi\cos\varphi - \beta^2(\varphi')^2 \sin^4 \varphi + \alpha^2 (\varphi')^4 \cos^4 \varphi\big)} {\big(\beta^2\sin^2\varphi+\alpha^2(\varphi')^2 \cos^2 \varphi \big)^{\frac{3}{2}}}, \] and the conclusion follows from Proposition \ref{prop:HSF}. \end{proof} Given a number $\epsilon>0$, let us recall that the Berger sphere $\mathbb{S}^3_\epsilon$ is defined as the sphere $\mathbb{S}^3$ endowed with the metric \begin{eqnarray}\label{eq:berger} \langle X,Y\rangle_\epsilon=\langle X,Y\rangle+(\epsilon^2-1)\langle X,E_1\rangle\langle Y,E_1\rangle, \end{eqnarray} where $\langle,\rangle$ denotes the canonical metric of $\mathbb{S}^3$. We define constant angle surfaces in $\mathbb{S}^3_\epsilon$ in the same way as in the case of $\mathbb{S}^3$. Constant angle surfaces in the Berger spheres were characterized by Montaldo and Onnis \cite{MO}.
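The derivative computed in the proof of Proposition \ref{prop:CAS} can be double-checked symbolically, treating $\varphi$ as an arbitrary function of $s$; the following is our own verification sketch in Python with sympy:

```python
import sympy as sp

s, alpha, beta = sp.symbols('s alpha beta')
phi = sp.Function('varphi')(s)
dphi, d2phi = sp.diff(phi, s), sp.diff(phi, s, 2)

# Denominator and cos(nu) as in the proof of Proposition prop:CAS
D = beta**2*sp.sin(phi)**2 + alpha**2*dphi**2*sp.cos(phi)**2
cos_nu = (beta - alpha)*dphi*sp.sin(phi)*sp.cos(phi)/sp.sqrt(D)

# Expected numerator of d/ds(cos nu) over D**(3/2)
num = (beta - alpha)*(beta**2*d2phi*sp.sin(phi)**3*sp.cos(phi)
                      - beta**2*dphi**2*sp.sin(phi)**4
                      + alpha**2*dphi**4*sp.cos(phi)**4)

residual = sp.expand(sp.expand(sp.diff(cos_nu, s)*D**sp.Rational(3, 2)) - num)
assert sp.simplify(residual) == 0
```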
More precisely, if $M$ is a constant angle surface in the Berger sphere, with constant angle $\nu$, then there exists a local parametrization $F(u,v)$ given by \begin{eqnarray}\label{eq:paramMO} F(u,v)=A(v)b(u), \end{eqnarray} where \begin{eqnarray}\label{eq:curveb} b(u)=\big(\sqrt{c_1}\cos(\alpha_1u),\sqrt{c_1}\sin(\alpha_1u), \sqrt{c_2}\cos(\alpha_2u),\sqrt{c_2}\sin(\alpha_2u)\big) \end{eqnarray} is a geodesic curve in the torus $\mathbb{S}^1(\sqrt{c_1})\times\mathbb{S}^1(\sqrt{c_2})$, with \[ c_{1,2}=\frac{1}{2}\mp\frac{\epsilon\cos\nu}{2\sqrt{B}},\ \alpha_1=\frac{2B}{\epsilon}c_2, \ \alpha_2=\frac{2B}{\epsilon}c_1, \ B=1+(\epsilon^2-1)\cos^2\nu, \] and \begin{eqnarray}\label{eq:xi_i} A(v)=A(\xi,\xi_1,\xi_2,\xi_3)(v) \end{eqnarray} is a $1$-parameter family of $4\times4$ orthogonal matrices given by \[ A(v)=A(\xi)\cdot\tilde A(v), \] where \[ A(\xi)=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \sin\xi & \cos\xi \\ 0 & 0 & -\cos\xi & \sin\xi \end{array} \right) \] and \[ \tilde A(v)=\left( \begin{array}{rrrr} \cos\xi_1\cos\xi_2 & -\cos\xi_1\sin\xi_2 & \sin\xi_1\cos\xi_2 & -\sin\xi_1\sin\xi_3 \\ \cos\xi_1\sin\xi_2 & -\cos\xi_1\cos\xi_2 & \sin\xi_1\sin\xi_3 & \sin\xi_1\cos\xi_3 \\ -\sin\xi_1\cos\xi_3 & \sin\xi_1\sin\xi_3 & \cos\xi_1\cos\xi_2 & \cos\xi_1\sin\xi_2 \\ \sin\xi_1\sin\xi_3 & -\sin\xi_1\cos\xi_3 & -\cos\xi_1\sin\xi_2 & \cos\xi_1\cos\xi_2 \end{array} \right), \] $\xi$ is a constant and the functions $\xi_i(v)$, $1\leq i\leq 3$, satisfy \begin{eqnarray}\label{eq:xis} \cos^2(\xi_1(v))\xi_2'(v)-\sin^2(\xi_1(v))\xi_3'(v)=0. \end{eqnarray} In the next result we obtain another relation between the functions $\xi_i$, given in \eqref{eq:xi_i}, and the angle function $\nu$.
\begin{prop} {\em The functions $\xi_i(v)$, given in \eqref{eq:xi_i}, satisfy the following relation: \begin{eqnarray}\label{eq:relxi_inu} (\xi_1'(v))^2+(\xi_2'(v))^2\cos^2(\xi_1(v))+(\xi_3'(v))^2\sin^2(\xi_1(v))= \sin^2\nu, \end{eqnarray} where $\nu$ is the angle function of the surface $M$.} \end{prop} \begin{proof} With respect to the parametrization $F(u,v)$, given in \eqref{eq:paramMO}, we have \[ F_v=A'(v)\cdot b(u)=A(\xi)\cdot\tilde A'(v)\cdot b(u). \] We have $\langle F_v,F_v\rangle=\sin^2\nu$ (cf. \cite{MO}). On the other hand, if we denote by $c_1$, $c_2$, $c_3$, $c_4$ the columns of $\tilde A$, we have \[ \langle F_v,F_v\rangle\vert_{u=0}=g_{11}\langle c_1',c_1'\rangle+g_{33}\langle c_3',c_3'\rangle. \] As $\langle c_1',c_1'\rangle=\langle c_3',c_3'\rangle$, $\langle c_1',c_3'\rangle=0$ and $g_{11}+g_{33}=1$, a straightforward computation gives \begin{eqnarray*} \sin^2\nu &=& \langle F_v,F_v\rangle = (g_{11}+g_{33})\langle c_1',c_1'\rangle \\ &=& (\xi_1'(v))^2+(\xi_2'(v))^2\cos^2(\xi_1(v))+(\xi_3'(v))^2\sin^2(\xi_1(v)), \end{eqnarray*} and we conclude the proof. \end{proof} \begin{thm} Let $M$ be a helicoidal flat surface in $\mathbb{S}^3$, locally parametrized by \eqref{eq:param-helicoidal}, and whose profile curve $\gamma$ is given by \eqref{eq:param-gamma}. Then $M$ admits a new local parametrization such that the fundamental forms are given as in \eqref{eq:forms} and $\omega$ is a linear function. \end{thm} \begin{proof} Consider the unit normal vector field $N$ associated to the local parametrization $X$ of $M$ given in \eqref{eq:param-helicoidal}. From Proposition \ref{prop:CAS}, the angle between $N$ and the Hopf vector field $E_1$ is constant. Hence, it follows from \cite{MO} (Theorem 3.1) that $M$ can be locally parametrized as in \eqref{eq:paramMO}. By taking $\epsilon=1$ in \eqref{eq:berger}, we can reparametrize the curve $b$ given in \eqref{eq:curveb} in such a way that the new curve is a base curve $\gamma_a$.
In fact, by taking $\epsilon=1$, we obtain $B=1$, and so $\alpha_1=2c_2$ and $\alpha_2=2c_1$. This implies that $\|b'(u)\|=2\sqrt{c_1c_2}$, because $c_1+c_2=1$. Thus, by writing $s=2\sqrt{c_1c_2}\,u$, the new parametrization of $b$ is given by \[ b(s)=\frac{1}{\sqrt{1+a^2}} \left(a\cos\frac{s}{a},a\sin\frac{s}{a}, \cos(as),\sin(as)\right), \] where $a=\sqrt{c_1/c_2}$. Moreover, we have \[ A(v)\cdot b(s)=A(\xi)X(v,s), \] where $X(v,s)$ can be written as \[ X(v,s)=\frac{1}{\sqrt{1+a^2}}(x_1,x_2,x_3,x_4), \] with \begin{eqnarray}\label{eq:coef_xi} \begin{array}{rcl} x_1 &=& a\cos\xi_1\cos\left(\dfrac{s}{a}+\xi_2\right)+\sin\xi_1\cos(as+\xi_3), \\ x_2 &=& a\cos\xi_1\sin\left(\dfrac{s}{a}+\xi_2\right)+\sin\xi_1\sin(as+\xi_3), \\ x_3 &=& -a\sin\xi_1\cos\left(\dfrac{s}{a}-\xi_3\right)+\cos\xi_1\cos(as-\xi_2), \\ x_4 &=& -a\sin\xi_1\sin\left(\dfrac{s}{a}-\xi_3\right)+\cos\xi_1\sin(as-\xi_2). \\ \end{array} \end{eqnarray} On the other hand, the product $\phi_{\alpha,\beta}(t)\cdot X(v,s)$ can be written as \[ \phi_{\alpha,\beta}(t)\cdot X(v,s)=\frac{1}{\sqrt{1+a^2}}(z_1,z_2,z_3,z_4), \] where \begin{eqnarray}\label{eq:coef_zi} \begin{array}{rcl} z_1 &=& a\cos\xi_1\cos\left(\dfrac{s}{a}+\xi_2+\alpha t\right) +\sin\xi_1\cos\left(as+\xi_3+\alpha t\right), \\ z_2 &=& a\cos\xi_1\sin\left(\dfrac{s}{a}+\xi_2+\alpha t\right) +\sin\xi_1\sin\left(as+\xi_3+\alpha t\right), \\ z_3 &=& -a\sin\xi_1\cos\left(\dfrac{s}{a}-\xi_3+\beta t\right) +\cos\xi_1\cos\left(as-\xi_2+\beta t\right), \\ z_4 &=& -a\sin\xi_1\sin\left(\dfrac{s}{a}-\xi_3+\beta t\right) +\cos\xi_1\sin\left(as-\xi_2+\beta t\right).
\end{array} \end{eqnarray} As the surface is helicoidal, we have \[ \phi_{\alpha,\beta}(t)\cdot X(v,s)=X(v(t),s(t)), \] for some smooth functions $v(t)$ and $s(t)$, which satisfy the following equations: \begin{eqnarray} \xi_2(v(t)) + \dfrac{s(t)}{a} = \xi_2(v)+ \dfrac{s}{a}+\alpha t, \label{eq:xi2-alpha} \\ \xi_3(v(t)) + a s(t) = \xi_3 (v) + a s + \alpha t, \label{eq:xi3-alpha} \\ \dfrac{s(t)}{a} - \xi_3(v(t)) = \dfrac{s}{a} - \xi_3(v) + \beta t, \label{eq:xi3-beta} \\ a s(t) - \xi_2(v(t)) = as - \xi_2(v) + \beta t. \label{eq:xi2-beta} \end{eqnarray} It follows directly from \eqref{eq:xi2-alpha} and \eqref{eq:xi2-beta} that \begin{equation} s(t) = s + \dfrac{a(\alpha+\beta)}{a^2+1} t. \label{eq:s(t)} \end{equation} Note that the same conclusion is obtained by using \eqref{eq:xi3-alpha} and \eqref{eq:xi3-beta}. By substituting the expression of $s(t)$ given in \eqref{eq:s(t)} into the equations \eqref{eq:xi2-alpha} -- \eqref{eq:xi2-beta}, one has \begin{eqnarray} \xi_2(v(t))= \xi_2(v) + \left( \dfrac{a^2 \alpha - \beta}{a^2+1} \right) t, \label{eq:xi2(v(t))} \\ \xi_3(v(t))= \xi_3(v) + \left( \dfrac{\alpha - a^2 \beta}{a^2+1} \right) t. \label{eq:xi3(v(t))} \end{eqnarray} From now on we assume that $v'(t)\neq0$ since, otherwise, we would have \[ \frac{s(t)}{a}=\frac{s}{a}+\alpha t=\frac{s}{a}+\beta t \quad\text{and}\quad as(t)=as+\alpha t=as+\beta t. \] But the equalities above imply that $a^2=1$, which contradicts the definition of a base curve in \eqref{eq:base}. Thus, it follows from \eqref{eq:xi2(v(t))} and \eqref{eq:xi3(v(t))} that \begin{eqnarray}\label{eq:xi_2exi_3} \xi_2'=\frac{a^2 \alpha-\beta}{a^2+1}\cdot\frac{1}{v'} \quad\text{and}\quad \xi_3'=\frac{\alpha-a^2 \beta}{a^2+1}\cdot\frac{1}{v'}. \end{eqnarray} Therefore, from \eqref{eq:xis} and \eqref{eq:xi_2exi_3} we obtain \begin{eqnarray}\label{eq:relxi_1} \cos^2(\xi_1(v))(a^2\alpha-\beta)=\sin^2(\xi_1(v))(\alpha-a^2\beta).
\end{eqnarray} As $a>1$, one has $a^2\alpha-\beta\neq0$ or $\alpha-a^2\beta\neq0$, and we conclude from \eqref{eq:relxi_1} that $\xi_1(v)$ is constant. In this case, there is a constant $b>1$ such that $\cos^2\xi_1=\dfrac{b^2}{1+b^2}$ and $\sin^2 \xi_1 = \dfrac{1}{1+b^2}$. Therefore, it follows from \eqref{eq:xis} that \begin{eqnarray}\label{eq:xi2-xi3} \xi_2(v)= \dfrac{1}{b^2} \xi_3(v)+d, \end{eqnarray} for some constant $d$. On the other hand, if $\cos\xi_1\neq0$, it follows from \eqref{eq:xis} that \begin{eqnarray}\label{eq:xi_2xi_3} (\xi_2'(v))^2=\tan^4\xi_1\cdot(\xi_3'(v))^2. \end{eqnarray} By substituting \eqref{eq:xi_2xi_3} in \eqref{eq:relxi_inu} we obtain \[ \tan^2\xi_1\cdot(\xi_3'(v))^2=\sin^2\nu, \] and this implies that we can choose $\xi_3(v)=bv$, and from \eqref{eq:xi3(v(t))} we obtain \begin{eqnarray}\label{eq:v(t)} v(t)=v+\frac{\alpha-a^2\beta}{b(a^2+1)}t. \end{eqnarray} Moreover, from \eqref{eq:xi2-xi3}, the equation $\xi_2(v(t)) = \dfrac{1}{b^2} \xi_3(v(t))+d$ implies that \begin{eqnarray}\label{eq:b-alpha-beta} \dfrac{1}{b^2} = \dfrac{a^2 \alpha - \beta}{\alpha-a^2 \beta}, \end{eqnarray} and from \eqref{eq:b-alpha-beta} we obtain the same relation \eqref{eq:alpha-beta} between $\alpha$ and $\beta$. This relation, when substituted in \eqref{eq:s(t)} and \eqref{eq:v(t)}, gives \[ s(t)=s+\frac{a(b^2-1)}{a^2b^2-1}\beta t \quad\text{and}\quad v(t)=v+\frac{b(1-a^2)}{a^2b^2-1}\beta t, \] that coincide with the expressions in \eqref{eq:z(t)w(t)}. Finally, from the relation \eqref{eq:xi2-xi3} we obtain $\xi_2(v)=\frac{v}{b}-\frac{\pi}{2}$. By taking $\xi=\frac{\pi}{2}$ and $\xi_1(v)=\arcsin\left(\frac{1}{\sqrt{1+b^2}}\right)$, the new parametrization $F(u,v)$ thus obtained coincides with $Y(u,v)$ given in \eqref{eq:expY}, up to isometries of $\mathbb{S}^3$ and linear reparametrization. The conclusion follows from Theorem \ref{teo:main1}.
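Solving \eqref{eq:b-alpha-beta} for $\alpha$ indeed recovers \eqref{eq:alpha-beta}; a quick symbolic check (our own sketch, in Python with sympy):

```python
import sympy as sp

a, b, beta = sp.symbols('a b beta', positive=True)
alpha = sp.symbols('alpha')

# Relation 1/b^2 = (a^2*alpha - beta)/(alpha - a^2*beta), solved for alpha
sol = sp.solve(sp.Eq(1/b**2, (a**2*alpha - beta)/(alpha - a**2*beta)), alpha)
assert len(sol) == 1
# The unique solution is alpha = (b^2 - a^2)/(a^2 b^2 - 1) * beta
assert sp.simplify(sol[0] - (b**2 - a**2)/(a**2*b**2 - 1)*beta) == 0
```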
\end{proof} \section{Conformally flat hypersurfaces} In this section, we present an application of the classification result for helicoidal flat surfaces in $\mathbb{S}^3$ to a geometric description of conformally flat hypersurfaces in four-dimensional space forms. The problem of classifying conformally flat hypersurfaces in space forms has been investigated for a long time, with special attention to $4$-dimensional space forms. In fact, any surface in $\mathbb{R}^3$ is conformally flat, since it can be parametrized by isothermal coordinates. On the other hand, Cartan \cite{Cartan} gave a complete classification of conformally flat hypersurfaces in an $(n+1)$-dimensional space form, with $n+1\geq5$. Such hypersurfaces are quasi-umbilic, i.e., one of the principal curvatures has multiplicity at least $n-1$. In the same paper, Cartan showed that quasi-umbilic hypersurfaces are conformally flat, but the converse does not hold. Since then, there has been an effort to obtain a classification of hypersurfaces with three distinct principal curvatures. Lafontaine \cite{Lafontaine} considered hypersurfaces of type $M^3 = M^2 \times I\subset\mathbb{R}^4$ and obtained the following classes of conformally flat hypersurfaces: (a) $M^3$ is a cylinder over a surface, where $M^2 \subset \mathbb{R}^3$ has constant curvature; (b) $M^3$ is a cone over a surface in the sphere, where $M^2 \subset\mathbb{S}^3$ has constant curvature; (c) $M^3$ is obtained by rotating a constant curvature surface $M^2 \subset \mathbb{H}^3 \subset \mathbb{R}^4$ of the hyperbolic space, where $\mathbb{H}^3$ is the half space model (see \cite{Suyama2} for more details).
Hertrich-Jeromin \cite{Jeromin1} established a correspondence between conformally flat hypersurfaces in space forms, with three distinct principal curvatures, and solutions $(l_1, l_2, l_3) : U \subset \mathbb{R}^3 \rightarrow \mathbb{R}^3$ of Lam\'e's system \cite{reflame} \begin{equation} \begin{array}{rcl} l_{i,x_jx_k} - \dfrac{l_{i,x_j} l_{j,x_k}}{l_j} - \dfrac{l_{i,x_k} l_{k,x_j}}{l_k} &=& 0, \\ \left( \dfrac{l_{i,x_j}}{l_j} \right)_{,x_j} + \left( \dfrac{l_{j,x_i}}{l_i} \right)_{,x_i} + \dfrac{l_{i,x_k} l_{j,x_k}}{l_k^2} &=& 0, \end{array} \label{lame} \end{equation} where $i, j, k$ are distinct indices. The solutions are required to satisfy the condition \begin{equation} \label{guich} l_1^2 - l_2^2 + l_3^2 = 0, \end{equation} known as the Guichard condition. In this case, the corresponding conformally flat hypersurface in $M^4_K$ is parametrized by curvature lines, with induced metric given by \[ g = e^{2u} \left\{ l_1^2 (dx_1)^2 + l_2^2 (dx_2)^2 + l_3^2 (dx_3)^2 \right\}. \] In \cite{santos} the second author and Tenenblat obtained solutions of Lam\'e's system (\ref{lame}) that are invariant under symmetry groups.
Among the solutions, there are those that are invariant under the action of the 2-dimensional subgroup of translations and dilations and depend only on two variables: \begin{enumerate} \item[(a)] $l_1 = \lambda_1, \; l_2 = \lambda_1 \cosh (b\xi + \xi_0), \; l_3 = \lambda_1 \sinh (b\xi + \xi_0)$, where $\xi = \alpha_2 x_2 + \alpha_3 x_3$, $\alpha_2^2 + \alpha_3^2 \neq 0$ and $b,\, \xi_0\in \mathbb{R}$; \item[(b)] $ l_2 = \lambda_2, \; l_1 = \lambda_2 \cos \varphi(\xi) , \; l_3 = \lambda_2 \sin \varphi(\xi) $, where $\xi = \alpha_1 x_1 + \alpha_3 x_3$, $\alpha_1^2 + \alpha_3^2 \neq 0$ and $\varphi$ is one of the following functions: \begin{enumerate} \item[(b.1)] $\varphi(\xi) = b \xi + \xi_0$, if $\alpha_1^2 \neq \alpha_3^2$, where $\xi_0, \,b\in \mathbb{R}$; \item[(b.2)] $\varphi$ is any function of $\xi$, if $\alpha_1^2 = \alpha_3^2$; \end{enumerate} \item[(c)] $l_3 = \lambda_3, \; l_2 = \lambda_3 \cosh (b\xi + \xi_0), \; l_1 = \lambda_3 \sinh (b\xi + \xi_0)$, where $\xi = \alpha_1 x_1 + \alpha_2 x_2$, $\alpha_1^2 + \alpha_2^2 \neq 0$ and $b,\,\xi_0\in\mathbb{R}$. \end{enumerate} It is known (see \cite{Suyama2}) that the solutions that do not depend on one of the variables are associated to the products given by Lafontaine. For the solutions given in (b), a further geometric description can be obtained from the classification result for helicoidal flat surfaces in $\mathbb{S}^3$.
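Family (b.1) can be verified directly against \eqref{lame} and \eqref{guich}; the following is our own symbolic sketch in Python with sympy (the symbol names are ours):

```python
import itertools
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
lam, b, a1, a3, xi0 = sp.symbols('lambda2 b alpha1 alpha3 xi0', positive=True)
phi = b*(a1*x1 + a3*x3) + xi0   # family (b.1): varphi(xi) = b*xi + xi0

l = {1: lam*sp.cos(phi), 2: lam, 3: lam*sp.sin(phi)}
x = {1: x1, 2: x2, 3: x3}

# Guichard condition: l1^2 - l2^2 + l3^2 = 0
assert sp.simplify(l[1]**2 - l[2]**2 + l[3]**2) == 0

# Both families of Lame equations, for all distinct index triples (i, j, k)
for i, j, k in itertools.permutations((1, 2, 3)):
    e1 = (sp.diff(l[i], x[j], x[k])
          - sp.diff(l[i], x[j])*sp.diff(l[j], x[k])/l[j]
          - sp.diff(l[i], x[k])*sp.diff(l[k], x[j])/l[k])
    e2 = (sp.diff(sp.diff(l[i], x[j])/l[j], x[j])
          + sp.diff(sp.diff(l[j], x[i])/l[i], x[i])
          + sp.diff(l[i], x[k])*sp.diff(l[j], x[k])/l[k]**2)
    assert sp.simplify(e1) == 0
    assert sp.simplify(e2) == 0
```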
These solutions are associated to conformally flat hypersurfaces that are conformal to the products $M^2 \times I \subset \mathbb{R}^4$ given by \[ M^2 \times I = \left\{ tp:0<t<\infty, p\in M^2 \subset \mathbb{S}^3 \right\}, \] where $M^2$ is a flat surface in $\mathbb{S}^3$, parametrized by lines of curvature, whose first and second fundamental forms are given by \begin{equation} \begin{array}{rcl} \label{firstffsphere} I &=& \sin^2 (\xi + \xi_0) dx_1^2 + \cos^2 (\xi + \xi_0) dx_3^2, \\ II &=& \sin (\xi + \xi_0) \cos (\xi + \xi_0) (dx_1^2 - dx_3^2), \end{array} \end{equation} which are, up to a linear change of variables, the fundamental forms considered in this paper. Therefore, as an application of the characterization of helicoidal flat surfaces in terms of first and second fundamental forms, one has the following theorem: \begin{thm} \label{characterization.helicoidal} Let $ l_2 = \lambda_2, \; l_1 = \lambda_2 \cos (\xi + \xi_0) , \; l_3 = \lambda_2 \sin (\xi + \xi_0) $ be a solution of Lam\'e's system, where $\xi = \alpha_1 x_1 + \alpha_3 x_3$ and $\alpha_1, \, \alpha_3, \, \lambda_2, \, \xi_0$ are real constants with $\alpha_1\cdot\alpha_3 \neq 0$. Then the associated conformally flat hypersurfaces are conformal to the product $M^2 \times I$, where $M^2 \subset \mathbb{S}^3$ is locally congruent to a helicoidal flat surface. \end{thm} \bibliographystyle{acm}
\section{Introduction} Let $M=\mathbb{H}^3/ \Gamma$, $\Gamma \subset \mathrm{PSL}_2(\mathbb{C})$ be an orientable hyperbolic $3$--manifold, and let $f\co F \rightarrow M$ be a proper immersion of a connected, orientable surface of genus at least $2$ such that $f_{*}\co\pi_1(F) \rightarrow \Gamma$ is injective. $F$ (or more precisely $(f,F)$) is said to be {\em totally geodesic\/} if $f_{*}(\pi_{1}(F)) \subset \Gamma$ is conjugate into $\mathrm{PSL}_2(\mathbb{R})$. Thurston and Bonahon have described the geometry of surface groups in hyperbolic $3$--manifolds as falling into three classes: doubly degenerate groups, quasi-Fuchsian groups and groups with accidental parabolics. The class of totally geodesic surface groups is a ``positive codimension'' subclass of the quasi-Fuchsian groups, so one may expect that hyperbolic $3$--manifolds containing totally geodesic surface groups are special. Indeed, the presence of a totally geodesic surface in a hyperbolic $3$--manifold has important topological implications. Long showed that immersed totally geodesic surfaces lift to embedded nonseparating surfaces in finite covers \cite{Long}, proving the virtual Haken and virtually positive $\beta_1$ conjectures for hyperbolic manifolds containing totally geodesic surfaces. Given this, it is natural to wonder about the extent to which topology constrains the existence of totally geodesic surfaces in hyperbolic $3$--manifolds. Menasco--Reid have made the following conjecture \cite{MeR}: \begin{conj}[Menasco--Reid] No hyperbolic knot complement in $\mathrm{S}^3$ contains a closed embedded totally geodesic surface. \end{conj} They proved this conjecture for alternating knots. The Menasco--Reid conjecture has been shown true for many other classes of knots, including almost alternating knots \cite{Aetal}, Montesinos knots \cite{Oer}, toroidally alternating knots \cite{Ad}, $3$--bridge and double torus knots \cite{IO} and knots of braid index 3 \cite{LP} and 4 \cite{Ma}. 
For a knot in one of the above families, any closed essential surface in its complement has a topological feature which obstructs it from being even quasi-Fuchsian. In general, however, one cannot hope to find such obstructions. Adams--Reid have given examples of closed embedded quasi-Fuchsian surfaces in knot complements which volume calculations prove to be not totally geodesic \cite{AR}. On the other hand, C\,Leininger has given evidence for a counterexample by constructing a sequence of hyperbolic knot complements in $\mathrm{S}^3$ containing closed embedded surfaces whose principal curvatures approach 0 \cite{Le}. In this paper, we take an alternate approach to giving evidence for a counterexample. \begin{thm} \label{cusped} There exist infinitely many hyperbolic knot complements in rational homology spheres containing closed embedded totally geodesic surfaces. \end{thm} This answers a question of Reid---recorded as Question 6.2 in \cite{Le}---giving counterexamples to the natural generalization of the Menasco--Reid conjecture to knot complements in rational homology spheres. Thus the conjecture, if true, must reflect a deeper topological feature of knot complements in $\mathrm{S}^3$ than simply their rational homology. Prior to proving \fullref{cusped}, in \fullref{sec2} we prove the following theorem. \begin{thm} \label{closed} There exist infinitely many hyperbolic rational homology spheres containing closed embedded totally geodesic surfaces. \end{thm} This seems of interest in its own right, and the proof introduces many of the techniques we use in the proof of \fullref{cusped}. Briefly, we find a two-cusped hyperbolic manifold containing an embedded totally geodesic surface which remains totally geodesic under certain orbifold surgeries on its boundary slopes and use the Alexander polynomial to show that branched covers of these surgeries have no rational homology. In \fullref{sec3} we prove \fullref{cusped}. 
In the final section, we give some idea of further directions and questions suggested by our approach. \subsubsection*{Acknowledgements} The author thanks Cameron Gordon, Richard Kent, Chris Lein\-inger, Jessica Purcell and Alan Reid for helpful conversations. The author also thanks the Centre Interfacultaire Bernoulli at EPF Lausanne for their hospitality during part of this work. \section[Theorem 2]{\fullref{closed}}\label{sec2} Given a compact hyperbolic manifold $M$ with totally geodesic boundary of genus $g$, gluing it to its mirror image $\wwbar{M}$ along the boundary yields a closed manifold $DM$---the ``double'' of $M$---in which the former $\partial M$ becomes an embedded totally geodesic surface. One limitation of this construction is that this surface contributes half of its first homology to the first homology of $DM$, so that $\beta_1(DM) \geq g$. This is well known, but we include an argument to motivate our approach. Consider the relevant portion of the rational homology Mayer--Vietoris sequence for $DM$: \[ \cdots \rightarrow \mathrm{H}_1(\partial M,\mathbb{Q}) \stackrel{(i_*,-j_*)}{\longrightarrow} \mathrm{H}_1(M,\mathbb{Q}) \oplus \mathrm{H}_1(\wwbar{M},\mathbb{Q}) \rightarrow \mathrm{H}_1(DM,\mathbb{Q}) \rightarrow 0 \] The labeled maps $i_*$ and $j_*$ are the maps induced by inclusion of the surface into $M$ and $\wwbar{M}$, respectively. Recall that by the ``half lives, half dies'' lemma (see eg Hatcher \cite[Lemma 3.5]{Ha}), the dimension of the kernel of $i_*$ is equal to $g$. Hence $\beta_1(M) \geq g$. The gluing isometry $\partial M \rightarrow \partial \wwbar{M}$ (the identity) extends over $M$, thus $\mathrm{Ker}\ i_*=\mathrm{Ker}\ j_*$, and so $\mathrm{dim}\ \mathrm{Im}(i_*,-j_*)=g$. Hence \[ \mathrm{H}_1(DM, \mathbb{Q}) \cong \frac{\mathrm{H}_1(M, \mathbb{Q}) \oplus \mathrm{H}_1(\wwbar{M}, \mathbb{Q})}{\mathrm{Im}((i_*,-j_*))} \] has dimension at least $g$. 
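The dimension count in the Mayer--Vietoris argument above can be illustrated numerically; in this toy sketch (our own, with hypothetical matrices chosen only to have the stated ranks) we take $g=2$, $\beta_1(M)=g$, and $j_*=i_*$:

```python
import sympy as sp

g = 2  # toy genus
# Hypothetical i_*: H_1(bd M; Q) = Q^{2g} -> H_1(M; Q) = Q^g with
# g-dimensional kernel ("half lives, half dies"); here j_* = i_*.
i_star = sp.Matrix([[1, 0, 0, 0],
                    [0, 1, 0, 0]])
# Mayer-Vietoris map (i_*, -j_*): Q^{2g} -> H_1(M; Q) + H_1(Mbar; Q)
mv = sp.Matrix.vstack(i_star, -i_star)

assert i_star.cols - i_star.rank() == g   # kernel of i_* has dimension g
assert mv.rank() == g                     # image of (i_*, -j_*) has dimension g
# dim H_1(DM; Q) = 2*beta_1(M) - dim Im(i_*, -j_*) = 2g - g = g
assert 2*g - mv.rank() == g
```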
Considering the above picture gives hope that by cutting $DM$ along $\partial M$ and regluing via some isometry $\phi\co \partial M \rightarrow \partial M$ to produce a ``twisted double'' $D_{\phi}M$, one may reduce the homological contribution of $\partial M$. For then $j=i \circ \phi$, and if $\phi_*$ moves the kernel of the inclusion off of itself, then the argument above shows that the homology of $D_{\phi}M$ will be reduced. Below we apply this idea to a family of examples constructed by Zimmerman and Paoluzzi \cite{ZP} which build on the ``Tripos'' example of Thurston \cite{Th}. \begin{figure}[ht!] \begin{center} \begin{picture}(0,0)% \includegraphics[scale=.8]{\figdir/Fig1}% \end{picture}% \setlength{\unitlength}{3315sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5354,2428)(1199,-2209) \put(6076,-2131){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$L$}% }}}} \put(3961,-2131){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$L_0$}% }}}} \put(1891,-2131){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$T$}% }}}} \end{picture}% \caption{The tangle $T$ and its double and twisted double.} \label{links} \end{center} \end{figure} The complement in the ball of the tangle $T$ in \fullref{links} is one of the minimal volume hyperbolic manifolds with totally geodesic boundary, obtained as an identification space of a regular ideal octahedron \cite{Mi}. We will denote it by $O_{\infty}$. For $n \geq 3$, the orbifold $O_n$ with totally geodesic boundary consisting of the ball with cone locus $T$ of cone angle $2\pi/n$ has been explicitly described by Zimmerman and Paoluzzi \cite{ZP} as an identification space of a truncated tetrahedron. 
For each $k<n$ with $(k,n)=1$, Zimmerman and Paoluzzi describe a hyperbolic manifold $M_{n,k}$ which is an $n$--fold branched cover of $O_n$. Topologically, $M_{n,k}$ is the $n$--fold branched cover of the ball, branched over $T$, obtained as the kernel of $\langle x,y \rangle = \mathbb{Z} \oplus \mathbb{Z} \rightarrow \mathbb{Z}/n\mathbb{Z} = \langle t \rangle$ via $x \mapsto t$, $y \mapsto t^k$, where $x$ and $y$ are homology classes representing meridians of the two components of $T$. We recall a well-known fact about isometries of spheres with four cone points: \begin{fact} Let $S$ be a hyperbolic sphere with four cone points of equal cone angle $\alpha$, $0 \leq \alpha \leq 2\pi/3$, labeled $a$, $b$, $c$, $d$. Each of the following permutations of the cone points may be realized by an orientation-preserving isometry: \begin{align*} &(ab)(cd)& &(ac)(bd)& &(ad)(bc)& \end{align*} \end{fact} Using this fact and abusing notation, let $\phi$ be the isometry $(ab)(cd)$ of $\partial O_n$, with labels as in \fullref{links}. Doubling the tangle ball produces the link $L_0$ in \fullref{links}, and cutting along the separating $4$--punctured sphere and regluing via $\phi$ produces the link $L$, a \textit{mutant\/} of $L_0$ in the classical terminology. Note that $L$ and all of the orbifolds $D_{\phi} O_n$ contain the mutation sphere as a totally geodesic surface, by the fact above. $\phi$ lifts to an isometry ${}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu$ of $\partial M_{n,k}$, and the twisted double $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,k}$ is the corresponding branched cover over $L$. The homology of $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,k}$ can be described using the Alexander polynomial of $L$. The two variable Alexander polynomial of $L$ is \[ \Delta_L(x,y) = \frac{1}{x^3}(x-1)(xy-1)(y-1)^2(x-y). 
\] For the regular $\mathbb{Z}$--covering of $\mathrm{S}^3-L$ given by $x \mapsto t^k$, $y \mapsto t$, the Alexander polynomial is \[ \Delta_L^k(t)= (t-1)\Delta(t^k,t)= \frac{1}{t^{3k-1}}(t-1)^5\nu_{k-1}(t) \nu_k(t) \nu_{k+1}(t) \] where $\nu_k(t)=t^{k-1} + t^{k-2} + \cdots + t + 1$. By a theorem originally due to Sumners \cite{Su} in the case of links, the first Betti number of $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,k}$ is the number of roots shared by $\Delta_L^k(t)$ and $\nu_n(t)$. Since this number is $0$ for many $n$ and $k$, we have a more precise version of \fullref{closed}. \begin{thm1} For $n>3$ prime and $k \neq 0,1,n-1$, the manifold $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,k}$ is a hyperbolic rational homology sphere containing an embedded totally geodesic surface. \end{thm1} The techniques used above are obviously more generally applicable. Given any hyperbolic two-string tangle in a ball with totally geodesic boundary, one may double it to get a $2$--component hyperbolic link in $\mathbb{S}^3$ and then mutate along the separating $4$--punctured sphere by an isometry. By the hyperbolic Dehn surgery theorem and the fact above, for large enough $n$, $(n,0)$ orbifold surgery on each component will yield a hyperbolic orbifold with a separating totally geodesic orbisurface. Then $n$--fold manifold branched covers can be constructed as above. One general observation about such covers follows from the following well-known fact, originally due to Conway: \begin{fact} The one variable Alexander polynomial of a link is not altered by mutation; ie, \[ \Delta_{L_0}(t,t) = \Delta_L(t,t) \] when L is obtained from $L_0$ by mutation along a $4$--punctured sphere. 
\end{fact} In our situation, this implies the following: \begin{cor1} A $2$--component link in $\mathrm{S}^3$ which is the twisted double of a tangle has no integral homology spheres among its abelian branched covers. \end{cor1} \begin{proof} A link $L_0$ which is the double of a tangle has Alexander polynomial $0$. Therefore by the fact above, \[ \Delta_L^1(t)=(t-1)\Delta_L(t,t)=(t-1)\Delta_{L_0}(t,t)=0, \] and so $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,1}$ has positive first Betti number by Sumners' theorem. The canonical abelian $n^2$--fold branched cover of $L$ covers $D_{{}\mskip2.5mu\tilde{\mskip-2.5mu \vphantom{t}\smash{\phi}\mskip-1mu}\mskip1mu}M_{n,1}$ and so also has positive first Betti number. Since the other $n$--fold branched covers of $L$ have $n$--torsion, no branched covers of $L$ have trivial first homology. \end{proof} \section[Theorem 1]{\fullref{cusped}}\label{sec3} In this section we construct hyperbolic knot complements in rational homology spheres containing closed embedded totally geodesic surfaces. The following ``commutative diagram'' introduces the objects involved in the construction and the relationships between them. \[ \disablesubscriptcorrection\xysavmatrix{ N_n \ar[rr]^{\mbox{\tiny{Dehn}}}_{\mbox{\tiny{filling}}} \ar[dd] && M_n \ar[rr]^{\mbox{\tiny{Dehn}}}_{\mbox{\tiny{filling}}} \ar[dd] && S_n \\ && && \\ N \ar[rr]^{\mbox{\tiny{orbifold}}}_{\mbox{\tiny{filling}}} && O_n && \\} \] \fullref{cusped} may now be more precisely stated as follows. \begin{thm1} For each $n \geq 3$ odd, $O_n$ is a one-cusped hyperbolic orbifold containing a totally geodesic sphere with four cone points of order $n$, $M_n$ is a branched covering of $O_n$ which is a one-cusped hyperbolic manifold, and $S_n$ is a rational homology sphere. \end{thm1} Before beginning the proof, we give a brief sketch of the strategy. 
We give an explicit polyhedral construction of a three-cusped hyperbolic manifold $N$ containing an embedded totally geodesic $4$--punctured sphere which intersects two of the cusps. For $n \geq 3$, we give the polyhedral decomposition of the orbifold $O_n$ resulting from $n$--fold orbifold surgery on the boundary slopes of this $4$--punctured sphere. From this it is evident that $O_n$ is hyperbolic and the sphere remains totally geodesic. For odd $n \geq 3$, we prove that $O_n$ has a certain one-cusped $n$--fold manifold cover $M_n$ with a surgery $S_n$ which is a rational homology sphere. This is accomplished by adapting an argument of Sakuma \cite{Sa} to relate the homology of the $n$--fold cover $N_n \rightarrow N$, corresponding to $M_n \rightarrow O_n$, to the homology of $S_n$. $M_n$ is thus a hyperbolic knot complement in a rational homology sphere, containing the closed embedded totally geodesic surface which is a branched covering of the totally geodesic sphere with four cone points in $O_n$. \begin{remark} It follows from the construction that the ambient rational homology sphere $S_n$ covers an orbifold produced by $n$--fold orbifold surgery on each cusp of $N$. Thus by the hyperbolic Dehn surgery theorem, $S_n$ is hyperbolic for $n \gg 0$. \end{remark} The proof occupies the remainder of the section. We first discuss the orbifolds $O_n$. For each $n$, the orbifold $O_n$ decomposes into the two polyhedra in \fullref{orbipoly}. Realized as a hyperbolic polyhedron, $P_a^{(n)}$ is composed of two truncated tetrahedra, each of which has two opposite edges of dihedral angle $\pi/2$ and all other dihedral angles $\pi/2n$, glued along a face. This decomposition is indicated in \fullref{orbipoly} by the lighter dashed and dotted lines. The polyhedron $P_b^{(n)}$ has all edges with dihedral angle $\pi/2$ except for those labeled otherwise; realized as a hyperbolic polyhedron, it has all combinatorial symmetries and all circled vertices at infinity.
By Andreev's theorem, polyhedra with the desired properties exist in hyperbolic space. Certain face pairings (described below) of $P_a^{(n)}$ yield a compact hyperbolic orbifold with totally geodesic boundary a sphere with four cone points of cone angle $2\pi/n$. Faces of $P_b^{(n)}$ may be glued to give a one-cusped hyperbolic orbifold with a torus cusp and totally geodesic boundary isometric to the boundary of the gluing of $P_a^{(n)}$. $O_n$ is formed by gluing these orbifolds along their boundaries. \begin{figure}[ht!] \begin{center} \begin{picture}(0,0)% \includegraphics{\figdir/orbipoly}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5412,2830)(1852,-3149) \put(2229,-1591){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3840,-2382){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2n}$}% }}}} \put(3536,-1591){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3901,-831){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2n}$}% }}}} \put(2168,-862){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2n}$}% }}}} \put(2198,-2351){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2n}$}% }}}} \put(2989,-2412){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} \put(2746,-1044){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} 
\put(5693,-779){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} \put(5201,-1548){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0.447,0.431,0.427}$\frac{\pi}{n}$}% }}}} \put(6248,-480){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0.447,0.431,0.427}\adjustlabel<-15pt,-1pt>{$\frac{\pi}{n}$}}% }}}} \put(7101,-1528){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0.447,0.431,0.427}\adjustlabel<-3pt,0pt>{$\frac{\pi}{n}$}}% }}}} \put(5672,-2659){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} \put(4646,-1613){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} \put(6525,-1613){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{n}$}% }}}} \put(6268,-2402){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0.447,0.431,0.427}$\frac{\pi}{n}$}% }}}} \put(2881,-3031){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$P_a^{(n)}$}% }}}} \put(5626,-3076){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$P_b^{(n)}$}% }}}} \end{picture}% \caption{Cells for $O_n$} \label{orbipoly} \end{center} \end{figure} The geometric limit of the $O_n$ as $n \rightarrow \infty$ is $N$, a $3$--cusped manifold which decomposes into the two polyhedra in \fullref{idealpoly}. As above, realized as a convex polyhedron in hyperbolic space $Q_a$ has all circled vertices at infinity. 
The edge of $Q_a$ connecting face $A$ to face $C$ is finite length, as is the corresponding edge on the opposite vertex of $A$; all others are ideal or half-ideal and all have dihedral angle $\pi/2$. $Q_a$ has a reflective involution of order $2$ corresponding to the involution of $P_a^{(n)}$ interchanging the two truncated tetrahedra. The fixed set of this involution on the back face is shown as a dotted line, and notationally we regard $Q_a$ as having an edge there with dihedral angle $\pi$, splitting the back face into two faces $X_5$ and $X_6$. $Q_b$ is the regular all-right hyperbolic ideal cuboctahedron. \begin{figure}[ht!] \begin{center} \begin{picture}(0,0)% \includegraphics{\figdir/idealpoly1}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5340,2830)(1381,-2249) \put(1936,-1366){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_1$}% }}}} \put(3241,-1366){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_2$}% }}}} \put(3196,-466){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_3$}% }}}} \put(1936,-511){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_4$}% }}}} \put(1396,-241){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_5$}% }}}} \put(3781,-241){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$X_6$}% }}}} \put(2476,-961){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$A$}% }}}} \put(2566, 
29){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$B$}% }}}} \put(4591,-1591){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Y_1$}% }}}} \put(6031,-1546){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Y_2$}% }}}} \put(5896,-196){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Y_3$}% }}}} \put(4546,-151){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Y_4$}% }}}} \put(5131,-916){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$D$}% }}}} \put(5266,434){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$E$}% }}}} \put(3781,-916){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C$}% }}}} \put(5221,-2176){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_b$}% }}}} \put(2656,-2176){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_a$}% }}}} \put(6706,-871){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$F$}% }}}} \put(4231,-1906){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_1$}% }}}} \put(6481,344){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_3$}% }}}} \put(4231,344){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_4$}% }}}} \put(6526,-1906){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_2$}% }}}} \end{picture}% \caption{Cells for $N$} \label{idealpoly} 
\end{center} \end{figure} Another remark on notation: the face opposite a face labeled with only a letter should be interpreted as being labeled with that letter ``prime''. For instance, the leftmost triangular face of $Q_a$ has label $C'$. Also, each ``back'' triangular face of $Q_b$ takes the label of the face with which it shares a vertex. For example, the lower left back triangular face is $Y_1'$. We first consider face pairings of $Q_a$ producing a manifold $N_a$ with two annulus cusps and totally geodesic boundary. Let $r$, $s$ and $t$ be isometries realizing face pairings $X_1 \mapsto X_3$, $X_6 \mapsto X_4$ and $X_2 \mapsto X_5$, respectively. Poincar\'{e}'s polyhedron theorem yields a presentation \[ \langle\ r,s,t\ \,|\,\ rst=1\ \rangle \] for the group generated by $r$, $s$ and $t$. Note that this group is free on two generators, say $s$ and $t$, since the relation gives $r=t^{-1}s^{-1}$. Choose as the ``boundary subgroup'' (among all possible conjugates) the subgroup fixing the hyperbolic plane through the face $A$. A fundamental polyhedron for this group and its face-pairing isometries are shown in \fullref{faces}. Note that the boundary is a $4$--punctured sphere, and two of the three generators listed are the parabolics $t^{-1}s^{-1}ts^{-1}$ and $sts^{-1}t$, which generate the two annulus cusp subgroups of $\langle\,s,t\,\rangle$. We now consider $Q_b$ and the $3$--cusped quotient manifold $N_b$. For $i \in \{1,2,3,4\}$, with indices taken mod $4$, let $f_i$ be the isometry pairing the face $Y_i \rightarrow Y_{i+1}'$ so that $v_i \mapsto v_{i+1}$. Let $g_1$ be the \textit{hyperbolic\/} isometry (that is, without twisting) sending $E \rightarrow E'$ and $g_2$ the hyperbolic isometry sending $F \rightarrow F'$.
The polyhedron theorem gives presentation \begin{align*} \langle f_1, f_2, f_3, f_4, g_1, g_2 \,|\, & f_1g_2f_2^{-1}g_1^{-1}=1, \\ & f_2^{-1}g_2^{-1}f_3g_1^{-1}=1, \\ & f_3g_2^{-1}f_4^{-1}g_1=1, \\ & f_4^{-1}g_2f_1g_1=1 \rangle \end{align*} for the group generated by the face pairings. The first three generators and relations may be eliminated from this presentation using Nielsen--Schreier transformations, yielding a presentation \[\langle\ f_4, g_1, g_2\ \,|\,\ f_4^{-1}[g_2,g_1]f_4[g_2,g_1^{-1}] = 1\ \rangle \] (our commutator convention is $[x,y]=xyx^{-1}y^{-1}$), where the first three relations yield \[ f_1=g_1g_2^{-1}g_1^{-1}f_4g_2g_1^{-1}g_2^{-1},\quad f_2=g_2^{-1}g_1^{-1}f_4g_2g_1^{-1}\quad \text{and}\quad f_3=g_1^{-1}f_4g_2. \] The second presentation makes clear that the homology of $N_b$ is free of rank $3$, since each generator has exponent sum $0$ in the relation. Faces $D$ and $D'$ make up the totally geodesic boundary of $N_b$. \fullref{faces} shows a fundamental polyhedron for the boundary subgroup fixing $D$, together with the face pairings generating the boundary subgroup. \begin{figure}[ht!] 
\begin{center} \begin{picture}(0,0)% \includegraphics{\figdir/closedcusp}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4206,1668)(2221,-3914) \put(3358,-2927){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_1$}% }}}} \put(4102,-3225){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_1^{-1}(v_2)$}% }}}} \put(4845,-2927){\makebox(0,0)[b]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\adjustlabel<-25pt,22pt>{$f_1^{-1}f_2^{-1}(v_3)$}}% }}}} \put(5217,-3225){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\adjustlabel<-5pt,-22pt>{$f_1^{-1}f_2^{-1}f_3^{-1}(v_4)$}}% }}}} \put(3284,-2370){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$bob$}% }}}} \put(5143,-3745){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\adjustlabel<-10pt,-5pt>{$rita$}}% }}}} \end{picture}% \caption{Closed cusp of $N_b$} \label{closedcusp} \end{center} \end{figure} $N_b$ has two annulus cusps, each with two boundary components on the totally geodesic boundary, and one torus cusp. A fundamental domain for the torus cusp in the horosphere centered at $v_1$ is shown in \fullref{closedcusp}, together with face pairing isometries generating the rank--$2$ parabolic subgroup fixing $v_1$. The generators shown are \[ bob=(f_4g_1^{-1})^2f_4g_2g_1^{-1}g_2^{-1} \quad\text{and}\quad rita=(f_4g_1^{-1})^3f_4g_2g_1^{-1}g_2^{-1}. \] Note that $(bob)^{-4}(rita)^3$ is trivial in homology. This and $rita \cdot (bob)^{-1}=f_4 g_1^{-1}$ together generate the cusp subgroup fixing $v_1$. 
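The homological claims just made (the relator of $N_b$ abelianizing to zero, so that $\mathrm{H}_1(N_b)$ is free of rank $3$, and $(bob)^{-4}(rita)^3$ being trivial in homology) reduce to exponent-sum bookkeeping. The following short script is a sketch of such a check, with the words transcribed from the formulas above; the encoding of words as lists of (generator, exponent) pairs is ours, chosen purely for illustration.

```python
from collections import Counter

def exp_sums(word):
    """Total exponent of each generator in a word given as (generator, exponent) pairs."""
    totals = Counter()
    for g, e in word:
        totals[g] += e
    return totals

# The relation of N_b: f4^-1 [g2,g1] f4 [g2,g1^-1], with [x,y] = x y x^-1 y^-1.
rel_Nb = ([("f4", -1)]
          + [("g2", 1), ("g1", 1), ("g2", -1), ("g1", -1)]
          + [("f4", 1)]
          + [("g2", 1), ("g1", -1), ("g2", -1), ("g1", 1)])
# Every exponent sum vanishes, so H_1(N_b) is free on the three generators.
assert all(v == 0 for v in exp_sums(rel_Nb).values())

# bob = (f4 g1^-1)^2 f4 g2 g1^-1 g2^-1,  rita = (f4 g1^-1)^3 f4 g2 g1^-1 g2^-1
tail = [("f4", 1), ("g2", 1), ("g1", -1), ("g2", -1)]
bob = [("f4", 1), ("g1", -1)] * 2 + tail
rita = [("f4", 1), ("g1", -1)] * 3 + tail

# Exponent sums are additive, so bob^-4 rita^3 has sums -4*(bob) + 3*(rita).
total = Counter()
for g, e in exp_sums(bob).items():
    total[g] += -4 * e
for g, e in exp_sums(rita).items():
    total[g] += 3 * e
assert all(v == 0 for v in total.values())  # (bob)^-4 (rita)^3 trivial in homology
```
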
For later convenience, we now switch to the conjugate of this subgroup by $f_4^{-1}$, fixing $v_4$ and refer to the conjugated elements $m=f_4^{-1}(f_4g_1^{-1})f_4=g_1^{-1}f_4$ and $l=f_4^{-1}((bob)^{-4}(rita)^3)f_4$ as a ``meridian-longitude'' generating set for the closed cusp of $N_b$. \begin{figure}[ht!] \begin{center} \begin{picture}(0,0)% \includegraphics{\figdir/faces}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5298,1983)(917,-1828) \put(2206,-1006){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$A$}% }}}} \put(1756,-1276){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$r^{-1}(C)$}% }}}} \put(2701,-1276){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$t^{-1}(B)$}% }}}} \put(4456,-1006){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_4^{-1}(D')$}% }}}} \put(5356,-1006){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$D$}% }}}} \put(5446,-376){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_4^{-1}f_3$}% }}}} \put(4231,-16){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_4^{-1}f_2$}% }}}} \put(4366,-1636){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_1^{-1}f_4$}% }}}} \put(3016,-16){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$t^{-2}s^{-2}$}% }}}} 
\put(2746,-1636){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$sts^{-1}t$}% }}}} \put(1576,-376){\makebox(0,0)[b]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$t^{-1}s^{-1}ts^{-1}$}% }}}} \put(1666,-736){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$s(B')$}% }}}} \put(2656,-691){\makebox(0,0)[b]{\smash{{\SetFigFont{10}{12.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$r(C')$}% }}}} \end{picture}% \caption{Totally geodesic faces of $N_a$ and $N_b$} \label{faces} \end{center} \end{figure} The totally geodesic $4$--punctured spheres on the boundaries of $N_a$ and $N_b$ are each the double of a regular ideal rectangle, and we construct $N$ by gluing $N_a$ to $N_b$ along them. Let us therefore assume that the polyhedra in \fullref{idealpoly} are realized in hyperbolic space in such a way that face $A$ of $Q_a$ and face $D$ of $Q_b$ are in the same hyperbolic plane, with $Q_a$ and $Q_b$ in opposite half-spaces. Further arrange so that the polyhedra are aligned in the way suggested by folding the page containing \fullref{faces} along the dotted line down the center of the figure. With this arrangement, Maskit's combination theorem gives a presentation for the amalgamated group: \begin{align*} \langle\ f_4, g_1, g_2, s, t \,|\, & f_4^{-1}[g_2,g_1]f_4[g_2,g_1^{-1}] = 1, \\ & t^{-2}s^{-2}=f_4^{-1}g_2^{-1}g_1^{-1}f_4g_2g_1^{-1}, \\ & sts^{-1}t=g_2g_1g_2^{-1}f_4^{-1}g_1g_2g_1^{-1}f_4, \\ & t^{-1}s^{-1}ts^{-1}=f_4^{-1}g_1^{-1}f_4g_2\ \rangle \end{align*} The first relation comes from $N_b$ and the others come from setting the boundary face pairings equal to each other. Observe that the last relation can be solved for $g_2$. 
Using Nielsen--Schreier transformations to eliminate $g_2$ and the last relation results in the presentation: \begin{align*} \langle\ f_4, g_1, s, t\ \,|\, &f_4^{-2}g_1[f_4t^{-1}s^{-1}ts^{-1},g_1]f_4[f_4t^{-1}s^{-1}ts^{-1},g_1^{-1}]g_1^{-2}f_4g_1 = 1 \\ &t^{-2}s^{-2}=f_4^{-1}st^{-1}stf_4^{-1}g_1^{-1}f_4^2t^{-1}s^{-1}ts^{-1}g_1^{-1},\\ &sts^{-1}t=f_4^{-1}g_1f_4t^{-1}s^{-1}ts^{-1}g_1st^{-1}stf_4^{-2}g_1f_4t^{-1}s^{-1}ts^{-1}g_1^{-1}f_4\ \rangle\end{align*} Replace $g_1$ with the meridian generator $m=\smash{g_1^{-1}}f_4$ of the closed cusp of $N_b$ and add generators $m_1=f_4^{-1}mf_4$ and $m_2=st^{-1}stm_1t^{-1}s^{-1}ts^{-1}$, each conjugate to $m$, yielding: \begin{align*} \langle\ f_4,m,m_1,m_2,s,t\ \,|\, & m_1=f_4^{-1}mf_4,\ m_2=st^{-1}stm_1t^{-1}s^{-1}ts^{-1} \\ &m_1^{-1}t^{-1}s^{-1}ts^{-1}f_4m^{-1}m_2mf_4^{-1}st^{-1}stm_1m^{-1}=1 \\ &s^2t^2f_4^{-1}m_2mf_4^{-1}=1 \\ &t^{-1}st^{-1}s^{-1}m^{-1}f_4t^{-1}s^{-1}ts^{-1}f_4m^{-1}m_2^{-1}m=1\ \rangle \end{align*} Note that after abelianizing, each of the last two relations expresses $f_4^2=m^2s^2t^2$, since $m_1$ and $m_2$ are conjugate to $m$ and therefore identical in homology. In light of this, we replace $f_4$ by $u=t^{-1}s^{-1}f_4m^{-1}$, which has order $2$ in homology. This yields the presentation:\begin{align} \nonumber \langle\ m,m_1,m_2,s,t,u\ \,|\, \\ &m_1^{-1}m^{-1}u^{-1}t^{-1}s^{-1}mstum=1 \\ &m_2^{-1}st^{-1}stm_1t^{-1}s^{-1}ts^{-1}=1 \\ &m_1^{-1}t^{-1}s^{-1}t^{2}um_2u^{-1}t^{-2}stm_1m^{-1}=1 \\ &s^2t^2m^{-1}u^{-1}t^{-1}s^{-1}m_2u^{-1}t^{-1}s^{-1}=1 \\ &t^{-1}st^{-1}s^{-1}m^{-1}stumt^{-1}s^{-1}t^2um_2^{-1}m=1\ \rangle \end{align} Let $R_i$ denote the relation labeled $(i)$ in the presentation above. In the abelianization, $R_1$ sets $m_1=m$, $R_2$ sets $m_2=m_1$, $R_3$ disappears, and the last two relations set $u^2=1$. Therefore \[ \mathrm{H}_1(N) \cong \mathbb{Z}^3 \oplus \mathbb{Z}/2\mathbb{Z} = \langle m \rangle \oplus \langle s \rangle \oplus \langle t \rangle \oplus \langle u \rangle. 
\] (In this paper we will generally blur the distinction between elements of $\pi_1$ and their homology classes.) The boundary slopes of the totally geodesic $4$--punctured sphere coming from $\partial N_a$ and $\partial N_b$ are represented in $\pi_1(N)$ by $t^{-1}s^{-1}ts^{-1}$ and $sts^{-1}t$. Let $O_n$ be the finite volume hyperbolic orbifold produced by performing face identifications on $P^{(n)}_a$ and $P^{(n)}_b$ corresponding to those on $Q_a$ and $Q_b$. $O_n$ is geometrically produced by $n$--fold orbifold filling on each of the above boundary slopes of $N$. Appealing to the polyhedral decomposition, we see that the separating $4$--punctured sphere remains totally geodesic, becoming a sphere with four cone points of order $n$. Our knots in rational homology spheres are certain manifold covers of the $O_n$. In order to understand the homology of these manifold covers, we compute the homology of the corresponding abelian covers of $N$. Let $p\co\widetilde{N} \rightarrow N$ be the maximal free abelian cover; that is, $\widetilde{N}$ is the cover corresponding to the kernel of the map $\pi_1(N) \rightarrow \mathrm{H}_1(N) \rightarrow \mathbb{Z}^3 =\langle x,y,z \rangle$ given by: \begin{align*} & m \mapsto x & & s \mapsto y & & t \mapsto z & & u \mapsto 1 & \end{align*} Let $X$ be a standard presentation $2$--complex for $\pi_1(N)$ and $\widetilde{X}$ the $2$--complex covering $X$ corresponding to $\widetilde{N} \rightarrow N$. Then the first homology and Alexander module of $\widetilde{X}$ are naturally isomorphic to those of $\widetilde{N}$, since $N$ is homotopy equivalent to a cell complex obtained from $X$ by adding cells of dimension three and above. The covering group $\mathbb{Z}^3$ acts freely on the chain complex of $\widetilde{X}$, so that it is a free $\mathbb{Z}[x,x^{-1},y,y^{-1},z,z^{-1}]$--module. 
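As a sanity check on the abelianization of presentation (1)--(5), one can compute the invariant factors of the exponent-sum matrix of the relators $R_1,\dots,R_5$ directly. The sketch below recovers $\mathrm{H}_1(N) \cong \mathbb{Z}^3 \oplus \mathbb{Z}/2\mathbb{Z}$ via determinantal divisors; the matrix entries were transcribed by hand from the relator words, so the script should be read as an illustration rather than part of the proof.

```python
from itertools import combinations
from math import gcd
from functools import reduce

# Abelianized relation matrix: rows are the exponent sums of R_1,...,R_5
# in the generators (m, m1, m2, s, t, u).
A = [
    [ 1, -1,  0, 0, 0,  0],   # R1: sets m1 = m
    [ 0,  1, -1, 0, 0,  0],   # R2: sets m2 = m1
    [-1,  0,  1, 0, 0,  0],   # R3: redundant given R1 and R2
    [-1,  0,  1, 0, 0, -2],   # R4: with m2 = m, sets u^2 = 1
    [ 1,  0, -1, 0, 0,  2],   # R5: likewise
]

def det(M):
    # cofactor expansion; fine for these small integer matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def dd(k):
    # k-th determinantal divisor: gcd of all k x k minors
    minors = (det([[A[i][j] for j in cols] for i in rows])
              for rows in combinations(range(5), k)
              for cols in combinations(range(6), k))
    return reduce(gcd, (abs(m) for m in minors))

assert dd(4) == 0 and dd(5) == 0              # the matrix has rank 3
d1, d2, d3 = dd(1), dd(2), dd(3)
assert (d1, d2 // d1, d3 // d2) == (1, 1, 2)  # invariant factors 1, 1, 2
# Hence H_1(N) = Z^(6-3) + Z/2 = Z^3 + Z/2, as claimed.
```
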
Below we give a presentation matrix for the Alex\-ander module of $\widetilde{X}$: \[ \left( \begin{array}{ccccc} \frac{1-yz+xyz}{x^2yz} & 0 & -1 & -\frac{y^2z^2}{x} & \frac{-1+yz+z^2}{xz^2} \\ -\frac{1}{x} & \frac{y^2}{x} & \frac{x-1}{x} & 0 & 0 \\ 0 & -\frac{1}{x} & \frac{z}{xy} & \frac{yz}{x} & -\frac{1}{x} \\ \frac{x-1}{x^2yz} & -\frac{(x-1)(y+z)}{xz} & \frac{x-1}{xyz} & \frac{y(x-z)}{x} & \frac{1-2x+xz}{xz^2} \\ \frac{x-1}{x^2z} & -\frac{y(x-1)(y-1)}{xz} & \frac{(x-1)(-1+y-z)}{xyz} & \frac{y(-x+xy+xyz-yz)}{x} & \frac{x+y-2xy}{xz^2} \\ \frac{x-1}{x^2} & 0 & -\frac{z(x-1)}{xy} & -\frac{yz(x+yz)}{x} & \frac{y+xz}{xz} \end{array} \right) \] The rows of the matrix above correspond to lifts of the generators for $\pi_1(N)$ sharing a basepoint, ordered as $\{\tilde{m},\wwtilde{m_1},\wwtilde{m_2},\tilde{s},\tilde{t},\tilde{u}\}$ reading from the top down. These generate $\mathrm{C}_1(\widetilde{X})$ as a $\mathbb{Z}[x,x^{-1},y,y^{-1},z,z^{-1}]$--module. The columns are the Fox free derivatives of the relations in terms of the generators, giving a basis for the image of $\partial \mathrm{C}_2(\widetilde{X})$. For a generator $g$ above, let $p_g$ be the determinant of the square matrix obtained by deleting the row corresponding to $\tilde{g}$. These polynomials are: \[ \begin{array}{lcl} p_m & = & -(x^{-4}z^{-2})(x-1)^2(y-1)(z-1)(y+z+4yz+y^2z+yz^2) \\ p_{m_1} & = & (x^{-4}z^{-2})(x-1)^2(y-1)(z-1)(y+z+4yz+y^2z+yz^2) \\ p_{m_2} & = & -(x^{-4}z^{-2})(x-1)^2(y-1)(z-1)(y+z+4yz+y^2z+yz^2) \\ p_s & = & (x^{-4}z^{-2})(x-1)(y-1)^2(z-1)(y+z+4yz+y^2z+yz^2) \\ p_t & = & -(x^{-4}z^{-2})(x-1)(y-1)(z-1)^2(y+z+4yz+y^2z+yz^2) \\ p_u & = & 0 \end{array} \] The Alexander polynomial of $\mathrm{H}_1(\widetilde{N})$ is the greatest common factor: \[ \Delta(x,y,z)=(x-1)(y-1)(z-1)(y+z+4yz+y^2z+yz^2) \] up to multiplication by an invertible element of $\mathbb{Z}[x,x^{-1},y,y^{-1},z,z^{-1}]$. 
Let $N_{\infty}$ be the infinite cyclic cover of $N$ factoring through $\widetilde{N}$ given by: \begin{align*} & m \mapsto x^2 & & s \mapsto x & & t \mapsto x & & u \mapsto 1 & \end{align*} Then the chain complex of $N_{\infty}$ is a $\Lambda$--module, where $\Lambda=\mathbb{Z}[x,x^{-1}]$, and specializing the above picture yields the Alexander polynomial \[ \begin{array}{lcl} \Delta_{\infty}(x) & = & (x^2-1)(x-1)^2(2x+4x^2+2x^3) \\ & = & 2x(x-1)^3(x+1)^3. \end{array} \] Let $N_n$ be the $n$--fold cyclic cover of $N$ factoring through $N_{\infty}$. For $n$ odd, $N_n$ has three cusps, since $m$, $sts^{-1}t$, and $t^{-1}s^{-1}ts^{-1}$ map to $x^{\pm2}$, which generates $\mathbb{Z}/n\mathbb{Z}$. Let $S_n$ be the closed manifold obtained by filling $N_n$ along the slopes covering $m$, $sts^{-1}t$, and $t^{-1}s^{-1}ts^{-1}$. \fullref{cusped} follows quickly from the following lemma. \begin{lem} For odd $n \geq 3$, $S_n$ is a rational homology sphere. \end{lem} \begin{proof} The proof is adapted from an analogous proof of Sakuma \cite{Sa} concerning link complements in $S^3$. The chain complex of $N_n$ is isomorphic to $C_*(N_{\infty}) \otimes (\Lambda/(x^n-1))$. Note that $x^n - 1 = (x-1)\nu_n$, where $\nu_n(x) = x^{n-1}+x^{n-2} + \cdots + x + 1$. Sakuma observes that the short exact sequence of coefficient modules \[ 0 \rightarrow \mathbb{Z} \stackrel{\nu_n}{\longrightarrow} \Lambda/(x^n-1) \rightarrow \Lambda/(\nu_n) \rightarrow 0, \] where the map on the left is multiplication by $\nu_n$, gives rise to a short exact sequence in homology \[ 0 \rightarrow \mathrm{H}_1(N) \stackrel{tr}{\longrightarrow} \mathrm{H}_1(N_n) \rightarrow \mathrm{H}_1(N_{\infty})/\nu_n \mathrm{H}_1(N_{\infty}) \rightarrow 0, \] where $tr$ is the transfer map, $tr(h)=h+x\cdot h+\cdots+x^{n-1}\cdot h$ for a homology class $h$. Define $\mathrm{H}_n=\mathrm{H}_1(N_{\infty})/\nu_n \mathrm{H}_1(N_{\infty})$.
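Both the displayed factorization of $\Delta_{\infty}$ and the fact, used in the next step, that $\Delta_{\infty}$ and $\nu_n$ share no roots for odd $n$ can be verified by exact rational evaluation. A sketch, assuming the formula for $\Delta(x,y,z)$ computed above:

```python
from fractions import Fraction as F

def Delta(x, y, z):
    # the three-variable Alexander polynomial computed in the previous section
    return (x - 1) * (y - 1) * (z - 1) * (y + z + 4*y*z + y**2*z + y*z**2)

def Delta_inf(t):
    return 2 * t * (t - 1)**3 * (t + 1)**3

# The specialization m -> x^2, s -> x, t -> x turns Delta(x, y, z) into
# Delta(t^2, t, t); check that this agrees with the factored form above.
# Both sides have degree at most 8, so agreement at several points suffices.
for t in (F(2), F(3), F(-5), F(1, 2), F(7, 3), F(-9, 4), F(11), F(13), F(17)):
    assert Delta(t**2, t, t) == Delta_inf(t)

def nu(n, t):
    return sum(t**i for i in range(n))

# Delta_inf vanishes only at t in {0, 1, -1}; nu_n is nonzero at each of
# these points when n is odd, so the two polynomials share no roots.
for n in (3, 5, 7, 9, 11):
    for t, expected in ((F(0), 1), (F(1), n), (F(-1), 1)):
        assert nu(n, t) == expected
```
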
Since the Alexander polynomial of $N_{\infty}$ does not share roots with $\nu_n$, $\mathrm{H}_n$ is a torsion $\mathbb{Z}$--module. The lemma follows from a comparison between $\mathrm{H}_1(S_n)$ and $\mathrm{H}_n$. The Mayer--Vietoris sequence implies that $\mathrm{H}_1(S_n)$ is obtained as the quotient of $\mathrm{H}_1(N_n)$ by the subgroup generated by transfers of the meridians. If $N$ were a link complement in $\mathrm{S}^3$, it would immediately follow that $\mathrm{H}_n = \mathrm{H}_1(S_n)$, since the homology of a link complement is generated by meridians. In our case we have \[ \mathrm{H}_n = \mathrm{H}_1(N_n) / \langle\ tr(m),tr(s),tr(t),tr(u)\ \rangle, \] whereas \[ \mathrm{H}_1(S_n) = \mathrm{H}_1(N_n) / \langle\ tr(m),tr(2s),tr(2t)\ \rangle. \] However, one observes that $\mathrm{H}_1(S_n) \rightarrow \mathrm{H}_n$ is an extension of degree at most $8$ (since $u$ has order $2$ in $\mathrm{H}_1(N)$), and so $\mathrm{H}_1(S_n)$ is also a torsion group. \end{proof} Let $M_n$ be the manifold obtained by filling two of the three cusps of $N_n$ along the slopes covering $sts^{-1}t$ and $t^{-1}s^{-1}ts^{-1}$. We have geometrically described $M_n$ as a branched cover of $O_n$, produced by $n$--fold orbifold filling along $sts^{-1}t$ and $t^{-1}s^{-1}ts^{-1}$. There is a closed totally geodesic surface in $M_n$ covering the totally geodesic sphere with four cone points in $O_n$. A closed manifold $S_n$ is produced by filling the remaining cusp of $M_n$ along the meridian covering $m$. Since $S_n$ is a rational homology sphere, $M_n$ is a knot complement in a rational homology sphere, and we have proven \fullref{cusped}. \section{Further directions} Performing ordinary Dehn filling along the three meridians of $N$ specified in the previous section yields a manifold $S$, which is easily seen to be the connected sum of two spherical manifolds.
The half arising from the truncated tetrahedra is the quotient of $\mathrm{S}^3$, regarded as the set of unit quaternions, by the subgroup $\langle i,j,k \rangle$. The half arising from the cuboctahedron is the lens space $\mathrm{L}(4,1)$. The manifolds $S_n$ may be regarded as $n$--fold branched covers over the three-component link $L$ in $S$ consisting of the cores of the filling tori. Since the meridians $t^{-1}s^{-1}ts^{-1}$ and $sts^{-1}t$ represent squares of primitive elements in the homology of $N$, any cover of $S$ branched over $L$ will have nontrivial homology of order $2$ coming from the transfers of $s$ and $t$. However, it is possible that techniques similar to those above may be used to create knot complements in integral homology spheres. If the manifold $N$ above---in addition to its geometric properties---had trivial nonperipheral \textit{integral\/} homology, then $S$ would be an integral homology sphere. Porti \cite{Po} has supplied a formula in terms of the Alexander polynomial for the order of the homology of a cover of an integral homology sphere branched over a link, generalizing work of Mayberry--Murasugi in the case of $S^3$ \cite{MM}. Using this formula, the order of the homology of branched covers of $S$ could be easily checked. In fact, the Menasco--Reid conjecture itself may be approached using a variation of these techniques. A genus $n-1$ handlebody may be obtained as the $n$--fold branched cover of a ball over the trivial $2$--string tangle, so knot complements in the genus $n-1$ handlebody may be obtained as $n$--fold branched covers over the trivial tangle of a knot complement in the ball. 
In analogy with \fullref{sec3}, allowing the complement of $T$ to play the role of $N_a$, we ask the following: \begin{ques} Does there exist a hyperbolic $3$--manifold with one rank $2$ and two rank $1$ cusps, which is the complement of a tangle in the ball, with totally geodesic boundary isometric to the totally geodesic boundary of the complement of the tangle $T$? \end{ques} Such a manifold would furnish an analog of the manifold $N_b$ in \fullref{sec3}. If the glued manifold $N$ were a $2$--component link complement in $\mathrm{S}^3$, with an unknotted component intersecting the totally geodesic Conway sphere, and this sphere remained totally geodesic under the right orbifold surgery along its boundary slopes, branched covers would give a counterexample to the Menasco--Reid conjecture. In any case, Thurston's hyperbolic Dehn surgery theorem implies that as $n \rightarrow \infty$, the resulting surfaces would have principal curvature approaching $0$, furnishing new examples of the phenomenon discovered by Leininger in \cite{Le} (although unlike Leininger's examples this would not give bounded genus). \bibliographystyle{gtart}
\section{Introduction} \label{s:introduction} The recent development of disentangled representation learning benefits various medical image analysis tasks including segmentation~\cite{liu2020variational,ouyang2021representation,duan2022cranial}, quality assessment~\cite{hays2022evaluating,shao2022evaluating}, domain adaptation~\cite{tang2021disentangled}, and image-to-image translation~(I2I)~\cite{dewey2020disentangled,zuo2021information,zuo2021unsupervised}. The underlying assumption of disentanglement is that a high-dimensional observation $x$ is generated by a latent variable $z$, where $z$ can be decomposed into independent factors with each factor capturing a certain type of variation of $x$, i.e., the probability density functions satisfy $p(z_1,z_2) = p(z_1)p(z_2)$ and $z = (z_1, z_2)$~\cite{locatello2020weakly}. For medical images, it is commonly assumed that $z$ is a composition of contrast~(i.e., acquisition-related) and anatomical information of image $x$~\cite{chartsias2019disentangled,dewey2020disentangled,zuo2021unsupervised,ouyang2021representation,liu2020variational}. While the contrast representations capture specific information about the imaging modality, acquisition parameters, and cohort, the anatomical representations are generally assumed to be invariant to image domains\footnote{Here we assume that images acquired from the same scanner with the same acquisition parameters are from the same domain.}. It has been shown that the disentangled domain-invariant anatomical representation is a robust input for segmentation~\cite{ouyang2021representation,chartsias2019disentangled,liu2020variational,ning2021new}, and the domain-specific contrast representation provides rich information about image acquisitions. Recombining the disentangled anatomical representation with the desired contrast representation also enables cross-domain I2I~\cite{zuo2021unsupervised,lyu2021dsegnet,ouyang2021representation}. 
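As a toy illustration of this generative assumption, one can sample the two latent factors independently and confirm that they are empirically uncorrelated; all names and functional forms below are illustrative only, not taken from any cited method.

```python
import math
import random

random.seed(0)

# Toy version of the generative assumption: a latent z = (z_anat, z_contrast)
# with p(z1, z2) = p(z1) p(z2), and an observation x = f(z).
def sample_latent():
    z_anat = random.gauss(0, 1)       # anatomy factor
    z_contrast = random.gauss(0, 1)   # acquisition/contrast factor
    return z_anat, z_contrast

def generate(z_anat, z_contrast):
    # each latent factor drives a separate aspect of the "image"
    return (math.tanh(z_anat), 0.5 * z_contrast + 1.0)

pairs = [sample_latent() for _ in range(20000)]
images = [generate(*z) for z in pairs]

# Independence of the factors shows up as (near-)zero empirical covariance.
m1 = sum(a for a, _ in pairs) / len(pairs)
m2 = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - m1) * (b - m2) for a, b in pairs) / len(pairs)
assert abs(cov) < 0.05
```
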
Disentangling anatomy and contrast in medical images is a nontrivial task. Locatello~et~al.~\cite{locatello2019challenging} showed that it is theoretically impossible to learn disentangled representations from independent and identically distributed observations without inductive bias~(e.g., domain labels or paired data). Accordingly, most research efforts have focused on learning disentangled representations with image pairs or auxiliary labels. Specifically, image pairs introduce an inductive bias that the two images differ in exactly one factor of $z$ and share the remaining information. Multi-contrast or multi-scanner images of the same subject are the most commonly used paired data. For example, T$_1$-weighted~(T$_1$-w)/T$_2$-weighted~(T$_2$-w) magnetic resonance~(MR) images~\cite{ouyang2021representation,dewey2020disentangled,zuo2021information}, MR/computed tomography images~\cite{chartsias2019disentangled}, or multi-scanner images~\cite{liu2020variational} of the same subject are often used in disentangling contrast and anatomy. The underlying assumption is that the paired images share the same anatomy~(domain-invariant) while differing in image contrast~(domain-specific). The requirement of paired training images with the same anatomy is a limitation due to the extra time and cost of data collection. Even though such paired data are available in some applications---for example, paired T$_1$-w and T$_2$-w images are routinely acquired in MR imaging---registration error, artifacts, and differences in resolution can violate the fundamental assumption that only one factor of $z$ changes between the pairs. As we show in Sec.~\ref{sec:experiments}, non-ideal paired data can have negative effects on disentangling. Labels, such as manual delineations or domain labels, usually provide explicit supervision on either the disentangled representations or the synthetic images generated by the I2I algorithm so that they capture the desired properties. 
Chartsias~et~al.~\cite{chartsias2019disentangled} used manual delineations to guide the disentangled anatomical representations to be binary masks of the human heart. In \cite{huang2018multimodal,jha2018disentangling,ning2021new}, researchers used domain labels with domain-specific image discriminators and a cycle consistency loss to encourage the synthetic images to be in the correct domain and the representations to be properly disentangled. Although these methods have shown encouraging performance in I2I tasks, there is still potential for improvement. The dependence on pixel-level annotations or domain labels limits applicability, since such labels are sometimes unavailable or inaccurate. Additionally, the cycle consistency loss is generally memory-consuming and has been found to be an overly strict constraint for I2I~\cite{amodio2019travelgan}, which limits scalability when there are many image domains. Can we overcome the limitations of current disentangling methods and design a model that does not rely on paired multi-modal images or labels and also scales to a large number of image domains? Deviating from most existing literature, which relies heavily on paired multi-modal images for training, we propose a \textbf{single modality disentangling framework.} Instead of using domain-specific image discriminators or cycle consistency losses, we design a novel distribution discriminator that is shared by all image domains for \textbf{theoretically and practically superior disentanglement~$p(z_1,z_2) = p(z_1)p(z_2)$.} Additionally, we present \textbf{an information-based metric to quantitatively measure disentanglement.} We demonstrate the broad applicability of the proposed method in a multi-site brain MR image harmonization task and a cardiac MR image segmentation task. Results show that our single-modal disentangling framework achieves performance comparable to methods that rely on multi-modal images for disentanglement. 
Furthermore, we demonstrate that the proposed framework can be incorporated into existing methods and trained with a mixture of paired and single modality data to further improve performance. \section{Method} \subsection{The single-modal disentangling network} \begin{figure}[!tb] \centering \includegraphics[width=0.65\textwidth]{fig/framework.png} \caption{The proposed disentangling framework has an encoder-decoder like structure. $x$ and $x'$ are two slices from different orientations of the same $3$D volume, which we assume embed the same contrast but different anatomy information. $I(\cdot;\cdot)$ denotes MI.} \label{fig:framework} \end{figure} \textbf{General framework} As shown in Fig.~\ref{fig:framework}, the proposed method has an autoencoder-like structure, where the inputs $x$ and $x'$ are two slices from different orientations of the same $3$D volume. $I(\cdot;\cdot)$ denotes mutual information~(MI). Note that the proposed method can be applied to datasets of other organs, as shown in Sec.~\ref{sec:experiments}; we use brain MR images for demonstration purposes. The anatomy encoder has a U-Net~\cite{ronneberger2015u} structure similar to \cite{chartsias2019disentangled,dewey2020disentangled,liu2020variational}. It generates one-hot encoded anatomical representations $a \in \mathbb{R}^{H\times W \times M}$, where $H$ and $W$ equal the spatial dimensions of $x$ and $M$ is the number of channels. The contrast encoder is composed of a sequence of convolutional blocks with output contrast representations $c \in \mathbb{R}^2$. $c$ is then broadcast to $H\times W \times 2$ and concatenated with $a$ as the input to the decoder, which is also a U-Net. The same networks are shared by all image domains, so the model size stays constant. 
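For concreteness, the broadcast-and-concatenate step that forms the decoder input can be sketched as follows (a minimal NumPy illustration; the channel count $M$ and the values of $a$ and $c$ are placeholders, not trained representations):

```python
import numpy as np

H, W, M = 288, 288, 8  # spatial size of x; M anatomy channels (M is illustrative)

# One-hot anatomical representation a and a 2-channel contrast code c.
a = np.zeros((H, W, M))
a[..., 0] = 1.0                 # toy one-hot content for every pixel
c = np.array([0.3, -1.2])       # contrast representation, c in R^2

# Broadcast c over the spatial grid and concatenate with a along channels,
# forming the decoder input of shape (H, W, M + 2).
c_map = np.broadcast_to(c, (H, W, 2))
decoder_input = np.concatenate([a, c_map], axis=-1)

print(decoder_input.shape)  # (288, 288, 10)
```

Every spatial location of the decoder input thus carries the same contrast code alongside its local anatomy channels.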
The overall objectives of the framework are 1) to disentangle anatomical and contrast information from input images without multi-modal paired data or labels and 2) to generate high quality synthetic images based on the disentangled representations. The above two objectives can be mathematically summarized into three terms~(also shown in Fig.~\ref{fig:framework}) that we show are optimizable by loss functions, i.e., \begin{equation} \label{eq:objective} \mathcal{L} \triangleq \lambda_1 ||x - \hat{x}||_1 - \lambda_2 I(c;x) + \lambda_3 I(a;c), \end{equation} where the $\lambda$'s are hyperparameters and $||x - \hat{x}||_1$ is the $l_1$ reconstruction loss that encourages the generated image $\hat{x}$ to be similar to the original image $x$. The second term $-I(c;x)$ encourages $c$ to capture as much information about $x$ as possible. Since $c$ is calculated from $x'$ instead of $x$ with $E_C(\cdot)$ and the information shared between $x'$ and $x$ is the contrast, maximizing $I(c;x)$ guides $E_C(\cdot)$ to extract contrast information. Lastly, minimizing $I(a;c)$ discourages $a$ and $c$ from capturing common information, thus encouraging disentanglement. Since $c$ captures contrast information by maximizing $I(c;x)$, this helps the anatomy encoder $E_A(\cdot)$ learn anatomical information from the input $x$. This also helps prevent a trivial solution in which $E_A(\cdot)$ and $D(\cdot)$ learn an identity transformation. In the next section, we show how the two MI terms are optimized in training. We also theoretically show that $a$ and $c$ are perfectly disentangled, i.e., $I(a;c)=0$, at the global minimum of $E_A(\cdot)$. \textbf{Maximizing $\boldsymbol{I(c;x)}$} We adopt DeepInfomax~\cite{hjelm2018learning} to maximize a lower bound of $I(c;x)$ during training. 
This approach uses the inequality \begin{equation} \label{eq:deep_infomax} I(c;x) \geq \hat{I}(c;x) \triangleq \mathbb{E}_{p(c,x)} \left[ -\text{sp} \left(-T(c,x) \right) \right] - \mathbb{E}_{p(c)p(x)} \left[ \text{sp} \left( T(c,x) \right) \right], \end{equation} where $\text{sp}(r) = \log(1+e^r)$ is the softplus function and $T(\cdot, \cdot)$ is a trainable auxiliary network $T(c, x) \colon \mathcal{C} \times \mathcal{X} \rightarrow \mathbb{R}$. The gradient calculated by maximizing Eq.~(\ref{eq:deep_infomax}) is applied to both $T(\cdot,\cdot)$ and $E_C(\cdot)$. Density functions $p(c,x)$ and $p(c)p(x)$ are sampled by selecting matched pairs $\{(c^{(i)},x^{(i)}) \}_{i=1}^N$ and shuffled pairs $\{(c^{(i)},x^{(l_i)}) \}_{i=1}^N$ from a mini-batch, with $N$ being the batch size and $c=E_C(x')$. Note that paired multi-modal images are not required; $x$ and $x'$ are 2D slices from the same volume with different orientations. Here, $i$ is the instance index within a mini-batch and $\{l_i\}_{i=1}^N$ is a permutation of the sequence $\{1,\dots,N\}$. \textbf{Minimizing $\boldsymbol{I(a;c)}$} Since Eq.~(\ref{eq:deep_infomax}) provides a lower bound of MI, it cannot be used to minimize $I(a;c)$. We therefore propose a novel way to \textit{minimize} $I(a;c)$. Inspired by the distribution matching property of generative adversarial networks~(GANs)~\cite{goodfellow2014generative}, we introduce a distribution discriminator~$U(\cdot,\cdot) \colon \mathcal{A} \times \mathcal{C} \rightarrow \mathbb{R}$ that distinguishes whether the inputs $(a,c)$ are sampled from the joint distribution $p(a,c)$ or the product of the marginals $p(a)p(c)$. Note that $c$ is detached from the computational graph while minimizing $I(a;c)$, so the GAN loss only affects $U(a,c)$ and $E_A(x)$, where $E_A(x)$ tries to ``fool'' $U(a,c)$ by generating anatomical representations $a$ such that $p(a,c)$ and $p(a)p(c)$ are indistinguishable. 
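A minimal numerical sketch of the matched/shuffled-pair estimator in Eq.~(\ref{eq:deep_infomax}) is given below, with a fixed toy critic $T(c,x) = c\,x$ standing in for the trainable network; all data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(r):
    # numerically stable sp(r) = log(1 + e^r)
    return np.logaddexp(0.0, r)

def mi_lower_bound(t_joint, t_marginal):
    """Eq. (2): I_hat(c;x) = E_{p(c,x)}[-sp(-T)] - E_{p(c)p(x)}[sp(T)]."""
    return np.mean(-softplus(-t_joint)) - np.mean(softplus(t_marginal))

# Toy critic T(c, x) = c * x in place of the trainable network T(.,.).
N = 2000
x = rng.normal(size=N)
perm = rng.permutation(N)              # shuffled pairs sample p(c)p(x)

c_dep = x + 0.1 * rng.normal(size=N)   # c carries information about x
c_ind = rng.normal(size=N)             # c independent of x

bound_dep = mi_lower_bound(c_dep * x, c_dep[perm] * x)
bound_ind = mi_lower_bound(c_ind * x, c_ind[perm] * x)
print(bound_dep > bound_ind)
```

Even with this fixed (untrained) critic, the bound is larger when $c$ and $x$ share information than when they are independent, which is exactly what the maximization of $\hat{I}(c;x)$ exploits during training.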
\begin{theorem} \label{theory:disentangle} $E_A(x)$ achieves its global minimum $\iff$ $p(a,c) = p(a)p(c)$. \end{theorem} Theorem~\ref{theory:disentangle} says that $a$ and $c$ are disentangled at the global minimum of $E_A(x)$. The minimax training objective between $E_A(x)$ and $U(a, c)$ is given by \begin{equation} \min_{E_A} \max_{U} \mathbb{E}_{p(a,c)}\left[ \log U(a,c) \right] + \mathbb{E}_{p(a)p(c)} \left[ \log \left( 1 - U(a,c) \right) \right], \end{equation} where $a = E_A(x)$. Density functions $p(a,c)$ and $p(a)p(c)$ are sampled by randomly selecting matched pairs $\{(a^{(i)},c^{(i)}) \}_{i=1}^N$ and shuffled pairs $\{(a^{(i)},c^{(l_i)}) \}_{i=1}^N$, respectively. \textbf{Implementation details} There are in total five networks shared by all domains. $E_A(\cdot)$ is a three-level~(i.e., three downsampling layers) U-Net with all convolutional layers using $3\times 3$ kernels followed by instance normalization and LeakyReLU. $D(\cdot)$ has a similar U-Net structure to $E_A(\cdot)$ with four levels. $E_C(\cdot)$ is a five-layer CNN with $4\times4$ convolutions with stride $2$. The kernel size of the last layer equals the spatial dimension of the features, making the output variable $c$ a two-channel feature with $H = W = 1$. Both $T(\cdot,\cdot)$ and $U(\cdot,\cdot)$ are five-layer CNNs. We use the Adam optimizer in all our experiments; our model consumed approximately $20$GB of GPU memory for training with batch size $16$ and image dimension $288 \times 288$. The learning rate is $10^{-4}$, $\lambda_1=1.0$, and $\lambda_2=\lambda_3=0.1$. \subsection{A new metric to evaluate disentanglement} Since the MI between two perfectly disentangled variables is zero, intuition would have us directly estimate the MI between the two latent variables to evaluate disentanglement. MINE~\cite{belghazi2018mine} provides an efficient way to estimate MI using a neural network. 
However, simply measuring MI is less informative since MI is not upper bounded; e.g., how much worse is $I(a;c) = 10$ compared with $I(a;c)=0.1$? Inspired by the fact that $I(a;c) = H(c) - H(c|a)$, where $H(\cdot)$ is entropy, we define a bounded ratio $R_I(a;c) \triangleq I(a;c) / H(c) \in [0,1]$ to evaluate disentanglement. $R_I(a;c)$ has a nice theoretical interpretation: the \textit{proportion} of information that $c$ shares with $a$. Different from MIG~\cite{chen2018isolating}, which requires the ground-truth factors of variation, the ratio $R_I(a;c)$ directly estimates how well the two latent variables are disentangled. When the distribution $p(c)$ is known, $H(c)$ can be directly calculated using the definition $H(c) = -\sum p(c) \log p(c)$. To estimate $H(c)$ when $p(c)$ is an arbitrary distribution, we follow~\cite{chan2019neural}. Accordingly, $R_I(a;c)$ for unknown $p(c)$ is given by \begin{align} R_I(a;c) &\triangleq \frac{I(a;c)}{H(c)} = \frac{\mathcal{D}_{\text{KL}} \left[ p(a,c) || p(a) p(c) \right] }{-\mathbb{E}_{p(c)} \left[ \log q(c) \right] - \mathcal{D}_{\text{KL}} \left[ p(c) || q \right] }, \end{align} where $z \sim q(z) = \mathcal{N}(0,\mathbb{I})$ is an auxiliary variable with the same dimension as $c$. The two Kullback-Leibler divergence terms can be estimated using MINE~\cite{belghazi2018mine}. The cross-entropy term is approximated by the empirical mean $-\frac{1}{N}\sum_{i=1}^N \log q(c_i)$. \section{Experiments and Results} \label{sec:experiments} We evaluate the proposed single-modal disentangling framework on two different tasks: harmonizing multi-site brain MR images and domain adaptation~(DA) for cardiac image segmentation. In the brain MR harmonization experiment, we also quantitatively evaluate the disentanglement achieved by the comparison methods. 
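As a sanity check of the $R_I$ metric used in these evaluations, it can be computed exactly for discrete toy distributions, where it is $0$ for a factorized joint and $1$ when $c$ is a deterministic function of $a$ (a sketch, not part of the actual MINE-based evaluation pipeline):

```python
import numpy as np

def entropy(p):
    # H(p) = -sum p log p over the support of p
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(p_joint):
    """I(a;c) from a discrete joint table p(a,c) of shape (|A|, |C|)."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pc = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return np.sum(p_joint[mask] * np.log(p_joint[mask] / (pa * pc)[mask]))

def R_I(p_joint):
    """R_I(a;c) = I(a;c) / H(c), bounded in [0, 1]."""
    return mutual_information(p_joint) / entropy(p_joint.sum(axis=0))

# Perfectly disentangled: the joint factorizes, so R_I = 0.
p_indep = np.outer([0.5, 0.5], [0.25, 0.75])
# Fully entangled: c is a deterministic copy of a, so R_I = 1.
p_copy = np.array([[0.5, 0.0], [0.0, 0.5]])

print(round(R_I(p_indep), 6), round(R_I(p_copy), 6))  # 0.0 1.0
```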
\begin{figure}[!tb] \centering \includegraphics[width=0.85\textwidth]{fig/harmonized_images_6.png} \caption{T$_1$-w brain MR images from ten sites~($S_1$ to $S_{10}$) are harmonized to $S_1$ using the proposed method. Six representative sites are shown due to the page limit. Green boxes highlight gray and white matter contrasts becoming more similar to the target after harmonization. Yellow boxes indicate harmonization error in the ventricles from the proposed method.} \label{fig:harmonized_images} \end{figure} \textbf{Brain MR harmonization} T$_1$-w and T$_2$-w MR images of the human brain collected at ten different sites~(denoted as $S_1$ to $S_{10}$) are used in our harmonization task. We use the datasets~\cite{ixi,lamontagne2019oasis,resnick2000one} and preprocessing reported in~\cite{zuo2021unsupervised} after communication with the authors. For each site, 10 and 5 subjects were used for training and validation, respectively. As shown in Fig.~\ref{fig:harmonized_images}, the original T$_1$-w images have different contrasts due to their acquisition parameters. We seek a T$_1$-w harmonization such that the image contrast of the source site matches the target site while maintaining the underlying anatomy. We have a set of held-out subjects~($N=10$) who traveled between $S_1$ and $S_2$ to evaluate harmonization. \textit{Compare with unpaired I2I method without disentangling.} We first compared our method trained only on T$_1$-w images (100\%\texttt{U}) with CycleGAN~\cite{zhu2017unpaired}, which conducts unpaired I2I based on image discriminators and a cycle consistency loss without disentanglement. Results in Table~\ref{tab:compare_harmonization} show that our (100\%\texttt{U}) outperforms CycleGAN with statistical significance~($p<0.01$) in a paired Wilcoxon test. 
\textit{Does single-modal disentangling perform as well as multi-modal disentangling?} We then compared our method with two disentangling methods that use paired T$_1$-w and T$_2$-w images for training. Specifically, Adeli~et~al.~\cite{adeli2021representation} learn latent representations that are mean independent of a protected variable. In our application, $a$ and $c$ are the latent and protected variables, respectively. Zuo~et~al.~\cite{zuo2021unsupervised} tackle the harmonization problem with disentangled representations without explicitly minimizing $I(a;c)$. Paired Wilcoxon tests show that our~(100\%\texttt{U}) has performance comparable to Adeli~et~al.~\cite{adeli2021representation} and Zuo~et~al.~\cite{zuo2021unsupervised}, both of which rely on paired T$_1$-w and T$_2$-w images for training (see Table~\ref{tab:compare_harmonization}). \textit{Are paired data helpful to our method?} We present three ablations of our method: training with all T$_1$-w images~(\texttt{$100\%$U}), training with $50\%$ paired and $50\%$ unpaired images~(\texttt{$50\%$P, $50\%$U}), and training with $100\%$ paired T$_1$-w and T$_2$-w images~(\texttt{$100\%$P}). The existence of paired T$_1$-w and T$_2$-w images of the same anatomy provides an extra constraint: the $a$'s of T$_1$-w and T$_2$-w should be identical. We observe in Table~\ref{tab:compare_harmonization} that introducing a small amount of paired multi-modal images in training can boost the performance of our method, as our (50\%\texttt{U}, 50\%\texttt{P}) achieves the best performance. Yet, training the proposed method using all paired images brings no benefit to harmonization, which is a surprising result that we discuss in Sec.~\ref{sec:discussion}. \begin{table}[!tb] \centering \caption{Numerical comparisons between the proposed approach and existing I2I methods in a T$_1$-w MR harmonization task. SSIM and PSNR are calculated in 3D by stacking 2D axial slices and are reported as ``mean $\pm$ standard deviation''. 
Bold numbers indicate the best mean performance. \texttt{U}: training with unpaired data~(T$_1$-w only). \texttt{P}: training with paired T$_1$-w and T$_2$-w images.} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{p{0.15\textwidth} C{0.17\textwidth} C{0.17\textwidth} C{0.17\textwidth} C{0.17\textwidth} C{0.17\textwidth} C{0.15\textwidth} } \toprule & \multirow{2}{*}{\textbf{\thead{Training\\data}}} & \multicolumn{2}{c}{\textbf{I2I: $\boldsymbol{S_1}$ to $\boldsymbol{S_2}$}} & \multicolumn{2}{c}{\textbf{I2I: $\boldsymbol{S_2}$ to $\boldsymbol{S_1}$}} & \textbf{Disentangle} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-7} & & SSIM~($\%$) & PSNR~(dB) & SSIM~($\%$) & PSNR~(dB) & $R_I(a;c)~(\%)$ \\ \midrule \texttt{Before I2I} & -- & $87.54\pm1.18$ & $26.68\pm0.77$ & $87.54\pm1.18$ & $26.68\pm0.77$ & -- \\ \texttt{CycleGAN}~\cite{zhu2017unpaired} & \texttt{$100\%$U} & $89.62\pm1.14$ & $27.35\pm0.52$ & $90.23\pm1.12$ & $28.15\pm0.59$ & --\\ \texttt{Adeli}~\cite{adeli2021representation} & \texttt{$100\%$P} & $89.92\pm 0.98$ & $27.47\pm 0.52$ & $90.36\pm 1.05$ & $28.41\pm 0.49$ & $5.9$ \\ \texttt{Zuo}~\cite{zuo2021unsupervised} & \texttt{$100\%$P} & $90.63\pm 1.08$ & $27.60\pm 0.61$ & $90.89\pm 1.01$ & $28.14\pm 0.54$ & $10.8$ \\ \cmidrule{1-7} \texttt{Ours} & \texttt{$100\%$U} & $90.25\pm 1.02$ & $27.86\pm 0.59$ & $90.52\pm 1.03$ & $28.23\pm 0.45$ & $\boldsymbol{0.1}$ \\ \texttt{Ours} & \texttt{$50\%$U, $50\%$P}& $\boldsymbol{90.96\pm 1.00}$ & $\boldsymbol{28.55\pm 0.61}$ & $\boldsymbol{91.16\pm 1.31}$ & $\boldsymbol{28.60\pm 0.49}$ & $1.5$ \\ \texttt{Ours} & \texttt{$100\%$P} & $90.27\pm 0.95$ & $27.88\pm 0.54$ & $90.70\pm 1.02$ & $28.59\pm 0.52$ & $3.1$ \\ \bottomrule \end{tabular}} \label{tab:compare_harmonization} \end{table} \textit{Do we learn better disentanglement?} We calculated the proposed $R_I(a;c)$ to evaluate all the comparison methods that learn disentangled representations. 
Results are shown in the last column of Table~\ref{tab:compare_harmonization}. All three ablations of the proposed method achieve better disentanglement than the other methods. Among the five comparisons, Zuo~et~al.~\cite{zuo2021unsupervised} has the worst disentanglement between $a$ and $c$; this is likely because Zuo~et~al. only encourage disentangling between $a$ and domain labels. Surprisingly, $a$ and $c$ become more entangled as we introduce more T$_2$-w images in training. A possible reason could be that T$_1$-w and T$_2$-w images carry slightly different observable anatomical information, making it impossible to completely disentangle anatomy and contrast because several factors of $z$ change simultaneously. A similar effect is also reported in~\cite{trauble2021disentangled}, where non-ideal paired data are used in disentangling. \textbf{Cardiac MR image segmentation} \begin{figure}[!tb] \centering \includegraphics[width=0.8\textwidth]{fig/cardiac_seg.png} \caption{Top: An example of improved segmentation from $S_4$ after DA. Bottom: DSC of multi-site cardiac image segmentation. The segmentation model was trained on $S_1$ to $S_3$, and then applied to a held-out dataset with images from all five sites. DA was conducted to translate all images to $S_3$ using the proposed method. Asterisks indicate statistically significant tests.} \label{fig:cardiac_segmentation} \end{figure} To further evaluate the proposed method, we used data from the M\&M cardiac segmentation challenge~\cite{campello2021multi}, where cine MR images of the human heart were collected by six sites, of which five~($S_1$ to $S_5$) are available to us. The task is to segment the left and right ventricles and the left ventricular myocardium of the human heart. We followed the challenge guidelines to split the data so that the training data (MR images and manual delineations) only include $S_1$, $S_2$, and $S_3$, and the validation and testing data include all five sites. 
In this way there is a domain shift between the training sites ($S_1$ through $S_3$) and the testing sites ($S_4$ and $S_5$). Since images from all five sites are available to challenge participants, DA can be applied. Due to the absence of paired data, we only applied the proposed 100\%\texttt{U} to this task (the other disentangling methods evaluated in the brain MR harmonization task cannot be applied here). \textit{Does our method alleviate domain shift in downstream segmentation?} We adopted a 2D U-Net structure similar to the $B_1$ model reported in~\cite{campello2021multi} as our segmentation baseline. Without DA, our baseline method achieved a Dice similarity coefficient (DSC) within the top 5. Due to domain shift, the baseline method has decreased performance on $S_4$ and $S_5$ (see Fig.~\ref{fig:cardiac_segmentation}). We applied the proposed method trained on MR images from all five available sites to translate testing MR images to site $S_3$ (as the baseline segmentation model has the best overall performance on the original $S_3$ images). Due to the poor through-plane resolution of cine MR images, we chose $x$ and $x'$ by selecting slices from two different cine time frames. Segmentation performance was re-evaluated using images after DA. Paired Wilcoxon tests on each label of each site show that the segmentation model has significantly improved~($p<0.01$) DSC for all labels of $S_4$ and $S_5$ after DA, except for the right ventricle of $S_5$. Although segmentation is improved after DA, the ability of our method to alleviate domain shift is not unlimited; the segmentation performance on $S_4$ and $S_5$ is still worse than on the training sites. We regard this as a limitation to address in future work. \section{Discussion and Conclusion} \label{sec:discussion} In this paper, we present a single-modal MR disentangling framework with theoretical guarantees and an information-based metric for evaluating disentanglement. 
We showcase the broad applicability of our method in brain MR harmonization and DA for cardiac image segmentation. We show in the harmonization task that satisfactory performance can be achieved without paired data. With limited paired data for training, our method demonstrates superior performance over existing methods. However, with all paired data for training, we observed decreased performance in both harmonization and disentanglement. We view this seemingly surprising observation as a result of the disentanglement-reconstruction trade-off reported in~\cite{trauble2021disentangled}. This is also a reminder for future research: using paired multi-modal images for disentangling may have negative effects when the paired data are non-ideal. Our cardiac segmentation experiment shows that domain shift can be reduced with the proposed method; all six labels improved, with five of the six labels being statistically significant. \subsubsection{Acknowledgement} The authors thank BLSA participants. This work was supported in part by the Intramural Research Program of the NIH, National Institute on Aging, in part by the NINDS grants R01-NS082347 (PI: P.~A. Calabresi) and U01-NS111678 (PI: P.~A. Calabresi), and in part by the TREAT-MS study funded by the Patient-Centered Outcomes Research Institute~(PCORI/MS-1610-37115). \bibliographystyle{splncs04}
\section{\label{sec:Introduction}Introduction} Metal-semiconductor (M-S) contacts play a pivotal role in almost any semiconductor-based technology. They are an integral part of a broad range of devices with applications as diverse as photovoltaics (PV) \cite{Saga2010}, transistors and diodes \citep{Sze,Kroemer2001}, and fuel-cells \cite{Carrette2001,Fu2015}. The requirement of M-S interfaces with tailored characteristics, such as a specific resistance at the contact, has fueled research on the topic for decades \cite{Brillson1982,Tung2001}. Nevertheless, despite the high degree of sophistication of current semiconductor technology, understanding M-S interfaces at the microscopic level still constitutes a considerable challenge \cite{Ratner2008,Tung2014}. Even the structure of the interface itself, which is buried in the macroscopic bulk metal and semiconductor materials, represents a serious impediment, as it makes the direct exploration of the interface properties cumbersome. Measuring the device current $I$ as a function of the applied bias $V_{bias}$ is a standard procedure for probing an M-S interface \citep{Sze}, despite the drawback that the measured $I$-$V_{bias}$ curves do not provide any direct information on the interface itself, but rather on the full device characteristics. As such, it is common practice to interpret the $I$-$V_{bias}$ curves by fitting the data with analytical models, which are then used to extract the relevant interface parameters such as the Schottky barrier height $\Phi$ \cite{Schroder}. As no general analytical model exists, the accuracy of this procedure depends critically on whether the model describes well the physical regime of the interface under scrutiny. Furthermore, most models disregard the atomistic aspects of the interface, although it is nowadays accepted that chemistry plays a dominant role in determining the electronic characteristics of the interface \cite{Tung1984,Schmitsdor1995,Tung2000,Tung2001,Tung2014}. 
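To make this fitting practice concrete, the sketch below extracts an effective barrier from synthetic data generated with an ideal thermionic-emission diode law; the prefactor, bias, and temperature range are illustrative assumptions, not values from our calculations:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant (eV/K)

def thermionic_current(T, V, phi, A_eff=1.0, n=1.0):
    """Ideal thermionic emission: I = A* T^2 exp(-phi/kT) (exp(V/nkT) - 1)."""
    return A_eff * T**2 * np.exp(-phi / (k_B * T)) * (np.exp(V / (n * k_B * T)) - 1.0)

# Synthesize I(T) at fixed forward bias for a barrier of 0.60 eV.
phi_true, V = 0.60, 0.05
T = np.linspace(250.0, 350.0, 11)
I = thermionic_current(T, V, phi_true)

# Activation-energy analysis: the slope of ln(I/T^2) vs 1/(k_B T)
# approximates -(phi - V/n), from which phi is recovered (here n = 1).
slope, _ = np.polyfit(1.0 / (k_B * T), np.log(I / T**2), 1)
phi_extracted = -slope + V
print(round(phi_extracted, 3))
```

Even on ideal synthetic data the extracted barrier deviates slightly from the true one, because the $\ln(\exp(V/k_BT)-1)$ term is not exactly linear in $1/T$; on real devices, the model-mismatch errors discussed above are far larger.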
These ambiguities complicate the assignment of the features observed in the measured spectra to specific characteristics of the M-S interface. Conversely, atomistic electronic structure methods \cite{Martin} are an ideal tool for the characterization of M-S interfaces, and have been successfully employed over the years for their analysis \cite{Das1989,vanSchilfgaarde1990,Picozzi2000,Hoekstra1998,Tanaka2001,Ruini1997,Delaney2010,Hepplestone2014}. However, due to their computational cost, these studies have focused on model interfaces described using finite-size models formed by a few atomic layers ({\em e.g.}, slabs), the validity of which is justified in terms of the local nature of the electronic perturbation due to the interface. For similar reasons, most studies have considered non-doped interfaces, as the models required to describe a statistically meaningful distribution of dopants in the semiconductor would be excessively demanding \cite{Butler2012,Jiao2015}. Last but not least, these model calculations only describe the system at equilibrium ({\em i.e.}, at $V_{bias}$ = 0 V), thereby missing a direct connection with the $I$-$V_{bias}$ measurements. Here, we develop a general framework that attempts to overcome the limitations inherent in conventional electronic structure methods for simulating M-S interfaces. We employ density functional theory (DFT) \cite{Payne1992} together with the non-equilibrium Green's function (NEGF) method \cite{Brandbyge2002} to describe the infinite, non-periodic interface exactly. The DFT+NEGF scheme allows us to predict the behavior of the M-S interface under working conditions by simulating the $I$-$V_{bias}$ characteristics of the interface at zero and at finite $V_{bias}$. 
To describe correctly the electronic structure of the doped semiconductor, we employ an exchange-correlation (xc) functional designed {\em ad-hoc} to reproduce the experimental semiconductor band gap \cite{Tran2009}, and a novel spatially dependent effective scheme to account for the doping on the semiconductor side. We apply this DFT+NEGF approach to study the characteristics of a Ag/Si interface relevant for PV applications \cite{Kim1998,Weitering1993,Schmitsdor1995,Ballif2003,Li2009,Horteis2010,Garramone2010, Hilali2005,Pysch2009,Li2010,Butler2011,Butler2012,Balsano2013}. Specifically, we focus on the (100)/(100) interface \cite{Kim1998,Butler2011} and on the dependence of the $I$-$V_{bias}$ characteristics on the semiconductor doping -- note that the method is completely general and can be used to describe other M-S interfaces with different crystalline orientations. We consider a range of doping densities for which the interface changes from rectifying to Ohmic. We demonstrate that the ``Activation Energy'' (AE) method routinely employed to analyze M-S contacts systematically overestimates the value of $\Phi$, with an error that is both bias and doping dependent, due to the assumption of a purely thermionic transport mechanism across the barrier. Conversely, we show how an analysis of the $I$-$V_{bias}$ characteristics based on the DFT+NEGF electronic structure data provides a coherent picture of the rectifying-to-Ohmic transition as the doping is varied. Finally, we also show that a slab model does not provide a good representation of the interface electronic structure when doping in the semiconductor is taken into account. This is due to the inability of the semiconductor side of the slab to screen the electric field resulting from the formation of the interface. The paper is organized as follows. 
Section \ref{sec:Computational methods} and Section \ref{sec:System} describe the computational methods and the system models employed in this work, respectively. Section \ref{sec:Device characteristics and validation of the activation energy model} presents the calculated $I$-$V_{bias}$ characteristics and the validation of the AE method based on the calculated data. Section \ref{sec:Electronic properties of the interface} deals with the analysis of the $I$-$V_{bias}$ curves in terms of the electronic structure of the interface as obtained from the DFT+NEGF calculations. In Section \ref{sec:Comparison of the two-probe with the slab model}, the simulations are compared to finite-size slab models. The main conclusions are drawn in Section \ref{sec:Conclusions}. \section{\label{sec:Computational methods}Computational methods} The Ag(100)/Si(100) interface has been simulated using Kohn-Sham (KS) DFT as implemented in \textsc{atomistix toolkit} \cite{ATK} (ATK). DFT \cite{Payne1992} and DFT+NEGF \cite{Brandbyge2002} simulations have been performed using a formalism based on a non-orthogonal pseudo-atomic orbitals \cite{Soler2002} (PAOs) basis set. The one-electron KS valence orbitals are expanded using a linear combination of double-$\zeta$ PAOs including polarization functions (DZP). The confinement radii $r_c$ employed are 4.39 Bohr, 7.16 Bohr, 7.16 Bohr for the Ag 4$d$, 5$s$ and 5$p$ orbitals, and 5.40 Bohr, 6.83 Bohr, 6.83 Bohr for the Si 3$s$, 3$p$, 3$d$ orbitals, respectively. The ionic cores have been described using Troullier-Martins \cite{Troullier1991} norm-conserving pseudo-potentials \cite{Hamann1979}. The energy cutoff for the real-space grid used to evaluate the Hartree and xc contributions of the KS Hamiltonian has been set to 150 Ry. Monkhorst-Pack \cite{Monkhorst1976} grids of {\em k}-points have been used to sample the 3D (2D) Brillouin zone in the DFT (DFT+NEGF) simulations. 
We have used an $\mathrm{11\times11\times11}$ grid of {\em k}-points for the bulk calculations, and a {\em k}-points grid of $\mathrm{18\times9\times1}$ ($\mathrm{18\times9}$) for the DFT (DFT+NEGF) simulations of the interface. Geometry optimizations have been performed by setting the convergence threshold for the forces of the moving atoms to 2 $\times$ 10$^{-2}$ eV/{\AA}. In all the simulations, periodic boundary conditions (PBCs) were used to describe the periodic structure extending along the directions parallel to the interface plane. In the slab DFT simulation, Dirichlet and Neumann boundary conditions were applied in the direction normal to the interface on the silver and silicon sides of the simulation cell, respectively, whereas in the DFT+NEGF simulations the same direction was described using Dirichlet boundary conditions at the two boundaries between the interface and the bulk-like electrodes. \subsection{\label{sec:Spill-in terms}``Spill-in'' terms} \begin{figure} \includegraphics[width=0.4\textwidth]{figure1} \caption{Scheme showing the two-center ``{\em spill-in}'' terms used for the evaluation of the Hamiltonian (a) and the electronic density (b) of the C region for a pair of $s$ type PAOs $\phi_j$ and $\phi_i$ located close to the L/C boundary. In (a,b) the blue shaded regions indicate the integrals performed in the C region, whereas the red and green regions indicate the ``spill-in'' terms for the Hamiltonian and for the electronic density, respectively.} \label{fig:figure1} \end{figure} As described in Ref. \citenum{Brandbyge2002}, the DFT+NEGF method used to simulate the infinite, non-periodic Ag(100)/Si(100) interface relies on a two-probe setup, in which left (L) and right (R) semi-infinite electron reservoirs are connected through a central (C) region containing the interface. 
Once the chemical potentials $\mu_{L,R}$ of the reservoirs have been defined, a self-consistent field (SCF) KS procedure is used to obtain the electronic density in the C region. The main quantity evaluated in the SCF cycle is the density matrix required to express the electronic density of the C region in the basis of PAOs centered in the same region, $\bar{D}^{CC}$. Assuming $\mu_L > \mu_R$, $\bar{D}^{CC}$ takes the form \begin{equation} \begin{split} \bar{D}^{CC} = & - \frac{1}{\pi}\int_{-\infty}^{\mu_R} \mathrm{Im}[\bar{G}^{CC}]d\epsilon \\ & - \frac{1}{\pi}\int_{\mu_R}^{\mu_L} \bar{G}^{CC}\mathrm{Im}[\bar{\Sigma}^{LL}] \bar{G}^{\dagger CC} d\epsilon, \label{eq:negf_1} \end{split} \end{equation} \noindent where $\bar{\Sigma}^{LL}$ is the self-energy matrix describing the coupling of the central region to the semi-infinite L reservoir, and the Green's function of the central region $\bar{G}^{CC}$ is obtained from \begin{equation} \bar{G}^{CC}(\epsilon) = [(\epsilon + i\delta)\bar{S}^{CC}-\bar{H}^{CC}-\bar{\Sigma}^{LL}-\bar{\Sigma}^{RR}]^{-1}, \label{eq:negf_2} \end{equation} \noindent with $\bar{S}^{CC}$ and $\bar{H}^{CC}$ being the overlap and Hamiltonian matrices associated with the PAOs centered at the C region, and $\bar{\Sigma}^{RR}$ being the self-energy matrix of the R reservoir. However, even though the DFT+NEGF method provides an elegant scheme to evaluate $\bar{D}^{CC}$, solving Eqs. \ref{eq:negf_1}-\ref{eq:negf_2} is not sufficient to obtain the correct Hamiltonian and electronic density of the C region. The reason is that the relevant integrals in Eqs. \ref{eq:negf_1}-\ref{eq:negf_2} are evaluated only in the region of space encompassing the C region, and only for the atoms localized in that region. As a consequence, the tails of the PAOs located close to both sides of the L/C and R/C boundaries, which penetrate into the neighboring regions, are not accounted for (see Fig. \ref{fig:figure1}).
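To make the workflow of Eqs. \ref{eq:negf_1}-\ref{eq:negf_2} concrete, the minimal sketch below evaluates $\bar{D}^{CC}$ for a hypothetical two-orbital C region. The Hamiltonian, overlap and (energy-independent) self-energy matrices are invented placeholders rather than actual ATK data, and the energy integrals are approximated by simple Riemann sums on the real axis:

```python
import numpy as np

# Toy 2x2 model of the C region, sketching Eqs. (negf_1)-(negf_2).
# H, S and the self-energies are placeholder matrices, not actual ATK output.
H = np.array([[0.0, -0.5],
              [-0.5, 0.5]])                  # Hamiltonian H^CC (eV)
S = np.eye(2)                                # overlap S^CC (orthonormal toy basis)
sigma_L = np.diag([-0.05j, 0.0])             # Sigma^LL: orbital 1 couples to L
sigma_R = np.diag([0.0, -0.05j])             # Sigma^RR: orbital 2 couples to R
mu_L, mu_R = 0.1, -0.1                       # chemical potentials, mu_L > mu_R
delta = 1e-3                                 # positive infinitesimal

def G_CC(e):
    """Retarded Green's function of the C region."""
    return np.linalg.inv((e + 1j * delta) * S - H - sigma_L - sigma_R)

# Equilibrium term: -(1/pi) Int_{-inf}^{mu_R} Im[G^CC] de (finite lower bound)
e_eq = np.linspace(-6.0, mu_R, 4001)
de = e_eq[1] - e_eq[0]
D = -sum(G_CC(e).imag for e in e_eq) * de / np.pi

# Bias window: -(1/pi) Int_{mu_R}^{mu_L} G^CC Im[Sigma^LL] G^CC^dagger de
e_w = np.linspace(mu_R, mu_L, 401)
dw = e_w[1] - e_w[0]
D += -sum((G_CC(e) @ sigma_L.imag @ G_CC(e).conj().T).real
          for e in e_w) * dw / np.pi

print(np.round(D, 2))   # approximate density matrix of the C region
```

In a production DFT+NEGF calculation the self-energies are energy dependent, the equilibrium integral is evaluated along a complex contour, and $\bar{D}^{CC}$ is iterated to self-consistency with the KS Hamiltonian; moreover, the boundary PAO tails noted above require the dedicated corrections introduced next.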
To correct for these missing boundary contributions, we introduce additional corrective terms, which we name ``spill-in". For the Hamiltonian, corrective terms are applied to both the two-center and three-center integrals. Specifically, if two PAOs $\phi_i$ and $\phi_j$ centered in the C region lie close to a boundary, {\em e.g.} the L/C one, the corrected Hamiltonian in Eq. \ref{eq:negf_2} will also include the matrix element $H_{i,j}^{\prime} = \langle \phi_i | V_{eff}^{LL}(\mathbf{r}) | \phi_j \rangle$ associated with the tails of the PAOs protruding into the L region, $V_{eff}^{LL}(\mathbf{r})$ being the periodic KS potential of the semi-infinite L reservoir (Fig. \ref{fig:figure1}a). Similar arguments also hold for the three-center non-local terms of the Hamiltonian. For the electronic density of the C region, additional contributions are included for each pair of PAOs $\phi_i$ and $\phi_j$ located close to a boundary when at least one of them is centered at the neighboring reservoir region. In total, two new contributions must be added to the electronic density evaluated using Eqs. \ref{eq:negf_1}-\ref{eq:negf_2} for each pair of PAOs at each boundary. For the L/C boundary, these are (Fig. \ref{fig:figure1}b): \begin{equation} \begin{split} n^{LL} & = \sum_{i,j} D_{i,j}^{LL} \phi_{i}^L \phi_{j}^L, \\ n^{LC} & = \sum_{i,j} D_{i,j}^{LC} \phi_{i}^L \phi_{j}^C, \end{split} \end{equation} \noindent which are distinguished based on whether both ($\bar{D}^{LL}$) or just one ($\bar{D}^{LC}$) of the two PAOs involved is centered at the L region. In the calculations presented in this work, the ``spill-in" terms are independent of the applied bias voltage. This is justified because we have checked that, for each value of the applied bias, the non-periodic KS potential at the boundary of the C region matches smoothly with the periodic KS potential of the neighboring reservoir, {\em i.e.} that the ``screening approximation" is verified -- see Ref. \citenum{Brandbyge2002} for additional details.
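The size of these boundary contributions can be appreciated with a deliberately crude one-dimensional sketch, in which two Gaussians play the role of PAOs straddling the L/C boundary and all density-matrix elements are set to one (both are assumptions made purely for illustration):

```python
import numpy as np

# 1D sketch of the "spill-in" density terms: two Gaussians stand in for
# PAOs near the L/C boundary, and all density-matrix elements are set to 1.
x = np.linspace(-10.0, 10.0, 2001)
boundary = 0.0                                 # L/C boundary position

def pao(center, width=1.5):
    """Toy s-type PAO (a Gaussian instead of a confined atomic orbital)."""
    return np.exp(-(((x - center) / width) ** 2))

phi_L = pao(-1.0)        # orbital centered in the L region
phi_C = pao(+1.0)        # orbital centered in the C region

n_CC = phi_C * phi_C                  # pair fully inside C: always included
n_LL = phi_L * phi_L                  # spill-in term, both PAOs in L
n_LC = 2.0 * phi_L * phi_C            # spill-in term, mixed L/C pair (i <-> j)

in_C = x >= boundary
missing = np.sum((n_LL + n_LC)[in_C]) / np.sum((n_CC + n_LL + n_LC)[in_C])
print(f"fraction of C-region density missed without spill-in: {missing:.0%}")
```

Even in this toy setting, a sizeable fraction of the density inside the C region originates from pairs involving an L-centered orbital.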
We stress that including these ``spill-in" terms is essential to ensure stable and well-behaved convergence of the SCF cycle, which turns out to be especially important for heterogeneous systems such as the Ag(100)/Si(100) interface investigated in this work. \begin{figure} \includegraphics[width=0.35\textwidth]{figure2} \caption{(a) Fitting procedure for the TB09 xc-functional $c$ parameter. Squares (blue): calculated indirect band gap of bulk silicon $\mathrm{E_{gap}^{TB09}}$ obtained for different values of the $c$ parameter. Dashed line (blue): fit to the computed data of $\mathrm{E_{gap}^{TB09}}$ {\em vs.} $c$ obtained by linear regression. Dotted line (black): experimentally measured bulk silicon band gap \cite{Kittel}. Dashed-dotted line (orange): optimal value of the $c$ parameter (opt-$c$), obtained as the intersection between $\mathrm{E_{gap}^{exp}}$ and the linear fit to the $\mathrm{E_{gap}^{TB09}}$ data. (b) Region around the indirect band gap in the bulk silicon band structure calculated using the optimal $c$ parameter determined from (a).} \label{fig:figure2} \end{figure} \subsection{\label{sec:Exchange-correlation potential}Exchange-correlation potential} Further complications in describing the Ag(100)/Si(100) interface arise from the fact that one of its sides is semiconducting. Indeed, a major problem affecting the description of metal-semiconductor interfaces is the severe underestimation of the semiconductor band gap in DFT calculations using (semi-)local xc-functionals based on the local density approximation (LDA) or on the generalized gradient approximation (GGA) \cite{Choen2012}. For model calculations based on few-layer-thick fully periodic systems, such an underestimation has been shown to result in unrealistically low Schottky barriers at the interface \cite{Das1989,Godby1990}.
In order to remedy this drawback, we have evaluated the electronic structure of the LDA-optimized interface geometries using the Tran-Blaha meta-GGA xc-functional (TB09) \cite{Tran2009}. The TB09 xc-functional has been shown to provide band gaps in excellent agreement with the experiments for a wide range of semiconductors including silicon, at a computational cost comparable to that of conventional (semi-)local functionals. In the TB09 xc-functional, the exchange potential $\upsilon^\mathrm{TB09}_x(\mathbf{r})$ depends explicitly on the electron kinetic-energy density $\tau(\mathbf{r})$, \begin{equation} \upsilon^\mathrm{TB09}_x(\mathbf{r}) = c\upsilon^\mathrm{BR}_x(\mathbf{r}) + \frac{3c-2}{\pi} \sqrt{\frac{4\tau(\mathbf{r})}{6\rho(\mathbf{r})}}, \label{eq:tb09-1} \end{equation} \noindent with $\tau(\mathbf{r})=1/2\sum_{i=1}^N |\nabla\psi_i(\mathbf{r})|^2$, $N$ being the number of occupied KS orbitals, $\psi_i(\mathbf{r})$ the $i$-th orbital, $\rho(\mathbf{r})$ the electronic density and $\upsilon^\mathrm{BR}_x(\mathbf{r})$ the Becke-Roussel exchange potential \cite{Becke1989}. The parameter $c$ in Eq. \ref{eq:tb09-1} is evaluated self-consistently and takes the form \begin{equation} c = \alpha + \beta \left[ \frac{1}{\Omega} \int_\Omega \frac{|\nabla\rho(\mathbf{r})|}{\rho(\mathbf{r})} d\mathbf{r} \right]^{\frac{1}{2}}, \label{eq:tb09-2} \end{equation} \noindent where $\Omega$ is the volume of the simulation cell and the two empirical parameters $\alpha = -0.012$ (dimensionless) and $\beta = 1.023$ Bohr$^{\frac{1}{2}}$ have been fitted to reproduce the experimental band gaps of a large set of semiconductors \cite{Tran2009}. To obtain as accurate a description as possible of the semiconductor band gap at the Si(100) side of the interface, we have tuned the value of the $c$ parameter in order to reproduce the experimentally measured band gap of bulk silicon, $E_\mathrm{gap}^\mathrm{exp}$ = 1.17 eV \cite{Kittel}.
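This tuning amounts to a one-parameter linear regression; the sketch below reproduces it with hypothetical $(c, E_\mathrm{gap}^\mathrm{TB09})$ pairs (the actual computed values are shown in Fig. \ref{fig:figure2}a):

```python
import numpy as np

# Band-gap calibration of the TB09 c parameter: fit E_gap(c) linearly and
# solve for the c that reproduces the experimental gap. The (c, E_gap)
# pairs below are illustrative numbers, not the actual computed values.
c_vals = np.array([1.00, 1.05, 1.10, 1.15, 1.20])
e_gap = np.array([0.95, 1.06, 1.17, 1.28, 1.39])   # hypothetical E_gap^TB09 (eV)
e_gap_exp = 1.17                                    # experimental Si gap (eV)

slope, intercept = np.polyfit(c_vals, e_gap, 1)     # linear regression
c_opt = (e_gap_exp - intercept) / slope             # intersection with E_gap^exp
print(round(c_opt, 3))  # -> 1.1 for these illustrative data
```

The resulting opt-$c$ then replaces the self-consistent value of Eq. \ref{eq:tb09-2} in the subsequent calculations.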
This tuning has been accomplished by calculating the band gap of bulk silicon at fixed values of the $c$ parameter, in a range around the self-consistently computed value in which the variation of $E_\mathrm{gap}^\mathrm{TB09}$ with $c$ is linear. The optimal value of $c$ has then been determined as the intersection between the value of $E_\mathrm{gap}^\mathrm{exp}$ and a linear fit to the computed values of $E_\mathrm{gap}^\mathrm{TB09}$ (Fig. \ref{fig:figure2}a). Using the TB09 xc-functional with the $c$ parameter fixed at the optimal value determined using this procedure (hereafter, TB09-o), we calculate the indirect band gap of bulk silicon to be $E_\mathrm{gap}^{\mathrm{TB09-o}}$ = 1.169 eV (Fig. \ref{fig:figure2}b), in excellent agreement with the experimental value of 1.17 eV. The TB09-o functional has been used for all the electronic structure and transport analyses of the Ag(100)/Si(100) interface reported in this work. We have checked that the band structure of bulk silver calculated using the TB09-o functional is very similar to that calculated using the LDA, which is known to perform well for noble metals. \subsection{\label{sec:Semiconductor doping}Semiconductor doping} \begin{figure*} \includegraphics[width=0.8\textwidth]{figure3} \caption{Geometries employed to simulate the Ag(100)/Si(100) interface using two-probe models (a) or slab models (b). Silver, silicon and hydrogen atoms are shown in grey, beige and white, respectively.} \label{fig:figure3} \end{figure*} The last requirement for a realistic description of the electronic structure of the Ag(100)/Si(100) interface is to account for the doping on the silicon side of the interface. Here, doping is achieved in an effective scheme by introducing localized charges bound to the individual silicon atoms.
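Before detailing the implementation, a back-of-the-envelope sketch (using the calculated silicon lattice constant, but otherwise only textbook bookkeeping) shows the size of the per-atom charges involved:

```python
# Back-of-the-envelope "compensation" charge per Si atom for a target n-type
# doping density; this illustrates the magnitudes, not the ATK implementation.
A_SI_CM = 5.41e-8                     # calculated Si lattice constant (cm)
N_SI = 8.0 / A_SI_CM**3               # Si atoms per cm^3 (8 atoms per cubic cell)

def compensation_charge(n_d_cm3):
    """Charge per Si atom in units of e (positive: attracts the extra electrons)."""
    return n_d_cm3 / N_SI

for n_d in (1e18, 1e19, 1e20):
    print(f"n_d = {n_d:.0e} cm^-3 -> {compensation_charge(n_d):.1e} e per atom")
```

Even at the highest doping density considered here, the charge per atom is only of order $10^{-3}$ e, i.e. a small perturbation of the neutral-atom densities.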
More specifically, in ATK \cite{ATK} the total self-consistent electronic density $\rho_\mathrm{tot}(\mathbf{r})$ is defined as \cite{Soler2002}: \begin{equation} \rho_\mathrm{tot}(\mathbf{r}) = \delta \rho(\mathbf{r}) + \sum_I^\mathrm{N_{atoms}} \rho_{I}(\mathbf{r}), \label{eq:compensation_charge-1} \end{equation} \noindent where $\sum_I^\mathrm{N_{atoms}} \rho_{I}(\mathbf{r})$ is the sum of the atomic densities of the individual neutral atoms of the system. As each atomic density $\rho_{I}(\mathbf{r})$ is a constant term, it can be augmented with a localized ``compensation" charge having the opposite sign of the desired doping density, which acts as a carrier attractor by modifying the electrostatic potential on the atom. Using these ``compensation" charges, an effective doping can be achieved both in the DFT and in the DFT+NEGF simulations. In the former, the ``compensation" charge added to each silicon atom is neutralized by explicitly adding a valence charge of the opposite sign, so that the system remains charge neutral. In the latter, the ``compensation" charge is neutralized implicitly by the carriers provided by the reservoirs, and the system is maintained charge neutral under the condition that the intrinsic electric field in the system is zero. This effective doping scheme has the advantage of (i) not depending on the precise atomistic details of the doping impurities, and (ii) being completely independent of the size and exact geometry of the system. \section{\label{sec:System}System} In order to obtain a reliable description of the Ag(100)/Si(100) interface, we have followed a stepwise procedure. Initially, we have carried out a preliminary screening of the interface geometries and bonding configurations by using a 2$\times$1 slab model formed by a 6-layer Ag(100) slab interfaced with a 9-layer unreconstructed Si(100) slab.
The calculated bulk lattice constants of silicon ($a_\mathrm{Si}$ = 5.41 \AA) and silver ($a_\mathrm{Ag}$ = 4.15 \AA) are in good agreement with those reported in the literature \cite{Butler2012}. To match the Ag(100) and the Si(100) surfaces, we have applied an isotropic compressive strain $\epsilon_{xx}$ = $\epsilon_{yy}$ = --0.0793 along the surface lattice vectors $\mathbf{v}_{1,2}$ of the Ag(100) surface. We have checked that in the compressed Ag(100) structure, the dispersion of the $s$-band and its position with respect to the $d$-band are very similar to those calculated using the equilibrium value of $a_\mathrm{Ag}$. The Si(100) surface opposite to the interface has been passivated with hydrogen atoms. The geometry of the resulting 15-layer slab has then been optimized using the LDA by keeping the 4 layers of the Ag(100) slab farthest from the interface plane frozen, and by allowing the 4 layers of the Si(100) slab farthest from the interface plane to move as a rigid body, thereby freezing only the interatomic distances and angles. All the remaining atoms have been allowed to fully relax. Different starting guesses for the interface structure have been tested, corresponding to different configurations of the Si(100) dangling bonds with respect to the high symmetry {\em fcc} sites of the Ag(100) surface. The lowest energy configuration among those considered, corresponding to the Si(100) dangling bonds sitting above the ``hollow" {\em fcc} sites of the Ag(100) surface, has then been used as a representative model of the interface. Starting from the lowest energy configuration obtained using the 15-layer slab, we have then constructed more realistic models of the interface. Specifically, we have expanded the bulk regions of the 15-layer slab to create two-probe setups effectively describing the infinite, non-periodic interface (Fig. \ref{fig:figure3}a).
A final geometry optimization has been carried out using a two-probe setup in which the C region has been described by 8 Ag(100) layers and an undoped silicon layer having a total width $W_\mathrm{Si(100)}^\mathrm{CC}$ = 47.84 \AA. The optimized geometry has been used to construct two-probe setups in which the doping of the silicon side has been taken into account using the effective doping method described in Section \ref{sec:Semiconductor doping}. We have considered doping densities of {\em n}-type carriers ($n_\mathrm{d}$) in the experimentally relevant range [10$^{18}$ cm$^{-3}$ -- 10$^{20}$ cm$^{-3}$]. As discussed in more detail in Section \ref{sec:Results}, the width of the Si(100) layer needed to describe accurately the interface in the two-probe simulations depends on the size of the depletion region ($W_\mathrm{D}$) on the silicon side of the interface. Since $W_\mathrm{D} \propto n_\mathrm{d}^{-1/2}$, progressively narrower C regions can be used as the doping level is increased, without any loss in accuracy. Therefore, in the following, the results presented for $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$, $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ and $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ refer to calculations performed with C regions of widths $W_\mathrm{Si(100)}^\mathrm{CC}$ = 47.84 \AA, $W_\mathrm{Si(100)}^\mathrm{CC}$ = 197.436 \AA\ and $W_\mathrm{Si(100)}^\mathrm{CC}$ = 447.92 \AA, respectively. We have checked that reducing the width of the C region does not have any effect on the results, as long as all the space-charge effects due to the presence of the interface take place within the screening region. Furthermore, we note that using a two-probe setup also makes it possible to simulate the characteristics of the interface when the L and R reservoirs are set at two different chemical potentials $\mu_L \neq \mu_R$ due to an applied bias voltage $qV_{bias} = \mu_R - \mu_L$.
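The scaling of $W_\mathrm{D}$ with doping can be sketched with the textbook depletion-width formula $W_\mathrm{D} = \sqrt{2\varepsilon_\mathrm{Si}\varepsilon_0\phi_\mathrm{bi}/(q n_\mathrm{d})}$, evaluated here with an assumed representative band bending $\phi_\mathrm{bi} \approx 0.4$ V (the actual barriers are computed self-consistently in Section \ref{sec:Results}):

```python
import math

# Textbook estimate W_D = sqrt(2 eps eps0 phi_bi / (q n_d)), used only to
# illustrate the W_D ~ n_d^(-1/2) scaling; phi_bi = 0.4 V is an assumption.
EPS0 = 8.854e-12       # vacuum permittivity (F/m)
Q = 1.602e-19          # elementary charge (C)
EPS_SI = 11.7          # relative permittivity of silicon
PHI_BI = 0.4           # assumed built-in potential (V)

def depletion_width_nm(n_d_cm3):
    n_d = n_d_cm3 * 1e6                           # cm^-3 -> m^-3
    return math.sqrt(2.0 * EPS_SI * EPS0 * PHI_BI / (Q * n_d)) * 1e9

for n_d in (1e18, 1e19, 1e20):
    print(f"n_d = {n_d:.0e} cm^-3 -> W_D ~ {depletion_width_nm(n_d):.1f} nm")
```

With these assumptions the estimates (roughly 23 nm, 7 nm and 2 nm for the three doping densities) fall comfortably within the C-region widths quoted above.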
As will become clear later, the possibility of applying a finite bias allows for a direct comparison to experiments and for analyzing the electronic structure of the interface under working conditions. Finally, to understand to which extent the slab model is able to describe accurately the electronic structure of the infinite, non-periodic interface, we have also considered slab models having an interface geometry similar to that used in the two-probe setup (Fig. \ref{fig:figure3}b). Both short and long slab models have been constructed, in which the width of the Si(100) layer used to describe the silicon side of the interface has been set to either $W_\mathrm{Si(100)}^\mathrm{slab(short)}$ = 38.33 \AA\, or $W_\mathrm{Si(100)}^\mathrm{slab(long)}$ = 98.62 \AA. Note that these values of $W_\mathrm{Si(100)}^\mathrm{slab}$ are many times larger than those used in similar studies of the Ag(100)/Si(100) interface reported in the literature \cite{Butler2011}. \begin{figure} \includegraphics[width=0.4\textwidth]{figure5} \caption{Calculated $I$-$V_\mathrm{bias}$ (a) and forward bias $I/(1-e^{-q|V_{bias}|/k_BT})$-$V_{bias}$ (b) characteristics at $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ (blue triangles), $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ (green squares), $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$ (red dots). In (a), the values of $I$ at $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ and $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ have been multiplied by a factor 10 and 100, respectively. The solid lines in (b) are fits to the data in the range 0.02 V $\leq$ $V_\mathrm{bias}$ $\leq$ 0.08 V using Eq. \ref{eq:thermionic_1}. The ideality factor {\em n} extracted from the slope of each fitted curve is reported using the same color as the corresponding curve.} \label{fig:figure5} \end{figure} \section{\label{sec:Results}Results} \subsection{\label{sec:Device characteristics and validation of the activation energy model}Device characteristics and validation of the activation energy model} Fig.
\ref{fig:figure5}a shows the current--voltage ($I$-$V_{bias}$) characteristics calculated for the two-probe setup at low ($n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$), intermediate ($n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$) and high ($n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$) doping densities of the Si(100) side of the interface. A strong dependence on the doping concentration is evident. At low doping, the interface shows a well-defined Schottky diode-like behavior: the forward bias ($V_{bias}$ $>$ 0 V) current increases by about six orders of magnitude in the $V_{bias}$ range [0.02 V -- 0.5 V], whereas the reverse bias current ($V_{bias}$ $<$ 0 V) varies only within one order of magnitude in the corresponding range. The diode-like asymmetry in the $I$-$V_{bias}$ curves persists at intermediate doping, although it is less pronounced than at low doping, with the current at forward and reverse bias varying within three and two orders of magnitude, respectively. The scenario changes qualitatively at high doping, as the $I$-$V_{bias}$ curve becomes highly symmetric, suggesting an Ohmic behavior of the interface. \begin{figure*} \includegraphics[width=0.9\textwidth]{figure6} \caption{(a,b) Empty dots: calculated $I$-$T$ data at different bias voltages for $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ (a) and $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ (b). Solid lines: fits to the simulated data using Eq. \ref{eq:thermionic_3}. (c,d) Left-hand side (filled dots) and right-hand side (dashed line) of Eq. \ref{eq:thermionic_3} as a function of $V_{bias}$. The values of the left-hand side have been extracted from the slope of the fitted $I$-$T$ curves in (a,b). The solid lines are linear fits to the data. The right-hand side of Eq. \ref{eq:thermionic_3} has been plotted using the value of $\Phi^\mathrm{AE}$ calculated at $V_{bias}$ = 0.02 V, which approaches the value of $\Phi$ at $V_{bias}$ = 0 V. (e,f) Schottky barrier height $\Phi^\mathrm{AE}$ evaluated using Eq.
\ref{eq:thermionic_3} as a function of $V_{bias}$.} \label{fig:figure6} \end{figure*} According to thermionic emission theory, the $I$-$V_{bias}$ characteristics of a Schottky diode can be described by \citep{Sze} \begin{equation} I = I_0\, \Big[ e^{\frac{q V_{bias}}{n k_B T}}-1 \Big], \label{eq:thermionic_1} \end{equation} \noindent where $q$ is the elementary charge, $k_B$ is the Boltzmann constant, $T$ is the temperature, $I_0$ is the saturation current and $n$ is the so-called ideality factor. The latter accounts for the deviation of the $I$-$V_{bias}$ characteristics from those of an ideal diode, for which $n$ = 1. Fitting the simulated data at forward bias to Eq. \ref{eq:thermionic_1} allows one to extract $n$ from the slope of the fitted curves. In Fig. \ref{fig:figure5}b the fitted curves are compared to the forward bias data. The latter are presented using an alternative form of Eq. \ref{eq:thermionic_1}, \begin{equation} {I} = I_0\, e^{\frac{q V_{bias}}{n k_B T}}\, \Big(1 - e^{-\frac{q V_{bias}}{k_B T}}\Big), \label{eq:thermionic_1b} \end{equation} \noindent which allows for a better comparison with the fitted curves, as $I/(1-e^{-qV_{bias}/k_BT})$ varies exponentially with $V_{bias}$ in the fitting interval considered, {\em viz.} 0.02 V $\leq$ $V_{bias}$ $\leq$ 0.08 V. At low doping, $n$ = 1.09, indicating that the system behaves essentially as an ideal Schottky diode. At intermediate doping, $n$ = 1.82, and the system deviates significantly from the ideal behavior. At high doping, $n$ = 2.40, consistent with the observation that the system no longer behaves as a Schottky diode. The $I$--$V_{bias}$ simulations also allow us to test the reliability of the experimental procedures used to extract the Schottky barrier $\Phi$. In particular, we focus on the so-called ``Activation-Energy" (AE) method, which does not require any {\em a priori} assumption on the electrically active interface area $A$ \cite{Sze}.
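The extraction of $n$ can be sketched as follows, using synthetic forward-bias data generated from Eq. \ref{eq:thermionic_1b} with a known ideality factor (the numbers are illustrative, not the simulated currents):

```python
import numpy as np

# Ideality-factor extraction from synthetic forward-bias data generated with
# a known n (1.09, the low-doping value); I0 is an arbitrary prefactor.
# With kT expressed in eV and V in volts, the q factors cancel.
KT = 0.02585                                   # k_B T at 300 K (eV)
n_true, I0 = 1.09, 1e-12

v = np.linspace(0.02, 0.08, 7)                 # fitting window (V)
i = I0 * np.exp(v / (n_true * KT)) * (1.0 - np.exp(-v / KT))

y = np.log(i / (1.0 - np.exp(-v / KT)))        # linear in V with slope 1/(n kT)
slope, _ = np.polyfit(v, y, 1)
n_fit = 1.0 / (slope * KT)
print(round(n_fit, 2))  # -> 1.09
```

Applied to the simulated currents, the same linearization yields the $n$ values quoted above.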
In the AE method the $I$--$T$ dependence is measured at a small constant $V_{bias}$. Over a limited range of $T$ around room temperature, assuming that the Richardson constant $A^{*}$ and $\Phi$ are constant, the $I$-$T$ characteristics can be described by the expression \begin{equation} I T^{-2} = AA^{*}\, e^{-\frac{q\Phi^{AE}}{k_B T}}\, e^{\frac{q(V_{bias}/n)}{k_B T}}. \label{eq:thermionic_2} \end{equation} Following Eq. \ref{eq:thermionic_2}, the Schottky barrier height $\Phi^\mathrm{AE}$ can be extracted from the $\ln(I/T^2)$ {\em vs.} $1/T$ data using \begin{equation} - \frac{k_B}{q} \frac{d[\ln(I/T^2)]}{d(1/T)} = \Phi^\mathrm{AE} -\frac{V_{bias}}{n}, \label{eq:thermionic_3} \end{equation} \noindent in which $n$ is the ideality factor extracted above. \begin{figure*} \includegraphics[width=0.8\textwidth]{figure4} \caption{Local density of states (LDOS) of the two-probe setup at equilibrium for $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ (a), $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ (b) and $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$ (c). The energy on the vertical axis is relative to the system chemical potential $\mu_\mathrm{L,R}$. Regions of low (high) LDOS are shown in dark (bright) color. The blue line in each panel indicates the macroscopic average of the Hartree potential $\langle V_H \rangle$, with the electron affinity of bulk Si and $\mu_\mathrm{L,R}$ subtracted. The yellow vertical line in each panel indicates the associated $\Phi^\mathrm{pot}$.} \label{fig:figure4} \end{figure*} Fig. \ref{fig:figure6}a,b shows the simulated AE plots (as Arrhenius plots) at different values of $V_{bias}$ for low and intermediate doping densities, at which the interface still displays clear Schottky diode--like characteristics.
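The AE analysis of Eqs. \ref{eq:thermionic_2}-\ref{eq:thermionic_3} can be sketched with synthetic $I$-$T$ data generated from a known barrier (all input values below are illustrative):

```python
import numpy as np

# AE analysis on synthetic I(T) data: the barrier (0.30 eV), ideality factor
# and bias are illustrative inputs, and the prefactor A*A is set to 1.
KB = 8.617e-5                          # Boltzmann constant (eV/K)
phi, n, v_bias = 0.30, 1.09, 0.02

T = np.linspace(250.0, 400.0, 16)      # temperature window used in the paper (K)
i = T**2 * np.exp(-(phi - v_bias / n) / (KB * T))

# The slope of ln(I/T^2) vs 1/T gives -(Phi^AE - V_bias/n) / k_B
slope, _ = np.polyfit(1.0 / T, np.log(i / T**2), 1)
phi_ae = -KB * slope + v_bias / n
print(f"recovered barrier: {phi_ae:.3f} eV")   # matches the 0.30 eV input
```

For ideal thermionic data the procedure recovers the input barrier exactly; the deviations discussed below arise because the simulated transport is not purely thermionic.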
The $I$-$T$ dependence has been evaluated in a linear-response fashion, using the Landauer-B\"uttiker expression for the current, $I = \frac{2q}{h}\int T(E,\mu_L,\mu_R) [f(\frac{E-\mu_L}{k_B T}) - f(\frac{E-\mu_R}{k_B T})] dE$, with the transmission coefficient $T(E,\mu_L,\mu_R)$ evaluated self-consistently at an electron temperature of 300 K. Fully self-consistent simulations performed for selected temperatures show that this approach is valid within the range of $T$ considered, 250 K $\leq T \leq$ 400 K. Ideally, for a given doping the Schottky barrier depends exclusively on the M-S energy level alignment at the interface and therefore, disregarding image-force lowering effects, should remain constant with $V_{bias}$ \citep{Sze}. This implies that in Eq. \ref{eq:thermionic_3}, the left-hand side should equal the right-hand side at any value of $V_{bias}$. However, in the present case this condition is not verified, as the variation of the right-hand side term with $V_{bias}$ is larger than that of the left-hand side term (see Fig. \ref{fig:figure6}c,d). Indeed, for $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ ($n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$), a linear fit to the calculated values of the left-hand side of Eq. \ref{eq:thermionic_3} gives a slope of --664 meV/V (--177 meV/V), whereas the slope associated with the variation of the right-hand side term is --917 meV/V (--549 meV/V). Following the procedure of the AE method, we use the value of $n$ obtained from Fig.~\ref{fig:figure5} to subtract the bias dependence. The result is shown in Fig. \ref{fig:figure6}e,f, and it can be seen that this leads to an unphysical increase of $\Phi^\mathrm{AE}$ with $V_{bias}$. The error becomes more severe as $n_\mathrm{d}$ is increased. At low (intermediate) doping, $\Phi^\mathrm{AE}$ varies by 30$\%$ (325$\%$) in the range of $V_{bias}$ considered, leading to a change $\Delta\Phi^\mathrm{AE}$ = 3.73 $k_B T$ ($\Delta\Phi^\mathrm{AE}$ = 7.31 $k_B T$).
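The Landauer-B\"uttiker evaluation just described can be sketched with a toy, energy-independent transmission above a sharp barrier (a crude stand-in for the self-consistent $T(E,\mu_L,\mu_R)$ of the interface, with no tunneling below the barrier):

```python
import numpy as np

# Landauer-Buettiker current with a toy step-function transmission:
# T(E) = 1 above an assumed 0.3 eV barrier, 0 below.
KB = 8.617e-5                          # Boltzmann constant (eV/K)
TWO_Q_OVER_H = 7.748e-5                # 2q/h in A/eV

def fermi(e, mu, kT):
    return 1.0 / (1.0 + np.exp((e - mu) / kT))

def current(mu_L, mu_R, kT=KB * 300.0, barrier=0.3):
    e = np.linspace(-1.0, 2.0, 6001)
    t = np.where(e > barrier, 1.0, 0.0)          # toy transmission coefficient
    integrand = t * (fermi(e, mu_L, kT) - fermi(e, mu_R, kT))
    return TWO_Q_OVER_H * np.sum(integrand) * (e[1] - e[0])

# qV_bias = mu_R - mu_L: raising mu_R towards the barrier top produces a much
# larger current magnitude than lowering it by the same amount.
i_fwd = current(0.0, +0.1)
i_rev = current(0.0, -0.1)
print(f"|I(+0.1 V)| / |I(-0.1 V)| = {abs(i_fwd) / abs(i_rev):.0f}")
```

Even this minimal model reproduces a diode-like asymmetry between the two bias polarities, of the kind seen in Fig. \ref{fig:figure5}a.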
The analysis above shows that the intrinsic accuracy of the AE method depends strongly on multiple factors. On the one hand, the non-linear increase of $\Phi^\mathrm{AE}$ with $V_{bias}$ suggests that using a single value of $V_{bias}$ is not sufficient to obtain an accurate estimate of $\Phi$. On the other hand, the change of $\Delta\Phi^\mathrm{AE}$ with $n_\mathrm{d}$ at a given $V_{bias}$ indicates that the AE method is unsuited for comparative analyses of the variation of $\Phi$ with doping. These facts call for a more direct and general strategy for the characterization of M-S interfaces under working conditions. \subsection{\label{sec:Electronic properties of the interface}Electronic properties of the interface} \begin{figure} \includegraphics[width=0.5\textwidth]{figure7} \caption{(a) Scheme of the electronic structure of the Ag/Si interface at forward bias voltage. (b) Profile of $\langle V_H \rangle$ for different $V_{bias}$ at low doping. The energy on the vertical axis is relative to the electron affinity $\chi$ of bulk Si and the metal chemical potential $\mu_\mathrm{L}$. The vertical lines indicate $\phi_F$ at $V_{bias}$ = 0.02 V (blue, solid) and $V_{bias}$ = 0.2 V (red, dashed). (c) Solid curves: spectral current density $I(E)$ for different $V_{bias}$ at low doping. The dashed line indicates the value of $\Phi^\mathrm{pot}$. $\langle V_H \rangle$ and $I(E)$ curves calculated at increasingly higher $V_{bias}$ are shown in blue$\to$green$\to$yellow$\to$red color scale. (d,e) Same as (b,c), but for intermediate doping.} \label{fig:figure7} \end{figure} \begin{figure} \includegraphics[width=0.425\textwidth]{figure11} \caption{(a) Filled circles: energy of maximum spectral current $E(I^\mathrm{max})$ in Fig. \ref{fig:figure7}c as a function of $V_{bias}$ at low doping. The solid line is a guide to the eye. Filled squares: variation of the slope-dependent term of Eq. \ref{eq:thermionic_3} (same as in Fig. \ref{fig:figure6}c). The solid line is a guide to the eye.
Filled triangles: $\phi_F$ as a function of $V_{bias}$. The dashed line shows the bias dependence $V_{bias}/n$ from Eq. \ref{eq:thermionic_3}. The energy on the vertical axis is relative to the semiconductor chemical potential $\mu_R$. (b) Same as (a), but for intermediate doping.} \label{fig:figure11} \end{figure} A strong advantage of the DFT+NEGF simulations is that they allow the visualization of the electronic structure of the interface and the direct tracking of its changes when $n_\mathrm{d}$ and $V_{bias}$ are varied. This makes it possible to analyze the calculated $I$-$V_{bias}$ characteristics in terms of the electronic structure of the interface. Fig. \ref{fig:figure4} shows the local density of states \cite{LDOS} (LDOS) of the two-probe model at equilibrium ({\em i.e.}, at $V_{bias}$ = 0 V) along the direction normal to the interface at the different doping densities considered. Increasing the doping has a two-fold effect on the electronic properties of the system. On the one hand, $W_\mathrm{D}$ decreases from $\sim$20 nm to $\sim$2 nm when the doping is increased from $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ to $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$, as a direct consequence of the increased screening of the n-doped silicon. On the other hand, increasing $n_\mathrm{d}$ shifts the Fermi level towards the silicon conduction band. In particular, at $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ the conduction band minimum (CBM) of silicon at Z $>$ $W_\mathrm{D}$ lies at $E-\mu_{L,R}$ = +20 meV, whereas at $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ and $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$ it lies at $E-\mu_{L,R}$ = --40 meV and $E-\mu_{L,R}$ = --100 meV, respectively. It is also worth noting that the macroscopic average \cite{Baldereschi1988} of the Hartree potential along the direction normal to the interface, $\langle V_H \rangle$ (blue lines in Fig. \ref{fig:figure4}), follows the profile of the silicon CBM close to as well as far away from the interface.
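The macroscopic average is obtained by smoothing the planar-averaged potential over the lattice periodicity; a minimal sketch of this filtering (with a synthetic ramp-plus-ripple profile standing in for the actual $V_H$) is:

```python
import numpy as np

# Macroscopic average of a planar-averaged potential: sliding-window smoothing
# over one lattice period. The profile below is synthetic, not computed data.
def macroscopic_average(v_z, dz, period):
    """Convolve v_z with a square window one `period` wide."""
    n_w = max(1, int(round(period / dz)))
    window = np.ones(n_w) / n_w
    return np.convolve(v_z, window, mode="same")

dz, period = 0.005, 0.136        # grid step and Si(100) interlayer spacing (nm)
z = np.arange(0.0, 40.0, dz)
v = 0.01 * z + 0.2 * np.sin(2.0 * np.pi * z / period)  # bending + lattice ripple

v_macro = macroscopic_average(v, dz, period)
# Away from the edges, v_macro follows the 0.01*z band bending with the
# lattice-periodic ripple filtered out.
```

The window removes the atomic-scale oscillations while leaving the slowly varying band bending intact; when the two sides of the junction have different periodicities, one such average per material is used, in the spirit of Ref. \cite{Baldereschi1988}.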
Similarly to what happens for the electronic bands, $\langle V_H \rangle$ becomes constant at Z $>$ $W_\mathrm{D}$, indicating that the electronic structure starts to resemble that of the infinite periodic bulk. A closer analysis also reveals that a finite density of states extends considerably into the semiconductor side of the interface, due to the penetration of the metallic states into the semiconductor \cite{Heine1965,Louie1975,Louie1976}. Due to the lack of a well-defined electronic separation between the metal and the semiconductor, it is difficult to provide an unambiguous value for $\Phi$ based on the electronic structure data only. However, because $\langle V_H \rangle$ closely traces the CBM, it is still possible to estimate the Schottky barrier by defining $\Phi^\mathrm{pot}$ as the difference between $\mu_{L}$ and the maximum of $\langle V_H \rangle$ on the semiconductor side of the interface, $\langle V_H^\mathrm{max}\rangle$ (see Fig. \ref{fig:figure4}). We calculate $\Phi^\mathrm{pot}$ = 412 meV and $\Phi^\mathrm{pot}$ = 342 meV for $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ and $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$, respectively. For $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$ the barrier is considerably lower, $\Phi^\mathrm{pot}$ = 133 meV, reflecting the more pronounced Ohmic behavior observed in the $I$-$V_{bias}$ curves. Focusing on the low and intermediate doping cases, we note that the values of $\Phi^\mathrm{pot}$ are considerably larger than those of $\Phi^\mathrm{AE}$ at $V_{bias} \to$ 0 V. In particular, at low doping $\Phi^\mathrm{pot} - \Phi^\mathrm{AE}$ = 112 meV, whereas at intermediate doping the difference is even larger, $\Phi^\mathrm{pot} - \Phi^\mathrm{AE}$ = 286 meV. A consistent physical picture that rationalizes the $I$-$V_{bias}$ curves can be obtained by studying the doping dependence of the spectral current $I(E) = \frac{2q}{h} T(E,\mu_L,\mu_R) [f(\frac{E-\mu_L}{k_B T}) - f(\frac{E-\mu_R}{k_B T})]$. Fig.
\ref{fig:figure7}b,d shows the profiles of $\langle V_H \rangle$ obtained at forward bias in the bias range 0.02 V $< V_{bias} <$ 0.2 V for low and intermediate doping densities. As $V_{bias}$ is increased, $\langle V_H^\mathrm{max} \rangle$ shifts towards higher energies due to image-force effects \cite{Sze}, and $\langle V_H \rangle$ becomes progressively flatter on the semiconductor side. The overall result of these changes is a decrease of the barrier $\phi_F$ associated with the thermionic emission process from the Si(100) conduction band to Ag(100) (see Fig. \ref{fig:figure7}a): \begin{equation} \phi_F = \Phi - V_{bias}/n. \label{eq:phi_1} \end{equation} The associated spectral currents $I(E)$ are shown in Fig. \ref{fig:figure7}c,e. For an interface in which the only contribution to transport comes from thermionic emission, $I(E)$ should be non-zero only at $E-\mu_L > \Phi^\mathrm{pot}$. However, in the present case $I(E)$ is finite also at $E-\mu_L < \Phi^\mathrm{pot}$, indicating that electron tunneling has a non-negligible contribution to $I$. This contribution is much larger for intermediate than for low doping densities. Indeed, at $V_{bias} \to$ 0 V, the position of $E(I^\mathrm{max})$ lies very close to $\Phi^\mathrm{pot}$ in the low doping case, as expected in the case of a nearly ideal Schottky diode. Conversely, at intermediate doping $E(I^\mathrm{max})$ lies well below $\Phi^\mathrm{pot}$, indicating that electron tunneling has become the dominant transport process. The trend of $I(E)$ with $V_{bias}$ is consistent with these considerations. At low doping, $E(I^\mathrm{max})$ is pinned to $\langle V_H^\mathrm{max} \rangle$, whereas the onset of finite $I(E)$ at $E-\mu_L < \Phi^\mathrm{pot}$ moves towards higher energies, following the variation of $\langle V_H \rangle$. 
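The qualitative difference between the two regimes can be sketched with a toy spectral current in which the transmission decays exponentially below an assumed barrier (a crude stand-in for tunneling through the depletion region) and the occupation difference is approximated by a Boltzmann factor:

```python
import numpy as np

# Toy I(E): exponential sub-barrier transmission with decay scale kappa,
# mimicking the effect of the depletion width (thin barrier -> large kappa).
KT, PHI = 0.02585, 0.4                 # eV; PHI plays the role of Phi^pot

def spectral_current(e, kappa):
    t = np.where(e >= PHI, 1.0, np.exp((e - PHI) / kappa))  # toy transmission
    return t * np.exp(-e / KT)          # Boltzmann envelope of f_L - f_R

e = np.linspace(0.0, 0.8, 2001)
for kappa, label in [(0.005, "low doping"), (0.1, "intermediate doping")]:
    e_max = e[np.argmax(spectral_current(e, kappa))]
    print(f"{label}: E(I_max) = {e_max:.2f} eV")
```

When the sub-barrier transmission decays quickly (thick depletion region), the maximum of $I(E)$ sits at the barrier top; when it decays slowly (thin depletion region), the maximum moves down towards the chemical potential, reproducing the trend of Fig. \ref{fig:figure7}c,e.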
On the other hand, at intermediate doping the overall shape of $I(E)$ remains the same as $V_{bias}$ is increased, and the variation of $E(I^\mathrm{max})$ follows closely that of $\langle V_H \rangle$. We also notice the presence of a narrow resonance at $E-\mu_L$ = +0.395 eV, whose position is independent of $n_\mathrm{d}$ and $V_{bias}$. This is due to a localized electronic state at the interface which is pinned to $\mu_L$. The variation of $\phi_F$ with $V_{bias}$ can be related to the slope-dependent term of Eq. \ref{eq:thermionic_3} by assuming $\Phi$ = $\Phi^\mathrm{AE}$ in Eqs. \ref{eq:thermionic_3}-\ref{eq:phi_1}, thus allowing for a direct comparison with the AE data (see Fig. \ref{fig:figure11}). Independently of the value of $V_{bias}$, the slope-dependent term always lies below $\phi_F$, due to the missing contribution of electron tunneling in the AE method: the latter assumes that the current has a purely thermionic origin, and consequently predicts a value of $\phi_F$ lower than the actual one. In agreement with the previous analyses, this deviation is considerably larger in the intermediate doping case, due to the much larger contribution of electron tunneling. \subsection{\label{sec:Comparison of the two-probe with the slab model}Comparison of the two-probe with the slab model} \begin{figure} \includegraphics[width=0.4\textwidth]{figure8} \caption{Profile of $\langle V_H \rangle$ along the direction Z normal to the interface plane, calculated for the two-probe setup (blue solid line) and for the short (green dotted line) and long (red dashed line) slab models, at doping densities $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$ (a), $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$ (b) and $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$ (c). The vertical black solid line indicates the position of the interface.
The vertical green (red) line indicates the position of the Si(100) layer farthest from the interface in the short (long) slab model.} \label{fig:figure8} \end{figure} \begin{figure} \includegraphics[width=0.425\textwidth]{figure9} \caption{LDOS of the short (a,c,e) and long (b,d,f) slab models for different effective doping densities. Doping densities: (a,b) $n_\mathrm{d}$ = 10$^{18}$ cm$^{-3}$; (c,d) $n_\mathrm{d}$ = 10$^{19}$ cm$^{-3}$; (e,f) $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$. The energy on the Y-axis is scaled with respect to the system Fermi energy $\mathrm{E_F}$.} \label{fig:figure9} \end{figure} The results obtained using the two-probe model can be used as a reference to validate the use of finite-size models to describe the Ag(100)/Si(100) interface. Such models are integral parts of the band alignment method often used to evaluate $\Phi$ using conventional DFT \cite{VandeWalle1987,VandeWalle1989,Fraciosi1996,Peressi1998}. The method relies on aligning the electronic band structures of the two bulk materials forming the interface on an absolute energy scale by using a reference quantity, often $\langle V_H \rangle$ \cite{Baldereschi1988}. The perturbation of the bulk electronic structure in each material due to the presence of the interface is accounted for by either a slab \cite{Butler2011} or a fully periodic \cite{Niranjan2006} model. $\langle V_H \rangle$ is then used as a common reference to align the electronic structure obtained from independent calculations of the two bulk materials. Despite its widespread use, this strategy relies on two drastic assumptions. Firstly, it is implicitly assumed that the electronic properties of the interface are independent of the doping level of the semiconductor. Secondly, it is assumed that the electronic properties in the central part of each side of the interface model are a good approximation to those of the two bulk materials. Fig.
\ref{fig:figure8} shows a comparison between $\langle V_H \rangle$ obtained at the different doping densities considered for the two slab models (short and long, see Section \ref{sec:System}) and for the two-probe setup. Note that introducing an effective doping in the slab model, which was not taken into account in previous slab models for Ag/Si interfaces \cite{Butler2011}, aims at better mimicking the two-probe simulation in which silver is interfaced with n-doped silicon. The profiles of $\langle V_H \rangle$ of the three different systems have been aligned according to the value of $\mu_L$, the side at which Dirichlet boundary conditions are used for the three systems. Irrespective of the doping level, the doped short slab model provides a poor description of the variation of $\langle V_H \rangle$ at the interface. In particular, $\langle V_H^\mathrm{max} \rangle$ is always $\sim$200 meV higher than that obtained for the two-probe model. Furthermore, on the Si(100) side of the interface, $\langle V_H \rangle$ does not decay correctly with the distance from the interface for the short slab model and, even more importantly, it does not converge to a constant value. The situation improves by increasing the width of the Si(100) layer. For the long slab model, at increasingly larger doping densities the profile of $\langle V_H \rangle$ resembles more and more that of the two-probe model. Indeed, in the best case scenario, \emph{i.e.}, at $n_\mathrm{d} = $10$^{20}$ cm$^{-3}$, the profile of $\langle V_H \rangle$ evaluated using the long slab becomes constant in the center of the Si(100) region, albeit still higher than that of the reference by $\sim$100 meV. The limitations of the slab model in reproducing the electronic structure at the interface are also evident by looking at the corresponding LDOS plots (see Fig. \ref{fig:figure9}).
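As an aside, the doping dependence of the space-charge width seen in these profiles is qualitatively captured by the textbook depletion approximation. The following minimal estimate uses the standard formula $W_\mathrm{D} = \sqrt{2\varepsilon_s \phi_{bi}/(q n_\mathrm{d})}$ with an assumed built-in potential of 0.4 V (an illustrative value, not one taken from the text):

```python
import numpy as np

eps0 = 8.854e-12            # vacuum permittivity (F/m)
q = 1.602e-19               # elementary charge (C)
eps_si = 11.7 * eps0        # static permittivity of Si
phi_bi = 0.4                # V, assumed built-in potential

n_d = np.array([1e18, 1e19, 1e20]) * 1e6          # cm^-3 -> m^-3
W_D = np.sqrt(2.0 * eps_si * phi_bi / (q * n_d))  # depletion width (m)
```

This reproduces the expected trend: tens of nanometers at low doping, shrinking to a few nanometers at $n_\mathrm{d}$ = 10$^{20}$ cm$^{-3}$, consistent with the widths over which the two-probe $\langle V_H \rangle$ relaxes.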
Similarly to what is observed for $\langle V_H \rangle$, the short slab model fails to reproduce the band bending observed at low doping, as well as the correct trend in the decrease of W$_\mathrm{D}$ as doping is increased. The latter is qualitatively reproduced using the long slab model. However, these modest improvements going from the short to the long slab model come at the expense of a much higher computational cost. In fact, each DFT calculation for the long slab model takes on average 338.6 s/step. Conversely, each DFT+NEGF calculation using the two-probe model is approximately one order of magnitude faster, taking on average 46.6 s/step. This suggests that, in addition to computational efficiency, there are also more fundamental reasons for making DFT+NEGF the method of choice for describing M-S interfaces, as in the two-probe setup the two main assumptions of the band alignment method are naturally lifted. We emphasize that, although the results presented in this paper are specific to the Ag(100)/Si(100) interface only, similar conclusions are likely to hold true for all systems for which the poor screening on the semiconductor side of the interface results in space--charge effects that extend over widths of the order of several nanometers. \section{\label{sec:Conclusions} Conclusions} In this work, we have presented an approach based on density functional theory (DFT) and non-equilibrium Green's functions (NEGF) for realistic modeling of metal-semiconductor (M-S) interfaces. Our approach is designed to deal effectively and correctly with the non-periodic nature of the interface, with the semiconductor band gap and with the doping on the semiconductor side of the contact, and allows for a direct theory-experiment comparison as it can simulate $I$-$V_{bias}$ characteristics.
Using a Ag/Si interface relevant for photovoltaic applications as a model system, we have shown that our approach is a better alternative to (i) analytical approaches such as the ``Activation Energy" (AE) method to analyze the $I$-$V_{bias}$ characteristics of non-ideal rectifying systems with non-negligible tunneling contributions, and (ii) finite-size slab models to describe the interface between metals and doped semiconductors. This DFT+NEGF approach could pave the way for a novel understanding of M-S interfaces beyond the limitations imposed by traditional analytical and atomistic methods. \section*{\label{sec:Acknowledgements} Acknowledgements} QuantumWise acknowledges support from Innovation Fund Denmark, grant Nano-Scale Design Tools for the Semi-conductor Industry (Grant No. 79-2013-1). The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 619326 (III-V-MOS Project). We thank Walter A. Harrison, Andreas Goebel, and Paul A. Clifton at Acorn Technologies for their input on this work. DS acknowledges support from the H.C. {\O}rsted-COFUND postdoc program at DTU.
\section{Introduction} \subsection{Statement of Results for Bernoulli Convolutions} The main result of this paper is to give a sufficient condition for a self similar measure to be absolutely continuous. For simplicity we will first state this result in the case of Bernoulli convolutions. First we need to define Bernoulli convolutions. \begin{definition}[Bernoulli Convolution] Given some $\lambda \in (0,1)$ we define the Bernoulli convolution with parameter $\lambda$ to be the law of the random variable $Y$ given by, \begin{equation*} Y = \sum_{n=0}^{\infty} X_n \lambda^n, \end{equation*} where the $X_n$ are i.i.d. random variables which have probability $\frac{1}{2}$ of being $1$ and probability $\frac{1}{2}$ of being $-1$. We will denote this measure by $\mu_{\lambda}$. \end{definition} Bernoulli convolutions are the most well studied examples of self-similar measures, which are an important object in fractal geometry. We will discuss these further in Section \ref{litreview}. Despite much effort it is still not known for which $\lambda$ the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. The results of this paper contribute towards answering this question. \begin{definition}[Mahler Measure] Given some algebraic number $\alpha_1$ with conjugates $\alpha_2, \alpha_3, \dots, \alpha_n$ whose minimal polynomial (over $\mathbb{Z}$) has leading coefficient $C$ we define the Mahler measure of $\alpha_1$ to be, \begin{equation*} M_{\alpha_1} = |C| \prod_{i=1}^{n} \max \{|\alpha_i|, 1\}. \end{equation*} \end{definition} \begin{theorem} \label{mainbernoulli} Suppose that $\lambda \in \left( \frac{1}{2}, 1 \right)$ is algebraic and has Mahler measure $M_{\lambda}$.
Suppose also that $\lambda$ is not the root of a non-zero polynomial with coefficients $0, \pm 1$ and that, \begin{equation} (\log M_{\lambda} - \log 2) (\log M_{\lambda})^2 < \frac{1}{27} (\log M_{\lambda} - \log \lambda^{-1})^3 \lambda^4. \label{mainbernoullieqn} \end{equation} Then the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. \end{theorem} This is a corollary of a more general statement about a more general class of self similar measures which we will discuss in Section \ref{intro:generalIFS}. The requirement \eqref{mainbernoullieqn} is equivalent to $M_{\lambda} < F(\lambda)$ where $F:(\frac{1}{2}, 1) \to \mathbb{R}$ is some strictly increasing continuous function satisfying $F(\lambda) > 2$. Figure \ref{fgraph} displays the graph of $F$. \begin{figure}[h] \centering \includegraphics[scale=0.8]{fgraph} \caption{The graph of $F$} \label{fgraph} \end{figure} It is worth noting that $F(\lambda) \to 2^{\frac{27}{26}} \approx 2.054$ as $\lambda \to 1$. The fact that $F(\lambda) > 2$ is important because the box packing principle means that the requirement that $\lambda$ is not the root of a polynomial with coefficients $0, \pm 1$ forces $M_{\lambda} \geq 2$. It is also worth noting that this result is stronger than the result obtained by Garsia in \cite{garsiasep} where he showed that if $M_{\lambda} = 2$ and $\lambda \in (\frac{1}{2}, 1)$ then the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. In Figure \ref{numexamples} there are some examples of $\lambda$ satisfying the conditions of Theorem \ref{mainbernoulli} found by a simple computer search.
\begin{figure}[h] \begin{center} \begin{tabular}{ | m{19em} | c| c| } \hline Minimal Polynomial & Mahler Measure & $\lambda$\\ \hline $X^{7}-X^{6}-2X^{5}-X^{2}+X+1$ & 2.010432 & 0.879161\\ $X^{7}+2X^{6}-X-1$ & 2.015159 & 0.932864\\ $X^{8}+2X^{7}-1$ & 2.007608 & 0.860582\\ $X^{9}-2X^{8}-X+1$ & 2.003861 & 0.799533\\ $X^{9}+2X^{8}-X-1$ & 2.003861 & 0.949560\\ $X^{10}-2X^{9}-X^{2}+1$ & 2.005754 & 0.852579\\ $X^{10}-2X^{9}-X+1$ & 2.001940 & 0.813972\\ $X^{10}-X^{9}-2X^{8}-X^{5}+X^{4}+X^{3}-X^{2}+1$ & 2.014180 & 0.911021\\ $X^{10}-X^{9}-X^{8}-2X^{7}-X^{5}+X^{4}+X^{2}+1$ & 2.012241 & 0.939212\\ $X^{10}-X^{9}-X^{8}-X^{7}-2X^{6}-X^{5}+X^{3}+X^{2}+X+1$ & 2.017567 & 0.953949\\ $X^{10}-2X^{8}-3X^{7}-2X^{6}-X^{5}+X^{3}+2X^{2}+2X+1$ & 2.008264 & 0.968459\\ $X^{10}+X^{9}-2X^{8}+X^{7}+X^{6}-X^{5}+X^{4}-X^{3}+X-1$ & 2.016061 & 0.875809\\ $X^{10}+2X^{9}-X^{4}-1$ & 2.030664 & 0.946934\\ $X^{10}+2X^{9}-1$ & 2.001936 & 0.888810\\ $X^{10}+3X^{9}+3X^{8}+3X^{7}+2X^{6}-2X^{4}-3X^{3}-3X^{2}-2X-1$ & 2.047156 & 0.984474\\ \hline \end{tabular} \end{center} \caption{Examples of $\lambda$ for which the Bernoulli convolution with parameter $\lambda$ can be shown to be absolutely continuous using the methods of this paper} \label{numexamples} \end{figure} The smallest value of $\lambda$ which we were able to find for which the Bernoulli convolution with parameter $\lambda$ can be shown to be absolutely continuous using these methods is $\lambda \approx 0.799533$ with minimal polynomial $X^9 - 2 X^8 - X + 1$. This is much smaller than the examples given in \cite{varjupaper}, the smallest of which was $\lambda = 1 - 10^{-50}$. We also show that for all $n \geq 13$ the Bernoulli convolution with parameter given by the root of the polynomial $X^n - 2X^{n-1} - X + 1$ in $\left( \frac{1}{2}, 1 \right)$ is absolutely continuous.
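The entries of Figure \ref{numexamples} can be reproduced numerically. The following sketch takes the minimal polynomial $X^9 - 2X^8 - X + 1$, computes the Mahler measure from its roots, and checks the hypothesis of Theorem \ref{mainbernoulli}:

```python
import numpy as np

# minimal polynomial X^9 - 2X^8 - X + 1 from the table above
coeffs = [1, -2, 0, 0, 0, 0, 0, 0, -1, 1]
roots = np.roots(coeffs)
# Mahler measure: |leading coefficient| times product of max(|root|, 1)
mahler = abs(coeffs[0]) * np.prod(np.maximum(np.abs(roots), 1.0))

# the real root in (1/2, 1) is the Bernoulli parameter lambda
lam = next(r.real for r in roots if abs(r.imag) < 1e-6 and 0.5 < r.real < 1.0)

# hypothesis: (log M - log 2)(log M)^2 < (1/27)(log M - log 1/lam)^3 lam^4
lhs = (np.log(mahler) - np.log(2.0)) * np.log(mahler) ** 2
rhs = (np.log(mahler) - np.log(1.0 / lam)) ** 3 * lam ** 4 / 27.0
```

Running this confirms the tabulated values $M_\lambda \approx 2.003861$ and $\lambda \approx 0.799533$, and that the inequality of the theorem holds with some margin.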
Finally we give some explicit examples of absolutely continuous self similar measures in $\mathbb{R}^2$ which cannot be expressed as the product of absolutely continuous measures on $\mathbb{R}$. \subsection{Review of Existing Literature} \label{litreview} For a thorough survey on Bernoulli convolutions see \cite{firstrev} or \cite{secondrev}. For a review of the most recent developments see \cite{varjulitrev}. We will only discuss the case of unbiased Bernoulli convolutions in this review. Bernoulli convolutions were first introduced by Jessen and Wintner in \cite{intobaper}. They have also been studied by Erd\H{o}s. In \cite{erdosnotcont} Erd\H{o}s showed that the Bernoulli convolution with parameter $\lambda$ is not absolutely continuous whenever $\lambda^{-1} \in (1,2)$ is a Pisot number. In his proof he exploited the property that powers of Pisot numbers approximate integers exponentially well. These are currently the only values of $\lambda \in \left( \frac{1}{2}, 1 \right)$ for which the Bernoulli convolution with parameter $\lambda$ is known not to be absolutely continuous. The typical behaviour for Bernoulli convolutions with parameters in $\left( \frac{1}{2}, 1 \right)$ is absolute continuity. In \cite{erdosabscon}, by a beautiful combinatorial argument, Erd\H{o}s showed that there is some $c < 1$ such that for almost all $\lambda \in (c,1)$ the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. This was extended by Solomyak in \cite{solomyak} to show that we may take $c=\frac{1}{2}$. This was later extended by Shmerkin in \cite{firstsmherkin} where he showed that the set of exceptional parameters has Hausdorff dimension $0$.
These results have been further extended by, for example, Shmerkin in \cite{shmerkin} who showed that for every $\lambda \in \left( \frac{1}{2}, 1 \right)$ apart from an exceptional set of zero Hausdorff dimension the Bernoulli convolution with parameter $\lambda$ is absolutely continuous with density in $L^q$ for all finite $q>1$. In a groundbreaking paper \cite{hochman} Hochman made progress on a related problem by showing that if $\lambda \in (\frac{1}{2}, 1)$ is algebraic and not the root of a polynomial with coefficients $-1, 0, +1$ then the Bernoulli convolution with parameter $\lambda$ has dimension $1$. Much of the progress in the last decade builds on the results of Hochman. There are relatively few known explicit examples of $\lambda$ for which the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. It can easily be shown that, for example, the Bernoulli convolution with parameter $2^{-\frac{1}{k}}$ is absolutely continuous when $k$ is a positive integer. This is because it may be written as the convolution of the Bernoulli convolution with parameter $\frac{1}{2}$ with another measure. In \cite{garsiasep} Garsia showed that if $\lambda \in (\frac{1}{2}, 1)$ has Mahler measure $2$ then the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. It is worth noting that the condition that $\lambda$ has Mahler measure $2$ implies that $\lambda$ is not the root of a polynomial with coefficients $0, \pm 1$. This means that our result is very similar to the result of Garsia. There has also been recent progress in this area by Varj\'u in \cite{varjupaper}. In his paper he showed that, provided $\lambda$ is sufficiently close to $1$ depending on the Mahler measure of $\lambda$, the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. The techniques we will use in this paper are similar in many ways to those used by Varj\'u; however, we introduce several crucial new ingredients.
Perhaps the most important innovation of this paper is the quantity, which we call the ``detail of a measure around a scale'', which we will use in place of entropy. We will discuss this further in Section 1.4. \subsection{Results for More General IFS} \label{intro:generalIFS} In this section we will discuss how the results of this paper apply to a more general class of iterated function systems. \begin{definition}[Iterated Function System] Given some $n, d \in \mathbb{N}$, some continuous bijective functions $S_1, S_2, ..., S_n : \mathbb{R}^d \to \mathbb{R}^d$ and a probability vector $\mathbf{p} = (p_1, p_2, \dots, p_n)$ we say that $F = (n, d, (S_i)_{i=1}^n, \mathbf{p})$ is an iterated function system. \end{definition} \begin{definition}[Self similar measure] Given some iterated function system $F = (n, d, (S_i)_{i=1}^n, (p_i)_{i=1}^n)$ in which all of the $S_i$ are similarities with Lipschitz constant less than $1$ we say that a probability measure $\mu$ is a self-similar measure generated by $F$ if, \begin{equation*} \mu = \sum_{i=1}^n p_i \mu \circ S_i ^{-1}. \end{equation*} \end{definition} It is a result of J. Hutchinson (\cite{hutchinsonpaper}, Section 3.1, part 5) that under these conditions there is a unique self similar measure given by some iterated function system. Given an iterated function system satisfying these conditions let $\mu_F$ denote the unique self similar measure generated by $F$. This paper will only deal with a very specific class of iterated function systems. Specifically, we care about the case where there is some orthogonal transformation $U$, some constant $\lambda \in (0, 1)$ and vectors $a_1, \dots, a_n \in \mathbb{R}^d$ such that for all $i = 1, \dots, n$ we have, \begin{equation*} S_i: x \mapsto \lambda U x + a_i. \end{equation*} We will describe such iterated function systems and the self similar measures generated by them as having a uniform contraction ratio with uniform rotation.
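Self-similar measures of this type can be sampled by iterating randomly chosen maps $S_i$ (the standard chaos game). A minimal Monte Carlo sketch in $\mathbb{R}^2$, with an assumed rotation by $\pi/4$, $\lambda = \frac{1}{2}$ and translations $\pm e_1$ (all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, theta = 0.5, np.pi / 4                       # assumed contraction and rotation
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation matrix in R^2
a = np.array([[1.0, 0.0], [-1.0, 0.0]])           # translations, p = (1/2, 1/2)

n_samples, depth = 50_000, 60
x = np.zeros((n_samples, 2))
for _ in range(depth):                            # chaos game: x -> lam * U x + a_j
    j = rng.integers(0, 2, n_samples)
    x = lam * (x @ U.T) + a[j]

mean_sq = (x ** 2).sum(axis=1).mean()             # estimates E|Y|^2
```

For this symmetric example independence gives $\mathbb{E}[Y] = 0$ and $\mathbb{E}|Y|^2 = \sum_i \lambda^{2i} = 1/(1-\lambda^2) = \frac{4}{3}$, which the sample moments reproduce.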
It is easy to see from the definitions that self-similar measures with uniform rotation and uniform contraction ratio can be expressed in the following way. \begin{lemma} Let $F = (n, d, (S_i)_{i=1}^n, (p_i)_{i=1}^n)$ be an iterated function system with uniform contraction ratio and uniform rotation. Let $\lambda \in (0,1)$, let $U$ be an orthogonal transformation and let $a_1, \dots, a_n \in \mathbb{R}^d$ be vectors such that, \begin{equation*} S_i : x \mapsto\lambda U x +a_i. \end{equation*} Let $X_0, X_1, X_2, \dots$ be i.i.d. random variables such that $\mathbb{P}[X_0 = a_i] = p_i$ for $i=1,\dots,n$ and let, \begin{equation*} Y = \sum_{i=0}^{\infty} \lambda^i U^i X_i. \end{equation*} Then the law of $Y$ is $\mu_F$. \end{lemma} \begin{definition} We define the $k$-step support of an iterated function system $F$ to be given by, \begin{equation*} V_{F,k} := \left\{ S_{j_1} \circ S_{j_2} \circ \dots \circ S_{j_k} (0) : j_1, j_2, \dots, j_k \in \{ 1, 2, ..., n\}\right\}. \end{equation*} \end{definition} \begin{definition} Let $F$ be an iterated function system. We define the separation of $F$ after $k$ steps to be, \begin{equation*} \Delta_{F, k} := \inf \{ |x-y| : x, y \in V_{F,k}, x \neq y \}. \end{equation*} \end{definition} \begin{definition} Given an iterated function system $F$ let the splitting rate of $F$, which we will denote by $M_F$, be defined by, \begin{equation*} M_F := \limsup \limits_{k \to \infty} \left(\Delta_{F, k} \right) ^{-\frac{1}{k}}. \end{equation*} \end{definition} \begin{definition} With $F$ and $X_i$ defined as above let $h_{F,k}$ be defined by, \begin{equation*} h_{F,k} := H\left( \sum_{i=0}^{k-1} \lambda^i U^i X_i \right). \end{equation*} Here $H(\cdot)$ denotes the Shannon entropy. \end{definition} \begin{definition}[Garsia Entropy] Given an iterated function system $F$ with uniform contraction ratio and uniform rotation define the \emph{Garsia entropy} of $F$ by, \begin{equation*} h_F := \liminf_{k \to \infty} \frac{1}{k} h_{F, k}.
\end{equation*} \end{definition} We now have the notation necessary to describe the main result. \begin{theorem} \label{mainresult} Let $F$ be an iterated function system on $\mathbb{R}^d$ with uniform contraction ratio and uniform rotation. Suppose that $F$ has Garsia entropy $h_F$, splitting rate $M_F$, and uniform contraction ratio $\lambda$. Suppose further that, \begin{equation*} (d \log M_F - h_F)(\log M_F)^2 < \frac{1}{27} (\log M_F - \log \lambda^{-1})^3 \lambda^4. \end{equation*} Then the self similar measure $\mu_F$ is absolutely continuous. \end{theorem} We will give examples of self similar measures which can be shown to be absolutely continuous using these methods in Section \ref{examples}. \subsection{Outline of Proof} The main innovation of this paper is to use a different quantity for measuring the smoothness of a measure at a given scale. Before defining this quantity we will need the following definition. \newtheorem*{definition:spherical_normal}{Definition \ref{spherical_normal}} \begin{definition:spherical_normal} Given an integer $d \in \mathbb{N}$ and some $y>0$ let $n_y^{(d)}$ be the density function of the multivariate normal distribution with covariance matrix $yI$ and mean $0$. Specifically let, \begin{equation*} n_y^{(d)}(x) = (2 \pi y)^{-\frac{d}{2}} e^{-\frac{1}{2y} \sum_{i=1}^d x_i^2}. \end{equation*} When the value of $d$ is clear from context we will usually just write $n_y$. \end{definition:spherical_normal} The quantity we use to measure the smoothness at a given scale is the following. \newtheorem*{definition:detail}{Definition \ref{detail}} \begin{definition:detail}[Detail Around a Scale] Given a compactly supported probability measure $\mu$ on $\mathbb{R}^d$ and some $r>0$ we define the \emph{detail of $\mu$ around scale $r$} by, \begin{equation*} s_r(\mu) := r^2 \frac{ \Gamma \left( \frac{d}{2} \right)}{2} \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \left| \left| \left.
\mu * \frac{\partial}{\partial y} n_y \right|_{y=r^2} \right| \right|_1. \end{equation*} \end{definition:detail} Here the constant $r^2 \frac{ \Gamma \left( \frac{d}{2} \right)}{2} \left( \frac{d}{2e} \right) ^{-\frac{d}{2}}$ is chosen so that $s_r(\mu) \in (0,1]$ for all probability measures $\mu$ on $\mathbb{R}^d$. The main advantage of this quantity is that it allows much better quantitative estimates for the increase in smoothness under convolution than those given in \cite{varjupaper}. Specifically we prove the following: \begin{theorem} \label{manyconv} Let $n \in \mathbb{N}$, $K>1$ and suppose that there are measures $\mu_1, \mu_2, \dots,$ $ \mu_n$ on $\mathbb{R}^d$ and positive real numbers $0 < \alpha_1 \leq \alpha_2 \leq \dots \leq \alpha_n < C_{K, d}^{-1}$. Let $m = \frac{\log n}{\log \frac{3}{2}}$ and let $r>0$. Suppose that for all $t \in \left[ 2^{-\frac{m}{2}}r, K^m \alpha_1^{-m 2^m }r \right]$ and $i \in \{1, 2, ..., n\}$ we have $s_t(\mu_i) \leq \alpha_i$. Then we have, \begin{equation*} s_r(\mu_1 * \mu_2 * \dots * \mu_n) \leq C_{K, d} ^ {n-1} \alpha_1 \alpha_2 \dots \alpha_n . \end{equation*} \end{theorem} Here $C_{K, d}$ is some constant depending only on $K$ and $d$ which is defined in Section 2. This is much stronger than the equivalent theorem given in \cite{varjupaper} in which P. Varj\'u proves that, \begin{equation*} (1 - H(\mu * \tilde{\mu} ; r |2r) ) \leq C \alpha^2 \left( \log \alpha^{-1} \right)^3 \end{equation*} where $C = 10^8$, $\alpha \in (0, \frac{1}{2})$ satisfies, \begin{equation*} \alpha \geq \sup_{\nu \in \{\mu, \tilde{\mu}\}, t \in [\alpha^3r, \alpha^{-3}r]}1- H(\nu ; t | 2t) \end{equation*} and $1 - H(\mu ; r |2r) $ is a quantity defined in \cite{varjupaper} which measures how smooth a measure is at scale $r$. We will typically apply Theorem \ref{manyconv} in the case $K \to \infty$. It is worth noting that $C_{K, 1} \to \frac{8}{\Gamma \left( \frac{1}{2} \right)} \left( \frac{1}{2 e} \right) ^ {\frac{1}{2}} \approx 1.93577$.
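Both this limiting constant and the normalisation of $s_r$ can be checked numerically. For a Gaussian $N(0, vI)$ in dimension $d=1$ one can verify directly from the definition that $s_r(N(0,v)) = r^2/(r^2+v)$ (this closed form is our own illustration, not a statement from the text, and follows because $\mu * n_y = n_{y+v}$); a sketch:

```python
import math
import numpy as np

# the limit of C_{K,1} quoted above: 8 / Gamma(1/2) * (1/(2e))^(1/2)
c_lim = 8 / math.gamma(0.5) * (1 / (2 * math.e)) ** 0.5

# detail of a Gaussian N(0, v) at scale r in dimension d = 1:
# mu * n_y at y = r^2 is the Gaussian density n_{r^2 + v}
r, v = 1.0, 1.0
s = r ** 2 + v
x = np.linspace(-15.0, 15.0, 300001)
# y-derivative of n_s, evaluated at total variance s = r^2 + v
deriv = (x ** 2 / (2 * s ** 2) - 1 / (2 * s)) \
    * (2 * np.pi * s) ** -0.5 * np.exp(-x ** 2 / (2 * s))
l1 = np.abs(deriv).sum() * (x[1] - x[0])          # Riemann sum for the L^1 norm
detail = r ** 2 * (math.gamma(0.5) / 2) * (0.5 / math.e) ** -0.5 * l1
```

The numerical value of `detail` comes out as $r^2/(r^2+v) = \frac{1}{2}$, illustrating how a single Gaussian already has detail strictly below $1$ at every scale.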
This is a much better constant than the one obtained in \cite{varjupaper}. Later in Section 2 we will show that if $s_r(\mu) \to 0$ sufficiently quickly then $\mu$ is absolutely continuous. Specifically we will show, \newtheorem*{lemma:isabscont}{Lemma \ref{isabscont}} \begin{lemma:isabscont} Suppose that $\mu$ is a measure on $\mathbb{R}^d$ and that there exists some constant $\beta>1$ such that for all sufficiently small $r>0$ we have, \begin{equation*} s_r(\mu) < \left( \log r^{-1} \right) ^{-\beta}, \end{equation*} then $\mu$ is absolutely continuous. \end{lemma:isabscont} In order to show that a self similar measure $\mu_F$ is absolutely continuous we will express it as a convolution of many measures and using Theorem \ref{manyconv} show that detail around a scale decreases under convolution. In the case of Bernoulli convolutions this is done by noting that the Bernoulli convolution with parameter $\lambda$ is the law of the random variable $Y$ given by, \begin{equation*} Y = \sum_{i=0}^{\infty} \pm \lambda^i, \end{equation*} with each $\pm$ being independent with probability $\frac{1}{2}$ of being positive or negative. We then let, \begin{equation*} Y_j = \sum_{i=m_j}^{m_{j+1}-1} \pm \lambda^i \end{equation*} for $j = 0, \dots, n$ for some $n$. We also take $m_{n+1} = \infty$ and $m_0 = 0$. We then let $\nu_j$ be the law of $Y_j$ and note that the Bernoulli convolution with parameter $\lambda$, which we denote by $\mu_{\lambda}$, is given by, \begin{equation*} \mu_{\lambda} = \nu_0 * \dots * \nu_n. \end{equation*} To make this argument work we need to find some upper bound for the detail of the measures $\nu_j$. In the case of Bernoulli convolutions this is done by using entropy and a result dating back to at least Garsia (Lemma 1.51 in \cite{garsiasep}). This can be generalized to fit the requirements of the more general case.
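The block decomposition $\mu_\lambda = \nu_0 * \dots * \nu_n$ described above can be checked on a truncated model. The sketch below uses $\lambda = 0.8$ (an illustrative value, not one satisfying the theorem's hypotheses) with six terms split into two equal blocks, rather than the entropy-motivated cut points $m_j$ used in the actual proof:

```python
import itertools
import numpy as np

lam = 0.8  # illustrative parameter only

def block_atoms(lo, hi):
    # atoms of nu_j: all values of sum_{i=lo}^{hi-1} (+-1) * lam^i
    return [sum(e * lam ** i for e, i in zip(eps, range(lo, hi)))
            for eps in itertools.product((-1, 1), repeat=hi - lo)]

# direct six-term truncation of mu_lambda versus the convolution nu_0 * nu_1
direct = sorted(block_atoms(0, 6))
blocks = sorted(a + b for a in block_atoms(0, 3) for b in block_atoms(3, 6))
```

Since $0.8$ is not a root of any polynomial with coefficients $0, \pm 1$, all $2^6$ atoms are distinct, and the two lists of atoms coincide, as the convolution identity demands.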
Specifically, the two results we will need are the following. \newtheorem*{theorem:entropytodetail}{Theorem \ref{entropytodetail}} \begin{theorem:entropytodetail} Let $\mu$ and $\nu$ be compactly supported probability measures on $\mathbb{R}^d$ and let $r, u$ and $v$ be positive real numbers such that $r^2=u+v$. Then, \begin{equation*} s_r(\mu * \nu)\leq \frac{r^2}{2} \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \sqrt{ \frac{\partial }{\partial u} H(\mu * n_u) \frac{\partial }{\partial v} H(\nu * n_v) }. \end{equation*} \end{theorem:entropytodetail} And, \newtheorem*{lemma:garsia}{Lemma \ref{garsia}} \begin{lemma:garsia} Let $\lambda$ be an algebraic number, let $M_{\lambda}$ be its Mahler measure and let $l$ denote the number of conjugates of $\lambda$ which have modulus $1$. Then there is some $c_{\lambda} > 0$ such that the following holds. Suppose that $p$ is a polynomial of degree $n$ with coefficients $-1, 0, 1$ and that $p(\lambda) \neq 0$. Then, \begin{equation*} \left| p(\lambda) \right| > c_{\lambda} n^{-l} M_{\lambda}^{-n}. \end{equation*} \end{lemma:garsia} We use these two results to get an upper bound on the detail of $\nu_j$ around some given scales and then, using Theorem \ref{manyconv}, we can get a bound on the detail of $\mu_{\lambda}$ around some given scale. One of the ways in which our method differs from that used by Varj\'u in \cite{varjupaper} is that we do not have a way of dealing with the ``low entropy" regime and instead use this result straight after finding the initial measures with small entropy given by Theorem \ref{entropytodetail}. This means that our method is currently unable to deal with parameters with large Mahler measure.
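The flavour of Lemma \ref{garsia} can be seen in the rational case $\lambda = \frac{4}{5}$ (an illustrative value with minimal polynomial $5X - 4$, so $M_\lambda = 5$ and $l = 0$): clearing denominators shows that $|p(\lambda)| \geq 5^{-n}$ exactly, for every non-zero polynomial $p$ of degree at most $n$ with coefficients $0, \pm 1$. A brute-force sketch:

```python
import itertools
import numpy as np

lam, M = 0.8, 5.0                 # lambda = 4/5, Mahler measure 5
powers = lam ** np.arange(11)

mins = []
for n in range(2, 11):
    # all polynomials of degree <= n with coefficients -1, 0, 1
    coeffs = np.array(list(itertools.product((-1, 0, 1), repeat=n + 1)))
    vals = np.abs(coeffs @ powers[: n + 1])
    # exclude only the identically-zero polynomial (4/5 is not a root of any other)
    mins.append(float(vals[vals > 1e-12].min()))
```

The minima are non-increasing in $n$ and never drop below $M_\lambda^{-n} = 5^{-n}$, matching the shape of the lemma's lower bound.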
In Section 3 we give some basic properties of entropy which we will use later in the paper. In Section 4 we prove an upper bound for the detail around a scale in terms of some quantity involving entropy. In Section 5 we give a bound for the gap in entropy using the Garsia entropy and the initial separation. In Section 6 we prove the main results of the paper using a combinatorial argument and the results of the previous sections to express the self-similar measure as the convolution of many measures each of which has at most some detail around a given scale. Finally in Section 7 we give some explicit examples of self-similar measures which may be shown to be absolutely continuous using the methods of this paper. \subsection{Notation} Here we will summarize some of the commonly used notation in the paper. \begin{center} \begin{tabular}{ m{4em} |m{28em} } \hline $\mathbb{N}$ & Natural numbers $\{1, 2, \dots \}$.\\ & \\ $M_{\lambda}$ & The Mahler measure of the algebraic number $\lambda$.\\ & \\ $F$ & An iterated function system often given as $F=(n, d, (S_i)_{i=1}^n, (p_i)_{i=1}^n)$ with the iterated function system being an iterated function system on $\mathbb{R}^d$, $S_i$ being similarities and $(p_i)_{i=1}^n$ being a probability vector.\\ & \\ $V_{F, k}$ & The $k$-step support of an iterated function system $F$.\\ & \\ $\Delta_{F,k}$ & $\inf \{|x-y| : x, y \in V_{F,k}, x \neq y \}$.\\ & \\ $M_F$ & $\limsup \limits_{k \to \infty} (\Delta_{F, k}) ^ {- \frac{1}{k}}$.\\ & \\ $\left| \left| \cdot \right| \right|_1$ & The $L^1$ norm of a signed measure defined by $\left| \left| \mu\right| \right|_1 = \sup \limits_{A \subset \mathbb{R}^d} \left(|\mu(A)| +|\mu(A^C)|\right) $.\\ & \\ $n_y^{(d)}$ & The density function of the normal distribution on $\mathbb{R}^d$ with mean zero and covariance matrix $yI$.
The $(d)$ will often be omitted where the value of $d$ is clear from context.\\ & \\ $N_{\Sigma}$ & The density function of the multivariate normal distribution with mean $0$ and covariance matrix $\Sigma$.\\ & \\ $s_r(\cdot)$ & The detail of a measure around a scale. See Definition \ref{detail} in Section 2.\\ & \\ $\mathcal{B}(\mathbb{R}^d)$ & The Borel $\sigma$-algebra on $\mathbb{R}^d$.\\ & \\ $H(\cdot)$ & The Shannon entropy. This can represent both the differential Shannon entropy and the discrete Shannon entropy. Which one is meant will be clear from context.\\ \hline \end{tabular} \end{center} \section{Detail around a Scale} In this section we will discuss some properties of detail around a scale as defined in the previous section. The main results will be proving Lemmas \ref{isabscont} and \ref{t2} as well as Theorem \ref{manyconv}. \begin{definition} \label{spherical_normal} Given an integer $d \in \mathbb{N}$ and some $y>0$ let $n_y^{(d)}$ be the density function of the multivariate normal distribution with covariance matrix $yI$ and mean $0$. Specifically let, \begin{equation*} n_y^{(d)}(x) = (2 \pi y)^{-\frac{d}{2}} e^{-\frac{1}{2y} \sum_{i=1}^d x_i^2}. \end{equation*} When the value of $d$ is clear from context we will usually just write $n_y$. \end{definition} \begin{lemma} Let $\mu$ and $\nu$ be probability measures. Then we have, \begin{equation*} \left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \leq \left| \left| \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \end{equation*} in particular, \begin{equation} \left| \left| \mu * \frac{\partial}{\partial y} n_y \right| \right|_1 \leq \left| \left| \frac{\partial}{\partial y} n_y \right| \right|_1 = \frac{1}{y} \frac{2}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} .
\label{maxderivative} \end{equation} \end{lemma} \begin{proof} For the first part simply write the measure $\nu * \frac{\partial}{\partial y} n_y $ as $\nu * \frac{\partial}{\partial y} n_y = \tilde{\nu}_+ - \tilde{\nu}_-$ where $\tilde{\nu}_+$ and $\tilde{\nu}_-$ are (non-negative) measures with disjoint support. Note that this means, \begin{equation*} \left| \left| \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 = \left| \left| \tilde{\nu}_+ \right| \right|_1 + \left| \left| \tilde{\nu}_- \right| \right|_1 \end{equation*} and so, since $\left| \left| \mu \right| \right|_1 = 1$, \begin{eqnarray} \left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \nonumber & = & \left| \left| \mu * \tilde{\nu}_+ - \mu * \tilde{\nu}_-\right| \right|_1 \nonumber\\ & \leq & \left| \left| \mu \right| \right|_1 \left| \left| \tilde{\nu}_+ \right| \right|_1 + \left| \left| \mu \right| \right|_1 \left| \left| \tilde{\nu}_- \right| \right|_1\nonumber \\ & = & \left| \left| \nu * \frac{\partial}{\partial y} n_y \right| \right|_1.\nonumber \end{eqnarray} For the second part we need to compute, \begin{equation*} \int_{\mathbf{x} \in \mathbb{R}^d} \left| \frac{\partial}{\partial y} n_y \right| \, d \mathbf{x} . \end{equation*} To do this we work in polar coordinates. Let $r = \sqrt{ \sum_{i=1}^d x_i^2}$. Then we have, \begin{equation*} \frac{\partial}{\partial y} n_y = \left( \frac{r^2}{2y^2} - \frac{d}{2y} \right) (2 \pi y) ^ {-\frac{d}{2}} \exp(- \frac{r^2}{2y}) . \end{equation*} Noting that the $(d-1)$-dimensional Lebesgue measure of $S^{(d-1)}$ is $\frac{2 \pi^{\frac{d}{2}}}{\Gamma \left( \frac{d}{2} \right) }$ we get, \begin{eqnarray} \nonumber \int_{\mathbf{x} \in \mathbb{R}^d} \left| \frac{\partial}{\partial y} n_y \right| \, d \mathbf{x} &=& \frac{2 \pi^{\frac{d}{2}}}{\Gamma \left( \frac{d}{2} \right) } \left( -\int_{r=0}^{\sqrt{dy}} \left( \frac{r^2}{2y^2} - \frac{d}{2y} \right) (2 \pi y) ^ {-\frac{d}{2}} r^{d-1}\exp(- \frac{r^2}{2y}) \,dr \right.\\ &+& \left.
\int_{\sqrt{dy}}^{\infty} \left( \frac{r^2}{2y^2} - \frac{d}{2y} \right) (2 \pi y) ^ {-\frac{d}{2}} r^{d-1}\exp(- \frac{r^2}{2y}) \,dr \right) .\nonumber \end{eqnarray} By differentiation it is easy to check that, \begin{equation*} \int \left( \frac{r^2}{y} - d \right) r^{d-1} e^{- \frac{r^2}{2y}} \, dr = - r^d e^{- \frac{r^2}{2y}} . \end{equation*} Hence, \begin{eqnarray} \nonumber \lefteqn{ -\int_{r=0}^{\sqrt{dy}} \left( \frac{r^2}{2y^2} - \frac{d}{2y} \right) (2 \pi y) ^ {-\frac{d}{2}} r^{d-1}\exp(- \frac{r^2}{2y}) \,dr} \\ \nonumber & + & \int_{\sqrt{dy}}^{\infty} \left( \frac{r^2}{2y^2} - \frac{d}{2y} \right) (2 \pi y) ^ {-\frac{d}{2}} r^{d-1}\exp(- \frac{r^2}{2y}) \,dr \\ \nonumber & = & \frac{1}{2y} (2 \pi y) ^ {-\frac{d}{2}} 2 (dy) ^ {\frac{d}{2}} e^{-\frac{d}{2}}\\ & = & \frac{1}{y} (2 \pi)^{-\frac{d}{2}} d^{\frac{d}{2}} e^{-\frac{d}{2}} .\nonumber \end{eqnarray} Meaning that, \begin{equation*} \int_{\mathbf{x} \in \mathbb{R}^d} \left| \frac{\partial}{\partial y} n_y \right| \, d \mathbf{x} = \frac{1}{y} \frac{2}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} . \end{equation*} \end{proof} \begin{definition}[Detail Around a Scale] \label{detail} Given a compactly supported probability measure $\mu$ on $\mathbb{R}^d$ and some $r>0$ we define the \emph{detail of $\mu$ around scale $r$} by, \begin{equation*} s_r(\mu) := r^2 \frac{ \Gamma \left( \frac{d}{2} \right)}{2} \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \left| \left| \left. \mu * \frac{\partial}{\partial y} n_y \right|_{y=r^2} \right| \right|_1 . \end{equation*} Note that this means $s_r(\mu) \in (0,1]$. \end{definition} We are now almost ready to prove Lemma \ref{t2} and Theorem \ref{manyconv}. First we will need the following result. \begin{lemma} \label{xtoy} Let $y>0$.
Then we have, \begin{equation*} \frac{1}{2} \bigtriangleup n_y = \frac{\partial}{\partial y} n_y \end{equation*} where $\bigtriangleup$ denotes the Laplacian, \begin{equation*} \bigtriangleup = \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2} . \end{equation*} \end{lemma} \begin{proof} This is just a simple computation. Simply note that, \begin{equation*} \frac{\partial}{\partial x_i} n_y = -\frac{x_i}{y} n_y \end{equation*} and so, \begin{equation*} \frac{1}{2} \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2} n_y = \frac{1}{2}\left( \frac{|x|^2}{y^2} - \frac{d}{y} \right) n_y = \frac{\partial}{\partial y} n_y . \end{equation*} \end{proof} \begin{lemma} Let $\mu$ and $\nu$ be probability measures and let $y>0$. Then, \begin{equation*} \left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \leq 2 \int_{\frac{y}{2}}^{\infty} \left| \left| \mu * \frac{\partial}{\partial v} n_v \right| \right|_1 \left| \left| \nu * \frac{\partial}{\partial v} n_v \right| \right|_1 \, dv . \end{equation*} \end{lemma} \begin{proof} First note that, \begin{equation*} \left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \leq \int_y^{w} \left| \left| \frac{\partial}{\partial u} \left( \mu * \nu * \frac{\partial}{\partial u} n_u\right) \right| \right|_1 \, du + \left| \left| \mu * \nu * \frac{\partial}{\partial w} n_w \right| \right|_1 . \end{equation*} Taking $w \to \infty$ and using \eqref{maxderivative} this gives, \begin{equation*} \left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 \leq \int_y^{\infty} \left| \left| \frac{\partial}{\partial u} \left( \mu * \nu * \frac{\partial}{\partial u} n_u\right) \right| \right|_1 \, du . 
\end{equation*} We can then use Lemma \ref{xtoy} and standard properties of the convolution of distributions to move the derivatives around as follows, \begin{eqnarray} \frac{\partial}{\partial y} \left( \mu * \nu * \frac{\partial}{\partial y} n_y \right) & = & \frac{1}{2} \frac{\partial}{\partial y} \left( \mu * \nu * \bigtriangleup n_y \right) \nonumber\\ & = & \frac{1}{2} \frac{\partial}{\partial y} \left( \mu * \nu * n_{y-a} *\bigtriangleup n_a \right) \nonumber\\ & = & \frac{1}{2} \left(\mu * \frac{\partial}{\partial y} n_{y-a} \right) * \left( \nu * \bigtriangleup n_a \right) .\nonumber \end{eqnarray} Letting $a= \frac{1}{2}y$ and applying Lemma \ref{xtoy} again this gives, \begin{equation*} \frac{\partial}{\partial y} \left( \mu * \nu * \frac{\partial}{\partial y} n_y \right) = \left(\mu * \left. \frac{\partial}{\partial u} n_{u} \right|_{u=\frac{y}{2}} \right) * \left(\nu * \left. \frac{\partial}{\partial u} n_{u} \right|_{u=\frac{y}{2}} \right) . \end{equation*} Here this is understood to mean that the two sides are representatives of the same distribution, and so in particular they have the same $L^1$ norm. This means, \begin{eqnarray} \lefteqn{\left| \left| \mu * \nu * \frac{\partial}{\partial y} n_y \right| \right|_1 } \nonumber\\ & \leq & \int_y^{\infty} \left| \left| \frac{\partial}{\partial u} \left( \mu * \nu * \frac{\partial}{\partial u} n_u\right) \right| \right|_1 \, du\nonumber \\ & = & \int_y^{\infty} \left| \left| \left(\mu * \left. \frac{\partial}{\partial v} n_{v} \right|_{v=\frac{u}{2}} \right) * \left(\nu * \left. \frac{\partial}{\partial v} n_{v} \right|_{v=\frac{u}{2}} \right) \right| \right|_1 \, du \label{intv1} \nonumber\\ & \leq & \int_y^{\infty} \left| \left| \mu * \left. \frac{\partial}{\partial v} n_{v} \right|_{v=\frac{u}{2}} \right| \right|_1 \left| \left| \nu * \left.
\frac{\partial}{\partial v} n_{v} \right|_{v=\frac{u}{2}} \right| \right|_1 \, du\nonumber\\ & = & 2 \int_{\frac{y}{2}}^{\infty} \left| \left| \mu * \frac{\partial}{\partial v} n_v \right| \right|_1 \left| \left| \nu * \frac{\partial}{\partial v} n_v \right| \right|_1 \, dv \nonumber \end{eqnarray} as required. \end{proof} \begin{lemma} \label{t2} Let $\mu_1$ and $\mu_2$ be probability measures on $\mathbb{R}^d$, let $r>0$, $\alpha_1,\alpha_2>0$ and let $K>1$. Suppose that for all $t \in [r/\sqrt{2}, K\alpha_1^{-\frac{1}{2}} \alpha_2^{-\frac{1}{2}}r]$ and for all $i\in \{1, 2\}$ we have, \begin{equation*} s_t(\mu_i) \leq \alpha_i \end{equation*} then, \begin{equation*} s_r(\mu_1*\mu_2) \leq C_{K,d} \alpha_1 \alpha_2 \end{equation*} where, \begin{equation*} C_{K,d} = \frac{8}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \left( 1 + \frac{1}{2K^2} \right) . \end{equation*} \end{lemma} \begin{proof} By the definition of detail, the hypothesis gives $\left| \left| \mu_i * \frac{\partial}{\partial v} n_v \right| \right|_1 \leq \alpha_i \frac{2}{v \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}}$ whenever $\sqrt{v} \in [r/\sqrt{2}, K\alpha_1^{-\frac{1}{2}} \alpha_2^{-\frac{1}{2}}r]$, while \eqref{maxderivative} bounds these norms for all $v$. We have, \begin{eqnarray} s_r(\mu_1*\mu_2) & = & r^2 \frac{ \Gamma \left( \frac{d}{2} \right)}{2} \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \left| \left| \left. \mu_1 *\mu_2 * \frac{\partial}{\partial y} n_y \right|_{y=r^2} \right| \right|_1\nonumber\\ & \leq & r^2 \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \int_{\frac{r^2}{2}}^{\infty} \left| \left| \mu_1 * \frac{\partial}{\partial v} n_v \right| \right|_1 \left| \left| \mu_2 * \frac{\partial}{\partial v} n_v \right| \right|_1 \, dv\nonumber\\ & \leq & r^2 \frac{4}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \int_{\frac{r^2}{2}}^{\infty} v^{-2} \alpha_1 \alpha_2 \, dv\nonumber\\ & + & r^2 \frac{4}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \int_{K^2\alpha_1^{-1}\alpha_2^{-1}r^2}^{\infty} v^{-2} \, dv\nonumber\\ & = & \frac{8}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \left( 1 + \frac{1}{2K^2} \right) \alpha_1 \alpha_2.\nonumber \end{eqnarray} \end{proof} When we apply this lemma we will do it in the case where $K$ is large and the only important property of $C_{K,d}$ is that $C_{K,d} \to \frac{8}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} $ as $K \to \infty$. In the case $d=1$ we get $\frac{8}{ \Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \approx 1.93577$. This result is in many ways similar to Theorem 2 in \cite{varjupaper}, though it is much stronger as it does not include any logarithmic factors and has a much smaller constant (the constant given in \cite{varjupaper} is $10^8$). We will now prove Theorem \ref{manyconv}. \newtheorem*{theorem:manyconv}{Theorem \ref{manyconv}} \begin{theorem:manyconv} Let $n \in \mathbb{N}$, $K>1$ and suppose that there are measures $\mu_1, \mu_2, \dots,$ $ \mu_n$ on $\mathbb{R}^d$ and positive real numbers $0 < \alpha_1 \leq \alpha_2 \leq \dots \leq \alpha_n < C_{K, d}^{-1}$. Let $m = \frac{\log n}{\log \frac{3}{2}}$ and let $r>0$.
Suppose that for all $t \in \left[ 2^{-\frac{m}{2}}r, K^m \alpha_1^{-m 2^m }r \right]$ and $i \in \{1, 2, ..., n\}$ we have $s_t(\mu_i) \leq \alpha_i$. Then we have, \begin{equation*} s_r(\mu_1 * \mu_2 * \dots * \mu_n) \leq C_{K, d} ^ {n-1} \alpha_1 \alpha_2 \dots \alpha_n . \end{equation*} \end{theorem:manyconv} \begin{proof} We will prove this by induction. The case $n=1$ is trivial. Suppose that $n>1$. Let $n' = \left\lceil \frac{n}{2} \right\rceil$ and let $m' = \frac{\log n'}{\log \frac{3}{2}}$. Define $\nu_1, \nu_2, ..., \nu_{n'}$ and $\beta_1, \beta_2, ..., \beta_{n'}$ as follows. For $i=1, 2, ..., \left \lfloor \frac{n}{2} \right \rfloor$ let $\nu_i = \mu_{2i-1} * \mu_{2i}$ and $\beta_i = C_{K, d} \alpha_{2i-1} \alpha_{2i}$, and if $n$ is odd let $\nu_{n'} = \mu_n$ and $\beta_{n'} = \alpha_n$. Note that $C_{K,d} ^ {n'-1} \beta_1 \beta_2 \dots \beta_{n'} = C_{K,d}^{n-1} \alpha_1 \alpha_2 \dots \alpha_n$ so we need to show that $n'$, $\left(\nu_i \right)_{i=1}^{n'}$ and $\left(\beta_i \right)_{i=1}^{n'}$ satisfy the inductive hypothesis. We want to use Lemma \ref{t2}. This means that we need to show that if $t \in \left[ 2^{-\frac{m'}{2}}r, K^{m'} \beta_1^{-m'2^{m'} }r \right]$ and $q \in \left[ 2^{-\frac{1}{2}} t, K \alpha_{2i-1} ^{-\frac{1}{2}}\alpha_{2i}^{-\frac{1}{2}} t \right]$ then $q \in \left[ 2^{-\frac{m}{2}}r, K^m \alpha_1^{-m 2^m }r \right]$. It is sufficient to show that, \begin{equation*} \left[ 2^{-\frac{m'+1}{2}} r, K^{m'+1} \beta_1^{-m'2^{m'} } \alpha_1^{-1} r\right] \subset \left[ 2^{-\frac{m}{2}}r, K^m \alpha_1^{-m 2^m }r \right] . \end{equation*} Note that $m' +1 \leq m$ so $2^{-\frac{m'+1}{2}}r \geq 2^{-\frac{m}{2}}r$. Also we have, \begin{eqnarray} K^{m'+1} \beta_1^{-m'2^{m'} } \alpha_1^{-1} r & \leq & K^{m} \left( \alpha_1^{2} \right) ^{-m' 2^{m'}} \alpha_1^{-1}r\nonumber\\ & = & K^m \alpha_1^{-1 - m' 2^{m'+1}}r\nonumber\\ & \leq & K^m \alpha_1^{-m 2^{m}}r\nonumber \end{eqnarray} as required.
\end{proof} It is worth noting that the only properties of $m$ we have used are that $m\geq1$ when $n>1$ and that $m\geq m'+1$. This means that it is possible to choose $m$ such that $m\sim \log _2 n$. It turns out that this makes no difference to the bound in Theorem \ref{mainresult}. \begin{lemma} \label{isacont} Let $\mu$ be a probability measure on $\mathbb{R}^d$ and let $y>0$. Suppose that, \begin{equation} \label{normdecay} \int_{0^+}^y \left| \left| \mu * \frac{\partial}{\partial u} n_u \right| \right|_1 \, du < \infty . \end{equation} Then $\mu$ is absolutely continuous. \end{lemma} \begin{proof} The condition \eqref{normdecay} implies that $\mu * n_u$ is Cauchy with respect to $\left| \left| \cdot \right| \right|_1$ as $u \to 0$. Since the space $L^1$ is complete this means that there is some absolutely continuous measure $\tilde{\mu}$ such that $\mu * n_u \to \tilde{\mu}$ with respect to $L^1$ as $u \to 0$. Suppose for contradiction that $\tilde{\mu} \neq \mu$. The set of open cuboids is a $\pi$-system generating $\mathcal{B}(\mathbb{R}^d)$, so this means that there exist real numbers $a_1, \dots, a_d$ and $b_1, \dots, b_d$ such that, \begin{equation*} \mu((a_1, b_1) \times \dots \times (a_d, b_d)) \neq \tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) . \end{equation*} There are now two cases to deal with. First we will deal with the case where, \begin{equation*} \mu((a_1, b_1) \times \dots \times (a_d, b_d)) > \tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) . \end{equation*} This means that there is some $\varepsilon > 0 $ such that, \begin{equation*} \mu((a_1+\varepsilon , b_1-\varepsilon ) \times \dots \times (a_d+\varepsilon , b_d-\varepsilon )) >\tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) +\varepsilon . \end{equation*} We now consider $\mu*(n_u|_{B_{\varepsilon}})$ where $B_r$ denotes the ball of radius $r$ centered at $0$.
We have, \begin{eqnarray} \lefteqn{\left(\mu* n_u \right) ((a_1, b_1) \times \dots \times (a_d, b_d))}\nonumber\\ & \geq & \left(\mu*(n_u|_{B_{\varepsilon}})\right) ((a_1, b_1) \times \dots \times (a_d, b_d))\nonumber\\ & \geq & \left| \left| n_u|_{B_{\varepsilon}} \right| \right|_1 \mu((a_1+\varepsilon , b_1-\varepsilon ) \times \dots \times (a_d+\varepsilon , b_d-\varepsilon )) \nonumber\\ & \geq & \left| \left| n_u|_{B_{\varepsilon}} \right| \right|_1(\tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) +\varepsilon)\nonumber\\ & \to & \tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) +\varepsilon\nonumber \end{eqnarray} as $u \to 0$. This contradicts the requirement, \begin{equation*} \left(\mu* n_u \right) ((a_1, b_1) \times \dots \times (a_d, b_d)) \to \tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) \end{equation*} as $u \to 0$. The case, \begin{equation*} \mu((a_1, b_1) \times \dots \times (a_d, b_d)) < \tilde{\mu}((a_1, b_1) \times \dots \times (a_d, b_d)) \end{equation*} is almost identical. Together these cases show that $\mu = \tilde{\mu}$ and so in particular $\mu$ is absolutely continuous. \end{proof} \begin{lemma} \label{isabscont} Suppose that $\mu$ is a measure on $\mathbb{R}^d$ and that there exists some constant $\beta > 1$ such that for all sufficiently small $r>0$ we have, \begin{equation*} s_r(\mu) < \left( \log r^{-1} \right) ^{-\beta} . \end{equation*} Then $\mu$ is absolutely continuous. \end{lemma} \begin{proof} This follows immediately from Lemma \ref{isacont}. \end{proof} \section{Properties of Entropy} Here we will discuss some basic properties of entropy which we will later use to bound $s_r(\cdot)$. We will be studying the differential entropy of quantities like $\mu * n_y$ for some compactly supported measure $\mu$. None of the results in this section are particularly significant and the purpose of this section is more to lay the groundwork for the next section.
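To make the quantities in this section concrete, the following numerical sketch (not part of the paper; the grid, truncation and tolerances are our own choices) approximates the differential entropy by a Riemann sum in dimension one, checks it against the closed form $H(n_y) = \frac{1}{2}\log(2\pi e y)$ for a Gaussian, and observes the monotonicity of $y \mapsto H(\mu * n_y)$ for a simple atomic measure $\mu$.

```python
import numpy as np

def normal_density(x, y):
    # density of n_y^{(1)}: mean 0, variance y
    return np.exp(-x**2 / (2 * y)) / np.sqrt(2 * np.pi * y)

def differential_entropy(f, dx):
    # Riemann-sum approximation of H = -\int f log f
    vals = f[f > 0]
    return -np.sum(vals * np.log(vals)) * dx

x = np.linspace(-30, 30, 200001)
dx = x[1] - x[0]

# Closed form for a Gaussian: H(n_y) = (1/2) log(2 pi e y).
for y in [0.5, 1.0, 2.0]:
    H_num = differential_entropy(normal_density(x, y), dx)
    H_exact = 0.5 * np.log(2 * np.pi * np.e * y)
    assert abs(H_num - H_exact) < 1e-6

# H(mu * n_y) for the atomic measure mu = (delta_0 + delta_1)/2;
# by the convolution lemma this should be increasing in y.
def smoothed_entropy(y):
    f = 0.5 * normal_density(x, y) + 0.5 * normal_density(x - 1.0, y)
    return differential_entropy(f, dx)

entropies = [smoothed_entropy(y) for y in [0.25, 0.5, 1.0, 2.0, 4.0]]
assert all(a < b for a, b in zip(entropies, entropies[1:]))
```

Since the Gaussian tails are far below machine precision at the truncation points, the Riemann sum here is accurate to many digits; the helper names are of course our own and not notation from the paper.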
\begin{definition} Let $\mu$ be a probability measure which is absolutely continuous with respect to the Lebesgue measure and has density function $f$. Then we define the entropy of $\mu$ to be, \begin{equation*} H(\mu) := \int_{\mathbb{R}^d} - f(x) \log f(x) \, dx . \end{equation*} Here we define $0 \log 0 = 0$. \end{definition} \begin{definition} Let $\mu$ be an atomic measure. Then $H(\mu)$ will denote the Shannon entropy of $\mu$. \end{definition} Using $H$ to denote two different quantities shouldn't cause confusion as we will only be using measures which are either purely atomic or absolutely continuous. \begin{definition}[Finite Entropy] Given some probability measure $\mu$ on $\mathbb{R}^d$ we say that $\mu$ has \emph{finite entropy} if $\mu$ is absolutely continuous and has a density function $f$ such that the function $f \, \log \, f$ is in $L^1$. \end{definition} \begin{lemma} \label{convincrease} Let $\mu$ and $\nu$ be probability measures on $\mathbb{R}^d$ such that $\mu$ is absolutely continuous with respect to the Lebesgue measure and has finite entropy. Then, \begin{equation*} H(\mu * \nu) \geq H(\mu) . \end{equation*} \end{lemma} \begin{proof} This follows from Jensen's inequality. Define the function, \begin{align*} h : [0, \infty) &\to \mathbb{R} \\ x & \mapsto x \log x\\ \end{align*} where as above we take $0 \log 0 = 0$. Note that $h$ is convex. Let $f$ be a probability density function for $\mu$.
We then have, \begin{eqnarray} H(\mu * \nu) &=& -\int_{\mathbb{R}^d} h \left( \int_{\mathbb{R}^d} f(y-x) \,\nu(dx) \right) \, dy\nonumber\\ & \geq & -\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} h(f(y-x)) \, \nu(dx) \, dy \nonumber\\ & =& -\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} h(f(y-x)) \, dy \, \nu(dx) \nonumber\\ & = & \int_{\mathbb{R}^d} H(\mu) \, \nu(dx)\nonumber\\ & = & H(\mu) .\nonumber \end{eqnarray} \end{proof} \begin{definition}[Rapidly Decaying Measure] We will describe an absolutely continuous measure $\mu$ as being \emph{rapidly decaying} if it has a density function $f$ such that there are some constants $C, \varepsilon > 0$ such that for all $x \in \mathbb{R}^d$ we have, \begin{equation*} f(x) < C \exp \left( -\varepsilon |x|^2 \right). \end{equation*} \end{definition} \begin{proposition} Let $X_1$, $X_2$ and $X_3$ be independent absolutely continuous random variables with finite entropy. Then, \begin{equation*} H(X_1 + X_2 + X_3) + H(X_1) \leq H(X_1 + X_2) + H(X_1 + X_3) . \end{equation*} \end{proposition} \begin{proof} This is proven, for example, in Theorem 1 of \cite{entropysubmodality}. Note that the author of \cite{entropysubmodality} does not explicitly state that the random variables need to have finite entropy, but this is implicitly used in the proof. \end{proof} \begin{corollary} \label{2ndordent} Suppose that $\mu_1, \mu_2, \mu_3$ are absolutely continuous, rapidly decaying probability measures which have finite entropy. Then, \begin{equation*} H(\mu_1 * \mu_2 * \mu_3) + H(\mu_1) \leq H(\mu_1 * \mu_2) + H(\mu_1 * \mu_3) . \end{equation*} \end{corollary} \begin{definition} Given some function $f : \mathbb{R} \to \mathbb{R}$ we define the \emph{right hand directional derivative} of $f(x)$ with respect to $x$ by, \begin{equation*} \frac{\partial}{\partial x^+} f(x) := \lim_{h \to 0^+} \frac{f(x+h) - f(x)}{h} .
\end{equation*} Similarly we define the \emph{left hand directional derivative} of $f(x)$ with respect to $x$ by, \begin{equation*} \frac{\partial}{\partial x^-} f(x) := \lim_{h \to 0^-} \frac{f(x+h) - f(x)}{h} . \end{equation*} \end{definition} \begin{lemma} \label{entroprop} Let $\mu$ be a rapidly decaying or compactly supported probability measure on $\mathbb{R}^d$. Then the function $\mathbb{R}_{>0} \to \mathbb{R}$, $y \mapsto H(\mu * n_y)$; \begin{enumerate} \item is increasing, \label{inc} \item has both directional derivatives everywhere, \label{diff} \item is continuous, \label{cont} \item has decreasing directional derivatives, that is, $\frac{\partial}{\partial y^+} H(\mu * n_y)$ and $\frac{\partial}{\partial y^-} H(\mu * n_y)$ are decreasing as functions of $y$. \label{deder} \end{enumerate} \end{lemma} \begin{proof} Part \ref{inc} follows immediately from Lemma \ref{convincrease}. \paragraph{} To prove part \ref{diff} we will show that if $y>0$ and $\epsilon_1, \epsilon_2$ are non-zero real numbers such that $-y < \epsilon_1 < \epsilon_2$ then, \begin{equation*} \frac{H(\mu * n_{y+\epsilon_1}) - H(\mu*n_y)}{\epsilon_1} \geq \frac{H(\mu * n_{y+\epsilon_2}) - H(\mu*n_y)}{\epsilon_2} . \end{equation*} To prove this we will start with the case where $\frac{\epsilon_1}{\epsilon_2} \in \mathbb{Q}$. In this case there are integers $a, b \in \mathbb{Z}$ and some $\epsilon_3 >0$ such that $\epsilon_1 = a \epsilon_3$ and $\epsilon_2 = b \epsilon_3$.
We know by substituting $\mu_1 = \mu * n_{y+n\epsilon_3}, \mu_2 = n_{\epsilon_3}$ and $\mu_3 = n_{(m-n)\epsilon_3}$ into Proposition \ref{2ndordent} that for any $n, m \in \mathbb{Z}$ with $n < m$ and $y + n \epsilon_3 > 0$ we have, \begin{equation*} H(\mu * n_{y + (n+1) \epsilon_3}) - H(\mu * n_{y+n\epsilon_3}) \geq H(\mu * n_{y+(m+1)\epsilon_3}) - H(\mu * n_{y+m\epsilon_3}) . \end{equation*} Summing such inequalities means that, \begin{equation*} |b| \frac{\epsilon_1}{|\epsilon_1|} [H(\mu * n_{y+\epsilon_1}) - H(\mu * n_y)] \geq |a| \frac{\epsilon_2}{|\epsilon_2|} [H(\mu * n_{y+\epsilon_2}) - H(\mu * n_y)] \end{equation*} as both sides can be written as sums of terms of the form $H(\mu * n_{y + (n+1) \epsilon_3}) - H(\mu * n_{y+n\epsilon_3})$. Dividing through gives the required result. The case where $\frac{\epsilon_1}{\epsilon_2} \notin \mathbb{Q}$ simply follows from continuity. The existence of the directional derivatives now follows from the fact that the required limit is the limit of a bounded monotonic function and so exists. \paragraph{} Part \ref{cont} follows immediately from Part \ref{diff}. \paragraph{} Part \ref{deder} follows from the fact that by Proposition \ref{2ndordent} we have if $0 < y_1 < y_2$ and $\delta > -y_1$ then, \begin{equation*} \frac{H(\mu * n_{y_1+\delta}) - H(\mu * n_{y_1})}{\delta} \geq \frac{H(\mu * n_{y_2+\delta}) - H(\mu * n_{y_2})}{\delta} \end{equation*} and so we can take $\delta \to 0$ from either side. \end{proof} \begin{lemma} Let $\mu$ be as in Lemma \ref{entroprop} and let $0 < a < b$. Then we have, \begin{equation*} H(\mu * n_b) - H(\mu * n_a) = \int_{a}^{b} \frac{\partial}{\partial y^+} H(\mu * n_y) \, dy . \end{equation*} \end{lemma} \begin{proof} Define the function, \begin{eqnarray} t : (a, b) \times (0, \infty) & \to & \mathbb{R}\nonumber\\ (y, h) & \mapsto & \frac{H(\mu * n_{y+h}) - H(\mu * n_y)}{h}\nonumber \end{eqnarray} Clearly, by the arguments in the proof of Lemma \ref{entroprop}, $t$ is decreasing in $h$. We also have that $t(y , h) \to \frac{\partial}{\partial y^+} H(\mu * n_y) $ as $h \to 0^+$.
This means that by the monotone convergence theorem we have, \begin{equation*} \int_{a}^{b} t(y, h) \, dy \to \int_{a}^{b} \frac{\partial}{\partial y^+} H(\mu * n_y) \, dy \end{equation*} as $h \to 0$. We also have, \begin{equation*} \int_{a}^{b} t(y, h) \, dy = \frac{1}{h} \int_b^{b+h} H(\mu * n_y) \, dy - \frac{1}{h} \int_a^{a+h} H(\mu * n_y) \, dy \end{equation*} and by the continuity of $H(\mu * n_y)$ this tends to $ H(\mu * n_b) - H(\mu * n_a) $ as $h \to 0$. \end{proof} It is worth noting that there is no reason to suspect that $H(\mu * n_y)$ is not, for example, $C^1$. Indeed it is not too difficult to show that it is continuously differentiable at all but at most countably many points using arguments similar to those earlier in the section. Proving that $H(\mu * n_y)$ is $C^1$ is not needed for the arguments of this paper and so we do not do so. \begin{lemma} Let $\alpha >0$ and let $T_{\alpha}: \mathbb{R}^d \to \mathbb{R}^d$ be defined by $T_{\alpha}: x \mapsto \alpha x$. Let $\mu$ be an absolutely continuous probability measure with finite entropy. Then we have, \begin{equation*} H(\mu \circ T_{\alpha}^{-1}) = H(\mu) + d \log \alpha . \end{equation*} \end{lemma} \begin{proof} Note that if $f$ is a density function of $\mu$ then the function $\tilde{f}$ defined by, \begin{equation*} \tilde{f}: x \mapsto \frac{1}{\alpha^d} f \left(\frac{x}{\alpha}\right) \end{equation*} is a density function for $\mu \circ T_{\alpha}^{-1}$. Therefore we have, \begin{eqnarray} H(\mu \circ T_{\alpha}^{-1}) & = & -\int_{x \in \mathbb{R}^d} \frac{1}{\alpha^d} f \left(\frac{x}{\alpha}\right) \log \left( \frac{1}{\alpha^d} f \left(\frac{x}{\alpha}\right) \right) \, dx\nonumber\\ & = & -\int_{y \in \mathbb{R}^d} f \left(y\right) \log \left( \frac{1}{\alpha^d} f \left(y\right) \right) \, dy\nonumber\\ & = & H(\mu) + d \log \alpha .\nonumber \end{eqnarray} \end{proof} \begin{corollary} Let $\alpha >0$ and let $T_{\alpha}: x \mapsto \alpha x$.
Let $\mu$ be a compactly supported probability measure and let $y>0$. Then we have, \begin{equation*} \left. \frac{\partial}{\partial u} H(( \mu \circ T_{\alpha}^{-1}) * n_{u}) \right|_{u=\alpha ^2y} = \alpha^{-2} \frac{\partial}{\partial y} H(\mu * n_y) . \end{equation*} \end{corollary} \begin{proof} Note that we have, \begin{equation*} H(( \mu \circ T_{\alpha}^{-1}) * n_{\alpha ^2y}) = H(\mu * n_y) + d \log \alpha \end{equation*} and so, \begin{equation*} \frac{\partial}{\partial y} H(( \mu \circ T_{\alpha}^{-1}) * n_{\alpha ^2 y}) = \frac{\partial}{\partial y} H(\mu * n_y) . \end{equation*} The result then follows from a change of variables. \end{proof} \begin{definition} \label{nonprobent} Given a (not necessarily probability) measure $\mu$ it is convenient to define, \begin{equation*} H(\mu) := \left| \left| \mu \right| \right|_1 H\left(\frac{\mu}{\left| \left| \mu \right| \right|_1}\right) . \end{equation*} \end{definition} \begin{lemma} \label{concave} Let $\mu$ and $\nu$ be measures with finite entropy, either both atomic or both absolutely continuous. Then, \begin{equation*} H( \mu + \nu) \geq H(\mu) + H(\nu) . \end{equation*} \end{lemma} \begin{proof} This follows easily from an application of Jensen's inequality to the function $-x \log x$ and Definition \ref{nonprobent}. \end{proof} \begin{lemma} \label{maxinc} Let $k \in \mathbb{N}$, let $\mathbf{p} = (p_1, \dots, p_k)$ be a probability vector and let $\mu_1, \dots, \mu_k$ be absolutely continuous measures with finite entropy such that $\left| \left| \mu_i \right| \right|_1 = p_i$. Then, \begin{equation*} H(\sum_{i=1}^k \mu_i) \leq H(\mathbf{p}) + \sum_{i=1}^k H(\mu_i) \end{equation*} in particular, \begin{equation*} H\left(\sum_{i=1}^k \mu_i\right) \leq \sum_{i=1}^k H(\mu_i) + \log k . \end{equation*} \end{lemma} \begin{proof} Let the density function of $\mu_i$ be $f_i$ for $i=1, \dots,k$. Let $h:[0, \infty) \to \mathbb{R}$ be defined by $h : x \mapsto - x \log x$ for $x > 0$ and $h(0) = 0$.
Note that $h$ satisfies $h(a+b) \leq h(a) + h(b)$. We have, \begin{eqnarray} H\left(\sum_{i=1}^k \mu_i\right) & = & \int_{\mathbb{R}^d} h(f_1 + \dots + f_k) \, dx\nonumber\\ & \leq & \sum_{i=1}^k\int_{\mathbb{R}^d} h(f_i) \, dx\nonumber\\ &=& H(\mathbf{p}) + \sum_{i=1}^k H(\mu_i) .\nonumber \end{eqnarray} \end{proof} \begin{lemma} \label{sepinc} Let $\mu$ and $\nu$ be probability measures on $\mathbb{R}^d$. Suppose that $\mu$ is a discrete measure supported on finitely many points with separation at least $2R$ and that $\nu$ is an absolutely continuous measure with finite entropy whose support is contained in a ball of radius $R$. Then, \begin{equation*} H(\mu * \nu) = H(\mu) + H(\nu) . \end{equation*} \end{lemma} \begin{proof} Let $n \in \mathbb{N}$, $p_1, p_2, \dots, p_n \in (0, 1)$ and $x_1, x_2, \dots, x_n \in \mathbb{R}^d$ be chosen such that, \begin{equation*} \mu = \sum_{i = 1}^n p_i \delta_{x_i} \end{equation*} and let $f$ be the density function of $\nu$. Note that this means that the density function of $\mu * \nu$, which we will denote by $g$, can be expressed as, \begin{equation*} g(x) = \begin{cases} p_i f(x - x_i) & |x_i - x| < R \text{ for some } i\\ 0 & \text{otherwise}\\ \end{cases} \end{equation*} This means that, \begin{eqnarray} H(\mu * \nu) & = & \sum_{i=1}^{n} \int_{B_R(x_i)} - g(x) \log \, g(x) \, dx\nonumber\\ & = & \sum_{i=1}^{n} \int_{B_R(0)} - p_i f(x) \log \left( p_i f(x) \right) \, dx\nonumber\\ & = & \sum_{i=1}^{n} \int_{B_R(0)} - p_i f(x) \log \left( f(x) \right) \, dx\nonumber\\ & + & \sum_{i=1}^{n} \int_{B_R(0)} - p_i f(x) \log \left( p_i\right) \, dx\nonumber\\ & = & H(\mu) + H(\nu) .\nonumber \end{eqnarray} \end{proof} \section{Entropy Bound for $s_r(\cdot)$} \label{entropybound2} In this section we bound the detail of a measure $\mu$ around a scale in terms of directional derivatives of $H(\mu * n_y)$.
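As a sanity check on the normalisation of detail, the following numerical sketch (not part of the paper; the grid, truncation and tolerances are our own choices) verifies the closed form for $\left| \left| \frac{\partial}{\partial y} n_y \right| \right|_1$ in dimension $d=1$ and confirms that a point mass, and more generally well separated atoms viewed at a small scale, have detail close to the maximal value $1$.

```python
import math
import numpy as np

d = 1  # dimension

def dny(x, y):
    # d/dy of the heat kernel n_y in dimension 1
    # (equivalently (1/2) * Laplacian of n_y)
    n = np.exp(-x**2 / (2 * y)) / np.sqrt(2 * np.pi * y)
    return (x**2 / (2 * y**2) - 1 / (2 * y)) * n

def l1_norm(vals, dx):
    return np.sum(np.abs(vals)) * dx

x = np.linspace(-40, 40, 400001)
dx = x[1] - x[0]

# Closed form ||d/dy n_y||_1 = (1/y) * (2/Gamma(d/2)) * (d/(2e))^(d/2).
for y in [0.5, 1.0, 3.0]:
    exact = (1 / y) * (2 / math.gamma(d / 2)) * (d / (2 * math.e)) ** (d / 2)
    assert abs(l1_norm(dny(x, y), dx) - exact) < 1e-6

def detail(atoms, r):
    # s_r for a finite atomic measure sum_i p_i * delta_{a_i}
    y = r**2
    f = sum(p * dny(x - a, y) for a, p in atoms)
    c = r**2 * (math.gamma(d / 2) / 2) * (d / (2 * math.e)) ** (-d / 2)
    return c * l1_norm(f, dx)

# A point mass has maximal detail s_r = 1 at every scale...
assert abs(detail([(0.0, 1.0)], 0.5) - 1.0) < 1e-3
# ...and two atoms separated by much more than r also give detail close to 1,
# since the positive and negative parts do not interact at scale r.
assert abs(detail([(0.0, 0.5), (5.0, 0.5)], 0.1) - 1.0) < 1e-3
```

The helper names (`dny`, `detail`) are ours, not the paper's; the point of the check is that the prefactor in the definition of $s_r$ exactly cancels the maximal possible $L^1$ norm at $y = r^2$, so that $s_r(\mu) \in (0,1]$.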
\begin{definition} Let $f:[0,1] \to [0,1]$ be defined by $f(\delta) = \frac{1}{2} (1 + \delta) \log (1 + \delta) + \frac{1}{2} (1-\delta) \log (1-\delta)$. \end{definition} Note that $f$ is convex. \begin{lemma} \label{jensondiscrete} Let $\mu$ and $\nu$ be two absolutely continuous probability measures on $\mathbb{R}^d$ with density functions $a$ and $b$. Then, \begin{equation*} f\left(\frac{1}{2}||\mu - \nu ||_1\right) \leq H\left(\frac{\mu + \nu}{2}\right) - \frac{1}{2} H(\mu) - \frac{1}{2} H(\nu) \label{2mb} \end{equation*} \end{lemma} \begin{proof} We simply note that, \begin{eqnarray} \lefteqn{f\left(\frac{1}{2}||\mu - \nu ||_1\right)}\nonumber\\ &=& f \left( \int \left| \frac{a(x) - b(x)}{a(x) + b(x)} \right| \frac{a(x)+b(x)}{2} \, dx\right)\nonumber\\ &\leq &\int \frac{a(x) + b(x)}{2} f \left(\left|\frac{a(x) - b(x)}{a(x)+b(x)}\right|\right)\,dx \label{bj}\\ &=& \int \frac{a+b}{2} \left( \frac{1}{2} \left(1+\frac{a-b}{a+b} \right) \log \left(1+\frac{a-b}{a+b} \right) \right. \nonumber\\ && \left.+ \frac{1}{2} \left(1-\frac{a-b}{a+b} \right) \log \left(1-\frac{a-b}{a+b} \right) \right) \, dx \nonumber \\ & = & \int \frac{a}{2} \log \left( \frac{2a}{a+b} \right) + \frac{b}{2} \log \left( \frac{2b}{a+b} \right) \, dx \nonumber\\ &=& H\left(\frac{\mu + \nu}{2}\right) - \frac{1}{2} H(\mu) - \frac{1}{2} H(\nu)\nonumber \end{eqnarray} with \eqref{bj} following from Jensen's inequality. \end{proof} \begin{definition} Let $t\in \mathbb{R}^d$. Then $B_k^{(t)}$ is the convolution of $k$ copies of $\frac{1}{2} (\delta_t + \delta_{-t})$. \end{definition} \begin{lemma} \label{binenb} Let $n \in \mathbb{N}$, let $\mu$ be a probability measure on $\mathbb{R}^d$ and let $t \in \mathbb{R}^d$. Then, \begin{equation} \sum_{k=0}^{n-1} f\left( \frac{1}{2} \left| \left| \mu * B_k^{(t)} * (\delta_t - \delta_{-t}) \right| \right|_1\right) \leq H\left( \mu * B_n^{(t)}\right) - H(\mu) . \label{eb} \end{equation} \end{lemma} \begin{proof} We will prove this by induction.
The case $n=1$ follows from putting $\mu * \delta_t$ and $\mu* \delta_{-t}$ into Lemma \ref{jensondiscrete}. For the case $n>1$ we simply note that by putting $\mu * B_{n-1}^{(t)}*\delta_t$ and $\mu * B_{n-1}^{(t)}*\delta_{-t}$ into Lemma \ref{jensondiscrete} we have, \begin{equation} f \left( \frac{1}{2}\left| \left| \mu * B_{n-1}^{(t)} * (\delta_t - \delta_{-t}) \right| \right|_1\right) \leq H(\mu*B_n^{(t)}) - H(\mu*B_{n-1}^{(t)}) . \end{equation} The result then follows from the inductive hypothesis. \end{proof} The idea now is to let $t= r/\sqrt{n}$, where $r= (r_1, 0 , \dots, 0) \in \mathbb{R}^d$, and consider the limit $n \to \infty$. This will give us an expression in terms of convolutions with normal distributions. The next lemma will be used to deal with the right hand side of \eqref{eb} in this limit. \begin{lemma} \label{hconv1} Let $\lambda_n$ be a sequence of probability measures on $\mathbb{R}^d$ such that $\lambda_n \to \lambda$ in distribution and let $\mu$ be a probability measure with finite entropy and with Lipschitz density function $g$. Suppose further that there exist constants $C>0$ and $\varepsilon>0$ such that for all $n \in \mathbb{N}$ and for all $x \in \mathbb{R}^d$ we have, \begin{equation} (\lambda_n * g) (x), g(x), (\lambda * g) (x) \leq C e^{-\varepsilon |x|^2} \label{ub} \end{equation} then $H(\mu * \lambda_n) \to H(\mu * \lambda)$ as $n \to \infty$. \end{lemma} \begin{proof} First note that, \begin{equation*} (\lambda_n * g) (x) = \int_{\mathbb{R}^d} g(x-y) \, \lambda_n(dy) . \end{equation*} Since $g(x-\cdot)$ is Lipschitz, by the Portmanteau lemma we have that this converges to $\int_{\mathbb{R}^d} g(x-y) \, \lambda(dy)$. This means that for all $x$ we have, \begin{equation*} (\lambda_n * g) (x) \to (\lambda * g) (x) \end{equation*} and consequently, \begin{equation*} - (\lambda_n * g) (x) \log (\lambda_n * g) (x) \to - (\lambda * g) (x) \log ( \lambda * g) (x) .
\end{equation*} We wish to apply the dominated convergence theorem. For this we will use \eqref{ub}. First note that for $y \geq 0$, \begin{equation*} \sup \limits_{x \in [0,y]} |x \log x | = \begin{cases} \max\{e^{-1}, y \log y\}, & \text{if } y \geq e^{-1} \\ -y \log y, & \text{otherwise} \end{cases} . \end{equation*} This means that we can find $C', R$ such that the function $a: \mathbb{R} \to \mathbb{R}$ defined by, \begin{equation*} a: x \mapsto \begin{cases} C', & \text{if } |x| \leq R\\ C'x^2e^{-\varepsilon x^2} & \text{otherwise} \end{cases} \end{equation*} is a dominating function and so the lemma holds. \end{proof} We now need to find the limit of the left hand side of \eqref{eb}. \begin{definition}[Wasserstein Distance] Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}^d$. Then the first Wasserstein distance between $\mu$ and $\nu$ is given by, \begin{equation*} W_1(\mu, \nu) = \inf\limits_{\gamma \in \Gamma} \int_{(x, y) \in \mathbb{R}^d \times \mathbb{R}^d} |x-y| \,d\gamma(x,y) \end{equation*} where $\Gamma$ is the set of couplings between $\mu$ and $\nu$. \end{definition} \begin{lemma} Suppose that $f$ is $\alpha$-Lipschitz and bounded, and let $\mu$ and $\nu$ be probability measures. Then for all $x \in \mathbb{R}^d$ we have, \begin{equation*} |(\mu * f)(x) - (\nu * f)(x)| \leq \alpha W_1(\mu, \nu) . \end{equation*} \end{lemma} \begin{proof} Let $\gamma \in \Gamma$. Note that, \begin{eqnarray} \lefteqn{|(\mu * f)(x) - (\nu * f )(x)|}\nonumber\\ &=& \left| \int_{(y,z) \in \mathbb{R}^d\times \mathbb{R}^d} f(x-y) -f(x-z) \, d \gamma(y,z) \right|\nonumber \\ &\leq & \int_{(y,z) \in \mathbb{R}^d \times \mathbb{R}^d} |f(x-y) -f(x-z)| \, d \gamma(y,z) \nonumber\\ &\leq & \alpha\int_{(y,z) \in \mathbb{R}^d \times \mathbb{R}^d} |y-z| \, d \gamma(y,z) .\nonumber \end{eqnarray} Taking the infimum of the right hand side gives the required result.
\end{proof} \begin{definition} Given some covariance matrix $\Sigma$ let $N_{\Sigma}$ denote the multivariate normal distribution with mean $0$ and covariance matrix $\Sigma$. \end{definition} \begin{lemma} \label{cor1} There is some constant $C$ such that, \begin{equation*} W_1\left(B_k^{(t)}, N_{kt t^{T}}\right) \leq C \left| \left| t \right| \right|^2 \end{equation*} where $t t^{T}$ denotes the matrix $\Sigma$ with $\Sigma_{ij} = t_i t_j$ and $\left| \left| \cdot \right| \right|$ denotes the standard Euclidean norm. \end{lemma} \begin{proof} This follows from any of several non-uniform Berry-Esseen bounds; for example it follows from Theorem 1 of \cite{nonuniformberryesseenerickson}. This theorem states (amongst other things) that the Wasserstein distance between the sum of some independent mean $0$ random variables and a normal distribution with the same variance is at most a constant multiplied by the sum of the expectations of the absolute values of the cubes of the random variables divided by the variance of the sum. This gives the required bound. \end{proof} \begin{lemma} Let $\mu$ be a probability measure on $\mathbb{R}^d$ with density function $g$ such that $g$ is twice differentiable, the function $a: \mathbb{R}^d \to \mathbb{R}$ defined by $a(x) = \sup \limits_{y \in B_1(x)} \left| \frac{\partial^2}{\partial y_1^2} g(y)\right|$ is in $L^1$ and $g$ is bounded and Lipschitz. Then there is a constant $C >0$ depending only on $\mu$ such that for any $k \in \mathbb{N}$ and any $t_1 \in (0, 1)$ letting $t=(t_1, 0, \dots, 0)$ we have, \begin{equation*} \left| \,\, \left| \left| \mu * B_k^{(t)} * (\delta_t - \delta_{-t}) \right| \right|_1-2t _1\left| \left| \frac{\partial g}{\partial x_1} * N_{ktt^{T}} \right| \right|_1 \,\, \right| \leq C t_1^2 .
\end{equation*} \end{lemma} \begin{proof} First note that, \begin{eqnarray} \lefteqn{\left| \,\, \left| \left| \mu * B_k^{(t)} * (\delta_t - \delta_{-t}) \right| \right|_1-2t _1\left| \left| \frac{\partial g}{\partial x_1} * N_{ktt^{T}} \right| \right|_1 \,\, \right|}\nonumber\\ &\leq& \left| \left| ( \mu * (\delta_t - \delta_{-t}) - 2t_1 \frac{\partial g}{\partial x_1} ) * B_k^{(t)} \right| \right| _ 1 + 2t_1 \left| \left| \frac{\partial g}{\partial x_1} * (B_k^{(t)}- N_{ktt^{T}}) \right| \right|_1 .\nonumber \end{eqnarray} By the mean value theorem we have, \begin{equation*} g* (\delta_t - \delta_{-t} ) (x) = 2 t_1 \frac{\partial g}{\partial y_1}(u) \end{equation*} for some $u$ in the line segment between $x-t$ and $x+t$. This means, \begin{equation*} \left| \left( g * (\delta_t - \delta_{-t} ) - 2t_1 \frac{\partial g}{\partial x_1} \right) (x) \right| \leq 2t_1^2a(x) \end{equation*} and so, \begin{equation*} \left| \left| ( \mu * (\delta_t - \delta_{-t}) - 2t_1 \frac{\partial g}{\partial x_1}) * B_k^{(t)} \right| \right| _ 1 \leq \left| \left| ( \mu * (\delta_t - \delta_{-t}) - 2t_1 \frac{\partial g}{\partial x_1}) \right| \right| _ 1 \leq C t_1^2 . \end{equation*} We then note that by Lemma \ref{cor1} we have, \begin{equation*} W_1(B_k^{(t)}, N_{ktt^{T}}) \leq C t_1^2 \end{equation*} giving, \begin{equation*} 2t_1\left| \left| \frac{\partial g}{\partial x_1} * (B_k^{(t)}- N_{ktt^{T}}) \right| \right|_1 \leq Ct_1^2 . \end{equation*} Adding these terms gives the required result. \end{proof} \begin{proposition} \label{longform} Let $r_1>0$ and let $\mu$ be a probability measure with density function $f$ such that, \begin{enumerate} \item There are $C, \varepsilon >0$ such that $f(x) < C e ^ {-\varepsilon |x| ^2}$ \item $f$ is twice differentiable \item $f$ and $\frac{\partial f}{\partial x_1}$ are bounded and Lipschitz \item The function $g(x) = \sup \limits_{y \in B_1(x)} \left| \frac{\partial^2}{\partial y_1^2} f(y)\right|$ is in $L^1$ .
\end{enumerate} Then we have, \begin{equation*} \frac{1}{2} \int_0^{r_1^2} \left| \left| \frac{\partial}{\partial x_1} \mu * N_{ye_1} \right| \right|_1^2 \, dy \leq H(\mu * N_{r_1^2e_1}) - H(\mu) . \end{equation*} Here $e_1$ denotes the matrix with $e_{11} = 1$ and all other entries equal to $0$. \end{proposition} \begin{proof} The idea is to fix $r_1 >0$ and then let $t = \frac{r}{\sqrt{n}}$ where $r = (r_1, 0, \dots, 0)$ and take the limit $n \to \infty$ in Lemma \ref{binenb}. From Lemma \ref{hconv1} we immediately have that, \begin{equation*} H(\mu * B_n^{(t)}) - H(\mu) \to H(\mu * N_{r_1^2e_1}) - H(\mu) . \end{equation*} For the other side note that $f(\delta) = \frac{1}{2} \delta^2 + O(\delta^3)$ and so, \begin{eqnarray} f\left(\frac{1}{2} \left| \left| \mu * B_k^{(t)} * (\delta_{t} - \delta_{-t}) \right| \right|_1\right)\nonumber &=&\frac{1}{2} t_1^2 \left| \left| \frac{\partial}{\partial x_1} \mu * N_{kt_1^2e_1} \right| \right|_1^2 + O(t_1^3)\nonumber\\ &=& \frac{1}{2} \frac{r_1^2}{n} \left| \left| \frac{\partial}{\partial x_1} \mu * N_{kt_1^2e_1} \right| \right|_1^2 + O(n^{-\frac{3}{2}})\nonumber \end{eqnarray} with the error term uniform in $k$. This means that, \begin{eqnarray} \lefteqn{\sum_{k=0}^{n-1} f\left(\frac{1}{2} \left| \left| \mu * B_k^{(t)} * (\delta_t - \delta_{-t}) \right| \right|_1\right)}\nonumber\\ & = & \sum_{k=1}^{n-1} \frac{1}{2} \frac{r_1^2}{n} \left| \left| \frac{\partial}{\partial x_1} \mu * N_{kt_1^2e_1} \right| \right|_1^2 + O(n^{-\frac{1}{2}})\nonumber\\ & \to & \frac{1}{2} \int_0^{r_1^2} \left| \left| \frac{\partial}{\partial x_1} \mu * N_{ye_1} \right| \right|_1^2 \, dy\nonumber \end{eqnarray} as required, with the last line following from, for example, the dominated convergence theorem. \end{proof} \begin{definition} Given some $y = (y_1, y_2, \dots, y_d) \in \mathbb{R}^d$ let $\tilde{n}_{y}$ be the multivariate normal with diagonal covariance matrix with the entries on the diagonal given by $y$ and zero mean.
\end{definition} \begin{corollary} \label{tmpc} Let $y = (y_1, y_2, \dots, y_d)$ be such that $y_1, \dots, y_d >0$ and let $\mu$ be a compactly supported probability measure on $\mathbb{R}^d$. Then, \begin{equation*} \frac{1}{2} \left| \left| \frac{\partial}{\partial x_i} \mu * \tilde{n}_y \right| \right|_1^2 \leq \frac{\partial}{\partial y_i ^{+}} H(\mu * \tilde{n}_y) . \end{equation*} \end{corollary} \begin{proof} This follows from Proposition \ref{longform}. \end{proof} \begin{corollary} \label{tmpc2} Suppose that $y>0$. Then, \begin{equation*} \frac{1}{2} \sum_{i=1}^d \left| \left| \frac{\partial}{\partial x_i} \mu * n_y \right| \right|_1^2 \leq \frac{\partial}{\partial y} H(\mu * n_y) \end{equation*} \end{corollary} \begin{proof} This follows from the chain rule. \end{proof} \begin{theorem} \label{entropytodetail} Let $\mu$ and $\nu$ be compactly supported probability measures on $\mathbb{R}^d$ and let $r$, $u$ and $v$ be positive real numbers such that $r^2=u+v$. Then, \begin{equation*} s_r(\mu * \nu)\leq \frac{r^{2}}{2} \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \sqrt{ \frac{\partial }{\partial u} H(\mu * n_u) \frac{\partial }{\partial v} H(\nu * n_v) } \end{equation*} \end{theorem} \begin{proof} First note that by Corollary \ref{tmpc2} we know that, \begin{equation} \frac{1}{2} \sum_{i=1}^d \left| \left| \frac{\partial}{\partial x_i} \mu * n_u \right| \right|_1^2 \leq \frac{\partial}{\partial u} H(\mu * n_u) \label{tmpe1} \end{equation} and, \begin{equation} \frac{1}{2} \sum_{i=1}^d \left| \left| \frac{\partial}{\partial x_i} \nu * n_v \right| \right|_1^2 \leq \frac{\partial}{\partial v} H(\nu * n_v) . \label{tmpe2}
\end{equation} Letting $y=r^2$ we have that, \begin{equation*} \frac{\partial^2}{\partial x_i^2} \mu * \nu * n_y = \frac{\partial}{\partial x_i} \mu * n_u * \frac{\partial}{\partial x_i} \nu * n_v \end{equation*} meaning, \begin{equation*} \left| \left| \frac{\partial^2}{\partial x_i^2} \mu * \nu * n_y \right| \right|_1 \leq \left| \left| \frac{\partial}{\partial x_i} \mu * n_u \right| \right|_1 \left| \left| \frac{\partial}{\partial x_i} \nu * n_v \right| \right|_1 \end{equation*} and so by Cauchy-Schwarz, \begin{equation*} \sum_{i=1}^{d} \left| \left| \frac{\partial^2}{\partial x_i^2} \mu * \nu * n_y \right| \right|_1 \leq \left( \sum_{i=1}^d \left| \left| \frac{\partial}{\partial x_i} \mu * n_u \right| \right|_1^2 \right) ^{\frac{1}{2}} \left( \sum_{i=1}^d \left| \left| \frac{\partial}{\partial x_i} \nu * n_v \right| \right|_1^2 \right) ^{\frac{1}{2}} . \end{equation*} Hence by Lemma \ref{xtoy} and equations \eqref{tmpe1} and \eqref{tmpe2} we have that, \begin{equation*} \left| \left| \frac{\partial}{\partial y} \mu * \nu * n_y \right| \right|_1 \leq \sqrt{ \frac{\partial }{\partial u} H(\mu * n_u) \frac{\partial }{\partial v} H(\nu * n_v) } . \end{equation*} The result follows by the definition of detail around a scale. \end{proof} \section{Bounds on Gap in Entropy} \label{initialgaps} In this section we will find estimates for the difference in entropy of the convolution of a discrete measure with two different spherical normal distributions, in terms of the variance of the normal distributions, the entropy of the measure, the radius of the support of the measure, and the minimum distance between two distinct points in the support of the measure. \begin{lemma} \label{lowerbound2} Let $n \in \mathbb{N}$, let $x_1, \dots, x_n \in \mathbb{R}^d$ be such that $|x_i-x_j| \geq 2R$ for $i \neq j$ and let $\mathbf{p} = (p_1, p_2 , \dots, p_n)$ be a probability vector.
Let, \begin{equation*} \mu = \sum_{i = 1}^n p_i \delta_{x_i} \end{equation*} then, \begin{equation*} H(\mu * n_{r^2} ) \geq d\log r + H (\mathbf{p}) -c \end{equation*} for some constant $c$ depending only on $d$ and the ratio $R/r$. \end{lemma} \begin{proof} Given $k \in \mathbb{N}$ define, \begin{equation*} \tilde{n}_k := n_{r^2} | _ {A_{(k-1)R, kR}} \end{equation*} where $A_{a,b} := \{x \in \mathbb{R}^d : |x| \in [a, b)\}$. Given $k \in \mathbb{N}$ and $i \in \mathbb{N}$ such that $i \leq n$ define, \begin{equation*} \mu_{i,k} := \delta_{x_i} * \tilde{n}_k . \end{equation*} Given $k \in \mathbb{N}$ and $j \in \mathbb{Z} / k \mathbb{Z}$ define, \begin{equation*} \nu_{j, k} := \sum_{i \equiv j \,\, ( \text{mod } k)} p_i \delta_{x_i} . \end{equation*} We have by Lemma \ref{maxinc}, \begin{equation*} \sum_{j=1}^k H(\nu_{j, k}) \geq H(\mathbf{p}) - \log k . \end{equation*} Also by Lemma \ref{sepinc}, \begin{equation*} H(\nu_{j,k} * \tilde{n}_k) = \left| \left| \tilde{n}_k \right| \right|_1 H(\nu_{j,k}) + \left| \left| \nu_{j,k} \right| \right|_1 H(\tilde{n}_k) \end{equation*} meaning that, \begin{eqnarray} H(\mu * \tilde{n}_k) & = & H \left( \sum_{j=1}^k \nu_{j,k} * \tilde{n}_k \right)\nonumber\\ & \geq & \sum_{j=1}^k H (\nu_{j,k} * \tilde{n}_k )\nonumber\\ & \geq & \left| \left| \tilde{n}_k \right| \right|_1 H ( \mathbf{p}) + H(\tilde{n}_k) - \left| \left| \tilde{n}_k \right| \right|_1 \log k .\nonumber \end{eqnarray} We now simply wish to sum over $k$. It is not immediately clear that Lemma \ref{concave} holds for series as well as sums, so to deal with this, given $N \in \mathbb{N}$, we define, \begin{equation*} \tau_N := n_{r^2} | _{A_{NR, \infty}} . \end{equation*} This means that we have, \begin{equation*} H(\mu * n_{r^2}) \geq H(\tau_N) + \sum_{k=1}^N H(\mu*\tilde{n}_k) .
\end{equation*} By Lemma \ref{maxinc} we know that, \begin{equation*} H(\tau_N) + \sum_{k=1}^N H(\tilde{n}_k) \geq H(n_{r^2}) - c_1 \end{equation*} for some constant $c_1$ depending only on $R/r$. This means that, \begin{equation*} H(\mu * n_{r^2}) \geq \left( \sum_{k=1}^N \left| \left| \tilde{n}_k \right| \right|_1 \right) H(\mathbf{p}) + H(n_{r^2}) -c_1 - \sum_{k=1}^N \left| \left| \tilde{n}_k \right| \right|_1 \log k . \end{equation*} The result follows by taking $N \to \infty$ and noting that $H(n_{r^2}) = d \log r + c$. \end{proof} \begin{definition} Given an interval $I \subset \mathbb{R}$ let, \begin{equation*} \nu_{F}^I := \Conv _{n \in \mathbb{Z} : \lambda^n \in I} \nu_F\circ U^{-n} \circ T_{\lambda^n}^{-1} . \end{equation*} \end{definition} \begin{lemma} \label{initialgap} Let $F$ be an iterated function system on $\mathbb{R}^d$ with uniform contraction ratio and uniform rotation. Let $h_F$ be its Garsia entropy, $M_F$ be its splitting rate, and $\lambda$ be its contraction ratio. Then for any $M>M_F$ there is some $c>0$ such that for all $n \in \mathbb{N}$ letting $\mu = \nu_F^{(\lambda^n, 1]}$ we have, \begin{equation*} H \left( \mu * n_{1}\right) - H\left( \mu * n_{M^{-2n}} \right) < (d\log M- h_F)n + c . \end{equation*} \end{lemma} \begin{proof} We simply bound $H \left( \mu * n_{1}\right) \leq H(\mu_F*n_1)$ and $H(\mu * n_{M^{-2n}}) \geq -n d \log M +n h_F-c$. This means that, \begin{equation*} H \left( \mu * n_{1}\right) - H\left( \mu * n_{M^{-2n}} \right) \leq (d\log M- h_F)n + c+H(\mu_F*n_1) \end{equation*} as required. \end{proof} This turns out to be the only place in the proof where we use the splitting rate and the Garsia entropy. This means that if it were possible to find a result like this for some iterated function system by some other means, then this could also be used to show that its self-similar measure is absolutely continuous.
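As an aside, the lower bound of Lemma \ref{lowerbound2} is easy to check numerically in a simple case. The following sketch (illustrative only, not part of any proof; the choice of atoms, weights and grid resolution is arbitrary) takes $d=1$, four equally weighted atoms with separation $2R=1$ and $r=0.1$, and compares a Riemann-sum approximation of $H(\mu * n_{r^2})$ with $H(\mathbf{p}) + H(n_{r^2})$, the value the entropy should be close to when $R/r$ is large and the Gaussian bumps barely overlap; this in particular dominates $d \log r + H(\mathbf{p}) - c$.

```python
import numpy as np

# Numerical sanity check (illustrative, d = 1): for atoms separated by 2R
# with R/r large, the bumps of mu * n_{r^2} barely overlap, so
# H(mu * n_{r^2}) is close to H(p) + H(n_{r^2}) = H(p) + log r + const.
r = 0.1                                  # Gaussian standard deviation
atoms = np.array([0.0, 1.0, 2.0, 3.0])   # separation 2R = 1, so R/r = 5
p = np.full(4, 0.25)                     # uniform probability vector
dx = 1e-3
xs = np.arange(-1.0, 5.0, dx)

# density of mu * n_{r^2} on the grid
dens = np.zeros_like(xs)
for pi, a in zip(p, atoms):
    dens += pi * np.exp(-(xs - a) ** 2 / (2 * r ** 2)) / (r * np.sqrt(2 * np.pi))

H_num = -np.sum(dens * np.log(dens)) * dx          # differential entropy, Riemann sum
H_p = np.log(4.0)                                  # entropy of the probability vector
H_gauss = 0.5 * np.log(2 * np.pi * np.e * r ** 2)  # entropy of n_{r^2}, = log r + const

print(H_num, H_p + H_gauss)
```

For these parameters the two printed values agree to within a few parts in a thousand, since at separation $10\sigma$ the cross terms in the entropy integral are negligible.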
\section{Finding the Measures} In this section we will show that under certain conditions it is possible to express $\mu_F$ as the convolution of many measures, each of which has at most a limited amount of detail at some given scale. After this we will use Theorem \ref{manyconv} to get an upper bound on the detail around a scale of $\mu_F$. Finally we will apply Lemma \ref{isabscont} to show that $\mu_F$ is absolutely continuous under these conditions. \begin{definition} Given some $r \in \left(0, \frac{1}{10} \right)$ and an iterated function system $F$ on $\mathbb{R}^d$ we say that an interval $I$ is $\alpha$-admissible at scale $r$ if for all $t$ with, \begin{equation*} t \in \left[ \exp \left( -\left( \log \log r^{-1} \right) ^{10} \right)r, \exp \left( \left( \log \log r^{-1} \right) ^{10} \right)r\right] \end{equation*} we have, \begin{equation*} \left. \frac{\partial}{\partial y^{+}} H(\nu_F^{I}*n_y) \right|_{y=t^2} \leq \alpha t^{-2} . \end{equation*} \end{definition} The precise values of the constants taken to be ten in the above definition make no difference to the final result, and so they are taken to be ten for simplicity. \begin{proposition} \label{manyadmissibletodetail} Let $\alpha, K >0$ and let $d \in \mathbb{N}$. Suppose that $\alpha < \frac{1}{8}\left(1 + \frac{1}{2K^2} \right)^{-1}$. Then there exists some constant $c$ such that the following is true. Let $F$ be an iterated function system on $\mathbb{R}^d$ with uniform contraction ratio and uniform rotation. Suppose that $r <c$ and $n\in \mathbb{N}$ is even with, \begin{equation*} n \leq 10 \frac{\log \log r^{-1}}{\log \left( \frac{1}{8} \left(1 + \frac{1}{2K^2} \right)^{-1} \alpha^{-1} \right)} \end{equation*} and that $I_1, I_2, \dots, I_n$ are disjoint $\alpha$-admissible intervals at scale $r$ contained in $(0,1)$.
Then we have, \begin{equation} s_r(\mu_F) \leq \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \frac{1}{8} \left( 8\left(1 + \frac{1}{2K^2} \right) \alpha \right)^{\frac{n}{2}} . \label{admissibledetail} \end{equation} \end{proposition} We wish to show that $s_r(\mu_F) \leq \left( \log r^{-1} \right) ^{-\beta}$ for some $\beta >1$ for all sufficiently small $r$. For this it suffices to show that we can find at least $\beta \frac{\log \log r^{-1}}{\frac{1}{2} \log \frac{1}{8 \alpha}}$ admissible intervals for some $\beta >1$. The important fact about equation \eqref{admissibledetail} is that for any $\lambda > \sqrt{8 \alpha}$ if $r$ is sufficiently small and $n$ satisfies the conditions of the proposition we have, \begin{equation*} s_r(\mu_F) \leq C \lambda^n . \end{equation*} \begin{proof} Throughout this proof let $c_1, c_2, \dots$ denote constants depending only on $\alpha, K$ and $d$. The idea is to use Theorem \ref{manyconv} and Theorem \ref{entropytodetail}. First note that by Theorem \ref{entropytodetail} we know that for all \begin{equation*} t \in \left[ \frac{1}{\sqrt{2}}\exp \left( -\left( \log \log r^{-1} \right) ^{10} \right)r, \frac{1}{\sqrt{2}} \exp \left( \left( \log \log r^{-1} \right) ^{10} \right)r\right] \end{equation*} and for $i = 1, 2, \dots , \frac{n}{2}$ we have, \begin{equation*} s_r(\nu_F^{I_{2i-1}} * \nu_F^{I_{2i}}) \leq \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \alpha . \end{equation*} We now wish to apply Theorem \ref{manyconv}. To do this we simply need to check that, \begin{equation*} \left[ 2^{-\frac{m}{2}} r, K^m \alpha_1^{-m 2^m} r \right] \subset \left[ \exp \left( -\left( \log \log r^{-1} \right) ^{10} \right)r, \exp \left( \left( \log \log r^{-1} \right) ^{10} \right)r\right] \end{equation*} where $m = \frac{\log \frac{n}{2}}{\log \frac{3}{2}}$ and $\alpha_1 = \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \alpha$.
We note that, \begin{equation*} m \leq \frac{1}{\log \frac{3}{2}} \log \log \log r^{-1} + c_1 \end{equation*} and so for all sufficiently small $r$ we have, \begin{equation*} 2^{-\frac{m}{2}} r \geq \exp \left( -\left( \log \log r^{-1} \right) ^{10} \right)r . \end{equation*} For the other side note that, \begin{equation*} K^m \alpha_1^{-m 2^m} \leq \exp \left( c_2 \left( \log \log \log r^{-1} \right) \left( \log \log r^{-1} \right)^{\frac{\log 2}{\log \frac{3}{2}}} +c_3 \right) . \end{equation*} Noting that $\frac{\log 2}{\log \frac{3}{2}}<10$ this means that for all sufficiently small $r$ we have, \begin{equation*} K^m \alpha_1^{-m 2^m} r \leq \exp \left( \left( \log \log r^{-1} \right) ^{10} \right)r . \end{equation*} This means that the conditions of Theorem \ref{manyconv} are satisfied and so, \begin{eqnarray} \lefteqn{s_r(\nu_F^{I_1} * \nu_F^{I_2} * \dots * \nu_F^{I_n}) }\nonumber\\ & \leq & \left( \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right)^{-\frac{d}{2}} \alpha \right) ^{\frac{n}{2}} \left( \frac{8}{\Gamma \left( \frac{d}{2} \right)} \left( \frac{d}{2e} \right) ^{\frac{d}{2}} \left( 1 + \frac{1}{2K^2} \right) \right) ^{\frac{n}{2}-1}\nonumber\\ & \leq & \Gamma \left( \frac{d}{2} \right) \left( \frac{d}{2e} \right) ^{-\frac{d}{2}} \frac{1}{8} \left( 8\left(1 + \frac{1}{2K^2} \right) \alpha \right)^{\frac{n}{2}} .\nonumber \end{eqnarray} We conclude the proof by noting that, \begin{equation*} s_r(\mu_F) \leq s_r(\nu_F^{I_1} * \nu_F^{I_2} * \dots * \nu_F^{I_n}) . \end{equation*} \end{proof} \begin{lemma} \label{numadmissible} Suppose that $F$ is an iterated function system with uniform contraction ratio $\lambda$ and uniform rotation and that $M>M_F$. Let $\alpha \in (0,1)$. 
Suppose further that, \begin{equation*} h_F > (d-2\alpha \lambda^4) \log M \end{equation*} then there is some constant $c$ depending only on $M$, $F$ and $\alpha$ such that for all $n \in \mathbb{N}$ and all $r \in (0,\frac{1}{4})$ there are at least, \begin{equation*} \frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} n - c \end{equation*} values of $k$ with, \begin{equation*} k \in [0, n \frac{\log M}{\log \lambda^{-1}}) \cap \mathbb{Z} \end{equation*} such that the interval, \begin{equation*} (\lambda^{n-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil+b},\lambda^{-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil-b}] \end{equation*} is $\alpha$-admissible at scale $r$. Here we take $b = \frac{1}{\log \lambda^{-1}} \left( \log \log r^{-1} \right) ^{10} +10$. \end{lemma} \begin{proof} Throughout this proof let $c_1, c_2, \dots$ be constants depending only on $F$, $M$ and $\alpha$. Recall that by Lemma \ref{initialgap} we have, \begin{equation*} H(\nu_F ^{(\lambda^n,1]} *n_1) - H(\nu_F ^{(\lambda^n,1]} *n_{M^{-2n}}) < n (d\log M - h_F) +c_1. \end{equation*} Define, \begin{equation*} S := \left\{ y \in (M^{-2n},1) : \frac{\partial}{\partial y^{+}} H(\nu_F^{(\lambda^n,1]} * n_y) \leq \frac{1}{y} \alpha \lambda^4\right\} . \end{equation*} Let $\mu$ be the measure on $(0,\infty)$ with density function $y \mapsto \frac{1}{y}$. Note that we have, \begin{eqnarray} \lefteqn{n (d\log M - h_F) +c_1}\nonumber\\ & \geq & H(\nu_F ^{(\lambda^n,1]} *n_1) - H(\nu_F ^{(\lambda^n,1]} *n_{M^{-2n}})\nonumber\\ & = & \int_{M^{-2n}}^1 \frac{\partial}{\partial y} H(\nu_F ^{(\lambda^n,1]} *n_{y}) \, dy\nonumber\\ & \geq & \int_{y \in (M^{-2n},1) \backslash S} \frac{1}{y} \alpha \lambda^4 \, dy\nonumber\\ & = & (2n \log M - \mu(S)) \alpha \lambda^4 .\nonumber \end{eqnarray} This means, \begin{equation*} \mu(S) \geq (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} n - c_2 .
\end{equation*} Now let, \begin{equation*} \tilde{S} = \left\{ k \in \mathbb{N}_0 : (\lambda^{2(k+1)}, \lambda^{2k}] \cap S \neq \emptyset \right\} . \end{equation*} Note that $\mu ((\lambda^{2(k+1)}, \lambda^{2k}] ) = 2 \log \lambda^{-1}$ and so, \begin{equation*} |\tilde{S}| \geq \frac{1}{2 \log \lambda^{-1}} \mu(S) . \end{equation*} In particular, \begin{equation*} |\tilde{S}| \geq \frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} n - c_3 . \end{equation*} Now we wish to show that if $k \in \tilde{S}$ then the interval, \begin{equation*} (\lambda^{n-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil+b},\lambda^{-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil-b}] \end{equation*} is $\alpha$-admissible at scale $r$. To do this suppose that, \begin{equation*} t \in \left[ \exp \left( -\left( \log \log r^{-1} \right) ^{10} \right)r, \exp \left( \left( \log \log r^{-1} \right) ^{10} \right)r\right]. \end{equation*} I wish to show that $\left. \frac{\partial}{\partial y^{+}} H(\nu_F^I* n_y) \right|_{y=t^2} \leq \alpha t^{-2}$ where \begin{equation*} I= {(\lambda^{n-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil+b},\lambda^{-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil-b}]} . \end{equation*} First choose $\tilde{k} \in \mathbb{Z}$ such that, \begin{equation*} \lambda^{\tilde{k} + 1} \leq t \leq \lambda ^ {\tilde{k}} \end{equation*} and choose $\tilde{t} \in (\lambda^{k+1}, \lambda^k]$ such that $\tilde{t}^2 \in S$. Note that we have, \begin{eqnarray} \left. \frac{\partial}{\partial y^{+}} H(\nu_F^{(\lambda^n,1]}*n_y) \right|_{y = \lambda^{2k}} &\leq &\left. \frac{\partial}{\partial y} H(\nu_F^{(\lambda^n,1]}*n_y) \right|_{y = \tilde{t}^2}\nonumber\\ & \leq & \lambda^4 \alpha \tilde{t}^{-2}\nonumber\\ & \leq & \lambda^2 \alpha \lambda^{-2k} .\nonumber \end{eqnarray} This means that, \begin{equation*} \left.
\frac{\partial}{\partial y^{+}} H(\nu_F^{(\lambda^{n-k+\tilde{k}+1},\lambda^{-k+\tilde{k}+1}]}*n_y) \right|_{y = \lambda^{2(\tilde{k}+1)}} \leq \lambda^2 \alpha \lambda^{-2(\tilde{k}+1)} \end{equation*} and so, \begin{equation*} \left. \frac{\partial}{\partial y^{+}} H(\nu_F^{(\lambda^{n-k+\tilde{k}+1},\lambda^{-k+\tilde{k}+1}]}*n_y) \right|_{y = t^2} \leq \lambda^2 \alpha \lambda^{-2(\tilde{k}+1)}\leq \alpha t^{-2} . \end{equation*} It only remains to show that, \begin{equation*} (\lambda^{n-k+\tilde{k}+1},\lambda^{-k+\tilde{k}+1}] \subset (\lambda^{n-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil+b},\lambda^{-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil-b}] . \end{equation*} In other words it suffices to show that $\lambda^{\tilde{k} + 1} \geq \lambda ^{\lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil + b}$ and $\lambda^{\tilde{k} + 1} \leq \lambda ^{\lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil - b}$. Both of these statements follow immediately from the definition of $b$. \end{proof} \begin{corollary} \label{oneadmissible} Suppose that $F$ is an iterated function system with uniform contraction ratio $\lambda$ and uniform rotation and that $M>M_F$. 
Suppose further that, \begin{equation*} h_F > (d-2\alpha \lambda^4) \log M \end{equation*} then there is some constant $c$ depending only on $M$, $F$ and $\alpha$ such that for all $r \in (0,\frac{1}{4})$ and all $n \in \mathbb{N}$ with, \begin{equation} \frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} n - c > 0 \label{atleastone} \end{equation} there is some $k$ with, \begin{equation*} k \in [\frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} n - c , n \frac{\log M}{\log \lambda^{-1}}) \cap \mathbb{Z} \end{equation*} such that the interval, \begin{equation*} (\lambda^{n-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil+b},\lambda^{-k + \lceil \frac{\log r^{-1}}{\log \lambda^{-1}} \rceil-b}] \end{equation*} is $\alpha$-admissible at scale $r$. Here we take $b = \frac{1}{\log \lambda^{-1}} \left( \log \log r^{-1} \right) ^{10} +10$. \end{corollary} \begin{proof} Simply take the largest $k$ found by Lemma \ref{numadmissible}. Note that the condition \eqref{atleastone} is necessary to ensure that at least one such $k$ is found. \end{proof} \begin{lemma} Suppose that $c>0$, $A>1$ and the sequence $(x_n)_{n\in \mathbb{N}_0}$ of positive reals satisfies, \begin{equation*} x_{n+1} = A x_n +c \end{equation*} then, \begin{equation*} x_{n} = A^n (x_0+\frac{c}{A-1})-\frac{c}{A-1} . \end{equation*} \end{lemma} \begin{proof} Simply note that $x_{n+1} +\frac{c}{A-1} = A (x_n+\frac{c}{A-1} )$. \end{proof} \begin{corollary} \label{find2} Suppose that $c>0$, $A>1$ and the sequence $(x_n)_{n\in \mathbb{N}_0}$ of positive reals satisfies, \begin{equation*} x_{n+1} \leq A x_n +c \end{equation*} then, \begin{equation*} x_{n} \leq A^n (x_0+\frac{c}{A-1})-\frac{c}{A-1} . \end{equation*} \end{corollary} \begin{lemma} \label{find4} Suppose that $F$ is an iterated function system with uniform rotation and uniform contraction ratio $\lambda$. 
Let $M>M_F$, $\alpha \in (0, \frac{1}{8})$ and suppose that, \begin{equation*} h_F > (d-2 \alpha \lambda^4) \log M . \end{equation*} Then there exists some $c >0$ such that for every $r>0$ sufficiently small there are at least \begin{equation*} \frac{1}{\log A} \log \log r^{-1} - c \log \log \log r^{-1} \end{equation*} disjoint $\alpha$-admissible intervals at scale $r$ all of which are contained in $(0,1]$ . Here, \begin{equation*} A = \frac{2\alpha \lambda^4 \log M}{h_F - 2 \alpha \lambda^4 \log \lambda^{-1} - (d-2 \alpha \lambda^4) \log M} . \end{equation*} \end{lemma} \begin{proof} Throughout this proof $E_1, E_2, \dots$ will denote error terms which may be bounded by $0 \leq E_i \leq c_i (\log \log r^{-1}) ^{c_i}$ for some positive constants $c_1, c_2, \dots$ which depend only on $\alpha$, $F$ and $M$. Choose $N$ such that equation \eqref{atleastone} is satisfied for all $n \geq N$; in other words, \begin{equation*} \frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} N- c > 0 . \end{equation*} The idea of the proof is to choose a sequence $\left( n_j \right)_{j=0}^{j_{\text{max}}}$ with $N = n_0 < n_1 < n_2 < \dots <n_{j_{\text{max}}}$ such that by using Corollary \ref{oneadmissible} we are guaranteed to find $j_{\text{max}}$ disjoint admissible intervals contained in $(0,1]$. Let $I_j$ denote the interval found using $n_j$ in Corollary \ref{oneadmissible}. Note that to be able to guarantee that $I_j$ and $I_{j+1}$ are disjoint we need, \begin{equation*} (C -1)n_{j+1} \geq \frac{\log M}{\log \lambda^{-1}} n_j + E_1 \end{equation*} where we take, \begin{equation*} C = \frac{1}{2 \log \lambda^{-1}} (h_F - (d-2\alpha \lambda^4) \log M) \alpha^{-1} \lambda^{-4} .
\end{equation*} For this it is sufficient to take, \begin{equation*} n_{j+1} = \lceil A n_j + E_2 \rceil \end{equation*} where, \begin{equation*} A = \frac{2\alpha \lambda^4 \log M}{h_F - 2 \alpha \lambda^4 \log \lambda^{-1} - (d-2 \alpha \lambda^4) \log M} \end{equation*} meaning, \begin{equation*} n_{j+1} \leq A n_j + E_3 . \end{equation*} This means we have, \begin{equation*} n_j \leq A^j \left( n_0 + E_4\right) . \end{equation*} Noting that $n_0 = N = E_5$ this means, \begin{equation*} n_j \leq A^j E_6 . \end{equation*} It must be the case that we are unable to continue this process after $j_{\text{max}}$. The only way that this can happen is if the interval given by continuing this process for one more step is not contained in $(0,1]$. This means that, \begin{equation*} \frac{\log M}{\log \lambda^{-1}} \left( A n_{j_{\text{max}}} + E_3\right) + E_7 > \frac{\log r^{-1}}{\log \lambda ^ {-1}} \end{equation*} and so, \begin{equation*} A^{j_{\text{max}}+1} E_8 > \frac{\log r^{-1}}{\log \lambda ^ {-1}} . \end{equation*} This means that, \begin{equation*} j_{\text{max}} > \frac{1}{\log A} \log \log r^{-1} - c \log \log \log r^{-1} \end{equation*} for some constant $c$ depending only on $\alpha, F$ and $M$ as required. \end{proof} I am now ready to prove the main result. \newtheorem*{theorem:mainresult}{Theorem \ref{mainresult}} \begin{theorem:mainresult} Let $F$ be an iterated function system on $\mathbb{R}^d$ with uniform contraction ratio and uniform rotation. Suppose that $F$ has Garsia entropy $h_F$, splitting rate $M_F$, and uniform contraction ratio $\lambda$. Suppose further that, \begin{equation*} (d \log M_F - h_F)(\log M_F)^2 < \frac{1}{27} (\log M_F - \log \lambda^{-1})^3 \lambda^4 \end{equation*} then the self-similar measure $\mu_F$ is absolutely continuous.
\end{theorem:mainresult} \begin{proof} The idea is to use Proposition \ref{manyadmissibletodetail} and Lemma \ref{find4} to show that the detail around a scale decreases fast enough for us to be able to apply Lemma \ref{isabscont}. We are given that, \begin{equation*} (d \log M_F - h_F)(\log M_F)^2 < \frac{1}{27} \left( \log M_F - \log \lambda^{-1} \right) ^3 \lambda^4 . \end{equation*} Letting $\alpha = \frac{1}{18} \left( \frac{ \log M_F - \log \lambda^{-1}}{\log M_F} \right) ^2$ we get, \begin{equation*} d \log M_F - h_F < \frac{2}{3} (\log M_F - \log \lambda^{-1}) \alpha \lambda^4 . \end{equation*} Rearranging this gives, \begin{equation} \frac{4}{3} (\log M_F - \log \lambda^{-1}) \alpha \lambda^4 < h_F - 2 \alpha \lambda^4 \log \lambda^{-1} - (d-2 \alpha \lambda^4 ) \log M_F \label{getpos} \end{equation} which means that, \begin{equation*} \frac{2}{3} \left( \frac{\log M_F - \log \lambda^{-1}}{\log M_F} \right) \tilde{A} < 1 \end{equation*} where, \begin{equation*} \tilde{A} = \frac{2 \alpha \lambda^4 \log M_F}{h_F-2\alpha \lambda^4 \log \lambda^{-1} - (d- 2 \alpha \lambda^4) \log M_F} . \end{equation*} This means that, \begin{equation*} \sqrt{8 \alpha} \tilde{A} < 1 \end{equation*} and so, \begin{equation*} \frac{1}{\log \tilde{A}} > \frac{2}{\log \frac{1}{8 \alpha}} . \end{equation*} We now choose some $M>M_F$ such that, \begin{equation*} \frac{1}{\log A} > \frac{2}{\log \frac{1}{8 \alpha}} \end{equation*} where, \begin{equation*} A = \frac{2 \alpha \lambda^4 \log M}{h_F-2\alpha \lambda^4 \log \lambda^{-1} - (d- 2 \alpha \lambda^4) \log M} . \end{equation*} As mentioned previously this is enough to finish the proof. We wish to apply Lemma \ref{find4}. To do this we need to check that, \begin{equation*} h_F > (d - 2 \alpha \lambda^4) \log M .
\end{equation*} To see this simply note that by \eqref{getpos} we have, \begin{eqnarray} h_F - (d - 2 \alpha \lambda^4) \log M_F &>& \frac{4}{3}(\log M_F - \log \lambda^{-1}) \alpha \lambda^4 +2\alpha \lambda^4 \log \, \lambda^{-1}\nonumber\\ &>& 0 .\nonumber \end{eqnarray} This means that by Lemma \ref{find4} there is some $c_1>0$ such that for all sufficiently small $r>0$ we can find at least, \begin{equation*} \frac{1}{\log A} \log \,\log \, r^{-1} - c_1 \log \, \log \, \log \, r^{-1} \end{equation*} disjoint $\alpha$-admissible intervals at scale $r$. We now wish to apply Proposition \ref{manyadmissibletodetail}. We choose $K>1$ in this proposition such that, \begin{equation*} \frac{1}{\log A} > \frac{2}{\log \frac{1}{8 (1 + \frac{1}{2K^2})\alpha}} . \end{equation*} Applying Proposition \ref{manyadmissibletodetail} with $n$ the largest even number which is less than both $\frac{1}{\log A} \log \,\log \, r^{-1} - c_1 \log \, \log \, \log \, r^{-1}$ and $10 \frac{\log \log r^{-1}}{\log \left( \frac{1}{8} \left(1 + \frac{1}{2K^2} \right)^{-1} \alpha^{-1} \right)}$ we get that for all sufficiently small $r$, \begin{equation*} s_r(\mu_F) \leq c_2 (\log r^{-1})^{-\beta}(\log \log r ^{-1})^{c_3} \end{equation*} for some $\beta>1$. By Lemma \ref{isabscont} this means that the measure $\mu_F$ is absolutely continuous. \end{proof} To prove Theorem \ref{mainbernoulli} we will first need the following result dating back at least to Garsia. \begin{lemma} \label{garsia} Let $\lambda$ be an algebraic number and denote by $d$ the number of its algebraic conjugates with modulus $1$. Then there is some constant $c_{\lambda}$ depending only on $\lambda$ such that whenever $p$ is a polynomial with degree $n$ and coefficients $-1, 0$ and $1$ such that $p(\lambda) \neq 0$ we have, \begin{equation*} |p(\lambda)| > c_{\lambda} n^{-d} M_{\lambda} ^ {-n} .
\end{equation*} \end{lemma} \begin{proof} This is proven in \cite{garsiasep} Lemma 1.51. \end{proof} \begin{corollary} Let $F$ be an iterated function system such that $\mu_F$ is a (possibly biased) Bernoulli convolution with parameter $\lambda$. Then, \begin{equation*} M_F \leq M_{\lambda} . \end{equation*} \end{corollary} \newtheorem*{theorem:mainbernoulli}{Theorem \ref{mainbernoulli}} \begin{theorem:mainbernoulli} Suppose that $\lambda \in (\frac{1}{2}, 1)$ and has Mahler measure $M_\lambda$. Suppose also that $\lambda$ is not the root of a non-zero polynomial with coefficients $0, \pm 1$ and, \begin{equation} (\log M_{\lambda} - \log 2) (\log M_{\lambda})^2 < \frac{1}{27} (\log M_{\lambda} - \log \lambda^{-1})^3 \lambda^4 \label{mainberneq} \end{equation} then the unbiased Bernoulli convolution with parameter $\lambda$ is absolutely continuous. \end{theorem:mainbernoulli} \begin{proof} To prove this simply note that letting $F$ be the iterated function system generating the Bernoulli convolution we have, \begin{equation*} M_F \leq M_{\lambda} \end{equation*} and, \begin{equation*} h_F = \log 2 . \end{equation*} We are now done by applying Theorem \ref{mainresult}. \end{proof} \section{Examples} \label{examples} In this section we will give some examples of self similar measures which can be shown to be absolutely continuous using the results of this paper. \subsection{Unbiased Bernoulli Convolutions} \label{findunbiased} In this subsection we will give explicit values of $\lambda$ for which the unbiased Bernoulli convolution with parameter $\lambda$ is absolutely continuous using Theorem \ref{mainbernoulli}. We will do this by a simple computer search. We can ensure that $\lambda$ is not a root of a non-zero polynomial with coefficients $0, \pm 1$ by ensuring that it has a conjugate with absolute value greater than $2$.
The computer search works by checking each integer polynomial with at most a given height and at most a given degree with leading coefficient $1$ and constant term $\pm 1$. The program then finds the roots of the polynomial. If there is one real root with modulus at least $2$ and at least one real root in $(\frac{1}{2}, 1)$ the program then checks that the polynomial is irreducible. If the polynomial is irreducible it then tests each real root in $(\frac{1}{2}, 1)$ to see if it satisfies equation \eqref{mainberneq}. Figure \ref{examples2} shows the results for polynomials of degree at most $11$ and height at most $3$. The smallest value of $\lambda$ which I was able to find for which the Bernoulli convolution with parameter $\lambda$ can be shown to be absolutely continuous using this method is $\lambda \approx 0.799533$ with minimal polynomial $X^9 - 2 X^8 - X + 1$. I was also able to find an infinite family of $\lambda$ for which the results of this paper show that the Bernoulli convolution with parameter $\lambda$ is absolutely continuous. This family is found using the following lemma.
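The computer search described above can be sketched in a few lines (an illustrative Python sketch, not the program actually used for the paper; it relies on numpy root-finding and, for brevity, omits the irreducibility test, which in practice would be delegated to a computer algebra system):

```python
import itertools
import numpy as np

def mahler_measure(coeffs):
    # product of max(1, |root|) over all roots; coefficients listed from the leading term down
    return float(np.prod([max(1.0, abs(r)) for r in np.roots(coeffs)]))

def condition(lam, mahler):
    # the inequality of Theorem mainbernoulli with h_F = log 2,
    # using the Mahler measure as an upper bound for the splitting rate
    lhs = (np.log(mahler) - np.log(2.0)) * np.log(mahler) ** 2
    rhs = (np.log(mahler) - np.log(1.0 / lam)) ** 3 * lam ** 4 / 27.0
    return bool(lhs < rhs)

def search(max_degree, max_height):
    # enumerate monic integer polynomials with constant term +-1 and bounded height;
    # the irreducibility test of the text is omitted in this sketch
    for deg in range(2, max_degree + 1):
        for mid in itertools.product(range(-max_height, max_height + 1), repeat=deg - 1):
            for const in (1, -1):
                coeffs = [1, *mid, const]
                roots = np.roots(coeffs)
                # require a conjugate of modulus > 2 to rule out 0, +-1 relations
                if not any(abs(r) > 2 for r in roots):
                    continue
                mahler = float(np.prod([max(1.0, abs(r)) for r in roots]))
                for r in roots:
                    if abs(r.imag) < 1e-9 and 0.5 < r.real < 1.0 and condition(r.real, mahler):
                        yield coeffs, float(r.real)
```

Filtering the output of `search(11, 3)` for irreducibility corresponds to the parameters collected in Figure \ref{examples2}.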
\begin{figure}[H] \begin{center} \begin{tabular}{ | m{20em} | c| c| } \hline Minimal Polynomial & Mahler Measure & $\lambda$\\ \hline $X^{7}-X^{6}-2X^{5}-X^{2}+X+1$ & 2.010432 & 0.879161\\ $X^{7}+2X^{6}-X-1$ & 2.015159 & 0.932864\\ $X^{8}+2X^{7}-1$ & 2.007608 & 0.860582\\ $X^{9}-2X^{8}-X+1$ & 2.003861 & 0.799533\\ $X^{9}+2X^{8}-X-1$ & 2.003861 & 0.949560\\ $X^{10}-2X^{9}-X^{2}+1$ & 2.005754 & 0.852579\\ $X^{10}-2X^{9}-X+1$ & 2.001940 & 0.813972\\ $X^{10}-X^{9}-2X^{8}-X^{5}+X^{4}+X^{3}-X^{2}+1$ & 2.014180 & 0.911021\\ $X^{10}-X^{9}-X^{8}-2X^{7}-X^{5}+X^{4}+X^{2}+1$ & 2.012241 & 0.939212\\ $X^{10}-X^{9}-X^{8}-X^{7}-2X^{6}-X^{5}+X^{3}+X^{2}+X+1$ & 2.017567 & 0.953949\\ $X^{10}-2X^{8}-3X^{7}-2X^{6}-X^{5}+X^{3}+2X^{2}+2X+1$ & 2.008264 & 0.968459\\ $X^{10}+X^{9}-2X^{8}+X^{7}+X^{6}-X^{5}+X^{4}-X^{3}+X-1$ & 2.016061 & 0.875809\\ $X^{10}+2X^{9}-X^{4}-1$ & 2.030664 & 0.946934\\ $X^{10}+2X^{9}-1$ & 2.001936 & 0.888810\\ $X^{10}+3X^{9}+3X^{8}+3X^{7}+2X^{6}-2X^{4}-3X^{3}-3X^{2}-2X-1$ & 2.047156 & 0.984474\\ $X^{11}-2X^{10}-X^{2}+1$ & 2.002899 & 0.861821\\ $X^{11}-2X^{10}-X+1$ & 2.000973 & 0.826146\\ $X^{11}-X^{10}-2X^{9}-X^{8}+X^{7}+2X^{6}+X^{5}-X^{4}-2X^{3}-X^{2}+X+1$ & 2.000730 & 0.876660\\ $X^{11}-X^{10}-X^{9}-2X^{8}-X^{4}+X^{2}+X+1$ & 2.004980 & 0.952899\\ $X^{11}+X^{10}-2X^{9}+X^{8}+X^{7}-2X^{6}+X^{5}+X^{4}-2X^{3}+X^{2}+X-1$ & 2.000730 & 0.831391\\ $X^{11}+X^{10}-X^{9}+2X^{8}+X^{4}-X^{2}+X-1$ & 2.004980 & 0.805998\\ $X^{11}+2X^{10}-X-1$ & 2.000973 & 0.959607\\ $X^{11}+2X^{10}+X^{2}-1$ & 2.002899 & 0.810380\\ \hline \end{tabular} \end{center} \caption{Examples of Parameters of Absolutely Continuous Bernoulli Convolutions} \label{examples2} \end{figure} \begin{lemma} \label{infinitefamily} Suppose that $n \geq 5$ is an integer and let, \begin{equation*} p(X) = X^n - 2 X^{n-1} -X +1 . \end{equation*} Then $p$ has one root in the interval $(\left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}}, 1)$, one root in the interval $(2, 2+2^{2-n})$ and all of the remaining roots are
contained in the interior of the unit disk. Furthermore $p$ is irreducible. \end{lemma} Before proving this we need the following result. \begin{theorem}[Schur-Cohn test] Let $p$ be a polynomial of degree $n$ and let $p^*$ be its reciprocal adjoint polynomial defined by $p^*(z) := z^n \overline{p(\overline{z}^{-1})}$. Note that if, \begin{equation*} p(z) = a_n z^n + \dots + a_1 z + a_0 \end{equation*} then, \begin{equation*} p^*(z) = \overline{a_0} z^n + \dots + \overline{a_{n-1}} z + \overline{a_n} . \end{equation*} Let $Tp := \overline{p(0)}p - \overline{p^*(0)}p^*$. Then the following results are true, \begin{enumerate} \item The roots of $p^*$ are the images of the roots of $p$ under inversion in the unit circle \item If $Tp(0) \neq 0$ then $p$, $p^*$ and $Tp$ share roots on the unit circle \item If $Tp(0) > 0$ then $p$ and $Tp$ have the same number of roots inside the unit disk \item If $Tp(0) < 0$ then $p^*$ and $Tp$ have the same number of roots outside the unit disk \end{enumerate} \end{theorem} \begin{proof} This was first proven in \cite{schuror}. For a reference in English see \cite{schureng}. \end{proof} We are now ready to prove Lemma \ref{infinitefamily}. \begin{proof} The idea is to use the Schur-Cohn test to show that all but one of the roots of $p$ are in the unit disk. We then identify the specified real roots using the intermediate value theorem. We cannot use the Schur-Cohn test directly as we end up with $Tp(0) = 0$. We instead choose some $\alpha < 1$ and let, \begin{equation*} q(X) = p(\alpha X) = \alpha^n X^n - 2 \alpha^{n-1} X^{n-1} - \alpha X + 1 \end{equation*} meaning that we have, \begin{equation*} q^*(X) = X^n - \alpha X^{n-1} - 2 \alpha^{n-1} X +\alpha^{n} \end{equation*} and so, \begin{equation*} Tq = (-2\alpha^{n-1} + \alpha^{n+1}) X^{n-1} + (- \alpha + 2 \alpha^{2n-1}) X + (1 - \alpha^{2n}) . \end{equation*} We can choose $\alpha$ such that the coefficient of $X$ disappears.
To do this we require, \begin{equation*} \alpha = \left( \frac{1}{2} \right) ^ \frac{1}{2n-2} . \end{equation*} We then simply note that since, \begin{equation*} \left| -2\alpha^{n-1} + \alpha^{n+1} \right| = \sqrt{2}\left| 1 - \alpha^{2n} \right| \end{equation*} we must have that all of the roots of $Tq$ are in the unit disk. This means that all but one of the roots of $p$ are in the disk of radius $\alpha$ which in particular means that they are in the disk of radius $1$. \paragraph{} The other roots can be found by using the intermediate value theorem. First note that $p(2)=-1<0$ and that, \begin{eqnarray} p(2+2^{2-n}) &= &2^{2-n} (2 + 2^{2-n})^{n-1} -1 - 2^{2-n}\nonumber\\ & > & 2-1 - 2^{2-n}\nonumber\\ & > & 0\nonumber \end{eqnarray} so there is a root in $(2, 2+2^{2-n})$. We also have $p(1)=-1<0$ and, \begin{equation*} p\left( \left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}} \right)= \left( \frac{1}{2} \right) ^ {\frac{2n}{\sqrt{n-1}}} -2 \left( \frac{1}{2} \right) ^ {2\sqrt{n-1}} + 1 - \left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}} . \end{equation*} Note that, \begin{equation*} \frac{d}{dx} \left( \frac{1}{2} \right) ^ x = - \log 2 \left( \frac{1}{2} \right) ^ x \end{equation*} so noting that $n \geq 5$ we have, \begin{equation*} 1-\left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}} >\frac{2}{\sqrt{n-1}} \left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}} \log 2 > \frac{1}{2 \sqrt{n-1}} . \end{equation*} Also note that for $x \geq 1$, \begin{equation*} \frac{d}{dx} 4^x = 4^x \log 4 > 4 \end{equation*} so for $x \geq 1$ we have, \begin{equation*} 4^x \geq 4x \end{equation*} meaning that, \begin{equation*} \frac{1}{2\sqrt{n-1}} \geq 2\left( \frac{1}{2} \right) ^ {2\sqrt{n-1}} \end{equation*} and so, \begin{equation*} p\left( \left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}} \right) > 0 \end{equation*} so there is a root in the interval $(\left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}}, 1)$.
In fact it must be in the interval $(\left( \frac{1}{2} \right) ^ {\frac{2}{\sqrt{n-1}}}, \left( \frac{1}{2} \right) ^ {\frac{1}{n-2}})$. The fact that $p$ is irreducible follows from the fact that it is an integer polynomial with non-zero constant coefficient and all but one of its roots contained in the interior of the unit disk. If $p$ were not irreducible then one of its factors would need to have all of its roots contained in the interior of the unit disk. This would mean that the product of the roots of this factor would not be a non-zero integer, which is a contradiction. \end{proof} We now simply let $\lambda_n$ be one of the roots (or more likely the unique root) of $X^n - 2X^{n-1} - X + 1$ contained in the interval $\left( \left( \frac{1}{2} \right) ^{\frac{2}{\sqrt{n-1}}}, 1 \right)$. By Theorem \ref{mainbernoulli} to show that the Bernoulli convolution with parameter $\lambda_n$ is absolutely continuous it suffices to show that, \begin{equation*} (\log (2 + 2^{2-n}) - \log 2) (\log (2 + 2^{2-n}) )^2 < \frac{1}{27} \left( \log (2 ) - \log 2 ^{\frac{2}{\sqrt{n-1}}}\right)^3 2^{-\frac{8}{\sqrt{n-1}}} . \end{equation*} The left-hand side is decreasing in $n$, the right-hand side is increasing in $n$, and for $n=13$ the left-hand side is less than the right-hand side, so for $n \geq 13$ we know that the Bernoulli convolution with parameter $\lambda_n$ is absolutely continuous. It is worth noting that we have $\lambda_n \to 1$ and $M_{\lambda_n} \to 2$ so all but finitely many of these Bernoulli convolutions can be shown to be absolutely continuous by, for example, the results of \cite{varjupaper}. Using the results of \cite{varjupaper} does however require a significantly higher value of $n$ to work. Indeed it requires $n \geq 10^{65}$.
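Since the left-hand side of the last display is decreasing in $n$ and the right-hand side is increasing, the claim for all $n \geq 13$ reduces to a single numerical check at $n = 13$. A quick sketch (the helper names are mine, not from the paper):

```python
import math

def lhs(n):
    # (log(2 + 2^(2-n)) - log 2) * log(2 + 2^(2-n))^2
    t = math.log(2 + 2.0 ** (2 - n))
    return (t - math.log(2)) * t ** 2

def rhs(n):
    # (1/27) * (log 2 - log 2^(2/sqrt(n-1)))^3 * 2^(-8/sqrt(n-1))
    s = math.sqrt(n - 1)
    return (math.log(2) * (1 - 2 / s)) ** 3 / 27 * 2.0 ** (-8 / s)

# the condition holds at n = 13 (and hence, by monotonicity, for all larger n),
# but fails at n = 12
print(lhs(13) < rhs(13), lhs(12) < rhs(12))
```

Numerically, $\mathrm{lhs}(13) \approx 1.2 \cdot 10^{-4}$ against $\mathrm{rhs}(13) \approx 1.9 \cdot 10^{-4}$, so the margin at $n = 13$ is not large.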
\subsection{Biased Bernoulli Convolutions} \label{findbiased} It is worth noting that all of the examples in the previous subsection also work with a small amount of bias as an arbitrarily small amount of bias will reduce $h_F$ by an arbitrarily small amount. In contrast to the results of \cite{varjupaper} it is not possible to find examples of highly biased Bernoulli convolutions that can be shown to be absolutely continuous using the results of this paper. \subsection{Other Self Similar Measures on $\mathbb{R}$} In this subsection we will briefly mention some other examples of iterated function systems in one dimension whose self similar measures can be shown to be absolutely continuous by these methods. \begin{proposition} Let $q$ be a prime number and for $i = 1, \dots, q-1$ let $S_i : x \mapsto \frac{q-1}{q} x + i$. Let $F$ be the iterated function system on $\mathbb{R}^1$ given by, \begin{equation*} F =\left (q-1, 1, (S_i)_{i=1}^{q-1}, \left(\frac{1}{q-1}, \dots, \frac{1}{q-1}\right)\right) \end{equation*} Then we have $\log M_F \leq \log q$, $h_F = \log (q-1)$ and $\lambda = \frac{q-1}{q}$. Furthermore if $q \geq 17$ then $\mu_F$ is absolutely continuous. \end{proposition} \begin{proof} We note that any point in the $k$-step support of $F$ must be of the form $u = \sum_{i=0}^{k-1} x_i \left( \frac{q-1}{q} \right) ^i$ with $x_i \in \{1, \dots, q-1\}$. Suppose $u = \sum_{i=0}^{k-1} x_i \left( \frac{q-1}{q} \right) ^i$ and $v = \sum_{i=0}^{k-1} y_i \left( \frac{q-1}{q} \right) ^i$ are two such points. We note that $q^{k-1} u, q^{k-1} v \in \mathbb{Z}$. This means that if $u \neq v$ then $|u-v| \geq q^{-k}$. This gives $\log M_F \leq \log q$. We can also note if $u = v$ then looking at $q^{k-1} u$ and $q^{k-1} v $ mod $q^i$ for $i = 1, \dots, k$ we see that we must have $(x_0, x_1, \dots, x_{k-1}) = (y_0, y_1, \dots, y_{k-1})$. This means that $F$ has no exact overlaps and consequently $h_F = \log (q-1)$. We also note that $\lambda = \frac{q-1}{q}$ follows immediately from the definition of $F$.
To show that $\mu_F$ is absolutely continuous by Theorem \ref{mainresult} it is sufficient to check that, \begin{equation*} (\log q - \log (q-1)) (\log q)^2 < \frac{1}{27}\left(\log q - \log \left(\frac{q}{q-1}\right) \right)^3 \left( \frac{q-1}{q} \right)^4. \end{equation*} This is the same as showing that, \begin{equation} \left( \log \left( 1 + \frac{1}{q-1} \right) \right) < \frac{1}{27} \left( \frac{\log (q-1)}{\log q} \right)^2 (\log (q-1)) \left( \frac{q-1}{q} \right)^4. \label{qinequality} \end{equation} The left-hand side of \eqref{qinequality} is decreasing in $q$ and the right-hand side is increasing in $q$. The inequality is satisfied for $q = 17$. This means that it is satisfied for $q \geq 17$. \end{proof} \subsection{Self Similar Measures on $\mathbb{R}^2$} In this subsection we will describe some examples of self similar measures on $\mathbb{R}^2$ which can be shown to be absolutely continuous using the methods of this paper and which cannot be expressed as the product of self similar measures on $\mathbb{R}$. This will be done by identifying $\mathbb{R}^2$ with $\mathbb{C}$. \begin{proposition} \label{prop:2dexamples} Let $p$ be a prime number such that $p \equiv 3 \, (\text{mod } 4)$. Let $I_p$ denote the ideal $(p)$ in the ring $\mathbb{Z}[i]$. Note that this is a prime ideal. Let $a_1, \dots, a_{m}$ be in different cosets of $I_p$. Choose some $\alpha$ of the form $\alpha = \frac{a}{p}$ with $a \in \mathbb{Z}[i] \backslash I_p$ and $|\alpha| < 1$. Let $\lambda = |\alpha|$ and let $U : \mathbb{R}^2 \to \mathbb{R}^2$ be a rotation around the origin by $\arg \alpha$. For $i = 1, \dots, m$ let $S_i :\mathbb{R}^2 \to \mathbb{R}^2, \, x \mapsto \lambda U x+a_i$ and let $F$ be the iterated function system on $\mathbb{R}^2$ given by $F = \left( m, 2, \left( S_i \right)_{i=1}^{m}, \left( \frac{1}{m}, \dots, \frac{1}{m} \right) \right)$. Then we have $\log M_F \leq \log p$ and $h_F = \log m$.
\end{proposition} \begin{proof} To see that $\log M_F \leq \log p$ let $x = \sum_{i=0}^{k-1} x_i \alpha^i $ and $y = \sum_{i=0}^{k-1} y_i \alpha^i $ be two points in the $k$-step support of $F$. Note that $p^{k-1} (x-y) \in \mathbb{Z}[i]$ and so if $x \neq y$ then $|x-y| \geq p^{-k+1}$. To prove $h_F = \log m$ it suffices to show that $F$ has no exact overlaps. For this it suffices to show that if $x_0, \dots, x_k, y_0, \dots, y_k \in \{a_1, \dots, a_{m}\}$ and, \begin{equation} \sum_{i=0}^{k} x_i \alpha^i = \sum_{i=0}^{k} y_i \alpha^i \label{eq:primeidealthing} \end{equation} then $x_i = y_i$ for $i=0, \dots, k$. We will prove this by downward induction on $i$. For $i=k$ simply multiply both sides of \eqref{eq:primeidealthing} by $p^k$ and then work modulo the ideal $I_p$. This means that $x_k$ and $y_k$ must be in the same coset of $I_p$ which in particular means that they must be equal. The inductive step follows by the same argument. \end{proof} Note that the above proposition combined with Theorem \ref{mainresult} makes it very easy to give numerous examples of absolutely continuous self similar measures on $\mathbb{R}^2$ which are not products of absolutely continuous self similar measures on $\mathbb{R}^1$. Some possible examples are given in the following Corollary. \begin{corollary} Let $p$ be a prime number such that $p \equiv 3 \, (\text{mod } 4)$. Let $I_p$ denote the ideal $(p)$ in the ring $\mathbb{Z}[i]$. Let $a_1, \dots, a_{m}$ be in different cosets of $I_p$. Choose some $\alpha$ of the form $\alpha = \frac{p-1 + i}{p}$. Let $\lambda = |\alpha|$ and let $U : \mathbb{R}^2 \to \mathbb{R}^2$ be a rotation around the origin by $\arg \alpha$. For $i = 1, \dots, m$ let $S_i :\mathbb{R}^2 \to \mathbb{R}^2, \, x \mapsto \lambda U x+a_i$ and let $F$ be the iterated function system on $\mathbb{R}^2$ given by $F = \left( m, 2, \left( S_i \right)_{i=1}^{m}, \left( \frac{1}{m}, \dots, \frac{1}{m} \right) \right)$.
Suppose that, \begin{equation*} (2 \log p - \log m) (\log p)^2 < \frac{1}{27} \left(2 \log p - \log \frac{p}{p-1} \right)^3 \left( \frac{p-1}{p} \right)^4 . \end{equation*} Then $\mu_F$ is absolutely continuous. \end{corollary} \begin{proof} This follows immediately from Theorem \ref{mainresult} and Proposition \ref{prop:2dexamples}. \end{proof} It is worth noting that the case $m=p^2$ follows from the results of \cite{garsiasep} so in this case the result of this paper can again be seen as a strengthening of the results of \cite{garsiasep}. It is also worth noting that in the case $m=p^2-1$ the conditions of this corollary are satisfied for all $p$ with $p \equiv 3 \, (\text{mod } 4)$. \section{Acknowledgements} I would like to thank my supervisor Prof. P. Varj\'u for his helpful feedback and comments in the preparation of this paper. \medskip \printbibliography \end{document}
\section{Introduction} HERA is a prodigious source of quasi--real photons from reactions where the electron is scattered at very small angles. This permits the study of photoproduction reactions at photon--proton centre of mass (c.m.) energies an order of magnitude larger than in previous fixed target experiments. The majority of the $\gamma p$ collisions are due to interactions of the proton with the hadronic structure of the photon, a process that has been successfully described by the vector meson dominance model (VDM)\cite{VDM}. Here, the photon is pictured to fluctuate into a virtual vector meson that subsequently collides with the proton. Such collisions exhibit the phenomenological characteristics of hadron--hadron interactions. In particular they can proceed via diffractive or non--diffractive channels. The diffractive interactions are characterized by very small four momentum transfers and no colour exchange between the colliding particles leading to final states where the colliding particles appear either intact or as more massive dissociated states. However, it has been previously demonstrated that photoproduction collisions at high transverse momentum cannot be described solely in terms of the fluctuation of the photon into a hadron--like state \cite{omega,na14}. The deviations come from contributions of two additional processes called direct and anomalous. In the former process the photon couples directly to the charged partons inside the proton. The anomalous component corresponds to the process where the photon couples to a {\it q\={q}} pair without forming a bound state. The interactions of the photon via the hadron--like state and the anomalous component are referred to as resolved photoproduction, since both of them can be described in terms of the partonic structure of the photon \cite{Storrow}. In this paper we present the measurement of the transverse momentum spectra of charged particles produced in photoproduction reactions at an average c.m. 
energy of $\langle W \rangle = 180\GeV$ and in the laboratory pseudorapidity range $-1.2<\eta<1.4$ \footnote{Pseudorapidity $\eta$ is calculated from the relation $\eta = - \ln( \tan( \theta / 2))$, where $\theta$ is the polar angle calculated with respect to the proton beam direction.}. This range approximately corresponds to the c.m. pseudorapidity interval of $0.8<\eta_{c.m.}<3.4$, where the direction is defined such that positive $\eta_{c.m.}$ values correspond to the photon fragmentation region. The transverse momentum distributions of charged particles are studied for non--diffractive and diffractive reactions separately. The $p_{T}$ spectrum from non--diffractive events is compared to low energy photoproduction data and to hadron--hadron collisions at a similar c.m. energy. In the region of high transverse momenta we compare the data to the predictions of a next--to--leading order QCD calculation. The diffractive reaction ($\gamma p \rightarrow X p$), where $X$ results from the dissociation of the photon, was previously measured by the E612 Fermilab experiment at much lower c.m. energies, $11.8<W<16.6\GeV$ \cite{chapin}. It was demonstrated that the properties of the diffractive excitation of the photon resemble diffraction of hadrons in terms of the distribution of the dissociated mass, the distribution of the four-momentum transfer between the colliding objects \cite{chapin} and the ratio of the diffractive cross section to the total cross section \cite{zeus-sigmatot}. The hadronization of diffractively dissociated photons has not yet been systematically studied. In this analysis we present the measurement of inclusive $p_{T}$ spectra in two intervals of the dissociated photon mass with mean values of $\langle M_{X} \rangle = 5\GeV$ and $\langle M_{X} \rangle = 10\GeV$. \section{Experimental setup} The analysis is based on data collected with the ZEUS detector in 1993, corresponding to an integrated luminosity of $0.40\:{\rm pb}^{-1}$.
The HERA machine was operating at an electron energy of $26.7\GeV$ and a proton energy of $820\GeV$, with 84 colliding bunches. In addition 10 electron and 6 proton bunches were left unpaired for background studies (pilot bunches). A detailed description of the ZEUS detector may be found elsewhere \cite{status93,zeus-description}. Here, only a brief description of the detector components used for this analysis is given. Throughout this paper the standard ZEUS right--handed coordinate system is used, which has its origin at the nominal interaction point. The positive Z--axis points in the direction of the proton beam, called the forward direction, and X points towards the centre of the HERA ring. Charged particles created in $ep$ collisions are tracked by the inner tracking detectors which operate in a magnetic field of $1.43{\: \rm T}$ provided by a thin superconducting solenoid. Immediately surrounding the beampipe is the vertex detector (VXD), a cylindrical drift chamber which consists of 120 radial cells, each with 12 sense wires running parallel to the beam axis \cite{VXD}. The achieved resolution is $50\:{\rm \mu m}$ in the central region of a cell and $150\:{\rm \mu m}$ near the edges. Surrounding the VXD is the central tracking detector (CTD) which consists of 72 cylindrical drift chamber layers organized in 9 superlayers \cite{CTD}. These superlayers alternate between those with wires parallel to the collision axis and those with wires inclined at a small angle to provide a stereo view. The magnetic field is significantly inhomogeneous towards the ends of the CTD thus complicating the electron drift. With the present understanding of the chamber, a spatial resolution of $\approx 260\:{\rm \mu m}$ has been achieved. The hit efficiency of the chamber is greater than $95\%$. 
In events with charged particle tracks, using the combined data from both chambers, the position resolutions of the reconstructed primary vertex are $0.6\:{\rm cm}$ in the Z direction and $0.1\:{\rm cm}$ in the XY plane. The resolution in transverse momentum for full length tracks is $\sigma_{p_{T}} / p_{T} \leq 0.005 \cdot p_{T} \oplus 0.016$ ($p_{T}$ in $\GeV$). The description of the track and the vertex reconstruction algorithms may be found in \cite{zeus-breit} and references therein. The solenoid is surrounded by the high resolution uranium--scintillator calorimeter (CAL) divided into the forward (FCAL), barrel (BCAL) and rear (RCAL) parts \cite{CAL}. Holes of $20 \times 20 {\rm\: cm}^{2}$ in the centre of FCAL and RCAL are required to accommodate the HERA beam pipe. Each of the calorimeter parts is subdivided into towers which in turn are segmented longitudinally into electromagnetic (EMC) and hadronic (HAC) sections. These sections are further subdivided into cells, which are read out by two photomultiplier tubes. Under test beam conditions, an energy resolution of the calorimeter of $\sigma_{E}/E = 0.18/\sqrt{E (\GeV)}$ for electrons and $\sigma_{E}/E = 0.35/\sqrt{E (\GeV)}$ for hadrons was measured. In the analysis presented here CAL cells with an EMC (HAC) energy below $60\MeV$ ($110\MeV$) are excluded to minimize the effect of calorimeter noise. This noise is dominated by uranium activity and has an r.m.s.~value below $19\MeV$ for EMC cells and below $30\MeV$ for HAC cells. The luminosity detector (LUMI) measures the rate of the Bethe--Heitler process $e p \rightarrow e \gamma p$. The detector consists of two lead--scintillator sandwich calorimeters installed in the HERA tunnel and is designed to detect electrons scattered at very small angles and photons emitted along the electron beam direction \cite{lumi}.
Signals in the LUMI electron calorimeter are used to tag events and to measure the energy of the interacting photon, $E_{\gamma}$, from $E_{\gamma}=E_{e}-E'_{e}=26.7\GeV-E'_{e}$, where $E'_{e}$ is the energy measured in the LUMI. \section{Trigger} The events used in the following analysis were collected using a trigger requiring a coincidence of the signals in the LUMI electron calorimeter and in the central calorimeter. The small angular acceptance of the LUMI electron calorimeter implied that in all the triggered events the virtuality of the exchanged photon was between $4\cdot 10^{-8} < Q^{2}< 0.02 \GeV^{2}$. The central calorimeter trigger required an energy deposit in the RCAL EMC section of more than $464\:{\rm MeV}$ (excluding the towers immediately adjacent to the beam pipe) or $1250\:{\rm MeV}$ (including those towers). In addition we also used the events triggered by an energy in the BCAL EMC section exceeding $3400\:{\rm MeV}$. At the trigger level the energy was calculated using only towers with more than $464\:{\rm MeV}$ of deposited energy. \section{Event selection} In the offline analysis the energy of the scattered electron detected in the LUMI calorimeter was required to satisfy $15.2<E'_{e}<18.2\GeV$, limiting the $\gamma p$ c.m. energy to the interval $167<W<194\GeV$. The longitudinal vertex position determined from tracks was required to be $-35\cm < Z_{vertex} < 25\cm $. The vertex cut removed a substantial part of the beam gas background and limited the data sample to the region of uniform detector acceptance. The cosmic ray background was suppressed by requiring the transverse momentum imbalance of the deposits in the main calorimeter, $P_{missing}$, relative to the square root of the total transverse energy, $\sqrt{E_{T}}$, to be small: $P_{missing}/\sqrt{E_{T}} < 2\sqrt{\GeV}$. 
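These offline cuts amount to a simple event predicate (an illustrative Python sketch; the argument list is a hypothetical event-record layout, and $W \approx \sqrt{4 E_\gamma E_p}$ is the standard quasi-real-photon approximation relating the $E'_e$ window to the quoted $167<W<194\GeV$ range):

```python
def w_gamma_p(e_prime, e_e=26.7, e_p=820.0):
    # photon-proton c.m. energy for quasi-real photons: W ~ sqrt(4 E_gamma E_p),
    # with E_gamma = E_e - E'_e measured in the LUMI electron calorimeter (GeV)
    return (4.0 * (e_e - e_prime) * e_p) ** 0.5

def passes_selection(e_prime, z_vertex, p_missing, e_t_total):
    # offline cuts quoted in the text (energies in GeV, lengths in cm)
    if not 15.2 < e_prime < 18.2:      # scattered-electron energy window
        return False
    if not -35.0 < z_vertex < 25.0:    # longitudinal vertex position from tracks
        return False
    # cosmic-ray rejection: transverse momentum imbalance over sqrt(total E_T)
    return p_missing / e_t_total ** 0.5 < 2.0
```

With these numbers, `w_gamma_p(18.2)` and `w_gamma_p(15.2)` recover the endpoints of the quoted $W$ interval to within rounding.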
The data sample was divided into a diffractive and a non--diffractive subset according to the pseudorapidity, $\eta_{max}$, of the most forward energy deposit in the FCAL with energy above $400\MeV$. The requirement of $\eta_{max} < 2$ selects events with a pronounced rapidity gap that are predominantly due to diffractive processes ($\approx 96\%$ according to Monte Carlo (MC) simulation, see section~6). The events with $\eta_{max} > 2$ are almost exclusively ($\approx 95\%$) due to non-diffractive reactions. The final non--diffractive data sample consisted of 149500 events. For the diffractive data sample ($\eta_{max} < 2$) an additional cut $\eta_{max} > -2$ was applied to suppress the production of light vector mesons $V$ in the diffractive reactions $\gamma p \rightarrow V p$ and $\gamma p \rightarrow V N$, where $N$ denotes a nucleonic system resulting from the dissociation of the proton. The remaining sample was analyzed as a function of the mass of the dissociated system reconstructed from the empirical relationship \[ M_{X rec} \approx A\cdot\sqrt{E^{2}-P_{Z}^{2}}+B = A\cdot\sqrt{(E+P_{Z}) \cdot E_{\gamma}}+B .\] The above formula exploits the fact that in tagged photoproduction the diffractively excited photon state has a relatively small transverse momentum. The total hadronic energy, $E$, and longitudinal momentum $P_{Z}=E \cdot cos\theta$ were measured with the uranium calorimeter by summing over all the energy deposits of at least $160\MeV$. The correction factors $A=1.7$ and $B=1.0\GeV$ compensate for the effects of energy loss in the inactive material, beam pipe holes, and calorimeter cells that failed the energy threshold cuts. The formula was optimized to give the best approximation of the true invariant mass in diffractive photon dissociation events obtained from MC simulations, while being insensitive to the calorimeter noise. 
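As an illustration, the estimator can be coded directly from these definitions (a Python sketch; representing a calorimeter deposit as an (energy, polar angle) pair is a simplification of the actual ZEUS cell geometry):

```python
import math

def reconstruct_mx(cells, a=1.7, b=1.0, threshold=0.160):
    # M_X estimator of the text: A*sqrt(E^2 - P_Z^2) + B, where E and P_Z are
    # summed over calorimeter deposits of at least 160 MeV; each cell is
    # modelled here as an (energy [GeV], polar angle theta) pair
    e = sum(en for en, theta in cells if en >= threshold)
    pz = sum(en * math.cos(theta) for en, theta in cells if en >= threshold)
    return a * math.sqrt(max(e * e - pz * pz, 0.0)) + b
```

The correction factors $A$ and $B$ enter only as the default arguments, so the same sketch can be re-tuned against a detector simulation.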
The diffractive data were analyzed in two intervals of the reconstructed mass, namely \nobreak{$4<M_{X\:rec}<7\GeV$} and $8<M_{X\:rec}<13\GeV$. According to the MC simulation the first cut selects events generated with a mass having a mean value and spread of $\langle M_{X\:GEN} \rangle = 5\GeV$ and ${\rm r.m.s.} = 1.8\GeV$. The second cut results in $\langle M_{X GEN} \rangle = 10\GeV$ and ${\rm r.m.s.} = 2.3\GeV$. Details of the MC simulation are given in section \ref{s:mc}. The final data sample consisted of 5123 events in the lower $M_{X}$ interval and of 2870 events in the upper interval. The contamination of the final data samples from e--gas background ranges from \nobreak{$<0.1\%$} (non-diffractive sample) to $\approx 10\%$ (diffractive sample, $\langle M_{X} \rangle = 5\GeV$). The p--gas contribution is between $1\%$ (non-diffractive sample) and $2\%$ (diffractive sample, $\langle M_{X} \rangle = 5\GeV$). The e--gas background was statistically subtracted using the electron pilot bunches. A similar method was used to correct for the p--gas background that survived the selection cuts because of an accidental coincidence with an electron bremsstrahlung $(ep \rightarrow \gamma ep)$. A large fraction of these background events were identified using the LUMI detector, since the energy deposits in the electron and photon calorimeters summed up to the electron beam energy. The identified background events were included with negative weights into all of the distributions in order to compensate for the unidentified part of the coincidence background. A detailed description of the statistical background subtraction method may be found in \cite{zeus-sigmatot,phdburow}. \section{Track selection} The charged tracks used for this analysis were selected with the following criteria:\ \begin{itemize} \item only tracks accepted by an event vertex fit were selected. 
This eliminated most of the tracks that came from secondary interactions and decays of short-lived particles; \item tracks must have hits in each of the first 5 superlayers of the CTD. This requirement ensures that only long, well-reconstructed tracks are used for the analysis; \item $-1.2<\eta<1.4$ and $p_{T} > 0.3\GeV$. These two cuts select the region of high acceptance of the CTD where the detector response and systematics are best understood. \end{itemize} Using Monte Carlo events, we estimated that the efficiency of the charged track reconstruction convolved with the acceptance of the selection cuts is about $90\%$ and is uniform in $p_{T}$. The contamination of the final sample from secondary interaction tracks, products of decays of short-lived particles, and from spurious tracks (artifacts of the reconstruction algorithm) ranges from $5\%$ at $p_{T}=0.3\GeV$ to $3\%$ for $p_{T}>1\GeV$. The inefficiency and remaining contamination of the final track sample are accounted for by the acceptance correction described in the following section. The transverse momenta of the measured tracks displayed no correlation with $\eta$ over the considered interval and were symmetric with respect to the charge assigned to the track. \section{Monte Carlo models} \label{s:mc} For the acceptance correction and selection cut validation we used Monte Carlo events generated with a variety of programs. Soft, non-diffractive collisions of the proton with a VDM type photon were generated using HERWIG 5.7 with the minimum bias option \cite{HERWIG}. The generator was tuned to fit the ZEUS data on charged particle multiplicity and transverse energy distributions. For the evaluation of the model dependence of our measurements we also used events from the PYTHIA generator with the soft hadronic interaction option \cite{PYTHIA}. 
Hard resolved and direct subprocesses were simulated using the standard HERWIG 5.7 generator with the lower cut-off on the transverse momentum of the final--state partons, $p_{T min}$, chosen to be $2.5 \GeV$. For the parton densities of the colliding particles, the GRV--LO \cite{GRV} (for the photon) and MRSD$'$\_ \cite{MRS} (for the proton) parametrisations were used. As a cross--check we also used hard $\gamma p$ scattering events generated by PYTHIA with $p_{T min} = 5\GeV$. The soft and hard MC components were combined in a ratio that gave the best description of the transverse momentum distribution of the track with the largest $p_{T}$ in each event. For $p_{T min} = 2.5\GeV$, the hard component comprises $11\%$ of the non-diffractive sample and for $p_{T min} = 5\GeV$ only about $3\%$. Each diffractive subprocess was generated separately. The diffractive production of vector mesons $(\rho, \omega, \phi)$ was simulated with PYTHIA. The same program was used to simulate the double diffractive dissociation ($\gamma p \rightarrow X N$). The diffractive excitation of the photon ($\gamma p \rightarrow X p$) was generated with the EPDIF program which models the diffractive system as a quark--antiquark pair produced along the collision axis \cite{Solano}. Final state QCD radiation and hadronization were simulated using JETSET \cite{PYTHIA}. For the study of systematic uncertainties, a similar sample of events was obtained by enriching the standard PYTHIA diffractive events with the hard component simulated using the POMPYT Monte Carlo program (hard, gluonic pomeron with the direct photon option) \cite{bruni}. The MC samples corresponding to the diffractive subprocesses were combined with the non-diffractive component in the proportions given by the ZEUS measurement of the partial photoproduction cross sections \cite{zeus-sigmatot}. The MC events were generated without electroweak radiative corrections. 
In the considered $W$ range, the QED radiation effects result in $\approx 2\%$ change in the number of measured events so that the effect on the results of this analysis are negligible. The generated events were processed through the detector and trigger simulation programs and run through the standard ZEUS reconstruction chain. \section{Acceptance correction} \label{s:correction} The acceptance corrected transverse momentum spectrum was derived from the reconstructed spectrum of charged tracks, by means of a multiplicative correction factor, calculated using Monte Carlo techniques: \[ C(p_{T}) = (\frac{1}{N_{gen\: ev}} \cdot \frac{d N_{gen}}{d p_{T gen}})/ (\frac{1}{N_{rec\: ev}} \cdot \frac{d N_{rec}}{d p_{T rec}}) . \] $N_{gen}$ denotes the number of primary charged particles generated with a transverse momentum $p_{T gen}$ in the considered pseudorapidity interval and $N_{gen\: ev}$ is the number of generated events. Only the events corresponding to the appropriate type of interaction were included, e.g. for the lower invariant mass interval of the diffractive sample only the Monte Carlo events corresponding to diffractive photon dissociation with the generated invariant mass $4<M_{X gen}<7\GeV$ were used. $N_{rec}$ is the number of reconstructed tracks passing the experimental cuts with a reconstructed transverse momentum of $p_{T rec}$, while $N_{rec\: ev}$ denotes the number of events used. Only the events passing the trigger simulation and the experimental event selection criteria were included in the calculation. To account for the contribution of all the subprocesses, the combination of the MC samples described in section~\ref{s:mc} was used. 
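The correction factor defined above is a bin-by-bin ratio of the event-normalised generated and reconstructed spectra. A minimal sketch, with a hypothetical list-based interface chosen purely for illustration:

```python
def correction_factor(n_gen, n_gen_ev, n_rec, n_rec_ev):
    """Bin-by-bin multiplicative correction factor C(p_T).

    n_gen / n_rec are per-bin counts of generated primary particles and
    reconstructed tracks (same p_T binning); n_gen_ev / n_rec_ev are the
    corresponding MC event counts.  Illustrative sketch only.
    """
    return [(g / n_gen_ev) / (r / n_rec_ev) for g, r in zip(n_gen, n_rec)]

def corrected_spectrum(raw_tracks, n_data_ev, factors):
    """Apply C(p_T) to the per-event rate of observed tracks."""
    return [c * n / n_data_ev for c, n in zip(factors, raw_tracks)]
```

With identical MC event counts in numerator and denominator, a bin where 50 particles were generated but only 40 tracks reconstructed receives the factor $50/40 = 1.25$.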
This method corrects for the following effects in the data: \begin{itemize} \item the limited trigger acceptance; \item the inefficiencies of the event selection cuts, in particular the contamination of the diffractive spectra from non--diffractive processes and the events with a dissociated mass that was incorrectly reconstructed. Also the non-diffractive sample is corrected for the contamination from diffractive events with high dissociated mass; \item limited track finding efficiency and acceptance of the track selection cuts, as well as the limited resolution in momentum and angle; \item loss of tracks due to secondary interactions and contamination from secondary tracks; \item decays of charged pions and kaons, photon conversions and decays of lambdas and neutral kaons. Thus, in the final spectra the charged kaons appear, while the decay products of neutral kaons and lambdas do not. For all the other strange and charmed states, the decay products were included. \end{itemize} The validity of our acceptance correction method relies on the correct simulation of the described effects in the Monte Carlo program. The possible discrepancies between reality and Monte Carlo simulation were analyzed and the estimation of the effect on the final distributions was included in the systematic uncertainty, as described in the following section. \section{Systematic effects} \label{s:systematics} One of the potential sources of systematic inaccuracy is the tracking system and its simulation in the Monte Carlo events used for the acceptance correction. Using an alternative simulation code with artificially degraded tracking performance we verified that the efficiency to find a track which fulfills all the selection cuts is known with an accuracy of about $10\%$. The error due to an imprecise description of the momentum resolution at high $p_{T}$ is negligible compared to the statistical precision of the data. 
We also verified that the final spectra would not change significantly if the tracking resolution at high $p_{T}$ had non--gaussian tails at the level of $10\%$ or if the measured momentum was systematically shifted from the true value by the momentum resolution. Another source of systematic uncertainty is the Monte Carlo simulation of the trigger response. We verified that even a very large ($20\%$) inaccuracy of the BCAL energy threshold would not produce a statistically significant effect. An incorrect RCAL trigger simulation would change the number of events observed, but would not affect the final $p_{T}$ spectra since it is normalized to the number of events. The correlation between the RCAL energy and the $p_{T}$ of tracks is very small. To evaluate the model dependence we repeated the calculation of the correction factors using an alternative set of Monte Carlo programs (see section \ref{s:mc}) and compared the results with the original ones. The differences between the obtained factors varied between $5\%$ for the high mass diffractive sample and $11\%$ for the non--diffractive one. The sensitivity of the result to the assumed relative cross sections of the physics processes was checked by varying the subprocess ratios within the error limits given in \cite{zeus-sigmatot}. The effect was at most $3\%$. All the above effects were combined in quadrature, resulting in an overall systematic uncertainty of the charged particle rates as follows: $15\%$ in the non-diffractive sample, $15\%$ in the $\langle M_{X} \rangle = 5\GeV$ diffractive sample and $9\%$ in the $\langle M_{X} \rangle = 10\GeV$ diffractive sample. All these systematic errors are independent of $p_{T}$. 
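The quadrature combination of independent systematic effects quoted above can be reproduced approximately. The input fractions below (tracking efficiency, model dependence, subprocess ratios) are read off this section for the non-diffractive sample; their exact pairing in the published numbers is our assumption:

```python
import math

def combine_in_quadrature(fractional_errors):
    """Combine independent fractional systematic errors in quadrature."""
    return math.sqrt(sum(e * e for e in fractional_errors))

# Illustrative: tracking (10%), model dependence (11%) and subprocess
# ratios (3%) combine to roughly the quoted 15% for the non-diffractive
# sample.  The actual ZEUS bookkeeping may differ in detail.
total = combine_in_quadrature([0.10, 0.11, 0.03])
```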
\section{Results} The double differential rate of charged particle production in an event of a given type is calculated as the number of charged particles $\Delta N$ produced within $\Delta \eta$ and $\Delta p_{T}$ in $N_{ev}$ events as a function of $p_{T}$: \[\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} = \frac{1}{N_{ev}} \cdot \frac{1}{2 p_{T} \Delta \eta} \cdot \frac{\Delta N}{\Delta p_{T}} .\] The charged particle transverse momentum spectrum was derived from the transverse momentum distribution of observed tracks normalized to the number of data events by means of the correction factor described in section \ref{s:correction}. The resulting charged particle production rates in diffractive and non-diffractive events are presented in Fig.~\ref{f:corrected_pt} and listed in Tables \ref{t:results1}, \ref{t:results2} and \ref{t:results3}. In the figure the inner error bars indicate the statistical error. Quadratically combined statistical and systematic uncertainties are shown as the outer error bars. The $\langle M_{X} \rangle = 5\GeV$ diffractive spectrum extends to $p_{T}=1.75\GeV$ and the $\langle M_{X} \rangle = 10\GeV$ distribution extends to $p_{T}=2.5\GeV$. The non--diffractive distribution falls steeply in the low $p_{T}$ region but lies above the exponential fit at higher $p_{T}$ values. The measurements extend to $p_{T}=8\GeV$. 
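The double differential rate above can be evaluated per $p_T$ bin as follows; this is a sketch of the quoted formula, not the experiment's actual estimator (the default $\Delta\eta = 2.6$ corresponds to the measured interval $-1.2<\eta<1.4$):

```python
def production_rate(delta_n, n_ev, p_t, delta_p_t, delta_eta=2.6):
    """Per-event rate (1/N_ev) d^2 N / (dp_T^2 deta) for one p_T bin.

    Uses dp_T^2 = 2 p_T dp_T, i.e. the 1/(2 p_T) Jacobian factor,
    with the rate evaluated at the bin's p_T.
    """
    return delta_n / (n_ev * 2.0 * p_t * delta_eta * delta_p_t)
```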
\begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 4.98 & 0.05 & 0.74 \\ 0.40-- 0.50 & 2.99 & 0.03 & 0.44 \\ 0.50-- 0.60 & 1.78 & 0.02 & 0.26 \\ 0.60-- 0.70 & 1.09 & 0.01 & 0.16 \\ 0.70-- 0.80 & 0.641 & 0.012 & 0.096 \\ 0.80-- 0.90 & 0.420 & 0.010 & 0.063 \\ 0.90-- 1.00 & 0.259 & 0.007 & 0.038 \\ 1.00-- 1.10 & 0.164 & 0.005 & 0.024 \\ 1.10-- 1.20 & 0.107 & 0.004 & 0.016 \\ 1.20-- 1.30 & 0.0764 & 0.0034 & 0.0114 \\ 1.30-- 1.40 & 0.0513 & 0.0017 & 0.0077 \\ 1.40-- 1.50 & 0.0329 & 0.0012 & 0.0049 \\ 1.50-- 1.60 & 0.0242 & 0.0010 & 0.0036 \\ 1.60-- 1.70 & 0.0175 & 0.0008 & 0.0026 \\ 1.70-- 1.80 & 0.0133 & 0.0006 & 0.0020 \\ 1.80-- 1.90 & 0.0082 & 0.0005 & 0.0012 \\ 1.90-- 2.00 & 0.00615 & 0.00038 & 0.00092 \\ 2.00-- 2.14 & 0.00454 & 0.00028 & 0.00068 \\ 2.14-- 2.29 & 0.00360 & 0.00024 & 0.00054 \\ 2.29-- 2.43 & 0.00215 & 0.00017 & 0.00032 \\ 2.43-- 2.57 & 0.00166 & 0.00013 & 0.00025 \\ 2.57-- 2.71 & 0.00126 & 0.00012 & 0.00018 \\ 2.71-- 2.86 & 0.00098 & 0.00010 & 0.00015 \\ 2.86-- 3.00 & 0.000625 & 0.000071 & 0.000093 \\ 3.00-- 3.25 & 0.000456 & 0.000048 & 0.000068 \\ 3.25-- 3.50 & 0.000252 & 0.000031 & 0.000037 \\ 3.50-- 3.75 & 0.000147 & 0.000020 & 0.000022 \\ 3.75-- 4.00 & 0.000094 & 0.000012 & 0.000014 \\ 4.00-- 4.50 & 0.000067 & 0.000008 & 0.000010 \\ 4.50-- 5.00 & 0.0000301 & 0.0000045 & 0.0000045 \\ 5.00-- 5.50 & 0.0000151 & 0.0000029 & 0.0000023 \\ 5.50-- 6.00 & 0.0000082 & 0.0000021 & 0.0000012 \\ 6.00-- 7.00 & 0.0000038 & 0.0000009 & 0.0000006 \\ 7.00-- 8.00 & 0.0000014 & 0.0000005 & 0.0000002 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average non--diffractive event. The data correspond to $-1.2<\eta<1.4$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results1} \end{table} \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 1.63 & 0.06 & 0.24 \\ 0.40-- 0.50 & 1.02 & 0.04 & 0.15 \\ 0.50-- 0.60 & 0.559 & 0.028 & 0.083 \\ 0.60-- 0.70 & 0.308 & 0.019 & 0.046 \\ 0.70-- 0.80 & 0.165 & 0.013 & 0.024 \\ 0.80-- 0.90 & 0.088 & 0.011 & 0.013 \\ 0.90-- 1.00 & 0.0479 & 0.0059 & 0.0071 \\ 1.00-- 1.10 & 0.0312 & 0.0052 & 0.0046 \\ 1.10-- 1.20 & 0.0196 & 0.0042 & 0.0029 \\ 1.20-- 1.35 & 0.0100 & 0.0018 & 0.0015 \\ 1.35-- 1.50 & 0.00304 & 0.00087 & 0.00045 \\ 1.50-- 1.75 & 0.00153 & 0.00052 & 0.00023 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average event with a diffractively dissociated photon state of a mass $\langle M_{X} \rangle = 5\GeV$. The data correspond to $-1.2<\eta<1.4$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results2} \end{table} \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 3.87 & 0.10 & 0.34 \\ 0.40-- 0.50 & 2.32 & 0.06 & 0.20 \\ 0.50-- 0.60 & 1.46 & 0.04 & 0.13 \\ 0.60-- 0.70 & 0.803 & 0.033 & 0.072 \\ 0.70-- 0.80 & 0.485 & 0.023 & 0.043 \\ 0.80-- 0.90 & 0.288 & 0.017 & 0.025 \\ 0.90-- 1.00 & 0.176 & 0.012 & 0.015 \\ 1.00-- 1.10 & 0.109 & 0.009 & 0.009 \\ 1.10-- 1.20 & 0.0732 & 0.0075 & 0.0065 \\ 1.20-- 1.35 & 0.0294 & 0.0035 & 0.0026 \\ 1.35-- 1.50 & 0.0186 & 0.0028 & 0.0016 \\ 1.50-- 1.75 & 0.0086 & 0.0014 & 0.0008 \\ 1.75-- 2.00 & 0.00260 & 0.00066 & 0.00023 \\ 2.00-- 2.50 & 0.00076 & 0.00023 & 0.00007 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average event with a diffractively dissociated photon state of a mass $\langle M_{X} \rangle = 10\GeV$. The data correspond to $-1.2<\eta<1.4$. The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results3} \end{table} The soft interactions of hadrons can be successfully described by thermodynamic models that predict a steep fall of the transverse momentum spectra that can be approximated with the exponential form \cite{hagedorn}: \begin{equation} \frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta}= exp(a - b \cdot \sqrt{p_{T}^{2} + m_{\pi}^{2}}) \label{e:exp} \end{equation} where $m_{\pi}$ is the pion mass. The results of the fits of this function to ZEUS data in the interval $0.3<p_{T}<1.2\GeV$ are also shown as the full line in Fig.~\ref{f:corrected_pt}. The resulting values of the exponential slope $b$ are listed in Table \ref{t:slopes}. 
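The functional form of equation (\ref{e:exp}) is linear in $\sqrt{p_T^2+m_\pi^2}$ after taking the logarithm, so the fit can be sketched with an unweighted log-linear least-squares fit. This is illustrative only: the published values in Table \ref{t:slopes} come from the full fitting procedure, which presumably weights points by their errors.

```python
import numpy as np

M_PI = 0.13957  # charged pion mass (GeV)

def fit_exponential(p_t, rate):
    """Fit rate = exp(a - b*m_T), with m_T = sqrt(p_T^2 + m_pi^2),
    by an unweighted linear least-squares fit in log space.
    Returns the pair (a, b)."""
    m_t = np.sqrt(np.asarray(p_t, float) ** 2 + M_PI ** 2)
    slope, intercept = np.polyfit(m_t, np.log(np.asarray(rate, float)), 1)
    return intercept, -slope
```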
The systematic errors were estimated by varying the relative inclusive cross sections within the systematic error limits (see section \ref{s:systematics}) and by varying the upper boundary of the fitted interval from $p_{T}=1.0\GeV$ to $1.4\GeV$. In Fig.~\ref{f:pt_slopes} we present a comparison of the $b$ parameter resulting from the fits of (\ref{e:exp}) to proton-proton and proton-antiproton data as a function of the c.m. energy. The slope of the ZEUS non-diffractive spectrum agrees with the data from hadron--hadron scattering at an energy close to the ZEUS photon--proton c.m. energy. The diffractive slopes agree better with the hadronic data corresponding to a lower energy. In Fig.~\ref{f:pt_slopes} the ZEUS diffractive points are plotted at $5\GeV$ and $10\GeV$, the values of the invariant mass of the dissociated photon. A similar behaviour has been observed for the diffractive dissociation of protons, i.e. the scale of the fragmentation of the excited system is related to the invariant mass rather than to the total c.m. energy \cite{p-diff}. The dashed line in Fig.~\ref{f:pt_slopes} is a parabola in $\log(s)$ and was fitted to all the hadron--hadron points to indicate the trend of the data. As one can see, our photoproduction results are consistent with the hadronic data. \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|c|c|c|} \hline \ \ sample \ \ & $b [\GeV^{-1}]$ & $\sigma_{stat}(b)$ & $\sigma_{syst}(b)$ & $a$ & $\sigma_{stat}(a)$ & $\mathrm{cov}(a,b)$\\ \hline\hline non-diffractive & 4.94 & 0.09 & 0.19 & 3.39 & 0.09 & -0.011\\ diff $\langle M_{X} \rangle = 5\GeV$ & 5.91 & 0.17 & 0.19 & 2.78 & 0.10 & -0.016\\ diff $\langle M_{X} \rangle = 10\GeV$ & 5.28 & 0.10 & 0.17 & 3.34 & 0.06 & -0.006\\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The values of the parameters resulting from the fits of equation (\protect\ref{e:exp}) to ZEUS data in the interval $0.3<p_{T}<1.2\GeV$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ indicate the statistical and systematic errors.} \label{t:slopes} \end{table} The non--diffractive spectrum in Fig.~\ref{f:corrected_pt} clearly departs from the exponential shape at high $p_{T}$ values. Such a behaviour is expected from the contribution of the hard scattering of partonic constituents of the colliding particles, a process that can be described in the framework of perturbative QCD. It results in a high $p_{T}$ behaviour of the inclusive spectrum that can be approximated by a power law formula: \begin{equation} \frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta}= A \cdot (1 + \frac{p_{T}}{p_{T\:0}})^{-n} \label{e:power} \end{equation} where $A$, $p_{T\:0}$ and $n$ are parameters determined from the data. The fit to the ZEUS points in the region of $p_{T}>1.2\GeV$ gives a good description of the data and results in the parameter values $p_{T\:0}=0.54$ GeV, $n=7.25$ and $A=394$ GeV$^{-2}$. The statistical precision of these numbers is described by the covariance matrix shown in Table \ref{t:cov}. The fitted function is shown in Fig.~\ref{f:corrected_pt} as the dotted line. \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline & $p_{T\:0}$ & $n$ & $A$ \\ \hline\hline $p_{T\:0}$ & $0.32\cdot 10^{-3}$ & $0.48\cdot 10^{-3}$ & $-0.10\cdot 10^{1}$\\ $n$ & & $0.12\cdot 10^{-2}$ & $-0.12\cdot 10^{1}$\\ $A$ & & & $ 0.32\cdot 10^{4}$\\ \hline \end{tabular}}} \vspace{1cm} \bf\caption{\it The covariance matrix corresponding to the fit of equation (\protect\ref{e:power}) to the non--diffractive data for $p_{T}>1.2\GeV$.} \label{t:cov} \end{table} In Fig.~\ref{f:pt_comparison} the ZEUS data are presented together with the results of a similar measurement from the H1 collaboration at $\langle W \rangle = 200$ GeV \cite{H1} and the data from the WA69 photoproduction experiment at a c.m. energy of $\langle W \rangle = 18\GeV$ \cite{omega}. 
For the purpose of this comparison, the inclusive cross sections published by those experiments were divided by the corresponding total photoproduction cross sections \cite{H1-sigmatot,ALLM}. Our results are in agreement with the H1 data. The comparison with the WA69 data shows that the transverse momentum spectrum becomes harder as the energy of the $\gamma p$ collision increases. Figure~\ref{f:pt_comparison} also shows the functional fits of the form (\ref{e:power}) to {\it p\={p}} data from UA1 and CDF at various c.m. energies \cite{UA1,CDF}. Since the fits correspond to inclusive cross sections published by these experiments, they have been divided by the cross section values used by these experiments for the absolute normalization of their data. The inclusive $p_{T}$ distribution from our photoproduction data is clearly harder than the distribution for {\it p\={p}} interactions at a similar c.m. energy and in fact is similar to {\it p\={p}} at $\sqrt{s}=900\GeV$. This comparison indicates that in spite of the apparent similarity in the low $p_{T}$ region between photoproduction and proton--antiproton collisions at a similar c.m. energy, the two reactions are different in the hard regime. There are many possible reasons for this behaviour. Firstly, both of the {\it p\={p}} experiments used for the comparison measured the central rapidity region ($|\eta|<2.5$ for UA1 and $|\eta|<1$ for CDF), while our data correspond to $0.8<\eta_{c.m.}<3.4$. Secondly, according to VDM, the bulk of the $\gamma p$ collisions can be approximated as an interaction of a vector meson $V$ with the proton. The $p_{T}$ spectrum of $Vp$ collisions may be harder than {\it p\={p}} at a similar c.m. energy, since the parton momenta of quarks in mesons are on average larger than in baryons. 
Thirdly, in the picture where the photon consists of a resolved part and a direct part, both the anomalous component of the resolved photon and the direct photon become significant at high $p_{T}$ and make the observed spectrum harder compared to that of $Vp$ reactions. Figure \ref{f:pt_kniehl} shows the comparison of our non--diffractive data with the theoretical prediction obtained recently from NLO QCD calculations \cite{krammer}. The charged particle production rates in a non--diffractive event were converted to inclusive non-diffractive cross sections by multiplying by the non--diffractive photoproduction cross section of $\sigma_{nd}(\gamma p \rightarrow X)=91\pm 11{\rm \:\mu b}$ \cite{zeus-sigmatot}. The theoretical calculations relied on the GRV parametrisation of the parton densities in the photon and on the CTEQ2M parametrisation for partons in the proton\cite{CTEQ}. The NLO fragmentation functions describing the relation between the hadronic final state and the partonic level were derived from the $e^{+}e^{-}$ data \cite{krammer-fragm}. The calculation depends strongly on the parton densities in the proton and in the photon, yielding a spread in the predictions of up to $30\%$ due to the former and $20\%$ due to the latter. The factorization scales of the incoming and outgoing parton lines, as well as the renormalization scale, were set to $p_{T}$. The uncertainty due to the ambiguity of this choice was estimated by changing all three scales up and down by a factor of 2. The estimates of the theoretical errors were added in quadrature and indicated in Fig.~\ref{f:pt_kniehl} as a shaded band. The theoretical calculation is in good agreement with the ZEUS data. \section{Conclusions} We have measured the inclusive transverse momentum spectra of charged particles in diffractive and non--diffractive photoproduction events with the ZEUS detector. 
The inclusive transverse momentum spectra fall exponentially in the low $p_{T}$ region, with a slope that increases slightly going from the non--diffractive to the diffractive collisions with the lowest $M_{X}$. The diffractive slopes are consistent with hadronic data at a c.m. energy equal to the invariant mass of the diffractive system. The non--diffractive low $p_{T}$ slope is consistent with the result from {\it p\={p}} at a similar c.m. energy but displays a high $p_{T}$ tail clearly departing from the exponential shape. Compared to photoproduction data at a lower c.m. energy we observe a hardening of the transverse momentum spectrum as the collision energy increases. The shape of our $p_{T}$ distribution is comparable to that of {\it p\={p}} interactions at $\sqrt{s}=900\GeV$. The results from a NLO QCD calculation agree with the measured cross sections for inclusive charged particle production. \section{Acknowledgments} We thank the DESY Directorate for their strong support and encouragement. The remarkable achievements of the HERA machine group were essential for the successful completion of this work and are gratefully appreciated. We gratefully acknowledge the support of the DESY computing and network services. We would like to thank B.A. Kniehl and G. Kramer for useful discussions and for providing the NLO QCD calculation results. \pagebreak
\section{Introduction} \label{sec:Intro} The description of spontaneous light emission by atomic electrons is a staple of textbooks in quantum physics \cite{Cohen2,Messiah2,LeBellac} and quantum optics \cite{FoxMulder}. Oftentimes, several approximations are made in the treatment. The first one is the dipole approximation \cite{Messiah2,FoxMulder} which consists \cite{Shirokov} in considering that the decaying electron does not emit light from its own position but rather from the position of the nucleus to which it is bound. This approximation is usually justified by claiming that the electromagnetic wavelengths which are relevant to the problem are much larger than the uncertainty on the electron's position. It is thus argued that the electromagnetic field does not ``see'' the details of matter configuration at the atomic scale, and hence that the precise location of the point of light emission is irrelevant. The second approximation is often used to explain why the dipole approximation can be made. It consists in noticing that, for large enough times, the only field modes which effectively contribute to spontaneous emission are the ones which are resonant with the atomic transition frequency. This is known as Fermi's golden rule. These resonant modes have a wavelength which is indeed much larger than the relevant atomic dimensions, and their interaction with the atom is very well described in the framework of the dipole approximation. Hence to some extent the dipole approximation is justified by Fermi's golden rule. But it is a very general result \cite{MisraSudarshan} of quantum physics that Fermi's linear decay cannot be valid at very short times. Indeed very short times obey what is called the Zeno dynamics, where the decay is always quadratic. One should therefore be skeptical of the validity of the dipole approximation at very short times, for which it may not be justified by the golden rule. 
Elsewhere \cite{EdouardArXiv} two of us have investigated the Hydrogen $2\mathrm{p}-1\mathrm{s}$ transition numerically, and developed a numerical method which enabled us to reproduce the dynamics of the system in the Zeno and Fermi regimes, but also at longer times in the Wigner-Weisskopf regime.\\ In the following we investigate the validity of the dipole and Fermi approximations in the case of the $2\mathrm{p}-1\mathrm{s}$ transition in atomic Hydrogen. In sect.~\ref{sec:AtomQED} we recall the main tools needed for the description of spontaneous emission. In sect.~\ref{sec:Heart}, the central section of this manuscript, we show how Fermi's golden rule emerges from rigorous first-order time-dependent perturbation theory, using two new independent arguments. First, in the dipole approximation, which is discussed at length in sect.~\ref{subsec:DipoleDsc}, we regularise the divergences (sect.~\ref{subsec:DipoleCalc}) obtained in the expression for the survival probability of the excited state. The regularisation procedure is cutoff-independent. Then (sect.~\ref{subsec:MultipoleCalc}) we go beyond the dipole approximation and rigorously derive the short-time dynamics of the system. \section{The decay of a two-level atom} \label{sec:AtomQED} \subsection{Position of the problem} \label{subsec:Notations} We consider a two-level atom, where the ground state $\mid\!\mathrm{g}\rangle$ has angular frequency $\omega_{\mathrm{g}}$ and the excited state $\mid\!\mathrm{e}\rangle$ has angular frequency $\omega_{\mathrm{e}}$, interacting with the electromagnetic field in the rotating wave approximation. We write $e$ for the (positive) elementary electric charge and $m_e$ for the electron mass; $\hat{\mathbf{x}}$ and $\hat{\mathbf{p}}$ denote the position and linear momentum operators of the electron. The atom is considered to be in free space. 
The Hamiltonian $\hat{H}=\hat{H}_A+\hat{H}_R+\hat{H}_I$ is a sum of three terms: the atom Hamiltonian $\hat{H}_A$, the electromagnetic field Hamiltonian $\hat{H}_R$, and the interaction Hamiltonian $\hat{H}_I$. In the Schrödinger picture these read \cite{CohenQED1} \begin{subequations} \label{eq:Hamilton} \begin{align} \hat{H}_A&=\hbar\omega_{\mathrm{g}}\mid\!\mathrm{g}\rangle\langle \mathrm{g}\!\mid + \hbar\omega_{\mathrm{e}}\mid\!\mathrm{e}\rangle\langle \mathrm{e}\!\mid&\\ \hat{H}_R&=\sum_{\lambda=\pm}\int\tilde{\mathrm{d}k}\,\hbar c\left|\left|\mathbf{k}\right|\right|\hat{a}_{\left(\lambda\right)}^\dagger\left(\mathbf{k}\right)\hat{a}_{\left(\lambda\right)}\left(\mathbf{k}\right)\\ \hat{H}_I&=\frac{e}{m_e}\hat{\mathbf{A}}\left(\hat{\mathbf{x}},t=0\right)\cdot\hat{\mathbf{p}} \end{align} \end{subequations} where $\lambda$ labels the polarisation of the electromagnetic field. Introduce the polarisation vectors $\bm{\epsilon}_{\left(\lambda=\pm1\right)}\left(\mathbf{k}\right)$, which are any two (possibly complex) mutually orthogonal unit vectors taken in the plane orthogonal to the wave vector $\mathbf{k}$. The vector potential $\hat{\mathbf{A}}\left(\hat{\mathbf{x}},t\right)$ is expanded (in the Coulomb gauge) over plane waves as \begin{equation} \label{eq:MaxwellSolHat} \mathbf{\hat{A}}\left(\mathbf{x},t\right)=\mathpalette\DHLhksqrt{\frac{\hbar}{\epsilon_0c}}\sum_{\lambda=\pm}\int\tilde{\mathrm{d}k}\left[\hat{a}_{\left(\lambda\right)}\left(\mathbf{k}\right)\bm{\epsilon}_{\left(\lambda\right)}\left(\mathbf{k}\right)\mathrm{e}^{-\mathrm{i}\left(c\left|\left|\mathbf{k}\right|\right|t-\mathbf{k}\cdot\mathbf{x}\right)}+\mathrm{h.c.}\right]. 
\end{equation} Here \begin{equation} \label{eq:InDaTilde} \tilde{\mathrm{d}k}\equiv\frac{\mathrm{d}^4k}{\left(2\pi\right)^4}2\pi\,\delta\left(k_0^2-\mathbf{k}^2\right)\theta\left(k_0\right), \end{equation} is the usual volume element on the light cone \cite{Weinberg1,ItzyksonZuber} (where $\theta$ stands for the Heaviside distribution). Finally, the commutation relation between the photon ladder operators is given by \begin{equation} \label{eq:aaDagger} \left[\hat{a}_{\left(\varkappa\right)}\left(\mathbf{k}\right),\hat{a}_{\left(\lambda\right)}^\dagger\left(\mathbf{q}\right)\right]=2\left|\left|\mathbf{k}\right|\right|\left(2\pi\right)^3\delta\left(\mathbf{k}-\mathbf{q}\right)\delta_{\varkappa\lambda}. \end{equation} The state of the system reads \begin{multline}\label{eq:Ansatz} \mid\!\psi\left(t\right)\rangle=c_{\mathrm{e}}\left(t\right)\mathrm{e}^{-\mathrm{i}\omega_{\mathrm{e}}t}\mid\!\mathrm{e},0\rangle\\ +\sum_{\lambda=\pm}\int\tilde{\mathrm{d}k}\,c_{\mathrm{g},\lambda}\left(\mathbf{k},t\right)\mathrm{e}^{-\mathrm{i}\left(\omega_{\mathrm{g}}+ c\left|\left|\mathbf{k}\right|\right|\right) t}\mid\!\mathrm{g},1_{\lambda,\mathbf{k}}\rangle \end{multline} where $\mid\!\mathrm{e},0\rangle$ means that the atom is in the excited state and the field contains no photons and $\mid\!\mathrm{g},1_{\lambda,\mathbf{k}}\rangle$ means that the atom is in the ground state and the field contains a photon of wave vector $\mathbf{k}$ and polarisation $\lambda$. \subsection{Importance of the interaction matrix element} \label{subsec:StartingHere} In the absence of any interaction (that is, if $\hat{H}_I$ were zero), the coefficients $c_{\mathrm{e}}$ and $c_{\mathrm{g},\lambda}\left(\mathbf{k},\cdot\right)$ would not evolve and each term of the superposition (\ref{eq:Ansatz}) would oscillate at its own eigenfrequency. 
As such, the nontrivial features of the problem are encompassed by the matrix elements of the interaction Hamiltonian in the Hilbert (sub)space spanned by $\mid\!\mathrm{e},0\rangle$ and $\mid\!\mathrm{g},1_{\lambda,\mathbf{k}}\rangle$. In the dipole approximation, one approximates the imaginary exponential in (\ref{eq:MaxwellSolHat}) as $\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{x}}\simeq1$ (remember that it is the vector potential at $t=0$ which appears in $\hat{H}_I$, reflecting the fact that none of the Hamiltonians in (\ref{eq:Hamilton}) depend explicitly on time). Then one is left to compute the relatively simple matrix elements of $\hat{\mathbf{p}}$. The result is well-known for the $2\mathrm{p}-1\mathrm{s}$ Hydrogen transition. Writing, for this transition, $\mid\!\mathrm{g}\rangle\equiv\mid\!1\mathrm{s}\rangle$ and $\mid\!\mathrm{e}\rangle\equiv\mid\!2\mathrm{p}\,m_2\rangle$, with $m_2$ the magnetic quantum number of the $2\mathrm{p}$ sublevel considered, we have the following expressions for the electronic wave functions of the $1\mathrm{s}$ and $2\mathrm{p}\,m_2$ sublevels: \begin{subequations} \label{eq:HLevels} \begin{align} \psi_{1\mathrm{s}}\left(\mathbf{x}\right)&=\frac{\exp\left(-\frac{\left|\left|\mathbf{x}\right|\right|}{a_0}\right)}{\mathpalette\DHLhksqrt{\pi a_0^3}},\\ \psi_{2\mathrm{p}\,m_2}\left(\mathbf{x}\right)&=\frac{\exp\left(-\frac{\left|\left|\mathbf{x}\right|\right|}{2a_0}\right)}{8\mathpalette\DHLhksqrt{\pi a_0^3}}\frac{\mathpalette\DHLhksqrt{2}}{a_0}\mathbf{x}\cdot\bm{\xi}_{m_{2}}. \end{align} \end{subequations} Here $a_0$ is the Bohr radius. The vectors $\bm{\xi}_{m_{2}}$ give the preferred directionality of the wave function of the $2\mathrm{p}$ substates, the angular dependence of which is given by the usual spherical harmonics \cite{Cohen1}. They are given by \begin{subequations} \label{eq:Xi} \begin{align} \bm{\xi}_0&=\mathbf{e}_z,\\ \bm{\xi}_{\pm1}&=\mp\frac{\mathbf{e}_x\pm\mathrm{i}\mathbf{e}_y}{\mathpalette\DHLhksqrt{2}}. 
\end{align} \end{subequations} In the dipole approximation, the interaction matrix element then is \cite{FacchiPhD} \begin{subequations} \label{eq:MatrixDiMulti} \begin{equation} \label{eq:MatrixDipole} \langle1\mathrm{s},1_{\lambda,\mathbf{k}}\mid\!\hat{H}_I\!\mid\!2\mathrm{p}\,m_2,0\rangle=-\mathrm{i}\mathpalette\DHLhksqrt{\frac{\hbar}{\epsilon_0 c}}\frac{\hbar e}{m_e\,a_0}\frac{2^{\frac{9}{2}}}{3^4}\bm{\epsilon}_{\left(\lambda\right)}^*\left(\mathbf{k}\right)\cdot\bm{\xi}_{m_2}. \end{equation} The dipole approximation is further discussed in sect.~\ref{subsec:DipoleDsc}. For the same transition, if one keeps the full exponential in (\ref{eq:MaxwellSolHat}), one obtains the exact matrix element \cite{FacchiPhD} \begin{equation} \label{eq:MatrixMultipole} \langle1\mathrm{s},1_{\lambda,\mathbf{k}}\mid\!\hat{H}_I\!\mid\!2\mathrm{p}\,m_2,0\rangle=-\mathrm{i}\mathpalette\DHLhksqrt{\frac{\hbar}{\epsilon_0 c}}\frac{\hbar e}{m_e\,a_0}\frac{2^{\frac{9}{2}}}{3^4}\frac{\bm{\epsilon}_{\left(\lambda\right)}^*\left(\mathbf{k}\right)\cdot\bm{\xi}_{m_2}}{\left[1+\left(\frac{2}{3}a_0\left|\left|\mathbf{k}\right|\right|\right)^2\right]^2}. \end{equation} \end{subequations} Since we are interested in spontaneous emission, we set $c_{\mathrm{e}}\left(t=0\right)=1$ and $c_{\mathrm{g},\lambda}\left(\mathbf{k},t=0\right)=0$ for all $\lambda\in\left\{+,-\right\}$ and all $\mathbf{k}\in\mathbb{R}^3$. We want to compute quantities such as \begin{subequations} \label{eq:OntoCardSine} \begin{equation} \label{eq:DecaytoSomeMode} P_{\mathrm{emiss.}\rightarrow\lambda,\mathbf{k}}\left(t\right)=\left|c_{\mathrm{g},\lambda}\left(\mathbf{k},t\right)\right|^2=\left|\langle\mathrm{g},1_{\lambda,\mathbf{k}}\!\mid\!\hat{U}\left(t\right)\!\mid\!\mathrm{e},0\rangle\right|^2 \end{equation} where $\hat{U}\left(t\right)=\exp\left[\left(-\mathrm{i}/\hbar\right)\hat{H}t\right]$ is the evolution operator for the system.
It is well-known \cite{Cohen2,Englert} that a time-dependent perturbative treatment of such a problem yields, at first order in the interaction, \begin{equation} \label{eq:WildCardSineAppears} P_{\mathrm{emiss.}\rightarrow\lambda,\mathbf{k}}\left(t\right)=\frac{t^2}{\hbar^2}\left|\langle\mathrm{g},1_{\lambda,\mathbf{k}}\!\mid\!\hat{H}_I\!\mid\!\mathrm{e},0\rangle\right|^2\sin\!\mathrm{c}^2\left[\left(\omega_0-c\left|\left|\mathbf{k}\right|\right|\right)\frac{t}{2}\right] \end{equation} \end{subequations} where we introduced the notation $\omega_0\equiv\omega_{\mathrm{e}}-\omega_{\mathrm{g}}$. From (\ref{eq:Ansatz}) we deduce that the probability that, at time $t$, the electron is still in the excited state, which we shall call the survival probability, is given by \begin{equation} \label{eq:CardSineSurv} \begin{aligned} [b] P_{\mathrm{surv}}\left(t\right)\equiv\left|c_{\mathrm{e}}\left(t\right)\right|^2&=1-\sum_{\lambda=\pm}\int\tilde{\mathrm{d}k}\left|c_{\mathrm{g},\lambda}\left(\mathbf{k},t\right)\right|^2\\ &=1-\frac{t^2}{\hbar^2}\sum_{\lambda=\pm}\int\tilde{\mathrm{d}k}\left|\langle\mathrm{g},1_{\lambda,\mathbf{k}}\!\mid\!\hat{H}_I\!\mid\!\mathrm{e},0\rangle\right|^2\\ &\hspace{75pt}\sin\!\mathrm{c}^2\left[\left(\omega_0-c\left|\left|\mathbf{k}\right|\right|\right)\frac{t}{2}\right]. \end{aligned} \end{equation} The integral in (\ref{eq:CardSineSurv}) will be central for the rest of the article. Before we investigate it in further detail, let us remind the reader of the usual \cite{Cohen2,Englert,LeBellac} treatment of the problem. It consists in using the distributional limit \begin{equation} \label{eq:CardinalSineDirac} \lim_{a\to+\infty}\frac{\sin^2\left(ax\right)}{a\,x^2}=\pi\delta\left(x\right) \end{equation} of the square cardinal sine, to conclude that the only single-photon states available by spontaneous emission have a frequency equal to the atomic transition frequency $\omega_0$.
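The distributional limit (\ref{eq:CardinalSineDirac}) can be illustrated numerically by smearing the square cardinal sine against a test function (here $\phi\left(x\right)=\mathrm{e}^{-x^2}$, an arbitrary choice) and watching the integral tend to $\pi\phi\left(0\right)$:

```python
import numpy as np
from scipy.integrate import quad

def smeared(a):
    # integrate sin^2(a x)/(a x^2) against phi(x) = exp(-x^2);
    # as a -> infinity this tends to pi * phi(0) = pi
    phi = lambda x: np.exp(-x**2)
    integrand = lambda x: np.sin(a * x)**2 / (a * x**2) * phi(x)
    # split at 0 to avoid evaluating exactly at the (removable) singularity
    left, _ = quad(integrand, -6.0, -1e-12, limit=4000)
    right, _ = quad(integrand, 1e-12, 6.0, limit=4000)
    return left + right

for a in (20.0, 200.0):
    print(a, smeared(a))   # approaches pi as a grows
```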
Indeed the use of (\ref{eq:CardinalSineDirac}) in (\ref{eq:WildCardSineAppears}) yields Fermi's golden rule \begin{equation} \label{eq:OnlyResonant} P_{\mathrm{emiss.}\rightarrow\lambda,\mathbf{k}}\left(t\right)\underset{t\rightarrow+\infty}{\sim}2\pi\frac{t}{\hbar^2}\left|\langle\mathrm{g},1_{\lambda,\mathbf{k}}\!\mid\!\hat{H}_I\!\mid\!\mathrm{e},0\rangle\right|^2\delta\left(\omega_0-c\left|\left|\mathbf{k}\right|\right|\right). \end{equation} Notice that making use of (\ref{eq:CardinalSineDirac}) in (\ref{eq:CardSineSurv}) means taking the limit $t\rightarrow+\infty$, a strange manipulation in the framework of time-dependent perturbation theory around $t=0$. One then has to enter a subtle discussion of time regimes to conclude that Fermi's golden rule is valid for times large enough to guarantee that one can approximate the square cardinal sine by its limit (\ref{eq:CardinalSineDirac}), but small enough to ensure that first-order perturbation theory still applies. This is discussed in more detail in \cite{LeBellac} for instance.\\ Our goal is not merely to point out the well-known fact that the golden rule is an approximation, but to acquire knowledge on what it is an approximation of. This is the question we investigate in the following sect.~\ref{sec:Heart}, where we focus on the case of the $2\mathrm{p}-1\mathrm{s}$ transition in atomic Hydrogen. \section{Rigorous first order perturbation theory and Fermi's golden rule} \label{sec:Heart} We shall investigate corrections to Fermi's golden rule from first order time-dependent perturbation theory. In sect.~\ref{subsec:DipoleDsc} we discuss the validity of the ubiquitous dipole approximation for the atom-field coupling. In sect.~\ref{subsec:DipoleCalc} we propose a cutoff-independent regularisation procedure of the divergences which arise when the dipole approximation is made. We then go on to investigate the dynamics yielded by the exact coupling in sect.~\ref{subsec:MultipoleCalc}. 
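For orientation, the golden-rule rate for this transition is the textbook value $\Gamma_{2\mathrm{p}\rightarrow1\mathrm{s}}=\left(2^{11}/3^9\right)\alpha^3\omega_0$; a short numerical evaluation (the SI values below are assumptions of this illustration, not taken from the text) recovers the familiar $1.6\,\mathrm{ns}$ lifetime of the $2\mathrm{p}$ level:

```python
# Golden-rule 2p -> 1s decay rate, Gamma = (2**11/3**9) * alpha**3 * omega_0,
# with h_bar * omega_0 = 10.2 eV (Lyman-alpha).
alpha = 7.2973525693e-3          # fine structure constant
hbar = 1.054571817e-34           # J s
eV = 1.602176634e-19             # J
omega_0 = 10.2 * eV / hbar       # 2p-1s transition frequency, rad/s

Gamma = (2**11 / 3**9) * alpha**3 * omega_0
tau = 1.0 / Gamma
print(f"Gamma = {Gamma:.3e} 1/s, lifetime = {tau * 1e9:.2f} ns")
```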
\subsection{The dipole approximation: discussion} \label{subsec:DipoleDsc} It has been noticed that in the framework of the dipole approximation, it is necessary to introduce a cutoff over electromagnetic field frequencies. In our treatment, this can be seen as follows: write the survival probability of the electron in the excited state at time $t$ as given by (\ref{eq:MatrixDipole}) and (\ref{eq:CardSineSurv}). Performing the integration over the angles, we get, using (\ref{eq:InDaTilde}), the following expression: \begin{equation} \label{eq:NoChanceSurvival} P_{\mathrm{surv}}\left(t\right)=1-\frac{2^{10}}{3^{9}\pi}\alpha^3\,t^2\int_0^{+\infty}\mathrm{d}\omega\,\omega\sin\!\mathrm{c}^2\left(\left(\omega_0-\omega\right)\frac{t}{2}\right) \end{equation} where $\alpha=e^2/\left(4\pi\epsilon_0\hbar c\right)$ is the fine structure constant. The integral on the right-hand side of (\ref{eq:NoChanceSurvival}) diverges at all times. In this situation, taking the limit $t\rightarrow+\infty$ under the integral to obtain Fermi's golden rule is nonrigorous to say the least. Accordingly, one introduces a cutoff frequency corresponding to an upper bound on the validity domain of the dipole approximation for the atom-field coupling. One then considers that electromagnetic field modes with frequency larger than this cutoff frequency are uncoupled from the atom. Several objections can be raised with regard to this procedure. We discuss them along with their more or less satisfying rebuttals: \begin{itemize} \item{In order to introduce a cutoff, one must assume that high-frequency electromagnetic modes do not interact with the atom. This is justified by a quick look at the exact coupling. The matrix elements are proportional to $\langle1\mathrm{s}\!\mid\hat{\mathbf{p}}\exp\left(\mathrm{i}\mathbf{k}\cdot\hat{\mathbf{x}}\right)\mid\!2\mathrm{p}\,m_2\rangle$.
When the wave number $\left|\left|\mathbf{k}\right|\right|$ becomes higher than the inverse of the Bohr radius $a_0$, such matrix elements are de facto negligibly small because the oscillating exponential $\exp\left(\mathrm{i}\mathbf{k}\cdot\hat{\mathbf{x}}\right)$ averages out during the integration, as seen from (\ref{eq:HLevels}).} \item{The approximation yields cutoff-dependent results. Since the qualitative argument above only provides an order of magnitude (the ratio $c/a_0$) for the cutoff frequency $\omega_{\mathrm{C}}$, this is especially problematic. Indeed, one can check that the truncated integral \begin{multline} \label{eq:Trunk} \int_0^{\omega_{\mathrm{C}}}\mathrm{d}\omega\,\omega\sin\!\mathrm{c}^2\left(\left(\omega_0-\omega\right)\frac{t}{2}\right)\\=\frac{2}{t^2}\left[\log\left(-1+\frac{\omega_{\mathrm{C}}}{\omega_0}\right)-\left[\mathrm{Ci}\left(\left(\omega_{\mathrm{C}}-\omega_0\right)t\right)-\mathrm{Ci}\left(\omega_0 t\right)\right]\right.\\\left.+\frac{\omega_0}{\omega_{\mathrm{C}}-\omega_0}\left(-1+\cos\left(\left(\omega_{\mathrm{C}}-\omega_0\right)t\right)\right)+\left(-1+\cos\left(\omega_0 t\right)\right)\right.\\\left.+\omega_0\,t\left(\mathrm{Si}\left(\left(\omega_{\mathrm{C}}-\omega_0\right)t\right)+\mathrm{Si}\left(\omega_0 t\right)\right)\vphantom{\frac{1}{\omega_{\mathrm{C}}-\omega_0}}\right] \end{multline} where $\mathrm{Ci}$ stands for the cosine integral and $\mathrm{Si}$ for the sine integral \cite{AbramowitzStegun}, is strongly dependent on the value of the cutoff frequency\footnote{We also note that when the interaction Hamiltonian is taken to be of the $\hat{\mathbf{E}}\cdot\hat{\mathbf{x}}$ form instead of the $\hat{\mathbf{A}}\cdot\hat{\mathbf{p}}$ form, the divergence of the integral corresponding to (\ref{eq:NoChanceSurvival}), which features $\omega^3$ instead of $\omega$, is quadratic instead of logarithmic, and the dependence of the truncated integral (\ref{eq:Trunk}) (which features the same substitution) on the cutoff frequency
is very much enhanced. See \cite{EdouardArXiv}. \label{ftn:Test}} $\omega_{\mathrm{C}}$. A discussion of the relevance of (\ref{eq:Trunk}) is made in sect.~\ref{sec:Ccl}. In a recent paper \cite{Norge} devoted to the decay of magnetic dipoles, similar questions were raised, and the Compton frequency was proposed as a cutoff. With this cutoff, Grimsmo \emph{et al.} proposed a regularisation of their problem following the lines of Bethe's mass renormalisation.\\} \item{When a cutoff is implemented, the distinction between electromagnetic modes which are considered to be coupled to the atom and those which are excluded from the treatment is binary: the coupling function is taken to be exactly zero beyond the cutoff frequency. One could envision introducing a smoother cutoff procedure along the lines of the cutting off of ultrarelativistic frequencies presented in \cite{CohenQED1}, but we feel this would do nothing but introduce further arbitrariness in the model.} \end{itemize} \subsection{The dipole approximation: regularisation} \label{subsec:DipoleCalc} For the reasons we gave in the previous sect.~\ref{subsec:DipoleDsc}, we follow a different path: we refrain from introducing a cutoff and instead retain the result (\ref{eq:NoChanceSurvival}) from the dipole approximation without cutoff, but will endeavour to regularise the divergence in the integral. We will then extract the regular (finite) terms, and inspect them carefully. As far as we know this treatment of the present problem is novel.\\ We focus our interest on the integral featured in (\ref{eq:NoChanceSurvival}), that is \begin{multline} \label{eq:Frederic} \int_{-\infty}^{+\infty}\mathrm{d}\omega\,f\left(\omega,t\right)\hspace{15pt}\text{where}\\ f\left(\omega,t\right)=\theta\left(\omega\right)\omega\sin\!\mathrm{c}^2\left(\left(\omega_0-\omega\right)\frac{t}{2}\right)\equiv\theta\left(\omega\right)g\left(\omega,t\right). \end{multline} Here $\theta$ stands for the Heaviside step distribution.
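Both the logarithmic divergence of (\ref{eq:Frederic}) and the cutoff sensitivity discussed in sect.~\ref{subsec:DipoleDsc} can be made concrete numerically; the sketch below (units with $\omega_0=1$ are an arbitrary choice) compares a direct quadrature of the truncated integral with a closed form equivalent to (\ref{eq:Trunk}):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

W0, T = 1.0, 3.0   # arbitrary illustration values

def sinc2(x):
    return np.sinc(x / np.pi)**2   # np.sinc(y) = sin(pi*y)/(pi*y)

def trunc_num(wc):
    val, _ = quad(lambda w: w * sinc2((W0 - w) * T / 2), 0.0, wc, limit=4000)
    return val

def trunc_closed(wc):
    # closed form of the truncated integral, valid for wc > w0
    si1, ci1 = sici((wc - W0) * T)
    si2, ci2 = sici(W0 * T)
    return (2.0 / T**2) * (np.log(wc / W0 - 1.0) - (ci1 - ci2)
                           + W0 / (wc - W0) * (np.cos((wc - W0) * T) - 1.0)
                           + (np.cos(W0 * T) - 1.0)
                           + W0 * T * (si1 + si2))

for wc in (10.0, 100.0, 1000.0):
    print(wc, trunc_num(wc), trunc_closed(wc))   # grows without bound with wc
```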
As we have discussed at length, (\ref{eq:Frederic}) is a divergent integral. Nevertheless, we shall extract its regular (finite) terms.\\ The idea here is that although the function $f\left(\cdot,t\right)$ does not belong to the vector space $L^1\left(\mathbb{R}\right)$ of summable functions, it is a slowly growing function, and, as such, a tempered distribution. It therefore admits a Fourier transform in the sense of distributions, which we write $\bar{f}\left(\cdot,t\right)$.\\ We thus compute the Fourier transform \begin{equation} \label{eq:DistFourier} \bar{f}\left(\tau,t\right)=\int_{-\infty}^{+\infty}\mathrm{d}\omega\,f\left(\omega,t\right)\mathrm{e}^{-\mathrm{i}\omega\tau} \end{equation} and then shall take the limit $\tau\rightarrow0$ at the end to retrieve the desired integral (\ref{eq:Frederic}). In this limit, some terms in $\bar{f}\left(\tau,t\right)$ become ill-defined, a consequence of the fact that $f\left(\cdot,t\right)$ is not summable. We will simply discard these terms at the end of our treatment, and focus on the well-defined terms in the limit $\tau\rightarrow0$.\\ The folding theorem and the well-known expression for the Fourier transform of the Heaviside distribution yield, from (\ref{eq:Frederic}) and (\ref{eq:DistFourier}) \begin{equation} \label{eq:ConvWithDeltavp} \bar{f}\left(\tau,t\right)=\frac{1}{2}\left[\left(\delta\left(\cdot\right)-\frac{\mathrm{i}}{\pi}\mathrm{vp}\,\frac{1}{\cdot}\right)*\bar{g}\left(\cdot,t\right)\right]\left(\tau\right) \end{equation} where $\mathrm{vp}$ stands for the Cauchy principal value of the subsequent function and the relation between $\bar{g}$ and $g$ is the same as that (\ref{eq:DistFourier}) between $\bar{f}$ and $f$: \begin{equation} \label{eq:DistFourierAgain} \bar{g}\left(\tau,t\right)=\int_{-\infty}^{+\infty}\mathrm{d}\omega\,g\left(\omega,t\right)\mathrm{e}^{-\mathrm{i}\omega\tau}. 
\end{equation} From (\ref{eq:Frederic}) we get \begin{multline} \label{eq:ExpandSine} \bar{g}\left(\tau,t\right)=-\frac{1}{t^2}\int_{-\infty}^{+\infty}\mathrm{d}\omega\frac{\omega}{\left(\omega-\omega_0\right)^2}\\\left(\mathrm{e}^{\mathrm{i}\omega\left(t-\tau\right)}\mathrm{e}^{-\mathrm{i}\omega_0t}+\mathrm{e}^{-\mathrm{i}\omega\left(t+\tau\right)}\mathrm{e}^{\mathrm{i}\omega_0t}-2\mathrm{e}^{-\mathrm{i}\omega\tau}\right). \end{multline} We would like to compute this as a sum of three integrals corresponding to the three summands on the right-hand side of (\ref{eq:ExpandSine}), but, taken individually, these integrals will diverge because of the singularity at $\omega=\omega_0$. The full integrand in (\ref{eq:ExpandSine}), however, has no singularity at $\omega=\omega_0$. Accordingly we introduce a small positive imaginary part $\epsilon>0$ in the denominator, which enables us to compute (\ref{eq:ExpandSine}) as a sum of three integrals. Introduce \begin{equation} \label{eq:CapitalG} G_\epsilon\left(\tau\right)\equiv\int_{-\infty}^{+\infty}\mathrm{d}\omega\frac{\omega}{\left(\omega-\omega_0+\mathrm{i}\epsilon\right)^2}\mathrm{e}^{-\mathrm{i}\omega\tau} \end{equation} to rewrite \begin{equation} \label{eq:UppertoLower} \bar{g}\left(\tau,t\right)=-\frac{1}{t^2}\lim_{\epsilon\to0^+}\left(\mathrm{e}^{-\mathrm{i}\omega_0t}G_\epsilon\left(\tau-t\right)+\mathrm{e}^{\mathrm{i}\omega_0t}G_\epsilon\left(\tau+t\right)-2G_\epsilon\left(\tau\right)\right). \end{equation} Thus to compute $\bar{g}\left(\cdot,t\right)$ we need only compute $G_\epsilon$, which we do now. Notice first that \begin{equation} \label{eq:Apart} \frac{\omega}{\left(\omega-\omega_0+\mathrm{i}\epsilon\right)^2}=\frac{1}{\omega-\omega_0+\mathrm{i}\epsilon}+\frac{\omega_0+\mathrm{i}\epsilon}{\left(\omega-\omega_0+\mathrm{i}\epsilon\right)^2}. 
\end{equation} Since $\epsilon>0$, an application of Cauchy's residue theorem therefore yields \begin{equation} \label{eq:ThanksAugustin} \lim_{\epsilon\to0^+}G_\epsilon\left(\tau\right)=-2\pi\mathrm{e}^{-\mathrm{i}\omega_0\tau}\left(\mathrm{i}+\omega_0\tau\right)\theta\left(\tau\right). \end{equation} Plugging this back in (\ref{eq:UppertoLower}), we get \begin{multline} \label{eq:ProvisionalEnd} \bar{g}\left(\tau,t\right)=2\pi\frac{1}{t^2}\mathrm{e}^{-\mathrm{i}\omega_0\tau}\left[\theta\left(\tau-t\right)\left(\mathrm{i}+\omega_0\left(\tau-t\right)\right)\right.\\\left.+\theta\left(\tau+t\right)\left(\mathrm{i}+\omega_0\left(\tau+t\right)\right)-2\theta\left(\tau\right)\left(\mathrm{i}+\omega_0\tau\right)\right]. \end{multline} Then we further plug this in (\ref{eq:ConvWithDeltavp}) to get \begin{multline} \label{eq:DeltaPlusvp} \hspace{-12.5pt}\bar{f}\left(\tau,t\right)=\frac{1}{2}\bar{g}\left(\tau,t\right)-\frac{\mathrm{i}}{t^2}\mathrm{vp}\int_{-\infty}^{+\infty}\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}\left[\theta\left(\sigma-t\right)\left(\mathrm{i}+\omega_0\left(\sigma-t\right)\right)\right.\\\left.+\theta\left(\sigma+t\right)\left(\mathrm{i}+\omega_0\left(\sigma+t\right)\right)-2\theta\left(\sigma\right)\left(\mathrm{i}+\omega_0\sigma\right)\right].
\end{multline} After some algebra we obtain \begin{multline} \label{eq:Finalfortau} \bar{f}\left(\tau,t\right)=\frac{1}{t^2}\left\{\pi\,\mathrm{e}^{-\mathrm{i}\omega_0\tau}\left[\theta\left(\tau-t\right)\left(\mathrm{i}+\omega_0\left(\tau-t\right)\right)\right.\right.\\\left.\left.+\theta\left(\tau+t\right)\left(\mathrm{i}+\omega_0\left(\tau+t\right)\right)\right.\right.\\\left.\left.-2\theta\left(\tau\right)\left(\mathrm{i}+\omega_0\tau\right)\right]-\mathrm{i}\,\mathrm{vp}\left[\mathrm{i}\left(\int_{-t}^0\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}-\int_0^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}\right)\right.\right.\\\left.\left.+\omega_0\left(t\int_{-t}^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}+\int_{-t}^0\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}\sigma-\int_0^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{\tau-\sigma}\sigma\right)\right]\right\}. \end{multline} Hence we can finally write \begin{multline} \label{eq:RegSng} \bar{f}\left(\tau=0,t\right)=\frac{1}{t^2}\left\{\pi\left[\theta\left(-t\right)\left(\mathrm{i}-\omega_0t\right)+\theta\left(t\right)\left(\mathrm{i}+\omega_0t\right)\right.\right.\\\left.\left.-2\mathrm{i}\theta\left(0\right)\right]-\mathrm{i}\,\mathrm{vp}\left[\mathrm{i}\left(\int_{-t}^0\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{-\sigma}-\int_0^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{-\sigma}\right)\right.\right.\\\left.\left.+\omega_0\left(t\int_{-t}^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{-\sigma}-\int_{-t}^0\mathrm{d}\sigma\,\mathrm{e}^{-\mathrm{i}\omega_0\sigma}+\int_0^t\mathrm{d}\sigma\,\mathrm{e}^{-\mathrm{i}\omega_0\sigma}\right)\right]\right\}. \end{multline} Now we want to identify and discard the singular terms in (\ref{eq:RegSng}).
Making use of $\theta\left(-t\right)+\theta\left(t\right)=1$, we can rewrite $2\theta\left(0\right)=1+\theta\left(0\right)-\theta\left(-0\right)$. The quantity $\theta\left(0\right)-\theta\left(-0\right)$ is singular\footnote{It can be understood as equal to $\mathrm{sgn}\left(0\right)$.}, and we discard it. We claim that the difference of the two (principal value) integrals on the second line on the right-hand side of (\ref{eq:RegSng}) also features a singular term. Let us show this. Rewrite \begin{equation} \label{eq:ExtractSng} \begin{aligned} [b] \int_{-t}^0\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{-\sigma}-\int_0^t\mathrm{d}\sigma\frac{\mathrm{e}^{-\mathrm{i}\omega_0\sigma}}{-\sigma}&=2\int_0^t\mathrm{d}\sigma\frac{\cos\left(\omega_0\sigma\right)}{\sigma}\\ &=2\sum_{n=0}^{+\infty}\frac{\left(-1\right)^n}{\left(2n\right)!}\omega_0^{2n}\int_0^t\mathrm{d}\sigma\,\sigma^{2n-1}. \end{aligned} \end{equation} Now, for the $n=0$ term in this sum, the integral diverges, and taking its principal value will not change that fact. Accordingly, we simply discard the $n=0$ term in (\ref{eq:ExtractSng}). We write the remainder of the series in closed form: \begin{equation} \label{eq:CosineInt} \begin{aligned} [b] \sum_{n=1}^{+\infty}\frac{\left(-1\right)^n}{\left(2n\right)!}\omega_0^{2n}\int_0^t\mathrm{d}\sigma\,\sigma^{2n-1}&=\sum_{n=1}^{+\infty}\frac{\left(-1\right)^n}{2n\left(2n\right)!}\left(\omega_0t\right)^{2n}\\ &=\mathrm{Ci}\left(\omega_0t\right)-\log\left(\omega_0t\right)-\gamma \end{aligned} \end{equation} where $\gamma$ is the Euler-Mascheroni constant.
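The closed form (\ref{eq:CosineInt}) is easy to confirm numerically (the use of \texttt{scipy.special.sici} for $\mathrm{Ci}$ is an implementation choice):

```python
import math
from scipy.special import sici

def cosint_series(x, nmax=40):
    # partial sum of sum_{n>=1} (-1)**n * x**(2n) / (2n * (2n)!)
    return sum((-1)**n * x**(2 * n) / (2 * n * math.factorial(2 * n))
               for n in range(1, nmax + 1))

x = 2.0
si, ci = sici(x)
lhs = cosint_series(x)
rhs = ci - math.log(x) - 0.5772156649015329   # Euler-Mascheroni constant
print(lhs, rhs)   # the two values coincide
```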
Computing the integrals on the third line of (\ref{eq:RegSng}), we can rewrite the regular part of (\ref{eq:RegSng}) as \begin{multline} \label{eq:RegOnly} \bar{f}\left(\tau=0,t\right)\stackrel{\mathrm{r.p.}}{=}\frac{1}{t^2}\left[-4\sin^2\left(\frac{\omega_0 t}{2}\right)+2\left(\mathrm{Ci}\left(\omega_0t\right)-\log\left(\omega_0t\right)-\gamma\right)\right.\\\left.+\pi\omega_0 t\left(\mathrm{sgn}\left(t\right)+\frac{2}{\pi}\mathrm{Si}\left(\omega_0 t\right)\right)\right] \end{multline} where $\mathrm{r.p.}$ stands for ``regular part''. The term \begin{equation} \label{eq:FermiSeed} \pi\frac{\omega_0}{t}\left(\mathrm{sgn}\left(t\right)+\frac{2}{\pi}\mathrm{Si}\left(\omega_0 t\right)\right) \end{equation} on the right-hand side of (\ref{eq:RegOnly}) is particularly interesting, and can be directly linked to Fermi's golden rule. Indeed, remember from (\ref{eq:NoChanceSurvival}) that the decay probability (that is, $1-P_{\mathrm{surv}}\left(t\right)$) features the product of (\ref{eq:RegOnly}) by $t^2$. Further, notice that $\mathrm{Si}\left(\omega_0 t\right)$ quickly converges to the Dirichlet value $\pi/2$ as $t$ becomes substantially larger than $1/\omega_0$. For such times the leading term in (\ref{eq:RegOnly}) is clearly $2\pi\omega_0/t$ (\emph{i.e.}, the limit of (\ref{eq:FermiSeed}) as $t\rightarrow+\infty$) which is equal to the result obtained from illegally ``sneaking'' the limit $t\rightarrow+\infty$ into the divergent integral on the right-hand side of (\ref{eq:NoChanceSurvival}). Hence, we have shown how the golden rule can be retrieved from a formal, cutoff-independent regularisation of the integral featured in the expression for the survival probability.
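This can be sketched numerically: evaluating $t^2$ times the right-hand side of (\ref{eq:RegOnly}) (in units where $\omega_0=1$, an arbitrary choice) shows the interpolation between the short-time behaviour $\pi\omega_0t$ and the golden-rule slope $2\pi\omega_0t$:

```python
import numpy as np
from scipy.special import sici

GAMMA_E = 0.5772156649015329   # Euler-Mascheroni constant

def t2_fbar(t, w0=1.0):
    # t^2 times the regular part of f-bar(tau = 0, t)
    si, ci = sici(w0 * t)
    return (-4.0 * np.sin(w0 * t / 2)**2
            + 2.0 * (ci - np.log(w0 * t) - GAMMA_E)
            + np.pi * w0 * t * (np.sign(t) + (2.0 / np.pi) * si))

print(t2_fbar(1e-3) / (np.pi * 1e-3))        # short times: ratio close to 1
print(t2_fbar(200.0) / (2 * np.pi * 200.0))  # late times: ratio close to 1
```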
We refrain from claiming that the terms on the first line of the right-hand side of (\ref{eq:RegOnly}) are relevant descriptions of short-time deviations from the golden rule, as it is clear that using a more exact expression for the atom-field coupling will give better results\footnote{For the sake of exhaustiveness, the regularised dipole-approximated result (\ref{eq:RegOnly}) is plotted in Figs.~\ref{fig:AlmostFermi} and \ref{fig:ZenoFermi}, where it is shown that it does not provide an accurate description of the very short-time behaviour of the system.}. This is examined in the upcoming sect.~\ref{subsec:MultipoleCalc}. Nevertheless, we can notice that the product of (\ref{eq:RegOnly}) by $t^2$, that is, the decay probability, tends to zero as $t\rightarrow0$, which is an agreeable feature of our result. \subsection{Exact coupling: vindication of Fermi's golden rule} \label{subsec:MultipoleCalc} In the present section we will start from the same integral (\ref{eq:CardSineSurv}), and use the exact matrix element (\ref{eq:MatrixMultipole}). This treatment features no infinities and thus does not call for any regularisation procedure. Moreover, it allows us to investigate short-time deviations from Fermi's golden rule in a more direct and reliable way. Start from the survival probability of the electron in the excited state \begin{multline} \label{eq:SomeChanceSurvival} P_{\mathrm{surv}}\left(t\right)=1-\frac{2^{10}}{3^{9}\pi}\alpha^3\,t^2\\\int_0^{+\infty}\mathrm{d}\omega\,\frac{\omega}{\left[1+\left(\frac{\omega}{\omega_\mathrm{X}}\right)^2\right]^4}\sin\!\mathrm{c}^2\left(\left(\omega_0-\omega\right)\frac{t}{2}\right) \end{multline} where we introduced the notation $\omega_{\mathrm{X}}\equiv\left(3/2\right)\left(c/a_0\right)$. The frequency $\omega_{\mathrm{X}}$ is a natural cutoff frequency coming from the exact computation of the interaction matrix element (\ref{eq:MatrixMultipole}).
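Indeed $\omega_{\mathrm{X}}$ sits far above the transition frequency: since $a_0=\hbar/\left(\alpha m_ec\right)$, one has $\omega_{\mathrm{X}}=\left(3/2\right)\alpha m_ec^2/\hbar=\left(4/\alpha\right)\omega_0\approx548\,\omega_0$. A quick check (SI values assumed):

```python
alpha = 7.2973525693e-3       # fine structure constant
c = 2.99792458e8              # speed of light, m/s
a0 = 5.29177210903e-11        # Bohr radius, m
hbar = 1.054571817e-34        # J s
eV = 1.602176634e-19          # J

omega_X = 1.5 * c / a0        # natural cutoff of the exact coupling, rad/s
omega_0 = 10.2 * eV / hbar    # 2p-1s transition frequency, rad/s
print(omega_X / omega_0, 4.0 / alpha)   # both close to 548
```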
The integral in (\ref{eq:SomeChanceSurvival}) is finite at all times and we can compute it numerically or, as we shall now see, analytically.\\ Define \begin{multline} \label{eq:FermiIntegral} I_F\left(t\right)=\int_{-\infty}^{+\infty}\mathrm{d}\omega\,d\left(\omega,t\right)\hspace{15pt}\text{where}\\ \hspace{-5pt}d\left(\omega,t\right)=\theta\left(\omega\right)\frac{\omega}{\left[1+\left(\frac{\omega}{\omega_\mathrm{X}}\right)^2\right]^4}\sin\!\mathrm{c}^2\left(\left(\omega_0-\omega\right)\frac{t}{2}\right). \end{multline} It can be rewritten \begin{multline} \label{eq:ExpandSineAgain} I_F\left(t\right)=-\frac{1}{t^2}\int_0^{+\infty}\mathrm{d}\omega\frac{\omega}{\left(\omega-\omega_0\right)^2}\frac{1}{\left[1+\left(\frac{\omega}{\omega_\mathrm{X}}\right)^2\right]^4}\\\left(\mathrm{e}^{\mathrm{i}\omega t}\mathrm{e}^{-\mathrm{i}\omega_0t}+\mathrm{e}^{-\mathrm{i}\omega t}\mathrm{e}^{\mathrm{i}\omega_0t}-2\right). \end{multline} We would like to compute this as a sum of three integrals corresponding to the three summands on the right-hand side of (\ref{eq:ExpandSineAgain}), but, taken individually, these integrals will diverge because of the singularity at $\omega=\omega_0$. The full integrand in (\ref{eq:ExpandSineAgain}), however, has no singularity at $\omega=\omega_0$. Accordingly we introduce a small positive imaginary part $\epsilon>0$ in the denominator. 
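The finiteness claim is easy to probe numerically: with the form factor of (\ref{eq:MatrixMultipole}) the high-frequency tail of the integral is negligible, while without it the truncated integral keeps growing logarithmically (a sketch; the values $\omega_0=1$, $\omega_{\mathrm{X}}=10$, $t=5$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

W0, WX, T = 1.0, 10.0, 5.0   # arbitrary illustration values

def sinc2(x):
    return np.sinc(x / np.pi)**2   # np.sinc(y) = sin(pi*y)/(pi*y)

def partial_integral(upper, with_form_factor=True):
    def f(w):
        ff = 1.0 / (1.0 + (w / WX)**2)**4 if with_form_factor else 1.0
        return w * ff * sinc2((W0 - w) * T / 2)
    val, _ = quad(f, 0.0, upper, limit=4000)
    return val

# tail contribution of the frequency band [100, 400]:
exact_tail = partial_integral(400.0) - partial_integral(100.0)
dipole_tail = partial_integral(400.0, False) - partial_integral(100.0, False)
print(exact_tail, dipole_tail)   # negligible versus roughly (2/T**2)*log(4)
```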
Introduce \begin{subequations} \label{eq:CapitalH} \begin{align} H_\epsilon\left(t\right)&\equiv\int_{-\infty}^{+\infty}\mathrm{d}\omega\frac{\omega}{\left(\omega-\omega_0+\mathrm{i}\epsilon\right)^2}\frac{\theta\left(\omega\right)}{\left[1+\left(\frac{\omega}{\omega_\mathrm{X}}\right)^2\right]^4}\mathrm{e}^{-\mathrm{i}\omega t}\\ &\equiv\int_{-\infty}^{+\infty}\mathrm{d}\omega\,h_\epsilon\left(\omega\right)\theta\left(\omega\right)\mathrm{e}^{-\mathrm{i}\omega t} \end{align} \end{subequations} to rewrite \begin{equation} \label{eq:Target} I_F\left(t\right)=-\frac{1}{t^2}\lim_{\epsilon\to0^+}\left(\mathrm{e}^{-\mathrm{i}\omega_0t}H_\epsilon\left(-t\right)+\mathrm{e}^{\mathrm{i}\omega_0t}H_\epsilon\left(t\right)-2H_\epsilon\left(0\right)\right). \end{equation} Thus to compute $I_F$ we need only compute $H_\epsilon$, which we do now. The folding theorem and the well-known expression for the Fourier transform of the Heaviside distribution yield \begin{equation} \label{eq:ConvWithDeltavpAgain} H_\epsilon\left(t\right)=\frac{1}{2}\left[\left(\delta\left(\cdot\right)-\frac{\mathrm{i}}{\pi}\mathrm{vp}\,\frac{1}{\cdot}\right)*\bar{h}_\epsilon\left(\cdot\right)\right]\left(t\right). \end{equation} We therefore need to compute the Fourier transform \begin{equation} \label{eq:TrueStart} \bar{h}_\epsilon\left(t\right)=\int_{-\infty}^{+\infty}\mathrm{d}\omega\,h_\epsilon\left(\omega\right)\mathrm{e}^{-\mathrm{i}\omega t} \end{equation} of $h_\epsilon$. We use Cauchy's residue theorem. We know from (\ref{eq:CapitalH}) that $h_\epsilon$ has a second order pole at $\omega_0-\mathrm{i}\epsilon$ and two fourth order poles at $\pm\mathrm{i}\omega_{\mathrm{X}}$, pictured on Fig.~\ref{fig:CauchyPaths}. 
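The pole structure just described, as well as the fast fall-off of $h_{0^+}$ at infinity ($h_{0^+}\sim\omega^{-9}$, which will be used later on), can be confirmed numerically (units with $\omega_0=1$, $\omega_{\mathrm{X}}=2$ assumed):

```python
import numpy as np

W0, WX = 1.0, 2.0   # arbitrary illustration values

def h(w):
    # h_{0+}: the epsilon -> 0 limit of the integrand, Heaviside factor omitted
    return w / ((w - W0)**2 * (1.0 + (w / WX)**2)**4)

# fourth-order pole at +i*WX: |h| grows like |w - i*WX|**(-4)
d = 1e-3
ratio = abs(h(1j * WX + d)) / abs(h(1j * WX + 2 * d))
print(ratio)   # close to 2**4 = 16

# fall-off at infinity: h behaves as WX**8 / w**9
for w in (1e3, 2e3):
    print(abs(h(w)) * w**9 / WX**8)   # close to 1
```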
From (\ref{eq:TrueStart}) we see that we have to close the integration path (Jordan loop) in the lower half of the complex plane for $t>0$, and in the upper half of the plane for $t<0$.\\ \begin{figure} [t] \begin{center} \begin{tikzpicture}[very thick, >=stealth] \draw[->] (-2.5,0) -- (2.5,0); \draw[->] (0,-2.5) -- (0,2.5); \draw (2.5,-.25) node {\small $\mathfrak{Re}\omega$}; \draw (-.5,2.5) node {\small $\mathfrak{Im}\omega$}; \draw[red] (.875,-.5) -- (1.125,-.25); \draw[red] (.875,-.25) -- (1.125,-.5); \draw[red] (1,-.375) circle (5pt); \draw[red] (.875,-.75) node {\small $\omega_0-\mathrm{i}\epsilon$}; \draw[red] (-.125,1.625) -- (.125,1.375); \draw[red] (-.125,1.375) -- (.125,1.625); \draw[red] (0,1.5) circle (5pt); \draw[red] (-.5,1.5) node {\small $\mathrm{i}\omega_{\mathrm{X}}$}; \draw[red] (-.125,-1.625) -- (.125,-1.375); \draw[red] (-.125,-1.375) -- (.125,-1.625); \draw[red] (0,-1.5) circle (5pt); \draw[red] (-.625,-1.5) node {\small $-\mathrm{i}\omega_{\mathrm{X}}$}; \draw[blue, ->] (-2,.125) -- (-1.25,.125); \draw[blue] (-1.25,.125) -- (1.25,.125); \draw[blue, >-] (1.25,.125) -- (2,.125); \draw[blue] (2,.125) arc (5:175:2); \draw[blue] (1.75,1.75) node {\small $\gamma_{t<0}$}; \draw[green!75!black, -<] (2,-.125) -- (1.25,-.125); \draw[green!75!black] (1.25,-.125) -- (-1.25,-.125); \draw[green!75!black, <-] (-1.25,-.125) -- (-2,-.125); \draw[green!75!black] (-2,-.125) arc (185:355:2); \draw[green!75!black] (-1.75,-1.75) node {\small $\gamma_{t>0}$}; \end{tikzpicture} \end{center} \vspace{-10pt} \caption{Jordan loops in the complex $\omega$-plane used to compute the Fourier transform (\ref{eq:TrueStart}). The poles $\omega_0-\mathrm{i}\epsilon$ and $\pm\mathrm{i}\omega_{\mathrm{X}}$ of the integrand are represented by red circled crosses.
\label{fig:CauchyPaths}} \end{figure} It can be checked that the residues of $h_\epsilon\left(\omega\right)\mathrm{e}^{-\mathrm{i}\omega t}$ read \begin{subequations} \label{eq:Residues} \begin{align} \mathrm{Res}\left(h_\epsilon\left(\cdot\right)\mathrm{e}^{-\mathrm{i}\cdot t},\omega_0-\mathrm{i}\epsilon\right)&=\mathrm{e}^{-\mathrm{i}\left(\omega_0-\mathrm{i}\epsilon\right)t}\left(a_0+a_1\,t\right),\\ \mathrm{Res}\left(h_\epsilon\left(\cdot\right)\mathrm{e}^{-\mathrm{i}\cdot t},\mathrm{i}\omega_{\mathrm{X}}\right)&=\mathrm{e}^{\omega_{\mathrm{X}}t}\left(b_0^-+b_1^-\,t+b_2^-\,t^2+b_3^-\,t^3\right),\\ \mathrm{Res}\left(h_\epsilon\left(\cdot\right)\mathrm{e}^{-\mathrm{i}\cdot t},-\mathrm{i}\omega_{\mathrm{X}}\right)&=\mathrm{e}^{-\omega_{\mathrm{X}}t}\left(b_0^++b_1^+\,t+b_2^+\,t^2+b_3^+\,t^3\right)\end{align} \end{subequations} where the $a_i$ and $b_i^\pm$ coefficients depend on $\epsilon$. Whence the Fourier transform (\ref{eq:TrueStart}) \begin{multline} \label{eq:CauchyResidue} \bar{h}_\epsilon\left(t\right)=-2\mathrm{i}\pi\left[\theta\left(t\right)\mathrm{e}^{-\mathrm{i}\left(\omega_0-\mathrm{i}\epsilon\right)t}\left(a_0+a_1\,t\right)\right.\\\left.+\theta\left(t\right)\mathrm{e}^{-\omega_{\mathrm{X}}t}\left(b_0^++b_1^+\,t+b_2^+\,t^2+b_3^+\,t^3\right)\right.\\\left.-\theta\left(-t\right)\mathrm{e}^{\omega_{\mathrm{X}}t}\left(b_0^-+b_1^-\,t+b_2^-\,t^2+b_3^-\,t^3\right)\right]. \end{multline} One can then deduce $H_\epsilon\left(t\right)$ and its limit as $\epsilon\rightarrow0^+$. Setting \begin{subequations} \label{eq:LowertoUpper} \begin{align} a_i&\underset{\epsilon\rightarrow0^+}{\longrightarrow}A_i,\\ b_i^\pm&\underset{\epsilon\rightarrow0^+}{\longrightarrow}B_i^\pm \end{align} \end{subequations} one can see that \begin{align*} B_0^+=B_0^{-*}&\equiv B_0,\\ B_1^+=-B_1^{-*}&\equiv B_1,\\ B_2^+=B_2^{-*}&\equiv B_2,\\ B_3^+=-B_3^{-*}&\equiv B_3.
\end{align*} We give \begin{subequations} \label{eq:CapitalAGalore} \begin{align} A_0&=\frac{\omega_{\mathrm{X}}^8 \left(\omega_{\mathrm{X}}^2-7\omega_0^2\right)}{\left(\omega_0^2+\omega_{\mathrm{X}}^2\right)^5},\\ A_1&=-\mathrm{i}\frac{\omega_0}{\left[1+\left(\frac{\omega_0}{\omega_{\mathrm{X}}}\right)^2\right]^4}\end{align} \end{subequations} and \begin{subequations} \label{eq:CapitalBGalore} \begin{align} B_0&=-\frac{\omega_{\mathrm{X}}^3 \left(-6 \omega_0^2+30\mathrm{i}\omega_0 \omega_{\mathrm{X}}+48 \omega_{\mathrm{X}}^2\right)}{96 (\omega_{\mathrm{X}}+\mathrm{i}\omega_0)^5},\\ B_1&=\frac{\omega_{\mathrm{X}}^3 \left(-3\mathrm{i}\omega_0^3-21 \omega_0^2 \omega_{\mathrm{X}}+51\mathrm{i}\omega_0 \omega_{\mathrm{X}}^2+33 \omega_{\mathrm{X}}^3\right)}{96 (\omega_{\mathrm{X}}+\mathrm{i}\omega_0)^5},\\ B_2&=-\frac{\omega_{\mathrm{X}}^3 \left(-3\mathrm{i}\omega_0^3 \omega_{\mathrm{X}}-15 \omega_0^2 \omega_{\mathrm{X}}^2+21\mathrm{i}\omega_0 \omega_{\mathrm{X}}^3+9 \omega_{\mathrm{X}}^4\right)}{96 (\omega_{\mathrm{X}}+\mathrm{i}\omega_0)^5},\\ B_3&=\frac{\omega_{\mathrm{X}}^3 \left(-\mathrm{i} \omega_0^3 \omega_{\mathrm{X}}^2-3 \omega_0^2 \omega_{\mathrm{X}}^3+3\mathrm{i}\omega_0 \omega_{\mathrm{X}}^4+\omega_{\mathrm{X}}^5\right)}{96 (\omega_{\mathrm{X}}+\mathrm{i}\omega_0)^5}. \end{align} \end{subequations} The computation of $H_{0^+}\left(t\right)$ from (\ref{eq:ConvWithDeltavpAgain}) and (\ref{eq:CauchyResidue}) features no notable conceptual or technical difficulty, but is quite tedious. 
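Before quoting it, a quick numerical sanity check on the coefficients (\ref{eq:CapitalAGalore}) and (\ref{eq:CapitalBGalore}): the combination $A_0+B_0+B_0^*$, which controls the singular terms below, vanishes identically:

```python
def coeffs(w0, wX):
    # A0 and B0 as given in the text
    A0 = wX**8 * (wX**2 - 7.0 * w0**2) / (w0**2 + wX**2)**5
    z = wX + 1j * w0
    B0 = -wX**3 * (-6.0 * w0**2 + 30.0j * w0 * wX + 48.0 * wX**2) / (96.0 * z**5)
    return A0, B0

for w0, wX in ((1.0, 2.0), (1.0, 548.0)):
    A0, B0 = coeffs(w0, wX)
    print(A0 + 2.0 * B0.real)   # vanishes for any choice of frequencies
```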
We only give the result, which reads \begin{multline} \label{eq:PostHardcore} H_{0^+}\left(t\right)=-\mathrm{i}\pi\left[\theta\left(t\right)\mathrm{e}^{-\mathrm{i}\omega_0t}\left(A_0+A_1\,t\right)\right.\\\left.+\theta\left(t\right)\mathrm{e}^{-\omega_{\mathrm{X}}t}\left(B_0^++B_1^+\,t+B_2^+\,t^2+B_3^+\,t^3\right)\right.\\\left.-\theta\left(-t\right)\mathrm{e}^{\omega_{\mathrm{X}}t}\left(B_0^-+B_1^-\,t+B_2^-\,t^2+B_3^-\,t^3\right)\vphantom{\mathrm{e}^{\mathrm{i}}}\right]\\+\mathrm{e}^{-\mathrm{i}\omega_0 t}\left[-\mathrm{i}\left(A_0+A_1\,t\right)\mathrm{Si}\left(\omega_0t\right)+\frac{2}{\omega_0}A_1\sin\left(\omega_0t\right)\right.\\\left.-\left(A_0+A_1\,t\right)\left(\mathrm{Ci}\left(\omega_0t\right)+\mathrm{i}\frac{\pi}{2}\right)-\frac{\mathrm{i}}{\omega_0}A_1\mathrm{e}^{\mathrm{i}\omega_0t}\right]\\-\left[\left(B_0^*-B_1^*\,t+B_2^*\,t^2-B_3^*\,t^3\right)\mathrm{e}^{-\omega_{\mathrm{X}}t}\mathrm{Ei}\left(\omega_{\mathrm{X}}t\right)\right.\\\left.+\left(B_0+B_1\,t+B_2\,t^2+B_3\,t^3\right)\mathrm{e}^{\omega_{\mathrm{X}}t}\mathrm{Ei}\left(-\omega_{\mathrm{X}}t\right)\right]\\-\frac{1}{\omega_{\mathrm{X}}}\left[\left(B_1+B_1^*\right)-\left(B_2+B_2^*\right)t+2\left(B_3+B_3^*\right)t^2\right]\\-\frac{1}{\omega_{\mathrm{X}}^2}\left[\left(B_2-B_2^*\right)\omega_{\mathrm{X}}t+2\left(B_3-B_3^*\right)t\right]-\frac{1}{\omega_{\mathrm{X}}^3}\left(B_3+B_3^*\right)\omega_{\mathrm{X}}^2t^2. \end{multline} Here $\mathrm{Ei}$ stands for the exponential integral \cite{AbramowitzStegun}. Now remains the unenviable task of adding three such terms as prescribed by (\ref{eq:Target}), so as to obtain the exact expression for the survival probability as given by first-order time-dependent perturbation theory. Simplifications are frankly scarce here, but we are ``saved'' from the appearance of any singular terms by the identity \begin{equation} \label{eq:SavingGrace} A_0+B_0+B_0^*=0.
\end{equation} The link between (\ref{eq:SavingGrace}) and the presence/absence of singular terms is explained as follows. As seen from (\ref{eq:Residues}), the quantity $A_0+B_0+B_0^*$ is, up to a factor of $\pm2\mathrm{i}\pi$, equal to the integral of $h_{0^+}$ over any closed curve $\Gamma$ circling around the three poles of $h_{0^+}$ (see Fig.~\ref{fig:CauchyPaths}). We can take $\Gamma$ to be a circle of radius $R$ centred around $z=0$. Jordan's lemmas show that the integral of $h_{0^+}$ over such a curve vanishes when $R\rightarrow+\infty$\footnote{For large $\omega$, $h_{0^+}$ behaves as $\omega^{-9}$.} (and hence, since the integral is independent of $R$ once the three poles are enclosed, for any such $R$), whence (\ref{eq:SavingGrace}).\\ With the help of (\ref{eq:SavingGrace}) we can finally write, from (\ref{eq:Target}) and (\ref{eq:PostHardcore}), {\allowdisplaybreaks \begin{multline} \label{eq:IntheEnd} I_F\left(t\right)=\frac{1}{t^2}\left\{-2A_0\left(\log\left(\frac{\omega_0}{\omega_{\mathrm{X}}}\right)-\mathrm{Ci}\left(\omega_0\left|t\right|\right)\right)+\mathrm{i}\pi\left(B_0-B_0^*\right)\right.\\\left.+A_1\left[-4\frac{\mathrm{i}}{\omega_0}\sin^2\left(\frac{\omega_0t}{2}\right)+\mathrm{i}\pi
t\left(\mathrm{sgn}\left(t\right)+\frac{2}{\pi}\mathrm{Si}\left(\omega_0t\right)\right)\right]\right.\\\left.+\mathrm{i}\pi\left[\mathrm{e}^{-\omega_{\mathrm{X}}t}\theta\left(t\right)\left[\left(B_0^*\,\mathrm{e}^{\mathrm{i}\omega_0t}-B_0\,\mathrm{e}^{-\mathrm{i}\omega_0t}\right)\right.\right.\right.\\\left.\left.\left.+\left(-B_1^*\,\mathrm{e}^{\mathrm{i}\omega_0t}+B_1\,\mathrm{e}^{-\mathrm{i}\omega_0t}\right)t\right.\right.\right.\\\left.\left.\left.+\left(B_2^*\,\mathrm{e}^{\mathrm{i}\omega_0t}-B_2\,\mathrm{e}^{-\mathrm{i}\omega_0t}\right)t^2+\left(-B_3^*\,\mathrm{e}^{\mathrm{i}\omega_0t}+B_3\,\mathrm{e}^{-\mathrm{i}\omega_0t}\right)t^3\right)\right.\right.\\\left.\left.+\mathrm{e}^{\omega_{\mathrm{X}}t}\theta\left(-t\right)\left(\left(B_0^*\,\mathrm{e}^{-\mathrm{i}\omega_0t}-B_0\,\mathrm{e}^{\mathrm{i}\omega_0t}\right)\right.\right.\right.\\\left.\left.\left.-\left(-B_1^*\,\mathrm{e}^{-\mathrm{i}\omega_0t}+B_1\,\mathrm{e}^{\mathrm{i}\omega_0t}\right)t\right.\right.\right.\\\left.\left.\left.+\left(B_2^*\,\mathrm{e}^{-\mathrm{i}\omega_0t}-B_2\,\mathrm{e}^{\mathrm{i}\omega_0t}\right)t^2-\left(-B_3^*\,\mathrm{e}^{-\mathrm{i}\omega_0t}+B_3\,\mathrm{e}^{\mathrm{i}\omega_0t}\right)t^3\right)\right]\right.\\\left.+\mathrm{e}^{-\omega_{\mathrm{X}}t}\mathrm{Ei}\left(\omega_{\mathrm{X}}t\right)\left(\left(B_0^*-B_1^*\,t+B_2^*\,t^2-B_3^*\,t^3\right)\mathrm{e}^{\mathrm{i}\omega_0t}\right.\right.\\\left.\left.+\left(B_0-B_1\,t+B_2\,t^2-B_3\,t^3\right)\mathrm{e}^{-\mathrm{i}\omega_0t}\right)\right.\\\left.+\mathrm{e}^{\omega_{\mathrm{X}}t}\mathrm{Ei}\left(-\omega_{\mathrm{X}}t\right)\left(\left(B_0^*+B_1^*\,t+B_2^*\,t^2+B_3^*\,t^3\right)\mathrm{e}^{-\mathrm{i}\omega_0t}\right.\right.\\\left.\left.+\left(B_0+B_1\,t+B_2\,t^2+B_3\,t^3\right)\mathrm{e}^{\mathrm{i}\omega_0t}\right)\right.\\\left.+\frac{2}{\omega_{\mathrm{X}}^3}\left[-2\left(\left(B_1+B_1^*\right)\omega_{\mathrm{X}}^2-\left(B_2+B_2^*\right)\omega_{\mathrm{X}}\right.\right.\right.\\\left.\left.\left.+2\left(B_3+B_3^
*\right)\vphantom{\omega_{\mathrm{X}}^2}\right)\sin^2\left(\frac{\omega_0t}{2}\right)\right.\right.\\\left.\left.+\mathrm{i}\left(\left(B_2-B_2^*\right)\omega_{\mathrm{X}}-\left(B_3-B_3^*\right)\right)\omega_{\mathrm{X}}t\sin\left(\omega_0t\right)\right.\right.\\\left.\left.+\left(B_3+B_3^*\right)\omega_{\mathrm{X}}^2t^2\cos\left(\omega_0t\right)\vphantom{\frac{2}{\omega_{\mathrm{X}}^3}}\right]\vphantom{\frac{1}{t^2}}\right\}. \end{multline} } This expression, we are aware, is less than plain. Notice, however, that it features the now familiar ``seed'' of Fermi's golden rule, namely, the quantity \begin{equation} \label{eq:GoldenSeed} \frac{1}{t}\mathrm{i}\pi\,A_1\left(\mathrm{sgn}\left(t\right)+\frac{2}{\pi}\mathrm{Si}\left(\omega_0t\right)\right), \end{equation} found on the second line of (\ref{eq:IntheEnd}). Notice from (\ref{eq:CapitalAGalore}) that in the dipole limit $\omega_0/\omega_{\mathrm{X}}\rightarrow0$, one has $A_1=-\mathrm{i}\omega_0$, and one retrieves, from (\ref{eq:SomeChanceSurvival}), the decay constant given by Fermi's golden rule in the dipole approximation. Keeping $\omega_{\mathrm{X}}$ at its actual value, we find a relative error $\left|\mathrm{i}A_1-\omega_0\right|/\omega_0=\SI{1.33d-5}{}$. With $\omega_{\mathrm{X}}$ and thus $A_1$ kept at their actual values, our decay constant $\Gamma=\mathrm{i}\,2^{10}/\left(3^9\pi\right)c^2\alpha^3A_1$ matches that found by Facchi and Pascazio \cite{FacchiPhD,FacchiPascazio}, who treated the problem nonperturbatively, by finding the resolvent for the Hamiltonian (\ref{eq:Hamilton}) of the system in the Laplace domain and then transforming back to the time domain.
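Both the identity (\ref{eq:SavingGrace}) and the quoted relative error can be checked numerically from the coefficients (\ref{eq:CapitalAGalore}) and (\ref{eq:CapitalBGalore}). The following sketch is illustrative only: the values $\omega_0=1$, $\omega_{\mathrm{X}}=2$ are arbitrary (the identity is algebraic in the two frequencies), and the ratio $\omega_0/\omega_{\mathrm{X}}=\alpha/4$ used for the relative error is an assumption consistent with $\omega_{\mathrm{X}}=3c/(2a_0)$ and the $2\mathrm{p}-1\mathrm{s}$ transition frequency.

```python
# Numerical check of A0 + B0 + B0* = 0 and of the dipole-limit relative error.
w0, wX = 1.0, 2.0  # arbitrary positive frequencies: the identity is algebraic

A0 = wX**8 * (wX**2 - 7 * w0**2) / (w0**2 + wX**2)**5
B0 = -wX**3 * (-6 * w0**2 + 30j * w0 * wX + 48 * wX**2) / (96 * (wX + 1j * w0)**5)

# The sum of the residues vanishes, as in eq. (SavingGrace):
assert abs(A0 + B0 + B0.conjugate()) < 1e-12

# Relative error of the dipole-limit decay constant: with
# w0 / wX = alpha / 4 (assumed value for the 2p-1s transition),
# |i A1 - w0| / w0 = 1 - [1 + (w0/wX)^2]^(-4), roughly alpha^2 / 4.
alpha = 1 / 137.035999
r = alpha / 4
rel_err = 1 - 1 / (1 + r**2)**4
print(f"{rel_err:.2e}")  # 1.33e-05
```

The printed value reproduces the $\SI{1.33d-5}{}$ quoted above.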
All the other terms in (\ref{eq:IntheEnd}) are short-time deviations from Fermi's golden rule.\\ It is not too hard, but very tedious, to show from (\ref{eq:IntheEnd}), making use of (\ref{eq:CapitalAGalore}) as well as (\ref{eq:CapitalBGalore}), that the leading term in the Taylor series of $t^2\,I_F\left(t\right)$ at $t=0$ takes the very simple form $\left(\omega_{\mathrm{X}}t\right)^2/6$. Whence the survival probability at very short times: \begin{equation} \label{eq:ZenoFacchi} P_{\mathrm{surv}}\left(t\right)\underset{t\rightarrow0}{\sim}1-\frac{2^{10}}{3^{9}\pi}c^2\,\frac{\alpha^3}{6}\left(\omega_{\mathrm{X}}t\right)^2 \end{equation} as deduced from (\ref{eq:SomeChanceSurvival}). This quadratic behaviour is characteristic of the usual Zeno regime \cite{MisraSudarshan}. The particular short-time expansion (\ref{eq:ZenoFacchi}) has also been obtained by Facchi and Pascazio \cite{FacchiPhD,FacchiPascazio}. As shown in Figs.~\ref{fig:VeryVeryShort} and \ref{fig:VeryShortCut}, the agreement between our perturbative treatment and the exact solution is very good. We can therefore use our method to investigate the short-time behaviour of this system in more detail.\\ \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{VVSFVST.png} \end{center} \caption{Behaviour of the survival probability $\left|c_{\mathrm{e}}\left(t\right)\right|^2$ with the exact coupling as given by the perturbative solution (\ref{eq:IntheEnd}) (solid blue), and Facchi and Pascazio's exact solution \cite{FacchiPascazio} (dashed green). The dimensionless constant $\lambda^2=\frac{2}{\pi}\left(\frac{2}{3}\right)^9\alpha^3\simeq6.4\times10^{-9}$. Remember $1/\omega_{\mathrm{X}}=\SI{1.18d-19}{s}$.
\label{fig:VeryVeryShort}} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{VVSFST.png} \end{center} \caption{Behaviour of the survival probability $\left|c_{\mathrm{e}}\left(t\right)\right|^2$ with the exact coupling as given by the perturbative solution (\ref{eq:IntheEnd}) (solid blue), and Facchi and Pascazio's exact solution \cite{FacchiPascazio} (dashed green). Here the exponential contribution $Z\mathrm{e}^{-\Gamma t}$ to the decay has been subtracted, and only the nonexponential contribution thereto is plotted. The dimensionless constant $\lambda^2=\frac{2}{\pi}\left(\frac{2}{3}\right)^9\alpha^3\simeq6.4\times10^{-9}$. Remember $1/\omega_{\mathrm{X}}=\SI{1.18d-19}{s}$. Also note $Z\simeq1-4.39\,\lambda^2$ \cite{FacchiPascazio}. \label{fig:VeryShortCut}} \end{figure} The survival probability (\ref{eq:SomeChanceSurvival}) is plotted in Fig.~\ref{fig:AlmostFermi}. We see that the deviation from Fermi's golden rule is of order $10^{-8}$--$10^{-7}$. For the system under study here, the golden rule is thus valid to an excellent approximation.\\ \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{AlmostFermi.png} \end{center} \caption{Decay of the survival probability $\left|c_{\mathrm{e}}\left(t\right)\right|^2$ with the regularised dipole-approximated coupling (plotted for completeness) as given by expression (\ref{eq:RegOnly}) (dot-dashed green), the exact coupling as given by expression (\ref{eq:IntheEnd}) (solid blue) and Fermi's golden rule (dashed black). The time axis is logarithmic. \label{fig:AlmostFermi}} \end{figure} To illustrate the transition between the Zeno regime (\ref{eq:ZenoFacchi}) and Fermi's golden rule as predicted by our treatment, we plot in Fig.~\ref{fig:ZenoFermi} the decay probability as given by (\ref{eq:IntheEnd}) as well as the short-time expansion (\ref{eq:ZenoFacchi}) and the linear prediction of the golden rule.
Note that the transition between the Zeno and Fermi regimes takes place around $\SI{d-17}{s}$ after the start of the decay and that after $\SI{d-15}{s}$, the behaviour of the system is completely indistinguishable from that predicted by the golden rule. \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{ZenoFermi.png} \end{center} \caption{Behaviour of the decay probability $1-\left|c_{\mathrm{e}}\left(t\right)\right|^2$ with the regularised dipole-approximated coupling (plotted for completeness) as given by expression (\ref{eq:RegOnly}) (dot-dashed green), the exact coupling as given by expression (\ref{eq:IntheEnd}) (solid blue), Fermi's golden rule (dashed black), and the Zeno behaviour (\ref{eq:ZenoFacchi}) (dotted red). Both axes are logarithmic. \label{fig:ZenoFermi}} \end{figure} \section{Conclusion} \label{sec:Ccl} We have verified, in the framework of first-order time-dependent perturbation theory, that Fermi's rule, which predicts a linear decay of the survival probability, can indeed, for the $2\mathrm{p}-1\mathrm{s}$ transition in atomic Hydrogen, be called ``golden''. The maximal deviation therefrom that we found is of order $10^{-8}$--$10^{-7}$, a clear-cut endorsement. It is a success for a ``rule'' which, as was argued in sect.~\ref{subsec:StartingHere}, is derived with the help of several, not obviously compatible conditions. As such, we think that much larger deviations from the golden rule could be found in other systems. As far as first-order perturbation theory goes, the crucial expression is the squared cardinal sine integral in (\ref{eq:CardSineSurv}).
It tells us that with an enhanced coupling between an atom or another effective two-level system and (electromagnetic) modes which are off-resonant with the transition frequency of the two-level system, one would witness larger deviations from the golden rule.\\ In \cite{FacchiPascazio} Facchi and Pascazio also raised the question of the experimental observability of Zeno deviations from the golden rule. They considered the ratio between the Zeno time $\tau_{\mathrm{Z}}$, where $P_{\mathrm{surv}}\left(t\right)\underset{t\rightarrow0}{\sim}1-\left(t/\tau_{\mathrm{Z}}\right)^2$, and the lifetime $1/\Gamma$ of the excited level as the relevant parameter for the observability. We argue that the relevant ratio is that between the ``cutoff time'' $\tau_{\mathrm{X}}$ and the Zeno time $\tau_{\mathrm{Z}}$. The cutoff time is understood to be defined so that after $\tau_{\mathrm{X}}$, the system exits the Zeno regime in which the survival probability decays quadratically. Therefore, at $t=\tau_{\mathrm{X}}$, we have $P_{\mathrm{surv}}\left(t\right)=1-\left(\tau_{\mathrm{X}}/\tau_{\mathrm{Z}}\right)^2$, and the strength of the Zeno effect is given by $\tau_{\mathrm{X}}/\tau_{\mathrm{Z}}$. This is confirmed by looking at Fig.~\ref{fig:ZenoFermi}, which shows that the maximal discrepancy between the predictions of the golden rule and the actual dynamics of the system is reached approximately at the moment when the system exits the Zeno regime. There is a general method to obtain the Zeno time $\tau_{\mathrm{Z}}$, which is centred \cite{FacchiPhD} on the computation of the expectation value of the squared Hamiltonian $\hat{H}^2$ of the system in the initial state. On the other hand, it is difficult to obtain the cutoff time $\tau_{\mathrm{X}}$ without solving (at least perturbatively, as we did here) the dynamics of the system.
Given the delicate nature of analytical approaches (see sect.~\ref{subsec:MultipoleCalc} as well as \cite{FacchiPhD,FacchiPascazio}) for this particular transition, which is much simpler than many other transitions to describe theoretically, we suspect that the best way to evaluate the cutoff time for a general transition would be by numerical evaluation of the integral on the right-hand side of (\ref{eq:CardSineSurv}). We showed in \cite{EdouardArXiv} that the ratio $\tau_{\mathrm{X}}/\tau_{\mathrm{Z}}$ scales, for hydrogen-like atoms with $Z$ protons, like $Z$, a favourable scaling for the observability of the Zeno regime.\\ Our investigations also shed light on the dipole approximation. We have seen that while the regularisation procedure of sect.~\ref{subsec:DipoleCalc} provides a nicely cutoff-independent treatment of the problem in the framework of the dipole approximation, and yields a result which is in agreement with Fermi's golden rule at ``long times'', the predictions it yields on the very short time dynamics of the system are inadequate. Namely, it does not provide the correct dynamics in the Zeno regime, as seen in Fig.~\ref{fig:ZenoFermi}. We might ask, however, how the predictions of the dipole approximation fare when the regularisation is performed more directly (and, arguably, less elegantly) via the introduction of a cutoff, as presented in sect.~\ref{subsec:DipoleDsc}. As expected from the usual Zeno dynamics \cite{MisraSudarshan}, the very short time behaviour yielded by the truncated integral (\ref{eq:Trunk}) is a quadratic decay in time. Namely, \begin{equation} \label{eq:TrunkEarly} 1-\frac{2^{10}}{3^{9}\pi}c^2\,\alpha^3t^2\hspace{-2.5pt}\int_0^{\omega_{\mathrm{C}}}\hspace{-7.5pt}\mathrm{d}\omega\,\omega\sin\!\mathrm{c}^2\left[\left(\omega_0-\omega\right)\frac{t}{2}\right]\underset{t\rightarrow0}{\sim}1-\frac{2^{10}}{3^{9}\pi}c^2\frac{\alpha^3}{2}\left(\omega_{\mathrm{C}}t\right)^2.
\end{equation} This can either be seen by computing the Taylor series of the right-hand side of (\ref{eq:Trunk}), or, much more directly, by taking $t=0$ in the integral on the left-hand side of (\ref{eq:TrunkEarly}). Now, remember that in the case of the exact coupling, the Zeno behaviour is given by (\ref{eq:ZenoFacchi}). One can then choose the cutoff frequency of the dipole approximation so that the very short time predictions of the dipole approximation, with cutoff, match the exact short time dynamics of the system. Comparison of (\ref{eq:ZenoFacchi}) with (\ref{eq:TrunkEarly}) shows that a perfect match is reached if we choose \begin{equation} \label{eq:UMadBro} \omega_{\mathrm{C}}\equiv\frac{\omega_{\mathrm{X}}}{\mathpalette\DHLhksqrt{3}}=\frac{\mathpalette\DHLhksqrt{3}}{2}\frac{c}{a_0}\simeq 0.866\frac{c}{a_0}. \end{equation} In Fig.~\ref{fig:CheatandWin} we compare the predictions of the dipole approximation, with the carefully picked cutoff frequency (\ref{eq:UMadBro}), with the predictions obtained with the exact atom-field coupling. It is interesting, and quite impressive, that although the dipole approximation was only fitted, through a simple choice of the cutoff frequency, to the exact Zeno dynamics of the system, this choice also yields excellent agreement during the transition between the Zeno and Fermi regimes\footnote{We also obtain an excellent agreement in the Fermi regime, but that was to be expected. The agreement is not perfect, though, as the decay constants for the exact and dipole couplings are slightly different. See the discussion below (\ref{eq:GoldenSeed}).}. Hence we can conclude that for the $2\mathrm{p}-1\mathrm{s}$ transition in atomic Hydrogen, the dynamics of the system is very well described at all times within the framework of the dipole approximation, if one makes the ``correct'' choice (\ref{eq:UMadBro}) for the cutoff.
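The matching behind (\ref{eq:UMadBro}) is easy to check numerically: as $t\rightarrow0$ the sinc factor in (\ref{eq:TrunkEarly}) tends to $1$, so the truncated integral tends to $\omega_{\mathrm{C}}^2/2$, and the choice $\omega_{\mathrm{C}}=\omega_{\mathrm{X}}/\sqrt{3}$ makes this equal to the exact Zeno coefficient $\omega_{\mathrm{X}}^2/6$. A minimal sketch, with frequencies in arbitrary units (only the ratio $\omega_{\mathrm{C}}/\omega_{\mathrm{X}}$ matters):

```python
import numpy as np

w0, wX = 1.0, 2.0        # arbitrary units; only the ratio wC/wX matters
wC = wX / np.sqrt(3)     # the cutoff choice of eq. (UMadBro)

def truncated_integral(t, n=100_000):
    """Midpoint rule for int_0^wC dw w sinc^2((w0 - w) t / 2)."""
    w = (np.arange(n) + 0.5) * (wC / n)
    # np.sinc(x) = sin(pi x) / (pi x), hence the division by pi
    return np.sum(w * np.sinc((w0 - w) * t / (2 * np.pi))**2) * (wC / n)

# As t -> 0 the sinc factor tends to 1 and the integral tends to wC^2 / 2 ...
assert abs(truncated_integral(1e-6) - wC**2 / 2) < 1e-9
# ... which the cutoff wC = wX / sqrt(3) makes equal to the Zeno
# coefficient wX^2 / 6 of eq. (ZenoFacchi).
assert abs(wC**2 / 2 - wX**2 / 6) < 1e-12
```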
\begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{DipoleWins.png} \end{center} \caption{Behaviour of the decay probability $1-\left|c_{\mathrm{e}}\left(t\right)\right|^2$ with the exact coupling as given by expression (\ref{eq:IntheEnd}) (solid blue) and the dipole-approximated coupling with cutoff frequency $\omega_{\mathrm{C}}=\omega_{\mathrm{X}}/\mathpalette\DHLhksqrt{3}$ as given by expression (\ref{eq:Trunk}) (dot-dashed orange). Both axes are logarithmic. \label{fig:CheatandWin}} \end{figure} \section*{Acknowledgments} Vincent Debierre acknowledges support from CNRS (INSIS doctoral grant). Thomas Durt acknowledges support from the COST 1006 and COST 1043 actions. We thank Prof. Édouard Brainis for helpful discussions and valuable suggestions on the presentation of our results.
\section{Introduction} This paper addresses the question of the ontology of the vector potential. This question has been debated among physicists ever since Maxwell introduced the vector potential to account for the Faraday effect. The magnetic field is the curl of the vector potential. The latter may be shifted by adding to it a vector field the curl of which vanishes. Thus infinitely many vector potentials correspond to one and the same magnetic field. This is what is called gauge symmetry. Theories which are formulated in terms of potentials are ``gauge theories''. Such theories have acquired growing importance in theoretical physics, in particular in condensed matter, elementary particle physics and cosmology. Spontaneous symmetry breaking is also a common feature of many fields of physics: the ground state of a many-particle system may be invariant under the operations of a subgroup of the total symmetry group of the Hamiltonian. This is generally linked to the occurrence of a phase transition, for example when the temperature increases, from an ordered state, with low symmetry, to a less ordered state with higher symmetry. For example, ferromagnetic order has axial symmetry around the magnetization vector. A sufficient increase in temperature triggers a transition, at a critical temperature $T_c$, to a paramagnetic state which has the full rotational symmetry of the Hamiltonian. I would like to connect those two topics and discuss some of the lessons we can learn about the material world, and knowledge of its laws, by examining various aspects of gauge theories. The latter are relevant in classical physics, and are connected in quantum mechanics to the phase of the wave function. The ontological status of the phase of the wave function may be clarified by discussing some concepts such as the Berry phase and particular cases of spontaneous gauge symmetry breaking, i.e. the phenomena of superfluidity and superconductivity.
The relationship of gauge symmetries with charge conservation is especially clear in the latter phenomena, where the breaking of gauge symmetry is associated with the canonical conjugation of phase and particle number. Section \ref{classic} below will review some basic notions on gauge symmetry in classical physics. Section \ref{qm} describes the quantum mechanical version of this topic, discusses the Aharonov-Bohm effect in section \ref{ab} and the Berry phase in section \ref{berry}. Both are relevant experimental and theoretical topics for my purpose. Section \ref{supercond} discusses some aspects of superconductivity, a phase with spontaneously broken gauge symmetry. The discussion among philosophers about the topics mentioned above revolves around central questions of knowledge: how can the human mind access truths about the world? Are theories in physics mere representations of phenomena? Are they able to reflect, in a more or less exact way, real processes of the world? Are theoretical entities such as fields, particles, potentials, etc., real objects, independent of the human mind? This will be discussed in section \ref{sr}. Should all theories of physics eventually reduce to one fundamental Theory of Everything? Or are there emergent properties which have qualitative features absent from the fundamental microscopic equations? Emergence will be discussed in section \ref{emerg}. Proposals and ideas expressed in this paper are summarized in section \ref{conclu}.
\section{Classical physics} \label{classic} Two topics are of interest in this chapter, that of Maxwell's equations for classical electrodynamics, and that of parallel transport, such as is at work in the Foucault pendulum. \subsection{Maxwell equations, the Faraday effect, and the vector potential} The electric field $\vec{E}(x,t)$ and magnetic field $\vec{B}(x,t)$ obey the four Maxwell equations, which describe how the fields are related to one another and to static or moving charges. These equations read: \begin{eqnarray}\label{maxw} (a)~~~~\mathrm{div}\,\vec{E} =\rho~~~~~~~~~ &~~~~~~~~~~~~~~&(c)~~~~\mathrm{div}\,\vec{B}=0\\ (b)~~~~\mathrm{curl}\,\vec{B}-\frac{\partial \vec{E}}{\partial t}=\vec{j}&~~~~~~~~&(d)~~~~\mathrm{curl}\,\vec{E}+\frac{\partial \vec{B}}{\partial t}=0 \end{eqnarray} where $\rho$ is the charge density and $\vec{j}$ is the current density. Quantities such as the electric potential $V(x,t)$ and the vector potential $\vec{A}$ are usually labeled ``auxiliary quantities''. They completely determine $\vec{E}$ and $\vec{B}$ according to: \begin{equation}\label{gauge1} \vec{B}= \vec{\nabla} \wedge \vec{A} \hspace{2cm} \vec{E} = - \frac{\partial}{\partial t}\vec{A} -\vec{\nabla}V \end{equation} On the other hand, $\vec{B}$ and $\vec{E}$ do \underline{not} determine $V$ and $\vec{A}$. If $f$ is a scalar function of space and time, the following transformation on the potentials, a ``gauge transformation'', does not alter the fields\footnote{This is due to equations (\ref{gauge1}) and to $\vec{\nabla} \wedge \vec{\nabla} =0$.}: \begin{equation}\label{gauge2} \vec{A} \rightarrow \vec{A} + \vec{\nabla} f \hspace{2cm} V \rightarrow V -\frac{\partial}{\partial t}f. \end{equation} This is the essence of gauge invariance, or gauge symmetry. Is this merely an ambiguity of the mathematical representation of physical states? A mere representation surplus? References \cite{guay,healey} are examples of philosophical investigation of gauge theories.
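The footnote's observation, that the fields are untouched because $\vec{\nabla} \wedge \vec{\nabla} f=0$, can be checked numerically. The following sketch uses illustrative toy fields on a periodic grid (the particular $\vec{A}$ and $f$ are arbitrary smooth choices) and finite-difference derivatives to verify that $\vec{B}=\vec{\nabla}\wedge\vec{A}$ is unchanged under the gauge transformation (\ref{gauge2}):

```python
import numpy as np

# Toy fields on a 3-d periodic grid; A and f are arbitrary smooth choices.
n = 32
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
h = x[1] - x[0]

def grad(f):
    return [np.gradient(f, h, axis=k) for k in range(3)]

def curl(A):
    d = lambda F, k: np.gradient(F, h, axis=k)
    return [d(A[2], 1) - d(A[1], 2),
            d(A[0], 2) - d(A[2], 0),
            d(A[1], 0) - d(A[0], 1)]

A = [np.sin(Y), np.sin(Z), np.sin(X)]   # some vector potential
f = np.sin(X) * np.cos(Y)               # arbitrary gauge function

# Gauge transformation A -> A + grad f
A_gauged = [a + g for a, g in zip(A, grad(f))]

# The magnetic field B = curl A is unchanged: the discrete derivative
# operators along different axes commute, so curl(grad f) vanishes.
for b, bg in zip(curl(A), curl(A_gauged)):
    assert np.allclose(b, bg, atol=1e-9)
```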
David Gross \cite{gross} comments on the way Maxwell introduced the vector potential in order to account for the Faraday effect. The latter is the occurrence of an electric current in a closed conducting loop when the magnetic flux threading the loop varies in time. Maxwell did not accept the non-locality of the effect: consider a situation such that the magnetic field is concentrated in a thin cylinder at the center of a closed conducting loop, and vanishes elsewhere. How could an electric current be induced by a magnetic flux variation far away from the loop, with zero intensity of the field at its locus? Maxwell found a satisfactory solution by inventing the vector potential $\vec{A}$. The latter has non zero values in regions where the magnetic field vanishes. The time variation of the flux through the loop could now be ascribed to $\partial_t\vec{A}$, together with the relationship of the electric field with the time variation of $\vec{A}$: $\vec{E}= -\partial_t \vec{A}$. The electric field then acts locally on the metallic loop, where the vector potential is non zero: a local description of phenomena is retrieved. There was no doubt in Maxwell's mind that $\vec{A}$ was a physical (i.e. real) field. But a problem appears with gauge invariance, as exhibited by equation (\ref{gauge2}). How can a physical object exist if it can be described by an infinite number of different vector fields? What led Maxwell to think of the vector potential as a physical field was the actual non zero value of $\frac{\partial}{\partial t} \vec{A}$ at the locus of the conductor, and the gauge invariance of the circulation of $\vec{A}$ along a closed loop $C$. This is an example of holonomy, i.e. a property of a closed loop. Indeed, $\oint_C \vec{A}.\vec{ds}$ is, by Stokes' theorem, equal to the flux $\Phi_C$ of $\vec{B}$ through the loop $C$.
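This holonomy can be made concrete with a small numerical sketch. The uniform field, symmetric-gauge potential and gauge function below are illustrative choices, not tied to any particular system: the circulation of $\vec{A}$ around a circle reproduces the enclosed flux, and is unchanged when a gradient is added to $\vec{A}$.

```python
import numpy as np

B0, r = 1.5, 2.0  # toy values: uniform field B0 along z, loop of radius r
theta = np.linspace(0.0, 2 * np.pi, 4001)
x, y = r * np.cos(theta), r * np.sin(theta)
dxdt, dydt = -r * np.sin(theta), r * np.cos(theta)

def circulation(Ax, Ay):
    """oint_C A . ds along the circle, by the trapezoidal rule."""
    g = Ax * dxdt + Ay * dydt
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta))

# Symmetric-gauge potential for a uniform field: A = (B0/2)(-y, x, 0)
Ax, Ay = -0.5 * B0 * y, 0.5 * B0 * x
flux = B0 * np.pi * r**2

# Stokes' theorem: the circulation equals the flux through the loop ...
assert np.isclose(circulation(Ax, Ay), flux)

# ... and adding a gradient (here of f = x^2 y) does not change it.
gx, gy = 2 * x * y, x**2
assert np.isclose(circulation(Ax + gx, Ay + gy), flux)
```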
It is indeed gauge invariant since the circulation of a gradient on a closed loop is identically zero: $\oint_C \vec{\nabla}f.\vec{ds}= \int_S (\vec{\nabla}\wedge \vec{\nabla}f).\vec{dS}=0$ ($S$ is the surface subtended by the closed curve $C$). In the presence of charges and currents, Maxwell's equations impose the conservation of charge. In the classical theory of electromagnetism, the connection between gauge invariance and charge conservation was only realized in 1918, through Emmy Noether's first theorem \cite{noether} and Weyl's attempts to construct a unified theory of gravitation and electromagnetism \cite{Weyl}. This connection will be discussed in another section of this paper. The positivist attitude towards science (as clearly expressed, for example, by Duhem \cite{duhem} or Mach \cite{mach}) prevailed among many of Maxwell's followers: Hertz, Heaviside, Lorentz, etc., down to Aharonov \cite{Aharonov}. This posture is that of Cardinal Bellarmin, who approved of Galileo's work as long as the heliocentric hypothesis merely allowed one to account for phenomena ({\it{``sauver les apparences''}}, ``to save the appearances'')\cite{duhem}\footnote{Duhem strongly advocates Bellarmin's position, and would have condemned Galileo...}, but prohibited drawing ontological inferences about the world. Many physicists have adopted the view that the vector potential is a practical tool to simplify Maxwell's equations and to account for phenomena, but has no physical meaning, no ontological content. The rationale behind this view is the gauge-dependent nature of $\vec{A}$ which makes it unobservable\footnote{Observable, or unobservable, is used here in the physicists' way: an observable thing is one which has causal powers allowing it to produce detectable effects, whether by a signal on a screen, a trace on a chart, or by other technologies available to the experimentalist; can an unobservable thing (in the physicist's meaning of the word) be real?
Physicists in general dismiss this question as meaningless...} as a local quantity. As we shall see, a spontaneous gauge symmetry breaking seems to turn this ``unobservable'' object into a directly measurable one. Gauge theories have recently been discussed from a philosophical viewpoint by Healey \cite{healey}, who has concentrated on the non-Abelian Yang-Mills theories which appear in the standard model. As this discussion involves specialized notions, such as ``soldering forms of fiber bundles'', which make the reading of his book rather arduous, I will not discuss Healey's work in detail. I only summarize his position, namely that Yang-Mills theories refer to nonlocal properties encoded in holonomies, and the local gauge symmetries that characterize them are purely formal and have no direct empirical consequences. Brading and Brown \cite{bb} on the other hand insist that Noether's first theorem, which will be discussed below, establishes that the {\it{very fact that a global gauge transformation does not lead to empirically distinct predictions is in itself non trivial.}} They state that {\it{the freedom in our descriptions is no 'mere' mathematical freedom -- it is a consequence of a physically significant structural feature of the theory.}} A rather easy introduction to gauge invariance can be found in a field theory treatise such as reference \cite{ramond}. As will be clear in the following, my own attempt at discussing the simpler Abelian gauge theories within non-relativistic quantum mechanics deals with the same topics: are there conditions for which gauge potentials can be reasonably associated with a real material object? The philosophical background of this paper might be called the question of scientific realism: are the theoretical entities which appear in the course of science real? Under what conditions can we have good reasons to believe in the reality of a theoretical entity? Are electrical or magnetic fields real? Are potentials real?
Are physical laws described by theories real? A related question is that of materialism: according to the latter, the human mind and the mind-independent reality are both different forms of matter; can the human mind, in its individual or social form, access in principle knowledge of true aspects of matter? Those questions are discussed at greater length in section \ref{sr}. The classical Hamilton function $H$ for a single charged particle in the presence of potentials is expressed as: \begin{equation}\label{hamilton} H=\frac{1}{2m}\left( \vec{p} -e\vec{A} \right)^2 +eV \end{equation} where $\vec{p}$ is the canonical momentum. The dynamical equation describing the motion of a charged classical particle is the Lorentz equation, which can be derived from equation (\ref{hamilton}): \begin{equation}\label{lorentz} m\frac{d^2 \vec{r}}{dt^2} = e\vec{E} + e\left( \frac{d\vec{r}}{dt} \right)\wedge \vec{B} \end{equation} The Hamiltonian (equation (\ref{hamilton})) is expressed in terms of the potentials, while equation (\ref{lorentz}) is expressed in terms of the fields. The choice, in classical physics, is a matter of taste. \subsection{Parallel transport and the Foucault pendulum}\label{parallel} The discovery of the quantum mechanical Berry phase \cite{berry}, discussed in section \ref{qm} of this paper, has led to the rediscovery of a hitherto little studied gauge invariance connected, in classical physics, to parallel transport. Two concepts are of interest in this topic: the concept of {\it{anholonomy}} and that of {\it{adiabaticity}}. Quoting Berry \cite{berry}: {\it{``Anholonomy is a geometrical phenomenon in which nonintegrability causes some variables to fail to return to their original values when others, which drive them, are altered through a cycle...Adiabaticity is slow change and denotes phenomena at the borderline between dynamics and statics''}}.
I note in passing that adiabaticity is yet another concept which supersedes the traditional textbook antinomy between statics and dynamics. The simplest example of anholonomy is the change of the direction of the swing of a Foucault pendulum after one rotation of the Earth. Visitors to the Panth\'eon in Paris can check that this is a phenomenon at work in the objective real world. If a unit vector $\vec{e}$ is transported in a parallel fashion over the surface of a sphere, its direction is changed by an angle $\alpha (C)$ after a closed circuit $C$ on the sphere has been completed. $\alpha (C)$ is found to be the solid angle subtended by $C$ at the center of the sphere. It is expressed by the circulation of a certain vector on $C$, the result of which is independent of the choice of basis vectors\footnote{For details of the derivation see ref. \cite{berry}.}. The latter freedom of choice is a gauge symmetry, the change of basis vectors being equivalent to a change of gauge. This feature is analogous to what we will find in quantum mechanics, either when discussing adiabatic transport of a quantum state, or the electromagnetic vector potential: an objective phenomenon of nature depends on the circulation of a vector quantity along a closed loop, although that quantity, when gauge invariance prevails, cannot be defined at any point along the circuit. Before going over to quantum mechanics, let me first discuss some ideas which are implicit in what I have written above, namely my scientific realist point of view. \section{Scientific Realism and Materialism}\label{sr} In his interesting book {\it{Representing and Intervening}} \cite{hacking}, Ian Hacking critically reviews a number of positivist or agnostic philosophers, from Comte to Duhem \cite{duhem}, Kuhn \cite{kuhn}, Feyerabend \cite{feyer}, Lakatos \cite{lakatos}, van Fraassen \cite{frassen}, Goodman \cite{goodman}, Carnap, etc.
I define positivism here, loosely, as the philosophical thesis which reduces knowledge to establishing a correspondence between theories, or mathematical symbols, and phenomena, and denies that it may access ontological truths, dubbed ``metaphysics''. Hacking writes (p.~131), about various trends of positivism: {\it{Incommensurability, transcendental nominalism, surrogates for truth, and styles of reasoning are the jargon of philosophers. They arise from contemplating the connection between theory and the world. All lead to an idealist cul-de-sac. None invites a healthy sense of reality...By attending only to knowledge as representation of nature, we wonder how we can ever escape from representations and hook-up with the world. That way lies an idealism of which Berkeley\footnote{Berkeley was the finest example of philosophical idealism, which in his case is solipsism. Kant rejected it in favor of transcendental idealism.} is the spokesman. In our century (the twentieth) John Dewey has spoken sardonically of a spectator theory of knowledge...I agree with Dewey. I follow him in rejecting the false dichotomy between acting and thinking from which such idealism arises...Yet I do not think that the idea of knowledge as representation of the world is in itself the source of that evil. The harm comes from a single-minded obsession with representation and thinking and theory, at the expense of intervention and action and experiment}}. I agree with Hacking, inasmuch as he defends scientific realism. Scientific realism says that the entities, states and processes described by correct theories really do exist. I do not underestimate, as I think Hacking seems to do, the explanatory power of a correct theory, such as Maxwell's theory for electromagnetism. But I believe that Hacking is fundamentally correct in stating that the criterion of reality is practice. He describes a technique which uses an electron beam for specific technical results.
This convinces him that electrons exist. He thinks that ``{\it{reality has more to do with what we do in the world than with what we think about it.}}'' He discusses experiments at length, and points out that experimenting is much more than observing: it is acting on the world, it is a practical activity. The certainty I have about the reality of a magnetic field originates from experiments, observations, theory\footnote{Observation here is literally seeing a certain intensity value on a screen or on a chart connected by electric leads to a conducting solenoid; or observing how a charged particle's motion is deflected when a magnetic field is turned on, etc.}, and practice\footnote{Practice ranges from using a compass for sea travel to Nuclear Magnetic Resonance used in medical imagery, for example.}. This certainty is intimately connected, not only with Maxwell's equations, but also with various historical acquisitions of physics, mostly during the nineteenth century; for example the certainty that those equations correctly describe a vast amount of electromagnetic phenomena, which are at the basis of countless technologies used every day by billions of humans, and which govern a vast amount of industrial production. Hacking points out that not all experiments are loaded with theory, contrary to statements by Lakatos \cite{lakatos}. Maxwell's equations belong to that category of discoveries where experiments were intimately intertwined with theory. Consider the following quotation: ``{\it{The question whether objective truth can be attributed to human thinking is not a question of theory but a practical question. Man must prove the truth -- i.e. the reality and power, the this-sidedness of his thinking -- in practice. The dispute over the reality or non-reality of thinking that is isolated from practice is a purely scholastic question}}''. That is Marx's thesis 2 on Feuerbach \cite{marxfeuer}, published in 1845!
A second quotation is also relevant: ``{\it{The result of our action demonstrates the conformity ({\"U}bereinstimmung) of our perceptions with the objective nature of the objects perceived}}''. That is due, in 1880, to Engels \cite{engelspractice}, who is also the author of a well known expression: ``{\it{The proof of (the reality of) the pudding is that you eat it}}''. Compare with Hacking (p.~146 of \cite{hacking}): ``{\it{``Real'' is a concept we get from what we, as infants, could put in our mouth}}''. Why does Hacking not refer to those predecessors who have stressed, as he does, the practice criterion as the criterion of reality? The answer is probably on p.~24 of Hacking's book \cite{hacking}: ``{\it{...realism has, historically, been mixed up with materialism, which, in one version, says everything that exists is built up out of tiny material blocks...The dialectical materialism of some orthodox Marxists gave many theoretical entities a very hard time. Lyssenko rejected Mendelian genetics because he doubted the reality of postulated genes}}''. It is a pity that Hacking, who reviews in great detail all sorts of nuances between the various doctrines he eventually calls idealist, dismisses materialism on the basis of a version of materialism (``of some orthodox Marxists'') long outdated. As for his dismissal of dialectical materialism, he is certainly right in condemning its dogmatic degeneracy during the Stalin era. But is this the end of the story? Dialectical materialism -- or at least its reputation -- suffered a severe blow, as a useful and rational philosophical system, when it was used as official state philosophy in the USSR. Much to the contrary, nothing in the founding philosophical writings \cite{marx,engels,lenin} could justify turning them into an official State philosophy. One may surmise that the striking political achievements of the first years of the Soviet revolution made that appear as a positive step.
This produced however such catastrophes as State support for Lyssenko's theories, based on the notion that genetics was a bourgeois science, while Lamarckian concepts were declared at government level to be correct from the point of view of a caricature of dialectical materialism. It is understandable that such nonsense in the name of a philosophical thesis turned the latter into a very questionable construction in the eyes of many. Dialectical materialism itself is an open system, which has no lesson to teach beforehand about specific objects of knowledge, and insists \cite{engels} on taking into account all lessons taught by the advancement of science. It is perhaps time for a serious critical assessment of this philosophical thesis. The question of the possibility of general theoretical statements about the empirical world is not a negligible one. Anyone who would dismiss Hacking's positions on realism on the basis that he made an erroneous statement about quantum mechanics\footnote{On p.~25 of \cite{hacking} one reads: ``{\it{Should we be realists about quantum mechanics? Should we realistically say that particles do have a definite although unknowable position and momentum?}}''. In fact the very classical concept of trajectory, with simultaneously well defined position and momentum, is invalid for a microscopic particle. Particles do not have simultaneously definite position and momentum. In a propagation process, a particle follows simultaneously different trajectories, which is an aspect of the superposition principle. Realism about quantum mechanics is justified, and natural, as soon as one admits that classical behaviour is, in general, an approximation valid for actions large compared to $\hbar$.} would certainly not do justice to his philosophical views. Hacking distinguishes between realism about theory and realism about ``entities'' (atoms, electrons, quarks, etc.), e.g. p.
26 of reference \cite{hacking}: ``{\it{The question about theories is whether they are true or are true-or-false...The question about entities is whether they exist}}''. He is a realist about ``entities'' but doubts realism about theories. In my view, electromagnetism is a good example where realism about theory and realism about ``entities'' (magnetic or electric fields, currents and charges, etc.) are both relevant. Take one example of well known technology: magnetic fields, nuclear spins in animal tissues, and interactions of the latter with microwave radiation combine to provide images of the interior of our body. These images are sufficiently accurate that tumors, for example, having thus been made ``observable'' on photographs or fluorescent screens, can be efficiently removed by surgery. Does this allow doubts to persist about the reality of magnetic fields, nuclear spins, or electromagnetic radiation? Theories explaining the behaviour of nuclear spins, their interaction with magnetic fields and with electromagnetic radiation, and eventually with fluorescent screens, etc., must have, with a certain degree of accuracy, within certain ranges of experimental parameters, an indisputable truth content. However, Maxwell's theory of electromagnetism might also be said to be false, since it ignores the quantum mechanical aspects of light. Can it be both true and false? Is the distinction between realism about theories and realism about ``entities'' a fully rational one? Dialectical materialism offers an interesting view. First, contrary to the caricature mentioned above, materialism gives a clear answer to the ``{\it{gnoseological problem of the relationship between thought and existence, between sense-data and the world...Matter is that which, acting on our senses, produces sensations.}}'' This was written in 1908 by Lenin \cite{lenin}, far from Hacking's caricature quoted above.
It may look too simple when technology is intercalated between matter and the screens on which we read experimental results. As discussed by Bachelard \cite{bachelard} and Hacking, a reliable laboratory apparatus is a phenomenon operator which transforms causal chains originating from the sample into readable signals on a chart or on a screen. Technology or not, matter is the external source of our sensations. So much for materialism. Dialectical materialism adds a fundamental aspect of matter, i.e. that contraries coexist and compete with each other within things in Nature. Depending on which contrary dominates the competition (the contradiction), and under what conditions, the causal chains originating from the thing and causing phenomena will take different forms, which are reflected in theories. Epistemics and ontology are intimately intertwined. This is why Maxwell's theory of light is both true and false. In all electromagnetic wave-like phenomena it has a definite truth content. Theories undergo a complex historical process of improving, sometimes correcting qualitatively, representations of how things are. The theory of electromagnetism from Maxwell's treatise \cite{maxwell} of 1873 to this day is a good example. Maxwell thought, and many a physicist of his time with him, that his treatise marked the final point for physics. Quantum mechanics, special relativity, general relativity, condensed matter physics, atomic physics, astrophysics and high energy physics suggest on the contrary that the progress of knowledge of nature is inexhaustible. This is Chalmers' point of view \cite{chalmers}, for example. It is tightly associated with technological improvements which are themselves the results of scientific advances. Some theories may prove false. Some theories may be true. But most good theories are not, in general, either true or false. Parameters and orders of magnitude have to be specified.
\section{Quantum Mechanics} \label{qm} \subsection{Electromagnetic gauge symmetry} The ontological question about the vector potential was revived when it became clear that the Schr\"{o}dinger equation for charged particles in the presence of a magnetic field had to be formulated in terms of the potentials $\vec{A}$ and $V$ (apart from the Zeeman term, which will be dropped here from the picture for simplicity, with no loss of generality\footnote{This means that we are interested here in orbital degrees of freedom, as if the spin degrees of freedom were frozen by a large enough field.}). The reason is that there is no choice analogous to that between equations (\ref{hamilton}) and (\ref{lorentz}). The only starting point in quantum mechanics is the Hamiltonian\footnote{As opposed to the classical case, where one could start with the Lorentz force equation. The Lagrangian formulation also deals only with potentials; the choice between the Lagrangian formulation and the Hamiltonian one is a technical matter.}, which is given by (\ref{hamilton}), where $\hat{\vec{p}}$ is now an operator: $\hat{\vec{p}} = -i\hbar \vec{\nabla}$. Quantum mechanics substitutes the notion of quantum state for the classical notion of trajectory. The latter is irrelevant at the microscopic level, as evidenced by the non-commutativity of momentum $p$ and coordinate $x$. Gauge invariance is now expressed by the simultaneous transformation of equation (\ref{gauge2}) with: \begin{equation}\label{gauge3} \Psi \rightarrow \Psi \exp\left(i f(x, y, z, t)\right), \end{equation} where $\Psi$ is a solution of the time dependent Schr\"{o}dinger equation. Thus the state is described up to a phase factor. Most quantum mechanics textbooks, at least up to Berry's discovery in 1984, state that the overall phase factor of the wave function describing a system has no physical meaning, since $|\Psi|^2$ is unchanged when the overall phase changes\footnote{See for example ref. \cite{landau}.}.
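The claim that the transformation (\ref{gauge3}), performed jointly with the potential transformation (\ref{gauge2}), leaves the physics unchanged can be checked symbolically. The sketch below is my own illustration, not part of the original argument: it works in one spatial dimension with a minimal-coupling Hamiltonian, and verifies that the gauge-transformed wave function solves the gauge-transformed Schr\"{o}dinger equation.

```python
# Symbolic check (sympy) that the Schrodinger equation is covariant under
# a joint gauge transformation, in one spatial dimension:
#   A -> A + df/dx,   V -> V - df/dt,   psi -> psi * exp(i q f / hbar)
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, q = sp.symbols('hbar m q', positive=True)
psi = sp.Function('psi')(x, t)
A = sp.Function('A')(x, t)
V = sp.Function('V')(x, t)
f = sp.Function('f')(x, t)

def residual(psi, A, V):
    """i*hbar d/dt psi - [ (p - qA)^2/(2m) + qV ] psi, with p = -i*hbar d/dx."""
    pi = -sp.I*hbar*sp.diff(psi, x) - q*A*psi          # (p - qA) psi
    kin = (-sp.I*hbar*sp.diff(pi, x) - q*A*pi)/(2*m)   # (p - qA)^2 psi / (2m)
    return sp.I*hbar*sp.diff(psi, t) - kin - q*V*psi

E = sp.exp(sp.I*q*f/hbar)
lhs = residual(psi*E, A + sp.diff(f, x), V - sp.diff(f, t))
rhs = E*residual(psi, A, V)
print(sp.simplify(sp.expand(lhs - rhs)))  # 0: covariance holds for arbitrary psi, A, V, f
```

The residual of the transformed equation equals the original residual times the phase factor, so a solution remains a solution: only the unobservable phase of $\Psi$ changes.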
Since $|\Psi(\vec{r})|^2$ is the particle density at site $\vec{r}$, the density, as well as the total particle number are invariant under a phase change. This is a quantum version of the charge conservation described by Maxwell's equations: a global phase change, which is a global gauge change, conserves the charge. We shall see the consequences of that statement: what if a quantum state breaks gauge invariance? At first Weyl \cite{Weyl} linked charge conservation to local gauge transformations. The latter are "local" when the gauge shift $f(\vec{r},t)$ varies in space. In fact Noether \cite{noether} showed that global gauge invariance is enough to express charge conservation\footnote{see ref. \cite{brading} for a detailed discussion.}. Global gauge is the limit of a local gauge when $f$ in equation \ref{gauge3} is a constant. Gauge symmetry is thus seen to have a profound significance and cannot be reduced to a 'mere' representation surplus: it is a fundamental symmetry of the material world. As regards the "representation surplus" aspect of gauge freedom, it is worth pointing out that this surplus is a blessing for the theorist, because it allows a mathematical treatment of problems which is adapted to the specific geometrical features at hand. The behaviour of the electronic liquid under magnetic field in a long flat ribbon is conveniently expressed in a gauge where $\vec{A}$ is orthogonal to the long dimension of the ribbon. For the physics of a disk, a gauge with rotational symmetry is usually useful. Any gauge choice should yield the same result, but a clumsy choice can make the theory intractable. This is quite analogous to the correct choice of coordinate system -- cartesian, polar, cylindrical, spherical, etc. -- in a geometry problem. At this stage, considering the vector potential as a mere technical tool -- a usefully flexible one at that -- for the theory seems rational. 
\subsection{The Aharonov-Bohm effect} \label{ab} The Aharonov-Bohm effect \cite{AB} proves that there exist effects of static potentials on microscopic charged particles, even in regions where all fields vanish. The standard experimental setup may be described by the diffraction of electrons by a standard two-slit display. An infinitely long solenoid is placed between the two slits, parallel to them, immediately behind the slit screen; an electric current creates a magnetic flux inside the solenoid, and none outside. The electronic wave function is non-zero in regions where no magnetic flux is present. A variation of the flux inside the solenoid causes a displacement of the interference fringes on a second screen placed behind the slits. It is straightforward to relate this displacement to the phase difference $\delta \phi$ of the two electron paths at a given point on the screen. The latter is given by \begin{equation} \delta \phi = \frac{e}{\hbar}\oint_C \vec{A}\cdot\vec{ds} = e\Phi_B/\hbar \equiv 2\pi \Phi_B/(h/e), \end{equation} where $\Phi_B$ is the flux threading the solenoid. This flux is gauge invariant. The displacement is periodic when the flux $\Phi_B$ varies in the solenoid, with a period given by the flux quantum\footnote{In this paper the velocity of light, $c$, is put equal to unity throughout.} $h/e$. There are various other versions of the same effect. One is the observation of periodic variations -- with flux period $h/e$ -- of the resistance of a mesoscopic conducting ring when the flux varies inside a thin solenoid passing through the ring \cite{Aharonov}. Yet another variant will be discussed in a later section when I discuss superconductivity. In their 1959 paper \cite{AB}, Aharonov and Bohm discuss the ontological significance of their findings. They mention that potentials have been regarded as purely mathematical objects. Quoting them: {\it{...
it would seem natural to propose that, in quantum mechanics, the fundamental natural entities are the potentials, while the fields are defined from them by differentiation. The main objection ...is grounded in the gauge invariance of the theory...As a result the same physical behaviour is obtained from any two potentials $\vec{A}$ or $\vec{A'}$ (related by a gauge transformation). This means that insofar as the potentials are richer in properties than the fields, there is no way to reveal this actual richness. It was therefore concluded that the potentials cannot have any meaning, except insofar as they are used mathematically, to calculate the fields. }} Over the years, this is what Aharonov seems to have concluded, since this statement is reproduced in the 2005 book \cite{Aharonov}. This is also what is discussed in reference \cite{guay}, which asks: {\it{is the Aharonov-Bohm effect due to non-locality or to a long distance effect of fields?}} I do not see what kind of long distance action of fields could be invoked, except if we admit that Maxwell's equations have to be significantly altered. Non-locality, on the other hand, is now a well accepted feature of quantum mechanics. It is surprising that Aharonov seems to pay no attention to charge conservation; the latter illustrates the statement in reference \cite{bb} quoted above. However, another possibility arises: that the vector potential, the circulation of which on a closed loop leads to a phase factor, has some physical (gauge invariant) reality, although its local value cannot be measured because it is not gauge invariant. The fact that the vector potential becomes a measurable physical object in a superconducting phase lends some credence to this suggestion, as will be discussed in section (\ref{supercond}).
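The $h/e$ periodicity of the fringe pattern is easy to evaluate numerically. The following sketch is my own illustration; the solenoid field and radius are assumed values chosen purely to set a scale.

```python
# Aharonov-Bohm phase delta_phi = 2*pi * Phi_B / (h/e) for a flux Phi_B
# threading the solenoid.  The interference pattern is periodic in Phi_B
# with period h/e (SI units throughout).
import numpy as np

h = 6.62607015e-34    # Planck constant, J s (exact SI value)
e = 1.602176634e-19   # elementary charge, C (exact SI value)
flux_quantum = h / e  # ~ 4.14e-15 Wb

def ab_phase(flux_wb):
    """Phase difference between the two electron paths, in radians."""
    return 2*np.pi * flux_wb / flux_quantum

# an assumed thin solenoid: B = 1 mT inside, radius 1 micrometre
flux = 1e-3 * np.pi * (1e-6)**2
print(ab_phase(flux))                                   # a fraction of 2*pi
print(ab_phase(flux + flux_quantum) - ab_phase(flux))   # exactly 2*pi: same fringe pattern
```

Adding one flux quantum shifts every path-pair phase difference by $2\pi$, which is why the fringe displacement is periodic in $\Phi_B$ even though no field touches the electrons.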
\subsection{Berry phase, Berry connection, Berry curvature} \label{berry} In 1984, Berry \cite{berry2} discovered the following: a quantum system in an eigenstate, slowly transported along a circuit $C$ by varying parameters $\bf{R}$ in its Hamiltonian $H(\bf{R})$, acquires a geometrical phase factor $\exp{\left(i\gamma (C) \right)}$ in addition to the familiar dynamical phase factor. He derived an explicit formula for $\gamma(C)$ in terms of the spectrum and eigenstates of $H(\bf{R})$ over a surface spanning $C$. It is a purely geometric object, which does not depend on the adiabatic transport rate around the circuit. This anholonomy is the quantum analog of the classical one discussed in section (2.2). Although the system is transported around a closed loop, its final state is different from the initial one. The global phase choice for the initial state is a gauge degree of freedom, which has no effect on Berry's phase. The latter is thus gauge invariant. A precise definition of adiabaticity is that the motion is slow enough that no finite energy excitation occurs, as is the case when the isolated system is static. This condition in turn requires that a finite excitation gap separate the ground state from the first excited state of the system. It is conceptually interesting that quantum mechanics establishes a qualitative difference between two sorts of motion: a motion such that no finite energy excitation occurs, on one hand, and a motion such that finite energy excitations occur, on the other. This qualitative difference arises through a quantitative change of the rate of displacement. Once again, a change in quantity results in a change in quality.
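A concrete instance is the textbook case of a spin-1/2 whose quantization axis is carried around a cone of opening angle $\theta$: the Berry phase equals minus half the solid angle swept, $\gamma = -\pi(1-\cos\theta)$. The sketch below is my own illustration; it uses the standard gauge-invariant discretisation of the phase as the argument of a product of overlaps, so that any arbitrary phase attached to each individual state drops out.

```python
# Berry phase of the spin-up eigenstate of n.sigma as the unit vector n
# is carried around a circle at colatitude theta.  The discrete,
# gauge-invariant formula  gamma = -Im log prod_k <psi_k|psi_{k+1}>
# converges to -(solid angle)/2 = -pi*(1 - cos(theta)).
import numpy as np

def spin_state(theta, phi):
    """Eigenstate of n.sigma with eigenvalue +1, for n = (theta, phi)."""
    return np.array([np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)])

def berry_phase(theta, n=4000):
    phis = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    states = [spin_state(theta, p) for p in phis]
    states.append(states[0])                 # close the loop
    prod = 1.0 + 0.0j
    for a, b in zip(states[:-1], states[1:]):
        prod *= np.vdot(a, b)                # overlap <psi_k|psi_{k+1}>
    return -np.angle(prod)                   # result mod 2*pi; per-point gauge drops out

theta = np.pi/3
print(berry_phase(theta))                    # ~ -pi/2
print(-np.pi*(1 - np.cos(theta)))            # -pi/2: minus half the solid angle
```

Multiplying any state in the chain by an arbitrary phase leaves the product of overlaps unchanged, which is the discrete expression of the gauge invariance stated above.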
One may define the ``Berry connection'' ${\bf{\vec{\cal A}}}$, the expression of which is given below for completeness: \begin{equation}\label{connection} {\cal{A_{\mu}}}({\bf R})\equiv i\langle \Psi_{\bf R}^{(0)}| \frac{\partial}{\partial R_{\mu}} \Psi^{(0)}_{\bf R}\rangle, \end{equation} where $|\Psi^{(0)}_{\bf R}\rangle$ is the ground state wave function. $\gamma(C)$ is the circulation of ${\bf{\vec{\cal A}}}$ along the curve $C$. ${\bf{\vec{\cal A}}}$ is not gauge invariant, since the gradient of any function $f$ can be added to it with no change in $\gamma(C)$. Various generalizations of equation (\ref{connection}) are described in ref. \cite{berry}. The ``Berry curvature'' is defined as ${\cal{ \vec{B}}}\equiv \vec{\nabla}\wedge \vec{\cal{A}}$. The Berry phase is equal to the flux of $\cal{ \vec{B}}$ through a surface bounded by the closed curve $C$. There is a close analogy between the electromagnetic vector potential $\vec{A}$ and the Berry connection $\vec{\cal{A}}$. If a closed box containing a charged particle is driven slowly around a thin solenoid threaded by a magnetic flux, the Berry phase is shown, in this particular case, to be identical to the Aharonov-Bohm phase, and $\vec{A}$ is shown to be identical to $\vec{\cal{A}}$, up to the charge factor. In fact the derivation of the Berry phase is yet another way of demonstrating the Aharonov-Bohm effect \cite{berry2}. The ontology of the Berry phase is clear from the numerous experimental observations which have followed various predictions, such as the photon polarization phase shift along a coiled optical fiber \cite{tomita}, Nuclear Magnetic Resonance \cite{suter}, the entanglement of charge and spin textures in the Quantum Hall Effects, the quantization of the skyrmion charge in a Quantum Hall ferromagnet, etc.\footnote{A more detailed paper on the Quantum Hall Effects is submitted for publication \cite{lederer}.} What remains mysterious is the ontological status of the Berry connection, which is not gauge invariant.
The same questions arise about it that are asked about the electromagnetic vector potential. Whatever the answer to this question, what emerges from the discovery of the Berry phase, and from the many confirmed predictions and observable effects connected directly to it\footnote{See for example chapter 4 in ref. \cite{berry}.}, is that doubts about the reality of the wave function phase as reflecting a profound fact of nature do not appear to be justified. \section{Superconductivity, a spontaneous breaking of global gauge invariance}\label{supercond} At low enough temperature, below a ``critical temperature'' $T_c$, a number of conducting bodies exhibit a thermodynamic phase transition wherein the resistance becomes vanishingly small and the diamagnetic response is perfect, which means that an external magnetic field cannot penetrate inside the body (this is called the Meissner effect): a superconducting state is stabilized \cite{tinkham}. This transition is an example of spontaneous symmetry breaking, in which the broken symmetry is the global gauge symmetry discussed in the preceding sections. When a continuous symmetry is spontaneously broken (as is the case of gauge symmetry in superconductivity), the transition to the new state, which is the appearance of a new quality, is simultaneously characterized by continuity and discontinuity. Indeed, the ``order parameter'' which describes the new quality grows continuously from zero when the temperature is lowered continuously, but the symmetry is broken discontinuously as soon as the order parameter is non-zero.
This is easy to understand for a spontaneous breaking of spin rotational symmetry such as that associated with ferromagnetism: as soon as an infinitesimal magnetization appears below the critical temperature, the full rotational symmetry of the microscopic equations, which is the symmetry of the high temperature phase, is broken, and the rotational symmetry of the state is reduced to an axial rotation symmetry around the magnetization direction. The detailed theory of superconductivity is not relevant here, especially as the microscopic theory of superconductivity in a whole class of new metallic oxides is still a debated topic. The construction of the theory of the effect discovered by Kamerlingh Onnes in 1911 lasted half a century. One may surmise that one reason for this delay was precisely the elusive nature of the broken symmetry at work, which was clarified some years after the microscopic theory was published \cite{BCS} in 1957. The superconducting state is characterized by the pairing of a macroscopic number of electrons in so-called ``Cooper pairs''. A Cooper pair, formed (at least for the whole class of ``BCS superconductors''\footnote{In the last thirty years, various other classes of superconductors have been discovered and analyzed, with different ways for electrons to assemble in pairs. This does not limit the generality of the discussion in this paper.}) by two electrons of opposite spins, is a zero spin singlet and, contrary to electrons, is not a fermion but a boson. For simplicity, the superconducting ground state can be thought of as the condensation of a macroscopic number of such bosons, where they all have the same phase. Thus the superconducting ground state is characterized by a many-particle macroscopic condensate wave function $\Psi_{SC}({\bf{r}})$, which has amplitude and phase and maintains phase coherence over macroscopic distances.
How can one reconcile the appearance of a phase\footnote{As remarked by various authors, it is an unfortunate fact that the same word, ``phase'', is used for the ``phase'' of the wave function and for a thermodynamic ``phase''. I can only hope that this does not produce confusion for the readers of this paper.} in the ground state wave function, which is, as we shall see, a material object with very concrete and measurable properties -- zero resistance and the Meissner effect -- with the global gauge symmetry of the many-particle Schr\"{o}dinger equation? This is exactly what spontaneous gauge symmetry breaking is about: below the superconducting critical temperature, the state of the system selects a global phase for the many-particle wave function, which is arbitrary between $0$ and $2\pi$. This is analogous to the ferromagnetic ground state selecting an arbitrary direction in space even though the Hamiltonian is rotationally invariant. In other words, the continuous gauge symmetry of the microscopic equations is spontaneously broken by the stable thermodynamic state. At the same time there is no absolute value for the phase of the wave function of a single piece of superconductor in free space: any phase can be chosen between $0$ and $2\pi$. In the case of ferromagnetism, the direction which the quantum state has picked for its magnetization can be determined experimentally by coupling it to an infinitesimal test magnetization, such as a compass determining the direction of the earth's magnetic field. Similarly, for a superconductor, the phase chosen by the superconducting ground state can be detected by coupling the superconducting sample to another one: the Josephson effect results (see below); it depends on the phase difference between the two samples. What about charge conservation in this state? Textbooks state that it is violated in the superconducting state. Some philosophers find this hard to believe. It is a matter of understanding orders of magnitude.
The explanation is as follows: in order to maintain phase coherence over a macroscopic volume of the superconductor, the total charge fluctuates between macroscopic chunks of the material, by circulation of Cooper pairs, which each carry two electronic charges. The total charge of the sample is conserved, but the charge in a macroscopic part of a sample is determined only with a relative accuracy of about $10^{-13}$ \cite{tinkham}. In fact, this is of interest for whoever still questions the complementarity of canonically conjugate variables (such as the position $q_x$ and momentum $p_x$ of a single particle). Phase $\phi$ and particle number $N$ are conjugate variables, and the Heisenberg relation holds: \begin{equation}\label{conjugate} \Delta N \, \Delta \phi \geq 1 \end{equation} This limits the accuracy with which $N$ and $\phi$ can be simultaneously measured. However, since $N\approx 10^{23}$, and the fluctuation in $N$ is of order $\left(\frac{T_c}{T_F}\right)^{1/2}\sqrt{N}$, an accuracy of $1/\left[\left(\frac{T_c}{T_F}\right)^{1/2}\sqrt{N}\right]\approx 10^{-9}$ on $\phi$ is highly satisfactory, and the phase can be viewed as a semi-classical variable. The number-phase relationship is expressed as follows\footnote{The detailed expression for the various states involved is not relevant for this paper and can be found, for instance, in references \cite{BCS} or \cite{tinkham}.}: \begin{equation} |\Psi_N\rangle = \int_0^{2\pi} \exp{(-iN\phi/2)}\,|\Psi_{\phi}\rangle \, d\phi \end{equation} In this equation, $|\Psi_N\rangle$ and $|\Psi_{\phi}\rangle$ are, respectively, the superconducting states for fixed particle number and fixed phase. The latter is relevant for macroscopic samples. The former is relevant for small superconducting objects, or in the theoretical description of experiments where single Cooper pairs are manipulated. The factor $1/2$ in the exponential under the integral is due to the fact that Cooper pairs carry two electronic charges.
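The orders of magnitude above take a few lines of arithmetic to check. In this sketch of mine, $N$ and the ratio $T_c/T_F \sim 10^{-4}$ are assumed typical values, not quantities taken from the text's references.

```python
# Order-of-magnitude check of the number-phase uncertainty argument:
# Delta_N * Delta_phi >= 1, with Delta_N ~ sqrt((Tc/TF) * N).
import numpy as np

N = 1e23            # electrons in a macroscopic sample (assumed)
Tc_over_TF = 1e-4   # typical Tc/TF for a BCS superconductor (assumed)

dN = np.sqrt(Tc_over_TF * N)   # ~ 3e9: charge fluctuation between macroscopic chunks
dphi = 1.0 / dN                # Heisenberg-limited phase uncertainty, ~ 3e-10
print(dN / N)                  # ~ 3e-14 relative charge accuracy, cf. the ~1e-13 quoted
print(dphi)                    # tiny: the phase behaves as a semi-classical variable
```

Both uncertainties are simultaneously negligible on their own scales: the charge of a macroscopic chunk is sharp to one part in $10^{13}$ or so, while the phase is sharp to a nanoradian.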
The significance of the phase of the superconducting ground state was not immediately perceived by physicists, and it took Josephson \cite{josephson} three years, after the initial BCS paper, to predict that Cooper pairs should be able to tunnel between two superconductors even at zero bias, giving a supercurrent density \begin{equation}\label{jc} J= J_c \sin(\phi_1 - \phi_2) \end{equation} where $J_c$ is a constant and $\phi_1, \phi_2$ are the superconducting phases of the two superconductors. Another spectacular prediction was that, in the presence of a finite voltage difference between the two superconductors, the phase difference would grow linearly in time at the rate $2e V_{12}/\hbar$ (where $e$ is the electron charge and $V_{12}$ the voltage difference), so that, following equation (\ref{jc}), the current should oscillate with frequency $\omega= 2e V_{12}/\hbar$. As mentioned in ref. \cite{tinkham}, ``{\it{Although originally received with some skepticism, these predictions have been extremely thoroughly verified...Josephson junctions have been utilized in extremely sensitive voltmeters and magnetometers, and in making the most accurate measurements of the ratio of fundamental constants $h/e$\footnote{This was written before the discovery of the Quantum Hall Effects.}. In fact the standard volt is now \emph{defined} in terms of the frequency of the Josephson effect}}''. Among the most well known applications of the effect, SQUIDs (Superconducting QUantum Interference Devices) allow unprecedented accuracy in the detection and measurement of very weak magnetic fields. Josephson, aged 33, was awarded the Nobel prize in physics in 1973, for work done during his PhD. Subsequently he worked on telepathy and paranormal phenomena, with no visible success... \subsection{The London equations}\label{london} A first significant progress in the theory of superconductivity was due, in the nineteen thirties, to the London brothers \cite{fhlondon}.
They pointed out that the two characteristic experimental features of superconductivity were zero resistance and the Meissner effect, i.e. the exclusion of magnetic flux from the superconducting body. F. and H. London proposed two equations which gave a good account of both properties, based on the behaviour of electric and magnetic fields: \begin{eqnarray} \vec{E}&=& \frac{\partial}{\partial t}\left(\Lambda \vec{J}_s \right) \label{1}\\ \vec{H}&=& - \mathrm{curl} \left( \Lambda \vec{J}_s \right)\label{2} \end{eqnarray} where $\Lambda$ is a phenomenological parameter, later found to equal $\frac{m}{n_se^2}$; $m$ is the electron mass, $e$ the electron charge and $n_s$ the density of superconducting electrons. Equation (\ref{1}) describes perfect conductivity, since an electric field accelerates the superconducting electrons instead of sustaining a constant average velocity, as described by Ohm's law, $\vec{J} = \sigma \vec{E}$, in a normal conductor (where $\sigma$ is the conductivity). Equation (\ref{2}) accounts for the Meissner effect when combined with Maxwell's equation $\mathrm{curl}\, \vec{H}= 4\pi \vec{J}$. It is straightforward to find that it leads to the exponential screening of the magnetic field from the interior of a superconducting sample, over a distance $\lambda = \sqrt{\Lambda/4\pi}$. The Meissner effect proves that superconductivity is not equivalent to perfect conductivity: even though equation (\ref{1}) could be derived in that case, the magnetic field is not expelled from a perfect conductor. Both London equations can be condensed into a single one: \begin{equation} \label{3} \vec{J}_s(\vec{r}) = -\frac{\vec{A}(\vec{r})}{\Lambda } \end{equation} This equation was actually derived by F. London himself; based on quantum mechanics, but before any microscopic theory, it allowed him to find the expression $\Lambda= \frac{m}{n_s e^2}$.
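In SI units the screening length reads $\lambda = \sqrt{m/(\mu_0 n_s e^2)}$, the equivalent of the Gaussian $\sqrt{\Lambda/4\pi}$ above. The quick numerical sketch below is my own; the carrier density $n_s$ is an assumed round number, chosen only to show that $\lambda$ comes out at tens of nanometres.

```python
# London penetration depth lambda = sqrt(m / (mu_0 * n_s * e^2)) in SI units
# (the SI equivalent of sqrt(Lambda/(4*pi)) in the Gaussian units of the text).
import numpy as np

m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
mu_0 = 4e-7 * np.pi      # vacuum permeability, H/m
n_s = 1e28               # superconducting electron density, m^-3 (assumed)

lam = np.sqrt(m_e / (mu_0 * n_s * e**2))
print(lam)   # ~ 5e-8 m: the field decays within a few tens of nanometres of the surface
```

The exponential decay of the field over this microscopic distance is what makes the Meissner expulsion look total on the scale of a bulk sample.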
Equations \ref{1} and \ref{2} can be thought of as a good illustration of Duhem's positivism: they account for phenomena, but they say nothing about the ontology of superconductivity. Nothing? In fact, once we admit that magnetic fields are material objects, which they certainly are since they carry energy, the expulsion of the field from a bulk superconductor, its exponential disappearance away from the surface within a measurable penetration length, together with zero electrical resistance, are fundamental real features of the superconducting material. So much so that equations \ref{1}, \ref{2} and \ref{3} have become over the years the hallmarks of any successful microscopic theory of superconducting materials. They were confirmed after 25 years by a microscopic derivation. Before discussing equation \ref{3} in more detail, it is useful to mention a more complicated, but more exact, expression due to Pippard, relating the supercurrent and the vector potential. \subsection{Pippard's equation}\label{pippard} In normal conductors, Ohm's law $\vec{J}(\vec{r})=\sigma \vec{E}(\vec{r})$ can be made more accurate by noting that the current at a point $\vec{r}$ is not determined only by the value of the electric field at $\vec{r}$. It depends on $\vec{E}(\vec{r'})$ throughout a volume of order $l^3$, where $l$ is a length which depends on the scattering processes in a given impure material. The resulting expression (due to Chambers) is: \begin{equation}\label{chambers} \vec{J}(\vec{r}) = \frac{3\sigma}{4\pi l}\int\frac{\vec{R}\left[ \vec{R}.\vec{E}(\vec{r'})\right]e^{-R/l}}{R^4}d\vec{r'} \end{equation} where $\vec{R}=\vec{r}-\vec{r'}$. Equation \ref{chambers} reduces to Ohm's law if $\vec{E}(\vec{r'})$ is constant over a distance of order $l$, as is clear by inspection of the formula.
Pippard argued, as was later confirmed from the microscopic theory, that the superconducting wavefunction should have a characteristic dimension $\xi_0$, which he found to be $\xi_0 =a\frac{\hbar v_F}{kT_c}$, where $a$ is a numerical constant of order $1$, $v_F$ is the Fermi velocity and $T_c$ is the superconducting critical temperature. From the analogy with equation \ref{chambers}, Pippard proposed, in the superconductor, the replacement of equation \ref{3} by \begin{equation}\label{pip} \vec{J}_c(\vec{r}) = -\frac{3}{4\pi \xi_0}\int\frac{\vec{R}\left[ \vec{R}.\vec{A}(\vec{r'})\right]e^{-R/\xi}}{R^4}d\vec{r'} \end{equation} where $\vec{J}_c$ is the superconducting current. The coherence length $\xi$ is related to that of the normal metal by $\frac{1}{\xi}= \frac{1}{\xi_0} +\frac{1}{l}$. If the normal metal is pure, $l \to \infty$. It is remarkable that the microscopic theory of Bardeen, Cooper and Schrieffer (BCS) \cite{BCS} justified Pippard's intuition, at the expense of replacing the exponential in equation \ref{pip} by a function $J(R,T)$ which behaves much like the exponential\footnote{For details, see reference \cite{tinkham}.}, and whose value at $R=0$ varies smoothly from $1$ at temperature $T=0$ to $1.33$ at $T=T_c$. What matters here is that the BCS expression reduces to London's equation \ref{3} if $\vec{A}(\vec{r'})$ is constant over the range of $J(R,T)$, i.e. over a distance of order a few $\xi_0$. In other words, Pippard's expression, suitably corrected by the BCS theory, differs from London's equation only quantitatively. When $\vec{A}$ varies notably over the range of $J(R,T)$, its relationship to $\vec{J}(\vec{r})$ is not a simple proportionality, but results from inverting the integral relationship in equation \ref{pip}. \subsection{Discussion} First consider equation \ref{3}. This equation embodies the essence of superconductivity, and a qualitative change with respect to Ohm's law for a normal conductor.
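Pippard's reciprocal rule for the coherence length is easy to evaluate numerically; the sketch below uses illustrative numbers of my own (the clean-limit $\xi_0$ is of the order of magnitude usually quoted for aluminium), not values taken from the original text.

```python
def effective_coherence_length(xi0, mean_free_path):
    """Pippard's rule 1/xi = 1/xi0 + 1/l; all lengths in the same unit (nanometres here)."""
    return 1.0 / (1.0 / xi0 + 1.0 / mean_free_path)

xi0 = 1600.0  # clean-limit coherence length in nm (order of magnitude for Al; illustrative)

# Pure metal: a very long mean free path recovers xi ~ xi0.
xi_clean = effective_coherence_length(xi0, 1e12)

# Dirty metal: a short mean free path dominates, so xi ~ l << xi0.
xi_dirty = effective_coherence_length(xi0, 10.0)
```

The dirty limit, where impurity scattering shortens $\xi$ down to roughly $l$, is the regime in which Pippard's non-local relation departs most strongly from a local one.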
How can it be physically meaningful if $\vec{A}(\vec{r})$ can change direction, range and magnitude through a gauge change? Obviously a gauge change cannot change the direction, range or magnitude of $\vec{J}$! The textbook answer to this, for example in reference \cite{tinkham}, p.~6, is that London's equation holds only for a particular gauge, such that the boundary conditions imposed on $\vec{J}$ hold also for $\vec{A}$. For example, $\mathrm{div}\,\vec{J} = 0$ is a condition expressing that there is no source from which a superconducting current is created: this is a fact of physics, a fact of the world. So we must also have $\mathrm{div}\,\vec{A}=0$. Also, there is no component of ${\vec{J}}$ perpendicular to the surface of the isolated superconducting material. So the implicit answer from textbooks is that the physical reality of $\vec{J}$ imposes a constraint on $\vec{A}$, the gauge of which is not arbitrary anymore; it is fixed by the physical reality of $\vec{J}$ and is called the London gauge. Since $\vec{J}$ is a measurable material object, and $\vec{J}$ is proportional to $\vec{A}$, it seems the latter has transited from the status of (locally) non-measurable object in the normal phase to that of measurable object of nature in the superconducting phase. Thus, we are led to conclude that equation (\ref{3}), which is a consequence of the global gauge symmetry breaking, leads to fixing the gauge of the vector potential. The gauge condition on $\vec{A}$, imposed by the materiality of $\vec{J}$, does not affect the state. The London gauge does not specify the gauge completely, since all harmonic functions $g$ such that $\vec{\nabla}^2g=0$ are possible choices. Here again, the ultimate harmonic gauge choice is dictated by the superconductor geometry.
We are now facing an interesting ontological question: the vector potential appears to be a material object in a broken global gauge symmetry phase, such as a superconductor, as evidenced by the proportionality between $\vec{A}$ and the superconducting current density. On the other hand, in the (normal conductor) gauge invariant phase, it has measurable effects only through its circulation on a closed curve. How can we get over (aufheben, in German) this contradiction? Before trying to answer this question, I have to discuss the problem of emergence. This is what the next section is about. \section{Emergence or reduction? Or both?}\label{emerg} The topic of emergence in physics, and its antinomy with reductionism, has been discussed by a number of authors, in particular after Anderson's paper {\it{More is Different}} \cite{PWA2}. A related paper by Laughlin and Pines \cite{pineslaughlin} has inspired comments by Batterman \cite{baterman}, Howard \cite{howard} and Healey \cite{healey2}, among others. How can the concept of emergence be grounded on rational criteria? Is emergence antinomic with the reductionist approach? \begin{itemize} \item Laughlin and Pines (hereafter LP) disagree with the reductionist point of view developed by a number of high energy physicists, such as, for example, Weinberg \cite{weinberg}. The latter author, a particle theorist, defends an ontological reductionism: following him, the fundamental laws that govern elementary constituents of matter ultimately explain phenomena in all areas of nature. Laughlin and Pines argue that many phenomena of Condensed Matter physics are emergent, and are regulated by what they call ``higher organisation principles'' in nature, which cannot be deduced from microscopics. Experiments, and the artful confrontation of theory with experiments (what Hacking calls ``intervening''), are inescapable. Due to the higher organising principles, of various sorts, emergent phenomena exhibit insensitivity to microscopics.
Examples of such principles are, for example, renormalisability (critical properties of continuous phase transitions, quantum critical points) or spontaneous symmetry breaking (superconductivity, ferromagnetism, antiferromagnetism, superfluidity). Another higher organisation principle accounts for the stability of the Quantum Hall Effects: the lowest energy excitation of the 2D electron liquid has an energy gap above the ground state, and electronic localisation yields a resistivity plateau, the value of which is given by the constant $e^2/h$. The Aharonov-Bohm effect, which yields exact measurements of $hc/e$, is also due to a higher organisation principle: quantum gauge invariance. LP argue that no approximate treatment starting from the Schr\"{o}dinger equation would yield an exact result. And exact treatments are in general impossible. They refer approvingly to Anderson's view that ``More is Different'' \cite{PWA2}. \item Batterman agrees that phase transitions, which he stresses are qualitative changes of state, are emergent phenomena. For him, the mathematical singularities in the thermodynamic potentials are fundamental to point out the qualitative differences between the phases. I am not fully happy with his paper. He does not grasp the richness of the notion of ``higher organising principles'' advocated by LP. He specializes in the Renormalisation Group (hereafter RG) theory and spends some time explaining it\footnote{Batterman insists on the limit $N\rightarrow \infty$. Is this limit experimentally out of reach, since samples have finite size? In a sample with dimensions $\approx 1\,$cm, the largest correlation length is about $1\,$cm $\approx 10^7$ interatomic spacings, which is practically infinite given the experimental accuracy on temperature measurements at a critical point. This is another example of the importance of considering orders of magnitude.}. But he misses some points.
For instance, he claims that the RG accounts for the universality of critical exponents. In fact, the RG explains why there are universality classes, while the molecular field approximation is universal in giving for the thermodynamic responses the same exponents near the critical point, independent of the dimensionality of space or of the order parameter for the various materials considered. The mystery has long been the observed irrational values of critical exponents, which disagree with the universal rational exponents of the molecular field theory. The liquid-vapour transition as sole example is not the best choice, since what the RG explains is why critical exponents depend on two parameters $n, d$, where $n$ is the dimension of the order parameter of the condensed phase, and $d$ the dimension of the space in which the system is embedded. For the liquid-vapour transition, there is no symmetry breaking: both phases are translation invariant, and the transition with $n=1$ can be discontinuous. When a continuous symmetry is spontaneously broken, a ``higher organisation principle'' such as that which governs the ferromagnetic ground state entails specific low energy excitations (magnons), as well as universal exponents relevant for $n=3, d=3$ around the critical point. Magnons have no equivalent in the disordered phase. Batterman misses a crucial point in favor of emergence: at the critical point of a continuous phase transition, a new symmetry appears, which exists only at the critical point, the dilatation symmetry: the system exhibits the same correlations at all length scales. At the critical point, no other length scale in the system, such as the interatomic spacing, plays any role.
Although the microscopic Hamiltonians had been known in various cases for more than fifty years before the RG theory explained the universality classes of critical phenomena, dilatation symmetry was found only by working out, with the RG, the correlation functions at the critical point: they decrease algebraically with distance, instead of exponentially away from the critical point. A directly observable consequence of this is the phenomenon of critical opalescence in certain liquid/liquid phase transitions. The critical point paradigm was not deduced from the microscopic Hamiltonian, but by a procedure both external to the microscopic Hamiltonian and based on it. This epistemic aspect is the reflection of the ontological emergence. \item Howard also addresses the debate of reduction versus emergence. He defines supervenience as different from emergence: ``{\it{Supervenience is an ontic relationship between structures. A structure $S_x$, is a set of entities, $E_x$, together with their properties and relations $PR_x$. A structure, $S_B$, characteristic of one level, $B$, supervenes on a structure, $S_A$, characteristic of another level $A$, if and only if the entities of $S_B$ are composed out of the entities of $S_A$ and the properties and relations, $PR_B$, of $S_B$ are wholly determined by the properties and relations, $PR_A$, of $S_A$}}''. Following Howard, there is no straightforward relationship between reduction and supervenience. For instance, edge states in the (supervenient) fractional Quantum Hall Effects are due to boundary conditions, and do not allow reduction. Emergence can be asserted either as a denial of intertheoretic reduction or as a denial of supervenience. For example, according to Howard, superfluidity or superconductivity supervene on physical properties at the particle physics level and hence are not emergent with respect to particle physics, because supervenience cannot be denied.
Following Howard, entanglement of quantum states of two systems\footnote{The wave function $\Psi_{12}$ of two independent systems is factorized as a product of the wave functions of the two systems.} cannot be reduced to a product of states of the two systems; in general it is a sum of products of eigenstates of both systems. I agree with Howard that this is an elementary example of an emergent property, which denies reduction. But my reasons differ from Howard's argument. Entanglement is a qualitative change: two entangled fermions (spin $1/2$ particles), for example, form a bosonic particle (spin $0$ or $1$). But one may also argue that entanglement supervenes on quantum particles as a direct consequence of the superposition principle in quantum mechanics. Entangled particles form a new quantum object, but reappear as particles if the entangled pair is destroyed by some intervention; their properties and relations are ``{\it{wholly determined by the properties and relations, $PR_A$, of $S_A$}}'', i.e. quantum mechanics and quantum particles. Supervenience can be viewed as a straightforward materialist statement that all things are formed of material entities which obey the laws of physics. Establishing a logical distinction between emergence and supervenience leaves aside the question of quantity and quality, which is addressed correctly (not in those words) by Batterman. Howard states that the only emergent property in quantum physics is entanglement. He does not discuss LP's arguments, as, from the definition of supervenience, emergence is eliminated, at least for superconductivity or superfluidity, because the macroscopic wave function\footnote{Superconductivity is a macroscopic quantum phenomenon, not a mesoscopic one.} is built of entangled pairs.
From Howard's point of view itself, it should be stressed that the BCS wave function has two levels of entanglement: entanglement of electrons in bosonic singlets, and macroscopic entanglement of states with different singlet numbers; this last feature is essential in establishing phase coherence. The second entanglement type, which expresses the breaking of gauge invariance by the superconducting wave function, is absent from microscopics. One may say that superconductivity is doubly emergent: 1) because superposition of states with different particle numbers is absent from gauge invariant states, and 2) because it is a spontaneously broken symmetry phase. Howard, contrary to Batterman, disregards the spontaneous symmetry breaking at work in continuous phase transitions. He disregards, as does Batterman, the dilatation invariance at the critical point. Superconductivity is a phase with spontaneously broken gauge invariance. An entangled pair of particles does not break gauge invariance. The disappearance of any resistance, and the Meissner effect, observed at temperatures below the superconducting critical point, the Josephson effect, the quantization of flux in vortices, etc., are indeed emergent properties connected with the broken gauge invariance, a change in quality of the many-particle system due to a change in quantity, as stressed above. Fractional charges of excitations in the Fractional Quantum Hall liquid \cite{laughlin} are qualitatively new entities, which are signatures of emergence. Edge states, responsible for the transport properties, play no role in the classical Hall regime. Emergence can also be due to finite sample boundaries. \item Healey \cite{healey2} addresses the question of reduction and emergence in Bose-Einstein Condensates (BEC).
From this starting point, he raises several interesting questions, such as the reduction of classical physics to quantum physics, which I cannot discuss here for want of space\footnote{I have suggested elsewhere that classical mechanics, which has no superposition of states, is a case of emergence from quantum mechanics, the theory of which is still to be developed.}. His questions regarding the emergence of a phase difference between two different BECs are important ones. Those questions are specific to BEC physics, perhaps also in some respects to superconductivity or Josephson effects, but what is interesting in the context of this paper is that he supports the view that spontaneous symmetry breaking is in general a case of emergence. \end{itemize} Spontaneous symmetry breaking is a clear example of the category of transformations of quantity into quality \cite{PWA2,pineslaughlin}. Not only because it requires an infinite number of particles, but also because it occurs under quantitative changes of parameters such as temperature, pressure, magnetic or electric fields, etc. In the case of spontaneously broken continuous symmetries (such as ferromagnetism), the Goldstone theorem states that the low symmetry phase possesses collective bosonic modes, the energy of which goes continuously to zero with their inverse wavelength. Collective modes, such as spin waves in ferromagnets, are elementary excitations of the broken symmetry phase: they disappear as well defined excitations in the disordered phase. They are well defined particles in the broken symmetry phase; they have no equivalent for isolated electrons. As far as the reduction/emergence antinomy is concerned, I believe this antinomy has no fundamental root. It is clear that all entities which interact in emergent things, such as spontaneously broken symmetry phases, Quantum Hall states, DNA molecules, the central nervous system, etc., are built from material entities, the microscopic Hamiltonian of which is known.
In most cases (but not in liquid crystal mesophases, for example), they obey quantum mechanics. So reductionism seems vindicated. Not so, because large numbers of interacting things suffer qualitative changes under suitable external conditions, so that new collective entities with new causal powers appear, which have no equivalent in the microscopic Hamiltonian, because, for instance, the symmetries of the latter are lowered in a broken symmetry phase, or because a gap appears between the ground state and the first excited state when the isolated particles have a large ground state degeneracy (Fractional Quantum Hall State). It is legitimate in the early twenty first century to consider that no ab initio calculation, with the most powerful computers available, could deduce the behaviour of a macroscopic number of interacting particles from the microscopic Hamiltonian\footnote{This point of view is that of Laughlin and Pines \cite{pineslaughlin}.}. However, if humanity is granted a sufficiently long survival time, over a sufficient number of thousands of years, it is quite possible that totally new technologies may achieve what we consider today as impossible. In that sense, I disagree with LP, who flirt with notions of unknowable things and eternal impossibilities in what regards knowledge \cite{pineslaughlin}. This posture ties them with Kant's transcendental idealism, in contrast with their present decidedly (spontaneously?) materialistic outlook on physics. A crucial point needs to be stressed: whoever admits the relevance of the emergence concept has strong reasons to agree that ``more is different'' \cite{PWA2}, i.e. that quantity can transform into quality: this is a radical change from the aristotelian antinomy of those two categories. Dialectical materialism is not far behind... \section{London's or Pippard's equation?} \label{lp} Now let us discuss London's equation \ref{3}.
\begin{itemize} \item \label{l} Start with charge conservation, which is a consequence of global gauge invariance. It is broken in the superconducting phase, which is a phase with spontaneously broken gauge symmetry. As mentioned above, this qualifies somewhat the statement that gauge invariance in the normal state is a mere ``representation surplus''. Charge conservation in the gauge invariant world is a principle of matter, analogous to momentum conservation in a translationally invariant system. A possible way out of the dilemma about the ontological nature of the electromagnetic vector potential would be to view it as an emergent material object in the spontaneously broken global gauge invariant phase, as evidenced by equation \ref{3}. The superconducting phase itself is an emergent property (in this case the result of a thermodynamic phase transition) of the many body electron liquid of conductors. The analogy here would be, for instance, with the emergence of ferromagnetism from a macroscopic paramagnetic electronic liquid. But in that case the induction which appears in the ferromagnetic phase is the mere conceptual continuation of the magnetic field in free space. In the gauge invariant phase, however, the vector potential exists only through anholonomies\footnote{Such as the flux quantization in a vortex.} related to the Aharonov-Bohm phase. What seems to be in trouble, however, is the thesis attributing to the vector potential a mere role of mathematical description of phenomena, with no ontological status. Consideration of equation \ref{3} leads one to suspect that the vector potential $\vec{A}$ has become a bona fide real entity in the superconducting phase. Is this discussion about London's equation \ref{3} invalidated by Pippard's equation? \item \label{p} Now turn to Pippard's equation \ref{pip}. At first sight, it seems to fatally destroy the conclusions drawn in paragraph \ref{l} above.
However, examining Pippard's equation \ref{pip} shows that, just as London's equation, it is not gauge invariant: adding the gradient of any function $f$ to $\vec{A}$ in equation \ref{pip} adds a term to $\vec{J}$: $$\vec{J} \rightarrow \vec{J} -\frac{3}{4\pi \xi_0}\int\frac{\vec{R}\left[ \vec{R}.\vec{\nabla}f(\vec{r'})\right]e^{-R/\xi}}{R^4}d\vec{r'} .$$ Pippard's equation \ref{pip} replaces the direct proportionality of $\vec{A}$ and $\vec{J}$ by a linear relationship: $\vec{J}(\vec{r})$ now depends on a weighted average of $\vec{A}(\vec{r'})$ over a volume of order $\xi^3$ around $\vec{r}$. It is straightforward to see that the second term in the equation above does not vanish in general. If, for example, $\vec{\nabla}f$ is constant over a range $\xi$, the second ``gauge'' term is obviously non-zero. Thus, in order for equation \ref{pip} to hold, one must impose on $\vec{A}$ conditions which are dictated by the inversion of equation \ref{pip}. The latter yields an expression of $\vec{A}$ as a function of $\vec{J}$ which is not the simple proportionality of equation \ref{3}, but remains a linear relationship: multiplying one by some factor $\lambda$ multiplies the other by the same factor. Boundary conditions imposed by nature and by the sample geometry on $\vec{J}$ determine the gauge of $\vec{A}$. It is obvious by inspection of equation \ref{pip} that, depending on how fast $\vec{A}$ varies with $\vec{r'}$ around $\vec{r}$, the direction and magnitude of $\vec{J}(\vec{r})$ will differ from those of $\vec{A}(\vec{r})/\Lambda$, just as the direction and intensity of $\vec{J}$ and $\sigma \vec{E}$ start differing from Ohm's law in equation \ref{chambers} if $\vec{E}$ varies fast enough over a distance $l$. What matters here is that equation \ref{pip} is not a gauge invariant circulation of $\vec{A}$ over a closed loop, but a gauge dependent weighted average over a small volume around $\vec{r}$.
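The non-vanishing of the gauge term can be checked explicitly in the case of a constant gradient, $\vec{\nabla}f = \vec{c}$ (a short verification of mine, using the kernel exactly as written in equation \ref{pip}): with $d\vec{r'} = R^{2}\,dR\,d\Omega$ and the angular average $\int \hat{R}\,(\hat{R}\cdot\vec{c})\,d\Omega = \frac{4\pi}{3}\,\vec{c}$, the added term reduces to

```latex
-\frac{3}{4\pi \xi_0}\int\frac{\vec{R}\left[\vec{R}\cdot\vec{c}\,\right]e^{-R/\xi}}{R^4}\,d\vec{r'}
 = -\frac{3}{4\pi \xi_0}\cdot\frac{4\pi}{3}\,\vec{c}\int_0^{\infty}e^{-R/\xi}\,dR
 = -\frac{\xi}{\xi_0}\,\vec{c}\;\neq\;0 ,
```

a finite shift of $\vec{J}$, confirming that the relation can only hold in a fixed gauge.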
In order for equation \ref{pip} to have a meaning, a gauge imposed by conditions on $\vec{J}$ determines constraints on $\vec{A}(\vec{r'})$, and the latter object has to be as real as $\vec{J}(\vec{r})$. In conclusion, as we have already guessed by noting that equation \ref{pip} may reduce to London's equation \ref{3} under certain conditions, the complexity introduced by replacing London's equation by Pippard's does not qualitatively change the conclusions one may draw about the ontology of $\vec{A}$ in the superconducting phase. \end{itemize} We have seen that real entities such as magnons, which exist in a ferromagnetic phase, do not exist as well defined excitations in the paramagnetic phase. This might suggest a superficial analogy with the fate of the vector potential, although the latter is not a Goldstone boson. The ontological questions about the magnons in a ferromagnet are of a different category than those about the electromagnetic vector potential in a superconductor, because of the difference between the nature of the broken symmetries in the two cases. Further work is needed to understand fully the ontological implications of the fact that the vector potential, which is a sort of ghost in the gauge invariant phase, with only average measurable properties through its circulation on a closed loop, seems to become a measurable entity and definitely appears to reflect the properties of a material object in the broken gauge symmetry phase. \subsection{The Anderson-Higgs boson} The recent experimental evidence in favour of the Higgs boson in high energy physics is a major experimental and theoretical achievement. However, it came as no big surprise to physicists in condensed matter physics. Indeed, the massless Goldstone bosons, also known as ``Nambu-Goldstone'' particles, which emerge in any continuous broken symmetry phase, are not present in the superconducting phase, because of electromagnetic interactions.
The usual statement is that the Goldstone boson has been absorbed by the massive gauge field bosonic collective mode \cite{simon}. This was realized by P. W. Anderson and published in {\it{Physical Review}} in April 1963, one year before the papers by Higgs and Brout-Englert \cite{pwa}, although as a qualitative suggestion, with no detailed calculation. The missing ingredients in Anderson's paper were non-Abelian fields and relativity, which do not change qualitatively the mechanism of the Anderson-Higgs-Brout-Englert boson. This reflects a large degree of conceptual unity of gauge theories. It is fascinating that our present understanding of nature in such different fields as condensed matter and high energy physics resorts to the same basic theoretical ingredients: gauge theories, spontaneous symmetry breaking, acquisition of mass by gauge bosons, etc. It suggests also, if this were necessary, that there is no such thing as a hierarchy of scientific fields of knowledge in physics. The continuous development of knowledge, with the related continuous improvement of experimental techniques, is as potentially rich in new discoveries and new physical laws in one field as in another. There are as many new surprises at stake for physicists in improving high energy colliders as there are in atomic physics, condensed matter physics, etc., in reaching lower temperatures, larger magnetic fields, larger pressures, etc. \section{Conclusions} \label{conclu} \begin{itemize} \item If the discussion about the vector potential $\vec{A}$ is limited to normal phases, one may conclude that the potential language -- as opposed to the field language -- is a mere theoretical tool, and gauge symmetry a ``description surplus''.
The spontaneously broken gauge symmetry phases, such as superconductivity, point out the importance of the charge conservation associated with gauge invariance, and suggest that the suppression of gauge freedom, which means non-conservation of charge, leads to the emergence of $\vec{A}$ as reflecting the properties of a material object. The understanding and the theoretical description of this object may well still be incomplete, but further advances should not invalidate its connection with $\vec{A}$. \item In section \ref{berry}, I have discussed the Berry phase. Just as the Aharonov-Bohm phase, discussed in section \ref{ab}, it is yet another entity which supports the thesis that the phase of the wave function is a real object, with experimentally testable consequences. Both phases, which depend on the gauge choices, enter expressions of gauge invariant anholonomies: the flux of the Berry curvature, or the flux of a magnetic field through a closed curve. London's equation \ref{3}, as well as Pippard's \ref{pip}, suggest that $\vec{A}$ becomes a real local object, in the gauge imposed by the supercurrent. I am not aware of a thermodynamic phase for which the Berry connection $\vec{\cal{A}}$ would undergo a transformation similar to that of $\vec{A}$ in a superconductor, although it seems plausible that a superfluid phase such as those of $^3$He would be a good candidate. \item Some concepts have appeared a number of times in this paper. That of emergence is another way of expressing a frequent occurrence in nature: the transformation of quantity into quality.
Bind together two spin $1/2$ fermions, and they turn into a boson; a large number of microscopic bosons condense into a macroscopic quantum state, the phase of which is measurable; an adiabatic parallel vector transport over a closed circuit on a sphere results in an anholonomy (Berry phase); an adiabatic parallel transport of a quantum system around a flux tube results in an Aharonov-Bohm phase; cool down a metal, and it undergoes spontaneously broken symmetries of different types, depending on the interactions between the electrons, or on the crystalline symmetry of the atoms: broken translation, rotation, gauge invariance; cool down the universe some three hundred thousand years after the Big Bang, and a sort of metal insulator transition appears, etc. The category of emergence encompasses a number of different examples of transformations from quantity to quality. Phase transitions and broken symmetries are one of them. As quoted above, ``{\it{More is different}}'', wrote P. W. Anderson \cite{PWA2} in a brilliant and devastating attack on reductionism\footnote{It is worth quoting the concluding words of this paper: {\it{...Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920's sums it up even more clearly: FITZGERALD: The rich are different from us. HEMINGWAY: Yes, they have more money.}}}. He wrote: {\it{The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the real problems of the rest of science, much less to those of society}}. \end{itemize} {\bf{Acknowledgements.}} I wish to thank my colleagues of the Laboratoire de Physique des Solides (Universit\'e Paris XI-Campus d'Orsay).
Discussions with Jean-No\"el Fuchs, Mark Goerbig, Gilles Montambaux, Fr\'ed\'eric Pi\'echon, Marc Gabay, Julien Bobroff have been helpful. Opinions and possible misconceptions expressed in this paper, on the other hand, are under my only responsibility. I would like to thank Professor Maximilian Kistler (D\'epartement de Philosophie, IHPST, Universit\'e Paris1), for his attention and stimulating suggestions. Special thanks are due to Jean-No\"el Fuchs for a careful reading of the manuscript and useful remarks.
\section{Introduction} \label{Intro} Photometric space missions such as the French-led CoRoT (Convection, Rotation and planetary Transits; \citealt{2008Mich,2009Baglin}) satellite, NASA's {\it{Kepler}} space telescope \citep{2010Borucki,2010Koch}, NASA's {\it{TESS}} (Transiting Exoplanet Survey Satellite; \citealt{2015Ricker}) and the future PLATO (PLAnetary Transits and Oscillations of stars; \citealt{2014Rauer}) mission are at the forefront of yielding high-quality asteroseismic data of solar-type stars. Combining the seismic data with stellar atmospheric constraints, such as spectroscopic parameters (i.e. metallicity, [Fe/H], and effective temperature, $T_{\rm{eff}}$), and interferometric measurements (i.e. angular diameter), sets a platform for constraining stellar interior physics and fundamental parameters to unprecedented levels (e.g. \citealt{Metcalfe_2010,Metcalfe_2012,2014Pinsonneault,2015Valle,2017Aguirre,2018Joyce,2019Campante,2019Bellinger,2019Andreas,2020Ball,2020Jiang,2020Bowman,2020Farnir,2021Nsamba,2021Deal}, among others). In addition, this information has been employed in the precise characterisation of exoplanetary systems (e.g. \citealt{2015Campante,2020Toledo, 2020Mortier}). To explore the valuable seismic data made available from space missions, various tools have been and continue to be developed or modified on the modelling side. These range from stellar modelling tools employed to explore stellar physics (e.g. \citealt{2008Demarque,2008Christensen,2008Weiss,2015Paxton,2018Paxton}), optimisation tools which make use of either Bayesian techniques, Markov chain Monte Carlo (MCMC) methods, or machine learning algorithms aimed at examining the statistical relationships between stellar models and observational data (e.g. \citealt{2015Silva,2016Bellinger,2017Angelou,2019Rendle,2021aLyttle,2021Chen,Remple2021}), and tools aimed at exploring different regions in the stellar interior structure such as acoustic glitch analysis methods. 
Acoustic glitches are localized sharp variations in the sound speed caused by the ionization zones, as well as by sharp variations in the thermal stratification or in the mean molecular weight at the transition between radiative and convective regions. Acoustic glitches impose specific signatures on the stellar oscillation frequencies that may be used as diagnostics of the helium ionization zone, helium abundance in the envelope, and position of the base of the envelope convection zone or edge of the convective core \citep{1988Vorontsov,1990Gough,1998Thompson,2000Monteiro,2004Basu,2007Houdek,2014Mazumdar,2014Verma,2020Avel}. Unlike in the Sun, only oscillation frequencies of modes of degree $l \leqslant 3$ are observed in solar-type stars. Consequently, over the years studies concerning the inference of detailed information on the stars' internal structure have focused on modes of low degree. Some of these studies have aimed at finding ways to probe the stellar core sizes and examine the physics and physical processes taking place at the core edge (e.g. \citealt{2005Provost,2007Cunha,2010A&ADeheuvels,2011Cunha,2013Silva,2014Brand}). However, modes of $l = 3$ have been observed in only several tens of cases (e.g. \citealt{2010Bruntt,Metcalfe_2010,2017Lund}). Therefore, insights into the deep stellar interior (especially the core structure and size) are in most cases obtained through a combination of oscillation frequencies of spherical degree $l = 0, 1$ and, to a lesser extent, $l = 0, 2$ (e.g. \citealt{2005Popielski,2011Aguirre,2011Cunha,2020Rocha}). In order to isolate information on the stellar interior, the oscillation frequencies of these low-degree acoustic modes should be combined such that the combination retains information about the stellar interior structure and is, simultaneously, mostly independent of the structure of the stellar outer layers. 
\citet{2003Roxburgh} demonstrated that this can be obtained through the computation of the ratios of the small to large frequency separations. These are constructed using five-point separations defined as (\citealt{2005Roxburgh,2013Roxburgh}): \begin{equation} d_{01}(n) = \frac{1}{8} (\nu_{n-1,0} - 4\nu_{n-1,1} + 6\nu_{n,0} - 4\nu_{n,1} + \nu_{n+1,0})~, \label{eq1} \end{equation} \begin{equation} d_{10}(n) = -\frac{1}{8} (\nu_{n-1,1} - 4\nu_{n,0} + 6\nu_{n,1} - 4\nu_{n+1,0} + \nu_{n+1,1})~. \label{eq2} \end{equation} The ratios of small to large frequency separations are then defined as: \begin{equation} r_{01}(n) = \frac{d_{01}(n)}{\Delta\nu_{1}(n)}~, \label{eq3} \end{equation} \begin{equation} r_{10}(n) = \frac{d_{10}(n)}{\Delta\nu_{0}(n+1)}~, \label{eq4} \end{equation} where $0$ and $1$ represent the radial and dipole mode degrees, respectively, $\Delta\nu_{l}$ is the separation between modes of the same spherical degree, $l$, and consecutive radial order, $n$, expressed as $\Delta\nu_{l}~=~\nu_{l,n} - \nu_{l, n-1}$ \citep{1986Ulrich}. The ratios $r_{01}(n)$ and $r_{10}(n)$ are also often combined as \begin{equation} \begin{aligned} r_{010} =& \{r_{01}(n),r_{10}(n),r_{01}(n+1),r_{10}(n+1),r_{01}(n+2),\\ & r_{10}(n+2), ...\}~. \end{aligned} \end{equation} These ratios provide a diagnostic of the stellar interior alone and are almost unaffected by the so-called ``near-surface effects'' (see \citealt{1988Christensen,1988Dziembowski,1997Christensen}). The ratios of degree $l = 0$ and $l = 1$ shown in Eq.~(\ref{eq3}) and Eq.~(\ref{eq4}) have been used to determine the base of the convective envelope in the Sun and other solar-type stars (e.g. \citealt{2009Roxburgh,2012Lebreton,2012Mazumdar,2013Silva}). 
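The five-point separations and ratios defined above translate directly into code. The following is a minimal sketch (the function name is ours), assuming two equal-length arrays of radial ($l=0$) and dipole ($l=1$) frequencies aligned by radial order $n$; real observed mode sets may have gaps that this sketch does not handle:

```python
import numpy as np

def ratios_r01_r10(nu0, nu1):
    """Five-point separations d01(n), d10(n) and the ratios r01(n), r10(n).

    nu0, nu1: frequencies of l=0 and l=1 modes, same length, aligned so
    that nu0[i] and nu1[i] share the same radial order (an assumption of
    this sketch).
    """
    nu0 = np.asarray(nu0, dtype=float)
    nu1 = np.asarray(nu1, dtype=float)
    n = np.arange(1, len(nu0) - 1)  # interior orders where the stencil fits

    d01 = (nu0[n - 1] - 4 * nu1[n - 1] + 6 * nu0[n]
           - 4 * nu1[n] + nu0[n + 1]) / 8.0
    d10 = -(nu1[n - 1] - 4 * nu0[n] + 6 * nu1[n]
            - 4 * nu0[n + 1] + nu1[n + 1]) / 8.0

    r01 = d01 / (nu1[n] - nu1[n - 1])   # d01(n) / Delta_nu_1(n)
    r10 = d10 / (nu0[n + 1] - nu0[n])   # d10(n) / Delta_nu_0(n+1)
    return r01, r10
```

A convenient sanity check: for perfectly regular (asymptotic) frequencies the ratios vanish, while a constant shift $\delta$ of the dipole modes yields $r_{01} = r_{10} = -\delta/\Delta\nu$.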
In addition, during the examination of the efficiency of ratios $r_{010}$ as a seismic indicator of the presence and size of a convective core, \citet{2016Deheuvels} demonstrated that the trend of the ratios $r_{010}$ for stellar models of main-sequence stars can be approximated by quadratic polynomials. Following a similar seismic diagnostic procedure, \citet{2020Viani} showed that one of the coefficients, determined by fitting the second-order polynomial to ratios $r_{010}$ as suggested by \citet{2016Deheuvels}, can be used as an indicator of the amount of core overshoot needed to model a particular star. Furthermore, \citet{2020Viani} were also able to quantify the amount of core overshoot based on ratios $r_{010}$ for a set of {\it{Kepler}} Legacy stars and highlighted hints of a possible trend between stellar mass and core overshoot. Earlier works by \citet{2010Brand} and \citet{2011Aguirre} also argued that the ratios $r_{010}$ are not only sensitive to the presence and size of a stellar core but are also affected by the central hydrogen content, and thus can be used as indicators of the evolutionary state of a star. Taking 16 Cyg A and B as our benchmark stars, we present a novel method aimed at reducing the number of stellar models accepted by the Forward Modelling Technique (involving fitting observed oscillation frequencies and a set of atmospheric constraints) down to the ones that best represent the core of each star. This is attained by a characterisation and comparison of the ratios computed from the models and observations, following an approach similar to that proposed by \citet{2016Deheuvels}. We demonstrate that we are able to further constrain the fraction of hydrogen in the core of both of our benchmark stars, establishing their precise evolutionary states and stellar ages. This article is organized as follows. 
In Section~\ref{observables}, we describe our sample which is composed of two stars, their corresponding sets of observations, the details of the stellar grids, the optimization routines, and the frequency ratio fitting procedures. In Section~\ref{results}, we present our results and discussions, while in Section~\ref{conclusions} we conclude. \section{Constraints, Models, and fitting process} \label{observables} The seismic and atmospheric constraints used in the optimisation process are described in Section~\ref{constraints}. Section~\ref{models} provides a description of the stellar grids and model selection process, while Section~\ref{ratios} details how the ratios $r_{010}$ are used to add extra constraints on the central hydrogen abundance, starting from the accepted models obtained from the forward modelling process. \subsection{Observational and known properties} \label{constraints} We consider the well-studied solar analogues 16 Cyg A and B as our benchmark stars. These stars are in a binary system with precisely measured angular diameters, thus their interferometric radii are available. These constraints are readily available from \citet{2013White}, who made use of the PAVO (Precision Astronomical Visual Observations; \citealt{2008Ireland}) beam combiner at the CHARA (Center for High Angular Resolution Astronomy; \citealt{ten_Brummelaar_2005}) Array. Specifically, the authors derived linear radii $R_{\rm A} = 1.22 \pm 0.02$~\rsun and $R_{\rm B} = 1.12 \pm 0.02$~\rsun for 16 Cyg A and B, respectively. 16 Cyg A and B are among the brightest solar-type stars observed continuously for approximately 2.5 years by the {\it{Kepler}} space telescope, yielding oscillations with exceptional signal-to-noise, allowing for detailed asteroseismic studies (e.g. \citealt{2015Metcalfe,2016Bellinger,2020Farnir}). The seismic data for both these stars have been analysed by \cite{2017Lund}, who extracted frequencies for more than 48 oscillation modes. 
Through the analysis of the acoustic glitch signature on the oscillation frequencies arising from the helium ionization zone, \citet{2014Verma} constrained the surface helium mass fractions of 16 Cyg A and B to be within the intervals $Y_{\rm surf, A}~\in~[0.231, 0.251]$ and $Y_{\rm{surf, B}}~\in~[0.218, 0.266]$, respectively. Finally, the spectroscopic parameters of 16 Cyg A and B adopted here are from \citet{2009Ram}, specifically, $T_{\rm eff, A}~=~5825 \pm 50$~K, [Fe/H]$_{\rm A}~=~0.10 \pm 0.03$~(dex), $T_{\rm eff, B}~=~5750 \pm 50$~K, and [Fe/H]$_{\rm B}~=~0.05 \pm 0.02$~(dex). We stress that the observational constraints adopted in our optimisation process described in Section~\ref{models} are $T_{\rm eff}$, [Fe/H], and individual oscillation frequencies only. \subsection{Asteroseismic modelling and optimisation process} \label{models} We adopted the three stellar model grids (namely, G$_{1.4}$, G$_{2.0}$, and G$_{\rm free}$) from \citet{2021Nsamba}\footnote{Note the change in nomenclature of the grids. Grid G$_{1.4}$, G$_{2.0}$, and G$_{\rm free}$ correspond to grid B, C, and A, respectively, in \citet{2021Nsamba}.} which were constructed using a 1D stellar evolution code (MESA\footnote{Modules for Experiments in Stellar Astrophysics} version 9793; \citealt{2015Paxton,2018Paxton}). All these grids have the same physics inputs, differing only in the treatment of the initial helium mass fraction. 
\begin{table} \centering \caption{Stellar grid variations} \label{grids} \begin{tabular}{c c } \hline Grid name & $\Delta Y / \Delta Z$\\ \hline \hline G$_{1.4}$ & 1.4 \\ G$_{2.0}$ & 2.0 \\ G$_{\rm free}$ & None \\ \hline \end{tabular} \end{table} In grids G$_{1.4}$ and G$_{2.0}$, the initial helium mass fraction, $Y_{i}$, is estimated via a helium-to-heavy element enrichment ratio, ($\Delta Y/\Delta Z$), using the expression \begin{equation} Y_i = \left(\frac{\Delta Y}{\Delta Z} \right) Z_i + Y_0 , \label{law} \end{equation} where $Z_i$ is the initial metal mass fraction and $Y_0$ is the primordial big bang nucleosynthesis helium mass fraction value taken as 0.2484 \citep{2003Cyburt}. Table~\ref{grids} highlights the helium-to-heavy element enrichment ratio used in each grid. A brief highlight of the grid dimensions is given below (refer to \citealt{2021Nsamba} for a detailed description of the uniform model physics used in all the grids); \begin{itemize} \item [-] $M$ $\in$ [0.7 -- 1.25] M$_\odot$ in steps of 0.05 M$_\odot$; \item [-] $Z_i$ $\in$ [0.004 -- 0.04] in steps of 0.002; \item [-] $\alpha_{\rm{mlt}}$ $\in$ [1.2 -- 3.0] in steps of 0.2. \end{itemize} In grid G$_{\rm free}$, the initial helium mass fraction, $Y_i$, is in the range [0.22 -- 0.32] in steps of 0.02. The stellar models in all the grids range from the zero-age main sequence (ZAMS) to the end of the main-sequence phase, i.e. terminal-age main sequence (TAMS). For each of the models, their corresponding adiabatic oscillation frequencies for the spherical mode degrees $l = 0, 1, 2,$ and $3$ were calculated using GYRE \citep{2013Townsend}. In order to select models with observables comparable to the observations (commonly referred to as best-fit models, acceptable models, or optimal models) and derive the stellar properties of our target stars, we make use of the grid-based optimisation tool AIMS (Asteroseismic Inference on a Massive Scale; \citealt{2016Reese,2019Rendle}). 
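The enrichment law of Eq.~(\ref{law}) is a one-line computation; as a small illustration (function names ours, with $Y_0$ fixed to the value adopted in the text):

```python
Y0 = 0.2484  # primordial helium mass fraction (value adopted in the text)

def initial_helium(Z_i, dY_dZ):
    """Initial helium mass fraction from the enrichment law
    Y_i = (dY/dZ) * Z_i + Y0."""
    return dY_dZ * Z_i + Y0

def initial_hydrogen(Z_i, dY_dZ):
    """The remaining mass fraction, X_i = 1 - Y_i - Z_i."""
    return 1.0 - initial_helium(Z_i, dY_dZ) - Z_i
```

For instance, a grid G$_{2.0}$ model with $Z_i = 0.026$ starts with $Y_i = 2.0 \times 0.026 + 0.2484 = 0.3004$.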
For additional details on the AIMS code, please refer to the AIMS documentation\footnote{https://gitlab.com/sasp/aims}. In a nutshell, AIMS combines MCMC (Markov Chain Monte Carlo) and Bayesian schemes to generate a sample of representative stellar models that fit a specific set of classical and seismic constraints of a given star. For both components of our binary, the individual oscillation frequencies were used as seismic constraints while the effective temperature, $T_{\rm{eff}}$, and metallicity, [Fe/H], were the specified classical constraints (see Section~\ref{constraints}). It is worth noting that a well-known offset exists between the model frequencies and observed oscillation frequencies, which hinders a direct comparison and needs to be corrected for (i.e. surface effects; \citealt{1988Christensen,1988Dziembowski,1997Christensen}). We used the two-term surface correction empirical formula suggested by \citet{2014Ball} to rectify the offsets between the model and observed frequencies. This empirical expression describes the frequency offset ($\delta \nu$) as a combination of the cubic and inverse terms, and takes the form \begin{equation} \delta \nu = I^{-1}(Af^{-1} + Bf^{3})~~, \label{correction} \end{equation} where $A$ and $B$ are free parameters, $I$ is the mode inertia, and $f = \nu / \nu_{\rm{ac}}$. Here, $\nu$ is the oscillation frequency and $\nu_{\rm{ac}}$ is the acoustic cut-off frequency, which is linearly related to the frequency of maximum power, $\nu_{\rm{max}}$ (\citealt{1991Brown,1995Kjeldsen}). In the selection of best-fitting models, we consider a total $\chi^2$ which is a combination of seismic and classical constraints. We highlight that a 3$\sigma$ uncertainty cutoff on the classical constraints was applied and the best-fit models are samples of the multivariate likelihood distribution bounded solely by the criterion of $3\sigma$ on the non-seismic data. 
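The two-term correction of Eq.~(\ref{correction}) can be sketched as follows; the function name is ours, and $A$ and $B$ are free parameters fitted per model, not fixed values:

```python
import numpy as np

def two_term_surface_correction(nu, inertia, nu_ac, A, B):
    """Frequency offset delta_nu = (A * f**-1 + B * f**3) / I,
    with f = nu / nu_ac (two-term correction of the Ball & Gizon type).
    A and B must be determined by fitting for each stellar model."""
    f = np.asarray(nu, dtype=float) / nu_ac
    return (A / f + B * f**3) / np.asarray(inertia, dtype=float)

# A corrected model frequency is nu_model + two_term_surface_correction(...).
```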
Furthermore, AIMS allows for a choice of different weights to be applied to the classical and seismic constraints. In this work we chose these weights such that the $\chi_{\rm{total}}^2$ function to be used in the definition of the likelihood is expressed as \begin{equation} \chi_{\rm{total}}^2 = \frac{N_{c}}{N_{\nu}} \left(\chi_{\rm{seismic}}^2\right) + \chi_{\rm{classical}}^2~, \end{equation} where $N_{c}/N_{\nu}$ is the ratio of the number of classical constraints to the number of seismic constraints, and $\chi_{\rm{seismic}}^2$ and $\chi_{\rm{classical}}^2$ are defined as sums of terms $\chi_{i}^2$ with the form \begin{equation} \chi_{i}^2 = \left( \frac{O_i - \theta_i}{\sigma_i} \right)^2 ~, \end{equation} where $O_i$, $\theta_i$, and $\sigma_i$ are the observed value, the model value, and the observational uncertainty, respectively. The observable constraints adopted are seismic (i.e. individual oscillation frequencies) and classical constraints (i.e. $T_{\rm eff}$ and [Fe/H]). The treatment of weights given to the classical and seismic constraints in the model selection process is currently a subject of interest to stellar modellers and has been extensively addressed in the ``PLATO hare and hounds'' exercise for modelling main-sequence stars \citep{2021Cunha}. Finally, the $\chi_{\rm{total}}^2$ is used to obtain the likelihood function, from which the posterior probability distributions (PDFs) for the different stellar properties and their uncertainties are calculated, i.e., in the form of the statistical mean and standard deviation, respectively. 
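The weighting scheme above amounts to the following minimal sketch (AIMS's actual implementation is more general; the function name is ours):

```python
import numpy as np

def chi2_total(obs_nu, mod_nu, sig_nu, obs_cls, mod_cls, sig_cls):
    """Total chi^2 with the seismic term down-weighted by N_c / N_nu,
    the ratio of classical to seismic constraints."""
    chi2_seismic = np.sum(((np.asarray(obs_nu) - mod_nu) / sig_nu) ** 2)
    chi2_classical = np.sum(((np.asarray(obs_cls) - mod_cls) / sig_cls) ** 2)
    return (len(obs_cls) / len(obs_nu)) * chi2_seismic + chi2_classical
```

With this choice, a star with many observed frequencies does not let the seismic term swamp the two classical constraints.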
\subsection{Seismic probe of the core: fitting frequency ratios} \label{ratios} The inability to adequately model the stellar surface effects on the pulsation frequencies has led authors to seek frequency combinations or fitting procedures that are insensitive to those surface layers \citep{2003Roxburgh,2007Cunha,2016roxburgh} and to explore their potential for revealing the physical conditions near the stellar core \citep{2011Cunha,2011Aguirre,2014brandao}. In particular, these studies have shown that the slope ({i.e., the frequency derivative)} of the ratios of small to large frequency separations is sensitive to the gradient of the sound speed in the core, holding information on the stellar age (specifically, on the central hydrogen abundance), as well as on the size of the chemical discontinuity built at the edge of convective cores and on the amount of core overshoot. Following on these works, \cite{2016Deheuvels} argued that the combination of parameters $(a_0,a_1)$ resulting from fitting a second order polynomial of the form \begin{equation} P(\nu) = a_0 + a_1\left({\nu} -\beta \right) + a_2\left({\nu} - \gamma_1 \right)\left({\nu} - \gamma_2 \right)~ \label{poly_fits_sd} \end{equation} to the ratios $r_{010}$, is particularly useful for establishing the evolutionary state of the star and the presence and size of stellar convective cores. \begin{figure*} \includegraphics[width=\columnwidth]{ratios_16CygA.eps} \quad \includegraphics[width=\columnwidth]{ratios_16CygB.eps} \quad \includegraphics[width=\columnwidth]{ratios_16CygA_fits.eps} \quad \includegraphics[width=\columnwidth]{ratios_16CygB_fits.eps} \caption{ Ratios $r_{01}$ and $r_{10}$ for 16 Cyg A (top left and bottom left panels) and B (top right and bottom right panels). The blue vertical line in the top panels corresponds to the cut-off region of the ratios considered in the polynomial fits. 
The dashed lines in the bottom panels show the second-order polynomial fits to $r_{01}$ (black) and $r_{10}$ (red). The high-end frequencies are truncated in the bottom panels (see text for details). } \label{fits} \end{figure*} Here $\beta$, $\gamma_1$, and $\gamma_2$ are chosen to ensure that $P(\nu)$ is a sum of orthogonal polynomials, hence, that the coefficients $a_0$, $a_1$ and $a_2$, which are the parameters inferred from the fit, are uncorrelated. More recently, \cite{2020Viani} have performed similar fits to a sample of {\it{Kepler}} stars confirming a potential correlation between $a_1$ and the amount of core overshoot. In this study we follow \cite{2016Deheuvels} and fit a second order polynomial to the observed ratios of 16~Cyg A and B, as well as to the ratios derived for an ensemble of models representative of these stars identified through the grid-based modelling approach described in Section~\ref{models}. Nevertheless, we note below three differences between our approach and that applied in \cite{2016Deheuvels} that should be kept in mind when comparing the two studies. The first difference is that we fit the sets of ratios $r_{01}$ and $r_{10}$ separately. As argued by \cite{2017roxburgh,2018roxburgh}, from $N$ pairs of frequencies of $(l=0,l=1)$ modes one can only derive $N$ values of the surface-independent quantities. The $r_{01}$ and $r_{10}$ are thus highly correlated and combining them does not add any significant information. In fact, attempts to fit them simultaneously are faced with having to invert nearly singular covariance matrices. In \cite{2016Deheuvels} this problem was detected and mitigated by applying a truncated singular value decomposition to the covariance matrix. Here, we opt, instead, to fit the two sets of ratios separately and compare the results of the two fits. The two sets of ratios computed from the individual mode frequencies of 16~Cyg A and B are shown by different colours in the top panels of Figure~\ref{fits}. 
Quite noticeable on the higher frequency end (to the right of the dashed blue line) is a sudden increase in the ratios. This increase is not currently reproducible by the ratios $r_{01}$ and $r_{10}$ of theoretical models. Whether this is a result of systematic errors on these higher frequencies, which have higher uncertainties, or of missing physics in the stellar models is currently unknown. Nevertheless, given that the low-degree polynomial model proposed by \cite{2016Deheuvels} and used here to fit the ratios cannot capture this feature at the high-end frequency, we truncate the observed ratios of each star at the frequency indicated by the blue dashed line when performing the fits (i.e. at $\nu_{\rm At}=$ 2600 $\mu$Hz and $\nu_{\rm Bt}=$ 2945 $\mu$Hz, respectively). Another feature that is noticeable in Figure~\ref{fits} is the signature of glitches. The signature is superimposed on the smooth behaviour of the ratios and occurs on short frequency scales. The polynomial fits to the $r_{01}$ and $r_{10}$ sets are affected slightly differently by these short-scale variations, leading to non-negligible differences in the parameters inferred from the fits. We therefore take the differences in the parameters inferred from the two fits as a measure of the uncertainty on these parameters resulting from the simplicity of the polynomial model. A second difference between our approach and that of \cite{2016Deheuvels} concerns the definition of the polynomial used in the fit. With the aim of working with dimensionless parameters only, we have slightly changed the polynomial function, such that \begin{equation} P(\nu) = a_0 + a_1\left(\frac{\nu}{\nu_{\rm{max}}} -\beta \right) + a_2\left(\frac{\nu}{\nu_{\rm{max}}} - \gamma_1 \right)\left(\frac{\nu}{\nu_{\rm{max}}} - \gamma_2 \right)~, \label{poly_fits} \end{equation} where the frequency of maximum power is fixed to the observed value (i.e. 
2188 $\mu$Hz and 2561 $\mu$Hz for 16 Cyg A and B, respectively; \citealt{2017Lund}), regardless of the fitting being performed to the observed or model ratios. Therefore, our dimensionless $a_1$ and $a_2$ values are not directly comparable with the values in \cite{2016Deheuvels} (which are expressed in mHz$^{-1}$). The third and final difference is that we do not consider in the fit the correlation between the values of the observed ratios. These correlations are expected from the definition of the ratios (cf. Eqs~(\ref{eq1})-(\ref{eq4})) given that ratios of different $n$ share common frequencies. Instead, we use the emcee algorithm in Python \citep{2013Foreman} to find the parameters that provide the best fit of Eq.~(\ref{poly_fits}) to the observed ratios while ignoring the correlations. We then perform a set of 10,000 Monte Carlo simulations by perturbing the observed frequencies within their errors, assuming the latter are normally distributed. For each simulation we compute new sets of ratios $r_{01}$ and $r_{10}$ and perform new fits. The median and standard deviation of the distribution for each parameter $a_0$, $a_1$ and $a_2$ are then taken as the point estimates and uncertainties. We note that the highest posterior density (HPD) procedures were used to determine the credible intervals, taking 68.27\% and 99.73\% for the 1$\sigma$ and 3$\sigma$ uncertainties, respectively. Briefly, the HPD region at level $\alpha$ is a ($1-\alpha$)-confidence region which satisfies the condition that the posterior density for every point in this interval is higher than the posterior density for any point outside of this interval (see \citealt{99Chen,2015Hari}). We stress that a 100($1-\alpha$)\% HPD interval is preferable in situations where a marginal distribution is not symmetric. 
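The orthogonalisation and fit can be sketched as follows: $\beta$, $\gamma_1$, $\gamma_2$ are obtained from the (unweighted) orthogonality conditions over the sample points, after which the coefficients follow from ordinary linear least squares. This is only a simplified illustration with our own function names; the actual analysis uses emcee and Monte Carlo perturbations of the frequencies:

```python
import numpy as np

def orthogonal_nodes(x):
    """Return beta, gamma1, gamma2 such that 1, (x - beta) and
    (x - gamma1)(x - gamma2) are mutually orthogonal over the sample
    points x (unweighted sums, i.e. correlations neglected)."""
    x = np.asarray(x, dtype=float)
    beta = x.mean()
    # s = gamma1 + gamma2 and p = gamma1 * gamma2 follow from requiring
    # sum P2(x_i) = 0 and sum (x_i - beta) * P2(x_i) = 0.
    s = np.sum((x - beta) * x**2) / np.sum((x - beta) * x)
    p = (s * x.sum() - np.sum(x**2)) / x.size
    g1, g2 = np.roots([1.0, -s, p])
    return beta, g1, g2

def fit_ratios(nu, r, nu_max):
    """Least-squares fit of P = a0 + a1*(x - beta) + a2*(x - g1)*(x - g2),
    with x = nu / nu_max, to the ratios r. Returns (a0, a1, a2)."""
    x = np.asarray(nu, dtype=float) / nu_max
    beta, g1, g2 = orthogonal_nodes(x)
    M = np.column_stack([np.ones_like(x), x - beta, (x - g1) * (x - g2)])
    a, *_ = np.linalg.lstsq(M, np.asarray(r, dtype=float), rcond=None)
    return a
```

Because the basis is orthogonal over the sample points, the normal equations are diagonal, so the fitted $a_0$, $a_1$, $a_2$ are uncorrelated, which is the point of the construction.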
Due to our neglect of the correlations between the ratios in each fit, the standard deviations derived in this manner will be an upper limit to the true formal uncertainties and can be considered as conservative errors on the parameters. The fits to the ratios are illustrated in the lower panels of Figure~\ref{fits}. As we will see in Section~\ref{rat_X}, the error inferred on the parameters through our Monte Carlo analysis is still smaller than the difference in some of the parameters inferred from fitting the two sets of ratios ($r_{01}$ and $r_{10}$; black and red lines in Figure~\ref{fits}, respectively), justifying further our option to simplify the procedure by not considering the correlations. Prior to fitting Eq.~(\ref{poly_fits}) to the ratios, we compute $\beta$, $\gamma_1$ and $\gamma_2$ from the observations, following the procedure explained in the Appendix B of \cite{2016Deheuvels}. As we neglect the correlations between the observed ratios, $\beta$ is simply the mean of the dimensionless frequencies $\nu/\nu_{\rm max}$ considered in the fit. The observed values of $\beta$, $\gamma_1$ and $\gamma_2$, as well as the observed $\nu_{\rm max}$, are used also in the fits to the model ratios, so that the parameters inferred from fitting the model ratios can be directly compared with those derived from fitting the observations. As in \cite{2016Deheuvels}, we find that the coefficient $a_2$ is very small, so that the ratios vary nearly linearly with frequency. Hence, $a_0$ and $a_1$ can be interpreted approximately as the value of the ratio at the middle of the frequency interval considered in the fit and the slope of the linear trend, respectively. We note that for each star, the fits to the model and both sets of observed ratios ($r_{01}$ and $r_{10}$) are all performed in the same frequency range, defined by the observations. This is a critical point, as model and observed ratios need to be compared at the same frequency. 
When performing model-observation comparison of individual values of the ratios, this is assured by first interpolating the model ratios to the observed frequencies. Here, we do not compare individual ratios, but do nevertheless perform a comparison of the parameters inferred from the fit of Eq.~(\ref{poly_fits}) to the model and observed ratios. Therefore, the range of frequencies over which the fits are performed needs to be the same and it impacts directly the value of $a_0$. \section{Results and Discussions} \label{results} Figure~\ref{echelle} shows a comparison of the observed frequencies with the best-fitting models of 16 Cyg A and B obtained from grid G$_{\rm free}$. The trend of the observed frequencies is well reproduced by the best-fitting model frequencies with the two-term surface correction recipe (see Eq.~\ref{correction}) taken into account. A similar trend as shown in Figure~\ref{echelle} is also observed for the best-fitting models of 16 Cyg A and B generated from grids G$_{1.4}$ and G$_{2.0}$. We note that a systematic offset between the observed and corrected frequencies still exists for a handful of points at the highest frequency end. \begin{figure} \includegraphics[width=\columnwidth]{echelle_16CygA.eps} \quad \includegraphics[width=\columnwidth]{echelle_16CygB.eps} \caption{\'{E}chelle diagram of 16 Cyg A (top panel) and B (bottom panel). The $l = 0, 1, 2,$ and $3$ mode frequencies are represented by squares, diamonds, hexagons, and triangle symbols, respectively. The black, blue, and red symbols correspond to the observed frequencies, theoretical frequencies, and corrected frequencies of the different modes. } \label{echelle} \end{figure} This is because high frequencies are more sensitive to the stellar outer region, which is more prone to surface effects. In fact, frequency offsets of up to 15 $\mu$Hz are found at the high-frequency end, and the empirical corrections may not perfectly account for the differences in these regions. 
A detailed analysis of the robustness of the surface correction methods on the main-sequence phase has been presented in \citet{2018Nsamba,2018Basu,2018Compton,2021Cunha}, and for more evolved evolutionary phases in \citet{2017Ball,2020rgensen,2021Ong}. \subsection{Inferred stellar parameters of 16 Cyg A and B} \label{parameters} Grids G$_{1.4}$ and G$_{2.0}$ employ a fixed helium-to-heavy element enrichment ratio (see Table~\ref{grids}) via Eq.~(\ref{law}), used to determine how the initial helium mass fraction changes with metal mass fraction. \citet{2019Verma} demonstrated using a subset of {\it{Kepler}} ``Legacy'' sample stars that the scatter in the relation between initial helium mass fraction and metal mass fraction is significant, rendering this relation unsuitable for single-star studies, especially in the case of population I stars. With this in mind, we consider grid G$_{\rm free}$ as the chosen reference grid since no initial helium restriction is set in this grid other than a lower and upper limit (see Section~\ref{models}). Recently, \citet{2021Nsamba} reported that the inferred stellar masses and radii from grids with a fixed helium-to-heavy element enrichment ratio are systematically lower than those from grids with free initial helium mass fraction. Similar findings were reached by \citet{2021Deal}, while consistent values were found by the same authors for the central hydrogen mass fractions and ages. Therefore, grids G$_{1.4}$ and G$_{2.0}$ are employed here to verify the consistency of our findings concerning these parameters when frequency ratios are applied to tightly constrain the central hydrogen mass fractions and ages of 16 Cyg A and B (see Section~\ref{rat_X}). Table~\ref{grids2} and Table~\ref{grids3} show the parameters inferred in our study, and their corresponding 1$\sigma$ uncertainties, for 16 Cyg A and B, respectively. 
The stellar parameters missing in these tables are not provided because they are not available in the model files of these grids. \begin{table*} \centering \caption{Derived stellar parameters and their associated 1$\sigma$ uncertainties for 16 Cyg A.} \label{grids2} \begin{tabular}{c c ccccccc } \hline Grid name & $M (\rm{M}_\odot$) & $R (\rm{R}_\odot$) & Age (Gyr) & $X_{\rm c}$ & $\alpha_{\rm{mlt}}$ & $Z_{\rm 0}$ & $L (\rm{L_\odot})$ & $Y_{\rm{surf}}$\\ \hline \hline G$_{1.4}$ & 1.08 $\pm$ 0.02 & 1.226 $\pm$ 0.010 & 6.5 $\pm$ 0.3 & 0.04 $\pm$ 0.02 & 1.94 $\pm$ 0.06 & 0.026 $\pm$ 0.002 & 1.62 $\pm$ 0.06 & - \\ G$_{2.0}$ & 1.05 $\pm$ 0.02 & 1.213 $\pm$ 0.008 & 6.4 $\pm$ 0.3 & - & 1.86 $\pm$ 0.06 & 0.026 $\pm$ 0.001 & 1.55 $\pm$ 0.05 & - \\ G$_{\rm{free}}$ & 1.09 $\pm$ 0.03 & 1.233 $\pm$ 0.010 & 6.5 $\pm$ 0.2 & 0.02 $\pm$ 0.01 & 1.90 $\pm$ 0.07 & 0.026 $\pm$ 0.001 & 1.60 $\pm$ 0.06 & 0.229 $\pm$ 0.011 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Derived stellar parameters and their associated 1$\sigma$ uncertainties for 16 Cyg B.} \label{grids3} \begin{tabular}{c c ccccccc } \hline Grid name & $M (\rm{M}_\odot$) & $R (\rm{R}_\odot$) & Age (Gyr) & $X_{\rm c}$ & $\alpha_{\rm{mlt}}$ & $Z_{\rm 0}$ & $L (\rm{L_\odot})$ & $Y_{\rm{surf}}$\\ \hline \hline G$_{1.4}$ & 1.01 $\pm$ 0.02 & 1.102 $\pm$ 0.010 & 7.4 $\pm$ 0.3 & 0.16 $\pm$ 0.02 & 1.81 $\pm$ 0.06 & 0.023 $\pm$ 0.002 & 1.18 $\pm$ 0.04 & - \\ G$_{2.0}$ & 0.98 $\pm$ 0.02 & 1.094 $\pm$ 0.008 & 7.3 $\pm$ 0.3 & - & 1.79 $\pm$ 0.04 & 0.022 $\pm$ 0.001 & 1.18 $\pm$ 0.04 & - \\ G$_{\rm{free}}$ & 1.03 $\pm$ 0.03 & 1.111 $\pm$ 0.010 & 7.3 $\pm$ 0.3 & 0.14 $\pm$ 0.02 & 1.84 $\pm$ 0.07 & 0.022 $\pm$ 0.001 & 1.19 $\pm$ 0.05 & 0.225 $\pm$ 0.012 \\ \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[width=\columnwidth]{hist_radius.eps} \quad \includegraphics[width=\columnwidth]{hist_Lum.eps} \includegraphics[width=\columnwidth]{hist_Ys.eps} \quad \caption{Probability densities of inferred 
parameters of 16 Cyg A (dotted blue lines) and B (dashed red lines) from grid G$_{\rm free}$ and corresponding mean values and standard deviations. Horizontal arrows show 1$\sigma$ range of parameters inferred from interferometric measurements \citep{2013White}, glitch analysis \citep{2014Verma}, grid G$_{1.4}$, G$_{2.0}$, and G$_{\rm{free}}$ according to the numbers' description in each panel. } \label{model_independent} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{hist_mass.eps} \quad \includegraphics[width=\columnwidth]{hist_YO.eps} \quad \includegraphics[width=\columnwidth]{hist_ZO.eps} \quad \includegraphics[width=\columnwidth]{hist_X.eps} \quad \includegraphics[width=10cm,height=6.5cm]{hist_age.eps} \caption{Probability densities of inferred parameters of 16 Cyg A (dotted blue lines) and B (dashed red lines) from grid G$_{\rm free}$ and corresponding mean values and standard deviations. Masses (top left), initial helium mass fractions (top right), initial metal mass fractions (middle left), central hydrogen mass fractions (middle right), and ages (bottom panel). Horizontal arrows show 1$\sigma$ range of predicted parameters from \citet{2013White}, \citet{2016Bellinger}, \citet{2015Metcalfe}, \citet{2020Farnir}, grid G$_{1.4}$, G$_{2.0}$, and G$_{\rm{free}}$ according to the numbers' description in each panel. } \label{model_parameters} \end{figure*} Prior to applying the ratios $r_{01}$ and $r_{10}$ (following a description in Section~\ref{ratios}) as a diagnostic of the interior characteristics of the best-fit models of 16 Cyg A and B, we explore the consistency of the inferred parameters from the employed model grids with literature findings. 
A detailed comparison between the stellar parameters inferred from grids G$_{1.4}$ and G$_{2.0}$ (i.e., employing a fixed value of the enrichment ratio) with the reference grid G$_{\rm free}$ (i.e., free initial helium abundance) and with literature findings is shown in Figure~\ref{model_independent} and Figure~\ref{model_parameters}. The inferred radii and luminosities of 16 Cyg A and B from grids G$_{1.4}$, G$_{2.0}$, and G$_{\rm free}$ agree within 1-2$\sigma$ uncertainties with the model-independent radii and luminosities from \citet{2013White}. This is shown in the top panels of Figure~\ref{model_independent}. We note that \citet{2013White} derived the stellar radius, $R$, by combining the angular diameter, $\theta_{\rm{LD}}$, with the parallax-based distance to the star, $D$, using the expression \begin{equation} R = \frac{1}{2}\theta_{\rm{LD}}D~. \end{equation} They determined the effective temperature, $T_{\rm{eff}}$, of 16 Cyg A and B from the bolometric flux at the Earth, $F_{\rm{bol}}$ (\citealt{Boya}), via the Stefan-Boltzmann relation \begin{equation} T_{\rm{eff}} = \left(\frac{4F_{\rm{bol}}}{\sigma \theta_{\rm{LD}}^2} \right)^{1/4}~. \end{equation} Similarly, the stellar luminosities were calculated from the bolometric flux and the parallax-based distances. The bottom panel of Figure~\ref{model_independent} shows excellent agreement between the surface helium mass fractions inferred from grid G$_{\rm free}$ and the values reported by \citet{2014Verma} using glitch analysis. The top left panel of Figure~\ref{model_parameters} compares the masses of 16 Cyg A and B derived from our grids with those estimated by \citet{2013White}, \citet{2016Bellinger}, and \citet{2020Farnir}. The masses of both stars inferred from all the grids agree within 1-2$\sigma$ uncertainties with literature findings. \citet{2013White} report the largest uncertainties on the masses of 16 Cyg A and B.
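The two relations above are straightforward to evaluate; the sketch below (Python) illustrates the unit handling involved. The numerical inputs are placeholders of the order expected for a nearby solar analogue, not the actual measurements of \citet{2013White}.

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # milliarcseconds -> radians
PC_TO_M = 3.0857e16                               # parsec -> metres
R_SUN_M = 6.957e8                                 # solar radius in metres
SIGMA_SB = 5.670374e-8                            # Stefan-Boltzmann constant, W m^-2 K^-4

def radius_from_interferometry(theta_ld_mas, distance_pc):
    """R = (1/2) * theta_LD * D, returned in solar radii."""
    return 0.5 * theta_ld_mas * MAS_TO_RAD * distance_pc * PC_TO_M / R_SUN_M

def teff_from_flux(f_bol, theta_ld_mas):
    """T_eff = (4 F_bol / (sigma * theta_LD^2))^(1/4), with F_bol in W m^-2."""
    theta_rad = theta_ld_mas * MAS_TO_RAD
    return (4.0 * f_bol / (SIGMA_SB * theta_rad ** 2)) ** 0.25

# Placeholder inputs of the order expected for a nearby solar analogue:
print(radius_from_interferometry(0.539, 21.1))  # ~1.2 R_sun
print(teff_from_flux(1.15e-10, 0.539))          # ~5.9e3 K
```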
This is because they estimated the stellar masses from the scaling relation between the large frequency separation of solar-like oscillations, $\Delta \nu$, and the mean density of the star \citep{1986Ulrich}, which takes the form \begin{equation} \frac{\Delta \nu}{\Delta \nu_\odot} = \left( \frac{M}{M_\odot} \right)^{1/2} \left(\frac{R}{R_\odot} \right)^{-3/2} ~, \label{scaling} \end{equation} where $R$ is deduced from the angular diameter measurement. \citet{2020Farnir} employed a method (WhoSGLAd; \citealt{2019Farnir}) which generates only a handful of best-fit models, from which stellar parameters and their corresponding uncertainties are deduced. An interesting aspect of the ``WhoSGLAd'' optimisation tool is its capability to explore a set of seismic indicators, including acoustic glitches, so as to generate models that are representative of the stellar structure. We note that \citet{2020Farnir} carried out numerous simulations varying the model input physics and the observable constraints of 16 Cyg A and B. The results presented here are from their Table~4, which they obtained by varying various stellar parameters, including $T_{\rm{eff}}$, and implementing turbulent mixing with a coefficient $D_{\rm{turb}}$ = 7500 cm$^{2}\,$s$^{-1}$. The best-fit models in Table~4 of \citet{2020Farnir} satisfy the seismic data, interferometric radii, luminosities, effective temperatures, and surface helium abundances of both 16 Cyg A and B included in their optimisation process, but not the spectroscopic metallicities. In contrast, our non-seismic constraints, i.e., $T_{\rm{eff}}$ and [Fe/H], were satisfied by our best-fit models of both stars. This also explains the slight differences between the stellar parameters derived in this work and those reported by \citet{2020Farnir}. Moreover, \citet{2020Farnir} considered 1$\sigma$ uncertainties on the non-seismic parameters, while we considered 3$\sigma$ uncertainties.
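Inverting Eq.~(\ref{scaling}) for the mass is a one-line computation; the sketch below makes explicit why scaling-based masses inherit the radius uncertainty cubed. The solar reference value $\Delta\nu_\odot = 135.1\,\mu$Hz and the inputs are illustrative, of the order measured for 16 Cyg A.

```python
def mass_from_scaling(dnu_uhz, radius_rsun, dnu_sun_uhz=135.1):
    """Invert Delta_nu/Delta_nu_sun = (M/M_sun)^(1/2) (R/R_sun)^(-3/2):
    M/M_sun = (Delta_nu/Delta_nu_sun)^2 * (R/R_sun)^3.
    Any fractional radius error therefore enters the mass three-fold."""
    return (dnu_uhz / dnu_sun_uhz) ** 2 * radius_rsun ** 3

# Illustrative inputs of the order measured for 16 Cyg A:
print(mass_from_scaling(103.5, 1.22))  # ~1.07 M_sun
```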
This further explains the differences between the sizes of our 1$\sigma$ uncertainties and those of \citet{2020Farnir}. \citet{2016Bellinger} employed an approach based on machine learning to estimate the fundamental stellar parameters of 16 Cyg A and B from a set of classical and asteroseismic observations, i.e. effective temperatures, surface gravities, and metallicities from \citet{2009Ram}; radii and luminosities from \citet{2013White}; and frequencies from \citet{2015Davies}. This method involves developing a multiple-regression model capable of characterising observed stars. The construction of such a model requires a matrix of evolutionary simulations and a machine learning algorithm to unveil the relationship between model quantities and the observable quantities of the targeted stars (see \citealt{2016Bellinger} for details). It is also worth noting that \citet{2016Bellinger} included various variable model parameters in their grid, i.e. mass, chemical composition, mixing length parameter, overshoot coefficient, $\alpha_{\rm{ov}}$, and the diffusion multiplication factor, $D$, which they find to decrease linearly with mass. Our grids use a constant factor, i.e., $D = 1$. Although the masses and radii of 16 Cyg A and B inferred from all our grids agree within 1$\sigma$ uncertainties (see Table~\ref{grids2} and Table~\ref{grids3}), the central values of the masses and radii from grid G$_{1.4}$ are higher than those from grid G$_{2.0}$. This stems from the different helium-to-heavy element ratios adopted in the two grids, i.e., $\Delta Y/\Delta Z$ = 1.4 and 2.0 for grids G$_{1.4}$ and G$_{2.0}$, respectively. A comprehensive review of the differences and systematic uncertainties in the fundamental stellar parameters arising from using an enrichment ratio versus treating the initial helium mass fraction as a free variable in stellar grids is given in \citet{2021Nsamba}.
Again, there is a 1$\sigma$ agreement between the initial helium mass fractions (top right panel of Figure~\ref{model_parameters}) and the central hydrogen mass fractions (middle right panel of Figure~\ref{model_parameters}) obtained from our grids and the results of \citet{2016Bellinger}. Concerning the hypothesis that 16 Cyg A and B were formed at the same time and with the same initial composition (conditions that were not assumed from the outset in this work), grid G$_{1.4}$ yields initial metal mass fractions which agree within 1$\sigma$ uncertainties, while grid G$_{\rm free}$ yields values that agree within 3$\sigma$ uncertainties. We find excellent agreement between the initial metal mass fractions of 16 Cyg B reported by \citet{2016Bellinger} and those from our grids, while those of 16 Cyg A agree only within 2$\sigma$ uncertainties (middle left panel of Figure~\ref{model_parameters}). The stellar ages inferred from our grids conform to the hypothesis that 16 Cyg A and B are coeval only within 2-3$\sigma$ uncertainties (bottom panel of Figure~\ref{model_parameters}). It is worth noting that the results from \citet{2020Farnir} and \citet{2019Bellinger} reported in this work are those derived without imposing common age and initial composition restrictions. We highlight that \citet{2015Metcalfe} used the Asteroseismic Modeling Portal (AMP; \citealt{Met2009}) on a grid of models which did not include overshoot, but considered diffusion and settling of helium using the prescription of \citet{Michaud}. The selection of best-fit models was carried out via a $\chi^2$ fit to a set of seismic and spectroscopic constraints, considering 3$\sigma$ uncertainties on the spectroscopic parameters. We note that, when investigating the reliability of asteroseismic inferences, \citet{2015Metcalfe} explored different combinations of observational constraints.
The results of \citet{2015Metcalfe} reported in this article are those derived taking into account all the observable constraints of 16 Cyg A and B, i.e. seismic and spectroscopic constraints. In addition, \citet{2020Farnir} reported stellar ages of 16 Cyg A and B derived by varying a series of model input physics, such as solar metallicity mixtures, opacities, overshoot, undershoot, turbulent mixing coefficients, element diffusion, and surface correction options, as well as imposing different treatments, like different choices of seismic and atmospheric constraints in the optimisation process and restrictions (e.g. with and without common age and initial composition). They report the ages of 16 Cyg A and B to span the range [6.2 - 7.8] Gyr. These results are in agreement with the values returned from our grids. We now down-select the best-fit models from our grids to the ones that predict ratios $r_{01}$ and $r_{10}$ comparable to the observed values. \subsection{Ratios $r_{01}$ and $r_{10}$ as a diagnostic for central hydrogen abundances} \label{rat_X} To further constrain the central hydrogen abundance of 16 Cyg A and B, we down-select the set of best-fit models obtained following the forward modelling routine described in Section~\ref{models} and used to estimate the stellar parameters shown in Section~\ref{parameters}. We perform a second order polynomial fit to the ratios $r_{01}$ and $r_{10}$ derived for the best-fit models and for the observations of both target stars (following the description in Section~\ref{ratios}), extracting the first two polynomial coefficients, $a_{0}$ and $a_{1}$. Table~\ref{rati_1} shows the $a_{1}$ and $a_{0}$ values and their corresponding 3$\sigma$ uncertainties deduced from the second order polynomial fit to the observed ratios $r_{01}$ and $r_{10}$ of 16 Cyg A. Similar results are shown in Table~\ref{rati_2} for 16 Cyg B.
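The coefficient-extraction step can be sketched with a standard least-squares polynomial fit. The centring and scaling of the frequencies below is an assumption made purely for illustration; the exact frequency normalisation and uncertainty weighting used in Section~\ref{ratios} are not reproduced here.

```python
import numpy as np

def fit_ratio_coefficients(nu, ratios, deg=2):
    """Least-squares fit ratios(x) ~ a0 + a1*x + a2*x^2, where x is a
    centred/scaled frequency variable (an assumed normalisation).
    Returns the independent (a0) and linear (a1) coefficients."""
    x = (np.asarray(nu) - np.mean(nu)) / np.mean(nu)
    coeffs = np.polynomial.polynomial.polyfit(x, ratios, deg)  # [a0, a1, a2]
    return coeffs[0], coeffs[1]

# Synthetic ratios with a mild downward slope, mimicking r01 of a main-sequence star:
nu = np.linspace(1500.0, 3000.0, 20)       # frequencies in muHz (illustrative)
x = (nu - nu.mean()) / nu.mean()
r01 = 0.037 - 0.056 * x + 0.01 * x ** 2    # exact quadratic, so the fit recovers it
a0, a1 = fit_ratio_coefficients(nu, r01)
print(round(a0, 4), round(a1, 4))          # 0.037 -0.056
```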
Comparison of the values of $a_{1}$ inferred from fitting $r_{01}$ and $r_{10}$ shows that their difference is larger than the 1$\sigma$ uncertainties on this coefficient for both 16 Cyg A and B. For $a_{0}$, the difference between the two values is comparable to the 1$\sigma$ uncertainties in the case of 16 Cyg A, but much larger than the 1$\sigma$ uncertainties in the case of 16 Cyg B. In practice, to account for the added uncertainty brought by the differences between the two measurements of $a_{0}$ and $a_{1}$, our down-selection considers all best-fit models that are within 3$\sigma$ uncertainties of either of the values derived for these parameters. \begin{table} \centering \caption{($a_{1}, a_{0}$) values and their associated lower and upper 3$\sigma$ uncertainties determined from the observed ratios $r_{01}$ and $r_{10}$ of 16 Cyg A. } \label{rati_1} \begin{tabular}{c c c c c c c} \hline Ratios & $a_{1}$ & $\epsilon_{-a_{1}}$ & $\epsilon_{+a_{1}}$ & $a_{0}$ & $\epsilon_{-a_{0}}$ & $\epsilon_{+a_{0}}$ \\ \hline \hline $r_{01}$ & -0.0563 & 0.0028 & 0.0029 & 0.0372 & 0.0003 & 0.0004 \\ $r_{10}$ & -0.0548 & 0.0025 & 0.0023 & 0.0373 & 0.0002 & 0.0003 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{($a_{1}, a_{0}$) values and their associated lower and upper 3$\sigma$ uncertainties determined from the observed ratios $r_{01}$ and $r_{10}$ of 16 Cyg B.
} \label{rati_2} \begin{tabular}{c c c c c c c} \hline Ratios & $a_{1}$ & $\epsilon_{-a_{1}}$ & $\epsilon_{+a_{1}}$ & $a_{0}$ & $\epsilon_{-a_{0}}$ & $\epsilon_{+a_{0}}$ \\ \hline \hline $r_{01}$ & -0.0501 & 0.0021 & 0.0024 & 0.0275 & 0.0002 & 0.0002 \\ $r_{10}$ & -0.0513 & 0.0020 & 0.0024 & 0.0263 & 0.0003 & 0.0002 \\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=\columnwidth]{a0a1_Xc_A_1.4_combined.eps} \quad \includegraphics[width=\columnwidth]{a0a1_Xc_B_1.4_combined.eps} \quad \includegraphics[width=\columnwidth]{combined_a0a1_free_Xc_A_free.eps} \quad \includegraphics[width=\columnwidth]{combined_a0a1_Xc_B_free.eps} \caption{Location of best-fit models for 16 Cyg A (left) and B (right) in the ($a_1$, $a_0$) plane considering both $r_{01}$ and $r_{10}$ fits for grid G$_{1.4}$ (top) and G$_{\rm free}$ (bottom). $a_0$ and $a_1$ are the independent term and linear coefficient of the fitted second order polynomial, respectively. Colours indicate the hydrogen mass fractions in the core. The black box indicates the observational measurements for each star, estimated from $r_{01}$ and $r_{10}$ fits and their $3\sigma$ uncertainties (see text for details).} \label{a0a1fits_1} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{a0a1_Xc_A_2.0_Y_combined.eps} \quad \includegraphics[width=\columnwidth]{combined_a0a1_Xc_B_2.0.eps} \caption{Location of best-fit models for 16 Cyg A (left) and B (right) in the ($a_1$, $a_0$) plane considering both $r_{01}$ and $r_{10}$ fits for grid G$_{2.0}$. $a_0$ and $a_1$ are the independent term and linear coefficient of the fitted second order polynomial, respectively. Colours indicate the stellar ages. 
The black box indicates the observational measurements for each star, estimated from $r_{01}$ and $r_{10}$ fits and their $3\sigma$ uncertainties (see text for details).} \label{fits_1} \end{figure*} Figure~\ref{a0a1fits_1} shows the best-fit models of 16 Cyg A and B in the ($a_1$, $a_0$) plane (see Section~\ref{models} for details on how the best-fit models were obtained). Note that, for each model, there are two estimates of $a_0$ and two estimates of $a_1$, derived from fitting $r_{01}$ and $r_{10}$, respectively. The results are shown for grids G$_{1.4}$ (top panels) and G$_{\rm free}$ (bottom panels), colour-coded according to their corresponding central hydrogen mass fraction, $X_c$. The black box shown in each panel encompasses the two estimates of $a_0$ and $a_1$, including their $3\sigma$ uncertainties. A similar representation of the best-fit models of 16 Cyg A and B in the ($a_1$, $a_0$) plane for grid G$_{2.0}$ is shown in Figure~\ref{fits_1}, colour-coded according to their corresponding stellar ages. In all the panels of Figure~\ref{a0a1fits_1}, it can be seen that models clearly separate according to their central hydrogen mass fractions. This is not surprising, since the ratios $r_{01}$ and $r_{10}$ not only carry information about the physical processes near the stellar cores but are also sensitive to the central hydrogen content. Models which occupy the same region as each star (black box) in the ($a_1$, $a_0$) plane of Figure~\ref{a0a1fits_1} simultaneously reproduce the observed trend of the ratios and the observational constraints used in the forward modelling process highlighted in Section~\ref{constraints}. We were unable to find models in grid G$_{2.0}$ which simultaneously satisfy the ratios and the other observational constraints used in the forward modelling of 16 Cyg B. This is evident in the right panel of Figure~\ref{fits_1}.
The top right panel of Figure~\ref{a0a1fits_1} shows that only a few models which simultaneously satisfy the ratios and the other observational constraints used in the forward modelling of 16 Cyg B were found using grid G$_{1.4}$. In contrast, we found more models in grid G$_{\rm free}$ which simultaneously satisfy the ratios ($r_{01}$ and $r_{10}$) and the other observational constraints used in the forward modelling of 16 Cyg B. \begin{figure*} \includegraphics[width=\columnwidth]{Age_X_A_r01.eps} \quad \includegraphics[width=\columnwidth]{Age_X_B_r01.eps} \caption{Age vs. X$_c$: Parameters of best-fit models of grid G$_{\rm free}$ which span the 3$\sigma$ uncertainty region of the observed ($a_1$, $a_0$) values determined from $r_{01}$ and $r_{10}$ for 16 Cyg A (left panel) and B (right panel), colour-coded according to their corresponding initial metal mass fractions.} \label{t_X_AB_1} \end{figure*} \begin{figure*} \includegraphics[width=\columnwidth]{Age_X_A_1.4_r10.eps} \quad \includegraphics[width=\columnwidth]{Age_X_B_1.4_r10.eps} \caption{Age vs. X$_c$: Parameters of best-fit models of grid G$_{1.4}$ which span the 3$\sigma$ uncertainty region of the observed ($a_1$, $a_0$) values determined from $r_{10}$ for 16 Cyg A (left panel) and B (right panel), colour-coded according to their corresponding initial metal mass fractions.} \label{t_X_AB_2} \end{figure*} This is shown in the bottom right panel of Figure~\ref{a0a1fits_1}. We stress here that grid G$_{\rm free}$ did not include any restriction in the estimation of the model initial helium mass fraction, while the model initial helium mass fraction in grids G$_{1.4}$ and G$_{2.0}$ was determined via a helium-to-heavy element enrichment ratio. Therefore, our results further demonstrate that the helium-to-heavy element enrichment law may not be suitable for studies of individual stars. This is consistent with literature findings, e.g. \citet{2021Nsamba, 2021Deal}.
Next, we extract the models of 16 Cyg A and B from grid G$_{\rm free}$ which fall within the error box of the observed data in the ($a_1$, $a_0$) plane (see Figure~\ref{t_X_AB_1}). Figure~\ref{t_X_AB_2} shows the models extracted in a similar way from grid G$_{1.4}$. Negative linear trends between age and central hydrogen content are evident in the left and right panels of Figure~\ref{t_X_AB_2} for 16 Cyg A and B, respectively. Models of 16 Cyg A and B with high central hydrogen mass fractions have lower ages, while those with low central hydrogen mass fractions have higher ages. This is expected based on the theoretical description of chemical abundance evolution in main-sequence stars (e.g. \citealt{1990Kippe,2000Prialnik,2010Aerts}). It is worth noting that this negative linear trend is found for grid G$_{1.4}$ (Figure~\ref{t_X_AB_2}), while no such trend is observed for grid G$_{\rm free}$ (Figure~\ref{t_X_AB_1}). This most probably stems from the linear relation used in grid G$_{1.4}$ to estimate the model abundances via the chemical enrichment ratio (see Eq.~\ref{law}). Furthermore, this demonstrates that, when an enrichment law is adopted, the impact of stellar aging on the hydrogen abundance is directly visible in the selection of models, while when the initial helium mass fraction, $Y_{i}$, is set to be independent of $Z$, that correlation is diluted by the increase in the possible combinations of stellar chemical content (see Figure~\ref{t_X_AB_1}). Based on Eq.~(\ref{law}), applying the optimal initial helium mass fraction values returned from the forward modelling of 16 Cyg A and B using grid G$_{\rm free}$ (i.e. 0.270$\pm$0.013 and 0.263$\pm$0.014, respectively), together with the corresponding optimal initial metal mass fraction values shown in Table~\ref{grids2} and Table~\ref{grids3}, we deduce that a helium-to-heavy element enrichment ratio of $\sim$0.8 is required to model both components of the 16 Cyg binary system.
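The quoted enrichment ratio follows from inverting the linear enrichment law; in the sketch below the primordial helium abundance $Y_p = 0.2484$ is an assumed value (the text relies on Eq.~\ref{law}, whose exact $Y_p$ is not restated here), so the numbers are indicative only.

```python
def enrichment_ratio(y_init, z_init, y_primordial=0.2484):
    """Invert the linear enrichment law Y_i = Y_p + (dY/dZ) * Z_i:
    dY/dZ = (Y_i - Y_p) / Z_i. The primordial value Y_p = 0.2484 is an
    assumed input, not a value quoted in the text."""
    return (y_init - y_primordial) / z_init

# Optimal G_free values quoted in the text for 16 Cyg A and B:
dy_dz_A = enrichment_ratio(0.270, 0.026)
dy_dz_B = enrichment_ratio(0.263, 0.022)
print(round(dy_dz_A, 2), round(dy_dz_B, 2))  # roughly 0.8 and 0.7
```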
This value is smaller than the range reported for the Sun by \citet{Serenelli_2010}, i.e. $1.7 \leqslant \Delta Y / \Delta Z \leqslant 2.2$, depending on the choice of solar composition. However, our inferred value is consistent with the range of values reported by \citet{2019Verma} using a sample of 38 {\it{Kepler}} ``LEGACY'' stars, i.e. $\Delta Y / \Delta Z \in$~[0.38 -- 2.07]. This range was determined through a combination of surface helium abundances based on the analysis of glitch signatures caused by the ionisation of helium, and initial helium abundances determined through abundance differences caused by gravitational settling in stellar models. Furthermore, helium-to-heavy element enrichment ratio values below 1 have also been reported in \citet{2015Silva,Aguirre_2017,2021Nsamba}. From the panels of Figure~\ref{t_X_AB_1} and Figure~\ref{t_X_AB_2}, we see that our down-selection of the models leads to a central hydrogen mass fraction for 16 Cyg A spanning the range [0.01 -- 0.06], while for 16 Cyg B we find values within [0.12 -- 0.19]. Moreover, the ages of 16 Cyg A and B are found to be in the ranges [6.0 -- 7.4] Gyr and [6.4 -- 7.8] Gyr, respectively. Finally, the initial metal mass fractions of 16 Cyg A and B lie in the ranges [0.023 -- 0.029] and [0.021 -- 0.026], respectively. Lastly, it can be seen that models which conform to the binary hypothesis that 16 Cyg A and B were born from the same molecular cloud (i.e., implying the same initial chemical composition) at approximately the same time have ages within [6.4 -- 7.4] Gyr and initial metal mass fractions within [0.023 -- 0.026]. We note that these values are based on results from our reference grid, i.e. grid G$_{\rm free}$ (see panels of Figure~\ref{t_X_AB_1}).
\section{Summary and conclusions} \label{conclusions} In this article, we adopted 16 Cyg A and B as our benchmark stars and presented a novel approach which allows the sample of best-fit stellar models returned from forward modelling techniques (involving fitting the observed individual oscillation frequencies and spectroscopic constraints, i.e. metallicity and effective temperature) to be narrowed down to the ones that better represent the core of each star. Our investigation uses the ratios $r_{01}$ and $r_{10}$ to constrain the central hydrogen mass fraction of 16 Cyg A and B. This is attained by fitting a second order polynomial to the ratios ($r_{01}$ and $r_{10}$) of both the models obtained from forward modelling and the observed data of 16 Cyg A and B. By considering the linear ($a_{1}$) and independent ($a_{0}$) coefficients of the second order polynomial fits, we find that the models spread out in the ($a_{1}$, $a_{0}$) plane according to their respective central hydrogen mass fractions. This approach allowed for a selection of models which simultaneously satisfy the ratios ($r_{01}$ and $r_{10}$) and the other observational constraints used in the forward modelling process, implying that these selected models also satisfy the interior conditions of our target stars. Following this approach, we find that the central hydrogen content of 16 Cyg A and B lies in the ranges [0.01 -- 0.06] and [0.12 -- 0.19], respectively. Moreover, a common age and initial metal mass fraction for the two stars requires these parameters to lie in the ranges [6.4 -- 7.4] Gyr and [0.023 -- 0.026], respectively. Furthermore, our findings show that grids in which the model initial helium mass fraction is determined via a helium-to-heavy element enrichment ratio may not always include models that satisfy the core conditions set by the observed ratios. This was particularly evident when modelling 16 Cyg B.
\section*{Acknowledgements} The authors acknowledge the dedicated team behind NASA's {\it{Kepler}} mission. BN thanks Achim Weiss, Earl P. Bellinger, Selma E. de Mink, and the Stellar Evolution research group at the Max-Planck-Institut f\"{u}r Astrophysik (MPA) for useful comments on this article. B.N. also acknowledges postdoctoral funding from the Alexander von Humboldt Foundation and the ``Branco Weiss fellowship -- Science in Society'' through the SEISMIC stellar interior physics group. M. S. Cunha is supported by national funds through Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT, Portugal) in the form of a work contract and through the research grants UIDB/04434/2020, UIDP/04434/2020 and PTDC/FIS-AST/30389/2017, and by FEDER -- Fundo Europeu de Desenvolvimento Regional through COMPETE2020 -- Programa Operacional Competitividade e Internacionaliza\c{c}\~{a}o (grant: POCI-01-0145-FEDER-030389). T.L.C.~is supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) in the form of a work contract (CEECIND/00476/2018). We thank the reviewer for the constructive remarks. \section*{Data Availability} Data used in this article are available in machine-readable form. The details of the ``inlist'' files used in the stellar evolution code (MESA) are described in \citet{2021Nsamba}. \bibliographystyle{mnras}
\section{Introduction} {\it Shift Radix Systems} and their relation to beta-numeration seem to have appeared first in Hollander's PhD thesis~\cite{Hollander:96} from 1996. Already in 1994, Vivaldi~\cite{Vivaldi:94} studied similar mappings in order to investigate rotations with round-off errors. In 2005 Akiyama {\it et al.}~\cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} introduced the notion of shift radix system formally and elaborated the connection of these simple dynamical systems to several well-known notions of number systems such as beta-numeration and canonical number systems. We recall the definition of these objects (here for $y\in\mathbb{R}$ we denote by $\lfloor y \rfloor$ the largest $n\in\mathbb{Z}$ with $n\le y$; moreover, we set $\{y\} = y - \lfloor y \rfloor$). \begin{definition}[Shift radix system]\label{def:SRS} Let $d\ge 1$ be an integer and ${\bf r}=(r_0,\dots,r_{d-1}) \in \RR^d$. Then we define the {\em shift radix system} (SRS for short) to be the following mapping $\tau_{{\bf r}}\; : \; \ZZ^d \to \ZZ^d$: For ${\bf z}=(z_0,\dots,z_{d-1})^t \in \ZZ^d$ let \begin{equation}\label{SRSdefinition} \tau_{{\bf r}}({\bf z})=(z_1,\dots,z_{d-1},-\lfloor {\bf r} {\bf z}\rfloor)^t, \end{equation} where ${\bf r} {\bf z}=r_0z_0 +\dots + r_{d-1}z_{d-1}$. If for each $ {\bf z} \in \ZZ^d$ there is $k\in\mathbb{N}$ such that the $k$-fold iterate of the application of $\tau_{\mathbf{r}}$ to $\mathbf{z}$ satisfies $\tau_{{\bf r}}^k({\bf z})={\bf 0}$, we say that $\tau_\mathbf{r}$ has the {\em finiteness property}. \end{definition} It should be noticed that the definition of SRS differs in the literature. Our definition agrees with the one in \cite{BSSST2011}, but the SRS in \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} coincide with our SRS with finiteness property.
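For readers who wish to experiment with orbits of $\tau_\mathbf{r}$, the definition translates directly into a few lines of Python. This is an illustration only, not part of the formal development; the cycle detection is the naive one.

```python
import math

def tau(r, z):
    """One application of tau_r: (z_0,...,z_{d-1}) -> (z_1,...,z_{d-1}, -floor(r.z))."""
    s = sum(ri * zi for ri, zi in zip(r, z))
    return z[1:] + (-math.floor(s),)

def orbit_hits_zero(r, z, max_iter=10000):
    """True if the tau_r-orbit of z reaches the zero vector; revisiting a
    previously seen vector signals a nontrivial cycle, so we stop early."""
    seen = set()
    while len(seen) < max_iter:
        if all(c == 0 for c in z):
            return True
        if z in seen:
            return False
        seen.add(z)
        z = tau(r, z)
    return False

# d = 1: r = (1/2) sends 7 -> -3 -> 2 -> -1 -> 1 -> 0, whereas r = (1)
# produces the nontrivial cycle 1 -> -1 -> 1:
print(orbit_hits_zero((0.5,), (7,)))   # True
print(orbit_hits_zero((1.0,), (1,)))   # False
```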
Equivalently, we might state that $\tau_{{\bf r}}({\bf z})=(z_1,\dots,z_{d-1},z_{d})^t,$ where $z_{d}$ is the unique integral solution of the linear inequality \begin{equation}\label{linearinequalities} 0 \le r_0z_0+\cdots + r_{d-1}z_{d-1} + z_{d} < 1. \end{equation} Thus, the investigation of shift radix systems has natural relations to the study of {\it almost linear recurrences} and {\em linear Diophantine inequalities}. Shift radix systems have many remarkable properties and admit relations to several seemingly unrelated objects studied independently in the past. In the present paper we will survey these properties and relations. In particular, we will emphasize the following topics. \begin{itemize} \item For an algebraic integer $\beta > 1$ the {\em beta-transformation} $T_\beta $ is conjugate to $\tau_\mathbf{r}$ for a parameter $\mathbf{r}$ that is defined in terms of $\beta$. Therefore, the well-known {\it beta-expansions} (R\'enyi \cite{Renyi:57}, Parry \cite{Parry:60}) have a certain finiteness property called {\em property (F)} ({\it cf.} Frougny and Solomyak \cite{Frougny-Solomyak:92}) if and only if the related $\tau_\mathbf{r}$ is an SRS with finiteness property. Pisot numbers $\beta$ are of special importance in this context. \item The {\em backward division mapping} used to define {\it canonical number systems} is conjugate to $\tau_\mathbf{r}$ for certain parameters $\mathbf{r}$. For this reason, characterizing all bases of canonical number systems is a special case of describing all vectors $\mathbf{r}$ giving rise to SRS with finiteness property ({\it cf.} Akiyama {\it et al.} \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}). \item The {\em Schur-Cohn region} (see Schur~\cite{Schur:18}) is the set of all vectors $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathbb{R}^d$ that define a contractive polynomial $X^d + r_{d-1}X^{d-1} + \cdots + r_1X+r_0$.
This region is intimately related to the set of all parameters $\mathbf{r}$ for which each orbit of $\tau_\mathbf{r}$ is ultimately periodic. \item Vivaldi~\cite{Vivaldi:94} started to investigate discretized rotations which are of interest in computer science because they can be performed in integer arithmetic. A fundamental problem is to decide whether their orbits are periodic. It turns out that discretized rotations are special cases of shift radix systems and their periodicity properties have close relations to the conjecture of Bertrand and Schmidt mentioned in the following item. \item Bertrand~\cite{Bertrand:77} and Schmidt~\cite{Schmidt:80} studied beta-expansions w.r.t.\ a Salem number $\beta$. They conjectured that each element of the number field $\mathbb{Q}(\beta)$ admits a periodic beta-expansion. As a Salem number has conjugates on the unit circle it can be shown that this conjecture can be reformulated in terms of the ultimate periodicity of $\tau_{\mathbf{r}}$, where $\mathbf{r}$ is a parameter whose companion matrix $R(\mathbf{r})$ (see \eqref{mata}) has non-real eigenvalues on the unit circle. \item Shift radix systems admit a geometric theory. In particular, it is possible to define so-called {\it SRS tiles} (see Berth\'e {\it et al.}~\cite{BSSST2011}). In view of the conjugacies mentioned above these tiles contain Rauzy fractals~\cite{Akiyama:02,Rauzy:82} as well as self-affine {\em fundamental domains} of canonical number systems (see K\'atai and K\H{o}rnyei~\cite{Katai-Koernyei:92}) as special cases. However, also new tiles with different (and seemingly new) geometric properties occur in this class. It is conjectured that SRS tiles always induce tilings of their representation spaces. This contains the {\em Pisot conjecture} (see {\it e.g.}\ Arnoux and Ito~\cite{Arnoux-Ito:01} or Baker~{\it et al.}~\cite{BBK:06}) in the setting of beta-expansions as a special case. 
\item Akiyama~{\it et al.}~\cite{Akiyama-Frougny-Sakarovitch:07} study number systems with rational bases and establish relations of these number systems to Mahler's $\frac32$-problem ({\it cf.}~\cite{Mahler:68}). Also these number systems can be regarded as special cases of SRS (see Steiner and Thuswaldner~\cite{ST:11}) and there seem to be relations between the $\frac32$-problem and the length of SRS tiles associated with $\tau_{-2/3}$. These tiles also have relations to the Josephus problem (see again \cite{ST:11}). \item In recent years variants of shift radix systems have been studied. Although their definition is very close to that of $\tau_\mathbf{r}$, some of them have different properties. For instance, the ``tiling properties'' of SRS tiles are not the same in these modified settings. \end{itemize} It is important to recognize that the mapping $\tau_\mathbf{r}$ is ``almost linear'' in the sense that it is the sum of a linear function and a small error term caused by the floor function $\lfloor\cdot\rfloor$ occurring in the definition. To make this more precise define the matrix \begin{equation}\label{mata} R({\bf r})= \left( \begin{array}{ccccc} 0 & 1 & 0 & \cdots &0 \\ \vdots & \ddots &\ddots & \ddots & \vdots\\ \vdots & & \ddots &\ddots& 0 \\ 0 & \cdots & \cdots & 0 &1 \\ -r_0 & -r_1 & \cdots & \cdots & -r_{d-1} \end{array} \right) \qquad({\bf r}=(r_0, \ldots,r_{d-1}) \in \RR^d) \end{equation} and observe that its characteristic polynomial \begin{equation}\label{chi} \chi_{\bf r}(X)= X^d +r_{d-1} X^{d-1}+ \cdots + r_1 X + r_0 \end{equation} is also the characteristic polynomial of the linear recurrence $z_{n}+r_{d-1}z_{n-1}+ \cdots + r_0 z_{n-d}=0$. Thus \eqref{linearinequalities} implies that \begin{equation}\label{linear} \tau_\mathbf{r}(\mathbf{z}) = R({\bf r})\mathbf{z} + \mathbf{v}(\mathbf{z}), \end{equation} where $\mathbf{v}(\mathbf{z}) = (0,\ldots,0,\{\mathbf{rz}\})^t$ (in particular, $||\mathbf{v}(\mathbf{z})||_\infty<1$). 
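The decomposition \eqref{linear} can also be checked numerically; the sketch below uses exact rational arithmetic (Python's `fractions` module) to avoid floating-point artefacts near the floor function.

```python
from fractions import Fraction
import math

def tau(r, z):
    """The SRS map: (z_0,...,z_{d-1}) -> (z_1,...,z_{d-1}, -floor(r.z))."""
    s = sum(ri * zi for ri, zi in zip(r, z))
    return z[1:] + (-math.floor(s),)

def almost_linear(r, z):
    """Compute R(r)z + v(z) with v(z) = (0,...,0,{rz})^t, where R(r) is the
    companion matrix: a coordinate shift with last row (-r_0,...,-r_{d-1})."""
    s = sum(ri * zi for ri, zi in zip(r, z))
    Rz = z[1:] + (-s,)
    v = (0,) * (len(z) - 1) + (s - math.floor(s),)  # fractional part {rz}
    return tuple(a + b for a, b in zip(Rz, v))

# Exact rationals sidestep rounding trouble at the floor boundary:
r, z = (Fraction(3, 10), Fraction(-7, 10)), (4, 5)
print(tau(r, z) == almost_linear(r, z))  # True
```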
Moreover, again using \eqref{linearinequalities} one easily derives that \begin{equation}\label{almostlinear} \hbox{either}\quad\tau_\mathbf{r}(\mathbf{z}_1+\mathbf{z}_2) = \tau_\mathbf{r}(\mathbf{z}_1) + \tau_\mathbf{r}(\mathbf{z}_2) \quad \hbox{or}\quad \tau_\mathbf{r}(\mathbf{z}_1+\mathbf{z}_2) = \tau_\mathbf{r}(\mathbf{z}_1) - \tau_\mathbf{r}(-\mathbf{z}_2) \end{equation} holds for $\mathbf{z}_1,\mathbf{z}_2\in \mathbb{Z}^d$. The following sets turn out to be of importance in the study of various aspects of shift radix systems. \begin{definition}[$\D_d$ and $\D_d^{(0)}$] For $d\in \mathbb{N}$ set \begin{equation}\label{DdDd0} \begin{split} \D_{d}:= & \left\{{\bf r} \in \RR^{d}\; :\; \forall \mathbf{z} \in \ZZ^{d} \, \exists k,l \in \NN: \tau_{\bf r}^{k}({\bf z})=\tau_{\bf r}^{k+l}({\bf z}) \right\} \quad\hbox{and}\\ \D_{d}^{(0)}:= & \left\{{\bf r} \in \RR^{d}\; :\; \tau_{\bf r} \mbox{ is an SRS with finiteness property} \right\} . \end{split} \end{equation} \end{definition} Observe that $\D_{d}$ consists of all vectors $\mathbf{r}$ such that the iterates of $\tau_{\bf r}$ are eventually periodic for each starting vector $\mathbf{z}$. In order to give the reader a first impression of these sets, we present in Figure~\ref{d20} images of (approximations of) the sets $\D_{2}$ and $\D_{2}^{(0)}$. As we will see in Section~\ref{sec:Dd0}, the set $\D_d^{(0)}$ can be constructed starting from $\D_d$ by ``cutting out'' polyhedra. Each of these polyhedra corresponds to a nontrivial periodic orbit. Using this fact, in Section~\ref{sec:algorithms} we shall provide algorithms for the description of $\D_d^{(0)}$. (Compare the more detailed comments on $\D_d$ and $\D_d^{(0)}$ in Sections~\ref{Dd} and~\ref{sec:Dd0}, respectively.) \begin{figure} \centering \includegraphics[width=0.4\textwidth]{D20Picture} \caption{The large triangle is (up to its boundary) the set $\D_2$.
The black set is an approximation of $\D_2^{(0)}$ (see~\cite[Figure~1]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}).} \label{d20} \end{figure} \section{The relation between shift radix systems and numeration systems} In the present section we discuss relations between SRS and beta-expansions as well as canonical number systems (see {\it e.g.} \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05,Hollander:96}). Moreover, we provide a notion of radix expansion for integer vectors which is defined in terms of SRS. \subsection{Shift radix systems and beta-expansions}\label{sec:beta} The notion of {\it beta-expansion} (introduced by R\'enyi \cite{Renyi:57} and Parry \cite{Parry:60}) is well-known in number theory. \begin{definition}[Beta-expansion] Let $\beta>1$ be a non-integral real number and define the set of ``digits'' to be $\mathcal{A}=\{0,1,\ldots,\lfloor\beta\rfloor\}.$ Then each $\gamma\in[0,\infty)$ can be represented uniquely by \begin{equation}\label{greedyexp} \gamma= \sum_{i=m}^\infty \frac{a_i}{\beta ^i} \end{equation} with $m\in\mathbb{Z}$ and $a_i\in \mathcal{A}$ chosen in such a way that \begin{equation}\label{greedycondition} 0\leq \gamma-\sum_{i=m}^n \frac{a_{i}}{\beta^{i}} < \frac{1}{\beta^{n}} \end{equation} for all $n\ge m$. Observe that this means that the representation in \eqref{greedyexp} is the {\it greedy expansion} of $\gamma$ with respect to $\beta$. \end{definition} For $\gamma\in [0,1)$ we can use the {\it beta-transform} \begin{equation}\label{betatransform} T_{\beta} (\gamma) = \beta \gamma -\lfloor \beta \gamma \rfloor \end{equation} to establish this greedy expansion, namely, we have $$ a_i = \lfloor \beta T^{i-1}_\beta (\gamma) \rfloor $$ ({\it cf.}\ R\'enyi \cite{Renyi:57}). This no longer holds for $\gamma = 1$, where the beta-transform yields a representation (whose digit string is often denoted by $d(1,\beta)$) different from the greedy algorithm (see \cite{Parry:60}).
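The digit formula $a_i = \lfloor \beta T_\beta^{i-1}(\gamma)\rfloor$ can be illustrated by a short computation. The following sketch (illustrative code, not part of the original text) generates the first few greedy digits of a $\gamma\in[0,1)$ for the golden mean and checks the greedy condition \eqref{greedycondition}.

```python
def beta_digits(beta, gamma, n):
    # first n greedy digits of gamma in [0,1): a_i = floor(beta * T_beta^{i-1}(gamma))
    digits = []
    x = gamma
    for _ in range(n):
        a = int(beta * x)      # floor, since beta * x >= 0
        digits.append(a)
        x = beta * x - a       # x <- T_beta(x)
    return digits

phi = (1 + 5 ** 0.5) / 2       # the golden mean
digs = beta_digits(phi, 0.7, 6)
partial = sum(a / phi ** (i + 1) for i, a in enumerate(digs))
print(digs, 0.7 - partial)     # remainder lies in [0, phi^(-6)) by \eqref{greedycondition}
```

The digits all lie in $\mathcal{A}=\{0,1\}$ here, and the remainder after six digits obeys the greedy bound.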
In the investigation of beta-expansions two classes of algebraic numbers, {\em Pisot} and {\em Salem numbers}, play an important role. For convenience, we recall their definitions. \begin{itemize} \item An algebraic integer $\alpha > 1$ is called a {\em Pisot number} if all its algebraic conjugates have absolute value less than 1. \item An algebraic integer $\alpha > 1$ is called a {\em Salem number} if all its algebraic conjugates have absolute value less than or equal to 1 with at least one of them lying on the unit circle. \end{itemize} Bertrand \cite{Bertrand:77} and Schmidt \cite{Schmidt:80} provided relations between periodic beta-expansions and Pisot as well as Salem numbers (see Section~\ref{sec:Salem} for details). Frougny and Solomyak \cite{Frougny-Solomyak:92} investigated the problem of characterizing base numbers $\beta$ which give rise to {\it finite beta-expansions} for large classes of numbers. Denoting the set of positive reals having finite greedy expansion with respect to $\beta$ by ${\rm Fin}(\beta)$, we say that $\beta>1$ has property (F) if \begin{equation}\label{PropertyF} {\rm Fin}(\beta) = \mathbb{Z}[1/\beta]\cap [0,\infty). \end{equation} As shown in \cite{Frougny-Solomyak:92}, property (F) can hold only for Pisot numbers $\beta$; on the other hand, not all Pisot numbers have property (F). Characterizing all Pisot numbers with property (F) has turned out to be a very difficult problem: many partial results have been established, {\em e.g.} by Frougny and Solomyak~\cite{Frougny-Solomyak:92} (quadratic Pisot numbers), or Akiyama~\cite{Akiyama:00} (cubic Pisot units). The following theorem is basically due to Hollander~\cite{Hollander:96} (except for the notion of SRS) and establishes the immediate relation of the problem under consideration with shift radix systems (recall that $\D_{d}^{(0)}$ is defined in \eqref{DdDd0}).
\begin{theorem}\label{srsbeta} Let $\beta>1$ be an algebraic integer with minimal polynomial $X^{d+1}+b_dX^{d}+\cdots + b_1X+b_0$. Set \begin{equation}\label{ers} r_j := -\left( b_j \beta^{-1} + b_{j-1}\beta^{-2} + \cdots + b_0 \beta^{-j-1} \right), \quad 0\le j\le d-1. \end{equation} Then $\beta$ has property (F) if and only if $(r_0,\ldots, r_{d-1}) \in \D_{d}^{(0)}$. \end{theorem} \begin{remark} Observe that $r_0,\ldots,r_{d-1}$ in \eqref{ers} can also be defined in terms of the identity $$ X^{d+1}+b_d X^{d} + \dots +b_{1} X+ b_0 =(X-\beta)(X^{d}+r_{d-1}X^{d-1} +\dots + r_0). $$ \end{remark} It turns out that Theorem \ref{srsbeta} is an immediate consequence of the following more general observation ({\it cf.}\ Berth\'e {\it et al.}~\cite{BSSST2011}). \begin{proposition}\label{prop:betanumformula} Under the assumptions of Theorem \ref{srsbeta} and denoting $\mathbf{r}=(r_0,\ldots,r_{d-1})$ we have \begin{equation} \label{eq:conj} \{\mathbf{r}\tau_\mathbf{r}(\mathbf{z})\}=T_\beta(\{\mathbf{r}\mathbf{z}\}) \quad \mbox{for all } \mathbf{z}\in\mathbb{Z}^d. \end{equation} In particular, the restriction of $T_\beta$ to $\mathbb{Z}[\beta]\cap[0,1)$ is conjugate to $\tau_\mathbf{r},$ {\em i.e.}, denoting $\Phi_{\bf r}:\mathbf{z}\mapsto\{\mathbf{r}\mathbf{z}\}$ we have the following commutative diagram. $$ \CD \ZZ^d @> \tau_{{\bf r}} >>\ZZ^d\\ @V\Phi_{\bf r} VV @VV\Phi_{\bf r} V\\ \mathbb{Z}[\beta]\cap [0,1) @> T_{\beta} >> \mathbb{Z}[\beta]\cap [0,1) \endCD $$ \end{proposition} \begin{proof} Let the notations be as in Theorem~\ref{srsbeta}. 
Let $\mathbf{z}=(z_0,\ldots,z_{d-1})^t\in\mathbb{Z}^d,$ $z_d=-\lfloor\mathbf{r}\mathbf{z}\rfloor$, and $\mathbf{b}=(b_0,\ldots,b_d).$ Then we have, with the $(d+1)\times(d+1)$ companion matrix $R(\mathbf{b})$ defined analogously as $R({\bf r})$ in~\eqref{mata} (note that the vector $\mathbf{b}$ has $d+1$ entries), \begin{align} \{(r_0,\ldots,r_{d-1},1)R(\mathbf{b})(z_0,\ldots,z_d)^t\} &= \{(-b_0, r_0-b_1,\ldots,r_{d-1}-b_d)(z_0,\ldots,z_d)^t\} \nonumber\\ &= \{ -b_0 z_0 + (r_0-b_1)z_1 +\cdots + (r_{d-1} - b_d) z_d \} \nonumber\\ &= \{ r_0 z_1 + \cdots +r_{d-1} z_d \} \label{RBfirst} \\ &= \{ (r_0,\ldots, r_{d-1})(z_1,\ldots, z_d)^t \} \nonumber\\ &= \{\mathbf{r}\tau_\mathbf{r}(\mathbf{z})\}.\nonumber \end{align} In the third identity we used that $b_0,\ldots,b_d,z_0,\ldots,z_d$ are integers. Observing that $(r_0,\ldots,r_{d-1},1)$ is a left eigenvector of the matrix $R(\mathbf{b})$ with eigenvalue $\beta$ we conclude that \begin{align} \{(r_0,\ldots,r_{d-1},1)R(\mathbf{b})(z_0,\ldots,z_d)^t\} &= \{\beta (r_0,\ldots,r_{d-1},1) (z_0,\ldots,z_d)^t\} \nonumber\\ &= \{\beta ({\bf rz} +z_d)\} \nonumber\\ &= \{\beta ({\bf rz} - \lfloor {\mathbf{rz}} \rfloor)\} \label{RBsecond}\\ &= \{\beta\{\mathbf{r}\mathbf{z}\}\} \nonumber\\ &= T_\beta(\{\mathbf{r}\mathbf{z}\}).\nonumber \end{align} Combining \eqref{RBfirst} and \eqref{RBsecond} yields \eqref{eq:conj}. Since the minimal polynomial of $\beta$ is irreducible, $\{r_0,\ldots,r_{d-1},1\}$ is a basis of $\mathbb{Z}[\beta]$. Therefore the map $$\Phi_{\bf r}:\mathbf{z}\mapsto\{\mathbf{r}\mathbf{z}\}$$ is a bijective map from $\mathbb{Z}^d$ to $\mathbb{Z}[\beta]\cap[0,1)$. This proves the conjugacy between $T_\beta$ on $\mathbb{Z}[\beta]\cap[0,1)$ and $\tau_\mathbf{r}.$ \end{proof} Theorem~\ref{srsbeta} is now an easy consequence of this conjugacy: \begin{proof}[Proof of Theorem \ref{srsbeta}] Let $\gamma\in\mathbb{Z}[1/\beta]\cap[0,\infty)$. 
Then, obviously, $\gamma\beta^k\in \mathbb{Z}[\beta]\cap[0,\infty)$ for a suitable integer exponent $k$, and the beta-expansions of $\gamma$ and $\gamma\beta^k$ have the same digit string. Thus $\beta$ admits property (F) if and only if every element of $\mathbb{Z}[\beta]\cap[0,\infty)$ has finite beta-expansion. The greedy condition \eqref{greedycondition} now shows that it even suffices to guarantee finite beta-expansions for every element of $\mathbb{Z}[\beta]\cap[0,1)$. Thus the conjugacy in Proposition~\ref{prop:betanumformula} implies the result. \end{proof} \begin{example}[Golden mean and Tribonacci]\label{ex:fibo} First we illustrate Proposition~\ref{prop:betanumformula} for $\beta$ equal to the golden mean $\varphi=\frac{1+\sqrt{5}}{2}$ which is a root of the polynomial $X^2 - X - 1 = (X-\varphi)(X+r_0)$ with $r_0=\frac1\varphi=\frac{-1+\sqrt{5}}{2}$. By Proposition~\ref{prop:betanumformula} we therefore get that $T_\varphi$ is conjugate to $\tau_{1/\varphi}$. As $\varphi$ has property (F) (see \cite{Frougny-Solomyak:92}), we conclude that $\frac1\varphi \in \D_1^{(0)}$. Let us confirm the conjugacy for a concrete example. Indeed, starting with $3$ we get $\tau_{1/\varphi}(3)=-\lfloor\frac{3}{\varphi}\rfloor=-1$. The mapping $\Phi_{1/\varphi}$ for these values is easily calculated by $\Phi_{1/\varphi}(3)=\{\frac3\varphi\}=\{3\varphi-3\}=3\varphi-4$ and $\Phi_{1/\varphi}(-1)=-\varphi+2$. As $T_\varphi(3\varphi-4)=\{3\varphi^2-4\varphi\}=\{-\varphi+3\}=-\varphi+2$ the conjugacy is checked for this instance. The root $\beta > 1$ of the polynomial $X^3-X^2-X-1$ is often called {\em Tribonacci number}. In this case Proposition~\ref{prop:betanumformula} yields that $r_0=\frac1\beta$ and $r_1=\frac1\beta + \frac1{\beta^2}$. Thus $T_\beta$ is conjugate to $\tau_{(1/\beta, 1/\beta+1/\beta^2)}$. Property (F) holds also in this case. 
\end{example} \subsection{Shift radix systems and canonical number systems} It was already observed in 1960 by Knuth~\cite{Knuth:60} and in 1965 by Penney~\cite{Penney:65} that $\alpha=-1+\sqrt{-1}$ can be used as a base for a number system in the Gaussian integers. Indeed, each non-zero $\gamma\in\mathbb{Z}[\sqrt{-1}]$ has a unique representation of the shape $\gamma=c_0+c_1\alpha+\cdots+c_h\alpha^h$ with $c_i\in\{0,1\}$ $(0\le i< h)$, $c_h=1$ and $h\in\mathbb{N}$. This simple observation has been the starting point for several generalizations of the classical $q$-ary number systems to algebraic number fields, see for instance \cite{Gilbert:81,Katai-Kovacs:80,Katai-Kovacs:81,Katai-Szabo:76,Kovacs-Pethoe:91}. The following more general notion has proved to be useful in this context. \begin{definition}[Canonical number system, see {Peth\H{o}~\cite{Pethoe:91}}]\label{def:CNS} Let \[ P(X) = p_dX^d + p_{d-1}X^{d-1}+\cdots+p_1X+ p_0 \in \mathbb{Z}[X], \quad p_0\ge 2, \quad p_d\neq 0; \quad \mathcal{N}=\{0,1,\ldots,p_0-1\} \] and $\mathcal{R}:=\mathbb{Z}[X]/P(X)\mathbb{Z}[X]$ and let $x$ be the image of $X$ under the canonical epimorphism from ${\ZZ}[X]$ to $\mathcal{R}$. If every $B\in \mathcal{R}, B \neq 0$, can be represented uniquely as $$ B=b_0 + b_1 x + \cdots + b_{\ell} x^{\ell} $$ with $b_0,\ldots,b_{\ell} \in \mathcal{N}, b_{\ell} \neq 0$, the system $(P,\mathcal{N})$ is called a {\it canonical number system} ({\it CNS} for short); $P$ is called its {\it base} or {\it CNS polynomial}, $\mathcal{N}$ is called the set of {\it digits}. \end{definition} Using these notions, the problem arises whether it is possible to characterize CNS polynomials by algebraic conditions on their coefficients and roots. First of all, it is easy to see that a CNS polynomial has to be expanding (see~\cite{Pethoe:91}). Further characterization results were obtained, {\it e.g.}, by Brunotte~\cite{Brunotte:01}, who gave a characterization of all quadratic monic CNS polynomials.
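The digit strings for Knuth's base $\alpha=-1+\sqrt{-1}$ mentioned above are easy to compute. The following sketch is an illustrative implementation (not taken from the cited works); it rests on the observation that $\alpha$ divides a Gaussian integer $a+b\sqrt{-1}$ exactly when $a+b$ is even, so the digit of lowest order is $(a+b)\bmod 2$.

```python
def neg1_plus_i_digits(a, b):
    """Digits (lowest order first) of the Gaussian integer a + b*i
    in base alpha = -1 + i with digit set {0, 1}."""
    digits = []
    while (a, b) != (0, 0):
        c = (a + b) % 2              # alpha divides (a - c) + b*i, as a - c + b is even
        digits.append(c)
        a -= c
        # exact division by alpha:  (a + b*i)/(-1 + i) = ((b - a) - (a + b)*i)/2
        a, b = (b - a) // 2, -(a + b) // 2
    return digits

digs = neg1_plus_i_digits(7, -3)
check = sum(c * (-1 + 1j) ** j for j, c in enumerate(digs))
print(digs, check)
```

Reconstructing $\sum_j c_j\alpha^j$ from the computed digits returns the original Gaussian integer, in accordance with the uniqueness statement above.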
For irreducible CNS polynomials of general degree, Kov\'acs~\cite{Kovacs:81a} proved that a polynomial $P$ given as in Definition~\ref{def:CNS} is a CNS polynomial if $p_0\ge 2$ and $p_0\ge p_1\ge \cdots \ge p_{d-1}>0$. In \cite{Akiyama-Pethoe:02,Scheicher-Thuswaldner:03} characterization results under the condition $p_0 > |p_1|+ \cdots + |p_{d-1}|$ were shown; \cite{Burcsi-Kovacs:08} treats polynomials with small $p_0$. A general characterization of CNS polynomials is not known and seems to be hard to obtain. It has turned out that in fact there is again a close connection to the problem of determining shift radix systems with finiteness property. The corresponding result due to Akiyama {\em et al.}~\cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} and Berth\'e {\em et al.}~\cite{BSSST2011} is given in the following theorem (recall the definition of $\D_{d}^{(0)}$ in \eqref{DdDd0}). \begin{theorem}\label{srscns} Let $P(X) := p_dX^d+p_{d-1}X^{d-1}+\cdots+p_1X+p_0 \in \ZZ[X]$. Then $P$ is a CNS-polynomial if and only if $\mathbf{r}:=\left(\frac{p_d}{p_0},\frac{p_{d-1}}{p_0},\ldots, \frac{p_1}{p_0}\right) \in \D_{d}^{(0)}$. \end{theorem} By a similar reasoning as in the proof of Theorem \ref{srsbeta} we will derive the result from a more general one, this time establishing a conjugacy between $\tau_\mathbf{r}$ and the restriction of the following {\it backward division mapping} $D_P:\mathcal{R} \rightarrow \mathcal{R}$ (with $\mathcal{R}:=\mathbb{Z}[X]/P(X)\mathbb{Z}[X]$ as above) to a well-suited finitely generated $\ZZ$-submodule of $\mathcal{R}$ (compare \cite{BSSST2011}). \begin{definition}[Backward division mapping] \label{lem:bdm} The {\em backward division mapping} $D_P:\mathcal{R} \rightarrow \mathcal{R}$ for $B=\sum_{i=0}^\ell b_i x^i$, $b_i\in\mathbb{Z},$ is defined by \[ D_P(B) = \sum_{i=0}^{\ell-1} b_{i+1}x^i - \sum_{i=0}^{d-1} q p_{i+1}x^i,\quad q=\left\lfloor{\frac{b_0}{p_0}}\right\rfloor.
\] \end{definition} Observe that $D_P(B)$ does not depend on the representative of the equivalence class of~$B$, that \begin{equation} \label{DP} B=(b_0-q p_0)+x D_P(B), \end{equation} and that $c_0=b_0-q p_0$ is the unique element in $\mathcal{N}$ with $B-c_0 \in x\mathcal{R}$ (compare \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} for the case of monic $P$ and \cite{Scheicher-Surer-Thuswaldner-vdWoestijne:08} for the general case). Iterating the application of $D_P$ yields \begin{equation}\label{DPn} B=\sum_{n=0}^{m-1} c_n x^n + x^m D_P^m(B) \end{equation} with $c_n=D_P^n(B)-x D_P^{n+1}(B)\in\mathcal{N}.$ If we write formally \begin{equation}\label{cnsrepinfinity} B = \sum_{n=0}^\infty c_n x^n \end{equation} then it follows from the reasoning above that this representation is unique in having the property that \begin{equation}\label{PNrep} B -\sum_{n=0}^{m-1} c_n x^n \in x^m\mathcal{R},\; c_n\in\mathcal{N},\ \text{ for all } m\in\mathbb{N}. \end{equation} Expansion \eqref{cnsrepinfinity} is called the $(P,\mathcal{N})${\it -representation} of $B\in \mathcal{R}$. In order to relate $D_P$ to an SRS, it is appropriate to use the so-called {\it Brunotte module} \cite{Scheicher-Surer-Thuswaldner-vdWoestijne:08}. \begin{definition}[Brunotte module] The \emph{Brunotte basis modulo $P = p_dX^d+p_{d-1}X^{d-1}+\cdots+p_1X+p_0$} is the set $\{w_0,\ldots,w_{d-1}\}$ with \begin{equation} \label{eq:Brunottebasis} w_0=p_d, \quad w_1=p_dx+p_{d-1}, \quad w_2=p_dx^2+p_{d-1}x+p_{d-2},\dots,\quad w_{d-1}=p_dx^{d-1}+\dots+p_1. \end{equation} The \emph{Brunotte module} $\Lambda_P$ is the $\mathbb{Z}$-submodule of $\mathcal{R}$ generated by the Brunotte basis. We furthermore denote the representation mapping with respect to the Brunotte basis by \[ \Psi_P: \,\Lambda_P \to \mathbb{Z}^d, \quad B = \sum_{k=0}^{d-1} z_kw_k \mapsto (z_0,\ldots,z_{d-1})^t.
\] \end{definition} We call $P$ {\it monic} if $p_d = \pm 1$. Note that in this instance $\Lambda_P$ is isomorphic to $\mathcal{R}$; otherwise $\mathcal{R}$ is not finitely generated as a $\mathbb{Z}$-module. Now we can formulate the announced conjugacy between backward division and the SRS transform (compare \cite{BSSST2011}). \begin{proposition}\label{p:CNSconjugacy} Let $P(X)=p_d X^d+p_{d-1}X^{d-1}+\cdots+p_1X+p_0\in\mathbb{Z}[X]$, $p_0 \geq 2$, $p_d \neq 0$, $\mathbf{r}=\big(\frac{p_d}{p_0},\ldots,\frac{p_1}{p_0}\big)$. Then we have \begin{equation} \label{eq:CNSconj} D_P\Psi_P^{-1}(\mathbf{z}) = \Psi_P^{-1}\tau_\mathbf{r}(\mathbf{z}) \quad\mbox{for all }\mathbf{z}\in\mathbb{Z}^d. \end{equation} In particular, the restriction of $D_P$ to $\Lambda_P$ is conjugate to $\tau_\mathbf{r}$ according to the diagram $$ \CD \ZZ^d @> \tau_{{\bf r}} >>\ZZ^d\\ @V\Psi_P^{-1} VV @VV\Psi_P^{-1} V\\ \Lambda_P @> D_P >> \Lambda_P \endCD .$$ \end{proposition} \begin{proof} It follows immediately from the definitions that on $\Lambda_P$ we have \begin{equation} \label{eq:TA} D_P\bigg(\sum_{k=0}^{d-1} z_kw_k\bigg) = \sum_{k=0}^{d-2} z_{k+1}w_k - \left\lfloor \frac{z_0p_d+\cdots+z_{d-1}p_1}{p_0} \right\rfloor w_{d-1}, \end{equation} which implies (\ref{eq:CNSconj}). Since $\Psi_P:\ \Lambda_P \to \mathbb{Z}^d$ is bijective, the proof is complete. \end{proof} For monic $P$, Proposition \ref{p:CNSconjugacy} establishes a conjugacy between $D_P$ on the full set $\mathcal{R}$ and $\tau_{{\bf r}}$. \begin{proof}[Proof of Theorem~\ref{srscns}.] Observing Proposition \ref{p:CNSconjugacy}, the theorem follows from the fact that it suffices to establish the finiteness of the $(P,\mathcal{N})$-representations of all $B\in\Lambda_P$ in order to check whether $(P,\mathcal{N})$ is a CNS (compare \cite{Scheicher-Surer-Thuswaldner-vdWoestijne:08}).
\end{proof} \begin{example}[The $\frac32$-number system]\label{ex32} Considering $P(X)=-2X+3$ and $\mathcal{R}=\mathbb{Z}[X]/P(X)\mathbb{Z}[X]$ we get $\mathcal{R}\cong\mathbb{Z}[\frac32]=\mathbb{Z}[\frac12]$. Thus in this case we can identify the image of $X$ under the natural epimorphism $\mathbb{Z}[X]\to \mathcal{R}$ with $x=\frac32$ and the backward division mapping yields representations of the form $B=b_0+b_1\frac32 + b_2 (\frac32)^2+ \cdots$ for $B\in\mathbb{Z}[\frac12]$. For $B\in \mathbb{Z}$ this was discussed (apart from a leading factor $\frac12$) under the notation {\em $\frac32$-number system} in~\cite{Akiyama-Frougny-Sakarovitch:07}. Namely, each $B\in\mathbb{N}$ can be represented as $B=\frac12 \sum_{n=0}^{\ell(B)} b_n(\frac32)^n$ with ``digits'' $b_n\in\{0,1,2\}$. The language of possible digit strings turns out to be very complicated. If we restrict ourselves to the Brunotte module $\Lambda_{P}$ (which is equal to $2\mathbb{Z}$ in this case) Proposition~\ref{p:CNSconjugacy} implies that the backward division mapping $D_{-2X+3}$ is conjugate to the SRS $\tau_{-2/3}$. As $-1$ does not have a finite $(-2X+3,\{0,1,2\})$-representation, we conclude that $-\frac23 \not \in \D_1^{(0)}$. We mention that the $\frac32$-number system was used in~\cite{Akiyama-Frougny-Sakarovitch:07} to establish irregularities in the distribution of certain generalizations of Mahler's $\frac32$-problem ({\it cf.}~\cite{Mahler:68}). \end{example} \begin{example}[Knuth's Example]\label{exKnuth} Consider $P(X)=X^2 + 2X + 2$. As this polynomial is monic with root $\alpha=-1+\sqrt{-1}$, we obtain $\mathcal{R}\cong \mathbb{Z}[\sqrt{-1}]\cong\Lambda_P$. In this case $(X^2+2X+2,\{0,1\})$ is a CNS (see Knuth~\cite{Knuth:60}) that allows one to represent each $\gamma\in\mathbb{Z}[\sqrt{-1}]$ in the form $\gamma=b_0+b_1\alpha + \cdots + b_\ell \alpha^\ell$ with digits $b_j\in \{0,1\}$.
According to Proposition~\ref{p:CNSconjugacy} the backward division mapping $D_P$ is conjugate to the SRS mapping $\tau_{(\frac12,1)}$. Therefore, $(\frac12,1)\in\D_2^{(0)}$. \end{example} \subsection{Digit expansions based on shift radix systems} In the final part of this section we consider a notion of representation for vectors $\mathbf{z} \in \mathbb{Z}^d$ based on the SRS-transformation $\tau_\mathbf{r}$ (compare \cite{BSSST2011}). \begin{definition}[SRS-representation]\label{def:SRSexp} Let $\mathbf{r} \in \mathbb{R}^d$. The \emph{SRS-representation} of $\mathbf{z} \in \mathbb{Z}^d$ with respect to $\mathbf{r}$ is the sequence $(v_1,v_2,v_3,\ldots)$, where $v_k=\big\{\mathbf{r}\tau_\mathbf{r}^{k-1}(\mathbf{z})\big\}$ for all $k\ge1$. \end{definition} Observe that vectors $\mathbf{r}\in\mathcal{D}_d^{(0)}$ give rise to finite SRS-representations, and vectors $\mathbf{r}\in\mathcal{D}_d$ to ultimately periodic SRS-representations. Let $(v_1,v_2,v_3,\ldots)$ denote the SRS-representation of $\mathbf{z} \in \mathbb{Z}^d$ with respect to $\mathbf{r}$. The following lemma shows that there is a radix expansion of integer vectors where the companion matrix $R(\mathbf{r})$ acts like a base and the vectors $(0,\ldots,0,v_j)^t$ act like the digits (see \cite[Equation~(4.2)]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}). This justifies the name {\em shift radix system}. \begin{lemma}\label{lum} Let $\mathbf{r} \in \mathbb{R}^d$ and $(v_1,v_2,\ldots)$ be the SRS-representation of $\mathbf{z} \in \mathbb{Z}^d$ with respect to~$\mathbf{r}$. Then we have \begin{equation} \label{eq:tauk} R(\mathbf{r})^n \mathbf{z} = \tau_\mathbf{r}^n(\mathbf{z}) - \sum_{k=1}^n R(\mathbf{r})^{n-k} (0,\ldots,0,v_k)^t. 
\end{equation} \end{lemma} \begin{proof} Starting from \eqref{linear} and using induction we immediately get for the $n$-th iterate of $\tau_{\mathbf r}$ that \begin{equation}\label{tauiterate} \tau^n_{\bf r}({\bf z})=R({\bf r})^n {\bf {z}} + \sum_{k=1}^n R({\bf r})^{n-k} {\bf v}_k \end{equation} with vectors ${\bf v}_k=(0,\ldots,0, \{\mathbf{r} \tau_\mathbf{r}^{k-1}(\mathbf{z}) \})^t$. Rearranging \eqref{tauiterate} immediately yields \eqref{eq:tauk}. \end{proof} There is a direct relation between the digits of a given beta-expansion and the digits of the associated SRS-representation of $\mathbf{z} \in \mathbb{Z}^d$ ({\it cf.}~\cite{BSSST2011}; a related result for CNS is contained in \cite[Lemma~5.5]{BSSST2011}). \begin{proposition} \label{cor:betanumformula} Let $\beta$ and $\mathbf{r}$ be defined as in Theorem~\ref{srsbeta}. Let $(v_1,v_2,v_3,\ldots)$ be the SRS-representation of $\mathbf{z} \in \mathbb{Z}^d$ and $\{\mathbf{r}\mathbf{z}\}=\sum_{n=1}^\infty a_n \beta^{-n}$ be the beta-expansion of $v_1=\{\mathbf{r}\mathbf{z}\}$. Then we have \[ v_n=T_\beta^{n-1}(\{\mathbf{r}\mathbf{z}\}) \quad \mbox{and} \quad a_n=\beta v_n-v_{n+1} \quad \mbox{for all }n\ge1. \] \end{proposition} \begin{proof} By Definition~\ref{def:SRSexp} and (\ref{eq:conj}), we obtain that $v_n=\{\mathbf{r}\tau_\mathbf{r}^{n-1}(\mathbf{z})\}=T_\beta^{n-1}(\{\mathbf{r}\mathbf{z}\})$, which yields the first assertion. Using this equation and the definition of the beta-expansion, we obtain \[ a_n = \big\lfloor \beta T_\beta^{n-1}(\{\mathbf{r}\mathbf{z}\}) \big\rfloor = \beta T_\beta^{n-1}(\{\mathbf{r}\mathbf{z}\}) - \big\{\beta T_\beta^{n-1}(\{\mathbf{r}\mathbf{z}\})\big\} = \beta v_n - T_\beta^n(\{\mathbf{r}\mathbf{z}\}) = \beta v_n-v_{n+1}. \hfill\qedhere \] \end{proof} \begin{example}[Golden mean, continued] Again we deal with $\beta=\varphi$, the golden mean, and consider the digits of $3\varphi-4 = \Phi_{1/\varphi}(3)$. Using \eqref{greedycondition} one easily computes the beta-expansion $3\varphi-4 = \frac1\beta + \frac1{\beta^3}$.
Using the notation of Proposition~\ref{cor:betanumformula} we have that $a_1=a_3=1$ and $a_i=0$ otherwise. On the other hand we have $\tau_{1/\varphi}(3)= -\lfloor \frac3\varphi\rfloor = -1$, $\tau_{1/\varphi}(-1)= -\lfloor- \frac{1}\varphi\rfloor = 1$, and $\tau_{1/\varphi}(1)= -\lfloor \frac1\varphi\rfloor = 0$. Thus the SRS-representation $(v_1,v_2,\ldots)$ of $3$ is given by $v_1=\{\frac3\varphi\}=\{3\varphi-3\}=3\varphi-4$, $v_2=\{-\frac{1}\varphi\}=\{-\varphi+1\}=-\varphi+2$, $v_3=\{\frac1\varphi\}=\{\varphi-1\}=\varphi-1$, and $v_i=0$ for $i \ge 4$. It is now easy to verify the formulas in Proposition~\ref{cor:betanumformula}. For instance, $\varphi v_1 - v_2 = \varphi(3\varphi-4)-(-\varphi+2)=3\varphi^2 - 3\varphi - 2 = 1 = a_1$. \end{example} \section{Shift radix systems with periodic orbits: the sets $\D_{d}$ and the Schur-Cohn region} \label{Dd} \subsection{The sets $\D_d$ and their relations to the Schur-Cohn region} In this section we focus on results on the sets $\D_{d}$ defined in \eqref{DdDd0}. To this end it is helpful to consider the {\em Schur-Cohn} region (compare~\cite{Schur:18}) \[ \E_d:=\{\mathbf{r} \in \RR^d\; : \; \varrho(R(\mathbf{r})) < 1\}. \] Here $\varrho(A)$ denotes the spectral radius of the matrix $A$. The following important relation holds between the sets $\E_d$ and $\D_d$ ({\it cf.}~\cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}). \begin{proposition}\label{EdDdEd} For $d\in\mathbb{N}$ we have $$ \E_d \subset \D_d \subset \overline{\E_{d}}. $$ \end{proposition} \begin{proof} We first prove the assertion $\E_d \subset \D_d$. Let us assume $0 <\varrho(R({\mathbf r}))<1$ (the case $\mathbf{r}=\mathbf{0}$ is trivial) and choose a number ${\tilde\varrho}\in(\varrho(R({\bf r})),1)$.
According, {\it e.g.}, to \cite[formula (3.2)]{Lagarias-Wang:96a} it is possible to construct a norm $|| \cdot ||_{\tilde\varrho}$ on $\RR^d$ with the property that \begin{equation}\label{norminequ} ||R({\mathbf r}){\bf x}||_{\tilde\varrho}\leq {\tilde\varrho} ||{\bf x}||_{\tilde\varrho}. \end{equation} Using \eqref{tauiterate} it follows that \begin{equation*} \Vert \tau_{{\bf r}}^n({\bf z})\Vert _{\tilde\varrho} \leq {\tilde\varrho} ^n \Vert {\bf z} \Vert _{\tilde\varrho}+ c\sum_{k=1}^n {\tilde\varrho}^{n-k}\leq {\tilde\varrho} ^n \Vert {\bf z} \Vert _{\tilde\varrho}+ \frac{c}{1 -{\tilde\varrho}}, \end{equation*} where $c=\sup\{||(0,\ldots,0,\varepsilon)^t||_{\tilde\varrho} \,:\, \varepsilon\in[0,1)\}$ is a finite positive constant. Hence, for $n$ large enough, \begin{equation}\label{pluseins} \Vert \tau_\mathbf{r}^n({\bf z})\Vert_{\tilde\varrho} \leq \frac{c}{1-{\tilde\varrho}}+ 1. \end{equation} Since the set of all iterates $\tau_\mathbf{r}^n({\bf z}), n\in \NN$, is bounded in $\ZZ^d$, it has to be finite and, hence, the sequence $(\tau_\mathbf{r}^n({\bf z}))_ {n\in \NN}$ has to be ultimately periodic. Therefore we have $\mathbf{r}\in \D_d$. We now switch to the assertion $\D_d \subset \overline{\E_{d}}$. Let us assume $\mathbf{r}\in \D_d$ and, on the contrary, that there exists an eigenvalue $\lambda$ of $R({\bf r})$ with $|\lambda| > 1.$ Let $\mathbf{u}^t$ be a left eigenvector of $R({\bf r})$ belonging to $\lambda$. Multiplying \eqref{tauiterate} by $\mathbf{u}^t$ from the left we find that \begin{equation}\label{utauiterate} |\mathbf{u}^t\tau^n_{\bf r}({\bf z})|=|\lambda^n \mathbf{u}^t{\bf z} + \sum_{k=1}^n \lambda^{n-k} \mathbf{u}^t{\bf v}_k| \end{equation} for any ${\bf z}\in \ZZ^d.$ Since $||{\bf v}_k||_\infty<1$ there is an absolute constant, say $c_1$, such that $|\mathbf{u}^t{\bf v}_k|\leq c_1$ for any $k$.
Choosing $\mathbf{z}\in \ZZ^d$ such that $|\mathbf{u}^t{\bf z}|> \frac {c_1+1}{|\lambda|-1}$, it follows from \eqref{utauiterate} that $|\mathbf{u}^t\tau^n_{\bf r}({\bf z})| \geq c_2|\lambda|^n$ with some positive constant $c_2$. Therefore the sequence $(\tau_\mathbf{r}^n({\bf z}))_ {n\in \NN}$ cannot be bounded, which contradicts the assumption that $\mathbf{r}\in \D_d$. \end{proof} Using the last proposition and the fact that the maximal modulus of the roots of a monic real polynomial is a continuous function of its coefficients, it can easily be shown that the boundary of $\D_d$ can be characterized as follows (compare \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} for details). \begin{corollary}\label{Ddboundary} For $d\in\mathbb{N}$ we have that \begin{equation*} \partial \D_{d} = \left\{{\bf r} \in \RR^{d}\; :\; \varrho(R({\mathbf r}))=1 \right\}. \end{equation*} \end{corollary} \subsection{The Schur-Cohn region and its boundary} \label{sec:Ed} We want to give some further properties of $\E_d$. Since the coefficients of a polynomial depend continuously on its roots it follows that $\E_d = {\rm int\,}(\overline{\E_d})$. Moreover, one can prove that $\E_d$ is simply connected ({\it cf.} \cite{Fam-Meditch:78}). By a result of Schur~\cite{Schur:18} the sets $\E_d$ can be described by determinants.
\begin{proposition}[{{\it cf}.~Schur~\cite{Schur:18}}]\label{stregion} For $0 \leq k < d$ let \[\delta_k(r_0,\ldots,r_{d-1})= \left(\begin{array}{cccccccc} 1 & 0 & \cdots & 0 & r_0 & \cdots & \cdots & r_k \\ r_{d-1} & \ddots & \ddots & \vdots & 0 & \ddots & & \vdots \\ \vdots & \ddots & \ddots & 0 & \vdots & \ddots & \ddots & \vdots \\ r_{d-k-1} & \cdots & r_{d-1}& 1 & 0 & \cdots & 0 & r_0 \\ r_0 & 0 & \cdots & 0 & 1 & r_{d-1}& \cdots & r_{d-k-1}\\ \vdots & \ddots & \ddots & \vdots & 0 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & 0 & \vdots & \ddots & \ddots & r_{d-1}\\ r_k & \cdots & \cdots & r_0 & 0 & \cdots & 0 & 1 \end{array}\right)\in \RR^{2(k+1) \times 2(k+1)}.\] Then the sets $\E_d$ are given by \begin{equation}\label{charaEd} \E_d=\left\{(r_0,\ldots,r_{d-1})\in \RR^d \; :\; \forall k \in \{0,\ldots,d-1\}: \; \det\left(\delta_k(r_0,\ldots,r_{d-1})\right)>0\right\}. \end{equation} \end{proposition} \begin{example} Evaluating the determinants for $d\in\{1,2,3\}$ yields (see also \cite{Fam-Meditch:78}) \begin{equation}\label{1902082}\begin{split} \E_1= & \{x \in \RR \; : \; \abs{x}<1\}, \\ \E_2= & \{(x,y) \in \RR^2\; :\; \abs{x}<1, \abs{y}<x+1\}, \\ \E_3= & \{(x,y,z) \in \RR^3\; :\; \abs{x}<1, \abs{y-xz}<1-x^2, \abs{x+z}<y+1\}. \end{split}\end{equation} Thus $\E_2$ is the (open) triangle in Figure~\ref{d20}. $\E_3$ is the solid depicted in Figure \ref{E3}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{E3picture} \vskip -0.5cm \caption{The set $\E_3$} \label{E3} \end{figure} \end{example} The boundary of $\E_d$ consists of all parameters $\mathbf{r}$ for which $R(\mathbf{r})$ has an eigenvalue of modulus~$1$.
Thus $\partial \E_d$ naturally decomposes into the three hypersurfaces \begin{align*} E_d^{(1)} &:= \{\mathbf{r} \in \partial \E_d \;:\; R(\mathbf{r}) \hbox{ has $1$ as an eigenvalue}\}, \\ E_d^{(-1)} &:= \{\mathbf{r} \in \partial \E_d \;:\; R(\mathbf{r}) \hbox{ has $-1$ as an eigenvalue}\}, \\ E_d^{(\mathbb{C})} &:= \{\mathbf{r} \in \partial \E_d \;:\; R(\mathbf{r}) \hbox{ has a non-real eigenvalue of modulus 1}\}, \end{align*} {\it i.e.}, \begin{equation}\label{Edboundary} \partial\E_d=E_d^{(1)} \cup E_d^{(-1)} \cup E_d^{(\mathbb{C})}. \end{equation} These sets can be determined using $\E_{d-1}$ and $\E_{d-2}$. To state the corresponding result, we introduce the following terminology. Define for vectors ${\bf r}=(r_0,\ldots,r_{p-1})$, ${\bf s}=(s_0,\ldots,s_{q-1})$ of arbitrary dimension $p,q\in\NN$ the binary operation $\odot$ by \begin{equation}\label{eq:odot} \chi_{{\bf r} \odot {\bf s}}=\chi_{\bf r}\cdot \chi_{\bf s}, \end{equation} where ``$\cdot$'' means multiplication of polynomials. For $D \subset \RR^p$ and $E \subset \RR^q$ let $D \odot E:=\{{\bf r} \odot {\bf s} : \, {\bf r} \in D, {\bf s} \in E\}$. Then, due to results of Fam and Meditch~\cite{Fam-Meditch:78} (see also~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}), the following theorem holds. \begin{theorem} For $d \geq 3$ we have \begin{equation}\begin{split} E_d^{(1)} = & (1) \odot \overline{\E_{d-1}}, \\ E_d^{(-1)} = & (-1) \odot \overline{\E_{d-1}}, \\ E_d^{(\mathbb{C})} = & \left\{(1,y) \;:\; y \in (-2,2)\right\}\odot \overline{\E_{d-2}}. \end{split}\end{equation} Moreover, $E_d^{(1)}$ and $E_d^{(-1)}$ are subsets of hyperplanes while $E_d^{(\mathbb{C})}$ is a hypersurface in $\mathbb{R}^d$. \end{theorem} In order to characterize $\D_d$, there remains the problem of describing $\D_d \setminus \E_d$, which is a subset of $\partial \D_d=\partial \E_d$. The problem is relatively simple for $d=1$, where it is an easy exercise to prove that $\D_1=[-1,1]$.
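The two descriptions of the Schur-Cohn region, the spectral-radius condition $\varrho(R(\mathbf{r}))<1$ and the evaluated determinant conditions of \eqref{1902082}, can be compared numerically. The following sketch is illustrative only (it is not part of the original text); it samples random parameters for $d=2$ and checks that $\varrho(R(\mathbf{r}))<1$ holds exactly when $\abs{x}<1$ and $\abs{y}<x+1$.

```python
import numpy as np

def spectral_radius(r):
    # spectral radius of the companion matrix R(r) from \eqref{mata}
    d = len(r)
    R = np.zeros((d, d))
    R[:-1, 1:] = np.eye(d - 1)
    R[-1, :] = [-ri for ri in r]
    return max(abs(np.linalg.eigvals(R)))

def in_E2_schur(x, y):
    # explicit description of E_2 obtained from the Schur-Cohn determinants
    return abs(x) < 1 and abs(y) < x + 1

rng = np.random.default_rng(0)
checked = 0
for _ in range(1000):
    x, y = rng.uniform(-2, 2, size=2)
    rho = spectral_radius((x, y))
    if abs(rho - 1) > 1e-8:   # skip numerically ambiguous near-boundary cases
        assert (rho < 1) == in_E2_schur(x, y)
        checked += 1
print(checked, "sampled parameters checked")
```

The guard around the boundary only discards parameters whose eigenvalues are numerically indistinguishable from modulus $1$; for all other samples the two descriptions agree.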
For dimensions $d\ge 2$ the situation is different and will be discussed in the following two sections. \section{The boundary of $\D_2$ and discretized rotations in $\mathbb{R}^2$}\label{sec:D2} In this section we consider the behavior of the orbits of $\tau_\mathbf{r}$ for $\mathbf{r} \in \partial \mathcal{D}_2$. In particular we are interested in whether these orbits are ultimately periodic or not. To this end we subdivide the boundary $\partial \mathcal{D}_2$ of the isosceles triangle into four pieces (instead of three as in \eqref{Edboundary}); in particular, we split $E_2^{(1)}$ into two parts as follows. \begin{align*} E_{2-}^{(1)}& = \{(x,-x-1) \in \mathbb{R}^2 \;:\; -1 \le x \le 0 \}, \\ E_{2+}^{(1)}& = \{(x,-x-1) \in \mathbb{R}^2 \;:\; 0 < x \le 1 \}, \\ E_2^{(-1)} &= \{(x,x+1) \in \mathbb{R}^2 \;:\; -1 \le x \le 1 \}, \\ E_2^{(\mathbb{C})}& = \{(1, y) \in \mathbb{R}^2 \;:\; -2 < y < 2 \}. \end{align*} It turns out that the behavior of the orbits is much more complicated for $\mathbf{r}\in E_2^{(\mathbb{C})}$ than it is for the remaining cases. This is due to the fact that the matrix $R(\mathbf{r})$ has one eigenvalue that is equal to $-1$ for $\mathbf{r} \in E_2^{(-1)} $, one eigenvalue that is equal to $1$ for $\mathbf{r} \in E_2^{(1)}$, but a pair of complex conjugate eigenvalues on the unit circle for $\mathbf{r} \in E_2^{(\mathbb{C})}$. Figure~\ref{fig:PartialD2} surveys the known results on the behavior of the orbits of $\tau_{\mathbf{r}}$ for $\mathbf{r}\in\partial \mathcal{D}_2$. \begin{figure} \includegraphics[height=7cm]{PartialD2.pdf} \caption{An image of $\partial \mathcal{D}_2$, the boundary of the isosceles triangle $\mathcal{D}_2$. The black lines belong to $\mathcal{D}_2$, the dashed line does not belong to $\mathcal{D}_2$. For the grey line $E_2^{(\mathbb{C})}$ it is mostly not known whether it belongs to $\mathcal{D}_2$ or not.
Only the 11 black points in $E_2^{(\mathbb{C})}$ could be shown to belong to $\mathcal{D}_2$ so far (compare~\cite[Figure~1]{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}). For the two points $(1,2)$ and $(1,-2)$ it is easy to see that they do not belong to $\mathcal{D}_2$ by solving a linear recurrence relation. \label{fig:PartialD2}} \end{figure} \subsection{The case of real roots} We start with the easier cases, which have been treated in \cite[Section~2]{Akiyama-Brunotte-Pethoe-Thuswaldner:06}. In that paper the following result is proved. \begin{proposition}[{\cite[Theorem~2.1]{Akiyama-Brunotte-Pethoe-Thuswaldner:06}}]\label{D2easyboundary} If $\mathbf{r} \in (E_2^{(-1)} \cup E_{2-}^{(1)})\setminus\{(1,2)\}$ then $\mathbf{r} \in \mathcal{D}_2\setminus \mathcal{D}_2^{(0)}$, \emph{i.e.}, each orbit of $\tau_\mathbf{r}$ is ultimately periodic, but not all orbits end in $\mathbf{0}$. If $\mathbf{r} \in E_{2+}^{(1)} \cup \{(1,2)\}$ then $\mathbf{r} \not \in \mathcal{D}_2$, \emph{i.e.}, there exist orbits of $\tau_\mathbf{r}$ that are not ultimately periodic. \end{proposition} \begin{proof}[Sketch of the proof] We subdivide the proof into four parts. \begin{itemize} \item[(i)]$\mathbf{r}=(x,x+1) \in E_{2}^{(-1)}$ with $x \le 0$. The cases $x\in \{-1,0\}$ are trivial, so we can assume that $-1<x<0$. It is easy to see by direct calculation that $\tau_{\mathbf{r}}^2((-n,n)^t)=(-n,n)^t$ holds for each $n\in\mathbb{N}$. This implies that $\mathbf{r}\not\in \mathcal{D}_2^{(0)}$ holds in this case. To show that $\mathbf{r}\in \mathcal{D}_2$ one proves that $||\tau_\mathbf{r}(\mathbf{z})||_\infty \le ||\mathbf{z}||_\infty$ holds for each $\mathbf{z}\in\mathbb{Z}^2$. This is accomplished by distinguishing four cases according to the signs of the coordinates of the vector $\mathbf{z}$ and examining $\tau_\mathbf{r}(\mathbf{z})$ in each of these cases. \item[(ii)] $\mathbf{r}=(x,-x-1) \in E_{2-}^{(1)}$.
This is treated in the same way as the previous case; here $\tau_{\mathbf{r}}((n,n)^t)=(n,n)^t$ holds for each $n\in\mathbb{N}$. \item[(iii)] $\mathbf{r}=(x,x+1) \in E_{2}^{(-1)}$ with $x > 0$. Here again $\tau_{\mathbf{r}}^2((-n,n)^t)=(-n,n)^t$ implies that $\mathbf{r}\not\in \mathcal{D}_2^{(0)}$. Proving that $\mathbf{r}\in\mathcal{D}_2$ for $x < 1$ is a bit more involved. First, ultimate periodicity is shown for starting vectors contained in the set $M=\{(-n,m)^t\,:\, m\ge n\ge 0\}$. This is done by an easy induction argument on the quantity $m-n$. After that one shows that each $\mathbf{z} \in \mathbb{Z}^2\setminus M$ hits $M$ after finitely many applications of $\tau_\mathbf{r}$; this is again done by distinguishing several cases. If $x=1$, the fact that $\mathbf{r}\not\in\mathcal{D}_2$ follows by solving a linear recurrence relation. \item[(iv)] $\mathbf{r}=(x,-x-1) \in E_{2+}^{(1)}$. If $n > m > 0$ then $\tau_\mathbf{r}((m,n)^t) = (n,p)^t$ with $p >n$ follows from the definition of $\tau_{\mathbf{r}}$. Thus $||\tau_{\mathbf{r}}^k((1,2)^t)||_\infty \to \infty$ as $k\to \infty$, implying that the orbit of $(1,2)^t$ is not ultimately periodic. \qedhere \end{itemize} \end{proof} For $\mathbf{r} \in E_2^{(\mathbb{C})}$ we can only exclude that $\mathbf{r} \in \mathcal{D}_2^{(0)}$. Indeed, in this case $\mathbf{r}=(1,y)$ with $|y| < 2$. This implies that $\tau_\mathbf{r}^{-1}((0,0)^t)=\{(0,0)^t\}$. In other words, in this case no orbit starting at a non-zero element ever ends up at $(0,0)^t$. Combining this with Proposition~\ref{D2easyboundary} we obtain that $\mathcal{D}_2^{(0)} \cap \partial \mathcal{D}_2 = \emptyset$. By Proposition~\ref{EdDdEd} this is equivalent to the following result. \begin{corollary}[{\cite[Corollary~2.5]{Akiyama-Brunotte-Pethoe-Thuswaldner:06}}]\label{D2boundarycorollary} $$ \D_2^{(0)} \subset \E_2.
$$ \end{corollary} \subsection{Complex roots and discretized rotations} We now turn our attention to periodicity results for parameters $\mathbf{r}\in E_2^{(\mathbb{C})}$, {\it i.e.}, for $\mathbf{r}=(1,\lambda)$ with $|\lambda| < 2$. From the definition of $\tau_\mathbf{r}$ it follows that $E_2^{(\mathbb{C})} \subset \mathcal{D}_2$ is equivalent to the following conjecture. \begin{conjecture}[{see {\em e.g.} \cite{Akiyama-Brunotte-Pethoe-Thuswaldner:06,Bruin-Lambert-Poggiaspalla-Vaienti:03,Lowensteinetal:97,Vivaldi:94}}]\label{Vivaldi-SRS-Conjecture} For each $\lambda \in \mathbb{R}$ satisfying $|\lambda| < 2$ the sequence $(a_n)_{n\in \mathbb{N}}$ defined by \begin{equation}\label{eq:vivaldiconjecture} 0 \le a_{n-1} + \lambda a_n + a_{n+1} < 1 \end{equation} is ultimately periodic for each pair of starting values $(a_0,a_1)\in\mathbb{Z}^2$. \end{conjecture} \begin{remark}\label{rem:vivaldi} Several authors (in particular Franco Vivaldi and his co-workers) study the slightly different mapping $\Phi_\lambda: \mathbb{Z}^2 \to \mathbb{Z}^2$, $(z_0,z_1)^t \mapsto (\lfloor \lambda z_0 \rfloor - z_1, z_0)^t$. Their results --- which we state using $\tau_{(1,-\lambda)}$ --- carry over to our setting after obvious changes in the proofs. \end{remark} To shed some light on Conjecture~\ref{Vivaldi-SRS-Conjecture}, we emphasize that $\tau_{(1,\lambda)}$ can be regarded as a \emph{discretized rotation}. Indeed, the inequalities in \eqref{eq:vivaldiconjecture} imply that \[ \begin{pmatrix} a_n \\ a_{n+1} \end{pmatrix} = \begin{pmatrix} 0&1\\-1&-\lambda \end{pmatrix} \begin{pmatrix} a_{n-1} \\ a_{n} \end{pmatrix}+ \begin{pmatrix} 0 \\ \{\lambda a_n\} \end{pmatrix}, \] and writing $\lambda=-2\cos(\pi \theta)$ we get that the eigenvalues of the involved matrix are given by $\exp({\pm i\pi\theta})$. Thus $\tau_{(1,\lambda)}$ is a rotation followed by a ``round-off''.
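Since \eqref{eq:vivaldiconjecture} forces $a_{n+1}=-\lfloor a_{n-1}+\lambda a_n\rfloor$, ultimate periodicity of an orbit can be tested by iterating pairs $(a_n,a_{n+1})$ until one repeats. The following sketch makes the conjecture concrete; the helper name is ours, exact rationals restrict it to rational $\lambda$, and the loop terminates only on orbits that really are ultimately periodic.

```python
from fractions import Fraction
from math import floor

def orbit_data(lam, a0, a1):
    """Iterate a_{n+1} = -floor(a_{n-1} + lam*a_n), i.e. the map tau_{(1,lam)}
    acting on pairs (a_n, a_{n+1}), until a pair repeats.
    Returns (preperiod, period); terminates iff the orbit is ultimately periodic."""
    seen = {}
    pair, n = (a0, a1), 0
    while pair not in seen:
        seen[pair] = n
        x, y = pair
        pair = (y, -floor(x + lam * y))  # the round-off happens in the floor
        n += 1
    return seen[pair], n - seen[pair]
```

For $\lambda\in\mathbb{Z}$ there is no round-off error at all: e.g.\ $\lambda=1$ corresponds to the rotation angle $\theta=2/3$ and every orbit is purely periodic with period dividing $3$.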
Since round-offs occur naturally in computer arithmetic due to the limited accuracy of floating point representations, it is important to understand the effect of such round-offs. It was this application that Vivaldi and his co-authors had in mind when they started to study the dynamics of the mapping $\tau_{(1,\lambda)}$ in the 1990s (see \cite{Lowensteinetal:97,Lowenstein-Vivaldi:98,Vivaldi:94}). Of special interest is the case of rational rotation angles $\theta=p/q$, as rotations by these angles have periodic orbits with period $q$. In these cases one gets for the discretization that $||\tau_{(1,\lambda)}^q(\mathbf{z}) - \mathbf{z}||_\infty$ is uniformly small for all $\mathbf{z}\in\mathbb{Z}^2$. The easiest non-trivial cases (if $\lambda\in\mathbb{Z}$ everything is trivial) occur for $q=5$. For instance, consider $\theta=\frac25$, which gives $\lambda=\frac{1-\sqrt{5}}2=-\frac1\varphi$, where $\varphi=\frac{1+\sqrt{5}}2$ is the golden ratio. Although the behavior of the orbits of $\tau_{(1,-1/\varphi)}$ looks rather involved and there is no upper bound on their periods, Lowenstein {\it et al.}~\cite{Lowensteinetal:97} succeeded in proving that nevertheless each orbit of $\tau_{(1,-1/\varphi)}$ is periodic. This confirms Conjecture~\ref{Vivaldi-SRS-Conjecture} in the case $\lambda=-\frac1\varphi$. In their proof, they consider a dynamical system on a subset of the torus which is conjugate to $\tau_{(1,-1/\varphi)}$ (see Section~\ref{sec:domainexchange} for more details). This system is then embedded in a higher-dimensional space where the dynamics becomes periodic. This proves that $\tau_{(1,-1/\varphi)}$ is \emph{quasi-periodic}, which is finally used to derive the result. The case $\theta=\frac45$ corresponds to $\tau_{(1,\varphi)}$ and is treated in detail in the next subsection.
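For such quadratic parameters the experiments can even be carried out without floating point errors: for integers $m,a$ the value $\lfloor m+a\varphi\rfloor$ only requires the integer part of $a\sqrt5$, which is available through integer square roots. The following sketch (helper names are ours) reproduces, for instance, the periods of the two orbits of $\tau_{(1,\varphi)}$ shown in Figure~\ref{fig:goldenorbit}.

```python
from math import isqrt

def floor_sqrt5(a):
    """floor(a * sqrt(5)) for an integer a, using exact integer arithmetic only."""
    if a >= 0:
        return isqrt(5 * a * a)
    return -isqrt(5 * a * a) - 1      # a*sqrt(5) is irrational for a != 0

def floor_phi(m, a):
    """floor(m + a*phi) for integers m, a, where phi = (1 + sqrt(5))/2."""
    return (2 * m + a + floor_sqrt5(a)) // 2

def cycle_length(a0, a1):
    """Length of the eventual cycle of tau_{(1,phi)} starting at (a0, a1)."""
    seen = {}
    pair, n = (a0, a1), 0
    while pair not in seen:
        seen[pair] = n
        x, y = pair
        pair = (y, -floor_phi(x, y))  # a_{n+1} = -floor(a_{n-1} + phi*a_n)
        n += 1
    return n - seen[pair]
```

With this exact floor one can verify the periods $65$ and $535$ of the orbits of $(5,5)^t$ and $(13,0)^t$ stated in Figure~\ref{fig:goldenorbit}.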
\subsection{A parameter associated with the golden ratio} Akiyama {\it et al.}~\cite{Akiyama-Brunotte-Pethoe-Steiner:06} came up with a very simple and beautiful proof for the periodicity of $\tau_{(1,\varphi)}$. In their argument they combine the fact that $||\tau_{(1,\varphi)}^5(\mathbf{z}) - \mathbf{z}||_\infty$ is small (see the orbits in Figure~\ref{fig:goldenorbit}) \begin{figure} \includegraphics[height=6cm]{GoldenOrbit1.pdf} \hskip 1cm \includegraphics[height=6cm]{GoldenOrbit2.pdf} \caption{Two examples of orbits of $\tau_{(1,\varphi)}$. The picture on the left is the orbit of $(5,5)^t$. Its period is 65. The orbit of $(13,0)^t$ on the right has period 535. \label{fig:goldenorbit}} \end{figure} with Diophantine approximation properties of the golden mean. In what follows we state the theorem and give a sketch of its proof (including some explanations to make it easier to read). \begin{theorem}[{\cite[Theorem~5.1]{Akiyama-Brunotte-Pethoe-Steiner:06}}]\label{th:steinerproof} Let $\varphi=\frac{1+\sqrt{5}}2$ be the golden mean. Then $(1,\varphi)\in \mathcal{D}_2$. In other words, the sequence $(a_n)_{n \in \mathbb{N}}$ defined by \begin{equation}\label{eq:phirecurrence} 0 \le a_{n-1} + \varphi a_n + a_{n+1} < 1 \end{equation} is ultimately periodic for each pair of starting values $(a_0,a_1)\in\mathbb{Z}^2$. \end{theorem} \begin{proof}[Sketch of the proof ({\em cf.}\ {\cite{Akiyama-Brunotte-Pethoe-Steiner:06}})] First we make precise our observation that $\tau_{(1,\varphi)}^5({\mathbf z})$ is close to $\mathbf{z}$ for each $\mathbf{z} \in \mathbb{Z}^2$. In particular, we shall give an upper bound for the quantity $|a_{n+5} - a_n|$ when $(a_n)_{n\in\mathbb{N}}$ satisfies \eqref{eq:phirecurrence}. The inequalities \eqref{eq:phirecurrence} immediately yield \begin{equation}\label{eq:phi1} a_{n+1} = -a_{n-1} - \varphi a_n + \{\varphi a_n\}, \end{equation} a representation of $a_{n+1}$ in terms of $a_n$ and $a_{n-1}$. We wish to express $a_{n+5}$ in terms of these two elements.
To accomplish this we have to proceed in four steps. We start with a representation of $a_{n+2}$ in terms of $a_n$ and $a_{n-1}$. As \begin{equation}\label{eq:phi2} a_{n+2} = -\lfloor a_{n} + \varphi a_{n+1} \rfloor \end{equation} we first calculate \begin{equation}\label{eq:phi2a} \begin{array}{rclcl} -a_n-\varphi a_{n+1} & = & (-1+\varphi^2) a_n + \varphi a_{n-1} - \varphi\{\varphi a_n\} &\;& \hbox{(by \eqref{eq:phi1})} \\ &=&\varphi a_n + \varphi a_{n-1} - \varphi\{\varphi a_n\} &\;& (\hbox{by }\varphi^2=\varphi+1) \\ &=& \lfloor\varphi a_n \rfloor + \varphi a_{n-1} + (1-\varphi)\{\varphi a_n\} &&\\ &=&\lfloor \varphi a_{n} \rfloor + \lfloor \varphi a_{n-1} \rfloor - \varphi^{-1}\{\varphi a_n\} +\{\varphi a_{n-1}\} &\;& (\hbox{by }\varphi^2=\varphi+1). \end{array} \end{equation} Inserting \eqref{eq:phi2a} in \eqref{eq:phi2} we obtain \[ a_{n+2} = \lfloor \varphi a_{n} \rfloor + \lfloor \varphi a_{n-1} \rfloor + c_n, \quad \hbox{where} \quad c_n = \begin{cases} 1, & \hbox{if } \varphi^{-1}\{\varphi a_n\} < \{\varphi a_{n-1}\};\\ 0, & \hbox{if } \varphi^{-1}\{\varphi a_n\} \ge \{\varphi a_{n-1}\}, \end{cases} \] the desired representation of $a_{n+2}$. We can now go on like that for three more steps and successively gain representations of $a_{n+3}$, $a_{n+4}$, and $a_{n+5}$ in terms of $a_n$ and $a_{n-1}$. 
The formula for $a_{n+5}$, which is relevant for us, reads \begin{equation}\label{eq:phi5} a_{n+5} = a_n + d_n \end{equation} where \[ d_n = \begin{cases} 1, & \hbox{if } \{\varphi a_{n-1}\} \ge \varphi\{\varphi a_{n}\},\; \{\varphi a_{n-1}\}+\{\varphi a_{n}\} >1 \\ & \hbox{or } \varphi\{\varphi a_{n-1}\} \le\{\varphi a_{n}\},\; \varphi\{\varphi a_{n}\} \le 1 + \{\varphi a_{n-1}\}; \\ 0, & \hbox{if } \varphi \{\varphi a_{n-1}\} > \{\varphi a_{n}\},\; \{\varphi a_{n-1}\}+\{\varphi a_{n}\} \le 1,\; \varphi^2 \{\varphi a_{n-1}\}<1 \\ & \hbox{or } \{\varphi a_{n}\} < \varphi\{\varphi a_{n-1}\} < \varphi^2\{\varphi a_{n}\},\; \{\varphi a_{n-1}\}+\{\varphi a_{n}\}>1; \\ -1,& \hbox{if } \varphi\{\varphi a_{n-1}\}>\{\varphi a_{n}\},\; \{\varphi a_{n-1}\}+\{\varphi a_{n}\}\le 1,\; \varphi^2\{\varphi a_{n-1}\}\ge 1 \\ & \hbox{or } \varphi\{\varphi a_{n}\}> 1+\{\varphi a_{n-1}\}. \\ \end{cases} \] In particular, this implies that \begin{equation}\label{eq:diffbound} |a_{n+5}-a_{n}| \le 1. \end{equation} To conclude the proof we use the Fibonacci numbers $F_k$ defined by $F_0=0$, $F_1=1$ and $F_k=F_{k-1}+F_{k-2}$ for $k \ge 2$. In particular, we will use the classical identity (see {\it e.g.} \cite[p.12]{Rockett-Szusz:92}) \begin{equation}\label{eq:fiboid} \varphi F_k = F_{k+1} + \frac{(-1)^{k+1}}{\varphi^k}. \end{equation} Let $(a_n)_{n\in \mathbb{N}}$ be an arbitrary sequence satisfying \eqref{eq:phirecurrence} and choose $m \in \mathbb{N}$ in a way that $a_n \le F_{2m}$ holds for $n\in \{0,\ldots, 5\}$. We claim that in this case $a_n \le F_{2m}$ holds for all $n\in \mathbb{N}$. Assume on the contrary, that this is not true. Then there is a smallest $n \in \mathbb{N}$ such that $a_{n+5} > F_{2m}$. In order to derive a contradiction, we distinguish two cases. Assume first that $a_n < F_{2m}$. In this case \eqref{eq:diffbound} implies that $a_{n+5} \le F_{2m}$, which already yields the desired contradiction. Now assume that $a_n = F_{2m}$. Here we have to be more careful. 
First observe that \eqref{eq:phirecurrence} implies that $\varphi a_{n-1} \ge -a_{n-2}- a_n \ge -2F_{2m}$ and, hence, \eqref{eq:fiboid} yields $a_{n-1} \ge - 2\varphi^{-1} F_{2m}= -2\varphi^{-2}F_{2m+1}+2\varphi^{-2m-2} > -F_{2m+1}$. Summing up, we have $-F_{2m+1} < a_{n-1} \le F_{2m}.$ As the Fibonacci numbers are the denominators of the convergents of the continued fraction expansion $\varphi=[1;1,1,1,\ldots]$ ({\it cf. e.g.} \cite[p.12]{Rockett-Szusz:92}) we obtain that \begin{equation}\label{cfConsequence} \{\varphi a_{n}\} \le \{\varphi a_{n-1}\} \le 1 - \{\varphi a_{n}\}. \end{equation} This chain of inequalities rules out the case $d_{n}=1$ in \eqref{eq:phi5}. Thus we get $a_{n+5} - a_n \in \{-1,0\}$, hence, $a_{n+5} \le a_{n} \le F_{2m}$, and we obtain a contradiction again. We have now shown that $a_n \le F_{2m}$ holds for each $n\in \mathbb{N}$. However, in view of \eqref{eq:phi1}, $a_{n+1}\le F_{2m}$ implies that $a_{n} \ge -(1+\varphi)F_{2m}-1$, which yields that $-(1+\varphi)F_{2m}-1 \le a_n \le F_{2m}$ holds for each $n\in\mathbb{N}$. Thus, the orbit $\{a_n\,:\, n\in\mathbb{N}\}$ is bounded and, hence, there are only finitely many possibilities for the pairs $(a_n,a_{n+1})$. In view of \eqref{eq:phirecurrence} this implies that $(a_n)_{n\in \mathbb{N}}$ is ultimately periodic. \end{proof} This proof depends on the very particular continued fraction expansion of the golden ratio $\varphi$. It does not seem possible to extend the argument to other parameters (apart from the conjugate $\varphi'=\frac{1-\sqrt{5}}{2}$). In fact, inequalities of the form \eqref{cfConsequence} do not hold any more, and so it cannot be guaranteed that the orbit does not ``jump'' over the threshold values. \subsection{Quadratic irrationals that give rise to rational rotations}\label{sec:domainexchange} One of the most significant results on Conjecture~\ref{Vivaldi-SRS-Conjecture} is contained in Akiyama~{\it et al.}~\cite{Akiyama-Brunotte-Pethoe-Steiner:07}. It reads as follows.
\begin{theorem} If $\lambda \in \left\{ \frac{\pm1\pm\sqrt 5}2, \pm\sqrt 2, \pm \sqrt 3\right\}$, then $ (1,\lambda)\in \D_2$, {\em i.e.}, each orbit of $\tau_{(1,\lambda)}$ is ultimately periodic. \end{theorem} Observe that this settles all the instances $\lambda= -2\cos (\theta \pi)$ with rational rotation angle $\theta$ for which $\lambda$ is a quadratic irrational. The proof of this result is very long and tricky. In what follows, we outline the main ideas. The proof runs along the same lines for each of the eight values of $\lambda$. As the technical details are simplest for $\lambda = \varphi=\frac{1 + \sqrt{5}}{2}$ we use this instance as a guideline. As in the first proof of periodicity for the golden ratio case given in \cite{Lowensteinetal:97}, a conjugate dynamical system, which was also studied in Adler {\it et al}.~\cite{AdlerKitchensTresser:01}, is used here. Indeed, let $((a_{k-1},a_k)^t)_{k\in \mathbb{N}}$ be an orbit of $\tau_{(1,\varphi)}$ and set $x=\{\varphi a_{k-1}\}$ and $y=\{\varphi a_{k}\}$. Then, by the definition of $\tau_{(1,\varphi)}$ we have that \[ \{\varphi a_{k+1}\} = \{-\varphi a_{k-1} - \varphi^2 a_{k} + \varphi y\} = \{-x+(\varphi-1)y\}=\{-x-\varphi'y\} \] where $\varphi'=\frac{1 - \sqrt{5}}{2}$ is the algebraic conjugate of $\varphi$. Thus (the appropriate restriction of) the mapping \[ T:[0,1)^2 \to [0,1)^2, \quad (x,y) \mapsto (y, \{-x-\varphi'y\}) \] is conjugate to $\tau_{(1,\varphi)}$, and it suffices to study the orbits of elements of $\mathbb{Z}[\varphi]^2\cap [0,1)^2$ under $T$ in order to prove the result for this parameter. (Computer assisted work on almost all orbits of $T$ was done in~\cite{Kouptsov-Lowenstein-Vivaldi:02}; however, the results given there are not strong enough to imply the above theorem.) Let $A=\begin{pmatrix}0&-1\\ 1& 1/\varphi \end{pmatrix}$ and write $T$ in the form \begin{equation}\label{Tcases} T(x,y) = \begin{cases} (x,y)A,& \hbox{for }y \ge \varphi x \\ (x,y)A + (0,1), & \hbox{for }y < \varphi x.
\end{cases} \end{equation} We will now iteratively determine pentagonal subregions of $[0,1)^2$ whose elements admit periodic orbits under the mapping $T$. First define the pentagon \[ R=\{(x,y)\in [0,1)^2\;:\; y <\varphi x,\ x+y > 1,\ x<\varphi y\} \] and split the remaining part $D = [0,1)^2 \setminus R$ according to the cases in the definition of $T$ in \eqref{Tcases}, {\it i.e.}, set \begin{align*} D_0 & = \{(x,y) \in D \;:\, y \ge \varphi x\} \setminus \{(0,0)\}, \\ D_1 &= D \setminus D_0. \end{align*} Using the fact that $A^5$ is the identity matrix one easily derives that $T^5(\mathbf{z})=\mathbf{z}$ for each $\mathbf{z} \in R$. This exhibits the first pentagon of periodic points of $T$. We will now use first return maps of $T$ on smaller copies of $[0,1)^2$ to exhibit more periodic pentagons. \begin{figure} \includegraphics[height=4cm]{steiner1a.pdf} \hskip 1cm \includegraphics[height=4cm]{steiner1b.pdf} \hskip 1cm \caption{The effect of the mapping $T$ on the regions $R$, $D_0$, and $D_1$ (compare~\cite[Figure~2.1]{Akiyama-Brunotte-Pethoe-Steiner:07}). \label{fig:steiner1}} \end{figure} To this end we first observe that by the definition of $D_0$ and $D_1$, the mapping $T$ acts in an easy way on these sets (see Figure~\ref{fig:steiner1}). Now we scale down $D$ by a factor $1/\varphi^2$ and follow the $T$-trajectory of each $\mathbf{z} \in D$ until it hits $D/\varphi^2$. First we determine all $\mathbf{z} \in D$ that never hit $D/\varphi^2$. By the mapping properties illustrated in Figure~\ref{fig:steiner1} one easily obtains that the set of these parameters is the union $P$ of the five shaded pentagons drawn in Figure~\ref{fig:steiner2}. \begin{figure} \includegraphics[height=8cm]{steiner2.pdf} \hskip 1cm \caption{The region of induction (black frame) of $T$ and the five (shaded) pentagons containing points with periodic orbits of $T$ (compare~\cite[Figure~2.2]{Akiyama-Brunotte-Pethoe-Steiner:07}).
\label{fig:steiner2}} \end{figure} Again we can use the mapping properties of Figure~\ref{fig:steiner1} to show that all elements of $P$ are periodic under $T$. Thus, to determine the periodic points of $T$ it is enough to study the map induced by $T$ on $D/\varphi^2$. The mapping properties of this induced map on the subsets $D_0/\varphi^2$ and $D_1/\varphi^2$ are illustrated in Figure~\ref{fig:steiner3}. They are (apart from the scaling factor) the same as the ones in Figure~\ref{fig:steiner1}. \begin{figure} \includegraphics[height=4cm]{steiner3a.pdf} \hskip 1cm \includegraphics[height=4cm]{steiner3b.pdf} \hskip 1cm \caption{The effect of the mapping $T$ on the induced regions $D_0/\varphi^2$ and $D_1/\varphi^2$. As can be seen by looking at the lower left corner of Figure~\ref{fig:steiner1}, the region $D_0/\varphi^2$ is mapped into the induced region $D/\varphi^2$ after one application of $T$. To map $D_1/\varphi^2$ back to $D/\varphi^2$ we need to apply the sixth iterate of $T$ (see Figure~\ref{fig:steiner2}, where $T^i(D_0/\varphi^2)$ is visualized for $i\in\{0,1,2,3,4,5\}$). The induced mapping on $D/\varphi^2$ shows the same behavior as $T$ on $D$, thus we say that $T$ is {\it self-inducing}. \label{fig:steiner3}} \end{figure} Now we can iterate this procedure by defining a sequence of induced maps on $D/\varphi^{2k}$ each of which exhibits $5^k$ more pentagonal pieces of periodic points of $T$ in $[0,1)^2$. To formalize this, let $s(\mathbf{z})=\min\{m\in \mathbb{N}\,:\, T^m(\mathbf{z}) \in D/\varphi^2\}$ and define the mapping \[ S: D \setminus P \to D, \quad \mathbf{z} \mapsto \varphi^2 T^{s(\mathbf{z})}(\mathbf{z}). \] The above mentioned iteration process then leads to the following result. \begin{lemma}[{\cite[Theorem~2.1]{Akiyama-Brunotte-Pethoe-Steiner:07}}]\label{lem:steiner1} The orbit $(T^k(\mathbf{z}))_{k\in \mathbb{N}}$ is periodic if and only if $\mathbf{z}\in R$ or $S^n(\mathbf{z})\in P$ for some $n \ge 0$. 
\end{lemma} This result can not only be used to characterize all periodic points of $T$ in $[0,1)^2$; it even enables one to determine the exact periods (see \cite[Theorem~2.3]{Akiyama-Brunotte-Pethoe-Steiner:07}). An approximation of the set $X \subset [0,1)^2$ of aperiodic points of $T$ is depicted in Figure~\ref{fig:steiner4}. \begin{figure} \includegraphics[height=8cm]{aper1.pdf} \hskip 1cm \caption{The aperiodic set $X$ (see \cite[Figure~2.3]{Akiyama-Brunotte-Pethoe-Steiner:07}). \label{fig:steiner4}} \end{figure} To prove the theorem it suffices to show that $X \cap \mathbb{Z}[\varphi]^2 = \emptyset$. By representing the elements of $X$ with the help of a certain kind of digit expansion one can prove that this can be achieved by checking periodicity of the orbit of each point contained in a certain finite subset of $X$. This finally leads to the proof of the theorem. We mention that Akiyama and Harriss~\cite{AH:13} show that the dynamics of $T$ on the set of aperiodic points $X$ is conjugate to the addition of $1$ on the set of 2-adic integers $\mathbb{Z}_2$. Analogous arguments are used to prove the other cases. However, the technical details get more involved (the worst case being $\pm\sqrt 3$). The most difficult part is to find a suitable region of induction for $T$, and no ``rule'' for its shape is known. Although the difficulty of the technical details increases dramatically, similar arguments should be applicable for an arbitrary algebraic number $\alpha$ that induces a rational rotation. However, if $d$ is the degree of $\alpha$, the dynamical system $T$ acts on the set $[0,1)^{2d-2}$. This means that already in the cubic case we have to find a region of induction for $T$ in a $4$-dimensional torus. \subsection{Rational parameters and $p$-adic dynamics} Bosio and Vivaldi~\cite{Bosio-Vivaldi:00} study\footnote{see Remark~\ref{rem:vivaldi}} $\tau_{(1,\lambda)}$ for parameters $\lambda = q/p^n$ where $p$ is a prime and $q\in\mathbb{Z}$ with $|q|<2p^n$.
They exhibit an interesting link to $p$-adic dynamics for these parameters. Before we give their main result we need some preparations. For $p$ and $q$ given as above consider the polynomial \[ p^{2n}\chi_{(1,\lambda)}\left(\frac{X}{p^n}\right)= X^2 +qX + p^{2n}. \] If we regard this as a polynomial over the ring $\mathbb{Z}_p$ of $p$-adic integers, by standard methods from algebraic number theory one derives that it has two distinct roots $\theta,\theta' \in \mathbb{Z}_p$. Obviously we have \begin{equation}\label{padicroots} \theta \theta' = p^{2n} \quad\hbox{and}\quad \theta + \theta' = -q. \end{equation} With the help of $\theta$ we now define the mapping \begin{equation}\label{Ell} \mathcal{L}: \mathbb{Z}^2 \to \mathbb{Z}_p,\quad (x,y)^t \mapsto y-\frac{\theta x}{p^n}. \end{equation} If $\sigma:\mathbb{Z}_p \to \mathbb{Z}_p$ denotes the shift mapping \[ \sigma\left(\sum_{i\ge 0} b_ip^i\right) = \sum_{i\ge 0} b_{i+1}p^i, \] we can state the following conjugacy of the SRS $\tau_{(1,\lambda)}$ to a mapping on $\mathbb{Z}_p$. \begin{theorem}[{\cite[Theorem~1]{Bosio-Vivaldi:00}}]\label{BosioThm} Let $p$ be a prime number and $q\in \mathbb{Z}$ with $|q|<2p^n$, and set $\lambda=q/p^n$. The function $\mathcal{L}$ defined in \eqref{Ell} embeds $\mathbb{Z}^2$ densely into $\mathbb{Z}_p$. The mapping $\tau^*_{(1,\lambda)}= \mathcal{L} \circ \tau_{(1,\lambda)}\circ \mathcal{L}^{-1}:\mathcal{L}(\mathbb{Z}^2)\to \mathcal{L}(\mathbb{Z}^2)$ is therefore conjugate to $ \tau_{(1,\lambda)}$. It can be extended continuously to $\mathbb{Z}_p$ and has the form \[ \tau^*_{(1,\lambda)}(\psi) = \sigma^n(\theta' \psi). \] \end{theorem} \begin{proof}[Sketch of the proof {\rm (see \cite[Proposition~4.2]{Bosio-Vivaldi:00})}] The fact that $\mathcal{L}$ is continuous and injective is shown in \cite[Proposition~4.1]{Bosio-Vivaldi:00}. We establish the formula for $\tau^*_{(1,\lambda)}$, which immediately implies the existence of the continuous extension to $\mathbb{Z}_p$.
Let $\psi = y-\frac{\theta x}{p^n} \in \mathcal{L}(\mathbb{Z}^2) \subset \mathbb{Z}_p$ be given. Noting that $\lfloor qy/p^n \rfloor = qy/p^n - c/p^n$ for some $c \equiv qy\; ({\rm mod}\, p^n )$ with $0 \le c < p^n$, and using \eqref{padicroots}, we get \begin{align*} \tau^*_{(1,\lambda)}(\psi) &= \mathcal{L}\circ \tau_{(1,\lambda)}((x,y)^t) = \mathcal{L}\left(\left(y,-x-\left\lfloor \frac{qy}{p^n} \right\rfloor \right)^t\right) = -x-\left\lfloor \frac{qy}{p^n} \right\rfloor - \frac{\theta y}{p^n} \\ &= \frac{1}{p^n}\left( -p^n x - (q+\theta)y + c \right) = \frac{1}{p^n} \left( -\frac{\theta'\theta}{p^n} x+ \theta' y + c \right) = \frac{1}{p^n}(\theta' \psi + c). \end{align*} One can show that $z\in\mathbb{Z}$ implies $\theta z \equiv 0\; ({\rm mod}\, p^{2n} )$, thus $c \equiv qy \equiv q\psi \equiv -\theta' \psi \; ({\rm mod}\, p^{n} )$ and the result follows. \end{proof} Theorem~\ref{BosioThm} is used by Vivaldi and Vladimirov~\cite{Vivaldi-Vladimirov:03} to set up a probabilistic model for the cumulative round-off error caused by the floor function under iteration of $\tau_{(1,\lambda)}$. Furthermore, they prove a central limit theorem for this model. \subsection{Newer developments} We conclude this section with two new results related to Conjecture~\ref{Vivaldi-SRS-Conjecture}. Very recently, Akiyama and Peth\H{o}~\cite{Akiyama-Pethoe:13} proved the following very general result. \begin{theorem}[{\cite[Theorem~1]{Akiyama-Pethoe:13}}]\label{AP13} For each fixed $\lambda\in(-2,2)$ the mapping $\tau_{(1,\lambda)}$ has infinitely many periodic orbits. \end{theorem} The proof is tricky and uses the fact that (after proper rescaling of the lattice $\mathbb{Z}^2$) each unbounded orbit of $\tau_{(1,\lambda)}$ has to hit a so-called ``trapping region'' ${\rm Trap}(R)$, which is defined as the symmetric difference of two disks of radius $R$ whose centers are at a certain distance (not depending on $R$) from each other. The proof is done by contradiction.
If one assumes that there are only finitely many periodic orbits, there exist more unbounded orbits hitting ${\rm Trap}(R)$ than there are lattice points in ${\rm Trap}(R)$ if $R$ is chosen large enough. This contradiction proves the theorem. Using lattice point counting techniques this result can be extended to variants of SRS (as defined in Section~\ref{sec:variants}); see \cite[Theorem~2]{Akiyama-Pethoe:13}. \medskip Reeve-Black and Vivaldi~\cite{Reeve-Black-Vivaldi:13} study the dynamics of\footnote{see Remark~\ref{rem:vivaldi}} $\tau_{(1,\lambda)}$ for $\lambda\to 0$, $\lambda > 0$. While the dynamics of $\tau_{(1,0)}$ is trivial, for each fixed small positive parameter $\lambda$ one observes that the orbits approximate polygons as visualized in Figure~\ref{fig:black}. \begin{figure} \includegraphics[height=6cm]{ReevePicture.pdf} \hskip 1cm \caption{Some examples of orbits of $\tau_{(1,1/50)}$. \label{fig:black}} \end{figure} Close to the origin the orbits approximate squares; however, the number of edges of the polygons increases the farther away from the origin the orbit is located. Eventually, the orbits approximate circles. Moreover, the closer to zero the parameter $\lambda$ is chosen, the better the approximation of the respective polygons (after a proper rescaling of the mappings $\tau_{(1,\lambda)}$). This behavior can be explained by the fact that $\tau^4_{(1,\lambda)}(\mathbf{z})$ is very close to $\mathbf{z}$ for small values of $\lambda$. The idea in \cite{Reeve-Black-Vivaldi:13} is now to construct a near-integrable Hamiltonian function $P:\mathbb{R}^2 \to \mathbb{R}$ that models this behavior. In particular, $P$ is set up in such a way that the orbits of the flow associated with the Hamiltonian vector field $(\partial P / \partial y, -\partial P/\partial x)$ are polygons. Moreover, if such a polygon passes through a lattice point of $\mathbb{Z}^2$ it is equal to the corresponding polygon approximated by the discrete systems.
Polygonal orbits of the flow passing through a lattice point are called {\it critical}. Critical polygons are used to separate the phase space into infinitely many {\it classes}. There is a crucial difference between the orbits of the discrete systems and their Hamiltonian model: the orbits of the Hamiltonian flow surround a polygon once and then close up. As can be seen in Figure~\ref{fig:black} this need not be the case for the orbits of $\tau_{(1,\lambda)}$. These may well ``surround'' a polygon more often (as can be seen from the two outer orbits of Figure~\ref{fig:black}). This behavior leads to long periods and makes the discrete case harder to understand. Discrete orbits that surround a polygon only once are called {\it simple}. They are of particular interest because they {\it shadow} the orbit of the Hamiltonian system and show some kind of structural stability. The main result in \cite{Reeve-Black-Vivaldi:13} asserts that there are many simple orbits. In particular, there exist infinitely many classes in the above sense in which a positive portion of the orbits of $\tau_{(1,\lambda)}$ are simple for small values of $\lambda$. The numerical value of this portion can be calculated in the limit $\lambda\to0$. These classes can be described by divisibility properties of the coordinates of the lattice points contained in a critical polygon (see~\cite[Theorems~A and~B]{Reeve-Black-Vivaldi:13}). \section{The boundary of $\mathcal{D}_d$ and periodic expansions w.r.t.\ Salem numbers} In Section~\ref{sec:D2} we considered periodicity properties of the orbits of $\tau_{\mathbf{r}}$ for $\mathbf{r}\in \partial\D_2$. While we get complete results for the regions $E_2^{(1)}$ and $E_2^{(-1)}$, the orbits of $\tau_\mathbf{r}$ for $\mathbf{r}\in E_2^{(\mathbb{C})}$ are hard to study and their periodicity is known only for a very limited number of instances.
In the present section we want to discuss periodicity results for orbits of $\tau_\mathbf{r}$ with $\mathbf{r}\in \partial\D_d$. Here, already the ``real'' case is difficult. In Section~\ref{sec:realDd} we review a result due to Kirschenhofer {\it et al.}~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10} that relates this problem to the structure of $\D_e^{(0)}$ for $e < d$. The study of the ``non-real'' part $E_d^{(\mathbb{C})}$ occupies the remaining parts of this section. Since we saw in Section~\ref{sec:D2} that the investigation of $E_d^{(\mathbb{C})}$ is difficult already in the case $d=2$, it is no surprise that no complete result exists in this direction. However, there are interesting relations to a conjecture of Bertrand~\cite{Bertrand:77} and Schmidt~\cite{Schmidt:80} on periodic orbits of beta-transformations w.r.t.\ Salem numbers, as well as partial periodicity results starting with the work of Boyd~\cite{Boyd:89,Boyd:96,Boyd:97}, which we want to survey. \subsection{The case of real roots}\label{sec:realDd} In Kirschenhofer {\it et al.}~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10} the authors established strong relations between the sets $\D_d$ on the one side and $\D_e^{(0)}$ for $e<d$ as well as some related sets on the other side. This leads to a characterization of $\D_d$ in the regions $E_d^{(1)}$ and $E_d^{(-1)}$ (and even in some small subsets of $E_d^{(\mathbb{C})}$) in terms of these sets. The corresponding result, which is stated below as Corollary~\ref{cor:real}, follows from a more general theorem that will be established in this subsection. Recall the definition of the operator $\odot$ in \eqref{eq:odot}. It was observed in \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10} that the behavior of $\tau_{\bf r}$, ${\bf r} \in \RR^p$, can be described completely by the behavior of $\tau_{{\bf r} \odot {\bf s}}$ if ${\bf s}\in\mathbb{Z}^q$.
To be more specific, for $q \in \NN\setminus\{0\}$ and ${\bf s}=(s_0,\ldots,s_{q-1}) \in \ZZ^q$ let \[ V_{\bf s}:\ZZ^{\infty} \rightarrow \ZZ^{\infty},\quad (x_n)_{n \in \NN} \mapsto \left(\sum_{k=0}^{q-1} s_kx_{n+k}+x_{n+q}\right)_{n \in \NN}. \] Then $V_{\bf s}$ maps each periodic sequence to a periodic sequence and each sequence that is eventually zero to a sequence that is eventually zero. Furthermore, the following important fact holds ({\it cf.}~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}). \begin{proposition} Let $p,q \geq 1$ be integers, ${\bf r} \in \RR^p$ and ${\bf s} \in \ZZ^q$. Then \[ V_{\bf s}\circ\tau_{{\bf r} \odot {\bf s}}^*(\ZZ^{p+q})=\tau_{\bf r}^*(\ZZ^p). \]\end{proposition} Here we denote by $\tau_{\bf t}^*({\bf x})$ the integer sequence obtained by successively concatenating the newly occurring entries of the iterates $\tau_{\bf t}^n({\bf x})$ to the entries of the initial vector ${\bf x} =(x_0,\dots,x_{d-1})^t$. \begin{proof}[Sketch of the proof {(\rm compare~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10})}.] Let \[ U=\left(\begin{array}{ccccccccc} s_0 & s_1 & \cdots & s_{q-1} & 1 & 0 &\cdots &0 \\ 0 & s_0 & & & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots &\ddots & & & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & s_0& \cdots & \cdots & s_{q-1} & 1 \end{array}\right) \in \ZZ^{p \times (p+q)}. \] Then $U$ has maximal rank $p$ and $U\ZZ^{p+q}=\ZZ^p$. The result will be proved if we show that \begin{equation}\label{for1} V_{\bf s}\circ\tau_{{\bf r} \odot {\bf s}}^*({\bf x})=\tau_{\bf r}^*(U{\bf x}) \end{equation} holds for all ${\bf x} \in \ZZ^{p+q}$. Supposing $(x_k)_{k \in \NN}=\tau_{{\bf r} \odot {\bf s}}^*({\bf x})$ and $(y_k)_{k \in \NN}=\tau_{\bf r}^*(U{\bf x})$, it has to be shown that $$ y_n=s_0x_n+\cdots+s_{q-1}x_{n+q-1}+x_{n+q} $$ holds for all $n \geq 0$. The latter fact is now proved by induction on $n$.
\end{proof} \begin{example}[{see \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}}] Let ${\bf r}=(\frac{11}{12},\frac{9}{5})$ and ${\bf s}=(1)$. The proposition says that the behavior of $\tau_{\bf r}$ is completely described by the behavior of $\tau_{{\bf r} \odot {\bf s}}$. For instance, take ${\bf y}:=(5,-3)^t$. We can choose ${\bf x}:=(4,1,-4)^t$ such that $U{\bf x}={\bf y}$ with \[U=\left(\begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right).\] Then ${\bf r} \odot {\bf s}=(\frac{11}{12},\frac{163}{60},\frac{14}{5})$ and \[ \tau_{{\bf r} \odot {\bf s}}^*({\bf x})=4,1,-4,(5,-4,2,1,-4,7,-9,10,-9,7,-4,1,2,-4)^\infty. \] In our case the map $V_{\bf s}$ adds each two consecutive entries of a sequence. Therefore we find \[ \tau_{{\bf r}}^*({\bf y})=V_{{\bf s}}\circ\tau_{{\bf r} \odot {\bf s}}^*({\bf x})=5,-3,(1,1,-2,3,-3,3,-2)^\infty. \] \end{example} An important consequence of the last proposition is the following result. \begin{corollary}[{{\em cf.}~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}}] Let ${\bf r} \in \RR^d$ and ${\bf s} \in \overline{\E_q} \cap \ZZ^q$. \begin{itemize} \item If ${\bf r} \odot {\bf s} \in \D_{d+q}$ then ${\bf r} \in \D_{d}$. \item If ${\bf r} \odot {\bf s} \in \D_{d+q}^{(0)}$ then ${\bf r} \in \D_{d}^{(0)}$. \end{itemize} \end{corollary} Unfortunately, the converse of the corollary does not hold in general; for instance, we have (see Example~\ref{ex:KPT} below) $\left(1,\frac{1+\sqrt{5}}{2}\right) \in \D_2$, but $\left(1,\frac{3+\sqrt{5}}{2},\frac{3+\sqrt{5}}{2}\right) = (1)\odot\left(1,\frac{1+\sqrt{5}}{2}\right) \in \partial \E_3 \setminus \D_3$. In the following we turn to results from \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10} that allow one to ``lift'' information on some sets derived from $\D_{e}$ for $e<d$ to the boundary of the sets $\D_{d}$. We will need the following notations. For ${\bf r} \in \D_d$ let $\OR({\bf r})$ be the set of all equivalence classes of cycles of $\tau_{\mathbf{r}}$.
For $p \in \NN\setminus\{0\}$ and $\mathcal{B} \in \OR({\bf r})$, define the function $S_p$ by \[ \mathcal{B}=\spk{x_0, \ldots,x_{l(\mathcal{B})-1}} \mapsto \begin{cases} 0 & \mbox{for } p \nmid l(\mathcal{B}) \mbox{ or } \sum_{j=0}^{l(\mathcal{B})-1} \xi_p^j x_j=0, \\ 1 & \mbox{otherwise}, \end{cases} \] where $\xi_p$ denotes a primitive $p$-th root of unity. Furthermore let $$ \D_d^{(p)}:=\{{\bf r} \in \D_d\; :\; \forall \mathcal{B} \in \OR({\bf r}):S_p(\mathcal{B})=0\}. $$ Observe that for $p=1$ an element ${\bf r} \in \D_d$ lies in $\D_d^{(1)}$ iff the sum of the entries of each cycle of $\tau_{\mathbf r}$ equals 0. For $p=2$ the alternating sums $x_0-x_1+x_2-x_3\pm \dots$ must vanish, and so on. Let $\Phi_j$ denote the $j$th {\it cyclotomic polynomial}. Then the following result was proved in \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}. \begin{theorem}\label{iii} Let $d,q \geq 1$, ${\bf r} \in \RR^d$ and ${\bf s}=(s_0,\ldots,s_{q-1}) \in \ZZ^q$ such that $s_0 \not=0$. Then ${\bf r} \odot {\bf s} \in \D_{d+q}$ if and only if the following conditions are satisfied: \begin{itemize} \item [(i)] $\chi_{\bf s}=\Phi_{\alpha_1}\Phi_{\alpha_2}\cdots\Phi_{\alpha_b}$ for pairwise distinct non-negative integers $\alpha_1,\ldots,\alpha_b$, and \item [(ii)] ${\bf r} \in \bigcap_{j=1}^b \D_d^{(\alpha_j)}$. \end{itemize} \end{theorem} \begin{proof}[Sketch of proof (compare~{\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}})] In order to prove the sufficiency of the two conditions, let us assume that (i) and (ii) are satisfied. We have to show that for arbitrary ${\bf x} \in \ZZ^{d+q}$ the sequence $(x_n)_{n \in \NN}:=\tau_{{\bf r} \odot {\bf s}}^*({\bf x})$ is ultimately periodic. Let $s_q:=1$. Setting \[ {\bf y}:=\left(\sum_{i=0}^q s_ix_i,\sum_{i=0}^q s_ix_{i+1},\ldots,\sum_{i=0}^q s_ix_{i+d-1}\right) \] it follows from \eqref{for1} that \begin{equation}\label{xy} (y_n)_{n \in \NN}:=\tau_{\bf r}^*({\bf y})=V_{{\bf s}}((x_n)_{n \in \NN}).
\end{equation} Since ${\bf r} \in \D_d^{(\alpha_1)}$, we have ${\bf r} \in \D_d$. Therefore there must exist a cycle $\spk{y_{n_0},\ldots,y_{n_0+l-1}} \in \OR({\bf r})$, from which we deduce the recurrence relation \begin{equation}\label{zug1} \sum_{h=0}^q s_hx_{n_0+k+h} =y_{n_0+k}= y_{n_0+k+l}= \sum_{h=0}^q s_hx_{n_0+k+l+h} \end{equation} for $k \geq 0$ for the sequence $(x_n)_{n \geq n_0}$. Its characteristic equation is \[ (t^l-1)\chi_{\bf s}(t)=0. \] Now let $\lambda_1,\ldots,\lambda_w$ be the roots of $t^l-1$ that are not roots of $\chi_{\bf s}(t)$, let $\lambda_{w+1},\ldots,\lambda_l$ be the common roots of $t^l-1$ and $\chi_{\bf s}(t)$, and let $\lambda_{l+1},\ldots,\lambda_g$ be the roots of $\chi_{\bf s}(t)$ that are not roots of $t^l-1$. By Condition~(i), $t^l-1$ and $\chi_{\bf s}(t)$ have only simple roots, so that $\lambda_{w+1},\ldots,\lambda_l$ have multiplicity two while all the other roots are simple. Therefore the solution of recurrence \eqref{zug1} has the form \begin{equation}\label{heu1} x_{n_0+k}= \sum_{j=1}^g A^{(0)}_j \lambda_j^k + \sum_{j=w+1}^l A^{(1)}_j k \lambda_j^k \end{equation} for $l+q$ complex constants $A_j^{(\nu)}$, and the (ultimate) periodicity of $(x_n)_{n \geq n_0}$ is equivalent to $A_j^{(1)}=0$ for all $j \in \{w+1,\ldots,l\}$. If none of $\alpha_1,\ldots,\alpha_b$ divides $l$, the polynomials $t^l-1$ and $\chi_{\bf s}$ have no common roots and the result is immediate. Let us now suppose that $t^l-1$ and $\chi_{\bf s}$ have common roots, {\it i.e.}, that $w<l$. From \eqref{zug1}, observing $\chi_{\bf s}(\lambda_j)=0$ for $j > w,$ we get with $k\in\{0,\ldots,l-1\}$ the following system of $l$ linear equations for the $l$ constants $A_1^{(0)},\ldots,A_w^{(0)},A_{w+1}^{(1)},\ldots,A_l^{(1)}$.
\begin{equation}\label{heu2} \begin{array}{rl} \displaystyle y_{n_0+k}& \displaystyle= \displaystyle \sum_{j=1}^g A^{(0)}_j \lambda_j^k\chi_{\bf s}(\lambda_j) + \sum_{j=w+1}^l A^{(1)}_j (k\lambda_j^k\chi_{\bf s}(\lambda_j)+\lambda_j^{k+1}\chi'_{\bf s}(\lambda_j))\\ &=\displaystyle \sum_{j=1}^w A^{(0)}_j \lambda_j^k\chi_{\bf s}(\lambda_j) + \sum_{j=w+1}^l A^{(1)}_j \lambda_j^{k+1}\chi'_{\bf s}(\lambda_j). \end{array} \end{equation} It remains to show that $A_j^{(1)}=0$ for $w+1\le j \le l$. Rewriting the system as \begin{equation}\label{EQ} (y_{n_0},\ldots,y_{n_0+l-1})^t= G (A_1^{(0)},\ldots,A_w^{(0)},A_{w+1}^{(1)},\ldots,A_l^{(1)})^t \end{equation} with \[ G = \left(\begin{array}{cccccc} \chi_{\bf s}(\lambda_1) & \cdots & \chi_{\bf s}(\lambda_w) & \chi'_{\bf s}(\lambda_{w+1})\lambda_{w+1} & \cdots & \chi'_{\bf s}(\lambda_{l})\lambda_{l} \\ \chi_{\bf s}(\lambda_1)\lambda_1 & \cdots & \chi_{\bf s}(\lambda_w)\lambda_w & \chi'_{\bf s}(\lambda_{w+1})\lambda_{w+1}^2 & \cdots & \chi'_{\bf s}(\lambda_{l})\lambda_{l}^2 \\ \vdots & &\vdots & \vdots & &\vdots \\ \chi_{\bf s}(\lambda_1)\lambda_1^{l-1} & \cdots & \chi_{\bf s}(\lambda_w)\lambda_w^{l-1} & \chi'_{\bf s}(\lambda_{w+1})\lambda_{w+1}^l & \cdots & \chi'_{\bf s}(\lambda_{l})\lambda_{l}^l \end{array}\right), \] we have by Cramer's rule that \[ A_j^{(1)} = \frac{\det G_j}{\det G}, \] where $G_j$ denotes the matrix obtained by replacing the $j$th column of $G$ with the vector $(y_{n_0},\ldots,y_{n_0+l-1})^t$. ($\det G \not=0$ is easily verified using the Vandermonde determinant.)
Now \begin{align}\label{xxxx} \det G_j = &\prod_{k=1}^w \chi_{\bf s}(\lambda_k) \prod_{\begin{subarray}{c} k=w+1 \\ k \not=j \end{subarray}}^l (\lambda_k\chi'_{\bf s}(\lambda_k))\, D_j, \end{align} where \begin{align*}D_j := & \det\left(\begin{array}{ccccccc} 1 & \cdots & 1 & y_{n_0} & 1 & \cdots & 1 \\ \lambda_1 & \cdots & \lambda_{j-1} & y_{n_0+1} & \lambda_{j+1} & \cdots & \lambda_{l} \\ \vdots & &\vdots & \vdots &&\vdots \\ \lambda_1^{l-1} & \cdots & \lambda_{j-1}^{l-1} & y_{n_0+l-1} & \lambda_{j+1}^{l-1} & \cdots & \lambda_{l}^{l-1} \\ \end{array}\right). \end{align*} Adding the $\overline{\lambda_j}^{k}$-fold multiple of the $k$th row to the last row for each $k\in\{1,\ldots,l-1\}$ we obtain \[ D_j=\det\left(\begin{array}{ccccccc} 1 & \cdots & 1 & y_{n_0} & 1 & \cdots & 1 \\ \lambda_1 & \cdots & \lambda_{j-1} & y_{n_0+1} & \lambda_{j+1} & \cdots & \lambda_{l} \\ \vdots & &\vdots & \vdots &&\vdots \\ \lambda_1^{l-2} & \cdots & \lambda_{j-1}^{l-2} & y_{n_0+l-2} & \lambda_{j+1}^{l-2} & \cdots & \lambda_{l}^{l-2} \\ 0 & \cdots & 0 & \sum_{k=0}^{l-1} \overline{\lambda_j}^{k+1} y_{n_0+k} & 0 & \cdots & 0 \\ \end{array}\right). \] If we can establish $\sum_{k=0}^{l-1} \overline{\lambda_j}^k y_{n_0+k} = 0$, we are done. Now, since $\lambda_j$ is a root of $t^l-1$ and $\chi_{\bf s}$, Condition (i) yields that there exists a $p \in \{1,\ldots,b\}$ with $\alpha_p \mid l$. Thus $\lambda_j$ and $\overline{\lambda_j}$ are primitive $\alpha_p$th roots of unity. It follows from Condition (ii) that $S_{\alpha_p}(\spk{y_{n_0},\ldots,y_{n_0+l-1}})=0$, {\it i.e.}, \begin{equation}\label{ES} \sum_{k=0}^{l-1} \xi_{\alpha_p}^k y_{n_0+k}=0 \end{equation} for each primitive $\alpha_p$th root of unity $\xi_{\alpha_p}$. In particular, $\sum_{k=0}^{l-1} \overline{\lambda_j}^k y_{n_0+k}=0$. Let us turn to the necessity of Conditions (i) and (ii). Since ${\bf r} \odot {\bf s} \in \D_{d+q}$ we have $\varrho(R({\bf s})) \le \varrho(R({\bf r} \odot {\bf s}))\le 1$.
Since $s_0\not=0$, $\chi_{\bf s}$ is a polynomial over $\ZZ$ all of whose roots are non-zero and bounded by one in modulus. By a theorem of Kronecker, this implies that each root of this polynomial is a root of unity. Suppose now that $\chi_{\bf s}$ has a root of multiplicity at least $2$, say $\lambda_{j_0}$. Let ${\bf x} \in \ZZ^{d+q}$ and $(x_n)_{n \in \NN}:=\tau_{{\bf r} \odot {\bf s}}^*({\bf x})$. Since $(x_n)_{n \in \NN}$ is a solution of recurrence \eqref{zug1}, it must have the shape \[ x_{n_0+k} = \sum_{j=1}^gA_j(k)\lambda_j^k \] with some polynomials $A_j$ ($1\le j \le g$). Inserting this into \eqref{zug1} yields \begin{equation}\label{Y2} y_{n_0+k}= \sum_{j=1}^g \sum_{h=0}^q s_h A_j(k+h) \lambda_j^{k+h}. \end{equation} Taking $k\in \{1,\ldots, l\}$, we get a system of $l$ equations for the $l+q$ coefficients $A_j^{(\nu)}$ of the polynomials $A_j$. In a similar way as in the treatment of \eqref{zug1} in the first part of this proof it can be shown that $q$ of the $l+q$ coefficients do not occur in \eqref{Y2} for $k\in \{1,\ldots, l\}$, and the system can be used to calculate the remaining $l$ coefficients $A_j^{(\nu)}$. The coefficient of $A_{j_0}^{(1)}$ in \eqref{Y2} equals \[ k\lambda_{j_0}^k\chi_{\bf s}(\lambda_{j_0})+\lambda_{j_0}^{k+1}\chi'_{\bf s}(\lambda_{j_0}). \] Since $\lambda_{j_0}$ is a double zero of $\chi_{\bf s}$, the latter expression, and thus the coefficient of $A_{j_0}^{(1)}$ in \eqref{Y2}, vanishes. Now let $z_1,\ldots, z_q$ be a $q$-tuple of integers and consider the system of $q$ equations \begin{equation}\label{zk} z_k = \sum_{j=1}^gA_j(k)\lambda_j^k \qquad(1\le k\le q). \end{equation} This system can be used to calculate the remaining $q$ coefficients $A_j^{(\nu)}$, among which we have $A_{j_0}^{(1)}$. Choosing $z_1,\ldots, z_q$ in such a way that $A_{j_0}^{(1)}\not=0$ allows us to determine all coefficients $A_j^{(\nu)}$. We now use equation \eqref{zk} to define the integers $z_k$ for $k > q$.
Then by \eqref{zug1} the sequence $\tau_{{\bf r} \odot {\bf s}}^*((z_0,\ldots,z_{d+l-1}))$ satisfies the recurrence relation $\sum_{i=0}^q s_iz_{n_0+k+i} = \sum_{i=0}^q s_iz_{n_0+k+l+i}$. As $A_{j_0}^{(1)} \not=0$, this sequence does not become ultimately periodic by the auxiliary Lemma~\ref{aux} below, a contradiction to ${\bf r} \odot {\bf s} \in \D_{d+q}$. Thus we have proved the necessity of Condition (i) of the theorem. Let us now turn to Condition (ii). Since ${\bf r} \odot {\bf s} \in \D_{d+q}$, $(x_n)_{n\in\NN}$ in \eqref{heu1} is ultimately periodic. By Lemma~\ref{aux} this implies that $A_j^{(1)}=0$ for each $j\in\{w+1,\ldots,l\}$. Adopting the notation and reasoning of the sufficiency part of the proof, this is equivalent to $D_j=0$, so that \eqref{ES} holds for $\alpha_1,\ldots,\alpha_b$, proving the necessity of (ii). \end{proof} In the last proof we made use of the following auxiliary result, which, in other terminology, can also be found in \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}. \begin{lemma}\label{aux} Let the sequence $(x_n)_{n\ge 0}$ be the solution of a homogeneous linear recurrence with constant coefficients in $\CC$ whose eigenvalues are roots of unity. If at least one of the eigenvalues has multiplicity greater than 1, then $(x_n)_{n\ge 0}$ is not bounded. \end{lemma} For the simple proof we also refer to~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}. Theorem~\ref{iii} allows us in particular to give information on the behavior of $\tau_{\bf r}$ on several parts of the boundary of $\E_d$. Remember that $\partial \E_3 = \partial \D_3$ consists of the two triangles $E_3^{(-1)}$ and $E_3^{(1)}$ and of the surface $E_3^{(\mathbb{C})}$. Then we have the following result (see~\cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}). \begin{corollary}\label{cor:real} The following assertions hold. \begin{itemize} \item For $d \geq 2$ we have $\D_d \cap E_d^{(-1)} = (-1) \odot \D_{d-1}^{(1)}$.
\item For $d \geq 2$ we have $\D_d \cap E_d^{(1)} = (1) \odot \D_{d-1}^{(2)}$. \item For $d \geq 3$ we have $\D_d \cap (1,0) \odot \overline{\E_{d-2}} = (1,0) \odot \D_{d-2}^{(4)}$. \item For $d \geq 3$ we have $\D_d \cap (1,1) \odot \overline{\E_{d-2}} = (1,1) \odot \D_{d-2}^{(3)}$. \item For $d \geq 3$ we have $\D_d \cap (1,-1) \odot \overline{\E_{d-2}} = (1,-1) \odot \D_{d-2}^{(6)}$. \end{itemize} \end{corollary} Combining the first two items of the last corollary and computing approximations of $\D_{2}^{(1)}$ and $\D_{2}^{(2)}$ yields approximations of $\D_3 \cap E_3^{(-1)}$ and $\D_3 \cap E_3^{(1)}$, respectively, as depicted in Figure~\ref{E3-1} (algorithms for $\D_d^{(0)}$ are presented in Section~\ref{sec:algorithms}; they can be adapted to $\D_d^{(p)}$ in an obvious way). \begin{figure} \hskip 0.7cm \includegraphics[width=0.3\textwidth, bb=0 0 300 400]{E3-1} \hskip 1cm \includegraphics[width=0.37\textwidth, bb=0 0 300 400]{E31} \caption{The triangles $E_3^{(-1)}$ (left hand side) and $E_3^{(1)}$ (right hand side). Dark grey: parameters $\mathbf{r}$ for which $\tau_\mathbf{r}$ is ultimately periodic for each starting value $\mathbf{z}\in\mathbb{Z}^3$. Black: there exists a starting value $\mathbf{z}\in\mathbb{Z}^3$ such that the orbit $(\tau_\mathbf{r}^k(\mathbf{z}))_{k\in\mathbb{N}}$ becomes unbounded. Light grey: not yet characterized. See \cite[Figure~3]{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}.} \label{E3-1} \end{figure} The last three items of the corollary allow one to characterize, {\it e.g.}, $\D_3$ on some lines of the surface $E_3^{(\mathbb{C})}$. The study of concrete parameters $\mathbf{r}\in E_d^{(1)} \cup E_d^{(-1)}$ shows interesting behavior, as illustrated in the following three-dimensional example. \begin{example}[{\em cf.}~\cite{Kirschenhofer-Pethoe-Thuswaldner:08}]\label{ex:KPT} Let $\varphi = \frac{1+\sqrt{5}}{2}$. From our considerations above one can derive that $$ \left(1,\varphi^2,\varphi^2\right)\in\partial \D_3 \setminus \D_3.
$$ We want to study the orbits of $\tau_{\left(1,\varphi^2,\varphi^2\right)}$ more closely for certain starting values $(z_0,z_1,z_2)$. In particular, let $z_0=z_1=0$. Interestingly, the behavior of the orbit depends on the starting digits of the {\it Zeckendorf representation} of $z_2$: $$ z_2 = \sum_{j\ge 2} z_{2,j} F_j $$ with $z_{2,j}\in \{0,1\}$ and $z_{2,j}z_{2,j+1}=0$ for $j\ge 2$ (as in the proof of Theorem~\ref{th:steinerproof}, $(F_n)_{n\in\mathbb{N}}$ is the sequence of Fibonacci numbers). More precisely, the following results hold (see~\cite[Theorems~4.1,5.1,5.2, and~5.3]{Kirschenhofer-Pethoe-Thuswaldner:08}). \begin{itemize} \item If $z_{2,2}=z_{2,3}=0$ the sequence $(z_n)$ is divergent; \item if $z_{2,2}=0, z_{2,3}=1$ the sequence $(z_n)$ has period 30; \item if $z_{2,2}=1, z_{2,3}=z_{2,4}=0$ the sequence $(z_n)$ has period 30; \item if $z_{2,2}=1, z_{2,3}=0, z_{2,4}=1$ the sequence $(z_n)$ has period 70. \end{itemize} \end{example} \subsection{The conjecture of Klaus Schmidt on Salem numbers}\label{sec:Salem} Schmidt~\cite{Schmidt:80} (see also Bertrand \cite{Bertrand:77}) proved the following result on beta-expansions w.r.t.\ Pisot and Salem numbers (recall that $T_\beta$ is the beta-transformation defined in \eqref{betatransform}). \begin{theorem}[{\cite[Theorems~2.5 and 3.1]{Schmidt:80}}]\label{thm:Schmidt} Let $\beta > 1$ be given. \begin{itemize} \item If $T_\beta$ has an ultimately periodic orbit for each element of $\mathbb{Q}\cap[0,1)$, then $\beta$ is either a Pisot or a Salem number. \item If $\beta$ is a Pisot number, then $T_\beta$ has an ultimately periodic orbit for each element of $\mathbb{Q}(\beta) \cap [0,1)$.
\end{itemize} \end{theorem} We do not reproduce the proof of this result here. However, if we replace the occurrences of $\mathbb{Q}$ and $\mathbb{Q}(\beta)$ in the theorem by $\mathbb{Z}[\beta]$ and assume that $\beta$ is an algebraic integer, then the correspondingly modified result follows immediately from Propositions~\ref{prop:betanumformula} and \ref{EdDdEd}. Just observe that the conjugacy between $T_\beta$ and $\tau_\mathbf{r}$ stated in Proposition~\ref{prop:betanumformula} relates Pisot numbers to SRS parameters $\mathbf{r}\in \E_d$ and Salem numbers to SRS parameters $\mathbf{r}\in \partial\E_d$. Therefore, this modification of Theorem~\ref{thm:Schmidt} is a special case of Proposition~\ref{EdDdEd}. Note that Theorem~\ref{thm:Schmidt} does not give information on whether beta-expansions w.r.t.\ Salem numbers are periodic or not. Already Schmidt~\cite{Schmidt:80} formulated the following conjecture. \begin{conjecture}\label{con:Schmidt} If $\beta$ is a {\em Salem} number, then $T_\beta$ has an ultimately periodic orbit for each element of $\mathbb{Q}(\beta) \cap [0,1)$. \end{conjecture} So far, no example of a non-periodic beta-expansion w.r.t.\ a Salem number $\beta$ has been found, although Boyd~\cite{Boyd:96} gives a heuristic argument that casts some doubt on this conjecture (see Section~\ref{sec:Heuristic}). In view of Proposition~\ref{prop:betanumformula} (apart from the difference between $\mathbb{Z}[\beta]$ and $\mathbb{Q}(\beta)$) this conjecture is a special case of the following generalization of Conjecture~\ref{Vivaldi-SRS-Conjecture} to arbitrary dimensions (which, because of Boyd's heuristics, we formulate as a question). \begin{question}\label{qu:Schmidt} Let $\mathbf{r}\in E_d^{(\mathbb{C})}$ be given. Is it true that each orbit of $\tau_\mathbf{r}$ is ultimately periodic?
\end{question} As Proposition~\ref{D2easyboundary} and Corollary~\ref{cor:real} show, the corresponding question cannot be answered affirmatively for all parameters contained in $E_d^{(1)}$ and $E_d^{(-1)}$. \subsection{The expansion of $1$} Since it seems to be very difficult to verify Conjecture~\ref{con:Schmidt} even for a single Salem number $\beta$, Boyd~\cite{Boyd:89} considered the simpler problem of studying the orbits of $1$ under $T_\beta$ for Salem numbers of degree $4$. In \cite[Theorem~1]{Boyd:89} he shows that these orbits are always ultimately periodic and -- although there is no uniform bound for the period -- he is able to give the orbits explicitly. We just state the result about the periods and omit the description of the concrete structure of the orbits. \begin{theorem}[{see \cite[Lemma~1 and Theorem~1]{Boyd:89}}]\label{thm:boyd4} Let $X^4 + b_1 X^3 + b_2 X^2 + b_1 X + 1$ be the minimal polynomial of a Salem number $\beta$ of degree $4$. Then $\lfloor \beta\rfloor \in \{-b_1-2,-b_1-1,-b_1,-b_1+1 \}$. According to these values we have the following periods $p$ for the orbit of $T_\beta(1)$: \begin{itemize} \item[(i)] If $\lfloor \beta \rfloor = -b_1+1$ then $2b_1-1 \le b_2 \le b_1-1$ and \begin{itemize} \item[(a)] if $b_2 = 2b_1 -1$ then $p=9$, and \item[(b)] if $b_2 > 2b_1 -1$ then $p=5$. \end{itemize} \item[(ii)] If $\lfloor \beta \rfloor = -b_1$ then $p=3$. \item[(iii)] If $\lfloor \beta \rfloor = -b_1-1$ then $p=4$. \item[(iv)] If $\lfloor \beta \rfloor = -b_1-2$ then $-b_1+1 < b_2 \le -2b_1-3$. Let $c_k=(-2b_1-2)-(-b_1-3)/k$ for $k\in \{1,2,\ldots, -b_1 - 3\}$. Then $-b_1+1 =c_1 < c_2 < \cdots < c_{-b_1-3} = -2 b_1-3$ and $c_{k-1} < b_2 \le c_k$ implies that $p=2k+2$.
\end{itemize} \end{theorem} According to Proposition~\ref{prop:betanumformula} the dynamical systems $(\tau_\mathbf{r}, \mathbb{Z}^d)$ and $(T_\beta, \mathbb{Z}[\beta] \cap [0,1))$ are conjugate by the conjugacy $\Phi_{\mathbf r}(\mathbf{z})=\{\mathbf{r}\mathbf{z}\}$ when $\mathbf{r}=(r_0,\ldots,r_{d-1})$ is chosen as in this proposition. Thus {\it a priori} $T_\beta(1)$ has no analogue in $(\tau_\mathbf{r}, \mathbb{Z}^d)$. However, note that \begin{align*} \tau_\mathbf{r}((1,0,\ldots, 0)^t) &= (0,\ldots,0, -\lfloor r_0 \rfloor)^t = (0,\ldots,0, -\lfloor -1/\beta \rfloor)^t = (0,\ldots,0,1)^t \quad \hbox{and}\\ T_\beta(1)&= \{ \beta \}, \end{align*} since a Salem number is a unit, we have $b_0=1$ and hence $r_0=-1/\beta \in(-1,0)$. Because $\Phi_{\mathbf{r}}((0,\ldots,0,1)^t)=\{\beta\}$ we see that the orbit of $(1,0,\ldots,0)^t$ under $\tau_{\mathbf{r}}$ has the same behavior as the orbit of $1$ under $T_\beta$. Let us return to Salem numbers of degree $4$. If $\beta$ is such a Salem number then, since $\beta$ has non-real conjugates on the unit circle, the minimal polynomial of $\beta$ can be written as $(X-\beta)(X^3 +r_2X^2 + r_1X +r_0)$ with $\mathbf{r}=(r_0,r_1,r_2) \in E_3^{(\mathbb{C})}$. Thus Theorem~\ref{thm:boyd4} answers the following question for a special class of parameters. \begin{question}\label{qu:salem4} Given $\mathbf{r} \in E_3^{(\mathbb{C})}$, is the orbit of $(1,0,0)^t$ under $\tau_{\mathbf r}$ ultimately periodic and, if so, how long is its period? \end{question} As mentioned in Section~\ref{sec:Ed}, the set $ E_3^{(\mathbb{C})}$ is a surface in $\mathbb{R}^3$. Using the definition of $E_3^{(\mathbb{C})}$ one easily derives that (see equation~\eqref{eq:EC} and \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}) \begin{equation}\label{ECn} E_3^{(\mathbb{C})}=\{(t,st+1,s+t) \;:\; -2< s< 2 ,\, -1\le t\le 1\}. \end{equation} Figure~\ref{fig:salemparameters} illustrates which values of the parameters $(s,t)$ correspond to Salem numbers.
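The prediction of Theorem~\ref{thm:boyd4} can be checked numerically. The following Python sketch (our own illustration, not taken from the cited papers) computes the orbit of $1$ under $T_\beta$ exactly in $\mathbb{Z}[\beta]$ for the Salem number $\beta\approx 1.7221$ with minimal polynomial $x^4-x^3-x^2-x+1$; here $b_1=b_2=-1$ and $\lfloor\beta\rfloor=1=-b_1$, so case~(ii) of the theorem predicts period $p=3$.

```python
from math import floor

# Elements of Z[beta] are integer coefficient vectors w.r.t. 1, beta, beta^2, beta^3,
# where beta is the root > 1 of x^4 - x^3 - x^2 - x + 1 (a Salem polynomial).

def beta_numeric():
    # bisection on [1.5, 2] for the root of x^4 - x^3 - x^2 - x + 1
    f = lambda x: x**4 - x**3 - x**2 - x + 1
    lo, hi = 1.5, 2.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

B = beta_numeric()   # beta ≈ 1.7220838...

def mul_beta(a):
    # multiply by beta, reducing via beta^4 = beta^3 + beta^2 + beta - 1
    a0, a1, a2, a3 = a
    return (-a3, a0 + a3, a1 + a3, a2 + a3)

def value(a):
    return a[0] + a[1]*B + a[2]*B**2 + a[3]*B**3

def T(a):
    # beta-transformation T_beta(x) = beta*x - floor(beta*x), exact in Z[beta];
    # the floor is computed numerically (the values stay safely off the integers)
    m = mul_beta(a)
    n = floor(value(m))
    return (m[0] - n,) + m[1:]

# iterate from 1 = (1,0,0,0) until a state repeats
seen, orbit, a = {}, [], (1, 0, 0, 0)
while a not in seen:
    seen[a] = len(orbit)
    orbit.append(a)
    a = T(a)
preperiod, period = seen[a], len(orbit) - seen[a]
print(preperiod, period)   # 1 3  -- period 3, matching case (ii)
```

The same state-hashing loop applies verbatim to the SRS orbit of $(1,0,\ldots,0)^t$ under $\tau_\mathbf{r}$ once the conjugate parameter $\mathbf{r}$ is known to sufficient precision.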
\begin{figure} \includegraphics[height=6cm]{SalemParameters.pdf} \caption{The black dots mark the parameters corresponding to Salem numbers of degree $4$ in the parameterization of $E_3^{(\mathbb{C})}$ given in \eqref{ECn}. The marked region indicates a set of parameters that share the same orbit of $(1,0,0)^t$. \label{fig:salemparameters}} \end{figure} By Theorem~\ref{thm:boyd4} and the above-mentioned conjugacy of the dynamical systems $(\tau_\mathbf{r}, \mathbb{Z}^d)$ and $(T_\beta, \mathbb{Z}[\beta] \cap [0,1))$, for each of the indicated points we know that the orbit of $(1,0,0)^t$ under $\tau_{(t,st+1,s+t)}$ is ultimately periodic with the period given in this theorem. What about the general answer to Question~\ref{qu:salem4}? In Figure~\ref{fig:schmidtshaded} we illustrate the periods of the orbit of $(1,0,0)^t$ for the values $(s,t) \in (-2,2)\times [0,1]$. \begin{figure} \includegraphics[width=0.8\textwidth]{SchmidtShadedthree.jpg} \caption{Lengths of the orbit of $(1,0,0)^t$ in the parameter region $(s,t)\in (-2,2)\times[0,1]$. The lighter the point, the shorter the orbit. Comparing this with Figure~\ref{fig:salemparameters} we see that for ``most'' Salem numbers of degree $4$ the orbit of $1$ under $T_\beta$ has a short period. This agrees with Theorem~\ref{thm:boyd4}. \label{fig:schmidtshaded}} \end{figure} Although it is not hard to characterize the periods on large subregions of this parameter range, we do not know whether $(1,0,0)^t$ has an ultimately periodic orbit for each parameter. In fact, it looks like a ``fortunate coincidence'' that Salem parameters lie in regions that mostly correspond to small periods. In particular, we have no explanation for the black ``stain'' southeast of the point $(-1,0)$ in Figure~\ref{fig:schmidtshaded} that corresponds to a spot with very long periods. Boyd~\cite{Boyd:96} studies orbits of $1$ under $T_\beta$ for Salem numbers of degree $6$. There seems to be no simple ``formula'' for the period as in the case of degree $4$.
Moreover, for some examples no periods have been found so far (see also~\cite{Hare-Tweedle:08}, where orbits of $1$ under $T_\beta$ are given for classes of Salem numbers). We give two examples that illustrate the difficulty of the situation. \begin{example} Let $\beta >1$ be the Salem number defined by the polynomial \[ x^6 - 3x^5 - x^4 - 7x^3 -x^2 - 3x + 1. \] Let $m$ be the pre-period of the orbit of $1$ under $T_\beta$, and $p$ its period (if these values exist). Boyd~\cite{Boyd:96} showed with computer assistance that $m+p > 10^9$. Hare and Tweedle~\cite{Hare-Tweedle:08} consider the Salem number $\beta > 1$ defined by \[ x^{12}-3 x^{11}+3 x^{10}-4 x^9+5 x^8-5 x^7+5 x^6-5 x^5+5 x^4-4 x^3+3 x^2-3 x+1. \] They compute that, if it exists, the period of the orbit of $1$ under $T_\beta$ is greater than $5\cdot 10^5$ in this case. We emphasize that for both of these examples it is {\em not} known whether the orbit of $1$ under $T_\beta$ is ultimately periodic or not. \end{example} \subsection{The heuristic model of Boyd for shift radix systems}\label{sec:Heuristic} Let $\beta$ be a Salem number. In \cite[Section~6]{Boyd:96} a heuristic probabilistic model for the orbits of $1$ under $T_\beta$ is presented. This model suggests that for Salem numbers of degree $4$ and $6$ ``almost all'' orbits should be finite, and predicts the existence of ``many'' unbounded orbits for Salem numbers of degree $8$ and higher. This suggests that there exist counterexamples to Conjecture~\ref{con:Schmidt}. Here we present an SRS version of Boyd's model in order to give heuristics for the behavior of the orbit of $(1,0,\ldots,0)^t$ under $\tau_\mathbf{r}$ for $\mathbf{r} \in E_d^{(\mathbb{C})}$. Let $\mathbf{r}\in E_d^{(\mathbb{C})}$ be given. To keep things simple, we assume that the characteristic polynomial $\chi_\mathbf{r}$ of the matrix $R(\mathbf{r})$ defined in \eqref{mata} is irreducible.
Let $\beta_1,\ldots, \beta_d$ be the roots of $\chi_\mathbf{r}$, grouped in such a way that $\beta_1,\ldots, \beta_r$ are real and $\beta_{r+j}= \bar \beta_{r+s+j}$ ($1\le j \le s$) are the non-real roots ($d=r+2s$). Let $D={\rm diag}(\beta_1,\ldots,\beta_d)$. Since $R(\mathbf{r})$ is a {\it companion matrix} we have $R(\mathbf{r}) = VDV^{-1}$, where $V=(v_{ij})$ with $v_{ij}=\beta_j^{i-1}$ is the {\it Vandermonde matrix} formed with the roots of $\chi_\mathbf{r}$ ({\it cf. e.g.}~\cite{Brand:64}). Iterating \eqref{linear} $k$ times, we get \begin{align} \tau_\mathbf{r}^k((1,0,\ldots,0)^t) &= \sum_{j=0}^{k-1} R(\mathbf{r})^j \mathbf{d}_j + R(\mathbf{r})^k(1,0,\ldots, 0)^t\nonumber \\ &=\sum_{j=0}^{k-1} VD^jV^{-1} \mathbf{d}_j + VD^kV^{-1}(1,0,\ldots, 0)^t, \label{eq:salemsum} \end{align} where $\mathbf{d}_j=(0,\ldots,0,\varepsilon_j)^t$ with $\varepsilon_j \in [0,1)$. Let $V^{-1}=(w_{ij})$. From \cite[Section~3]{Soto-Moya:11} we easily compute that $w_{id}=\prod_{\ell \not=i}(\beta_i - \beta_\ell)^{-1}$. Multiplying \eqref{eq:salemsum} by $V^{-1}$ and using this fact, we arrive at \begin{equation}\label{eq:SalemBox} V^{-1} \tau_\mathbf{r}^k\begin{pmatrix} 1\\0\\ \vdots\\0 \end{pmatrix} = \begin{pmatrix} \prod_{\ell\not=1}(\beta_1-\beta_\ell)^{-1}\sum_{j=0}^{k-1} \varepsilon_j\beta_1^j \\ \vdots\\ \prod_{\ell\not=d}(\beta_d-\beta_\ell)^{-1}\sum_{j=0}^{k-1} \varepsilon_j\beta_d^j \end{pmatrix}+ D^kV^{-1}\begin{pmatrix}1\\ 0\\ \vdots \\ 0\end{pmatrix} \in \mathbb{R}^r\times \mathbb{C}^{2s}. \end{equation} Note that the $(r+j)$-th coordinate of \eqref{eq:SalemBox} is just the complex conjugate of its $(r+s+j)$-th coordinate ($1\le j \le s$). Thus two points in the orbit of $(1,0,\ldots,0)^t$ under $\tau_\mathbf{r}$ are equal if and only if their images under $V^{-1}$ agree in the first $r+s$ coordinates.
So, using the fact that $|\varepsilon_j|< 1$, we see that each $\mathbf{z}=(z_1,\ldots, z_d)^t \in \{V^{-1} \tau_\mathbf{r}^k((1,0,\ldots,0)^t)\;:\; 0\le k < n\}$ satisfies the following: \begin{itemize} \item[(i)] $\mathbf{z}$ is an element of the lattice $V^{-1} \mathbb{Z}^d$. \item[(ii)] If $i\in\{1,\ldots, r\}$ then $z_i\in \mathbb{R}$ with \[ |z_i| \le \prod_{\ell\not=i} |\beta_i-\beta_\ell|^{-1}\sum_{j=0}^{n-1} \left|\beta_i \right|^j. \] \item[(iii)] If $i\in\{1,\ldots, s\}$ then $z_{r+i}=\bar z_{r+s+i}\in \mathbb{C}$ with \begin{align*} |z_{r+i}| &\le \prod_{\ell\not=r+i}|\beta_{r+i}-\beta_\ell|^{-1}\sum_{j=0}^{n-1} \left|\beta_{r+i} \right|^j\\ &= \sqrt{\prod_{\ell\not=r+i}|\beta_{r+i}-\beta_\ell|^{-1}\prod_{\ell\not=r+s+i}|\beta_{r+s+i}-\beta_\ell|^{-1}\sum_{j=0}^{n-1} |\beta_{r+i}|^j\sum_{j=0}^{n-1} |\beta_{r+s+i}|^j}. \end{align*} \end{itemize} Let ${\rm disc}(\chi_\mathbf{r})=\prod_{i\not=j}(\beta_i-\beta_j)$ be the discriminant of $\chi_\mathbf{r}$. Then the three items above imply that a point in $\{V^{-1} \tau_\mathbf{r}^k((1,0,\ldots,0)^t)\;:\; 0\le k < n\}$ is a point of the lattice $V^{-1}\mathbb{Z}^d$ that is contained in a product $K_n$ of disks and intervals with volume \[ {\rm Vol}(K_n)=\frac{c}{|{\rm disc}(\chi_\mathbf{r})|} \prod_{i=1}^d \sum_{j=0}^{n-1} \left|\beta_{i}^j\right|, \] where $c$ is an absolute constant. As $\det(V)^2={\rm disc}(\chi_\mathbf{r})$, the mesh volume of the lattice $V^{-1}\mathbb{Z}^d$ equals $|{\rm disc}(\chi_\mathbf{r})|^{-1/2}$, and we get that this box cannot contain more than approximately \[ N_n=\frac{c}{\sqrt{|{\rm disc}(\chi_\mathbf{r})|}} \prod_{i=1}^d\sum_{j=0}^{n-1}\left| \beta_{i}^j\right| \] elements. If $|\beta_i|<1$ then $ \sum_{j=0}^{n-1} \left|\beta_{i}^j\right| = O(1)$. Since $\chi_\mathbf{r}$ is irreducible, $|\beta_i|=1$ implies that $\beta_i$ is non-real. If $|\beta_i|=1$ then we have the estimate $ \sum_{j=0}^{n-1}\left|\beta_{i}^j\right| = O(n)$ for this sum as well as for the conjugate sum.
Let $m$ be the number of pairs of non-real roots of $\chi_\mathbf{r}$ that have modulus $1$. Then these considerations yield that \begin{equation}\label{withoutbirthday} N_n \le \frac{c}{\sqrt{|{\rm disc}(\chi_\mathbf{r})|}} n^{2m}. \end{equation} Unfortunately, this estimate does not allow us to draw any conclusion on the periodicity of the orbit of $(1,0,\ldots,0)^t$. We thus make the following {\it assumption}: we assume that for each fixed $\beta_i$ with $|\beta_i|=1$ the quantities $\varepsilon_j\beta_i^j$ ($0\le j\le n-1$) in \eqref{eq:SalemBox} behave like identically distributed independent random variables. Then, according to the central limit theorem, the sums in \eqref{eq:SalemBox} can be estimated by \[ \left|\sum_{j=0}^{n-1} \varepsilon_j\beta_{i}^j\right| = O(\sqrt{n}). \] Using this argument, we can replace \eqref{withoutbirthday} by the better estimate \[ N_n \le \frac{c}{\sqrt{|{\rm disc}(\chi_\mathbf{r})|}} n^{m}. \] Suppose that $m=1$. If $|{\rm disc}(\chi_\mathbf{r})|$ is large enough, the set $\{V^{-1} \tau_\mathbf{r}^k((1,0,\ldots,0)^t)\;:\; 0\le k < n\}$ would be contained in $K_n$, which contains fewer than $n$ points of the lattice $V^{-1}\mathbb{Z}^d$. Thus there have to be some repetitions in the orbit of $(1,0,\ldots,0)^t$. This implies that the orbit is ultimately periodic. For $m=2$ and a sufficiently large discriminant the set $\{V^{-1} \tau_\mathbf{r}^k((1,0,\ldots,0)^t)\;:\; 0\le k < n\}$ would contain considerably more than $\sqrt{N_n}$ ``randomly chosen'' points taken from a box with $N_n$ elements. Thus, according to the ``birthday paradox'', the orbit ``picks'' the same point twice with probability $1$ (for $n\to\infty$), which again implies ultimate periodicity. For $m \ge 3$ this model suggests that there may well exist aperiodic orbits, as there are ``too many'' choices to pick points. Summing up, we come to the following conjecture.
\begin{conjecture} Let $\mathbf{r} \in E_d^{(\mathbb{C})}$ be a parameter with irreducible polynomial $\chi_\mathbf{r}$. Let $m$ be the number of pairs of complex conjugate roots $(\alpha, \bar\alpha)$ of $\chi_\mathbf{r}$ with $|\alpha|=1$. Then almost every orbit of $(1,0,\ldots,0)^t$ under $\tau_\mathbf{r}$ is periodic if $m=1$ or $m=2$ and aperiodic if $m \ge 3$. \end{conjecture} Note that the cases $m=1$ and $m=2$ contain the Salem numbers of degree $4$ and $6$, respectively. Moreover, Salem numbers of degree $8$ and higher are contained in the cases $m\ge 3$. This is in accordance with \cite[Section~6]{Boyd:96}. \section{Shift radix systems with finiteness property: the sets $\D_d^{(0)}$}\label{sec:Dd0} As was already observed by Akiyama {\it et al.}~\cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05} the set $\D_d^{(0)}$ can be constructed from the set $\D_d$ by ``cutting out'' families of convex polyhedra. Moreover, it is known that for $d\ge 2$ infinitely many such ``cut out polyhedra'' are needed to characterize $\D_d^{(0)}$ in this way (see Figure~\ref{d20} for an illustration of $\D_2^{(0)}$). A list $\pi$ of pairwise distinct vectors \begin{equation}\label{eq:cycle} (a_{j},\ldots,a_{d-1+j})^t \qquad (0\le j\le L-1) \end{equation} with $a_{L}=a_0,\ldots, a_{L+d-1}=a_{d-1}$ is called a {\em cycle of vectors}. To the cycle $\pi$ we associate the (possibly degenerate or empty) polyhedron \[ P(\pi) = \{(r_0,\ldots,r_{d-1})\; : \; 0\le r_0 a_{j} + \cdots + r_{d-1} a_{d-1+j} + a_{d+j} <1 \hbox{ holds for } 0\le j\le L-1\}. \] By definition the cycle in \eqref{eq:cycle} forms a periodic orbit of $\tau_{{\bf r}}$ if and only if $\mathbf{r}\in{P}(\pi)$. Since ${\bf r}\in \D_d^{(0)}$ if and only if $\tau_{{\bf r}}$ has no non-trivial periodic orbit it follows that $$ \label{cutout} \D_d^{(0)} = \D_d \setminus \bigcup_{\pi\not=\mathbf{0}} {P}(\pi), $$ where the union is taken over all non-zero cycles $\pi$ of vectors. 
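The membership condition $\mathbf{r}\in P(\pi)$ is easy to check in exact arithmetic. The following Python sketch (the function names \texttt{tau} and \texttt{in\_P} are ours; $\tau_\mathbf{r}$ is the SRS map defined earlier in this survey) verifies both the inequalities defining $P(\pi)$ and, equivalently, the periodicity of the corresponding orbit. As data we use the period-$5$ cycle $(-1,-1)^t \to (-1,1)^t \to (1,2)^t \to (2,1)^t \to (1,-1)^t$ discussed in the examples of this section, together with the parameter $\mathbf{r}=(\frac{9}{10},-\frac{11}{20})$ from Example~\ref{ex:pp}, which realises it.

```python
from fractions import Fraction
from math import floor

def tau(r, z):
    # SRS map tau_r(z) = (z_1, ..., z_{d-1}, -floor(r.z))
    return z[1:] + (-floor(sum(ri * zi for ri, zi in zip(r, z))),)

def in_P(r, a):
    """r in P(pi) for the cycle pi with periodic sequence a = (a_0, ..., a_{L-1}):
    0 <= r_0 a_j + ... + r_{d-1} a_{d-1+j} + a_{d+j} < 1 for 0 <= j < L."""
    d, L = len(r), len(a)
    return all(
        0 <= sum(r[i] * a[(j + i) % L] for i in range(d)) + a[(j + d) % L] < 1
        for j in range(L)
    )

# Periodic sequence of the period-5 cycle; its cycle vectors are (a_j, a_{j+1}).
r = (Fraction(9, 10), Fraction(-11, 20))
a = (-1, -1, 1, 2, 1)
assert in_P(r, a)

# Equivalently, tau_r^5 fixes the cycle's starting vector:
z = (-1, -1)
for _ in range(5):
    z = tau(r, z)
assert z == (-1, -1)
```

Exact rationals are used so that the floor in $\tau_\mathbf{r}$ and the strict inequality in $P(\pi)$ are evaluated without rounding errors.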
The family of all (non-empty) polyhedra occurring in this union is called the family of {\it cut out polyhedra of $\D_d^{(0)}$}. \begin{example}[{see \cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}}] Let $\pi$ be a cycle of period $5$ in $\mathbb{Z}^2$ given by $$ (-1,-1)^t \rightarrow (-1,1)^t \rightarrow (1,2)^t \rightarrow (2,1)^t \rightarrow (1,-1)^t \rightarrow (-1,-1)^t. $$ Then $P(\pi)$ gives the topmost cut out triangle in the approximation of $\D_2^{(0)}$ in Figure~\ref{d20}. \end{example} For $d=1$, the set $\D_1^{(0)}$ can easily be characterized. \begin{proposition}[{{\em cf.}~\cite[Proposition~4.4]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}}]\label{pro:D10} $$ \D_1^{(0)}= [0,1). $$ \end{proposition} The proof is an easy exercise. \subsection{Algorithms to determine $\D_d^{(0)}$} \label{sec:algorithms} To show that a given point $\mathbf{r}\in \D_d$ does not belong to $\D_d^{(0)}$ it is sufficient to show that $\tau_\mathbf{r}$ admits a non-trivial periodic orbit, {\it i.e.}, to show that there is a cycle $\pi$ with $\mathbf{r}\in P(\pi)$. Proving the other alternative is often more difficult. We provide an algorithm (going back to Brunotte~\cite{Brunotte:01}) that decides whether a given $\mathbf{r}\in\E_d$ is in $\D_d^{(0)}$ or not. As usual, denote the standard basis vectors of $\mathbb{R}^d$ by $\{\mathbf{e}_1,\ldots,\mathbf{e}_d\}$. \begin{definition}[Set of witnesses]\label{def:sow} A {\em set of witnesses} associated with a parameter $\mathbf{r}\in \mathbb{R}^d$ is a set $\mathcal{V}_\mathbf{r}$ satisfying \begin{itemize} \item[(i)] $\{\pm \mathbf{e}_1,\ldots,\pm\mathbf{e}_d\}\subset \mathcal{V}_\mathbf{r}$ and \item[(ii)] $\mathbf{z}\in \mathcal{V}_\mathbf{r}$ implies that $\{ \tau_\mathbf{r}(\mathbf{z}),-\tau_\mathbf{r}(-\mathbf{z})\} \subset \mathcal{V}_\mathbf{r}$. \end{itemize} \end{definition} The following theorem justifies the terminology ``set of witnesses''.
\begin{theorem}[{see {\em e.g.}~\cite[Theorem~5.1]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}}]\label{thm:Brunotte} Choose $\mathbf{r}\in \mathbb{R}^d$ and let $\mathcal{V}_\mathbf{r}$ be a set of witnesses for $\mathbf{r}$. Then \[ \mathbf{r}\in\D_d^{(0)} \quad\Longleftrightarrow\quad \hbox{for each } \mathbf{z}\in\mathcal{V}_\mathbf{r} \hbox{ there is } k\in\mathbb{N} \hbox{ such that } \tau_\mathbf{r}^k(\mathbf{z})=\mathbf{0}. \] \end{theorem} \begin{proof} It is obvious that the left hand side of the equivalence implies the right hand side. Thus assume that the right hand side holds. Assume that $\mathbf{a}\in \mathbb{Z}^d$ has finite SRS expansion, {\it i.e.}, there exists $\ell\in\mathbb{N}$ such that $\tau_\mathbf{r}^\ell(\mathbf{a})=\mathbf{0}$, and choose $\mathbf{b}\in \{\pm \mathbf{e}_1,\ldots,\pm\mathbf{e}_d\}$. We now show that $\mathbf{a}+\mathbf{b}$ also has finite SRS expansion. As $\mathcal{V}_\mathbf{r}$ is a set of witnesses, using Definition~\ref{def:sow}~(ii) we derive from the almost linearity condition stated in \eqref{almostlinear} that \[ \tau_\mathbf{r}(\mathbf{a} + \mathcal{V}_\mathbf{r}) \subset \tau_\mathbf{r}(\mathbf{a}) + \mathcal{V}_\mathbf{r}. \] Iterating this $\ell$ times and observing that $\mathbf{b} \in \mathcal{V}_\mathbf{r}$ holds in view of Definition~\ref{def:sow}~(i), we obtain \[ \tau_\mathbf{r}^\ell(\mathbf{a} + \mathbf{b}) \in \tau_\mathbf{r}^\ell(\mathbf{a}) + \mathcal{V}_\mathbf{r} = \mathcal{V}_\mathbf{r}. \] Thus our assumption implies that $\mathbf{a} + \mathbf{b}$ has finite SRS expansion. Since $\mathbf{0}$ clearly has finite SRS expansion, the above argument inductively proves that $\mathbf{r}\in\D_d^{(0)}$. \end{proof} For each $\mathbf{r} \in \E_d$ we can now check algorithmically whether $\mathbf{r}\in\D_d^{(0)}$ or not. Indeed, if $\mathbf{r}\in\E_d$ the matrix $R(\mathbf{r})$ is contractive.
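For small examples this criterion can be implemented directly. The sketch below (in Python with exact rational arithmetic; the function names \texttt{witnesses}, \texttt{reaches\_zero} and \texttt{has\_finiteness\_property} are ours) computes the closure of $\{\pm\mathbf{e}_1,\ldots,\pm\mathbf{e}_d\}$ under $\mathbf{z}\mapsto\tau_\mathbf{r}(\mathbf{z})$ and $\mathbf{z}\mapsto-\tau_\mathbf{r}(-\mathbf{z})$ and then checks the criterion of the theorem; by the contractivity just mentioned the closure is finite for $\mathbf{r}\in\E_d$, and each orbit test terminates because orbits are ultimately periodic there.

```python
from fractions import Fraction
from math import floor

def tau(r, z):
    # SRS map tau_r(z) = (z_1, ..., z_{d-1}, -floor(r.z))
    return z[1:] + (-floor(sum(ri * zi for ri, zi in zip(r, z))),)

def neg(z):
    return tuple(-c for c in z)

def witnesses(r):
    """Closure of {+-e_1, ..., +-e_d} under z -> tau_r(z) and
    z -> -tau_r(-z); finite for r in E_d by contractivity."""
    d = len(r)
    todo = [tuple(int(i == j) for j in range(d)) for i in range(d)]
    todo += [neg(z) for z in todo]
    W = set()
    while todo:
        z = todo.pop()
        if z not in W:
            W.add(z)
            todo.append(tau(r, z))
            todo.append(neg(tau(r, neg(z))))
    return W

def reaches_zero(r, z):
    # orbits are ultimately periodic for r in E_d: iterate until we
    # either hit 0 or revisit a point
    seen = set()
    while z not in seen:
        seen.add(z)
        if z == (0,) * len(r):
            return True
        z = tau(r, z)
    return False

def has_finiteness_property(r):
    # criterion of the theorem: r in D_d^(0) iff every witness
    # eventually reaches 0 under iteration of tau_r
    return all(reaches_zero(r, z) for z in witnesses(r))
```

For instance, $\mathbf{r}=(\frac{9}{10},-\frac{11}{20})$ admits a period-$5$ cycle through $(-1,-1)^t$ (see Example~\ref{ex:pp}), so \texttt{has\_finiteness\_property} returns \texttt{False} for it.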
In view of \eqref{linear} this implies that Algorithm~\ref{alg:1} yields a finite set of witnesses $\mathcal{V}_\mathbf{r}$ for $\mathbf{r}$ after finitely many steps. Since Proposition~\ref{EdDdEd} ensures that each orbit of $\tau_\mathbf{r}$ is ultimately periodic for $\mathbf{r} \in \E_d$, the criterion in Theorem~\ref{thm:Brunotte} can be checked algorithmically for each $\mathbf{z}\in\mathcal{V}_\mathbf{r}$. \begin{algorithm} \begin{algorithmic} \Require{$\mathbf{r} \in \E_d$} \Ensure{A set of witnesses $\mathcal{V}_\mathbf{r}$ for $\mathbf{r}$} \State $W_0 \gets \{\pm \mathbf{e}_1,\ldots,\pm \mathbf{e}_d\}$ \State $i \gets 0$ \Repeat \State $W_{i+1} \gets W_i \cup \tau_\mathbf{r}(W_i) \cup (-\tau_\mathbf{r}(-W_i))$ \State $i\gets i+1$ \Until{$W_i = W_{i-1}$} \State $\mathcal{V}_\mathbf{r} \gets W_i$ \end{algorithmic} \caption{An algorithm to calculate the set of witnesses of a parameter $\mathbf{r} \in \E_d$ (see~\cite[Section~5]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05})} \label{alg:1} \end{algorithm} We can generalize these ideas and set up an algorithm that allows us to determine the set $\D_d^{(0)}$ within small regions. To this end we define a set of witnesses for a compact set. \begin{definition}[Set of witnesses for a compact set]\label{def:sowR} Let $H \subset \mathbb{R}^d$ be a non-empty compact set and for $\mathbf{z}=(z_0,\ldots,z_{d-1})\in \mathbb{Z}^d$ define the functions \begin{align} M(\mathbf{z}) &= \max\{-\lfloor\mathbf{r}\mathbf{z}\rfloor\;:\; \mathbf{r}\in H\} ,\label{eq:M} \\ T(\mathbf{z}) &= \{(z_1,\ldots,z_{d-1},j)^t\;:\; -M(-\mathbf{z}) \le j \le M(\mathbf{z})\}. \nonumber \end{align} A set $\mathcal{V}_H$ is called a {\em set of witnesses for the region $H$} if it satisfies \begin{itemize} \item[(i)] $\{\pm \mathbf{e}_1,\ldots,\pm\mathbf{e}_d\}\subset \mathcal{V}_H$ and \item[(ii)] $\mathbf{z}\in \mathcal{V}_H$ implies that $T(\mathbf{z}) \subset \mathcal{V}_H$.
\end{itemize} A {\em graph $\mathcal{G}_H$ of witnesses for $H$} is a directed graph whose vertices are the elements of a set of witnesses $\mathcal{V}_H$ for $H$ and with a directed edge from $\mathbf{z}$ to $\mathbf{z}'$ if and only if $\mathbf{z}'\in T(\mathbf{z})$. \end{definition} Each cycle of a graph of witnesses $\mathcal{G}_H$ is formed by a cycle of vectors (note that cycles of graphs are therefore considered to be {\em simple} in this paper). If $\mathbf{0}$ is a vertex of $\mathcal{G}_H$ then $\mathcal{G}_H$ contains the cycle $\mathbf{0} \to \mathbf{0}$. We call this cycle {\em trivial}. All the other cycles in $\mathcal{G}_H$ will be called {\em non-trivial}. \begin{lemma}[{see~\cite[Section~5]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}}]\label{lem:Brunotte} The following assertions are true. \begin{itemize} \item[(i)] A set of witnesses for $\mathbf{r}$ is a set of witnesses for the region $H=\{\mathbf{r}\}$ and vice versa. \item[(ii)] Choose $\mathbf{r}\in \D_d$ and let $\mathcal{G}_{H}$ be a graph of witnesses for $H=\{\mathbf{r}\}$. If $\mathbf{r}\not\in\D_d^{(0)}$ then $\mathcal{G}_{H}$ has a non-trivial cycle $\pi$ with $\mathbf{r}\in P(\pi)$. \item[(iii)] A graph of witnesses for a compact set $H$ is a graph of witnesses for each non-empty compact subset of $H$. \end{itemize} \end{lemma} \begin{proof} All three assertions are immediate consequences of Definitions~\ref{def:sow} and~\ref{def:sowR}. \end{proof} We will use this lemma in the proof of the following result. \begin{theorem}[{see {\em e.g.}~\cite[Theorem~5.2]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}}]\label{thm:BrunotteR} Let $H$ be the convex hull of the finite set $\{\mathbf{r}_1,\ldots,\mathbf{r}_k\}\subset\D_d$. 
If $\mathcal{G}_H$ is a graph of witnesses for $H$ then \[ \D_d^{(0)} \cap H = H \setminus \bigcup_{\begin{subarray}{c}\pi \in \mathcal{G}_H\\ \pi \not = \mathbf{0} \end{subarray}} P(\pi) \] where the union is extended over all non-trivial cycles of $\mathcal{G}_H$. Thus the set $\D_d^{(0)} \cap H$ is described by the graph $\mathcal{G}_H$. \end{theorem} \begin{proof} As obviously \[ \D_d^{(0)} \cap H = H \setminus \bigcup_{\pi \not = \mathbf{0}} P(\pi) \subset H \setminus \bigcup_{\begin{subarray}{c}\pi \in \mathcal{G}_H\\ \pi \not = \mathbf{0} \end{subarray}} P(\pi) \] it suffices to prove the reverse inclusion. To this end assume that $\mathbf{r} \not\in \D_d^{(0)} \cap H$. W.l.o.g.\ we may also assume that $\mathbf{r}\in H$. Then by Lemma~\ref{lem:Brunotte}~(iii) the graph $\mathcal{G}_H$ is a graph of witnesses for $\{\mathbf{r}\}$. Thus Lemma~\ref{lem:Brunotte}~(ii) implies that $\mathcal{G}_H$ has a non-trivial cycle $\pi$ with $\mathbf{r}\in P(\pi)$ and, hence, $\mathbf{r}\not\in H \setminus \bigcup_{\begin{subarray}{c}\pi \in \mathcal{G}_H\\ \pi \not = \mathbf{0} \end{subarray}} P(\pi)$. \end{proof} Theorem~\ref{thm:BrunotteR} is of special interest if there is an algorithmic way to construct the graph $\mathcal{G}_H$. In this case it leads to an algorithm for the description of $\D_d^{(0)}$ in the region $H$. To be more precise, assume that $H \subset \E_d$ is the convex hull of a finite set $\mathbf{r}_1,\ldots, \mathbf{r}_k$. Then the maximum $M(\mathbf{z})$ in \eqref{eq:M} is easily computable and analogously to Algorithm~\ref{alg:1} we can set up Algorithm~\ref{alg:2} to calculate the set of vertices of a graph of witnesses $\mathcal{G}_H$ for $H$. As soon as we have this set of vertices, the edges can be constructed from the definition of a graph of witnesses. The cycles can then be determined by classical algorithms ({\it cf.\ e.g.}~\cite{Johnson:75}).
\begin{algorithm} \begin{algorithmic} \Require{$H \subset \E_d$ which is the convex hull of $\mathbf{r}_1,\ldots, \mathbf{r}_k$} \Ensure{The states $\mathcal{V}_H$ of a graph of witnesses $\mathcal{G}_H$ for $H$} \State $W_0 \gets \{\pm \mathbf{e}_1,\ldots,\pm \mathbf{e}_d\}$ \State $i \gets 0$ \Repeat \State $W_{i+1} \gets W_i \cup T(W_i)$ \State $i\gets i+1$ \Until{$W_i = W_{i-1}$} \State $\mathcal{V}_H\gets W_i$ \end{algorithmic} \caption{An algorithm to calculate the set of witnesses of $H \subset \E_d$ (see~\cite[Section~5]{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05})} \label{alg:2} \end{algorithm} We need to make sure that Algorithm~\ref{alg:2} terminates. To this end set $I(\mathbf{z})=\{\mathbf{sz}\,:\, \mathbf{s}\in H\}$. As $H$ is convex, this set is an interval. Thus, given $\mathbf{z}$, for each $\mathbf{z}'\in T(\mathbf{z})$ we can find $\mathbf{r}\in H$ such that \[ \mathbf{z}' = \tau_{\mathbf{r}}(\mathbf{z})=R(\mathbf{r}) \mathbf{z} + \mathbf{v} \qquad (\hbox{for some } \mathbf{v} \hbox{ with } ||\mathbf{v}||_\infty<1). \] As $\mathbf{r}\in \E_d$ we can choose a norm that makes $R(\mathbf{r})$ contractive for a particular $\mathbf{r}$. However, in general it is not possible to find a norm that makes $R(\mathbf{r})$ contractive for each $\mathbf{r}\in H$ unless the diameter of $H$ is small enough. Thus, in order to ensure that Algorithm~\ref{alg:2} terminates we have to choose the set $H$ sufficiently small. There seems to exist no algorithmic way to determine sets $H$ that are ``small enough'' for Algorithm~\ref{alg:2} to terminate. In practice one starts with some set $H$. If the algorithm does not terminate after a reasonable amount of time, one has to subdivide $H$ into smaller subsets until the algorithm terminates for each piece. This strategy has been used so far to describe large parts of $\D_2^{(0)}$ (see {\it e.g.}~\cite{Akiyama-Brunotte-Pethoe-Thuswaldner:06,Surer:07}).
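Since $\mathbf{r}\mathbf{z}$ is linear in $\mathbf{r}$ and the floor function is monotone, the maximum in \eqref{eq:M} over the convex hull $H$ is attained at one of the vertices $\mathbf{r}_1,\ldots,\mathbf{r}_k$, so $M(\mathbf{z})$ and $T(\mathbf{z})$ are directly computable from the vertex list. A minimal Python sketch (the function names \texttt{M} and \texttt{T} mirror the notation of Definition~\ref{def:sowR}):

```python
from fractions import Fraction
from math import floor

def M(vertices, z):
    # M(z) = max_{r in H} -floor(r.z); r.z is linear in r and floor is
    # monotone, so the maximum over the convex hull H is attained at a vertex.
    return max(-floor(sum(ri * zi for ri, zi in zip(r, z))) for r in vertices)

def T(vertices, z):
    # T(z) = {(z_1, ..., z_{d-1}, j)^t : -M(-z) <= j <= M(z)}
    lo = -M(vertices, tuple(-c for c in z))
    hi = M(vertices, z)
    return {z[1:] + (j,) for j in range(lo, hi + 1)}

# For a one-point region H = {r}, T(z) reduces to {tau_r(z), -tau_r(-z)},
# matching the set-of-witnesses condition for a single parameter:
r = (Fraction(9, 10), Fraction(-11, 20))
assert T([r], (1, 0)) == {(0, 0), (0, -1)}
```

Iterating the closure of $\{\pm\mathbf{e}_1,\ldots,\pm\mathbf{e}_d\}$ under \texttt{T}, exactly as in Algorithm~\ref{alg:2}, then yields the vertex set $\mathcal{V}_H$ whenever the iteration stabilizes.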
Very recently, Weitzer~\cite{Weitzer:13} was able to design a new algorithm which describes $\D_d^{(0)} \cap H$ for arbitrary compact sets $H \subset \E_d$. He does not need any further assumptions on $H$. Moreover, he is able to show that the set $\D_2^{(0)}$ is not connected and has non-trivial fundamental group. \begin{remark} These algorithms can easily be adapted to characterize the sets $\D_d^{(p)}$ used in Section~\ref{sec:realDd} (see \cite[Section~6.1]{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}). \end{remark} We conclude this section with the following fundamental problem. \begin{problem} Give a complete description for $\D_d^{(0)}$ if $d \ge 2$. \end{problem} \subsection{The finiteness property on the boundary of $\E_d$} Let us now focus on the relation between the sets $\D_d^{(0)}$ and $\E_d$. We already observed that the application of $\tau_\mathbf{r}$ performs a multiplication by the matrix $R(\mathbf{r})$ followed by a round-off. If $\mathbf{r} \in \partial \mathcal{D}_d$, then $R(\mathbf{r})$ has at least one eigenvalue of modulus $1$. Thus multiplication by $R(\mathbf{r})$ will not contract along the direction of the corresponding eigenvector $\mathbf{v}$. If we consider a typical (large) orbit of $\tau_\mathbf{r}$, it is reasonable to assume that the successive ``round-off errors'' will --- even though they may not cancel out by the heuristics given in Section~\ref{sec:Heuristic} --- not always draw the orbit towards $\mathbf{0}$. This would imply that such an orbit will sometimes not end up at $\mathbf{0}$ if it starts far enough away from the origin in the direction of $\mathbf{v}$. More precisely, the following conjecture was stated by Akiyama {\it et al.}~\cite{Akiyama-Borbely-Brunotte-Pethoe-Thuswaldner:06}. \begin{conjecture}\label{c69} For $d\in\mathbb{N}$ we have $$ \D_d^{(0)}\subset \E_d. $$ \end{conjecture} In other words: {\it Let $\mathbf{r}\in\mathbb{R}^d$.
If $\tau_{\mathbf r}$ has the finiteness property then, according to the conjecture, each of the eigenvalues of $R(\mathbf{r})$ has modulus strictly less than one.} Since by Proposition~\ref{EdDdEd} we have $\D_d^{(0)}\subset\D_d \subset \overline{\E_{d}}$ it remains to check all parameters $\mathbf{r}$ giving rise to a matrix $R(\mathbf{r})$ whose eigenvalues have modulus at most one with equality in at least one case. Therefore Conjecture~\ref{c69} is equivalent to $$ \mathcal{D}_d^{(0)} \cap \partial \mathcal{D}_d = \emptyset. $$ This is of course trivially true for $d=1$ (see Proposition~\ref{pro:D10}). It has been proved for $d=2$ by Akiyama {\it et al.}~\cite{Akiyama-Brunotte-Pethoe-Thuswaldner:06} (see Corollary~\ref{D2boundarycorollary}). In the proofs for the cases $d=1$ and $d=2$, explicit orbits that do not end up at $\mathbf{0}$ are constructed for all parameters $\mathbf{r}\in \partial\E_d$ in question. For $d=3$ this seems no longer possible for all parameters $\mathbf{r}\in \partial\E_d$. Nevertheless Brunotte and the authors could settle the instance $d=3$. \begin{theorem}[{{\em cf.}~\cite{Brunotte-Kirschenhofer-Thuswaldner:12}}]\label{thm:D3} $$ \D_3^{(0)}\subset \E_3. $$ \end{theorem} In the following we give a very rough outline of the idea of the proof. In Figure \ref{E3} we see the set $\E_3$. The boundary of this set can be decomposed according to \eqref{Edboundary}. Moreover, the following parameterizations hold (see \cite{Kirschenhofer-Pethoe-Surer-Thuswaldner:10}) \begin{eqnarray} E_3^{(1)}&=&\{(s,s+t+st,st+t+1) \;:\; -1\le s,t\le 1\}, \label{eq:E1} \\ E_3^{(-1)}&=& \{(-s,s-t-st,st+t-1) \;:\; -1\le s,t\le 1\}\label{eq:E-1},\quad\hbox{and}\\ E_3^{(\mathbb{C})}&=&\{(t,st+1,s+t) \;:\; -2 < s < 2 ,\, -1\le t\le 1\}.\label{eq:EC} \end{eqnarray} The sets $E_3^{(1)}$ and $E_3^{(-1)}$ can be treated easily, see Proposition~\ref{DdboundaryPartialResults}~(i) and~(ii). The more delicate instance is constituted by the elements of $E_3^{(\mathbb{C})}$.
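The parameterization \eqref{eq:EC} reflects the factorization $\chi_\mathbf{r}(X)=(X^2+sX+1)(X+t)$ for $\mathbf{r}=(t,st+1,s+t)$, whose quadratic factor has its two roots on the unit circle exactly when $-2<s<2$. This identity can be checked mechanically, e.g.\ as follows (a small Python sketch; the helper names \texttt{polymul} and \texttt{chi} are ours):

```python
from fractions import Fraction
from itertools import product

def polymul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def chi(r):
    # chi_r(X) = X^d + r_{d-1} X^{d-1} + ... + r_0, lowest degree first
    return list(r) + [1]

# (X^2 + s X + 1)(X + t) = X^3 + (s+t) X^2 + (s t + 1) X + t,
# i.e. chi_r for r = (t, s t + 1, s + t), checked on a rational grid
for s, t in product([Fraction(k, 4) for k in range(-7, 8)], repeat=2):
    assert polymul([1, s, 1], [t, 1]) == chi((t, s * t + 1, s + t))
```

The grid keeps $|s|<2$ and $|t|\le 7/4$, although the polynomial identity itself holds for all $s,t$.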
Here the decomposition of the parameter region depicted in Figure \ref{Aufteilung} is helpful. \begin{figure} \centering \leavevmode \includegraphics[width=0.6\textwidth]{AufteilungBild.pdf} \caption{The subdivision of the parameter region used in the proof of Theorem~\ref{thm:D3}} \label{Aufteilung} \end{figure} Whereas for several subregions it is again possible to explicitly construct non-trivial cycles as in the instances mentioned above, this seems no longer the case {\it e.g.} for the regions labelled $1$, $2$, $3$ or $5$ in Figure~\ref{Aufteilung}. Here the following idea can be applied. For the parameters in the regions in question it can be proved that for large $n$ the set $\tau_{\mathbf{r}}^{-n}(\mathbf{0})$, {\it i.e.}, the $n$-fold preimage of $\mathbf{0}$ under $\tau_{\mathbf{r}}$, has finite intersection with a region that is bounded by two hyperplanes. Thereby it can be concluded that some elements of this region belong to periodic orbits of $\tau_{\mathbf{r}}$ that do not end up at $\mathbf{0}$, without constructing these orbits explicitly. \medskip Peth\H{o}~\cite{Pethoe:09} has studied the instance of the latter problem where some eigenvalues of $R(\mathbf{r})$ are roots of unity. In the following proposition we give a summary of the partial results known for arbitrary dimensions. \begin{proposition}\label{DdboundaryPartialResults} Assume that $\mathbf{r}=(r_0,\ldots, r_{d-1})\in \partial\mathcal{D}_d$. Then $\mathbf{r}\not\in\mathcal{D}_d^{(0)}$ holds if one of the following conditions is true. \begin{itemize} \item[(i)] $\mathbf{r} \in E_d^{(1)}$. \item[(ii)] $\mathbf{r} \in E_d^{(-1)}$. \item[(iii)] $r_0<0$. \item[(iv)] Each root of $\chi_\mathbf{r}$ has modulus $1$. \item[(v)] There is a Salem number $\beta$ such that $(X-\beta)\chi_\mathbf{r}(X)$, with $\chi_\mathbf{r}$ as in \eqref{chi}, is the minimal polynomial of $\beta$ over $\mathbb{Z}$.
\item[(vi)] $\mathbf{r}=(\frac{\pm 1}{p_0},\frac{p_{d-1}}{p_0},\ldots,\frac{p_1}{p_0})$ with $p_0,\ldots, p_{d-1}\in \mathbb{Z}$. \end{itemize} \end{proposition} \begin{remark} Item (iii) is a special case of \cite[Theorem~2.1]{Akiyama-Brunotte-Pethoe-Thuswaldner:07}. Item (v) is a restatement of the fact that beta expansions w.r.t.\ Salem numbers never satisfy property (F), see {\it e.g.}~\cite[Section~2]{Boyd:89} or \cite[Lemma~1(b)]{Frougny-Solomyak:92}. Item (vi) is equivalent to the fact that CNS polynomials satisfying the finiteness property need to be expanding ({\it cf.}~\cite[Theorem~6.1]{Pethoe:91}; see also~\cite{Kovacs-Pethoe:91}). \end{remark} \begin{proof} In Item (i) we have that $r_0+\cdots+r_{d-1}=-1$. Thus, choosing $\mathbf{z}=(n,\ldots,n)^t\in \mathbb{Z}^d$ we get that \[ \tau_{\mathbf{r}}(\mathbf{z}) = (n,\ldots, n, -\lfloor \mathbf{rz} \rfloor)^t = (n,\ldots,n, -\lfloor n(r_0+\cdots+r_{d-1}) \rfloor)^t = (n,\ldots,n)^t, \] which exhibits a non-trivial cycle for each $n\not=0$. Similarly, in Item (ii), we use the fact that $r_0-r_1+r_2-\cdots+(-1)^{d-1}r_{d-1}=(-1)^{d-1}$ in order to derive $\tau_\mathbf{r}^2(\mathbf{z})=\mathbf{z}$ for each $\mathbf{z}=(n,-n,\ldots,(-1)^{d-1}n)^t\in \mathbb{Z}^d$. To prove Item (iii) one shows that for $r_0<0$ there is a half-space whose elements cannot have orbits ending up in~$\mathbf{0}$. To show that the result holds when (iv) is in force, observe that this condition implies that $r_0 \in\{-1,1\}$. This immediately yields $\tau_{\mathbf r}^{-1}(\mathbf{0})=\{\mathbf{0}\}$ and, hence, no orbit apart from the trivial one can end up in~$\mathbf{0}$. In the proof of (v) one uses that a Salem number $\beta$ has the positive real conjugate $\beta^{-1}<1$. As in (iii) this fact allows us to conclude that there is a half-space whose elements cannot have orbits ending up in~$\mathbf{0}$. Item (vi) is proved using the fact that under this condition the polynomial $\chi_\mathbf{r}$ has a root of unity among its roots.
\end{proof} \section{The geometry of shift radix systems} \subsection{SRS tiles} Let us assume that the matrix $R(\mathbf{r})$ is contractive and $\mathbf{r}$ is {\it reduced} in the sense that $\mathbf{r}$ has no leading zeros. Then, as was observed by Berth\'e {\it et al.}\ \cite{BSSST2011}, the mapping $\tau_\mathbf{r}$ can be used to define so-called SRS tiles in analogy with the definition of tiles for other dynamical systems related to numeration (\emph{cf.\ e.g.}\ \cite{Akiyama:02,Barat-Berthe-Liardet-Thuswaldner:06,Berthe-Siegel:05,Ito-Rao:06,Katai-Koernyei:92,Rauzy:82,Scheicher-Thuswaldner:01,Thurston:89}). As it will turn out, some of these tiles are related to CNS tiles and beta-tiles in a way corresponding to the conjugacies established already in Section 2. Formally we have the following objects. \begin{definition}[SRS tile] Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$ and $\mathbf{x} \in \mathbb{Z}^d$ be given. The set \[ \mathcal{T}_\mathbf{r}(\mathbf{x})= \mathop{\rm Lim}_{n\to\infty} R(\mathbf{r})^n \tau_\mathbf{r}^{-n}(\mathbf{x}) \] (where the limit is taken with respect to the Hausdorff metric) is called the {\em SRS tile associated with $\mathbf{r}$ located at $\mathbf{x}$}. $\mathcal{T}_\mathbf{r}(\mathbf{0})$ is called the {\em central SRS tile associated with $\mathbf{r}$}. \end{definition} In other words, in order to build $\mathcal{T}_\mathbf{r}(\mathbf{x})$, one considers the vectors whose SRS expansion coincides with the expansion of $\mathbf{x}$ up to an added finite prefix; afterwards the expansion is renormalized. We mention that the existence of this limit is not trivially true but can be ensured by using the contractivity of the operator $R(\mathbf{r})$. \begin{example} Let $\mathbf{r}=(\frac{4}{5},-\frac{49}{50})$. Approximations of the tile $\mathcal{T}_\mathbf{r}(\mathbf{0})$ are illustrated in Figure~\ref{fig:SRSspirale}.
\begin{figure} \includegraphics[width=0.3\textwidth]{SRSEx1.jpg} \includegraphics[width=0.3\textwidth]{SRSEx2.jpg} \includegraphics[width=0.3\textwidth]{SRSEx3.jpg} \\[.3cm] \includegraphics[width=0.3\textwidth]{SRSEx4.jpg} \includegraphics[width=0.3\textwidth]{SRSEx5.jpg} \includegraphics[width=0.3\textwidth]{SRSEx6.jpg} \caption{Approximations of $\mathcal{T}_{\mathbf{r}}(\mathbf{0})$ for the parameter $\mathbf{r}=(\frac{4}{5},-\frac{49}{50})$: the images show $R(\mathbf{r})^k\tau_{\mathbf{r}}^{-k}(\mathbf{0})$ for $k=1,6,16,26,36,46$. \label{fig:SRSspirale}} \end{figure} \end{example} The following Proposition summarizes some of the basic properties of SRS tiles. \begin{proposition}[{\em cf.} {\cite[Section~3]{BSSST2011}}]\label{prop:basicSRS} For each $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$ we have the following results. \begin{itemize} \item $\mathcal{T}_\mathbf{r}(\mathbf{x})$ is compact for all $\mathbf{x} \in \mathbb{Z}^d$. \item The family $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is locally finite. \item $\mathcal{T}_\mathbf{r}(\mathbf{x})$ satisfies the set equation \[ \mathcal{T}_\mathbf{r}(\mathbf{x}) = \bigcup_{\mathbf{y}\in\tau_\mathbf{r}^{-1}(\mathbf{x})} R(\mathbf{r}) \mathcal{T}_\mathbf{r}(\mathbf{y}). \] \item The collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ covers $\mathbb{R}^d$, {\em i.e.}, \begin{equation}\label{eq:cov} \bigcup_{\mathbf{x} \in \mathbb{Z}^d} \mathcal{T}_\mathbf{r}(\mathbf{x}) = \mathbb{R}^d. \end{equation} \end{itemize} \end{proposition} \begin{proof}[Sketch of the proof] With respect to compactness observe first that Hausdorff limits are closed by definition. 
Using inequality \eqref{norminequ} it follows that every $\mathcal{T}_\mathbf{r}(\mathbf{x})$, $\mathbf{x}\in\mathbb{Z}^d$, is contained in the closed ball of radius $R$ with center~$\mathbf{x}$ where \begin{equation} \label{eq:R} R := \sum_{n=0}^\infty \big\|R(\mathbf{r})^n(0,\ldots,0,1)^t\big\| \le \frac{\|(0,\ldots,0,1)^t\|}{1-\tilde\varrho}\, \end{equation} which establishes boundedness. \medskip Since $\mathcal{T}_\mathbf{r}(\mathbf{x})$ is uniformly bounded in $\mathbf{x}$ and the set of ``base points'' $\mathbb{Z}^d$ is a lattice, we also get that the family of SRS tiles $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \,:\, \mathbf{x}\in\mathbb{Z}^d\}$ is \emph{locally finite}, that is, any open ball intersects only a finite number of tiles of the family. \medskip In order to establish the set equation observe that \begin{align*} \mathcal{T}_\mathbf{r}(\mathbf{x}) &= \mathop{\rm Lim}_{n\rightarrow\infty} R(\mathbf{r})^n \tau_\mathbf{r}^{-n}(\mathbf{x}) = R(\mathbf{r})\mathop{\rm Lim}_{n\rightarrow\infty} \bigcup_{\mathbf{y}\in\tau_\mathbf{r}^{-1}(\mathbf{x})} R(\mathbf{r})^{n-1} \tau_\mathbf{r}^{-n+1}(\mathbf{y}) \\ &= R(\mathbf{r}) \bigcup_{\mathbf{y}\in\tau_\mathbf{r}^{-1}(\mathbf{x})} \mathop{\rm Lim}_{n\rightarrow\infty}R(\mathbf{r})^{n-1} \tau_\mathbf{r}^{-n+1}(\mathbf{y})=R({\mathbf{r}})\bigcup_{\mathbf{y}\in\tau_\mathbf{r}^{-1}(\mathbf{x})} \mathcal{T}_\mathbf{r}(\mathbf{y}). \hfill \end{align*} \medskip It remains to prove the covering property. The lattice $\mathbb{Z}^d$ is obviously contained in the union in~\eqref{eq:cov}. Thus, by the set equation, the same is true for $R(\mathbf{r})^k\mathbb{Z}^d$ for each $k\in\mathbb{N}$. By the contractivity of $R(\mathbf{r})$, compactness of $\mathcal{T}_\mathbf{r}(\mathbf{x})$, and local finiteness of $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \,:\, \mathbf{x}\in\mathbb{Z}^d\}$ the result follows. \end{proof} In the following we analyze the specific role of the central tile ({\em cf.}~\cite {BSSST2011}). 
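The finite approximations $R(\mathbf{r})^n\tau_\mathbf{r}^{-n}(\mathbf{x})$ appearing in the set equation can be computed from the observation that every preimage of $\mathbf{x}$ under $\tau_\mathbf{r}$ has the form $(y_0, x_0,\ldots,x_{d-2})^t$ with $-x_{d-1}\le \mathbf{r}\mathbf{y} < -x_{d-1}+1$. A small Python sketch (the function names are ours; we assume $r_0\not=0$, as in the definition of an SRS tile):

```python
from fractions import Fraction
import math

def tau(r, z):
    # SRS map tau_r(z) = (z_1, ..., z_{d-1}, -floor(r.z))
    return z[1:] + (-math.floor(sum(ri * zi for ri, zi in zip(r, z))),)

def tau_preimages(r, x):
    """All y in Z^d with tau_r(y) = x: such y = (y0, x_0, ..., x_{d-2})
    with -x_{d-1} <= r.y < -x_{d-1} + 1; we find the integer y0 by testing
    a small candidate range with exact rational arithmetic (r[0] != 0)."""
    c = sum(ri * xi for ri, xi in zip(r[1:], x[:-1]))
    lo = -x[-1] - c                       # need lo <= r_0 * y0 < lo + 1
    a, b = float(lo / r[0]), float((lo + 1) / r[0])
    candidates = range(math.floor(min(a, b)) - 1, math.ceil(max(a, b)) + 2)
    return [(y0,) + x[:-1] for y0 in candidates if lo <= r[0] * y0 < lo + 1]

def preimage_iter(r, x, n):
    # the set tau_r^{-n}(x); renormalizing it by R(r)^n gives the n-th
    # approximation of the SRS tile T_r(x)
    S = {x}
    for _ in range(n):
        S = {y for z in S for y in tau_preimages(r, z)}
    return S
```

For the parameter $\mathbf{r}=(\frac{4}{5},-\frac{49}{50})$ of the example above, \texttt{tau\_preimages} applied to $\mathbf{0}$ returns $\{(0,0)^t,(1,0)^t\}$, so the origin stays in every approximation of the central tile, in accordance with the set equation.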
Since $\mathbf{0}\in\tau_{\mathbf{r}}^{-1}(\mathbf{0})$ holds for each $\mathbf{r}\in\E_d$, the origin is always an element of the central tile. However, the question whether or not $\mathbf{0}$ is contained exclusively in the central tile plays an important role in numeration. In the case of beta-numeration, $\mathbf{0}$ is contained exclusively in the central beta-tile (see Definition~\ref{def:betatile} below) if and only if property (F) (compare \eqref{PropertyF}) is satisfied (\cite{Akiyama:02,Frougny-Solomyak:92}), and a similar criterion holds for CNS ({\it cf.}~\cite{Akiyama-Thuswaldner:00}). It turns out that there is a corresponding characterization for SRS with finiteness property (see \cite{BSSST2011}). \begin{definition}[Purely periodic point] Let $\mathbf{r} \in \mathbb{R}^d$. An element $\mathbf{z} \in \mathbb{Z}^d$ is called a \emph{purely periodic} point if $\tau_\mathbf{r}^p(\mathbf{z})=\mathbf{z}$ for some $p\ge 1$. \end{definition} Then we have the announced characterization. \begin{theorem}[see~{\cite[Theorem~3.10]{BSSST2011}}]\label{t63} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$ and $\mathbf{x} \in \mathbb{Z}^d$. Then $\mathbf{0} \in \mathcal{T}_\mathbf{r}(\mathbf{x})$ if and only if $\mathbf{x}$ is purely periodic. There are only finitely many purely periodic points. \end{theorem} \begin{proof}[Sketch of the proof.] In order to establish that pure periodicity of $\mathbf{x}$ implies that $\mathbf{0}\in \mathcal{T}_\mathbf{r}(\mathbf{x})$ observe first that by assumption we have $\mathbf{x} = \tau_\mathbf{r}^{kp}(\mathbf{x})$ for each $k\in\mathbb{N}$. By the contractivity of the operator $R(\mathbf{r})$ it follows that $\mathbf{0}=\lim_{k\to \infty} R(\mathbf{r})^{kp}\mathbf{x} \in \mathcal{T}_\mathbf{r}(\mathbf{x})$.
\medskip With respect to the other direction observe that by the set equation there is a sequence $(\mathbf{z}_n)_{n\ge 1}$ with $\mathbf{z}_n\in \tau_\mathbf{r}^{-n}(\mathbf{x})$ and $\mathbf{0} \in R(\mathbf{r})^{n} \mathcal{T}_\mathbf{r}(\mathbf{z}_n)$. Thus $\mathbf{0} \in \mathcal{T}_\mathbf{r}(\mathbf{z}_n)$. Therefore by the local finiteness of $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ there are $n,k\in\mathbb{N}$ such that $\mathbf{z}_n = \mathbf{z}_{n+k},$ hence $\mathbf{x} = \tau_\mathbf{r}^{k}(\mathbf{x})$. \medskip From the proof of the last proposition it follows that only points $\mathbf{x}\in\mathbb{Z}^d$ with $\|\mathbf{x}\|\le R$, where $R$ is as in \eqref{eq:R}, can be purely periodic. Note that the latter property was already proved in~\cite{Akiyama-Borbeli-Brunotte-Pethoe-Thuswaldner:05}. \end{proof} There is an immediate consequence of the last theorem for SRS with finiteness property (see~\cite{BSSST2011}). \begin{corollary}\label{cor:exc} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$ be given. Then $\mathbf{r} \in \mathcal{D}_d^{(0)}$ if and only if $\mathbf{0} \in \mathcal{T}_\mathbf{r}(\mathbf{0}) \setminus \bigcup_{\mathbf{y}\neq \mathbf{0}} \mathcal{T}_\mathbf{r}(\mathbf{y})$. \end{corollary} In particular, for $\mathbf{r} \in \mathcal{D}_d^{(0)}$ the central tile $\mathcal{T}_\mathbf{r}(\mathbf{0})$ has non-empty interior. Nevertheless the following example demonstrates that the interior of $\mathcal{T}_\mathbf{r}(\mathbf{x})$ may be empty for certain choices of $\mathbf{r}$ and $\mathbf{x}$. \begin{example}[see \cite{BSSST2011}] \label{ex:pp} Let $\mathbf{r}=(\frac{9}{10},-\frac{11}{20})$.
Then, with the points \[ \mathbf{z}_1=(-1,-1)^t,\ \mathbf{z}_2=(-1,1)^t,\ \mathbf{z}_3=(1,2)^t,\ \mathbf{z}_4=(2,1)^t,\ \mathbf{z}_5=(1,-1)^t, \] we have the cycle \[ \tau_\mathbf{r}:\mathbf{z}_1 \mapsto \mathbf{z}_2 \mapsto \mathbf{z}_3 \mapsto \mathbf{z}_4 \mapsto \mathbf{z}_5 \mapsto \mathbf{z}_1. \] Thus, each of these points is purely periodic. By direct calculation we see that \[ \tau_\mathbf{r}^{-1}(\mathbf{z}_1) = \big\{(1,-1)^t\big\}=\{\mathbf{z}_5\}, \] and similarly $\tau_\mathbf{r}^{-1}(\mathbf{z}_i)=\{\mathbf{z}_{i-1}\}$ for $i \in \{2,3,4,5\}$. Hence every tile $\mathcal{T}_\mathbf{r}(\mathbf{z}_i)$, $i\in \{1,2,3,4,5\}$, consists of the single point $\mathbf{0}$. \end{example} \subsection{Tiling properties of SRS tiles}\label{sec:srstiling} We saw in Proposition~\ref{prop:basicSRS} that the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is a covering of $\mathbb{R}^d$. In \cite{BSSST2011} and \cite{ST:11} tiling properties of SRS tiles are proved. In the present section we review these results. As their proofs are involved we refrain from reproducing them here and confine ourselves to mentioning some main ideas. We start with a basic definition ({\em cf.}~\cite[Definition~4.1]{BSSST2011}). \begin{definition}[Weak $m$-tiling]\label{def:tilings} Let $\mathcal{K}$ be a locally finite collection of subsets of $\mathbb{R}^d$ that cover $\mathbb{R}^d$. The {\em covering degree} of $\mathcal{K}$ is given by the number \[ \min \{ \#\{ K \in \mathcal{K} \;:\; \mathbf{t}\in K\} \;:\; \mathbf{t} \in \mathbb{R}^d \}. \] The collection $\mathcal{K}$ is called a {\it weak $m$-tiling} if its covering degree is $m$, and $\bigcap_{j=1}^{m+1} {\rm int}(K_j)= \emptyset$ for each choice of pairwise distinct elements $K_1,\ldots, K_{m+1}$ of $\mathcal{K}$. A weak $1$-tiling is called a {\em weak tiling}. \end{definition} There are several reasons why we emphasize {\it weak} tilings.
For a collection $\mathcal{K}$ of subsets of $\mathbb{R}^d$ to be a tiling one commonly assumes that \begin{itemize} \item each $K \in \mathcal{K}$ is the closure of its interior, \item $\mathcal{K}$ contains only finitely many different elements up to translation, and \item the $d$-dimensional Lebesgue measure of $\partial K$ is zero for each $K\in \mathcal{K}$. \end{itemize} In our setting, namely for the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$, we already saw in Example~\ref{ex:pp} that there exist elements with empty interior. Moreover, for some parameters we get infinitely many different shapes of the tiles $\mathcal{T}_\mathbf{r}(\mathbf{x})$, so that we do not have finiteness up to translation (this is the case for instance for the SRS parameter $r=-\frac23$ associated with the $\frac32$-number system, see Example~\ref{ex32_2}). Finally, in general, no proof seems to be known for the fact that the boundary of $\mathcal{T}_\mathbf{r}(\mathbf{x})$ has measure zero (although we conjecture this to be true). We start with a tiling result that is contained in \cite[Theorem~4.6]{BSSST2011}. \begin{theorem}\label{thm:multitiling} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \E_d$ with $r_0\not=0$ be given and assume that $\mathbf{r}$ satisfies one of the following conditions. \begin{itemize} \item $\mathbf{r} \in \mathbb{Q}^d$, or \item $(X-\beta)(X^d + r_{d-1}X^{d-1} + \cdots + r_0) \in\mathbb{Z}[X]$ for some $\beta > 1$, or \item $r_0,\ldots, r_{d-1}$ are algebraically independent over $\mathbb{Q}$. \end{itemize} Then the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is a weak $m$-tiling for some $m\in \mathbb{N}$. \end{theorem} If $\mathcal{K}$ is a covering of degree $m$, then an {\it $m$-exclusive point} is a point that has a neighborhood $U$ such that each $\mathbf{x}\in U$ is covered by exactly $m$ elements of $\mathcal{K}$.
The proof of Theorem~\ref{thm:multitiling} is technical and deals with the construction of a dense set of $m$-exclusive points. To prove that a given parameter satisfying the conditions of Theorem~\ref{thm:multitiling} actually induces a weak tiling it is obviously sufficient to exhibit a single $1$-exclusive point. For a given example this can often be done algorithmically. If $\mathbf{r}\in\D_d^{(0)}$, Corollary~\ref{cor:exc} and Theorem~\ref{thm:multitiling} can be combined to obtain the following tiling result. \begin{corollary}[{see \cite[Corollary~4.7]{BSSST2011}}]\label{cor:tiling} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \D_d^{(0)} \cap \E_d$ with $r_0\not=0$. If $\mathbf{r}$ satisfies one of the three items listed in the statement of Theorem~\ref{thm:multitiling}, then the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is a weak tiling. \end{corollary} In \cite{ST:11}, a tiling result without further restrictions was established for rational vectors $\mathbf{r}$. \begin{theorem}\label{p:tiling} Let $\mathbf{r}=(r_0,\ldots, r_{d-1})\in \E_d$ have rational coordinates and assume that $r_0\not=0$. Then $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is a weak tiling of $\mathbb{R}^d$. \end{theorem} The proof of this theorem is quite elaborate. Extending a theorem of Lagarias and Wang~\cite{Lagarias-Wang:97}, in \cite{ST:11} a tiling theorem for so-called {\it rational self-affine} tiles is proved. These tiles are defined as subsets of $\mathbb{R}^d \times \prod_{\mathfrak{p}}K_\mathfrak{p}$, where $K_\mathfrak{p}$ are completions of a number field $K$ that is defined in terms of the roots of the characteristic polynomial of $R(\mathbf{r})$. As the intersections of these tiles with the ``Euclidean part'' $\mathbb{R}^d \times \prod_{\mathfrak{p}}\{0\}$ of the representation space turn out to be SRS tiles corresponding to rational parameters, this tiling theorem can be used to prove Theorem~\ref{p:tiling}.
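Experiments with the map $\tau_\mathbf{r}$ for a concrete rational parameter are easy to carry out in exact arithmetic. The following minimal sketch (Python; written for this survey, not code from the cited papers) reproduces the cycle of Example~\ref{ex:pp}:

```python
import math
from fractions import Fraction

def tau(r, z):
    """SRS map: tau_r(z) = (z_1, ..., z_{d-1}, -floor(r . z))."""
    s = sum(ri * zi for ri, zi in zip(r, z))
    return z[1:] + (-math.floor(s),)

# parameter of Example ex:pp, kept exact via Fraction
r = (Fraction(9, 10), Fraction(-11, 20))

orbit = [(-1, -1)]
for _ in range(5):
    orbit.append(tau(r, orbit[-1]))

print(orbit)  # the orbit returns to (-1, -1) after five steps
```

Since $\mathbf{r}$ is rational, all scalar products are exact, so the computed cycle $(-1,-1)\mapsto(-1,1)\mapsto(1,2)\mapsto(2,1)\mapsto(1,-1)\mapsto(-1,-1)$ is rigorous.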
In the one-dimensional case the situation becomes much easier and we get the following result (here we identify the vector $(r)$ with the scalar $r$; see \cite[Theorem 4.9 and its proof]{BSSST2011}). \begin{theorem}\label{1dimtiling} Let $r \in \E_1\setminus \{0\}$. Then $\{\mathcal{T}_{r}({x}) \;:\; {x} \in \mathbb{Z}\}$ is a tiling whose elements are intervals. In particular, \[ \bigcup_{x \in \mathbb{Z}} \mathcal{T}_r(x) = \mathbb{R} \quad\hbox{with}\quad \# (\mathcal{T}_r(x) \cap \mathcal{T}_r(x')) \in \{0,1\} \hbox{ for distinct }x,x'\in\mathbb{Z}. \] \end{theorem} \begin{proof} We confine ourselves to $r>0$ (the case $r<0$ can be treated similarly). Choose $x_0,y_0\in \mathbb{Z}$ with $x_0 < y_0$. By the definition of $\tau_r$ we get that $-x_1<-y_1$ for all $x_1 \in \tau_r^{-1}(x_0)$ and all $y_1 \in \tau_r^{-1}(y_0)$. Iterating this $k$ times and multiplying by $R(r)^k=(-r)^k$ we obtain that \[ x \in R(r)^k\tau_r^{-k}(x_0), \quad y \in R(r)^k\tau_r^{-k}(y_0) \quad\hbox{implies that}\quad x < y. \] Taking the Hausdorff limit for $k \to \infty$, the result follows by the definition of SRS tiles and by taking into account the fact that $\{\mathcal{T}_{r}({x}) \;:\; {x} \in \mathbb{Z}\}$ is a covering of $\mathbb{R}$ by Proposition~\ref{prop:basicSRS}. \end{proof} There are natural questions related to the results of this subsection. Although it seems to be unknown whether the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ forms a weak $m$-tiling for some $m$ for each $\mathbf{r}\in\E_d$, we conjecture the following stronger result (which also contains the {\it Pisot conjecture} for beta-tiles, see {\it e.g.}~\cite[Section~7]{BBK:06}). \begin{conjecture} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$. Then $\{\mathcal{T}_\mathbf{r}(\mathbf{x}) \;:\; \mathbf{x} \in \mathbb{Z}^d\}$ is a weak tiling of $\mathbb{R}^d$.
\end{conjecture} Moreover, we state the following conjecture on the boundary of SRS tiles. \begin{conjecture} Let $\mathbf{r}=(r_0,\ldots,r_{d-1}) \in \mathcal{E}_d$ with $r_0\not=0$. Then the $d$-dimensional Lebesgue measure of $\partial \mathcal{T}_\mathbf{r}(\mathbf{x})$ is zero for each $\mathbf{x} \in \mathbb{Z}^d$. \end{conjecture} Finally, we state a problem related to the connectivity of central SRS tiles (see also~\cite[Section~7]{BSSST2011}). For $d\in\mathbb{N}$ define the Mandelbrot set \[ \mathcal{M}_d = \{ \mathbf{r} \in \E_d \;:\; \mathcal{T}_\mathbf{r}(\mathbf{0}) \hbox{ is connected}\}. \] It is an easy consequence of Theorem~\ref{1dimtiling} that $\mathcal{M}_1=(-1,1)$. However, we do not know anything about $\mathcal{M}_d$ in higher dimensions. \begin{problem} Describe the Mandelbrot sets $\mathcal{M}_d$ for $d\ge 2$. \end{problem} \subsection{SRS tiles and their relations to beta-tiles and self-affine tiles} Let $\beta$ be a Pisot number and write the minimal polynomial of $\beta$ as \[ (X-\beta) (X^d + r_{d-1} X^{d-1} + \cdots + r_0) \in \mathbb{Z}[X]. \] Let $\mathbf{r} = (r_0,\ldots,r_{d-1})$. Then, for every $\mathbf{x} \in \mathbb{Z}^d$, the {\it SRS tile associated with $\beta$} is the set $$\mathcal{T}_\mathbf{r}(\mathbf{x}) = \lim_{n\to\infty} R(\mathbf{r})^n \tau_\mathbf{r}^{-n}(\mathbf{x}),$$ with $R(\mathbf{r})$ as in \eqref{mata}. The conjugacy between $T_\beta$ and $\tau_\mathbf{r}$ proved in Proposition~\ref{prop:betanumformula} suggests that there is some relation between the SRS tiles $\mathcal{T}_\mathbf{r}(\mathbf{x})$, $\mathbf{x}\in \mathbb{Z}^d$, and the tiles associated with beta-numeration (which have been studied extensively in the literature, see {\it e.g.} \cite{Akiyama:02,Rauzy:82}). We recall the definition of these ``beta-tiles''.
Let $\beta_1, \ldots, \beta_d$ be the Galois conjugates of~$\beta$, numbered in such a way that $\beta_1, \ldots, \beta_r \in \mathbb{R}$, $\beta_{r+1} = \overline{\beta_{r+s+1}}, \ldots, \beta_{r+s} = \overline{\beta_{r+2s}} \in \mathbb{C}$, $d = r + 2s$. Let further $x^{(j)}$ be the corresponding conjugate of $x \in \mathbb{Q}(\beta)$, $1 \le j \le d$, and let $\Xi_\beta:\ \mathbb{Q}(\beta) \to \mathbb{R}^d$ be the map $$x \mapsto \big(x^{(1)}, \ldots, x^{(r)}, \Re\big(x^{(r+1)}\big), \Im\big(x^{(r+1)}\big), \ldots, \Re\big(x^{(r+s)}\big), \Im\big(x^{(r+s)}\big)\big).$$ Then we have the following definition. \begin{definition}[Beta-tile, see~\cite{Akiyama:02, BSSST2011,Thurston:89}]\label{def:betatile} For $x \in \mathbb{Z}[\beta] \cap [0,1)$, the {\it beta-tile} is the (compact) set \[ \mathcal{R}_\beta(x) = \lim_{n\to\infty} \Xi_\beta\big(\beta^n T_\beta^{-n}(x) \big). \] The {\it integral beta-tile} is the (compact) set \[ \mathcal{S}_\beta(x) = \lim_{n\to\infty} \Xi_\beta\big(\beta^n \big( T_\beta^{-n}(x) \cap \mathbb{Z}[\beta]\big)\big). \] \end{definition} With these definitions it holds that $\mathbf{t} \in \mathcal{R}_\beta(x)$ if and only if there exist $c_k \in \mathbb{Z}$ with \[ \mathbf{t} = \Xi_\beta(x) + \sum_{k=1}^\infty \Xi_\beta(\beta^{k-1} c_k),\quad \frac{c_n}{\beta} + \cdots + \frac{c_1}{\beta^n} + \frac{x}{\beta^n} \in [0,1) \ \forall n \ge 1,\] and $\mathbf{t} \in \mathcal{S}_\beta(x)$ if and only if there exist $c_k \in \mathbb{Z}$ with \[ \mathbf{t} = \Xi_\beta(x) + \sum_{k=1}^\infty \Xi_\beta(\beta^{k-1} c_k),\quad \frac{c_n}{\beta} + \cdots + \frac{c_1}{\beta^n} + \frac{x}{\beta^n} \in [0,1) \cap \mathbb{Z}[\beta]\ \forall n \ge 1.\] Observe that the ``digits'' $c_k$ fulfill the greedy condition, compare \eqref{greedycondition}. The following result shows how SRS tiles are related to integral beta-tiles by a linear transformation.
\begin{theorem}[{compare \cite[Theorem~6.7]{BSSST2011}}]\label{thm:betatilecorrespondence} Let $\beta$ be a Pisot number with minimal polynomial $(X-\beta)(X^d+r_{d-1}X^{d-1}+\cdots+r_0)$ and $d=r+2s$ Galois conjugates $\beta_1,\ldots,\beta_r\in\mathbb{R}$, $\beta_{r+1},\ldots,\beta_{r+2s}\in\mathbb{C}\setminus\mathbb{R}$, ordered such that $\beta_{r+1}=\overline{\beta_{r+s+1}},\,\ldots,\,\beta_{r+s}=\overline{\beta_{r+2s}}$. Let \[ X^d+r_{d-1}X^{d-1}+\cdots+r_0 = (X-\beta_j)(X^{d-1}+q_{d-2}^{(j)}X^{d-2}+\cdots+q_0^{(j)}) \] for $1\le j\le d$ and \[ U=\left(\begin{array}{ccccc} q^{(1)}_{0} & q^{(1)}_{1} & \cdots & q^{(1)}_{d-2} & 1 \\ \vdots & \vdots & &\vdots & \vdots \\ q^{(r)}_{0} & q^{(r)}_{1} & \cdots & q^{(r)}_{d-2} & 1 \\ \Re(q^{(r+1)}_{0}) & \Re(q^{(r+1)}_{1}) & \cdots & \Re(q^{(r+1)}_{d-2}) & 1 \\ \Im(q^{(r+1)}_{0}) & \Im(q^{(r+1)}_{1}) & \cdots & \Im(q^{(r+1)}_{d-2}) & 0 \\ \vdots & \vdots & &\vdots & \vdots \\ \Re(q^{(r+s)}_{0}) & \Re(q^{(r+s)}_{1}) & \cdots & \Re(q^{(r+s)}_{d-2}) & 1 \\ \Im(q^{(r+s)}_{0}) & \Im(q^{(r+s)}_{1}) & \cdots & \Im(q^{(r+s)}_{d-2}) & 0 \\ \end{array}\right) \in \mathbb{R}^{d \times d}. \] Then we have \[ \mathcal{S}_\beta(\{\mathbf{r}\mathbf{x}\}) = U (R(\mathbf{r})-\beta I_d) \mathcal{T}_\mathbf{r}(\mathbf{x}) \] for every $\mathbf{x} \in \mathbb{Z}^d$, where $\mathbf{r}=(r_0,\ldots,r_{d-1})$ and $I_d$ is the $d$-dimensional identity matrix. \end{theorem} We omit the technical proof, which makes use of the conjugacy in Proposition~\ref{prop:betanumformula}, and refer the reader to \cite{BSSST2011}.
One reason for the technical difficulties comes from the fact that although the integral beta-tile associated with $\mathcal{T}_\mathbf{r}(\mathbf{x})$ is given by $\mathcal{S}_\beta(\{\mathbf{r} \mathbf{x}\}) = U (R(\mathbf{r}) - \beta I_d) \mathcal{T}_\mathbf{r}(\mathbf{x})$, its ``center'' is $\Xi_\beta(\{\mathbf{r}\mathbf{x}\}) = U (\tau_\mathbf{r}(\mathbf{x}) - \beta\mathbf{x}) = U (R(\mathbf{r}) - \beta I_d) \mathbf{x} + U (0,\ldots,0,\{\mathbf{r}\mathbf{x}\})^t$. \begin{example} Let $\beta_1$ be the Pisot unit given by the dominant root of $X^3-X^2-X-1$. According to Proposition~\ref{prop:betanumformula} the associated SRS parameter is $\mathbf{r} = (1/\beta_1,\beta_1-1)$. Using the algorithm based on Theorem~\ref{thm:Brunotte} one can easily show that $\mathbf{r} \in \D_2^{(0)}$. Thus Corollary~\ref{cor:tiling} implies that the collection $\{\mathcal{T}_\mathbf{r}(\mathbf{x})\;:\; \mathbf{x}\in \mathbb{Z}^2\}$ induces a tiling of $\mathbb{R}^2$. On the left hand side of Figure~\ref{fig:betatiling} a patch of this tiling is depicted. In view of Theorem~\ref{thm:betatilecorrespondence} the beta-tiles associated with $\beta_1$ also form a tiling which can be obtained from the SRS tiling just by an affine transformation. \begin{figure} \includegraphics[width=.45\textwidth]{111} \includegraphics[width=.45\textwidth]{222} \caption{Two patches of tilings induced by SRS tiles: the left figure shows the tiling associated with the parameter $\mathbf{r} = (1/\beta_1,\beta_1-1)$ where $\beta_1$ is the Pisot unit given by $\beta_1^3 = \beta_1^2 + \beta_1 + 1$. It is an affine image of the tiling induced by the (integral) beta-tiles associated with the Pisot unit $\beta_1$. The central tile is the classical {\it Rauzy fractal}. The right patch corresponds to the parameter $\mathbf{r} = (2/\beta_2,\beta_2-2)$ where $\beta_2$ is the (non-unit) Pisot number given by $\beta_2^3 = 2\beta_2^2 + 2\beta_2 + 2$.
It can also be regarded as (an affine transformation of) the tiling induced by the integral beta-tiles associated with $\beta_2$. \label{fig:betatiling}} \end{figure} Similarly, let $\beta_2$ be the (non-unit) Pisot root of $X^3-2X^2-2X-2$. Proposition~\ref{prop:betanumformula} yields the associated SRS parameter $\mathbf{r} = (2/\beta_2,\beta_2-2)$. Again, one can check that $\mathbf{r} \in \D_2^{(0)}$, and the resulting tiling is depicted on the right hand side of Figure~\ref{fig:betatiling}. As $\beta_2$ is not a unit, the structure of this tiling turns out to be more involved. \end{example} \begin{proposition}[compare \cite{Akiyama:02, Sirvent-Wang:00a}] If $\beta$ is a Pisot unit ($\beta^{-1} \in \mathbb{Z}[\beta]$), then \begin{itemize} \item[(i)] $\mathcal{R}_\beta(x) = \mathcal{S}_\beta(x)$ for every $x \in \mathbb{Z}[\beta] \cap [0,1)$, \item[(ii)] we have only finitely many tiles up to translation, \item[(iii)] the boundary of each tile has zero Lebesgue measure, \item[(iv)] each tile is the closure of its interior, \item[(v)] $\{\mathcal{S}_\beta(x) \,:\, x \in \mathbb{Z}[\beta] \cap [0,1)\}$ forms a multiple tiling of $\mathbb{R}^d$, \item[(vi)] $\{\mathcal{S}_\beta(x) \,:\, x \in \mathbb{Z}[\beta] \cap [0,1)\}$ forms a tiling if (F) holds, \item[(vii)] $\{\mathcal{S}_\beta(x) \,:\, x \in \mathbb{Z}[\beta] \cap [0,1)\}$ forms a tiling if and only if the following property (W) holds: \\ for every $x \in \mathbb{Z}[\beta] \cap [0,1)$ and every $\varepsilon > 0$, there exists some $y \in [0,\varepsilon)$ with finite beta-expansion such that $x+y$ has finite beta-expansion. \end{itemize} \end{proposition} \begin{proof} Assertion~(i) is an immediate consequence of the definition, since $T_\beta^{-1}(\mathbb{Z}[\beta]) \subset \mathbb{Z}[1/\beta]=\mathbb{Z}[\beta]$ holds for a Pisot unit $\beta$. Assertion~(ii) is contained in \cite[Lemma~5]{Akiyama:02}.
Assertion~(iii) is proved in \cite[Theorem~5.3.12]{BST:10} and Assertion~(iv) is contained in \cite[Theorem~4.1]{Sirvent-Wang:00a} (both of these results are stated in terms of substitutions rather than beta-expansions). The tiling properties are proved in \cite{Akiyama:02}; property (W) has been further studied {\it e.g.} in \cite{Akiyama-Rao-Steiner:04}. \end{proof} In the following we turn our attention to tiles associated with expanding polynomials. Here Proposition~\ref{p:CNSconjugacy} suggests a relation between certain SRS tiles and the self-affine tiles associated with expanding monic polynomials defined as follows. \begin{definition}[Self-affine tile, {\it cf.}~\cite{Katai-Koernyei:92}] Let $A(X) = X^d + a_{d-1} X^{d-1} + \cdots + a_0 \in \mathbb{Z}[X]$ be an expanding polynomial and let $B$ be the transposed companion matrix with characteristic polynomial~$A$. Then the set \[ \mathcal{F} := \left\{\mathbf{t} \in \mathbb{R}^d\; :\; \mathbf{t} = \sum_{i=1}^\infty B^{-i} (c_i,0,\ldots,0)^t,\ c_i \in \mathcal{N} \right\}, \] where $\mathcal{N}=\{0,\ldots,|a_0|-1\}$, is called the {\it self-affine tile associated with~$A$}. \end{definition} For this class of tiles the following properties hold. \begin{proposition}[\cite{Katai-Koernyei:92,Lagarias-Wang:97,Wang:00a}] \mbox{} \begin{itemize} \item[(i)] $\mathcal{F}$ is compact and self-affine. \item[(ii)] $\mathcal{F}$ is the closure of its interior. \item[(iii)] $\{\mathbf{x} + \mathcal{F} \,:\,\mathbf{x} \in \mathbb{Z}^d\}$ induces a (multiple) tiling of $\mathbb{R}^d$. If $A$ is irreducible, $\{\mathbf{x} + \mathcal{F} \,:\, \mathbf{x} \in \mathbb{Z}^d\}$ forms a tiling of $\mathbb{R}^d$. \end{itemize} \end{proposition} \begin{proof} Assertion (i) is an immediate consequence of the definition of $\mathcal{F}$, see {\it e.g.}~\cite{Katai-Koernyei:92}.
In particular, note that $\mathcal{F}=\bigcup_{c \in \mathcal{N}} B^{-1}(\mathcal{F}+(c,0,\ldots,0)^t)$ where $B=VR(\mathbf{r})^{-1}V^{-1}$ with \begin{equation}\label{rV} \mathbf{r} = \left(\frac{1}{a_0}, \frac{a_{d-1}}{a_0}, \ldots, \frac{a_1}{a_0}\right) \quad\hbox{and}\quad V = \left(\begin{array}{cccc} 1 & a_{d-1} & \cdots & a_1 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & a_{d-1} \\ 0 & \cdots & 0 & 1 \end{array}\right). \end{equation} To prove Assertion~(ii) one first shows that $\mathcal{F}+\mathbb{Z}^d$ forms a covering of $\mathbb{R}^d$. From this, a Baire type argument yields that ${\rm int}(\mathcal{F})\not=\emptyset$, and the self-affinity of $\mathcal{F}$ then yields~(ii), see \cite[Theorem~2.1]{Wang:00a}. In Assertion~(iii) the multiple tiling property is fairly easy to prove, while the tiling property is harder to establish; it was shown in~\cite{Lagarias-Wang:97} in a more general context. \end{proof} There is a close relation between the tile $\mathcal{F}$ and the central SRS tile studied above. \begin{theorem}\label{thm:cnssrstile} Let $A(X) = X^d + a_{d-1} X^{d-1} + \cdots + a_0 \in \mathbb{Z}[X]$ be an expanding polynomial. For all $\mathbf{x} \in \mathbb{Z}^{d}$ we have \begin{align*} \mathcal{F} & = V \mathcal{T}_\mathbf{r}(\mathbf{0}), \\ \mathbf{x} + \mathcal{F} & = V \mathcal{T}_\mathbf{r}(V^{-1}(\mathbf{x})) \end{align*} where $V$ is given in \eqref{rV}. \end{theorem} The result follows immediately from \cite[Corollary~5.14]{BSSST2011}. \begin{example} Continuing Example~\ref{exKnuth}, consider the polynomial $X^2 + 2X + 2$. The self-affine tile associated with this polynomial is Knuth's famous twin dragon. In view of Proposition~\ref{p:CNSconjugacy}, the associated SRS parameter is $\mathbf{r}=(\frac12,1)$. Using Theorem~\ref{thm:Brunotte} we see that $\mathbf{r}\in\D_2^{(0)}$, and Corollary~\ref{cor:tiling} can be invoked to show that the associated SRS tiles induce a tiling (see the left side of Figure~\ref{fig:CNStiling}).
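The set $\mathcal{F}$ for this polynomial can be approximated directly from its definition by truncating the digit expansions. A minimal sketch (Python; our own illustration, not code from the cited works):

```python
def self_affine_tile_points(level):
    """Approximate the self-affine tile F for A(X) = X^2 + 2X + 2
    (Knuth's twin dragon) by iterating the maps x -> B^{-1}(x + (c, 0)^t),
    c in {0, 1}, where B is the transposed companion matrix of A."""
    binv = ((-1.0, -0.5), (1.0, 0.0))   # inverse of B = [[0, 1], [-2, -2]]
    def mul(m, v):
        return (m[0][0] * v[0] + m[0][1] * v[1],
                m[1][0] * v[0] + m[1][1] * v[1])
    pts = [(0.0, 0.0)]
    for _ in range(level):
        pts = [mul(binv, (x + c, y)) for (x, y) in pts for c in (0, 1)]
    return pts

pts = self_affine_tile_points(10)  # 2**10 points approximating the tile
```

Each point corresponds to one truncated digit sequence $(c_1,\ldots,c_{10})$; plotting `pts` gives a rough picture of the twin dragon.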
According to Theorem~\ref{thm:cnssrstile} this tiling is an affine transformation of the tiling $\mathcal{F}+\mathbb{Z}^d$. \begin{figure} \includegraphics[width=.45\textwidth]{srs1211} \includegraphics[width=.45\textwidth]{srs2311} \caption{Two patches of tilings induced by SRS tiles: the left figure shows the tiling associated with the parameter $\mathbf{r} = (1/2,1)$. It is an affine image of the tiling induced by the CNS defined by $X^2 + 2X + 2$. The central tile is Knuth's twin dragon. The right patch corresponds to the parameter $\mathbf{r} = (2/3,1)$. It can also be regarded as the {\it Brunotte tiling} associated with the non-monic expanding polynomial $2X^2 + 3X + 3$. \label{fig:CNStiling}} \end{figure} Starting with the non-monic polynomial $2X^2 + 3X + 3$ we get $\mathbf{r}=(\frac23,1)$ and the tiling depicted on the right side of Figure~\ref{fig:CNStiling}. We mention that there is a tiling theory also in the case of non-monic polynomials. These so-called {\it Brunotte tiles} are defined and discussed in \cite[Section~5]{BSSST2011}. \end{example} We conclude this section with a continuation of Example~\ref{ex32}. \begin{example}[The $\frac32$-number system, continued]\label{ex32_2} As shown in Example~\ref{ex32} the $\frac32$-number system defined in~\cite{Akiyama-Frougny-Sakarovitch:07} is conjugate to the SRS $\tau_{-2/3}$. It has been shown in \cite[Section~5.4]{BSSST2011} that the tiling (see~Theorem~\ref{1dimtiling}) induced by the associated SRS tiles \begin{equation}\label{32-tiling} \{\mathcal{T}_{-2/3}(x)\;:\; x\in \mathbb{Z} \} \end{equation} consists of (possibly degenerate) intervals with infinitely many different lengths. Essentially, this is due to the fact that for each $k\in\mathbb{N}$ we can find $N_k\in \mathbb{Z}$ such that $\# \tau_{-2/3}^{-k}(N_k) =2$ (see \cite[Lemma~5.18]{BSSST2011}).
This can be used to show that the length $\ell_k$ of the interval $\mathcal{T}_{-2/3}(N_k)$ satisfies $\ell_k\in[(\frac23)^k, 3(\frac23)^k]$, which immediately yields the existence of intervals of infinitely many different lengths in \eqref{32-tiling}. It was observed in \cite[Example~2.1]{ST:11} that the length $\ell=1.6227\cdots$ of the interval $\mathcal{T}_{-2/3}(0)$ is related to a solution of a special case of the Josephus problem presented in~\cite{OW:91}. \end{example} \section{Variants of shift radix systems}\label{sec:variants} In recent years some variants of SRS have been studied. Akiyama and Scheicher~\cite{Akiyama-Scheicher:07} investigated ``symmetric'' SRS. They differ from the ordinary ones just by replacing $-\lfloor \mathbf{rz} \rfloor$ by $-\lfloor \mathbf{rz} + \frac12 \rfloor$, {\it i.e.}, the {\it symmetric SRS} $\hat\tau_\mathbf{r}: \mathbb{Z}^d \to \mathbb{Z}^d$ is defined by $$ \hat\tau_{{\bf r}}({\bf z})=\left(z_1,\dots,z_{d-1},-\left\lfloor {\bf r} {\bf z}+\frac12\right\rfloor\right)^t \qquad ({\bf z}=(z_0,\dots,z_{d-1})^t). $$ It turns out that the characterization of the (accordingly defined) finiteness property is easier in this case and complete results have been achieved for dimension two (see~\cite{Akiyama-Scheicher:07}) and three (see~\cite{Huszti-Scheicher-Surer-Thuswaldner:06}). As mentioned above, for SRS it is conjectured that SRS tiles always induce weak tilings. Interestingly, this is not true for tiles associated with symmetric SRS. Kalle and Steiner~\cite{Kalle-Steiner} found a parameter $\mathbf{r}$ (related to a Pisot unit) where the associated symmetric SRS tiles form a double tiling (in that paper this is studied in the setting of symmetric beta-expansions; these are known to be a special case of symmetric SRS, see \cite{Akiyama-Scheicher:07}). Further generalizations of SRS are studied by Surer~\cite{Surer:09}. Analogs for finite fields have been introduced by Scheicher~\cite{Scheicher:07}.
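In code, the symmetric variant differs from the ordinary SRS map only in the rounding. A minimal sketch (Python; the parameter and starting point below are chosen arbitrarily for illustration):

```python
import math
from fractions import Fraction

def srs(r, z):
    """Ordinary SRS map: last entry is -floor(r . z)."""
    s = sum(ri * zi for ri, zi in zip(r, z))
    return z[1:] + (-math.floor(s),)

def symmetric_srs(r, z):
    """Symmetric SRS map: last entry is -floor(r . z + 1/2)."""
    s = sum(ri * zi for ri, zi in zip(r, z)) + Fraction(1, 2)
    return z[1:] + (-math.floor(s),)

r = (Fraction(1, 2), Fraction(1, 2))
z = (3, -2)
print(srs(r, z), symmetric_srs(r, z))  # (-2, 0) (-2, -1)
```

Here $\mathbf{r}\mathbf{z}=\frac12$, so the ordinary map floors to $0$ while the symmetric map rounds to $1$, and the two orbits already differ after one step.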
Brunotte {\it et al.}~\cite{Brunotte-Kirschenhofer-Thuswaldner:12a} define SRS for Gaussian integers. In this case the characterization problem for the finiteness property is non-trivial already in dimension one. \bigskip {\bf Acknowledgements.} The authors wish to express their thanks to the scientific committees of the international conferences ``Num\'eration 2011'' in Li\`ege, Belgium and ``Numeration and substitution: 2012'' in Kyoto, Japan for inviting them to give expository lectures on shift radix systems at these conferences. Moreover they are grateful to Shigeki Akiyama for inviting them to write this survey for the present RIMS conference series volume. They also thank Wolfgang Steiner for his help; he generated Figures~\ref{fig:steiner4},~\ref{fig:betatiling}, and~\ref{fig:CNStiling}. \bibliographystyle{siam}
\section{Introduction} A central problem in geometric computer vision is \emph{Structure from Motion} (SfM), which is the problem of reconstructing a 3-D scene from sparse feature points tracked in the images of a moving camera. This problem is known also in the robotics community as \emph{Simultaneous Localization and Mapping} (SLAM). One of the main differences between the two communities is that in SLAM it is customary to assume the presence of an \emph{Inertial Measurement Unit} (IMU) that provides measurements of angular velocity and linear acceleration in the camera's frame. Conversely, in SfM there is a line of work which uses an \emph{affine} camera model, which is an approximation to the projective model when the depth of the scene is relatively small with respect to the distance between camera and scene. The resulting \emph{Affine SfM} problem affords a very elegant closed-form solution based on matrix factorization and other linear algebra operations \cite{Tomasi:IJCV92}. This solution has not been used in the robotics community, possibly due to the fact that it cannot be immediately extended to use IMU measurements. We assume that the relative pose between IMU and camera has been calibrated using one of the existing offline \cite{Lobo:IJRR07}, online \cite{Jones:IJRR11,Jones:WDV07,Lynen:IROS13,Weiss:ICRA12,Kelly:IJRR11} or closed-form \cite{DongSi:IROS12,Martinelli:FTROB13,Martinelli:TRO12} approaches. \paragraph*{Paper contributions} In this paper we bridge the gap between the two communities by proposing a new \emph{Dynamic Affine SfM} technique. Our technique is a direct extension of the traditional Affine SfM algorithm, but incorporates synchronized IMU measurements. This is achieved by assuming that the frame rate of the camera is high enough and that we can compute the higher-order derivatives of the point trajectories. Remarkably, our formulation leads again to a closed-form solution based on matrix factorization and linear algebra operations.
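For reference, the closed-form solution of classical Affine SfM \cite{Tomasi:IJCV92} that we extend can be sketched as follows (a minimal rank-3 factorization, recovering motion and structure only up to an affine ambiguity; this is not the dynamic variant proposed in this paper):

```python
import numpy as np

def affine_sfm(W):
    """Tomasi-Kanade-style factorization of a 2F x P measurement matrix W
    into a 2F x 3 motion matrix M, a 3 x P structure matrix S and per-row
    translations t, so that W is approximately M @ S + t (rank-3 model)."""
    t = W.mean(axis=1, keepdims=True)      # centroid = translation part
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * s[:3]                   # motion, up to an affine ambiguity
    S = Vt[:3]                             # structure, centered at the origin
    return M, S, t

# synthetic sanity check: an exactly rank-3 W is reproduced
rng = np.random.default_rng(0)
M0 = rng.normal(size=(8, 3))               # 4 frames, 2 rows each
S0 = rng.normal(size=(3, 10))
S0 -= S0.mean(axis=1, keepdims=True)       # centered 3-D points
W = M0 @ S0 + rng.normal(size=(8, 1))      # add per-row translations
M, S, t = affine_sfm(W)
err = np.linalg.norm(W - (M @ S + t))      # ~0 for noiseless data
```

The metric upgrade (resolving the affine ambiguity with orthonormality constraints on the camera rows) is a separate step that we do not sketch here.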
To the best of our knowledge, this kind of relation between higher-order derivatives of image trajectories (flow) and IMU measurements, and the low-rank factorization relation between them, have never been exploited before. \input{priorWork} \input{notation} \input{solution} \section{Preliminary results} Figure \ref{fig:result-affine} shows a simulation of the result of a preliminary implementation of the Dynamic Affine SfM procedure. We have simulated 5 seconds of a quadrotor following a smooth trajectory while an onboard camera tracks 24 points. The measurements (point coordinates, angular velocity and linear acceleration) are sampled at $30\mathrm{Hz}$ and corrupted with Gaussian noise; the noise levels are $\unitfrac[3]{deg}{s}$ for the angular velocity, $\unitfrac[0.2]{m}{s^2}$ for the acceleration, and $\unit[0.5]{\%}$ for the image points (corresponding, for instance, to $\unit[3.2]{px}$ on a $\unit[600\times600]{px}$ image). The reconstruction obtained using our implementation is aligned to the ground-truth using a Procrustes procedure without scaling and compared with an integration of the inertial measurements alone. Figure \ref{fig:result-affine-trajectory} compares the plots of the ground truth and estimated rotations and translations in absolute coordinates. Three facts should be noted in this simulation: \begin{enumerate*} \item The use of images greatly improves the accuracy with respect to the use of IMU measurements alone. \item The noise in the estimation mostly appears along the $z$-axis direction of the camera, for which the images do not provide any information. Although ours is a preliminary implementation, the result obtained is extremely close to the ground-truth, except for small errors along the $z$ axis of the camera.
These errors are due to the fact that the affine model discards the information along the $z$ axis (the affine model provides little information in this direction, and the reconstruction mostly relies on the noisy accelerometer measurements). \item Larger noise appears at lower velocities (beginning and end of the trajectory), thus attesting to the usefulness of incorporating higher-order derivative information. \end{enumerate*} \begin{figure*} \centering \hfill\subfloat[Top view]{\includegraphics{figures/dynamicSfM_reconstruction_top}}\hfill \subfloat[Front view]{\includegraphics{figures/dynamicSfM_reconstruction_front}}\hfill \vspace{-1mm} \caption{Simulation results for a preliminary implementation of the Dynamic Affine SfM reconstruction under significantly noisy conditions. Red: ground-truth structure and motion. Blue: reconstructed structure and motion. Black line: motion estimate from integration of IMU measurements alone. Green pyramid: initial reconstructed camera pose. The camera rotates up to $\unit[30]{deg}$ and reaches velocities of up to $\unitfrac[0.5]{m}{s}$. All axes are in meters. } \label{fig:result-affine} \end{figure*} \begin{figure*} \centering \includegraphics{figures/dynamicSfM_trajectory} \caption{Plots of the ground-truth and estimated trajectories. Left: Absolute rotations in exponential coordinates from the identity, $\log(R_f)$. Right: absolute translations.} \label{fig:result-affine-trajectory} \end{figure*} \section{Extensions and future work} The approach can be easily extended to the case where multiple (non-overlapping) cameras are rigidly mounted to the same IMU. In this case, one can construct multiple matrices $W$ (one for each camera), and perform steps \ref{it:factorization}--\ref{it:centering} of our solution independently.
The rotations and translations (steps \ref{it:rotations},~\ref{it:translations}) can then be recovered by solving the linear systems \eqref{eq:opt-rotation} and \eqref{eq:opt-translation} jointly over all the cameras by adjusting the corresponding coefficient matrices with the relative camera-IMU poses (which are assumed to be known). The Dynamic Affine SfM approach can also be potentially extended to the projective camera model by using the approach of \cite{Dai:IJCV13}. Let $\Lambda\in\real{F\times P}$ be a matrix containing all the unknown depths of each point in each view, and let $L=\Lambda\otimes\bmat{1\\1}$. Then, one can find a low-rank matrix $\hat{W}$ by minimizing \begin{multline} \min_{\hat{W},\Lambda,\dot{\Lambda},\ddot{\Lambda}} \norm{\bmat{L\odot W'\\\dot{L}\odot W' + L \odot \dot{W}'\\\ddot{L}\odot W' +2\dot{L}\odot \dot{W}' + L \odot \ddot{W}'}-\hat{W}}^2_F \\+ \mu \norm{\hat{W}}_\ast+f_\Lambda(\Lambda,\dot{\Lambda},\ddot{\Lambda}),\label{eq:opt-projective} \end{multline} where $\norm{\cdot}_\ast$ denotes the nuclear norm (which acts as a low-rank prior for $\hat{W}$), $\mu$ is a scalar weight and $f_\Lambda$ relates $\Lambda$ with its derivatives using a derivative interpolation filter. This problem is convex and can be iteratively solved using block coordinate descent (i.e., by minimizing over $\hat{W}$ and the other variables alternately). The method described in Section~\ref{sc:solution} can then be carried out on the matrix $\hat{W}$ to obtain the reconstruction. Intuitively, \eqref{eq:opt-projective} estimates the projective depths of each point so that we can reduce the problem to the affine case. We will implement and evaluate these two extensions in our future work. \bibliographystyle{biblio/ieee} \section{Notation and preliminaries} In this section we establish the notation for the following sections. In particular, we define quantities related to a robot (e.g., a quadrotor) equipped with a camera and an inertial measurement unit.
We consider its motion as a rigid body, and the relation of this with the IMU measurements and with the geometry of the scene. \paragraph{Reference frames, transformations and velocities} We first define an inertial spatial frame $\cR_s$, which corresponds to a fixed ``world'' reference frame, and a camera reference frame $\cR_f$, which corresponds to the body-fixed reference frame of the robot. For simplicity, we assume that the reference frame of the camera and of the IMU coincide with $\cR_f$, and that they are both centered at the center of mass of the robot. We denote the world-centered frame as $\cR_s$, and as $\cR_f$ the robot-centered frame at some time instant (or ``frame'') $f\in\{1,\ldots,F\}$. We also define the pair $(R_f,T_f)\in SE(3)$, where $R_f$ is a 3-D rotation belonging to the space of rotations $SO(3)$, $T_f\in\real{3}$ is a 3-D translation, and $SE(3)$ is the group of rigid body motions \cite{Murray:book94}. More concretely, given a point with 3-D coordinates $X_c\in\real{3}$ in the camera frame, the same point in the spatial frame will have coordinates $X_s\in\real{3}$ given by: \begin{equation} \label{eq:transformationbs} X_s=R_{f}X_c+T_{f}. \end{equation} Note that this equation implies that $T_{f}$ is equal to the position of the center of mass of the vehicle in $\cR_s$. Hence, $\dot{T}_{f}$ and $\ddot{T}_{f}$ represent its velocity and acceleration in the same reference frame. We also define the angular velocity $\omega_f\in\real{3}$ with respect to $\cR_f$ such that \begin{equation} \dot{R}_f=R_f\hat{\omega}_f.\label{eq:omegab} \end{equation} With this notation, Euler's equation of motion for the vehicle can be written as \cite{Murray:book94}: \begin{equation} J\dot{\omega}_f+\hat{\omega}_fJ\omega_f=\Gamma_f, \label{eq:Jdomegac} \end{equation} where $J$ is the moment of inertia matrix and $\Gamma_f$ is the torque applied to the body, both defined with respect to the local reference frame $\cR_f$.
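The kinematic relation \eqref{eq:omegab} can be checked numerically in a few lines; the following sketch (Python with NumPy; purely illustrative, not part of the proposed method) compares a finite-difference derivative of $R$ with $R\hat{\omega}$ for a constant angular velocity:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of w, so that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rotation matrix exp(hat(w)) via the Rodrigues formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

omega = np.array([0.3, -0.2, 0.5])           # body-frame angular velocity
R0 = expm_so3(np.array([0.1, 0.7, -0.4]))    # some initial attitude
h = 1e-6
Rdot_fd = (R0 @ expm_so3(omega * h) - R0) / h  # finite-difference dR/dt
err = np.linalg.norm(Rdot_fd - R0 @ hat(omega))  # O(h), hence tiny
```

Since $R(t)=R_0\exp(\hat{\omega}t)$ for constant $\omega$, the finite-difference derivative agrees with $R_0\hat{\omega}$ up to a term of order $h$.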
\paragraph{Body-fixed quantities} We denote the translation, linear velocity, linear acceleration, rotation and angular velocity of the robot expressed in the reference $\cR_f$ as $\tau_f$, $\nu_f$, $\alpha_f$, $R_f$ and $\omega_f$, respectively. Since these are vectors, they are related to the corresponding quantities in the inertial frame $\cR_s$ by the rotation $R_f^\mathrm{T}$: \begin{align} \tau_f&=R_f^\mathrm{T} T_f, \label{eq:taub} \\ \nu_f&=R_f^\mathrm{T}\dot{T}_f, \label{eq:nub}\\ \alpha_f&=R_f^\mathrm{T}\ddot{T}_f. \label{eq:taub-nub-alphab} \end{align} An ideal body-fixed IMU will measure the angular velocity \begin{equation} \omega_{IMU}=\omega_f, \end{equation} and the acceleration: an ideal accelerometer positioned at the center of mass of the object will measure \begin{equation} \alpha_{IMU}=R_{f}^\mathrm{T}(\ddot{T}_{f}-g_s)=\alpha_f-R_{f}^\mathrm{T} g_s,\label{eq:alphaimu} \end{equation} where $g_s$ is the (downward pointing) gravity vector in the spatial frame $\cR_s$, approximately $-9.8e_z\;\mathrm{m}/\mathrm{s}^2$, where $e_z=[0\; 0\; 1]^\mathrm{T}$. \paragraph{Tridimensional structure} We assume that the onboard camera can track the position of $P$ points having coordinates $X_{sp}\in\real{3}$, $p\in\{1,\ldots,P\}$ in $\cR_s$. For convenience, we assume that $\cR_s$ is centered at the centroid of these points, that is, $\frac{1}{P}\sum_{p=1}^PX_{sp}=0$. Given the quantities above, we can find expressions for the coordinates of a point in the camera's coordinate system and its derivatives. However, it is first convenient to find the derivatives of $\tau_f$, $\nu_f$ and $\omega_f$, which can be obtained by combining \eqref{eq:taub} and \eqref{eq:nub} with the definition \eqref{eq:omegab}, and from Euler's equation of motion \eqref{eq:Jdomegac}.
\begin{align} \dot{\tau}_f&=-\hat{\omega}_f\tau_f+\nu_f, \label{eq:dtaub}\\ \dot{\nu}_f&=-\hat{\omega}_f\nu_f+\alpha_f, \label{eq:dnub}\\ \dot{\omega}_f&=J^{-1}\bigl(\Gamma_f-\hat{\omega}_fJ\omega_f\bigr).\label{eq:domegab} \end{align} Then, the coordinates of a point $X_{sp}$ in the reference $\cR_f$ and their derivatives are given by \cite{Murray:book94}: \begin{align} X_{cfp}=&R_f^\mathrm{T} X_{sp}-\tau_f, \label{eq:Xc}\\ \dot{X}_{cfp}=&-\hat{\omega}_f R_f^\mathrm{T} X_{sp}+\hat{\omega}_f\tau_f-\nu_f, \label{eq:dXc}\\ \ddot{X}_{cfp}=&\bigl(\hat{\omega}_f^2-\dot{\hat{\omega}}_f\bigr) R_f^\mathrm{T} X_{sp} - \bigl(\hat{\omega}_f^2-\dot{\hat{\omega}}_f\bigr) \tau_f\nonumber\\&+2\hat{\omega}_f\nu_f-\alpha^{IMU}_f-R_f^\mathrm{T} g_s, \label{eq:Xc-dXc-ddXc} \end{align} where $\hat{\cdot}$ denotes the skew-symmetric matrix representation of the cross product \cite{Ma:book04}, and where $\dot{\omega}_f$ can be obtained either by using Euler's equation of motion as in \eqref{eq:domegab}, by assuming $\dot{\omega}_f=0$ (constant velocity model), or by numerical differentiation of $\omega_f$. Note that $X_{cfp}$ and its derivatives can be completely determined by the 3-D geometry of the scene in the inertial reference frame, the motion of the camera ($R_f$, $\tau_f$, $\nu_f$) and the measurements of the IMU ($\alpha_{IMU}$, $\omega_{IMU}$); these expressions all contain some coefficient matrix times the term $R_f^\mathrm{T} X_{sp}$ plus a vector given by the IMU measurements, the translational motion of the robot and the gravity vector. This structure will lead to the low-rank factorization formulation below. \paragraph{Image projections} The coordinates in the image of the projection of $X_{sp}$ at frame $f$ are denoted as $x_{fp}$. Assuming that the camera is intrinsically calibrated \cite{Ma:book04}, the image $x_{fp}$ can be related to $X_{cfp}$ with the \emph{affine camera model}, that is: \begin{equation} x_{fp}=\Pi X_{cfp}.
\label{eq:affine-model} \end{equation} where $\Pi\in\real{2 \times 3}$ is a projector that removes the third element of a vector. This model is an approximation of the projective model, valid when the scene is relatively far from the camera. This model has been used for \emph{Affine SfM} \cite{Tomasi:IJCV92} and \emph{Affine Motion Segmentation} (see the review article \cite{Vidal:SPM11} and references therein), and it will allow us to introduce the basic principles of our proposed methods. Using \eqref{eq:affine-model}, one can show that the images $\{x_{fp}\}$ and their derivatives $\{\dot{x}_{fp}\}$ (\emph{flow}) and $\{\ddot{x}_{fp}\}$ (\emph{double flow}) can be written as: \begin{align} x_{fp}=&\Pi R_f^\mathrm{T} X_{sp}-\Pi\tau_f, \label{eq:xfp}\\ \dot{x}_{fp}=&-\Pi\hat{\omega}_f R_f^\mathrm{T} X_{sp}+\Pi(\hat{\omega}_f\tau_f-\nu_f), \label{eq:dxfp}\\ \ddot{x}_{fp}=&\Pi\bigl(\hat{\omega}_f^2-\dot{\hat{\omega}}_f\bigr) R_f^\mathrm{T} X_{sp} - \Pi\bigl((\hat{\omega}_f^2-\dot{\hat{\omega}}_f) \tau_f\nonumber\\&-2\hat{\omega}_f\nu_f+\alpha^{IMU}_f+R_f^\mathrm{T} g_s\bigr). \label{eq:ddxfp} \end{align} \paragraph{Formal problem statement} In this section we give the technical details for the proposed Dynamic SfM estimation methods for single agents. We assume that the camera on the robot can track $P$ points for $F$ frames, and that the derivatives of the tracked points are available (e.g., through numerical differentiation). Using the notation introduced in this section, the Dynamic Affine SfM problem is then formulated as finding the motion $\{R_f,\tau_f,\nu_f\}$, the structure $\{X_{sp}\}$ and the gravity vector $g_s$ from the camera measurements $\{x_{fp}\},\{\dot{x}_{fp}\},\{\ddot{x}_{fp}\}$ and the IMU measurements $\{\omega_f,\alpha^{IMU}_f\}$. Figure~\ref{fig:notation} contains a graphical summary of the problem and of the notation.
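The flow equation can also be verified numerically. The following sketch (a synthetic trajectory invented purely for illustration; it is not part of the proposed pipeline) compares the analytic expression for $\dot{x}_{fp}$ with a finite-difference derivative of the projected point:

```python
import numpy as np

Pi = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])      # affine projector: drops the third coordinate

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

X_s = np.array([1.0, 2.0, 5.0])       # a fixed world point
w = np.array([0.0, 0.0, 0.7])         # constant body angular velocity
T = lambda t: np.array([0.1 * t, -0.2 * t, 0.0])  # world-frame translation
T_dot = np.array([0.1, -0.2, 0.0])

def x_img(t):
    """Affine image of X_s at time t."""
    R = Rz(0.7 * t)
    tau = R.T @ T(t)                  # body-frame translation
    return Pi @ (R.T @ X_s - tau)

t, h = 0.4, 1e-6
R = Rz(0.7 * t)
tau, nu = R.T @ T(t), R.T @ T_dot     # body-frame translation and velocity
flow_analytic = -Pi @ hat(w) @ R.T @ X_s + Pi @ (hat(w) @ tau - nu)
flow_numeric = (x_img(t + h) - x_img(t)) / h
# flow_analytic and flow_numeric agree up to finite-difference error
```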
\section{Review of prior work} In the vision community, the Dynamic SfM problem is related to traditional \emph{Structure from Motion} (SfM), which uses only vision measurements. The standard solution pipeline \cite{Hartley:book04} includes three steps. First, estimate relative poses between pairs of images by using matched features \cite{Lowe:IJCV04,Dong:CVPR15,Bay:CVIU08} and robust fitting techniques \cite{Fischler:CommACM81,Hartley:pami12}. Second, combine the pairwise estimates either in sequential stages \cite{Snavely:TOG06,Agarwal:COMM11,Agarwal:ECCV10,Frahm:ECCV10,Snavely:CVPR08}, or by using a pose-graph approach \cite{Carlone:ICRA15} (which works only with the poses and not the 3-D structure). Algorithms under the latter category can be divided into \emph{local} methods \cite{Tron:TAC14,Hartley:IJCV13,Aftab:PAMI15,Chatterjee:ICCV13}, which use gradient-based optimization, and \emph{global} methods \cite{Martinec:CVPR07,Arie-Nachimson:3DIMPVT12,Wang:II13}, which involve a relaxation of the constraints on the rotations together with a low-rank approximation. The third and last step of the pipeline is to use Bundle Adjustment (BA) \cite{Triggs:VATP00,Hartley:book04,Engels:PCV06}, where the motion and structure are jointly estimated by minimizing the reprojection error. In the robotics community, Dynamic SfM is closely related to other Vision-aided Inertial Navigation (VIN) problems.
These include: \emph{Visual-Inertial Odometry} (VIO), where only the robots' motion is of interest, and \emph{Simultaneous Localization and Mapping} (SLAM), where the reconstruction (i.e., map) of the environment is also of interest. Existing approaches to these problems fall between two extremes. On one end of the spectrum we have \emph{batch} approaches, which are similar to BA with additional terms taking into account the IMU measurements \cite{Bryson:ICRA09,Strelow:IJRR04}. If obtaining a map of the environment is not important, the optimization problem can be restricted to the poses alone (as in the pose-graph approach in SfM), using the images and IMU measurements to build a so-called \emph{factor graph} \cite{Indelman:NDRV15,Dellaert:08,Agrawal:IROS06}. To speed up the computations, some of the nodes can be merged using IMU \emph{pre-integration} \cite{Lupton:TRO12,Carlone:ICRA15}, and \emph{key-frames} \cite{Konolige:TRO08}. \begin{figure*} \centering \includegraphics{figures/notation} \caption{Schematic illustration of the problem and notation.} \label{fig:notation} \end{figure*} On the other end of the spectrum we have pure \emph{filtering} approaches. While some approaches are based on the Unscented Kalman Filter (UKF) \cite{Huster:OCEANS02,Ebcin:GNSS01,Kelly:IJRR11} or Particle filter \cite{Fox:JAIR99,Pupilli:BMVC05}, the majority are based on the Extended Kalman Filter (EKF). The inertial measurements can be used in either a \emph{loosely coupled} manner, i.e., in the update step of the filter \cite{Konolige:RR11,Oskiper:CVPR07,Chai:PTVE02,Jones:IJRR11}, or in a \emph{tightly coupled} manner, i.e., in the prediction step of the filter together with a kinematic model \cite{Jung:CVPR01,Ma:ICRA12,Tardif:IROS10,Brockers:SPIE12,Kottas:RSS13,Strelow:IJRR04,Kim:ICRA03,Kleinert:MFI10,Pinies:ICRA07}.
Methods based on the EKF can be combined with an inverse depth parametrization \cite{Kleinert:MFI10,Pinies:ICRA07,Civera:RSS06,Eade:CVPR06} to reduce linearization errors. Between batch and filtering approaches there are three options. The first is to use incremental solutions to the batch problem \cite{Kaess:IJRR11}. The second option is to use a \emph{Sliding Window Filter} (SWF) approach, which applies a batch algorithm on a small set of recent measurements. The states that are removed from the window are compressed into a prior term using linearization and marginalization \cite{Mei:IJCV11,Sibley:JFR10,Leutenegger:IJRR15,DongSi:ICRA11,Huang:IROS11}, possibly approximating the sparsity of the original problem \cite{Nerurkar:ICRA14}. The third option is to use a \emph{Multistate-Constrained Kalman Filter} (MSCKF), which is similar to a sliding window filter, but where the old states are \emph{stochastic clones} \cite{Roumeliotis:ICRA02} that remain constant and are not updated with the measurements. Comparisons of the two approaches \cite{Leutenegger:IJRR15,Clement:CRV15} show that the SWF is more accurate and robust, but the MSCKF is more efficient. A hybrid method that switches between the two has appeared in \cite{Li:IROS12}, \cite{Li:RSS13}. \section{Dynamic Affine SfM} \label{sc:solution} \paragraph{Factorization formulation} We start our treatment by collecting all the image measurements and their derivatives in a single matrix \begin{equation} W=\stack(W',\dot{W}',\ddot{W}') \in\real{6F\times P}, \end{equation} where the matrix $W'\in\real{2F\times P}$ is defined by stacking the coordinates $\{x_{fp}\}$ following the frame index $f$ across the rows and the point index $p$ across the columns: \begin{equation} W'=\bmat{x_{11} & \cdots & x_{1P}\\ \vdots & \ddots & \vdots \\ x_{F1} & \cdots & x_{FP}} \in\real{2F\times P}. \end{equation} Notice the common structure of the image equations above, where each block is a coefficient matrix times $R_f^\mathrm{T}$ times $X_{sp}$ plus a vector.
Thus, the matrix $W$ admits an affine rank-three decomposition (which can also be written as a rank-four decomposition) \begin{equation} W=CMS+m\vct{1}^\mathrm{T}=\bmat{CM & m}\bmat{S\\\vct{1}^\mathrm{T}},\label{eq:factorization} \end{equation} where the \emph{motion matrix} \begin{equation} M=\stack\bigl(\{R_f^\mathrm{T}\}\bigr)\label{eq:M} \end{equation} contains the rotations, the \emph{structure matrix} \begin{equation} S=\bmat{X_{s1}&\cdots&X_{sP}} \end{equation} contains the 3-D points expressed in $\cR_s$, the \emph{coefficient matrix} $C$ contains the projector $\Pi$ times the coefficients multiplying the rotations in \eqref{eq:Xc-dXc-ddXc} \begin{equation} C=\stack\left(\{\Pi\}_{f=1}^F,\{-\Pi\hat{\omega}_{f}\}_{f=1}^F,\{\Pi(\hat{\omega}_f^2-\dot{\hat{\omega}}_f)\}_{f=1}^F\right), \end{equation} and the \emph{translation vector} $m\in\real{6F}$ contains the remaining vector terms \begin{multline} m=\stack\bigl(\{-\Pi\tau_{f}\}_{f=1}^F,\{\Pi(\hat{\omega}_f\tau_f-\nu_{f})\}_{f=1}^F,\\ \{-\Pi((\hat{\omega}_f^2-\dot{\hat{\omega}}_f) \tau_f -2\hat{\omega}_{f}\nu_{f}+\alpha^{IMU}_f+R_{f}^\mathrm{T} g_s)\}_{f=1}^F\bigr).\label{eq:m} \end{multline} In addition to this relation, the quantities $\tau_f$, $\nu_f$ and $\alpha_f^{IMU}$ can be linearly related using derivatives (see \eqref{eq:taub-nub-alphab}). Similarly, $R_f$ and $\omega_f$ can be related using the definition of angular velocity. Note that the coefficients $C$ are completely determined by the IMU measurements and the torque inputs.
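The rank property in \eqref{eq:factorization} is easy to check numerically. In the sketch below the detailed block structure of $C$ and $m$ is replaced by random matrices of matching sizes, since only the factor dimensions matter for the rank bound:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 20

# Stacked transposed rotations R_f^T (any orthogonal blocks will do here).
M = np.vstack([np.linalg.qr(rng.normal(size=(3, 3)))[0].T for _ in range(F)])
S = rng.normal(size=(3, P))
S -= S.mean(axis=1, keepdims=True)      # centroid at the origin, as assumed for R_s
C = rng.normal(size=(6 * F, 3 * F))     # stand-in for the coefficient matrix
m = rng.normal(size=(6 * F, 1))         # stand-in for the translation vector

# W = [CM  m] [S; 1^T] has rank (at most) four.
W = np.hstack([C @ M, m]) @ np.vstack([S, np.ones((1, P))])
```

For generic data the rank is exactly four, which is what the SVD-based factorization step below exploits.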
\paragraph{Optimization formulation} From this, the problem of estimating the motion, the structure, and the gravity direction can then be cast as an optimization problem: \begin{multline} \min_{\{R_f,\tau_f,\nu_f\},g_s}\frob{W-(CMS+m\vct{1}^\mathrm{T})}^2+f_R(\{R_f\},\{\omega_f\})\\+f_\tau(M,\{\tau_f\},\{\nu_f\})+f_\nu(M,\{\nu_f\},\{\alpha_f^{IMU}\},g_s), \label{eq:factorization-affine} \end{multline} where $f_R$, $f_\tau$ and $f_\nu$ are quadratic regularization terms based on approximating the linear derivative constraints between $\tau_f$, $\nu_f$, $\alpha_f^{IMU}$ and between $R_f$, $\omega_f$ with finite differences. In particular, for our implementation we will use: \begin{align} f_R&=\sum_{f=1}^{F-1}\frob{R_{f+1}-R_f\expm(t_s\hat{\omega}_f)}^2 \label{eq:regR}\\ f_\tau&=\sum_{f=1}^{F-1} \frob{\frac{1}{t_s}\conv(\tau_k,h_k,f)-\nu_f}^2\label{eq:regtau}\\ f_\nu&=\sum_{f=1}^{F-1} \frob{\frac{1}{t_s}\conv(\nu_k,h_k,f)-\alpha^{IMU}_f-R_f^\mathrm{T} g_s}^2,\label{eq:regnu} \end{align} where $t_s$ is the sampling period of the measurements, $\expm$ is the matrix exponential and $\conv(\tau_k,h_k,f)$ gives the sample at time $f$ of the convolution $\tau_k\ast h_k$ of a signal $\tau_k$ with a derivative interpolation filter $h_k$. For our implementation we obtain $h_k$ from a Savitzky-Golay filter of order one and window size three. \paragraph{Solution strategy} The optimization problem \eqref{eq:factorization-affine} is non-convex. However, we can find a closed-form solution by exploiting the low-rank nature of the product $MS$ and the linearity of the other terms. This closed-form solution is exact in the noiseless case, and provides an approximate solution to \eqref{eq:factorization-affine} in the noisy case. \begin{enumerate} \item\label{it:factorization} Factorization. Compute a rank-four factorization $W=\tilde{M}\tilde{S}$ using an SVD.
With respect to the factorization \eqref{eq:factorization}, the factors $\tilde{M}$ and $\tilde{S}$ are related, respectively, to $\bmat{CM & m}$ and $\bmat{S\\\vct{1}^\mathrm{T}}$ by an unknown invertible matrix $K_{\textrm{proj}}\in\real{4\times 4}$. In standard SfM terminology, $\tilde{M}$ and $\tilde{S}$ represent a \emph{projective} reconstruction. \item Similarity transformation. Ideally, the last row of the structure factor should be $\vct{1}^\mathrm{T}$. Therefore, we find a vector $k\in\real{4}$ by solving $k^\mathrm{T} \tilde{S}=\vct{1}^\mathrm{T}$ in a least squares sense. We then define the matrix $K_{\textrm{sim}}=\stack(\bmat{I & 0},k^\mathrm{T})$, and the matrices $\tilde{M}'=\tilde{M} K_{\textrm{sim}}^{-1}$, $\tilde{S}'=K_{\textrm{sim}}\tilde{S}$. In standard SfM terminology, $\tilde{M}'$ and $\tilde{S}'$ represent a reconstruction up to a \emph{similarity} transformation. \item\label{it:centering} Centering. To fix the center of the absolute reference frame $\cR_s$ to the center of the 3-D structure, we first compute the vector $c=\frac{1}{P}[\tilde{S}']_{1:3,:}\vct{1}$, where $[\tilde{S}']_{1:3,:}$ indicates the matrix composed of the first three rows of $\tilde{S}'$. We then define the matrix $K_{\textrm{center}}=\bmat{I & -c\\\vct{0}^\mathrm{T} & 1}$, and the matrices $\tilde{M}''=\tilde{M}' K_{\textrm{center}}^{-1}$ and $\tilde{S}''=K_{\textrm{center}}\tilde{S}'$. At this point the fourth column of the matrix $\tilde{M}''$ contains (in the ideal case) the vector $m$ defined in \eqref{eq:m}, that is $\hat{m}=[\tilde{M}'']_{:,4}$. \item\label{it:rotations} Recovery of the rotations and structure. We now solve for the rotations $\{R_f\}$ by solving a reduced version of \eqref{eq:factorization-affine}.
In particular, we solve \begin{equation} \min_{M''\in\real{3F\times 3}} \norm{[\tilde{M}'']_{:,1:3}-CM''}_F^2+\norm{C_RM''}_F^2,\label{eq:opt-rotation} \end{equation} where $[\tilde{M}'']_{:,1:3}$ indicates the matrix containing the first three columns of $\tilde{M}''$, and $C_R$ is a block-banded matrix with blocks $I$ and $-\expm(t_s\hat{\omega}_f)^\mathrm{T}$ corresponding to the regularization term \eqref{eq:regR}. This is a simple least squares problem which can be easily solved using standard linear algebra algorithms. Ideally, the matrix $M''$ is related to the real matrix $M$ by an unknown invertible transformation $K_{\textrm{upg}}\in\real{3\times 3}$. This matrix can be determined (up to an arbitrary rotation) using the standard metric upgrade step from Affine SfM (see \cite{Tomasi:IJCV92}). Once $K_{\textrm{upg}}$ has been determined, we define $\hat{M}=M''K_{\textrm{upg}}$ and $\hat{S}=K_{\textrm{upg}}^{-1} [\tilde{S}'']_{1:3,:}$ to be the estimated motion and structure matrices. The final estimates $\{\hat{R}_f\}$ are obtained by projecting each $3\times 3$ block of $\hat{M}$ to $SO(3)$ using an SVD. \item\label{it:translations} Recovery of the translations and linear velocities. We need to extract $\{\tau_f\}$, $\{\nu_f\}$ and an estimated gravity direction $\hat{g}_s$ from the vector $\hat{m}$. Following~\eqref{eq:m}, we define the matrix \begin{equation} C_m=\bmat{ \vdots & \vdots & \vdots \\ -\Pi & 0 & 0\\ \vdots & \vdots & \vdots \\ \Pi\hat{\omega}_f & -\Pi & 0\\ \vdots & \vdots & \vdots \\ -\Pi(\hat{\omega}_f^2-\dot{\hat{\omega}}_f) & 2\Pi\hat{\omega}_{f} & -\Pi R_{f}^\mathrm{T}\\ \vdots & \vdots & \vdots } \end{equation} and the vector $c_m=\stack(\vct{0}_{4F},\{\Pi\alpha_f^{IMU}\}_{f=1}^F)$, so that, ideally, $m=C_mx-c_m$. Similarly to the definition of $C_R$ in \eqref{eq:opt-rotation}, we also define the matrices $C_\tau$, $C_\nu$ corresponding to the regularization terms \eqref{eq:regtau} and \eqref{eq:regnu}.
We can then solve for the vector $x=\stack(\{\hat{\tau}_f\},\{\hat{\nu}_f\},\hat{g}_s)$ by minimizing \begin{equation} \min_x \norm{C_mx-c_m-\hat{m}}_F^2+\norm{C_\tau x}_F^2+\norm{C_\nu x}_F^2,\label{eq:opt-translation} \end{equation} which again is a least squares problem that can be solved using standard linear algebra tools. \end{enumerate}
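As an illustration, the first two steps of the pipeline can be sketched on synthetic noiseless data (random stand-ins for $\bmat{CM & m}$ and $\bmat{S\\\vct{1}^\mathrm{T}}$; variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 4))                                  # plays the role of [CM  m]
B = np.vstack([rng.normal(size=(3, 20)), np.ones((1, 20))])   # plays the role of [S; 1^T]
W = A @ B                                                     # noiseless rank-four data

# Step 1 (factorization): rank-four factors from a truncated SVD.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_t = U[:, :4] * s[:4]    # \tilde{M}
S_t = Vt[:4, :]           # \tilde{S}

# Step 2 (similarity): find k with k^T S_t = 1^T in the least squares sense,
# then normalize the factors so the last row of the structure factor is 1^T.
k, *_ = np.linalg.lstsq(S_t.T, np.ones(20), rcond=None)
K = np.vstack([np.hstack([np.eye(3), np.zeros((3, 1))]), k])  # stack([I 0], k^T)
M_t2 = M_t @ np.linalg.inv(K)
S_t2 = K @ S_t

assert np.allclose(M_t2 @ S_t2, W)        # the factorization is preserved
assert np.allclose(S_t2[3], np.ones(20))  # last structure row is all ones
```

The remaining steps (centering, rotation/structure recovery, translation recovery) are likewise plain least-squares solves on these factors.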
\section{Introduction} The creation of interpersonal ties has been a fundamental question in the structural analysis of social networks. While strong ties emerge between individuals with similar social circles, forming a basis of trust and hence community structure, weak ties link two members who share few common contacts. The influential work of Granovetter reveals the vital roles of weak ties: It is weak ties that enable information transfer between communities and provide individuals with positional advantage, and hence influence and power \cite{Granovetter}. \smallskip Natural questions arise regarding the establishment of weak ties between communities: How to merge two departments in an organization into one? How does a company establish trade with an existing market? How to create a transport map from existing routes? We refer to such questions as {\em network building}. The basic setup involves two networks; the goal is to establish ties between them to achieve certain desirable properties in the combined network. A real-life example of network building is the inter-marriages between members of the Medici, the leading family of Renaissance Florence, and numerous other noble Florentine families, towards gaining power and control over the city \cite{Jackson-Medici}. Another example is that of Paul Revere, a prominent Patriot during the American Revolution, who strategically created social ties to raise a militia \cite{UzziDunlap}. \smallskip The examples of the Medici and Paul Revere pose a more restricted scenario of network building: Here one of the two networks involved is only a single node, and the goal is to establish this node in the other network. We motivate this setup from two directions: \begin{enumerate} \item This setup amounts to the problem of {\em socialization}: the situation when a newcomer joins a network as an organizational member.
A natural question for the newcomer is the following: How should I forge new relationships in order to take an advantageous position in the organization? As indicated in \cite{Morrison}, socialization is greatly influenced by the social relations formed by the newcomer with ``insiders'' of the network. \item This setup also amounts to the problem of {\em network expansion}. For example, an airline expands its existing route map with a new destination, while trying to ensure a small number of legs between any cities. \end{enumerate} {\em Distance} refers to the length of a shortest path between two members in a network; this is an important measure of the amount of influence one may exert on another in the network \cite{collaboration}. The {\em radius} of a network refers to the maximal distance from a central member to all others in a network. Hence when a newcomer joins an established network, it is in the interest of the newcomer to keep her distance to others bounded by the radius. The {\em diameter} of a network refers to the longest distance between any two members. It has long been argued in network science that the small-world property -- the property that any two members of a network are linked by short paths -- improves network robustness and facilitates information flow \cite{robustness}. Hence it is in the interest of the network to keep the diameter small as the network expands. Furthermore, each relation requires time and effort to establish and maintain; thus one is interested in minimizing the number of new ties while building a network. \paragraph*{\bf Contribution.} The novelty of this work is in proposing a formal, algorithmic study of organizational socialization.
More specifically we investigate the following {\em network building problems}: Given a network $G$, add a new node $u$ to $G$ and create as few ties as possible for $u$ such that: \begin{enumerate} \item[(1)] $u$ is in the center of the resulting network; or \item[(2)] the diameter of the resulting network is not larger than a specific value. \end{enumerate} Intuitively, (1) asks how a newcomer $u$ may optimally connect herself with members of $G$, so that she belongs to the center. We prove that this problem is in fact NP-complete (Theorem~\ref{thm:problemradius}). Nevertheless, we give several efficient algorithms for this problem; in particular, we demonstrate a ``simplification'' process that significantly improves performance. Intuitively, (2) asks how a network may preserve or reduce its diameter by connecting with a new member $u$. We show that ``preserving the diameter'' is trivial for most real-life networks and give two algorithms for ``reducing the diameter''. We experimentally test and compare the performance of all our algorithms. Quite surprisingly, the experiments demonstrate that a very small number of new edges is usually sufficient for each problem even when the graph becomes large. \paragraph*{\bf Related works.} This work is predated by organizational behavioral studies \cite{socialization1,socialization2,Morrison}, which look at how social ties affect a newcomer's integration and assimilation into the organization. The authors in \cite{CrossThomas,UzziDunlap} argue that {\em brokers} -- those who bridge and connect to diverse groups of individuals -- enable good network building; creating ties with a broker, and even becoming a broker oneself, allows a person to gain private information, a wide skill set and hence power.
Network building theory has also been applied to various other contexts such as economics (strategic alliance of companies) \cite{Stuart1}, governance (forming inter-government contracts) \cite{federalism}, and politics (individuals' joining of political movements) \cite{Passy}. Compared to these works, the novelty here is in proposing a formal framework of network building, which employs techniques from complexity theory and algorithmics. This work is also related to two forms of network formation: {\em dynamic models} and {\em agent-based models}, both of which aim to capture the natural emergence of social structures \cite{Jackson-Medici}. The former originates from random graphs, viewing the emergence of ties as a stochastic process which may or may not lead to an optimal structure \cite{entangle}. The latter comes from economics, treating a network as a multiagent system where utility-maximizing nodes establish ties in a competitive setting \cite{strategicNF,JacksonSurvey}. Our work differs from network formation as the focus here is on calculated strategies that achieve desirable goals in the combined network. \section{Network Building: The Problem Setup} We view a {\em network} as an undirected unweighted connected graph $G = (V, E)$ where $V$ is a set of nodes and $E$ is a set of (undirected) edges on $V$. We denote an edge $\{u,v\}$ as $uv$. If $uv\in E$ then $v$ is said to be {\em adjacent} to $u$. A {\em path} (of {\em length} $k$) is a sequence of nodes $u_0,u_1,\ldots,u_k$ where $u_iu_{i+1}\!\in\!E$ for any $0\!\leq\!i\!<\!k$. The {\em distance} between $u$ and $v$, denoted by $\mathsf{dist}(u,v)$, is the length of a shortest path from $u$ to $v$. The {\em eccentricity} of $u$ is the maximum distance from $u$ to any other node, i.e., $\mathsf{ecc}(u)=\max_{v\in V} \mathsf{dist}(u,v)$. The {\em diameter} of the network $G$ is $ \mathsf{diam}(G)=\max_{u\in V} \mathsf{ecc}(u)$. The {\em radius} $\mathsf{rad}(G)$ of $G$ is $\min_{u\in V} \mathsf{ecc}(u)$.
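All of these quantities can be computed by breadth-first search. A minimal sketch (the dict-of-neighbor-sets graph representation is our own convention for illustration) follows:

```python
from collections import deque

def distances_from(adj, u):
    """BFS distances from u in an unweighted graph given as {node: set(neighbors)}."""
    dist = {u: 0}
    q = deque([u])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def ecc(adj, u):
    return max(distances_from(adj, u).values())

def radius(adj):
    return min(ecc(adj, u) for u in adj)

def diameter(adj):
    return max(ecc(adj, u) for u in adj)

# A path 0-1-2-3-4: radius 2 (attained at the middle node), diameter 4.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(radius(path), diameter(path))   # 2 4
```

Running one BFS per node costs $O(|V|\,(|V|+|E|))$ overall, which is adequate for the graph sizes considered here.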
The {\em center} of $G$ consists of those nodes that are closest to all other nodes; it is the set $C(G)\coloneqq \{u\in V\mid \mathsf{ecc}(u)=\mathsf{rad}(G)\}$. \begin{definition} Let $G=(V,E)$ be a network and $u$ be a node not in $V$. For $S\subseteq V$, denote by $E_S$ the set of edges $\{uv\mid v\in S\}$. Define $G\oplus_S u$ as the graph $(V\cup \{u\}, E\cup E_S)$. \end{definition} We require that $S\!\neq\!\varnothing$ and thus $G\oplus_S u$ is a network built by incorporating $u$ into $G$. By \cite{UzziDunlap}, for a newcomer $u$ to establish herself in $G$ it is essential to identify {\em information brokers} who connect to diverse parts of the network. Following this intuition, we make the following definition. \begin{definition} A set $S\subseteq V$ is a {\em broker set} of $G$ if $\mathsf{ecc}(u)=\mathsf{rad}(G\oplus_S u)$; namely, linking with $S$ enables $u$ to get in the center of the network. \end{definition} Formally, given a network $G=(V,E)$, the problem of {\em network building for $u$} means selecting a set $S\!\subseteq\!V$ so that the combined network $G\!\oplus_S\!u$ satisfies certain conditions. Moreover, the desired set $S$ should contain as few nodes as possible. We focus on the following two key problems: \begin{enumerate} \item $\mathsf{BROKER}$: The set $S$ is a broker set. \item $\mathsf{DIAM}_\Delta$: The diameter $\mathsf{diam}\!(\!G\!\oplus_S\! u\!)\!\leq\!\Delta$ for a given $\Delta\leq \mathsf{diam}(G)$. \end{enumerate} Note that for any network $G$, if $u$ is adjacent to all nodes in $G$, it will have eccentricity 1, i.e., in the network $G\oplus_V u$, $\mathsf{ecc}(u)\!=\!1\!=\!\mathsf{rad}(G\oplus_V u)$ and $\mathsf{diam}(G\oplus_V u)\!=\!2$. Hence a desired $S$ must exist for $\mathsf{BROKER}$ and $\mathsf{DIAM}_\Delta$ where $\Delta\geq 2$. In the subsequent sections we systematically investigate these two problems. \section{How to Be in the Center?
Complexity and Algorithms for $\mathsf{BROKER}$} \subsection{Complexity} We investigate the computational complexity of the decision problem $\mathsf{BROKER}(G,k)$, which is defined as follows: \begin{description} \item[INPUT] A network $G=(V,E)$, and an integer $k\geq 1$. \item[OUTPUT] Does $G$ have a broker set of size $k$? \end{description} The $\mathsf{BROKER}(G,k)$ problem is trivial if $G$ has radius 1, as then $V$ is the only broker set. When $\mathsf{rad}(G)>1$, we recall the following notion: A set of nodes $S\subseteq V$ is a {\em dominating set} if every node not in $S$ is adjacent to at least one member of $S$. The \emph{domination number} $\gamma(G)$ is the size of a smallest dominating set for $G$. The $\mathsf{DOM}(G,k)$ problem concerns testing whether $\gamma(G)\!\leq\!k$ for a given graph $G$ and input $k$; it is a classical NP-complete decision problem \cite{GareyJohnson}. \begin{theorem}\label{thm:problemradius} The $\mathsf{BROKER}(G,k)$ problem is NP-complete. \end{theorem} \begin{proof} The $\mathsf{BROKER}(G,k)$ problem is clearly in NP. Therefore we only show NP-hardness. We present a reduction from $\mathsf{DOM}(G,k)$ to $\mathsf{BROKER}(G,k)$. Note that when $\mathsf{rad}(G)\!=\!1$, $\gamma(G)\!=\!1$. Hence $\mathsf{DOM}(G,k)$ remains NP-complete if we assume $\mathsf{rad}(G)>1$. Given a graph $G=(V,E)$ where $\mathsf{rad}(G)>1$, we construct a graph $H$. The set of nodes in $H$ is $\{v_i\mid v\in V, 1\leq i\leq 3\}$. The edges of $H$ are as follows: \begin{itemize} \item Add an edge $v_i v_{i+1}$ for every $v\in V$, $1\leq i<3$. \item Add an edge $v_1w_1$ for every $v,w\in V$. \item Add an edge $v_2 w_2$ for every edge $vw\in E$. \end{itemize} Namely, for each node $v\in V$ we create three nodes $v_1,v_2,v_3$ which form a path. We link the nodes in $\{v_1\mid v\in V\}$ to form a complete graph, and the nodes in $\{v_2\mid v\in V\}$ to form a copy of $G$.
Since $\mathsf{rad}(G)\geq 2$, for each node $v\in V$ there is $w\in V$ with $\mathsf{dist}(v,w)\geq 2$. Hence in $H$, $\mathsf{dist}(v_3,w_3)\geq 4$, and $\mathsf{dist}(v_2,w_3)\geq 3$. As the longest distance from any $v_1$ to any other node is $3$, we have $\mathsf{rad}(H)=3$. Suppose $S$ is a dominating set of $G$. If we add all edges $uv$ where $v\in D=\{v_2\mid v\in S\}$, then $\mathsf{ecc}(u)=3=\mathsf{rad}(H\oplus_D u)$. Hence $D$ is a broker set for $H$. Thus the size of a minimal broker set of $H$ is at most the size of a minimal dominating set of $G$. Conversely, for any set $D$ of nodes in $H$, define the {\em projection} $p(D) = \{v\mid v_i\in D \text{ for some } 1\leq i\leq 3\}$. Suppose $p(D)$ is not a dominating set of $G$. Then there is some $v\in V$ such that for all $w\in p(D)$, $\mathsf{dist}(v_2,w_2)\geq 2$. Thus if we add all edges in $\{ux\mid x\in D\}$, $\mathsf{dist}(u,v_3)\geq 4$. But then $\mathsf{ecc}(w_1)=3$ for any $w\in p(D)$. So $D$ is not a broker set. This shows that the size of a minimal dominating set of $G$ is at most the size of a minimal broker set. The above argument implies that the size of a minimal broker set for $H$ coincides with the size of a minimal dominating set for $G$. This finishes the reduction and hence the proof. \qed \end{proof} \subsection{Efficient Algorithms}\label{subsec:radius_algorithms} Theorem~\ref{thm:problemradius} implies that computing an optimal solution of $\mathsf{BROKER}$ is computationally hard. Nevertheless, we next present a number of efficient algorithms that take as input a network $G=(V,E)$ with radius $r$ and output a small broker set $S$ for $G$. A set $S\subseteq V$ is called {\em sub-radius dominating} if for every $v\in V$ not in $S$, there exists some $w\in S$ with $\mathsf{dist}(v,w)<r$.
Our algorithms are based on the following fact, which is clear from the definition: \begin{fact}\label{fact:sub-radius-dominating} Any sub-radius dominating set is also a broker set. \end{fact} \subsubsection{(a) Three greedy algorithms} We first present three greedy algorithms; each algorithm applies a heuristic that iteratively adds new nodes to the broker set $S$. The starting configuration is $S= \varnothing$ and $U= V$. During its computation, the algorithm maintains a subgraph $F=(U,E\restriction U)$, which is induced by the set $U$ of all ``uncovered'' nodes, i.e., nodes that have distance $>(r-1)$ from all current nodes in $S$. It repeatedly performs the following operations until $U=\varnothing$, at which point it outputs $S$: \begin{enumerate} \item Select a node $v\in U$ based on the corresponding heuristic and add $v$ to $S$. \item Compute all nodes at distance at most $(r-1)$ from $v$. Remove these nodes and all attached edges from $F$. \end{enumerate} \paragraph*{\bf Algorithm 1: $\mathsf{Max}$ (Max-Degree).} The first heuristic is based on the intuition that one should connect to the person with the highest number of social ties; at each iteration, it adds to $S$ a node with maximum degree in the graph $F$. \paragraph*{\bf Algorithm 2: $\mathsf{Btw}$ (Betweenness).} The second heuristic is based on {\em betweenness}, an important centrality measure in networks \cite{betweeness}. More precisely, the {\em betweenness} of a node $v$ is the number of shortest paths from all nodes to all others that pass through $v$. Hence high betweenness of $v$ implies, in some sense, that $v$ is more likely to have short distances to others. This heuristic works in the same manner as $\mathsf{Max}$ but picks nodes with maximum betweenness in $F$.
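For concreteness, the generic greedy loop with the Max-Degree heuristic can be sketched as follows (data structures and tie-breaking are illustrative; actual implementations may differ):

```python
from collections import deque

def bfs_within(adj, src, limit):
    """Nodes of adj at distance <= limit from src (BFS on the given subgraph)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if dist[v] == limit:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return set(dist)

def greedy_max_broker(adj, r):
    """Algorithm Max: repeatedly pick a max-degree uncovered node and cover
    its (r-1)-neighborhood in the subgraph F induced by uncovered nodes."""
    U = set(adj)
    S = set()
    while U:
        F = {v: adj[v] & U for v in U}       # subgraph induced by uncovered nodes
        v = max(U, key=lambda x: len(F[x]))  # Max-Degree heuristic
        S.add(v)
        U -= bfs_within(F, v, r - 1)         # v and its covered neighborhood
    return S

# Path 0-1-2-3-4 with radius r = 2: picks cover their distance-1 neighborhoods.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
S = greedy_max_broker(path, 2)
```

Since distances in the induced subgraph $F$ are never shorter than in $G$, the returned $S$ is sub-radius dominating in $G$ and hence, by the fact above, a broker set.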
\paragraph*{\bf Algorithm 3: $\mathsf{ML}$ (Min-Leaf).} The third heuristic is based on the following intuition: A node is called a {\em leaf} if it has minimum degree in the network; leaves correspond to the least connected members of the network, and may become outliers once nodes with higher degrees are removed from the network. Hence this heuristic gives first priority to leaves: at each iteration, it starts from a leaf $v$ and adds to $S$ a node that has distance at most $r-1$ from $v$. More precisely, the heuristic first picks a leaf $v$ in $F$, then applies a sub-procedure to find the next node $w$ to be added to $S$. The sub-procedure determines a path $v=u_1,u_2,\ldots$ in $F$ iteratively as follows: \begin{enumerate} \item Suppose $u_i$ is picked. If $i=r$ or $u_i$ has no adjacent node in $F$, set $u_i$ as $w$ and terminate the process. \item Otherwise select a $u_{i+1}$ (different from $u_{i-1}$) among the adjacent nodes of $u_i$ with maximum degree. \end{enumerate} After the process above terminates, the algorithm adds $w$ to $S$. Note that the distance between $w$ and $v$ is at most $r-1$. We mention that Algorithms 1 and 3 have been applied in \cite{k-domination} to {\em regular graphs}, i.e., graphs where all nodes have the same degree. In particular, $\mathsf{ML}$ has been shown to produce small $k$-dominating sets for given $k$ in the average case for regular graphs. \subsubsection{(b) Simplified greedy algorithms} One significant shortcoming of Algorithms 1--3 is that deleting nodes from the network $G$ may disconnect it, so that nodes that could have been connected via short paths are no longer reachable from each other. This process may produce {\em isolated} nodes in $F$, i.e., nodes having degree 0, which are subsequently all added to the output set $S$. Moreover, maintaining the graph $F$ at each iteration also makes implementations more complex. Therefore we next propose {\em simplified} versions of Algorithms 1--3.
\paragraph*{\bf Algorithms 4 $\mathsf{S}$-$\mathsf{Max}$, 5 $\mathsf{S}$-$\mathsf{Btw}$, 6 $\mathsf{S}$-$\mathsf{ML}$.} The simplified algorithms act in a similar way as their ``non-simplified'' counterparts; the difference is that here the heuristic works over the original network $G$ as opposed to the updated network $F$. Hence the graph $F$ is no longer computed. Instead we only need to maintain a set $U$ of ``uncovered'' nodes. The simplified algorithms have the following general structure: Start from $S= \varnothing$ and $U= V$, and repeatedly perform the following until $U=\varnothing$, at which point output $S$: \begin{enumerate} \item Select a node $v$ from $U$ based on the corresponding heuristic and add $v$ to $S$. \item Compute all nodes with distance $<\mathsf{rad}(G)$ from $v$, and remove these nodes from $U$. \end{enumerate} We stress that here the same heuristics as described above in Algorithms 1--3 are applied, except that we replace any mention of ``$F$'' in the description with ``$U$'', while all notions of degrees, distances, and betweenness are calculated based on the original network $G$. As an example, in Fig.~\ref{fig:Max} we run $\mathsf{Max}$ and $\mathsf{S}$-$\mathsf{Max}$ on the same network $G$, which contains 30 nodes. The figures show the result of both algorithms, and in particular, how $\mathsf{S}$-$\mathsf{Max}$ outputs a smaller sub-radius dominating set. We further verify via experiments below that the simplified algorithms lead to a much smaller output $S$ in almost all cases. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{example.png} \caption{ The network $G$ contains 30 nodes and has radius $\mathsf{rad}(G)=4$. The $\mathsf{Max}$ algorithm: The algorithm first puts node 3 (shown in green) into $S$. It then removes all nodes (and attached edges) that are at distance at most three from node 3; these nodes are considered ``covered'' by 3.
In the remaining graph, there are three isolated nodes 8,14,26, as well as a line of length 2. The algorithm then puts the node 18 into $S$, which ``covers'' 27 and 13. Thus the output set is $S=\{3,18,8,14,26\}$. The $\mathsf{S}$-$\mathsf{Max}$ algorithm: The algorithm first puts 3 into the set $S$, but does not remove the covered nodes. It simply constructs a set containing all ``uncovered'' nodes, namely, $\{27,18,13,14,8,26\}$. The algorithm then selects the node 13, which has maximum degree among these nodes, and puts it into $S$. It then turns out that all nodes are covered. Therefore the output set is $S=\{3,13\}$. Thus $\mathsf{S}$-$\mathsf{Max}$ is superior in this example.}\label{fig:Max} \end{figure} \subsubsection{(c) Center-based algorithms} The six algorithms presented above can all be applied to find a $k$-dominating set for arbitrary $k\geq 1$. Since our focus is on finding a sub-radius dominating set to answer the $\mathsf{BROKER}$ problem, we describe two algorithms that are specifically designed for this task. When building a network for a newcomer, it is natural to consider nodes that are already in the center of the network $G$. Hence our two algorithms are based on utilizing the center of $G$. \paragraph*{\bf Algorithm 7 $\mathsf{Center}$.} The algorithm finds a center $v$ in $G$ with minimum degree, then outputs all nodes that are adjacent to $v$. Since $v$ belongs to the center, for all $w\in V$, we have $\mathsf{dist}(v,w)\leq \mathsf{rad}(G)$ and thus there is $v'$ adjacent to $v$ such that $\mathsf{dist}(w,v')=\mathsf{dist}(w,v)-1<\mathsf{rad}(G)$. Hence the algorithm returns a sub-radius dominating set. Despite its apparent simplicity, $\mathsf{Center}$ returns surprisingly good results in many cases, as shown in the experiments below. \paragraph*{\bf Algorithm 8 $\mathsf{Imp}$-$\mathsf{Center}$.} We present a modified version of $\mathsf{Center}$, which we call $\mathsf{Imp}$-$\mathsf{Center}$.
The algorithm first picks a center with minimum degree, and then orders all its neighbors by decreasing degree. It adds the first neighbor to $S$ and removes all nodes $\leq (r-1)$ steps from it. This may disconnect the graph into several connected components. Take the largest component $C$. If $C$ has a smaller radius than $r$, we add the center of this component to $S$; otherwise we add the next neighbor to $S$. We then remove from $F$ all nodes at distance $\leq (r-1)$ from the newly added node. This procedure is repeated until $F$ is empty. See Procedure~\ref{alg:Center_improved}. Fig.~\ref{fig:Center} shows an example where $\mathsf{Imp}$-$\mathsf{Center}$ outperforms $\mathsf{Center}$. \begin{algorithm}[!htb] \floatname{algorithm}{Procedure} \caption{ $\mathsf{Imp}$-$\mathsf{Center}$: Given $G=(V,E)$ (with radius $r$)} \label{alg:Center_improved} \begin{algorithmic} \State Pick a center node $v$ in $G$ with minimum degree $d$ \State Sort all adjacent nodes of $v$ to a list $u_1,u_2,\ldots,u_d$ in decreasing order of degrees \State Set $S\leftarrow \varnothing$, $U\leftarrow V$, $F\leftarrow G$ and $i\leftarrow 1$ \While{$U\neq \varnothing$} \State Set $C$ as the largest connected component in $F$ \If{$\mathsf{rad}(C)<\mathsf{rad}(G)-1$} \State Pick a center node $w$ of $C$. Set $S\leftarrow S\cup \{w\}$ \State Set $U\leftarrow U\setminus \{w'\in U\mid \mathsf{dist}(w,w')<r\}$ \Else \State Set $S\leftarrow S\cup \{u_i\}$ \State Set $U\leftarrow U\setminus \{w'\in U\mid \mathsf{dist}(u_i,w')<r\}$ \State Set $i\leftarrow i+1$ \EndIf \State Set $F$ as the subgraph induced by the current $U$ \EndWhile \State\Return $S$ \end{algorithmic} \end{algorithm} \begin{figure}[!htb] \centering \includegraphics[width=0.7\textwidth]{sage6} \caption{ The graph $G$ has radius $\mathsf{rad}(G)=3$. The yellow node 0 is a center with minimum degree 4. Thus $\mathsf{Center}$ outputs 4 nodes $\{1,4,18,29\}$. The dark green node 29 adjacent to 0 has maximum degree; the red nodes are ``uncovered'' by 29.
Thus $\mathsf{Imp}$-$\mathsf{Center}$ outputs the 3 blue circled nodes $\{12,25,29\}$. }\label{fig:Center} \end{figure} Finally, we note that all of Algorithms 1--8 output a sub-radius dominating set $S$ for the network $G$. Thus the following theorem is a direct implication of Fact~\ref{fact:sub-radius dominating}. \begin{theorem} All of Algorithms 1--8 output a broker set for the network $G$. \end{theorem} \subsection{Experiments for $\mathsf{BROKER}$} We implemented the algorithms using Sage \cite{Sage}. We apply two models of random graphs: The first (BA) is the Barab\'asi-Albert preferential attachment model, which generates scale-free graphs whose node degree distribution follows a power law; this is an essential property of numerous real-world networks \cite{BA}. The second (NWS) is the Newman-Watts-Strogatz small-world model \cite{NewmanWattsStrogatz}, which produces graphs with small average path lengths and high clustering coefficients. For each algorithm we are interested in two indicators of its performance: 1) {\em Output size}: The average size of the output broker set (for a specific class of random graphs). 2) {\em Optimality rate}: The probability that the algorithm gives an optimal broker set for a random graph. To compute this we first compute the size of an optimal broker set (by brute force) and count the number of times the algorithm produces an optimal solution for the generated graphs. \paragraph*{\bf Experiment 1: Output sizes.} We generate $300$ graphs whose numbers of nodes vary between $100$ and $1000$ using each random graph model. We compute output sizes averaged over generated graphs with the same number of nodes $n$ and radius $r$. The results are shown in Fig.~\ref{fig:improvedRes}. From the results we see: a) The simplified algorithms produce significantly smaller broker sets than their unsimplified counterparts. This shows the superiority of the simplified algorithms. b) BA graphs in general allow smaller output sets than NWS graphs.
This may be due to the scale-free property, which results in high skewness of the degree distribution. \begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{comaring_noMin} \caption{ Comparing results: average performance of the $\mathsf{Max}$, $\mathsf{Btw}$ and $\mathsf{ML}$ algorithms versus their simplified versions on randomly generated graphs (BA graphs on the left; NWS on the right)}\label{fig:improvedRes} \end{figure} \paragraph*{\bf Experiment 2: Optimality rates.} For the second goal, we compute the optimality rates of the algorithms when applied to random graphs, shown in Fig.~\ref{fig:res}. For BA graphs, the simplified algorithm $\mathsf{S}$-$\mathsf{ML}$ has a significantly higher optimality rate ($\geq 85\%$) than the other algorithms. By contrast, its unsimplified counterpart $\mathsf{ML}$ has the worst optimality rate. This is somewhat contrary to Duckworth and Mans's work showing that $\mathsf{ML}$ gives very small solution sets for regular graphs \cite{k-domination}. For NWS graphs, several algorithms have almost equal optimality rates. The three best algorithms are $\mathsf{S}$-$\mathsf{Max}$, $\mathsf{S}$-$\mathsf{Btw}$ and $\mathsf{S}$-$\mathsf{ML}$, which have varying performance for graphs of different sizes (see Fig.~\ref{fig:optimality_nodes}). \begin{figure}[!htb]\centering \includegraphics[width=\textwidth]{percentageRes_noMin} \caption{ Optimality rates for different types of random graphs}\label{fig:res} \end{figure} \begin{figure}[!htb]\centering \includegraphics[width=\textwidth]{nodes_combined_noMin} \caption{ Optimality rates when graphs are classified by sizes}\label{fig:optimality_nodes} \end{figure} \paragraph*{\bf Experiment 3: Real-world datasets.} We test the algorithms on several real-world datasets: The $\mathsf{Facebook}$ dataset, collected from survey participants of the Facebook app, consists of friendship relations on Facebook \cite{facebook}.
$\mathsf{Enron}$ is an email network of the Enron corporation made public by the FERC \cite{enron1}. Nodes of the network are email addresses, and if an address $i$ sent at least one email to address $j$, the graph contains an undirected edge between $i$ and $j$. $\mathsf{Col} 1$ and $\mathsf{Col} 2$ are collaboration networks that represent scientific collaborations between authors of papers submitted to the General Relativity and Quantum Cosmology category ($\mathsf{Col} 1$), and to the High Energy Physics Theory category ($\mathsf{Col} 2$) \cite{collaboration}. \begin{table}[!htb] \centering \begin{tabular}{| l | l | l| l | l |} \hline & Facebook & Enron & Col1 & Col2\\ \hline Number of nodes & 4,039 & 33,969 & 4,158& 8,638 \\ \hline Number of edges & 88,234& 180,811 & 13,422& 24,806\\ \hline Largest connected subgraph & 4,039 & 33,696 & 4,158 & 8,638 \\ \hline Diameter & 8 & 13 & 17 & 18\\ \hline Radius & 4 & 7 & 9 & 10\\ \hline \end{tabular}\caption{ Network properties} \label{table:datasets} \end{table} Results on the datasets are shown in Fig.~\ref{fig:datasets_res}. The $\mathsf{Btw}$ and $\mathsf{S}$-$\mathsf{Btw}$ algorithms become too inefficient as they require computing shortest paths between all pairs in each iteration. Moreover, $\mathsf{S}$-$\mathsf{Max}$ also did not terminate within reasonable time for the $\mathsf{Enron}$ dataset. Even though the datasets have many nodes, the output sizes are in fact very small (at most 10). For instance, the smallest output sets of the $\mathsf{Enron}$, $\mathsf{Col} 1$ and $\mathsf{Col} 2$ networks contain just two nodes. In some sense, this means that to become central even in a large social network, it is often enough to establish only very few connections.
\begin{figure}[!htb] \centering \includegraphics[width=.8\textwidth]{datasetRes_noMin} \caption{ The number of new ties for the four real-world networks}\label{fig:datasets_res} \end{figure} Among all algorithms, $\mathsf{Imp}$-$\mathsf{Center}$ has the best performance, producing the smallest output set for all networks. Moreover, for $\mathsf{Enron}$, $\mathsf{Col} 1$ and $\mathsf{Col} 2$, $\mathsf{Imp}$-$\mathsf{Center}$ returns an optimal broker set with cardinality $2$. A rather surprising fact is that, despite its straightforward, seemingly naive logic, $\mathsf{Center}$ also produces small outputs in three networks. This reflects the fact that in order to become central it is often a good strategy to create ties with the friends of a central person. \section{How to Preserve or Improve the Diameter? Complexity and Algorithms for $\mathsf{DIAM}_\Delta$}\label{sec:diameter} Let $G=(V,E)$ be a network and $u\notin V$. The $\mathsf{DIAM}_\Delta$ problem asks for a set $S\subseteq V$ such that the network $G\oplus_S u$ has diameter $\leq\Delta$; we refer to any such $S$ as {\em $\Delta$-enabling}. \subsection{Preserving the diameter} We first look at the special case $\Delta=\mathsf{diam}(G)$, which has a natural motivation: How can an airline expand its existing route map with an additional destination while ensuring that the maximum number of hops between any two destinations does not increase? We are interested in creating as few new connections as possible to reach this goal. Let $\delta(G)$ denote the size of the smallest $\mathsf{diam}(G)$-enabling set for $G$. We say a graph is {\em diametrically uniform} if all nodes have the same eccentricity.
\begin{theorem}\label{thm:problemdiameter} \begin{enumerate} \item[(a)] If $G$ is not diametrically uniform, then $\delta(G)=1$. \item[(b)] If $G$ is complete, then $\delta(G)=|V|$. \item[(c)] If $G$ is diametrically uniform and incomplete, then $1<\delta(G)\leq d$ where $d$ is the minimum degree of any node in $G$, and the upper bound $d$ is sharp. \end{enumerate} \end{theorem} \begin{proof} For {\bf (a)}, suppose $G$ is not diametrically uniform. Take any $v$ with $\mathsf{ecc}(v)<\mathsf{diam}(G)$. Then in the expanded network $G\oplus_{\{v\}} u$, we have $\mathsf{ecc}(u)=\mathsf{ecc}(v)+1\leq \mathsf{diam}(G)$. {\bf (b)} is clear. For {\bf (c)}, suppose $G$ is diametrically uniform and incomplete. For the lower bound, suppose for contradiction that $\delta(G)=1$. Then there is some $v\in V$ with the following property: In the network $G\oplus_{\{v\}} u$ we have $\mathsf{ecc}(u)\leq \mathsf{diam}(G)$, which means that $\mathsf{ecc}(v)<\mathsf{diam}(G)$. This contradicts the fact that $G$ is diametrically uniform. For the upper bound, take a node $v\in V$ with minimum degree $d$. Let $N$ be the set of nodes adjacent to $v$. From any node $w\neq v$, there is a shortest path of length $\leq \mathsf{diam}(G)$ to $v$. This path contains a node in $N$. Hence $w$ is at distance $\leq \mathsf{diam}(G)-1$ from some node in $N$. Furthermore, as $G$ is not complete, $\mathsf{diam}(G)\geq 2$ and $v$ is at distance $1\leq \mathsf{diam}(G)-1$ from the nodes in $N$. \qed \end{proof} \paragraph*{Remark} We point out that in case (c) calculating the exact value of $\delta(G)$ is hard: In \cite{hardness_diameter}, its parametrized complexity is shown to be complete for $\mathsf{W}[2]$, the second level of the $\mathsf{W}$-hierarchy. Hence $\mathsf{DIAM}_\Delta$ is unlikely to be in $\mathsf{P}$. On the other hand, we argue that real-life networks are rarely diametrically uniform.
Hence by Thm.~\ref{thm:problemdiameter}(a), the smallest number of new connections needed to preserve the diameter is 1. \subsection{Reducing the diameter} We now explore the problem $\mathsf{DIAM}_\Delta$ where $2\leq \Delta<\mathsf{diam}(G)$; this refers to the goal of placing a new member in the network and creating ties so that all pairs of members are within a smaller distance of each other. We suggest two heuristics to solve this problem. \paragraph*{\bf Algorithm 9 $\mathsf{Periphery}$.} The {\em periphery} $P(G)$ of $G$ consists of all nodes $v$ with $\mathsf{ecc}(v)=\mathsf{diam}(G)$. Suppose $\mathsf{diam}(G)>2$. Then the combined network $G\oplus_{P(G)} u$ has diameter smaller than $\mathsf{diam}(G)$. Hence we apply the following heuristic: Two nodes $v,w$ in $G$ are said to form a {\em peripheral pair} if $\mathsf{dist}(v,w)=\mathsf{diam}(G)$. The algorithm first adds the new node $u$ to $G$ and repeats the following procedure until the current graph has diameter $\leq \Delta$:\\ 1) Randomly pick a peripheral pair $v,w$ in the current graph\\ 2) Add the edges $uv,uw$ if they have not been added already\\ 3) Compute the diameter of the updated graph \noindent Note that once $v,w$ are chosen as a peripheral pair and the corresponding edges $uv,uw$ added, $v$ and $w$ will have distance 2 and will not be chosen as a peripheral pair again. Hence the algorithm eventually terminates and produces a graph with diameter at most $\Delta$.
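The loop above can be sketched in a few lines of illustrative Python (our own names `periphery_attach`, `diameter`, `bfs_dist`; for reproducibility the sketch picks a maximum-distance pair deterministically instead of randomly):

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source shortest-path distances by breadth-first search
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def diameter(adj):
    # longest shortest path; infinite if the graph is disconnected
    d = 0
    for s in adj:
        dist = bfs_dist(adj, s)
        if len(dist) < len(adj):
            return float('inf')
        d = max(d, max(dist.values()))
    return d

def periphery_attach(adj, Delta, u='u'):
    # Algorithm 9 sketch: wire the newcomer u to peripheral pairs
    # of the current graph until its diameter drops to <= Delta.
    g = {v: set(ws) for v, ws in adj.items()}
    g[u] = set()
    while diameter(g) > Delta:
        # pick a pair of old nodes at maximum distance in the current graph
        best, pair = -1, None
        for v in adj:
            dist = bfs_dist(g, v)
            for w in adj:
                if dist.get(w, -1) > best:
                    best, pair = dist[w], (v, w)
        v, w = pair
        g[u] |= {v, w}
        g[v].add(u)
        g[w].add(u)
    return sorted(g[u]), g
```

As argued above, each chosen pair ends up at distance 2 through $u$, so the loop terminates for any $\Delta\geq 2$.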
\paragraph*{\bf Algorithm 10 $\mathsf{CP}$ (Center-Periphery).} This algorithm applies a similar heuristic as $\mathsf{Periphery}$, but instead of picking peripheral pairs at each iteration, it first picks a node $v$ in the center and adds the edge $uv$; it then repeats the following procedure until the current graph has diameter $\leq \Delta$:\\ 1) Randomly pick a node $w$ in the periphery of the current graph\\ 2) Add the edge $uw$ if it has not been added already\\ 3) Compute the diameter of the updated graph \noindent Suppose at one iteration the algorithm picks $w$ in the periphery. Then after this iteration the eccentricity of $w$ is at most $r+2$, where $r$ is the radius of the graph. \subsection{Experiments for $\mathsf{DIAM}_\Delta$} We implement and test the performance of Algorithms 9 and 10 for the problem $\mathsf{DIAM}_\Delta$. The performance of these algorithms is measured by the number of new ties created. \paragraph*{\bf Experiment 4: Random graphs.} We apply the two models of random graphs, BA and NWS, as described above. We generated $350$ graphs and considered the case $\Delta = \mathsf{diam}(G) - 1$, i.e.\ the aim was to improve the diameter by one. For both types of random graphs (fixing size and radius), the average number of new ties is shown in Fig.~\ref{fig:Barabasi_improve}. The experiments show that $\mathsf{Periphery}$ performs better when the radius of the graph is close to the diameter (when the radius is $>2/3$ of the diameter), whilst $\mathsf{CP}$ is slightly better when the radius is significantly smaller than the diameter.
\begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{improve_combined} \caption{ Comparing two methods for improving the diameter applied to BA (left) and NWS (right) graphs}\label{fig:Barabasi_improve} \end{figure} \paragraph*{\bf Experiment 5: Real-World Datasets.} We run both $\mathsf{Periphery}$ and $\mathsf{CP}$ on the networks $\mathsf{Col} 1$ and $\mathsf{Col} 2$ introduced above, setting $\Delta =\mathsf{diam}(G)-i$ for $1\leq i\leq 4$. The numbers of new edges obtained by $\mathsf{Periphery}$ and $\mathsf{CP}$ are shown in Figure~\ref{fig:collaboration_improve}; naturally, for increasing $i$, more ties need to be created. We point out that, despite the large total number of nodes, one needs fewer than $19$ new edges to improve the diameter even by four. This reveals an interesting phenomenon: While a collaboration network may be large, a few more collaborations are sufficient to reduce the diameter of the network. On the $\mathsf{Facebook}$ dataset, $\mathsf{Periphery}$ is significantly better than $\mathsf{CP}$: To reduce the diameter of this network from $8$ to $7$, $\mathsf{Periphery}$ requires 2 edges while $\mathsf{CP}$ requires $47$. To reach diameter $6$, the numbers of new edges increase to 6 for $\mathsf{Periphery}$ and 208 for $\mathsf{CP}$. \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{collaboration_improve} \caption{ Applying the algorithms for improving the diameter to the Collaboration 1 and Collaboration 2 datasets}\label{fig:collaboration_improve} \end{figure} \section{Conclusion and Outlook} This work studies how ties are built between a newcomer and an established network to reach certain structural properties. Although achieving optimality is often computationally hard, there are efficient heuristics that reach the desired goals using few new edges. We also observe that the number of new links required to achieve the specified properties remains small even for large networks.
This work amounts to an effort towards an algorithmic study of network building. Along this line, natural questions that have yet to be explored include: (1) Investigating the creation of ties between two arbitrary networks, namely, how ties are created between two established networks to maintain or reduce the diameter. (2) When building networks in an organizational context (such as merging two departments in a company), one normally needs to take into account not only the informal social relations, but also formal ties such as reporting relations, which are typically directed edges \cite{LiuMoskvina}. We plan to investigate network building from an organizational management perspective by incorporating both types of ties. \bibliographystyle{splncs03}
\section{Introduction} Inertial particles moving in a turbulent flow do not simply trace the paths of fluid elements since particle inertia allows the particles to detach from the local fluid. This causes small-scale spatial clustering even in incompressible turbulence~\citep{Max87,Dun05,Bec07,Vas08,Bra14}. Moreover, {\it preferential sampling} of regions with high vorticity/strain is observed for particles that are lighter/heavier than the fluid. As a result, the statistical properties of inertial heavy or light particles [for example water droplets in turbulent clouds~\citep{Sha03} or air bubbles in water~\citep{Maz03,Ren05}] in turbulence are highly non-trivial both at sub-viscous and larger scales. An important example is the case of inertial particle accelerations \citep{Bec06c,Qur08,Gib10,Gib12,Vol11,Pra12}, see also the review by \cite{TB2009}. At large inertia, singularities (caustics) occur in the dynamics of heavy particles \citep{Fal02,Wil05}, giving rise to large relative velocities between close particles \citep{Sun97,Fal07c,Bec10,Gus11b,Gus13c}, and leading eventually to an enhancement of collision rates~\citep{Wil06,Bec14}.\\ The dynamics of light particles is important because of its relevance to fundamental and applied questions. Light particles can be used as small probes that preferentially track high-vorticity structures, highlighting statistical and topological properties of the underlying fluid conditioned on those structures. In the limit of high volume fractions they might also have a complex feedback on the flow, including the possibility of reducing the turbulent drag \citep{Jac10,Ber05,Ber07}. Compared to heavy particles, the dynamics of light particles is more difficult to analyse because pressure-gradient forces~\citep{Tch47} and added-mass effects must be taken into account. Apart from the fact that light particles tend to be drawn into vortices, there is little theoretical analysis of their dynamics in turbulent flows.
This motivated us to formulate a {\it closure scheme} that allows us to compute inertial accelerations of both heavy and light particles in turbulence. We describe the scheme and we test it by comparing its predictions to results of direct numerical simulations (DNS). We consider a simple model for the particle dynamics, taking into account added-mass and pressure-gradient forces, but neglecting the effect of buoyancy on the acceleration as well as the Basset-Boussinesq history force, as in many previous DNS studies \citep{Bab00,Bec05,Cal08,Cal09,Vol08}. We comment on these questions in the conclusions. Our scheme for the inertial particle accelerations approximates the particle paths by the Lagrangian fluid-element paths. It is a closure for the particle equation of motion that neglects {\it inertial preferential sampling} such as heavy inertial particles being centrifuged out of long-lived vortical regions of the flow~\citep{Max87}, so that they preferentially sample straining regions, or light particles which by contrast are drawn into vortices \citep{Cal08,Cal08b}. The latter is simply a consequence of the fact that light particles are influenced by pressure gradients in the undisturbed flow. Our approximation cannot be quantitatively correct when inertial preferential sampling is significant, but overall it yields a good qualitative description of how particle accelerations depend on the particle density relative to that of the fluid, and on the {\it Stokes number}, a dimensionless measure of particle inertia. Moreover, and more interestingly, the closure also predicts highly non-trivial properties of the intermittent, non-Gaussian acceleration fluctuations, as the magnitude of inertial effects and the Reynolds number of the turbulence change. For example, the closure scheme predicts that the flatness of the acceleration develops a maximum or a minimum as a function of $\st$ for light or heavy particles, respectively.
This is in good qualitative agreement with the measurements on the DNS data. For heavy particles the closure fails when inertial preferential sampling affects inertial particle accelerations, i.e.\ for Stokes numbers of order unity, see Fig.~1{\bf b} in \citep{Bec06c}. In fact, for heavy particles our closure is equivalent to the \lq filtering\rq{} mechanism discussed by \cite{Bec06c}. For light particles, by contrast, comparisons to DNS results show that inertial preferential sampling effects are strong only at very large Stokes numbers, $\st \sim 10$, leading to an enlarged range of small and intermediate values of $\st$ where the closure works qualitatively well. The remainder of this article is organized as follows. In the next section we formulate the problem, introduce the equations of motion and qualitatively discuss inertial preferential sampling. In Section \ref{sec:closure} we describe our Lagrangian closure scheme for inertial particle accelerations. Data from DNS of turbulent flows are analysed in Section~\ref{sec:turbo} together with a detailed comparison to the predictions from the closure scheme. In Section~\ref{sec:stat_model}, we assess the potential of the closure on a data set obtained using a stochastic surrogate for the fluid velocity field. Section~\ref{sec:conclusions} contains the conclusions. \section{Formulation of the problem} \label{sec:problem} Many studies have considered the dynamics of heavy inertial particles, much denser than the carrying fluid. When the particles are very heavy and at the same time very small (point particles) the motion is simply determined by Stokes' drag. The dynamics of light particles by contrast is also affected by pressure gradients of the unperturbed fluid, and added-mass effects.
Neglecting the effect of gravitational settling (and thus buoyancy) the equation of motion reads: \begin{equation} \begin{cases} \dot{\ve r}_t = {\ve v }_t\,,\\ \dot{\ve v }_t = \beta {\rm D}_t \ve u(\ve r_t,t) + \big(\ve u(\ve r_t,t) - {\ve v}_t\big)\big/\tau_{\rm s}\,, \label{mrfc} \end{cases} \end{equation} where $\ve r_t$ is the particle position at time $t$, $\ve v_t$ is the particle velocity, $\ve u(\ve r_t, t)$ is the velocity field of the undisturbed fluid and ${\rm D}_t\ve u = \partial_t \ve u + (\ve u\cdot\ve \nabla)\ve u$ is the Lagrangian derivative. The dimensionless constant $\beta= 3\rho_{\rm f}/(\rho_{\rm f} + 2 \rho_{\rm p})$ accounts for the contrast between particle density $\rho_{\rm p}$ and fluid density $\rho_{\rm f}$, while the Stokes number is defined as $\st = \tau_{\rm s}/\tau_{\rm K}$ where the particle response time is $\tau_{\rm s} = R^2 /(3 \nu \beta)$, $R$ is the particle radius, $\nu$ the kinematic viscosity of the flow and $\tau_{\rm K} =\sqrt{\nu/\epsilon}$ the Kolmogorov time defined in terms of the fluid energy dissipation, $\epsilon$. Many studies have employed this model \citep{Bab00,Bec05,Cal08,Cal09,Vol08}. The model takes into account added-mass effects but neglects buoyancy forces. It is an open question under which circumstances this is a quantitative model for the acceleration of small particles in turbulence. The following analysis is based on Eqs.~(\ref{mrfc}). It is important to first understand this case before addressing more realistic situations (inclusion of buoyancy, finite size, collisions and feedback on the flow). The sources of difficulty are twofold. First, even in the much simpler case of a non-turbulent Eulerian velocity field $\ve u(\ve r,t)$ the particle dynamics is still complicated and often chaotic, simply because Eqs.~(\ref{mrfc}) are nonlinear. Second, turbulence makes the problem even harder due to the existence of substantial spatial and temporal fluctuations.
This results in chaotic Lagrangian dynamics of fluid elements. Note that even though Lagrangian fluid elements sample space uniformly, their instantaneous motion is in general correlated with structures in the underlying flow. This implies that multi-time Lagrangian flow correlation functions evaluated along tracer trajectories do not coincide with the underlying Eulerian correlation functions which are evaluated at fixed positions in space. Particles that are heavier or lighter than the fluid may detach from the flow if they have inertia. This leads to the {\it inertial preferential sampling} mentioned in the introduction. As a result, the flow statistics experienced by an inertial particle differs from that of a Lagrangian fluid element. \section{Lagrangian closure} \label{sec:closure} Note that the dynamics determined by equation (\ref{mrfc}) reduces to that of a tracer in both limits $\st \rightarrow 0$ and $\beta \rightarrow 1$. Indeed, in the limit $\tau_s \rightarrow 0$, imposing a finite Stokes drag leads to $\ve v_t = \ve u + O(\tau_s)$. For $\beta = 1$, by contrast, we may approximate the material derivative along fluid-tracer trajectories by the derivative along the trajectory of a particle, ${\rm D}_t \ve u \sim \dot{\ve u}$, and consistently check that the evolution of (\ref{mrfc}) leads to an exponential relaxation of the particle trajectory to that of the tracer. Hence, the idea is to approximate the effects of inertial forces on the particle trajectory starting from the evolution of tracers. This approach cannot be exact. It is, for example, known that vortices are preferentially sampled by inertial particles. Nevertheless, it is important to understand and quantify how big the difference is as a function of the distance in the parameter phase space, $(\st, \beta)$, from the two lines $\st=0$ or $\beta=1$ where the closure must be exact (see Fig.~\ref{fig:phasespace}).
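These two limits are easy to check numerically. The sketch below (our own illustrative setup, not part of the DNS of this paper) integrates Eqs.~(\ref{mrfc}) with an explicit Euler scheme in a steady two-dimensional cellular flow $\ve u=(\sin x\cos y,\,-\cos x\sin y)$, for which the material derivative $(\ve u\cdot\ve\nabla)\ve u=(\sin x\cos x,\,\sin y\cos y)$ is available in closed form:

```python
import math

def u(r):
    # steady, incompressible 2D cellular flow
    x, y = r
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def Dt_u(r):
    # material derivative D_t u = (u . grad) u of the steady flow, analytic
    x, y = r
    return (math.sin(x) * math.cos(x), math.sin(y) * math.cos(y))

def particle(r0, beta, tau_s, T, dt=1e-3):
    # explicit Euler integration of dr/dt = v, dv/dt = beta*D_t u + (u - v)/tau_s
    r, v = list(r0), list(u(r0))
    for _ in range(int(T / dt)):
        ur, au = u(r), Dt_u(r)
        a = [beta * au[i] + (ur[i] - v[i]) / tau_s for i in range(2)]
        r = [r[i] + dt * v[i] for i in range(2)]
        v = [v[i] + dt * a[i] for i in range(2)]
    return r

def tracer(r0, T, dt=1e-3):
    # fluid element: dr/dt = u(r)
    r = list(r0)
    for _ in range(int(T / dt)):
        ur = u(r)
        r = [r[i] + dt * ur[i] for i in range(2)]
    return r
```

With $\tau_{\rm s}\ll 1$ (for any $\beta$) or with $\beta=1$ the particle path tracks the tracer path, while for intermediate $\st$ and $\beta\neq 1$ it detaches, which is the effect the closure below neglects.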
The starting point is to evaluate (\ref{mrfc}) along tracer trajectories: \begin{equation} \begin{cases} \dot{\ve r}^{({\rm L})}_t = \ve u(\ve r^{({\rm L})}_t,t)\,, \\ \dot{\ve v}_t = \beta {\rm D}_t \ve u(\ve r^{({\rm L})}_t,t) + \big( \ve u(\ve r^{({\rm L})}_t,t) - {\ve v}_t\big)\big/\tau_{\rm s}\,, \label{mrfc2} \end{cases} \end{equation} where $\ve r^{({\rm L})}_t$ denotes the Lagrangian trajectory of a tracer particle. This approximation is a {\it closure} in the sense that the first equation in (\ref{mrfc2}) is independent of the second one, and the second equation can be solved in terms of the Lagrangian velocity statistics $\ve u(\ve r^{({\rm L})}_t,t)$ of the underlying flow. We solve (\ref{mrfc2}) for $\ve v_t$ (disregarding initial conditions here and below because these do not matter for the steady-state statistics) to obtain \begin{align} \ve v_t&=\beta\ve u(\ve r_{t}^{(\rm L)},t)+\frac{1-\beta}{\tau_{\rm s}}\int_0^t{\rm d}t_1e^{(t_1-t)/\tau_{\rm s}}\ve u(\ve r_{t_1}^{(\rm L)},t_1)\,. \eqnlab{v_implicit} \end{align} Using this expression, the particle acceleration follows from the second of Eqs.~(\ref{mrfc2}) \begin{align} \ve a_t&=\beta\frac{{\rm D}\ve u}{{\rm D}t}(\ve r_{t}^{(\rm L)},t)+\frac{1-\beta}{\tau_{\rm s}}\int_0^t{\rm d}t_1e^{(t_1-t)/\tau_{\rm s}}\frac{{\rm D}\ve u}{{\rm D}t}(\ve r_{t_1}^{(\rm L)},t_1)\,. \eqnlab{a_implicit} \end{align} Using \Eqnref{a_implicit} we express two-time acceleration statistics in terms of the two-point Lagrangian acceleration correlation function \begin{equation} C_{\rm L}(t) \equiv \langle {\rm D}_t\ve u(\ve r_t^{(\rm L)},t)\cdot{\rm D}_t\ve u(\ve r_0^{(\rm L)},0)\rangle\,.
\end{equation} In the steady-state limit we find \begin{align} \langle \ve a_t\cdot\ve a_0\rangle &= \beta^2C_{\rm L}(t)+\frac{1-\beta^2}{\tau_{\rm s}}\Big[ \cosh\left[\frac{t}{\tau_{\rm s}}\right]\int_t^\infty{\rm d}t_1e^{-t_1/\tau_{\rm s}}C_{\rm L}(t_1)\nonumber\\ &\hspace*{4cm} +e^{-t/\tau_{\rm s}}\int_0^t{\rm d}t_1 \cosh\left[\frac{t_1}{\tau_{\rm s}}\right]C_{\rm L}(t_1) \Big]\,. \label{aCorr} \end{align} The acceleration variance is obtained by letting $t\to 0$ in Eq.~(\ref{aCorr}) \begin{equation} \langle\ve a^2\rangle=\beta^2C_{\rm L}(0)+\frac{1-\beta^2}{\tau_{\rm s}}\int_0^\infty{\rm d}t_1e^{-t_1/\tau_{\rm s}}C_{\rm L}(t_1)\,. \label{aVar} \end{equation} Similarly, the fourth moment of the particle acceleration is obtained as: \begin{align} \langle |\ve a|^4\rangle& = \beta^4C_{\rm L}(0,0,0) -4\frac{(\beta-1)\beta^3}{\tau_{\rm s}}\int_0^\infty{\rm d}t_1e^{-t_1/\tau_{\rm s}}C_{\rm L}(-t_1,0,0)\nn\\ & +6\frac{(\beta-1)^2\beta^2}{\tau_{\rm s}^2}\int_0^\infty{\rm d}t_1\int_0^\infty{\rm d}t_2e^{-(t_1+t_2)/\tau_{\rm s}}C_{\rm L}(-t_1,-t_2,0) \nn\\ & -\frac{(\beta-1)^3(3\beta+1)}{\tau_{\rm s}^3}\int_0^\infty{\rm d}t_1\int_0^\infty{\rm d}t_2\int_0^\infty{\rm d}t_3e^{-(t_1+t_2+t_3)/\tau_{\rm s}}C_{\rm L}(-t_1,-t_2,-t_3)\,. \label{aQuad} \end{align} Here isotropy of the acceleration components $\langle a_ia_ja_ka_l\rangle=[\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}]\langle a_1^4\rangle/3$ was used to express $\langle |\ve a|^4\rangle$ in terms of the four-point Lagrangian correlation function \begin{align} C_{\rm L}(t_1,t_2,t_3)\equiv \frac{d(d+2)}{3}\langle {\rm D}_tu_1(\ve r_{t_1}^{(\rm L)},t_1){\rm D}_tu_1(\ve r_{t_2}^{(\rm L)},t_2){\rm D}_tu_1(\ve r_{t_3}^{(\rm L)},t_3){\rm D}_tu_1(\ve r_{0}^{(\rm L)},0)\rangle\,, \end{align} where ${\rm D}_tu_1$ is a component of the fluid acceleration and $d$ is the spatial dimension.
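As a quick consistency check (ours, not the authors'), Eqs.~(\ref{aCorr}) and (\ref{aVar}) can be evaluated numerically for a model correlation function; the Gaussian choice $C_{\rm L}(t)=e^{-t^2}$ and the parameter values below are assumptions made purely for testing. One expects $\langle\ve a_t\cdot\ve a_0\rangle\to\langle\ve a^2\rangle$ as $t\to 0$, and $\langle\ve a^2\rangle\to C_{\rm L}(0)$ as $\tau_{\rm s}\to 0$ (the tracer limit):

```python
import numpy as np
from scipy.integrate import quad

CL = lambda t: np.exp(-t**2)   # model Lagrangian correlation function (assumption)

def a_corr(t, beta, tau):
    """Steady-state <a_t . a_0>, Eq. (aCorr)."""
    i1 = quad(lambda t1: np.exp(-t1 / tau) * CL(t1), t, np.inf)[0]
    i2 = quad(lambda t1: np.cosh(t1 / tau) * CL(t1), 0.0, t)[0]
    return beta**2 * CL(t) + (1.0 - beta**2) / tau * (
        np.cosh(t / tau) * i1 + np.exp(-t / tau) * i2)

def a_var(beta, tau):
    """Acceleration variance, Eq. (aVar); the substitution s = t1/tau keeps quad accurate."""
    i = quad(lambda s: np.exp(-s) * CL(s * tau), 0.0, np.inf)[0]
    return beta**2 * CL(0.0) + (1.0 - beta**2) * i

beta, tau = 0.4, 0.7
print(a_corr(0.0, beta, tau) - a_var(beta, tau))  # t -> 0 limit: ~0
print(a_var(beta, 1e-4))                          # tau_s -> 0: tends to CL(0) = 1
```

For $\beta=1$ the variance reduces to $C_{\rm L}(0)$ for any $\tau_{\rm s}$, and for $\tau_{\rm s}\to0$ it does so for any $\beta$, as expected for the tracer limit.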
Eqs.~(\ref{aCorr}), (\ref{aVar}) and (\ref{aQuad}) express the fluctuations of inertial particle accelerations in terms of Lagrangian correlation functions of the underlying flow, and predict how the inertial particle accelerations depend on $\beta$ and $\st$. The integrals in Eqs.~(\ref{aVar}) and (\ref{aQuad}) are hard to evaluate numerically for very small and for very large values of $\st$. When $\st$ is small, the exponential factors in the right-hand side of (\ref{aVar}) and (\ref{aQuad}) become singular and one needs a very high sampling frequency of the fluid acceleration along the particle trajectory to evaluate the integrals reliably. When $\st$ is large, on the other hand, the integrals sum up large-time contributions with large fluctuations due to the finite length of experimental and numerical trajectories. Before we proceed to a quantitative assessment of the model let us make a few general remarks about the range of applicability and accuracy of the approximation made. First, in Eqs. (\ref{aVar}) and (\ref{aQuad}) we need to evaluate the acceleration correlation function of the tracers for general values of $t$. This might be seen as a problem because the trajectories of tracers and of inertial particles must depart on a time scale of the order of the Lyapunov time. On the other hand, the acceleration correlation function of the tracer is known to decay on a time of the order of the Kolmogorov time, $\tau_K$, which is ten times smaller than the typical Lyapunov time of heavy particles \citep{Bec06} and stays close to zero in the inertial range of scales \citep{Falk12}. Therefore we do not see any problem in the evaluation of the integrals in the above equations. Second, we are only interested in stationary properties of the inertial particle statistics and we always assume that the initial conditions for all particles are chosen from their stationary distribution functions. 
As a result no influence of the initial condition should appear in the closure. The Lagrangian closure adopted here can also be applied to correlation functions of other observables of the particle, or of the flow, provided that the underlying Lagrangian correlation functions decay quickly enough. Here, we focus on the acceleration statistics because of their highly non-Gaussian properties and high sensitivity to the parameters $\beta$ and $\st$. Finally, let us stress that our closure is fully based on Lagrangian properties and does not require the use of any Eulerian correlation function, as opposed to closures based on the fast Eulerian approach by \cite{Fer01} or the Lagrangian-Eulerian closure to predict two-particle distributions \citep{Zai03,Ali06,Der00,Pan10}. To make further progress we need to determine the Lagrangian correlation functions. In Section \ref{sec:turbo} we present the most important new results: we determine the Lagrangian correlation functions by DNS of inertial particle dynamics in turbulence, substitute them into Eqs.~(\ref{aCorr}), (\ref{aVar}) and (\ref{aQuad}), and assess the accuracy of these equations in predicting particle acceleration fluctuations and correlations. \section{Direct numerical simulations} \label{sec:turbo} \subsection{Simulation method} We present here the analysis of data obtained from DNS of a homogeneous and isotropic turbulent flow seeded with point-like particles with different inertia and particle-fluid density ratios (see Fig.~\ref{fig:phasespace} for a summary of the $(\st,\beta)$ values available). \begin{figure} \hspace*{-0.05in} \includegraphics[width=13.5cm]{Fig1.eps} \vspace{-0.2cm} \caption{\small {\em (Online color).} Parameter space showing available DNS data sets with $\re_\lambda=185$ and $\re_\lambda=400$. For values of $\st$ smaller than $0.05$ the $x$ axis is linear, while it is logarithmic for values of $\st$ larger than $0.05$.
For each data set with $\re_\lambda=185$ we have analysed a total of $130000$ trajectories of duration $6T_L$ and for each data set with $\re_\lambda=400$ we have analysed a total of $200000$ trajectories of duration $2.5T_L$. Level curves $\st(1-\beta)=\epsilon$ for constants $\epsilon=\{-1,-0.1,-0.01,0,0.01,0.1,1\}$ are plotted as black lines. Parameter families: With $\re_\lambda=185$: $\beta=0$ (red,$\circ$), $\beta=0.25$ (magenta,$\vartriangle$), $\beta=0.5$ (cyan,$\triangledown$), $\beta=0.75$ (green,$\triangleright$), $\beta=1$ (black,$\Box$), $\beta=1.25$ (brown,$\Diamond$), $\beta=1.5$ (purple,$\triangledown$), $\beta=2$ (orange,$\triangleright$), $\beta=2.5$ (dark green,$\triangleleft$), $\beta=3$ (blue,$\vartriangle$). With $\re_\lambda=400$: $\beta=0$ (red,\marker{6}). Additional data from~\cite{Cal09,Pra12}: $\beta=3$ (blue,\marker{7}). } \label{fig:phasespace} \end{figure} The data set was obtained previously by \cite{becJFM2010}. The flow obeys the Navier-Stokes equations for an incompressible velocity field ${\ve u}(\ve x,t)$: \begin{equation} \partial_t\ve u + \ve u \cdot \nabla \ve u = -\nabla p + \nu\nabla^2\ve u +\ve f,\quad \nabla\cdot\ve u = 0\,. \label{eq:ns} \end{equation} The external forcing $\ve f$ is statistically homogeneous, stationary and isotropic, injecting energy into the first low-wavenumber shells by keeping their spectral content constant~\citep{She}. The viscosity $\nu$ is set such that the Kolmogorov length scale $\eta\approx \delta x$, where $\delta x$ is the grid spacing. The numerical domain is $2\pi$-periodic in the three directions. We use a fully dealiased pseudospectral algorithm with second-order Adams-Bashforth time stepping. For details see \citep{Bec06c,Bec06d}. Two series of DNS are analysed: Run I, with a numerical resolution of $512^3$ grid points and a Taylor-scale Reynolds number $\re_\lambda \approx 200$; Run II, with $2048^3$ resolution and $\re_\lambda \approx 400$.
Details can be found in Table~\ref{table}. \begin{table*} \centering \caption{\label{table} Eulerian parameters for the two runs analyzed in this Article: Run I and Run II in the text. $N$ is the number of grid points in each spatial direction; $\re_{\lambda}$ is the Taylor-scale Reynolds number; $\eta$ is the Kolmogorov dissipative scale; $\delta x=\mathcal{L}/N$ is the grid spacing, with $\mathcal{L}=2\pi$ denoting the physical size of the numerical domain; $\tau_{\rm K}=\left( \nu/\varepsilon \right)^{1/2}$ is the Kolmogorov dissipative time scale; $\varepsilon$ is the average rate of energy injection; $\nu$ is the kinematic viscosity; $t_{\mathrm{dump}}$ is the time interval between two successive dumps along particle trajectories; $\delta t$ is the time step; $T_L=L/U_0$ is the eddy turnover time at the integral scale $L=\pi$, and $U_0$ is the typical large-scale velocity.} \mbox{}\\ \begin{tabular}{ccccccccccc} \hline\\[-7pt] & $N$ & $\re_{\lambda}$ & $\eta$ & $\delta x$ & $\varepsilon$ & $\nu$ & $\tau_{\rm K}$ & $t_{\mathrm{dump}}$ & $\delta t$ & $T_L$ \\[+2pt]\hline\\[-7pt] Run I& 512 & 185 & 0.01 & 0.012 & 0.9 & 0.002 & 0.047 & 0.004 & 0.0004 & 2.2 \\ Run II& 2048 & 400 & 0.0026 & 0.003 & 0.88 & 0.00035 & 0.02 & 0.00115 & 0.000115 & 2.2\\[+2pt]\hline \end{tabular} \end{table*} \subsection{Comparison for acceleration variance and flatness against DNS} \label{sec:dns_results} In Fig.~\ref{fig:accvarDNSotherbetas}{\bf a} we show the comparison of the acceleration variance, $\langle a^2 \rangle$, to DNS data as a function of $\st$ for different values of $\beta$. Consider first heavy particles ($ \beta <1 $). It is clear that the closure (\ref{mrfc2}) captures the general trend and it becomes better and better for larger Stokes numbers. Similarly, this approximation must become exact as $\st \to 0$, but the DNS data set does not contain values of $\st$ small enough to reach this limit. 
At intermediate Stokes numbers the Lagrangian closure described in Section \ref{sec:closure} does not match the DNS results. This mismatch for Stokes numbers between $0.1$ and $1$ is certainly due to preferential sampling, as already remarked by \cite{Bec06c}. Yet, the closure predicts a small Reynolds dependency (compare solid and dashed red lines for $\beta=0$), inherited from the Reynolds dependency of the fluid tracers. Such a small variation is not detectable within the accuracy of our numerical data. Now consider light particles ($\beta >1$). Fig.~\ref{fig:accvarDNSotherbetas}{\bf a} shows that also in this case the agreement between the DNS data and the closure scheme is good, except around $\st \sim 10$ where the closure underestimates the DNS data. \begin{figure} \hspace*{-0.05in} \includegraphics[width=13.5cm]{Fig2.eps} \vspace{-0.2cm} \caption{\small {\em (Online colour).} Acceleration variance ({\bf a}) and flatness ({\bf b}) for DNS data at changing $\beta$, $\st$ and Re. {\bf a}: points correspond to the DNS data (same symbols as in Fig.~\ref{fig:phasespace}). Solid lines (labeled with their corresponding value of $\beta$) show the closure scheme prediction for the acceleration variance of light and heavy particles, Eq.~(\ref{aVar}), normalized with the fluid variance, for all data from RUN I; dashed line corresponds to the closure for RUN II. {\bf b}: the flatness measured on the DNS data and the one predicted by the closure scheme, Eq.~(\ref{aQuad}), for heavy and light particles. Thin dashed black line shows the limit of normally distributed acceleration components. Additional data for $\beta=3$ (blue,\protect\includegraphics[width=2mm,clip]{markBW7.eps}) is omitted in panel {\bf b} because the flatness was not evaluated in Refs.~\cite{Cal09,Pra12}. } \figlab{accvarDNSotherbetas} \end{figure} In this case inertial preferential sampling must be important. The reason is that light particles are drawn into vortex filaments where they experience high accelerations.
Nevertheless, inertial preferential sampling must become irrelevant in the limit $\st\to\infty$, as shown by the trend for very large $\st$ in the same figure. It is interesting to remark that the closure scheme (solid line) becomes increasingly accurate the closer $\beta$ is to unity and/or the smaller the Stokes number is, suggesting the possibility of developing a systematic perturbative expansion in the small parameter $\epsilon = \st (1-\beta)$. Furthermore, it is important to note that while the acceleration variance increases monotonically as both $\beta$ and $\st$ increase, the flatness of the acceleration, defined as $$F_{\st,\beta} = \langle |\ve a|^4\rangle/\langle \ve a^2\rangle^2,$$ has a non-monotonic dependency on $\st$. In Fig.~\ref{fig:accvarDNSotherbetas}{\bf b} we show that light particles have a maximum in their flatness at $\st \sim 0.5$ and heavy particles have minimum flatness for $\st>1$ (except for the case of very heavy particles with $\beta=0$). Importantly, our closure approximation predicts these extrema qualitatively, indicating that the non-Gaussian tails observed numerically and experimentally \citep{Vol08} in the acceleration probability distribution function of bubbles are not only due to inertial preferential sampling, which is neglected by the closure. Let us also note that the non-monotonic features shown by the model for $\st \sim 0.1$ and $\beta=0$, $0.25$ and $3$ are artefacts due to insufficient accuracy in the numerical evaluation of the integral in Eq.~(\ref{aQuad}). In conclusion, remarkably, the closure approximation is in reasonable agreement with the DNS data, with the exception of those values of the Stokes number where preferential sampling is important.
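For reference, the Gaussian baseline for the flatness shown in panel {\bf b} can be checked by direct sampling: for a $d$-dimensional vector with independent Gaussian components, $F=\langle|\ve a|^4\rangle/\langle\ve a^2\rangle^2=(d+2)/d$, i.e.\ $5/3$ in three dimensions. The sketch below is our own illustration (sample size and seed are arbitrary choices) and also shows how amplitude modulation, a crude stand-in for intermittency, raises the flatness:

```python
import numpy as np

def flatness(a):
    """F = <|a|^4> / <|a|^2>^2 for a sample of vectors a with shape (N, d)."""
    a2 = np.sum(a**2, axis=1)
    return np.mean(a2**2) / np.mean(a2)**2

rng = np.random.default_rng(1)
gauss = rng.standard_normal((2_000_000, 3))
print(flatness(gauss))        # close to (d + 2)/d = 5/3 for d = 3

# modulating the amplitude by a lognormal factor raises F above the Gaussian value
amp = np.exp(0.5 * rng.standard_normal((2_000_000, 1)))
print(flatness(gauss * amp))  # noticeably larger than 5/3
```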
This is an important result, supporting the idea that many key properties of the acceleration statistics of inertial particles, including deviations from Gaussian statistics, are due to the kinematic structure of the equation of motion together with the non-trivial Lagrangian properties of the fluid-tracer evolution, and not only to inertial preferential sampling. \begin{figure} \hspace*{-0.05in} \includegraphics[width=13.5cm]{Fig3.eps} \vspace{-0.2cm} \caption{\small {\em (Online colour).} Relative error $\Delta a^2$ \eqnref{relerr} between the DNS data and the closure scheme prediction for heavy particles ({\bf a}) and light particles ({\bf b}). Markers according to \Figref{phasespace}. Lines are drawn as a guide to the eye.} \figlab{acc_error_DNS} \end{figure} In order to quantify the accuracy of the closure described in \Secref{closure} we plot in Fig.~\ref{fig:acc_error_DNS} the relative error in the prediction of the acceleration variance for both light and heavy particles: \begin{equation} \Delta a^2 = \left|1 - \frac{\langle \ve a^2 \rangle}{\langle \ve a^2 \rangle_{\rm DNS}}\right|\,, \eqnlab{relerr} \end{equation} where $\langle \ve a^2 \rangle$ refers to the expression (\ref{aVar}) and $\langle \ve a^2 \rangle_{\rm DNS}$ to the numerical value obtained from the DNS results. As one can see, the approximation never fails badly: the maximum discrepancy is of the order of $30$--$40\%$, occurring at those values where inertial preferential sampling is important, i.e. at $\st \sim O(1)$ for heavy particles and at $\st \sim O(10)$ for light particles. \section{Statistical Eulerian velocity model} \label{sec:stat_model} Mathematical analysis and numerical studies of the particle dynamics become easier when the turbulent fluctuations of $\ve u(\ve r,t)$ are approximated by a stochastic process.
Following \cite{Gus15} we use a smooth, homogeneous and isotropic Gaussian random velocity field with root-mean-squared speed $u_0$ and typical length and time scales $\eta$ and $\tau$. The model is characterized by a dimensionless number, the Kubo number $\ku= \tau/(\eta/u_0)$, that measures the degree of persistence of flow structures in time. Very small Kubo numbers correspond to a rapidly fluctuating fluid velocity field. In this limit the closure approximation described in \Secref{closure} is exact, inertial preferential sampling is negligible and the Lagrangian correlation functions of tracer particles are well approximated by the Eulerian correlation functions. In this limit it is also possible to perform a systematic perturbative expansion \citep{Gus15}. In this paper we are interested in testing the validity of the Lagrangian closure for the statistical model at $\ku \sim O(1)$, where no analytical results can be obtained, and in comparing the outcome with the DNS results shown in \Secref{turbo}. The motivation is the following. The statistical model has no ``internal intermittency'', i.e. there is no Reynolds-number dependency of the acceleration statistics (indeed, a Reynolds number is not even defined for the model). Nevertheless, once the Gaussian Eulerian velocity field is prescribed, we can calculate the acceleration probability density function of the fluid tracers. It turns out that this is not Gaussian and that it depends on the Kubo number, due to the effect of the quadratic advection term, $\ve a_{\rm f}=\partial_t\ve u+\ku[\ve\nabla\ve u^{\rm T}]\ve u$. As a result, we expect that many of the properties shown by the acceleration distribution of inertial particles evolved in real turbulent flows are shared by particles evolved in a Gaussian random flow. Finally, a comparison between DNS data and the statistical model will allow us to assess further the importance of internal intermittency.
\subsection{Construction of the random velocity field} For simplicity we discuss only the two-dimensional case. Generalization to three dimensions is straightforward. The velocity field is given in terms of the streamfunction: $\ve u(\ve r,t)=\ve\nabla\psi(\ve r,t)\wedge\hat{\ve e}_3\,$, which is defined as a superposition of Fourier modes with a Gaussian cutoff, \begin{equation} \label{eq:psi} \psi(\ve x,t)=\frac{\eta^2u_0}{\sqrt{\pi}L}\sum_{\sve k}a_{\sve k}(t)e^{{\rm i}\sve x\cdot\sve k-k^2\eta^2/4}\,. \end{equation} Here the system size $L$ is set to $10\eta$, $k_i=2\pi n_i/L$ and the $n_i$ are integers with an upper cutoff $|n_i|\le 2L/\eta$, because higher-order Fourier modes are negligible. The resulting spatial correlation function of $\psi$ is Gaussian; for $L\gg\eta$ we have $\langle\psi(\ve x,0)\psi(\ve 0,0)\rangle=(u_0^2\eta^2/2)\,e^{-x^2/(2\eta^2)}.$ The random coefficients $a_{\sve k}(t)$ in $\psi$ are Gaussian with zero mean and smoothly correlated in time. To achieve this we use an Ornstein-Uhlenbeck process convolved with a Gaussian kernel of the form $ w(t)\equiv \exp[-t^2/(2t_0^2)]/(t_0\sqrt{2\pi})\,$, so that the acceleration also has a smooth correlation function. The parameter $t_0$ must be small, $t_0\ll \tau$, in order for the flow field to decorrelate at long times in a similar fashion as in fully developed turbulence. The Eulerian autocorrelation function of ${\ve u}$ is: \begin{equation} \langle {\ve u}(\ve x_0,t)\cdot {\ve u}(\ve x_0,0)\rangle = \frac{u_0^2}{2\sqrt{\pi}}e^{-t^2/(4t_0^2)} \left(\E\left[\frac{t_0}{\tau} + \frac{t}{2t_0}\right] \right. + \left. \E\left[\frac{t_0}{\tau}-\frac{t}{2t_0}\right]\right)\,, \eqnlab{uCorrSmoothMain} \end{equation} where $\E(x) = \sqrt{\pi}\exp(x^2)\erfc(x)$. Note that the flow field is statistically homogeneous in space and stationary in time, so $\langle{\ve u}(\ve x_0,t)\cdot{\ve u}(\ve x_0,0)\rangle$ is a function of $|t|$ only.
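\Eqnref{uCorrSmoothMain} can be checked numerically; the short verification sketch below is ours (the parameter values $u_0=\tau=1$, $t_0=0.1$ or $0.01$ are arbitrary). At $t=0$ it returns $\approx u_0^2$ when $t_0\ll\tau$, and the tail decays as $e^{-|t|/\tau}$:

```python
import numpy as np
from scipy.special import erfc

def E(x):
    """E(x) = sqrt(pi) * exp(x^2) * erfc(x), as defined in the text."""
    return np.sqrt(np.pi) * np.exp(x**2) * erfc(x)

def u_corr(t, u0=1.0, tau=1.0, t0=0.1):
    """Eulerian autocorrelation of u, Eq. (uCorrSmoothMain)."""
    pre = u0**2 / (2.0 * np.sqrt(np.pi)) * np.exp(-t**2 / (4.0 * t0**2))
    return pre * (E(t0 / tau + t / (2.0 * t0)) + E(t0 / tau - t / (2.0 * t0)))

print(u_corr(0.0, t0=0.01))        # -> u0^2 = 1 as t0/tau -> 0
print(u_corr(3.0) / u_corr(4.0))   # -> e: exponential tail exp(-|t|/tau)
```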
For $|t| \ll t_0$ this correlation function is Gaussian and for $|t| \gg t_0$ it is exponential: $\langle{\ve u}(\ve x_0,t)\cdot{\ve u}(\ve x_0,0)\rangle\sim e^{-|t|/\tau}$. \subsection{Results for the random velocity model} \label{sec:res_stat_model} \begin{figure} \hspace*{-0.05in} \includegraphics[width=13.7cm]{Fig4.eps} \vspace{-0.2cm} \caption{ \small {\em (Online colour).} Same as in Fig.~\ref{fig:accvarDNSotherbetas} but for the statistical model given by Eq.~(\ref{eq:psi}) at $\ku=5$. } \figlab{acc_var_stat} \end{figure} We consider first the acceleration variance. Simulation results for the statistical model are compared with the Lagrangian closure (\ref{aVar})--(\ref{aQuad}) in Fig.~\ref{fig:acc_var_stat}, varying both $\st$ and $\beta$ at $\ku=5$ (panel {\bf a}). We observe a good agreement between the Lagrangian closure and the numerical simulations, comparable to what was observed for the DNS data. In \Figref{acc_var_stat}{\bf b} we show the results for the flatness. Two facts are important to stress. First, also here both heavy and light particles depart from the corresponding fluid value with a qualitative trend similar to that observed for the DNS case in the previous section. Second, also for the random velocity field the Lagrangian closure works qualitatively well; the departure from the numerical data is a signature of the corresponding importance of preferential sampling at those Stokes numbers. Note, however, an important difference with respect to the DNS data: here the absolute values of the flatness are much smaller, due to the absence of internal intermittency. In the stochastic signal the acceleration of the fluid is non-Gaussian only because of kinematic effects. In real flows, the acceleration is more intense and more fluctuating because of the vortex stretching mechanism and of the turbulent energy cascade.
\section{Conclusions} \label{sec:conclusions} In this paper we have analysed a Lagrangian closure describing fluctuations and correlations of inertial particle accelerations in turbulent flows in the diluted regime (one-way coupling), i.e. neglecting particle-particle collisions and feedback on the flow. In this way, we have a model that is able to predict some properties of the acceleration statistics of inertial particles for a large range of values of $\beta$ and $\st$ out of one single measurement based on fluid tracers only. We have compared the predictions of the closure to DNS of heavy and light inertial particles in turbulence. To summarize our results, the closure predictions are in overall good qualitative agreement with the results of DNS of particles. The closure neglects inertial preferential sampling, i.e. the tendency of light/heavy particles to be centrifuged in or out of vortex structures. Hence, the good agreement with the DNS data indicates that inertial preferential sampling has in general only a partial effect on inertial particle accelerations. The main trends are essentially kinematic, a consequence of the form of the equation of motion, as also shown by the results obtained using a stochastic surrogate for the flow velocity. A closer inspection shows that there are important differences between the Lagrangian closure scheme and the DNS, revealing where inertial preferential sampling is important. The effect is larger for light particles at large Stokes numbers ($\st \sim 10$ in our DNS), and is a consequence of the fact that light bubbles are drawn into intense vortex tubes. We mention that there is no small-scale fractal clustering for these values of $\st$, i.e. particles are distributed on a three-dimensional set at scales much smaller than the Kolmogorov length. 
Finally, non-trivial non-monotonic behaviours of the flatness as a function of $\st$ are predicted by the closure scheme and confirmed by the DNS results for both light and heavy particles. In particular, light particles are always more intermittent than the fluid tracers (their flatness is always larger than the tracer value), while the opposite holds for heavy particles. The Lagrangian closure scheme must become exact when $\st \to 0$ or $\beta \to 1$; it should therefore be possible to view it as the zeroth order of a perturbative expansion around Lagrangian tracers in the small parameter $\epsilon= \st(1-\beta)$, at least for quantities that depend on Lagrangian correlation functions decaying on a time scale of the order of the Kolmogorov time. In this case, one could try to develop an intermediate asymptotics in which, for short enough times, the difference between the two trajectories remains small, and then improve the zeroth-order approximation presented here by also considering corrections induced by the velocity gradients around the Lagrangian tracers: \begin{equation} u^i (\ve r_t,t) \sim u^i (\ve r^{({\rm L})}_t,t) + \partial_j u^i(\ve r^{({\rm L})}_t,t)\, \delta r^j_t + \cdots\,, \end{equation} where $\delta\ve r_t=\ve r_t-\ve r_t^{(\rm L)}$. Work in this direction is in progress. We have also investigated the validity of the Lagrangian closure using a stochastic Gaussian surrogate for the advecting fluid velocity field. In such a case, the $\ku$ number is another free parameter that can be tuned to increase or decrease the effects of inertial preferential sampling (the effects vanish as $\ku$ approaches zero).
We have shown that for large Kubo numbers, corresponding to the long-lived structures of turbulent flows, the closure theory works as well as for the DNS data, even though the data for the statistical model have a much smaller flatness.\\ Let us add some remarks about the generality and the limitations of the approach proposed. First, there are no theoretical difficulties in incorporating buoyancy, Fax\'{e}n corrections and other forces in the closure scheme as long as the dynamics can be described by a point-particle approach. We refrained from presenting these results here because of the lack of DNS data to compare with. On the other hand, it is known that Eqs.~(\ref{mrfc}) are not valid for all values in the ($\beta,\st$) parameter space. Indeed, the two requirements that the Reynolds number based on the particle slip velocity is small, $Re_p = |u-v|R/\nu < O(1)$, and that the particle size is smaller than the Kolmogorov scale, $R/\eta < O(1)$, lead to the condition $\st < O(1)$ if $\beta > 1$. So the predictions of the model in the limit of large Stokes numbers for light particles cannot be taken on a quantitative basis. We stress nevertheless that the most interesting property highlighted by our approach, i.e. the existence of a non-monotonic behaviour of the flatness of the acceleration of light and heavy particles, develops at values of $\st$ where the model equations are still valid. It is difficult to precisely assess the value of the Stokes number where the approximation breaks down. For instance, it was recently found \citep{Lohse2015} that the acceleration variance of light particles with sizes up to $R \sim 10\eta$ follows quite closely the point-like approximation (\ref{mrfc}). For even larger particle sizes a wake-driven dynamics becomes dominant. For such a range of particle parameters no theoretical models for the equations of motion are known.
{\em Acknowledgments.} This work was supported by Vetenskapsr\aa{}det and by the grant {\em Bottlenecks for particle growth in turbulent aerosols} from the Knut and Alice Wallenberg Foundation, Dnr. KAW 2014.0048. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No 339032.
\section{Introduction} For $d \in \mathbb{N}$, let $S$ be a finite set of points in $\mathbb{R}^d$. The set $S$ is in \emph{general position} if, for every $k=1,\dots,d-1$, no $k+2$ points of $S$ lie in an affine $k$-dimensional subspace. A set $H$ of $k$ points from $S$ is a \emph{$k$-hole} in $S$ if $H$ is in convex position and the interior of the convex hull $\conv(H)$ of $H$ does not contain any point from $S$; see Figure~\ref{fig:preliminaries} for an illustration in the plane. We say that a subset of $S$ is a \emph{hole} in $S$ if it is a $k$-hole in $S$ for some integer $k$. \begin{figure}[htb] \centering \hbox{} \hfill \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[page=1]{figs/preliminaries} \caption{} \label{fig:preliminaries_1} \end{subfigure} \hfill \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[page=3]{figs/preliminaries} \caption{} \label{fig:preliminaries_2} \end{subfigure} \hfill \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[page=2]{figs/preliminaries} \caption{} \label{fig:preliminaries_3} \end{subfigure} \hfill \hbox{} \caption{ (\subref{fig:preliminaries_1})~A $6$-tuple of points in convex position in a planar set $S$ of 10 points. (\subref{fig:preliminaries_2})~A $6$-hole in $S$. (\subref{fig:preliminaries_3})~A $6$-island in $S$ whose points are not in convex position. } \label{fig:preliminaries} \end{figure} Let $h(k)$ be the smallest positive integer $N$ such that every set of $N$ points in general position in the plane contains a $k$-hole. In the 1970s, Erd\H{o}s~\cite{Erdos1978} asked whether the number $h(k)$ exists for every $k \in \mathbb{N}$. It was shown in the 1970s and 1980s that $h(4)=5$, $h(5)=10$~\cite{Harborth1978}, and that $h(k)$ does not exist for every $k \geq 7$~\cite{Horton1983}. That is, while every sufficiently large set contains a $4$-hole and a $5$-hole, Horton constructed arbitrarily large sets with no 7-holes. 
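To make the definitions concrete, here is a small computational sketch (ours, not part of the paper) for the planar case $d=2$: a subset $H\subseteq S$ is a $k$-hole iff all its points are vertices of its convex hull and no point of $S\setminus H$ lies strictly inside that hull. The example point set is an arbitrary choice.

```python
def cross(o, a, b):
    """Twice the signed area of the triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def strictly_inside(q, hull):
    """q lies in the open interior of the CCW-ordered convex polygon `hull`."""
    return all(cross(hull[i], hull[(i + 1) % len(hull)], q) > 0
               for i in range(len(hull)))

def is_hole(H, S):
    """True iff H is a |H|-hole in S (planar case, general position assumed)."""
    hull = convex_hull(H)
    if len(hull) != len(H):                  # H is not in convex position
        return False
    rest = set(S) - set(H)
    return not any(strictly_inside(q, hull) for q in rest)

S = [(0, 0), (4, 0), (0, 4), (4, 4), (1, 2)]
print(is_hole([(0, 0), (4, 0), (0, 4), (4, 4)], S))  # False: (1, 2) lies inside
print(is_hole([(0, 0), (4, 0), (1, 2)], S))          # True: an empty triangle
```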
His construction was generalized to so-called \emph{Horton sets} by Valtr~\cite{VALTR1992b}. The existence of 6-holes in every sufficiently large point set remained open until 2007, when Gerken~\cite{Gerken2008} and Nicolas~\cite{Nicolas2007} independently showed that $h(6)$ exists; see also~\cite{Valtr2009}. These problems were also considered in higher dimensions. For $d \geq 2$, let $h_d(k)$ be the smallest positive integer $N$ such that every set of $N$ points in general position in~$\mathbb{R}^d$ contains a $k$-hole. In particular, $h_2(k) = h(k)$ for every $k$. Valtr~\cite{VALTR1992b} showed that $h_d(k)$ exists for $k \le 2d+1$ but it does not exist for $k > 2^{d-1} (P(d-1)+1)$, where $P(d-1)$ denotes the product of the first $d-1$ prime numbers. The latter result was obtained by constructing multidimensional analogues of the Horton sets. After the existence of $k$-holes was settled, counting the minimum number $H_k(n)$ of $k$-holes in any set of $n$ points in the plane in general position attracted a lot of attention. It is known, and not difficult to show, that $H_3(n)$ and $H_4(n)$ are in $\Omega(n^2)$. The currently best known lower bounds on $H_3(n)$ and $H_4(n)$ were proved in~\cite{5holes_Socg2017}. The best known upper bounds are due to B\'{a}r\'{a}ny and Valtr~\cite{BaranyValtr2004}. Altogether, these estimates are \[n^2 + \Omega(n\log^{2/3}n) \le H_3(n) \le 1.6196 n^2 +o(n^2)\] and \[\tfrac{n^2}{2} +\Omega(n\log^{3/4}n) \le H_4(n) \le 1.9397 n^2 +o(n^2).\] For $H_5(n)$ and $H_6(n)$, the best quadratic upper bounds can be found in~\cite{BaranyValtr2004}. The best lower bounds, however, are only $H_5(n) \geq \Omega(n \log^{4/5}n)$~\cite{5holes_Socg2017} and $H_6(n) \geq \Omega(n)$~\cite{Valtr2012}. For more details, we also refer to the second author's dissertation~\cite{Scheucher2019_dissertation}. The quadratic upper bound on $H_3(n)$ can be also obtained using random point sets. 
For $d \in \mathbb{N}$, a \emph{convex body} in $\mathbb{R}^d$ is a compact convex set in $\mathbb{R}^d$ with a nonempty interior. Let $k$ be a positive integer and let $K \subseteq \mathbb{R}^d$ be a convex body with $d$-dimensional Lebesgue measure $\lambda_d(K)=1$. We use $EH^K_{d,k}(n)$ to denote the expected number of $k$-holes in sets of $n$ points chosen independently and uniformly at random from $K$. The quadratic upper bound on $H_3(n)$ then also follows from the following bound of B\'{a}r\'{a}ny and F\"{u}redi~\cite{BaranyFueredi1987} on the expected number of $(d+1)$-holes: \begin{equation} \label{boundBaranyFuredi} EH_{d,d+1}^K(n) \leq (2d)^{2d^2} \cdot \binom{n}{d} \end{equation} for any $d$ and $K$. In the plane, B\'{a}r\'{a}ny and F\"{u}redi~\cite{BaranyFueredi1987} proved $EH_{2,3}^K(n) \le 2n^2+O(n \log n)$ for every~$K$. This bound was later slightly improved by Valtr~\cite{Valtr1995}, who showed $EH^K_{2,3}(n) \le 4\binom{n}{2}$ for any $K$. In the other direction, every set of $n$ points in $\mathbb{R}^d$ in general position contains at least $\binom{n-1}{d}$ $(d+1)$-holes~\cite{BaranyFueredi1987,katMe88}. The expected number $EH_{2,4}^K(n)$ of $4$-holes in random sets of $n$ points in the plane was considered by Fabila-Monroy, Huemer, and Mitsche~\cite{mhm15}, who showed \begin{equation} \label{eq-MHM} EH_{2,4}^K(n) \leq 18\pi D^2 n^2 + o(n^2) \end{equation} for any $K$, where $D=D(K)$ is the diameter of $K$. Since we have $D \geq 2/\sqrt{\pi}$, by the Isodiametric inequality~\cite{evansGariepy15}, the leading constant in~\eqref{eq-MHM} is at least $72$ for any $K$. In this paper, we study the number of $k$-holes in random point sets in $\mathbb{R}^d$. In particular, we obtain results that imply quadratic upper bounds on $H_k(n)$ for any fixed $k$ and that both strengthen and generalize the bounds by B\'{a}r\'{a}ny and F\"{u}redi~\cite{BaranyFueredi1987}, Valtr~\cite{Valtr1995}, and Fabila-Monroy, Huemer, and Mitsche~\cite{mhm15}. 
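The numeric constants quoted above can be checked in a few lines; the following sanity check (ours) verifies, under the normalization $\lambda_d(K)=1$, that the isodiametric inequality forces the leading constant in~\eqref{eq-MHM} to be at least $72$, and evaluates the constant $(2d)^{2d^2}$ from~\eqref{boundBaranyFuredi} for $d=2$.

```python
# Sanity checks of the constants quoted in the text, assuming the
# normalization lambda_d(K) = 1 used throughout.
import math

# Isodiametric inequality in the plane: lambda_2(K) <= pi * (D/2)^2,
# so a convex body of area 1 has diameter D >= 2/sqrt(pi).
D_min = 2 / math.sqrt(math.pi)

# Hence the leading constant 18*pi*D^2 in the Fabila-Monroy--Huemer--Mitsche
# bound is at least 72 for every K:
assert abs(18 * math.pi * D_min**2 - 72) < 1e-9

# For d = 2, the Barany--Furedi constant (2d)^(2d^2) is already 65536,
# compared with Valtr's later constant of 4 in 4*C(n,2):
d = 2
assert (2 * d)**(2 * d**2) == 65536
```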
\section{Our results} \label{sec-ourResults} Throughout the paper, we consider only point sets in~$\mathbb{R}^d$ that are finite and in general position. \subsection{Islands and holes in random point sets} First, we prove a result that gives the estimate $O(n^d)$ on the minimum number of $k$-holes in a set of $n$ points in $\mathbb{R}^d$ for any fixed $d$ and $k$. In fact, we prove the upper bound $O(n^d)$ even for so-called $k$-islands, which are also frequently studied in discrete geometry. A set $I$ of $k$ points from a point set $S \subseteq \mathbb{R}^d$ is a \emph{$k$-island} in $S$ if $\conv (I) \cap S = I$; see part~(c) of Figure~\ref{fig:preliminaries}. Note that $k$-holes in $S$ are exactly those $k$-islands in~$S$ that are in convex position. A subset of $S$ is an \emph{island} in $S$ if it is a $k$-island in $S$ for some integer $k$. \begin{theorem} \label{thm:islands_2d} Let $d \geq 2$ and $k \geq d+1$ be integers and let $K$ be a convex body in~$\mathbb{R}^d$ with $\lambda_d(K)=1$. If $S$ is a set of $n \geq k$ points chosen uniformly and independently at random from~$K$, then the expected number of $k$-islands in $S$ is at most \[2^{d-1}\cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot (k-d) \cdot \frac{n(n-1) \cdots (n-k+2)}{(n-k+1)^{k-d-1}},\] which is in $O(n^d)$ for any fixed $d$ and $k$. \end{theorem} The bound in Theorem~\ref{thm:islands_2d} is tight up to a constant multiplicative factor that depends on $d$ and $k$, as, for any fixed $k \geq d$, every set $S$ of $n$ points in~$\mathbb{R}^d$ in general position contains at least $\Omega(n^d)$ $k$-islands. To see this, observe that any $d$-tuple $T$ of points from $S$, together with the $k-d$ points of $S \setminus T$ closest to the hyperplane spanned by $T$, determines a $k$-island (ties can be broken by, for example, taking points with lexicographically smallest coordinates); here we use that $S$ is in general position and thus $T$ itself is a $d$-hole in $S$.
Any such $k$-tuple of points from $S$ contains $\binom{k}{d}$ $d$-tuples of points from $S$ and thus we have at least $\binom{n}{d}/\binom{k}{d} \in \Omega(n^d)$ $k$-islands in $S$. Thus, by Theorem~\ref{thm:islands_2d}, random point sets in $\mathbb{R}^d$ asymptotically achieve the minimum number of $k$-islands. This is in contrast with the fact that, unlike Horton sets, they contain arbitrarily large holes. Quite recently, Balogh, Gonz\'alez-Aguilar, and Salazar~\cite{BaloghGAS2013} showed that the expected number of vertices of the largest hole in a set of $n$ random points chosen independently and uniformly over a convex body in the plane is in $\Theta(\log n/(\log \log n))$. For $k$-holes, we modify the proof of Theorem~\ref{thm:islands_2d} to obtain a slightly better estimate. \begin{theorem} \label{thm:holes_2d} Let $d \geq 2$ and $k \geq d+1$ be integers and let $K$ be a convex body in~$\mathbb{R}^d$ with $\lambda_d(K)=1$. If $S$ is a set of $n \geq k$ points chosen uniformly and independently at random from~$K$, then the expected number $EH^K_{d,k}(n)$ of $k$-holes in $S$ is in $O(n^d)$ for any fixed $d$ and $k$. More precisely, \[EH^K_{d,k}(n) \leq 2^{d-1}\cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot \frac{n(n-1) \cdots (n-k+2)}{(k-d-1)! \cdot (n-k+1)^{k-d-1}}.\] \end{theorem} For $d=2$ and $k=4$, Theorem~\ref{thm:holes_2d} implies $EH^K_{2,4}(n) \leq 128 \cdot n^2 + o(n^2)$ for any $K$, which is a worse estimate than~\eqref{eq-MHM} if the diameter of $K$ is at most $8/(3\sqrt{\pi}) \simeq 1.5$. However, the proof of Theorem~\ref{thm:holes_2d} can be modified to give $EH^K_{2,4}(n) \leq 12n^2 + o(n^2)$ for any $K$, which is always better than~\eqref{eq-MHM}; see the final remarks in Section~\ref{sec:islands_in_Rd}. We believe that the leading constant in $EH^K_{2,4}(n)$ can be estimated even more precisely and we hope to discuss this direction in future work. 
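As a quick numerical cross-check (ours), the leading constants in the bounds of Theorems~\ref{thm:islands_2d} and~\ref{thm:holes_2d} can be tabulated: for fixed $d$ and $k$ both bounds behave like $\mathrm{const}\cdot n^d$, and for $d=2$, $k=4$ the holes bound indeed yields the constant $128$ quoted above.

```python
# Leading constants (coefficient of n^d) in the islands and holes bounds,
# computed directly from the formulas in the two theorems above.
from math import comb, factorial

def islands_bound_constant(d, k):
    # coefficient of n^d in the k-islands bound
    return 2**(d - 1) * (2 * d**(2 * d - 1) * comb(k, d // 2))**(k - d - 1) * (k - d)

def holes_bound_constant(d, k):
    # coefficient of n^d in the k-holes bound (extra 1/(k-d-1)! factor)
    return 2**(d - 1) * (2 * d**(2 * d - 1) * comb(k, d // 2))**(k - d - 1) / factorial(k - d - 1)
```

For $d=2$, $k=4$ this gives `holes_bound_constant(2, 4) == 128`, matching the estimate $EH^K_{2,4}(n) \leq 128 \cdot n^2 + o(n^2)$; for $k=d+1$ the exponent $k-d-1$ vanishes and the constant collapses to $2^{d-1}$, consistent with the corollary below.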
In the case $k=d+1$, the bound in Theorem~\ref{thm:holes_2d} simplifies to the following estimate on the expected number of $(d+1)$-holes (also called \emph{empty simplices}) in random sets of $n$ points in~$\mathbb{R}^d$. \begin{corollary} \label{thm:d_simplices} Let $d \geq 2$ be an integer and let $K$ be a convex body in $\mathbb{R}^d$ with $\lambda_d(K)=1$. If $S$ is a set of $n$ points chosen uniformly and independently at random from~$K$, then the expected number of $(d+1)$-holes in $S$ satisfies \[EH^K_{d,d+1}(n) \leq 2^{d-1} \cdot d! \cdot \binom{n}{d}.\] \end{corollary} Corollary~\ref{thm:d_simplices} is stronger than the bound~\eqref{boundBaranyFuredi} by B\'{a}r\'{a}ny and F\"{u}\-redi~\cite{BaranyFueredi1987} and, in the planar case, coincides with the bound $EH^K_{2,3}(n) \le 4\binom{n}{2}$ by Valtr~\cite{Valtr1995}. Very recently, Reitzner and Temesvari \cite{ReitznerTemesvari2019} proved an upper bound on $EH^K_{d,d+1}(n)$ that is asymptotically tight if $d=2$ or if $d \ge 3$ and $K$ is an ellipsoid. In the planar case, their result shows that the bound $4\binom{n}{2}$ on $EH^K_{2,3}(n)$ is best possible, up to a smaller order error term. We also consider islands of all possible sizes and show that their expected number is in $2^{\Theta\left(n^{(d-1)/(d+1)}\right)}$. \begin{theorem} \label{thm:exponential} Let $d \geq 2$ be an integer and let $K$ be a convex body in $\mathbb{R}^d$ with $\lambda_d(K)=1$. 
Then there are constants $C_1=C_1(d)$, $C_2=C_2(d)$, and $n_0=n_0(d)$ such that for every set $S$ of $n \geq n_0$ points chosen uniformly and independently at random from $K$, the expected number $E^K_d$ of islands in $S$ satisfies \[2^{C_1 \cdot n^{(d-1)/(d+1)}} \leq E^K_d \leq 2^{C_2 \cdot n^{(d-1)/(d+1)}}.\] \end{theorem} Since each island in $S$ has at most $n$ points, there is a $k \in \{1,\dots,n\}$ such that the expected number of $k$-islands in $S$ is at least a $(1/n)$-fraction of the expected number of all islands, which is still in $2^{\Omega(n^{(d-1)/(d+1)})}$. This shows that the expected number of $k$-islands can become asymptotically much larger than $O(n^d)$ if $k$ is not fixed. \subsection{Islands and holes in $d$-Horton sets} To our knowledge, Theorem~\ref{thm:islands_2d} is the first nontrivial upper bound on the minimum number of $k$-islands a point set in $\mathbb{R}^d$ with $d>2$ can have. For $d=2$, Fabila-Monroy and Huemer~\cite{FabilaMonroyHuemer2012} showed that, for every fixed $k \in\mathbb{N}$, the Horton sets with $n$ points contain only $O(n^2)$ $k$-islands. For $d>2$, Valtr~\cite{VALTR1992b} introduced a $d$-dimensional analogue of Horton sets. Perhaps surprisingly, these sets contain $\omega(n^d)$ $k$-islands for $k \geq d+1$. For each $k$ with $d+1 \leq k \leq 3 \cdot 2^{d-1}$, they even contain $\omega(n^d)$ $k$-holes. \begin{theorem} \label{prop-Horton} Let $d \geq 2$ and $k$ be fixed positive integers. Then every $d$-dimensional Horton set $H$ with $n$ points contains at least $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-islands in $H$. If $k \leq 3 \cdot 2^{d-1}$, then $H$ even contains at least $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-holes in $H$. \end{theorem} \section{Proofs of Theorem~\ref{thm:islands_2d} and Theorem~\ref{thm:holes_2d}} \label{sec:islands_in_Rd} Let $d$ and $k$ be positive integers and let $K$ be a convex body in~$\mathbb{R}^d$ with $\lambda_d(K)=1$.
Let $S$ be a set of $n$ points chosen uniformly and independently at random from~$K$. Note that $S$ is in general position with probability $1$. We assume $k \geq d+1$, as otherwise the number of $k$-islands in $S$ is trivially $\binom{n}{k}$ in every set of $n$ points in $\mathbb{R}^d$ in general position. We also assume $d \geq 2$ and $n \geq k$, as otherwise the number of $k$-islands is trivially $n-k+1$ and $0$, respectively, in every set of $n$ points in $\mathbb{R}^d$. First, we prove Theorem~\ref{thm:islands_2d} by showing that the expected number of $k$-islands in $S$ is at most \[2^{d-1}\cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot (k-d) \cdot \frac{n(n-1) \cdots (n-k+2)}{(n-k+1)^{k-d-1}},\] which is in $O(n^d)$ for any fixed $d$ and $k$. At the end of this section, we improve the bound for $k$-holes, which will prove Theorem~\ref{thm:holes_2d}. Let $Q$ be a set of $k$ points from $S$. We first introduce a suitable unique ordering $q_1,\dots,q_k$ of points from~$Q$. First, we take a set $D$ of $d+1$ points from $Q$ that determine a simplex $\Delta$ with largest volume among all $(d+1)$-tuples of points from $Q$. Let $q_1q_2$ be the longest edge of $\Delta$ with $q_1$ lexicographically smaller than $q_2$ and let $a$ be the number of points from $Q$ inside $\Delta$. For every $i=2,\dots,d$, let $q_{i+1}$ be the point of $D\setminus\{q_1,\dots,q_i\}$ furthest from ${\rm aff}(q_1,\dots,q_i)$. Next, we let $q_{d+2},\dots,q_{d+a+1}$ be the $a$ points of $Q$ inside $\Delta$ ordered lexicographically. The remaining $k-d-a-1$ points $q_{d+a+2},\dots,q_k$ from $Q$ lie outside of $\Delta$ and we order them so that, for every $i=1,\dots,k-a-d-1$, the point $q_{d+a+i+1}$ is closest to ${\rm conv}(\{q_1,\dots,\allowbreak q_{d+a+i}\})$ among the points $q_{d+a+i+1},\dots,q_k$. In case of a tie in any of the conditions, we choose the point with lexicographically smallest coordinates. Note, however, that a tie occurs with probability $0$.
Clearly, there is a unique such ordering $q_1,\dots,q_k$ of $Q$. We call this ordering the \emph{canonical $(k,a)$-ordering} of $Q$. To reformulate, an ordering $q_1,\dots,q_k$ of $Q$ is the canonical $(k,a)$-ordering of~$Q$ if and only if the following five conditions are satisfied: \begin{enumL}{L} \item\label{item-canonical0} The $d$-dimensional simplex $\Delta$ with vertices $q_1,\dots,q_{d+1}$ has the largest $d$-dimen\-sional Lebesgue measure among all $d$-dimensional simplices spanned by points from~$Q$. \item\label{item-canonical1} For every $i=1,\dots,d-1$, the point $q_{i+1}$ has the largest distance among all points from $\{q_{i+1},\dots,q_d\}$ to the $(i-1)$-dimensional affine subspace ${\rm aff}(q_1,\dots,q_i)$ spanned by $q_1,\dots,q_i$. Moreover, $q_1$ is lexicographically smaller than $q_2$. \item\label{item-canonical4} For every $i=1,\dots,d-1$, the distance between $q_{i+1}$ and ${\rm aff}(q_1,\dots,q_{i})$ is at least as large as the distance between $q_{d+1}$ and ${\rm aff}(q_1,\dots,q_i)$. Also, the distance between $q_1$ and $q_2$ is at least as large as the distance between $q_{d+1}$ and any $q_i$ with $i \in \{1,\dots,d\}$. \item\label{item-canonical2} The points $q_{d+2},\dots,q_{d+a+1}$ lie inside $\Delta$ and are ordered lexicographically. \item\label{item-canonical3} The points $q_{d+a+2},\dots,q_k$ lie outside of $\Delta$. For every $i=1,\dots,k-a-d-1$, the point $q_{d+a+i+1}$ is closest to ${\rm conv}(\{q_1,\dots,\allowbreak q_{d+a+i}\})$ among the points $q_{d+a+i+1},\dots,q_k$. \end{enumL} Figure~\ref{fig:canlab} gives an illustration in~$\mathbb{R}^2$. We note that the conditions~\ref{item-canonical1} and~\ref{item-canonical4} can be merged. However, later in the proof, we use the fact that the probability that the points from $Q$ satisfy the condition~\ref{item-canonical1} equals $1/d!$, so we stated the two conditions separately.
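The five conditions above can be sketched in code; the following is our own illustration of the canonical $(k,a)$-ordering for the planar case $d=2$ (helper names are ours, and ties are simply ignored, as they occur with probability $0$ for random points): the largest-area triangle comes first with its longest edge ordered lexicographically, then the $a$ interior points in lexicographic order, then the remaining points greedily by distance to the convex hull of the current prefix.

```python
# Sketch of the canonical (k,a)-ordering for d = 2 (illustrative only).
from itertools import combinations
import math

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_triangle(t, p):
    s = [cross(t[i], t[(i + 1) % 3], p) for i in range(3)]
    return all(x > 0 for x in s) or all(x < 0 for x in s)

def seg_dist(p, a, b):
    # Euclidean distance from point p to segment ab
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = max(0.0, min(1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)))
    return math.hypot(p[0] - a[0] - t * dx, p[1] - a[1] - t * dy)

def hull_dist(p, pts):
    # distance from an outside point p to conv(pts): minimum over all
    # segments between points of pts (the nearest hull point lies on an edge)
    return min(seg_dist(p, a, b) for a, b in combinations(pts, 2))

def canonical_ordering(Q):
    # condition (L1): triangle of largest area
    t = max(combinations(Q, 3), key=lambda tr: abs(cross(*tr)))
    # conditions (L2)+(L3): q1q2 is its longest edge, q1 lex. smaller than q2
    e = max(combinations(t, 2), key=lambda ed: math.dist(*ed))
    q1, q2 = sorted(e)
    q3 = next(p for p in t if p not in e)
    order = [q1, q2, q3]
    # condition (L4): the a interior points, lexicographically
    order += sorted(p for p in Q if p not in t and inside_triangle(t, p))
    # condition (L5): remaining points by increasing distance to current hull
    rest = [p for p in Q if p not in order]
    while rest:
        p = min(rest, key=lambda r: hull_dist(r, order))
        order.append(p)
        rest.remove(p)
    return order
```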
\begin{figure}[htb] \centering \includegraphics{figs/canlab} \caption{An illustration of the canonical $(k,a)$-ordering of a planar point set~$Q$. Here we have $k=12$ points and $a=4$ of the points lie inside the largest-area triangle $\Delta$ with vertices $q_1,q_2,q_3$.} \label{fig:canlab} \end{figure} Before going into details, we first give a high-level overview of the proof of Theorem~\ref{thm:islands_2d}. First, we prove an $O(1/n^{a+1})$ bound on the probability that $\Delta$ contains precisely the points $p_{d+2},\ldots,p_{d+a+1}$ from~$S$ (Lemma~\ref{lem-probabilityInside}), which means that the points $p_1,\ldots,p_{d+a+1}$ determine an island in $S$. Next, for $i=d+a+2,\ldots,k$, we show that, conditioned on the fact that the $(i-1)$-tuple $(p_1,\ldots,p_{i-1})$ determines an island in $S$ in the canonical $(k,a)$-ordering, the $i$-tuple $(p_1,\ldots,p_{i})$ determines an island in $S$ in the canonical $(k,a)$-ordering with probability $O(1/n)$ (Lemma~\ref{lem-probabilityOutside}). Then it immediately follows that the probability that the $k$-tuple $I=(p_1,\ldots,p_k)$ determines a $k$-island in $S$ with the desired properties is at most $ O \left( 1/n^{a+1} \cdot (1/n)^{k-(d+1+a)} \right) = O(n^{d-k})$. Since there are $n \cdot (n-1)\cdots(n-k+1)=O(n^k)$ possibilities to select such an ordered $k$-tuple $I$ and each $k$-island in $S$ is counted at most $k!$ times, we obtain the desired bound $O \left( n^k \cdot n^{d-k} \cdot k! \right) = O(n^d)$ on the expected number of $k$-islands in~$S$. We now proceed with the proof. Let $p_1,\dots,p_k$ be points from $S$ in the order in which they are drawn from $K$. We use $\Delta$ to denote the $d$-dimensional simplex with vertices $p_1,\dots,p_{d+1}$. We eventually show that the probability that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering of a $k$-island in $S$ for some $a$ is at most $O(1/n^{k-d})$. First, however, we need to state some notation and prove some auxiliary results. Consider the points $p_1,\dots,p_d$.
Without loss of generality, we can assume that, for each $i=1,\dots,d$, the point $p_i$ has the last $d-i+1$ coordinates equal to zero. Otherwise we apply a suitable isometry to~$S$. Then, for every $i=1,\dots,d$, the distance between $p_{i+1}$ and the $(i-1)$-dimensional affine subspace spanned by $p_1,\dots,p_i$ is equal to the absolute value of the $i$th coordinate of $p_{i+1}$. Moreover, after applying a suitable rotation, we can also assume that the first coordinate of each of the points $p_1,\dots,p_d$ is nonnegative. Let $\Delta_0$ be the $(d-1)$-dimensional simplex with vertices $p_1,\dots,p_d$ and let $H$ be the hyperplane containing $\Delta_0$. Note that, according to our assumptions about $p_1,\dots,p_d$, we have $H = \{(x_1,\dots,x_d) \in \mathbb{R}^d \colon x_d=0\}$. Let $B$ be the set of points $(x_1,\dots,x_d) \in \mathbb{R}^d$ that satisfy the following three conditions: \begin{enumerate}[label=(\roman*)] \item $x_1 \geq 0$, \item $|x_i|$ is at most as large as the absolute value of the $i$th coordinate of $p_{i+1}$ for every $i \in \{1,\dots,d-1\}$, and \item $|x_d| \leq d/\lambda_{d-1}(\Delta_0)$. \end{enumerate} See Figures~\ref{fig:simplices_2d} and~\ref{fig:simplices_3d} for illustrations in $\mathbb{R}^2$ and $\mathbb{R}^3$, respectively. Observe that $B$ is a $d$-dimensional axis-parallel box. For $h \in \mathbb{R}$, we use $I_h$ to denote the intersection of $B$ with the hyperplane $x_d=h$. \begin{figure}[htb] \centering \hbox{} \hfill \begin{subfigure}[b]{.35\textwidth} \centering \includegraphics{figs/simplices_2d.pdf} \caption{} \label{fig:simplices_2d} \end{subfigure} \hfill \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics{figs/simplices_3d.pdf} \caption{} \label{fig:simplices_3d} \end{subfigure} \hfill \hbox{} \caption{ An illustration of the proof of Theorem~\ref{thm:islands_2d} in~(\subref{fig:simplices_2d})~$\mathbb{R}^2$ and (\subref{fig:simplices_3d})~$\mathbb{R}^3$. 
} \end{figure} Having fixed $p_1,\dots,p_d$, we now try to restrict possible locations of the points $p_{d+1},\dots,p_k$, one by one, so that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering of a $k$-island in $S$ for some $a$. First, we observe that the position of the point $p_{d+1}$ is restricted to $B$. \begin{lemma} \label{lem-containment} If $p_1,\dots,p_{d+1}$ satisfy condition~\ref{item-canonical4}, then $p_{d+1}$ lies in the box~$B$. \end{lemma} \begin{proof} Let $p_{d+1}=(x_1,\dots,x_d)$. According to our choice of points $p_1,\dots,p_d$ and from the assumption that $p_1,\dots,p_d$ satisfy~\ref{item-canonical4}, we get $x_1 \geq 0$ and also that $|x_i|$ is at most as large as the absolute value of the $i$th coordinate of $p_{i+1}$ for every $i \in \{1,\dots,d-1\}$. It remains to show that $|x_d| \leq d/\lambda_{d-1}(\Delta_0)$. The simplex $\Delta$ spanned by $p_1,\dots,p_{d+1}$ is contained in the convex body $K$, as $p_1,\dots,p_{d+1} \in K$ and $K$ is convex. Thus $\lambda_d(\Delta) \leq \lambda_d(K) = 1$. On the other hand, the volume $\lambda_d(\Delta)$ equals $\lambda_{d-1}(\Delta_0) \cdot h/d$, where $h$ is the distance between $p_{d+1}$ and the hyperplane $H$ containing $\Delta_0$. According to our assumptions about $p_1,\dots,p_d$, the distance $h$ equals $|x_d|$. Since $\lambda_d(\Delta) \leq 1$, it follows that $|x_d| = h \leq d/\lambda_{d-1}(\Delta_0)$ and thus $p_{d+1} \in B$. \end{proof} The following auxiliary lemma gives an identity that is needed later. We omit the proof, which can be found, for example, in~\cite[Section~1]{aar99}. \begin{lemma}[\cite{aar99}] \label{lem-integral} For all nonnegative integers $a$ and $b$, we have \[\int_0^1 x^a(1-x)^b \;{\rm d}x = \frac{a! \; b!}{(a+b+1)!}\,.\] \end{lemma} We will also use the following result, called the \emph{Asymptotic Upper Bound Theorem}~\cite{mat02}, that estimates the maximum number of facets in a polytope. 
\begin{theorem}[Asymptotic Upper Bound Theorem~\cite{mat02}] \label{thm-upperBoundThm} For every integer $d \geq 2$, a $d$-dimensional convex polytope with $N$ vertices has at most $2\binom{N}{\lfloor d/2\rfloor}$ facets. \end{theorem} Let $a$ be an integer satisfying $0 \leq a \leq k-d-1$ and let $E_a$ be the event that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering such that $\{p_1,\dots,p_{d+a+1}\}$ is an island in $S$. To estimate the probability that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering of a $k$-island in $S$, we first find an upper bound on the conditional probability of~$E_a$, conditioned on the event $L_2$ that $p_1,\dots,p_d$ satisfy~\ref{item-canonical1}. \begin{lemma} \label{lem-probabilityInside} For every $a \in \{0,\dots,k-d-1\}$, the probability $\Pr[E_a \mid L_2]$ is at most \[\frac{2^{d-1}\cdot d!}{(k-a-d-1)!\cdot(n-k+1)^{a+1}} .\] \end{lemma} \begin{proof} It follows from Lemma~\ref{lem-containment} that, in order to satisfy~\ref{item-canonical4}, the point $p_{d+1}$ must lie in the box~$B$. In particular, $p_{d+1}$ is contained in $I_h \cap K$ for some real number $h \in [-d/\lambda_{d-1}(\Delta_0),d/\lambda_{d-1}(\Delta_0)]$. If $p_{d+1} \in I_h$, then the simplex $\Delta = \conv(\{p_1,\dots,p_{d+1}\})$ has volume $\lambda_d(\Delta) = \lambda_{d-1}(\Delta_0)\cdot|h|/d$ and the $a$ points $p_{d+2},\dots,p_{d+a+1}$ satisfy~\ref{item-canonical2} with probability \[\frac{1}{a!}\cdot \left(\lambda_d(\Delta)\right)^a = \frac{1}{a!}\cdot \left(\frac{\lambda_{d-1}(\Delta_0) \cdot |h|}{d}\right)^a,\] as each of these points lies in $\Delta \subseteq K$ with probability $\lambda_d(\Delta)$ and they appear in the unique lexicographic order with probability $1/a!$. In order to satisfy the condition~\ref{item-canonical3}, the $k-a-d-1$ points $p_{d+a+i+1}$, for $i \in \{1,\dots,k-a-d-1\}$, must have increasing distance to ${\rm conv}(\{p_1,\dots,\allowbreak p_{d+a+i}\})$ as the index $i$ increases, which happens with probability at most $\frac{1}{(k-a-d-1)!}$.
Since $\{p_1,\dots,p_{d+a+1}\}$ must be an island in $S$, the $n-d-a-1$ points from $S \setminus \{p_1,\dots,p_{d+a+1}\}$ must lie outside $\Delta$. If $p_{d+1} \in I_h$, then this happens with probability \[(\lambda_d(K \setminus \Delta))^{n-d-a-1} = (\lambda_d(K)-\lambda_d(\Delta))^{n-d-a-1} = \left(1 - \frac{\lambda_{d-1}(\Delta_0) \cdot |h|}{d}\right)^{n-d-a-1},\] as they all lie in $K \setminus \Delta$ and we have $\Delta \subseteq K$ and $\lambda_d(K)=1$. Altogether, we get that $\Pr[E_a \mid L_2]$ is at most \[\int\displaylimits_{-d/\lambda_{d-1}(\Delta_0)}^{d/\lambda_{d-1}(\Delta_0)} \frac{\lambda_{d-1}(I_h \cap K)}{a!\cdot (k-a-d-1)!} \cdot \left(\frac{\lambda_{d-1}(\Delta_0) \cdot |h|}{d}\right)^a \cdot \left(1 - \frac{\lambda_{d-1}(\Delta_0) \cdot |h|}{d}\right)^{n-d-a-1} {\rm d} h. \] Since we have $\lambda_{d-1}(I_0) = \lambda_{d-1}(I_h)$ for every $h \in [-d/\lambda_{d-1}(\Delta_0),d/\lambda_{d-1}(\Delta_0)]$, we obtain $\lambda_{d-1}(I_h \cap K) \leq \lambda_{d-1}(I_0)$ and thus $\Pr[E_a \mid L_2]$ is at most \[\frac{2 \cdot \lambda_{d-1}(I_0)}{a!\cdot (k-a-d-1)! } \cdot \int\displaylimits_0^{d/\lambda_{d-1}(\Delta_0)} \left(\frac{\lambda_{d-1}(\Delta_0) \cdot h}{d}\right)^a \cdot \left(1 - \frac{\lambda_{d-1}(\Delta_0) \cdot h}{d}\right)^{n-d-a-1} {\rm d} h.\] By substituting $t=\frac{\lambda_{d-1}(\Delta_0) \cdot h}{d}$, we obtain \[\Pr[E_a \mid L_2] \leq \frac{2d\cdot \lambda_{d-1}(I_0)}{a!\cdot (k-a-d-1)!\cdot\lambda_{d-1}(\Delta_0)} \cdot \int_0^1 t^a (1-t)^{n-d-a-1} {\rm d}t.\] By Lemma~\ref{lem-integral}, the right side in the above inequality equals \begin{align*} \frac{2d\cdot \lambda_{d-1}(I_0)}{a!\cdot (k-a-d-1)! \cdot \lambda_{d-1}(\Delta_0)} &\cdot \frac{a! \cdot (n-d-a-1)!}{(n-d)!} \\ &= \frac{2d\cdot\lambda_{d-1}(I_0)}{(k-a-d-1)! \cdot\lambda_{d-1}(\Delta_0)} \cdot \frac{(n-d-a-1)!}{(n-d)!}. 
\end{align*} For every $i=1,\dots,d-1$, let $h_i$ be the distance between the point $p_{i+1}$ and the $(i-1)$-dimensional affine subspace spanned by $p_1,\dots,p_i$. Since the volume of the box $I_0$ satisfies \[\lambda_{d-1}(I_0) = h_1(2h_2)\cdots(2h_{d-1}) = 2^{d-2} \cdot h_1\cdots h_{d-1}\] and the volume of the $(d-1)$-dimensional simplex $\Delta_0$ is \[\lambda_{d-1}(\Delta_0) = \frac{h_1}{1} \cdot \frac{h_2}{2} \cdot \, \cdots \, \cdot \frac{h_{d-1}}{d-1} = \frac{h_1 \cdots h_{d-1}}{(d-1)!},\] we obtain $\lambda_{d-1}(I_0) / \lambda_{d-1}(\Delta_0) = 2^{d-2}\cdot (d-1)!$. Thus \begin{align*} \Pr[E_a \mid L_2] &\leq \frac{ 2^{d-1} \cdot d! }{(k-a-d-1)!} \cdot \frac{(n-d-a-1)!}{(n-d)!} \\ &= \frac{2^{d-1}\cdot d!}{(k-a-d-1)! \cdot (n-d)\cdots(n-d-a)}\\ &\leq \frac{2^{d-1}\cdot d!}{(k-a-d-1)! \cdot (n-k+1)^{a+1} }, \end{align*} where the last inequality follows from $a \leq k-d-1$. \end{proof} For every $i \in \{d+a+1,\dots,k\}$, let $E_{a,i}$ be the event that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering such that $\{p_1,\dots,p_i\}$ is an island in $S$. Note that in the event $E_{a,i}$ the condition~\ref{item-canonical3} implies that $\{p_1,\dots,p_j\}$ is an island in $S$ for every $j \in \{d+a+1,\dots,i\}$. Thus we have \[L_2 \supseteq E_a = E_{a,d+a+1} \supseteq E_{a,d+a+2} \supseteq \cdots \supseteq E_{a,k}.\] Moreover, the event $E_{a,k}$ says that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering of a $k$-island in~$S$. For $i \in \{d+a+2,\dots,k\}$, we now estimate the conditional probability of $E_{a,i}$, conditioned on $E_{a,i-1}$. \begin{lemma} \label{lem-probabilityOutside} For every $i \in \{d+a+2,\dots,k\}$, we have \[\Pr[E_{a,i} \mid E_{a,i-1}] \leq \frac{2d^{2d-1} \cdot \binom{k}{\lfloor d/2 \rfloor}}{n-i+1}.\] \end{lemma} \begin{proof} Let $i \in \{d+a+2,\dots,k\}$ and assume that the event $E_{a,i-1}$ holds. That is, $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering such that $\{p_1,\dots,p_{i-1}\}$ is an $(i-1)$-island in $S$. 
First, we assume that $\Delta$ is a regular simplex with height $\eta>0$. At the end of the proof we show that the case when $\Delta$ is an arbitrary simplex follows by applying a suitable affine transformation. For every $j \in \{1,\dots,d+1\}$, let $F_j$ be the facet $\conv(\{p_1,\dots,\allowbreak p_{d+1}\}\setminus\{p_j\})$ of $\Delta$ and let $H_j$ be the hyperplane parallel to $F_j$ that contains $p_j$. We use $H^+_j$ to denote the halfspace determined by $H_j$ such that $\Delta \subseteq H_j^+$. We set $\Delta^* = \cap_{j=1}^{d+1} H^+_j$; see Figures~\ref{fig:Fischer2d} and \ref{fig:Fischer3d} for illustrations in $\mathbb{R}^2$ and~$\mathbb{R}^3$, respectively. Note that $\Delta^*$ is a $d$-dimensional simplex containing~$\Delta$. Also, notice that if $x \notin \Delta^*$, then $x \notin H^+_j$ for some $j$ and the distance between $x$ and the hyperplane containing $F_j$ is larger than $\eta$. \begin{figure}[htb] \centering \hbox{} \hfill \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics{figs/Fischer2d} \caption{} \label{fig:Fischer2d} \end{subfigure} \hfill \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{figs/Fischer3d} \caption{} \label{fig:Fischer3d} \end{subfigure} \hfill \hbox{} \caption{ An illustration of (\subref{fig:Fischer2d})~the simplex $\Delta^*$ in $\mathbb{R}^2$ and (\subref{fig:Fischer3d})~in $\mathbb{R}^3$. } \end{figure} We show that the fact that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering implies that every point from $\{p_1,\dots,p_k\}$ is contained in $\Delta^*$. Suppose for contradiction that some point $p \in \{p_1,\dots,p_k\}$ does not lie inside $\Delta^*$. Then there is a facet $F_j$ of $\Delta$ such that the distance $\eta'$ between $p$ and the hyperplane containing $F_j$ is larger than $\eta$. 
Then, however, the simplex $\Delta'$ spanned by vertices of $F_j$ and by $p$ has volume larger than that of $\Delta$, because \[\lambda_d(\Delta') = \frac{1}{d} \cdot \lambda_{d-1}(F_j) \cdot \eta' > \frac{1}{d} \cdot \lambda_{d-1}(F_j) \cdot \eta = \lambda_d(\Delta).\] This contradicts the fact that $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering, as, according to~\ref{item-canonical0}, $\Delta$ has the largest $d$-dimensional Lebesgue measure among all $d$-dimensional simplices spanned by points from $\{p_1,\dots,p_k\}$. Let $\sigma$ be the barycenter of $\Delta$. For every point $p \in \Delta^* \setminus \Delta$, the line segment $\sigma p$ intersects at least one facet of $\Delta$. For every $j \in \{1,\dots,d+1\}$, we use $R_j$ to denote the set of points $p \in \Delta^* \setminus \Delta$ for which the line segment $\sigma p$ intersects the facet $F_j$ of $\Delta$. Observe that each set $R_j$ is convex and the sets $R_1,\dots,R_{d+1}$ partition $\Delta^* \setminus \Delta$ (up to intersections of $d$-dimensional Lebesgue measure $0$); see Figure~\ref{fig:Fischer2d_2} for an illustration in the plane. \begin{figure}[htb] \centering \includegraphics[page=3]{figs/Fischer2d} \caption{An illustration of the proof of Lemma~\ref{lem-probabilityOutside}. In order for $\{p_1,\dots,p_i\}$ to be an $i$-island in~$S$, the light gray part cannot contain points from $S$. We estimate the probability of this event from above by the probability that the dark gray simplex $\conv(\varphi \cup \{p_i\})$ contains no point of $S$. Note that the parameters $\eta$ and $\tau$ coincide for $d=2$, as then $\tau = \frac{d^2-1}{d+1}\eta = \eta$.} \label{fig:Fischer2d_2} \end{figure} Consider the point $p_i$. Since $p_1,\dots,p_k$ is the canonical $(k,a)$-ordering, the condition~\ref{item-canonical3} implies that $p_i$ lies outside of the polytope $\conv(\{p_1,\dots,p_{i-1}\})$.
To bound the probability $\Pr[E_{a,i} \mid E_{a,i-1}]$, we need to estimate the probability that $\conv(\{p_1,\dots,p_i\}) \setminus \conv(\{p_1,\dots,p_{i-1}\})$ does not contain any point from $S\setminus \{p_1,\dots,p_i\}$, conditioned on $E_{a,i-1}$. We know that $p_i$ lies in $\Delta^*\setminus \Delta$ and that $p_i \in R_j$ for some $j \in \{1,\dots,d+1\}$. Since $p_i \notin \conv(\{p_1,\dots,p_{i-1}\})$, there is a facet $\varphi$ of the polytope $\conv(\{p_1,\dots,p_{i-1}\})$ contained in the closure of $R_j$ such that $\sigma p_i$ intersects $\varphi$. Since $S$ is in general position with probability $1$, we can assume that $\varphi$ is a $(d-1)$-dimensional simplex. The point $p_i$ is contained in the convex set $C_\varphi$ that contains all points $c \in \mathbb{R}^d$ such that the line segment $\sigma c$ intersects $\varphi$. We use $H(0)$ to denote the hyperplane containing $\varphi$. For a positive $r \in \mathbb{R}$, let $H(r)$ be the hyperplane parallel to $H(0)$ at distance $r$ from $H(0)$ such that $H(r)$ is contained in the halfspace determined by $H(0)$ that does not contain $\conv(\{p_1,\dots,p_{i-1}\})$. Then we have $p_i \in H(h)$ for some positive $h \in \mathbb{R}$. Since $p_i \in K$ and $\varphi \subseteq K$, the convexity of $K$ implies that the simplex $\conv(\varphi \cup \{p_i\})$ has volume $\lambda_d(\conv(\varphi \cup \{p_i\})) \leq \lambda_d(K) = 1$. Since $\lambda_d(\conv(\varphi \cup \{p_i\})) = \lambda_{d-1}(\varphi) \cdot h/d$, we obtain $h \leq d/\lambda_{d-1}(\varphi)$. The point $p_i$ lies in the $(d-1)$-dimensional simplex $C_\varphi \cap H(h)$, which is a scaled copy of~$\varphi$. We show that \begin{equation} \label{eq-simplex} \lambda_{d-1}(C_\varphi \cap H(h)) \leq d^{2d-2}\cdot\lambda_{d-1}(\varphi). 
\end{equation} Let $h_\varphi$ be the distance between $H(0)$ and $\sigma$ and, for every $j \in \{1,\dots,d+1\}$, let $\overline{H}_j$ be the hyperplane parallel to $F_j$ containing the vertex $H_1 \cap \cdots \cap H_{j-1} \cap H_{j+1} \cap \cdots \cap H_{d+1}$ of~$\Delta^*$. We denote by $\overline{H}^+_j$ the halfspace determined by $\overline{H_j}$ containing $\Delta^*$. Since $\Delta$ lies on the same side of $H(0)$ as $\sigma$, we see that $h_{\varphi}$ is at least as large as the distance between $\sigma$ and $F_j$, which is $\eta/(d+1)$. Since $p_i$ lies in $\Delta^* \subseteq \overline{H}^+_j$, we see that $h$ is at most as large as the distance $\tau$ between $\overline{H}_j$ and the hyperplane containing the facet $F_j$ of $\Delta$. Note that $\tau + \eta/(d+1)$ is the distance of the barycenter of $\Delta^*$ and a vertex of $\Delta^*$ and $d\eta/(d+1)$ is the distance of the barycenter of $\Delta^*$ and a facet of $\Delta^*$. Thus we get $\tau = \frac{d^2\eta}{d+1} - \frac{\eta}{d+1} = \frac{d^2-1}{d+1}\eta$ from the fact that the distance between the barycenter of a $d$-dimensional simplex and any of its vertices is $d$-times as large as the distance between the barycenter and a facet. Consequently, $h \leq \frac{d^2-1}{d+1}\eta$ and $\frac{\eta}{d+1} \leq h_\varphi$, which implies $h \leq (d^2-1)h_\varphi$. Thus $C_\varphi \cap H(h)$ is a scaled copy of~$\varphi$ by a factor of size at most~$d^2$. This gives $\lambda_{d-1}(C_\varphi \cap H(h)) \leq d^{2d-2}\cdot\lambda_{d-1}(\varphi)$. Since the simplex $\conv(\varphi \cup \{p_i\})$ is a subset of the closure of $\conv(\{p_1,\dots,p_i\}) \setminus \conv(\{p_1,\dots,p_{i-1}\})$, the probability $\Pr[E_{a,i} \mid E_{a,i-1}]$ can be bounded from above by the conditional probability of the event $A_{i,\varphi}$ that $p_i \in C_\varphi \cap K$ and that no point from $S\setminus \{p_1,\dots,p_i\}$ lies in $\conv(\varphi \cup \{p_i\})$, conditioned on $E_{a,i-1}$. 
All points from $S\setminus \{p_1,\dots,p_i\}$ lie outside of $\conv(\varphi \cup \{p_i\})$ with probability \[\left( 1- \frac{\lambda_d(\conv(\varphi \cup \{p_i\}))}{\lambda_d(K \setminus \conv(\{p_1,\ldots,p_{i-1}\}))} \right)^{n-i} .\] Since $\lambda_d(K \setminus \conv(\{p_1,\ldots,p_{i-1}\}))\leq \lambda_d(K) = 1$, this is bounded from above by \[(1-\lambda_d(\conv(\varphi \cup \{p_i\})))^{n-i} = \left(1-\frac{\lambda_{d-1}(\varphi) \cdot h}{d}\right)^{n-i}.\] Since the sets $C_\varphi$ partition $K \setminus \conv(\{p_1,\dots,p_{i-1}\})$ (up to intersections of $d$-dimensional Lebesgue measure $0$) and since $h \leq d/\lambda_{d-1}(\varphi)$, we have, by the law of total probability, \begin{align*} \Pr[E_{a,i} \mid E_{a,i-1}] &\leq \sum_{\varphi}\Pr[A_{i,\varphi} \mid E_{a,i-1}]\\ &\leq \sum_\varphi \int\displaylimits_0^{d/\lambda_{d-1}(\varphi)} \lambda_{d-1}(C_\varphi \cap H(h)) \cdot \left(1-\frac{\lambda_{d-1}(\varphi) \cdot h}{d}\right)^{n-i} \; {\rm d}h. \end{align*} The sums in the above expression are taken over all facets $\varphi$ of the convex polytope $\conv(\{p_1,\dots,\allowbreak p_{i-1}\})$. Using~\eqref{eq-simplex}, we can estimate $\Pr[E_{a,i} \mid E_{a,i-1}]$ from above by \[d^{2d-2} \cdot \sum_\varphi \lambda_{d-1}(\varphi) \cdot \int\displaylimits_0^{d/\lambda_{d-1}(\varphi)} \left(1-\frac{\lambda_{d-1}(\varphi) \cdot h}{d}\right)^{n-i} \; {\rm d}h.\] By substituting $t= \frac{\lambda_{d-1}(\varphi) \cdot h}{d}$, we can rewrite this expression as \[d^{2d-2} \cdot \sum_\varphi \frac{d \cdot \lambda_{d-1}(\varphi)}{\lambda_{d-1}(\varphi)} \cdot \int_0^1 (1-t)^{n-i} \; {\rm d}t = d^{2d-1} \cdot \sum_\varphi \int_0^1 1 \cdot (1-t)^{n-i} \; {\rm d}t.\] By Lemma~\ref{lem-integral}, this equals \[d^{2d-1} \cdot \sum_\varphi \frac{0! 
\cdot (n-i)!}{(n-i+1)!} = \frac{d^{2d-1}}{n-i+1} \sum_\varphi 1.\] Since $\conv(\{p_1,\dots,\allowbreak p_{i-1}\})$ is a convex polytope in $\mathbb{R}^d$ with at most $i-1 \leq k$ vertices, Theorem~\ref{thm-upperBoundThm} implies that the number of facets $\varphi$ of $\conv(\{p_1,\dots,\allowbreak p_{i-1}\})$ is at most $2 \binom{k}{\lfloor d/2 \rfloor}$. Altogether, we have derived the desired bound \[\Pr[E_{a,i} \mid E_{a,i-1}] \leq \frac{2d^{2d-1} \cdot \binom{k}{\lfloor d/2 \rfloor}}{n-i+1}\] in the case when $\Delta$ is a regular simplex. If $\Delta$ is not regular, we first apply a volume-preserving affine transformation $F$ that maps $\Delta$ to a regular simplex $F(\Delta)$. The simplex $F(\Delta)$ is then contained in the convex body $F(K)$ of volume $1$. Since $F$ maps the uniform distribution on $K$ to the uniform distribution on $F(K)$ and preserves holes and islands, we obtain the required upper bound also in the general case. \end{proof} Now, we finish the proof of Theorem~\ref{thm:islands_2d}. \begin{proof}[Proof of Theorem~\ref{thm:islands_2d}] We estimate the expected value of the number $X$ of $k$-islands in $S$. The number of ordered $k$-tuples of points from $S$ is $n(n-1)\cdots(n-k+1)$. Since every subset of $S$ of size $k$ admits a unique labeling that satisfies the conditions~\ref{item-canonical0}, \ref{item-canonical1}, \ref{item-canonical4}, \ref{item-canonical2}, and \ref{item-canonical3}, we have \begin{align*} \mathbb{E}[X] &= n(n-1)\cdots(n-k+1) \cdot \Pr\left[\cup_{a=0}^{k-d-1} E_{a,k}\right] \\ &= n(n-1)\cdots(n-k+1) \cdot \sum_{a=0}^{k-d-1} \Pr\left[E_{a,k}\right], \end{align*} as the events $E_{0,k},\dots, E_{k-d-1,k}$ are pairwise disjoint. The probability of the event $L_2$, which says that the points $p_1,\dots,p_d$ satisfy the condition~\ref{item-canonical1}, is $1/d!$. Let $P = \sum_{a=0}^{k-d-1} \Pr\left[E_{a,k} \mid L_2 \right]$.
For any two events $E,E'$ with $E \supseteq E'$ and $\Pr[E]>0$, we have $\Pr[E'] = \Pr[E \cap E'] = \Pr[E' \mid E]\cdot \Pr[E]$. Thus, using $L_2 \supseteq E_a = E_{a,d+a+1} \supseteq E_{a,d+a+2} \supseteq \cdots \supseteq E_{a,k}$, we get \[ \mathbb{E}[X] = n(n-1)\cdots(n-k+1) \cdot \Pr[L_2] \cdot P = \frac{n(n-1)\cdots(n-k+1)}{d!} \cdot P\] and \[ P = \sum_{a=0}^{k-d-1}\Pr[E_a \mid L_2]\cdot \prod_{i=d+a+2}^k \Pr[E_{a,i} \mid E_{a,i-1}]. \] For every $a \in \{0,\dots,k-d-1\}$, Lemma~\ref{lem-probabilityInside} gives \[\Pr[E_a \mid L_2] \leq \frac{2^{d-1}\cdot d!}{(k-a-d-1)!\cdot(n-k+1)^{a+1}} \leq \frac{2^{d-1}\cdot d!}{(n-k+1)^{a+1}}\] and, due to Lemma~\ref{lem-probabilityOutside}, \[\Pr[E_{a,i} \mid E_{a,i-1}] \leq \frac{2d^{2d-1} \cdot \binom{k}{\lfloor d/2 \rfloor}}{n-i+1}\] for every $i \in \{d+a+2,\dots,k\}$. Using these estimates we derive \begin{align*} P &\leq 2^{d-1}\cdot d! \cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot \sum_{a=0}^{k-d-1}\frac{1}{(n-k+1)^{a+1}} \cdot \prod_{i=d+a+2}^k \frac{1}{n-i+1}\\ & \leq 2^{d-1}\cdot d! \cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot \sum_{a=0}^{k-d-1}\frac{1}{(n-k+1)^{a+1}} \cdot \frac{1}{(n-k+1)^{k-d-a-1}}\\ &= 2^{d-1}\cdot d! \cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot (k-d) \cdot \frac{1}{(n-k+1)^{k-d}}. \end{align*} Thus the expected number of $k$-islands in $S$ satisfies \begin{align*} \mathbb{E}[X] &= \frac{n(n-1) \cdots (n-k+1)}{d!} \cdot P\\ &\leq \frac{2^{d-1}\cdot d! \cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot (k-d)}{d!} \cdot \frac{n(n-1) \cdots (n-k+1)}{(n-k+1)^{k-d}} \\ &=2^{d-1}\cdot \left(2d^{2d-1}\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} \cdot (k-d) \cdot \frac{n(n-1) \cdots (n-k+2)}{(n-k+1)^{k-d-1}}. \end{align*} This finishes the proof of Theorem~\ref{thm:islands_2d}.
\end{proof} In the rest of the section, we sketch the proof of Theorem~\ref{thm:holes_2d} by showing that a slight modification of the above proof yields an improved bound on the expected number $EH^K_{d,k}(n)$ of $k$-holes in $S$. \begin{proof}[Sketch of the proof of Theorem~\ref{thm:holes_2d}] If $k$ points from $S$ determine a $k$-hole in $S$, then, in particular, the simplex $\Delta$ contains no points of $S$ in its interior. Therefore \[EH^K_{d,k}(n) \leq n(n-1)\cdots(n-k+1) \cdot \Pr[E_{0,k}].\] Then we proceed exactly as in the proof of Theorem~\ref{thm:islands_2d}, but we only consider the case $a=0$. This gives the same bounds as before with the term $(k-d)$ missing and with an additional factor $\frac{1}{(k-d-1)!}$ from Lemma~\ref{lem-probabilityInside}, which proves Theorem~\ref{thm:holes_2d}. \end{proof} For $d=2$ and $k=4$, Theorem~\ref{thm:holes_2d} gives $EH^K_{2,4}(n) \leq 128n^2 + o(n^2)$. We can obtain an even better estimate $EH^K_{2,4}(n) \leq 12n^2 + o(n^2)$ in this case. First, we have only three facets $\varphi$, as they correspond to the sides of the triangle $\Delta$. Thus the term $\left(2\binom{k}{\lfloor d/2 \rfloor}\right)^{k-d-1} = 8$ is replaced by $3$. Moreover, the inequality~\eqref{eq-simplex} can be replaced by \[\lambda_1(C_\varphi \cap H(h) \cap \Delta^*) \leq \lambda_1(\varphi),\] since every line $H(h)$ intersects $R_j \subseteq \Delta^*$ in a line segment of length at most $\lambda_1(F_j) = \lambda_1(\varphi)$. This then removes the factor $d^{(2d-2)(k-d-1)} = 4$. \section{Proof of Theorem~\ref{thm:exponential}} \label{sec:exponentially_k_islands} Let $d \geq 2$ be an integer and let $K$ be a convex body in $\mathbb{R}^d$ with $\lambda_d(K)=1$.
We show that there are positive constants $C_1=C_1(d)$, $C_2=C_2(d)$, and $n_0=n_0(d)$ such that for every integer $n \geq n_0$ the expected number of islands in a set $S$ of $n$ points chosen uniformly and independently at random from $K$ is at least $2^{C_1 \cdot n^{(d-1)/(d+1)}}$ and at most $2^{C_2 \cdot n^{(d-1)/(d+1)}}$. For every point set $Q$, there is a one-to-one correspondence between the set of all subsets of $Q$ in convex position and the set of all islands in $Q$. The correspondence maps a subset $G$ of $Q$ in convex position to the island $Q \cap \conv(G)$ in~$Q$. On the other hand, every island $I$ in $Q$ determines a subset of $Q$ in convex position that is formed by the vertices of $\conv(I)$. Therefore the number of subsets of $Q$ in convex position equals the number of islands in~$Q$. For $m \in \mathbb{N}$, let $p(m,K)$ be the probability that $m$ points chosen uniformly and independently at random from $K$ are in convex position. We use the following result by B\'{a}r\'{a}ny~\cite{barany01} to estimate the expected number of islands in $S$. \begin{theorem}[\cite{barany01}] \label{thm-barany} For every integer $d \geq 2$, let $K$ be a convex body in $\mathbb{R}^d$ with $\lambda_d(K)=1$. Then there are positive constants $m_0$, $c_1$, and $c_2$ depending only on $d$ such that, for every $m \geq m_0$, we have \[c_1 < m^{2/(d-1)} \cdot (p(m,K))^{1/m} < c_2.\] \end{theorem} We let $X$ be the random variable that denotes the number of subsets of $S$ in convex position. Then \[\mathbb{E}[X] = \sum_{m=1}^n\binom{n}{m} \cdot p(m,K).\] Now, we prove the upper bound on the expected number of islands in $S$. By Theorem~\ref{thm-barany}, we have \[\mathbb{E}[X] \leq \sum_{m=1}^{m_0-1}\binom{n}{m} + \sum_{m=m_0}^n\binom{n}{m} \cdot \left(\frac{c_2}{m^{2/(d-1)}}\right)^m\] for some constants $m_0$ and $c_2$ depending only on $d$. Since $\binom{n}{m} \leq n^m$, the first term on the right side is at most $n^c$ for some constant $c=c(d)$.
Using the bound $\binom{n}{m} \leq (en/m)^m$, we bound the second term from above by \[\sum_{m=m_0}^n \left(\frac{c_2 \cdot e\cdot n}{m^{(d+1)/(d-1)}}\right)^m.\] The real function $f(x)=(e \cdot c_2 \cdot n/x^{(d+1)/(d-1)})^x$ is at most $1$ for $x \geq (e\cdot c_2 \cdot n)^{(d-1)/(d+1)}$. Otherwise $x=y (e\cdot c_2 \cdot n)^{(d-1)/(d+1)}$ for some $y \in [0,1]$ and \[f(x) = \left(y^{-(d+1)/(d-1)}\right)^{y(e\cdot c_2 \cdot n)^{(d-1)/(d+1)}} = e^{\frac{d+1}{d-1}y\ln{(1/y)}(e\cdot c_2 \cdot n)^{(d-1)/(d+1)}} \leq 2^{c'n^{(d-1)/(d+1)}},\] where $c'=c'(d)$ is a sufficiently large constant. Thus $\mathbb{E}[X] \leq n^c + n2^{c'n^{(d-1)/(d+1)}}$. Choosing $C_2=C_2(d)$ sufficiently large, we have $\mathbb{E}[X] \leq 2^{C_2n^{(d-1)/(d+1)}}$. Since the number of subsets of $S$ in convex position equals the number of islands in $S$, we have the same upper bound on the expected number of islands in $S$. It remains to show the lower bound on the expected number of islands in $S$. By Theorem~\ref{thm-barany}, we have \[\mathbb{E}[X] \geq \sum_{m=m_0}^n\binom{n}{m} \cdot \left(\frac{c_1}{m^{2/(d-1)}}\right)^m\] for some constants $m_0$ and $c_1$ depending only on $d$. Using the bound $\binom{n}{m} \geq (n/m)^m$, we obtain \[\mathbb{E}[X] \geq \sum_{m=m_0}^n \left(\frac{c_1 \cdot n}{m^{(d+1)/(d-1)}}\right)^m.\] Let $C_1=(c_1/2)^{(d-1)/(d+1)}$. Then, for each $m$ satisfying $C_1 n^{\frac{d-1}{d+1}}/2 \leq m \leq C_1 n^{\frac{d-1}{d+1}}$, the summand in the above expression is at least $2^{C_1 n^{\frac{d-1}{d+1}}/2}$. It follows that the expected number of $m$-tuples from $S$ in convex position, where $C_1 n^{\frac{d-1}{d+1}}/2 \leq m \leq C_1 n^{\frac{d-1}{d+1}}$, is at least \[2^{C_1 n^{(d-1)/(d+1)}/2},\] since, if $n$ is large enough with respect to $d$, there is at least one $m \in \mathbb{N}$ in the given range. As observed above, each such $m$-tuple in $S$ corresponds to an island in $S$.
Thus the expected number of islands in $S$ is also at least $2^{C_1 n^{(d-1)/(d+1)}/2}$. \section{Proof of Theorem~\ref{prop-Horton}} \label{sec:prop-Horton} Here, for every $d$, we state the definition of a $d$-dimensional analogue of Horton sets on $n$ points from~\cite{VALTR1992b} and show that, for all fixed integers $d$ and $k$, every $d$-dimensional Horton set $H$ with $n$ points contains at least $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-islands in $H$. If $k \leq 3 \cdot 2^{d-1}$, then we show that $H$ contains at least $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-holes in $H$. First, we need to introduce some notation. A set $Q$ of points in $\mathbb{R}^d$ is in \emph{strongly general position} if $Q$ is in general position and, for every $i=1,\dots,d-1$, no $(i+1)$-tuple of points from $Q$ determines an $i$-dimensional affine subspace of $\mathbb{R}^d$ that is parallel to the $(d-i)$-dimensional linear subspace of $\mathbb{R}^d$ that contains the last $d-i$ axes. Let $\pi \colon \mathbb{R}^d \to \mathbb{R}^{d-1}$ be the projection defined by $\pi(x_1,\dots,x_d) = (x_1,\dots,x_{d-1})$. For $Q \subseteq \mathbb{R}^d$, we use $\pi(Q)$ to denote the set $\{\pi(q) \in \mathbb{R}^{d-1} \colon q \in Q\}$. If $Q$ is a set of $n$ points $q_0,\dots,q_{n-1}$ from $\mathbb{R}^d$ in strongly general position that are ordered so that their first coordinates increase, then, for all $m \in \mathbb{N}$ and $i \in \{0,1,\dots,m-1\}$, we define $Q_{i,m}=\{q_j \in Q \colon j \equiv i \; (\bmod \; m)\}$. For two sets $A$ and $B$ of points from $\mathbb{R}^d$ with $|A|,|B| \geq d$, we say that $B$ is \emph{deep below} $A$ and $A$ is \emph{high above} $B$ if $B$ lies entirely below any hyperplane determined by $d$ points of $A$ and $A$ lies entirely above any hyperplane determined by $d$ points of $B$.
For point sets $A'$ and $B'$ in $\mathbb{R}^d$ of arbitrary size, we say that $B'$ is \emph{deep below} $A'$ and $A'$ is \emph{high above} $B'$ if there are sets $A \supseteq A'$ and $B \supseteq B'$ such that $|A|,|B| \geq d$, $B$ is deep below $A$, and $A$ is high above~$B$. Let $p_2<p_3<p_4<\cdots$ be the sequence of all prime numbers. That is, $p_2=2$, $p_3=3$, $p_4=5$ and so on. We can now state the definition of the $d$-dimensional Horton sets from~\cite{VALTR1992b}. Every finite set of $n$ points in $\mathbb{R}$ is \emph{$1$-Horton}. For $d \geq 2$, a finite set $H$ of points from $\mathbb{R}^d$ in strongly general position is a \emph{$d$-Horton set} if it satisfies the following conditions: \begin{enumerate}[label=(\alph*)] \item the set $H$ is empty or it consists of a single point, or \item $H$ satisfies the following three conditions: \begin{enumerate}[label=(\roman*)] \item if $d>2$, then $\pi(H)$ is $(d-1)$-Horton, \item for every $i \in \{0,1,\dots,p_d-1\}$, the set $H_{i,p_d}$ is $d$-Horton, \item every $I \subseteq \{0,1,\dots,p_d-1\}$ with $|I| \geq 2$ can be partitioned into two nonempty subsets $J$ and $I \setminus J$ such that $\cup_{j \in J}H_{j,p_d}$ lies deep below $\cup_{i \in I \setminus J} H_{i,p_d}$. \end{enumerate} \end{enumerate} Valtr~\cite{VALTR1992b} showed that such sets indeed exist and that they contain no $k$-hole with $k>2^{d-1}(p_2p_3\cdots p_d+1)$. The $2$-Horton sets are known as \emph{Horton sets}. We show that $d$-Horton sets with $d \geq 3$ contain many $k$-islands for $k\geq d+1$ and thus cannot provide the upper bound $O(n^d)$ that follows from Theorem~\ref{thm:islands_2d}. This contrasts with the situation in the plane, as 2-Horton sets of $n$ points contain only $O(n^2)$ $k$-islands for any fixed~$k$~\cite{FabilaMonroyHuemer2012}. Let $d$ and $k$ be fixed positive integers. Assume first that $k \geq 2^{d-1}$. We want to prove that there are $\Omega(n^{2^{d-1}})$ $k$-islands in every $d$-Horton set $H$ with $n$ points.
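For intuition, the planar case ($p_2=2$) of the recursive construction can be sketched in a few lines of code. This is only an illustration: the vertical offsets $\mathtt{lift}^k$ below are an ad-hoc choice that happens to suffice for small instances, not the offsets used in~\cite{VALTR1992b}, and the function names are mine.

```python
from itertools import combinations

def horton(k, lift=4.0):
    # 2^k points with distinct increasing x-coordinates; the odd-position half
    # is lifted high above the even-position half, recursively (p_2 = 2 split).
    if k == 0:
        return [(0.0, 0.0)]
    prev = horton(k - 1, lift)
    h = lift ** k  # ad-hoc offset, large enough only for small k (assumption)
    low = [(2 * x, y) for (x, y) in prev]
    high = [(2 * x + 1, y + h) for (x, y) in prev]
    return sorted(low + high)

def is_deep_below(B, A):
    # d = 2 check of the definition above: every point of B lies strictly below
    # each line through two points of A, and every point of A lies strictly
    # above each line through two points of B. Inputs sorted by x-coordinate.
    def side(seg, r):  # > 0: r above the line through seg (x1 < x2 assumed)
        (x1, y1), (x2, y2) = seg
        return (r[1] - y1) * (x2 - x1) - (y2 - y1) * (r[0] - x1)
    return all(side(pq, b) < 0 for pq in combinations(A, 2) for b in B) and \
           all(side(pq, a) > 0 for pq in combinations(B, 2) for a in A)
```

Here the even-position half $H_{0,2}$ of the resulting set is deep below the odd-position half $H_{1,2}$, matching condition (iii) for $I=\{0,1\}$.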
We proceed by induction on $d$. For $d=1$ there are $n-k+1=\Omega(n)$ $k$-islands in every $1$-Horton set. Assume now that $d>1$ and that the statement holds for $d-1$. The $d$-Horton set $H$ consists of $p_d \in O(1)$ subsets $H_{i,p_d}$, each of size at least $\lfloor n/p_d \rfloor \in \Omega(n)$. The set $\{0,\dots,p_d-1\}$ is ordered by a linear ordering $\prec$ such that, for all $i$ and $j$ with $i \prec j$, the set $H_{i,p_d}$ is deep below $H_{j,p_d}$; see~\cite{VALTR1992b}. Take two of the sets, $X = H_{a,p_d}$ and $Y = H_{b,p_d}$, such that $a \prec b$ are consecutive in $\prec$. Since $k \geq 2^{d-1}$, we have $\lceil k/2 \rceil \geq \lfloor k/2 \rfloor \geq 2^{d-2}$. Thus by the inductive hypothesis, the $(d-1)$-Horton set $\pi(X)$ of size at least $\Omega(n)$ contains at least $\Omega(n^{2^{d-2}})$ $\lfloor k/2 \rfloor$-islands. Similarly, the $(d-1)$-Horton set $\pi(Y)$ of size at least $\Omega(n)$ contains at least $\Omega(n^{2^{d-2}})$ $\lceil k/2 \rceil$-islands. Let $\pi(A)$ be any of the $\Omega(n^{2^{d-2}})$ $\lfloor k/2 \rfloor$-islands in $\pi(X)$, where $A\subseteq X$. Similarly, let $\pi(B)$ be any of the $\Omega(n^{2^{d-2}})$ $\lceil k/2 \rceil$-islands in $\pi(Y)$, where $B\subseteq Y$. We show that $A \cup B$ is a $k$-island in $H$. Suppose for contradiction that there is a point $x \in H \setminus (A \cup B)$ that lies in $\conv(A \cup B)$. Since $a$ and $b$ are consecutive in $\prec$, the point $x$ lies in $X \cup Y = H_{a,p_d} \cup H_{b,p_d}$. By symmetry, we may assume without loss of generality that $x \in X$. Since $x \notin A$ and $H$ is in strongly general position, we have $\pi(x) \in \pi(X) \setminus \pi(A)$. Using the fact that $\pi(A)$ is a $\lfloor k/2 \rfloor$-island in $\pi(X)$, we obtain $\pi(x) \notin \conv(\pi(A))$ and thus $x \notin \conv(A)$. Since $X$ is deep below $Y$, we have $x \notin \conv(B)$.
Thus, by Carath\'{e}odory's theorem, $x$ lies in the convex hull of a $(d+1)$-tuple $T \subseteq A \cup B$ that contains a point from~$A$ and also a point from~$B$. Note that, for $U = T \cup \{x\}$, we have $|U \cap X| \geq 2$, as $x \in X$ and $|T \cap A| \geq 1$. We also have $|U \cap Y| \geq 2$, as $X$ is deep below $Y$ and $\pi(x) \notin \conv(\pi(A))$. Thus the affine hull of $U \cap X$ intersects the convex hull of $U \cap Y$. Then, however, the set $U \cap X$ is not deep below the set $U \cap Y$, which contradicts the fact that $X$ is deep below~$Y$. Altogether, there are at least $\Omega(n^{2^{d-2}})\cdot\Omega(n^{2^{d-2}})=\Omega(n^{2^{d-1}})$ such $k$-islands $A\cup B$, which finishes the proof if $k$ is at least $2^{d-1}$. For $k<2^{d-1}$, we use an analogous argument that gives at least $\Omega(n^{\lfloor k/2 \rfloor}) \cdot \Omega(n^{\lceil k/2 \rceil}) = \Omega(n^k)$ $k$-islands in the inductive step. If $d \geq 2$ and $k \leq 3 \cdot 2^{d-1}$, then a slight modification of the above proof gives $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-islands which are actually $k$-holes in $H$. We just use the simple fact that every 2-Horton set with $n$ points contains $\Omega(n^2)$ $k$-holes for every $k \in \{2,\dots,6\}$ as our inductive hypothesis. This is trivial for $k=2$ and it follows for $k \in \{3,4\}$ from the well-known fact that every set of $n$ points in $\mathbb{R}^2$ in general position contains at least $\Omega(n^2)$ $k$-holes. For $k \in \{5,6\}$, this fact can be proved using basic properties of 2-Horton sets (we omit the details). Then we use the inductive assumption, which says that every $d$-Horton set of $n$ points contains at least $\Omega(n^{\min\{2^{d-1},k\}})$ $k$-holes if $d \geq 2$ and $1 \leq k \leq 3 \cdot 2^{d-1}$. This finishes the proof of Theorem~\ref{prop-Horton}.
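As a toy complement to these asymptotic bounds, $k$-islands in a small planar point set can be counted by exhaustive search over $k$-subsets. The following sketch (exponential time, $d=2$, general position assumed; all names are of my choosing) directly implements the definition: a $k$-subset is a $k$-island iff its convex hull contains no other point of the set.

```python
from itertools import combinations

def orient(p, q, r):
    # Twice the signed area of the triangle (p, q, r).
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def in_hull(p, pts):
    # p lies in conv(pts) iff p lies in some triangle spanned by pts
    # (Caratheodory in the plane); general position, so no zero orientations.
    return any((orient(a, b, p) > 0) == (orient(b, c, p) > 0) == (orient(c, a, p) > 0)
               for a, b, c in combinations(pts, 3))

def count_islands(pts, k):
    # A k-subset I of pts is a k-island iff conv(I) contains no point of pts \ I.
    return sum(1 for isl in combinations(pts, k)
               if not any(in_hull(p, isl) for p in pts if p not in isl))
```

For the four points $(0,0)$, $(4,0)$, $(0,4)$, $(1,1)$, the triangle spanned by the first three contains $(1,1)$, so exactly three of the four triples are $3$-islands.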
\section{Introduction} In this paper we study scaling limits of skew plane partitions with periodic weights under several boundary conditions. Recall that a \textit{partition} is a weakly decreasing sequence of non-negative integers where all but finitely many terms are zero. Given a partition $\lambda=(\lambda_1,\lambda_2,\dots)$, a \textit{skew plane partition} with boundary $\lambda$ confined to a $c\times d$ box is an array of non-negative integers $\pi=\{\pi_{i,j}\}$ defined for all $1\leq i\leq c$, $\lambda_i< j\leq d$, which are weakly decreasing in $i$ and $j$. We will denote the set of such skew plane partitions by $\Pi_\lambda^{c,d}$. We can visualize a skew plane partition $\pi$ as a collection of stacks of identical cubes where the number of cubes in position $(i,j)$ is equal to $\pi_{i,j}$, as shown in Figure \ref{fig:boxes}. The shape cut out by the partition $\lambda$ is often referred to as the ``back wall''. \begin{figure}[ht] \caption{\label{fig:boxes} An example of a skew plane partition with the corresponding stacks of cubes when $\lambda=\{3,2,2,1\}$.} \includegraphics[width=8cm]{Array-and-skewpp.pdf} \end{figure} Scaling limits of boxed plane partitions (plane partitions with restricted height and confined to a box) when all three dimensions grow at the same scale have been studied extensively in the mathematical literature. Limit shapes of random boxed plane partitions under the uniform measure were studied in e.g. \cite{CohnLarsenPropp},\cite{J}, \cite{KO}. The local correlations in the case of the uniform measure were studied in e.g. \cite{J},\cite{K1},\cite{G}. Boxed plane partitions under more general measures were studied in e.g. \cite{BorGorRainsqdistr}, \cite{BorGorShuffling}, \cite{BeteaElliptic}. Many ideas about limit shapes and the structure of correlations of the corresponding tiling models can be traced to the physics literature, e.g. \cite{NienHilBloTriangularSOS}. 
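The definition is easy to make concrete for tiny boxes by brute force. In the following sketch the function name is mine and entry heights are truncated at $\mathtt{hmax}$, since the full set $\Pi_\lambda^{c,d}$ is infinite:

```python
from itertools import product

def skew_plane_partitions(c, d, lam, hmax):
    # Enumerate the arrays pi[i, j] (1 <= i <= c, lam_i < j <= d) with entries
    # in {0, ..., hmax} that are weakly decreasing in i and in j.
    lam = list(lam) + [0] * (c - len(lam))
    cells = [(i, j) for i in range(1, c + 1) for j in range(lam[i - 1] + 1, d + 1)]
    for vals in product(range(hmax + 1), repeat=len(cells)):
        pi = dict(zip(cells, vals))
        if all(pi[i, j] >= pi[i, j + 1] for (i, j) in cells if (i, j + 1) in pi) and \
           all(pi[i, j] >= pi[i + 1, j] for (i, j) in cells if (i + 1, j) in pi):
            yield pi
```

With $\lambda$ empty, $c=d=2$ and entries at most $2$, this recovers MacMahon's count of $20$ plane partitions in a $2\times2\times2$ box.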
A natural replacement for the uniform measure for skew plane partitions is the so-called ``volume'' measure: \begin{equation} \label{eq:qvolume} \mathbb{P}_q(\pi)\propto q^{|\pi|}, \end{equation} where $q\in(0,1)$ and $|\pi|=\sum_{i,j}\pi_{i,j}$ is the volume of the plane partition $\pi$, i.e. the total number of stacked cubes. The limit shape phenomenon for such measures can be proven quite generally \cite{CohnKenyonPropp}. \subsubsection*{Schur process} A skew plane partition can be represented as a sequence of interlacing Young diagrams consisting of its diagonal slices. More precisely, a skew plane partition $\pi\in\Pi^{c,d}_\lambda$ can be identified with the sequence of partitions $\{\pi(t)\}_{-c< t< d}$ defined by \begin{equation*} \pi(t)=(\pi_{i,t+i},\pi_{i+1,t+i+1},\dots), \end{equation*} where $i$ is such that $\lambda_j< t+j$ for $j\geq i$. Under this identification the measure \eqref{eq:qvolume} corresponds to the measure \begin{equation} \label{eq:qvolumeSchurProc} \mathbb{P}_q(\{\pi(t)\}_{-c<t<d})\propto\prod_{-c<t<d}q^{|\pi(t)|} \end{equation} on sequences of partitions, where $|\pi(t)|$ is the size of the partition $\pi(t)$, i.e. the sum of all its entries. In this language it is natural to consider a non-homogeneous variant of the measure \eqref{eq:qvolumeSchurProc}: a measure where the value of the weight $q$ depends on the ``time'' coordinate $t$. More precisely, for a sequence $\bar{q}=\{q_t\}_{-c<t<d}$ of positive real numbers, introduce a probability measure on the set $\Pi_\lambda^{c,d}$ by \begin{equation} \label{eq:qinhomog} \mathbb{P}_{\lambda,\bar{q}}(\pi)=\frac 1Z \prod_{-c<t<d}q_t^{|\pi(t)|}, \end{equation} where $Z$ is the normalizing factor \begin{equation*} Z=\sum_{\pi\in\Pi_\lambda^{c,d}}\prod_{-c<t<d}q_t^{|\pi(t)|}, \end{equation*} called the partition function.
We refer to this as the inhomogeneous measure, or the measure with inhomogeneous weights, since in the interpretation of skew plane partitions as stacks of cubes it corresponds to giving different weights to cubes depending on which diagonal slice they fall on. Okounkov and Reshetikhin introduced a broad family of probability measures on sequences of partitions, called the Schur Process, of which the measure \eqref{eq:qinhomog} is a specialization \cite{OR1}. The Schur Process can be thought of as a time-dependent version of a measure on partitions introduced by Okounkov in \cite{O1}, called the Schur measure. In our notation, the parameter $t$ plays the role of time. In \cite{OR2} Okounkov and Reshetikhin showed that the measures from the Schur Process are determinantal point processes, and gave a contour integral representation for the correlation kernel (see Theorem \ref{thm:fin-corr2} below). Using this they studied the scaling limit of random skew plane partitions with respect to the homogeneous measure \eqref{eq:qvolume} when the back wall approaches in the limit a piecewise linear curve with lattice slopes $\pm 1$ \cite{OR1}, \cite{OR2}. Piecewise linear walls of non-lattice slopes were studied in \cite{BMRT}. Arbitrary piecewise linear back walls were considered in \cite{M}. \subsection{Main results} While the correlation kernel of the underlying determinantal process in the case of inhomogeneous weights was computed in \cite{OR2}, the thermodynamic limit has never been studied. In this paper we carry out such a study in the special case when the weights are periodic. For fixed $\alpha\geq1$ we define the weights $q_t$ by \begin{equation} \label{eq:periodicweights} q_t=q_{t,r}=\left\{ \begin{array}{ll} e^{-r}\alpha,&t\text{ is odd}\\ e^{-r}\alpha^{-1},&t\text{ is even} \end{array}\right., \end{equation} and study the scaling limit of the system when $r\rightarrow 0$.
Note that \eqref{eq:periodicweights} turns into the homogeneous measure \eqref{eq:qvolume} when $\alpha=1$. In \cite{M} it was shown that for the homogeneous measure the scaling limit only depends on the macroscopic limit of the scaled back wall. In the case of the measure with weights \eqref{eq:periodicweights} when $\alpha\neq 1$, the system is very sensitive to microscopic changes to the back walls, and unlike the homogeneous case, the back walls have to be chosen carefully in order for the measure to be well defined (i.e. for the partition function to be finite). As a representative example, we study random skew plane partitions with the back wall given by a staircase, which in the scaling limit converges to a piecewise linear curve with three linear sections of slopes $1$, $0$ and $-1$ respectively. \subsubsection{Bulk correlations and the frozen boundary} In Section \ref{sec:bulkCorrKer} we compute the asymptotics of the correlation kernel in the bulk. In the homogeneous case the distribution of the horizontal tiles in the neighbourhood of a point in the bulk converges to a translation invariant ergodic Gibbs measure \cite{BMRT}; such measures were classified by Kenyon \cite{KenyonGibbs}. In the inhomogeneous case this is not the case any more, since when $\alpha>1$ the process is only $2\mathbb{Z}\times\mathbb{Z}$ translation invariant. In Section \ref{sec:frozenBoundary} we examine the frozen boundary in the three different regimes of interest: \begin{itemize} \item \textit{Unbounded floor:} The main difference between the inhomogeneous and homogeneous cases is that in the former the system is bounded everywhere except in the four tentacles that arise, whereas in the latter the system is always unbounded at linear sections of non-lattice slopes. In particular, the staircase-shaped back wall at the linear section of slope $0$ stays frozen in the scaling limit in the inhomogeneous case.
\begin{figure}[ht] \caption{\label{fig:unbddSample} The frozen boundary and an exact sample in the unbounded case. On the left side $\alpha=1$, while on the right side $\alpha>1$.} \includegraphics[width=13.8cm]{skewpp-inf-frozen} \includegraphics[width=6.9cm]{skewpp-inf-hom0-scale} \includegraphics[width=6.9cm]{skewpp-inf-2per0-scale-decor} \end{figure} Note that the shape of this frozen boundary can be predicted by methods from \cite{KOS}. A plot of the frozen boundary and an exact sample are shown in Figure \ref{fig:unbddSample}. \item \textit{Bounded floor:} In this case, generically, the floor will be pentagonal in the scaling limit. The system develops four turning points near the vertical boundary, two on the left and two on the right. A notable difference from the homogeneous case is the nature of the turning points. We discuss this in detail in Section \ref{sec:turningPointsOverview}. A plot of the frozen boundary and an exact sample are shown in Figure \ref{fig:bddSample}. \begin{figure}[ht] \caption{\label{fig:bddSample} The frozen boundary and an exact sample in the bounded case. On the left side $\alpha=1$, while on the right side $\alpha>1$.} \includegraphics[width=13.8cm]{skewpp-bdd-frozen} \includegraphics[width=6.9cm]{skewpp-bdd-hom0-scale} \includegraphics[width=6.9cm]{skewpp-bdd-2per1-scale} \end{figure} \item \textit{Triangular floor:} When the staircase back wall is as large as possible, the floor in the scaling limit is triangular. The corresponding limit for the homogeneous measure was studied in \cite{BMRT}, where it was shown that the disordered region is infinitely tall everywhere. In contrast, we find that when $\alpha>1$, the disordered region is bounded. A plot of the frozen boundary and an exact sample are shown in Figure \ref{fig:triangleSample}. \begin{figure}[ht] \caption{\label{fig:triangleSample} The frozen boundary and an exact sample in the triangular case.
On the left side $\alpha=1$, while on the right side $\alpha>1$.} \includegraphics[width=13.8cm]{skewpp-tri-frozen} \includegraphics[width=6.9cm]{skewpp-tri-hom0-scale} \includegraphics[width=6.9cm]{skewpp-tri-2per0-scale} \end{figure} \end{itemize} All exact samples were generated using Borodin's Schur Dynamics \cite{BorSchurDynam}. \subsubsection{Turning points} \label{sec:turningPointsOverview} In the bounded cases the system develops special points called turning points near the vertical boundaries, which we study in Section \ref{sec:turningPoints}. A turning point is a point where the disordered region meets two different frozen regions. In the case of homogeneous weights the system develops one turning point near each extreme. When the back wall is piecewise linear with lattice slopes these points were studied by Okounkov and Reshetikhin in \cite{OR3}, where it was proven that the local process at the turning point is the same as the GUE minor process, that is, the point process of the eigenvalues of the principal minors of a random $N\times N$ GUE matrix when $N\rightarrow\infty$. It was conjectured in \cite{OR3} that this holds universally, independently of boundary conditions. Results toward this were obtained in \cite{JohNordGUEminors} and \cite{GorinPanova}. In the case of inhomogeneous weights the local process at turning points is not the GUE minor process. When $\alpha>1$, the turning point splits into two turning points separated by a frozen region where two types of lozenges coexist in a deterministic pattern, a so-called semi-frozen region (see the right side of Figure \ref{fig:doubleTurningPoint-bottom} and left side of Figure \ref{fig:doubleTurningPoint-top}). Essentially, at each turning point the system transitions from a wall with lattice slope to a wall with slope $0$.
Whereas in the homogeneous case at a turning point at distance $n$ from the edge you see $n$ horizontal lozenges (corresponding to $n$ eigenvalues of an $n\times n$ matrix), in the inhomogeneous case you see $\lfloor \frac{n+1}2\rfloor$ or $\lfloor \frac n2\rfloor$ depending on whether it is the top or bottom turning point (see Figures \ref{fig:doubleTurningPoint-bottom} and \ref{fig:doubleTurningPoint-top}). Remarkably, if we only look at the points at distances of a fixed parity from the edge, we recover the GUE minor process. As a consequence, at each turning point we see two non-trivially correlated copies of the GUE minor process. It is noteworthy that the GUE minor process we see when we look at the slices of only one parity implies interlacing of slices. This is not a result of a geometric constraint, unlike the case of a single turning point found in the homogeneous case. \begin{figure}[ht] \caption{\label{fig:doubleTurningPoint-bottom} The bottom turning point on the right side of Figure \ref{fig:bddSample}. The picture is rotated 90 degrees clockwise.} \includegraphics[width=14cm]{periodicTurningPoint-bottom} \caption{\label{fig:doubleTurningPoint-top} The top turning point on the right side of Figure \ref{fig:bddSample}. The picture is rotated 90 degrees clockwise.} \includegraphics[width=14cm]{periodicTurningPoint-top} \end{figure} \subsubsection{An intermediate regime} In Section \ref{sec:intermediateWeights} we consider the scaling limit of random skew plane partitions under the measure \eqref{eq:qinhomog} when the weights $q_t$ are given by \begin{equation} \label{eq:intermperiodicweights} q_t=\left\{ \begin{array}{ll} e^{-r+\gamma r^{1/2}},&t\text{ is even}\\ e^{-r-\gamma r^{1/2}},&t\text{ is odd} \end{array}\right., \end{equation} where $\gamma>0$ is an arbitrary constant. This is an intermediate regime between the homogeneous weights and the inhomogeneous weights given by \eqref{eq:periodicweights}.
The macroscopic limit shape and correlations in the bulk are the same as in the homogeneous case, so periodicity disappears in the limit. However, the local point process at turning points is different from the homogeneous one. In particular, while we only have one turning point near each edge, we still do not have the GUE minor process, but rather a one-parameter deformation of it. \subsection{Acknowledgements} I am very grateful to Richard Kenyon for suggesting the study of periodic weights, and to Richard Kenyon, Nicolai Reshetikhin, Paul Zinn-Justin, Andrei Okounkov and Vadim Gorin for many useful discussions on this subject. The project was started during the Random Spatial Processes program at MSRI, and I am very grateful to MSRI, its staff and the organizers of the special program for their excellent hospitality. \section{Background and notation} \subsection{Notation} Let $u^\lambda_1<u^\lambda_2<\ldots<u^\lambda_{n-1}$ denote the horizontal coordinates of the corners on the boundary of the Young diagram $\lambda$. For convenience let $u^\lambda_0$ and $u^\lambda_n$ be the horizontal coordinates of respectively the left-most and right-most points of the back wall. We have $u^\lambda_0=-c$ and $u^\lambda_n=d$. Let \begin{equation*} \tilde{u}^\lambda=\sum_{i=1}^{n-1}(-1)^i u^\lambda_i, \end{equation*} and for $t\in[-c,d]$ define the piecewise linear function $b_\lambda(t)$ with slopes $\pm 1$ by \begin{equation*} b_\lambda(t)=\sum_{i=0}^{n} (-1)^{i}|t-\tilde{u}^\lambda-u^\lambda_i|+u^\lambda_0-u^\lambda_n. \end{equation*} The function $b_\lambda(t)$ gives the back wall of skew plane partitions with boundary $\lambda$ (see Figure \ref{fig:bLambda}).
\begin{figure}[ht] \caption{\label{fig:bLambda} The coordinates $u^\lambda_i$ and the graph of $b_\lambda(t)$ when $\lambda=\{3,2,2,1\}$ and $c=6,d=6$.} \includegraphics[width=8cm]{GraphOfBandUs.pdf} \end{figure} As can be seen from Figure \ref{fig:boxes}, skew plane partitions can be identified with tilings of regions of $\mathbb{R}^2$ with 3 types of rhombi, called lozenges. Scale the axes in such a way that the centers of horizontal tiles are on the lattice $\mathbb{Z}\times \frac 12 \mathbb{Z}$. Note that the positions of the horizontal tiles completely determine the plane partition. Given a subset $U=\{(t_1,h_1),\ldots,(t_n,h_n)\}\subset \mathbb{Z}\times \frac 12 \mathbb{Z}$, define the corresponding local correlation function $\rho_{\lambda,\bar{q}}(U)$ as the probability for a random tiling taken from the above probability space to have horizontal tiles centered at all positions $(t_i,h_i)_{i=1}^n$. Okounkov and Reshetikhin \cite{OR2} showed that for arbitrary $\lambda$ and $\bar{q}$, the point process of horizontal lozenges on $\mathbb{Z}\times\frac12\mathbb{Z}$ is determinantal with the following kernel: \begin{theorem}[Theorem 2, part 3 \cite{OR2}] \label{thm:fin-corr2} The correlation functions $\rho_{\lambda,\bar{q}}$ are determinants \begin{equation*} \rho_{\lambda,\bar{q}}(U) =\det(K_{\lambda,\bar{q}}((t_i,h_i),(t_j,h_j)))_{1\leq i,j\leq n}, \end{equation*} where the correlation kernel $K_{\lambda,\bar{q}}$ is given by the double integral \begin{multline} \label{eq:main-corr2} K_{\lambda,\bar{q}}((t_1,h_1),(t_2,h_2)) =\\ \frac{1}{(2\pi \mathfrak{i})^2} \int_{z\in C_z}\int_{w\in C_w} \frac{\Phi_{b_\lambda,\bar{q}}(z,t_1)}{\Phi_{b_\lambda,\bar{q}}(w,t_2)} \frac{1}{z-w}z^{-h_1+\frac 12 b_\lambda(t_1)+\frac 12}w^{h_2-\frac 12 b_\lambda(t_2)+\frac12}\frac{dz\ dw}{zw}, \end{multline} where $b_\lambda(t)$ is the function giving the back wall corresponding to $\lambda$ as in Figure \ref{fig:bLambda}, \begin{align}
\nonumber\Phi_{b_\lambda,\bar{q}}(z,t)&=\frac{\Phi^-_{b_\lambda,\bar{q}}(z,t)}{\Phi^+_{b_\lambda,\bar{q}}(z,t)},\\ \label{eq:Phis}\Phi^+_{b_\lambda,\bar{q}}(z,t)&=\prod_{m>t, m\in D^+, m\in \mathbb{Z}+\frac 12}(1-zx_m^+),\\ \nonumber\Phi^-_{b_\lambda,\bar{q}}(z,t)&=\prod_{m<t, m\in D^-, m\in \mathbb{Z}+\frac 12}(1-z^{-1}x_m^-), \end{align} the parameters $x^\pm_m$ satisfy the conditions \begin{align} \label{eq:xs-qs} \nonumber\frac{x^+_{m+1}}{x^+_m}&=q_{m+\frac 12},\ u^\lambda_{2i-1}<m<u^\lambda_{2i}-1,\ \text{or}\ m>u^\lambda_{n-1}, \nonumber\\x^+_{u^\lambda_{2i}-\frac 12}x^-_{u^\lambda_{2i}+\frac 12}&=q_{u^\lambda_{2i}}^{-1}, \nonumber\\x^-_{u^\lambda_{2i-1}-\frac 12}x^+_{u^\lambda_{2i-1}+\frac 12}&=q_{u^\lambda_{2i-1}}, \nonumber\\\frac{x^-_m}{x^-_{m+1}}&=q_{m+\frac 12},\ u^\lambda_{2i}<m<u^\lambda_{2i+1}-1,\ \text{or}\ m<u^\lambda_1, \end{align} $m\in D^{\pm}$ means the back wall at $t=m$, i.e. $b_\lambda(t)$ at $t=m$, has slope $\mp 1$, and $C_z$ (respectively $C_w$) is a simple positively oriented contour around 0 such that its interior contains none of the poles of $\Phi_{b_\lambda}(\cdot,t_1)$ (respectively all of the poles of $\Phi_{b_\lambda}(\cdot,t_2)^{-1}$). Moreover, if $t_1< t_2$, then $C_z$ is contained in the interior of $C_w$, and otherwise, $C_w$ is contained in the interior of $C_z$. \end{theorem} Note that the conditions \eqref{eq:xs-qs} can be rewritten as \begin{align} \label{eq:xs-prodqs} x^-_m&=a^{-1}q_{u^\lambda_0+1}^{-1}\cdot\dots\cdot q_{m-\frac 12}^{-1}, \\\nonumber x^+_m&=aq_{u^\lambda_0+1}\cdot\dots\cdot q_{m-\frac 12}, \end{align} where $a>0$ is an arbitrary parameter. \subsection{The scaling limit} For $r>0$ let $\lambda_r$ be a staircase-shaped partition. Our goal is to study random skew plane partitions $\pi\in\Pi_{\lambda_r}^{c_r,d_r}$ under the measure \eqref{eq:qinhomog} with inhomogeneous weights given by \eqref{eq:periodicweights} in the limit $r\rightarrow 0^{+}$.
Since the typical scale of such a random skew plane partition is $\frac 1r$, we scale plane partitions in all directions by $r$, and let $\tau=rt$, $\chi=rh$ be the rescaled coordinates. Let $B_{\lambda_r}(\tau)$ be the function giving the scaled back wall. We have $B_{\lambda_r}(\tau)=rb_{\lambda_r}(\tau/r)$. We assume that $\lim_{r\rightarrow 0} rc_r=-V_0$, $\lim_{r\rightarrow 0} rd_r=V_3$, and the scaled back walls given by $B_{\lambda_r}(\tau)$ converge point-wise and uniformly to the function $V(\tau)$ defined by \begin{equation} \label{eq:backwall} V(\tau)=-\frac12|\tau-V_1|-\frac12|\tau-V_2|, \end{equation} where $V_1=-V_2$ (see Figure \ref{fig:backwall}). \begin{figure} \caption{\label{fig:backwall}The back wall} \includegraphics[width=6cm]{TwoPerBackwall} \end{figure} If $\alpha>1$, in order for the measure \eqref{eq:qinhomog} to be well defined, we must impose two constraints. First, the weight at inner corners must be less than $1$. Thus, we must have that $u^\lambda_1$ and $u^\lambda_{n-1}$ are even. For simplicity, we will also assume that $u^\lambda_0,u^\lambda_n$ are odd. Second, the length of the section with average slope $0$ should be large enough so that the total weight of the strip of width $1$ in this section is less than $1$ (see Figure \ref{fig:stripWeight}). Since we are scaling by $r$, there are $(V_2-V_1)/r$ microscopic linear sections between $V_1$ and $V_2$. Hence, the weight of the strip in question is \begin{equation*} \alpha(e^{-r})^{\frac{V_2-V_1}r}. \end{equation*} Thus, we must have \begin{equation} \label{eq:alphaRestriction} e^{-(V_2-V_1)}\alpha<1. 
\end{equation} \begin{figure}[ht] \caption{\label{fig:stripWeight} Horizontal strip of width 1.} \includegraphics[width=5cm]{StripWidth1.pdf} \end{figure} \section{The correlation kernel in the bulk} \label{sec:bulkCorrKer} \subsection{Computation of the exponentially leading term of the correlation kernel} In order to understand the asymptotic local correlations near a given macroscopic point $(\tau,\chi)$, we need to study the limit of the correlation kernel \eqref{eq:main-corr2} when $\lim_{r\rightarrow 0}rt_1=\lim_{r\rightarrow 0}rt_2=\tau$, $\lim_{r\rightarrow 0}rh_1=\lim_{r\rightarrow 0}rh_2=\chi$, $\Delta t=t_1-t_2$ and $\Delta h=h_1-h_2$ are constants, and $t_1, t_2$ have fixed parity, independent of $r$. To do this, we apply the saddle point method to the integral representation of the correlation kernel \eqref{eq:main-corr2}. The first step is to compute the exponentially leading term of the integrand. From \eqref{eq:Phis} we have \begin{align*} -r\ln\Phi_+(z,t_i) =&-\sum_{m>t_i}r\frac 12(1-b'_{\lambda_r}(m))\ln(1-e^{-r(m-\frac 12-u^\lambda_0)}\alpha^{-p_m}a_r z) \\=&-\sum_{m>t_i}r\frac 12(1-B'_{\lambda_r}(m))\ln(1-e^{-rm}\alpha^{-p_m}\tilde{a}_r z), \end{align*} where \begin{equation*} p_m=\left\{ \begin{array}{ll} 0,&m-\frac 12 - u^\lambda_0 \text{ is even} \\1,&m-\frac 12 - u^\lambda_0 \text{ is odd} \end{array} \right., \end{equation*} and $\tilde{a}_r=a_re^{r(\frac12+u^\lambda_0)}$. Making a change of variable, we obtain \begin{align*} -r\ln\Phi_+(z,t_i) =&-\frac 12\sum_{M\in(\tau,\infty)\cap r(2\mathbb{Z}+\frac 12)}r(1-B'_{\lambda_r}(M))\ln(1-e^{- M}\alpha^{-1}\tilde{a}_rz) \\&-\frac 12\sum_{M\in(\tau,\infty)\cap r(2\mathbb{Z}+\frac32)}r(1-B'_{\lambda_r}(M))\ln(1-e^{- M}\tilde{a}_rz), \end{align*} where $\tau=rt_i$. 
Since $B_{\lambda_r}(M)$ converges to $V(M)$, we have \begin{align*} \lim_{r\rightarrow 0}-r\ln\Phi_+(z,t_i) =&-\frac12\int\limits_{(\tau,\infty)\cap(V_2,V_3)}\ln(1-e^{- M}\tilde{a}\alpha^{-1}z)+\ln(1-e^{- M}\tilde{a}z)dM \\&-\frac12\int\limits_{(\tau,\infty)\cap(V_1,V_2)}\ln(1-e^{- M}\tilde{a}\alpha^{-1}z)dM. \end{align*} Similar computations for $\Phi_-(z,t_i)$, together with setting the arbitrary parameter $\tilde{a}_r$ to $\alpha^{\frac 12}$, give us \begin{equation} \label{eq:leadAsympCorrKer} K_{\lambda_r,\bar{q}_r}((t_1,h_1),(t_2,h_2)) =\frac{1}{(2\pi \mathfrak{i})^2} \int_{z\in C_z}\int_{w\in C_w} e^{\frac{S_{\tau,\chi}(z)-S_{\tau,\chi}(w)}r+O(1)}\frac{dz\ dw}{z-w}, \end{equation} where \begin{align} \label{eq:S} S_{\tau,\chi}(z) =&\frac12\int\limits_{V_0}^{\min(V_1,\tau)}\ln(1-e^{ M}\alpha^{\frac 12} z^{-1})+\ln(1-e^{ M}\alpha^{-\frac 12}z^{-1})dM \\\nonumber&+\frac12\int\limits_{\min(V_1,\tau)}^{\min(V_2,\tau)}\ln(1-e^{ M}\alpha^{-\frac 12}z^{-1})dM \\\nonumber&-\frac12\int\limits_{\max(\tau,V_2)}^{V_3}\ln(1-e^{- M}\alpha^{-\frac 12}z)+\ln(1-e^{- M}\alpha^{\frac 12}z)dM \\\nonumber&-\frac12\int\limits_{\max(\tau,V_1)}^{\max(\tau,V_2)}\ln(1-e^{- M}\alpha^{-\frac 12}z)dM -(\chi-\frac 12 V(\tau))\ln z. \end{align} \subsection{Critical points} To apply the saddle point method we need to study the critical points of the function $S_{\tau,\chi}$. To simplify formulas, we set $v=V_3=-V_0$, and $u=V_2=-V_1$.
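The differentiation of $S_{\tau,\chi}$ carried out below reduces each integral in \eqref{eq:S} to boundary terms: for any constant $c$, \begin{equation*} z\frac{\partial}{\partial z}\ln(1-e^{-M}cz)=\frac{-e^{-M}cz}{1-e^{-M}cz}=-\frac{\partial}{\partial M}\ln(1-e^{-M}cz), \end{equation*} and similarly $z\frac{\partial}{\partial z}\ln(1-e^{M}cz^{-1})=-\frac{\partial}{\partial M}\ln(1-e^{M}cz^{-1})$, so applying $z\frac{d}{dz}$ under each integral sign leaves only the values of the logarithms at the endpoints of integration. Writing $1-e^{-M}cz=-ce^{-M}(z-e^{M}c^{-1})$ and $1-e^{M}cz^{-1}=z^{-1}(z-e^{M}c)$ turns each pair of boundary terms into one of the logarithmic ratios below; the prefactors cancel inside each ratio up to $z$-independent constants, which together with the contribution $-(\chi-\frac12V(\tau))$ of the last term of \eqref{eq:S} assemble into the constant in the last line of the resulting expression.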
Differentiating the formula for $S_{\tau,\chi}(z)$ and using \eqref{eq:backwall} we obtain \begin{align} \label{eq:spBounded} z\frac{d}{dz}S_{\tau,\chi}(z) =&-\frac12\ln\left(\frac{z-e^{\min(-u,\tau)}\alpha^{\frac12}}{z-e^{- v}\alpha^{\frac12}}\right) -\frac12\ln\left(\frac{z-e^{\min(u,\tau)}\alpha^{-\frac12}}{z-e^{- v}\alpha^{-\frac12}}\right) \\\nonumber&+\frac12\ln\left(\frac{z-e^{ v}\alpha^{\frac12}}{z-e^{\max(-u,\tau)}\alpha^{\frac12}}\right) +\frac12\ln\left(\frac{z-e^{ v}\alpha^{-\frac12}}{z-e^{\max(u,\tau)}\alpha^{-\frac12}}\right) \\\nonumber&-(\chi-\frac12 \tau+v). \end{align} Let $z_{\tau,\chi}$ denote a critical point of $S_{\tau,\chi}(z)$. Define \begin{align*} \mathfrak{U}: =&(e^{- v}\alpha^{\frac12},e^{\min(-u,\tau)}\alpha^{\frac12}) \cup(e^{- v}\alpha^{-\frac12},e^{\min(u,\tau)}\alpha^{-\frac12}) \\&\cup(e^{\max(-u,\tau)}\alpha^{\frac12},e^{ v}\alpha^{\frac12}) \cup(e^{\max(u,\tau)}\alpha^{-\frac12},e^{ v}\alpha^{-\frac12}). \end{align*} It is easy to check that the critical points are not near the set $\mathfrak{U}$, in the sense that for any $(\tau,\chi)$ we have \begin{equation*} \min_{x\in\mathfrak{U}}|z_{\tau,\chi}-x|>0. \end{equation*} \subsubsection{The number of complex critical points everywhere} We show that for any pair $(\tau,\chi)$, $S_{\tau,\chi}$ has at most one non-real complex conjugate pair of critical points. Let \begin{equation*} P_1(z)=e^{\chi-\frac12\tau+v} (z-e^{- v}\alpha^{-\frac12}) (z-e^{- v}\alpha^{\frac12}) (z-e^{ v}\alpha^{-\frac12}) (z-e^{ v}\alpha^{\frac12}) \end{equation*} and \begin{equation*} P_2(z)= (z-e^{ \min(-u,\tau)}\alpha^{\frac12}) (z-e^{ \min(u,\tau)}\alpha^{-\frac12}) (z-e^{ \max(-u,\tau)}\alpha^{\frac12}) (z-e^{ \max(u,\tau)}\alpha^{-\frac12}). \end{equation*} Exponentiating $2 z\frac d{dz}S_{\tau,\chi}(z)$ we see that $P_1(z_{\tau,\chi})-P_2(z_{\tau,\chi})=0$. 
It follows from $-v<u,\tau<v$ and \eqref{eq:alphaRestriction} that \begin{equation*} e^{- v}\alpha^{-\frac12} <e^{- v}\alpha^{\frac12} <e^{ \max(u,\tau)}\alpha^{-\frac12} <e^{ v}\alpha^{-\frac12} <e^{ v}\alpha^{\frac12} \end{equation*} and \begin{equation*} e^{- v}\alpha^{-\frac12} <e^{ \min(-u,\tau)}\alpha^{\frac12}, e^{ \min(u,\tau)}\alpha^{-\frac12}, e^{ \max(-u,\tau)}\alpha^{\frac12}, e^{ \max(u,\tau)}\alpha^{-\frac12} <e^{ v}\alpha^{\frac12}. \end{equation*} Thus, \begin{align*} P_1(e^{- v}\alpha^{-\frac12})-P_2(e^{- v}\alpha^{-\frac12})<0,\\ P_1(e^{ v}\alpha^{\frac12})-P_2(e^{ v}\alpha^{\frac12})>0 \end{align*} and \begin{equation*} P_1(e^{ \max(u,\tau)}\alpha^{-\frac12})-P_2(e^{ \max(u,\tau)}\alpha^{-\frac12})<0. \end{equation*} By the intermediate value theorem $P_1(z)-P_2(z)$ has at least two distinct real roots. Since $P_1(z)-P_2(z)$ is a degree $4$ polynomial in $z$, the number of non-real complex roots is either $0$ or $2$. Equivalently, the number of non-real complex critical points of $S_{\tau,\chi}$ is either $0$ or $2$. \subsubsection{Critical points when $|\chi|\gg 1$} Fix $\tau$. If $|\chi|\rightarrow \infty$, then looking at the real part of \eqref{eq:spBounded} it is easy to see that $|z_{\tau,\chi}|\rightarrow e^{ v}\alpha^{\pm \frac12}$ or $e^{- v}\alpha^{\pm \frac12}$ if $\chi<0$, and $|z_{\tau,\chi}|\rightarrow e^{\min(-u,\tau)}\alpha^{\frac 12}$, $e^{\max(-u,\tau)}\alpha^{\frac 12}$, $e^{\min(u,\tau)}\alpha^{-\frac 12}$ or $e^{\max(u,\tau)}\alpha^{-\frac 12}$ if $\chi>0$. In all these cases a direct application of \cite[Lemma 2.3]{M} shows that if $\Im\left( z\frac{d}{dz}S_{\tau,\chi}(z_{\tau,\chi})\right)= 0$, then $z_{\tau,\chi}$ must be real unless $\tau=\pm u$, $\chi>0$ and $u\neq v$. 
In the latter case $S_{\tau,\chi}$ has exactly one pair of complex conjugate critical points, the asymptotically leading term of which can be explicitly computed to give \begin{multline*} z_{\tau,\chi}=e^{\tau}\alpha^{\mp\frac12}+\mathfrak{i} \frac{ (e^{\tau}\alpha^{\mp\frac12}-e^{-v}\alpha^{\frac12})^{\frac12} (e^{\tau}\alpha^{\mp\frac12}-e^{-v}\alpha^{-\frac12})^{\frac12}} {(e^{\tau}\alpha^{\mp\frac12}-e^{-u}\alpha^{\pm\frac12})^{\frac12} } \\\times\frac{(e^{v}\alpha^{\frac12}-e^{\tau}\alpha^{\mp\frac12})^{\frac12} (e^{v}\alpha^{-\frac12}-e^{\tau}\alpha^{\mp\frac12})^{\frac12}} {(e^{\tau}\alpha^{\mp\frac12}-e^{u}\alpha^{\pm\frac12})^{\frac12}}e^{-\chi+\frac12\tau-v+O(e^{-\chi})} \end{multline*} or its conjugate, where the top sign is chosen when $\tau=u$ and the bottom sign when $\tau=-u$. \subsection{The correlation kernel in the bulk} Suppose $(\tau,\chi)$ is such that $S_{\tau,\chi}$ has a pair of complex conjugate critical points. Let the critical points be $z_{\tau,\chi}$ and $\bar{z}_{\tau,\chi}$, with $\Im z_{\tau,\chi}>0$. Following the saddle point method, we deform the contours of integration in \eqref{eq:main-corr2} to new contours $C'_z$, $C'_w$ as follows: \begin{itemize} \item the contours $C'_z$, $C'_w$ pass through the critical points $z_{\tau,\chi}$, $\bar{z}_{\tau,\chi}$, \item along the contour $C'_z$ we have $\Re S_{\tau,\chi}(z)\leq \Re S_{\tau,\chi}(z_{\tau,\chi})$, with equality only at the critical points, \item along the contour $C'_w$ we have $\Re S_{\tau,\chi}(w)\geq \Re S_{\tau,\chi}(z_{\tau,\chi})$, with equality only at the critical points. \end{itemize} During this deformation the contours cross each other along an arc connecting the conjugate critical points, and we pick up residues along the arc. The arc will cross the real axis at a positive point if $t_1\geq t_2$ and at a negative point otherwise. 
Since along the new contours we have \begin{equation*} \Re S_{\tau,\chi}(z)\leq \Re S_{\tau,\chi}(z_{\tau,\chi})\leq \Re S_{\tau,\chi}(w), \end{equation*} with equality only at the critical points, where the contours cross transversally, and the leading term of the integrand is $e^{\frac{S_{\tau,\chi}(z)-S_{\tau,\chi}(w)}r}$, the contribution of the double integral along the new contours is exponentially small as $r\rightarrow 0$, and the main contribution comes from the residue term. Thus, we have \begin{multline*} \lim_{r\rightarrow 0}K_{\lambda_r,\bar{q}_r}((t_1,h_1),(t_2,h_2)) =\\\lim_{r\rightarrow 0} \frac{1}{2\pi \mathfrak{i}} \int_{\bar{z}_{\tau,\chi}}^{z_{\tau,\chi}} \frac{\Phi_{b_{\lambda_r},\bar{q}_r}(z,t_1)}{\Phi_{b_{\lambda_r},\bar{q}_r}(z,t_2)} z^{-\Delta h+\frac 12 (b_{\lambda_r}(t_1)-b_{\lambda_r}(t_2))-1} dz. \end{multline*} For $e\in\{0,1\}$ denote \begin{multline*} m_e(t_1,t_2)=\sign(\Delta t) \\\times \left|\left\{m\in D^-:\min(t_1,t_2)<m<\max(t_1,t_2),\ m-\frac12\in2\mathbb{Z}+e\right\}\right|, \end{multline*} and let $m(t_1,t_2)=m_0(t_1,t_2)+m_1(t_1,t_2)$. Using \eqref{eq:Phis}, we obtain \begin{multline} \label{eq:corr-bulk} \lim_{r\rightarrow 0}K_{\lambda_r,\bar{q}_r}((t_1,h_1),(t_2,h_2)) =\lim_{r\rightarrow 0} (-1)^{m(t_1,t_2)}(\alpha^{-1/2}e^\tau)^{m_0(t_1,t_2)}(\alpha^{1/2}e^\tau)^{m_1(t_1,t_2)} \\\times\frac{1}{2\pi \mathfrak{i}} \int_{\bar{z}_{\tau,\chi}}^{z_{\tau,\chi}} z^{-\Delta h-\frac 12 \Delta t-1} \prod_{\stackrel{\min(t_1,t_2)<m<\max(t_1,t_2)}{m-\frac12\in\mathbb{Z}}}(1-zx_m^+)^{\sign \Delta t} dz. \end{multline} Since the correlations are given by determinants and $m_e(t_1,t_2)$ is of the form $f(t_1)-f(t_2)$, the terms outside of the integral will cancel when taking determinants. Removing those terms we obtain the following theorem.
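The removal of such terms does not change the correlation functions, since multiplying a kernel by $e^{f(t_1)-f(t_2)}$ amounts to conjugation by the diagonal matrix $D=\operatorname{diag}(e^{f(t_1)},\ldots,e^{f(t_k)})$: \begin{equation*} \det\left(e^{f(t_i)-f(t_j)}K((t_i,h_i),(t_j,h_j))\right)_{1\leq i,j\leq k} =\det\left(K((t_i,h_i),(t_j,h_j))\right)_{1\leq i,j\leq k}, \end{equation*} so the two kernels define the same determinantal point process.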
\begin{theorem} The correlation functions of the system near a point $(\tau, \chi)$ in the bulk are determinants of the kernel \begin{equation*} K^{\alpha}_{\chi,\tau}(t_1,t_2,\Delta h) = \int_\gamma (1- e^{-\tau}\alpha^{\frac12} z)^{\frac{\Delta t+c}2} (1- e^{-\tau}\alpha^{-\frac12} z)^{\frac{\Delta t-c}2} z^{-\Delta h -\frac{\Delta t}{2}} \frac{dz}{2\pi\mathfrak{i} z}, \end{equation*} where \begin{equation*} c=\left\{ \begin{array}{rl} 1,&\Delta t\text{ is odd and }t_1\text{ is even}\\ 0,&\Delta t\text{ is even}\\ -1,&\text{otherwise} \end{array} \right., \end{equation*} and the integration contour connects the two non-real critical points of $S_{\tau,\chi}(z)$, crossing the real line in the interval $(0,e^{\tau}\alpha^{-\frac 12})$ if $\Delta t\geq 0$ and in $(-\infty, 0)$ otherwise. \end{theorem} \begin{remark} The local point process in the bulk is not $\mathbb{Z}\times\mathbb{Z}$ translation invariant as in the homogeneous case, but rather $2\mathbb{Z}\times \mathbb{Z}$ translation invariant. \end{remark} \section{The frozen boundary} \label{sec:frozenBoundary} The region of the $(\tau,\chi)$ plane consisting of the points $(\tau,\chi)$ where $S_{\tau,\chi}$ has complex conjugate critical points is called the disordered region. Suppose $(\tau,\chi)$ is in the complement of the closure of the disordered region. As we deform the contours of integration in \eqref{eq:main-corr2} following the saddle point method, the contours either do not cross at all, in which case the correlation kernel converges to $0$, or cross along a closed curve winding once around the origin, in which case the correlation kernel converges to $1$. Thus, at such points $(\tau,\chi)$ horizontal lozenges appear with probability $0$ or $1$ and we have a frozen region. The boundary of the disordered region is called the frozen boundary. It consists of points $(\tau,\chi)$ such that $S_{\tau,\chi}$ has double real critical points, i.e.
points $(\tau,\chi)$ such that there exists $z\in\mathbb{R}$ satisfying $S'_{\tau,\chi}(z)=S''_{\tau,\chi}(z)=0$. We study this curve in three different regimes. \subsection{Infinite floor} Consider skew plane partitions with an unbounded floor. In our notation this means $V_0=-\infty$, $V_3=\infty$ and $u=V_2=-V_1$. We are interested in the points $(\tau,\chi)$ where $S_{\tau,\chi}$ has double real critical points. We will show that for every $z\in\mathbb{R}\backslash\{0\}$, there is a unique pair $(\tau,\chi)$ for which it is a double real critical point. Thus, the set of points $(\tau,\chi)$ with double real critical points can be parametrized by $z$. We will write $(\tau(z),\chi(z))$ for this curve. Differentiating \eqref{eq:spBounded} with $v=\infty$ we obtain \begin{align} \label{eq:sppInfinite} \frac{d}{dz}\left( z\frac{d}{dz} S_{\tau,\chi}(z)\right) =&\frac1z-\frac12\frac1{z-e^{\tau}\alpha^{\frac12}} -\frac12\frac1{z-e^{\tau}\alpha^{-\frac12}} \\\nonumber&-\frac12\frac1{z-e^{- u}\alpha^{\frac12}} -\frac12\frac1{z-e^{ u}\alpha^{-\frac12}}. \end{align} We can solve the equation \begin{equation} \label{eq:sppInfiniteIs0} \frac{d}{dz}\left( z\frac{d}{dz} S_{\tau,\chi}(z)\right)=0 \end{equation} for $\tau$ in terms of $z$. Once we have $z$ and $\tau$, $\chi$ can be determined uniquely from $ z\frac{d}{dz} S_{\tau,\chi}(z)=0$. Setting $$\mathfrak{A}=\alpha^{\frac 12}+\alpha^{-\frac 12}$$ and $$\mathfrak{B}=e^{- u}\alpha^{\frac12}+e^{ u}\alpha^{-\frac12},$$ equation \eqref{eq:sppInfiniteIs0} is equivalent to \begin{equation} \label{eq:sppInfiniteQuadratic} e^{2\tau}(\mathfrak{B}z-2)-e^{\tau}\mathfrak{A}z(z^2-1)+z^3(2z-\mathfrak{B})=0, \end{equation} which is quadratic in $e^{\tau}$. Let $\tau^\pm(z)$ be the solutions \begin{equation} \label{eq:tauPMz} e^{\tau^\pm(z)}=\frac{\mathfrak{A}z(z^2-1)\pm\sqrt{\mathfrak{A}^2(z^3-z)^2-4z^3(\mathfrak{B}z-2)(2z-\mathfrak{B})}}{2(\mathfrak{B}z-2)}.
\end{equation} We show that for any $z\in\mathbb{R}$ only one of the solutions leads to a real pair $(\tau,\chi)$. If $z<0$, then the leading coefficient of \eqref{eq:sppInfiniteQuadratic} is negative while the free coefficient is positive, whence the roots have opposite signs. Since $\tau\in\mathbb{R}$, we must have $\tau(z)=\tau^-(z)$. Suppose $z>0$. First, notice that if $\frac2{\mathfrak{B}}<z<\frac{\mathfrak{B}}2$, then the leading coefficient of \eqref{eq:sppInfiniteQuadratic} is positive while the free coefficient is negative. Thus, $e^{\tau^-(z)}<0<e^{\tau^+(z)}$ and $\tau^-(z)\notin\mathbb{R}$. Recall that if $z$ is a critical point, we must have $z\notin \mathfrak{U}$. We will show that this condition is satisfied by exactly one of the solutions $\tau^\pm(z)$, namely $\tau^+(z)$. To show this, it is enough to show the following: \begin{align} \label{eq:forbiddenInfinite1} 0<z<e^{- u}\alpha^{\frac12} &\Rightarrow e^{\tau^+(z)}\alpha^{\frac12}< z<e^{\tau^-(z)}\alpha^{\frac12}, \\\label{eq:forbiddenInfinite2} e^{- u}\alpha^{\frac12}<z<\frac{2}{\mathfrak{B}} &\Rightarrow e^{\tau^+(z)}\alpha^{-\frac12}< z<e^{\tau^-(z)}\alpha^{-\frac12}, \\\label{eq:forbiddenInfinite3} \frac{\mathfrak{B}}2<z<e^{ u}\alpha^{-\frac12} &\Rightarrow e^{\tau^-(z)}\alpha^{\frac12}< z<e^{\tau^+(z)}\alpha^{\frac12}, \\\label{eq:forbiddenInfinite4} e^{ u}\alpha^{-\frac12}<z &\Rightarrow e^{\tau^-(z)}\alpha^{-\frac12}< z<e^{\tau^+(z)}\alpha^{-\frac12}. \end{align} We will show \eqref{eq:forbiddenInfinite1}. The remaining three can be established similarly. If $0<z<e^{- u}\alpha^{\frac12}$, then the leading coefficient in \eqref{eq:sppInfiniteQuadratic} is negative, so showing \eqref{eq:forbiddenInfinite1} is equivalent to showing that the left-hand side of \eqref{eq:sppInfiniteQuadratic} evaluated at $e^{\tau}=z\alpha^{-\frac12}$ is positive.
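For the reader's convenience we record the algebra behind \eqref{eq:sppInfiniteQuadratic}: denoting by $x_1=e^{\tau}\alpha^{\frac12}$, $x_2=e^{\tau}\alpha^{-\frac12}$, $x_3=e^{-u}\alpha^{\frac12}$, $x_4=e^{u}\alpha^{-\frac12}$ the four poles in \eqref{eq:sppInfinite}, so that $x_1x_2=e^{2\tau}$, $x_1+x_2=e^{\tau}\mathfrak{A}$, $x_3x_4=1$ and $x_3+x_4=\mathfrak{B}$, a direct expansion gives the polynomial identity \begin{equation*} -2z\prod_{i=1}^{4}(z-x_i)\cdot\frac{d}{dz}\left(z\frac{d}{dz}S_{\tau,\chi}(z)\right) =e^{2\tau}(\mathfrak{B}z-2)-e^{\tau}\mathfrak{A}z(z^{2}-1)+z^{3}(2z-\mathfrak{B}), \end{equation*} so \eqref{eq:sppInfiniteIs0} is indeed equivalent to \eqref{eq:sppInfiniteQuadratic} away from the poles.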
Substituting $e^{\tau}=z\alpha^{-\frac12}$ in \eqref{eq:sppInfiniteQuadratic} we obtain \begin{equation*} (1-\alpha^{-1})z^2(z-e^{- u}\alpha^{\frac12})(z-e^{ u}\alpha^{-\frac12}), \end{equation*} which is positive, since $z<e^{- u}\alpha^{\frac 12}<e^{ u}\alpha^{-\frac 12}$ and $\alpha>1$. Thus, we obtain that for any $z\in\mathbb{R}$, there is a unique pair $(\tau(z),\chi(z))$ such that $z$ is a double real critical point for $S_{\tau,\chi}$. This curve is the frozen boundary. From \eqref{eq:tauPMz} it is easy to read off general features of the frozen boundary such as the appearance of the tentacles, as \begin{equation*} \lim_{z\rightarrow 0}\tau(z)=-\infty ,\qquad \lim_{z\rightarrow \pm\infty}\tau(z)=\infty ,\qquad \lim_{z\rightarrow (e^{- u}\alpha^{\frac12})^\pm}\tau(z)=-u^\pm, \end{equation*} and \begin{equation*} \lim_{z\rightarrow (e^{ u}\alpha^{-\frac12})^\pm}\tau(z)=u^\pm ,\qquad \lim_{z\rightarrow (e^{ u}\alpha^{-\frac12})^\pm}\chi(z)=\infty ,\qquad \lim_{z\rightarrow (e^{- u}\alpha^{\frac12})^\pm}\chi(z)=\infty, \end{equation*} where for example by an expression like $\lim_{z\rightarrow a^+}\tau=b^-$ we mean $\tau$ approaches $b$ from below when $z$ approaches $a$ from above. \subsection{Bounded floor} Let us examine the frozen boundary near the edges $\tau=\pm v$ when the floor is bounded. Suppose $(\tau,\chi)$ is a point of the frozen boundary and $z_{\tau,\chi}$ is the double real critical point of $S_{\tau,\chi}$. Differentiating \eqref{eq:spBounded} we obtain \begin{equation*} Q_1(z_{\tau,\chi})+Q_2(z_{\tau,\chi})=0, \end{equation*} where \begin{equation*} Q_1(z)=\frac{1}{z-e^v\alpha^{\frac12}} -\frac{1}{z-e^{\tau}\alpha^{\frac12}} +\frac{1}{z-e^v\alpha^{-\frac12}} -\frac{1}{z-e^{\tau}\alpha^{-\frac12}} \end{equation*} and \begin{equation*} Q_2(z)= \frac{1}{z-e^{-v}\alpha^{-\frac12}} +\frac{1}{z-e^{-v}\alpha^{\frac12}} -\frac{1}{z-e^{-u}\alpha^{\frac12}} -\frac{1}{z-e^u\alpha^{-\frac12}}.
\end{equation*} Since \begin{equation*} e^{-v}\alpha^{-\frac12} <e^{-v}\alpha^{\frac12} <e^{-u}\alpha^{\frac12} <e^u\alpha^{-\frac12}, \end{equation*} the equation $Q_2(z)=0$ has two real solutions, both in the interval $(e^{-v}\alpha^{-\frac12},e^u\alpha^{-\frac12})$. However, if $\tau>u$, then $(e^{-v}\alpha^{-\frac12},e^u\alpha^{-\frac12})\subset \mathfrak{U}$ and it follows from $z_{\tau,\chi}\notin\mathfrak{U}$ that $\lim_{\tau\rightarrow v}Q_1(z_{\tau,\chi})\neq 0$. Thus \begin{equation*} \lim_{\tau\rightarrow v}z_{\tau,\chi}=e^v\alpha^{\frac12} \text{ or } \lim_{\tau\rightarrow v}z_{\tau,\chi}=e^v\alpha^{-\frac12}. \end{equation*} Similarly, \begin{equation*} \lim_{\tau\rightarrow -v}z_{\tau,\chi}=e^{-v}\alpha^{\frac12} \text{ or } \lim_{\tau\rightarrow -v}z_{\tau,\chi}=e^{-v}\alpha^{-\frac12}. \end{equation*} It follows that there are two turning points near each of the extremes $\tau=\pm v$. The vertical coordinates of the turning points are \begin{equation} \label{eq:chiBottom} \chi_{bottom}=-\frac{v}2-\frac12\ln \frac{(e^v-e^{-u})(\alpha e^v-e^u)}{(e^v-e^{-v})(\alpha e^v-e^{-v})} \end{equation} corresponding to $z_{\tau,\chi}=e^\tau\alpha^{\frac12}$ and \begin{equation} \label{eq:chiTop} \chi_{top}=-\frac{v}2-\frac12\ln \frac{(e^v-e^{-u}\alpha)(e^v-e^u)}{(e^v-e^{-v}\alpha)(e^v-e^{-v})} \end{equation} corresponding to $z_{\tau,\chi}=e^\tau\alpha^{-\frac12}$. Note that when $\alpha\rightarrow 1$ we have $\chi_{top}-\chi_{bottom}\rightarrow 0$, and in the homogeneous case $\alpha=1$ there is only one turning point, as was shown to be the case in \cite{OR2}. As was mentioned in the introduction, unlike the homogeneous case, the point process near these turning points is not the GUE minor process. This follows immediately from the fact that we have two turning points on each edge: the interlacing property of the GUE minor process cannot hold at both of the turning points.
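As a consistency check, \eqref{eq:chiBottom} and \eqref{eq:chiTop} can be read off directly from \eqref{eq:spBounded}: at $\tau=v$ the two logarithms containing $\max(\pm u,\tau)$ vanish identically, and solving $z\frac{d}{dz}S_{\tau,\chi}(z)=0$ for $\chi$ at $z=e^{v}\alpha^{\frac12}$ gives \begin{equation*} \chi=-\frac{v}2-\frac12\ln\left(\frac{e^{v}\alpha^{\frac12}-e^{-u}\alpha^{\frac12}}{e^{v}\alpha^{\frac12}-e^{-v}\alpha^{\frac12}}\cdot \frac{e^{v}\alpha^{\frac12}-e^{u}\alpha^{-\frac12}}{e^{v}\alpha^{\frac12}-e^{-v}\alpha^{-\frac12}}\right) =\chi_{bottom}, \end{equation*} since factoring out the appropriate power of $\alpha^{\frac12}$ from each difference recovers \eqref{eq:chiBottom}; the same substitution with $z=e^{v}\alpha^{-\frac12}$ yields \eqref{eq:chiTop}.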
We study the point process near these turning points in Section \ref{sec:turningPoints}. \subsection{Triangular floor} In the limit $u\rightarrow v$ the bounded pentagonal floor degenerates into a triangle. Differentiating \eqref{eq:spBounded} with $v=u$ we obtain \begin{align} \label{eq:sppTriangle} \frac{d}{dz}\left( z\frac{d}{dz}S_{\tau,\chi}(z)\right) =&\frac12 \frac{1}{z-e^{- u}\alpha^{-\frac 12}} -\frac12 \frac{1}{z-e^{\tau}\alpha^{-\frac 12}} -\frac12 \frac{1}{z-e^{\tau}\alpha^{\frac 12}} \\\nonumber&+\frac12 \frac{1}{z-e^{ u}\alpha^{\frac 12}}. \end{align} The equation \begin{equation} \label{eq:spp=0} \frac{d}{dz}\left( z\frac{d}{dz}S_{\tau,\chi}(z)\right)=0 \end{equation} is again quadratic in $e^{\tau}$, and arguments similar to those used in the unbounded case give us that only one of the solutions produces a real pair $(\tau(z),\chi(z))$. Thus, as before, we obtain a parametrization of the frozen boundary. From \eqref{eq:sppTriangle} it is easy to see that for fixed $u$ and $\alpha>1$ there exists a positive constant $c$ such that \begin{equation*} \min_{\tau\in[-u,u]}\left\{|z_{\tau,\chi}-e^{- u}\alpha^{-\frac 12}|, |z_{\tau,\chi}-e^{\tau}\alpha^{-\frac 12}|, |z_{\tau,\chi}-e^{\tau}\alpha^{\frac 12}|, |z_{\tau,\chi}-e^{ u}\alpha^{\frac 12}|\right\}>c, \end{equation*} which together with \eqref{eq:spBounded} imply that $\chi(z)$ is bounded and the curve $(\tau(z),\chi(z))$ is a simple closed curve. This implies, in particular, that for any $\alpha>1$ the disordered region is bounded, as demonstrated in the exact sample in Figure \ref{fig:triangleSample}. Note, however, that in the homogeneous case $\alpha=1$, when $\tau(z)\rightarrow\pm u$ we have $\chi(z)\rightarrow\infty$ and the disordered region is infinite. This was first observed in \cite{BMRT}. As $u\rightarrow v$, we have $\chi_{top}\rightarrow \infty$, so there is only one turning point near each wall $\tau=\pm v$ when $u=v$.
Moreover, \begin{equation*} \lim_{\alpha\rightarrow 1}\chi_{bottom}=\infty, \end{equation*} and when $\alpha=1$, $u=v$, we have a ``turning point at infinity'', as was observed in \cite{BMRT}. \section{Turning points} \label{sec:turningPoints} We now turn to the study of the local point process at the turning points. In this section $\tau=v$, $\chi=\chi_{bottom}$ or $\chi_{top}$ and, respectively, $z_{\tau,\chi}=e^v\alpha^{\frac12}$ or $z_{\tau,\chi}=e^v\alpha^{-\frac12}$. The turning points with $\tau=-v$ are of course of the same nature, and we will not consider them separately. The vertical characteristic scale at the turning point is $r^{\frac12}$, so we introduce new vertical coordinates $\tilde{h}_i$ defined by \begin{equation*} h_i=\lfloor\frac{\chi}r\rfloor+\frac{\tilde{h}_i}{r^\frac12}. \end{equation*} We also introduce new horizontal coordinates $\hat{t}_i$ which indicate the distance from the edge: \begin{equation*} t_i=u^\lambda_n-\hat{t}_i= \lfloor\frac{\tau}r\rfloor-\hat{t}_i. \end{equation*} We deform the contours of integration in \eqref{eq:main-corr2} according to the saddle point method to pass through the critical point $z_{\tau,\chi}$. Since the $z$-contour should not cross the poles of $\Phi_{b_\lambda}(\cdot,t_1)$, the deformation is different depending on whether we have the bottom or top turning point. \subsubsection{Top turning point when $\Delta t\geq 0$} \label{sec:topTPposDelt} For the top turning point when $\Delta t\geq 0$ the curve $\Re S_{\tau,\chi}(z)=\Re S_{\tau,\chi}(z_{\tau,\chi})$, the original contours $C_z,C_w$ and the new contours $C'_z,C'_w$ are shown in Figure \ref{fig:contoursTopDeltaPos}. In this case the contours do not cross each other, so we do not pick up any residues as a result of the contour deformation. \begin{figure}[ht] \caption{\label{fig:contoursTopDeltaPos} Deformation of contours at a turning point. The shaded region corresponds to $\Re S_{\tau,\chi}(z)<\Re S_{\tau,\chi}(z_{\tau,\chi})$.
The solid red (inner) and blue (outer) contours are the original contours. The dotted red and blue contours are the deformed contours.} \includegraphics[width=6cm]{Contours,TopTP,DeltaPos.pdf} \end{figure} The main contribution to the integral comes from the vicinity of the critical point. At the turning points we have \begin{equation*} \frac{d}{dz}\left( z\frac{d}{dz} S_{\tau,\chi}(z)\right)\Big|_{z_{\tau,\chi}}=Q_2(z_{\tau,\chi}), \end{equation*} from which it is easy to see that $S''_{\tau,\chi}(z_{\tau,\chi})<0$. Changing variables of integration to $\zeta$, $\omega$ defined by \begin{equation*} z=z_{\tau,\chi}e^{r^{\frac12}\zeta}, \quad w=z_{\tau,\chi}e^{r^{\frac12}\omega}, \end{equation*} we obtain \begin{align*} K_{\lambda_r,\bar{q}_r}((t_1,h_1),(t_2,h_2))= &e^{\ln(z_{\tau,\chi}) \left( \frac{\tilde{h}_2-\tilde{h}_1}{r^{1/2}} -\frac{\hat{t}_2-\hat{t}_1}2 \right)} (1-\alpha^{-1})^{\lfloor\frac{\hat{t}_2}{2}\rfloor-\lfloor\frac{\hat{t}_1}{2}\rfloor} \\&\times (-r^{1/2})^{\lfloor\frac{\hat{t}_2+1}{2}\rfloor-\lfloor\frac{\hat{t}_1+1}{2}\rfloor} \frac{r^{1/2}}{(2\pi \mathfrak{i})^2} \\&\times \iint e^{\frac{S''_{\tau,\chi}(z_{\tau,\chi})}2(\zeta^2-\omega^2)} \frac{e^{\tilde{h}_2\omega}}{e^{\tilde{h}_1\zeta}} \frac{\omega^{\lfloor\frac{\hat{t}_2+1}{2}\rfloor}}{\zeta^{\lfloor\frac{\hat{t}_1+1}{2}\rfloor}} (1+O(r^{1/2}))\frac{d\zeta\ d\omega}{\zeta-\omega}, \end{align*} where the contours of integration are as in Figure \ref{fig:contourDeformedLocal}.
\begin{figure}[ht] \caption{\label{fig:contourDeformedLocal} The $\zeta$ and $\omega$ contours of integration.} \includegraphics[width=2.cm]{ContoursTPLocal.pdf} \end{figure} When taking determinants of the form \begin{equation*} \det\left(K_{\lambda_r,\bar{q}_r}((t_i,h_i),(t_j,h_j))\right)_{1\leq i,j\leq k} \end{equation*} the terms \begin{equation} \label{eq:gaugeTurningPt} e^{\ln(z_{\tau,\chi}) \left( \frac{\tilde{h}_2-\tilde{h}_1}{r^{1/2}} -\frac{\hat{t}_2-\hat{t}_1}2 \right)} (1-\alpha^{-1})^{\lfloor\frac{\hat{t}_2+2}{2}\rfloor-\lfloor\frac{\hat{t}_1+2}{2}\rfloor} (-r^{1/2})^{\lfloor\frac{\hat{t}_2+1}{2}\rfloor-\lfloor\frac{\hat{t}_1+1}{2}\rfloor} \end{equation} cancel out and we obtain that in the limit $r\rightarrow 0$ the leading asymptotic of the correlation kernel near the top turning point is \begin{equation} \label{eq:corrKerTopTP} \frac{r^{\frac12}}{(2\pi \mathfrak{i})^2} \iint e^{\frac{S''_{\tau,\chi}(z_{\tau,\chi})}2(\zeta^2-\omega^2)} \frac{e^{\tilde{h}_2\omega}}{e^{\tilde{h}_1\zeta}} \frac{\omega^{\lfloor\frac{\hat{t}_2+1}{2}\rfloor}}{\zeta^{\lfloor\frac{\hat{t}_1+1}{2}\rfloor}} \frac{d\zeta\ d\omega}{\zeta-\omega}, \end{equation} where again contours of integration are as in Figure \ref{fig:contourDeformedLocal}. \subsubsection{Bottom turning point with $\Delta t\geq 0$} In the case of the bottom turning point, the deformed contours pass through the critical point $z_{\tau,\chi}=e^v\alpha^{\frac12}$. Since the $z$-contour should not cross the poles of $\Phi_{b_{\lambda_r},\bar{q}_r}(z,t_1)$, which are near $e^v\alpha^{-\frac12}$ or larger than $e^v\alpha^{\frac12}$, the deformed $z$-contour splits into the union of two pieces, $C'_z$ described above and $C''_z$ which is a simple closed clockwise loop around $e^v\alpha^{-\frac12}$. Moreover the $w$-contour passes over $C''_z$ during the deformation, so we pick up residues from the term $\frac1{z-w}$. 
To summarize, we have \begin{multline*} K_{\lambda,\bar{q}}((t_1,h_1),(t_2,h_2)) =\frac{1}{(2\pi \mathfrak{i})^2} \int_{z\in C'_z}\int_{w\in C'_w}\dots+ \frac{1}{(2\pi \mathfrak{i})^2}\int_{z\in C''_z}\int_{w\in C'_w}\dots \\+ \frac{1}{2\pi \mathfrak{i}} \int_{z\in C''_z} \prod_{\stackrel{t_2<m<t_1}{m+\frac12\in2\mathbb{Z}}}(1-z^{-1}x^-_m) \prod_{\stackrel{t_2<m<t_1}{m-\frac12\in2\mathbb{Z}}}(1-zx^+_m) z^{h_2-h_1+\frac 12 (b_\lambda(t_1)-b_\lambda(t_2))-1}dz, \end{multline*} where dots stand for the same integrand as in \eqref{eq:main-corr2}. The second term is exponentially small when $r\rightarrow 0$ since along the contours $C''_z$ and $C'_w$ we have \begin{equation*} \Re S_{\tau,\chi}(z)< \Re S_{\tau,\chi}(w), \end{equation*} the last term is zero since the integrand is regular at $e^v\alpha^{-\frac12}$, and the first term can be analysed as in Section \ref{sec:topTPposDelt}, giving the same result as \eqref{eq:corrKerTopTP} with $\lfloor\frac{\hat{t}_i+1}{2}\rfloor$ replaced with $\lfloor\frac{\hat{t}_i+2}{2}\rfloor$. \subsubsection{Either turning point with $\Delta t<0$} When $\Delta t<0$, during the deformation of contours the $z$-contour passes over the $w$-contour, and we pick up residues from the term $\frac1{z-w}$ along a simple closed clockwise curve around $0$. The residues are equal to \begin{equation*} \frac{c}{2\pi\mathfrak{i}} \int_{\mathfrak{i}\infty}^{-\mathfrak{i}\infty} \frac{e^{(\tilde{h}_2-\tilde{h}_1)\zeta}} {\zeta^{\lfloor\frac{\hat{t}_1+e}{2}\rfloor-\lfloor\frac{\hat{t}_2+e}{2}\rfloor}}d\zeta, \end{equation*} where the contour of integration crosses the real line in the interval $(-\infty,0)$, $c$ is the gauge term \eqref{eq:gaugeTurningPt} and $e$ is $2$ for the bottom turning point and $1$ for the top one.
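The claim that the gauge terms \eqref{eq:gaugeTurningPt} drop out of the determinants rests on a general linear-algebra fact: replacing $K_{ij}$ by $(c_i/c_j)K_{ij}$ amounts to conjugating $K$ by an invertible diagonal matrix, which leaves every determinant $\det(K_{ij})_{1\leq i,j\leq k}$ unchanged. A minimal numerical illustration (the matrix entries and gauge factors below are arbitrary, not taken from the model):

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# An arbitrary kernel restricted to three points.
K = [[Fraction(1, 2), Fraction(3), Fraction(-1)],
     [Fraction(2), Fraction(1, 3), Fraction(5)],
     [Fraction(-2), Fraction(1), Fraction(4)]]

# Arbitrary nonzero gauge factors c_i, playing the role of the
# prefactors in the turning-point kernel.
c = [Fraction(7), Fraction(-3, 2), Fraction(5, 4)]

# Gauged kernel K'_{ij} = (c_i / c_j) K_{ij}, i.e. D K D^{-1}.
Kg = [[c[i] / c[j] * K[i][j] for j in range(3)] for i in range(3)]

assert det3(Kg) == det3(K)  # the correlation functions are unchanged
```

Exact rational arithmetic makes the equality of the two determinants hold identically rather than up to rounding.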
Following \cite{OR3} we can interpret the residue term as the difference between two different expansions of \begin{equation*} \frac{\omega^{\lfloor\frac{\hat{t}_2+e}{2}\rfloor}}{\zeta^{\lfloor\frac{\hat{t}_1+e}{2}\rfloor}} \frac{1}{\zeta-\omega} \end{equation*} depending on the sign of $\lfloor\frac{\hat{t}_1+e}{2}\rfloor-\lfloor\frac{\hat{t}_2+e}{2}\rfloor$, and thus incorporate the residue term into the main term. \subsubsection{The local point process at the turning points} Combining the above results, we obtain the following theorem. \begin{theorem} \label{thm:corkerTurningPoint} Let $(\tau, \chi)$ be a turning point with $\tau=\pm v$ and $\chi$ given by \eqref{eq:chiBottom} or \eqref{eq:chiTop}. Let \begin{equation*} t_i=\lfloor\frac{\tau}r\rfloor-\hat{t}_i, \end{equation*} and \begin{equation*} h_i=\lfloor\frac{\chi}r\rfloor+\frac{\tilde{h}_i}{r^\frac12}. \end{equation*} If $\lfloor\frac{\tau}r\rfloor$ is odd, then the correlation functions near a turning point $(\tau, \chi)$ of the system with periodic weights \eqref{eq:periodicweights} are given by \begin{equation*} \lim_{r\rightarrow 0}r^{-\frac12} K_{\lambda,\bar{q}}((t_1,h_1),(t_2,h_2)) =\frac{1}{(2\pi \mathfrak{i})^2} \iint e^{\frac{S''_{\tau,\chi}(z_{\tau,\chi})}2(\zeta^2-\omega^2)} \frac{e^{\tilde{h}_2\omega}}{e^{\tilde{h}_1\zeta}} \frac{\omega^{\lfloor\frac{\hat{t}_2+e}{2}\rfloor}}{\zeta^{\lfloor\frac{\hat{t}_1+e}{2}\rfloor}} \frac{d\zeta\ d\omega}{\zeta-\omega}, \end{equation*} where the contours of integration are as in Figure \ref{fig:contourDeformedLocal} and $e$ is $1$ when $\chi=\chi_{top}$ and $2$ when $\chi=\chi_{bottom}$. When $\lfloor\frac{\tau}r\rfloor$ is even, $e$ is replaced by $2-e$. \end{theorem} \begin{remark} If we restrict the process to horizontal lozenges of only even or only odd distances from the edge, then the correlation kernel in Theorem \ref{thm:corkerTurningPoint} coincides with the correlation kernel at a homogeneous turning point, obtained in \cite{OR3}. 
In particular, the point process of horizontal lozenges restricted to a distance of fixed parity from the edge converges to the GUE minor process. \end{remark} \section{Intermediate regime} \label{sec:intermediateWeights} In this section we study turning points in the scaling limit of random skew plane partitions under the measure \eqref{eq:qinhomog} when the weights $q_t$ are given by \eqref{eq:intermperiodicweights}. Since the nature of turning points does not depend on boundary conditions, we consider the simplest boundary which gives rise to turning points. Namely, we take $\lambda_r$ to be a staircase as before, but so that the horizontal section grows at a scale between $\frac1r$ and $\frac1{\sqrt{r}}$ and in the scaling limit the back wall converges to $V(\tau)=-|\tau|$, with $-v\leq\tau\leq v$. The function $S_{\tau,\chi}$ is the same as in the case of homogeneous weights and is given by \eqref{eq:S} with $\alpha=1$. It follows that the macroscopic behaviour of the system is the same as in the homogeneous case. In particular, it has the same frozen boundary, and only one turning point near each vertical line $\tau=\pm v$. Moreover, the correlation kernel in the bulk matches with the homogeneous case and in the bulk horizontal lozenges are distributed according to a $\mathbb{Z}\times\mathbb{Z}$ translation invariant ergodic Gibbs measure. In particular, periodicity disappears in the scaling limit in the bulk. However, even though there is only one turning point near each of the walls $\tau=\pm v$, the local point process is not the GUE minor process. Following the method from Section \ref{sec:turningPoints} we obtain the following theorem. \begin{theorem} \label{thm:corkerTurningPointIntermediate} Let $(\tau, \chi)$ be a turning point with $\tau=\pm v$. Let \begin{equation*} t_i=\lfloor\frac{\tau}r\rfloor-\hat{t}_i, \end{equation*} and \begin{equation*} h_i=\lfloor\frac{\chi}r\rfloor+\frac{\tilde{h}_i}{r^\frac12}.
\end{equation*} If $\lfloor\frac{\tau}r\rfloor$ is odd, then the correlation functions near a turning point $(\tau, \chi)$ of the system with periodic weights \eqref{eq:intermperiodicweights} are given by \begin{multline*} \lim_{r\rightarrow 0}r^{-\frac12} K_{\lambda,\bar{q}}((t_1,h_1),(t_2,h_2)) \\=\frac{1}{(2\pi \mathfrak{i})^2} \iint e^{\frac{S''_{\tau,\chi}(z_{\tau,\chi})}2(\zeta^2-\omega^2)} \frac{e^{\tilde{h}_2\omega}}{e^{\tilde{h}_1\zeta}} \frac{\omega^{\lfloor\frac{\hat{t}_2+1}{2}\rfloor}}{\zeta^{\lfloor\frac{\hat{t}_1+1}{2}\rfloor}} \frac{(\omega-\gamma)^{\lfloor\frac{\hat{t}_2+2}{2}\rfloor}}{(\zeta-\gamma)^{\lfloor\frac{\hat{t}_1+2}{2}\rfloor}} \frac{d\zeta\ d\omega}{\zeta-\omega}, \end{multline*} where the contours of integration are as in Figure \ref{fig:contourDeformedLocal} with the $\zeta$-contour containing both $0$ and $\gamma$. If $\lfloor\frac{\tau}r\rfloor$ is even, then the exponents of $\zeta$ and $\omega$ switch places with those of $\zeta-\gamma$ and $\omega-\gamma$. \end{theorem}
\section{Introduction} \label{section:Introduction} After molecular hydrogen (H$_2$) and carbon monoxide (CO), the water molecule (H$_2$O) can be one of the most abundant molecules in the interstellar medium (ISM) in galaxies. It provides some important diagnostic tools for various physical and chemical processes in the ISM \citep[e.g.][and references therein]{2013ChRv..113.9043V}. Prior to the {\it Herschel Space Observatory} \citep{2010A&A...518L...1P}, in extragalactic sources, non-maser {\hbox {H$_{2}$O}}\ rotational transitions were only detected by the \textit{Infrared Space Observatory} \citep[\textit{ISO},][]{1996A&A...315L..27K} in the form of far-infrared absorption lines \citep{2004ApJ...613..247G, 2008ApJ...675..303G}. Observations of local infrared bright galaxies by {\it Herschel} have revealed a rich spectrum of submillimeter (submm) {\hbox {H$_{2}$O}}\ emission lines (submm {\hbox {H$_{2}$O}}\ refers to rest-frame submillimeter {\hbox {H$_{2}$O}}\ emission throughout this paper if not otherwise specified). Many of these lines are emitted from high-excitation rotational levels with upper-level energies up to $E_\mathrm{up}$/$k = 642\,$K \citep[e.g.][]{2010A&A...518L..42V, 2010A&A...518L..43G, 2012A&A...541A...4G, 2013A&A...550A..25G, 2011ApJ...743...94R, 2012ApJ...753...70K, 2012ApJ...758..108S, 2013ApJ...762L..16M, 2013ApJ...779L..19P, 2013ApJ...768...55P}. Excitation analysis of these lines has revealed that they are probably excited through absorption of \hbox {far-infrared}\ photons from thermal dust emission in warm dense regions of the ISM \citep[e.g.][]{2010A&A...518L..43G}. Therefore, unlike the canonical CO lines that trace collisional excitation of the molecular gas, these {\hbox {H$_{2}$O}}\ lines represent a powerful diagnostic of the \hbox {far-infrared}\ radiation field. 
Using the {\it Herschel} archive data, \citet[][hereafter Y13]{2013ApJ...771L..24Y} have undertaken a first systematic study of submm {\hbox {H$_{2}$O}}\ emission in local \hbox {infrared}\ galaxies. {\hbox {H$_{2}$O}}\ was found to be the strongest molecular emitter after CO within the submm band in those \hbox {infrared}-bright galaxies, even with higher flux density than that of CO in some local ULIRGs (velocity-integrated flux density of \htot321312 is larger than that of \co54 in four galaxies out of 45 in the \citetalias{2013ApJ...771L..24Y} sample). The luminosities of the submm {\hbox {H$_{2}$O}}\ lines (\hbox {$L_{\mathrm{H_2O}}$}) are near-linearly correlated with total \hbox {infrared}\ luminosity ($L_\mathrm{IR}$, integrated over 8--1000\,$\mu$m) over three orders of magnitude. The correlation is revealed to be a straightforward result of \hbox {far-infrared}\ pumping: {\hbox {H$_{2}$O}}\ molecules are excited to higher energy levels through absorbing \hbox {far-infrared}\ photons, then the upper level molecules cascade toward the lines we observed in an almost constant fraction (Fig.\,\ref{fig:h2o-e-level}). Although the galaxies dominated by active galactic nuclei (AGN) have somewhat lower ratios of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}, there does not appear to be a link between the presence of an AGN and the submm {\hbox {H$_{2}$O}}\ emission \citepalias{2013ApJ...771L..24Y}. The {\hbox {H$_{2}$O}}\ emission is likely to trace the \hbox {far-infrared}\ radiation field generated in star-forming nuclear regions in galaxies, explaining its tight correlation with \hbox {far-infrared}\ luminosity. 
Besides detections of the {\hbox {H$_{2}$O}}\ lines in local galaxies from space telescopes, redshifted submm {\hbox {H$_{2}$O}}\ lines in \hbox {high-redshift}\ lensed Ultra- and Hyper-Luminous InfraRed Galaxies (ULIRGs, $10^{13}\,L_\odot > L_\mathrm{IR} \geq 10^{12}$\,{\hbox {$L_\odot$}}; HyLIRGs, $L_\mathrm{IR} \geq 10^{13}$\,{\hbox {$L_\odot$}}) can also be detected by ground-based telescopes in atmospheric windows with high transmission. Strong gravitational lensing boosts the flux and allows one to detect the {\hbox {H$_{2}$O}}\ emission lines easily. Since our first detection of submm {\hbox {H$_{2}$O}}\ in a lensed {\it Herschel} source at $z = 2.3$ \citep{2011A&A...530L...3O} using the IRAM NOrthern Extended Millimeter Array (NOEMA), several individual detections at \hbox {high-redshift}\ have also been reported \citep{2011ApJ...738L...6L, 2011ApJ...741L..38V, 2011ApJ...741L..37B, 2012A&A...538L...4C, 2012ApJ...757..135L, 2013ApJ...779...67B, 2013A&A...551A.115O, 2013Natur.495..344V, 2013ApJ...767...88W, 2014ApJ...783...59R}. These numerous and easy detections of {\hbox {H$_{2}$O}}\ in \hbox {high-redshift}\ lensed ULIRGs show that its lines are the strongest submm molecular lines after CO and may be an important tool for studying these galaxies. We have carried out a series of studies focussing on submm {\hbox {H$_{2}$O}}\ emission in \hbox {high-redshift}\ lensed galaxies since our first detection. Through the detection of $J=2$ {\hbox {H$_{2}$O}}\ lines in seven \hbox {high-redshift}\ lensed Hy/ULIRGs reported by \citet[][hereafter O13]{2013A&A...551A.115O}, a slightly super-linear correlation between \hbox {$L_{\mathrm{H_2O}}$}\ and \hbox {$L_{\mathrm{IR}}$}\ (\hbox {$L_{\mathrm{H_2O}}$}\;$\propto$\;\hbox {$L_{\mathrm{IR}}$}$^{1.2}$) from local ULIRGs and \hbox {high-redshift}\ lensed Hy/ULIRGs has been found. 
This result may imply again that \hbox {far-infrared}\ pumping is important for {\hbox {H$_{2}$O}}\ excitation in \hbox {high-redshift}\ extreme starbursts. The average ratios of \hbox {$L_{\mathrm{H_2O}}$}\ to \hbox {$L_{\mathrm{IR}}$}\ for the $J=2$ {\hbox {H$_{2}$O}}\ lines in the \hbox {high-redshift}\ sources tend to be $1.8\pm0.9$ times higher than those seen locally \citepalias{2013ApJ...771L..24Y}. This shows that the same physics with infrared pumping should dominate {\hbox {H$_{2}$O}}\ excitation in ULIRGs at low and high redshift, with some specificity at \hbox {high-redshift}\ probably linked to the higher luminosities. Modelling provides additional information about the {\hbox {H$_{2}$O}}\ excitation. For example, through LVG modelling, \cite{2013Natur.496..329R} argue that the excitation of the submm {\hbox {H$_{2}$O}}\ emission in the $z \sim 6.3$ submm galaxy is \hbox {far-infrared}\ pumping dominated. Modelling of the local {\it Herschel} galaxies of \citetalias{2013ApJ...771L..24Y} has been carried out by \citet[][hereafter G14]{2014A&A...567A..91G}. They confirm that \hbox {far-infrared}\ pumping is the dominant mechanism responsible for the submm {\hbox {H$_{2}$O}}\ emission (except for the ground-state emission transitions, such as para-{\hbox {H$_{2}$O}}\ transition \t111000) in the extragalactic sources. Moreover, collisional excitation of the low-lying ($J \leq 2$) {\hbox {H$_{2}$O}}\ lines could also enhance the radiative pumping of the ($J \geq 3$) high-lying lines. The ratio between low-lying and high-lying {\hbox {H$_{2}$O}}\ lines is sensitive to the dust temperature ({\hbox {$T_{\mathrm{d}}$}}) and {\hbox {H$_{2}$O}}\ column density ($N_\mathrm{H_2O}$). 
From modelling the average of local star-forming- and mild-AGN-dominated galaxies, \citetalias{2014A&A...567A..91G} show that the submm {\hbox {H$_{2}$O}}\ emission comes from regions with $N_\mathrm{H_2O} \sim (0.5\text{--}2) \times 10^{17}$\,cm$^{-2}$ and a 100\,$\mu$m continuum opacity of $\tau_{100} \sim 0.05\text{--}0.2$, where {\hbox {H$_{2}$O}}\ is mainly excited by warm dust with a temperature range of $45\text{--}75$\,K. {\hbox {H$_{2}$O}}\ lines thus provide key information about the properties of the dense cores of ULIRGs, that is, their {\hbox {H$_{2}$O}}\ content, the infrared radiation field and the corresponding temperature of dust that is warmer than the core outer layers and dominates the far-infrared emission. Observations of the submm {\hbox {H$_{2}$O}}\ emission, together with appropriate modelling and analysis, therefore allows us to study the properties of the \hbox {far-infrared}\ radiation sources in great detail. So far, the excitation analysis combining both low- and high-lying {\hbox {H$_{2}$O}}\ emission has only been done in a few case studies. Using {\hbox {H$_{2}$O}}\ excitation modelling considering both collision and \hbox {far-infrared}\ pumping, \cite{2010A&A...518L..43G} and \cite{2011ApJ...741L..38V} estimate the sizes of the \hbox {far-infrared}\ radiation fields in Mrk\,231 and APM\,08279+5255 (APM\,08279 hereafter), which are not resolved by the observations directly, and suggest their AGN dominance based on their total enclosed energies. This again demonstrates that submm {\hbox {H$_{2}$O}}\ emission is a powerful diagnostic tool which can even transcend the angular resolution of the telescopes. The detection of submm {\hbox {H$_{2}$O}}\ emission in the {\it Herschel}-ATLAS\footnote{The {\it Herschel}-ATLAS is a project with {\it Herschel}, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 
The {\it H}-ATLAS website is \url{http://www.h-atlas.org}.} \citep[][{\it H}-ATLAS hereafter]{2010PASP..122..499E} sources through gravitational lensing allows us to characterise the \hbox {far-infrared}\ radiation field generated by intense star-forming activity, and possibly AGN, and learn the physical conditions in the warm dense gas phase in extreme starbursts in the early Universe. Unlike standard dense gas tracers such as HCN, which is weaker at \hbox {high-redshift}\ than in local ULIRGs \citep{2007ApJ...660L..93G}, submm {\hbox {H$_{2}$O}}\ lines are strong and even comparable to high-$J$ CO lines in some galaxies \citepalias{2013ApJ...771L..24Y, 2013A&A...551A.115O}. Therefore, {\hbox {H$_{2}$O}}\ is an efficient tracer of the warm dense gas phase that makes up a major fraction of the total molecular gas mass in \hbox {high-redshift}\ Hy/ULIRGs \citep{2014PhR...541...45C}. The successful detections of submm {\hbox {H$_{2}$O}}\ lines in both the local \citepalias{2013ApJ...771L..24Y} and the \hbox {high-redshift}\ universe \citepalias{2013A&A...551A.115O} show the great potential of a systematic study of {\hbox {H$_{2}$O}}\ emission in a large sample of \hbox {infrared}\ galaxies over a wide range in redshift (from local up to $z\sim4$) and luminosity ($\hbox {$L_{\mathrm{IR}}$} \sim10^{10}$--$10^{13}$\,{\hbox {$L_\odot$}}). However, our previous \hbox {high-redshift}\ sample was limited to seven sources and to one $J=2$ para-{\hbox {H$_{2}$O}}\ line ($E_\mathrm{up}$/$k = 100$--$127\,$K) per source \citepalias{2013A&A...551A.115O}. In order to further constrain the conditions of {\hbox {H$_{2}$O}}\ excitation, to confirm the dominant role of \hbox {far-infrared}\ pumping and to learn the physical conditions of the warm dense gas phase in \hbox {high-redshift}\ starbursts, it is essential to extend the studies to higher excitation lines.
We thus present and discuss here the results of such new observations of a strong $J=3$ ortho-{\hbox {H$_{2}$O}}\ line with $E_\mathrm{up}$/$k = 304\,$K in six strongly lensed {\it H}-ATLAS galaxies at z\,$\sim$\,2.8--3.6, where a second lower-excitation $J=2$ para-{\hbox {H$_{2}$O}}\ line was also observed (Fig.\,\ref{fig:h2o-e-level} for the transitions and the corresponding $E_\mathrm{up}$). \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.441]{e_level} \caption{ Energy level diagrams of {\hbox {H$_{2}$O}}\ and {\hbox {H$_2$O$^+$}}\ shown in black and red, respectively. Dark blue arrows are the submm {\hbox {H$_{2}$O}}\ transitions we have observed in this work. Pink dashed lines show the \hbox {far-infrared}\ pumping path of the {\hbox {H$_{2}$O}}\ excitation in the model we use, with the wavelength of the photon labeled. The light blue dashed arrow is the transition from para-{\hbox {H$_{2}$O}}\ energy level $2_{20}$ to $2_{11}$ along the cascade path from $2_{20}$ to $1_{11}$. Rotational energy levels of {\hbox {H$_{2}$O}}\ and {\hbox {H$_2$O$^+$}}, as well as fine structure component levels of {\hbox {H$_2$O$^+$}}\, are also shown in the figure. } \label{fig:h2o-e-level} \end{center} \end{figure} We describe our sample, observation and data reduction in Section \ref{section:Sample and Observation}. The observed properties of the \hbox {high-redshift}\ submm {\hbox {H$_{2}$O}}\ emission are presented in Section \ref{Results}. Discussions of the lensing properties, \hbox {$L_{\mathrm{H_2O}}$}-\hbox {$L_{\mathrm{IR}}$}\ correlation, {\hbox {H$_{2}$O}}\ excitation, comparison between {\hbox {H$_{2}$O}}\ and CO, AGN contamination will be given in Section \ref{Discussion}. Section \ref{htop} describes the detection of {\hbox {H$_2$O$^+$}}\ lines. We summarise our results in Section \ref{Conclusions}. 
A flat $\Lambda$CDM cosmology with $H_{0}=71\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{M}=0.27$, $\Omega_{\Lambda}=0.73$ \citep{2003ApJS..148..175S} is adopted throughout this paper. \section{Sample and observation} \label{section:Sample and Observation} Our sample consists of eleven extremely bright \hbox {high-redshift}\ sources with $F_\mathrm{500\mu m}>200$\,mJy discovered by the {\it H}-ATLAS survey \citep{2010PASP..122..499E}. Together with the seven similar sources reported in our previous H$_2$O study \citepalias{2013A&A...551A.115O}, they include all but two of the brightest \hbox {high-redshift}\ {\it H}-ATLAS sources ($F_\mathrm{500\mu m}>170$\,mJy) imaged at 880\,$\mu$m with SMA by \citet[][hereafter B13]{2013ApJ...779...25B}. In agreement with the selection according to the methods of \citet{2010Sci...330..800N}, the detailed lensing modelling performed by \citetalias{2013ApJ...779...25B} has shown that all of them but one, G09v1.124 \citep[][see below]{2013ApJ...772..137I}, are strongly lensed. The sample of our present study is thus well representative of the brightest \hbox {high-redshift}\ submillimeter sources with $F_\mathrm{500\mu m}>200$\,mJy (with apparent total infrared luminosity $\sim 5\text{--}15 \times 10^{13}$\,{\hbox {$L_\odot$}}\ and $z \sim 1.5\text{--}4.2$) found by {\it H}-ATLAS in its equatorial ('GAMA') and north-galactic-pole ('NGP') fields, in $\sim 300$\,deg$^2$ with a density $\sim 0.05$\,deg$^{-2}$. In our previous project \citepalias{2013A&A...551A.115O}, we observed {\hbox {H$_{2}$O}}\ in seven strongly lensed \hbox {high-redshift}\ {\it H}-ATLAS galaxies from the \citetalias{2013ApJ...779...25B} sample.
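The apparent luminosities quoted throughout rest on luminosity distances computed in the adopted cosmology. A self-contained numerical sketch of the flat-$\Lambda$CDM luminosity distance with the parameters above (the trapezoidal integration scheme and step count are implementation choices, not from the paper):

```python
import math

C_KMS = 299792.458            # speed of light [km/s]
H0 = 71.0                     # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.27, 0.73 # matter and dark-energy densities

def luminosity_distance(z, steps=10000):
    """D_L = (1+z) * (c/H0) * int_0^z dz'/E(z') for a flat LCDM
    cosmology with E(z) = sqrt(Om*(1+z)^3 + OL); trapezoidal rule."""
    dz = z / steps
    E = lambda zz: math.sqrt(OMEGA_M * (1.0 + zz) ** 3 + OMEGA_L)
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(steps))
    return (1.0 + z) * (C_KMS / H0) * integral  # [Mpc]

# e.g. z = 3.127 (the redshift of G09v1.97's neighbour G12v2.43):
# D_L comes out in the tens of Gpc.
dl = luminosity_distance(3.127)
assert 26_000 < dl < 28_500  # Mpc
```

In practice one would use a vetted library routine (e.g. an astropy cosmology object) instead of hand-rolled quadrature; the sketch only makes the dependence on $H_0$, $\Omega_M$ and $\Omega_\Lambda$ explicit.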
In this work, in order to observe the high-excitation ortho-\htot321312 line with rest frequency of 1162.912\,GHz with the IRAM/NOEMA, we selected the brightest sources at 500\,$\mu$m with $z \gtrsim 2.8$ so that the redshifted lines could be observed in a reasonably good atmospheric window at $\nu_\mathrm{obs} \lesssim 300$\,GHz. Eight sources with such redshift were selected from the \citetalias{2013ApJ...779...25B} {\it H}-ATLAS sample. \begin{table*}[htbp] \small \setlength{\tabcolsep}{0.42em} \caption{Observation log.} \centering \begin{tabular}{lcllllllll} \hline \hline {IAU Name} & {Source} & {$\mathrm{RA}$} & {$\mathrm{DEC}$} & {$\mathrm{RA_{pk}}$} & {$\mathrm{DEC_{pk}}$} & {{\hbox {H$_{2}$O}}\ line} &{$\nu_\mathrm{obs}$} & {Beam} &{$t_\mathrm{on}$} \\ & & (J2000) & (J2000) & (J2000) & (J2000) & & (GHz) & ($''$) & (h) \\ \hline {{\it H}-ATLAS J083051.0$+$013224} & {G09v1.97} & {08:30:51.02} & {$+$01:32:24.88} & {08:30:51.17} & {$+$01:32:24.39} & \t211202 & {162.286} & {5.6$\times$3.3} & {3.5} \\ & & & & {08:30:51.17} & {$+$01:32:24.09} & \t321312 & {250.952} & {2.6$\times$1.1} & {3.1} \\ {{\it H}-ATLAS J113526.3$-$014605} & {G12v2.43} & {11:35:26.36} & {$-$01:46:05.56} & {11:35:26.27} & {$-$01:46:06.44} & \t202111 & {239.350} & {2.3$\times$1.0} & {6.9} \\ & & & & {11:35:26.28} & {$-$01:46:06.43} & \t321312 & {281.754} & {2.2$\times$1.1} & {1.5} \\ {{\it H}-ATLAS J125632.7$+$233625} & {NCv1.143} & {12:56:32.70} & {$+$23:36:24.86} & {12:56:32.56} & {$+$23:36:27.92} & \t211202 & {164.739} & {3.1$\times$2.9} & {1.5} \\ & & & & {12:56:32.56} & {$+$23:36:27.69} & \t321312 & {254.745} & {2.1$\times$1.0} & {1.5} \\ {{\it H}-ATLAS J132630.1$+$334410} & {NAv1.195} & {13:26:30.12} & {$+$33:44:09.90} & {13:26:30.14} & {$+$33:44:09.11} & \t202111 & {250.045} & {2.0$\times$1.7} & {3.8} \\ & & & & {13:26:30.14} & {$+$33:44:09.09} & \t321312 & {293.334} & {1.0$\times$0.9} & {3.1} \\ {{\it H}-ATLAS J132859.3$+$292327} & {NAv1.177} & {13:28:59.29} & {$+$29:23:27.07} & 
{13:28:59.25} & {$+$29:23:26.18} & \t202111 & {261.495} & {1.9$\times$1.7} & {2.3} \\ & & & & {13:28:59.25} & {$+$29:23:26.34} & \t321312 & {307.812} & {1.6$\times$0.9} & {2.3} \\ {{\it H}-ATLAS J133008.4$+$245900} & {NBv1.78} & {13:30:08.56} & {$+$24:58:58.30} & {13:30:08.56} & {$+$24:58:58.55} & \t321312 & {282.878} & {1.7$\times$1.1} & {4.2} \\ \multirow{2}{*}{{\it H}-ATLAS J084933.4$+$021443} & {G09v1.124-W} & \multirow{2}{*}{08:49:33.36} & \multirow{2}{*}{$+$02:14:42.30} & {08:49:33.59} & {$+$02:14:44.68} & \multirow{2}{*}{\t211202} &\multirow{2}{*}{220.537} &\multirow{2}{*}{1.8$\times$1.2} &\multirow{2}{*}{8.4} \\ & {G09v1.124-T} & & & {08:49:32.95} & {$+$02:14:39.70} & & & & \\ {{\it H}-ATLAS J085358.9$+$015537} & {G09v1.40} & {08:53:58.90} & {$+$01:55:37.00} & {08:53:58.84} & {$+$01:55:37.75} & \t211202 & {243.425} & {1.8$\times$1.0} & {1.9} \\ {{\it H}-ATLAS J091043.1$-$000321} & {SDP11} & {09:10:43.09} & {$-$00:03:22.51} & {09:10:43.06} & {$-$00:03:22.10} & \t202111 & {354.860} & {1.9$\times$1.5} & {3.8} \\ {{\it H}-ATLAS J125135.4$+$261457} & {NCv1.268} & {12:51:35.46} & {$+$26:14:57.52} & {12:51:35.38} & {$+$26:14:58.12} & \t211202 & {160.864} & {2.9$\times$2.6} & {7.7} \\ {{\it H}-ATLAS J134429.4$+$303036} & {NAv1.56} & {13:44:29.52} & {$+$30:30:34.05} & {13:44:29.46} & {$+$30:30:34.01} & \t211202 & {227.828} & {1.7$\times$1.7} & {2.3} \\ \hline \end{tabular} \tablefoot{ RA and DEC are the J2000 {\it Herschel} coordinates which were taken as the centres of the NOEMA images displayed in Fig.\,\ref{fig:map-all}; RA$_\mathrm{pk}$ and DEC$_\mathrm{pk}$ are the J2000 coordinates of the NOEMA dust continuum image peaks; $\nu_\mathrm{obs}$ is the central observed frequency. 
The rest-frame frequencies of para-{\hbox {H$_{2}$O}}\ \t202111, \t211202 and ortho-{\hbox {H$_{2}$O}}\ \t321312 lines are: 987.927\,GHz, 752.033\,GHz and 1162.912\,GHz, respectively (the rest-frame frequencies are taken from the JPL catalogue: \url{http://spec.jpl.nasa.gov}); $t_\mathrm{on}$ is the on-source integration time. The source G09v1.124, which is not resolved by SPIRE, is a cluster that consists of two main components: eastern component W (G09v1.124-W) and western component T (G09v1.124-T) as described in \cite{2013ApJ...772..137I} (see also Fig.\,\ref{fig:map-3}). } \label{table:obs_log} \end{table*} \normalsize \citetalias{2013ApJ...779...25B} provide lensing models, magnification factors ($\mu$) and inferred intrinsic properties of these galaxies and list their CO redshifts which come from \citet{2012ApJ...752..152H}; Harris et al. (in prep.); Lupu et al. (in prep.); Krips et al. (in prep.) and Riechers et al. (in prep.). In our final selection of the sample to be studied in the \htot321312 line, we then removed two sources, SDP\,81 and G12v2.30, that were previously observed in {\hbox {H$_{2}$O}}\ (\citetalias{2013A&A...551A.115O}, and also \citealt{2015ApJ...808L...4A} for SDP\,81), because the $J=2$ {\hbox {H$_{2}$O}}\ emission is too weak and/or the interferometry could resolve out some flux considering the lensing image. The observed \hbox {high-redshift}\ sample thus consists of two GAMA-field sources: G09v1.97 and G12v2.43, and four sources in the {\it H}-ATLAS NGP field: NCv1.143, NAv1.195, NAv1.177 and NBv1.78 (Tables\,\ref{table:obs_log} and \ref{table:previous_obs_properties}). Among the six remaining sources at redshift between 2.8 and 3.6, only one, NBv1.78, has been observed previously in a low-excitation line, para-\htot202111 \citepalias{2013A&A...551A.115O}. 
Therefore, we have observed both para-{\hbox {H$_{2}$O}}\ line \t202111 or \t211202 and ortho-\htot321312 in the other five sources, in order to compare their velocity-integrated flux densities. \begin{table*}[htbp] \setlength{\tabcolsep}{2.8pt} \small \caption{Previously observed properties of the sample.} \centering \begin{tabular}{cllllcccrccc} \hline \hline Source & $z$ & $F_{250}$ & $F_{350}$ & $F_{500}$ & $F_{880}$ & $r_\mathrm{half}$ & $\Sigma_{\mathit{SFR}}$ & $f_{1.4GHz}\;\;\;$& {\hbox {$T_{\mathrm{d}}$}} & $\mu$ & $\mu$\hbox {$L_{\mathrm{IR}}$}\ \\ & & (mJy) & (mJy) & (mJy) & (mJy) & (kpc) & (10$^{3}\,$M$_\odot$\,yr$^{-1}$\,kpc$^{-2}$) & (mJy)\;\;\; & (K) & & (10$^{13}$\,{\hbox {$L_\odot$}}) \\ \hline G09v1.97 & 3.634 & $260\pm7$ & $321\pm8$ & $269\pm9$ & $85.5\pm4.0$ & $0.85$ & $0.91\pm0.15$ & $\pm0.15$ & $44\pm1$ &$\;\;6.9\pm0.6$ & $15.3\pm4.3$ \\ G12v2.43 & 3.127 & $290\pm7$ & $295\pm8$ & $216\pm9$ & $48.6\pm2.3$ & -- & -- & $\pm0.15$ & -- & -- &\;\;($8.3\pm1.7$) \\ NCv1.143 & 3.565 & $214\pm7$ & $291\pm8$ & $261\pm9$ & $97.2\pm6.5$ & $0.40$ & $2.08\pm0.77$ & $0.61\pm0.16$ & $40\pm1$ & $11.3\pm1.7$ & $12.8\pm4.3$ \\ NAv1.195 & 2.951 & $179\pm7$ & $279\pm8$ & $265\pm9$ & $65.2\pm2.3$ & $1.57$ & $0.21\pm0.04$ & $\pm0.14$ & $36\pm1$ &$\;\;4.1\pm0.3$ & $\;\;7.4\pm2.0$ \\ NAv1.177 & 2.778 & $264\pm9$ & $310\pm10$ & $261\pm10$ & $50.1\pm2.1$ & -- & -- & $\pm0.15$ & -- & -- &\;\;($5.5\pm1.1$) \\ NBv1.78 & 3.111 & $273\pm7$ & $282\pm8$ & $214\pm9$ & $59.2\pm4.3$ & $0.55$ & $1.09\pm1.41$ & $0.67\pm0.20$ & $43\pm1$ & $13.0\pm1.5$ & $10.7\pm3.9$ \\ \hline G09v1.124-W$^a$ & \multirow{2}{*}{2.410} & \multirow{2}{*}{$242\pm7$} & \multirow{2}{*}{$293\pm8$} & \multirow{2}{*}{$231\pm9$} & \multirow{2}{*}{$50.0\pm3.5$} & -- & -- & $\pm0.15$ & $40\pm1$ & 1 & $\;\;3.3\pm0.3$ \\ G09v1.124-T$^a$ & & & & & & -- & -- & $\pm0.15$ & $36\pm1$ & $1.5\pm0.2$ & $\;\;2.7\pm0.8$ \\ G09v1.40 & 2.089$^b$ & $389\pm7$ & $381\pm8$ & $241\pm9$ & $61.4\pm2.9$ & $0.41$ & $0.77\pm0.30$ & 
$0.75\pm0.15$ & $36\pm1$ & $15.3\pm3.5$ & $\;\;6.5\pm2.5$ \\ SDP11 & 1.786 & $417\pm6$ & $378\pm7$ & $232\pm8$ & $30.6\pm2.4$ & $0.89$ & $0.22\pm0.08$ & $0.66\pm0.14$ & $41\pm1$ & $10.9\pm1.3$ & $\;\;6.2\pm1.9$ \\ NCv1.268 & 3.675 & $145\pm7$ & $201\pm8$ & $212\pm9$ & $78.9\pm4.4$ & $0.93$ & $0.31\pm0.14$ & $1.10\pm0.14$ & $39\pm1$ & $11.0\pm1.0$ & $\;\;9.5\pm2.7$ \\ NAv1.56 & 2.301 & $481\pm9$ & $484\pm13$ & $344\pm11$ & $73.1\pm2.4$ & $1.50$ & $0.14\pm0.08$ & $1.12\pm0.27$ & $38\pm1$ & $11.7\pm0.9$ & $11.3\pm3.1$ \\ \hline \end{tabular} \tablefoot{ $z$ is the redshift inferred from previous CO detection quoted by \citetalias{2013ApJ...779...25B} (see the references therein); $F_{250}$, $F_{350}$ and $F_{500}$ are the SPIRE flux densities at 250, 350 and 500{\hbox {\,$\mu m$}}, respectively \citep{2011MNRAS.415..911P}; $F_{880}$ is the SMA flux density at 880{\hbox {\,$\mu m$}}; $r_\mathrm{half}$ and $\Sigma_{\rm{SFR}}$ are the intrinsic half-light radius at 880\,$\mu$m and the lensing-corrected surface $\mathit{SFR}$ (star formation rate) density (Section\,\ref{hto}); $f_{1.4GHz}$ is the 1.4\,GHz band flux density from the VLA FIRST survey; {\hbox {$T_{\mathrm{d}}$}}\ is the cold-dust temperature taken from \citetalias{2013ApJ...779...25B} (note that the errors quoted for {\hbox {$T_{\mathrm{d}}$}}\ are significantly underestimated since the uncertainties from differential lensing and the single-temperature dust SED assumption are not fully considered); $\mu$ is the lensing magnification factor from \citetalias{2013ApJ...779...25B}, except for G09v1.124 which is adopted from \cite{2013ApJ...772..137I}; $\mu$\hbox {$L_{\mathrm{IR}}$}\ is the apparent total \hbox {infrared}\ luminosity mostly inferred from \citetalias{2013ApJ...779...25B}. The $\mu$\hbox {$L_{\mathrm{IR}}$}\ in brackets are not listed in \citetalias{2013ApJ...779...25B}, thus we infer them from single modified black body dust SED fitting using the submm photometry data listed in this table.
\\ a: The cluster source G09v1.124 includes two main components: G09v1.124-W to the east and G09v1.124-T to the west (Fig.\,\ref{fig:map-3}); the values of these two rows are quoted from \cite{2013ApJ...772..137I}; b: Our {\hbox {H$_{2}$O}}\ observation gives $z=2.093$ for G09v1.40. This value is slightly different from the value of 2.089 quoted by \citetalias{2013ApJ...779...25B} from Lupu et al. (in prep.), obtained with CSO/Z-Spec, but consistent with the \co32 observation by Riechers et al. (in prep.). } \label{table:previous_obs_properties} \end{table*} \normalsize In addition, we observed five sources, mostly at lower redshifts, in the para-{\hbox {H$_{2}$O}}\ lines \t202111 or \t211202 (Tables\,\ref{table:obs_log} and \ref{table:previous_obs_properties}) to complete the sample of our {\hbox {H$_{2}$O}}\ low-excitation study. They are three strongly lensed sources, G09v1.40, NAv1.56 and SDP11, a hyper-luminous cluster source, G09v1.124 \citep{2013ApJ...772..137I}, and a $z \sim 3.7$ source, NCv1.268, for which we did not propose a $J=3$ {\hbox {H$_{2}$O}}\ observation because its large linewidth would make the line detection difficult. As our primary goal is to obtain a detection of the submm {\hbox {H$_{2}$O}}\ lines, we carried out the observations in the compact, D configuration of NOEMA. The baselines extended from 24 to 176\,m, resulting in a synthesised beam with modest/low resolution of $\sim$\,$1.0$\,$''\times0.9$\,$''$ to $\sim$\,$5.6$\,$''\times3.3$\,$''$ as shown in Table \ref{table:obs_log}. The {\hbox {H$_{2}$O}}\ observations were conducted from January 2012 to December 2013 in good atmospheric conditions (seeing of $0.3$\,$''$--$1.5$\,$''$), with good stability and reasonable transparency (PWV $\leq\,1\,\mathrm{mm}$). The total on-source time was $\sim$\,1.5--8 hours per source. The 2\,mm, 1.3\,mm and 0.8\,mm bands, covering 129--174, 201--267 and 277--371\,GHz, respectively, were used.
All the central observing frequencies were chosen based on the redshifts from previous CO detections given by \citetalias{2013ApJ...779...25B} (Table\,\ref{table:previous_obs_properties}). In all cases but one, the frequencies of our detected {\hbox {H$_{2}$O}}\ lines are consistent with these CO redshifts. The only exception is G09v1.40, where our {\hbox {H$_{2}$O}}\ redshift disagrees with the redshift of $z=2.0894\pm0.0009$ given by Lupu et al. (in prep.), which is quoted by \citetalias{2013ApJ...779...25B}. We find $z=2.0925\pm0.0001$, in agreement with previous \co32 observations (Riechers et al., in prep.). We used the WideX correlator, which provided a contiguous frequency coverage of 3.6\,GHz in dual polarisation with a fixed channel spacing of 1.95\,MHz. The phase and bandpass were calibrated by measuring standard calibrators that are regularly monitored at IRAM/NOEMA, including 3C279, 3C273, MWC349 and 0923+392. The accuracy of the flux calibration is estimated to range from $\sim$10\% in the 2\,mm band to $\sim$20\% in the 0.8\,mm band. Calibration, imaging, cleaning and spectra extraction were performed within the \texttt{GILDAS}\footnote{See \url{http://www.iram.fr/IRAMFR/GILDAS} for more information about the GILDAS software.} packages \texttt{CLIC} and \texttt{MAPPING}.
\begin{table}[htbp] \small \setlength{\tabcolsep}{1.45em} \caption{Observed CO line properties using the IRAM 30m/EMIR.} \centering \begin{tabular}{lccc} \hline \hline Source & CO line & \hbox {$I_{\mathrm{CO}}$} & \hbox {$\Delta V_\mathrm{CO}$} \\ & & (Jy\,km\,s$^{-1}$) & (km\,s$^{-1}$) \\ \hline G09v1.97 & \tco54 & $\;\;9.5\pm1.2$ & $224\pm 32$ \\ & \tco65 & $10.4\pm2.3$ & $292\pm 86$ \\ NCv1.143 & \tco54 & $13.1\pm1.0$ & $273\pm 27$ \\ & \tco65 & $11.0\pm1.0$ & $284\pm 27$ \\ NAv1.195 & \tco54 & $11.0\pm0.6$ & $281\pm 16$ \\ NAv1.177 & \tco32 & $\;\;6.8\pm0.4$ & $231\pm 15$ \\ & \tco54 & $11.0\pm0.6$ & $230\pm 16$ \\ NBv1.78 & \tco54 & $10.3\pm0.8$ & $614\pm 53$ \\ & \tco65 & $\;\;9.7\pm1.0$ & $734\pm 85$ \\ G09v1.40 & \tco43 & $\;\;7.5\pm2.1$ & $198\pm 51$ \\ NAv1.56 & \tco54 & $17.7\pm6.6$ & $\;\;432\pm 182$ \\ \hline \end{tabular} \tablefoot{ \hbox {$I_{\mathrm{CO}}$}\ is the velocity-integrated flux density of CO; \hbox {$\Delta V_\mathrm{CO}$}\ is the linewidth (FWHM) derived from fitting a single Gaussian to the line profile. } \label{table:co_properties} \end{table} \normalsize To compare the {\hbox {H$_{2}$O}}\ emission with that of the typical molecular gas tracer, CO, we also observed the sources in CO lines using the EMIR receiver at the IRAM 30m telescope. The CO data will be part of a systematic study of molecular gas excitation in {\it H}-ATLAS lensed Hy/ULIRGs, and a full description of the data and the scientific results will be given in a forthcoming paper (Yang et al., in prep.). The global CO emission properties of the sources, namely the velocity-integrated flux densities and linewidths, are listed in Table\,\ref{table:co_properties}. A brief comparison of the emission of the {\hbox {H$_{2}$O}}\ and CO lines will be given in Section \ref{CO lines}.
\section{Results}\label{Results} A detailed discussion of the observational results for each source is given in Appendix\,\ref{Individual sources}, including the strength of the {\hbox {H$_{2}$O}}\ emission, the spatial extent of the {\hbox {H$_{2}$O}}\ line and continuum images (Fig.\,\ref{fig:map-all}), the {\hbox {H$_{2}$O}}\ spectra and linewidths (Fig.\,\ref{fig:spectra-all}) and their comparison with CO (Table\,\ref{table:co_properties}). We give a synthesis of these results in this section. \subsection{General properties of the {\hbox {H$_{2}$O}}\ emission} \label{General properties} \begin{subfigures} \begin{figure*}[htbp] \begin{center} \includegraphics[scale=0.85]{spec-1} \caption{ Spatially integrated spectra of {\hbox {H$_{2}$O}}\ in the six sources with both $J=2$ para-{\hbox {H$_{2}$O}}\ and $J=3$ ortho-{\hbox {H$_{2}$O}}\ lines observed. The red lines represent the Gaussian fits to the emission lines. The \htot202111 spectrum of NBv1.78 is taken from \citetalias{2013A&A...551A.115O}. Except for \htot321312 in NAv1.195, all the $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines are well detected, with high S/N ratios and similar profiles in both lines for the same source.} \label{fig:spectra-1} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[scale=0.97]{spec-2} \caption{ Spatially integrated spectra of {\hbox {H$_{2}$O}}\ in the five sources with only one $J=2$ para-{\hbox {H$_{2}$O}}\ line observed. The red lines represent the Gaussian fits to the emission lines. Except for the {\hbox {H$_{2}$O}}\ line in G09v1.124, all the $J=2$ {\hbox {H$_{2}$O}}\ lines are well detected.
} \label{fig:spectra-2} \end{center} \end{figure*} \label{fig:spectra-all} \end{subfigures} \begin{table*}[htbp] \small \setlength{\tabcolsep}{4.5pt} \caption{The observed properties of {\hbox {H$_{2}$O}}\ emission lines.} \centering \begin{tabular}{cccrrrrrrrr} \hline \hline Source & {\hbox {H$_{2}$O}}\ line & $\nu_\mathrm{H_2O}$ & $S_{\nu}(\mathrm{ct})^{\mathrm{pk}}\:\:\:$ & $S_{\nu}(\mathrm{ct})\:\:\:\:\:$ & $S_\mathrm{H_2O}^{\mathrm{pk}}\:\:\:\:$ & $S_\mathrm{H_2O}\:\:\:\:$& \hbox {$I_{\mathrm{H_2O}}$}$^{\mathrm{pk}}\:\:\:$ & \hbox {$I_{\mathrm{H_2O}}$}\:\:\:\:\:\: & \hbox {$\Delta V_\mathrm{H_2O}$}\: & $\mu$\hbox {$L_{\mathrm{H_2O}}$}\:\:\:\: \\ & & (GHz) & ($\mathrm{mJy} \over \mathrm{beam}$)\:\:\:\:\: & (mJy)\:\:\:\:\: & ($\mathrm{mJy} \over \mathrm{beam}$)\:\:\:\: & (mJy)\:\:\:\: & ($\mathrm{Jy\,km\,s}^{-1} \over \mathrm{beam}$) & (Jy\,km\,s$^{-1}$) & (km\,s$^{-1}$) & (10$^8$\,{\hbox {$L_\odot$}})\:\:\:\\ \hline G09v1.97 & \t211202 & 162.255 & $ 8.9\pm0.2$ & $ 9.4\pm0.2$ & $14.9\pm2.2$ & $15.0\pm2.1$ & $ 3.8\pm0.4$ & $ 4.1\pm0.4$ & $257\pm27$ & $ 7.4\pm0.7$ \\ & \t321312 & 250.947 & $21.7\pm0.3$ & $36.1\pm0.3$ & $ 7.8\pm1.9$ & $15.0\pm2.6$ & $ 2.4\pm0.4$ & $ 3.7\pm0.4$ & $234\pm34$ & $10.4\pm1.0$ \\ G12v2.43 & \t202111 & 239.388 & $16.0\pm0.3$ & $22.5\pm0.4$ & $10.8\pm2.1$ & $17.3\pm3.1$ & $ 3.2\pm0.5$ & $ 4.8\pm0.6$ & $262\pm35$ & $ 8.8\pm1.0$ \\ & \t321312 & 281.784 & $31.5\pm0.3$ & $36.4\pm0.3$ & $25.6\pm3.3$ & $25.0\pm3.0$ & $ 4.9\pm0.4$ & $ 5.9\pm0.5$ & $221\pm20$ & $12.7\pm1.0$ \\ NCv1.143 & \t211202 & 164.741 & $11.2\pm0.1$ & $13.3\pm0.2$ & $17.4\pm1.3$ & $18.7\pm1.3$ & $ 5.6\pm0.3$ & $ 5.8\pm0.3$ & $293\pm15$ & $10.1\pm0.5$ \\ & \t321312 & 254.739 & $34.8\pm0.5$ & $63.5\pm0.5$ & $23.9\pm4.3$ & $32.1\pm4.1$ & $ 5.2\pm0.6$ & $ 8.0\pm0.7$ & $233\pm22$ & $21.3\pm1.8$ \\ NAv1.195 & \t202111 & 250.034 & $14.0\pm0.4$ & $25.8\pm0.4$ & $ 6.6\pm2.5$ & $11.6\pm2.5$ & $ 2.1\pm0.6$ & $ 4.0\pm0.6$ & $328\pm51$ & $ 6.7\pm1.0$ \\ & \t321312 & (293.334) & 
$17.2\pm0.5$ & $41.2\pm0.5$ & $ <4.2$ & $ <7.3$ & $ <1.5$ & $ <2.6$ & $ 330^{a}$ & $ <5.0$ \\ NAv1.177 & \t202111 & 261.489 & $26.5\pm0.6$ & $35.5\pm0.6$ & $16.8\pm4.9$ & $21.2\pm4.9$ & $ 4.4\pm0.9$ & $ 5.4\pm0.9$ & $241\pm41$ & $ 8.2\pm1.2$ \\ & \t321312 & 307.856 & $38.2\pm0.4$ & $62.0\pm0.4$ & $14.8\pm2.6$ & $25.2\pm3.1$ & $ 4.6\pm0.5$ & $ 7.3\pm0.6$ & $272\pm24$ & $12.9\pm1.1$ \\ NBv1.78 & \;\,\t202111$^{b}$ & 240.290 & $15.4\pm0.3$ & $36.9\pm0.4$ & $ 5.0\pm1.0$ & $12.3\pm3.2$ & $2.7\pm0.3$ & $6.7\pm1.3$ & $510\pm90$ & $12.2\pm2.4$\\ & \t321312 & 282.863 & $29.2\pm0.2$ & $42.6\pm0.2$ & $ 8.8\pm1.0$ & $10.6\pm1.0$ & $ 4.8\pm0.4$ & $ 6.7\pm0.5$ & $607\pm43$ & $14.3\pm1.0$ \\ \hline G09v1.124-W & \multirow{2}{*}{\t211202} & \multirow{2}{*}{(220.537)} & $6.42\pm0.15$& $7.6\pm0.2$ & $ <1.4$ & $ <1.6$ & $ <1.2^{c}$ & $ <1.4^{c}$ & $850^{c}$ & $<1.3^{c}$ \\ G09v1.124-T & & & $4.08\pm0.15$& $4.9\pm0.2$ & $ <1.7$ & $ <2.0$ & $ <1.0^{c}$ & $ <1.2^{c}$ & $550^{c}$ & $<1.0^{c}$ \\ G09v1.40 & \t211202 & 243.182 & $16.9\pm0.2$ & $30.6\pm0.3$ & $17.5\pm2.0$ & $27.7\pm1.9$ & $ 4.9\pm0.4$ & $ 8.2\pm0.4$ & $277\pm14$ & $ 5.7\pm0.3$ \\ SDP11 & \t202111 & 354.930 & $29.2\pm1.3$ & $52.1\pm1.3$ & $14.8\pm8.4$ & $40.3\pm11.7$ & $ 5.2\pm2.0$ & $ 9.2\pm2.0$ & $214\pm41$ & $ 6.3\pm1.1$ \\ NCv1.268 & \t211202 & 161.013 & $ 6.6\pm0.1$ & $10.0\pm0.1$ & $ 5.2\pm1.1$ & $ 9.0\pm1.2$ & $ 3.7\pm0.4$ & $ 7.0\pm0.7$ & $731\pm75$ & $12.8\pm1.2$ \\ NAv1.56 & \t211202 & 227.822 & $14.0\pm0.6$ & $22.7\pm0.6$ & $15.8\pm3.3$ & $23.2\pm3.0$ & $ 7.8\pm1.1$ & $14.6\pm1.3$ & $593\pm56$ & $12.0\pm1.1$ \\ \hline \end{tabular} \tablefoot{ $\nu_\mathrm{H_2O}$ is the observed central frequency of {\hbox {H$_{2}$O}}\ lines, and the values in brackets are the {\hbox {H$_{2}$O}}\ line frequencies inferred from the CO redshifts for the undetected sources; $S_{\nu}(ct)^{\mathrm{pk}}$ and $S_{\nu}(ct)$ are the peak and spatially integrated continuum flux density, respectively; $S_\mathrm{H_2O}^{\mathrm{pk}}$ is the 
peak {\hbox {H$_{2}$O}}\ line flux and $S_\mathrm{H_2O}$ is the total line flux; \hbox {$I_{\mathrm{H_2O}}$}$^{\mathrm{pk}}$ and \hbox {$I_{\mathrm{H_2O}}$}\ are the peak and spatially integrated velocity-integrated flux density of the {\hbox {H$_{2}$O}}\ lines; \hbox {$\Delta V_\mathrm{H_2O}$}\ is the {\hbox {H$_{2}$O}}\ linewidth; $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ is the apparent luminosity of the observed {\hbox {H$_{2}$O}}\ line. \\ a: The linewidth of the undetected \htot321312 in NAv1.195 has been set to 330\,km\,s$^{-1}$ by assuming that the widths of the \htot321312 and \htot202111 lines are roughly the same; b: The data for para-\htot202111 in NBv1.78 are taken from \citetalias{2013A&A...551A.115O}; c: The 2\,$\sigma$ upper limits of \hbox {$I_{\mathrm{H_2O}}$}\ are derived by assuming that the {\hbox {H$_{2}$O}}\ linewidths are similar to those of the CO lines \citep{2013ApJ...772..137I}.\\ } \label{table:h2o_properties} \end{table*} \normalsize To measure the linewidth, velocity-integrated flux density and continuum level both at the source peak and over the entire source, we extract from each CLEANed image the spectrum at the position of the source peak in a single synthesised beam, as well as the spectrum integrated over the entire source. We then fit them with Gaussian profiles using \texttt{MPFIT} \citep{2009ASPC..411..251M}. We detect the high-excitation ortho-\htot321312 in five out of the six observed sources, with high signal-to-noise ratios ($S/N\;>9$) and velocity-integrated flux densities comparable to those of the low-excitation $J=2$ para-{\hbox {H$_{2}$O}}\ lines (Table \ref{table:h2o_properties} and Figs.\,\ref{fig:spectra-all} \& \ref{fig:map-all}). We also detect nine out of eleven $J=2$ para-{\hbox {H$_{2}$O}}\ lines, either \t202111 or \t211202, with $S/N\,\ge\,6$ in terms of their velocity-integrated flux density, plus one tentative detection of \htot202111 in SDP11.
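The single-Gaussian fits described above were done with \texttt{MPFIT}. As an illustration only, an equivalent fit can be sketched in Python with \texttt{scipy.optimize.curve\_fit} standing in for \texttt{MPFIT}; the function and variable names here are ours, not those of the actual pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    """Single Gaussian line profile: flux density (mJy) vs. velocity (km/s)."""
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def fit_h2o_line(v, s_nu):
    """Fit one Gaussian and return the peak flux (mJy), the FWHM (km/s)
    and the velocity-integrated flux density (Jy km/s)."""
    p0 = [s_nu.max(), v[np.argmax(s_nu)], 150.0]  # rough initial guess
    (amp, v0, sigma), _ = curve_fit(gaussian, v, s_nu, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    # Analytic integral of the Gaussian, converted from mJy km/s to Jy km/s
    i_line = amp * abs(sigma) * np.sqrt(2.0 * np.pi) / 1.0e3
    return amp, fwhm, i_line
```

The velocity-integrated flux density is obtained here from the analytic integral of the fitted Gaussian rather than by summing channels, which is equivalent for a well-fitted single-component line.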
We present the velocity-integrated {\hbox {H$_{2}$O}}\ flux density detected at the source peak in a single synthesised beam, \hbox {$I_{\mathrm{H_2O}}$}$^{\mathrm{pk}}$, and the velocity-integrated {\hbox {H$_{2}$O}}\ flux density over the entire source, $I_{\mathrm{H_2O}}$ (Table\,\ref{table:h2o_properties}). The detected {\hbox {H$_{2}$O}}\ lines are strong, with \hbox {$I_{\mathrm{H_2O}}$}\;$= 3.7\text{--}14.6$\,Jy\,km\,s$^{-1}$. Even after correcting for gravitational lensing, this is consistent with our previous finding that \hbox {high-redshift}\ Hy/ULIRGs are very strong {\hbox {H$_{2}$O}}\ emitters, with {\hbox {H$_{2}$O}}\ flux densities approaching those of CO (Tables\,\ref{table:co_properties} \& \ref{table:h2o_properties} and Section\,\ref{CO lines}). The majority of the images (7/11 for the $J=2$ lines and 3/4 for $J=3$) are marginally resolved, with \hbox {$I_{\mathrm{H_2O}}$}$^{\mathrm{pk}}$/$I_{\mathrm{H_2O}} \sim 0.4\text{--}0.7$, and show hints of lensing structure. The others are unresolved, with \hbox {$I_{\mathrm{H_2O}}$}$^{\mathrm{pk}}$/$I_{\mathrm{H_2O}} > 0.8$. All continuum flux densities ($S_{\nu}(\mathrm{ct})^{\mathrm{pk}}$ for the emission peak and $S_{\nu}(\mathrm{ct})$ for the entire source) are very well detected ($S/N \ge 30$), with total flux densities $S_{\nu}(\mathrm{ct})$ in the range 9--64\,mJy. Fig.\,\ref{fig:map-all} shows the low-resolution images of {\hbox {H$_{2}$O}}\ and of the corresponding dust continuum at the observing frequencies. Because the source positions were derived from {\it Herschel} observations, whose beamsize ($>17$\,$''$) is large compared to the source size, most of the sources are not perfectly centred on these {\it Herschel} positions in the maps. The offsets are all within the positional uncertainty of the {\it Herschel} measurements (Fig.\,\ref{fig:map-all}).
G09v1.124 is a complex HyLIRG system including two main components, the eastern G09v1.124-W and the western G09v1.124-T, as described in \cite{2013ApJ...772..137I}. In Fig.\,\ref{fig:map-3}, we identify the two strong components, separated by about 10$''$, in agreement with \citet{2013ApJ...772..137I}. The $J=2$ {\hbox {H$_{2}$O}}\ and dust continuum emissions in NBv1.78, NAv1.195, G09v1.40, SDP\,11 and NAv1.56, as well as the $J=3$ ortho-{\hbox {H$_{2}$O}}\ and the corresponding dust continuum emissions in G09v1.97, NCv1.143 and NAv1.177, are marginally resolved, as shown in Fig.\,\ref{fig:map-all}. Their images are consistent with the corresponding SMA images \citepalias{2013ApJ...779...25B} in terms of their spatial distribution. The rest of the sources are not resolved by the low-resolution synthesised beams. The morphological structure of the {\hbox {H$_{2}$O}}\ emission is similar to that of the continuum for most sources, as shown in Fig.\,\ref{fig:map-all}. The ratios $S_{\nu}(\mathrm{ct})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{ct})$ and $S_{\nu}(\mathrm{H_2O})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{H_2O})$ are in good agreement within the errors. However, in NCv1.143, where $S_{\nu}(\mathrm{ct})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{ct})=0.55\pm0.01$ and $S_{\nu}(\mathrm{H_2O})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{H_2O})=0.74\pm0.16$, the $J=3$ ortho-{\hbox {H$_{2}$O}}\ emission appears more compact than the dust continuum. Generally, it seems unlikely that a significant fraction of the flux is missing for our sources. Nevertheless, the low angular resolution ($\sim$\,$1''$ at best) limits the study of the spatial distribution of the gas and dust in our sources. A detailed analysis of the images of each source is given in Appendix\,\ref{Individual sources}. The majority of the sources have {\hbox {H$_{2}$O}}\ (and CO) linewidths between 210 and 330\,km\,s$^{-1}$, while the four others range between 500 and 700\,km\,s$^{-1}$ (Table\,\ref{table:h2o_properties}).
Except for NCv1.268, which shows a double-peaked line profile, all the {\hbox {H$_{2}$O}}\ lines are well fitted by a single Gaussian profile (Fig.\,\ref{fig:spectra-all}). The profiles of the $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines do not differ significantly, as shown by their linewidth ratios, which range from $0.84\pm0.16$ to $1.26\pm0.14$. The magnification from strong lensing is very sensitive to the spatial configuration (differential lensing), which could lead to different line profiles if the different velocity components of a line are emitted at different spatial positions. Since there is no visible differential effect between their profiles, it is possible that the $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines come from similar spatial regions. In addition to {\hbox {H$_{2}$O}}, within the 3.6\,GHz WideX band, we have also tentatively detected {\hbox {H$_2$O$^+$}}\ emission in three sources: NCv1.143, G09v1.97 and G15v2.779 (see Section \ref{htop}). \subsection{Lensing properties} \label{Lensing properties} All our sources are strongly gravitationally lensed (except G09v1.124, see Appendix\,\ref{g09v1.124}), which boosts the line flux densities and allows us to study the {\hbox {H$_{2}$O}}\ emission in an affordable amount of observation time. However, the complexity of the lensed images complicates the analysis. As mentioned above, most of our lensed images are either unresolved or marginally resolved. Thus, we will not discuss here the spatial distribution of the {\hbox {H$_{2}$O}}\ and dust emissions through gravitational lensing modelling. However, we should keep in mind that the correction for the magnification is a crucial part of our study. In addition, differential lensing could have a significant influence when comparing {\hbox {H$_{2}$O}}\ emission with dust emission, and even when comparing different transitions of the same molecular species \citep{2012MNRAS.424.2429S}, especially for emission arising close to the caustics.
In order to infer the intrinsic properties of our sample, especially \hbox {$L_{\mathrm{H_2O}}$}, as in our first paper \citetalias{2013A&A...551A.115O}, we adopted the lensing magnification factors $\mu$ (Table\,\ref{table:previous_obs_properties}) computed from the modelling of the 880\,$\mu$m SMA images \citepalias{2013ApJ...779...25B}. As shown in the Appendix, the ratios $S_{\nu}(\mathrm{ct})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{ct})$ and $S_{\nu}(\mathrm{H_2O})^{\mathrm{pk}}$/$S_{\nu}(\mathrm{H_2O})$ are in good agreement within the uncertainties. Therefore, it is unlikely that the magnifications of the 880\,$\mu$m continuum and of the {\hbox {H$_{2}$O}}\ emission are significantly different. However, \citetalias{2013ApJ...779...25B} were unable to provide a lensing model for two of our sources, G12v2.43 and NAv1.177, because their lens deflector is unidentified. This does not affect the modelling of the {\hbox {H$_{2}$O}}\ excitation or the comparison of the {\hbox {H$_{2}$O}}\ and \hbox {infrared}\ luminosities, since the differential lensing effect seems to be insignificant, as discussed in Section\,\ref{Discussion} and Appendix\,\ref{Individual sources}.
\begin{table*}[!htbp] \small \setlength{\tabcolsep}{1.5em} \caption{IR luminosity, {\hbox {H$_{2}$O}}\ line luminosity and global dust temperature of the entire sample.} \centering \begin{tabular}{cllccc} \hline \hline Source & {\hbox {H$_{2}$O}}\ Transition & \hbox {$L_{\mathrm{IR}}$}\ & \lhtot211202 & \lhtot202111 & \lhtot321312 \\ & & ($10^{12}$\,{\hbox {$L_\odot$}}) & ($10^{7}$\,{\hbox {$L_\odot$}}) & ($10^{7}$\,{\hbox {$L_\odot$}}) & ($10^{7}$\,{\hbox {$L_\odot$}}) \\ \hline G09v1.97 & \t211202, \t321312 & $22.1\pm5.9$ & $10.7\pm1.4$ & -- & $ 15.0\pm1.9$ \\ G12v2.43 & \t202111, \t321312 & $83.2\pm16.6/\mu$ & -- & $88.4\pm10.7/\mu$ & $143.2\pm11.5/\mu$ \\ NCv1.143 & \t211202, \t321312 & $11.4\pm3.1$ & $ 9.0\pm1.4$ & -- & $ 18.9\pm3.3$ \\ NAv1.195 & \t202111, \t321312 & $18.0\pm4.6$ & -- & $16.4\pm3.0$ & $ <12.3$ \\ NAv1.177 & \t202111, \t321312 & $55.0\pm11.0/\mu$ & -- & $82.0\pm12.8/\mu$ & $129.1\pm10.8/\mu$ \\ NBv1.78 & \t202111, \t321312 & $\;\;8.2\pm2.2$ & -- & $ 9.4\pm2.1$ & $11.0\pm1.5$ \\ \hline G09v1.124-W & \t211202 & $33.1\pm3.2$ & $<12.9$ & -- & -- \\ G09v1.124-T & \t211202 & $14.5\pm1.8$ & $<6.9$ & -- & -- \\ G09v1.40 & \t211202 & $\;\;4.2\pm1.3$ & $ 3.7\pm0.9$ & -- & -- \\ SDP11 & \t202111 & $\;\;5.7\pm1.6$ & -- & $ 5.8\pm1.4$ & -- \\ NCv1.268 & \t211202 & $\;\;8.6\pm2.3$ & $11.5\pm1.5$ & -- & -- \\ NAv1.56 & \t211202 & $\;\;9.7\pm2.6$ & $10.3\pm1.2$ & -- & -- \\ \hline SDP81 & \t202111 & \;\;6.1 & -- & 3.3 & -- \\ NAv1.144 & \t211202 & 11 & 9.7 & -- & -- \\ SDP9 & \t211202 & \;\;5.2 & 7.0 & -- & -- \\ G12v2.30 & \t202111 & 16 & -- & 13 & -- \\ SDP17b & \t202111 & 16 & -- & 20 & -- \\ G15v2.779 & \t211202 & 21 & 26.6 & -- & -- \\ \hline \end{tabular} \tablefoot{ $L_\mathrm{IR}$ is the intrinsic total \hbox {infrared}\ luminosity (8-1000{\hbox {\,$\mu m$}}) taken from \citetalias{2013ApJ...779...25B}. The intrinsic {\hbox {H$_{2}$O}}\ luminosities are inferred from $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ using $\mu$ in \citetalias{2013ApJ...779...25B}. 
The first group contains the sources with both $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines observed, the second group the sources with only $J=2$ {\hbox {H$_{2}$O}}\ lines observed, and the last group the previously published sources of \citetalias{2013A&A...551A.115O}. } \label{table:Lir_Lh2o} \end{table*} \normalsize \section{Discussion} \label{Discussion} \subsection{\hbox {$L_{\mathrm{H_2O}}$}-\hbox {$L_{\mathrm{IR}}$}\ correlation and \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ ratio} \label{correlation} Using the formula given by \cite{1992ApJ...387L..55S}, we derive the apparent {\hbox {H$_{2}$O}}\ luminosities of the sources, $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ (Table\,\ref{table:h2o_properties}), from \hbox {$I_{\mathrm{H_2O}}$}. For the ortho-\htot321312 lines, $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ varies in the range of $6\text{--}22\times10^{8}$\,{\hbox {$L_\odot$}}, while the $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ values of the $J=2$ lines are a factor of $\sim1.2\text{--}2$ lower (Table\,\ref{table:h2o_properties}), as discussed in Section\,\ref{hto}. Using the lensing magnification correction (taking the values of $\mu$ from \citetalias{2013ApJ...779...25B}), we have derived the intrinsic {\hbox {H$_{2}$O}}\ luminosities (Table\,\ref{table:Lir_Lh2o}). The error on each luminosity includes the uncertainties from both the observations and the gravitational lensing modelling. After correcting for lensing, the {\hbox {H$_{2}$O}}\ luminosities of our \hbox {high-redshift}\ galaxies, like their \hbox {infrared}\ luminosities, are about one order of magnitude higher than those of local ULIRGs (Table\,\ref{table:Lir_Lh2o}), so that many of them should be considered as HyLIRGs rather than ULIRGs.
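For reference, the conversion from velocity-integrated flux density to apparent line luminosity \citep{1992ApJ...387L..55S} can be sketched as follows. This is a minimal implementation: the luminosity distance $D_L$ must be supplied externally (e.g. from \texttt{astropy.cosmology}), and the numerical $D_L$ in the comment is our approximate assumption, not a value from this paper:

```python
def line_luminosity(i_line, nu_obs, d_l):
    """Apparent line luminosity in L_sun, following Solomon et al. (1992):
    L_line = 1.04e-3 * I_line * nu_obs * D_L**2,
    where I_line is the velocity-integrated flux density (Jy km/s),
    nu_obs the observed line frequency (GHz) and D_L the luminosity
    distance (Mpc)."""
    return 1.04e-3 * i_line * nu_obs * d_l ** 2

# Example: H2O(211-202) in G09v1.97 (z = 3.634), I = 4.1 Jy km/s at
# 162.255 GHz; with an assumed D_L ~ 3.27e4 Mpc (flat LCDM) this gives
# ~7e8 L_sun, of the order of the tabulated mu*L_H2O for this line.
```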
The ratio \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ in our \hbox {high-redshift}\ sample is close to that of local ULIRGs \citepalias{2013ApJ...771L..24Y}, with a slight statistical increase towards the extreme high-\hbox {$L_{\mathrm{IR}}$}\ end (Fig.\,\ref{fig:h2o-ir}). Having extended the number of detections to 21 {\hbox {H$_{2}$O}}\ lines, distributed over 16 sources and three transitions (Fig.\,\ref{fig:h2o-ir}), we can now study the correlations of \lhtot202111 and \lhtot211202 with \hbox {$L_{\mathrm{IR}}$}\ independently, whereas the two lines had to be approximately combined in \citetalias{2013A&A...551A.115O}. \begin{figure*}[htbp] \begin{center} \includegraphics[scale=0.62]{L_L_all} \caption{ Correlation between \hbox {$L_{\mathrm{IR}}$}\ and \hbox {$L_{\mathrm{H_2O}}$}\ in local ULIRGs and \hbox {high-redshift}\ Hy/ULIRGs. The black points represent local ULIRGs from \citetalias{2013ApJ...771L..24Y}. The blue points with solid error bars are the {\it H}-ATLAS sources in this work together with some previously published sources. Red points with dashed error bars are excluded from the fit as described in the text. Upper limits are shown as arrows. The light blue lines show the results of the fits. The insets show the probability density distributions of the fitted slopes $\alpha$. We find tight correlations between the luminosity of the three {\hbox {H$_{2}$O}}\ lines and \hbox {$L_{\mathrm{IR}}$}, namely \hbox {$L_{\mathrm{H_2O}}$}\;$\propto$\;\hbox {$L_{\mathrm{IR}}$}$^{1.1-1.2}$. } \label{fig:h2o-ir} \end{center} \end{figure*} As found in \citetalias{2013A&A...551A.115O}, the correlation is slightly steeper than linear (\hbox {$L_{\mathrm{H_2O}}$}\;$\sim$\;\hbox {$L_{\mathrm{IR}}$}$^{1.2}$).
To broaden the dynamical range of this comparison, we also included the local ULIRGs from \citetalias{2013ApJ...771L..24Y}, together with a few other {\hbox {H$_{2}$O}}\ detections in \hbox {high-redshift}\ Hy/ULIRGs, namely HLSJ\,0918 (HLSJ\,091828.6+514223) \citep{2012A&A...538L...4C, 2014ApJ...783...59R}, APM\,08279 \citep{2011ApJ...741L..38V}, SPT\,0538 (SPT-S\,J053816$-$5030.8) \citep{2013ApJ...779...67B} and HFLS3 (\citealt{2013Natur.496..329R}, with the magnification factor from \citealt{2014ApJ...790...40C}) (Fig.\,\ref{fig:h2o-ir}). In the fitting, however, we excluded the sources with heavy AGN contamination (Mrk\,231 and APM\,08279) or with flux resolved out by the interferometry (SDP\,81). We also excluded the \htot321312 line of HFLS3, considering its unusually high \lhtot321312/\hbox {$L_{\mathrm{IR}}$}\ ratio (see below), which could bias our fitting. We performed a linear regression in log-log space using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) sampler {\texttt{linmix\_err}} \citep{2007ApJ...665.1489K} to derive the slope $\alpha$ in \begin{equation} \label{eq:L_L} L_\mathrm{H_{2}O} \propto L_\mathrm{IR}^\alpha. \end{equation} The fitted slopes are $\alpha=1.06\pm0.19$, $1.16\pm0.13$ and $1.06\pm0.22$ for the {\hbox {H$_{2}$O}}\ lines \t202111, \t211202 and \t321312, respectively. Compared with the local ULIRGs, the \hbox {high-redshift}\ lensed ones have higher \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ ratios (Table\,\ref{table:Lir_Lh2o_ratios}). These slopes confirm our first result derived from seven {\hbox {H$_{2}$O}}\ detections in \citetalias{2013A&A...551A.115O}. The slightly super-linear correlations seem to indicate that \hbox {far-infrared}\ pumping plays an important role in the excitation of the submm {\hbox {H$_{2}$O}}\ emission.
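The actual fit used \texttt{linmix\_err} \citep{2007ApJ...665.1489K}, which also treats measurement errors on both axes and intrinsic scatter. A much-simplified Metropolis-Hastings sketch of the log-log fit, with Gaussian errors on \hbox {$L_{\mathrm{H_2O}}$}\ only and all function names ours, could look like:

```python
import numpy as np

def mh_loglog_slope(logx, logy, logy_err, n_steps=20000, burn=5000, seed=0):
    """Random-walk Metropolis-Hastings sampling of (alpha, beta) in
    log10(L_H2O) = alpha * (log10(L_IR) - <log10(L_IR)>) + beta,
    with flat priors. Returns the posterior mean and std of alpha."""
    rng = np.random.default_rng(seed)
    x = logx - logx.mean()          # centre x to decorrelate alpha and beta

    def log_like(alpha, beta):
        return -0.5 * np.sum(((logy - (alpha * x + beta)) / logy_err) ** 2)

    alpha, beta = 1.0, logy.mean()  # crude starting point
    ll = log_like(alpha, beta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        a_new = alpha + rng.normal(0.0, 0.02)
        b_new = beta + rng.normal(0.0, 0.02)
        ll_new = log_like(a_new, b_new)
        if np.log(rng.random()) < ll_new - ll:   # accept/reject step
            alpha, beta, ll = a_new, b_new, ll_new
        chain[i] = alpha, beta
    post = chain[burn:]
    return post[:, 0].mean(), post[:, 0].std()
```

Because the abscissa is centred, the slope and intercept are nearly uncorrelated in the posterior, which lets this naive sampler mix with a simple isotropic proposal.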
This is unlike the high-$J$ CO lines, which are dominated by collisional excitation and follow a linear correlation between the CO line luminosity and \hbox {$L_{\mathrm{IR}}$}\ from the local to the \hbox {high-redshift}\ Universe \citep{2015ApJ...810L..14L}. As demonstrated in \citetalias{2014A&A...567A..91G}, using the \hbox {far-infrared}\ pumping model, the steeper-than-linear growth of \hbox {$L_{\mathrm{H_2O}}$}\ with \hbox {$L_{\mathrm{IR}}$}\ can be the result of an optical depth at 100\,$\mu$m ($\tau_{100}$) that increases with \hbox {$L_{\mathrm{IR}}$}. In local ULIRGs, the ratio \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ is relatively low, as most of them are likely to be optically thin \citepalias[$\tau_{100}\sim0.1$,][]{2014A&A...567A..91G}. On the other hand, for the \hbox {high-redshift}\ lensed Hy/ULIRGs with high values of \hbox {$L_{\mathrm{IR}}$}, the continuum optical depth at \hbox {far-infrared}\ wavelengths is expected to be high (see Section\,\ref{hto}), indicating that the {\hbox {H$_{2}$O}}\ emission comes from very dense and heavily obscured regions of molecular gas.
\begin{table*}[!htbp] \small \setlength{\tabcolsep}{0.88em} \caption{Ratio between \hbox {infrared}\ and {\hbox {H$_{2}$O}}\ luminosity, and the velocity-integrated flux density ratio between different {\hbox {H$_{2}$O}}\ transitions.} \centering \begin{tabular}{clcccccc} \hline \hline Source & {\hbox {H$_{2}$O}}\ Transition & {\hbox {$T_{\mathrm{d}}$}} & \lhtotlir211202 & \lhtotlir202111 & \lhtotlir321312 & \ihtotihto211202 & \ihtotihto202111 \\ & & (K) & ($\times 10^{-6}$) & ($\times 10^{-6}$) & ($\times 10^{-6}$) & & \\ \hline G09v1.97 & \t211202, \t321312 & $44\pm1$ & $ 4.8\pm1.4$ & -- & $ 6.8\pm2.0$ & $0.9\pm0.1$ & ($0.8\pm0.2$) \\ G12v2.43 & \t202111, \t321312 &($39\pm2$) & -- & $10.6\pm2.5$ & $15.3\pm3.3$ & -- & $1.2\pm0.2$ \\ NCv1.143 & \t211202, \t321312 & $40\pm1$ & $ 7.9\pm2.5$ & -- & $16.6\pm5.4$ & $1.4\pm0.1$ & ($1.1\pm0.4$) \\ NAv1.195 & \t202111, \t321312 & $36\pm1$ & -- & $ 9.1\pm2.9$ & $ <6.8$ & -- & $ <0.7$ \\ NAv1.177 & \t202111, \t321312 &($32\pm1$) & -- & $14.9\pm3.8$ & $23.5\pm5.1$ & -- & $1.3\pm0.2$ \\ NBv1.78 & \t202111, \t321312 & $43\pm1$ & -- & $11.4\pm4.7$ & $13.4\pm4.9$ & -- & $1.0\pm0.2$ \\ G09v1.124-W & \t211202 & $40\pm1$ & $<3.9$ & -- & -- & -- & -- \\ G09v1.124-T & \t211202 & $36\pm1$ & $<4.8$ & -- & -- & -- & -- \\ G09v1.40 & \t211202 & $36\pm1$ & $ 8.8\pm3.5$ & -- & -- & -- & -- \\ SDP11 & \t202111 & $41\pm1$ & -- & $10.2\pm3.8$ & -- & -- & -- \\ NCv1.268 & \t211202 & $39\pm1$ & $13.4\pm3.9$ & -- & -- & -- & -- \\ NAv1.56 & \t211202 & $38\pm1$ & $10.7\pm3.1$ & -- & -- & -- & \\ \hline SDP81 & \t202111 & $34\pm1$ & -- & $ 5.4$ & -- & -- & -- \\ NAv1.144 & \t211202 & $39\pm1$ & $ 9.7$ & -- & -- & -- & -- \\ SDP9 & \t211202 & $43\pm1$ & $13.5$ & -- & -- & -- & -- \\ G12v2.30 & \t202111 & $41\pm1$ & -- & $ 8.1$ & -- & -- & -- \\ SDP17b & \t202111 & $38\pm1$ & -- & $12.5$ & -- & -- & -- \\ G15v2.779 & \t211202 & $41\pm1$ & $7.7$ & -- & -- & -- & -- \\ \hline HFLS3 & \t202111, \t211202, \t321312 & $56^{+9}_{-12}$ & $20.3$ & $22.2$ & 
$57.3$ & $1.8\pm0.6$ & $2.2\pm0.5$ \\ APM\,08279 & \t202111, \t211202, \t321312 &$220\pm30$ & $ 2.2$ & $ 6.0$ & $6.4$ & $1.9\pm0.3$ & $0.9\pm0.1$ \\ HLSJ\,0918 & \t202111 & $38\pm3$ & $11.4$ & -- & -- & -- & -- \\ SPT\,0538 & \t202111 & $39\pm2$ & -- & $40.3$ & -- & -- & -- \\ \hline local strong-AGN & \t202111, \t211202, \t321312 & -- & $3.8$ & $6.4$ & $6.7$ & $1.1\pm0.4$ & $0.9\pm0.3$ \\ local \ion{H}{ii}+mild-AGN & \t202111, \t211202, \t321312 & -- & $5.8$ & $9.2$ & $10.8$ & $1.4\pm0.4$ & $1.1\pm0.3$ \\ \hline \end{tabular} \tablefoot{ The table lists the ratios between the luminosity of each {\hbox {H$_{2}$O}}\ line and the total \hbox {infrared}\ luminosity, as well as the velocity-integrated flux density ratios of different {\hbox {H$_{2}$O}}\ transitions. {\hbox {$T_{\mathrm{d}}$}}\ is the cold-dust temperature taken from \citetalias{2013ApJ...779...25B}, except for the values in brackets, which are not listed in \citetalias{2013ApJ...779...25B} and which we infer from single modified black-body dust SED fitting using the submm/mm photometry listed in Table\,\ref{table:previous_obs_properties}. All the errors quoted for {\hbox {$T_{\mathrm{d}}$}}\ are significantly underestimated, especially because they do not include the possible effects of differential lensing and rely on the assumption of a single dust temperature. Line ratios in brackets are derived from the average velocity-integrated flux density ratios between the \t211202 and \t202111 lines in local infrared galaxies. The local strong-AGN sources are the optically classified AGN-dominated galaxies and the local \ion{H}{ii}+mild-AGN sources are star-forming-dominated galaxies with a possible mild AGN contribution \citepalias{2013ApJ...771L..24Y}.
The first group of sources is from this work; the sources in the second group are the previously published sources of \citetalias{2013A&A...551A.115O}; the third group contains previously published \hbox {high-redshift}\ detections from other works: HFLS3 \citep{2013Natur.496..329R}, APM\,08279 \citep{2011ApJ...741L..38V}, HLSJ\,0918 \citep{2012A&A...538L...4C, 2014ApJ...783...59R} and SPT\,0538 \citep{2013ApJ...779...67B}; the last group shows the local averaged values from \citetalias{2013ApJ...771L..24Y}. } \label{table:Lir_Lh2o_ratios} \end{table*} \normalsize As in the local ULIRGs \citepalias{2013ApJ...771L..24Y}, we again find an anti-correlation between {\hbox {$T_{\mathrm{d}}$}}\ and \lhtot321312/\hbox {$L_{\mathrm{IR}}$}. The Spearman rank correlation coefficient for the five {\it H}-ATLAS sources with detected \htot321312 is $\rho = -0.9$, with a two-sided significance of its deviation from zero of $p=0.04$. However, after including the non-detection of \htot321312 in NAv1.195, the correlation becomes much weaker ($\rho \lesssim -0.5$ and $p \sim 0.32$). No significant correlation is found between {\hbox {$T_{\mathrm{d}}$}}\ and \lhtot202111/\hbox {$L_{\mathrm{IR}}$}\ ($\rho = -0.1$ and $p=0.87$) or \lhtot211202/\hbox {$L_{\mathrm{IR}}$}\ ($\rho = -0.3$ and $p=0.45$). As explained in \citetalias{2014A&A...567A..91G}, in optically thick and very warm galaxies, the ratio \lhtot321312/\hbox {$L_{\mathrm{IR}}$}\ is expected to decrease with increasing {\hbox {$T_{\mathrm{d}}$}}, and this anti-correlation cannot be explained under optically thin conditions. However, a larger sample is needed to increase the statistical significance of this anti-correlation.
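The quoted Spearman statistic for the five \htot321312 detections can be reproduced from the values of {\hbox {$T_{\mathrm{d}}$}}\ and \lhtotlir321312 tabulated above, for example with \texttt{scipy.stats.spearmanr} (our choice of tool; any rank-correlation implementation gives the same $\rho$):

```python
import numpy as np
from scipy.stats import spearmanr

# T_d (K) and L_H2O(321-312)/L_IR (units of 1e-6) for the five H-ATLAS
# sources with detected H2O(321-312), in the order
# G09v1.97, G12v2.43, NCv1.143, NAv1.177, NBv1.78 (values from the tables)
t_d = np.array([44.0, 39.0, 40.0, 32.0, 43.0])
ratio = np.array([6.8, 15.3, 16.6, 23.5, 13.4])

rho, p = spearmanr(t_d, ratio)
# rho = -0.9, p ~ 0.04: warmer dust, lower L_H2O(321-312)/L_IR
```

Note that with only five points the $p$-value from the default $t$-distribution approximation is indicative at best, which is why the text stresses the need for a larger sample.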
Although it is important to stress that the luminosity of {\hbox {H$_{2}$O}}\ results from a complex interplay of various physical parameters such as dust temperature, gas density, {\hbox {H$_{2}$O}}\ abundance and the {\hbox {H$_{2}$O}}\ gas distribution relative to the \hbox {infrared}\ radiation field, it is striking that the correlation between \hbox {$L_{\mathrm{H_2O}}$}\ and \hbox {$L_{\mathrm{IR}}$}\ stays linear from local young stellar objects (YSOs), in which the {\hbox {H$_{2}$O}}\ molecules are mainly excited by shocks and collisions, to local ULIRGs (\hbox {far-infrared}\ pumping dominated), extending over $\sim$12 orders of magnitude \citep{2016A&A...585A.103S}. This implies that {\hbox {H$_{2}$O}}\ indeed traces the SFR proportionally, similarly to the dense gas \citep{2004ApJ...606..271G} in the local infrared bright galaxies. However, for the \hbox {high-redshift}\ sources, the {\hbox {H$_{2}$O}}\ luminosities lie somewhat above the linear correlation, which could be explained by their high $\tau_{100}$ (or large velocity dispersion). As shown in Table\,\ref{table:Lir_Lh2o_ratios}, HFLS3, with $\tau_{100}>1$, has extremely large ratios of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}, exceeding the average of our {\it H}-ATLAS sources by factors of $\sim$\,2 for the $J=2$ lines and $\sim$\,4 for $J=3$ (see Fig.\,\ref{fig:h2o-ir}). The velocity dispersions of its {\hbox {H$_{2}$O}}\ lines are $\sim$\,900\,km\,s$^{-1}$ (with uncertainties from 18\% to 36\%), larger than in any of our sources. For optically thick systems, a larger velocity dispersion will increase the number of absorbed pumping photons, boosting the ratio of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ \citepalias{2014A&A...567A..91G}. For the AGN-dominated sources (i.e. APM\,08279, G09v1.124-W and Mrk\,231) shown in Fig.\,\ref{fig:h2o-ir}, most of the data points (except for the \htot321312 line of Mrk\,231) are well below the fitted correlation (see Section\,\ref{AGN}).
This is consistent with the average value of local strong-AGN-dominated sources. The $J \lesssim 3$ {\hbox {H$_{2}$O}}\ lines are \hbox {far-infrared}\ pumped by the 75 and 101\,$\mu$m photons, thus the very warm dust in strong-AGN-dominated sources is likely to contribute more to the \hbox {$L_{\mathrm{IR}}$}\ than to the $J \lesssim 3$ {\hbox {H$_{2}$O}}\ excitation (see also \citetalias{2013ApJ...771L..24Y}). \subsection{{\hbox {H$_{2}$O}}\ excitation}\label{hto} We have detected both $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines in five out of the six sources observed in the $J=3$ ortho-{\hbox {H$_{2}$O}}\ lines. By comparing the line ratios and their strength relative to \hbox {$L_{\mathrm{IR}}$}, we are able to constrain the physical conditions of the molecular content and also the properties of the \hbox {far-infrared}\ radiation field. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.561]{h2o_sled} \caption{ Velocity-integrated flux density distribution of {\hbox {H$_{2}$O}}\ normalised to \ihtot202111 adapted from \citetalias{2013ApJ...771L..24Y}. Local average values are shown by the black dashed line and markers. Among them, AGN-dominated sources are shown in red and star-forming-dominated galaxies are shown in blue. Some individual sources are also shown in this plot as indicated by the legend. Green diamonds are the \hbox {high-redshift}\ lensed Hy/ULIRGs from this work. HFLS3 is a $z=6.3$ \hbox {high-redshift}\ galaxy from \cite{2013Natur.496..329R}. } \label{fig:h2o-sled} \end{center} \end{figure} To compare the {\hbox {H$_{2}$O}}\ excitation with local galaxies, we plot the velocity-integrated flux density of ortho-\htot321312 normalised by that of para-\htot202111 in our sources on top of the local and \hbox {high-redshift}\ {\hbox {H$_{2}$O}}\ SLEDs (spectral line energy distributions) in Fig.\,\ref{fig:h2o-sled}. All six \hbox {high-redshift}\ sources are located within the range of the local galaxies, with a 1\,$\sigma$ dispersion of $\sim 0.2$.
Yet for the $z=6.34$ extreme starburst HFLS3, the value of this ratio is at least 1.7 times higher than the average value of local sources \citepalias{2013ApJ...771L..24Y} and those of our lensed \hbox {high-redshift}\ Hy/ULIRGs at $\gtrapprox3\,\sigma$ confidence level (Fig.\,\ref{fig:h2o-sled}). This probably traces different excitation conditions, namely the properties of the dust emission, as suggested in \citetalias{2014A&A...567A..91G}: the flux ratio of \htot321312 over \htot202111 is the most direct tracer of the hardness of the \hbox {far-infrared}\ radiation field which powers the submm {\hbox {H$_{2}$O}}\ excitation. However, the line ratios are still consistent with the strong saturation limit in the \hbox {far-infrared}\ pumping model with $T_\mathrm{warm} \gtrsim 65$\,K. The large scatter of the {\hbox {H$_{2}$O}}\ line ratio between \t321312 and \t202111 indicates different local {\hbox {H$_{2}$O}}\ excitation conditions. As \hbox {far-infrared}\ pumping dominates the {\hbox {H$_{2}$O}}\ excitation, the ratio therefore reflects the differences in the \hbox {far-infrared}\ radiation field, for example, the temperature of the warmer dust that excites the {\hbox {H$_{2}$O}}\ gas, and the submm continuum opacity. It is now clear that \hbox {far-infrared}\ pumping, rather than collisional excitation, is the prevailing excitation mechanism for these submm {\hbox {H$_{2}$O}}\ lines \citepalias{2014A&A...567A..91G} in \hbox {infrared}\ bright galaxies in both the local and \hbox {high-redshift}\ Universe. The main \hbox {far-infrared}\ pumping paths related to the lines observed here are the 75 and 101\,$\mu$m transitions displayed in Fig.\,\ref{fig:h2o-e-level}. Therefore, the different line ratios are highly sensitive to the difference between the monochromatic fluxes at 75 and 101\,$\mu$m. We may compare the line ratios with the global {\hbox {$T_{\mathrm{d}}$}}\ measured from \hbox {far-infrared}\ and submm bands \citepalias{2013ApJ...779...25B}.
It includes both cold and warm dust contributions to the dust SED in the rest frame, which is, however, dominated by the cold dust observed in the SPIRE bands. It is thus not surprising that we find no strong correlation between {\hbox {$T_{\mathrm{d}}$}}\ and \ihtot321312/\ihtot202111 ($r \sim -0.3$). The Rayleigh-Jeans tail of the dust SED is dominated by cooler dust which is associated with extended molecular gas and less connected to the submm {\hbox {H$_{2}$O}}\ excitation. As suggested in \citetalias{2014A&A...567A..91G}, it is indeed the warmer dust ($T_\mathrm{warm}$, as shown by the colour legend in Fig.\,\ref{fig:h2o-model}) dominating the Wien side of the dust SED that corresponds to the excitation of the submm {\hbox {H$_{2}$O}}\ lines. To further explore the physical properties of the {\hbox {H$_{2}$O}}\ gas content and the \hbox {far-infrared}\ dust radiation related to the submm {\hbox {H$_{2}$O}}\ excitation, we need to model how key parameters, such as the {\hbox {H$_{2}$O}}\ abundance and those determining the radiation properties, can be inferred from the observed {\hbox {H$_{2}$O}}\ lines. For this purpose, we use the \hbox {far-infrared}\ pumping {\hbox {H$_{2}$O}}\ excitation model described in \citetalias{2014A&A...567A..91G} to fit the observed \hbox {$L_{\mathrm{H_2O}}$}\ together with the corresponding \hbox {$L_{\mathrm{IR}}$}, and derive the range of continuum optical depth at 100\,$\mu$m ($\tau_{100}$), warm dust temperature ($T_\mathrm{warm}$), and {\hbox {H$_{2}$O}}\ column density per unit of velocity interval (\hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$) in the five sources with both $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ emission detections. Given the limited number of inputs to the model, namely the \hbox {$L_{\mathrm{H_2O}}$}\ of the two {\hbox {H$_{2}$O}}\ lines and \hbox {$L_{\mathrm{IR}}$}, we are only able to perform the modelling in the pure \hbox {far-infrared}\ pumping regime.
Nevertheless, our observed line ratio between the $J=3$ and $J=2$ {\hbox {H$_{2}$O}}\ lines suggests that \hbox {far-infrared}\ pumping is the dominant excitation mechanism and the contribution from collisional excitation is minor \citepalias{2014A&A...567A..91G}. The $\pm 1\,\sigma$ contours from the $\chi^2$ fitting are shown in Fig.\,\ref{fig:h2o-model} for each warm dust temperature component ($T_\mathrm{warm} = 35\text{--}115$\,K) per source. It is clear that with two {\hbox {H$_{2}$O}}\ lines (one $J=2$ para-{\hbox {H$_{2}$O}}\ line and ortho-\htot321312), we cannot tightly constrain $\tau_{100}$ and \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$. As shown in the figure, for $T_\mathrm{warm} \lesssim 75$\,K, both very low and very high $\tau_{100}$ could fit the observational data together with high \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$, while dust with $T_\mathrm{warm} \gtrsim 95$\,K favours high $\tau_{100}$. In the low continuum optical depth part of Fig.\,\ref{fig:h2o-model}, as $\tau_{100}$ decreases, the model needs to increase the value of \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$ to generate sufficient \hbox {$L_{\mathrm{H_2O}}$}\ to fit the observed \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}. This has been observed in some local sources with low $\tau_{100}$, such as NGC\,1068 and NGC\,6240, which show no absorption features in the \hbox {far-infrared}\ while submm {\hbox {H$_{2}$O}}\ emission has been detected \citepalias{2014A&A...567A..91G}. The important feature of such sources is the lack of $J\geq4$ {\hbox {H$_{2}$O}}\ emission lines. Thus, observations of higher-excitation {\hbox {H$_{2}$O}}\ lines will discriminate between the low and high $\tau_{100}$ regimes.
\begin{figure*}[!htbp] \begin{center} \includegraphics[scale=0.82]{model} \caption{ Parameter space distribution of the H$_2$O \hbox {far-infrared}\ pumping excitation modelling with observed para-{\hbox {H$_{2}$O}}\ \t202111 or \t211202 and ortho-\htot321312 in each panel. $\pm 1\,\sigma$ contours are shown for each plot. Different colours with different line styles represent different temperature components of the warm dust as shown in the legend. The explored warm dust temperature range is from 35\,K to 115\,K. The temperature contours that are unable to fit the data are not shown in this figure. From the figure, we are able to constrain the $\tau_{100}$, $T_\mathrm{warm}$ and \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$ for the five sources. However, there are strong degeneracies. Thus, we need additional information, such as the velocity-integrated flux densities of $J\geq4$ {\hbox {H$_{2}$O}}\ lines, to better constrain the physical parameters. } \label{fig:h2o-model} \end{center} \end{figure*} Among these five sources, the favoured key parameters differ somewhat, illustrating the range of properties we can expect for such sources. Compared with the other four Hy/ULIRGs, G09v1.97 is likely to have the lowest $T_\mathrm{warm}$, as only dust with $T_\mathrm{warm} \sim 45\text{--}55$\,K fits the data well. NCv1.143 and NAv1.177 have slightly different diagnostics, yielding higher dust temperatures of $T_\mathrm{warm} \sim 45\text{--}75$\,K, while NBv1.78 and G12v2.43 tend to have the highest temperature range, $T_\mathrm{warm} \sim 45\text{--}95$\,K. The values of $T_\mathrm{warm}$ are consistent with the fact that {\hbox {H$_{2}$O}}\ traces warm gas. We did not find any significant differences between the ranges of \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$ derived from the modelling for these five sources, although G09v1.97 tends to have lower \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$ (Table\,\ref{table:model_para}).
As shown in Section\,\ref{AGN}, there is no evidence of AGN domination in any of our sources; the submm {\hbox {H$_{2}$O}}\ lines thus likely trace the warm dust component connected to the heavily obscured active star-forming activity. However, due to the lack of photometry data on the Wien side of the dust SEDs, we are not able to compare the values of $T_\mathrm{warm}$ derived from the modelling with ones measured directly from the dust SEDs. By adopting the 100\,$\mu$m dust mass absorption coefficient from \cite{2003ARA&A..41..241D} of $\kappa_{100}$\,=\,27.1\,cm$^2$\,g$^{-1}$, we can derive the dust opacity by \begin{equation} \label{eq:tau} \tau_{100}=\kappa_{100} \, \sigma_\mathrm{dust}= \kappa_{100} \left( M_\mathrm{dust} \over A \right) = \kappa_{100} \left( M_\mathrm{dust} \over 2 \pi r_\mathrm{half}^2 \right) \end{equation} where $\sigma_\mathrm{dust}$ is the dust mass column density, $M_\mathrm{dust}$ is the dust mass, $A$ is the projected surface area of the dust continuum source and $r_\mathrm{half}$ is the half-light radius of the source at submm wavelengths. As shown in Table\,\ref{table:previous_obs_properties}, among the five sources in Fig.\,\ref{fig:h2o-model}, the values of $M_\mathrm{dust}$ and $r_\mathrm{half}$ in G09v1.97, NCv1.143 and NBv1.78 have been derived via gravitational lensing \citepalias{2013ApJ...779...25B}. Consequently, the approximate dust optical depth at 100\,$\mu$m in these three sources is $\tau_{100} \approx 1.8$, 7.2 and 2.5, respectively. One should note that the large uncertainties in both $\kappa_{100}$ and $r_\mathrm{half}$ of these \hbox {high-redshift}\ galaxies can introduce errors of a factor of a few.
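As a sanity check, the conversion in Eq.\,(\ref{eq:tau}) from dust mass and half-light radius to $\tau_{100}$ can be scripted directly; the $M_\mathrm{dust}$ and $r_\mathrm{half}$ values in the example call below are hypothetical round numbers of the right order of magnitude, not the lensing-derived measurements.

```python
# Evaluate Eq. (tau): tau_100 = kappa_100 * M_dust / (2 * pi * r_half^2).
# kappa_100 is the value quoted in the text; the M_dust and r_half inputs
# in the example are hypothetical round numbers, not the measured values.
import math

KAPPA_100 = 27.1    # cm^2 g^-1 (Draine 2003, as adopted in the text)
M_SUN = 1.989e33    # g
KPC = 3.086e21      # cm

def tau_100(m_dust_msun, r_half_kpc):
    """Dust optical depth at 100 micron for a given dust mass and radius."""
    m_dust = m_dust_msun * M_SUN                      # g
    area = 2.0 * math.pi * (r_half_kpc * KPC) ** 2    # cm^2
    return KAPPA_100 * m_dust / area

# e.g. 2e9 Msun of dust within a 1 kpc half-light radius:
print(round(tau_100(2e9, 1.0), 2))  # → 1.8
```

This makes explicit how sensitively $\tau_{100}$ depends on $r_\mathrm{half}$ (quadratically), consistent with the factor-of-a-few error budget noted above.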
Nevertheless, by adopting a gas-to-dust mass ratio of $X = 100$ \citep[e.g.][]{2011ApJ...740L..15M}, we can derive the gas depletion time as \begin{equation} \label{eq:t_dep} t_\mathrm{dep} = {M_\mathrm{gas} \over \mathit{SFR}} = {X \tau_{100} \over \Sigma_\mathit{SFR} \kappa_{100}} \approx 1.8 \times 10^{4} \left( \tau_{100} \over {\Sigma_\mathit{SFR} \over \mathrm{M_\odot\,yr^{-1}\,kpc^{-2}}} \right) \, \mathrm{Myr} \end{equation} where $M_\mathrm{gas}$ is the total molecular gas mass and $\Sigma_\mathit{SFR}$ is the surface $\mathit{SFR}$ density derived from \hbox {$L_{\mathrm{IR}}$}\ using the \citet{1998ARA&A..36..189K} calibration assuming a Salpeter IMF \citepalias[][and Table\,\ref{table:previous_obs_properties}]{2013ApJ...779...25B}. The implied depletion time scale is $t_\mathrm{dep} \approx 35\text{--}60$\,Myr, with errors within a factor of two, in which the dominant uncertainties are from the assumed gas-to-dust mass ratio and the half-light radius. This $t_\mathrm{dep}$ is consistent with the values derived from dense gas tracers, like HCN in local (U)LIRGs \citep[e.g.][]{2004ApJ...606..271G, 2012A&A...539A...8G}. As suggested in \citetalias{2014A&A...567A..91G}, the {\hbox {H$_{2}$O}}\ and HCN emission likely arise from the same regions, indicating that {\hbox {H$_{2}$O}}\ traces the dense gas as well. Thus, the $\tau_{100}$ derived above likely also traces the \hbox {far-infrared}\ radiation source that powers the submm {\hbox {H$_{2}$O}}\ emission. \citetalias{2013ApJ...779...25B} also found that these {\it H}-ATLAS \hbox {high-redshift}\ Hy/ULIRGs are expected to be optically thick in the \hbox {far-infrared}. By adding the constraint on $\tau_{100}$ derived above, we can better determine the physical conditions in the sources, as shown in Table\,\ref{table:model_para}.
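The numerical prefactor of $1.8 \times 10^{4}$\,Myr in Eq.\,(\ref{eq:t_dep}) follows from unit conversion alone; a short check, assuming only the $X$ and $\kappa_{100}$ values quoted in the text:

```python
# Verify the 1.8e4 Myr prefactor in Eq. (t_dep):
# t_dep = X * tau_100 / (Sigma_SFR * kappa_100).
M_SUN = 1.989e33    # g
KPC = 3.086e21      # cm
KAPPA_100 = 27.1    # cm^2 g^-1
X = 100.0           # gas-to-dust mass ratio

# Gas surface density corresponding to tau_100 = 1, converted from
# g cm^-2 to Msun kpc^-2:
sigma_gas = (X / KAPPA_100) * KPC**2 / M_SUN

# Depletion time in Myr for Sigma_SFR = 1 Msun yr^-1 kpc^-2:
prefactor_myr = sigma_gas / 1e6
print(f"{prefactor_myr:.2e}")  # → 1.77e+04, i.e. ~1.8e4 Myr
```

The quoted $t_\mathrm{dep} \approx 35\text{--}60$\,Myr then follows from inserting the derived $\tau_{100}$ and $\Sigma_\mathit{SFR}$ values into this relation.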
\begin{table}[htbp] \setlength{\tabcolsep}{0.64em} \small \caption{Parameters derived from \hbox {far-infrared}\ pumping model of {\hbox {H$_{2}$O}}.} \centering \begin{tabular}{lrrrr} \hline \hline Source & $\tau_{100}$ & $T_\mathrm{warm}$ & \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$\;\;\; & \hbox {$N_{\mathrm{H_2O}}$}\;\;\;\;\; \\ & & (K)\;\, & (cm$^{-2}$\,km$^{-1}$\,s) & (cm$^{-2}$)\;\;\;\; \\ \hline G09v1.97 & 1.8 & 45--55 & (0.3--0.6)\,$\times 10^{15}$ & (0.3--1.1)\,$\times 10^{17}$ \\ G12v2.43 & -- & 45--95 & $\gtrsim$\,0.7\,$\times 10^{15}$ & $\gtrsim$\,0.7\,$\times 10^{17}$ \\ NCv1.143 & 7.2 & 45--55 & (2.0--20)\,$\times 10^{15}$ & (2.0--60)\,$\times 10^{17}$ \\ NAv1.177 & -- & 45--75 & $\gtrsim$\,1.0\,$\times 10^{15}$ & $\gtrsim$\,1.0\,$\times 10^{17}$ \\ NBv1.78 & 2.5 & 45--75 & $\gtrsim$\,0.6\,$\times 10^{15}$ & $\gtrsim$\,0.6\,$\times 10^{17}$ \\ \hline \end{tabular} \tablefoot{ $\tau_{100}$ is derived from Eq.\,\ref{eq:tau} with errors of a few units (see text), while $T_\mathrm{warm}$ and \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$ are inferred from the {\hbox {H$_{2}$O}}\ excitation model. \hbox {$N_{\mathrm{H_2O}}$}\ is calculated by taking a typical $\Delta V$ value range of $100\text{--}300$\,km\,s$^{-1}$ as suggested by \citetalias{2014A&A...567A..91G}. } \label{table:model_para} \end{table} \normalsize From their modelling of local infrared galaxies, \citetalias{2014A&A...567A..91G} find a range of $T_\mathrm{warm} =45\text{--}75$\,K, $\tau_{100}=0.05\text{--}0.2$ and \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V=(0.5\text{--}2)\times10^{15}$\,\,cm$^{-2}$\,km$^{-1}$\,s. The modelling results for our \hbox {high-redshift}\ sources are consistent with those in local galaxies in terms of $T_\mathrm{warm}$ and \hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$. However, the values of $\tau_{100}$ we found at \hbox {high-redshift}\ are higher than those of the local \hbox {infrared}\ galaxies. 
This is consistent with the higher ratio between \hbox {$L_{\mathrm{H_2O}}$}\ and \hbox {$L_{\mathrm{IR}}$}\ at \hbox {high-redshift}\ \citepalias{2013ApJ...771L..24Y}, which could be explained by higher $\tau_{100}$ \citepalias{2014A&A...567A..91G}. However, as demonstrated by the extreme case of HFLS3, a very large velocity dispersion will also increase the value of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ within the sources with $\tau_{100} > 1$. Thus, the higher ratio can also be explained by a larger velocity dispersion (not including systemic rotation) in the \hbox {high-redshift}\ Hy/ULIRGs. Compared with local ULIRGs, our {\it H}-ATLAS sources are much more powerful in terms of their \hbox {$L_{\mathrm{IR}}$}. The dense warm gas regions that {\hbox {H$_{2}$O}}\ traces are highly obscured, with much more powerful \hbox {far-infrared}\ radiation fields, possibly close to the limit of maximum starbursts. Given the values of dust temperature and dust opacity, the radiation pressure $P_\mathrm{rad} \sim \tau_{100} \sigma T_\mathrm{d}^4 / c$ ($\sigma$ is the Stefan-Boltzmann constant and $c$ the speed of light) of our sources is about $0.8 \times 10^{-7}$\,erg\,cm$^{-3}$. If we assume an H$_2$ density $n_\mathrm{H_2}$ of $\sim$\,10$^6$\,cm$^{-3}$ and take $T_\mathrm{k} \sim$\,150\,K as suggested in \citetalias{2014A&A...567A..91G}, the thermal pressure is $P_\mathrm{th} \sim n_\mathrm{H_2} k_\mathrm{B} T_\mathrm{k} \sim 2 \times 10^{-8}$\,erg\,cm$^{-3}$ ($k_\mathrm{B}$ is the Boltzmann constant and $T_\mathrm{k}$ is the gas temperature). Assuming a turbulent velocity dispersion of $\sigma_\mathrm{v} \sim 20\text{--}50$\,km\,s$^{-1}$ \citep{2015A&A...575A..56B} and taking a molecular gas mass density $\rho \sim 2 \mu n_\mathrm{H_2}$ ($2\mu$ being the average molecular mass) would yield a turbulent pressure $P_\mathrm{turb} \sim \rho \sigma_\mathrm{v}^2/3 \sim 4 \times 10^{-6}$\,erg\,cm$^{-3}$.
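The three order-of-magnitude pressure estimates above can be checked numerically; the sketch below uses representative input values from the text ($\tau_{100}$ and $T_\mathrm{d}$ are illustrative mid-range choices, not source-specific measurements):

```python
# Order-of-magnitude check of the three pressures discussed in the text.
# tau_100 and t_dust are illustrative mid-range values, not measurements.
SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
C = 2.998e10          # speed of light, cm s^-1
K_B = 1.381e-16       # Boltzmann constant, erg K^-1
M_H = 1.661e-24       # hydrogen atom mass, g

# Radiation pressure: P_rad ~ tau_100 * sigma * T_d^4 / c
tau_100, t_dust = 4.0, 50.0
p_rad = tau_100 * SIGMA_SB * t_dust**4 / C     # ~5e-8 erg cm^-3

# Thermal pressure: P_th ~ n_H2 * k_B * T_k
n_h2, t_kin = 1e6, 150.0
p_th = n_h2 * K_B * t_kin                      # ~2e-8 erg cm^-3

# Turbulent pressure: P_turb ~ rho * sigma_v^2 / 3, with rho ~ 2*m_H*n_H2
sigma_v = 2e6                                  # 20 km/s in cm s^-1
p_turb = 2 * M_H * n_h2 * sigma_v**2 / 3       # ~4e-6 erg cm^-3
print(f"{p_rad:.1e} {p_th:.1e} {p_turb:.1e}")
```

With these inputs the hierarchy $P_\mathrm{turb} > P_\mathrm{rad} > P_\mathrm{th}$ quoted in the text is recovered to within the stated uncertainties.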
$P_\mathrm{turb}$ is thus about an order of magnitude larger than $P_\mathrm{rad}$ and two orders of magnitude larger than $P_\mathrm{th}$, but we should note that all these values are very uncertain, especially $P_\mathrm{turb}$, which could be uncertain by up to a factor of a few tens. Therefore, keeping in mind their large uncertainties, turbulence and/or radiation are likely to play an important role in limiting the star formation. \subsection{Comparison between {\hbox {H$_{2}$O}}\ and CO} \label{CO lines} The velocity-integrated flux density ratio between submm {\hbox {H$_{2}$O}}\ and submm CO lines with comparable frequencies is 0.02--0.03 in local PDRs such as Orion and M\,82 \citep{2010A&A...521L...1W}. But this ratio in local ULIRGs \citepalias{2013ApJ...771L..24Y} and in {\it H}-ATLAS \hbox {high-redshift}\ Hy/ULIRGs is much higher, from 0.4 to 1.1 (Table\,\ref{table:co_properties} and \ref{table:h2o_properties}). The former case is dominated by typical PDRs, where CO lines are much stronger than {\hbox {H$_{2}$O}}\ lines, while the latter sources clearly show a different excitation regime, in which {\hbox {H$_{2}$O}}\ traces the central core of warm, dense and dusty molecular gas, a few hundred parsecs in diameter \citep{2010A&A...518L..43G} in local ULIRGs and highly obscured even in the \hbox {far-infrared}. Generally, the submm {\hbox {H$_{2}$O}}\ lines are dominated by \hbox {far-infrared}\ pumping and trace the strong \hbox {far-infrared}\ dust continuum emission, which is different from the regime of molecular gas traced by collisionally excited CO lines. In the active star-forming nuclei of \hbox {infrared}-bright galaxies, the \hbox {far-infrared}-pumped {\hbox {H$_{2}$O}}\ is expected to trace directly the \hbox {far-infrared}\ radiation generated by the intense star formation, which can be well correlated with the high-$J$ CO lines \citep{2015ApJ...810L..14L}.
Thus there is likely to be a correlation between the submm {\hbox {H$_{2}$O}}\ and CO emission. From our previous observations, the {\hbox {H$_{2}$O}}\ and CO line profiles from the same source are mostly quite similar in our \hbox {high-redshift}\ lensed Hy/ULIRGs sample (Fig.\,2 of \citetalias{2013A&A...551A.115O}). In the present work, we again find similar profiles between {\hbox {H$_{2}$O}}\ and CO in terms of their FWHM with an extended sample (Table\,\ref{table:co_properties} and \ref{table:h2o_properties}). In both cases the FWHMs of {\hbox {H$_{2}$O}}\ and CO are generally equal within typical 1.5\,$\sigma$ errors (see the discussion of each source in Appendix\,\ref{Individual sources}). As the gravitational lensing magnification factor is sensitive to spatial alignment, the similar line profiles could thus suggest similar spatial distributions of the two gas tracers. However, there are a few exceptions, such as SDP\,81 \citep{2015ApJ...808L...4A} and HLSJ0918 \citep{2014ApJ...783...59R}. In both cases, the {\hbox {H$_{2}$O}}\ lines lack the blue velocity component found in the CO line profiles. Quite differently from the other sources, in SDP\,81 and HLSJ0918, the CO line profiles are complicated, with multiple velocity components. Moreover, the velocity-integrated flux density ratios between these CO components may vary with the excitation level (different $J_\mathrm{up}$). Thus, it is important to analyse the relation between the different CO excitation components (from low-$J$ to high-$J$) and {\hbox {H$_{2}$O}}. Also, high-resolution observations are needed to resolve the multiple spatial gas components and compare the CO emission with the {\hbox {H$_{2}$O}}\ and dust continuum emission within each component. \subsection{AGN content} \label{AGN} It is still not clear how a strong AGN affects the excitation of submm {\hbox {H$_{2}$O}}\ in both local ULIRGs and \hbox {high-redshift}\ Hy/ULIRGs.
Nevertheless, there are some individual studies addressing this question. For example, in APM\,08279, \cite{2011ApJ...741L..38V} found that the AGN is the main power source exciting the high-$J$ {\hbox {H$_{2}$O}}\ lines and also enriching the gas-phase {\hbox {H$_{2}$O}}\ abundance. A similar conclusion was drawn by \cite{2010A&A...518L..43G}: in Mrk\,231 the AGN accounts for at least 50\,\% of the \hbox {far-infrared}\ radiation that excites {\hbox {H$_{2}$O}}. From the systematic study of local sources \citepalias{2013ApJ...771L..24Y}, slightly lower values of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ are found in strong-AGN-dominated sources. In the present work, the decreasing ratio of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ with AGN is clearly shown in Fig.\,\ref{fig:h2o-ir}, where Mrk\,231, G09v1.124-W and APM\,08279 are below the correlation by factors between 2 and 5 with less than 30\% uncertainties (except the \htot321312 line of Mrk\,231). In the \hbox {far-infrared}\ pumping regime, the buried AGN provides a strong \hbox {far-infrared}\ radiation source that pumps the {\hbox {H$_{2}$O}}\ lines. However, the very warm dust powered by the AGN will increase the value of \hbox {$L_{\mathrm{IR}}$}\ faster than the number of $\geq$\,75\,$\mu$m photons that dominate the excitation of the $J \leq 3$ {\hbox {H$_{2}$O}}\ lines \citep[e.g.][]{2015ApJ...814....9K}. If we assume that the strength of the {\hbox {H$_{2}$O}}\ emission is proportional to the number of pumping photons, then in strong-AGN-dominated sources the ratio of \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ will decrease, since much warmer dust is present. Moreover, strong radiation from the AGN could dissociate the {\hbox {H$_{2}$O}}\ molecules.
To evaluate the AGN contribution to the {\it H}-ATLAS sources, we extracted the 1.4\,GHz radio flux from the FIRST radio survey \citep{1995ApJ...450..559B} listed in Table\,\ref{table:previous_obs_properties}. By comparing the \hbox {far-infrared}\ and radio emission using the $q$ parameter \citep{1992ARA&A..30..575C}, $q \equiv \log ( {L_\mathrm{FIR} / 3.75 \times 10^{12}\,\mathrm{Hz}} ) - \log ( {{L_\mathrm{1.4\,GHz}} / {1\,\mathrm{W}\,\mathrm{Hz}^{-1}}} )$, we derive values of $q$ from 1.9 to 2.5 in our sources. These values are consistent with the value $2.3\pm0.1$ found by \citet{2001ApJ...554..803Y} for galaxies without strong radio AGN, suggesting that there is no significant radio excess attributable to an AGN. This is also confirmed by the Wide-field Infrared Survey Explorer \citep[WISE,][]{2010AJ....140.1868W}, which does not detect our sources at 12\,$\mu$m and 22\,$\mu$m. However, rest-frame optical spectral observations show that G09v1.124-W is rather a powerful AGN (Oteo et al., in prep.), which is the only identified AGN-dominated source in our sample. \section{Detection of {\hbox {H$_{2}$O}}$^+$ emission lines} \label{htop} \begin{figure*}[htbp] \begin{center} \includegraphics[scale=0.86]{h2op} \caption{ {\em Left panel}: from top to bottom are the full NOEMA spectra at $\nu_\mathrm{rest} \sim 750$\,GHz of NCv1.143, G09v1.97 and G15v2.779, respectively. The reference frequency is the redshifted frequency of the line \htot211202. The frequencies of the main {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$ and {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(5/2-3/2)}$ lines are indicated by grey vertical dashed lines. The three dashed squares in the spectrum of NCv1.143 show the position of each zoom-in spectrum of the {\hbox {H$_2$O$^+$}}\ (or the H$_2^{18}$O) lines as displayed in the {\it right panel}, indicated by A, B or C. The superposed blue dashed histograms represent the spectra of \htot211202 centred at the frequencies of the {\hbox {H$_2$O$^+$}}\ lines.
Note that, in many cases, the observed frequency ranges (yellow histograms) do not include the full expected profiles for the {\hbox {H$_2$O$^+$}}\ lines. The red curve represents the Gaussian fitting to the spectra. We have detected both {\hbox {H$_2$O$^+$}}\ lines in NCv1.143, and tentatively detected {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(5/2-3/2)}$ in G09v1.97 and {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$ in G15v2.779. {\em Right panel}: from top to bottom are the spectra dominated by lines of {\hbox {H$_2$O$^+$}}(\t211202)\,${_{(5/2-5/2)}}$, {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ and H$_2^{18}$O(\t211202), respectively, displayed as the filled yellow histograms. The reference frequency is the frequency of each of these lines. Weaker {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ and {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-3/2)}$ components are indicated by additional grey vertical dashed lines. The superposed blue dashed histograms represent the spectra of para-\htot211202 in NCv1.143 centred at each line frequency. The red curve represents the Gaussian fitting to the spectra, and the green dashed curves are the decomposed Gaussian profiles for each fine structure line. The violet error bar indicates the $\pm$\,1\,$\sigma$ uncertainties of the spectrum. } \label{fig:h2o-ions1} \end{center} \end{figure*} H$_2$O can be formed through both solid-state and gas-phase chemical reactions \citep{2013ChRv..113.9043V}. On dust-grain mantles, surface chemistry dominates the formation of {\hbox {H$_{2}$O}}\ molecules. Then they can be released into the interstellar medium (ISM) gas through sublimation. In the gas phase, {\hbox {H$_{2}$O}}\ can be produced through two routes: the neutral-neutral reaction, usually related to shocks, creates {\hbox {H$_{2}$O}}\ via \ce{O + H2 -> OH + H}; \ce{OH + H2 -> H2O + H} at high temperature ($\gtrsim$\,300\,K). 
At lower temperatures ($\lesssim$\,100\,K), the ion-neutral reactions in photon-dominated regions (PDRs), cosmic-ray-dominated regions and X-ray-dominated regions \citep[e.g.][]{2005A&A...436..397M} generate {\hbox {H$_{2}$O}}\ from O, H$^+$, H$_3^+$ and H$_2$, with intermediates such as O$^+$, OH$^+$, {\hbox {H$_2$O$^+$}}\ and H$_3$O$^+$, and finally \ce {H3O^+ + e -> H2O + H}. However, classical PDRs are unlikely to be linked to this highly excited submm {\hbox {H$_{2}$O}}\ emission \citepalias{2013ApJ...771L..24Y}. Therefore, {\hbox {H$_2$O$^+$}}\ lines are important for distinguishing between a shock or ion-chemistry origin for {\hbox {H$_{2}$O}}\ in the early Universe, indicating the type of physical regions in these galaxies: shock-dominated regions, cosmic-ray-dominated regions or X-ray-dominated regions. Indeed, they can be among the most direct tracers of the cosmic-ray and/or X-ray ionization rate \citep[e.g.][]{2010A&A...518L.110G, 2010A&A...521L..10N, 2013A&A...550A..25G} of the ISM, which regulates the chemistry and influences many key parameters, for example, the $X$-factor \citep{2007MNRAS.378..983B} that connects the CO luminosity to the H$_2$ mass. Moreover, significant detections of {\hbox {H$_2$O$^+$}}\ emission in \hbox {high-redshift}\ Hy/ULIRGs could help us understand {\hbox {H$_{2}$O}}\ formation in the early Universe.
When observing our sources with redshift $z \gtrsim 3.3$, it is possible to cover all the following lines with the NOEMA WideX bandwidth: para-\htot211202 at 752\,GHz and four ortho-{\hbox {H$_2$O$^+$}}\ lines (two intertwined fine structure doublets of two different lines whose frequencies almost coincide by chance): \t202111\,$_{(5/2-3/2)}$ at 742.1\,GHz, \t211202\,$_{(5/2-3/2)}$ at 742.3\,GHz, \t202111\,$_{(3/2-3/2)}$ at 746.3\,GHz and \t211202\,$_{(5/2-5/2)}$ at 746.5\,GHz, in the 3.6\,GHz band simultaneously (the rest-frame frequencies are taken from the CDMS catalogue: \url{http://www.astro.uni-koeln.de/cdms}; see the energy level diagram of {\hbox {H$_2$O$^+$}}\ in Fig.\,\ref{fig:h2o-e-level} and the full spectra in Fig.\,\ref{fig:h2o-ions1}). Additionally, within this range, we can also cover the H$_2^{18}$O(\t211202) line at 745.3\,GHz. Three sources of our sample have been observed in such a frequency setup: NCv1.143, NCv1.268 and G09v1.97. We have also included the source G15v2.779 from our previous observations \citepalias{2013A&A...551A.115O}, in which we covered both \htot211202 at 752\,GHz and the {\hbox {H$_2$O$^+$}}\ lines around 746\,GHz. We have detected both main lines of {\hbox {H$_2$O$^+$}}\ in NCv1.143, and tentatively detected one line each in G09v1.97 and G15v2.779 (Fig.\,\ref{fig:h2o-ions1}). For NCv1.268, due to the large noise level and the complex line profile, we were unable to identify any {\hbox {H$_2$O$^+$}}\ line detection.
\begin{table}[htbp] \setlength{\tabcolsep}{0.65em} \small \caption{Observed ortho-{\hbox {H$_2$O$^+$}}\ fine structure line parameters of the \hbox {high-redshift}\ {\it H}-ATLAS lensed HyLIRGs.} \centering \begin{tabular}{lcrrr} \hline \hline Source & {\hbox {H$_2$O$^+$}}\ transition & $\nu_{\rm rest}$ & $\nu_{\rm line}$ & $I_{\rm H_{2}O^+}$ \\ & & (GHz) & (GHz) & (Jy\,km\,s$^{-1}$) \\ \hline NCv1.143 & \t211202\,$_{(5/2-5/2)}$ & 746.5 & 163.53 & $1.6\pm0.5$ \\ & \t202111\,$_{(3/2-3/2)}$ & 746.3 & 163.48 & $0.2\pm0.5$ \\ & \t211202\,$_{(5/2-3/2)}$ & 742.3 & 162.61 & $0.3\pm0.4$ \\ & \t202111\,$_{(5/2-3/2)}$ & 742.1 & 162.56 & $1.6\pm0.4$ \\ G09v1.97 & \t202111\,$_{(5/2-3/2)}$ & 742.1 & 160.14 & $1.4\pm0.4$ \\ G15v2.779 & \t211202\,$_{(5/2-5/2)}$ & 746.5 & 142.35 & $1.2\pm0.3$ \\ \hline \end{tabular} \tablefoot{ The {\hbox {H$_2$O$^+$}}\,(\t202111)\,$_{(5/2-3/2)}$ line in G09v1.97 is blended with (\t211202)\,$_{(5/2-3/2)}$, and the {\hbox {H$_2$O$^+$}}\,(\t211202)\,$_{(5/2-5/2)}$ line in G15v2.779 is blended with (\t202111)\,$_{(3/2-3/2)}$. However, the contribution from the latter in each case is small, likely less than 20\,\%, as shown in the case of the {\hbox {H$_2$O$^+$}}\ lines in NCv1.143. Note that the quoted uncertainties do not account for the parts of the spectra missing due to the limited observed bandwidth (Fig.\,\ref{fig:h2o-ions1}). } \label{table:htop_intensity} \end{table} \normalsize As shown in Fig.\,\ref{fig:h2o-ions1}, in NCv1.143, the dominant {\hbox {H$_2$O$^+$}}\ fine structure lines \t211202\,$_{(5/2-5/2)}$ at 746.5\,GHz and \t202111\,$_{(5/2-3/2)}$ at 742.1\,GHz are well detected. The velocity-integrated flux densities of the two lines from a two-Gaussian fit are $1.9\pm0.3$ and $1.6\pm0.2$\,Jy\,km\,s$^{-1}$, respectively.
These are the approximate velocity-integrated flux densities of the dominant {\hbox {H$_2$O$^+$}}\ lines \t211202\,$_{(5/2-5/2)}$ and \t202111\,$_{(5/2-3/2)}$ if one neglects the minor contributions from the {\hbox {H$_2$O$^+$}}\ lines \t202111\,$_{(3/2-3/2)}$ at 746.3\,GHz and \t211202\,$_{(5/2-3/2)}$ at 742.3\,GHz. However, the {\hbox {H$_2$O$^+$}}\ line profile at 746.5\,GHz is slightly wider than the {\hbox {H$_{2}$O}}\ line (Fig.\,\ref{fig:h2o-ions1}), probably due to a contribution from the fairly weak fine structure line {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ at 746.3\,GHz. The ratio of the total velocity-integrated flux density of the {\hbox {H$_2$O$^+$}}\ lines to that of \htot211202 is $0.60\pm0.07$ (roughly 0.3 for each dominant {\hbox {H$_2$O$^+$}}\ line), consistent with the average value for the local \hbox {infrared}\ galaxies \citepalias{2013ApJ...771L..24Y}\footnote{As suggested by \cite{2013A&A...550A..25G}, due to the very limited spectral resolution of {\it Herschel}/SPIRE FTS, the ortho-{\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ line at 746.5\,GHz quoted in \citetalias{2013ApJ...771L..24Y} is actually dominated by ortho-{\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$, considering their likely excitation and relative strength.}. In order to derive the velocity-integrated flux density of each fine structure doublet around 742 and 746\,GHz, we have also performed a four-Gaussian fit with fixed line positions (equal to $\nu_\mathrm{rest}/(1+z)$) and linewidth (equal to that of \htot211202). We find that the velocity-integrated flux densities of the two fine structure lines of {\hbox {H$_2$O$^+$}}(\t211202) are $1.6\pm0.5$ and $0.3\pm0.4$\,Jy\,km\,s$^{-1}$, while they are $1.6\pm0.4$ and $0.2\pm0.5$\,Jy\,km\,s$^{-1}$ for the two fine structure lines of {\hbox {H$_2$O$^+$}}(\t202111) (Table\,\ref{table:htop_intensity}). We should note that these fitting results have much larger uncertainties due to the blending.
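The de-blending fit just described can be sketched as follows. With the line centres fixed to $\nu_\mathrm{rest}/(1+z)$ and a common fixed width, only the amplitudes remain free, so the problem is linear least squares. This is an illustrative sketch on synthetic data, not the actual NCv1.143 spectrum; the width value is an assumption for illustration.

```python
import numpy as np

# Sketch: de-blend two heavily overlapping Gaussian components by fixing
# their centres and a shared linewidth, fitting only the amplitudes
# (linear least squares), mimicking the fixed-position fit in the text.
CENTRES = (162.56, 162.61)   # GHz, fixed de-blending positions
SIGMA = 0.15                 # GHz, fixed common width (assumed value)

def profiles(nu):
    """Design matrix: one fixed-shape Gaussian per column."""
    return np.stack([np.exp(-0.5 * ((nu - c) / SIGMA) ** 2)
                     for c in CENTRES], axis=1)

rng = np.random.default_rng(0)
nu = np.linspace(162.0, 163.2, 200)
truth = np.array([1.6, 0.3])             # synthetic peak flux densities (Jy)
data = profiles(nu) @ truth + 0.02 * rng.standard_normal(nu.size)

amps, *_ = np.linalg.lstsq(profiles(nu), data, rcond=None)
print(np.round(amps, 2))
```

Because the two centres are far closer than the width, the individual amplitudes are poorly constrained relative to their sum, which is the origin of the larger uncertainties quoted for the de-blended fluxes.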
Nevertheless, they are consistent with the earlier fitting results without de-blending. The similarity of the velocity-integrated flux densities between the {\hbox {H$_2$O$^+$}}(\t202111) and {\hbox {H$_2$O$^+$}}(\t211202) lines is in good agreement with the regime of \hbox {far-infrared}\ pumping, as for the submm {\hbox {H$_{2}$O}}\ lines \citep{2013A&A...550A..25G}. As a first approximation, if these {\hbox {H$_2$O$^+$}}\ lines are optically thin and we ignore the additional pumping from ortho-{\hbox {H$_2$O$^+$}}\ 2$_{02}$ to ortho-{\hbox {H$_2$O$^+$}}\ $J=3$ energy levels, statistical equilibrium applied to the 2$_{02\,5/2}$ level implies that the population arriving at this level per second must equal the population leaving it per second. After subtracting the Gaussian profiles of all the {\hbox {H$_2$O$^+$}}\ lines in the spectrum, we find a $3$\,$\sigma$ residual in terms of the velocity-integrated flux density around 745.3\,GHz ($I = 0.6\pm0.2$\,Jy\,km\,s$^{-1}$, see Fig.\,\ref{fig:h2o-ions1}). This could be a tentative detection of the H$_2^{18}$O(\t211202) line at 745.320\,GHz. The velocity-integrated flux density ratio of H$_2^{18}$O(\t211202) to \htot211202 in NCv1.143 would hence be $\sim0.1$. If this tentative detection were confirmed, it would show that ALMA could easily study such lines, although sophisticated models will be needed to infer isotope ratios. The spectrum of the \htot211202 line in G09v1.97 covers the two main {\hbox {H$_2$O$^+$}}\ fine structure lines (Fig.\,\ref{fig:h2o-ions1}). However, due to the limited sensitivity, we have only tentatively detected the {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(5/2-3/2)}$ line just above 3\,$\sigma$ (neglecting the minor contribution from {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-3/2)}$), and the velocity-integrated flux density from a single Gaussian fit is $1.4\pm0.4$\,Jy\,km\,s$^{-1}$. We did not perform any line de-blending for this source considering the data quality.
The {\hbox {H$_2$O$^+$}}\ line profile is in good agreement with that of the {\hbox {H$_{2}$O}}\ (blue dashed histogram in Fig.\,\ref{fig:h2op-h2o}). As discussed in the case of NCv1.143, the velocity-integrated flux density of the {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$ line should also be close to this value, yet it is somewhat weaker and remains undetected in this source. More sensitive observations are needed to derive robust line parameters. We have also tentatively detected the {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$ line in G15v2.779 ($S/N\;\sim 4$, neglecting the minor contribution from the {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ line). The line profile is in good agreement with that of \htot211202 (blue dashed histogram in Fig.\,\ref{fig:h2o-ions1}). The velocity-integrated flux density derived from a double-peak Gaussian fit is $1.2\pm0.3$\,Jy\,km\,s$^{-1}$ (we did not perform any line de-blending for the {\hbox {H$_2$O$^+$}}\ doublet considering the spectral noise level). There could be a minor contribution from the {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ line to the velocity-integrated flux density; however, such a contribution is likely negligible, as in the case of NCv1.143, and is within the uncertainty of the velocity-integrated flux density. Nevertheless, the {\hbox {H$_2$O$^+$}}\ line centre is slightly blueshifted compared with {\hbox {H$_{2}$O}}, but note that the blue part of the line is cut by the limited observed bandwidth (yellow histogram). \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.441]{L_h2op_h2o} \caption{ Correlation between the luminosity of $J=2$ ortho-{\hbox {H$_2$O$^+$}}\ and para-\htot211202. The fitted function is \hbox {$L_{\mathrm{H_2O^+}}$}\;$\propto$\;\hbox {$L_{\mathrm{H_2O}}$}$^{\alpha}$. We found a very good correlation between \hbox {$L_{\mathrm{H_2O^+}}$}\ and \hbox {$L_{\mathrm{H_2O}}$}\ with a slope close to one.
Black points are from the local ULIRGs as listed in Table\,\ref{table:htop}. Dark blue ones are \hbox {high-redshift}\ starbursts from this work. Black solid lines indicate the $\chi^2$ fitting results while the grey dashed lines and the grey annotations represent the average ratio between \hbox {$L_{\mathrm{H_2O^+}}$}\ and \hbox {$L_{\mathrm{H_2O}}$}. } \label{fig:h2op-h2o} \end{center} \end{figure} After including the local detections of {\hbox {H$_2$O$^+$}}\ lines from \citetalias{2013ApJ...771L..24Y} (Table\,\ref{table:htop}), we find a tight linear correlation between the luminosities of {\hbox {H$_{2}$O}}\ and of the two main {\hbox {H$_2$O$^+$}}\ lines (slopes equal to $1.03\pm0.06$ and $0.91\pm0.07$, see Fig.\,\ref{fig:h2op-h2o}). However, one should keep in mind that, because the local measurements made by {\it Herschel} SPIRE/FTS \citep{2010SPIE.7731E..16N} have rather low spectral resolution, neither the {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-3/2)}$ and {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(5/2-3/2)}$ pair, nor the {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-5/2)}$ and {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ pair can be spectroscopically resolved. In the correlation plot (Fig.\,\ref{fig:h2op-h2o}) and Table\,\ref{table:htop}, we use the total luminosity from the 742\,GHz and 746\,GHz lines, assuming that the contribution from {\hbox {H$_2$O$^+$}}(\t211202)\,$_{(5/2-3/2)}$ and {\hbox {H$_2$O$^+$}}(\t202111)\,$_{(3/2-3/2)}$ to the velocity-integrated flux density of the lines at 742\,GHz and 746\,GHz is small ($\sim18$\,\%) and does not vary significantly between different sources. Hence, the velocity-integrated flux density ratio between each of the two dominant {\hbox {H$_2$O$^+$}}\ fine structure lines and {\hbox {H$_{2}$O}}\ in NCv1.143, G15v2.779 and G09v1.97 is $\sim 0.3$ (uncertainties are less than 30\,\%), which is consistent with local galaxies as shown in the figure.
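The log-log fit behind the quoted slopes can be sketched as follows. This uses synthetic luminosities with an assumed 0.1\,dex scatter, not the measured sample; fitting the power law $L_{\mathrm{H_2O^+}} \propto L_{\mathrm{H_2O}}^{\alpha}$ reduces to a straight-line fit in logarithmic space.

```python
import numpy as np

# Sketch (synthetic data): fit L_H2O+ ~ L_H2O^alpha by linear least
# squares in log-log space, recovering a slope ~1 and a mean
# L_H2O+/L_H2O ratio ~0.3, as in the correlation figure.
rng = np.random.default_rng(1)
log_Lh2o = rng.uniform(6.0, 9.0, 20)            # log10(L_H2O / Lsun)
alpha_true, offset_true = 1.0, np.log10(0.3)    # ratio ~0.3 at slope 1
log_Lh2op = (alpha_true * log_Lh2o + offset_true
             + 0.1 * rng.standard_normal(20))   # 0.1 dex scatter (assumed)

alpha, offset = np.polyfit(log_Lh2o, log_Lh2op, 1)
mean_ratio = 10 ** np.mean(log_Lh2op - log_Lh2o)
print(f"slope = {alpha:.2f}, mean L_H2O+/L_H2O = {mean_ratio:.2f}")
```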
This ratio is much larger than the abundance ratio of {\hbox {H$_2$O$^+$}}/{\hbox {H$_{2}$O}}\,$\sim$\,0.05 found in Arp\,220, an analogue of \hbox {high-redshift}\ ULIRGs \citep{2011ApJ...743...94R}. As discussed above, the AGN contribution to the excitation of the submm lines of most of our sources appears to be minor. Thus, the formation of {\hbox {H$_2$O$^+$}}\ is likely dominated by cosmic-ray ionization, rather than X-ray ionization. Given the average luminosity ratio of {\hbox {H$_2$O$^+$}}/{\hbox {H$_{2}$O}}\;$\sim 0.3\pm0.1$ shown in Fig.\,\ref{fig:h2op-h2o}, \citet{2011A&A...525A.119M} suggest a cosmic-ray ionization rate of 10$^{-14}$--10$^{-13}$\,s$^{-1}$. Such high cosmic-ray ionization rates drive the ambient ionization degree of the ISM to 10$^{-3}$--10$^{-2}$, rather than the canonical 10$^{-4}$. Therefore, in the gas phase, an ion-neutral route likely dominates the formation of {\hbox {H$_{2}$O}}. However, {\hbox {H$_{2}$O}}\ can also be enriched through water-ice sublimation, which releases {\hbox {H$_{2}$O}}\ into the gas-phase ISM; indeed, the upper part, $\sim 90$\,K, of the possible range for $T_\mathrm{warm}$ is close to the sublimation temperature of water ice. Hence, the high {\hbox {H$_{2}$O}}\ abundance (\hbox {$N_{\mathrm{H_2O}}$}\;$\gtrapprox 0.3 \times 10^{17}$\,cm$^{-2}$, see Section\,\ref{hto}) observed is likely to be the result of ion chemistry driven by high cosmic-ray ionization and/or perhaps water-ice desorption. However, further observations of different {\hbox {H$_2$O$^+$}}\ transitions in a larger sample are needed to constrain the contribution to {\hbox {H$_{2}$O}}\ formation from neutral-neutral reactions dominated by shocks.
\section{Conclusions} \normalsize \label{Conclusions} In this paper, we report a survey of submm {\hbox {H$_{2}$O}}\ emission at redshift $z \sim 2\text{--}4$, by observing the higher-excitation ortho-\htot321312 line in 6 sources and several complementary $J=2$ para-{\hbox {H$_{2}$O}}\ emission lines in the warm dense cores of 11 \hbox {high-redshift}\ lensed extreme starburst galaxies (Hy/ULIRGs) discovered by {\it H}-ATLAS. So far, we have detected {\hbox {H$_{2}$O}}\ emission in most of the total sample of 17 \hbox {high-redshift}\ lensed galaxies observed: both a $J=2$ para-{\hbox {H$_{2}$O}}\ and the $J=3$ ortho-{\hbox {H$_{2}$O}}\ line in five sources, and a single $J=2$ para-{\hbox {H$_{2}$O}}\ line in ten others. In these \hbox {high-redshift}\ Hy/ULIRGs, {\hbox {H$_{2}$O}}\ is the second strongest molecular emitter after CO within the submm band, as in local ULIRGs. The spatially integrated {\hbox {H$_{2}$O}}\ emission lines have velocity-integrated flux densities ranging from 4 to 15\,Jy\,km\,s$^{-1}$, which yields apparent {\hbox {H$_{2}$O}}\ emission luminosities $\mu$\hbox {$L_{\mathrm{H_2O}}$}\ ranging from $\sim 6\text{--}22 \times 10^{8}$\,{\hbox {$L_\odot$}}. After correction for the gravitational lensing magnification, we obtained the intrinsic \hbox {$L_{\mathrm{H_2O}}$}\ for the para-{\hbox {H$_{2}$O}}\ lines \t202111, \t211202 and ortho-\htot321312. The luminosities of the three {\hbox {H$_{2}$O}}\ lines increase with \hbox {$L_{\mathrm{IR}}$}\ as \hbox {$L_{\mathrm{H_2O}}$}\;$\propto$\;\hbox {$L_{\mathrm{IR}}$}$^{1.1\text{--}1.2}$. This correlation indicates the importance of \hbox {far-infrared}\ pumping as a dominant mechanism of {\hbox {H$_{2}$O}}\ excitation. Compared with the $J=3$ to $J=6$ CO lines, the {\hbox {H$_{2}$O}}\ lines have similar linewidths and comparable velocity-integrated flux densities.
The similarity in line profiles suggests that these two molecular species possibly trace similar intense star-forming regions. Using the \hbox {far-infrared}\ pumping model, we have analysed the ratios between the $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines and \hbox {$L_{\mathrm{H_2O}}$}/\hbox {$L_{\mathrm{IR}}$}\ in the 5 sources where both the $J=2$ and $J=3$ {\hbox {H$_{2}$O}}\ lines are detected. We have derived the ranges of the warm dust temperature ($T_\mathrm{warm}$), the {\hbox {H$_{2}$O}}\ column density per unit velocity interval (\hbox {$N_{\mathrm{H_2O}}$}/$\Delta V$) and the optical depth at 100\,$\mu$m ($\tau_{100}$). Although there are strong degeneracies, these modelling efforts confirm that, as for local ULIRGs, the submm {\hbox {H$_{2}$O}}\ emission in \hbox {high-redshift}\ Hy/ULIRGs traces the warm dense gas that is tightly correlated with the massive star-forming activity. While the values of $T_\mathrm{warm}$ and \hbox {$N_{\mathrm{H_2O}}$}\ (assuming a similar velocity dispersion $\Delta V$) are similar to the local ones, $\tau_{100}$ in the \hbox {high-redshift}\ Hy/ULIRGs is likely to be greater than 1 (optically thick), which is larger than the $\tau_{100}=0.05\text{--}0.2$ found in the local \hbox {infrared}\ galaxies. However, we note that the parameter space is still not well constrained in our sources through {\hbox {H$_{2}$O}}\ excitation modelling. Due to the limited excitation levels of the detected {\hbox {H$_{2}$O}}\ lines, we are only able to perform the modelling with pure \hbox {far-infrared}\ pumping. The detection of relatively strong {\hbox {H$_2$O$^+$}}\ lines opens the possibility of understanding the formation of such large amounts of {\hbox {H$_{2}$O}}. In these \hbox {high-redshift}\ Hy/ULIRGs, the {\hbox {H$_{2}$O}}\ formation is likely to be dominated by ion-neutral reactions powered by cosmic-ray-dominated regions.
The velocity-integrated flux density ratio between {\hbox {H$_2$O$^+$}}\ and {\hbox {H$_{2}$O}}\ ($I_{\mathrm{H_2O^+}}/\hbox {$I_{\mathrm{H_2O}}$} \sim 0.3$) is remarkably constant from low to high redshift, reflecting similar conditions in Hy/ULIRGs. However, more observations of {\hbox {H$_2$O$^+$}}\ emission/absorption and also OH$^{+}$ lines are needed to further constrain the physical parameters of the cosmic-ray-dominated regions and the ionization rate in those regions. We have demonstrated that the submm {\hbox {H$_{2}$O}}\ emission lines are strong and easily detectable with NOEMA. Being a unique diagnostic, the {\hbox {H$_{2}$O}}\ emission offers us a new approach to constrain the physical conditions in the intense and heavily obscured star-forming regions dominated by \hbox {far-infrared}\ radiation at \hbox {high-redshift}. Follow-up observations of other gas tracers, for instance, CO, HCN, {\hbox {H$_2$O$^+$}}\ and OH$^+$ using NOEMA, the IRAM 30m telescope and the JVLA will complement the {\hbox {H$_{2}$O}}\ diagnostic of the structure of different components, dominant physical processes, star formation and chemistry in \hbox {high-redshift}\ Hy/ULIRGs. With unprecedented spatial resolution and sensitivity, the image from the ALMA long baseline campaign observation of SDP\,81 \citep[also known as {\it H}-ATLAS\,J090311.6+003906, ][]{2015ApJ...808L...4A, 2015MNRAS.452.2258D, 2015MNRAS.451L..40R} shows the resolved structure of the dust, CO and {\hbox {H$_{2}$O}}\ emission in the $z=3$ ULIRG. With careful reconstruction of the source plane images, ALMA will help to resolve the submm {\hbox {H$_{2}$O}}\ emission in \hbox {high-redshift}\ galaxies down to the scale of giant molecular clouds, and provide a fresh view of detailed physics and chemistry in the early Universe. \begin{acknowledgement} We thank our referee for the very detailed comments and suggestions, which have improved the paper.
This work was based on observations carried out with the IRAM Interferometer NOEMA, supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). The authors are grateful to the IRAM staff for their support. CY thanks Claudia Marka and Nicolas Billot for their help with the IRAM 30m/EMIR observations. CY also thanks Zhi-Yu Zhang and Iv\'an Oteo for insightful discussions. CY, AO and YG acknowledge support by NSFC grants \#11311130491, \#11420101002 and CAS Pilot B program \#XDB09000000. CY and YG also acknowledge support by NSFC grant \#11173059. CY, AO, AB and YG acknowledge support from the Sino-French LIA-Origin joint exchange program. E.G-A is a Research Associate at the Harvard-Smithsonian Center for Astrophysics, and thanks the Spanish Ministerio de Econom\'{\i}a y Competitividad for support under projects FIS2012-39162-C06-01 and ESP2015-65597-C4-1-R, and NASA grant ADAP NNX15AE56G. RJI acknowledges support from ERC in the form of the Advanced Investigator Programme, 321302, COSMICISM. US participants in {\it H}-ATLAS acknowledge support from NASA through a contract from JPL. Italian participants in {\it H}-ATLAS acknowledge a financial contribution from the agreement ASI-INAF I/009/10/0. SPIRE has been developed by a consortium of institutes led by Cardiff Univ. (UK) and including: Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA). CY is supported by the China Scholarship Council grant (CSC No.201404910443). \end{acknowledgement} \footnotesize \bibliographystyle{aa}
\section*{Part~I. A review of kinematics via Cayley--Klein geometries} \section{Possible kinematics} As noted by Inonu and Wigner in their work \cite{IW53} on contractions of groups and their representations, classical mechanics is a limiting case of relativistic mechanics, for both the Galilei group as well as its Lie algebra are limits of the Poincar\'{e} group and its Lie algebra. Bacry and L\'{e}vy-Leblond \cite{BL67} classif\/ied and investigated the nature of all possible Lie algebras for kinema\-tical groups (these groups are assumed to be Lie groups as 4-dimensional spacetime is assumed to be continuous) given the three basic principles that \begin{itemize}\itemsep=0pt \item[(i)] space is isotropic and spacetime is homogeneous, \item[(ii)] parity and time-reversal are automorphisms of the kinematical group, and \item[(iii)] the one-dimensional subgroups generated by the boosts are non-compact. \end{itemize} \begin{table}[th] \centering \caption{The 11 possible kinematical groups.} \vspace{1mm} \begin{tabular}{ c | c } \hline Symbol & Name \tsep{1ex} \bsep{1ex}\\ \hline \tsep{1ex} $dS_1$ & de Sitter group $SO(4,1)$ \\ $dS_2$ & de Sitter group $SO(3,2)$ \\ $P$ & Poincar\'{e} group \\ $P^{\prime}_1$ & Euclidean group $SO(4)$ \\ $P^{\prime}_2$ & Para-Poincar\'{e} group \\ $C$ &Carroll group \\ $N_+$ & Expanding Newtonian Universe group \\ $N_-$ & Oscillating Newtonian Universe group \\ $G$ & Galilei group \\ $G^{\prime}$ & Para-Galilei group \\ $St$ & Static Universe group \bsep{0.5ex}\\ \hline \end{tabular} \end{table} \begin{figure}[th] \begin{center} \includegraphics[width=12.5cm]{McRae-fig1e} \end{center} \caption{The contractions of the kinematical groups.} \end{figure} The resulting possible Lie algebras give 11 possible kinematics, where each of the kinematical groups (see Table~1) is generated by its inertial transformations as well as its spacetime translations and spatial rotations. 
These groups consist of the de Sitter groups and their rotation-invariant contractions: the physical nature of a contracted group is determined by the nature of the contraction itself, along with the nature of the parent de Sitter group. Below we will illustrate the nature of these contractions when we look more closely at the simpler case of a~2-dimensional spacetime. For Fig.~1, note that an ``upper'' face of the cube is transformed under one type of contraction into the opposite face. Sanjuan \cite{F84} noted that the methods employed by Bacry and L\'{e}vy-Leblond could be easily applied to 2-dimensional spacetimes: as it is the purpose of this paper to investigate these kinematical Lie algebras and groups through Clif\/ford algebras, we will begin by explicitly classifying all such possible Lie algebras. This section then is a detailed and expository account of certain parts of Bacry, L\'{e}vy-Leblond, and Sanjuan's work. Let $K$ denote the generator of the inertial transformations, $H$ the generator of time translations, and $P$ the generator of space translations. As space is one-dimensional, it is automatically isotropic. In the following section we will see how to construct, for each possible kinematical structure, a~spacetime that is a homogeneous space for its kinematical group, so that basic principle (i) is satisf\/ied. Now let $\Pi$ and $\Theta$ denote the respective operations of parity and time-reversal: $K$ must be odd under both $\Pi$ and $\Theta$. Our basic principle (ii) requires that the Lie algebra is acted upon by the $\mathbb{Z}_2 \otimes \mathbb{Z}_2$ group of involutions generated by \[ \Pi \, : \, \left( K, H, P \right) \rightarrow \left( -K, H, -P \right) \qquad \mbox{and} \qquad \Theta \, : \, \left( K, H, P \right) \rightarrow \left( -K, -H, P \right) .\] Finally, basic principle (iii) requires that the subgroup generated by $K$ is noncompact, even though we will allow for the universe to be closed, or even for closed time-like worldlines to exist.
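The constraint imposed by basic principle (ii) can be checked mechanically: a bracket $[X,Y]$ transforms with the product of the parities of $X$ and $Y$ under each involution, so only generators with exactly that parity may appear in it. A small sketch, with the parity signs read off from the actions of $\Pi$ and $\Theta$ displayed above:

```python
# Sketch: each generator carries a sign under Pi and under Theta.
# A bracket [X, Y] must have parity sign(X)*sign(Y) under each
# involution, and only basis elements with exactly that parity pair
# may appear in it.
parity = {            # (sign under Pi, sign under Theta)
    "K": (-1, -1),
    "H": (+1, -1),
    "P": (-1, +1),
}

def allowed(x, y):
    """Basis elements that may appear in the bracket [x, y]."""
    target = tuple(parity[x][i] * parity[y][i] for i in range(2))
    return {g for g, s in parity.items() if s == target}

print(allowed("K", "H"))  # only P may appear: [K, H] = p P
print(allowed("K", "P"))  # only H may appear: [K, P] = h H
print(allowed("H", "P"))  # only K may appear: [H, P] = k K
```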
We do not wish for $e^{0K} = e^{\theta K}$ for some non-zero $\theta$, for then we would f\/ind it possible for a boost to be no boost at all! As each Lie bracket $[K, H]$, $[K, P]$, and $[H, P]$ is invariant under the involutions $\Pi$ and $\Theta$ as well as the involution \[ \Gamma = \Pi\Theta \, : \, \left( K, H, P \right) \rightarrow \left( K, -H, -P \right), \] we must have that $[K, H] = pP$, $[K, P] = hH$, and $[H, P] = kK$ for some constants $k$, $h$, and $p$. Note that these Lie brackets are also invariant under the symmetries def\/ined by \begin{gather*} S_P : \{ K \leftrightarrow H, p \leftrightarrow -p, k \leftrightarrow h \}, \qquad S_H : \{ K \leftrightarrow P, h \leftrightarrow -h, k \leftrightarrow -p \}, \qquad \mbox{and}\\ S_K : \{ H \leftrightarrow P, k \leftrightarrow -k, h \leftrightarrow p \} , \end{gather*} and that the Jacobi identity is automatically satisf\/ied for any triple of elements of the Lie algebra. \begin{table}[t] \centering \caption{The 21 kinematical Lie algebras, grouped into 11 essentially distinct types of kinematics.} \vspace{1mm} \begin{tabular}{| r | r | r | r | r | r | r | r | r | r | r | r | r | r | r | r |} \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \cline{13-14} \cline{16-16} $P$ & $-P$ & & $P$ & $-P$ & & $P$ & $-P$ & & $0$ & $0$ & & $0$ & $0$ & & $0$ \\ $H$ & $-H$ & & $H$ & $-H$ & & $0$ & $0$ & & $H$ & $-H$ & & $0$ & $0$ & & $0$\\ $K$ & $-K$ & & $-K$ & $K$ & & $0$ & $0$ & & $0$ & $0$ & & $K$ & $-K$ & & $0$ \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \cline{13-14} \cline{16-16} \multicolumn{16}{c}{} \\[-2mm] \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} \cline{13-14} $0$ & $0$ & & $0$ & $0$ & & $P$ & $-P$ & & $P$ & $-P$ & & $P$ & $-P$ & \multicolumn{2}{c}{} \\ $H$ & $-H$ & & $H$ & $-H$ & & $0$ & $0$ & & $0$ & $0$ & & $H$ & $-H$ & \multicolumn{2}{c}{} \\ $K$ & $-K$ & & $-K$ & $K$ & & $K$ & $-K$ & & $-K$ & $K$ & & $0$ & $0$ & \multicolumn{2}{c}{} \\ \cline{1-2} \cline{4-5} \cline{7-8} \cline{10-11} 
\cline{13-14} \end{tabular} \end{table} \begin{table}[t] \centering \caption{6 non-kinematical Lie algebras.} \vspace{1mm} \begin{tabular}{| r | r | r | r | r | r | r | r | } \cline{1-2} \cline{4-5} \cline{7-8} $P$ & $-P$ & & $P$ & $-P$ & & $-P$ & $P$ \\ $-H$ & $H$ & & $-H$ & $H$ & & $H$ & $-H$ \\ $K$ & $-K$ & & $-K$ & $K$ & & $0$ & $0$ \\ \cline{1-2} \cline{4-5} \cline{7-8} \end{tabular}\vspace{-2mm} \end{table} We can normalize the constants $k$, $h$, and $p$ by a scale change so that $k, h, p \in \{-1, 0, 1 \}$, taking advantage of the simple form of the Lie brackets for the basis elements $K$, $H$, and $P$. There are then $3^3$ possible Lie algebras, which we tabulate in Tables~2 and~3 with columns that have the following form: \begin{table}[htbp] \centering \begin{tabular}{| r | } \hline $[K, H]$ \\ $[K, P]$ \\ $[H, P]$ \\\hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Some kinematical groups along with their notation and structure constants.} \vspace{1mm} \begin{tabular}{ c c c c c } \hline & Anti-de Sitter & Oscillating Newtonian Universe & Para-Minkowski & Minkowski \\ \cline{2-5} &\tsep{0.2ex} $adS$ & $N_-$ & $M^{\prime}$ & $M$ \\ \hline \hline \tsep{0.2ex}$[K,H]$ & $P$ & $P$ & $0$ &$P$ \\ $[K,P]$ & $H$ & $0$ & $H$ & $H$ \\ $[H,P]$ & $K$ & $K$ & $K$ & $0$ \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Some kinematical groups along with their notation and structure constants.} \vspace{1mm} \begin{tabular}{ c c c c } \hline & de Sitter & Expanding Newtonian Universe & Expanding Minkowski Universe \\ \cline{2-4} & $dS$ & $N_+$ & $M_+$ \\ \hline \hline \tsep{0.2ex}$[K,H]$ & $P$ & $P$ & $0$ \\ $[K,P]$ & $H$ & $0$ & $H$ \\ $[H,P]$ & $-K$ & $-K$ & $-K$ \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Some kinematical groups along with their notation and structure constants.} \vspace{1mm} \begin{tabular}{ c c c c c } \hline & Galilei & Carroll & Static de Sitter Universe & Static Universe \\ 
\cline{2-5} & $G$ & $C$ & $SdS$ & $St$ \\ \hline \hline $[K,H]$ & $P$ & $0$ & $0$ &$0$ \\ $[K,P]$ & $0$ & $H$ & $0$ & $0$ \\ $[H,P]$ & $0$ & $0$ & $K$ & $0$ \\ \hline \end{tabular} \end{table} \noindent We also pair each Lie algebra with its image under the isomorphism given by $P \leftrightarrow -P$, $H \leftrightarrow -H$, $K \leftrightarrow -K$, and $[ \star, \star \star] \leftrightarrow [ \star \star, \star]$, for both Lie algebras then give the same kinematics. There are then 11 essentially distinct kinematics, as illustrated in Table~2. Also (as we shall see in the next section) each of the other 6 Lie algebras (that are given in Table~3) violates the third basic principle, generating a compact group of inertial transformations. These non-kinematical Lie algebras are the Lie algebras of the motion groups of the elliptic, hyperbolic, and Euclidean planes: let us denote these respective groups as $El$, $H$, and $Eu$. We name the kinematical groups (that are generated by the boosts and translations) in concert with the 4-dimensional case (see Tables~4,~5, and~6). Each of these kinematical groups is either the de Sitter or the anti-de Sitter group, or one of their contractions. We can contract with respect to any subgroup, giving us three fundamental types of contraction: {\it speed-space, speed-time}, and {\it space-time contractions}, corresponding respectively to contracting to the subgroups generated by $H$, $P$, and $K$. {\bfseries \itshape Speed-space contractions.} We make the substitutions $K \rightarrow \epsilon K$ and $P \rightarrow \epsilon P$ into the Lie algebra and then calculate the singular limit of the Lie brackets as $\epsilon \rightarrow 0$. Physically the velocities are small when compared to the speed of light, and the spacelike intervals are small when compared to the timelike intervals.
Geometrically we are describing spacetime near a timelike geodesic, as we are contracting to the subgroup that leaves this worldline invariant, and so are passing from relativistic to absolute time. So $adS$ is contracted to $N_-$ while $dS$ is contracted to $N_+$, for example. {\bfseries \itshape Speed-time contractions.} We make the substitutions $K \rightarrow \epsilon K$ and $H \rightarrow \epsilon H$ into the Lie algebra and then calculate the singular limit of the Lie brackets as $\epsilon \rightarrow 0$. Physically the velocities are small when compared to the speed of light, and the timelike intervals are small when compared to the spacelike intervals. Geometrically we are describing spacetime near a spacelike geodesic, as we are contracting to the subgroup that leaves invariant this set of simultaneous events, and so are passing from relativistic to absolute space. Such a spacetime may be of limited physical interest, as we are only considering intervals connecting events that are not causally related. {\bfseries \itshape Space-time contractions.} We make the substitutions $P \rightarrow \epsilon P$ and $H \rightarrow \epsilon H$ into the Lie algebra and then calculate the singular limit of the Lie brackets as $\epsilon \rightarrow 0$. Physically the spacelike and timelike intervals are small, but the boosts are not restricted. Geometrically we are describing spacetime near an event, as we are contracting to the subgroup that leaves invariant only this one event, and so we call the corresponding kinematical group a {\it local group} as opposed to a {\it cosmological group}. Fig.~2 illustrates several interesting relationships among the kinematical groups. For example, Table~7 gives important classes of kinematical groups, each class corresponding to a face of the f\/igure, that transform to another class in the table under one of the symmetries $S_H$, $S_P$, or $S_K$, provided that certain exclusions are made as outlined in Table~8. 
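The bookkeeping above can be sketched in a few lines. Encoding an algebra by the triple $(p, h, k)$ with $[K,H]=pP$, $[K,P]=hH$, $[H,P]=kK$, each contraction rescales two generators by $\epsilon$ and therefore kills exactly the one bracket that becomes $O(\epsilon^2)$. The compactness test used below ($ph \geq 0$, from the eigenvalues $\pm\sqrt{ph}$ of the action of $K$ on $\mathrm{span}\{H,P\}$) is our reading of basic principle (iii) for these algebras, and it reproduces the 21/6 split of the 27 normalized algebras:

```python
from itertools import product

# Sketch: enumerate the 27 normalized algebras (p, h, k), split them
# into kinematical and non-kinematical ones, and apply the three
# contractions by zeroing the bracket that scales as epsilon^2.
triples = list(product((-1, 0, 1), repeat=3))   # (p, h, k)
kinematical = [t for t in triples if t[0] * t[1] >= 0]
assert len(kinematical) == 21 and len(triples) - len(kinematical) == 6

# Pairing (p,h,k) with (-p,-h,-k) gives the essentially distinct kinematics.
classes = {frozenset({t, tuple(-x for x in t)}) for t in kinematical}
assert len(classes) == 11

def contract(kind, t):
    p, h, k = t
    if kind == "speed-space":   # K -> eK, P -> eP: [K,P] ~ e^2 -> 0
        return (p, 0, k)
    if kind == "speed-time":    # K -> eK, H -> eH: [K,H] ~ e^2 -> 0
        return (0, h, k)
    if kind == "space-time":    # P -> eP, H -> eH: [H,P] ~ e^2 -> 0
        return (p, h, 0)
    raise ValueError(kind)

adS = (1, 1, 1)                       # anti-de Sitter
print(contract("speed-space", adS))   # (1, 0, 1): N_-, absolute time
print(contract("space-time", adS))    # (1, 1, 0): Minkowski M
```

The two printed contractions match the tables above: $adS \to N_-$ under a speed-space contraction and $adS \to M$ under a space-time contraction.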
The exclusions are necessary under the given symmetries as some kinematical algebras are taken to algebras that are not kinematical. \begin{figure}[t] \begin{center} \includegraphics[width=12.5cm]{McRae-fig2e} \end{center} \caption{The contractions of the kinematical groups for 2-dimensional spacetimes.} \end{figure} \begin{table}[t] \centering \caption{Important classes of kinematical groups and their geometrical conf\/igurations in Fig.~2.} \vspace{1mm} \begin{tabular}{ l | l } \hline Class of groups & Face \\ \hline \hline Relative-time & $1247$ \\ Absolute-time & $3568$ \\ Relative-space & $1346$ \\ Absolute-space & $2578$ \\ Cosmological & $1235$ \\ Local & $4678$ \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{The 3 basic symmetries are represented by ref\/lections of Fig.~2, with some exclusions.} \vspace{1mm} \begin{tabular}{ c | c } \hline Symmetry & Ref\/lection across face \\ \hline \hline $S_H$ & $1378$ (excluding $M_+$) \\ $S_P$ & $1268$ (excluding $adS$ and $N_-$)\\ $S_K$ & $1458$ \\ \hline \end{tabular} \end{table} \section[Cayley-Klein geometries]{Cayley--Klein geometries} In this section we wish to review work done by Ballesteros, Herranz, Ortega and Santander on homogeneous spaces that are spacetimes for kinematical groups, and we begin with a bit of history concerning the discovery of non-Euclidean geometries. Franz Taurinus was the f\/irst to explicitly give mathematical details on how a hypothetical sphere of imaginary radius would have a non-Euclidean geometry, what he called {\it log-spherical geometry}, and this was done via hyperbolic trigonometry (see \cite{G89} or \cite{K98}). 
Felix Klein\footnote{Roger Penrose \cite{P05} notes that it was Eugenio Beltrami who f\/irst discovered both the projective and conformal models of the hyperbolic plane.} is usually given credit for being the f\/irst to give a complete model of a non-Euclidean geometry\footnote{ Spherical geometry was not historically considered to be non-Euclidean in nature, as it can be embedded in a 3-dimensional Euclidean space, unlike Taurinus' sphere.}: he built his model by suitably adapting Arthur Cayley's metric for the projective plane. Klein \cite{K21} (originally published in 1871) went on, in a systematic way, to describe nine types of two-dimensional geometries (what Yaglom~\cite{Y79} calls {\it Cayley--Klein geometries}) that were then further investigated by Sommerville~\cite{S10}. Yaglom gave conformal models for these geometries, extending what had been done for both the projective and hyperbolic planes. Each type of geometry is homogeneous and can be determined by two real constants ${\kappa_1}$ and ${\kappa_2}$ (see Table~9). The names of the geometries when ${\kappa_2} \leq 0$ are those as given by Yaglom, and it is these six geometries that can be interpreted as spacetime geometries. 
\begin{table}[t] \centering \caption{The 9 types of Cayley--Klein geometries.} \vspace{1mm} \begin{tabular}{ l l l l } \hline & \multicolumn{3}{c }{Metric Structure} \\ \cline{2-4} Conformal & Elliptic & Parabolic & Hyperbolic \\ Structure & ${\kappa_1} > 0$ & ${\kappa_1} = 0$ & ${\kappa_1} < 0$ \\ \hline \hline \tsep{1ex} Elliptic & elliptic & Euclidean & hyperbolic \\ ${\kappa_2} > 0$ & geometries & geometries & geometries \\[1mm] Parabolic & co-Euclidean & Galilean & co-Minkowski \\ ${\kappa_2} = 0$ & geometries & geometry & geometries \\[1mm] Hyperbolic & co-hyperbolic & Minkowski & doubly \\ ${\kappa_2} < 0$ & geometries & geometries & hyperbolic \\ & & & geometries \\ \hline \end{tabular} \end{table} Following Taurinus, it is easiest to describe a bit of the geometrical nature of these geometries by applying the appropriate kind of trigonometry: we will see shortly how to actually construct a model for each geometry. Let $\kappa$ be a real constant. The unit circle $a^2 + \kappa b^2 = 1$ in the plane ${\mathbb{R}}^2 = \{ (a,b) \}$ with metric $ds^2 = da^2 + \kappa db^2$ can be used to def\/ine the cosine \begin{equation*} {C_{\kappa}}(\phi) = \begin{cases} \cos{\left(\sqrt{\kappa} \, \phi \right) }, &\text{if $\kappa > 0$}, \\ 1, &\text{if $\kappa = 0$}, \\ \cosh{\left( \sqrt{-\kappa} \, \phi \right) }, &\text{if $\kappa < 0$}, \\ \end{cases} \end{equation*} and sine \begin{equation*} {S_{\kappa}}(\phi) = \begin{cases} \frac{1}{\sqrt{\kappa}}\sin{\left( \sqrt{\kappa} \, \phi \right) }, &\text{if $\kappa > 0$}, \\ \phi, &\text{if $\kappa = 0$}, \\ \frac{1}{\sqrt{-\kappa}} \sinh{\left( \sqrt{-\kappa} \, \phi \right) }, &\text{if $\kappa < 0$} \\ \end{cases} \end{equation*} functions: here $(a,b) = ({C_{\kappa}}(\phi), {S_{\kappa}}(\phi))$ is a point on the connected component of the unit circle containing the point $(1,0)$, and $\phi$ is the signed distance from $(1,0)$ to $(a,b)$ along the circular arc, def\/ined modulo the length
$\dfrac{2\pi}{\sqrt{\kappa}}$ of the unit circle when $\kappa > 0$. We can also write down the power series for these analytic trigonometric functions: \begin{gather*} {C_{\kappa}}(\phi) = 1 - \frac{1}{2!}\kappa \phi^2 + \frac{1}{4!} \kappa^2 \phi^4 + \cdots, \\ {S_{\kappa}}(\phi) = \phi - \frac{1}{3!}\kappa \phi^3 + \frac{1}{5!} \kappa^2 \phi^5 + \cdots. \end{gather*} Note that ${C_{\kappa}}^2(\phi) + \kappa {S_{\kappa}}^2(\phi) = 1$. So if $\kappa > 0$ then the unit circle is an ellipse (giving us elliptical trigonometry), while if $\kappa < 0$ it is a hyperbola (giving us hyperbolic trigonometry). When $\kappa = 0$ the unit circle consists of two parallel straight lines, and we will say that our trigonometry is parabolic. We can use such a trigonometry to def\/ine the angle $\phi$ between two lines, and another independently chosen trigonometry to def\/ine the distance between two points (as the angle between two lines, where each line passes through one of the points as well as a distinguished point). At this juncture it is not clear that such geometries, as they have just been described, are of either mathematical or physical interest. That mathematicians and physicists at the beginning of the 20th century were having similar thoughts is perhaps not surprising, and Walker \cite{W99} gives an interesting account of the mathematical and physical research into non-Euclidean geometries during this period in history. Klein found that there was a fundamental unity to these geometries, and so that alone made them worth studying. Before we return to physics, let us look at these geometries from a perspective that Klein would have appreciated, describing their motion groups in a unif\/ied manner. 
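The generalized cosine and sine def\/ined above are easy to experiment with numerically. The following Python sketch (an illustration added here, not part of the original development; the function names are our own) implements ${C_{\kappa}}$ and ${S_{\kappa}}$ and checks the identity ${C_{\kappa}}^2(\phi) + \kappa {S_{\kappa}}^2(\phi) = 1$ for each sign of $\kappa$:

```python
import math

def C(kappa, phi):
    """Generalized cosine: cos, the constant 1, or cosh, by the sign of kappa."""
    if kappa > 0:
        return math.cos(math.sqrt(kappa) * phi)
    if kappa < 0:
        return math.cosh(math.sqrt(-kappa) * phi)
    return 1.0

def S(kappa, phi):
    """Generalized sine: sin/sqrt(kappa), phi itself, or sinh/sqrt(-kappa)."""
    if kappa > 0:
        return math.sin(math.sqrt(kappa) * phi) / math.sqrt(kappa)
    if kappa < 0:
        return math.sinh(math.sqrt(-kappa) * phi) / math.sqrt(-kappa)
    return phi

# The fundamental identity C^2 + kappa * S^2 = 1 holds for every sign of kappa,
# so the "unit circle" is an ellipse, a pair of parallel lines, or a hyperbola.
for kappa in (2.0, 0.0, -3.0):
    for phi in (0.0, 0.7, -1.3):
        assert abs(C(kappa, phi) ** 2 + kappa * S(kappa, phi) ** 2 - 1.0) < 1e-12
```

Specializing $\kappa$ to $1$, $0$, or $-1$ recovers ordinary circular, parabolic, and hyperbolic trigonometry respectively.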
Ballesteros, Herranz, Ortega and Santander have constructed the Cayley--Klein geometries as homogeneous spaces\footnote{See \cite{BH06,HOS96,HS02}, and also \cite{HOS00}, where a special case of the group law is investigated, leading to a plethora of trigonometric identities, some of which will be put to good use in this paper: see Appendix~A.} by looking at real representations of their motion groups. These motion groups are denoted by ${SO_{\ka,\kb}(3)}$ (that we will refer to as the {\it generalized} $SO(3)$ or simply by $SO(3)$) with their respective Lie algebras being denoted by ${so_{\ka,\kb}(3)}$ (that we will refer to as the {\it generalized} $so(3)$ or simply by $so(3)$), and most if not all of these groups are probably familiar to the reader (for example, if both ${\kappa_1}$ and ${\kappa_2}$ vanish, then $SO(3)$ is the Heisenberg group). Later on in this paper we will use Clif\/ford algebras to show how we can explicitly think of $SO(3)$ as a rotation group, where each element of $SO(3)$ has a well-def\/ined axis of rotation and rotation angle. Now a matrix representation of $so(3)$ is given by the matrices \[ H = \left( \begin{matrix} 0 & -{\kappa_1} & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right), \qquad P = \left( \begin{matrix} 0 & 0 & -{\kappa_1} {\kappa_2} \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right), \qquad \mbox{and} \qquad K = \left( \begin{matrix} 0 & 0 & 0 \\ 0 & 0 & -{\kappa_2} \\ 0 & 1 & 0 \end{matrix} \right), \] where the structure constants are given by the commutators \[ \left[ K, H \right] = P, \qquad \left[ K, P \right] = -{\kappa_2} H, \qquad \mbox{and} \qquad \left[ H, P \right] = {\kappa_1} K. \] By normalizing the constants we obtain matrix representations of the $adS$, $dS$, $N_-$, $N_+$, $M$, and $G$ Lie algebras, as well as the Lie algebras for the elliptic, Euclidean, and hyperbolic motion groups, denoted $El$, $Eu$, and $H$ respectively. 
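As a quick numerical check of the stated structure constants (a sketch of our own, assuming NumPy is available), one can build $H$, $P$, and $K$ for arbitrary unnormalized parameters and verify the three commutators, including the contracted cases where ${\kappa_1}$ or ${\kappa_2}$ vanishes:

```python
import numpy as np

def so3_generators(k1, k2):
    """The matrices H, P, K representing so_{k1,k2}(3), parameters unnormalized."""
    H = np.array([[0.0, -k1, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    P = np.array([[0.0, 0.0, -k1 * k2], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    K = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -k2], [0.0, 1.0, 0.0]])
    return H, P, K

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

# Structure constants [K, H] = P, [K, P] = -k2 H, [H, P] = k1 K.
for k1, k2 in [(1.0, 1.0), (-1.0, 0.0), (0.5, -2.0), (0.0, 0.0)]:
    H, P, K = so3_generators(k1, k2)
    assert np.allclose(comm(K, H), P)
    assert np.allclose(comm(K, P), -k2 * H)
    assert np.allclose(comm(H, P), k1 * K)
```

The case $k1 = k2 = 0$ exhibits the Heisenberg algebra, where the only nonvanishing bracket is $[K, H] = P$.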
We will see at the end of this section how the Cayley--Klein spaces can also be used to give homogeneous spaces for $M^{\prime}$, $M_+$, $C$, and $SdS$ (but not for $St$). One benef\/it of not normalizing the parameters ${\kappa_1}$ and ${\kappa_2}$ is that we can easily obtain contractions by letting ${\kappa_1} \rightarrow 0$ or ${\kappa_2} \rightarrow 0$. Elements of $SO(3)$ are real-linear, orientation-preserving isometries of ${\mathbb{R}}^3 = \{ (z, t, x) \}$ imbued with the (possibly indef\/inite or degenerate) metric $ds^2 = dz^2 + {\kappa_1} dt^2 + {\kappa_1} {\kappa_2} dx^2$. The one-parameter subgroups $\mathcal{H}$, $\mathcal{P}$, and $\mathcal{K}$ generated respectively by $H$, $P$, and $K$ consist of matrices of the form \[ e^{\alpha H} = \left( \begin{matrix} C_{{\kappa_1}}(\alpha) & -{\kappa_1} S_{{\kappa_1}}(\alpha) & 0 \\ S_{{\kappa_1}}(\alpha) & C_{{\kappa_1}}(\alpha) & 0 \\ 0 & 0 & 1 \end{matrix} \right), \qquad e^{\beta P} = \left( \begin{matrix} C_{{\kappa_1} {\kappa_2}}(\beta) & 0 & -{\kappa_1} {\kappa_2} S_{{\kappa_1} {\kappa_2}}(\beta) \\ 0 & 1 & 0 \\ S_{{\kappa_1} {\kappa_2}}(\beta) & 0 & C_{{\kappa_1} {\kappa_2}}(\beta) \end{matrix} \right), \] \bigskip and \[ e^{\theta K} = \left( \begin{matrix} 1 & 0 & 0 \\ 0 & C_{{\kappa_2}}(\theta) & -{\kappa_2} S_{{\kappa_2}}(\theta) \\ 0 & S_{{\kappa_2}}(\theta) & C_{{\kappa_2}}(\theta) \end{matrix} \right) \] (note that the orientations induced on the coordinate planes may be dif\/ferent than expected). We can now see that in order for $\mathcal{K}$ to be non-compact, we must have that ${\kappa_2} \leq 0$, which explains the content of Table~3. The spaces $SO(3) / \mathcal{K}$, $SO(3) / \mathcal{H}$, and $SO(3) / \mathcal{P}$ are homogeneous spaces for $SO(3)$. When $SO(3)$ is a kinematical group, then $S \equiv SO(3) / \mathcal{K}$ can be identif\/ied with the manifold of space-time translations.
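One can likewise confirm numerically that exponentiating $\alpha H$ reproduces the closed form for $e^{\alpha H}$ above (again a sketch of ours using NumPy; a truncated Taylor series suffices for small arguments):

```python
import numpy as np

def expm(A, terms=60):
    """Truncated Taylor series for the matrix exponential (fine for small arguments)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def C(kappa, phi):
    if kappa > 0:
        return np.cos(np.sqrt(kappa) * phi)
    if kappa < 0:
        return np.cosh(np.sqrt(-kappa) * phi)
    return 1.0

def S(kappa, phi):
    if kappa > 0:
        return np.sin(np.sqrt(kappa) * phi) / np.sqrt(kappa)
    if kappa < 0:
        return np.sinh(np.sqrt(-kappa) * phi) / np.sqrt(-kappa)
    return phi

# exp(alpha * H) agrees with the closed form built from C_{k1} and S_{k1},
# in the elliptic, parabolic, and hyperbolic cases alike.
alpha = 0.8
for k1 in (1.0, 0.0, -1.0):
    H = np.array([[0.0, -k1, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    closed = np.array([[C(k1, alpha), -k1 * S(k1, alpha), 0.0],
                       [S(k1, alpha), C(k1, alpha), 0.0],
                       [0.0, 0.0, 1.0]])
    assert np.allclose(expm(alpha * H), closed)
```

The underlying reason is that the upper-left block of $H$ squares to $-{\kappa_1}$ times the identity, so the exponential series collapses to $C_{{\kappa_1}}$ and $S_{{\kappa_1}}$ terms.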
Regardless of the values of ${\kappa_1}$ and ${\kappa_2}$ however, $S$ is the Cayley--Klein geometry with parameters ${\kappa_1}$ and ${\kappa_2}$, and $S$ can be shown to have constant curvature ${\kappa_1}$ (also, see \cite{M06}). So the angle between two lines passing through the origin (the point that is invariant under the subgroup $\mathcal{K}$) is given by the parameter $\theta$ of the element of $\mathcal{K}$ that rotates one line to the other (and so the measure of angles is related to the parameter ${\kappa_2}$). Similarly if one point can be taken to another by an element of $\mathcal{H}$ or $\mathcal{P}$ respectively, then the distance between the two points is given by the parameter $\alpha$ or $\beta$ (and so the measure of distance is related to the parameter ${\kappa_1}$ or to ${\kappa_1} {\kappa_2}$). Note that the spaces $SO(3) / \mathcal{H}$ and $SO(3) / \mathcal{P}$ are respectively the spaces of timelike and spacelike geodesics for kinematical groups. For our purposes we will also need to model $S$ as a projective geometry. First, we def\/ine the projective quadric $\bar{\Sigma}$ as the set of points on the unit sphere $\Sigma \equiv \{ (z, t, x) \in {\mathbb{R}}^3 \; | \; z^2 + {\kappa_1} t^2 + {\kappa_1} {\kappa_2} x^2 = 1 \}$ that have been identif\/ied by the equivalence relation $(z, t, x) \thicksim (-z, -t, -x)$. The group $SO(3)$ acts on $\bar{\Sigma}$, and the subgroup $\mathcal{K}$ is then the isotropy subgroup of the equivalence class $\mathcal{O} = [(1, 0, 0)]$. The metric $g$ on ${\mathbb{R}}^3$ induces a metric on $\bar{\Sigma}$ that has ${\kappa_1}$ as a factor. If we then def\/ine the main metric $g_1$ on $\bar{\Sigma}$ by setting \[ \left( ds^2 \right)_1 = \frac{1}{{\kappa_1}} ds^2, \] then the surface $\bar{\Sigma}$, along with its main metric (and subsidiary metric, see below), is a projective model for the Cayley--Klein geometry $S$. Note that in general $g_1$ can be indef\/inite as well as nondegenerate.
The motion $\exp(\theta K)$ gives a rotation (or boost for a spacetime) of $S$, whereas the motions $\exp(\alpha H)$ and $\exp(\beta P)$ give translations of $S$ (time and space translations respectively for a~spacetime). The parameters ${\kappa_1}$ and ${\kappa_2}$ are, for the spacetimes, identif\/ied with the universe time radius $\tau$ and speed of light $c$ by the formulae \[ {\kappa_1} = \pm \frac{1}{\tau^2} \qquad \mbox{and} \qquad {\kappa_2} = - \frac{1}{c^2}. \] For the absolute-time spacetimes with kinematical groups $N_-$, $G$, and $N_+$, where ${\kappa_2} = 0$ and $c = \infty$, we foliate $S$ so that each leaf consists of all points that are simultaneous with one another, and then $SO(3)$ acts transitively on each leaf. We then def\/ine the subsidiary metric $g_2$ along each leaf of the foliation by setting \[ \left( ds^2 \right)_2 = \frac{1}{{\kappa_2}} \left( ds^2 \right)_1. \] Of course when ${\kappa_2} \neq 0$, the subsidiary metric can be def\/ined on all of $\bar{\Sigma}$. The group $SO(3)$ acts on $S$ by isometries of $g_1$, by isometries of $g_2$ when ${\kappa_2} \neq 0$ and, when ${\kappa_2} = 0$, on the leaves of the foliation by isometries of $g_2$. It remains to be seen then how homogeneous spacetimes for the kinematical groups $M_+,$~$M^{\prime},$~$C,\!$ and $SdS$ may be obtained from the Cayley--Klein geometries. In Fig.~3 the face $1346$ contains the motion groups for all nine types of Cayley--Klein geometries, and the symmetries $S_H$, $S_P$, and $S_K$ can be represented as symmetries of the cube, as indicated in Table~10\footnote{Santander \cite{mS01} discusses some geometrical consequences of such symmetries when applied to $dS$, $adS$, and $H$: note that $S_H$, $S_P$, and $S_K$ all f\/ix vertex $1$.}. As vertices~$1$ and~$8$ are in each of the three planes of ref\/lection, it is impossible to get $St$ from any one of the Cayley--Klein groups through the symmetries $S_H$, $S_P$, and $S_K$.
Under the symmetry $S_K$, respective spacetimes for $M_+$, $M^{\prime}$, and $C$ are given by the spacetimes $SO(3)/\mathcal{K}$ for $N_+$, $N_-$, and $G$, where space and time translations are interchanged. \begin{figure}[t] \begin{center} \includegraphics[width=12.5cm]{McRae-fig3e} \end{center} \caption{The 9 kinematical and 3 non-kinematical groups.} \end{figure} \begin{table}[t] \centering \caption{The 3 basic symmetries are given as ref\/lections of Fig.~3.} \vspace{1mm} \begin{tabular}{ c | c } \hline Symmetry & Ref\/lection across face \\ \hline \hline $S_H$ & $1378$ \\ $S_P$ & $1268$ \\ $S_K$ & $1458$ \\ \hline \end{tabular} \end{table} Under the symmetry $S_H$, the spacetime for $SdS$ is given by the homogeneous space $SO(3)/\mathcal{P}$ for $G$, as boosts and space translations are interchanged by $S_H$. Note however that there actually are no spacelike geodesics for $G$, as the Cayley--Klein geometry $S = SO(3)/\mathcal{K}$ for ${\kappa_1} = {\kappa_2} = 0$ can be given simply by the plane ${\mathbb{R}}^2 = \{ (t,x) \}$ with $ds^2 = dt^2$ as its line element\footnote{Yaglom writes in \cite{Y79} about this geometry, {\it ``\dots which, in spite of its relative simplicity, confronts the uninitiated reader with many surprising results.''}}. Although $SO(3)/\mathcal{P}$ is a homogeneous space for $SO(3)$, $SO(3)$ does not act ef\/fectively on $SO(3)/\mathcal{P}$: since both $[K,P] = 0$ and $[H,P] = 0$, space translations do not act on $SO(3)/\mathcal{P}$. Similarly, inertial transformations do not act on spacetime for $SdS$, or on $St$ for that matter. Note that $SdS$ can be obtained from $dS$ by $P \rightarrow \epsilon P$, $H \rightarrow \epsilon H$, and $K \rightarrow \epsilon^2 K$, where $\epsilon \rightarrow 0$. So velocities are negligible even when compared to the reduced space and time translations. 
In conclusion to Part I then, a study of all nine types of Cayley--Klein geometries af\/fords us a beautiful and unif\/ied study of all 11 possible kinematics save one, the static kinematical structure. It was this study that motivated the author to investigate another unif\/ied approach to possible kinematics, save for that of the Static Universe. \pdfbookmark[1]{Part II. Another unified approach to possible kinematics}{part2} \section*{Part II. Another unif\/ied approach to possible kinematics} \section[The generalized Lie algebra $so(3)$]{The generalized Lie algebra $\boldsymbol{so(3)}$} Preceding the work of Ballesteros, Herranz, Ortega, and Santander was the work of Sanjuan~\cite{F84} on possible kinematics and the nine\footnote{Sanjuan and Yaglom both tacitly assume that both parameters ${\kappa_1}$ and ${\kappa_2}$ are normalized.} Cayley--Klein geometries. Sanjuan represents each kinematical Lie algebra as a real matrix subalgebra of $M(2,{\mathbb{C}})$, where ${\mathbb{C}}$ denotes the generalized complex numbers (a description of the generalized complex numbers is given below). This is accomplished using Yaglom's analytic representation of each Cayley--Klein geometry as a region of ${\mathbb{C}}$: for the hyperbolic plane this gives the well-known Poincar\'{e} disk model. Sanjuan constructs the Lie algebra for the hyperbolic plane using the standard method, stating that this method can be used to obtain the other Lie algebras as well. Also, extensive work has been done by Gromov \cite{nG90a,nG90b,nG92,nG95,nG96} on the generalized orthogonal groups $SO(3)$ (which we refer to simply as $SO(3)$), deriving representations of the generalized $so(3)$ (which we refer to simply as $so(3)$) by utilizing the dual numbers as well as the standard complex numbers, where again it is tacitly assumed that the parameters ${\kappa_1}$ and ${\kappa_2}$ have been normalized.
Also, Pimenov has given an axiomatic description of all Cayley--Klein spaces in arbitrary dimensions in his paper \cite{P65} via the dual numbers $ i_k$, $k=1,2,\dots $, where $ i_k i_m = i_m i_k \neq 0$ and $i_k^2=0$. Unless stated otherwise, we will not assume that the parameters ${\kappa_1}$ and ${\kappa_2}$ have been normalized, as we wish to obtain contractions by simply letting ${\kappa_1} \rightarrow 0$ or ${\kappa_2} \rightarrow 0$. Our goal in this section is to derive representations of $so(3)$ as real subalgebras of $M(2,{\mathbb{C}})$, and in the process give a conformal model of $S$ as a region of the generalized complex plane ${\mathbb{C}}$ along with a hermitian metric, extending what has been done for the projective and hyperbolic planes\footnote{Fjelstad and Gal \cite{FG01} have investigated two-dimensional geometries and physics generated by complex numbers from a topological perspective. Also, see \cite{CCCZ}.}. We feel that it is worthwhile to write down precisely how these representations are obtained in order that our later construction of a Clif\/ford algebra is more meaningful. The f\/irst step is to represent the generators of $SO(3)$ by M\"{o}bius transformations (that is, linear fractional transformations) of an appropriately def\/ined region in the complex number plane ${\mathbb{C}}$, where the points of $S$ are to be identif\/ied with this region. \begin{definition} By the complex number plane ${\mathbb{C}}_{\kappa}$ we will mean $\{ w = u + i v \, | \, (u,v) \in {\mathbb{R}}^2 \ \mbox{and} \ i^2 = -\kappa \}$ where $\kappa$ is a real-valued parameter. \end{definition} Thus ${\mathbb{C}}_{\kappa}$ refers to the complex numbers, dual numbers, or double numbers when $\kappa$ is normalized to $1$, $0$, or $-1$ respectively (see \cite{Y79} and \cite{HH04}). One may check that ${\mathbb{C}}_{\kappa}$ is an associative algebra with a multiplicative unit, but that there are zero divisors when $\kappa \leq 0$. 
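A minimal model of ${\mathbb{C}}_{\kappa}$ is easy to code (a hypothetical sketch of ours; the class name is our own invention), and it exhibits the zero divisors for $\kappa \leq 0$ directly:

```python
class GC:
    """A generalized complex number u + i v with i**2 = -kappa (kappa real)."""

    def __init__(self, u, v, kappa):
        self.u, self.v, self.kappa = u, v, kappa

    def __mul__(self, other):
        # (u1 + i v1)(u2 + i v2) = (u1 u2 - kappa v1 v2) + i (u1 v2 + v1 u2)
        assert self.kappa == other.kappa
        return GC(self.u * other.u - self.kappa * self.v * other.v,
                  self.u * other.v + self.v * other.u,
                  self.kappa)

    def conj(self):
        return GC(self.u, -self.v, self.kappa)

# Dual numbers (kappa = 0): i itself is a zero divisor, since i * i = 0.
i0 = GC(0.0, 1.0, 0.0)
assert (i0 * i0).u == 0.0 and (i0 * i0).v == 0.0

# Double numbers (kappa = -1): the nonzero elements 1 + i and 1 - i
# multiply to zero -- they lie on the null cone u = +/- v.
z = GC(1.0, 1.0, -1.0) * GC(1.0, -1.0, -1.0)
assert z.u == 0.0 and z.v == 0.0
```

For $\kappa > 0$ the product $w\bar{w} = u^2 + \kappa v^2$ vanishes only at the origin, so there are no zero divisors, in agreement with the remark above.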
For example, if $\kappa = 0$, then $i$ is a zero-divisor. The reader will note below that $\frac{1}{i}$ appears in certain equations, but that these equations can always be rewritten without the appearance of any zero-divisors in a denominator. One can extend ${\mathbb{C}}_{\kappa}$ so that terms like $\frac{1}{i}$ are well-def\/ined (see \cite{Y79}). It is these zero divisors that play a crucial role in determining the null-cone structure for those Cayley--Klein geometries that are spacetimes. \begin{definition} Henceforward ${\mathbb{C}}$ will denote ${\mathbb{C}}_{{\kappa_2}}$, as it is the parameter ${\kappa_2}$ which determines the conformal structure of the Cayley--Klein geometry $S$ with parameters ${\kappa_1}$ and ${\kappa_2}$. \end{definition} \begin{theorem} The matrices $\frac{i}{2} {\sigma_1}$, $\frac{i}{2} {\sigma_2}$, and $\frac{1}{2i}{\sigma_3}$ are generators for the generalized Lie algebra $so(3)$, where $so(3)$ is represented as a subalgebra of the real matrix algebra $M(2,{\mathbb{C}})$, where \[ {\sigma_1} = \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right) , \qquad {\sigma_2} = \left( \begin{matrix} 0 & 1 \\ {\kappa_1} & 0 \end{matrix} \right) \qquad \mbox{and} \qquad {\sigma_3} = \left( \begin{matrix} 0 & i \\ -{\kappa_1} i & 0 \end{matrix} \right) . \] In fact, we will show that $\mathcal{K}$, $\mathcal{H}$, and $\mathcal{P}$ (the subgroups generated respectively by boosts, time and space translations) can be respectively represented by elements of $SL(2,{\mathbb{C}})$ of the form $e^{i\frac{\theta}{2}{\sigma_1}}$, $e^{i\frac{\alpha}{2}{\sigma_2}}$, and $e^{\frac{\beta}{2i}{\sigma_3}}$.
\end{theorem} Note that when ${\kappa_1} =1$ and ${\kappa_2} = 1$, we recover the Pauli spin matrices, though my indexing is dif\/ferent, and there is a sign change as well: recall that the Pauli spin matrices are typically given as \[ {\sigma_1} = \left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right) , \qquad {\sigma_2} = \left( \begin{matrix} 0 & -i \\ i & 0 \end{matrix} \right) \qquad \mbox{and} \qquad {\sigma_3} = \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right) . \] We will refer to ${\sigma_1}$, ${\sigma_2}$, and ${\sigma_3}$ as given in the statement of Theorem~1 as the generalized Pauli spin matrices. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{McRae-fig4e} \end{center} \caption{The unit sphere $\Sigma$ and the three complex planes ${\mathbb{C}}_{{\kappa_2}}$, ${\mathbb{C}}_{{\kappa_1}}$, and ${\mathbb{C}}_{{\kappa_1} {\kappa_2}}$.} \end{figure} The remainder of this section is devoted to proving the above theorem. The reader may f\/ind Fig.~4 helpful. The respective subgroups $\mathcal{K}$, $\mathcal{H}$, and $\mathcal{P}$ preserve the $z$, $x$, and $t$ axes as well as the ${\mathbb{C}}_{{\kappa_2}}$, ${\mathbb{C}}_{{\kappa_1}}$, and ${\mathbb{C}}_{{\kappa_1} {\kappa_2}}$ number planes, acting on these planes as rotations. Also, as these groups preserve the unit sphere $\Sigma = \{ (z, t, x) \; | \; z^2 + {\kappa_1} t^2 + {\kappa_1} {\kappa_2} x^2 = 1 \}$, they preserve the respective intersections of $\Sigma$ with the ${\mathbb{C}}_{{\kappa_2}}$, ${\mathbb{C}}_{{\kappa_1}}$, and ${\mathbb{C}}_{{\kappa_1} {\kappa_2}}$ number planes. 
These intersections are, respectively, circles of the form ${\kappa_1} w\bar{w} = 1$ (there is no intersection when ${\kappa_1} = 0$ or when ${\kappa_1} < 0$ and ${\kappa_2} > 0$), ${\mathsf{w}}\bar{{\mathsf{w}}} = 1$, and ${\mathfrak{w}}\bar{{\mathfrak{w}}} = 1$, where $w$, ${\mathsf{w}}$, and ${\mathfrak{w}}$ denote elements of ${\mathbb{C}}_{{\kappa_2}}$, ${\mathbb{C}}_{{\kappa_1}}$, and ${\mathbb{C}}_{{\kappa_1} {\kappa_2}}$ respectively. We will see in the next section how a general element of $SO(3)$ behaves in a~manner similar to the generators of $\mathcal{K}$, $\mathcal{H}$, and $\mathcal{P}$, utilizing the power of a Clif\/ford algebra. So we will let the plane $z = 0$ in ${\mathbb{R}}^3$ represent ${\mathbb{C}}$ (recall that ${\mathbb{C}}$ denotes ${\mathbb{C}}_{{\kappa_2}}$). We may then identify the points of $S$ with a region $\varsigma$ of ${\mathbb{C}}$ by centrally projecting $\Sigma$ from the point $(-1, 0, 0)$ onto the plane $z = 0$, projecting only those points $(z,t,x) \in \Sigma$ with non-negative $z$-values. The region $\varsigma$ may be open or closed or neither, bounded or unbounded, depending on the geometry of $S$. Such a construction is well known for both the projective and hyperbolic planes ${\bf RP}^2$ and~${\bf H}^2$ and gives rise to the conformal models of these geometries. We will see later on how the conformal structure on ${\mathbb{C}}$ agrees with that of $S$, and then how the simple hermitian metric (see Appendix~B) \[ ds^2 = \frac{dw d\overline{w}}{\left( 1 + {\kappa_1} \left| w \right|^2 \right)^2} \] gives the main metric $g_1$ for $S$. This metric can be used to help indicate the general character of the region $\varsigma$ for each of the nine types of Cayley--Klein geometries, as illustrated in Fig.~5. Note that antipodal points on the boundary of $\varsigma$ (if there is a boundary) are to be identif\/ied. 
For absolute-time spacetimes (when ${\kappa_2} = 0$) the subsidiary metric $g_2$ is given by \[ g_2 = \frac{dx^2}{\left( 1 + {\kappa_1} t_0^2 \right)^2} \] and is def\/ined on lines $w = t_0$ of simultaneous events. For all spacetimes, with Here-Now at the origin, the set of zero-divisors gives the null cone for that event. \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm]{McRae-fig5e} \end{center} \caption{The regions $\varsigma$.} \end{figure} Via this identif\/ication of points of $S$ with points of $\varsigma$, transformations of $S$ correspond to transformations of $\varsigma$. If the real parameters ${\kappa_1}$ and ${\kappa_2}$ are normalized to the values ${K_1}$ and ${K_2}$ so that \begin{equation*} K_i = \begin{cases} 1, &\text{if $\kappa_i > 0$}, \\ 0, &\text{if $\kappa_i = 0$} , \\ -1, &\text{if $\kappa_i < 0$} \end{cases} \end{equation*} then Yaglom~\cite{Y79} has shown that the linear isometries of ${\mathbb{R}}^3$ (with metric $ds^2 = dz^2 + {K_1} dt^2 + {K_1}{K_2} dx^2$) acting on $\bar{\Sigma}$ project to those M\"{o}bius transformations that preserve $\varsigma$, and so these M\"{o}bius transformations preserve cycles\footnote{Yaglom projects from the point $(z, t, x) = (-1, 0, 0)$ onto the plane $z = 1$ whereas we project onto the plane $z = 0$. But this hardly matters as cycles are invariant under dilations of ${\mathbb{C}}$.}: a cycle is a curve of constant curvature, corresponding to the intersection of a plane in ${\mathbb{R}}^3$ with $\bar{\Sigma}$. We would like to show that elements of $SO(3)$ project to M\"{o}bius transformations if the parameters are not normalized, and then to f\/ind a realization of $so(3)$ as a real subalgebra of $M(2,{\mathbb{C}})$. Given ${\kappa_1}$ and ${\kappa_2}$ we may def\/ine a linear isomorphism of ${\mathbb{R}}^3$ as indicated below. 
\begin{table}[htdp] \centering \begin{tabular}{l | l | l} \hline ${\kappa_1} \neq 0, \; {\kappa_2} \neq 0$ & ${\kappa_1} \neq 0, \; {\kappa_2} = 0$ & ${\kappa_1} = {\kappa_2} = 0$ \\ \hline \hline $z \mapsto z^{\prime} = z$ & $z \mapsto z^{\prime} = z$ & $z \mapsto z^{\prime} = z$ \\ $t \mapsto t^{\prime} = \frac{1}{\sqrt{ | {\kappa_1} | }} t$ & $t \mapsto t^{\prime} = \frac{1 }{\sqrt{ | {\kappa_1} | }} t$ & $t \mapsto t^{\prime} = t$ \\ \bsep{1ex}$x \mapsto x^{\prime} = \frac{1}{\sqrt{ | {\kappa_1} {\kappa_2} | }} x$ & $x \mapsto x^{\prime} = x$ & $x \mapsto x^{\prime} = x$ \\ \hline \end{tabular} \end{table} \noindent This transformation preserves the projection point $(-1, 0, 0)$ as well as the complex plane $z = 0$, and maps the projective quadric $\bar{\Sigma}$ for parameters ${K_1}$ and ${K_2}$ to that for ${\kappa_1}$ and ${\kappa_2}$, and so gives a correspondence between elements of $SO_{{K_1},{K_2}}(3)$ with those of ${SO_{\ka,\kb}(3)}$ as well as the projections of these elements. As the M\"{o}bius transformations of ${\mathbb{C}}$ are those transformations that preserve curves of the form \[ \mbox{Im} \frac{(w^{\prime}_1 - w^{\prime}_3)(w^{\prime}_2 - w^{\prime})}{(w^{\prime}_1 - w^{\prime})(w^{\prime}_2 - w^{\prime}_3)} = 0 \] (where $w^{\prime}_1$, $w^{\prime}_2$, and $w^{\prime}_3$ are three distinct points lying on the cycle), then if this form is invariant under the induced action of the linear isomorphism, then elements of $SO_{{\kappa_1},{\kappa_2}}(3)$ project to M\"{o}bius transformations of $\varsigma$. 
As a point $(z, t, x)$ is projected to the point $\left( 0, \frac{t}{z + 1}, \frac{x}{z + 1} \right)$ corresponding to the complex number $w = \frac{1}{z + 1}(t + \mathcal{I}x) \in {\mathbb{C}}_{{K_2}}$, if the linear transformation sends $(z, t, x)$ to $(z^{\prime}, t^{\prime}, x^{\prime})$, then it sends $w = \frac{1}{z + 1}(t + \mathcal{I}x) \in {\mathbb{C}}_{{K_2}}$ to $w^{\prime} = \frac{1}{z^{\prime} + 1}(t^{\prime} + ix^{\prime}) \in {\mathbb{C}}_{{\kappa_2}} = {\mathbb{C}}$, where $\mathcal{I}^2 = -{K_2}$ and $i^2 = -{\kappa_2}$. We can then write that \begin{table}[htdp] \centering \begin{tabular}{l | l | l} \hline ${\kappa_1} \neq 0, \; {\kappa_2} \neq 0$ & ${\kappa_1} \neq 0, \; {\kappa_2} = 0$ & ${\kappa_1} = {\kappa_2} = 0$ \\ \hline \hline \tsep{1ex} $w = \frac{1}{z+1} \left( t + \mathcal{I}x \right) \mapsto$ & $w = \frac{1}{z+1} \left( t + \mathcal{I}x \right) \mapsto$ & $w = \frac{1}{z+1} \left( t + \mathcal{I}x \right) \mapsto$ \\ $w^{\prime} = \frac{1}{z+1} \frac{1}{\sqrt{| {\kappa_1} |}} \left( t + \frac{\mathcal{I}}{\sqrt{| {\kappa_2} |}} x \right)$ & $w^{\prime} = \frac{1}{z+1} \left( \frac{1}{\sqrt{| {\kappa_1} |}} t + \mathcal{I}x \right)$ & $ w^{\prime} = w$ \bsep{1ex}\\ \hline \end{tabular} \end{table} \noindent And so \[ \mbox{Im} \frac{(w_1 - w_3)(w_2 - w)}{(w_1 - w)(w_2 - w_3)} = 0 \iff \mbox{Im} \frac{(w^{\prime}_1 - w^{\prime}_3)(w^{\prime}_2 - w^{\prime})}{(w^{\prime}_1 - w^{\prime})(w^{\prime}_2 - w^{\prime}_3)} = 0, \] as can be checked directly, and we then have that elements of $SO(3)$ project to M\"{o}bius transformations of $\varsigma$. The rotations $e^{\theta K}$ preserve the complex number plane $z = t + ix = 0$ and so correspond simply to the transformations of ${\mathbb{C}}$ given by $w \mapsto e^{i \theta}w$, as $e^{i \theta} = {C_{\kb}}(\theta) + i{S_{\kb}}(\theta)$, keeping in mind that $i^2 = -{\kappa_2}$. 
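For the normalized case ${\kappa_1} = {\kappa_2} = 1$ (so that $\Sigma$ is the round sphere and ${\mathbb{C}}$ is the ordinary complex plane) the claim that $e^{\theta K}$ projects to $w \mapsto e^{i\theta} w$ can be checked directly with the standard library (a sanity-check sketch of ours, not part of the original text):

```python
import cmath
import math

def project(z, t, x):
    """Central projection from (-1, 0, 0) onto the plane z = 0, as a complex number."""
    return complex(t, x) / (z + 1.0)

# Normalize k1 = k2 = 1, so Sigma is the sphere z^2 + t^2 + x^2 = 1 and C_{k2}
# is the ordinary complex plane.  Rotate a point of Sigma by e^{theta K}
# (z fixed, the (t, x)-plane rotated) and compare with multiplication by e^{i theta}.
theta = 0.6
z, t, x = (1.0 / math.sqrt(3.0),) * 3

t2 = math.cos(theta) * t - math.sin(theta) * x
x2 = math.sin(theta) * t + math.cos(theta) * x

assert abs(project(z, t2, x2) - cmath.exp(1j * theta) * project(z, t, x)) < 1e-12
```

The check works because $e^{\theta K}$ f\/ixes $z$, so the denominator $z + 1$ is unchanged while the numerator $t + ix$ is multiplied by $e^{i\theta}$.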
Now in order to express this rotation as a M\"{o}bius transformation, we can write \[ w \mapsto \frac{e^{\frac{\theta}{2}i}w + 0}{0w + e^{-\frac{\theta}{2}i}} .\] Since the assignment \[ \frac{aw + b}{cw + d} \mapsto \pm \left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \] relates the subgroup of M\"{o}bius transformations correspon\-ding to $SO(3)$ to the group $SL(2, {\mathbb{C}})$ of $2 \times 2$ matrices with entries in ${\mathbb{C}}$ and unit determinant (the inverse assignment being a two-to-one group homomorphism), each M\"{o}bius transformation is covered by two elements of $SL(2, {\mathbb{C}})$. So the rotations $e^{\theta K}$ correspond to the matrices \[ \pm \left( \begin{matrix} e^{\frac{\theta}{2}i} & 0 \\ 0 & e^{-\frac{\theta}{2}i} \end{matrix} \right) = \pm e^{ \frac{\theta}{2}i \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right)}. \] For future reference let us now def\/ine {\samepage \[ {\sigma_1} \equiv \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right) , \] where $\frac{i}{2}{\sigma_1}$ is then an element of the Lie algebra $so(3)$. } We now wish to see which elements of $SL(2, {\mathbb{C}})$ correspond to the motions $e^{\alpha H}$ and $e^{\beta P}$. The $x$-axis, the $zt$-coordinate plane, and the unit sphere $\Sigma$, are all preserved by $e^{\alpha H}$. So the $zt$-coordinate plane is given the complex structure ${\C_{\ka}} = \{ {\mathsf{w}} = z + it \, | \, i^2 = - {\kappa_1} \}$, for then the unit circle ${\mathsf{w}} \overline{{\mathsf{w}}} = 1$ gives the intersection of $\Sigma$ with ${\C_{\ka}}$, and the transformation induced on ${\C_{\ka}}$ by $e^{\alpha H}$ is simply given by ${\mathsf{w}} \mapsto e^{i \alpha} {\mathsf{w}}$. Similarly the transformation induced by $e^{\beta P}$ on ${\C_{\ka \kb}} = \{ {\mathfrak{w}} = z + ix \; | \; i^2 = -{\kappa_1} {\kappa_2} \}$ is given by ${\mathfrak{w}} \mapsto e^{i \beta} {\mathfrak{w}}$.
In order to explicitly determine the projection of the rotation ${\mathsf{w}} \mapsto e^{i \alpha} {\mathsf{w}}$ of the unit circle in~${\C_{\ka}}$ and also that of the rotation ${\mathfrak{w}} \mapsto e^{i \beta} {\mathfrak{w}}$ of the unit circle in ${\C_{\ka \kb}}$, note that the projection point $(z, t, x) = (-1, 0, 0)$ lies in either unit circle and that projection sends a point on the unit circle (save for the projection point itself) to a point on the imaginary axis as follows: \[ {\mathsf{w}} = e^{i \phi} \mapsto i{T_{\ka}}\left(\frac{\phi}{2}\right),\qquad {\mathfrak{w}} = e^{i \phi} \mapsto i{T_{\ka \kb}}\left(\frac{\phi}{2}\right) \] (where $T_{\kappa}$ is the tangent function) for \[ T_{\kappa}\left(\frac{\mu}{2}\right) = \frac{S_{\kappa}(\mu)}{C_{\kappa}(\mu) + 1} ,\] noting that a point $a + ib$ on the unit circle $w \overline{w} = 1$ of the complex plane ${\mathbb{C}}_{\kappa}$ can be written as $a + ib = e^{i \psi} = {C_{\kappa}}(\psi) + i{S_{\kappa}}(\psi)$. So the rotations $e^{\alpha H}$ and $e^{\beta P}$ induce the respective transformations \[ i{T_{\ka}}\left(\frac{\phi}{2}\right) \mapsto i{T_{\ka}}\left(\frac{\phi + \alpha}{2}\right),\qquad i{T_{\ka \kb}}\left(\frac{\phi}{2}\right) \mapsto i{T_{\ka \kb}}\left(\frac{\phi + \beta}{2}\right) \] on the imaginary axes. We know that such transformations of either imaginary or real axes can be extended to M\"{o}bius transformations, and in fact uniquely determine such M\"{o}bius maps.
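The half-angle identity $T_{\kappa}(\mu/2) = S_{\kappa}(\mu)/(C_{\kappa}(\mu)+1)$ used above holds for every sign of $\kappa$, as the following sketch (ours, added for illustration) verifies numerically:

```python
import math

def C(kappa, phi):
    if kappa > 0:
        return math.cos(math.sqrt(kappa) * phi)
    if kappa < 0:
        return math.cosh(math.sqrt(-kappa) * phi)
    return 1.0

def S(kappa, phi):
    if kappa > 0:
        return math.sin(math.sqrt(kappa) * phi) / math.sqrt(kappa)
    if kappa < 0:
        return math.sinh(math.sqrt(-kappa) * phi) / math.sqrt(-kappa)
    return phi

def T(kappa, phi):
    """Generalized tangent T = S / C."""
    return S(kappa, phi) / C(kappa, phi)

# Half-angle identity T(mu/2) = S(mu) / (C(mu) + 1) for every sign of kappa:
# tan, the identity map, and tanh versions all at once.
for kappa in (1.5, 0.0, -1.5):
    for mu in (0.3, 0.9, -0.6):
        assert abs(T(kappa, mu / 2) - S(kappa, mu) / (C(kappa, mu) + 1.0)) < 1e-12
```

For $\kappa = 1$ this is the familiar $\tan(\mu/2) = \sin\mu/(\cos\mu + 1)$, and for $\kappa = -1$ its hyperbolic analogue.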
For example, if ${\mathsf{w}} = i{T_{\ka}} \left( \frac{\phi}{2} \right)$, then we have that \[ {\mathsf{w}} \mapsto \frac{{\mathsf{w}} + i {T_{\ka}} \left( \frac{\alpha}{2} \right)} {1 - \frac{ {\kappa_1} {\mathsf{w}}}{i} {T_{\ka}} \left( \frac{\alpha}{2} \right)} \] or \[ {\mathsf{w}} \mapsto \frac{{C_{\ka}} \left( \frac{\alpha}{2} \right) {\mathsf{w}} + i {S_{\ka}} \left( \frac{\alpha}{2} \right)}{-\frac{{\kappa_1}}{i} {S_{\ka}} \left( \frac{\alpha}{2} \right){\mathsf{w}} + {C_{\ka}} \left( \frac{\alpha}{2} \right)} \] with corresponding matrix representation \[ \pm \left( \begin{matrix} {C_{\ka}} \left( \frac{\alpha}{2} \right) & i {S_{\ka}} \left( \frac{\alpha}{2} \right) \vspace{1mm}\\ i {S_{\ka}} \left( \frac{\alpha}{2} \right) & {C_{\ka}} \left( \frac{\alpha}{2} \right) \end{matrix} \right) \] in $SL(2, {\C_{\ka}})$, where we have applied the trigonometric identity\footnote{For Minkowski spacetimes this trigonometric identity is the well-known formula for the addition of rapidities.} \[ T_{\kappa}(\mu \pm \psi) = \frac{T_{\kappa}(\mu) \pm T_{\kappa}(\psi)} {1 \mp \kappa T_{\kappa}(\mu) T_{\kappa}(\psi)}. \] However, it is not these M\"{o}bius transformations that we are after, but those corresponding transformations of ${\mathbb{C}}$. Now a transformation of the imaginary axis (the $x$-axis) of ${\C_{\ka \kb}}$ corresponds to a transformation of the imaginary axis of ${\mathbb{C}}$ (also the $x$-axis) while a transformation of the imaginary axis of ${\C_{\ka}}$ (the $t$-axis) corresponds to a transformation of the real axis of ${\mathbb{C}}$ (also the $t$-axis).
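The generalized tangent addition formula just applied can be confirmed numerically for elliptic, parabolic, and hyperbolic trigonometry alike (a sketch of ours; for $\kappa < 0$ this is the addition-of-rapidities rule):

```python
import math

def C(kappa, phi):
    if kappa > 0:
        return math.cos(math.sqrt(kappa) * phi)
    if kappa < 0:
        return math.cosh(math.sqrt(-kappa) * phi)
    return 1.0

def S(kappa, phi):
    if kappa > 0:
        return math.sin(math.sqrt(kappa) * phi) / math.sqrt(kappa)
    if kappa < 0:
        return math.sinh(math.sqrt(-kappa) * phi) / math.sqrt(-kappa)
    return phi

def T(kappa, phi):
    return S(kappa, phi) / C(kappa, phi)

# Addition formula T(mu + psi) = (T(mu) + T(psi)) / (1 - kappa T(mu) T(psi)).
mu, psi = 0.4, 0.3
for kappa in (1.0, 0.0, -1.0):
    lhs = T(kappa, mu + psi)
    rhs = (T(kappa, mu) + T(kappa, psi)) / (1.0 - kappa * T(kappa, mu) * T(kappa, psi))
    assert abs(lhs - rhs) < 1e-12
```

The parabolic case $\kappa = 0$ degenerates to plain addition, $T_0(\mu + \psi) = \mu + \psi$.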
For this reason, values on the $x$-axis, which are imaginary for both the ${\mathbb{C}}_{{\kappa_1} {\kappa_2}}$ as well as the ${\mathbb{C}}$ plane, correspond as \[ i {T_{\ka \kb}} \left( \frac{\phi}{2} \right) = i \frac{1}{\sqrt{{\kappa_1}}} {T_{\kb}} \left( \sqrt{{\kappa_1}} \frac{\phi}{2} \right) \] if ${\kappa_1} > 0$, \[ i {T_{\ka \kb}} \left( \frac{\phi}{2} \right) = i \frac{1}{\sqrt{-{\kappa_1}}} T_{-{\kappa_2}} \left( \sqrt{-{\kappa_1}} \frac{\phi}{2} \right) \] if ${\kappa_1} < 0$, and \[ i {T_{\ka \kb}} \left( \frac{\phi}{2} \right) = i \left( \frac{\phi}{2} \right) \] if ${\kappa_1} = 0$, as can be seen by examining the power series representation for $T_{\kappa}$. The situation for the rotation $e^{i \alpha}$ is similar. We can then compute the elements of $SL(2,{\mathbb{C}})$ corresponding to $e^{\alpha H}$ and $e^{\beta P}$ as given in Tables~13 and~14 in Appendix~C. In all cases we have the simple result that those elements of $SL(2, {\mathbb{C}})$ corresponding to $e^{\alpha H}$ can be written as $e^{\frac{\alpha}{2i} {\sigma_3}}$ and those for $e^{\beta P}$ as $e^{i \frac{\beta}{2} {\sigma_2}}$, where \[ {\sigma_2} \equiv \left( \begin{matrix} 0 & 1 \\ {\kappa_1} & 0 \end{matrix} \right) \qquad \mbox{and} \qquad {\sigma_3} \equiv \left( \begin{matrix} 0 & i \\ -{\kappa_1} i & 0 \end{matrix} \right) . \] Thus $\frac{i}{2} {\sigma_1}$, $\frac{i}{2} {\sigma_2}$, and $\frac{1}{2i}{\sigma_3}$ are generators for the generalized Lie algebra $so(3)$, a subalgebra of the real matrix algebra $M(2,{\mathbb{C}})$. \section[The Clifford algebra $Cl_3$]{The Clif\/ford algebra $\boldsymbol{Cl_3}$} \begin{definition} Let $Cl_3$ be the 8-dimensional real Clif\/ford algebra that is identif\/ied with $M(2,{\mathbb{C}})$ as indicated by Table~11, where ${\mathbb{C}}$ denotes the generalized complex numbers ${\mathbb{C}}_{{\kappa_2}}$.
Here we identify the scalar $1$ with the identity matrix and the volume element $i$ with the $2 \times 2$ identity matrix multiplied by the complex scalar $i$: in this case $\frac{1}{i}{\sigma_3}$ can be thought of as the $2 \times 2$ matrix $\left( \begin{matrix} 0 & 1 \\ -{\kappa_1} & 0 \end{matrix} \right)$. We will also identify the generalized Pauli spin matrices ${\sigma_1}$, ${\sigma_2}$, and ${\sigma_3}$ with the vectors $\hat{i} = \langle 1, 0, 0 \rangle$, $\hat{j} = \langle 0, 1, 0 \rangle$, and $\hat{k} = \langle 0, 0, 1 \rangle$ respectively of the vector space ${\mathbb{R}}^3 = \{ (z, t, x) \}$ given the Cayley--Klein inner product\footnote{We will use the symbol $\hat{v}$ to denote a vector $v$ of length one under the standard inner product.}. \end{definition} \begin{table}[t] \centering \caption{The basis elements for $Cl_3$.} \vspace{1mm} \begin{tabular}{r c l} \hline Subspace of & & with basis \\ \hline \hline scalars & ${\mathbb{R}}$ & 1 \\ vectors & ${\mathbb{R}}^3$ & ${\sigma_1}, {\sigma_2}, {\sigma_3}$ \\ bivectors & $\bigwedge^2 {\mathbb{R}}^3$ & $i {\sigma_1}, i {\sigma_2}, \frac{1}{i} {\sigma_3}$ \\ volume elements & $\bigwedge^3 {\mathbb{R}}^3$ & $i$ \\ \hline \end{tabular} \end{table} \begin{proposition} Let $Cl_3$ be the Clifford algebra given by Definition~{\rm 3}. \begin{itemize}\itemsep=0pt \item[(i)] The Clifford product $\sigma_i^2$ gives the square of the length of the vector $\sigma_i$ under the Cayley--Klein inner product. \item[(ii)] The center Cen($Cl_3$) of $Cl_3$ is given by ${\mathbb{R}} \oplus \bigwedge^3 {\mathbb{R}}^3$, the subspace of scalars and volume elements. \item[(iii)] The generalized Lie algebra $so(3)$ is isomorphic to the space of bivectors $\bigwedge^2 {\mathbb{R}}^3$, where \[ H = \frac{1}{2i} {\sigma_3} , \qquad P = \frac{i}{2} {\sigma_2} , \qquad \mbox{and} \qquad K = \frac{i}{2} {\sigma_1} .
\] \item[(iv)] If $\hat{n} = \langle n^1, n^2, n^3 \rangle$ and $\vec{\sigma} = \langle i{\sigma_1}, i{\sigma_2}, {\sigma_3}/i \rangle$, then we will let ${\hat{n} \cdot \vec{\sigma}}$ denote the bivector $n^1 i {\sigma_1} + n^2 i {\sigma_2} + n^3 \frac{1}{i} {\sigma_3}$. This bivector is simple, and the parallel vectors $i{\hat{n} \cdot \vec{\sigma}}$ and $\frac{1}{i} {\hat{n} \cdot \vec{\sigma}}$ are perpendicular to any plane element represented by ${\hat{n} \cdot \vec{\sigma}}$. Let $\eta$ denote the line through the origin that is determined by $i{\hat{n} \cdot \vec{\sigma}}$ or $\frac{1}{i} {\hat{n} \cdot \vec{\sigma}}$. \item[(v)] The generalized Lie group $SO(3)$ is also represented within $Cl_3$, for if $a$ is the vector $a^1{\sigma_1} + a^2{\sigma_2} + a^3{\sigma_3}$, then the linear transformation of ${\mathbb{R}}^3$ defined by the inner automorphism \[ a \mapsto e^{-\frac{\phi}{2} \hat{n} \cdot \vec{\sigma}} a \, e^{\frac{\phi}{2} \hat{n} \cdot \vec{\sigma}} \] faithfully represents an element of $SO(3)$ as it preserves vector lengths given by the Cayley--Klein inner product, and is in fact a rotation, rotating the vector $\langle a^1, a^2, a^3 \rangle$ about the axis $\eta$ through the angle $\phi$. In this way we see that the spin group is generated by the elements \[ e^{\frac{\theta}{2} i{\sigma_1}}, \qquad e^{\frac{\beta}{2} i{\sigma_2}},\qquad \mbox{and} \qquad e^{\frac{\alpha}{2i} {\sigma_3}} . \] \item[(vi)] Bivectors ${\hat{n} \cdot \vec{\sigma}}$ act as imaginary units as well as generators of rotations in the oriented planes they represent. Let $\varkappa$ be the scalar $- \left( {\hat{n} \cdot \vec{\sigma}} \right)^2 $. 
If $a$ lies in an oriented plane determined by the bivector ${\hat{n} \cdot \vec{\sigma}}$, where this plane is given the complex structure of ${\mathbb{C}}_{\varkappa}$, then $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ is simply the vector $\langle a^1, a^2, a^3 \rangle$ rotated by the angle $\phi$ in the complex plane ${\mathbb{C}}_{\varkappa}$, where $\iota^2 = -\varkappa$. So this rotation is given by unit complex multiplication. \end{itemize} \end{proposition} The goal of this section is to prove Proposition~1. We can easily compute the following: \begin{gather*} {\sigma_1}^2 = 1, \qquad {\sigma_2}^2 = {\kappa_1} , \qquad {\sigma_3}^2 = {\kappa_1} {\kappa_2} , \\ {\sigma_3}{\sigma_2} = -{\sigma_2}{\sigma_3} = {\kappa_1} i {\sigma_1}, \qquad {\sigma_1}{\sigma_3} = -{\sigma_3}{\sigma_1} = i{\sigma_2}, \\ {\sigma_1}{\sigma_2} = -{\sigma_2}{\sigma_1} = \frac{1}{i} {\sigma_3} , \qquad {\sigma_1} {\sigma_2} {\sigma_3} = -{\kappa_1} i. \end{gather*} Recalling that ${\mathbb{R}}^3$ is given the Cayley--Klein inner product, we see that $\sigma_i^2$ gives the square of the length of the vector $\sigma_i$. Note that when ${\kappa_1} = 0$, $Cl_3$ is not generated by the vectors.
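These product rules can be verified directly in the matrix representation. A sketch with sympy, treating the volume element as a formal root $i = \sqrt{-{\kappa_2}}$ (so that $i^2 = -{\kappa_2}$), with ${\sigma_1} = \left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$, ${\sigma_2}$ and ${\sigma_3}$ as displayed earlier, and $\frac{1}{i}{\sigma_3}$ taken to be the matrix $\left(\begin{smallmatrix} 0 & 1 \\ -{\kappa_1} & 0 \end{smallmatrix}\right)$ as in Definition~3:

```python
import sympy as sp

k1, k2 = sp.symbols('kappa1 kappa2')
i = sp.sqrt(-k2)  # formal volume element: i**2 == -kappa2

s1 = sp.Matrix([[1, 0], [0, -1]])
s2 = sp.Matrix([[0, 1], [k1, 0]])
s3 = sp.Matrix([[0, i], [-k1 * i, 0]])
s3_over_i = sp.Matrix([[0, 1], [-k1, 0]])   # the matrix standing in for (1/i) sigma_3
Id = sp.eye(2)

def is_zero(m):
    return m.applyfunc(sp.simplify) == sp.zeros(2, 2)

assert is_zero(s1 * s1 - Id)                  # sigma_1^2 = 1
assert is_zero(s2 * s2 - k1 * Id)             # sigma_2^2 = kappa_1
assert is_zero(s3 * s3 - k1 * k2 * Id)        # sigma_3^2 = kappa_1 kappa_2
assert is_zero(s3 * s2 + s2 * s3)             # sigma_3 sigma_2 = -sigma_2 sigma_3
assert is_zero(s3 * s2 - k1 * i * s1)         # sigma_3 sigma_2 = kappa_1 i sigma_1
assert is_zero(s1 * s3 - i * s2)              # sigma_1 sigma_3 = i sigma_2
assert is_zero(s1 * s2 - s3_over_i)           # sigma_1 sigma_2 = (1/i) sigma_3
assert is_zero(s1 * s2 * s3 + k1 * i * Id)    # sigma_1 sigma_2 sigma_3 = -kappa_1 i
```

The checks pass with ${\kappa_1}$ and ${\kappa_2}$ kept fully symbolic, so they cover all nine Cayley--Klein cases at once.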
The center Cen($Cl_3$) of $Cl_3$ is given by ${\mathbb{R}} \oplus \bigwedge^3 {\mathbb{R}}^3$, and we can check directly that if \[ H \equiv \frac{1}{2i} {\sigma_3} , \qquad P \equiv \frac{i}{2} {\sigma_2} , \qquad \mbox{and} \qquad K \equiv \frac{i}{2} {\sigma_1} , \] then we have the following commutators: \begin{gather*} \left[ H, P \right] = HP - PH = \frac{1}{4}\left( {\sigma_3}{\sigma_2} - {\sigma_2}{\sigma_3} \right) = \frac{{\kappa_1} i {\sigma_1}}{2} = {\kappa_1} K ,\\ \left[ K, H \right] = KH - HK = \frac{1}{4}\left( {\sigma_1}{\sigma_3} - {\sigma_3}{\sigma_1} \right) = \frac{i {\sigma_2}}{2} = P, \\ \left[ K, P \right] = KP - PK = \frac{i^2}{4}\left( {\sigma_1}{\sigma_2} - {\sigma_2}{\sigma_1} \right) = \frac{i {\sigma_3}}{2} = - {\kappa_2} H. \end{gather*} So the Lie algebra $so(3)$ is isomorphic to the space of bivectors $\bigwedge^2 {\mathbb{R}}^3$. The product of two vectors $a = a^1{\sigma_1} + a^2{\sigma_2} + a^3{\sigma_3}$ and $b = b^1{\sigma_1} + b^2{\sigma_2} + b^3{\sigma_3}$ in $Cl_3$ can be expressed as $ab = a \cdot b + a \wedge b = \frac{1}{2}(ab + ba) + \frac{1}{2}(ab - ba)$, where $a \cdot b = \frac{1}{2}(ab + ba) = a^1 b^1 + {\kappa_1} a^2 b^2 + {\kappa_1} {\kappa_2} a^3 b^3$ is the Cayley--Klein inner product and the wedge product is given by \[ a \wedge b = \frac{1}{2}(ab - ba) = \left| \begin{matrix} -{\kappa_1} i {\sigma_1} & -i {\sigma_2} & \frac{1}{i} {\sigma_3} \\ a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \end{matrix} \right|, \] so that $ab$ is the sum of a scalar and a bivector: here $| \star |$ denotes the usual $3 \times 3$ determinant. By the properties of the determinant, if $e \wedge f = g \wedge h$ and ${\kappa_1} \neq 0$, then the vectors $e$ and $f$ span the same oriented plane as the vectors $g$ and $h$. When ${\kappa_1} = 0$ the bivector ${\hat{n} \cdot \vec{\sigma}}$ is no longer simple in the usual way.
For example, for the Galilean kinematical group (also known as the Heisenberg group) where ${\kappa_1} = 0$ and ${\kappa_2} = 0$, we have that both ${\sigma_1} \wedge {\sigma_3} = i {\sigma_2}$ and $\left( {\sigma_1} + {\sigma_2} \right) \wedge {\sigma_3} = i {\sigma_2}$, so that the bivector $i {\sigma_2}$ represents plane elements that do not all lie in the same plane\footnote{There is some interesting asymmetry for Galilean spacetime, in that the perpendicular to a timelike geodesic through a given point is uniquely def\/ined as the lightlike geodesic that passes through that point, and this lightlike geodesic then has no unique perpendicular, since all timelike geodesics are perpendicular to it.}. Recalling that ${\sigma_1}$, ${\sigma_2}$, and ${\sigma_3}$ correspond to the vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ respectively, we observe that the subgroup $\mathcal{P}$ of the Galilean group f\/ixes the $t$-axis and preserves both of these planes, inducing the same kind of rotation upon each of them: for the plane spanned by $\hat{i}$ and $\hat{k}$ we have that \[ e^{\beta P}: \left( \begin{matrix} \hat{i} \\ \hat{k} \end{matrix} \right) \mapsto \left( \begin{matrix} \hat{i} + \beta \hat{k} \\ \hat{k} \end{matrix} \right) \] while for the plane spanned by $\hat{i} + \hat{j}$ and $\hat{k}$ we have that \[ e^{\beta P}: \left( \begin{matrix} \hat{i} + \hat{j} \\ \hat{k} \end{matrix} \right) \mapsto \left( \begin{matrix} \hat{i} + \hat{j} + \beta \hat{k} \\ \hat{k} \end{matrix} \right). \] If we give either plane the complex structure of the dual numbers so that $i^2 = 0$, then the rotation is given by simply multiplying vectors in the plane by the unit complex number $e^{\beta i}$. We will see below that this kind of construction holds generally.
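The dual-number description of this shear can be made concrete: since $i^2 = 0$, the exponential series terminates and $e^{\beta i} = 1 + \beta i$, and multiplying a point $t + ix$ of the plane by this unit dual number sends $x \mapsto x + \beta t$. A small sympy sketch (the pair representation of dual numbers is our own device):

```python
import sympy as sp

# A dual number a + b*eps with eps**2 = 0, stored as the pair (a, b).
def dual_mul(x, y):
    a, b = x
    c, d = y
    return (sp.expand(a * c), sp.expand(a * d + b * c))  # the eps**2 cross term vanishes

t, x, beta = sp.symbols('t x beta')

# e^{beta*eps} = 1 + beta*eps, since the exponential series terminates when eps**2 = 0.
boost = (sp.Integer(1), beta)
point = (t, x)

assert dual_mul(boost, point) == (t, x + beta * t)   # the Galilean shear x -> x + beta*t
```

Multiplication by a unit dual number thus reproduces exactly the shear $e^{\beta P}$ induces on either plane above.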
What we need for our construction below is that any bivector can be meaningfully expressed as $e \wedge f$ for some vectors $e$ and $f$, so that the bivector represents at least one plane element: we will discuss the meaning of the magnitude and orientation of the plane element at the end of the section. If the bivector represents multiple plane elements spanning distinct planes, so much the better. If $\hat{n} = \langle n^1, n^2, n^3 \rangle$ and $\vec{\sigma} = \langle i{\sigma_1}, i{\sigma_2}, {\sigma_3}/i \rangle$, then we will let ${\hat{n} \cdot \vec{\sigma}}$ denote the bivector $B = n^1 i {\sigma_1} + n^2 i {\sigma_2} + n^3 \frac{1}{i} {\sigma_3}$. Now if \begin{gather*} a = n^1 {\sigma_3} + {\kappa_1} n^3 {\sigma_1} , \qquad b = -n^1 {\sigma_2} + {\kappa_1} n^2 {\sigma_1},\qquad c = n^3 {\sigma_2} + n^2 {\sigma_3}, \end{gather*} then \begin{gather*} a \wedge c = {\kappa_1} n^3 {\hat{n} \cdot \vec{\sigma}}, \qquad b \wedge a = {\kappa_1} n^1 {\hat{n} \cdot \vec{\sigma}}, \qquad b \wedge c = {\kappa_1} n^2 {\hat{n} \cdot \vec{\sigma}}, \end{gather*} where at least one of the bivectors $n^i {\hat{n} \cdot \vec{\sigma}}$ is non-zero as ${\hat{n} \cdot \vec{\sigma}}$ is non-zero. If ${\kappa_1} = 0$ and $n^1 = 0$, then ${\sigma_1} \wedge c = {\hat{n} \cdot \vec{\sigma}}$. However, if both ${\kappa_1} = 0$ and $n^1 \neq 0$, then it is impossible to have $e \wedge f = {\hat{n} \cdot \vec{\sigma}}$: in this context we may simply replace the expression ${\hat{n} \cdot \vec{\sigma}}$ with the expression ${\sigma_3} \wedge {\sigma_2}$ whenever ${\kappa_1} = 0$ and $n^1 \neq 0$ (as we will see at the end of this section, we could just as well replace ${\hat{n} \cdot \vec{\sigma}}$ with any non-zero multiple of ${\sigma_3} \wedge {\sigma_2}$). 
The justif\/ication for this is given by letting ${\kappa_1} \rightarrow 0$, for then \[ e \wedge f = \left( \sqrt{|{\kappa_1}|} n^2 {\sigma_1} - \frac{n^1}{\sqrt{|{\kappa_1}|}} {\sigma_2} \right) \wedge \left( \sqrt{|{\kappa_1}|} \frac{n^3}{n^1}{\sigma_1} + \frac{1}{\sqrt{|{\kappa_1}|}} {\sigma_3} \right) = {\hat{n} \cdot \vec{\sigma}} \] shows that the plane spanned by the vectors $e$ and $f$ tends to the $xt$-coordinate plane. We will see below how each bivector ${\hat{n} \cdot \vec{\sigma}}$ corresponds to an element of $SO(3)$ that preserves any oriented plane corresponding to ${\hat{n} \cdot \vec{\sigma}}$: in the case where ${\kappa_1} = 0$ and $n^1 \neq 0$, we will then have that this element preserves the $tx$-coordinate plane, which is all that we require. It is interesting to note that the parallel vectors $i(a \wedge b)$ and $\frac{1}{i} (a \wedge b)$ (when def\/ined) are perpendicular to both $a$ and $b$ with respect to the Cayley--Klein inner product, as can be checked directly. However, due to the possible degeneracy of the Cayley--Klein inner product, there may not be a unique direction that is perpendicular to any given plane. The vector $i {\hat{n} \cdot \vec{\sigma}} = -{\kappa_2} n^1 {\sigma_1} - {\kappa_2} n^2 {\sigma_2} + n^3 {\sigma_3}$ is non-zero and perpendicular to any plane element corresponding to ${\hat{n} \cdot \vec{\sigma}}$ except when both ${\kappa_2} = 0$ and $n^3 = 0$, in which case $i {\hat{n} \cdot \vec{\sigma}}$ is the zero vector. In this last case the vector $\frac{1}{i} {\hat{n} \cdot \vec{\sigma}} = n^1 {\sigma_1} + n^2 {\sigma_2}$ gives a non-zero normal vector. In either case, let $\eta$ denote the axis through the origin that contains either of these normal vectors. Before we continue, let us reexamine those elements of $SO(3)$ that generate the subgroups $\mathcal{K}$,~$\mathcal{P}$, and $\mathcal{H}$.
Here the respective axes of rotation (parallel to ${\sigma_1}$, ${\sigma_2}$, and ${\sigma_3}$) for the generators $e^{\theta K}$, $e^{\beta P}$, and $e^{\alpha H}$ are given by $\eta$, where ${\hat{n} \cdot \vec{\sigma}}$ is given by $i {\sigma_1}$ (or ${\sigma_3} \wedge {\sigma_2}$ by convention), $i {\sigma_2} = {\sigma_1} \wedge {\sigma_3}$, and $\frac{1}{i} {\sigma_3} = {\sigma_1} \wedge {\sigma_2}$. These plane elements are preserved under the respective rotations. In fact, for each of these planes the rotations are given simply by multiplication by a~unit complex number, as the $zt$-coordinate plane is identif\/ied with ${\C_{\ka}}$, the $zx$-coordinate plane with ${\C_{\ka \kb}}$, and the $tx$-coordinate plane with ${\C_{\kb}}$ as indicated in Fig.~4. Note that the basis bivectors act as imaginary units in $Cl_3$ since \[ \left( \frac{1}{i} {\sigma_3} \right)^2 = -{\kappa_1}, \qquad \left( i {\sigma_2} \right)^2 = -{\kappa_1} {\kappa_2}, \qquad \mbox{and} \qquad \left( i {\sigma_1} \right)^2 = -{\kappa_2}. \] The product of a vector $a$ and a bivector $B$ can be written as $aB = a \dashv B + a \wedge B = \frac{1}{2}(aB - Ba) + \frac{1}{2}(aB + Ba)$ so that $aB$ is the sum of a vector $a \dashv B$ (the left contraction of $a$ by~$B$) and a volume element $a \wedge B$. Let $B = b \wedge c$ for some vectors $b$ and $c$. 
Then \[ 2a \dashv (b \wedge c) = a (b \wedge c) - (b \wedge c)a = \frac{1}{2} a ( bc - cb) - \frac{1}{2}(bc - cb)a \] so that \begin{gather*} 4a \dashv (b \wedge c) = cba + abc - acb - bca \\ \phantom{4a \dashv (b \wedge c)}{} = c(b \cdot a + b \wedge a) + (a \cdot b + a \wedge b)c - (a \cdot c + a \wedge c)b - b(c \cdot a + c \wedge a) \\ \phantom{4a \dashv (b \wedge c)}{} = 2(b \cdot a)c - 2(c \cdot a)b + c(b \wedge a) + (a \wedge b)c - (a \wedge c)b - b(c \wedge a) \\ \phantom{4a \dashv (b \wedge c)}{} = 2(b \cdot a)c - 2(c \cdot a)b + c(b \wedge a) - (b \wedge a)c + b(a \wedge c) - (a \wedge c)b \\ \phantom{4a \dashv (b \wedge c)}{} = 2(b \cdot a)c - 2(c \cdot a) b + 2 \left[ c \dashv (b \wedge a) + b \dashv (a \wedge c) \right] \\ \phantom{4a \dashv (b \wedge c)}{} = 2(b \cdot a)c - 2(c \cdot a) b - 2 a \dashv (c \wedge b) \\ \phantom{4a \dashv (b \wedge c)}{} = 2(b \cdot a)c - 2(c \cdot a) b + 2 a \dashv (b \wedge c) \end{gather*} where we have used the Jacobi identity \[ c \dashv (b \wedge a) + b \dashv (a \wedge c) + a \dashv (c \wedge b) = 0, \] recalling that $M(2,{\mathbb{C}})$ is a matrix algebra where the commutator is given by left contraction. Thus \begin{gather*} 2a \dashv (b \wedge c) = 2(b \cdot a)c - 2(c \cdot a)b \end{gather*} and so \[ a \dashv (b \wedge c) = (a \cdot b) c - (a \cdot c)b. \] So the vector $a \dashv B$ lies in the plane determined by the plane element $b \wedge c$. Because of the possible degeneracy of the Cayley--Klein metric, it is possible for a non-zero vector $b$ that $b \dashv (b \wedge c) = 0$. We will show that if $a$ is the vector $a^1{\sigma_1} + a^2{\sigma_2} + a^3{\sigma_3}$, then the linear transformation of ${\mathbb{R}}^3$ def\/ined by \[ a \mapsto e^{-\frac{\phi}{2} \hat{n} \cdot \vec{\sigma}} a \, e^{\frac{\phi}{2} \hat{n} \cdot \vec{\sigma}} \] faithfully represents an element of $SO(3)$ (and all elements are thus represented). 
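Before continuing, the contraction identity $a \dashv (b \wedge c) = (a \cdot b) c - (a \cdot c)b$ just derived can be confirmed symbolically. A sketch with sympy, again writing the volume element as the formal root $i = \sqrt{-{\kappa_2}}$ and encoding $a \cdot b = \frac{1}{2}(ab+ba)$, $a \wedge b = \frac{1}{2}(ab-ba)$ and $a \dashv B = \frac{1}{2}(aB - Ba)$ as matrix operations:

```python
import sympy as sp

k1, k2 = sp.symbols('kappa1 kappa2')
i = sp.sqrt(-k2)  # formal volume element with i**2 == -kappa2
s1 = sp.Matrix([[1, 0], [0, -1]])
s2 = sp.Matrix([[0, 1], [k1, 0]])
s3 = sp.Matrix([[0, i], [-k1 * i, 0]])

def vec(c):
    """The vector c[0] sigma_1 + c[1] sigma_2 + c[2] sigma_3 as a matrix."""
    return c[0] * s1 + c[1] * s2 + c[2] * s3

a_, b_, c_ = sp.symbols('a1:4'), sp.symbols('b1:4'), sp.symbols('c1:4')
a, b, c = vec(a_), vec(b_), vec(c_)

def dot(x, y):       # Cayley-Klein inner product: the scalar part of (xy + yx)/2
    return sp.expand(((x * y + y * x) / 2)[0, 0])

def wedge(x, y):
    return (x * y - y * x) / 2

def lcontr(x, B):    # left contraction of a vector x by a bivector B
    return (x * B - B * x) / 2

lhs = lcontr(a, wedge(b, c))
rhs = dot(a, b) * c - dot(a, c) * b
assert (lhs - rhs).applyfunc(sp.expand) == sp.zeros(2, 2)
# and the scalar product really is a.b = a1 b1 + k1 a2 b2 + k1 k2 a3 b3
assert sp.expand(dot(a, b) - (a_[0]*b_[0] + k1*a_[1]*b_[1] + k1*k2*a_[2]*b_[2])) == 0
```

The identity holds with all coordinates and both constants symbolic, which is exactly what the derivation above establishes.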
In this way we see that the spin group is generated by the elements \[ e^{\frac{\theta}{2} i{\sigma_1}}, e^{\frac{\beta}{2} i{\sigma_2}}, \qquad \mbox{and} \qquad e^{\frac{\alpha}{2i} {\sigma_3}}. \] First, let us see how, using this construction, the vectors ${\sigma_1}$, ${\sigma_2}$, and ${\sigma_3}$ (and hence the bivectors $i {\sigma_1}$, $i {\sigma_2}$, and $\frac{1}{i} {\sigma_3}$) correspond to rotations of the coordinate axes (and hence coordinate planes) given by $e^{\theta K}$, $e^{\beta P}$, and $e^{\alpha H}$ respectively. Since \begin{gather*} e^{\frac{\theta}{2} i {\sigma_1}} = {C_{\kb}} \left( \frac{\theta}{2} \right) + i {S_{\kb}} \left( \frac{\theta}{2} \right) {\sigma_1}, \qquad e^{\frac{\beta}{2} i {\sigma_2}} = {C_{\ka \kb}} \left( \frac{\beta}{2} \right) + i {S_{\ka \kb}} \left( \frac{\beta}{2} \right) {\sigma_2}, \\ e^{\frac{\alpha}{2i} {\sigma_3}} = {C_{\ka}} \left( \frac{\alpha}{2} \right) + \frac{1}{i} {S_{\ka}} \left( \frac{\alpha}{2} \right) {\sigma_3} \end{gather*} and \begin{gather*} 2{C_{\kappa}} \left( \frac{\phi}{2} \right) {S_{\kappa}} \left( \frac{\phi}{2} \right) = {S_{\kappa}} (\phi),\qquad {C_{\kappa}}^2 \left( \frac{\phi}{2} \right) - \kappa {S_{\kappa}}^2 \left( \frac{\phi}{2} \right) = {C_{\kappa}} (\phi), \\ {C_{\kappa}}^2 \left( \frac{\phi}{2} \right) + \kappa {S_{\kappa}}^2 \left( \frac{\phi}{2} \right) = 1 \end{gather*} (noting that ${C_{\kappa}}$ is an even function while ${S_{\kappa}}$ is odd) it follows that \begin{gather*} e^{-\frac{\theta}{2} i {\sigma_1}} \sigma_j e^{\frac{\theta}{2} i {\sigma_1}} = \begin{cases} {\sigma_1} & \text{if $j = 1$}, \\ {C_{\kb}} (\theta) {\sigma_2} - {S_{\kb}} (\theta) {\sigma_3} & \text{if $j = 2$}, \\ {C_{\kb}} (\theta) {\sigma_3} + {\kappa_2} {S_{\kb}} (\theta) {\sigma_2} & \text{if $j = 3$}, \end{cases} \\ e^{-\frac{\beta}{2} i {\sigma_2}} \sigma_j e^{\frac{\beta}{2} i {\sigma_2}} = \begin{cases} {C_{\ka \kb}} (\beta) {\sigma_1} + {S_{\ka \kb}} (\beta) {\sigma_3} & \text{if $j = 
1$}, \\ {\sigma_2} & \text{if $j = 2$}, \\ {C_{\ka \kb}} (\beta) {\sigma_3} - {\kappa_1} {\kappa_2} {S_{\ka \kb}} (\beta) {\sigma_1} & \text{if $j = 3$}, \end{cases} \\ e^{-\frac{\alpha}{2i} {\sigma_3}} \sigma_j e^{\frac{\alpha}{2i} {\sigma_3}} = \begin{cases} {C_{\ka}} (\alpha) {\sigma_1} + {S_{\ka}} (\alpha) {\sigma_2} & \text{if $j = 1$}, \\ {C_{\ka}} (\alpha) {\sigma_2} - {\kappa_1} {S_{\ka}} (\alpha) {\sigma_1} & \text{if $j = 2$}, \\ {\sigma_3} & \text{if $j = 3$}. \end{cases} \end{gather*} So for each plane element, the $\sigma_j$ transform as the components of a vector under rotation in the clockwise direction, given the orientations of the respective plane elements: \[ i{\sigma_1} \; \mbox{is represented by} \; {\sigma_3} \wedge {\sigma_2}, \qquad i{\sigma_2} = {\sigma_1} \wedge {\sigma_3}, \qquad \mbox{and} \qquad \frac{1}{i} {\sigma_3} = {\sigma_1} \wedge {\sigma_2}. \] Now we can write \[ e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = 1 + \frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}} + \frac{1}{2!} \left( \frac{\phi}{2} \right)^2 \left( {\hat{n} \cdot \vec{\sigma}} \right)^2 + \frac{1}{3!} \left( \frac{\phi}{2} \right)^3 \left( {\hat{n} \cdot \vec{\sigma}} \right)^3 + \cdots. \] If $\varkappa$ is the scalar $-\left( {\hat{n} \cdot \vec{\sigma}} \right)^2$, then \begin{gather*} e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = \left( 1 - \frac{1}{2!} \left( \frac{\phi}{2} \right)^2 \varkappa + \frac{1}{4!} \left( \frac{\phi}{2} \right)^4 \varkappa^2 - \cdots \right)\\ \phantom{e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}=}{} + {\hat{n} \cdot \vec{\sigma}} \left( \frac{\phi}{2} - \frac{1}{3!} \left( \frac{\phi}{2} \right)^3 \varkappa + \frac{1}{5!} \left( \frac{\phi}{2} \right)^5 \varkappa^2 - \cdots \right) \\ \phantom{e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}}{} = C_{\varkappa} \left( \frac{\phi}{2} \right) + {\hat{n} \cdot \vec{\sigma}} S_{\varkappa} \left( \frac{\phi}{2} \right). 
\end{gather*} As $a = a^1 {\sigma_1} + a^2 {\sigma_2} + a^3 {\sigma_3}$ is a vector, we can compute its length easily using Clif\/ford multiplication as $a a = (a^1)^2 + {\kappa_1} (a^2)^2 + {\kappa_1} {\kappa_2} (a^3)^2 = | a |^2$. We would like to show that $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ is also a vector with the same length as $a$. If $g$ and $h$ are elements of a matrix Lie algebra, then so is $e^{-\phi \, {\mbox{\tiny ad}} \, g} h = e^{-\phi g} h e^{\phi g}$ (see \cite{SW86} for example). So if $B$ is a bivector $B^1 i{\sigma_1} + B^2 i{\sigma_2} + B^3\frac{1}{i}{\sigma_3}$, then $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} B e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ is also a bivector. It follows that $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ is a vector as the volume element $i$ lies in Cen($Cl_3$) so that $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} {\sigma_1} e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$, $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} {\sigma_2} e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$, and $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} {\sigma_3} e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ are all vectors. Since \[ \left( e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} \right) \left( e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} \right) = e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} | a |^2 e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = | a |^2 e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = \left| a \right|^2 \] it follows that $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ has the same length as $a$. 
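Both facts used here, the closed form $e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = C_{\varkappa}\left(\frac{\phi}{2}\right) + {\hat{n} \cdot \vec{\sigma}}\, S_{\varkappa}\left(\frac{\phi}{2}\right)$ and the invariance of $|a|^2$ under conjugation, can be checked numerically. A sketch for the sample values ${\kappa_1} = -1$, ${\kappa_2} = 1$ (so the matrix entries are ordinary complex numbers), with the matrix exponential summed as a plain Taylor series:

```python
import numpy as np

k1, k2 = -1.0, 1.0   # sample Cayley-Klein constants; kappa2 = 1 makes i the ordinary unit
s1 = np.array([[1, 0], [0, -1]], dtype=complex)
s2 = np.array([[0, 1], [k1, 0]], dtype=complex)
s3 = np.array([[0, 1j], [-k1 * 1j, 0]], dtype=complex)
s3_over_i = np.array([[0, 1], [-k1, 0]], dtype=complex)

def expm(M, terms=60):
    """Matrix exponential as a plain Taylor series (fine for small 2x2 inputs)."""
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

def Ck(kap, x):
    if kap > 0:
        return np.cos(np.sqrt(kap) * x)
    if kap < 0:
        return np.cosh(np.sqrt(-kap) * x)
    return 1.0

def Sk(kap, x):
    if kap > 0:
        return np.sin(np.sqrt(kap) * x) / np.sqrt(kap)
    if kap < 0:
        return np.sinh(np.sqrt(-kap) * x) / np.sqrt(-kap)
    return x

n = (0.6, 0.8, 0.0)
B = n[0] * 1j * s1 + n[1] * 1j * s2 + n[2] * s3_over_i   # the bivector n.sigma
kap = -(B @ B)[0, 0].real                                # varkappa = -(n.sigma)^2
phi = 0.7

# closed form of the exponential
assert np.allclose(expm(phi / 2 * B),
                   Ck(kap, phi / 2) * np.eye(2) + Sk(kap, phi / 2) * B)

# conjugation preserves the Cayley-Klein length: a a = |a|^2 is a scalar matrix
a = 1.1 * s1 + 0.2 * s2 - 0.5 * s3
a_rot = expm(-phi / 2 * B) @ a @ expm(phi / 2 * B)
assert np.allclose(a_rot @ a_rot, a @ a)
```

For this sample $\varkappa = (n^1)^2{\kappa_2} + (n^2)^2{\kappa_1}{\kappa_2} + (n^3)^2{\kappa_1} = -0.28$, so the hyperbolic branch of $C_{\varkappa}$, $S_{\varkappa}$ is exercised.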
So the inner automorphism of ${\mathbb{R}}^3$ given by $a \mapsto e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ corresponds to an element of $SO(3)$. We will see in the next section that all elements of $SO(3)$ are represented by such inner automorphisms of ${\mathbb{R}}^3$. Finally, note that $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} \left( {\hat{n} \cdot \vec{\sigma}} \right) e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = {\hat{n} \cdot \vec{\sigma}}$ as ${\hat{n} \cdot \vec{\sigma}}$ commutes with $e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$: so any plane element represented by ${\hat{n} \cdot \vec{\sigma}}$ is preserved by the corresponding element of $SO(3)$. In fact, if ${\hat{n} \cdot \vec{\sigma}} = a \wedge b$ for some vectors $a$ and $b$ and $\varkappa$ is the scalar $ - \left( a \wedge b \right)^2$, then \begin{gather*} e^{-\frac{\phi}{2} a \wedge b} ( a ) e^{\frac{\phi}{2} a \wedge b} = \left[ C_{\varkappa} \left( \frac{\phi}{2} \right) - (a \wedge b )S_{\varkappa} \left( \frac{\phi}{2} \right) \right] ( a ) \left[ C_{\varkappa} \left( \frac{\phi}{2} \right) + (a \wedge b) S_{\varkappa} \left( \frac{\phi}{2} \right) \right] \\ \phantom{e^{-\frac{\phi}{2} a \wedge b} ( a ) e^{\frac{\phi}{2} a \wedge b}}{} = C^2_{\varkappa} \left( \frac{\phi}{2} \right) a + C_{\varkappa} \left( \frac{\phi}{2} \right) S_{\varkappa} \left( \frac{\phi}{2} \right) (a \wedge b) a (a \wedge b) \\ \phantom{e^{-\frac{\phi}{2} a \wedge b} ( a ) e^{\frac{\phi}{2} a \wedge b}=}{} - C_{\varkappa} \left( \frac{\phi}{2} \right) S_{\varkappa} \left( \frac{\phi}{2} \right) (a \wedge b) a - S^2_{\varkappa} \left( \frac{\phi}{2} \right) a(a \wedge b). 
\end{gather*} Since $a (a \wedge b) = -(a \wedge b) a$, we have \[ e^{-\frac{\phi}{2} a \wedge b} ( a ) e^{\frac{\phi}{2} a \wedge b} = \left[ C_{\varkappa}(\phi) - (a \wedge b) S_{\varkappa}(\phi) \right] a, \] and so vectors lying in the plane determined by $a \wedge b$ are simply rotated by an angle $-\phi$, and this rotation is given by simple multiplication by a unit complex number $e^{-i \phi}$ where $i^2 = -\varkappa$. Thus, the linear combination $ua + vb$ is sent to $ue^{-i \phi}a + ve^{-i \phi}b$, and so the plane spanned by the vectors $a$ and $b$ is preserved. The signif\/icance is that if $a$ lies in an oriented plane determined by the bivector ${\hat{n} \cdot \vec{\sigma}}$ where this plane is given the complex structure of ${\mathbb{C}}_{\varkappa}$, then $e^{-\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} a e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ is simply the vector $a$ rotated by an angle of $-\phi$ in the complex plane ${\mathbb{C}}_{\varkappa}$, where $\iota^2 = -\varkappa$. Furthermore, the axis of rotation is given by $\eta$ as $\eta$ is preserved (recall that $i$ lies in the center of $Cl_3$). Since the covariant components $\sigma_i$ of $a$ are rotated clockwise, the contravariant components $a^j$ are rotated counterclockwise. So $\langle a^1, a^2, a^3 \rangle$ is rotated by the angle $\phi$ in the complex plane ${\mathbb{C}}_{\varkappa}$ determined by ${\hat{n} \cdot \vec{\sigma}}$. If we use $b \wedge a$ instead of $a \wedge b$ to represent the plane element, then $\varkappa$ remains unchanged.
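The rotation formula $e^{-\frac{\phi}{2} a \wedge b} \, a \, e^{\frac{\phi}{2} a \wedge b} = \left[ C_{\varkappa}(\phi) - (a \wedge b) S_{\varkappa}(\phi) \right] a$ can likewise be tested numerically. A sketch for the sample values ${\kappa_1} = -1$, ${\kappa_2} = 1$ (ordinary complex entries), with a Taylor-series matrix exponential:

```python
import numpy as np

k1, k2 = -1.0, 1.0   # sample constants; the matrix entries are then ordinary complex numbers
s1 = np.array([[1, 0], [0, -1]], dtype=complex)
s2 = np.array([[0, 1], [k1, 0]], dtype=complex)
s3 = np.array([[0, 1j], [-k1 * 1j, 0]], dtype=complex)

def expm(M, terms=60):   # Taylor-series matrix exponential, adequate for small 2x2 inputs
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

def Ck(kap, x):
    if kap > 0:
        return np.cos(np.sqrt(kap) * x)
    if kap < 0:
        return np.cosh(np.sqrt(-kap) * x)
    return 1.0

def Sk(kap, x):
    if kap > 0:
        return np.sin(np.sqrt(kap) * x) / np.sqrt(kap)
    if kap < 0:
        return np.sinh(np.sqrt(-kap) * x) / np.sqrt(-kap)
    return x

a = 0.3 * s1 + 1.0 * s2 - 0.4 * s3
b = 0.9 * s1 + 0.1 * s2 + 0.7 * s3
W = (a @ b - b @ a) / 2          # the bivector a ^ b
kap = -(W @ W)[0, 0].real        # varkappa = -(a ^ b)^2
phi = 1.3

lhs = expm(-phi / 2 * W) @ a @ expm(phi / 2 * W)
rhs = (Ck(kap, phi) * np.eye(2) - Sk(kap, phi) * W) @ a
assert np.allclose(lhs, rhs)     # a is rotated by -phi in the plane of a and b
```

The same check passes with $b$ in place of $a$, since $b$ also anticommutes with $a \wedge b$.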
Note however that, if $c$ is a vector lying in this plane, then \begin{gather*} e^{-\frac{\phi}{2} b \wedge a} c e^{\frac{\phi}{2} b \wedge a} = \left[ C_{\varkappa}(\phi) - (b \wedge a) S_{\varkappa}(\phi) \right] c = \left[ C_{\varkappa}(-\phi) - (a \wedge b) S_{\varkappa}(-\phi) \right] c \end{gather*} so that rotation by an angle of $\phi$ in the plane oriented according to $b \wedge a$ corresponds to a~rotation of angle $-\phi$ in the same plane under the opposite orientation as given by $a \wedge b$. It would be appropriate at this point to note two things: one, the magnitude of ${\hat{n} \cdot \vec{\sigma}}$ appears to be important, since $\varkappa = - \left( {\hat{n} \cdot \vec{\sigma}} \right)^2$, and two, the normalization $(n^1)^2 + (n^2)^2 + (n^3)^2 = 1$ of $\hat{n}$ is somewhat arbitrary\footnote{Due to dimension requirements some kind of normalization is needed as we cannot have $\phi$, $n^1$, $n^2$, and $n^3$ as independent variables, for $so(3)$ is 3-dimensional.}. These two matters are one and the same. We have chosen this normalization because it is a simple and natural choice. This particular normalization is not essential, however. For suppose that $\varkappa = - (a \wedge b)^2$ while $\varkappa^{\prime} = - (n a \wedge b)^2$, where $n$ is a positive constant. Let ${\mathbb{C}}_{\varkappa} = \{ t + ix \, | \, i^2 = -\varkappa \}$ with angle measure $\phi$ and ${\mathbb{C}}_{\varkappa^{\prime}} = \{ t + \iota x \, | \,\iota^2 = -\varkappa^{\prime} = -n^2 \varkappa \}$ with angle measure $\theta$: without loss of generality let $\varkappa > 0$.
Then $\phi = n\theta$, for \begin{gather*} e^{\iota \theta} = \cos{\left( \sqrt{\varkappa^{\prime}} \theta \right)} + \frac{\iota}{\sqrt{\varkappa^{\prime}}}\sin{\left( \sqrt{\varkappa^{\prime}} \theta \right)} = \cos{\left( n\sqrt{\varkappa} \theta \right)} + \frac{\iota}{n\sqrt{\varkappa}} \sin{\left( n\sqrt{\varkappa} \theta \right)} \\ \phantom{e^{\iota \theta}}{} = \cos{\left( \sqrt{\varkappa} \phi \right)} + \frac{i}{\sqrt{\varkappa}}\sin{\left( \sqrt{\varkappa} \phi \right)} = e^{i \phi}. \end{gather*} So we see that $SO(3)$ is truly a rotation group, where each element has a distinct axis of rotation as well as a well-def\/ined rotation angle. \section[$SU(2)$]{$\boldsymbol{SU(2)}$} Since the generators of the generalized Lie group $SO(3)$ can be represented by inner automorphisms of the subspace ${\mathbb{R}}^3$ of vectors of $Cl_3$ (see Def\/inition~3), every element of $SO(3)$ can be represented by an inner automorphism, as the composition of inner automorphisms is an inner automorphism. On the other hand, we have seen that any inner automorphism represents an element of $SO(3)$. In fact, each rotation belonging to $SO(3)$ is then represented by two elements $\pm e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ of $SL(2,{\mathbb{C}})$, where as usual ${\mathbb{C}}$ denotes the generalized complex number ${\mathbb{C}}_{{\kappa_2}}$: we will denote the subgroup of $SL(2,{\mathbb{C}})$ consisting of elements of the form $\pm e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$ by $SU(2)$. \begin{definition} Let $A$ be the matrix \[ A = \left( \begin{matrix} {\kappa_1} & 0 \\ 0 & 1 \end{matrix} \right) . \] \end{definition} We will now use Def\/inition 4 to show that $SU(2)$ is a subgroup of the subgroup $G$ of $SL(2,{\mathbb{C}})$ consisting of those matrices $U$ where $U^{\star} A U = A$: in fact, both these subgroups of $SL(2,{\mathbb{C}})$ are one and the same, as we shall see.
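The def\/ining condition $U^{\star} A U = A$ can be spot-checked before the computation that follows. A numeric sketch for the sample values ${\kappa_1} = -1$, ${\kappa_2} = 1$, for which the star operation is the ordinary conjugate transpose:

```python
import numpy as np

k1, k2 = -1.0, 1.0   # sample constants; for kappa2 = 1 the star operation is the
                     # ordinary conjugate transpose
A = np.array([[k1, 0], [0, 1]], dtype=complex)
s1 = np.array([[1, 0], [0, -1]], dtype=complex)
s2 = np.array([[0, 1], [k1, 0]], dtype=complex)
s3_over_i = np.array([[0, 1], [-k1, 0]], dtype=complex)

def expm(M, terms=60):   # Taylor-series matrix exponential
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

n = (0.6, 0.0, 0.8)
B = n[0] * 1j * s1 + n[1] * 1j * s2 + n[2] * s3_over_i   # the bivector n.sigma
U = expm(0.9 / 2 * B)                                    # e^{(phi/2) n.sigma} with phi = 0.9

assert np.allclose(A @ B, -B.conj().T @ A)   # A (n.sigma) = -(n.sigma)* A
assert np.allclose(U.conj().T @ A @ U, A)    # U* A U = A
# U has the advertised shape ((alpha, beta), (-kappa1 conj(beta), conj(alpha))), det U = 1
alpha, beta = U[0, 0], U[0, 1]
assert np.allclose(U[1, 0], -k1 * beta.conjugate())
assert np.allclose(U[1, 1], alpha.conjugate())
assert np.isclose(np.linalg.det(U), 1.0)
```

The two intermediate relations asserted here are exactly the ones used in the proof below.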
Now \begin{gather*} \big( e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}\big)^{\star} A e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = \left[ C_{\varkappa}\left( \frac{\phi}{2} \right) + \left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} S_{\varkappa}\left( \frac{\phi}{2} \right) \right] A \left[ C_{\varkappa}\left( \frac{\phi}{2} \right) + {\hat{n} \cdot \vec{\sigma}} S_{\varkappa}\left( \frac{\phi}{2} \right) \right] \\ \phantom{\big( e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}\big)^{\star} A e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}}{} = C_{\varkappa}^2\left( \frac{\phi}{2} \right)A + \left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A \left( {\hat{n} \cdot \vec{\sigma}} \right) S_{\varkappa}^2\left( \frac{\phi}{2} \right) \\ \phantom{\big( e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}\big)^{\star} A e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}=}{} + A \left( {\hat{n} \cdot \vec{\sigma}} \right) C_{\varkappa}\left( \frac{\phi}{2} \right)S_{\varkappa}\left( \frac{\phi}{2} \right) + \left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A C_{\varkappa}\left( \frac{\phi}{2} \right)S_{\varkappa}\left( \frac{\phi}{2} \right) = A \end{gather*} because $A \left( {\hat{n} \cdot \vec{\sigma}} \right) = -\left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A$ implies that \[ A \left( {\hat{n} \cdot \vec{\sigma}} \right) C_{\varkappa}\left( \frac{\phi}{2} \right)S_{\varkappa}\left( \frac{\phi}{2} \right) + \left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A C_{\varkappa}\left( \frac{\phi}{2} \right)S_{\varkappa}\left( \frac{\phi}{2} \right) = 0 \] and $\left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A \left( {\hat{n} \cdot \vec{\sigma}} \right) = -A \left( {\hat{n} \cdot \vec{\sigma}} \right)^2 = \varkappa A$ implies that \begin{gather*} C_{\varkappa}^2\left( \frac{\phi}{2} \right)A + \left( {{\hat{n} \cdot \vec{\sigma}}} \right)^{\star} A \left( {\hat{n} \cdot \vec{\sigma}} \right) S_{\varkappa}^2\left( \frac{\phi}{2} \right) = 
C_{\varkappa}^2\left( \frac{\phi}{2} \right)A + \varkappa S_{\varkappa}^2\left( \frac{\phi}{2} \right) A = A. \end{gather*} So $SU(2)$ is a subgroup of the subgroup $G$ of $SL(2,{\mathbb{C}})$ consisting of those matrices $U$ where $U^{\star} A U = A$. We can characterize this subgroup $G$ as \[ \left\{ \left( \begin{matrix} \alpha & \beta \\ -{\kappa_1} \overline{\beta} & \overline{\alpha} \end{matrix} \right) | \, \alpha, \beta \in {\mathbb{C}} \; \mbox{and} \; \alpha \overline{\alpha} + {\kappa_1} \beta \overline{\beta} = 1 \right\}. \] Now \[ e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = \left( \begin{matrix} C_{\varkappa}\left( \frac{\phi}{2} \right) + n^1 i S_{\varkappa}\left( \frac{\phi}{2} \right) & n^2 i S_{\varkappa}\left( \frac{\phi}{2} \right) + n^3 S_{\varkappa}\left( \frac{\phi}{2} \right)\vspace{1mm} \\ n^2 {\kappa_1} i S_{\varkappa}\left( \frac{\phi}{2} \right) - n^3 {\kappa_1} S_{\varkappa}\left( \frac{\phi}{2} \right) & C_{\varkappa}\left( \frac{\phi}{2} \right) - n^1 i S_{\varkappa}\left( \frac{\phi}{2} \right) \end{matrix} \right) \] as can be checked directly, recalling that \[ e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} = C_{\varkappa}\left( \frac{\phi}{2} \right) + \left( {\hat{n} \cdot \vec{\sigma}} \right) S_{\varkappa}\left( \frac{\phi}{2} \right), \] where \[ \varkappa = - \left( {\hat{n} \cdot \vec{\sigma}} \right)^2 = \left(n^1\right)^2 {\kappa_2} + \left( n^2 \right)^2 {\kappa_1} {\kappa_2} + \left( n^3 \right)^2 {\kappa_1}. \] Thus $\det\left( e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}} \right) = 1$, and we see that any element of $G$ can be written in the form $e^{\frac{\phi}{2} {\hat{n} \cdot \vec{\sigma}}}$. So the group $SU(2)$ can be characterized by \[ SU(2) = \left\{ \left( \begin{matrix} \alpha & \beta \\ -{\kappa_1} \overline{\beta} & \overline{\alpha} \end{matrix} \right) | \, \alpha, \beta \in {\mathbb{C}} \; \mbox{and} \; \alpha \overline{\alpha} + {\kappa_1} \beta \overline{\beta} = 1 \right\}.
\] Note that if $U(\lambda)$ is a curve passing through the identity at $\lambda = 0$, then \[ \left. \frac{d}{d\lambda} \right|_{\lambda = 0} \left( U^{\star} A U = A \right) \ \Longrightarrow \ \dot{U}^{\star}A + A\dot{U} = 0 \] so that $su(2)$ consists of those elements $B$ of $M(2,{\mathbb{C}})$ such that $B^{\star}A + AB = 0$. Although $SU(2)$ is a double cover of $SO(3)$, it is not necessarily the universal cover for $SO(3)$, nor even connected, for sometimes $SO(3)$ is itself simply-connected. Thus we have shown that: \begin{theorem} The Clifford algebra $Cl_3$ can be used to construct a double cover of the generalized Lie group $SO(3)$, for a vector $a$ can be rotated by the inner automorphism \[ {\mathbb{R}}^3 \rightarrow {\mathbb{R}}^3, \qquad a \mapsto \mathit{s}^{-1} a \mathit{s} \] where $\mathit{s}$ is an element of the group \[ {\bf Spin}(3) = \left\{ \left( \begin{matrix} \alpha & \beta \\ -{\kappa_1} \overline{\beta} & \overline{\alpha} \end{matrix} \right) | \, \alpha, \beta \in {\mathbb{C}} \; \mbox{and} \; \alpha \overline{\alpha} + {\kappa_1} \beta \overline{\beta} = 1 \right\}, \] where ${\mathbb{C}}$ denotes the generalized complex number ${\mathbb{C}}_{{\kappa_2}}$. \end{theorem} \begin{lemma} We define the generalized special unitary group $SU(2)$ to be ${\bf Spin}(3)$. Then $su(2)$ consists of those matrices $B$ of $M(2,{\mathbb{C}})$ such that $B^{\star}A + AB = 0$. \end{lemma} \section[The conformal completion of $S$]{The conformal completion of $\boldsymbol{S}$} Yaglom~\cite{Y79} has shown how the complex plane ${\mathbb{C}}_{\kappa}$ may be extended to a Riemann sphere $\Gamma$ or inversive plane\footnote{Yaglom did this when $\kappa \in \{-1, 0, 1 \}$, but it is a simple matter to generalize his results.} (and so dividing by zero-divisors is allowed), upon which the entire set of M\"{o}bius transformations acts globally and so gives a group of conformal transformations. 
In this last section we would like to take advantage of the simple structure of this conformal group and give the conformal completion of $S$, where $S$ is conformally embedded simply by inclusion of the region $\varsigma$ lying in ${\mathbb{C}}$ and therefore lying in~$\Gamma$. Herranz and Santander~\cite{HS02b} found a conformal completion of $S$ by realizing the conformal group as a group of linear transformations acting on ${\mathbb{R}}^4$, and then constructing the conformal completion as a homogeneous phase space of this conformal group. The original Cayley--Klein geometry $S$ was then embedded into its conformal completion by one of two methods, one group-theoretical, involving one-parameter subgroups, and the other stereographic projection. The 6-dimensional real Lie algebra for $SL(2,{\mathbb{C}})$ consists of those matrices in $M(2,{\mathbb{C}})$ with trace equal to zero. In addition to the three generators $H$, $P$, and $K$ \[ H = \frac{1}{2i}{\sigma_3} = \left( \begin{matrix} 0 & \frac{1}{2} \\ -\frac{{\kappa_1}}{2} & 0 \end{matrix} \right), \qquad P = \frac{i}{2}{\sigma_2} = \left( \begin{matrix} 0 & \frac{i}{2} \\ \frac{{\kappa_1} i}{2} & 0 \end{matrix} \right), \qquad K = \frac{i}{2}{\sigma_1} = \left( \begin{matrix} \frac{i}{2} & 0 \\ 0 & -\frac{i}{2} \end{matrix} \right) \] that come from the generalized Lie group $SO(3)$ of isometries of $S$, we have three other generators for $SL(2, {\mathbb{C}})$: one, labeled $D$, for the subgroup of dilations centered at the origin and two others, labeled $G_1$ and $G_2$, for ``translations''. It is these transformations $D$, $G_1$, $G_2$, that necessitate extending $\varsigma$ to the entire Riemann sphere $\Gamma$, upon which the set of M\"{o}bius transformations acts as a conformal group.
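In the classical case ${\kappa_1} = {\kappa_2} = 1$, where ${\mathbb{C}}_{{\kappa_2}}$ reduces to the ordinary complex numbers, the bracket relations among $H$, $P$ and $K$ can be spot-checked numerically. The following is only a sketch of that one special case (the matrices are transcribed from the display above; it does not verify the general ${\kappa_1}, {\kappa_2}$ relations):

```python
import numpy as np

# Generators H, P, K in the classical case kappa_1 = kappa_2 = 1,
# transcribed from the matrices displayed above.
H = np.array([[0.0, 0.5], [-0.5, 0.0]], dtype=complex)
P = np.array([[0.0, 0.5j], [0.5j, 0.0]], dtype=complex)
K = np.array([[0.5j, 0.0], [0.0, -0.5j]], dtype=complex)

def bracket(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Expected relations with kappa_1 = kappa_2 = 1:
# [H, P] = K, [H, K] = -P, [P, K] = H.
assert np.allclose(bracket(H, P), K)
assert np.allclose(bracket(H, K), -P)
assert np.allclose(bracket(P, K), H)
```

All three matrices are trace-free, as required for elements of $sl(2,{\mathbb{C}})$.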
Note that the following correspondences for the M\"{o}bius transformations $w \mapsto w + t$ and $w \mapsto w + ti$ (for real parameter $t$) are valid only if ${\kappa_1} \neq 0$, which explains why our ``translations'' $G_1$ and $G_2$ are not actually translations: \begin{gather*} \exp \left[ t \left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right) \right] = \left( \begin{matrix} 1 & t \\ 0 & 1 \end{matrix} \right) \hspace{0.25in} \rightleftarrows \hspace{0.25in} w \mapsto w + t, \\ \exp \left[ t \left( \begin{matrix} 0 & i \\ 0 & 0 \end{matrix} \right) \right] = \left( \begin{matrix} 1 & ti \\ 0 & 1 \end{matrix} \right) \hspace{0.25in} \rightleftarrows \hspace{0.25in} w \mapsto w + ti. \end{gather*} Please see Tables 15 and 16. The structure constants $[\star, \star \star]$ for this basis of $sl(2,{\mathbb{C}})$ (which is the same basis as that given in \cite{HS02} save for a sign change in $G_2$) are given by Table~12. \begin{table}[t] \centering \caption{Structure constants $[\star, \star\star]$ for this basis of $sl(2,{\mathbb{C}})$.} \vspace{1mm} \begin{tabular}{ c || c c c c c c } \hline $\star \diagdown \star\star $ & $H$ & $P$ & $K$ & $G_1$ & $G_2$ & $D$ \\ \hline \hline $H$ & 0 & ${\kappa_1} K$ & $-P$ & $D$ & $K$ & $-H - {\kappa_1} G_1$ \\ $P$ & $-{\kappa_1} K$ & 0 & ${\kappa_2} H$ & $K$ & $-{\kappa_2} D$ & $-P + {\kappa_1} G_2$ \\ $K$ & $P$ & $-{\kappa_2} H$ & 0 & $-G_2$ & ${\kappa_2} G_2$ & $0$ \\ $G_1$ & $-D$ & $-K$ & $G_2$ & 0 & $0$ & $G_1$ \\ $G_2$ & $-K$ & ${\kappa_2} D$ & $-{\kappa_2} G_2$ & $0$ & 0 & $G_2$ \\ $D$ & $H + {\kappa_1} G_1$ & $P - {\kappa_1} G_2$ & $0$ & $-G_1$ & $-G_2$ & 0 \\ \hline \end{tabular} \end{table}
\section{INTRODUCTION} The Epoch of Reionization (EoR), a major global phase transition in which the hydrogen in the Universe transitioned from almost fully neutral to largely ionized, remains one of the cosmological eras least constrained by observations. Although no direct measurements of this transition currently exist, multiple observations indicate that reionization was completed by $z \approx 5.7$ and possibly earlier. These observations include high-redshift quasar spectra \citep[e.g.][]{Fan2006,McGreer2015}, the decrease in the fraction of Lyman~$\alpha$ (Ly~$\alpha$) emitting galaxies \citep[e.g.][]{Stark2011,Schenker2012, Pentericci2014, Tilvi2014}, and measurements of the temperature of the intergalactic medium \citep[IGM; e.g.][]{Theuns2002, Raskutti2012, Bolton2012}. The start of substantial reionization is constrained by the Thomson optical depth measured from the anisotropies and polarisation of the Cosmic Microwave Background, CMB \citep[e.g.][]{Komatsu2011, Planck2015, Planck2016}. \cite{Planck2016} find that the Universe was less than 10 per cent ionized at $z \approx 10$, that the average redshift at which reionization would have taken place, had it been an instantaneous process, lies in the range $7.8 \leq z \leq 8.8$, and that an upper limit for the duration of the process is $\Delta z < 2.8$. At high redshifts, 21-cm radiation from hydrogen atoms in the IGM contains a treasure trove of information about the physical conditions both during the EoR and the preceding epochs. In particular, the 21-cm signal probes the \textit{Dark Ages}, the epoch after recombination during which the formation of baryonic large-scale structure began, and the \textit{Cosmic Dawn}, the period of preheating from the first ionizing sources before reionization was significantly underway. Several experiments are attempting to measure the 21-cm signal from the EoR using low-frequency radio interferometry.
These include the ongoing GMRT\footnote{\url{http://gmrt.ncra.tifr.res.in/}}, LOFAR\footnote{\url{http://www.lofar.org/}}, MWA\footnote{\url{http://www.mwatelescope.org/}}, and PAPER\footnote{\url{http://eor.berkeley.edu/}} and the future HERA\footnote{\url{http://reionization.org/}} and SKA\footnote{\url{https://www.skatelescope.org/}}. The main sources powering reionization are likely early galaxies, with Population III (Pop.~III; metal-free) and Population II (Pop.~II; metal-enriched) stars providing the bulk of ionizing photons. However, sources of higher energy X-ray photons may also be present, contributing non-trivially to the photon budget. Although their abundance is uncertain, high-mass X-ray binaries (HMXBs) likely exist throughout reionization \citep{Glover2003}. Other hard radiation sources, such as QSOs and supernovae, may have also contributed. Very little is known about these objects in terms of their abundances, clustering, evolution, and spectra, especially at these high redshifts. The high-energy photons from these hard radiation sources have a much smaller cross section for interaction with atoms and, hence, far longer mean free paths than lower energy ionizing photons. Therefore, these photons are able to penetrate significantly further into the neutral IGM. While not sufficiently numerous to contribute significantly to the ionization of the IGM (although recently there has been some debate about the level of contribution of quasars; see e.g.\ \citealt{Khaire2016}), their high energies result in a non-trivial amount of heating. Along with variations in the early Ly~$\alpha$\ background, variations in the temperature of the neutral IGM caused by this non-uniform heating constitute an important source of 21-cm fluctuations before large-scale reionization patchiness develops \citep[see e.g.][for a detailed discussion]{2012RPPh...75h6901P}.
Once a sufficient Ly~$\alpha$\ background due to stellar radiation has been established in the IGM, the spin temperature of neutral hydrogen will be coupled to the kinetic temperature, $T_\mathrm{K}$, due to the Wouthuysen-Field (WF) effect. The 21-cm signal is then expected to appear initially in absorption against the CMB, as the CMB temperature ($T_{\mathrm{CMB}}$) is greater than the spin temperature of the gas. Once the first sources have heated the IGM and brought the spin temperature, $T_\mathrm{S}$, above $T_{\mathrm{CMB}}$, the signal transitions into emission (see Section~\ref{sec:dbtTheory} for more details). The timing and duration of this transition are highly sensitive to the type of sources present, as they determine the quantity and morphology of the heating of the IGM \citep[e.g.][]{Pritchard2007,Baek2010,Mesinger2013,Fialkov2014,2014MNRAS.443..678P,Ahn2015}. Considerable theoretical work regarding the impact of X-ray radiation on the thermal history of reionization and the future observational signatures exists. Attempts have been made to understand the process analytically \citep[e.g.][]{Glover2003,Furlanetto2004}, semi-numerically \citep[see e.g.][]{Santos2010,Mesinger2013, Fialkov2014, Knevitt2014}, and numerically \citep[e.g.][]{Baek2010,Xu2014,Ahn2015}. However, due to the computationally challenging, multi-scale nature of the problem, numerical simulations have not yet been run over a sufficiently large volume -- a few hundred comoving Mpc per side -- to properly account for the patchiness of reionization \citep{Iliev2013}, while at the same time resolving the ionizing sources. In this paper, we present the first full numerical simulation of reionization including X-ray sources and multi-frequency heating over hundreds of Mpc.
Using multi-frequency radiative transfer (RT) modelling, we track the morphology of the heating and evolution of ionized regions using density perturbations and haloes obtained from a high-resolution, $N$-body simulation. The size of our simulations ($349\,$Mpc comoving on a side) is sufficient to capture the large-scale patchiness of reionization and to make statistically meaningful predictions for future 21-cm observations. We compare two source models, one with and one without X-ray sources, that otherwise use the same underlying cosmic structures. We also test the limits of validity of the common assumption that at late times the IGM temperature is much greater than the CMB temperature. The outline of the paper is as follows. In Section~\ref{sec:sims}, we present our simulations and methodology. In Section \ref{sec:theory}, we describe in detail the theory behind our generation of the 21-cm signatures. Section~\ref{sec:results} contains our results, which include the reionization and temperature history and morphology. We also present our 21-cm maps and various statistics of the 21-cm signal. We then conclude in Section~\ref{sec:conclusions}. The cosmological parameters we use throughout this work are ($\Omega_\Lambda$, $\Omega_\mathrm{M}$, $\Omega_\mathrm{b}$, $n$, $\sigma_\mathrm{8}$, $h$) = (0.73, 0.27, 0.044, 0.96, 0.8, 0.7), where the notation has the usual meaning and $h = \mathrm{H_0} / (100 \ \mathrm{km} \ \rm{s}^{-1} \ \mathrm{ Mpc}^{-1}) $. These values are consistent with the latest results from WMAP \citep{Komatsu2011} and Planck combined with all other available constraints \citep{Planck2015,Planck2016}. \section{THE SIMULATIONS} \label{sec:sims} In this section, we present an overview of the methods used in our simulations. We start with a high-resolution, $N$-body simulation, which provides the underlying density fields and dark matter halo catalogues.
We then apply ray-tracing RT to a density field that is smoothed to a lower resolution to speed up the calculations. The sources of ionizing and X-ray radiation are associated with the dark matter haloes. Below, we describe these steps in more detail. \subsection{$N$-BODY SIMULATIONS} The cosmic structures underlying our simulations are based on a high-resolution $N$-body simulation using the \textsc{\small CubeP$^3$M} code \citep{Harnois2013}. A two-level particle-mesh scheme is used to calculate the long-range gravitational forces, kernel-matched to local, direct and exact particle-particle interactions. The maximum distance between particles over which the direct force is calculated is set to 4 times the mean interparticle spacing, which was found to give the optimum trade-off between accuracy and computational expense. Our $N$-body simulation follows $4000^3$ particles in a $349\,$Mpc per side volume, and the force smoothing length is set to 1/20th of the mean interparticle spacing (this $N$-body simulation was previously presented in \citet{2016MNRAS.456.3011D} and completed under the Partnership for Advanced Computing in Europe, PRACE, Tier-0 project called PRACE4LOFAR). This particle number is chosen to ensure reliable halo identification down to 10$^9$ $\rm M_\odot$ (with a minimum of 40 particles). The linear power spectrum of the initial density fluctuations was calculated with the code \textsc{\small CAMB} \citep{Lewis2000}. Initial conditions were generated using the Zel'dovich approximation at a sufficiently high redshift (initial redshift $z_\mathrm{i}=150$) to ensure against numerical artefacts \citep{Crocce2006}. \subsection{SOURCES} Sources are assumed to live within dark matter haloes, which were found using the spherical overdensity algorithm with an overdensity parameter of 178 with respect to the mean density.
We use a sub-grid model \citep{Ahn15a} calibrated to very high resolution simulations to add the haloes between $10^8 - 10^9\,$M$_\odot$, the rough limit for the atomic-line cooling of primordial gas to be efficient. For a source with halo mass, $M$, and lifetime, $t_s$, we assign a stellar ionizing photon emissivity according to \begin{equation} \dot{N}_\gamma=g_\gamma\frac{M\Omega_{\rm b}}{\mu m_{\rm p}(10\,\rm Myr)\Omega_0}, \end{equation} where the proportionality coefficient $g_{\gamma}$ reflects the ionizing photon production efficiency of the stars per stellar atom, $N_\mathrm{i}$, the star formation efficiency, $f_\star$, and the escape fraction, $f_{\rm esc}$ \citep{Haim03a,Ilie12a}: \begin{equation} g_\gamma=f_\star f_{\rm esc}N_{\rm i}\left(\frac{10 \; {\rm Myr}}{t_{\rm s}}\right). \end{equation} Sources hosted by high-mass haloes (above $10^9 \ $M$_\odot$) have efficiency $g_\gamma=1.7$ and are assumed to be unaffected by radiative feedback, as their halo mass is above the Jeans mass for ionized ($\sim\!10^4\,$K) gas. Low-mass haloes (between 10$^8 \ $M$_\odot$ and 10$^9 \ $M$_\odot$) have a higher efficiency factor $g_\gamma=7.1$, reflecting the likely presence of more efficient Pop.~III stars or higher escape fractions \citep{2007MNRAS.376..534I}. The low-mass sources are susceptible to suppression from photoionization heating. In this work, we assume that all low-mass sources residing in ionized cells (with an ionized fraction greater than 10 per cent) are fully suppressed, i.e.,\ they produce no ionizing photons \citep{2007MNRAS.376..534I,2016MNRAS.456.3011D}. The source model for the stellar radiation is identical to the one in simulation LB2 in \citet{2016MNRAS.456.3011D} and uses the aggressive suppression model `S' defined there. However, the details of this suppression are not very significant here, since we focus on the very early stages of reionization before significant ionization develops.
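As an illustration, the emissivity assignment above can be sketched in a few lines. The function and constant names are ours, not from the simulation code, and the mean molecular weight $\mu \approx 1.22$ is an assumed value for neutral primordial gas:

```python
M_SUN = 1.989e33   # solar mass in g
M_P = 1.6726e-24   # proton mass in g
MYR = 3.156e13     # one megayear in s

def ionizing_photon_rate(m_halo_msun, g_gamma,
                         omega_b=0.044, omega_0=0.27, mu=1.22):
    """Stellar ionizing photon emissivity, photons per second:
    N_dot = g_gamma * M * Omega_b / (mu * m_p * (10 Myr) * Omega_0)."""
    n_atoms = m_halo_msun * M_SUN * omega_b / (mu * M_P * omega_0)
    return g_gamma * n_atoms / (10.0 * MYR)

# High-mass haloes (> 1e9 Msun) use g_gamma = 1.7;
# low-mass, unsuppressed haloes (1e8 - 1e9 Msun) use g_gamma = 7.1.
rate_high = ionizing_photon_rate(1e9, 1.7)
rate_low = ionizing_photon_rate(1e8, 7.1)
```

The rate scales linearly with halo mass, so a suppressed low-mass source simply contributes zero.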
The stellar sources are assigned a blackbody spectrum with an effective temperature of $T_{\mathrm{eff}} = 5 \times 10^4\,$K. The X-ray sources are also assumed to reside in dark matter haloes. They are assigned a power-law spectrum with an index of $\alpha = -1.5$ in luminosity. \citet{Hickox2007} showed, using the \textit{Chandra Deep Fields (CDFs)}, that X-ray sources with a single power-law spectrum would over-contribute to the observed X-ray background (XRB) if $\alpha \lower.5ex\hbox{\ltsima} 1$, thus requiring softer spectra for the reionization sources. The frequency range emitted by these sources extends from 272.08 eV to 100 times the ionization threshold for doubly ionized helium (5441.60 eV) \citep{Mineo2012, Mesinger2013}. Photons with frequencies below the minimum frequency are assumed to be obscured, as suggested by observational works \citep[e.g.\ ][]{Lutivnov2005}. This value for the minimum frequency is consistent with the optical depth from high-redshift gamma-ray bursts \citep{Totani2006,Greiner2009} and with \cite{Mesinger2013}. The X-ray luminosity is also set to be proportional to the halo mass, since HMXBs are formed from binary systems of stars and stellar remnants. Unlike their stellar counterparts, these sources have the same efficiency factor for all active sources. Low-mass haloes that are suppressed are assumed not to produce X-ray radiation. The efficiency is parametrised as follows: \begin{equation} g_{\rm x} = N_{\rm x} f_\star\left(\frac{10 \;\mathrm{Myr}}{t_{\rm s}}\right), \end{equation} where $N_{\rm x}$ is the number of X-ray photons per stellar baryon. A value of $N_{\rm x}=0.2$ is roughly consistent with measurements between 0.5--8~keV for X-ray binaries in local, star-bursting galaxies \citep{Mineo2012}, although the uncertainty is a factor of 2 to 3 \citep[see the discussion in][]{Mesinger2013}. We take $g_\mathrm{x} = 8.6\times 10^{-2}$, which implies $f_\star\approx 0.4$ if $N_{\rm x}=0.2$.
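The quoted X-ray numbers can be cross-checked directly against this parametrisation; a small sketch (all names are ours):

```python
def g_x(n_x, f_star, t_s_myr=10.0):
    """X-ray efficiency g_x = N_x * f_star * (10 Myr / t_s)."""
    return n_x * f_star * (10.0 / t_s_myr)

# g_x = 8.6e-2 with N_x = 0.2 and t_s = 10 Myr corresponds to
# f_star = 0.086 / 0.2 = 0.43, i.e. the f_star ~ 0.4 quoted above.
f_star_implied = 8.6e-2 / 0.2
```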
Our X-ray luminosities are, therefore, somewhat higher than in the local Universe. The total number of X-ray photons contributed by these sources over the simulation time is an order of magnitude lower than the value obtained from the CDFs for the XRB between 1 and 2 keV \citep{Hickox2007}, making these sources consistent with observations. Note that the long-range X-ray heating we examine here is dependent on the abundance, clustering and spectra of the X-ray sources. We leave comparisons between different X-ray source models for future work. \subsection{RT SIMULATIONS} The RT is based on short-characteristics ray-tracing for ionizing radiation \citep[e.g.][]{1999RMxAA..35..123R} and non-equilibrium photoionization chemistry of hydrogen and helium, using the code \textsc{\small C$^2$-Ray}, \textbf{C}onservative, \textbf{C}ausal \textbf{Ray} tracing \citep{Mellema2006}. \textsc{\small C$^2$-Ray} is explicitly photon-conserving in both space and time due to the finite-volume approach taken when calculating the photoionization rates and the time-averaged optical depths used. This quality enables time-steps much longer than the ionization time scale, which makes the method orders of magnitude faster than other approaches. However, we note that including the gas heating could impose some additional constraints on the time-stepping, resulting in smaller time-steps, as discussed in \cite{Lee2016}. The basic RT method was further developed in order to accommodate multi-frequency RT \citep{Friedrich2012}, including the effects of helium, secondary ionizations from electrons, multi-frequency photoionization and detailed heating through the full on-the-spot approximation, in order to correctly model the effects of hard radiation. Frequency bin-integrated rates are used for the photoionization and photoionization heating rates, with three bands: one for the frequency range in which only hydrogen may be ionized (i.e.
from the first ionization level of hydrogen to the first of helium), a second for photons which can ionize both hydrogen and helium once, and a third for photons that can ionize all species. These are sub-divided into 1, 26, and 20 sub-bins, respectively. The rate values are pre-calculated and stored in look-up tables as functions of the optical depths at the ionization thresholds. The convergence of the number of sub-bins in bands 2 and 3 has been tested in \citet{Friedrich2012}, who concluded that 10 and 11 sub-bins for bands 2 and 3, respectively, produce sufficiently converged results. \textsc{\small C$^2$-Ray} has been tested extensively against existing exact solutions \citep{Mellema2006}, numerous other numerical codes within code comparison projects \citep{Iliev2006b,Iliev2010}, and against \textsc{\small CLOUDY} \citep{Friedrich2012}. In this work, we present two simulations: one in which the haloes contain both HMXB and stellar sources, and one which only considers stellar sources. The stellar component and underlying cosmic structures are identical in both simulations. The density is smoothed onto an RT grid of size $250^3$. These simulations were performed under the PRACE Tier-0 projects PRACE4LOFAR and Multi-scale Reionization. \section{The 21-CM SIGNAL} \label{sec:theory} In this section, we discuss the method of extracting the 21-cm signal from our simulation outputs. \subsection{THE DIFFERENTIAL BRIGHTNESS TEMPERATURE} \label{sec:dbtTheory} Observations aim to detect the redshifted 21-cm signal caused by the hyperfine transition from the triplet to the singlet ground state of the neutral hydrogen present during reionization. This signal is dictated by the density of neutral hydrogen atoms and the ratio of hydrogen atoms in the triplet and singlet states, quantified by $T_{\mathrm{S}}$: \begin{equation} \frac{N_1}{N_0}=\frac{g_1}{g_0}\exp\left(-\frac{T_\star}{T_{\rm S}}\right).
\end{equation} Here $T_\star=\frac{h\nu_{10}}{k}=0.0681~$K is the temperature corresponding to the 21-cm transition energy, and $g_{1,0}$ are the statistical weights of the triplet and singlet states, respectively. For the 21-cm signal to be visible against the CMB, the spin temperature needs to decouple from it, since the two start in equilibrium at high redshift. There are two mechanisms that can do this \citep{Field1958}. Firstly, collisions with other atoms and free electrons do so by exciting electrons from the singlet to the triplet state. This mechanism is only effective for sufficiently dense gas, i.e. in very dense filaments and haloes or at very high redshifts. Secondly, the electrons can be excited to the triplet state through the Wouthuysen-Field (WF) effect when absorbing a Ly~$\alpha$\ photon. The spin temperature can then be expressed as follows \citep{Field1958}: \begin{equation} T_{\mathrm{S}} = \frac{T_{\mathrm{CMB}} + x_\mathrm{\alpha} T_\mathrm{c} + x_\mathrm{c} T_\mathrm{k}}{1 + x_\mathrm{\alpha} + x_\mathrm{c}}, \end{equation} where $T_\mathrm{c}$ is the Ly~$\alpha$\ colour temperature, $x_\mathrm{\alpha}$ is the Ly~$\alpha$\ coupling constant, $T_\mathrm{k}$ is the gas kinetic temperature, and $x_\mathrm{c}$ is the collisional coupling constant. Throughout this paper, we assume that the Ly~$\alpha$\ radiation is in the saturated regime of the WF effect (in which case $x_\mathrm{\alpha} \gg x_\mathrm{c}$), and that the colour temperature is equal to the kinetic temperature ($T_\mathrm{c} = T_\mathrm{k}$); hence, $T_\mathrm{S} = T_\mathrm{k}$. Since early sources produce copious amounts of soft-UV photons, this approximation tends to hold throughout most of the evolution, except for the earliest times \citep[e.g.][]{2003ApJ...596....1C}.
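The weighted-average structure of the spin-temperature expression above is easy to see in code; a minimal sketch (variable names are ours):

```python
def spin_temperature(t_cmb, t_k, x_alpha, x_c, t_c=None):
    """T_S = (T_CMB + x_alpha*T_c + x_c*T_k) / (1 + x_alpha + x_c).
    If no colour temperature is supplied, assume T_c = T_k."""
    if t_c is None:
        t_c = t_k
    return (t_cmb + x_alpha * t_c + x_c * t_k) / (1.0 + x_alpha + x_c)

# With no coupling T_S stays at T_CMB; with saturated WF coupling
# (x_alpha very large) T_S approaches T_K, as assumed in the text.
```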
The 21-cm signal itself is usually defined in terms of the differential brightness temperature with respect to the CMB: \begin{equation} \delta T_{\mathrm{b}} = \left(1 - \frac{T_{\mathrm{CMB}}}{T_{\mathrm{S}}}\right) \ \frac{3 \lambda_0^3 A_{10} T_\star n_{\mathrm{HI}}(z)}{32 \pi H(z) (1+z)} \label{dbt}, \end{equation} where $\lambda_0=21.1$~cm is the line rest-frame wavelength, $A_{10}=2.85\times10^{-15}\,\rm s^{-1}$ is the Einstein A-coefficient for spontaneous emission from the triplet to the singlet state, and $n_{\mathrm{HI}}$ is the density of neutral hydrogen. Thus, $\delta T_{\mathrm{b}}$ can be seen either in absorption or emission, depending on $T_{\rm S}$ relative to $T_{\rm CMB}$. We are particularly interested in the timing and character of the transition between absorption and emission. Predicted 21-cm maps (smoothed to the resolution of observations) and their statistical measures will be the only way to connect theories of galaxy formation to future observations of 21-cm radiation. When calculating the 21-cm signal, many studies assume the IGM gas to have reached temperature saturation, i.e. to be heated to temperatures well above the CMB, $T_\mathrm{K} \gg T_{\mathrm{CMB}} $. We refer to this as the high-temperature (high-$T_\mathrm{K}$) limit. While this approximation may hold during the later stages of reionization, it certainly breaks down at early times. Where appropriate, we show the high-$T_\mathrm{K}$ limit results for reference. \subsection{TEMPERATURE OF THE NEUTRAL IGM IN PARTIALLY IONIZED REGIONS} \label{sec:corrections} H~II regions can have sizes smaller than our cell resolution, particularly for individual weak sources, and therefore be unresolved in our simulations. The cells containing such ionized regions will appear partially ionized in the simulation, with a temperature that is averaged between the hot, ionized gas phase and the colder, neutral one.
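Returning to Equation~\ref{dbt}: for orientation, the magnitude of the signal can be evaluated per cell. The following is a minimal sketch using the standard compact (optical-depth) form $\delta T_{\rm b} = (1 - T_{\rm CMB}/T_{\rm S})\,3\lambda_0^3 A_{10} T_\star n_{\rm HI}/\left(32\pi H(z)(1+z)\right)$ with the cosmological parameters of Section~1; all names are ours:

```python
import math

A10 = 2.85e-15      # Einstein A-coefficient, s^-1
T_STAR = 0.0681     # 21-cm transition temperature, K
LAMBDA0 = 21.1      # rest-frame wavelength, cm

def hubble(z, h=0.7, omega_m=0.27, omega_l=0.73):
    """H(z) in s^-1 for a flat LambdaCDM background."""
    h0 = h * 3.241e-18  # 100 km/s/Mpc expressed in s^-1
    return h0 * math.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)

def delta_tb(z, n_hi, t_s, t_cmb):
    """Differential brightness temperature in K; n_hi in cm^-3."""
    prefactor = 3.0 * LAMBDA0 ** 3 * A10 * T_STAR * n_hi
    return (1.0 - t_cmb / t_s) * prefactor / (32.0 * math.pi * hubble(z) * (1.0 + z))
```

For a mean-density, fully neutral cell at $z = 14$ with $T_\mathrm{S} = 100\,$K this gives roughly $+20\,$mK, in line with the usual order-of-magnitude expectations, and the sign flips when $T_\mathrm{S} < T_\mathrm{CMB}$.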
In the 21-cm signal, using the cell-averaged temperature would yield a signal in emission where it should appear in absorption. In order to correct for such unphysical behaviour, we have adopted an algorithm to locate such cells and calculate $\delta T_{\rm b}$ appropriately, as follows. Since this does not affect any of the physical quantities produced by the code, it can be performed as a post-processing step. \textit{Finding and marking the cells requiring special treatment:} Since the softer stellar spectra do not produce photons that can penetrate into the IGM (the typical mean free paths are of order kpc), the cells potentially requiring correction in the stellar-only simulation are identified as those with $T > T_\mathrm{ad}$ and $x>x_{\rm in}$, where $T_\mathrm{ad}$ is the mean adiabatic gas temperature of the universe and $x_{\rm in}$ is the initial ionized fraction. In the HMXB simulation the cells that might need correction are in the same locations as in the stellar-only simulation, since the ionizing sources are identical between the two simulations and the co-located HMXBs do not contribute enough ionizing photons to significantly grow the primarily stellar-radiation-driven ionized regions (which we have tested using high-resolution simulations and analytic estimates). \textit{Calculating the temperature of H~II regions in the stellar-only simulation:} The softer stellar spectra yield sharp H~II region boundaries, separating them from the cold, adiabatically cooling IGM (due to the lack of shock heating in our simulations, the treatment of which we leave for future work). We thus assume that the temperature of the neutral gas is that adiabatic temperature, $T_\mathrm{HI,s}=T_\mathrm{ad}$.
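The unmixing step described above amounts to inverting the volume-weighted average of a two-phase cell. A minimal sketch, assuming (as stated above) that the neutral phase in the stellar-only run sits at the adiabatic temperature; function names are ours:

```python
def t_hii_stellar(t_cell_s, t_ad, x):
    """H II phase temperature in the stellar-only run, obtained by
    inverting t_cell_s = x * T_HII + (1 - x) * t_ad."""
    return (t_cell_s - t_ad * (1.0 - x)) / x

def t_hi_xray(t_cell_x, t_hii_s, x):
    """Neutral-phase temperature in the HMXB run, assuming the H II
    phase temperature matches the stellar-only run."""
    return (t_cell_x - t_hii_s * x) / (1.0 - x)
```

With, say, $x = 0.3$, an H~II phase at $10^4\,$K, an adiabatic temperature of $5\,$K and an X-ray-heated neutral phase at $50\,$K, both phase temperatures are recovered exactly from the two cell averages.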
The temperature of the H~II regions in each marked cell is thus calculated using: \begin{equation} T_\mathrm{HII,s} = \frac{T_\mathrm{c,s} - T_\mathrm{ad} (1-x)}{x}, ~\label{thii} \end{equation} where $T_\mathrm{HII,s}$ is the temperature of the H~II region, $T_\mathrm{c,s}$ is the cell-average temperature given by \textsc{\small C$^2$-Ray}, $T_\mathrm{ad}$ is the adiabatic temperature of the universe and $x$ is the volume-weighted ionized fraction of H. \textit{Calculating $T_\mathrm{HI}$ from the HMXB simulation:} The temperature of the neutral IGM in each marked cell in the HMXB simulation is calculated using: \begin{equation} T_\mathrm{HI,x} = \frac{T_\mathrm{c,x} - T_\mathrm{HII,s} x}{1-x}, \end{equation} where $T_\mathrm{c,x}$ is the cell-average temperature given by \textsc{\small C$^2$-Ray} for the HMXB simulation and $T_\mathrm{HII,s}$ is from Equation~\ref{thii}. Here, we assume that the temperature of the H~II regions is the same between the two simulations, since the local heating in the H~II region is strongly dominated by the stellar emission. This similarity was verified by high-resolution tests and analytical estimates, which showed that the additional X-ray heating is negligible. In some cases, the low-mass sources are suppressed in the HMXB case, but not in the stellar-only case, due to a marginally higher ionized fraction. These very rare cells are assigned the average temperature of their neighbours. In summary, we use the temperatures of the neutral gas in each cell ($T_\mathrm{HI,s}$ and $T_\mathrm{HI,x}$, respectively, for the two simulations), as calculated above, to derive the 21-cm $\delta T_{\rm b}$. \section{RESULTS} \label{sec:results} \subsection{REIONIZATION AND THERMAL HISTORIES} \label{sec:reion_hist} \begin{figure} \includegraphics[width=3.6in]{plot1.png} \caption{(top) The mean ionized fraction by volume of each species: HII - solid lines, HeII - dashed lines and HeIII - dotted lines.
(bottom) The volume-weighted mean temperature - dashed lines, median temperature - solid lines and T$_{\rm CMB}$ - dotted line. In both plots the HMXB case is shown in red and the stellar-only case is shown in blue. \label{fig:meanhist}} \end{figure} The 21-cm signal is affected by the thermal and ionization histories of the IGM. In the upper panel of Fig.~\ref{fig:meanhist}, we show the mean volume-weighted ionized fraction evolution for the species present in our simulations: H~II (solid lines), He~II (dashed lines), and He~III (dotted lines) for the HMXB case (red lines) and the stellar-only case (blue lines). The ionization of H~I and He~I is largely driven by the hard-UV photons of stars, which are more abundant than X-ray photons. The effect of the latter is largely limited to increasing the He~III abundance by about an order of magnitude (while remaining low) due to their high energy per photon. In the lower panel, the volume-weighted mean (dashed lines) and the median (solid lines) temperatures are shown for the HMXB case (red lines) and the stellar-only case (blue lines). For the HMXB case, the mean temperatures are increased modestly, surpassing $T_{\mathrm{CMB}}$ earlier than in the stellar-only case, with both occurring around $z\sim16$. The differences between the mean $T_{\rm K}$ of the two cases (with and without X-rays) grow at later times, rising above 50~per~cent for $z<13$. As the 21-cm signal probes neutral hydrogen, the median temperature of the IGM is more relevant, since the mean is skewed towards higher values by the hot, ionized regions. In the presence of X-rays, the median surpasses $T_{\mathrm{CMB}}$ just before $z=14$, while in their absence the neutral IGM remains cold.
The only previous full numerical simulations to take into account X-ray sources are from \citet{Baek2010}\footnote{The simulations in \citet{Xu2014} and \citet{Ahn2015} are focused on a single X-ray source in a zoomed region of a cosmological volume, so are not directly comparable to our results here.}. However, the mass resolution of these simulations is approximately 600 times lower than ours, and the volume is 15 times smaller. As a consequence, their first sources of any type of radiation appear around $z\approx 14$, approximately when the neutral IGM in our simulation has already been globally heated to well above $T_{\rm CMB}$. In other words, their simulations describe a scenario in which X-ray sources appear very late, are relatively rare and bright, and are coincident with substantial stellar emission. This situation is very different from our case, in which the first X-ray sources appear at $z\approx 23$ and large numbers of relatively faint sources heat the neutral IGM well before any substantial reionization. We will, therefore, not further compare the details of our results to those of \citet{Baek2010}. More detailed information about the temperature distributions is obtained from the corresponding probability distribution functions (PDFs), shown in Fig.~\ref{fig:hist}. Again, the HMXB case is shown in red and the stellar-only case in blue. These PDFs were generated from the coeval simulation cubes using 100 bins and normalised to have a total area of one. The stellar-only distributions are clearly bimodal, with a few hot, partially ionized regions and the majority of cells remaining very cold. This behaviour is expected, given the very short mean free path of the ionizing photons in this case, which yields sharp ionization fronts. In contrast, when X-rays are present, their long mean free paths lead to gas heating spreading quickly and widely, with all cells being affected.
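The PDFs of Fig.~\ref{fig:hist} can be reproduced from a coeval cube in a few lines; a sketch (the binning and normalisation follow the description above; everything else, including the function name, is our choice):

```python
import numpy as np

def temperature_pdf(t_cube, bins=100):
    """Histogram of cell temperatures, normalised to unit area."""
    pdf, edges = np.histogram(np.ravel(t_cube), bins=bins, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, pdf
```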
The distribution is strongly peaked, relatively wide, and gradually moves towards higher temperatures, with typical values above 100~K by $z=13.2$. Our thermal history is similar to that of Case A (Pop.~II stars) in \citet{Pritchard2007} and case `$\log\zeta_{\rm X}=55$' in \citet{2015MNRAS.454.1416W}. These studies do not provide temperature PDFs or median values. The lightcones (spatial-redshift/frequency slices) provide a visual representation of the quantities discussed above, including spatial variations and evolution over time (Fig.~\ref{fig:lightcones:xfrac}). These lightcones are constructed by taking a cross section of the simulation volume along the line of sight and continually interpolating the relevant quantity in time, using the spatial periodicity of our cosmological volume. In Fig.~\ref{fig:lightcones:xfrac} (top panels), we show the hydrogen ionization lightcone. As expected, given the very similar mean ionization fractions in the two simulations, the morphology of hydrogen ionization is broadly similar. However, the hard photons, which penetrate deep into the neutral regions, produce a low-level but widespread ionization of the IGM and `fuzzier', less clearly defined H~II regions when X-ray sources are present. Given their similar ionization potentials, the first ionization of helium (not shown here) closely follows that of hydrogen. In contrast, the second helium ionization potential is sufficiently higher that significant differences arise between the models (Fig.~\ref{fig:lightcones:xfrac}, bottom panels). The 50,000~K blackbody stellar spectra produce very few photons able to fully ionize a helium atom; thus, any He~III produced is concentrated in the immediate surroundings of the ionizing sources. The X-rays, however, are very efficient in fully ionizing helium, producing widespread ionization (albeit still at a relatively low level). This ionization is also quite patchy on large scales, especially at early times.
The exact morphology depends on the spectra, abundance, and clustering of the X-ray sources. \begin{figure} \includegraphics[width=0.475\textwidth]{plot1b.png} \caption{Histograms of the temperature at the full simulation resolution for the HMXB (red) and stellar-only (blue) cases for several illustrative redshifts. \label{fig:hist}} \end{figure} The lightcones in Fig.~\ref{fig:lightcones:temp} (top panels) show the spatial geometry and evolution of the IGM heating. In both models, the soft, stellar radiation ionizes and heats the immediate environments of the sources to $T\sim10^4$~K, seen as dark regions, with the majority of the IGM remaining completely cold. The X-ray radiation propagates much further, starting to heat the gas throughout. Large, considerably hotter regions (black in the image) develop, quickly reaching tens of Mpc across before $z\sim15$. These regions gradually expand and merge, resulting in thorough heating to hundreds of degrees by $z\sim14$; though, large cold regions still remain present down to $z\sim13.5$. At early times, X-ray heating thus increases the temperature inhomogeneity, but the distribution becomes more homogeneous at later times. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{lightcone_xfrac.png} \includegraphics[width=0.8\textwidth]{lightcone_xfracHe2.png} \caption{The position-redshift lightcone images of the ionized volume fraction of hydrogen (top two panels) and He~III (bottom two panels). Shown are the HMXB case (top panel in each pair), and the stellar-only case (lower panel in each pair).} \label{fig:lightcones:xfrac} \end{figure*} \subsection{21-CM DIFFERENTIAL BRIGHTNESS TEMPERATURE} \label{sec:21-cm_maps} We are primarily interested in the directly observable quantity of this epoch, the 21-cm $\delta T_{\rm b}$. As discussed in Section~\ref{sec:theory}, $\delta T_{\rm b}$ depends on the density, ionization, and temperature fields.
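For reference, this dependence can be written in the commonly used approximate form (a standard expression from the literature, quoted here for convenience and neglecting peculiar velocities; it is not reproduced from Section~\ref{sec:theory}):

```latex
\begin{equation*}
\delta T_{\rm b} \approx 27\, x_{\rm HI}\,(1+\delta)
\left(1-\frac{T_{\rm CMB}}{T_{\rm S}}\right)
\left(\frac{1+z}{10}\,\frac{0.15}{\Omega_{\rm m}h^2}\right)^{1/2}
\left(\frac{\Omega_{\rm b}h^2}{0.023}\right)\,{\rm mK},
\end{equation*}
```

where $x_{\rm HI}$ is the neutral hydrogen fraction, $\delta$ the density contrast, and $T_{\rm S}$ the spin temperature. The factor $(1-T_{\rm CMB}/T_{\rm S})$ is what drives the absorption-to-emission transition discussed below.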
In Fig.~\ref{fig:lightcones:temp} (bottom panels), we show the $\delta T_\mathrm{b}$ lightcones corresponding to the same cross section through the position-redshift image cube as in Fig.~\ref{fig:lightcones:xfrac} and Fig.~\ref{fig:lightcones:temp} (top panels). The morphology of the $\delta T_\mathrm{b}$ fluctuations is closely related to that of the heating, demonstrating the importance of temperature variations, especially during the early stages of the EoR. Long-range X-ray heating produces a gradual, extended transition from absorption into emission. Large-scale fluctuations are significant throughout, and the first emission regions appear part way through the simulation, at $z=18$, after enough X-rays have penetrated into the IGM near the ionizing sources to heat it. After this, these bubbles grow quickly, with some reaching tens of Mpc in size by $z\sim15.5$. Only after $z\sim13.8$ have all large regions of 21-cm absorption disappeared. Compared to \citet{Mesinger2013}, the results of our simulation appear to be closest to their case with X-ray efficiency $f_{\rm x}=1$. They also find an extended transition from absorption to emission starting at $z\sim20$, which completes around $z\sim14$. However, by that time their hydrogen ionization fraction is around 10~per~cent, which is substantially higher than in our case. The majority of the difference is likely due to the fact that their sources are more efficient, as indicated by a completion of reionization by $z_{\rm reion}\sim8$ in their case versus $z_{\rm reion}<6.5$ for our source model \citep{2016MNRAS.456.3011D}. In the stellar-only case, the signal remains in absorption throughout the simulation, as there are no photons with long enough mean free paths to penetrate and heat the neutral IGM. As a result, the IGM simply cools adiabatically as the universe expands\footnote{There is also some Compton heating due to CMB scattering, which we do include in our simulation.
This heating is inefficient at this point due to the low density of the IGM.}. In reality, the temperature of the IGM would also be affected by shock heating; however, this has not been included in our simulations and is left for future work. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{lightcone_temp.png} \includegraphics[width=0.8\textwidth]{lightcone_dbt.png} \caption{The position-redshift lightcone images of the IGM gas temperature (top two panels) and 21-cm differential brightness temperature (bottom two panels). Shown are the HMXB case (top panel in each pair), and the stellar-only case (lower panel in each pair).} \label{fig:lightcones:temp} \end{figure*} In Fig.~\ref{fig:powerspectra}, we show the power spectra of $\delta T_\mathrm{b}$ at several key redshifts. During the early evolution (shown is $z=20.134$), only a modest amount of heating of the IGM has yet occurred in the X-ray model. Thus, the large-scale 21-cm fluctuations are dominated by the density variations, which are the same in the two cases. Therefore, the 21-cm power spectra are almost identical at this stage, with power suppressed slightly on all scales in the HMXB case. The high-$T_\mathrm{K}$ limit results in significantly lower fluctuations, reflecting the lower average $\delta T_\mathrm{b}$ in the emission regime compared to the absorption or mixed regimes. As the evolution progresses and the X-ray sources contribute to long-range heating, the 21-cm power is significantly boosted on large scales and a well-defined, if broad, peak develops around a scale of $\sim\!43$~Mpc ($z=15-16$). The small-scale power decreases due to the stronger heating in the vicinity of sources, which brings the temperature contrast with the CMB down and closer to the high-$T_\mathrm{K}$ limit. The overall fluctuations peak at $\sim14~$mK around $z\sim15$ in the HMXB model.
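The dimensionless power spectra shown in these figures, $\Delta^2_{\rm 21cm}(k)=k^3P(k)/2\pi^2$, can be estimated from a coeval $\delta T_{\rm b}$ cube by spherically averaging the FFT power. The following numpy sketch is our own illustration of that standard estimator, under a common normalisation convention; it is not the actual analysis code used for the paper:

```python
import numpy as np


def dimensionless_power_spectrum(cube, box_size, n_bins=8):
    """Spherically averaged Delta^2(k) = k^3 P(k) / (2 pi^2) of a 3D field.

    cube     : cubic 3D array (e.g. delta T_b in mK)
    box_size : comoving side length of the box (e.g. Mpc)
    """
    n = cube.shape[0]
    delta = cube - cube.mean()
    # P(k) with the convention P = |FFT(delta)|^2 * V / N_cells^2
    power = np.abs(np.fft.fftn(delta)) ** 2 * box_size**3 / n**6

    # |k| on the FFT grid
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    # logarithmic bins from the fundamental mode to the grid maximum
    k_min = 2.0 * np.pi / box_size
    bins = np.logspace(np.log10(k_min), np.log10(kmag.max()), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)  # index 0 holds the k=0 mode

    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins + 2)
    counts = np.bincount(idx, minlength=n_bins + 2)
    pk = sums[1:n_bins + 1] / np.maximum(counts[1:n_bins + 1], 1)

    k_cen = np.sqrt(bins[1:] * bins[:-1])  # geometric bin centres
    return k_cen, k_cen**3 * pk / (2.0 * np.pi**2)
```

A uniform cube gives zero power in every bin, which is a quick sanity check on the mean subtraction.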
\citet{Pritchard2007} find a heating peak that is in agreement with our result, at similar scales ($k\sim0.14\,\rm Mpc^{-1}$), with an amplitude of $\Delta_{\rm 21cm}\sim 11.5$~mK or $\Delta_{\rm 21cm}\sim20$~mK depending on the source model used. Results from \citet{2014MNRAS.443..678P} are also in rough agreement with our own, with a power spectrum peak at a similar scale (44~Mpc) and at a similar redshift ($z\sim15-16$). Their peak value is also in agreement, at $\Delta_{\rm 21cm}\sim14\,$mK. Over time, as emission patches develop and the overall IGM is gradually heated ($z=12-14$), the power spectra slowly approach the high-$T_\mathrm{K}$ limit, but do not reach it even by the end of our simulation. At that point ($z=12.7$), the IGM has been heated well above $T_{\mathrm{CMB}}$ throughout and therefore is in 21-cm emission everywhere. However, the neutral IGM temperature remains at only a few hundred degrees and is still considerably spatially inhomogeneous, and so is not yet fully in the high-$T_\mathrm{K}$ limit, where $\delta T_{\rm b}$ becomes independent of the actual gas temperature value. \begin{figure*} \centering \makebox[\textwidth][c]{\includegraphics[width=1.0\textwidth]{4_powerspectra.png}} \caption{The 21-cm power spectra from our simulations at several key stages of the evolution, with the high-$T_\mathrm{K}$ limit results for reference. The high-$T_\mathrm{K}$ limit is shown in yellow and, as before, the results from the HMXB case are shown in red and the stellar-only case in blue.} \label{fig:powerspectra} \end{figure*} \begin{figure} \includegraphics[width=0.45\textwidth]{kplot5.png} \caption{The evolution of the 21-cm power spectrum modes at $k=0.1, 0.5$ and $1\,\rm Mpc^{-1}$ for the two simulations and the high-$T_\mathrm{K}$ limit for reference, as labelled.\label{fig:kplot}} \end{figure} The evolution is markedly different in the stellar-only case.
The 21-cm fluctuations here are dominated by density fluctuations, as the stars do not produce many photons able to penetrate into and heat the neutral, cold IGM. Consequently, the shape of the power spectrum remains almost a power law. Throughout the simulation cosmic structures continue to form, so the amplitude of these fluctuations gradually increases over time. At late times, the 21-cm power on all scales is far higher in the stellar-only case than in the HMXB case, since the mean temperature in the latter approaches and then surpasses $T_{\mathrm{CMB}}$, while the IGM remains very cold in the former. The evolution of several particular $k$-modes ($k=0.1, 0.5$, and $1\,\rm Mpc^{-1}$) is shown in Fig.~\ref{fig:kplot}. With increasing $k$, the 21-cm fluctuations deviate from the density fluctuations later, due to the more advanced structure formation at small scales. With X-rays present, there is a peak from heating at all scales considered here, occurring between $z\sim15 - 17$. The peak becomes wider and more pronounced at larger scales, which match the typical scale of the 21-cm fluctuations due to inhomogeneous heating (on the order of tens of Mpc). At larger scales, X-ray heating from the first sources removes the colder regions close to these sources, resulting in an initial dip in the power. Subsequently, the power rises as the X-ray heating extends inhomogeneously into the IGM. Both of these features are present, but less pronounced, at the intermediate scale ($k=0.1\,\rm Mpc^{-1}$). In the absence of X-rays, the evolution is markedly different. At later times, when the X-ray heating of the IGM has become homogeneous, the corresponding stellar-only result has higher power on all scales due to the IGM remaining cold. In the high-$T_\mathrm{K}$ limit, the fluctuations are much lower and mostly flat at all scales due to the lack of cold IGM, which results in a lower amplitude signal driven initially by the density fluctuations only.
The evolution of the $k=0.1\,\rm Mpc^{-1}$ mode is roughly in agreement with \citet{Mesinger2013}; however, their peak occurs earlier than in our results, again likely due to the higher assumed efficiency of their sources. Probably for the same reason, \citet{Pritchard2007} and \citet{Fialkov2014} find the peak to occur later. Beyond the timing of the peak, the amplitude is in rough agreement with the semi-numerical results: \citet{Mesinger2013}, \citet{Pritchard2007}, and \citet{Santos2010} find a marginally higher peak value of 20~mK, while \citet{Fialkov2014} find a lower one of 7~mK. In Fig.~\ref{fig:dbtmaps}, we show maps of the mean-subtracted $\delta T_{\rm b}$ $(\delta T_\mathrm{b}-\overline{\delta T_\mathrm{b}})$ at several key epochs of the redshifted 21-cm evolution, smoothed with a Gaussian beam that is roughly twice as broad as what can be achieved with the core of SKA1-Low, which will have maximum baselines of around 2~km. We averaged over a frequency bandwidth that corresponds to the same spatial scale as the Gaussian beam. To mimic the lack of sensitivity at large scales, which interferometers have due to the existence of a minimum distance between their elements, we also subtracted the mean value from the images. We do not include instrument noise and calibration effects in these maps. However, at this resolution, SKA1-Low is expected to have a noise level of $\sim 10$~mK per resolution element for 1000 hours of integration \citep{2015aska.confE...1K}. The images show a clear difference between the cases with and without X-ray heating even at these low resolutions, suggesting that SKA should be able to distinguish between these two scenarios. As the variations over the field of view (FoV) reach values of 50~mK, we can also conclude that SKA1-Low will be able to image these structures for deep integrations of around 1000 hours.
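The map processing just described (Gaussian beam smoothing followed by subtraction of the mean, mimicking the interferometer's missing zero-spacing information) can be sketched as follows. This is an illustrative numpy implementation with a hypothetical function name, assuming periodic boundaries; it is not the pipeline code used for Fig.~\ref{fig:dbtmaps}:

```python
import numpy as np


def mock_observed_map(image, fwhm_pix):
    """Smooth a 2D delta T_b slice with a Gaussian beam (applied in Fourier
    space, assuming periodic boundaries) and subtract the mean, since an
    interferometer is insensitive to the uniform (zero-baseline) component."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]  # cycles per pixel
    fx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a Gaussian of width sigma (in pixels)
    beam = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    smoothed = np.fft.ifft2(np.fft.fft2(image) * beam).real
    return smoothed - smoothed.mean()
```

By construction the returned map has zero mean, and smoothing suppresses the small-scale variance, which is why the small stellar-only ionized regions wash out at this resolution.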
Previous expectations were that SKA1-Low would only be able to make statistical detections of the 21-cm signal from the Cosmic Dawn. Our results indicate that, at least from the perspective of signal to noise, imaging should be possible. Once again, the signal from the partially heated IGM peaks around $z\sim15$, when there are large regions -- tens of Mpc across -- in either emission or absorption. This peak is followed by gradual, thorough heating of the IGM above $T_{\mathrm{CMB}}$, bringing the signal into emission and, thus, decreasing the overall fluctuations. In the stellar-only case, the maps remain fully in absorption at these resolutions, since the ionized regions are much smaller than the beam extent and are smoothed away. Nonetheless, considerable fluctuations remain, with high overall amplitude due to the very cold IGM in that case. Over time, as further structure formation occurs and larger regions are substantially affected, the contrast in the images becomes greater. \subsection{21-CM ONE-POINT STATISTICS} \label{sec:21-cm} The 21-cm fluctuations are typically non-Gaussian in nature and are not fully described by the power spectra alone, and imaging might not be possible in all regimes for sensitivity reasons. Therefore, the one-point statistical properties of the 21-cm signal are also of great interest, since they quantify other aspects of the 21-cm signal and enable comparisons with past works and future observations. The rms is defined as: \begin{equation} \mathrm{rms}(y) \equiv \sigma=\sqrt{\frac{\sum_{i=1}^{N}(y_i - \overline{y})^2}{N}}, \label{eq:rms} \end{equation} and we use the dimensionless definitions for the skewness and kurtosis, as follows: \begin{equation} \mathrm{Skewness}(y) = \frac{1}{N} \frac{\sum_{i=1}^{N}(y_i - \overline{y})^3}{\sigma^{3}}, \label{eq:skew} \end{equation} and \begin{equation} \mathrm{Kurtosis}(y) = \frac{1}{N} \frac{\sum_{i=1}^{N}(y_i - \overline{y})^4}{\sigma^4}.
\label{eq:kur} \end{equation} \begin{figure*} \centering \makebox[\textwidth][c]{\includegraphics[width=1.\textwidth]{3_dbtmap.png} } \caption{Mean-subtracted $\delta T_{\rm b}$ maps smoothed with a Gaussian beam with the FWHM corresponding to a 1.2~km maximum baseline at the relevant frequency, as labelled. The images are bandwidth-smoothed with a top hat function (width equal to the distance corresponding to the beam width). The X-ray simulation runs along the top row and the stellar-only case is below, with snapshots of the same redshifts being vertically aligned.} \label{fig:dbtmaps} \end{figure*} Here, $y$ is the quantity of interest (in this case $\delta T_{\rm b}$), $N$ is the number of data points, $\overline{y}$ is the mean value of $y$, and $\sigma^2$ is the variance of $y$. These quantities are calculated from coeval simulation cubes, smoothed with a Gaussian beam corresponding to a 1.2~km maximum baseline at the relevant frequency and a bandwidth corresponding to the same spatial extent as the full-width half-maximum (FWHM) of the beam. This smoothing is the same as was used for the images in Fig.~\ref{fig:dbtmaps}. In the top-left panel of Fig.~\ref{fig:statistics}, we show the global value of $\delta T_{\rm b}$, calculated as the mean signal from our simulations, $\overline{ \delta T_\mathrm{b}}$. The high-$T_\mathrm{K}$ limit corresponds to the dashed (cyan) lines. In both simulation models, the global value starts negative, due to the initially cold IGM, and drops further as the universe expands and cools adiabatically. In the stellar-only case (thin, blue line), $\overline{ \delta T_\mathrm{b}}$ rises slowly thereafter, starting at $z\sim16.5$, as the highest density peaks become ionized. Even its highest value remains negative, since the neutral IGM never gets heated in this scenario. $\overline{ \delta T_\mathrm{b}}$ is significantly higher in the HMXB case (thick, red line), starting to rise from around $z = 20$ due to the heating of the IGM.
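The moment definitions in Eqs.~(\ref{eq:rms})--(\ref{eq:kur}) translate directly into code. The numpy sketch below is our own illustration of those formulas, not the analysis code itself; note that, for a Gaussian field, this kurtosis convention evaluates to 3, and the excess kurtosis is this value minus 3:

```python
import numpy as np


def rms(y):
    """Eq. (rms): standard deviation about the mean."""
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.mean((y - y.mean()) ** 2))


def skewness(y):
    """Eq. (skew): dimensionless third moment."""
    y = np.asarray(y, dtype=float)
    return np.mean((y - y.mean()) ** 3) / rms(y) ** 3


def kurtosis(y):
    """Eq. (kur): dimensionless fourth moment
    (3 for a Gaussian; subtract 3 for the excess kurtosis)."""
    y = np.asarray(y, dtype=float)
    return np.mean((y - y.mean()) ** 4) / rms(y) ** 4
```

In practice these would be evaluated on the beam- and bandwidth-smoothed coeval cubes described above.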
The global value becomes positive just before $z=14$, and by the end of the simulation it approaches the high-$T_\mathrm{K}$ limit. The evolution of the global 21-cm signal is similar to that of the analytical and semi-numerical models in the literature \citep[e.g.][]{Pritchard2007,Mesinger2013}, apart from the timing of this transition, which depends on the specific assumptions made about the ionizing and X-ray sources. One difference from these models is that we do not model the early Ly~$\alpha$\ background, but assume efficient WF coupling at all times. When this assumption is not made, weaker WF coupling early on produces a shallower absorption signal (typically $\overline{ \delta T_\mathrm{b}}_{\rm min}>-180\,\rm mK$ instead of $\overline{ \delta T_\mathrm{b}}_{\rm min}\sim-200\,\rm mK$ as in our case). While the incomplete Ly~$\alpha$\ coupling could be an important effect at the earliest times ($z>20$), we focus on the subsequent X-ray heating epoch, where this effect should have minimal impact. The lower left-hand panel of Fig.~\ref{fig:statistics} shows the rms, or standard deviation, of $\delta T_\mathrm{b}$ as a function of redshift, calculated according to Eq.~\ref{eq:rms} and for the resolution specified below Eq.~\ref{eq:kur}. Before the X-ray heating is able to significantly impact the cold IGM, our two scenarios have a similar rms evolution, which is dominated by the density fluctuations and the adiabatic cooling of the IGM. Later on, the rms drops slightly in the HMXB case due to the local X-ray heating from the first sources, which brings the local IGM temperature closer to that of the CMB. As the characteristic scale of the X-ray heating fluctuations increases (cf.~the power spectra evolution), the rms of the X-ray case starts rising again and peaks at $\sim 11.5$~mK around $z\sim16.5$.
Thereafter, the rms fluctuations gradually decrease, due to $\overline{\delta T_\mathrm{b}}$ rising towards positive values and the 21-cm absorption turning into emission. The rms in the stellar-only case continuously rises, as the density fluctuations increase. Although local ionization introduces small-scale $\delta T_\mathrm{b}$ fluctuations, these are smoothed out by the beam- and bandwidth-averaging. At later times in the HMXB case, the rms asymptotically approaches the high-$T_\mathrm{K}$ limit, but does not quite reach it by the end of the simulation at $z=12.7$. In the stellar-only case, the rms of $\delta T_\mathrm{b}$ is driven purely by the density (and, later on, to a small extent ionization) fluctuations. The rms therefore continues to rise as structure formation proceeds, ionization begins, and the IGM cools further. Note that in reality the temperature of the neutral IGM is likely to be affected by shock heating, which we leave to future work. The features of the rms evolution and their timing are dependent on the resolution available and, hence, on the details of the radio interferometer. \begin{figure*} \centering \makebox[\textwidth][c]{\includegraphics[width=1.0\textwidth]{5_statistics.png}} \caption{Statistics of the 21-cm signal from both our simulations, as well as the high-$T_\mathrm{K}$ limit. The top-left panel shows the mean value of $\delta T_{\rm b}$, the bottom-left panel the rms, the top-right panel the skewness and the bottom-right panel the kurtosis. The points are the results calculated from smoothed coeval boxes from our simulations and the fitted line is a cubic spline of these data.} \label{fig:statistics} \end{figure*} The higher order statistics of $\delta T_{\rm b}$ are also affected by the inclusion of X-rays. The skewness of $\delta T_{\rm b}$ is shown in the top-right panel of Fig.~\ref{fig:statistics}. In both cases, the skewness starts close to zero, tracking the initial, Gaussian density fields.
The skewness then gradually increases in the HMXB case as hot regions surrounding sources positively skew the data. It subsequently increases rapidly, as large regions of the IGM are heated, and peaks around $z \sim 18$ with a value of 0.85. After this peak, the skewness decreases as the heating becomes more homogeneous and reaches zero, before increasing again and approaching the high-$T_\mathrm{K}$ limit (shown as a dashed, cyan line). The skewness in the stellar-only case remains negative throughout the simulation and only begins to rise toward the end of the simulation, when the ionized fraction becomes non-negligible. Once again, the high-$T_\mathrm{K}$ limit is not valid for the skewness at the early times considered in this work, except for the very last stages of the HMXB case. The skewness from the high-$T_\mathrm{K}$ limit and that from the stellar-only case are mirror images of each other, because both are dominated by density fluctuations: $T_{\rm S} \gg T_{\rm CMB}$ in the high-$T_\mathrm{K}$ limit, while the stellar-only case is governed by the adiabatic temperature of the universe. The high-$T_\mathrm{K}$ limit is forced to be in emission, whereas the stellar-only case is in absorption. Our skewness results are in reasonable agreement with the ones for the low X-ray efficiency case `$\log\zeta_{\rm X}=55$' in \citet{2015MNRAS.454.1416W} (cf.~their Fig.~13, but note that they show the dimensional skewness, which is the same as the dimensionless skewness multiplied by the corresponding rms). The kurtosis of $\delta T_{\rm b}$ from our two simulations is displayed in the bottom-right panel of Fig.~\ref{fig:statistics}. Initially, the kurtosis of both models is close to zero, as the Gaussian density fluctuations dominate at these early stages. As long-range heating develops, the kurtosis of the HMXB case increases and peaks at $z \sim 18$ with a value of 1.1.
Later, the kurtosis decreases again to negative values, before increasing once more and approaching temperature saturation (shown as a dashed, cyan line). This statistic follows roughly the same pattern as the skewness, but with a somewhat different functional shape and timing. The stellar-only case kurtosis closely tracks that of the high-$T_\mathrm{K}$ limit throughout the simulation, deviating only slightly at the end due to the small amount of ionization present. As there is no heating in this case, the signal is mostly Gaussian, with small non-Gaussianity arising only from the density fluctuations. \section{CONCLUSIONS} \label{sec:conclusions} We present the first large-volume, fully numerical structure formation and radiative transfer simulations of the IGM heating during the Cosmic Dawn by the first stellar and X-ray sources. We simulate the multi-frequency transfer of both ionizing and X-ray photons and solve self-consistently for the temperature state of the IGM. While the exact nature and properties of the first X-ray sources are still quite uncertain, our results demonstrate that, under a reasonable set of assumptions, these sources produce significant, early, and inhomogeneous heating of the neutral IGM and, thus, considerably impact the redshifted 21-cm signal. Throughout this paper, we focus on the expected 21-cm signal from this epoch and its statistics. In this work, we consider relatively soft-spectrum X-ray sources, which trace the star formation at high redshift. At these high redshifts, these sources are still fairly rare and, for reasonable assumed efficiencies, the addition of X-rays does not significantly affect the evolution of the mean fractions of H~II and He~II. The fraction of He~III, however, is boosted by almost an order of magnitude compared to the stellar-only case, although it remains quite low overall. The high energies and long mean free paths of the hard X-ray radiation make it the dominant driver of the heating of the neutral IGM.
Pop.~II stars, even massive ones, do not produce a significant amount of such hard radiation. Therefore, both the morphology and the overall amount of heating change dramatically when X-ray sources are present. The mean and the median temperature both increase considerably compared to the stellar-only case, with the mean eventually reaching $\sim10^3$~K by $z\sim13$ (the median, which only reaches $\sim200$~K, better reflects the neutral IGM state as it is less sensitive to the very high temperatures in the ionized regions). The X-ray heating is long-range and, therefore, widely distributed throughout the IGM. This heating is also highly inhomogeneous, as evidenced by the temperature PDFs, maps, and evolution seen in the lightcone visualisations. The neutral regions are heated by the X-ray sources and go fully into 21-cm emission with respect to the CMB before $z=13$, while with stellar-only sources the IGM remains in absorption throughout the Cosmic Dawn. The presence of X-rays, therefore, results in an early, but extended ($\Delta z\sim 7$) transition into emission. The 21-cm fluctuations initially ($z>20$) track the density fluctuations due to the still insignificant heating and ionization fluctuations. However, the temperature fluctuations due to X-ray heating quickly boost the large-scale 21-cm fluctuations to much higher values. At a resolution of $\sim 10-12$ arcmin for redshifts 15 -- 17, the fluctuations are large enough to be a factor of several above the expected noise level of SKA1-Low, which implies the possibility of observing not only power spectra, but also coarse images of the 21-cm signal from the Cosmic Dawn. For the same resolution, the $\delta T_{\rm b}$ rms in the presence of X-rays peaks at $\sim11.5$~mK around $z\sim16.5$. As the X-rays heat the neutral IGM, a broad peak develops at $k\sim 0.1$~Mpc$^{-1}$, corresponding to spatial scale of about 43 Mpc, at $z \sim14-15$. 
As the IGM heats up and the absorption gradually turns into emission, the 21-cm fluctuations for the HMXB case decrease and asymptote to the high-$T_\mathrm{K}$ limit, which is not fully reached by the end of our simulation ($z\sim12.7$), even though by that time the mean IGM is heated well above the CMB temperature. In contrast, the stellar-only case fluctuations are still increasing steeply by $z\sim12$, as they are driven by the cold IGM. In the HMXB case, the distribution of the $\delta T_{\rm b}$ fluctuations shows a clear non-Gaussian signature, with both the skewness and kurtosis peaking when the fluctuations start rising. By the end of the simulation, the skewness and kurtosis approach, but do not reach, the high-$T_\mathrm{K}$ limit. For soft radiation sources, the non-Gaussianity is driven by density fluctuations only, producing a smooth evolution. The often-used high spin temperature limit, $T_\mathrm{S} \gg T_{\mathrm{CMB}}$, is not valid throughout the X-ray heating epoch as long as any IGM patches remain cold. When X-rays are present, even after the IGM temperature rises above the CMB everywhere (and thus the 21-cm signal transits into emission), significant temperature fluctuations remain and contribute to the 21-cm signal. The neutral regions do not asymptote to the high-temperature limit until quite late in our model, at $z\sim12$. This asymptotic behaviour can readily be seen in the power spectra and statistics of the 21-cm signal. Soft, stellar-only radiation has short mean free paths and, therefore, never penetrates into the neutral regions, leaving a cold IGM. Previous work in this area has largely been limited to approximate semi-analytical and semi-numerical modelling \citep[e.g.][]{Pritchard2007,Mesinger2013,2014MNRAS.445..213F,2015MNRAS.451..467S, 2015MNRAS.454.1416W,2016arXiv160202873K}.
By their nature, such approaches do not apply detailed, multi-frequency RT, but rely on counting the photons produced in a certain region of space and comparing this to the number of atoms (with some correction for the recombinations occurring). The difference between the two determines the ionization state of that region. The X-ray heating is done by solving the energy equation using integrated, average optical depths and photon fluxes, and often additional approximations are employed as well \citep[e.g.][]{2015MNRAS.454.1416W}. These methods typically do not take into account nonlinear physics, spatially varying gas clumping or absorbers, or Jeans mass filtering of low mass sources. These differences make detailed comparisons with the previous results difficult, due to the very different modelling employed, and would require further study. Nonetheless, we find some commonalities and some disparities with our results, summarised below. Our thermal history is similar to that of the relevant cases in \citet{Pritchard2007} (their Case A) and \citet{2015MNRAS.454.1416W} (their case `$\log\zeta_{\rm X}=55$'). We find a quite extended transition between 21-cm absorption and emission, from the formation of the first ionizing and X-ray sources at $z\sim21$ all the way to $z\sim13$. This transition is somewhat more protracted than the one in the most similar scenarios ($f_{\rm x}=1$ and $5$) considered in \citet{Mesinger2013}, likely due to the higher star formation efficiencies assumed in that work. We find a clear X-ray heating-driven peak in the 21-cm power spectra at $k=0.1-0.2\,{\rm Mpc}^{-1}$, similar to the soft X-ray spectra peak found in \citet{2014MNRAS.443..678P} and at similar redshift ($z\sim 15-16$; though this depends on the uncertain source efficiencies). Their peak power, at $\Delta_{\rm 21cm}\sim14\,$mK, is in rough agreement with our results.
The general evolution of the power spectra found in \citet{Pritchard2007} appears similar, with the fluctuations at $k=0.1\,\rm Mpc^{-1}$ also peaking at $z\sim 15-16$ (although that only occurs at $z\sim 12-13$ for the scenario with less X-rays, again suggesting a strong dependence on the source model). The power spectra found are in reasonable agreement with our results, with peak values of $\Delta_{\rm 21cm}\sim19\,$mK or $\Delta_{\rm 21cm}\sim11.5\,$mK depending on the source model used by them, compared to $\Delta_{\rm 21cm}\sim14\,$mK for our HMXB case. The 21-cm skewness from the X-ray heating epoch is rarely calculated, but \citet{2015MNRAS.454.1416W} recently found a very similar evolution to ours (though shifted to somewhat higher redshifts), with a positive peak roughly coinciding with the initial rise of the 21-cm fluctuations due to the temperature patchiness. Their corresponding 21-cm $\delta T_{\rm b}$ PDF distributions during the X-ray heating epoch significantly differ from ours, however. At the epoch when $T_{\rm S}$ reaches a minimum, the semi-numerical model predicts a long tail of positive $\delta T_{\rm b}$, which does not exist in the full simulations. Around the $T_{\rm S}\sim T_{\rm CMB}$ epoch, our distribution is quite Gaussian, while \citet{2015MNRAS.454.1416W} find an asymmetric one (though, curiously, one with close to zero skewness, indicating that skewness alone provides a very incomplete description). Finally, in the $T_{\rm S}\gg T_{\rm CMB}$ epoch, both results yield Gaussian PDFs, but the simulated one is much narrower. Our models confirm that, for reasonable assumptions about the presence of X-ray sources, there is a period of substantial fluctuations in the 21-cm signal caused by the patchiness of the X-ray heating, and that this period precedes the one in which fluctuations are mostly caused by patchiness in the ionization.
However, since the nature and properties of X-ray sources remain unconstrained by observations, other scenarios, in which the heating occurs later, are also allowed. The currently ongoing observational campaigns of both LOFAR and MWA should be able to put constraints on the presence of spin temperature fluctuations for the range $z < 11$, which would then have clear implications for the required efficiency of X-ray heating at those and earlier redshifts. In the future, we will use simulations of the kind presented here to explore other possible scenarios, for example heating caused by rare, bright sources, as well as the impact of spin temperature fluctuations on all aspects of the 21-cm signal, such as redshift space distortions. \section{Acknowledgements} This work was supported by the Science and Technology Facilities Council [grant number ST/I000976/1] and the Southeast Physics Network (SEPNet). GM is supported in part by Swedish Research Council grant 2012-4144. This research was supported in part by the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure of the Universe''. We acknowledge that the results in this paper have been achieved using the PRACE Research Infrastructure resources Curie, based at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris, France, and MareNostrum, based in the Barcelona Supercomputing Center, Spain. Time on these resources was awarded by PRACE under PRACE4LOFAR grants 2012061089 and 2014102339 as well as under the Multi-scale Reionization grants 2014102281 and 2015122822. Some of the numerical computations were done on the Apollo cluster at The University of Sussex. \newpage
\section{\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ Detector and Dataset} \label{sec:detector} The analysis is based on data collected with the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ experiment \cite{Aubert:2001detector} at the PEP-II asymmetric-energy \ensuremath{e^+e^-}\xspace storage rings \cite{Slac:1993pep2} at the Stanford Linear Accelerator Center between October 1999 and July 2004. The \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ tracking system used for charged particle and vertex reconstruction has two main components: a silicon vertex tracker (SVT) and a drift chamber (DCH), both operating within a 1.5-T magnetic field of a superconducting solenoid. The transverse momentum resolution is 0.47\,\% at 1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. Photons are identified in an electromagnetic calorimeter (EMC) surrounding a detector of internally reflected Cherenkov light (DIRC), which associates Cherenkov photons with tracks for particle identification (PID). The energy of photons is measured with a resolution of 3\,\% at 1\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. Muon candidates are identified with the use of the instrumented flux return (IFR) of the solenoid. The detector covers the polar angle of $30^\circ < \theta < 140^\circ$ in the center of mass (c.m.) frame. The data sample consists of about 210.4\,\ensuremath{\mbox{\,fb}^{-1}}\xspace, corresponding to $(232 \pm 3) \times 10^{6}$ decays of \Y4S $\ensuremath{\rightarrow}\xspace$ \BB. We use Monte Carlo (MC) simulated events to determine background distributions and to correct for detector acceptance and resolution effects. 
The simulation of the $\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}$ detector is realized with $\mbox{\tt GEANT4}\xspace$ \cite{Agostinelli:2002Geant4}. Simulated \ensuremath{B}\xspace\ meson decays are generated using \mbox{\tt EvtGen}\xspace \cite{Lange:2001EvtGen}. Final state radiation is modeled with $ \texttt{PHOTOS}\xspace$ \cite{Richter-Was:1992PHOTOS}. The simulations of $\semilepXc$ decays use a parameterization of form factors for $\ensuremath{\Bbar}\xspace\ensuremath{\rightarrow}\xspace D^{*}\ell^-\ensuremath{\overline{\nu}}\xspace$~\cite{Duboscq:1996FormFactor}, and models for $\ensuremath{\Bbar}\xspace\ensuremath{\rightarrow}\xspace D \ell^-\ensuremath{\overline{\nu}}\xspace,D^{**}\ell^-\ensuremath{\overline{\nu}}\xspace$~\cite{Scora:1995FormFactor} and $\ensuremath{\Bbar}\xspace\ensuremath{\rightarrow}\xspace D \pi \ell^-\ensuremath{\overline{\nu}}\xspace, D^* \pi \ell^-\ensuremath{\overline{\nu}}\xspace$~\cite{Goity:1995SoftPion}. \section{Introduction} \label{sec:introduction} Measurement of moments of the hadronic-mass \cite{Csorna:2004CLEOMoments, Aubert:2004BABARMoments, Acosta:2005CDFMoments, Abdallah:2005DELPHIMoments, Schwanda:2007BELLEMassMoments} and lepton-energy \cite{Aubert:2004BABARLeptonMoments, Abdallah:2005DELPHIMoments, Abe:2005BELLELeptonMoments} spectra in inclusive semileptonic decays \semilepXc\ have been used to determine the non-perturbative QCD parameters describing these decays and the CKM matrix element \ensuremath{|V_{cb}|}\xspace. Combined fits to these moments and moments of the photon-energy spectrum in $\BtoXsGamma$ decays \cite{Chen:2001CLEOXsGammaMoments, Koppenburg:2004BELLEXsGammaInclusive, Aubert:2005BABARXsGammaExclusive, Aubert:2006XsGammaInclusive} in the context of Heavy Quark Expansions (HQE) of QCD have resulted in precise determinations of $\ensuremath{|V_{cb}|}\xspace$ and $\mb$, the mass of the $\ensuremath{b}\xspace$ quark. 
Specifically, they are reported to be $\ensuremath{|V_{cb}|}\xspace = (42.0\pm 0.2\pm 0.7) \cdot 10^{-3}$ and $\mb = (4.59 \pm 0.03 \pm 0.03) \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ in the kinetic mass scheme \cite{Buchmuller:2005globalhqefit} and $\ensuremath{|V_{cb}|}\xspace = (41.4 \pm 0.6 \pm 0.1) \cdot 10^{-3}$ and $\mb = (4.68 \pm 0.03) \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ in the 1S scheme \cite{Bauer:2004GlobalFit1SScheme}. Lepton-energy moments are known with very good accuracy, but the precision of the hadronic-mass and photon-energy moments is limited by statistics. Therefore, we present here an updated measurement of the hadronic-mass moments $\mxmom{k}$ with $k=1,\ldots,6$ based on a larger dataset than previously used \cite{Aubert:2004BABARMoments}. In addition we present measurements of the mixed hadron mass-energy moments $\moment{\ensuremath{n_{\X}^{k}}}$ with $k=2,4,6$ as proposed by Gambino and Uraltsev \cite{Gambino:2004MomentsKineticScheme}. All moments are presented for different cuts on the minimum energy of the charged lepton. The mixed moments use the mass \mx\ and the energy \Ex\ of the \ensuremath{X_{c}}\xspace\ system in the \ensuremath{B}\xspace meson rest frame of \semilepXc\ decays, \begin{equation}\label{eq:nxDef} n_X^2 = m_X^2 c^4 - 2 \tilde{\Lambda} E_X + \tilde{\Lambda}^2, \end{equation} with a constant $\tilde\Lambda$, here fixed to be 0.65\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace as proposed in \cite{Gambino:2004MomentsKineticScheme}. They allow a more reliable extraction of the higher-order non-perturbative HQE parameters and thus they are expected to increase the precision on the extraction of $\ensuremath{|V_{cb}|}\xspace$ and the quark masses $\mb$ and $\mc$. We perform a combined fit to the hadronic mass moments, measured moments of the lepton-energy spectrum, and moments of the photon energy spectrum in decays $\BtoXsGamma$. 
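As an aside, the mixed-moment variable of Eq.~(\ref{eq:nxDef}) is straightforward to evaluate numerically. The following sketch (in natural units with $c=1$, and with purely hypothetical $(m_X, E_X)$ values, not measured data) illustrates how moments of $n_X^k$ for $k=2,4,6$ are formed from per-event values:

```python
# Sketch of the mixed hadronic mass-energy variable of Eq. (1),
#   n_X^2 = m_X^2 - 2*Lambda*E_X + Lambda^2   (natural units, c = 1),
# with Lambda fixed to 0.65 GeV as in the text. The (m_X, E_X) sample
# values below are hypothetical, for illustration only.

LAMBDA_TILDE = 0.65  # GeV

def n_x_squared(m_x, e_x, lam=LAMBDA_TILDE):
    """Per-event n_X^2 in GeV^2 from the hadronic mass and energy."""
    return m_x**2 - 2.0 * lam * e_x + lam**2

def nx_moments(events, ks=(2, 4, 6)):
    """Simple (unweighted) moments <n_X^k> over a list of (m_X, E_X) pairs."""
    n2 = [n_x_squared(m, e) for m, e in events]
    n = len(n2)
    # n_X^k = (n_X^2)^(k/2) for even k
    return {k: sum(v ** (k // 2) for v in n2) / n for k in ks}

events = [(1.87, 2.1), (2.01, 2.4), (2.46, 2.9)]  # hypothetical (m_X, E_X) in GeV
moments = nx_moments(events)
```

In the analysis itself the per-event values are additionally calibrated and background-subtracted before the moments are formed.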
The fit extracts values for $\ensuremath{|V_{cb}|}\xspace$, the quark masses $\mb$ and $\mc$, the total semileptonic branching fraction $\brf(\semilepXc)$, and the dominant non-perturbative HQE parameters. These are $\mupi$ and $\muG$, parameterizing effects at ${\cal O}(1/\mb^2)$, and $\rhoD$ and $\rhoLS$, parameterizing effects at ${\cal O}(1/\mb^3)$. \section{Hadronic Mass Moments} \label{sec:hadronic_mass_moments} We present measurements of the moments $\mxmom{k}$, with $k=1,\ldots6$, of the hadronic mass distribution in semileptonic $\ensuremath{B}\xspace$ meson decays $\semilepXc$. The moments are measured as functions of the lower limit on the lepton momentum, $\plmin$, between $0.8\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and $1.9\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ calculated in the rest frame of the $\ensuremath{B}\xspace$ meson. \subsection{Selected Event Sample} The selected event sample contains about $21.5\%$ background. For $\plgeq{0.8}$ we find a total of $15085 \pm 146$ signal events above a combinatorial and continuum background of $2429 \pm 43$ events and residual background of $1696 \pm 19$ events. For $\plgeq{1.9}$ we find $2006 \pm 53$ signal events above a background constituted of $271 \pm 17$ and $248 \pm 7$ combinatorial and residual events, respectively. Figure \ref{fig:mass_spectra} shows the kinematically fitted $\mx$ distributions together with the extracted background shapes for $\plgeq{0.8}$ and $\plgeq{1.9}$. 
\begin{figure} \begin{center} \includegraphics{figures/massSpectra} \end{center} \caption{Kinematically fitted hadronic mass spectra for minimal lepton momenta $\plgeq{0.8}$ (top) and $\plgeq{1.9}$ (bottom) together with distributions of combinatorial background and background from non-$\BB$ decays (hatched area) as well as residual background (crossed area). } \label{fig:mass_spectra} \end{figure} \subsection{Extraction of Moments} To extract unbiased moments $\mxmom{k}$, additional corrections have to be applied to correct for remaining effects that can distort the measured $\mx$ distribution. Contributing effects are the limited acceptance and resolution of the $\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}$ detector, resulting in unmeasured particles and in misreconstructed energies and momenta of particles. In addition, measured particles not originating from the hadronic system, as well as final-state radiation from leptons, contribute. We correct the kinematically fitted $\mx^{k}$ by applying correction factors on an event-by-event basis, using the observed linear relationship between the moments of the measured mass $\mxmomreco{k}$ and moments of the true underlying mass $\mxmomtruecut{k}$. Correction functions are constructed from MC simulations by calculating moments $\mxmomreco{k}$ and $\mxmomtruecut{k}$ in several bins of the true mass $\mxtrue$ and fitting the observed dependence with a linear function. \begin{figure*} \begin{center} \includegraphics{figures/calibCurve_mx2_psfrag} \end{center} \caption{Examples of calibration functions for $\mxmom{2}$ in bins of $\MultX$, $\epmiss$ and $\plep$. Shown are the extracted $\mxmomreco{}$ versus $\mxmomtruecut{}$ in bins of $\mxtrue$ for $\plbin{0.8}{0.9}$ (\textcolor{black}{$\bullet$}), $\plbin{1.4}{1.5}$ (\textcolor{magenta}{$\circ$}), and $\plgeq{1.9}$ (\textcolor{blue}{$\blacksquare$}).
The results of fits of linear functions are overlaid as solid lines. A reference line with $\mxmomreco{} = \mxmomtruecut{}$ is superimposed (dashed line). Only a single calibration function is constructed for $\plgeq{1.9}$; it is plotted in each bin for easier comparison. } \label{fig:calib_mx2} \end{figure*} Studies show that the bias of the measured $\mxmomreco{k}$ is not constant over the whole phase space but depends on the resolution and total multiplicity of the reconstructed hadronic system, $\MultX$. Therefore, correction functions are derived in three bins of $\MultX$, three bins of $\epmiss$, as well as in twelve bins of $\plep$, each with a width of $100\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$. Due to the limited number of generated MC events, the binning in $\MultX$ and $\epmiss$ is abandoned for $\plmin \geq 1.7\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. Overall we construct $75$ calibration functions for each order of moments. Figure \ref{fig:calib_mx2} shows examples of correction functions for the moment $\mxmom{2}$ in three bins of $\plep$ as well as in nine bins of $\epmiss$ and $\MultX$. For each event $i$ the corrected mass $\mxcalibi^{k}$ is calculated by inverting the linear function, \begin{equation} \mxcalibi^{k} = \frac{\mxrecoi^{k} - A(\epmiss, \MultX, k, \plep)}{B(\epmiss, \MultX, k, \plep)}, \end{equation} with $A$ the offset and $B$ the slope of the correction function. Background contributions are subtracted by applying weight factors $ \ensuremath{w_{i}}\xspace $ dependent on $\mxreco$ to each corrected hadronic mass, whereby each weight corresponds to the fraction of signal events expected in the corresponding part of the $\mxreco$ spectrum.
This leads to the following expression used for the calculation of the moments: \begin{equation} \mxmom{k} = \frac{\sum\limits_{i=1}^{N_{\mathit{ev}}} \ensuremath{w_{i}}\xspace \, \mxcalibi^{k}} {\sum\limits_{i=1}^{N_{\mathit{ev}}} \ensuremath{w_{i}}\xspace } \times \Ccalib \times \Ctrue. \end{equation} The factors $\Ccalib$ and $\Ctrue$ depend on the order $k$ and minimal lepton momentum $\plmin$ of the measured moment. They are determined in MC simulations and correct for small biases observed after the calibration. The factors $\Ccalib$ account for the bias of the applied correction method and range between $0.985$ and $0.996$. For $\mxmom{6}$ we observe larger biases, ranging between $0.897$ and $0.970$ for the lowest $\plmin$ between $0.8\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and $1.2\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, respectively. The residual bias correction factor $\Ctrue$ accounts for differences in selection efficiencies for different hadronic final states and for QED radiation in the final state, which is included in the measured hadron mass and distorts the measurement of the lepton's momentum. The effect of radiative photons is estimated by employing $ \texttt{PHOTOS}\xspace$. Our correction procedure results in moments which are free of photon radiation. The residual bias correction $\Ctrue$ is estimated in MC simulations and typically ranges between $0.994$ and $1.014$. For the moments $\mxmom{5}$ and $\mxmom{6}$ slightly higher correction factors are determined, ranging between $0.994$ and $1.023$ as well as between $0.994$ and $1.036$, respectively. This procedure is verified on a MC sample by applying the calibration to measured hadron masses of individual semileptonic decays, $\semilepD$, $\semilepDstar$, four resonant decays $\semilepDstarstar$, and two non-resonant decays $\semilepNreso$.
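The event-by-event calibration and the weighted-moment expression above can be sketched as follows; the offsets, slopes, weights, and bias factors used here are placeholders for illustration, not the calibration constants of the analysis:

```python
# Sketch of the moment extraction described above: each measured m_X^k is
# corrected with the inverted linear calibration, the background is
# subtracted with per-event signal weights w_i, and the small residual
# biases are removed with the multiplicative factors C_calib and C_true.
# All numerical values below are hypothetical placeholders.

def calibrate(mx_reco_k, offset_a, slope_b):
    """Invert the linear calibration: m_X,calib^k = (m_X,reco^k - A) / B."""
    return (mx_reco_k - offset_a) / slope_b

def weighted_moment(mx_reco_k, weights, offset_a, slope_b,
                    c_calib=1.0, c_true=1.0):
    """Background-subtracted, bias-corrected moment <m_X^k>."""
    calib = [calibrate(x, offset_a, slope_b) for x in mx_reco_k]
    num = sum(w * x for w, x in zip(weights, calib))
    den = sum(weights)
    return num / den * c_calib * c_true

# Hypothetical measured m_X^2 values (GeV^2/c^4) and signal-fraction weights:
mx2 = [3.6, 4.1, 5.0, 7.3]
w = [0.9, 0.8, 0.7, 0.4]
m2 = weighted_moment(mx2, w, offset_a=0.1, slope_b=0.95,
                     c_calib=0.99, c_true=1.01)
```

In the analysis, $A$ and $B$ additionally depend on the $\epmiss$, multiplicity, and lepton-momentum bin of each event, so the lookup of the calibration constants is per event rather than global as in this sketch.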
Figure \ref{fig:massMoments_exclusiveModes} shows the corrected moments $\mxmom{2}$ and $\mxmom{4}$ as functions of the true moments for minimal lepton momenta $\plgeq{0.8}$. The dashed line corresponds to ${\protect \mxmomcalib{k}} = {\protect \mxmomtruecut{k}}$. The calibration reproduces the true moments over the full mass range. \begin{figure} \begin{center} \includegraphics{figures/massMoments_exclusiveModes} \end{center} \caption{Calibrated ($\bullet$) and uncorrected ($\Box$) moments $\mxmom{2}$ (left) and $\mxmom{4}$ (right) of individual hadronic modes for minimal lepton momenta $\plgeq{0.8}$. A reference line with $\mxmomcalib{} = \mxmomtruecut{}$ is superimposed. } \label{fig:massMoments_exclusiveModes} \end{figure} \begin{figure*} \begin{center} \includegraphics{figures/massMoments} \end{center} \caption{Measured hadronic mass moments $\mxmom{k}$ with $k = 1 \ldots 6$ for different selection criteria on the minimal lepton momentum $\plmin$. The inner error bars correspond to the statistical uncertainties while the full error bars correspond to the total uncertainties. The moments are highly correlated. } \label{fig:massMoments} \end{figure*} \subsection{Systematic Studies} \label{sec:hadronic_mass_moments:systematics} The principal systematic uncertainties are associated with the modeling of hadronic final states in semileptonic $\ensuremath{B}\xspace$-meson decays, the bias of the calibration method, the subtraction of residual background contributions, the modeling of track and photon selection efficiencies, the identification of particles, as well as the stability of the results. The obtained results are summarized in Tables \ref{tab:massMomentsSummary_1} and \ref{tab:massMomentsSummary_2} for the measured moments $\mxmom{k}$ with $k = 1 \ldots 6$ and selection criteria on the minimum lepton momentum ranging from $\plgeq{0.8}$ to $\plgeq{1.9}$. 
\subsubsection{Modeling of Signal Decays} The uncertainty of the calibration method with respect to the chosen signal model is estimated by changing the composition of the simulated inclusive hadronic spectrum. The dependence on the simulation of high mass hadronic final states is estimated by constructing correction functions only from MC simulated hadronic events with hadronic masses $\mxtruecut < 2.5 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$, thereby removing the high mass tail of the simulated hadronic mass spectrum. The model dependence of the calibration method is found to contribute only marginally to the total systematic uncertainty. We estimate the model dependence of the residual bias correction $\Ctrue$ by changing the composition of the inclusive hadronic spectrum, thereby omitting one or more decay modes. For the correction of the observed bias of the calibration method $\Ccalib$, we assign a systematic uncertainty of half the size of the applied correction. We study the effect of differences between data and MC in the multiplicity and $\epmiss$ distributions on the calibration method by changing the binning of the correction functions. The observed variation of the results is found to be covered by the statistical uncertainties of the calibration functions. \subsubsection{Background Subtraction} The branching fractions of background decays in the MC simulation are scaled to agree with current measurements \cite{Yao:2006pdbook}. The associated systematic uncertainty is estimated by varying these branching fractions within their uncertainties. At low $\plmin$, most of the studied background channels contribute to the systematic uncertainty, while at high $\plmin$, the systematic uncertainty is dominated by background from decays $\semilepXu$. Contributions from $\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}\xspace$ and $\ensuremath{\psi{(2S)}}\xspace$ decays are found to be negligible.
The uncertainty in the combinatorial $\ensuremath{\B_\mathrm{reco}}\xspace$ background subtraction is estimated by varying the lower and upper limits of the sideband region in the $\mbox{$m_{\rm ES}$}\xspace$ distribution up and down by $2.5\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$. The observed effect is found to be negligible. \subsubsection{Detector-Related Effects} We correct the MC simulation for differences relative to data in the selection efficiencies of charged tracks and photons, as well as in identification efficiencies and misidentification rates of various particle types. The corrections are extracted from data and MC control samples. The uncertainty of the photon selection efficiencies is found to be $1.8\%$ per photon, independent of energy, polar angle, and multiplicity. The systematic uncertainty in track finding efficiencies is estimated to be $0.8\%$ per track. We add in quadrature the statistical uncertainties of the control samples, which depend on the energy and polar angle of the track as well as on the multiplicity of tracks in the reconstructed event. The misidentification of $\ensuremath{\pi^\pm}\xspace$ mesons as leptons is found to affect the overall normalization of the corresponding background spectra by $8\%$. While the latter two uncertainties give only small contributions to the total systematic uncertainty, the uncertainty associated with the selection efficiency of photons is found to be the main source of systematic uncertainties.
\subsubsection{Stability of the Results} The stability of the results is tested by dividing the data into several independent subsamples: $\ensuremath{B^\pm}\xspace$ and $\ensuremath{B^0}\xspace$, decays to electrons and muons, different run periods of roughly equal data-sample sizes, and two regions in the $\epmiss$ spectrum, $-0.5 \leq \epmiss < 0 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ and $0 \leq \epmiss < 0.5 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$, characterized by different resolutions of the reconstructed hadronic system. No significant variations are observed. The stability of the result under variation of the selection criteria on $\epmiss$ is tested by varying the applied cut between $\epmissabs < 0.2\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ and $\epmissabs < 1.4\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. For most of the measured moments the observed variation is covered by other known systematic detector and MC simulation effects. In cases where the observed variation is not covered by those effects, we add an additional contribution to the systematic uncertainty of the measurement that compensates for the observed difference. \subsubsection{Simulation of Radiation} We check the impact of low-energy photons by removing EMC neutral energy deposits with energies below $100 \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace$ from the reconstructed hadronic system. The effect on the measured moments is found to be negligible. \subsection{Results} \label{sec:hadronic_mass_moments:results} The measured hadronic mass moments $\mxmom{k}$ with $k = 1 \ldots 6$ as functions of the minimal lepton momentum $\plmin$ are depicted in Fig.~\ref{fig:massMoments}. All measurements are correlated since they share subsets of selected events. Tables \ref{tab:massMomentsSummary_1} and \ref{tab:massMomentsSummary_2} summarize the numerical results.
The statistical uncertainty consists of contributions from the data statistics and the statistics of the MC simulation used for the construction of the correction functions, for the subtraction of residual background, and for the determination of the final bias correction. In most cases we find systematic uncertainties that exceed the statistical uncertainty by a factor of $\sim 1.5$. \section{Mixed Hadronic Mass- and Energy-Moments\label{sec:mixed_moments}} The measurement of moments of the observable \nx, a combination of the mass and energy of the inclusive \ensuremath{X_{c}}\xspace\ system as defined in Eq.~\ref{eq:nxDef}, allows a more reliable extraction of the higher order HQE parameters \mupi, \muG, \rhoD, and \rhoLS. Thus a smaller uncertainty on the standard model parameters \ensuremath{|V_{cb}|}\xspace, \mb, and \mc could be achieved. We present measurements of the moments \moment{\nx}, \moment{\nxfour}, and \moment{\nxsix} for different minimal lepton momenta between 0.8\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace and 1.9\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace calculated in the \ensuremath{B}\xspace-meson rest frame. We calculate the central moments \ensuremath{ \moment{( \nx - \moment{\nx})^2}}, \ensuremath{ \moment{( \nx - \moment{\nx})^3}}, and the moments \ensuremath{ \moment{( \nx - 1.35\,\gevsq)^2}}\ and \ensuremath{ \moment{( \nx - 1.35\,\gevsq)^3}}\ as proposed in \cite{Gambino:2004MomentsKineticScheme}. Due to the structure of the variable \nx\ as a difference of two measured values, its measured resolution and bias are larger than for the mass moments, and the sensitivity to $\epmiss$ is increased with respect to $\mx$. The overall resolution of \nx\ after the kinematic fit for lepton momenta greater than 0.8\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace is measured to be 1.31\,\ensuremath{\mathrm{\,Ge\kern -0.1em V^2}}\xspace\ with a bias of $-0.08$\,\ensuremath{\mathrm{\,Ge\kern -0.1em V^2}}\xspace.
We therefore introduce stronger requirements on the reconstruction quality of the event. We tighten the criteria on the neutrino observables: the variable \epmiss is required to be between 0 and 0.3\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. Due to the stronger requirements on $\epmiss$, the individual variables $\emiss$ and $\pmiss$ have less influence on the resolution of the reconstructed hadronic system. Therefore, the cuts on the missing energy and the missing momentum in the event are loosened to $\emiss>0\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ and $\pmiss > 0\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, respectively, as they do not yield significant improvement on the resolution of \nx, and do not increase the ratio of signal to background events. The final event sample contains about 22\,\%\xspace\ of background events. The background is composed of 12\,\%\xspace\ continuum and combinatorial background and 10\,\%\xspace\ decays of the signal \ensuremath{B}\xspace meson other than the semileptonic decay $\semilepXc$. Combinatorial and continuum background is subtracted using the sideband of the \mbox{$m_{\rm ES}$}\xspace distribution, as described above. The residual background events, containing a correctly reconstructed \ensuremath{\B_\mathrm{reco}}\xspace\ meson, are subtracted using MC simulations. The dominant sources are pions misidentified as muons, \semilepXu\ decays, and secondary semileptonic decays of $D$ and $D_s$ mesons. The measured \nx\ spectra for cuts on the lepton momentum at \plgeq{0.8}\ and \plgeq{1.9} are shown together with the background distributions in Fig.~\ref{fig:nxSpectra}. We measure $7827 \pm 108$\ ($1278 \pm 42$) signal events for \plgeq{0.8\ (1.9)}, respectively.
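The sideband subtraction of the combinatorial and continuum background mentioned above can be illustrated schematically; the bin contents and the sideband-to-signal-region scale factor below are hypothetical, and in the analysis the normalization follows from the fitted $m_{\rm ES}$ background shape rather than a single fixed scale:

```python
# Minimal sketch of an m_ES sideband subtraction, as used above for the
# combinatorial and continuum background. All numbers are hypothetical.

def sideband_subtract(signal_region, sideband, scale):
    """Subtract the scaled sideband estimate, bin by bin, from the
    signal-region histogram: n_sig[i] = n_obs[i] - scale * n_sb[i]."""
    return [n_obs - scale * n_sb for n_obs, n_sb in zip(signal_region, sideband)]

# Hypothetical m_X histograms (counts per bin) and scale factor:
observed = [120, 340, 260, 90]
sideband = [30, 44, 36, 22]
signal = sideband_subtract(observed, sideband, scale=0.5)
```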
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/nX_DataSpectrumPlots} \caption{Spectra of \nx\ after the kinematic fit together with distributions of combinatorial background and background from non-\BB decays (hatched area) as well as residual background (crossed area) for different minimal lepton momenta \plgeq{0.8} (a) and \plgeq{1.9} (b). \label{fig:nxSpectra}} \end{figure} \subsection{Extraction of Moments} To extract unbiased moments \moment{\ensuremath{n_{\X}^{k}}}, effects that distort the \nx\ distribution need to be corrected. These are the limited detector acceptance, resulting in a loss of particles, the resolution of measured charged particle momenta and energy depositions in the EMC, as well as the radiation of final-state photons. These photons are included in the measured \ensuremath{X_{c}}\xspace\ system and thus lead to a modified energy and mass measurement of the inclusive system. In the case of radiation from the lepton, the lepton's measured momentum is also lowered w.r.t.\ its initial momentum. The measured moments are corrected for the impact of these photons. \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{figures/CalibCurve_unbinned_nX_el} \includegraphics[width=0.95\textwidth]{figures/CalibCurve_unbinned_nX_mu} \end{center} \caption{Examples of calibration curves for \moment{\ensuremath{n_{\X}^{k}}} ($k=2,4,6$) in bins of \plep, extracted separately for events \semilepe\ (a)-(c) and \semilepmu\ (d)-(f). Shown are the extracted $\moment{n^k_{X,\mathrm{reco}}}$ versus $\moment{n^k_{X,\mathrm{true}}}$ in bins of ${\nx}_\mathrm{true}$ for $\plbin{0.9}{1.0}$ (\textcolor{black}{$\bullet$}), $\plbin{1.4}{1.5}$ (\textcolor{magenta}{$\Box$}), and $\plgeq{1.9}$ (\textcolor{blue}{$\circ$}) integrated over multiplicity and $\epmiss$ bins. The results of fits of linear functions are overlaid as solid lines. 
Reference lines with $ \moment{n^k_{X,\mathrm{reco}}} = \moment{n^k_{X,\mathrm{true}}}$ are superimposed (dashed lines). Please note the logarithmic scales in (b), (c), (e), and (f). \label{fig:nxCalib}} \end{figure*} As described before, we find linear relationships, described by first-order polynomials, that correct the measured means \moment{{\ensuremath{n_{\X}^{k}}}_\mathrm{reco}} to the true means \moment{{\ensuremath{n_{\X}^{k}}}_\mathrm{true}}. These functions vary with the measured lepton momentum, the measured \epmiss, and the measured multiplicity of the inclusive \ensuremath{X_{c}}\xspace system. The curves are therefore derived in three bins of \epmiss and three bins of the multiplicity for each of the 12 lepton momentum bins of 100\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace. We also find differences between events containing an electron and those containing a muon and therefore derive separate correction functions for these two classes of events. The measured \ensuremath{n_{\X}^{k}}\ value is corrected on an event-by-event basis using the inverse of these functions: \begin{equation} n^{k}_{X,\mathrm{calib}} = \frac{ n^{k}_{X,\mathrm{reco}} -A(\epmiss, \MultX, k, \plep)}{B(\epmiss, \MultX, k, \plep)}. \end{equation} Here $A$ and $B$ are the offset and the slope of the calibration function and differ for each order $k=2,4,6$ and for each of the above-mentioned bins. Figure \ref{fig:nxCalib} shows calibration curves for the moments \moment{\ensuremath{n_{\X}^{k}}} ($k=2,4,6$), integrated over all multiplicity bins and bins in \epmiss, for three different bins of \plep. These calibration curves are extracted separately for events containing an electron or muon. Differences are mainly visible in the low momentum bin. To verify this calibration procedure, we extract the moments of \ensuremath{n_{\X}^{k}}\ of individual exclusive \semilepXc\ modes on a MC sample and compare the calibrated moments to the true moments.
The result of this study for the moments \moment{\nx}\ is plotted in Fig.~\ref{fig:exclModes}, confirming that the extraction method is able to reproduce the true moments. Small biases remaining after the calibration are of the order of 1\,\% for \moment{\nx} and of the order of a few percent for \moment{\nxfour} and \moment{\nxsix}; they are corrected for and accounted for in the systematic uncertainties. Background contributions are subtracted by applying \nx-dependent weight factors $w_{i}(\nx)$ on an event-by-event basis, leading to the following expression for the moments: \begin{equation} \moment{\ensuremath{n_{\X}^{k}}} = \frac{\sum\limits_{i=1}^{N_{\mathrm{ev}}} w_{i}(\nx)\cdot {\ensuremath{n_{\X}^{k}}}_{\mathrm{calib},i} } {\sum\limits_{i=1}^{N_{\mathrm{ev}}} w_{i}(\nx) } \times \mathcal{C}(\plep, k). \end{equation} The bias correction factor $\mathcal{C}(\plep, k)$ depends on the minimal lepton momentum and the order of the extracted moments. It is derived from MC simulations and corrects for the small bias remaining after the calibration. \begin{figure}[b] \centering \includegraphics[width=0.48\textwidth]{figures/ExclusiveModes} \caption{Result of the calibration verification procedure for different minimal lepton momenta \plgeq{0.8} (a) and \plgeq{1.9} (b). Moments \moment{\nx}\ of exclusive modes on simulated events before calibration ($\Box$) and after calibration ($\bullet$), plotted against the true moments for each mode. The dotted line shows the fit result to the calibrated moments; the resulting parameters are shown.\label{fig:exclModes}} \end{figure} \subsection{Systematic Studies} The main source of systematic uncertainty is the simulation of the detector efficiency for detecting neutral clusters. The corresponding effect for charged tracks is smaller but still contributes to the uncertainty on the moments.
Their impact has been evaluated by randomly excluding neutral or charged candidates from the \ensuremath{X_{c}}\xspace\ system with a probability of 1.8\,\% for the neutral candidates and 0.8\,\% for the charged tracks, corresponding to the systematic uncertainties of the efficiency extraction methods. For the tracks we add in quadrature the statistical uncertainties from the control samples to the 0.8\,\% systematic uncertainty. The uncertainty arising from the differences between data and MC in the $\epmiss$ distributions is evaluated by changing the selected region of \epmiss to [0.0, 0.2]\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace and [0.0, 0.4]\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. To evaluate the uncertainty due to the binning of the calibration curves in the multiplicity, we randomly increase the measured multiplicity used for the choice of the calibration curve by one with a probability of $5\%$, corresponding to observed differences between data and MC. Smaller uncertainties arise from the unknown branching fractions of the background decay modes. Their branching fractions are scaled to agree with recent measurements \cite{Yao:2006pdbook} and are varied within their uncertainties. The MC sample is corrected for differences in the identification efficiencies between data and MC for various particle types. The uncertainty on the background due to pions misidentified as muons is evaluated by changing the MC corrections within the statistical uncertainties of these data control samples. While the background shape does not vary, the amount decreases by up to 8\,\%. For the estimate of the uncertainty due to particle identification, we propagate this variation into the extracted moments.
A similar variation procedure is applied for the branching fractions of the exclusive signal modes, varying them several times randomly within 10\,\% for the \ensuremath{D^*}\xspace, 15\,\% for the $D$, 50\,\% for the individual \ensuremath{D^{**}}\xspace\ modes, and 75\,\% for the non-resonant modes. The inclusive rate for the decays \semilepXc\ is conserved by rescaling all other modes. In addition, all \ensuremath{D^{**}}\xspace (non-resonant) modes are scaled in common, again randomly within 50\,\%, keeping the inclusive decay rate \semilepXc\ constant by rescaling the non-resonant (\ensuremath{D^{**}}\xspace) modes only. Experimental uncertainties on the signal branching fractions are fully covered by these variations \cite{Yao:2006pdbook}. These variations change the calibration curve and the bias correction; the impact on the moments measured in data, however, is small. We conservatively add half of the bias correction remaining after calibration to the uncertainty related to the extraction method. The stability of the results has been tested by splitting the data sample into several independent subsamples: \ensuremath{B^\pm}\xspace and \ensuremath{B^0}\xspace, decays to electrons and muons, and different run periods of roughly equal data-sample sizes. No significant variations are observed. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{figures/nX_MomentPlotsWithCentralLogY} \end{center} \caption{Measured moments \moment{\nx} (a), \moment{\nxfour} (b), \moment{\nxsix} (c), and the central moments \ensuremath{ \moment{( \nx - C)^2}}\ with $C=\moment{\nx}$ ($\bullet$) and $C = 1.35\,\ensuremath{\mathrm{\,Ge\kern -0.1em V^2}}\xspace$ ($\circ$) (d), and \ensuremath{ \moment{( \nx - C)^3}}\ with $C=\moment{\nx}$ ($\bullet$) and $C = 1.35\,\ensuremath{\mathrm{\,Ge\kern -0.1em V^2}}\xspace$ ($\circ$) (e) for different cuts on the lepton momentum \plep.
The error bars indicate the statistical and the total errors, respectively. Note the logarithmic scale on the $y$-axis in plots (d) and (e). The moments are highly correlated. \label{fig:nxMoments}} \end{figure*} \subsection{Results} Figure \ref{fig:nxMoments} shows the results for the moments \moment{\nx}, \moment{\nxfour}, \moment{\nxsix}, and the central moments \ensuremath{ \moment{( \nx - \moment{\nx})^k}}\ and \ensuremath{ \moment{( \nx - 1.35\,\gevsq)^k}}\ for $k=2$ and $3$ as a function of the \plep\ cut. The moments are highly correlated due to the overlapping data samples. The full numerical results with their statistical and estimated systematic uncertainties can be found in Tables \ref{tab:NxOrder246}--\ref{tab:AppnxModCen3}. A clear dependence on the lepton momentum selection criteria is observed for all moments, due to the varying contributions from higher-mass final states with decreasing lepton momentum. Statistical uncertainties on the moments \moment{\ensuremath{n_{\X}^{k}}} arise from the limited data sample, the width of the measured distribution, $\moment{n_X^{2k}} - \moment{\ensuremath{n_{\X}^{k}}}^2$, and the limited size of the MC samples used for the extraction of background shapes, calibration curves, and the bias correction. In most cases we obtain systematic uncertainties slightly exceeding the statistical uncertainties. \section{Reconstruction of Semileptonic Decays} \label{sec:semilep_decays} \subsection{Selection of Hadronic $\ensuremath{B}\xspace$-Meson Decays} \label{sec:hadronic_mass_moments:breco} The analysis uses \Y4S\ensuremath{\rightarrow}\xspace\BB\ events in which one of the \ensuremath{B}\xspace mesons decays to hadrons and is fully reconstructed ($\ensuremath{\B_\mathrm{reco}}\xspace$) and the semileptonic decay of the recoiling \ensuremath{\Bbar}\xspace\ meson ($B_{\rm recoil}$) is identified by the presence of an electron or muon.
While this approach results in a low overall event selection efficiency of only a few per mille, it allows for the determination of the momentum, charge, and flavor of the \ensuremath{B}\xspace mesons. To obtain a large sample of $B$ mesons, many exclusive hadronic decays are reconstructed~\cite{Aubert:2003VubRecoil}. The kinematic consistency of these $\ensuremath{\B_\mathrm{reco}}\xspace$ candidates is checked with two variables, the beam-energy-substituted mass $\mbox{$m_{\rm ES}$}\xspace = \sqrt{s/4 - \vec{p}^{\,2}_B}$ and the energy difference $\Delta E = E_B - \sqrt{s}/2$. Here $\sqrt{s}$ is the total energy in the c.m.\ frame, and $\vec{p}_B$ and $E_B$ denote the c.m.\ momentum and c.m.\ energy of the $\ensuremath{\B_\mathrm{reco}}\xspace$ candidate. We require $\Delta E = 0$ within three standard deviations, which range between 10 and 30\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace depending on the number of hadrons used in the reconstruction. For a given \ensuremath{\B_\mathrm{reco}}\xspace\ decay mode, the purity is estimated as the signal fraction in events with \mbox{$m_{\rm ES}$}\xspace$>5.27$\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace. For events with one high-momentum lepton with \plgeq{0.8}\ in the $\ensuremath{B}\xspace$-meson rest frame, the purity is approximately 78\,\%. \subsection{Selection of Semileptonic Decays} \label{sec:hadronic_mass_moments:selection} Semileptonic decays are identified by the presence of one and only one electron or muon above a minimum momentum $\plmin$ measured in the rest frame of the $B_{\rm recoil}$. Electrons are selected with 94\,\% average efficiency and a hadron misidentification rate of the order of 0.1\,\%.
Muons are identified with an efficiency ranging between 60\% for momenta $p = 1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ in the laboratory frame and 75\% for momenta $p > 2\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and a hadron misidentification rate between 1\% for kaons and protons and 3\% for pions. Efficiencies and misidentification rates are estimated from selected samples of electrons, muons, pions, and kaons. We impose the condition $Q_b Q_{\ell} < 0$, where $Q_{\ell}$ is the charge of the lepton and $Q_b$ is the charge of the $b$-quark of the $B_{\rm reco}$. This condition is fulfilled for primary leptons, except for \ensuremath{\Bz {\kern -0.16em \Bzb}}\xspace\ events in which flavor mixing has occurred. We require the total observed charge of the event to be $|Q_{\rm tot}|= |Q_{\rm B_{reco}} + Q_{\rm B_{recoil}}| \leq 1$, allowing for a charge imbalance in events with low momentum tracks or photon conversions. In cases where only one charged track is present in the reconstructed \ensuremath{X_{c}}\xspace\ system, the total charge in the event is required to be equal to zero. \subsection{Reconstruction of the Hadronic System} The hadronic system $\ensuremath{X_{c}}\xspace$ in the decay \semilepXc\ is reconstructed from charged tracks and energy depositions in the calorimeter that are not associated with the $\ensuremath{\B_\mathrm{reco}}\xspace$ or the charged lepton. Procedures are implemented to eliminate fake tracks, low-energy beam-generated photons, and energy depositions in the calorimeter originating from hadronic showers faking the presence of additional particles. Each track is assigned a specific particle type, either $\kern 0.18em\optbar{\kern -0.40em p}{}\xspace$, $\ensuremath{K^\pm}\xspace$, or $\ensuremath{\pi^\pm}\xspace$, based on combined information from the different $\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}$ subdetectors. 
The four-momentum of the reconstructed hadronic system $P_{\ensuremath{X_{c}}\xspace}$ is calculated from the four-momenta of the reconstructed tracks $P_{i,trk}$, reconstructed using the mass of the identified particle type, and photons $P_{i,\gamma}$ by $P_{\ensuremath{X_{c}}\xspace} = \sum_{i=1}^{N_{trk}} P_{i,trk} + \sum_{i=1}^{N_{\gamma}} P_{i,\gamma}$. The hadronic mass $\mx$ is calculated from the reconstructed four-momenta as $\mx = \sqrt{ P_{\ensuremath{X_{c}}\xspace}^{2} }$. The four-momentum of the unmeasured neutrino $P_{\nu}$ is estimated from the missing four-momentum $\Pmissfourmom = P_{\Y4S} - P_{\ensuremath{\B_\mathrm{reco}}\xspace} - P_{\ensuremath{X_{c}}\xspace} - P_\ell$. Here, all four-momenta are measured in the laboratory frame. To ensure a well-reconstructed hadronic system, we impose criteria on the missing energy, $\emiss > 0.5 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$, the missing momentum, $\pmiss > 0.5 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, and the difference of both quantities, $\epmissabs < 0.5 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. After selecting a well-reconstructed $\ensuremath{\B_\mathrm{reco}}\xspace$ and imposing the selection criteria on $\epmiss$, $4.7\,\%$ of signal decays and $0.3\,\%$ of background decays are retained. Starting from a kinematically well-defined initial state, additional knowledge of the kinematics of the semileptonic final state is used in a kinematic fit to improve the overall resolution and reduce the bias of the measured values. The fit imposes four-momentum conservation and the equality of the masses of the two $\ensuremath{B}\xspace$ mesons, and constrains the mass of the neutrino, $P_{\nu}^{2} = 0$. The resulting average resolutions in $\mx$ and $\nx$ are $0.355 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ and $1.31 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{2}$, respectively.
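The four-vector bookkeeping described above amounts to component-wise sums and differences followed by an invariant-mass evaluation; a minimal standalone sketch (illustrative helper names, not the actual reconstruction code):

```python
import math

def add4(vecs):
    """Component-wise sum of four-vectors (E, px, py, pz)."""
    return tuple(map(sum, zip(*vecs)))

def sub4(a, b):
    return tuple(x - y for x, y in zip(a, b))

def inv_mass(p):
    """m = sqrt(E^2 - |p|^2), clipped at zero for spacelike input."""
    m2 = p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2
    return math.sqrt(max(m2, 0.0))

def p_xc(tracks, photons):
    # P_Xc = sum_i P_trk,i + sum_j P_gamma,j
    return add4(list(tracks) + list(photons))

def p_miss(p_y4s, p_breco, p_xc_sys, p_lep):
    # P_miss = P_Y(4S) - P_Breco - P_Xc - P_lepton
    return sub4(sub4(sub4(p_y4s, p_breco), p_xc_sys), p_lep)
```

With these helpers, $\mx$ is simply `inv_mass(p_xc(tracks, photons))`, and the $\emiss$, $\pmiss$ criteria are cuts on the components of `p_miss`.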
The overall biases of the kinematically fitted hadronic system are found to be $-0.096 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ and $-0.08 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{2}$, respectively. We require the fit to converge, thus ensuring that the constraints are fulfilled. The background is composed of $\ensuremath{e^+e^-}\xspace \rightarrow \ensuremath{q}\xspace\ensuremath{\overline q}\xspace\, (\ensuremath{q}\xspace = u,d,s,c)$ events (continuum background) and decays $\Y4S \rightarrow \ensuremath{\Bu {\kern -0.16em \Bub}}\xspace$ or $\ensuremath{\Bz {\kern -0.16em \Bzb}}\xspace$ in which the $\ensuremath{\B_\mathrm{reco}}\xspace$ candidate is mistakenly reconstructed from particles coming from both $\ensuremath{B}\xspace$ mesons in the event (combinatorial background). Missing tracks and photons in the reconstructed hadronic system are not considered an additional source of background since they only affect its resolution. The effect of missing particles in the reconstruction is taken care of by further correction procedures. To quantify the amount of background in the $\mbox{$m_{\rm ES}$}\xspace$ signal region we perform a fit to the $\mbox{$m_{\rm ES}$}\xspace$ distribution of the $\ensuremath{\B_\mathrm{reco}}\xspace$ candidates. We parametrize the background using an empirical threshold function \cite{Albrecht:1987argusFunction}, \begin{equation} \frac{ \text{d}N }{\text{d} \mbox{$m_{\rm ES}$}\xspace } \propto \mbox{$m_{\rm ES}$}\xspace \sqrt{ 1 - x^{2} } e^{ -\chi \left( 1 - x^{2} \right) }, \end{equation} where $x = \mbox{$m_{\rm ES}$}\xspace / \mbox{$m_{\mathrm{ES,max}}$}\xspace$, $\mbox{$m_{\mathrm{ES,max}}$}\xspace = 5.289\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ is the kinematic endpoint approximated by the mean c.m. energy, and $\chi$ is a free parameter defining the curvature of the function. 
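The empirical threshold shape has a simple closed form; a standalone sketch, with an illustrative curvature value (in the analysis $\chi$ is a free fit parameter and the normalization is fitted as well):

```python
import math

def argus(m_es, m_max=5.289, chi=20.0, norm=1.0):
    """Empirical threshold shape for the combinatorial background:
    dN/dm_ES ~ m_ES * sqrt(1 - x^2) * exp(-chi * (1 - x^2)), x = m_ES/m_max.
    chi = 20 and norm = 1 are illustrative values only."""
    x = m_es / m_max
    if x >= 1.0:
        return 0.0          # no background beyond the kinematic endpoint
    u = 1.0 - x * x
    return norm * m_es * math.sqrt(u) * math.exp(-chi * u)
```

The shape vanishes at the kinematic endpoint $m_{\mathrm{ES,max}}$ by construction, which is why the signal region near the $B$-meson mass is well separated from this background component.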
The signal is parameterized with a modified Gaussian function \cite{Skwarnicki:1986cbFunction} peaked at the $\ensuremath{B}\xspace$-meson mass and corrected for radiation losses. The fit is performed separately for several bins in $\mx$ and $\nx$ to account for changing background contributions in different $\mx$ or $\nx$ regions, respectively. The background shape is determined in a signal-free region of the $\mbox{$m_{\rm ES}$}\xspace$ sideband, $5.21 \leq \mbox{$m_{\rm ES}$}\xspace \leq 5.255 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$. Figure \ref{fig:mesFits} shows the $\mbox{$m_{\rm ES}$}\xspace$ distribution for $\plgeq{0.8}$ together with the fitted signal and background contributions. \begin{figure} \begin{center} \includegraphics{figures/mesFits} \end{center} \caption{$\mbox{$m_{\rm ES}$}\xspace$ spectrum of $\ensuremath{\B_\mathrm{reco}}\xspace$ decays accompanied by a lepton with $\plgeq{0.8}$. The signal (solid line) and background (red dashed line) components of the fit are overlaid. The crossed area shows the background under the $\ensuremath{\B_\mathrm{reco}}\xspace$ signal. The background control region in the $\mbox{$m_{\rm ES}$}\xspace$ sideband is indicated by the hatched area. } \label{fig:mesFits} \end{figure} Residual background is estimated from MC simulations. It is composed of charmless semileptonic decays $\semilepXu$, hadrons misidentified as leptons, secondary leptons from semileptonic decays of $\ensuremath{\D^{\left(*\right)}}\xspace$, $\ensuremath{D^+_s}\xspace$ mesons or $\tau$ either in $\ensuremath{\Bz {\kern -0.16em \Bzb}}\xspace$ mixed events or produced in $\ensuremath{b}\xspace \rightarrow \ensuremath{c}\xspace \ensuremath{\overline c}\xspace \ensuremath{s}\xspace$ transitions, as well as leptons from decays of $\ensuremath{{J\mskip -3mu/\mskip -2mu\psi\mskip 2mu}}\xspace$, and $\ensuremath{\psi{(2S)}}\xspace$. 
The simulated background spectra are normalized to the number of $\ensuremath{\B_\mathrm{reco}}\xspace$ events in data. We verify the normalization using an independent data control sample with inverted lepton charge correlation, $Q_b Q_{\ell} > 0$. \section{Summary} \label{sec:summary} We have reported preliminary results for the moments $\mxmom{k}$ with $k = 1,\ldots,6$ of the hadronic mass distribution in semileptonic \ensuremath{B}\xspace-meson decays to final states containing a charm quark. In addition, we have presented preliminary results for a first measurement of the moments $\moment{\ensuremath{n_{\X}^{k}}}$ for $k=2,4,6$, with $\nx$ a combination of the mass and energy of the hadronic system \ensuremath{X_{c}}\xspace. The results for the mass moments agree with the previous measurements \cite{Csorna:2004CLEOMoments, Aubert:2004BABARMoments, Acosta:2005CDFMoments, Abdallah:2005DELPHIMoments, Schwanda:2007BELLEMassMoments} but tend in general to higher values, by between $1\%$ and $2\%$ for $\mxmom{}$ and $\mxmom{4}$, respectively, relative to the previous \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ measurement \cite{Aubert:2004BABARMoments}. The increased data sample compared to the previous \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ measurement has led to significantly reduced statistical uncertainties, which are now smaller than the systematic uncertainties. We have made a combined fit in the kinetic scheme to the hadronic mass moments, measured moments of the lepton-energy spectrum \cite{Aubert:2004BABARLeptonMoments}, and moments of the photon energy spectrum in decays $\BtoXsGamma$ \cite{Aubert:2005BABARXsGammaExclusive, Aubert:2006XsGammaInclusive}.
The combined fit yields preliminary results for $\ensuremath{|V_{cb}|}\xspace$, the quark masses $\mb$ and $\mc$, the total semileptonic branching fraction $\brf(\semilepXc)$, and the dominant non-perturbative HQE parameters in agreement with previous determinations. We obtain $\ensuremath{|V_{cb}|}\xspace = (41.88 \pm 0.81) \cdot 10^{-3}$ and $\mb = (4.552 \pm 0.055) \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$. \\ \section{Determination of $\ensuremath{|V_{cb}|}\xspace$ and the quark masses $\mb$ and $\mc$} \label{sec:vcb} At the parton level, the weak decay rate for $\ensuremath{b}\xspace \rightarrow \ensuremath{c}\xspace \ell \nu$ can be calculated accurately; it is proportional to $\ensuremath{|V_{cb}|}\xspace^2$ and depends on the quark masses $m_b$ and $m_c$. To relate measurements of the semileptonic $\ensuremath{B}\xspace$-meson decay rate to $\ensuremath{|V_{cb}|}\xspace$, the parton-level calculations have to be corrected for effects of strong interactions. Heavy-Quark Expansions (HQEs) \cite{Voloshin:1985HQE,Chay:1990HQE,Bigi:1991HQE} have become a successful tool for calculating perturbative and non-perturbative QCD corrections \cite{Bigi:1993PRLB293HQE,Bigi:1993PRL71HQE,Blok:1994HQE,Manohar:1994HQE,Gremm:1997HQE} and for estimating their uncertainties. In the kinetic-mass scheme \cite{Benson:2003GammaKineticScheme,Gambino:2004MomentsKineticScheme, Benson:2004BToSGammaKineticScheme,Aquila:2005PertCorrKineticScheme, Uraltsev:2004PertCorrKineticScheme,Bigi:2005KineticSchemeOpenCharm}, these expansions in $1/m_b$ and $\alpha_s(m_b)$ (the strong coupling constant) to order ${\cal O}(1/m_b^3)$ contain six parameters: the running kinetic masses of the $b$- and $c$-quarks, $\mb(\mu)$ and $\mc(\mu)$, and four non-perturbative parameters. The parameter $\mu$ denotes the Wilson normalization scale that separates effects from long- and short-distance dynamics.
The calculations are performed for $\mu = 1 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ \cite{1997:BigiKineticScheme}. We determine these six parameters from a fit to the moments of the hadronic-mass and electron-energy \cite{Aubert:2004BABARLeptonMoments} distributions in semileptonic $B$ decays $\semilepXc$ and moments of the photon-energy spectrum in decays $\BtoXsGamma$ \cite{Aubert:2005BABARXsGammaExclusive,Aubert:2006XsGammaInclusive}. In the kinetic-mass scheme the HQE to ${\cal O}(1/m_b^3)$ for the rate $\Gammasl$ of semileptonic decays $\semilepXc$ can be expressed as \cite{Benson:2003GammaKineticScheme} \begin{eqnarray} \Gammasl & = & \frac{G_F^2 \mb^5}{192\pi^3} \ensuremath{|V_{cb}|}\xspace^2 (1+A_{\mathit{ew}}) A_{\mathit{pert}}(r,\mu) \nonumber\\ & \times & \Bigg [ z_0(r) \Bigg ( 1 - \frac{\mupi - \muG + \frac{\rhoD + \rhoLS}{c^2 \mb}}{2 c^4 \mb^2} \Bigg ) \\ & - & 2(1-r)^4\frac{\muG + \frac{\rhoD + \rhoLS}{c^2 \mb}}{c^4 \mb^2} + d(r)\frac{\rhoD}{c^6 \mb^3} + \mathcal{O}(1/\mb^4)\Bigg]. \nonumber \label{eq:vcb_gammaslkinetic} \end{eqnarray} The leading non-perturbative effects arise at ${\cal O}(1/\mb^2)$ and are parameterized by $\mupi(\mu)$ and $\muG(\mu)$, the expectation values of the kinetic and chromomagnetic dimension-five operators. At ${\cal O}(1/\mb^3)$, two additional parameters enter, $\rhoD(\mu)$ and $\rhoLS(\mu)$, the expectation values of the Darwin and spin-orbit dimension-six operators, respectively. The ratio $r = m_c^2/m_b^2$ enters in the tree-level phase-space factor $z_0(r) = 1 - 8r + 8r^3 - r^4 - 12r^2 \ln r$ and in the function $d(r) = 8 \ln r + 34/3 - 32r/3 - 8r^2 + 32r^3/3 - 10r^4 /3$. The factor $1 + A_{\mathit{ew}}$ accounts for electroweak corrections. It is estimated to be $1 + A_{\mathit{ew}} \cong ( 1 + \alpha/\pi \ln M_Z/\mb )^2 = 1.014$. The quantity $A_{\mathit{pert}}$ accounts for perturbative contributions and is estimated to be $A_{\mathit{pert}}(r,\mu) \approx 0.908$ \cite{Benson:2003GammaKineticScheme}.
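The tree-level factors and the electroweak correction quoted above are easily evaluated numerically; a standalone sketch (with $\alpha \approx 1/137$ and $M_Z \approx 91.19$\,GeV assumed here for the numerical check):

```python
import math

def z0(r):
    """Tree-level phase-space factor, r = (m_c/m_b)^2."""
    return 1 - 8*r + 8*r**3 - r**4 - 12*r**2 * math.log(r)

def d(r):
    """Coefficient function of the Darwin term."""
    return (8*math.log(r) + 34/3 - 32*r/3
            - 8*r**2 + 32*r**3/3 - 10*r**4/3)

def one_plus_a_ew(m_b=4.6, alpha=1/137.036, m_z=91.19):
    """Electroweak factor (1 + alpha/pi * ln(M_Z/m_b))^2."""
    return (1 + alpha / math.pi * math.log(m_z / m_b)) ** 2
```

For $\mb = 4.60$ and $\mc = 1.15$, $r = 0.0625$ and $z_0(r) \approx 0.63$, and `one_plus_a_ew()` reproduces the quoted value of 1.014.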
The fit uses a linearized expression for the dependence of $\ensuremath{|V_{cb}|}\xspace$ on the values of the heavy-quark parameters, expanded around ${\it a~priori}$ estimates of these parameters \cite{Benson:2003GammaKineticScheme}: \begin{eqnarray} \frac{\ensuremath{|V_{cb}|}\xspace}{0.0417} & = & \sqrt{\frac{\brf_{clv}}{0.1032} \frac{1.55}{\tau_{\ensuremath{B}\xspace}} } \nonumber \\ & &\times [1 + 0.30 (\alpha_s(\mb) - 0.22) ] \nonumber\\ & &\times [ 1 - 0.66 ( \mb - 4.60) + 0.39 ( \mc - 1.15 ) \nonumber\\ & &+ 0.013 ( \mupi - 0.40) + 0.09 ( \rhoD - 0.20) \nonumber\\ & &+ 0.05 ( \muG - 0.35 ) - 0.01 ( \rhoLS + 0.15 ) ]. \end{eqnarray} Here $\mb$ and $\mc$ are in $\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ and all other parameters of the expansion are in $\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{k}$; $\tau_B$ refers to the average lifetime of $\ensuremath{B}\xspace$ mesons produced at the $\Y4S$ and is given in \ensuremath{\rm \,ps}\xspace. HQEs in terms of the same heavy-quark parameters are available for the hadronic-mass, electron-energy, and photon-energy moments. Predictions for these moments are obtained from an analytical calculation. We use these calculations to determine $\ensuremath{|V_{cb}|}\xspace$, the total semileptonic branching fraction $\brf$, the quark masses $\mb$ and $\mc$, as well as the heavy-quark parameters $\mupi$, $\muG$, $\rhoD$, and $\rhoLS$, from a simultaneous $\ensuremath{\chi^2}\xspace$ fit to the measured moments and partial branching fractions, all as functions of the minimal lepton momentum $\plmin$ and minimal photon energies $\Egammacut$. \subsection{Extraction Formalism} The fit method designed to extract the HQE parameters from the moment measurements has been reported previously \cite{Buchmuller:2005globalhqefit, Aubert:2004BABARHQEFit}.
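As a numerical cross-check, the linearized expansion above can be transcribed directly; at the expansion points it reproduces $|V_{cb}| = 0.0417$ by construction (a sketch with illustrative parameter names, not the fit code):

```python
import math

def vcb_linearized(brf_clv=0.1032, tau_b=1.55, alpha_s=0.22,
                   mb=4.60, mc=1.15, mu_pi=0.40, rho_d=0.20,
                   mu_g=0.35, rho_ls=-0.15):
    """Linearized dependence of |Vcb| on the heavy-quark parameters
    (masses in GeV/c^2, tau_B in ps), transcribed from the expansion."""
    return (0.0417
            * math.sqrt(brf_clv / 0.1032 * 1.55 / tau_b)
            * (1 + 0.30 * (alpha_s - 0.22))
            * (1 - 0.66 * (mb - 4.60) + 0.39 * (mc - 1.15)
               + 0.013 * (mu_pi - 0.40) + 0.09 * (rho_d - 0.20)
               + 0.05 * (mu_g - 0.35) - 0.01 * (rho_ls + 0.15)))
```

The signs of the coefficients make the sensitivities explicit: a heavier $\mb$ lowers the extracted $|V_{cb}|$, while a heavier $\mc$ raises it.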
The method is based on a $\ensuremath{\chi^2}\xspace$ minimization, \begin{equation} \ensuremath{\chi^2}\xspace = \left( \vec{M}_{\mathrm{exp}} - \vec{M}_{\mathrm{HQE}} \right)^{T} \covtot^{-1} \left( \vec{M}_{\mathrm{exp}} - \vec{M}_{\mathrm{HQE}} \right). \label{eq:vcb_chi2} \end{equation} The vectors $\vec{M}_{\mathrm{exp}}$ and $\vec{M}_{\mathrm{HQE}}$ contain the measured moments included in the fit and the corresponding moments calculated by theory, respectively. Furthermore, the expression in Eq.~\ref{eq:vcb_chi2} contains the total covariance matrix \covtot, defined as the sum of the experimental, \covexp, and theoretical, \covhqe, covariance matrices (see Section \ref{sec:vcb_theoreticalerrors}). The total semileptonic branching fraction, $\brf(\semilepXc)$, is extracted in the fit by extrapolating the measured partial branching fractions, $\brf_{\plcut}(\semilepXc)$, with $\plep \geq \plmin$, to the full lepton energy spectrum. Using HQE predictions of the relative decay fraction \begin{equation} R_{\plcut} = \frac{\int_{\plcut} \frac{\text{d}\Gammasl}{\text{d}\El} \text{d} \El} {\int_{0} \frac{\text{d}\Gammasl}{\text{d}\El} \text{d} \El}, \end{equation} the total branching fraction can be introduced as a free parameter in the fit. It is given by \begin{equation} \brf(\semilepXc) = \frac{\brf_{\plcut}(\semilepXc)}{R_{\plcut}}. \end{equation} The total branching fraction can be used together with the average \ensuremath{B}\xspace-meson lifetime $\tau_{\ensuremath{B}\xspace}$ to calculate the total semileptonic rate, which is proportional to $\ensuremath{|V_{cb}|}\xspace^{2}$, \begin{equation} \Gammasl = \frac{\brf(\semilepXc)}{\tau_{\ensuremath{B}\xspace}} \propto \ensuremath{|V_{cb}|}\xspace^{2}.
\end{equation} By adding $\tau_{\ensuremath{B}\xspace}$ to the vectors of measured and predicted quantities, $\vec{M}_{\mathrm{exp}}$ and $\vec{M}_{\mathrm{HQE}}$, $\ensuremath{|V_{cb}|}\xspace$ can be extracted from the fit as an additional free parameter using Eq.~\ref{eq:vcb_gammaslkinetic}. The non-perturbative parameters $\muG$ and $\rhoLS$ have been estimated from $\ensuremath{B}\xspace$-$\ensuremath{B}\xspace^*$ mass splitting and heavy-quark sum rules to be $\muG = (0.35 \pm 0.07) \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{2}$ and $\rhoLS = (-0.15 \pm 0.10) \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{3}$ \cite{Buchmuller:2005globalhqefit}, respectively. Both parameters are restricted in the fit by imposing Gaussian error constraints. \begin{figure*} \begin{center} \includegraphics{figures/FitMassAndMixedMoments} \end{center} \caption{The measured hadronic-mass and mixed moments (\textcolor{black}{$\bullet$}/\textcolor{black}{$\circ$}), as a function of the minimal lepton momentum $\plmin$ compared with the result of the simultaneous fit (solid line) and a previous measurement by the $\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}$ Collaboration (\textcolor{red}{$\square$}) \cite{Aubert:2004BABARMoments}. The solid data points mark the measurements included in the fit. The vertical bars indicate the experimental errors. The dashed lines correspond to the total fit uncertainty as obtained by converting the fit errors of each individual HQE parameter into an error for the individual moment. } \label{fig:vcb_FitMassAndMixedMoments} \end{figure*} \subsection{Experimental Input} The combined fit is performed on a subset of available moment measurements with correlations below $95\%$ to ensure the invertibility of the covariance matrix. 
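The $\chi^{2}$ construction with the combined covariance matrix can be sketched in a few lines (illustrative only; the actual fit machinery is described in the references cited above):

```python
import numpy as np

def chi2(m_exp, m_hqe, cov_exp, cov_hqe):
    """chi^2 = (M_exp - M_HQE)^T C_tot^{-1} (M_exp - M_HQE),
    with C_tot = C_exp + C_HQE."""
    delta = np.asarray(m_exp, dtype=float) - np.asarray(m_hqe, dtype=float)
    cov_tot = np.asarray(cov_exp, dtype=float) + np.asarray(cov_hqe, dtype=float)
    # solve a linear system instead of explicitly inverting C_tot
    return float(delta @ np.linalg.solve(cov_tot, delta))
```

The requirement that the covariance matrix be invertible is why strongly correlated (above 95\%) measurements are dropped from the input vector.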
Since the omitted measurements are highly correlated with other measurements considered in the fit, they do not contribute significant additional information, and the overall sensitivity of the results is not affected. All results are based on the following set of moment measurements, 27 in total: \begin{itemize} \item Lepton energy moments measured by \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ \cite{Aubert:2004BABARLeptonMoments}. We use the partial branching fraction $\brf_{\plcut}$ measured for \plgeq{0.6,1.0,1.5} and the moments \elmom{} for \plgeq{0.6,0.8,1.0,1.2,1.5}. The lepton energy moments \elmom{2} are used at the minimal lepton momenta \plgeq{0.6,1.0,1.5} and \elmom{3} at \plgeq{0.8,1.2}. \item Hadronic mass moments are used as presented in this paper. We select the following subset for the fit: \mxmom{2} for \plgeq{0.9,1.1,1.3,1.5} and \mxmom{4} for \plgeq{0.8,1.0,1.2,1.4}. \item Photon energy moments measured in \BtoXsGamma\ decays are taken from \cite{Aubert:2005BABARXsGammaExclusive} and \cite{Aubert:2006XsGammaInclusive}: \egammamom{} for the minimal photon energies \Egammageq{1.9, 2.0} and \egammamom{2} for \Egammageq{1.9}. \end{itemize} In addition we use $\tau_{\ensuremath{B}\xspace} = f_0 \tau_0 + (1 - f_0) \tau_{\pm} = (1.585 \pm 0.007) \ensuremath{\rm \,ps}\xspace$, taking into account the lifetimes \cite{Yao:2006pdbook} of neutral and charged $\ensuremath{B}\xspace$ mesons, $\tau_0$ and $\tau_{\pm}$, and their relative production rates, $f_0 = 0.491 \pm 0.007$ \cite{Yao:2006pdbook}, the fraction of $\ensuremath{B^0}\xspace\ensuremath{\Bbar^0}\xspace$ pairs.
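The lifetime input is a simple weighted mean; in the sketch below the individual lifetimes are PDG 2006 values assumed here for illustration, since only the combination $(1.585 \pm 0.007)$\,ps enters the fit:

```python
def tau_b_average(f0=0.491, tau0=1.530, tau_pm=1.638):
    """tau_B = f0*tau0 + (1 - f0)*tau_pm, in ps.  The defaults tau0 and
    tau_pm are PDG 2006 values assumed here for illustration; the
    analysis quotes only the combination (1.585 +/- 0.007) ps."""
    return f0 * tau0 + (1 - f0) * tau_pm
```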
\subsection{Theoretical Uncertainties} \label{sec:vcb_theoreticalerrors} As discussed in \cite{Buchmuller:2005globalhqefit} and specified in \cite{Gambino:2004MomentsKineticScheme}, the following theoretical uncertainties are taken into account: The uncertainty related to the uncalculated perturbative corrections to the Wilson coefficients of the non-perturbative operators is estimated by varying the corresponding parameters $\mupi$ and $\muG$ by $20\%$ and $\rhoD$ and $\rhoLS$ by $30\%$ around their expected values. Uncertainties for the perturbative corrections are estimated by varying $\alpha_{s} = 0.22$ up and down by $0.1$ for the hadronic mass moments and by $0.04$ for the lepton energy moments around its nominal value. Uncertainties in the perturbative corrections of the quark masses $m_{\ensuremath{b}\xspace}$ and $m_{c}$ are addressed by varying both by $20\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ up and down around their expected values. For the extracted value of $\ensuremath{|V_{cb}|}\xspace$ an additional error of $1.4\%$ is added for the uncertainty in the expansion of the semileptonic rate $\Gammasl$ \cite{Benson:2003GammaKineticScheme, Bigi:2005KineticSchemeOpenCharm}. It accounts for remaining uncertainties in the perturbative corrections to the leading operator, uncalculated perturbative corrections to the chromomagnetic and Darwin operators, higher-order power corrections, and possible non-perturbative effects in the operators with charm fields. This uncertainty is not included in the theoretical covariance matrix \covhqe\ but is listed separately as a theoretical uncertainty on $\ensuremath{|V_{cb}|}\xspace$. For the predicted photon energy moments $\egammamom{n}$, additional uncertainties are taken into account.
As outlined in \cite{Benson:2004BToSGammaKineticScheme}, additional uncertainties of $30\%$ of the applied bias correction to the photon-energy moments and half the difference in the moments derived from two different distribution-function ans\"atze have to be considered. Both contributions are added linearly. The theoretical covariance matrix \covhqe\ is constructed by assuming fully correlated theoretical uncertainties for a given moment with different lepton momentum or photon energy cutoffs and assuming uncorrelated theoretical uncertainties for moments of different orders and types. The additional uncertainties considered for the photon energy moments are assumed to be uncorrelated for different moments and photon energy cutoffs. \subsection{Results} A comparison of the fit results for the hadronic-mass and mixed moments with the measured moments is shown in Fig.~\ref{fig:vcb_FitMassAndMixedMoments}. The moments \mxmom{} and \mxmom{3} are not included in the fit and thus provide an unbiased comparison with the fitted HQE prediction. We find an overall good agreement, also indicated by $\ensuremath{\chi^2}\xspace = \resultfitchisq$ for 20 degrees of freedom. The measured moments continue to decrease with increasing $\plmin$ and extend beyond the theoretical predictions, which are available only for $\plmin \leq 1.5 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. Comparing the measured moments \moment{\nx}\ and \ensuremath{ \moment{( \nx - \moment{\nx})^2}}\ with predictions resulting from the presented fit, we find good agreement. The calculations used for the predictions of the mixed moments are currently missing $\plmin$-dependent perturbative corrections. The $\plmin$ dependence of the perturbative corrections for these moments is, however, expected to be small \cite{Uraltsev:2004PertCorrKineticScheme}. The fit results for the standard model and HQE parameters are summarized in Table \ref{tab:vcb_fitResults}.
We find as preliminary results $\ensuremath{|V_{cb}|}\xspace = (41.88 \pm 0.81) \cdot 10^{-3}$ and $\mb = (4.552 \pm 0.055) \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$. The results are in good agreement with earlier determinations \cite{Buchmuller:2005globalhqefit, Bauer:2004GlobalFit1SScheme}, showing slightly increased uncertainties due to the limited experimental input used in this fit. Figure \ref{fig:vcb_contours} shows the $\Delta\ensuremath{\chi^2}\xspace = 1$ contours in the $(\mb,\ensuremath{|V_{cb}|}\xspace)$ and $(\mb,\mupi)$ planes. It compares the standard fit, which includes the photon energy moments, with a fit based on moments from semileptonic $\semilepXc$ decays only, clearly indicating the significance of the constraints from the $\BtoXsGamma$ decays for both $\ensuremath{|V_{cb}|}\xspace$ and $\mb$. \begin{table*} \caption{Fit results with experimental and theoretical uncertainties. For $\ensuremath{|V_{cb}|}\xspace$ we take an additional theoretical uncertainty of $1.4\%$ from the uncertainty in the expansion of $\Gammasl$ into account. Correlation coefficients for all parameters are summarized below the results. } \input{tables/fitResults.tex} \label{tab:vcb_fitResults} \end{table*} \begin{figure*} \begin{center} \includegraphics{figures/Contours} \end{center} \caption{$\Delta\ensuremath{\chi^2}\xspace = 1$ contours for the fit results in the $(\mb,\ensuremath{|V_{cb}|}\xspace)$ and $(\mb, \mupi)$ planes comparing the results of the presented fit (black line) with those of a fit omitting the photon-energy moments (red dashed line). } \label{fig:vcb_contours} \end{figure*}
\section{Introduction} \label{intro} {\em Ab-initio} techniques based on density-functional theory (DFT) have played a key role for several years in the study of matter under extreme conditions~\cite{stixrude98}. With recent progress in the direct {\em ab initio} calculation of thermodynamic free energies~\cite{sugino95,dewijs98a,alfe99a}, there is now great scope for the systematic and accurate calculation of thermodynamic properties over a wide range of conditions. We present here extensive DFT calculations of the free energy of hexagonal-close-packed (h.c.p.) iron under Earth's core conditions, which we have used to obtain results for a number of other thermodynamic quantities, including the bulk modulus, expansion coefficient, specific heat and Gr\"{u}neisen parameter. For some of these we can make direct comparisons with experimental data, which support the accuracy and realism of the calculations; for others, the calculations supply information that is not yet available from experiments. An important ambition of the work is to determine thermodynamic functions without making any significant statistical-mechanical or electronic-structure approximations, other than those required by DFT itself, and we shall argue that we come close to achieving this. The techniques we have developed are rather general, and we believe they will find application to many other problems concerning matter under extreme conditions. The importance of understanding the high-pressure and high-temperature properties of iron can be appreciated by recalling that the Earth's core accounts for about 30~\% of the mass of the entire Earth, and consists mainly of iron~\cite{poirier91}. In fact, the liquid outer core is somewhat less dense than pure iron, and is generally accepted to contain light impurities such as S, O, Si or H~\cite{poirier94}; probably the density of the solid inner core is also significantly reduced by impurities~\cite{masters90,stixrude97}. 
Nevertheless, the thermodynamic properties of pure iron are fundamental to understanding the more complex real material in the core, and a large experimental effort has been devoted to measuring them. The difficulties are severe, because the pressure range of interest extends from 100~GPa up to nearly 400~GPa, and the temperature goes from {\em ca.}~3000~K to perhaps 7000~K -- the temperature at the centre of the core is subject to an uncertainty of at least 1000~K. Static compression experiments with the diamond anvil cell (DAC) have been performed on Fe up to 300~GPa at room temperature~\cite{mao90}, and DAC experiments at temperatures as high as 3700~K have been reported up to 200~GPa~\cite{boehler90,boehler93,saxena93,saxena94,saxena95,saxena96,jephcoat96,andrault97,shen98}. Our present knowledge of the phase diagram of Fe comes mainly from these experiments, though there are still controversies. Suffice it to say that for pressures $p$ above {\em ca.}~60~GPa and temperatures $T$ below {\em ca.}~1500~K it is generally accepted that the stable phase is h.c.p. Recent DAC diffraction experiments~\cite{shen98} indicate that h.c.p. is actually stable for all temperatures up to the melting line for $p > 60$~GPa, but earlier work claimed that there is another phase, usually called $\beta$, in a region below the melting line for pressures above {\em ca.}~40~GPa. The existing evidence suggests that, if the $\beta$-phase is thermodynamically stable, its structure could either be double-h.c.p.~\cite{saxena95,saxena96} or orthorhombic~\cite{andrault97}, and in either case is closely related to the usual h.c.p. structure. According to very recent theoretical work~\cite{vocadlo99}, h.c.p. is thermodynamically slightly more stable than double-h.c.p. at Earth's core pressures and temperatures. The evidence for the stability of h.c.p.
over much of the high-temperature/high-pressure phase diagram is our motivation for concentrating on this phase in the present work. DAC measurements have given some information about thermodynamic quantities up to pressures of a few tens of GPa, but beyond this shock experiments (see e.g. Refs.~\cite{jeanloz79,brown86,yoo93}) have no competitors. These experiments give direct values of the pressure as a function of volume~\cite{brown86} on the Hugoniot curve, and have also been used to obtain information about the adiabatic bulk modulus and some other thermodynamic quantities on this curve. These data will be important in validating our calculations. Temperature is difficult to measure reliably in shock experiments~\cite{yoo93}, and we believe that our {\em ab initio} results may be valuable in providing the needed calibration. The difficulties and uncertainties of experiments have stimulated many theoretical efforts. Some of the theoretical work has been based on simple atomistic models, such as a representation of the total energy as a sum of pair potentials~\cite{matsui97}, or the more sophisticated embedded-atom model~\cite{belonoshko97}. Such models can be useful, but for accuracy and reliability they cannot match high-quality {\em ab initio} calculations based on density-functional theory (DFT)~\cite{generaldft}. The accuracy of DFT depends very much on the approximation used for the electronic exchange-correlation energy $E_{\rm xc}$. It is known that the local density approximation (LDA) is not fully satisfactory for Fe~\cite{wang85}, but that modern generalised-gradient approximations (GGA) reproduce a wide range of properties very accurately. These include the equilibrium lattice parameter, bulk modulus and magnetic moment of body-centred cubic (b.c.c.) Fe at ambient pressures~\cite{stixrude94,soderlind96,vocadlo97} and the phonon dispersion relations of the b.c.c. phase~\cite{vocadlo99}. 
There has been much DFT work on different crystal structures of Fe at high pressures, and experimental low-temperature results for the pressure as a function of volume $p(V)$ up to $p = 300$~GPa for the hexagonal close packed (h.c.p.) structure are accurately predicted~\cite{mao90}. Further evidence for the accuracy of DFT comes from the successful prediction of the b.c.c. to h.c.p. transition pressure~\cite{stixrude94,soderlind96}. With {\em ab-initio} molecular dynamics, DFT calculations can also be performed on the liquid state, and we have reported extensive calculations both on pure liquid Fe~\cite{vocadlo97,dewijs98b,alfe99b} and on liquid Fe/S and Fe/O alloys~\cite{alfe98a,alfe98b,alfe99c}. Recently, work has been reported~\cite{stixrude97,wasserman96} in which the thermal properties of close-packed crystalline Fe under Earth's core conditions are calculated using {\em ab initio} methods. In fact, the work itself was based on a tight-binding representation of the total energy, but this was parameterised using extensive {\em ab initio} data. The authors did not attempt to perform the statistical-mechanical calculations exactly, but instead used the so-called `particle in a cell' approximation~\cite{holt70}, in which vibrational correlations between atoms are ignored. In spite of these limitations, the work yielded impressive agreement with shock data. We shall make comparisons with this work at various points in the present paper. The present DFT work is based on the GGA known as Perdew-Wang 1991~\cite{wang91,perdew92}. The choice of functional for exchange-correlation energy $E_{\rm xc}$ completely determines the free energy and all other thermodynamic quantities. This statement is important enough to be worth elaborating. It is clear that a given $E_{\rm xc}$ exactly determines the total energy of the system $U ( {\bf R}_1 , \ldots {\bf R}_N )$ for given positions $\{ {\bf R}_i \}$ of the atoms. 
But through the standard formulas of statistical mechanics, the function $U ( {\bf R}_1 , \ldots {\bf R}_N )$ exactly determines the free energy. So, provided no further electronic-structure approximations are made in calculating $U ( {\bf R}_1 , \ldots {\bf R}_N )$ from $E_{\rm xc}$, and no statistical-mechanical approximations are made in obtaining $F$ from $U ( {\bf R}_1 , \ldots {\bf R}_N )$, then $E_{\rm xc}$ exactly determines $F$. The calculation of $U$ from $E_{\rm xc}$ has been discussed over many years by many authors. The work presented here is based on the projector-augmented wave (PAW) implementation of DFT~\cite{blochl94,kresse99}, which is an all-electron technique similar to other standard implementations such as full-potential linear augmented plane waves (FLAPW)~\cite{wei85}, as well as being very closely related to the ultrasoft pseudopotential (USPP) method~\cite{vanderbilt90}. In principle, PAW allows one to compute $U$ with any required precision for a given $E_{\rm xc}$. In practical tests on Fe and other systems~\cite{kresse99}, the technique has been shown to yield results that are almost indistinguishable from calculations based on FLAPW, USPP and other DFT implementations -- provided the same $E_{\rm xc}$ is used, of course. We aim to demonstrate in this work that $F$ can also be computed from the {\em ab-initio} $U ( {\bf R}_1 , \ldots {\bf R}_N )$ to any required precision. In this sense, all the approximations made in calculating thermodynamic quantities are completely controlled, with the sole exception of $E_{\rm xc}$ itself. To clarify the precision we are aiming for in the calculation of $F$, we need to explain that one of the wider objectives of this work is the {\em ab initio} determination of the high-pressure melting properties of Fe, to be reported in detail elsewhere~\cite{alfe99d}.
Our approach to melting starts from the basic principle that at coexistence the Gibbs free energies $G_{\rm sol} ( p , T )$ and $G_{\rm liq} ( p , T )$ of solid and liquid are equal. But for a given pressure, the curves of $G_{\rm sol} ( p , T )$ and $G_{\rm liq} ( p , T )$ cross at a shallow angle. The difference of slopes $( \partial G_{\rm sol} / \partial T )_p = - S_{\rm sol}$ and $( \partial G_{\rm liq} / \partial T )_p = - S_{\rm liq}$ is equal to the entropy of fusion $S_{\rm m} \equiv S_{\rm liq} - S_{\rm sol}$, which is comparable to $k_{\rm B}$ per atom. This means that to get the melting temperature within an error of $\delta T$, the non-cancelling errors in $G_{\rm sol}$ and $G_{\rm liq}$ must not exceed {\em ca.}~$k_{\rm B} \delta T$. Ideally, we should like to calculate the melting temperature to within {\em ca.}~100~K, so that non-cancelling errors must be reduced to the level of {\em ca.}~10~meV. Our original ambition for the present work on h.c.p. Fe was to obtain $F$ from the given {\em ab-initio} $U ( {\bf R}_1 , \ldots {\bf R}_N )$ to this precision, and to demonstrate that this has been achieved. As we shall see, this target has probably not been attained, but we miss it by only a small factor, which will be estimated. We shall present results for thermodynamic quantities for pressures $50 < p < 400$~GPa and temperatures $2000 < T < 6000$~K. This is a far wider range than is strictly needed for understanding the inner core, where pressures span the range $330 < p < 364$~GPa and $T$ is believed to be in the region of $5000 - 6000$~K. However, the wider range is essential in making comparisons with the available laboratory data. We set the lower limit of 2000~K for our $T$ range because this is the lowest $T$ that has been proposed for equilibrium between the h.c.p. crystal and the liquid (at lower $T$, melting occurs from the f.c.c. phase).
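The error budget just described is easy to check numerically. The following back-of-envelope sketch (the function name is ours, introduced only for illustration) converts a target melting-temperature error into the corresponding non-cancelling free-energy tolerance per atom:

```python
# Sketch: convert a target melting-temperature error delta_T into the
# corresponding non-cancelling free-energy error budget ~ k_B * delta_T.
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def free_energy_tolerance_meV(delta_T_kelvin):
    """Free-energy error budget (meV/atom) for a melting-T error target."""
    return K_B_EV * delta_T_kelvin * 1.0e3

print(f"{free_energy_tolerance_meV(100.0):.1f} meV/atom")  # ~8.6 meV
```

A 100~K target on the melting temperature thus translates directly into the {\em ca.}~10~meV precision level quoted above.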
In the next Section, we summarize the {\em ab initio} techniques, and give a detailed explanation of the statistical-mechanical techniques. The three Sections after that present our investigations of the three main components of the free energy, associated with the rigid perfect lattice, harmonic lattice vibrations, and anharmonic contributions, probing in each case the technical measures that must be taken to achieve our target precision. Sec.~6 reports our results for all the thermodynamic quantities derived from the free energy, with comparisons wherever possible with experimental measurements and previous theoretical values. Overall discussion and conclusions are given in Sec.~7. The implications of our results for deepening our understanding of the Earth's core will be analysed elsewhere. \section{Techniques} \label{sec:techniques} \subsection{Ab initio techniques} \label{sec:abinitio} The use of DFT to calculate the energetics of many-atom systems has been extensively reviewed~\cite{generaldft}. However, a special feature of the present work is that thermal electronic excitations are crucially important, and we need to clarify the theoretical framework in which the calculations are done. A fundamental quantity in this work is the electronic free energy $U ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} )$ calculated at electronic temperature $T_{\rm el}$ with the $N$ nuclei fixed at positions ${\bf R}_1 , \ldots {\bf R}_N$. Throughout most of the work, the statistical mechanics of the nuclei will be treated in the classical limit. We lose nothing by doing this, since we shall demonstrate later that quantum corrections are completely negligible under the conditions of interest. The Helmholtz free energy of the whole system is then: \begin{equation} \label{eqn:helmholtz} F = - k_{\rm B} T \ln \left\{ \frac{1}{N ! 
\Lambda^{3 N}} \int d {\bf R}_1 \ldots d {\bf R}_N \, \exp \left[ - \beta U ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} ) \right] \right\} \; , \end{equation} where $\Lambda = h / ( 2 \pi M k_{\rm B} T )^{1/2}$ is the thermal wavelength, with $M$ being the nuclear mass, and $\beta = 1 / k_{\rm B} T$. In practice, the electronic and nuclear degrees of freedom are in thermal equilibrium with each other, so that $T_{\rm el} = T$, but it will be useful to preserve the logical distinction between the two. Although $U ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} )$ is actually a {\em free} energy, we will generally refer to it simply as the total-energy function, to avoid confusion with the overall free energy $F$. It is clear from Eqn~(\ref{eqn:helmholtz}) that in the calculation of $F$, and hence of all other thermodynamic quantities, it makes no difference that $U$ is a free energy; it is simply the object that plays the role of the total energy in the statistical mechanics of the nuclei. The DFT formulation of the electronic free energy $U$ has been standard since the work of Mermin~\cite{mermin65}, and has frequently been used in practical calculations~\cite{gillan89,wentzcovitch92}, though usually as a mere technical device for economising on Brillouin-zone sampling, rather than because electronic thermal excitation was important. The essence of finite-temperature DFT is that $U$ is given by: \begin{equation} U = E - T S \; , \end{equation} where the DFT total energy $E$ is the usual sum of kinetic, electron-nucleus, Hartree and exchange-correlation energy terms, and $S$ is the electronic entropy: \begin{equation} S = - 2 k_{\rm B} \sum_i \left[ f_i \ln f_i + ( 1 - f_i ) \ln ( 1 - f_i ) \right] \; , \label{eqn:entropy} \end{equation} with $f_i$ the thermal (Fermi-Dirac) occupation number of orbital $i$. The electronic kinetic energy and other parts of $E$ also contain the occupation numbers. 
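To make the electronic-entropy bookkeeping concrete, here is a minimal Python sketch that evaluates $S$ from Fermi-Dirac occupation numbers for an assumed toy spectrum of orbital energies; the level spectrum, Fermi level and temperature are illustrative values of ours, not numbers from the Fe calculations:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(eps, mu, T):
    """Thermal (Fermi-Dirac) occupation numbers f_i for orbital energies eps (eV)."""
    return 1.0 / (np.exp((eps - mu) / (K_B * T)) + 1.0)

def electronic_entropy(f):
    """S = -2 k_B sum_i [ f_i ln f_i + (1 - f_i) ln(1 - f_i) ];
    the factor 2 accounts for spin degeneracy."""
    f = np.clip(f, 1e-12, 1.0 - 1e-12)  # guard the logarithms
    return -2.0 * K_B * np.sum(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))

# toy spectrum straddling an assumed Fermi level mu = 0
eps = np.linspace(-2.0, 2.0, 41)
f = fermi_dirac(eps, mu=0.0, T=6000.0)
print(electronic_entropy(f))  # in eV/K; enters U = E - T S
```

Only the levels within a few $k_{\rm B} T$ of the Fermi energy have fractional occupation and hence contribute to $S$, which is why thermal electronic excitation matters at core temperatures.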
We point out that in exact DFT the exchange-correlation (free) energy $E_{\rm xc}$ has an explicit temperature dependence. Very little is known about its dependence on temperature, and we assume throughout this work that $E_{\rm xc}$ has its zero-temperature form. The PAW implementation of DFT has been described in detail in previous papers~\cite{blochl94,kresse99}. The present calculations were done using the VASP code~\cite{kresse96a,kresse96b}. The details of the core radii, augmentation-charge cut-offs, etc.\ are exactly as in our recent PAW work on liquid Fe~\cite{alfe99b}. Our division into valence and core states is also the same: the 3$p$ electrons are treated as core states, but their response to the high compression is represented by an effective pair potential, with the latter constructed using PAW calculations in which the 3$p$ states are explicitly included as valence states. Further technical details are as follows. All the calculations are based on the form of GGA known as Perdew-Wang 1991~\cite{wang91,perdew92}. Brillouin-zone sampling was performed using Monkhorst-Pack special points~\cite{monkhorst76}, and the detailed form of sampling will be noted where appropriate. A plane-wave cut-off of 300~eV was used, exactly as in our PAW work on liquid Fe. \subsection{Components of the free energy} \label{sec:components} Our {\em ab initio} calculations of thermodynamic properties are based on a separation of the Helmholtz free energy $F$ into the three components mentioned in the Introduction, which are associated with the rigid perfect crystal, harmonic lattice vibrations, and anharmonic contributions. To explain this separation, we start from the expression for $F$ given in Eqn~(\ref{eqn:helmholtz}).
We let $F_{\rm perf} ( T_{\rm el} ) \equiv U ( {\bf R}_1^0 , \ldots {\bf R}_N^0 ; T_{\rm el} )$ denote the total free energy of the system when all atoms are fixed at their perfect-lattice positions ${\bf R}_I^0$, and write $U ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} ) = F_{\rm perf} ( T_{\rm el} ) + U_{\rm vib} ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} )$, which defines the vibrational energy $U_{\rm vib}$. Then it follows from eqn~(\ref{eqn:helmholtz}) that \begin{equation} F =F_{\rm perf} + F_{\rm vib} \; , \end{equation} where the vibrational free energy $F_{\rm vib}$ is given by: \begin{equation} F_{\rm vib} = - k_{\rm B} T \ln \left \{ \frac{1}{\Lambda^{3N}} \int d {\bf R}_1 \ldots d {\bf R}_N \, \exp [ - \beta U_{\rm vib} ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} ) ] \right\} \; . \end{equation} (Note that we now omit the factor $N!$ from the partition function, since every atom is assumed to be confined to its own lattice site.) The vibrational energy $U_{\rm vib}$ can be further separated into harmonic and anharmonic parts ($U_{\rm vib} = U_{\rm harm} + U_{\rm anharm}$), in terms of which we can define the harmonic vibrational free energy $F_{\rm harm}$: \begin{equation} F_{\rm harm} = - k_{\rm B} T \ln \left\{ \frac{1}{\Lambda^{3N}} \int d {\bf R}_1 \ldots d {\bf R}_N \, \exp [ - \beta U_{\rm harm} ( {\bf R}_1 , \ldots {\bf R}_N ; T_{\rm el} ) ] \right\} \; , \label{eqn:fharm0} \end{equation} with the anharmonic free energy being the remainder $F_{\rm anharm} = F_{\rm vib} - F_{\rm harm}$. The harmonic energy $U_{\rm harm}$ is defined in the obvious way: \begin{equation} U_{\rm harm} = \frac{1}{2} \sum_{I , J} {\bf u}_I \cdot \left( \nabla_I \nabla_J U \right) \cdot {\bf u}_J \; , \end{equation} where ${\bf u}_I$ is the displacement of atom $I$ from its perfect-lattice position (${\bf u}_I \equiv {\bf R}_I - {\bf R}_I^0$) and the double gradient of the {\em ab initio} total energy is evaluated with all atoms at their perfect-lattice positions. 
Since we are dealing with a crystal, we shall usually prefer to rewrite $U_{\rm harm}$ in the more explicit form: \begin{equation} U_{\rm harm} = \frac{1}{2} \sum_{l s \alpha , l^\prime t \beta} u_{l s \alpha} \Phi_{l s \alpha , l^\prime t \beta} u_{l^\prime t \beta} \; , \end{equation} where $u_{l s \alpha}$ is the $\alpha$th Cartesian component of the displacement of atom number $s$ in primitive cell number $l$, and $\Phi_{l s \alpha , l^\prime t \beta}$ is the force-constant matrix. It should be noted that the present separation of $F$ does not represent a separation into electronic and nuclear contributions, since thermal electronic excitations influence all three parts of $F$. Since all other thermodynamic functions can be obtained by taking appropriate derivatives of the Helmholtz free energy, the separation of $F$ into components implies a similar separation of other quantities. For example, the pressure $p = - ( \partial F / \partial V )_T$ is $p_{\rm perf} + p_{\rm harm} + p_{\rm anharm}$, where $p_{\rm perf} = - ( \partial F_{\rm perf} / \partial V )_T$, and similarly for the components $p_{\rm harm}$ and $p_{\rm anharm}$. 
\subsection{Phonon frequencies} \label{sec:phonons} The free energy of a harmonic oscillator of frequency $\omega$ is $k_{\rm B} T \ln \left( \exp ( \frac{1}{2} \beta \hbar \omega ) - \exp ( - \frac{1}{2} \beta \hbar \omega ) \right)$, which has the high-temperature expansion $k_{\rm B} T \ln ( \beta \hbar \omega ) + k_{\rm B} T \left[ \frac{1}{24} ( \beta \hbar \omega )^2 + O ( ( \beta \hbar \omega )^4 ) \right]$, so that the harmonic free energy per atom of the vibrating crystal in the classical limit is: \begin{equation} F_{\rm harm} = \frac{3 k_{\rm B} T}{N_{{\bf k} s}} \sum_{{\bf k} s} \ln ( \beta \hbar \omega_{{\bf k} s} ) \; , \label{eqn:fharm} \end{equation} where $\omega_{{\bf k} s}$ is the frequency of phonon branch $s$ at wavevector ${\bf k}$ and the sum goes over the first Brillouin zone, with $N_{{\bf k} s}$ the total number of $k$-points and branches in the sum. It will sometimes be useful to express this in terms of the geometric average $\bar{\omega}$ of the phonon frequencies, defined as: \begin{equation} \ln \bar{\omega} = \frac{1}{N_{{\bf k} s}} \sum_{{\bf k} s} \ln ( \omega_{{\bf k} s} ) \; , \end{equation} which allows us to write: \begin{equation} F_{\rm harm} = 3 k_{\rm B} T \ln ( \beta \hbar \bar{\omega} ) \; . \end{equation} The central quantity in the calculation of the frequencies is the force-constant matrix $\Phi_{l s \alpha , l^\prime t \beta}$, since the squared frequencies $\omega_{{\bf k} s}^2$ at wavevector ${\bf k}$ are the eigenvalues of the dynamical matrix $D_{s \alpha , t \beta}$, defined as: \begin{equation} D_{s \alpha , t \beta} ( {\bf k} ) = \frac{1}{M} \sum_{l^\prime} \Phi_{l s \alpha , l^\prime t \beta} \exp \left[ i {\bf k} \cdot ( {\bf R}_{l^\prime t}^0 - {\bf R}_{l s}^0 ) \right] \; , \end{equation} where ${\bf R}_{l s}^0$ is the perfect-lattice position of atom $s$ in primitive cell number $l$.
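The route from force constants through the dynamical matrix to $\bar{\omega}$ and $F_{\rm harm}$ can be illustrated on the simplest possible toy case, a one-dimensional monatomic chain, where the dynamical matrix reduces to a scalar at each wavevector. The force constant, mass and spacing below are rough Fe-like values chosen by us purely for illustration:

```python
import numpy as np

HBAR = 1.0545718e-34  # J s
K_B = 1.380649e-23    # J/K

def chain_frequencies(K, M, a, nk=200):
    """1D monatomic chain with nearest-neighbour force constant K:
    the scalar dynamical matrix is D(k) = (2K/M)(1 - cos(k a)),
    and omega(k) = sqrt(D(k))."""
    ks = np.linspace(-np.pi / a, np.pi / a, nk, endpoint=False)
    return np.sqrt((2.0 * K / M) * (1.0 - np.cos(ks * a)))

def geometric_mean_omega(omegas):
    """ln(omega_bar) = Brillouin-zone average of ln(omega)."""
    w = omegas[omegas > 0.0]  # drop the zero mode at the Gamma point
    return np.exp(np.mean(np.log(w)))

def f_harm_per_mode(omega_bar, T):
    """Classical harmonic free energy per vibrational mode,
    k_B T ln(beta hbar omega_bar); the 3 k_B T ln(...) in the text
    counts the three modes per atom of a 3D crystal."""
    return K_B * T * np.log(HBAR * omega_bar / (K_B * T))

w = chain_frequencies(K=50.0, M=9.27e-26, a=2.5e-10)  # rough Fe-like values
wbar = geometric_mean_omega(w)
print(f_harm_per_mode(wbar, T=4000.0))  # negative: beta*hbar*omega << 1 here
```

For a real 3D crystal the dynamical matrix is $3 N_{\rm bas} \times 3 N_{\rm bas}$ and must be diagonalised at each ${\bf k}$, but the bookkeeping from frequencies to $\bar{\omega}$ to $F_{\rm harm}$ is exactly as above.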
If we have the complete force-constant matrix, then $D_{s \alpha , t \beta}$ and hence the frequencies $\omega_{{\bf k} s}$ can be obtained at any ${\bf k}$, so that $\bar{\omega}$ can be computed to any required precision. In principle, the elements of $\Phi_{l s \alpha , l^\prime t \beta}$ are non-zero for arbitrarily large separations $\mid {\bf R}_{l^\prime t}^0 - {\bf R}_{l s}^0 \mid$, but in practice they decay rapidly with separation, so that a key issue in achieving our target precision is the cut-off distance beyond which the elements can be neglected. We calculate $\Phi_{l s \alpha , l^\prime t \beta}$ by the small-displacement method, in a way similar to that described in Ref.~\cite{kresse95}. The basic principle is that $\Phi_{l s \alpha , l^\prime t \beta}$ describes the proportionality between displacements and forces. If the atoms are given small displacements $u_{l s \alpha}$ from their perfect-lattice positions, then to linear order the forces $F_{l s \alpha}$ are: \begin{equation} F_{l s \alpha} = - \sum_{l^\prime t \beta} \Phi_{l s \alpha , l^\prime t \beta} u_{l^\prime t \beta} \; . \end{equation} Within the {\em ab initio} scheme, all the elements $\Phi_{l s \alpha , l^\prime t \beta}$ are obtained for a given $l^\prime t \beta$ by introducing a small displacement $u_{l^\prime t \beta}$, all other displacements being zero, minimising the electronic free energy, and evaluating all the forces $F_{l s \alpha}$. In practice, the displacement amplitude $u_{l^\prime t \beta}$ must be made small enough to ensure linearity to the required precision, and this sets the precision with which the electronic free energy must be minimised. By translational symmetry, the entire force-constant matrix is obtained by making three independent displacements for each atom in the primitive cell, and this means that no more than $3 N_{\rm bas}$ calculations are needed, where $N_{\rm bas}$ is the number of atoms in the primitive cell.
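The displacement-force bookkeeping just described can be mocked up with a toy analytic potential standing in for the {\em ab initio} total energy. The two-atom example below (all names and numbers are ours, for illustration only) recovers the expected spring force-constant matrix from central differences of the forces:

```python
import numpy as np

def forces(x, energy_fn, h=1.0e-6):
    """Numerical forces F_i = -dU/dx_i (stand-in for the ab initio forces)."""
    f = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy(); xp[i] += h
        xm = x.copy(); xm[i] -= h
        f[i] = -(energy_fn(xp) - energy_fn(xm)) / (2.0 * h)
    return f

def force_constants(x0, energy_fn, u=1.0e-4):
    """Small-displacement method: displace each coordinate by +/- u,
    record the induced forces, build Phi_ij = -dF_i/du_j, and symmetrise."""
    n = x0.size
    phi = np.zeros((n, n))
    for j in range(n):
        xp = x0.copy(); xp[j] += u
        xm = x0.copy(); xm[j] -= u
        # the central difference eliminates the leading non-linearity
        phi[:, j] = -(forces(xp, energy_fn) - forces(xm, energy_fn)) / (2.0 * u)
    return 0.5 * (phi + phi.T)  # symmetrise, as the text prescribes

# toy stand-in: two atoms on a line joined by a spring of stiffness K
K, a = 50.0, 1.0
U = lambda x: 0.5 * K * (x[1] - x[0] - a) ** 2
phi = force_constants(np.array([0.0, a]), U)
print(phi)  # approximately [[ K, -K], [-K,  K]]
```

In this toy case one displacement per coordinate suffices; in the crystal, the loop above corresponds to the three independent displacements per basis atom, i.e.\ the $3 N_{\rm bas}$ calculations just counted.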
This number can be reduced by symmetry. If, as in the h.c.p. crystal, all atoms in the primitive cell are equivalent under operations of the space group, then the entire force-constant matrix can be obtained by making at most three displacements of a single atom in the primitive cell: from $\Phi_{l s \alpha , l^\prime t \beta}$ for one chosen atom $l^\prime t$, one obtains $\Phi_{l s \alpha , l^\prime t \beta}$ for all other $l^\prime t$. Point-group symmetry reduces the number still further if linearly independent displacements of the chosen atom are equivalent by symmetry. This is the case in the h.c.p. structure, since displacements in the basal plane related by rotations about the $c$-axis by $\pm 120^\circ$ are equivalent by symmetry; this means that two calculations, one with the displacement along the $c$-axis, and the other with the displacement in the basal plane, suffice to obtain the entire $\Phi_{l s \alpha , l^\prime t \beta}$ matrix. The basal-plane displacement should be made along a symmetry direction, because the symmetry makes the calculations more efficient. Since the exact $\Phi_{l s \alpha , l^\prime t \beta}$ matrix has point-group symmetries, the calculated $\Phi_{l s \alpha , l^\prime t \beta}$ must be symmetrised to ensure that these symmetries are respected. The symmetrisation also serves to eliminate the lowest-order non-linearities in the relation between forces and displacements~\cite{kresse95}. It is important to appreciate that the $\Phi_{l s \alpha , l^\prime t \beta}$ in the formula for $D_{s \alpha , t \beta} ( {\bf k} )$ is the force-constant matrix in the infinite lattice, with no restriction on the wavevector ${\bf k}$, whereas the {\em ab initio} calculations of $\Phi_{l s \alpha , l^\prime t \beta}$ can only be done in supercell geometry.
Without a further assumption, it is strictly impossible to extract the infinite-lattice $\Phi_{l s \alpha , l^\prime t \beta}$ from supercell calculations, since the latter deliver information only at wavevectors that are reciprocal lattice vectors of the superlattice. The further assumption needed is that the infinite-lattice $\Phi_{l s \alpha , l^\prime t \beta}$ vanishes when the separation ${\bf R}_{l^\prime t} - {\bf R}_{l s}$ is such that the positions ${\bf R}_{l s}$ and ${\bf R}_{l^\prime t}$ lie in different Wigner-Seitz (WS) cells of the chosen superlattice. More precisely, if we take the WS cell centred on ${\bf R}_{l^\prime t}$, then the infinite-lattice value of $\Phi_{l s \alpha , l^\prime t \beta}$ vanishes if ${\bf R}_{l s}$ is in a different WS cell; it is equal to the supercell value if ${\bf R}_{l s}$ is wholly within the same WS cell; and it is equal to the supercell value divided by an integer $P$ if ${\bf R}_{l s}$ lies on the boundary of the same WS cell, where $P$ is the number of WS cells having ${\bf R}_{l s}$ on their boundary. With this assumption, the $\Phi_{l s \alpha , l^\prime t \beta}$ elements will converge to the correct infinite-lattice values as the dimensions of the supercell are systematically increased. \subsection{Anharmonicity} \label{sec:anharmonicity} \subsubsection{Thermodynamic integration} \label{sec:thermint} Although we shall show that the anharmonic free energy $F_{\rm anharm}$ is numerically fairly small, it is far more challenging to calculate than $F_{\rm perf}$ or $F_{\rm harm}$, because there is no simple formula like eqn~(\ref{eqn:fharm}), and the direct computation of the multi-dimensional integrals in the free-energy formulas such as eqn~(\ref{eqn:fharm0}) is impossible. Instead, we use the technique of thermodynamic integration (see e.g. Ref.~\cite{frenkel96}) to obtain the difference $F_{\rm vib} - F_{\rm harm}$, as developed in earlier papers~\cite{sugino95,dewijs98a,alfe99a}. 
Thermodynamic integration is a completely general technique for determining the difference of free energies $F_1 - F_0$ for two systems whose total-energy functions are $U_1$ and $U_0$. The basic idea is that $F_1 - F_0$ represents the reversible work done on continuously and isothermally switching the energy function from $U_0$ to $U_1$. To do this switching, a continuously variable energy function $U_\lambda$ is defined as: \begin{equation} U_\lambda = ( 1 - \lambda ) U_0 + \lambda U_1 \; , \end{equation} so that the energy goes from $U_0$ to $U_1$ as $\lambda$ goes from 0 to 1. In classical statistical mechanics, the work done in an infinitesimal change $d \lambda$ is: \begin{equation} d F = \langle d U_\lambda / d \lambda \rangle_\lambda d \lambda = \langle U_1 - U_0 \rangle_\lambda d \lambda \; , \end{equation} where $\langle \, \cdot \, \rangle_\lambda$ represents the thermal average evaluated for the system governed by $U_\lambda$. It follows that: \begin{equation} F_1 - F_0 = \int_0^1 d \lambda \, \langle U_1 - U_0 \rangle_\lambda \, . \end{equation} In practice, this formula can be applied by calculating $\langle U_1 - U_0 \rangle_\lambda$ for a suitable set of $\lambda$ values and performing the integration numerically. The average $\langle U_1 - U_0 \rangle_\lambda$ is evaluated by sampling over configuration space. For the anharmonic free energy, a possible approach is to choose $U_0$ as $U_{\rm harm}$ and $U_1$ as $U_{\rm vib}$, so that $F_1 - F_0$ is the anharmonic free energy $F_{\rm anharm}$. This was the procedure used in our earlier {\em ab initio} work on the melting of Al~\cite{dewijs98a}, and a related technique was used by Sugino and Car~\cite{sugino95} in their work on Si melting. However, the calculations are rather heavy, and the need for extensive sampling over the electronic Brillouin zone in the {\em ab initio} calculations makes it difficult to achieve high precision. 
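The switching formula can be verified on a case where every ingredient is analytic: switching a single classical oscillator $U = \frac{1}{2} k x^2$ from stiffness $k_0$ to $k_1$, for which the exact free-energy difference is $\frac{1}{2} k_{\rm B} T \ln ( k_1 / k_0 )$. The sketch below (plain Python, arbitrary units, our own toy parameters) integrates $\langle U_1 - U_0 \rangle_\lambda$ numerically:

```python
import math

def trapezoid(ys, xs):
    """Numerical quadrature of the thermodynamic-integration formula."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# In the ensemble of U_lambda = 0.5 * k_lambda * x^2, with
# k_lambda = (1 - lam) * k0 + lam * k1, equipartition gives
# <x^2> = kB_T / k_lambda, hence <U1 - U0>_lam = (k1 - k0) kB_T / (2 k_lambda).
kB_T, k0, k1 = 1.0, 1.0, 4.0
lams = [i / 100.0 for i in range(101)]
integrand = [(k1 - k0) * kB_T / (2.0 * ((1.0 - lam) * k0 + lam * k1))
             for lam in lams]

dF = trapezoid(integrand, lams)
exact = 0.5 * kB_T * math.log(k1 / k0)  # from the harmonic partition function
print(dF, exact)  # agree to quadrature accuracy
```

In the real calculation the analytic thermal average is of course unavailable, and $\langle U_1 - U_0 \rangle_\lambda$ is instead estimated by statistical sampling at each $\lambda$ point, as described below.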
We have now developed a more efficient two-step procedure, in which we go first from the harmonic {\em ab initio} system $U_{\rm harm}$ to an intermediate reference system $U_{\rm ref}$ which closely mimics the full {\em ab initio} total energy $U_{\rm vib}$; in the second step, we go from $U_{\rm ref}$ to $U_{\rm vib}$. The anharmonic free energy is thus represented as: \begin{equation} F_{\rm anharm} = ( F_{\rm vib} - F_{\rm ref} ) + ( F_{\rm ref} - F_{\rm harm} ) \; , \end{equation} and the two differences are calculated by separate thermodynamic integrations: \begin{eqnarray} F_{\rm vib} - F_{\rm ref} & = & \int_0^1 d \lambda \, \langle U_{\rm vib} - U_{\rm ref} \rangle_\lambda^{\rm vr} \nonumber \\ F_{\rm ref} - F_{\rm harm} & = & \int_0^1 d \lambda \, \langle U_{\rm ref} - U_{\rm harm} \rangle_\lambda^{\rm rh} \; . \end{eqnarray} To distinguish clearly between these two parts of the calculation, we denote by $\langle \, \cdot \, \rangle_\lambda^{\rm rh}$ the thermal average taken in the ensemble generated by the switched total energy $U_\lambda^{\rm rh} \equiv ( 1 - \lambda ) U_{\rm harm} + \lambda U_{\rm ref}$ and by $\langle \, \cdot \, \rangle_\lambda^{\rm vr}$ the corresponding average for $U_\lambda^{\rm vr} \equiv ( 1 - \lambda ) U_{\rm ref} + \lambda U_{\rm vib}$. The crucial point of this is that $U_{\rm ref}$ is required to consist of an empirical model potential which quite accurately represents both the harmonic and anharmonic parts of the {\em ab initio} total energy $U_{\rm vib}$. Since it is a model potential, the thermodynamic integration for $F_{\rm ref} - F_{\rm harm}$ can be performed with high precision on large systems. The difference $F_{\rm vib} - F_{\rm ref}$, by contrast, involves heavy {\em ab initio} calculations, but provided a good $U_{\rm ref}$ can be found these are manageable. We return to the problem of searching for a good $U_{\rm ref}$ in Sec.~\ref{sec:reference}. 
\subsubsection{Calculation of thermal averages} \label{sec:thermav} The calculation of thermal averages is just the standard problem of computational statistical mechanics, and can be accomplished by any method that allows us to draw unbiased samples of configurations from the appropriate ensemble. In this work, we employ molecular dynamics simulation. This means, for example, that to calculate $\langle U_{\rm ref} - U_{\rm harm} \rangle_\lambda^{\rm rh}$ we generate a trajectory of the system using equations of motion derived from the total energy function $U_\lambda^{\rm rh}$. In the usual way, an initial part of the trajectory is discarded for equilibration, and the remainder is used to estimate the average. The duration of this remainder must suffice to deliver enough {\em independent} samples to achieve the required statistical precision. The key technical problem in calculating thermal averages in nearly harmonic systems is that of ergodicity. In the dynamical evolution of a perfectly harmonic system, energy is never shared between different vibrational modes, so that a system starting at any point in phase space fails to explore the whole of phase space. This means that in a nearly harmonic system exploration will be very slow and inefficient, and it is difficult to generate statistically independent samples. We solve this following Ref.~\cite{dewijs98a}: the statistical sampling is performed using Andersen molecular dynamics~\cite{andersen80}, in which the atomic velocities are periodically randomised by drawing them from a Maxwellian distribution. This type of simulation generates the canonical ensemble and completely overcomes the ergodicity problem. \subsubsection{Reference system} \label{sec:reference} The computational effort needed to calculate $F_{\rm vib} - F_{\rm ref}$ is greatly reduced if the difference of total energies $U_{\rm vib} - U_{\rm ref}$ is small, for two reasons. 
First, the amount of sampling needed to calculate $\langle U_{\rm vib} - U_{\rm ref} \rangle_\lambda$ to a given precision is reduced if the fluctuations of $U_{\rm vib} - U_{\rm ref}$ are small; second, the variation of $\langle U_{\rm vib} - U_{\rm ref} \rangle_\lambda$ as $\lambda$ goes from 0 to 1 is reduced. In fact, if the fluctuations are small enough, this variation can be neglected, and it is accurate enough to approximate $F_{\rm vib} - F_{\rm ref} \simeq \langle U_{\rm vib} - U_{\rm ref} \rangle_{\rm ref}$, with the average taken in the reference ensemble. If this is not good enough, the next approximation is readily shown to be: \begin{equation} F_{\rm vib} - F_{\rm ref} \simeq \langle U_{\rm vib} - U_{\rm ref} \rangle_{\rm ref} - \frac{1}{2 k_{\rm B} T} \left\langle \left[ U_{\rm vib} - U_{\rm ref} - \langle U_{\rm vib} - U_{\rm ref} \rangle_{\rm ref} \right]^2 \right\rangle_{\rm ref} \; . \label{eqn:secondorder} \end{equation} Our task is therefore to search for a model $U_{\rm ref}$ for which the fluctuations of $U_{\rm vib} - U_{\rm ref}$ are as small as possible. The question of reference systems for Fe has already been discussed in our recent {\em ab initio} simulation work on the high-pressure liquid~\cite{alfe99b}. We showed there that a remarkably good reference model is provided by a system interacting through inverse-power pair potentials: \begin{equation} U_{\rm IP} = \frac{1}{2} \sum_{I \ne J} \phi ( \mid {\bf R}_I - {\bf R}_J \mid ) \; , \label{eqn:uip} \end{equation} where $\phi (r) = B / r^\alpha$, with $B$ and $\alpha$ adjusted to minimise the fluctuations of the difference between $U_{\rm IP}$ and the {\em ab initio} energy. Unfortunately, we shall show that this is an unsatisfactory reference model for the solid, because the harmonic phonon dispersion relations produced by $U_{\rm IP}$ differ markedly from the {\em ab initio} ones.
It is a particularly poor reference model at low temperatures where anharmonic corrections are small, because in that r\'{e}gime a good reference system must closely resemble $U_{\rm harm}$. However, we find that $U_{\rm IP}$ becomes an increasingly good reference system as $T$ approaches the melting temperature. We therefore adopt as a general form for the reference system a linear combination of $U_{\rm harm}$ and $U_{\rm IP}$: \begin{equation} U_{\rm ref} = c_1 U_{\rm harm} + c_2 U_{\rm IP} \; . \label{eqn:optcoef} \end{equation} The coefficients $c_1$ and $c_2$ are adjusted to minimise the intensity of the fluctuations of $U_{\rm vib} - U_{\rm ref}$ for each thermodynamic state. Now consider in more detail how this optimisation of $U_{\rm ref}$ is to be done. In principle, the ensemble in which we have to sample the fluctuations of $U_{\rm vib} - U_{\rm ref}$ is the one generated by the continuously switched total energy $( 1 - \lambda ) U_{\rm ref} + \lambda U_{\rm vib}$ that governs the thermodynamic integration from $U_{\rm ref}$ to $U_{\rm vib}$. In practice, this is essentially the same as sampling in either of the ensembles associated with $U_{\rm ref}$ or $U_{\rm vib}$, provided the fluctuations of $U_{\rm vib} - U_{\rm ref}$ are indeed small. But even this poses a problem. We are reluctant to sample in the ensemble of $U_{\rm vib}$, because extensive (and expensive) {\em ab initio} calculations are needed to achieve adequate statistical accuracy. On the other hand, we cannot sample in the ensemble of $U_{\rm ref}$ without knowing $U_{\rm ref}$, which is what we are trying to find. We resolve this problem by constructing an initial optimised $U_{\rm ref}$ by minimising the fluctuations in the ensemble of $U_{\rm harm}$. We then use this initial $U_{\rm ref}$ to generate a new set of samples, which is then used to reoptimise $U_{\rm ref}$. 
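Minimising the fluctuations of $U_{\rm vib} - ( c_1 U_{\rm harm} + c_2 U_{\rm IP} )$ over a fixed sample of configurations is a linear least-squares problem on the mean-centred energies. A minimal numpy sketch (our own illustration, with hypothetical variable names, not the code used in this work):

```python
import numpy as np

def optimise_reference(U_vib, U_harm, U_ip):
    """Return c1, c2 minimising the variance of
    U_vib - (c1*U_harm + c2*U_ip) over the sampled configurations."""
    A = np.column_stack([U_harm - np.mean(U_harm),
                         U_ip - np.mean(U_ip)])
    b = np.asarray(U_vib) - np.mean(U_vib)
    (c1, c2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return c1, c2
```

Centring the energies removes the irrelevant constant offset, so only the fluctuations enter the fit.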
In principle, we should probably repeat this procedure until $U_{\rm ref}$ ceases to vary, but in practice we stop after the second iteration. Note that even this approach requires fully converged {\em ab initio} calculations for a large set of configurations. But since the configurations are generated with the potential model $U_{\rm ref}$, statistically independent samples are generated with much less effort than if we were using $U_{\rm vib}$ to generate them. \section{The rigid perfect lattice} \label{sec:perfect} The energy and pressure as functions of volume for h.c.p. Fe at low temperatures (i.e.\ temperatures at which lattice vibrations and electronic excitations can be neglected) have been extensively investigated both by DAC experiments~\cite{mao90} and by DFT studies~\cite{stixrude94,soderlind96}, including our own earlier USPP~\cite{vocadlo97} and PAW~\cite{alfe99b} calculations. The various DFT calculations agree very closely with each other, and reproduce the experimental $p(V)$ relation very accurately, especially at high pressures: the difference between our PAW pressures~\cite{alfe99b} and the experimental values ranges from 4.5~\% at 100~GPa to 2.5~\% at 300~GPa, these deviations being only slightly greater than the scatter on the experimental values. The present DFT calculations on the rigid perfect lattice give $F_{\rm perf}$ for any chosen volume and electronic temperature $T_{\rm el}$. In order to achieve our target precision of 10~meV/atom for the free energy, careful attention must be paid to electronic Brillouin-zone sampling. All the calculations presented in this Section employ the $15 \times 15 \times 9$ Monkhorst-Pack set, which gives 135 $k$-points in the irreducible wedge of the Brillouin zone. The $k$-point sampling errors with this set have been assessed by repeating selected calculations with 520 $k$-points in the irreducible wedge.
Tests at the atomic volume $V = 8.67$~\AA$^3$ show that the errors are 4.0, 2.0 and 0.5~meV/atom at temperatures of 500, 1000 and 2000~K respectively, so that, as expected, the errors decrease rapidly with increasing $T_{\rm el}$. Since temperatures below 2000~K are not of interest here, we conclude that with our chosen Monkhorst-Pack set $k$-point errors are completely negligible. We have done direct DFT calculations at a closely spaced set of atomic volumes going from 6.2 to 11.4~\AA$^3$ at intervals of 0.2~\AA$^3$, and at each of these volumes we have made calculations at $T_{\rm el}$ values going from 200 to 10,000~K at intervals of 200~K. At every one of these state points, we obtain the value of $F_{\rm perf}$. The calculations also deliver directly the internal energy $E_{\rm perf}$ and the entropy $S_{\rm perf}$ (see Eqn~\ref{eqn:entropy}). The specific heat is then obtained either by numerical differentiation of the $E_{\rm perf}$ results ($C_{\rm perf} = ( \partial E_{\rm perf} / \partial T )_V$) or analytically from the formula for $S_{\rm perf}$ ($C_{\rm perf} = T ( \partial S_{\rm perf} / \partial T )_V$). If we ignore the temperature dependence of the Kohn-Sham energies, $\partial S_{\rm perf} / \partial T$ can be evaluated analytically using the text-book formula for the Fermi-Dirac occupation numbers: $f_i = 1 / [ \exp ( \beta ( \epsilon_i - \mu )) + 1 ]$, where $\epsilon_i$ is the Kohn-Sham energy of orbital $i$ and $\mu$ is the electronic chemical potential. It was pointed out by Wasserman {\em et al.}~\cite{wasserman96} that the neglect of the dependence of $\epsilon_i$ on $T_{\rm el}$ is a very accurate approximation, and our tests confirm that the errors incurred by doing this are negligible. Our analytically calculated results for $C_{\rm perf}$ are reported in Fig.~\ref{fig:cv_electronic} for the temperature range $0 - 6000$~K at the atomic volumes $V = 7.0$, 8.0, 9.0 and 10.0~\AA$^3$/atom.
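The entropy of independent Fermi-Dirac quasiparticles mentioned above can be sketched in a few lines. The Python fragment below is our own minimal illustration (fixed, temperature-independent levels, no spin degeneracy, units with $k_{\rm B} = 1$), with $\mu$ fixed by electron-number conservation via bisection:

```python
import numpy as np

def fermi(eps, mu, kT):
    """Fermi-Dirac occupation numbers for levels eps."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def chemical_potential(eps, n_elec, kT):
    """Bisect for mu such that sum_i f_i = n_elec."""
    lo, hi = eps.min() - 20.0 * kT, eps.max() + 20.0 * kT
    for _ in range(80):
        mu = 0.5 * (lo + hi)
        if fermi(eps, mu, kT).sum() < n_elec:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

def electronic_entropy(eps, n_elec, kT):
    """S = -sum_i [f ln f + (1-f) ln(1-f)], in units of k_B."""
    f = fermi(eps, chemical_potential(eps, n_elec, kT), kT)
    f = np.clip(f, 1e-300, 1.0 - 1e-16)   # guard the logarithms
    return -np.sum(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
```

For a constant density of states $g$, this reproduces the Sommerfeld low-temperature result $S \to ( \pi^2 / 3 ) g k_{\rm B}^2 T$, which is the origin of the linear low-$T$ specific heat discussed in the text.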
The key point to note is that $C_{\rm perf}$ becomes {\em large} at high temperatures, its value of {\em ca.}~$2 k_{\rm B}$ at 6000~K being comparable with the Dulong-Petit specific heat of lattice vibrations ($3 k_{\rm B}$). This point about the large magnitude of the electronic specific heat has been emphasised in several previous papers~\cite{stixrude97,wasserman96,boness90}, and the values reported here are close to those calculated by Wasserman {\em et al.}~\cite{wasserman96}, though the latter actually refer to f.c.c. Fe. This means that full inclusion of thermal electronic excitations, as done here, is crucial to a correct description of the thermodynamics of Fe at core conditions. The linear dependence of $C_{\rm perf}$ on $T$ evident in Fig.~\ref{fig:cv_electronic} at low $T$ ($C_{\rm perf} = \gamma T + O ( T^2 )$) is what we expect from the standard Sommerfeld expansion~\cite{ashcroft76} for electronic specific heat in powers of $T$, which shows that the low-temperature slope is given by $\gamma = \frac{1}{3} \pi^2 k_{\rm B}^2 g ( E_{\rm F} )$, where $g ( E_{\rm F} )$ is the electronic density of states (DOS, i.e. the number of states per unit energy per atom) at the Fermi energy $E_{\rm F}$. Our calculated DOS at the atomic volumes $V = 7.0$ and 10.0~\AA$^3$ (Fig.~\ref{fig:dos}) shows, as expected, that the width of the electronic $d$-band increases on compression, so that $g ( E_{\rm F} )$ and hence $\gamma$ decrease with decreasing atomic volume. As a cross-check, we have calculated $\gamma$ directly from the density of states, and we recover almost exactly the low temperature slope of $C_{\rm perf}$. In order to obtain other thermodynamic functions, we need a fit to our $F_{\rm perf}$ results. At each temperature, we fit the results to the standard Birch-Murnaghan form, using exactly the procedure followed in our recent work on the Fe/O system~\cite{alfe99c}. 
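The third-order Birch-Murnaghan form referred to here can be written down explicitly. The sketch below is our own illustration of the standard energy and pressure expressions (Ref.~\cite{alfe99c} should be consulted for the exact parameterisation used in this work), with $V$ in \AA$^3$ and $K$ in eV/\AA$^3$ ($1$~eV/\AA$^3 \simeq 160.2$~GPa):

```python
import numpy as np

def bm_energy(V, E0, V0, K, Kp):
    """Third-order Birch-Murnaghan energy E(V) (standard form)."""
    x = (V0 / V) ** (2.0 / 3.0)
    eta = x - 1.0
    return E0 + (9.0 * V0 * K / 16.0) * (eta**3 * Kp + eta**2 * (6.0 - 4.0 * x))

def bm_pressure(V, V0, K, Kp):
    """Consistent pressure p(V) = -dE/dV for the same form."""
    x = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * K * (x**3.5 - x**2.5) * (1.0 + 0.75 * (Kp - 4.0) * (x - 1.0))
```

The four parameters can then be fitted to the free-energy values at each temperature with any standard nonlinear least-squares routine.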
This involves fitting the 22 values of $F_{\rm perf}$ at a given temperature using four fitting parameters ($E_0$, $V_0$, $K$ and $K^\prime$ in the notation of Ref.~\cite{alfe99c}). We find that at all temperatures the r.m.s. fitting error is less than 1~meV. The temperature variation of the fitting parameters is then represented using a polynomial of sixth degree. Electronic excitations have a significant effect on the pressure, as can be seen by examining the $T$-dependence of the perfect-lattice pressure $p_{\rm perf} = - ( \partial F_{\rm perf} / \partial V )_T$. We display in Fig.~\ref{fig:p_electronic} the thermal part $\Delta p_{\rm perf}$ of $p_{\rm perf}$, i.e. the difference between $p_{\rm perf}$ at a given $T$ and its zero-temperature value. The thermal excitation of electrons produces a positive pressure. This is what intuition would suggest, but it is worth noting the reason. Since $F_{\rm perf} ( T_{\rm el} ) = F_{\rm perf} (0) - \frac{1}{2} T^2 \gamma (V)$ at low temperatures, the change of pressure due to electronic excitations is $\Delta p = \frac{1}{2} T^2 d \gamma / d V$ in this region. But $d \gamma / d V > 0$, so that the electronic thermal pressure must be positive. To put the magnitude of this pressure in context, we recall that at the Earth's inner-core boundary (ICB) the pressure is 330~GPa and the temperature is believed to be in the range $5000 - 6000$~K, the atomic volume of Fe under these conditions being {\em ca.}~7~\AA$^3$. Our results then imply that electronic thermal pressure accounts for {\em ca.}~4~\% of the total pressure, which is small but significant. \section{The harmonic crystal} \label{sec:harmonic} \subsection{Convergence tests} \label{sec:convergence} We have undertaken extensive tests to show that our target precision of 10~meV/atom can be attained for the harmonic free energy $F_{\rm harm}$.
It is useful to note that at $T = 6000$~K an error of 10~meV represents 2~\% of $k_{\rm B} T$, and since $F_{\rm harm}$ is given in terms of the geometric mean frequency $\bar{\omega}$ by $3 k_{\rm B} T \ln ( \hbar \bar{\omega} / k_{\rm B} T )$, we must achieve a precision of 0.7~\% in $\bar{\omega}$. A {\em sufficient} condition for this is that we obtain the phonon frequencies $\omega_s ( {\bf k} )$ for all wavevectors ${\bf k}$ and branches $s$ to this precision, but this may not be {\em necessary}, since there can be cancellation of errors at different {\bf k} and/or $s$. Convergence of $\bar{\omega}$ must be ensured with respect to four main parameters: the atom displacement used to calculate the force-constant matrix $\Phi_{l s \alpha , l^\prime t \beta}$; the electronic $k$-point sampling; the size of repeating cell used to obtain $\Phi_{l s \alpha , l^\prime t \beta}$; and the density of $k$-point mesh used in calculating $\bar{\omega}$ from the $\omega_{{\bf k} s}$ by integration over the phonon Brillouin zone (see Eqn~(\ref{eqn:fharm})). Convergence can be tested separately with respect to these four parameters. Integration over the phonon Brillouin zone is performed using Monkhorst-Pack $k$-points~\cite{monkhorst76}. We find that the set having 364 $k$-points in the irreducible wedge achieves a precision in $F_{\rm harm}$ of better than 1~meV/atom at all the temperatures of interest, and this $k$-point set was used in the calculations presented here. The effect of atomic displacement amplitude was tested with the force-constant matrix generated using a $2 \times 2 \times 2$ repeating cell at the atomic volume 8.67~\AA$^3$, and amplitudes ranging from 0.0148 to 0.118~\AA\ were used. The systematic variation of the resulting $F_{\rm harm}$ showed that with an amplitude of 0.0148~\AA\ the error is less than 1~meV/atom. 
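The sensitivity of $F_{\rm harm}$ to $\bar{\omega}$ quoted at the start of this subsection is easy to check numerically. The fragment below (our own illustration, not the code used in this work) evaluates the classical expression $F_{\rm harm} = 3 k_{\rm B} T \ln ( \hbar \bar{\omega} / k_{\rm B} T )$ per atom:

```python
import numpy as np

HBAR = 6.582119569e-16   # eV s
KB = 8.617333262e-5      # eV / K

def geometric_mean_omega(freqs_THz):
    """Geometric mean angular frequency (rad/s) over branches and k-points."""
    omega = 2.0 * np.pi * np.asarray(freqs_THz, dtype=float) * 1e12
    return np.exp(np.mean(np.log(omega)))

def f_harm_per_atom(wbar, T):
    """Classical harmonic free energy per atom (eV)."""
    return 3.0 * KB * T * np.log(HBAR * wbar / (KB * T))
```

A 0.7~\% shift in $\bar{\omega}$ changes $F_{\rm harm}$ by $3 k_{\rm B} T \ln (1.007) \simeq 11$~meV at 6000~K, consistent with the precision requirement stated above.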
The errors associated with electronic $k$-point sampling in the calculation of the force-constant matrix were initially assessed with $\Phi_{l s \alpha , l^\prime t \beta}$ obtained from the $2 \times 2 \times 2$ repeating cell at the atomic volume $V = 8.67$~\AA$^3$. We found that at $T_{\rm el} = 4300$~K convergence of $F_{\rm harm}$ is obtained within 2~meV/atom if the $3 \times 3 \times 2$ Monkhorst-Pack set of electronic $k$-points is used. This is close to being satisfactory, but starts to produce significant errors at lower temperatures. Our definitive calculations were actually done with the more extensive $5 \times 5 \times 5$ Monkhorst-Pack electronic set. With this set, our tests performed with $\Phi_{l s \alpha , l^\prime t \beta}$ obtained from the $3 \times 3 \times 2$ repeating cell and $V = 9.17$~\AA$^3$ show that the electronic $k$-point error is now {\em ca.}~0.1~meV/atom even at $T_{\rm el} = 1500$~K. At higher electronic temperatures and with larger repeating cells, the error will of course be even smaller. Finally, we have tested the convergence of $F_{\rm harm}$ with respect to the size of repeating cell used to generate $\Phi_{l s \alpha , l^\prime t \beta}$. A wide range of different cell sizes and shapes were studied, including $2 \times 2 \times 2$, $3 \times 3 \times 2$, $4 \times 4 \times 4$ and $5 \times 5 \times 3$, the largest of these containing 150 atoms in the repeating cell. The tests showed that with the repeating cell $3 \times 3 \times 2$ the error in $F_{\rm harm}$ calculated at $V = 8.67$~\AA$^3$ at 4300~K is a little over 2~meV/atom, and we adopted this cell size for all our calculations of $\Phi_{l s \alpha , l^\prime t \beta}$. We expect the error to be similar at other volumes, and to be roughly proportional to temperature, so that it should be insignificant over the whole range of states of interest. 
\subsection{Dispersion relations, average frequency, free energy} \label{sec:dispersion} In Fig.~\ref{fig:phonons} we present the harmonic phonon dispersion relations at the two atomic volumes 8.67 and 6.97~\AA$^3$ calculated with $T_{\rm el} = 4000$~K. We are not aware of previous direct {\em ab initio} calculations of the phonon frequencies of high-pressure h.c.p. Fe, but there are published dispersion relations derived from a `generalised pseudopotential' parameterisation of FP-LMTO calculations performed by S\"{o}derlind {\em et al.}~\cite{soderlind96} using the LDA at the atomic volume 6.82~\AA$^3$. The agreement of their phonon frequencies with ours is far from perfect. For example, we find that the maximum frequency in the Brillouin zone calculated at $V = 6.82$~\AA$^3$ is at the $\Gamma$-point and is 21.2~THz, whereas they find the maximum frequency at the $M$-point with the value 17.2~THz. This is not unexpected, since they report that the generalised pseudopotential scheme fails to reproduce accurately some phonon frequencies calculated directly with FP-LMTO in the f.c.c. Fe crystal~\cite{soderlind96}; in addition, the LDA used by them is known to underestimate phonon frequencies in Fe~\cite{vocadlo97}. Casual inspection suggests that our dispersion curves at the two atomic volumes are almost identical apart from an overall scale factor. How well this holds can be judged from the right-hand panel of Fig.~\ref{fig:phonons}, where we plot as dashed curves the dispersion curves at $V = 8.67$~\AA$^3$ scaled by the factor 1.409 -- the reason for choosing this factor will be explained below. The comparison shows that the curves at the two volumes are indeed related by a single scaling factor to within {\em ca.}~5~\%. We also take the opportunity here to check how well the inverse-power potential model $U_{\rm IP}$ (see Eqn~(\ref{eqn:uip})) reproduces phonon frequencies.
To do this, we take exactly the same parameters $B$ and $\alpha$ specifying $\phi (r)$ that reproduced well the properties of the liquid~\cite{alfe99b}, namely $\alpha = 5.86$ and $B$ such that for $r = 2.0$~\AA\ $\phi (r) = 1.95$~eV. The phonons calculated from this model are compared with the {\em ab initio} phonons at atomic volume $V = 8.67$~\AA$^3$ in the left panel of Fig.~\ref{fig:phonons}. Although the general form of the dispersion curves is correctly reproduced, it is clear that the model gives only a very rough description, with discrepancies of as much as 30~\% for some frequencies. We performed direct {\em ab initio} calculations of the dispersion relations and hence the geometric mean frequency $\bar{\omega}$ for seven volumes spaced roughly equally from 9.72 to 6.39~\AA$^3$, and for each of these volumes for $T_{\rm el}$ from 1000 to 10,000~K at intervals of 500~K. The results for $\bar{\omega}$ as a function of volume are reported in Fig.~\ref{fig:omega} for the three temperatures $T_{\rm el} = 2000$, 4000 and 6000~K. We use a (natural) log-log plot to display the results, so that the negative slope $\gamma_{\rm ph} \equiv - d \ln \bar{\omega} / d \ln V$ is the so-called phonon Gr\"{u}neisen parameter. (The relation between $\gamma_{\rm ph}$ and the thermodynamic Gr\"{u}neisen parameter $\gamma$ will be discussed in Sec.~\ref{sec:other}.) We note that if phonon dispersion curves at two different volumes are related by a simple scaling factor, this must be the ratio of $\bar{\omega}$ values at the two volumes. The scaling factor used in Fig.~\ref{fig:phonons} was obtained in exactly this way from our $\bar{\omega}$ results. The Gr\"{u}neisen parameter $\gamma_{\rm ph}$ increases with increasing volume, in accord with a widely used rule of thumb~\cite{anderson95}. We find that $\gamma_{\rm ph}$ goes from 1.34 at $V = 6.7$~\AA$^3$ to 1.70 at $V = 8.3$~\AA$^3$, but then decreases slightly to 1.62 at $V = 9.5$~\AA$^3$.
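The slope $\gamma_{\rm ph} = - d \ln \bar{\omega} / d \ln V$ can be extracted from a discrete set of $( V, \bar{\omega} )$ points by centred differences on the log-log grid. A minimal sketch (our own illustration; the grid values in the example are hypothetical):

```python
import numpy as np

def gamma_ph(V, wbar):
    """Phonon Grueneisen parameter -d ln(wbar)/d ln(V), by centred
    differences on a (possibly non-uniform) grid of volumes."""
    return -np.gradient(np.log(wbar), np.log(V))
```

For data exactly of the form $\bar{\omega} \propto V^{-\gamma}$ the log-log relation is linear and the finite differences recover $\gamma$ exactly.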
Fig.~\ref{fig:omega} also allows us to judge the effect of $T_{\rm el}$ on phonon frequencies: for all volumes studied, the frequencies decrease by {\em ca.}~4~\% as $T_{\rm el}$ goes from 2000 to 6000~K. However, we mention that for the higher volumes, though not for the smaller ones, $\bar{\omega}$ slightly increases again as $T_{\rm el}$ goes to still higher values. To enable the $\bar{\omega}$ data to be used in thermodynamic calculations, we parameterise the temperature dependence of $\ln \bar{\omega}$ at each volume as $a + b T^2 + c T^3 + e T^5$, and the volume dependence of the four coefficients $a$, $b$, $c$ and $e$ as a third-degree polynomial in $V$. We now return to the matter of quantum nuclear corrections. Since the leading high-temperature correction to the free energy is $\frac{1}{24} k_{\rm B} T ( \beta \hbar \omega )^2$ per mode and there are three modes per atom, the quantum correction to $F_{\rm harm}$ is $\frac{1}{8} k_{\rm B} T ( \beta \hbar \langle \omega^2 \rangle^{1/2} )^2$ per atom, where $\langle \omega^2 \rangle$ denotes the average of $\omega^2$ over wavevectors and branches. At the lowest volume of interest, $V = 7$~\AA$^3$, $\langle \omega^2 \rangle^{1/2} / 2 \pi$ is roughly 15~THz. At the lowest temperature of interest, $T = 2000$~K, this gives a quantum correction of 3~meV/atom, which is small compared with our target precision. \subsection{Harmonic phonon specific heat and thermal pressure} \label{harmCp} If the mean frequency $\bar{\omega}$ were independent of temperature, the constant-volume specific heat $C_{\rm harm}$ due to harmonic phonons would be exactly $3 k_{\rm B}$ per atom in the classical limit employed here. We find that its temperature dependence yields a slight increase of $C_{\rm harm}$ above this value, but this is never greater than $0.25 k_{\rm B}$ under the conditions of interest.
The harmonic phonon pressure $p_{\rm harm}$ as a function of atomic volume at different temperatures is reported in Fig.~\ref{fig:p_harmonic}. Comparison with Fig.~\ref{fig:p_electronic} shows that $p_{\rm harm}$ is always much bigger (by a factor of at least three) than the electronic thermal pressure under the conditions of interest. At ICB conditions ($p = 330$~GPa, $T \sim 5000 - 6000$~K), $p_{\rm harm}$ accounts for {\em ca.}~15~\% of the total pressure. \section{Anharmonic free energy} \label{sec:anharmonic} \subsection{Optimisation of reference system} \label{sec:optimisation} It was stressed in Sec.~\ref{sec:reference} that optimisation of the reference system greatly improves the efficiency of the anharmonic calculations. We investigated the construction of the reference system in detail at the atomic volume 8.67~\AA$^3$, with the optimisations performed for a simulated system of 16 atoms. The calculation of the anharmonic free energy itself for a system as small as this would not be adequate, but we expect this system size to suffice for the optimisation of $U_{\rm ref}$. The initial sample of configurations (see Sec.~\ref{sec:reference}) was taken from a simulation of duration 100~ps performed with the total energy $U_{\rm harm}$, with velocity randomisation typically every 0.2~ps. Configurations were taken every 1~ps, so that we obtain a sample of 100 configurations. In computing the energy difference $U_{\rm vib} - U_{\rm ref}$ for these configurations, the {\em ab initio} energy $U_{\rm vib}$ was always computed using $5 \times 5 \times 3$ Monkhorst-Pack electronic $k$-point sampling (38 $k$-points in the full Brillouin zone). Once the preliminary optimisation had been performed with configurations generated like this, the resulting $U_{\rm ref}$ was used to produce a new set of 100 configurations with an Andersen MD simulation of the same duration as before, and the reference system was reoptimised.
This entire procedure was carried out at temperatures of 1000 and 4000~K. The values of the optimisation coefficients (see Eqn~(\ref{eqn:optcoef})) were $c_1= 0.2$, $c_2 = 0.8$ at the high temperature and $c_1 = 0.7$, $c_2 = 0.3$ at the low temperature. (We do not require that $c_1 + c_2 = 1$, though this happens to be the case here.) As expected, $U_{\rm ref}$ resembles $U_{\rm harm}$ quite closely at the low temperature and $U_{\rm IP}$ quite closely at the high temperature. In view of the labour involved in the optimisation, we wanted to find out whether the detailed choice of $c_1$ and $c_2$ makes a large difference to the strength of the fluctuations of $U_{\rm vib} - U_{\rm ref}$. To do this, we computed these fluctuations at several temperatures, using the two reference models just described, i.e. without optimising the $c_i$ coefficients at each temperature. Our conclusion is that the values $c_1 = 0.2$, $c_2 = 0.8$ can safely be used at all the state points of interest, without incurring large fluctuations, and we therefore used this way of making the reference system in all subsequent calculations. \subsection{From harmonic {\em ab initio} to reference to full {\em ab initio}} \label{sec:harm2full} The thermodynamic integration from {\em ab initio} harmonic to reference was done with nine equally-spaced $\lambda$-points using Simpson's rule, which gives an integration precision well in excess of our target. To investigate the influence of system size, integration from $U_{\rm harm}$ to $U_{\rm ref}$ was performed for systems of 12 different sizes, going from 16 to 1200 atoms. The calculations were also repeated with the force-constant matrix in $U_{\rm harm}$ generated with cells containing from 16 to 150 atoms (see Sec.~\ref{sec:phonons}). 
These tests showed that if the thermodynamic integration is done with a system of 288 atoms and the force-constant matrix used for $U_{\rm harm}$ is generated with the 36-atom cell, then the resulting difference $F_{\rm ref} - F_{\rm harm}$ is converged to better than 3~meV/atom. To compute the difference $F_{\rm vib} - F_{\rm ref}$, we used the second-order expansion formula given in Eqn~(\ref{eqn:secondorder}). Given the small size of the fluctuations of $U_{\rm vib} - U_{\rm ref}$, we expect this to be very accurate. The calculations of $F_{\rm vib} - F_{\rm ref}$ were all done with the 16-atom system. Tests with 36- and 64-atom systems show that this free-energy difference is converged with respect to size effects to within {\em ca.}~2~meV. A summary of all our results for $F_{\rm ref} - F_{\rm harm}$ and $F_{\rm vib} - F_{\rm ref}$, and the resulting values of $F_{\rm anharm} \equiv F_{\rm vib} - F_{\rm harm}$ are reported in Table~\ref{tab:anharm}. The anharmonic free energy is always negative, so that anharmonicity stabilises the solid. As expected, the anharmonic free energy is small (less than or comparable with our target precision of 10~meV/atom) at low temperatures, but increases rapidly at high temperatures. The temperature at which it becomes appreciable is higher for smaller atomic volumes. In classical statistical mechanics, $F_{\rm anharm}$ is expected to go as $T^2$ at low temperatures, and in fact we find that $F_{\rm anharm} = a(V) T^2$ gives a good representation of our results for all the temperatures studied. The volume dependence of $a(V)$ is adequately represented by $a(V) = \alpha_1 + \alpha_2 V$, with $\alpha_1 = 2.2 \times 10^{-9}$~eV~K$^{-2}$ and $\alpha_2 = - 6.0 \times 10^{-10}$~eV~K$^{-2}$~\AA$^{-3}$ per atom.
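This parameterisation can be cross-checked against the specific-heat values quoted in the next subsection, since $C_{\rm anharm} = - T \, \partial^2 F_{\rm anharm} / \partial T^2 = - 2 a(V) T$. A minimal sketch (our own illustration, using the coefficients quoted above):

```python
KB = 8.617333262e-5          # eV / K
ALPHA1 = 2.2e-9              # eV K^-2
ALPHA2 = -6.0e-10            # eV K^-2 Angstrom^-3

def f_anharm(V, T):
    """F_anharm = a(V) * T^2 per atom (eV), with a(V) = ALPHA1 + ALPHA2*V."""
    return (ALPHA1 + ALPHA2 * V) * T * T

def c_anharm(V, T):
    """C_anharm = -T d2F/dT2 = -2 a(V) T per atom (eV/K)."""
    return -2.0 * (ALPHA1 + ALPHA2 * V) * T
```

With these coefficients, $C_{\rm anharm}$ at 6000~K comes out as {\em ca.}~$0.28 k_{\rm B}$ at $V = 7$~\AA$^3$ and {\em ca.}~$0.53 k_{\rm B}$ at $V = 10$~\AA$^3$, matching the values given in the next subsection.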
\subsection{Anharmonic specific heat and pressure} \label{sec:anharmCp} Within the parameterisation just described, the anharmonic contribution to the constant-volume specific heat $C_{\rm anharm}$ is proportional to $T$ and varies linearly with $V$. As an indication of its general size, we note that $C_{\rm anharm}$ increases from 0.09 to 0.18~$k_{\rm B}$ at 2000~K and from 0.28 to 0.53~$k_{\rm B}$ at 6000~K as $V$ goes from 7 to 10~\AA$^3$. The anharmonic contribution to the pressure is independent of volume, and is proportional to $T^2$. It increases from 0.4 to 3.5~GPa as $T$ goes from 2000 to 6000~K, so that even at high temperatures it is barely significant. \section{Thermodynamics of the solid} \label{sec:thermo} We now combine the parameterised forms for $F_{\rm perf}$, $F_{\rm harm}$ and $F_{\rm anharm}$ presented in the previous three Sections to obtain the total free energy of the h.c.p. crystal, and hence, by taking appropriate derivatives, a range of other thermodynamic functions, starting with those measured in shock experiments. \subsection{Thermodynamics on the Hugoniot} \label{sec:hugoniot} In a shock experiment, conservation of mass, momentum and energy requires that the pressure $p_{\rm H}$, the molar internal energy $E_{\rm H}$ and the molar volume $V_{\rm H}$ in the compression wave are related by the Rankine-Hugoniot formula~\cite{rankinehugoniot}: \begin{equation} \frac{1}{2} p_{\rm H} ( V_0 - V_{\rm H} ) = E_{\rm H} - E_0 \; , \end{equation} where $E_0$ and $V_0$ are the internal energy and volume in the zero-pressure state before the arrival of the wave. The quantities directly measured are the shock-wave and material velocities, which allow the values of $p_{\rm H}$ and $V_{\rm H}$ to be deduced. From a series of experiments, $p_{\rm H}$ as a function of $V_{\rm H}$ (the so-called Hugoniot) can be derived. The measurement of temperature in shock experiments is attempted but problematic~\cite{yoo93}.
The Hugoniot curve $p_{\rm H} ( V_{\rm H} )$ is straightforward to compute from our results: for a given $V_{\rm H}$, one seeks the temperature at which the Rankine-Hugoniot relation is satisfied; from this, one obtains $p_{\rm H}$ (and, if required, $E_{\rm H}$). In experiments on Fe, $V_0$ and $E_0$ refer to the zero-pressure b.c.c. crystal, and we obtain their values directly from GGA calculations, using exactly the same PAW technique and GGA as in the rest of the calculations. Since b.c.c. Fe is ferromagnetic, spin polarisation must be included, and this is treated by spin interpolation of the correlation energy due to Vosko {\em et al.}, as described in Refs.~\cite{alfe99b,kresse99}. The value of $E_0$ includes the harmonic vibrational energy at 300~K, calculated from {\em ab initio} phonon dispersion relations for ferromagnetic b.c.c. Fe. Our {\em ab initio} Hugoniot is compared with the measurements of Brown and McQueen~\cite{brown86} in Fig.~\ref{fig:p_hugoniot}. The agreement is good, with discrepancies ranging from 10~GPa at $V = 7.8$~\AA$^3$ to 12~GPa at $V = 8.6$~\AA$^3$. These discrepancies are only slightly greater than those found for the room-temperature static $p(V)$ curve (see Sec.~\ref{sec:perfect}), which can be regarded as giving an indication of the intrinsic accuracy of the GGA itself. Another way of looking at the accuracy to be expected of the GGA is to recalculate the Hugoniot using the experimental value of the b.c.c. $V_0$ (11.8~\AA$^3$, compared with the {\em ab initio} value of 11.55~\AA$^3$). The Hugoniot calculated in this way is also plotted in Fig.~\ref{fig:p_hugoniot}, and we see that this gives almost perfect agreement with the experimental data in the pressure range $100 - 240$~GPa. We deduce from this that the {\em ab initio} Hugoniot deviates from the experimental data by an amount which should be expected from the known inaccuracies of the GGA applied to Fe. 
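The Hugoniot construction described above reduces to one-dimensional root finding in $T$ at each $V_{\rm H}$. The sketch below is our own illustration; the functions $p(V,T)$ and $E(V,T)$ are hypothetical placeholders for the fitted {\em ab initio} free-energy derivatives, and the bisection assumes the Rankine-Hugoniot residual decreases monotonically with $T$:

```python
def hugoniot_temperature(p_func, e_func, V_H, V0, E0, T_lo=300.0, T_hi=2.0e4):
    """Bisect on T until 0.5*p(V_H,T)*(V0 - V_H) = E(V_H,T) - E0
    (the Rankine-Hugoniot condition)."""
    def resid(T):
        return 0.5 * p_func(V_H, T) * (V0 - V_H) - (e_func(V_H, T) - E0)
    for _ in range(100):
        T = 0.5 * (T_lo + T_hi)
        if resid(T) > 0.0:     # too cold: thermal energy still too small
            T_lo = T
        else:
            T_hi = T
    return 0.5 * (T_lo + T_hi)
```

Once $T$ is found, $p_{\rm H}$ (and, if required, $E_{\rm H}$) follows by evaluating the equation of state at $( V_{\rm H}, T )$.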
A similar comparison with the experimental Hugoniot was given in the tight-binding total-energy work of Wasserman {\em et al.}~\cite{wasserman96}, and their agreement was as good as ours. We discuss the significance of this later. Our Hugoniot temperature as a function of pressure is compared with the experimental results of Brown and McQueen~\cite{brown86} and of Yoo {\em et al.}~\cite{yoo93} in Fig.~\ref{fig:t_hugoniot}. The {\em ab initio} temperatures agree well with those of Brown and McQueen, but fall substantially below those of Yoo {\em et al.}, and this supports the suggestion of Ref.~\cite{wasserman96} that the Yoo {\em et al.} measurements overestimate the Hugoniot temperature by {\em ca.}~1000~K. A further quantity that can be extracted from shock experiments is the bulk sound velocity $v_{\rm B}$ as a function of atomic volume on the Hugoniot, which is given by $v_{\rm B} = ( K_S / \rho )^{1/2}$, with $K_S \equiv - V ( \partial p / \partial V )_S$ the adiabatic bulk modulus and $\rho$ the mass density. Since $K_S$ can be calculated from our {\em ab initio} pressure and entropy as functions of $V$ and $T$, our calculated $K_S$ can be directly compared with experimental values (Fig.~\ref{fig:ks_hugoniot}). Here, there is a greater discrepancy than one would wish, with the theoretical values falling significantly above the $K_S$ values of both Refs~\cite{brown86} and \cite{jeanloz79}, although we note that the two sets of experimental results disagree by an amount comparable with the discrepancy between theory and experiment. For what it is worth, we show in Fig.~\ref{fig:alpha_hugoniot} a comparison of our calculated thermal expansivity on the Hugoniot with values extracted from shock data by Jeanloz~\cite{jeanloz79}. The latter are very scattered, but it is clear that the theoretical values have similar magnitude.
However, our values vary little along the Hugoniot, whereas the experimental values seem to decrease rather rapidly with increasing pressure. \subsection{Other thermodynamic quantities} \label{sec:other} We conclude our presentation of results by reporting our {\em ab initio} predictions of quantities which conveniently characterise h.c.p. Fe at high pressures and temperatures, and allow some further comparisons with the predictions of Refs~\cite{stixrude97} and \cite{wasserman96}. Our results are presented as a function of pressure on isotherms at $T = 2000$, 4000 and 6000~K. At each temperature, we give results only for the pressure range where, according to our {\em ab initio} melting curve, the h.c.p. phase is thermodynamically stable. The total constant-volume specific heat per atom $C_v$ (Fig.~\ref{fig:cv_total}) emphasises again the importance of electronic excitations. In a purely harmonic system, $C_v$ would be equal to $3 k_{\rm B}$, and it is striking that $C_v$ is considerably greater than that even at the modest temperature of 2000~K, while at 6000~K it is nearly doubled. The decrease of $C_v$ with increasing pressure evident in Fig.~\ref{fig:cv_total} comes from the suppression of electronic excitations by high compression, and to a smaller extent from the suppression of anharmonicity. The thermal expansivity $\alpha$ (Fig.~\ref{fig:alpha}) is one of the few cases where we can compare with DAC measurements~\cite{boehler90}. The latter show that $\alpha$ decreases strongly with increasing pressure and the {\em ab initio} results fully confirm this. Our results also show that $\alpha$ increases significantly with temperature. Both trends are also shown by the tight-binding calculations of Ref.~\cite{wasserman96}, though the latter differ from ours in showing considerably larger values of $\alpha$ at low pressures. We note that Ref.~\cite{wasserman96} reported results for $\alpha$ at temperatures only up to 2000~K, so a full comparison is not possible. 
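The adiabatic bulk modulus used in the sound-velocity comparison above is related to the isothermal one by the standard identity $K_S = K_T ( 1 + \alpha \gamma T )$. A minimal sketch (our own illustration; the numerical inputs in the example are round hypothetical values, not our fitted results):

```python
import numpy as np

AMU = 1.66053906660e-27      # kg
M_FE = 55.845                # atomic mass of Fe (amu)

def adiabatic_bulk_modulus(K_T, alpha, gamma, T):
    """K_S = K_T (1 + alpha*gamma*T), with gamma the thermodynamic
    Grueneisen parameter and alpha the expansivity (1/K)."""
    return K_T * (1.0 + alpha * gamma * T)

def bulk_sound_velocity(K_S_GPa, V_ang3):
    """v_B = sqrt(K_S / rho), with rho from one Fe atom of volume V."""
    rho = M_FE * AMU / (V_ang3 * 1e-30)    # kg / m^3
    return np.sqrt(K_S_GPa * 1e9 / rho)    # m / s
```

At ICB-like volumes this gives bulk sound velocities of order 9~km~s$^{-1}$ for bulk moduli of order $10^3$~GPa.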
The product $\alpha K_T$ of expansivity and isothermal bulk modulus is important because it is sometimes assumed to be independent of pressure and temperature over a wide range of conditions, and this constancy is used to extrapolate experimental data. Our predicted isotherms for $\alpha K_T$ (Fig.~\ref{fig:alphakt}) indicate that its dependence on $p$ is indeed weak, especially at low temperatures, but that its dependence on $T$ certainly cannot be ignored, since it increases by at least 30~\% as $T$ goes from 2000 to 6000~K at high pressures. Wasserman {\em et al.}~\cite{wasserman96} come to qualitatively similar conclusions, and they also find values of {\em ca.}~10~MPa~K$^{-1}$ at $T \simeq 2000$~K. However, it is disturbing to note that the general tendency for $\alpha K_T$ to increase with pressure evident in our results is exactly the opposite of what was found in Ref.~\cite{wasserman96}. In particular, they found a marked increase of $\alpha K_T$ as $p \rightarrow 0$, which does not occur in our results. The thermodynamic Gr\"{u}neisen parameter $\gamma \equiv V ( \partial p / \partial E )_V \equiv \alpha K_T V / C_v$ plays an important role in high-pressure physics, because it relates the thermal pressure (i.e. the difference $p_{\rm th}$ between $p$ at given $V$ and $T$ and $p$ at the same $V$ but $T = 0$) and the thermal energy (difference $E_{\rm th}$ between $E$ at given $V$ and $T$ and $E$ at the same $V$ but $T = 0$). Assumptions about the value of $\gamma$ are frequently used in reducing shock data from Hugoniot to isotherm. If one assumes that $\gamma$ depends only on $V$, then the thermal pressure and energy are related by: \begin{equation} p_{\rm th} V = \gamma E_{\rm th} \; , \end{equation} a relation known as the Mie-Gr\"{u}neisen equation of state. 
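The Mie-Gr\"{u}neisen relation $p_{\rm th} V = \gamma E_{\rm th}$ is easily applied per atom. A minimal sketch with hypothetical numbers (the actual reduction of shock data is considerably more involved):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def mie_gruneisen_thermal_pressure(gamma, E_th, V):
    """Thermal pressure from the Mie-Gruneisen relation: p_th = gamma * E_th / V."""
    return gamma * E_th / V

# Hypothetical per-atom values: gamma ~ 1.5, a purely harmonic thermal
# energy E_th ~ 3 k_B T at T = 4000 K, and an atomic volume V ~ 7 A^3.
E_th = 3 * k_B * 4000.0   # J per atom
V = 7e-30                 # m^3 per atom
p_th = mie_gruneisen_thermal_pressure(1.5, E_th, V)
print(f"{p_th / 1e9:.1f} GPa")  # 35.5 GPa
```

Thermal pressures of a few tens of GPa are indeed the scale relevant at Earth's-core conditions, which is why assumptions about $\gamma$ matter so much when reducing Hugoniot data to isotherms.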
At low temperatures, where only harmonic phonons contribute to $E_{\rm th}$ and $p_{\rm th}$, $\gamma$ should indeed be temperature independent above the Debye temperature, because $E_{\rm th} = 3 k_{\rm B} T$ per atom, and $p_{\rm th} V = - 3 k_{\rm B} T d \ln \bar{\omega} / d \ln V = 3 k_{\rm B} T \gamma_{\rm ph}$, so that $\gamma = \gamma_{\rm ph}$, which depends only on $V$. But in high-temperature Fe, the temperature independence of $\gamma$ will clearly fail, because of electronic excitations (and anharmonicity). Our results for $\gamma$ (Fig.~\ref{fig:gamma}) indicate that it varies rather little with either pressure or temperature in the region of interest. At temperatures below {\em ca.}~4000~K, it decreases with increasing pressure, as expected from the behaviour of the phonon Gr\"{u}neisen parameter $\gamma_{\rm ph}$ (see Sec.~\ref{sec:dispersion}). This is also expected from the often-used empirical rule of thumb~\cite{anderson95} $\gamma \simeq \gamma_0 ( V / V_0 )^q$, where $V_0$ is a reference volume, $\gamma_0$ is the value of $\gamma$ at that volume, and $q$ is a constant exponent usually taken to be roughly unity. Since $V$ decreases by a factor of about 0.82 as $p$ goes from 100 to 300~GPa, this empirical relation would make $\gamma$ decrease by the same factor over this range, which is roughly what we see. However, the pressure dependence of $\gamma$ is very much weakened as $T$ increases, until at 6000~K $\gamma$ is almost constant. Our results agree quite well with those of Wasserman {\em et al.}~\cite{wasserman96} in giving a value $\gamma \simeq 1.5$ at high pressures, although once again their calculations are limited to the low-temperature region $T \le 3000$~K. But at low pressures there is a serious disagreement, since they find a strong increase of $\gamma$ to values of well over 2.0 as $p \rightarrow 0$, whereas our values never exceed 1.6. \section{Discussion and conclusions} \label{sec:discon} Our primary interest in this work is in the properties of h.c.p. 
iron at high pressures and temperatures, but in order to investigate them using {\em ab initio} methods we have needed to make technical developments, which have a wider significance. The major technical achievement is that we have been able to calculate the {\em ab initio} free energy and other thermodynamic properties with completely controlled statistical-mechanical errors, i.e. errors that can be reduced to any required extent. Anharmonicity and thermal electronic excitations are fully included. The attainment of high precision for the electronic and harmonic parts of the free energy has required no particular technical innovations, though careful attention to sources of error is essential. The main innovation is in the development of well optimised reference systems for use with thermodynamic integration in the calculation of the anharmonic part, without which adequate precision would be impossible. With the methods we have developed, it becomes unnecessary to approximate the electronic structure with semi-empirical representations, or to resort to the statistical-mechanical approximations that have been used in the past. We have assessed in detail the precision achieved in the various parts of the free energy. There are two kinds of errors: those incurred in the calculation of the free energies themselves, and those produced by fitting the results to polynomials. We have seen that the errors in calculating the perfect-lattice free energy $F_{\rm perf}$ are completely negligible, though there may be small fitting errors of perhaps 1~meV/atom. In the harmonic part $F_{\rm harm}$, the calculational errors are {\em ca.}~3~meV/atom, most of which comes from spatial truncation of the force-constant matrix; the fitting error for $F_{\rm harm}$ is of about the same size. The most serious errors are in the anharmonic part $F_{\rm anharm}$, and these are {\em ca.}~5~meV/atom in the calculation and {\em ca.}~4~meV/atom in the fitting. 
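The thermodynamic-integration step used for $F_{\rm anharm}$ can be illustrated on a one-dimensional toy model (our own sketch; the reference systems used in this work are of course far more elaborate, and the averages are taken over {\em ab initio} molecular dynamics rather than by quadrature). With $k_{\rm B}T = 1$ and $U_\lambda = x^2/2 + \lambda g x^4$, the free-energy difference is $\Delta F = \int_0^1 \langle g x^4 \rangle_\lambda \, d\lambda$, which we check against the directly computed $-\ln(Z_1/Z_0)$:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (written out to avoid NumPy version differences)."""
    y = np.asarray(y, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

g = 0.3                            # quartic coupling of the toy model
x = np.linspace(-6.0, 6.0, 4001)   # configuration grid

def mean_dU(lmbda):
    """<dU/dlambda> = <g x^4> in the ensemble with U_lambda = x^2/2 + lambda g x^4."""
    w = np.exp(-(0.5 * x**2 + lmbda * g * x**4))
    return trapz(g * x**4 * w, x) / trapz(w, x)

# Thermodynamic integration: Delta F = int_0^1 <dU/dlambda>_lambda dlambda
lambdas = np.linspace(0.0, 1.0, 101)
dF_TI = trapz([mean_dU(l) for l in lambdas], lambdas)

# Direct check, possible here because the toy model is one-dimensional:
Z0 = trapz(np.exp(-0.5 * x**2), x)
Z1 = trapz(np.exp(-(0.5 * x**2 + g * x**4)), x)
dF_direct = -np.log(Z1 / Z0)

print(abs(dF_TI - dF_direct) < 1e-3)  # True
```

The quality of the reference system controls how smooth $\langle dU/d\lambda \rangle_\lambda$ is, and hence how much sampling the $\lambda$-integration requires; that is exactly why the optimised references mentioned above matter.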
The overall technical errors therefore amount to {\em ca.}~15~meV/atom, which is slightly larger than our target of 10~meV/atom. We stress that the precision just quoted does not take into account errors incurred in the particular implementation of DFT (PAW in the present work), for example the error associated with the chosen split between valence and core states. Such errors can in principle be systematically reduced, but we have not attempted this here. Nor does it account for the inaccuracy of the chosen $E_{\rm xc}$, or for the neglect of the temperature dependence of $E_{\rm xc}$. We shall attempt to assess errors of this type in our separate paper on the melting properties of Fe. The most direct way to test the reliability of our methods is comparison with shock data for $p(V)$ on the Hugoniot~\cite{brown86}, so it is gratifying to find close agreement over the pressure range of interest. The closeness of this agreement is inherently limited by the known inaccuracies of the GGA employed, and we have shown that the discrepancies are of the expected size. An important prediction of the calculations is the temperature $T(p)$ on the Hugoniot, since temperature is notoriously difficult to obtain in shock experiments. Our results support the reliability of the shock temperatures estimated by Brown and McQueen~\cite{brown86,not_measured}, and, in agreement with Wasserman {\em et al.}~\cite{wasserman96}, we find that the temperatures of Yoo {\em et al.}~\cite{yoo93} are too high by as much as 1000~K. This incidentally lends support to the reliability of the Brown and McQueen estimate of {\em ca.}~5500~K for the melting temperature of Fe at 243~GPa. The situation is not so satisfactory for the adiabatic bulk modulus $K_S$ on the Hugoniot, since our {\em ab initio} values seem to be {\em ca.}~8~\% above the shock values. But it should be remembered that even at ambient conditions {\em ab initio} and experimental bulk moduli frequently differ by this amount. 
The difficulties may be partly on the experimental side, since even for b.c.c. Fe at ambient conditions, experimental $K_S$ values span a range of 8~\%. Our calculations fully confirm the strong influence of electronic thermal excitations~\cite{boness90,wasserman96}. At the temperatures $T \sim 6000$~K of interest for the Earth's core, their contribution to the specific heat is almost as large as that due to lattice vibrations, in line with previous estimates. They also have a significant effect on the Gr\"{u}neisen parameter $\gamma$, which plays a key role in the thermodynamics of the core, and is poorly constrained by experiment. Our finding that $\gamma$ decreases with increasing pressure for $T < 4000$~K accords with an often-used rule of thumb~\cite{anderson95}, but electronic excitations completely change this behaviour at core temperatures $T \sim 6000$~K, where $\gamma$ has almost constant values of {\em ca.}~1.45, in accord with experimental estimates in the range 1.1 to 1.6~\cite{brown86,stacey95}. Comparison with the earlier tight-binding calculations of Wasserman {\em et al.}~\cite{wasserman96} both for $\gamma$ and for the quantity $\alpha K_T$ is rather disquieting. Although a full comparison is hindered by the fact that they report results only for the low-temperature region $T \le 3000$~K, we find two kinds of disagreement at low pressure. First, they find an increase of $\alpha K_T$ as $p \rightarrow 0$, whereas at low temperatures we find the opposite. Even more seriously, their strong increase of $\gamma$ as $p \rightarrow 0$ is completely absent in our results. Our calculations are more rigorous than theirs, since we completely avoid their statistical-mechanical approximations, as well as being fully self-consistent on the electronic-structure side. The suggestion must be that their approximations lead to significantly erroneous behaviour at low pressures. 
In pursuing this further, it would be very helpful to know what their methods predict in the high-temperature region relevant to the Earth's core. The present work forms part of a larger project on both pure Fe and its alloys with S, O, Si and H in the solid and liquid states. In a separate paper, we shall demonstrate that the thermodynamic integration technique employed here can also be used to obtain the fully {\em ab initio} free energy and other thermodynamic functions of liquid Fe over a wide range of states, with a precision equal to what has been achieved here for the solid. From the free energies of solid and liquid, we are then able to determine the {\em ab initio} melting curve and the entropy and volume of fusion as functions of pressure. In summary, we have presented extensive {\em ab initio} calculations of the free energy and a range of other thermodynamic properties of iron at high pressures and temperatures, in which all statistical-mechanical errors are fully under control, and a high (and quantified) precision has been achieved. We find close agreement with the most reliable shock data. {\em Ab initio} values are provided for important, but experimentally poorly determined quantities, such as the Gr\"{u}neisen parameter. The free energy results provide part of the basis for the {\em ab initio} determination of the high-pressure melting properties of iron, to be reported elsewhere. \section*{Acknowledgments} The work of DA is supported by NERC grant GST/02/1454 to G. D. Price and M. J. Gillan. We thank NERC and EPSRC for allocations of time on the Cray T3E machines at Edinburgh Parallel Computer Centre and Manchester CSAR service, these allocations being provided through the Minerals Physics Consortium (GST/02/1002) and the UK Car-Parrinello Consortium (GR/M01753). We gratefully acknowledge discussions with Prof. J.-P. Poirier and Dr. L. Vo\v{c}adlo.
\section{Introduction} In the case of complicated space-time topology, a promising approach to quantization of general field theories involves cutting the space-time manifold into simple pieces, where the problem is more easily solved, and then gluing back the individual elements to obtain the final answer. This method was proposed by Atiyah and successfully applied by Witten in \cite{WittenCS} to study the quantization of the Chern-Simons theory with Wilson lines (cf. also \cite{Froehlich-King}). Following this idea, a systematic program to understand quantization in the Batalin-Vilkovisky formalism for field theories with degeneracies on manifolds with boundaries has been initiated in \cite{CattaneoMnevReshetikhin}. As a part of the construction, a Batalin-Fradkin-Vilkovisky model is associated to the boundary of the space-time manifold. The canonical quantization of this boundary BFV model provides a space of boundary states, together with a cohomological invariance condition that defines the admissible quantum states of the theory among all boundary states. In the case of quantum field theories on manifolds without boundary, the partition function and other correlation functions are complex-valued. In the presence of a boundary, the correlators of the bulk theory take values in this boundary space of states. The aim of this paper is to apply the BV-BFV formalism of \cite{CattaneoMnevReshetikhin} to the Chern-Simons theory on manifolds with boundary, with Wilson lines ending on the boundary. The BV formulation of this theory on closed manifolds is well understood (and served as the motivating example for the AKSZ construction). We will also consider the one-dimensional Chern-Simons model, obtained when the AKSZ construction is carried out in one dimension. We include in our construction Wilson lines which may end on the boundary of the manifold. This requires some extra work in the BV-BFV formalism. 
Our treatment is based on the path integral representation for Wilson loops suggested in \cite{QuantSymplOrbits}, \cite{Diakonov-Petrov}. This is also an example of a more general construction of observables for AKSZ sigma models proposed in \cite{AKSZObs}. We compare our answers with those obtained using the geometric quantization for the boundary \cite{WittenCS} and using canonical quantization \cite{CSgenus0}, \cite{CSgenus1}. One of our main results is the boundary BFV action for Chern-Simons theory with Wilson lines, which has the form of an odd (degree 1) version of $BF$ action modified by source terms for the $B$ field at points where the Wilson lines meet the boundary. We also consider the toy model of one-dimensional Chern-Simons theory and derive the corresponding boundary action. Its quantization coincides with the Kostant cubic Dirac operator. We compare the BV-BFV results for the one-dimensional model with the ones obtained in \cite{1DCSAlekseevMnev} on segments, and also see how Wilson lines can be added to the one-dimensional model. In the three-dimensional case, the boundary space of states, arising as the cohomology of the quantized BFV action, coincides with the space of conformal blocks of the WZW model on the boundary (in the picture of \cite{CSgenus0}). We begin in section \ref{WilsonLinesBV} with the treatment of Wilson lines in the BV formalism. We also provide a short introduction to the relevant aspects of the BV formalism. In section \ref{3DCSWL} we describe the BV formulation of the Chern-Simons theory with Wilson lines, applying the AKSZ construction to this special setting. Then we proceed to explain how the bulk model gets supplemented with a boundary BFV theory if the underlying manifold has a boundary. We repeat this procedure in section \ref{1DCSWL} for the one-dimensional Chern-Simons model. The $\mathbb{Z}_2$-grading that replaces the usual $\mathbb{Z}$-grading in this case leads to certain subtleties with the master equation. 
In section \ref{BoundaryQuantumStates} we present the quantization of the boundary BFV models and describe the arising spaces of quantum states, that we compare with known results for the quantization of the involved models. \subsection*{Acknowledgements} Research of A.A. and Y.B. was supported in part by the grants number 140985 and 141329 of the Swiss National Science Foundation. P.M. acknowledges partial support by RFBR grant 11-01-00570-a and by SNF grant 200021\_137595. \section{Wilson lines in the BV formalism} \label{WilsonLinesBV} In this section, we start with a brief introduction to the BV formalism. The main point is to incorporate the Wilson line observables in this approach. \subsection{A short introduction to the BV formalism} We know that the path integral in quantum field theories is not well defined if the classical action $S_{cl}$ defined over the space of classical fields $\mathcal{F}_{cl}$ is degenerate, for instance due to gauge symmetries. The Batalin-Vilkovisky formalism provides a general method for the perturbative calculation of partition functions and correlators. In the BV formalism, the space of fields is augmented to a BV space of fields $\mathcal{F}_{BV}$, a graded infinite-dimensional manifold equipped with a symplectic structure $\Omega_{BV}$ of degree $-1$ called the BV structure. The grading (usually $\mathbb{Z}$, sometimes $\mathbb{Z}_2$) is commonly referred to as ``ghost number'', in relation with the Faddeev-Popov prescription. The BV bracket is defined as the Poisson bracket obtained by inverting the BV structure, \[ \left\lbrace F,G \right\rbrace = \Omega_{BV}^{-1}(\delta F,\delta G), \] and obviously has ghost number 1. Note that the variational operator $\delta$ can be interpreted as a de Rham differential in the space of fields. In many cases of interest, the BV space of fields is a cotangent bundle where the degree of the fibers is shifted to $-1$, which ensures its canonical symplectic form has the proper degree. 
Coordinates along the cotangent fibers are then called antifields. In the case of gauge theories, where the degeneracy arises under the action of a gauge group, the BV space of fields is simply the shifted cotangent bundle of the BRST space of fields which contains all classical fields as well as the ghosts parametrizing the gauge symmetries (basically the infinitesimal gauge parameters with a ghost number shifted by one), $\mathcal{F}_{BV}=T^\ast\left[-1\right]\mathcal{F}_{BRST}$. At the classical level, infinitesimal gauge transformations and the classical action can be used to construct a differential acting on the functionals on the BV space of fields. Geometrically, this differential corresponds to a cohomological vector field $Q$ on $\mathcal{F}_{BV}$. Moreover, $Q$ is a Hamiltonian vector field, \[ \imath_Q \Omega_{BV} = \delta S_{BV}, \] with the Hamiltonian function being the BV action $S_{BV}$ that reduces to the classical action when all antifields are set to zero. The condition $Q^2=0$ follows from the classical master equation $\left\lbrace S_{BV},S_{BV}\right\rbrace =0$, and determining the BV formulation of a given theory amounts to determining an extension of the classical action to the BV space of fields that satisfies this classical master equation. \subsection{The AKSZ construction} While it is usually difficult to find the BV formulation of a given field theory with a degenerate action, the study of the geometric interpretation of the classical master equation in \cite{AKSZ} led to an insightful procedure to construct solutions thereof, called the AKSZ construction after its authors. A more recent, formalized treatment can be found in \cite{AKSZCattaneoFelder}. 
In this construction, the target space of the theory is a graded manifold $Y$ equipped with a symplectic structure $\omega_Y$ of degree $n-1$ and a compatible cohomological vector field $Q_Y$ of degree 1, in the sense that it preserves the symplectic structure, $\mathcal{L}_{Q_Y}\omega_Y=0$. We also want $\omega_Y$ to be associated to a Liouville one-form $\alpha_Y$ (of degree $n-1$ as well), namely $\omega_Y=\delta\alpha_Y$. Here $\delta$ denotes the de Rham exterior derivative on $Y$, while we keep the usual $d$ for the one on the source manifold $N$ of the model. For $n\neq 0$ (see for instance \cite{Roytenberg} for details), $Q_Y$ can be shown to be Hamiltonian, i.e. there exists a function $\Theta_Y$ of degree $n$ on $Y$ such that $\imath_{Q_Y}\omega_Y=\delta\Theta_Y$. Like in the BV formalism, the nilpotency of $Q_Y$ follows from the condition $\left\lbrace \Theta_Y,\Theta_Y\right\rbrace_Y =0$, where the curly braces with a subscript $Y$ denote the Poisson bracket on $Y$ associated to its symplectic structure. The BV space of fields is then given by maps between the odd tangent bundle of some $n$-dimensional manifold $N$ and the graded manifold $Y$, \[ \mathcal{F}_{AKSZ}=\mathrm{Map}( T\left[1\right]N ,Y). \] The odd tangent bundle is naturally equipped with a cohomological vector field, the de Rham vector field, which can be expressed as \[ D=\theta^\mu \frac{\partial}{\partial x^\mu} \] in coordinates $x^\mu$ of the base manifold $N$ and $\theta^\mu$ of the odd fibers. Notice also that real-valued functions on $T\left[1\right]N$ can be interpreted as differential forms on $N$ (by expanding the function in powers of $\theta^\mu$), \[ C^\infty(T\left[1\right]N,\mathbb{R}) \simeq \Omega^\bullet(N), \] which allows one to define a canonical measure $\mu$ on $T\left[1\right]N$: the Berezinian integration along all odd fibers simply extracts the top-form out of this expansion and it remains to integrate it over the base $N$. 
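The identification $C^\infty(T\left[1\right]N,\mathbb{R}) \simeq \Omega^\bullet(N)$ and the Berezinian integration extracting the top component can be mimicked in a few lines of code. The following is a finite-dimensional toy sketch of our own (not tied to any library): an element of the Grassmann algebra on $n$ anticommuting generators $\theta_0,\dots,\theta_{n-1}$ is stored as a map from subsets of generator indices to coefficients, and the Berezin integral reads off the coefficient of $\theta_0\cdots\theta_{n-1}$.

```python
class Grassmann:
    """Element of the Grassmann algebra on generators theta_0..theta_{n-1},
    stored as {frozenset of generator indices: coefficient}."""

    def __init__(self, n, terms=None):
        self.n = n
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    def __mul__(self, other):
        out = {}
        for I, a in self.terms.items():
            for J, b in other.terms.items():
                if I & J:                 # theta_i * theta_i = 0
                    continue
                seq = sorted(I) + sorted(J)
                sign = 1                  # each adjacent swap of thetas gives -1
                for i in range(len(seq)):
                    for j in range(i + 1, len(seq)):
                        if seq[i] > seq[j]:
                            sign = -sign
                K = I | J
                out[K] = out.get(K, 0) + sign * a * b
        return Grassmann(self.n, out)

    def berezin(self):
        """Integral over all odd directions: the coefficient of the top
        monomial theta_0 theta_1 ... theta_{n-1} ('top form' component)."""
        return self.terms.get(frozenset(range(self.n)), 0)

# n = 2: the element 1 + 2 theta_0 + 3 theta_1 + 5 theta_0 theta_1 packages
# a 0-form, two 1-form components and a 2-form component together.
t0 = Grassmann(2, {frozenset({0}): 1})
t1 = Grassmann(2, {frozenset({1}): 1})
f = Grassmann(2, {frozenset(): 1, frozenset({0}): 2,
                  frozenset({1}): 3, frozenset({0, 1}): 5})

print(f.berezin())          # 5   (only the top component survives)
print((t1 * t0).berezin())  # -1  (anticommutativity: theta_1 theta_0 = -theta_0 theta_1)
```

Integrating the surviving top component over the base $N$ then completes the canonical measure $\mu$ described above.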
Roughly, the idea behind the AKSZ construction involves lifting the symplectic structure $\omega_Y$ from the target space to define the BV structure $\Omega_{\mathrm{AKSZ}}$ on the space of fields. In effect, we replace functions and differential forms on $Y$ by functionals on the space of fields with values in differential forms on $N$ and their variations (which also explains the choice of $\delta$ to denote the exterior derivative on $Y$), and we integrate over the source space $T\left[1\right]N$ using its canonical measure $\mu$, \begin{equation} \label{AKSZBVstr} \Omega_{\mathrm{AKSZ}} = \int_{T\left[1\right]N} \mu\, \tilde{\omega}_Y. \end{equation} The tilde denotes the extension from a function on $Y$ to a functional on the space of fields. The Berezinian integration along the fibers will lower the ghost number of $\omega_Y$ from $n-1$ to $-1$ as required. In a second stage we need to lift the cohomological vector field $Q_Y$ on the target-space $Y$ as well as the de Rham vector field $D$ on the source-space $T\left[1\right] N$ to the space of fields, and combine them to form the BV cohomological vector field $Q$ discussed above, which turns out to be Hamiltonian. Its generating functional is nothing but the BV-AKSZ action \begin{equation} \label{AKSZaction} S_{\mathrm{AKSZ}} = \int_{T\left[1\right]N} \mu\ \left( \imath_{Q_D}\tilde{\alpha}_Y + \tilde{\Theta}_Y \right), \end{equation} where $Q_D = \sum_i D\phi^i \frac{\delta}{\delta \phi^i}$ is the lift of the de Rham vector field ($\phi^i$ denotes generic coordinates on the space of fields). This AKSZ action automatically solves the classical master equation as a consequence of the integrability condition on $\Theta_Y$ and the fact that the integral of exact forms vanishes provided $\partial N=\emptyset$. 
As an example of the AKSZ construction, we derive here the BV formulation of the Chern-Simons theory, which corresponds to the special case $n=3$ with $Y=\mathfrak{g}\left[1\right]$, where $\mathfrak{g}$ is a Lie algebra equipped with an invariant scalar product. As required, the target space $\mathfrak{g}\left[1\right]$ supports a symplectic structure of degree 2 and a Hamiltonian cohomological vector field of degree 1 (sometimes called a Q-structure). If we denote with $\delta$ the exterior derivative on $\mathfrak{g}\left[1\right]$ and $\psi$ a generic element, the symplectic form, its Liouville potential and the Hamiltonian of the cohomological vector field respectively can be written as \begin{eqnarray} \omega_{\mathfrak{g}\left[1\right]} &=& -\frac{1}{2} \left( \delta\psi,\delta\psi \right), \\ \alpha_{\mathfrak{g}\left[1\right]} &=& -\frac{1}{2} \left( \psi,\delta\psi \right), \\ \Theta_{\mathfrak{g}\left[1\right]} &=& -\frac{1}{6} \left( \psi,\left[\psi,\psi\right] \right). \end{eqnarray} Note that Grassmannian variables $\psi$ anticommute, but so do differential forms of odd degree, which explains why $\omega_{\mathfrak{g}\left[1\right]}$ may be built out of a symmetric product. 
If we use coordinates $x^\mu,\mu=1,2,3$ on $N$ and corresponding Grassmannian coordinates $\theta^\mu$ on the odd fibres of $T\left[1\right] N$, we can decompose the fields $\mathbf{A}\in\mathrm{Map}( T\left[1\right] N ,\mathfrak{g}\left[1\right])$ into $\mathfrak{g}$-valued differential forms of various degrees and grading, \begin{equation} \label{decompo3D} \mathbf{A}=\gamma + A_\mu\theta^\mu + \frac{1}{2} A^+_{\mu\nu}\theta^\mu\theta^\nu + \frac{1}{6} \gamma^+_{\mu\nu\sigma}\theta^\mu\theta^\nu\theta^\sigma, \end{equation} specifically \begin{displaymath} \begin{array}{lcl} \gamma & \in & \mathrm{Map}(N,\mathfrak{g}[1]), \\ A & \in & \Gamma( T^\ast N \otimes \mathfrak{g}), \\ A^+ & \in & \Gamma( \bigwedge^2 T^\ast N \otimes \mathfrak{g}[-1]), \\ \gamma^+ & \in & \Gamma( \bigwedge^3 T^\ast N \otimes \mathfrak{g}[-2]). \end{array} \end{displaymath} These fields are endowed with two gradings, namely the ghost-grading (written in square brackets when non-zero) and the degree as a differential form. Their sum, the total degree, should amount to 1, since each $\theta^\mu$ has a ghost number 1, and all terms in the decomposition (\ref{decompo3D}) should have the same total ghost number of 1. As usual, the fields of ghost number 0 are the classical fields, here a $\mathfrak{g}$-valued connection $A$, and the fields of ghost number 1 are simply called ghosts. The other two fields are their antifields (which in the BV formalism means canonically conjugated), as is clear when one computes the BV structure, \begin{equation} \label{3D_BV_form} \begin{split} \Omega_{\mathrm{BV}}^{\mathrm{CS}} &= \int_{T\left[1\right] N}\mu \ \tilde{\omega}_{\mathfrak{g}\left[1\right]}= - \frac{1}{2}\int_{T\left[1\right] N}\mu \left( \delta\mathbf{A},\delta\mathbf{A} \right) \\ &= \int_N \left( \left( \delta \gamma^+,\delta \gamma \right) - \left( \delta A^+,\delta A \right) \right) = \int_N \left( -\left( \delta \gamma,\delta \gamma^+ \right) + \left( \delta A,\delta A^+ \right) \right). 
\end{split} \end{equation} Note that the commutation rules for the fields (which are simultaneously functions on $\mathcal{F}$ and differential forms on $N$) are determined by the total degree (the de Rham degree of the differential form plus the ghost number): two fields of odd total degree anti-commute, and commute if at least one field has even total degree. It remains to compute the BV action, which is straightforward in the AKSZ scheme, \begin{equation} \label{Scsbv} \begin{split} S_{\mathrm{BV}}^{\mathrm{CS}} &= \int_{T\left[1\right] N} \mu \left( \imath_{Q_D} \tilde{\alpha}_{\mathfrak{g}\left[1\right]} + \tilde{\Theta}_{\mathfrak{g}\left[1\right]} \right) \\ &= \int_{T\left[1\right] N} \mu \left( \frac{1}{2} \left( \mathbf{A},D\mathbf{A} \right) + \frac{1}{6} \left( \mathbf{A},\left[\mathbf{A},\mathbf{A}\right] \right) \right) \\ &= \int_N \left( \frac{1}{2} \left( A, dA \right) + \frac{1}{6} \left( A,\left[ A,A\right] \right) - \left( A^+ , d\gamma + \left[ A,\gamma\right] \right) + \left( \gamma^+,\frac{1}{2}\left[ \gamma,\gamma \right] \right) \right). \end{split} \end{equation} In the two terms involving only physical fields, we recognize the classical action of the Chern-Simons theory. The other terms complete the BV action, and by construction, it is clear that it satisfies the classical master equation. Nevertheless, we will show it explicitly, mainly to present an example of calculations in the space of fields, on which we will rely in the rest of this paper. 
\subsection{Calculations in the BV formalism} First of all, we need to find the BV bracket, simply by inverting the BV structure (\ref{3D_BV_form}), without forgetting that the product between two differential forms in the space of fields really means the exterior product, \begin{displaymath} \delta \phi^+ \delta \phi = \delta \phi^+ \wedge \delta \phi = \delta \phi^+ \otimes \delta \phi \pm \delta \phi \otimes \delta \phi^+, \end{displaymath} where the sign depends on the commutation rules between $\phi$ and $\phi^+$. We find the following expression for the BV bracket of two functionals $F_1$ and $F_2$, \begin{equation} \begin{split} \left\lbrace F_1,F_2\right\rbrace = & \int_N \left( \left( \frac{F_1 \overleftarrow{\delta}}{\delta \gamma}, \frac{\overrightarrow{\delta}F_2}{\delta \gamma^+}\right) - \left( \frac{F_1 \overleftarrow{\delta}}{\delta \gamma^+}, \frac{\overrightarrow{\delta}F_2}{\delta \gamma}\right) \right. \\ & \qquad \left. - \left( \frac{F_1 \overleftarrow{\delta}}{\delta A}, \frac{\overrightarrow{\delta}F_2}{\delta A^+}\right) + \left( \frac{F_1 \overleftarrow{\delta}}{\delta A^+}, \frac{\overrightarrow{\delta}F_2}{\delta A}\right) \right) \end{split} \end{equation} where the functional derivatives $\frac{\overrightarrow{\delta}}{\delta\phi}$ and $\frac{\overleftarrow{\delta}}{\delta\phi}$ for $\phi\in\left\lbrace A, A^+, \gamma, \gamma^+, g^+\right\rbrace$ are the duals of the differentials $\delta\phi$ in the space of fields (which can be interpreted as variations of fields in the framework of variational calculus). We need to distinguish between right- and left-derivatives because of the commutation rules, which depend on ghost numbers and degrees of differential forms on $N$. 
All the fields of the Chern-Simons model are $\mathfrak{g}$-valued, so taking the functional derivative of a real-valued functional $F$ on $\mathcal{F}$ by one of these fields should produce a $\mathfrak{g}^\ast$-valued result, but we can use the non-degenerate scalar product $\left(\cdot,\cdot\right)$ to identify $\mathfrak{g}$ with its dual. If $F$ is constructed as an integral, like an action, the left- and right-derivatives by a field $\phi\in\left\lbrace A, A^+, \gamma, \gamma^+, g^+\right\rbrace$ can be defined as the components of the exterior derivative with respect to the local frame induced by these coordinate-fields of $\mathcal{F}$, \begin{displaymath} \delta F(\phi_1,\dots,\phi_n) = \int_{N} \sum_{j=1}^n \left(\delta\phi_j,\frac{\overrightarrow{\delta}F}{\delta \phi_j}\right) = \int_{N} \sum_{j=1}^n \left(\frac{F\overleftarrow{\delta}}{\delta \phi_j},\delta\phi_j \right). \end{displaymath} If on the other hand $F$ is a local functional, we may still express it as an integral by inserting a Dirac distribution that localises it at the relevant point; this distribution is then carried along into the functional derivative. At a later stage, we will need to consider Lie group-valued fields of the form $g\in\mathrm{Map}(N,G)$. The problem with such a field is that its variation does not take values in $\mathfrak{g}$, but rather in the tangent space at $g$ of the Lie group $G$. Natural coupling with the other $\mathfrak{g}$-valued fields via the invariant scalar product involves the right multiplication by $g^{-1}$ to bring it back to the Lie algebra, explicitly $\delta g\, g^{-1}$. The dual derivative $\frac{\delta}{\delta g}$ assumes its value in $T^\ast_{g^{-1}} G$, which is isomorphic to $T_{g^{-1}} G$ thanks to the invariant non-degenerate scalar product, and we need to apply this time left-multiplication by $g$ to get back to the Lie algebra. 
If the functional $F$ depends also on $g$, we find for the derivative \begin{displaymath} \delta F(\phi_1,\dots,\phi_n, g) = \int_{N} \sum_{j=1}^n \left(\delta\phi_j,\frac{\overrightarrow{\delta}F}{\delta \phi_j}\right) +\left( \delta g\, g^{-1}, g \frac{\overrightarrow{\delta}F}{\delta g}\right). \end{displaymath} To compute $\left\lbrace S_{\mathrm{BV}}^{\mathrm{CS}},S_{\mathrm{BV}}^{\mathrm{CS}} \right\rbrace$, we need the derivatives of the Chern-Simons BV action. We find \begin{equation} \begin{split} \delta S_{\mathrm{BV}}^{\mathrm{CS}} = \int_N & \left( \left( \delta A, dA + \frac{1}{2} \left[A,A\right] + \left[A^+,\gamma\right] \right) \right. \\ & + \left( \delta \gamma, -d_A A^+ - \left[\gamma^+,\gamma\right] \right) \\ & + \left. \left( \delta A^+, -d_A\gamma \right) + \left( \delta\gamma^+, \frac{1}{2} \left[\gamma,\gamma\right]\right) \right), \end{split} \end{equation} where we introduced the covariant derivative $d_A=d + \left[A,\cdot\right]$. Note that we need to integrate by parts to find the contribution of the exterior derivatives, such as \[ \int_N \left( A, \delta dA\right) = \int_N \left( A, d\delta A\right) = - \int_{\partial N} \left(A,\delta A\right) + \int_N \left(dA,\delta A\right), \] where the boundary term vanishes for a closed manifold $N$. When we consider source spaces with boundaries, these terms will no longer vanish, and they will contribute to a one-form in the boundary space of fields. 
For now we can check the classical master equation, \begin{displaymath} \begin{split} \frac{1}{2}\left\lbrace S_{\mathrm{BV}}^{\mathrm{CS}},S_{\mathrm{BV}}^{\mathrm{CS}}\right\rbrace &= \int_N \left( \left( \frac{S_{\mathrm{BV}}^{\mathrm{CS}} \overleftarrow{\delta}}{\delta \gamma}, \frac{\overrightarrow{\delta}S_{\mathrm{BV}}^{\mathrm{CS}} }{\delta \gamma^+}\right) - \left( \frac{S_{\mathrm{BV}}^{\mathrm{CS}} \overleftarrow{\delta}}{\delta A}, \frac{\overrightarrow{\delta}S_{\mathrm{BV}}^{\mathrm{CS}}}{\delta A^+}\right) \right) \\ &= \int_N \left( \left( d_A A^+ + \left[\gamma^+,\gamma\right], \frac{1}{2} \left[\gamma,\gamma\right]\right) \right. \\ & \qquad \quad \left. - \left( dA + \frac{1}{2} \left[A,A\right] + \left[A^+,\gamma\right] , -d_A\gamma \right)\right) \\ &=0. \end{split} \end{displaymath} In the last step we make repeated use of the invariance of the scalar product, the Jacobi identity for $\mathfrak{g}$, and Stokes' theorem. We are now fully prepared to describe Wilson lines in the BV formalism. \subsection{Wilson Lines} In gauge theories, a degeneracy arises from the local action of a Lie group on the space of fields, the gauge group. In what follows, we will denote by $G$ the gauge group and by $\mathfrak{g}$ its associated Lie algebra. The so-called gauge field is a connection $A$ in a principal $G$-bundle over some manifold $N$. The gauge symmetry is parametrized in the BV (and BRST) formalism by a ghost field $\gamma\in\mathrm{Map}(N,\mathfrak{g}\left[1\right])$. The BV variation of these two fields depends only on their behavior under gauge transformations and not on the specifics of the underlying ambient theory.
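The cancellations in this master-equation check rest on two algebraic facts: the ad-invariance of the scalar product and the Jacobi identity. As a sanity check, both can be verified numerically for a concrete quadratic Lie algebra, here $\mathfrak{su}(2)$ with the trace form (an illustration only, not part of the construction):

```python
import numpy as np

# su(2) as anti-Hermitian traceless 2x2 matrices; bracket = commutator,
# invariant scalar product (X,Y) = -2 tr(XY). Conventions are ours.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sig]

def br(X, Y): return X @ Y - Y @ X
def ip(X, Y): return (-2 * np.trace(X @ Y)).real

rng = np.random.default_rng(1)
def rand_elt():
    return sum(c * t for c, t in zip(rng.normal(size=3), T))

X, Y, Z = rand_elt(), rand_elt(), rand_elt()
# Jacobi identity
assert np.allclose(br(br(X, Y), Z) + br(br(Y, Z), X) + br(br(Z, X), Y), 0)
# ad-invariance of the pairing: ([X,Y],Z) = (X,[Y,Z])
assert abs(ip(br(X, Y), Z) - ip(X, br(Y, Z))) < 1e-12
```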
We assume that the dynamics and the gauge structure of this ambient theory are encoded in the BV action $S^{\mathrm{amb}}$ and the corresponding BV structure $\Omega^{\mathrm{amb}}$ (both defined as integrals over $N$), and of course that $S^{\mathrm{amb}}$ solves the ambient classical master equation $\left\lbrace S^{\mathrm{amb}},S^{\mathrm{amb}}\right\rbrace_{\mathrm{amb}}=0$. The part of this ambient BV bracket relevant for our further investigation involves the gauge connection and the ghost field; it is given by the BV variation of these fields, namely \begin{equation} \label{gaugetransfo} \begin{array}{rcccl} \left\lbrace S^{\mathrm{amb}}, A \right\rbrace &=& Q\,A &=& d_A\gamma, \\ \left\lbrace S^{\mathrm{amb}}, \gamma \right\rbrace &=& Q\, \gamma &=& \frac{1}{2}\left[\gamma,\gamma\right]. \end{array} \end{equation} Natural non-local observables to consider in gauge theories are Wilson loops, traces of the holonomy of the connection $A$ along a curve $\Gamma$ embedded in $N$, taken in given representations of the Lie algebra, \begin{displaymath} W_{\Gamma,R}\left[A\right] = \mathrm{Tr}_R\mathrm{Pexp}\left(\int_{\Gamma}A\right), \end{displaymath} where $\mathrm{P}$ stands for path-ordering and $R$ labels the representation of $\mathfrak{g}$. This cumbersome path-ordering can be removed at the price of integrating over all inequivalent gauge transformations along the loop \cite{QuantSymplOrbits}, \begin{equation} W_{\Gamma,R}\left[A\right] = \int \mathcal{D}g \, \mathrm{exp}\left(\int_\Gamma\langle T_0, g^{-1}Ag + g^{-1}dg\rangle\right). \end{equation} The dual algebra element $T_0\in\mathfrak{g}^\ast$ encodes the representation $R$, along the lines of the orbit method \cite{OrbitMethod} that links unitary irreducible representations of Lie groups and their coadjoint orbits.
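The gauge invariance of the Wilson loop, which underlies the whole construction, can be illustrated with a small lattice-style computation: the discretized holonomy transforms by conjugation under a periodic gauge transformation, so its trace is unchanged. A minimal sketch (the discretization choices are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(c):
    # anti-Hermitian traceless 2x2 matrix from three real coefficients
    return -0.5j * sum(ci * s for ci, s in zip(c, sig))

def mexp(M, terms=30):
    # plain Taylor-series matrix exponential, adequate for these small matrices
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

N = 50                                            # lattice sites along the loop
A = [su2(rng.normal(size=3)) for _ in range(N)]   # discretized connection
U = [mexp(a * (2*np.pi/N)) for a in A]            # link holonomies
g = [mexp(su2(rng.normal(size=3))) for _ in range(N)]
g.append(g[0])                                    # periodic gauge transformation

def holonomy(links):
    H = np.eye(2, dtype=complex)
    for u in links:            # path ordering: later links multiply on the left
        H = u @ H
    return H

W = np.trace(holonomy(U))
# lattice gauge transformation of the links: U_i -> g_{i+1} U_i g_i^{-1};
# the full holonomy is conjugated by g_0, so its trace is unchanged
Ug = [g[i+1] @ U[i] @ np.linalg.inv(g[i]) for i in range(N)]
assert np.isclose(np.trace(holonomy(Ug)), W)
```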
This expression for the Wilson loop can be absorbed into an extended action by adding the auxiliary term \begin{equation} \label{wilsonlineterm} S_{\mathrm{Wilson}} = \int_\Gamma \langle \mathrm{Ad}^\ast_g (T_0), A + dg\,g^{-1}\rangle \end{equation} to the ambient action $S^{\mathrm{amb}}$ of the model under consideration. In this last step we replaced the adjoint action on the second factor of the product by the coadjoint action on the first factor, to emphasize the role of the coadjoint orbit $\mathcal{O}$ of $T_0$. Now we would like to find a BV formulation of this contribution, so as to obtain a BV action of the full model with Wilson loops. The partition function of the model whose action is extended by such an auxiliary Wilson-line term corresponds precisely to the expectation value of this Wilson line in the pure theory, \begin{displaymath} Z_{S^{\mathrm{amb}} + S^{\mathrm{aux}}} = \langle W_{\Gamma,R} \rangle_{S^{\mathrm{amb}}}. \end{displaymath} We note that the coadjoint orbit $\mathcal{O}$ supports the Kirillov symplectic structure $\omega_{\mathcal{O}}$, of ghost number 0, and that the curve $\Gamma$ carrying the Wilson line has dimension 1; these are the first two main ingredients of the AKSZ construction for $n=1$. It is thus tempting to apply the prescription proposed in \cite{AKSZObs} to construct observables within the AKSZ formalism. Nonetheless, \cite{AKSZObs} treats exclusively the case of an ambient theory of the AKSZ type, whereas we want to consider gauge theories, with the sole requirement that their space of fields contains a gauge connection and an associated ghost field obeying the relations (\ref{gaugetransfo}). The obvious solution is to study a gauge theory of the AKSZ type, a condition fulfilled by the Chern-Simons model, the main subject of this paper. The BV formulation of the Wilson line contribution will turn out to remain valid for other gauge theories.
Following \cite{AKSZObs}, the auxiliary fields are thus maps from the odd tangent bundle of the curve $\Gamma$ to the coadjoint orbit, \begin{displaymath} \mathcal{F}^{\mathrm{aux}} = \mathrm{Map}(T\left[1\right]\Gamma,\mathcal{O}). \end{displaymath} This auxiliary space of fields needs to be equipped with its own BV structure, $\Omega^{\mathrm{aux}}$, which, once added to the BV structure $\Omega^{\mathrm{amb}}$ of the ambient theory, will provide the BV structure $\Omega = \Omega^{\mathrm{amb}} + \Omega^{\mathrm{aux}}$ of the full model with space of fields $\mathcal{F} = \mathcal{F}^{\mathrm{amb}} \oplus \mathcal{F}^{\mathrm{aux}}$. Then it will be possible to add to the ambient action $S^{\mathrm{amb}}$ an auxiliary term $S^{\mathrm{aux}}$ that obeys certain constraints to obtain a solution of the master equation of the full model. The definition of $\Omega^{\mathrm{aux}}$ is similar to that of the AKSZ-BV structure (\ref{AKSZBVstr}); we just need to change the source space and the symplectic structure of the target, \begin{displaymath} \Omega^{\mathrm{aux}} = \int_{T\left[1\right] \Gamma} \mu_\Gamma \, \tilde{\omega}_{\mathcal{O}}. \end{displaymath} Here $\mu_\Gamma$ is the canonical measure on $T\left[1\right]\Gamma$. Unfortunately, the Kirillov symplectic form on the coadjoint orbit is in general not exact, so it is in general not possible to find a Liouville one-form, which we would normally use to construct the kinetic term of the auxiliary action. However, in the case of integral orbits, we may pick a line bundle (the pre-quantum line bundle in the language of geometric quantization) and a connection $\alpha_{\mathcal{O}}$ thereon with curvature $\omega_{\mathcal{O}}$, \[ \delta\alpha_{\mathcal{O}} = \omega_{\mathcal{O}} \] (we recall that in the target spaces of AKSZ theories, we denote by $\delta$ the exterior derivative), and we can simply use this connection to construct the kinetic term of the auxiliary action.
This formulation is not very practical for carrying out calculations. To find expressions easier to deal with, we apply the defining property of the Kirillov symplectic form, namely that its pullback by the projection map \begin{equation} \label{projectioncoadorbit} \pi: G \rightarrow \mathcal{O}\simeq G/\mathrm{Stab}(T_0) \end{equation} is an explicit presymplectic form $\omega_G$ on $G$, \begin{equation} \label{defKirillov} \pi^\ast (\omega_{\mathcal{O}}) = \omega_G = - \langle \mathrm{Ad}^\ast_g (T_0),\frac{1}{2}\left[\delta g\,g^{-1},\delta g\,g^{-1}\right]\rangle. \end{equation} This two-form is the contraction of $T_0$ with the exterior derivative of the Maurer-Cartan one-form on $G$. It thus admits a potential \begin{equation} \alpha_G = -\langle \mathrm{Ad}^\ast_g (T_0), \delta g\,g^{-1}\rangle. \end{equation} As it happens, the pullback by the projection map $\pi$ brings the connection $\alpha_{\mathcal{O}}$ over to the one-form $\alpha_G$, \[ \pi^\ast (\alpha_\mathcal{O}) = \alpha_G, \] which we will use in place of the more cumbersome connection to compute certain quantities. Since $\omega_G$ is degenerate, it is not possible to construct a Poisson bracket out of it, unless we restrict attention to invariant functions on $G$, such as the ones obtained by pullback of functions on the coadjoint orbit $\mathcal{O}$ by the projection map $\pi$. It remains to define the interaction term. The idea of \cite{AKSZObs} is to construct a function $\Theta_{\mathcal{O}}$ on $\mathfrak{g}\left[1\right] \times \mathcal{O}$ that will generate, together with $\Theta_{\mathfrak{g}\left[1\right]}$, a Hamiltonian cohomological vector field on $\mathfrak{g}\left[1\right] \times \mathcal{O}$.
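The degeneracy of $\omega_G$ noted above can be made concrete: at the identity of $G$, $\omega_G$ reduces, up to normalization, to the bilinear form $(X,Y)\mapsto -(T_0,[X,Y])$ on $\mathfrak{g}$, whose kernel is the Lie algebra of the stabilizer of $T_0$. A numerical illustration for $\mathfrak{su}(2)$ with $T_0$ along the third axis (conventions are ours):

```python
import numpy as np

# su(2) with (X,Y) = -2 tr(XY); illustration only, not part of the construction
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sig]

def br(X, Y): return X @ Y - Y @ X
def ip(X, Y): return (-2 * np.trace(X @ Y)).real

T0 = T[2]
# matrix of the two-form (X,Y) -> -(T0,[X,Y]) in the basis T_a
M = np.array([[-ip(T0, br(Ta, Tb)) for Tb in T] for Ta in T])

assert np.linalg.matrix_rank(M) == 2               # omega_G is degenerate
assert np.allclose(M @ np.array([0., 0., 1.]), 0)  # kernel = stabilizer of T0
```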
While $\Theta_{\mathfrak{g}\left[1\right]}$ already satisfies an integrability condition on its own and generates a cohomological vector field $\mathcal{Q}_{\mathfrak{g}\left[1\right]}$, the integrability condition for $\Theta_{\mathcal{O}}$ needs to be slightly adapted to account for the mixed term, namely \begin{displaymath} \mathcal{Q}_{\mathfrak{g}\left[1\right]} \Theta_{\mathcal{O}} + \frac{1}{2}\left\lbrace\Theta_{\mathcal{O}},\Theta_{\mathcal{O}}\right\rbrace_{\mathcal{O}} = 0. \end{displaymath} As it happens, the function \begin{equation} \Theta_{\mathcal{O}} = \langle \mathrm{Ad}^\ast_g (T_0), \psi\rangle \end{equation} satisfies this requirement and naturally extends the term $\langle\mathrm{Ad}^\ast_g (T_0), A \rangle$ that already appeared in the classical part (\ref{wilsonlineterm}). This integrability condition is most easily checked by pulling it back by $\pi$ to a function on $\mathfrak{g}\left[1\right] \times G$, where the Poisson bracket $\left\lbrace\cdot,\cdot\right\rbrace_{\mathcal{O}}$ becomes $\left\lbrace \cdot,\cdot\right\rbrace_G$, which can be explicitly determined by inverting $\omega_G$. We now have all the ingredients to construct the auxiliary BV action, which we just need to combine into a formula similar to the usual AKSZ action (\ref{AKSZaction}), \begin{displaymath} S^{\mathrm{aux}} = \int_{T\left[1\right]\Gamma} \mu_\Gamma \ \left( \imath_{Q_D}\tilde{\alpha}_{\mathcal{O}} + \tilde{\Theta}_{\mathcal{O}} \right). \end{displaymath} By construction, if $S^{\mathrm{amb}}$ is the AKSZ action of the Chern-Simons model, the total action \[ S=S^{\mathrm{amb}}+S^{\mathrm{aux}} \] automatically satisfies the classical master equation generated by the total BV structure \[ \Omega = \Omega^{\mathrm{amb}} + \Omega^{\mathrm{aux}}. \] We claim that this remains true when $S^{\mathrm{amb}}$ is the BV action of a generic gauge theory with gauge group $G$.
To verify this assertion, we need to compute \begin{equation} \label{threetermsCME} \frac{1}{2}\left\lbrace S,S\right\rbrace = \frac{1}{2}\left\lbrace S^{\mathrm{amb}},S^{\mathrm{amb}}\right\rbrace + \left\lbrace S^{\mathrm{amb}},S^{\mathrm{aux}}\right\rbrace + \frac{1}{2}\left\lbrace S^{\mathrm{aux}},S^{\mathrm{aux}}\right\rbrace. \end{equation} The first two terms involve only the ambient BV structure, since $S^{\mathrm{amb}}$ does not depend on the auxiliary fields. The first one vanishes due to the master equation of the BV ambient model. To compute the second term, we should know the exact dependence of the auxiliary term $S^{\mathrm{aux}}$ on the ambient fields $A$ and $\gamma$, and to compute the last one, we need an expression of the auxiliary BV structure $\Omega^{\mathrm{aux}}$ that we know how to invert. These two issues can be addressed by using the projection map (\ref{projectioncoadorbit}) to define an extended space of fields, \begin{displaymath} \hat{\mathcal{F}}_G^{\mathrm{aux}} = \pi^\ast (\mathcal{F}^{\mathrm{aux}}) = \left\lbrace (g,g^+) \vert g\in \mathrm{Map}(\Gamma,G), g^+\in \Omega^1(\Gamma)\otimes g^\ast(T\mathcal{O})\left[-1\right] \right\rbrace. \end{displaymath} The subscript $G$ emphasizes the fact that the coadjoint orbit is replaced by the whole group. This projection map, now seen as a map between spaces of fields, \begin{displaymath} \pi : \hat{\mathcal{F}}_G^{\mathrm{aux}} \rightarrow \mathcal{F}^{\mathrm{aux}}, \end{displaymath} acts on the group-valued component $g$ by sending it to its image $\mathrm{Ad}^\ast_g(T_0)$ in the coadjoint orbit of $T_0$. It can be used to pull back differential forms on the auxiliary space of fields (such as the auxiliary BV structure, a two-form, or the auxiliary BV action, a zero-form) to this extended space of fields, where it is easier to compute BV brackets of $G$-invariant functionals given the explicit formulas for the pullbacks of the auxiliary BV structure and of the auxiliary action. 
In $\hat{\mathcal{F}}_G^{\mathrm{aux}}$, both fields $g$ and $g^+$ can be combined into a superfield of total degree 0 that we can use to express the pullback of the auxiliary BV structure and action, \begin{displaymath} \mathbf{H}(x,\theta) = \mathrm{Ad}_{g(x)}^\ast (T_0) - \theta g^+(x). \end{displaymath} Here $x$ is a coordinate on $\Gamma$ and $\theta$ a Grassmann coordinate on the odd fibers of $T\left[1\right]\Gamma$, and $g^+(x)$ is the component of the one-form $g^+$ expressed in this coordinate system, $g^+ = g^+(x)dx$. We now have all the tools to compute the pullback of the auxiliary BV structure, \begin{equation} \label{auxBVstr} \begin{split} \hat\Omega^{\mathrm{aux}}_G &= \pi^\ast(\Omega^{\mathrm{aux}}) = \int_{T\left[1\right] \Gamma} \mu_\Gamma \ \pi^\ast(\tilde{\omega}_{\mathcal{O}}) = \int_{T\left[1\right] \Gamma} \mu_\Gamma \ \tilde{\omega}_G \\ &= -\int_{T\left[1\right] \Gamma} \mu_\Gamma \, \delta\langle\mathbf{H},\delta g\,g^{-1}\rangle = \int_{\Gamma}\delta\langle g^+,\delta g\,g^{-1}\rangle \\ &= \int_{\Gamma}\langle\delta g^+,\delta g\,g^{-1}\rangle + \langle g^+, \frac{1}{2}\left[\delta g\,g^{-1},\delta g\,g^{-1}\right]\rangle, \end{split} \end{equation} and of the auxiliary BV action, \begin{equation} \label{Saux} \begin{split} \hat S^{\mathrm{aux}}_G &= \pi^\ast(S^{\mathrm{aux}}) = \int_{T\left[1\right]\Gamma} \mu_\Gamma \ \left( \imath_{Q_D}\tilde{\alpha}_G + \pi^\ast(\tilde{\Theta}_{\mathcal{O}}) \right) \\ &= \int_{T\left[1\right]\Gamma} \mu_\Gamma \ \left( \langle \mathbf{H}, Dg\,g^{-1} \rangle + \langle \mathbf{H},\mathbf{A}\rangle \right) \\ &= \int_{\Gamma} \left( \langle\mathrm{Ad}^\ast_g(T_0), A + dg\,g^{-1} \rangle - \langle g^+,\gamma\rangle\right). \end{split} \end{equation} The additional term $\langle g^+,\gamma\rangle$ encodes the action of gauge transformations of the ambient model on the auxiliary classical field $\mathrm{Ad}^\ast_g(T_0)$.
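The reduction from integrals over $T\left[1\right]\Gamma$ to ordinary integrals over $\Gamma$ used in these two computations is just Berezin integration: the measure $\mu_\Gamma$ picks out the $\theta$-component of a superfield. A toy implementation (purely illustrative; Koszul signs are ignored, which suffices for the even combinations shown):

```python
import numpy as np

x = np.linspace(0.0, 2*np.pi, 2001)   # Gamma = S^1, discretized
dx = np.diff(x)

class Superfield:
    """f(x,theta) = a(x) + theta*b(x), with theta the odd fiber coordinate."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # theta^2 = 0 truncates the product to first order in theta
        return Superfield(self.a * other.a,
                          self.a * other.b + self.b * other.a)
    def integrate(self):
        # int_{T[1]Gamma} mu f = int_Gamma b(x) dx  (Berezin integration)
        return float((self.b[:-1] * dx).sum())

f = Superfield(np.cos(x), np.sin(x)**2)
one = Superfield(np.ones_like(x), np.zeros_like(x))
assert np.isclose((f * one).integrate(), np.pi)   # int_0^{2pi} sin^2(x) dx = pi
```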
We stress that $\hat\Omega^{\mathrm{aux}}_G$ is only a pre-BV structure on $ \hat{\mathcal{F}}_G^{\mathrm{aux}}$: it is closed but degenerate. Nevertheless, like its finite-dimensional counterpart $\omega_G$, it can be used to define a BV bracket on invariant functionals, such as the ones obtained by pullback from $\mathcal{F}^{\mathrm{aux}}$, for instance $\hat S^{\mathrm{aux}}_G$. We can now turn our attention to the pullback of the last two terms in (\ref{threetermsCME}). To compute the second one, we notice that $\left\lbrace S^{\mathrm{amb}},\cdot\right\rbrace$ acts as a differential (namely $Q$) on the fields of the ambient model, of which only $A$ and $\gamma$ appear in the auxiliary action (\ref{Saux}), so that we may simply use the relations (\ref{gaugetransfo}) to find \begin{equation} \begin{split} \left\lbrace S^{\mathrm{amb}},\hat S^{\mathrm{aux}}_G\right\rbrace &= \int_{\Gamma} \left( \langle \mathrm{Ad}^\ast_g(T_0), Q(A) \rangle - \langle g^+,Q(\gamma)\rangle\right) \\ &= \int_\Gamma \left( \langle \mathrm{Ad}^\ast_g(T_0), d_A\gamma \rangle -\langle g^+, \frac{1}{2}\left[ \gamma,\gamma\right] \rangle \right). \end{split} \end{equation} Next, the bracket in the third term contains only contributions from the auxiliary structure, and we may compute \begin{equation} \begin{split} \frac{1}{2}\left\lbrace \hat S^{\mathrm{aux}}_G,\hat S^{\mathrm{aux}}_G\right\rbrace &= \int_\Gamma \left( \langle g\frac{ \hat S^{\mathrm{aux}}_G \overleftarrow{\delta} }{\delta g}, \frac{\overrightarrow{\delta}\hat S^{\mathrm{aux}}_G}{\delta g^+}\rangle + \langle g^+, \frac{1}{2}\left[ \frac{ \hat S^{\mathrm{aux}}_G \overleftarrow{\delta} }{\delta g^+},\frac{\overrightarrow{\delta}\hat S^{\mathrm{aux}}_G}{\delta g^+}\right] \rangle \right) \\ &= \int_\Gamma \left( \langle (d+\mathrm{ad}_A^\ast)\mathrm{Ad}^\ast_{g^{-1}}T_0, \gamma\rangle + \langle g^+, \frac{1}{2}\left[ \gamma,\gamma\right] \rangle \right).
\end{split} \end{equation} The first line displays the BV bracket of invariant functionals on $\hat{\mathcal{F}}_G^{\mathrm{aux}}$ constructed out of the pre-BV structure $\hat{\Omega}^{\mathrm{aux}}_G$. The sum of these two terms yields the integral of an exact term that vanishes since $\Gamma$ is closed (for now), and the pullback of the classical master equation is satisfied, \begin{displaymath} \pi^\ast \left( \frac{1}{2}\left\lbrace S^{\mathrm{amb}} + S^{\mathrm{aux}}, S^{\mathrm{amb}} + S^{\mathrm{aux}}\right\rbrace \right) = 0. \end{displaymath} Furthermore, since the left-hand side of this last equality is $G$-invariant, it behaves nicely enough under the projection $\pi$ so that $ S^{\mathrm{amb}} + S^{\mathrm{aux}}$ still solves the classical master equation at the level of $\mathcal{F} = \mathcal{F}^{\mathrm{amb}} \oplus \mathrm{Map}(T\left[1\right]\Gamma,\mathcal{O})$, also for generic gauge theories. \subsection{Quadratic Lie algebras} In many cases of interest, the Lie algebra $\mathfrak{g}$ is equipped with a non-degenerate invariant scalar product $(\cdot,\cdot)$, which we can use to define an isomorphism $\beta:\mathfrak{g}^\ast\rightarrow\mathfrak{g}$ that we can apply to $T_0$, $H$ and $g^+$. The relation $\beta(\mathrm{Ad}^\ast_g(T_0))=\mathrm{Ad}_g \, ( \beta(T_0))$ will be very useful; in particular, we can replace the canonical pairing between $\mathfrak{g}$ and $\mathfrak{g}^\ast$ with the scalar product, \begin{displaymath} \langle \mathrm{Ad}^\ast_g(T_0), \cdot \rangle = \left( \mathrm{Ad}_g(\beta(T_0)), \cdot \right), \end{displaymath} and thus identify coadjoint orbits with adjoint orbits. In the rest of this article, we will assume that $\mathfrak{g}$ admits such a non-degenerate scalar product. We will make use of it to write down all actions and BV structures, and we will simply consider $T_0$, $H$ and $g^+$ to be elements of $\mathfrak{g}$ instead of its dual $\mathfrak{g}^\ast$ (we drop the $\beta$ for simplicity).
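The identification of coadjoint with adjoint orbits through the metric can be sanity-checked numerically: the Ad-invariance of the scalar product means the orbit of $T_0$ keeps a constant ``radius''. For $\mathfrak{su}(2)$ with the trace form (conventions are ours, illustration only):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(c): return -0.5j * sum(ci * s for ci, s in zip(c, sig))
def ip(X, Y): return (-2 * np.trace(X @ Y)).real

def mexp(M, terms=30):
    # Taylor-series matrix exponential, adequate for these small matrices
    out = np.eye(2, dtype=complex)
    t = np.eye(2, dtype=complex)
    for k in range(1, terms):
        t = t @ M / k
        out = out + t
    return out

rng = np.random.default_rng(3)
h = mexp(su2(rng.normal(size=3)))          # a group element
Ad = lambda h, X: h @ X @ np.linalg.inv(h)

T0 = su2(np.array([0., 0., 1.]))
X, Y = su2(rng.normal(size=3)), su2(rng.normal(size=3))
# Ad-invariance of the scalar product...
assert np.isclose(ip(Ad(h, X), Ad(h, Y)), ip(X, Y))
# ...so the orbit of T0 has constant norm: adjoint and coadjoint orbits
# can be identified through the metric
assert np.isclose(ip(Ad(h, T0), Ad(h, T0)), ip(T0, T0))
```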
Coadjoint orbits will therefore be identified with adjoint orbits. To summarize the main results of this section with this new convention, we can re-write the auxiliary BV structure (\ref{auxBVstr}) as \begin{equation} \hat{\Omega}_G^{\mathrm{aux}} = \int_{\Gamma}(\delta g^+,\delta g\,g^{-1}) + ( g^+, \frac{1}{2}\left[\delta g\,g^{-1},\delta g\,g^{-1}\right]) \end{equation} and the auxiliary BV action (\ref{Saux}) as \begin{equation} \label{Sauxnew} \hat{S}_G^{\mathrm{aux}} = \int_{\Gamma} \left( ( \mathrm{Ad}_g(T_0), A + dg\,g^{-1} ) - ( g^+,\gamma)\right). \end{equation} \section{3D Chern-Simons Theory with a Wilson line} \label{3DCSWL} Our main goal in this paper is to study the behaviour of Chern-Simons models on a manifold $N$ with boundaries and Wilson lines ending on these boundaries, both in three and in one dimension, in the BV formalism. The presence of a boundary requires either a careful choice of boundary conditions for the fields, so as to keep the classical master equation under control, or the application of the recently developed BV-BFV formalism for gauge theories with boundaries \cite{CattaneoMnevReshetikhin}, which has the great advantage of allowing one to glue pieces together along their boundaries. In order to obtain the BV-BFV formulation of the three-dimensional Chern-Simons model with a boundary supporting some Wilson lines, we first need to determine the BV theory of the bulk. In fact, we already know all its ingredients. We will treat the Wilson lines as an auxiliary part of the action, as described in section \ref{WilsonLinesBV}, added to the ambient Chern-Simons action (\ref{Scsbv}), \[ S^{\mathrm{amb}} = S_{BV}^{CS}.
\] To include Wilson lines in our model, say $n$ of them, we need to extend the ambient space of fields \begin{displaymath} \mathcal{F}^{\mathrm{amb}} = \mathrm{Map}(T\left[1\right] N,\mathfrak{g}\left[1\right]) \end{displaymath} carrying the ambient BV structure $\Omega^{\mathrm{amb}} = \Omega^{CS}_{BV}$ with an auxiliary part \begin{displaymath} \mathcal{F}^{\mathrm{aux}} = \bigoplus_{k=1}^n \mathrm{Map}(T\left[1\right]\Gamma_k,\mathcal{O}_k) \end{displaymath} made of $n$ components, one for each Wilson line labeled by $k$. We recall that we denote by $\mathcal{O}_k$ the (co)adjoint orbit of a Lie algebra element $T_{0,k}$ encoding the representation in which the $k$-th Wilson line is computed, and by $\Gamma_k$ the curve embedded in $N$ supporting this Wilson line. The BV structure of this auxiliary space of fields is the sum of $n$ copies of the auxiliary BV structure of a single Wilson line, \begin{displaymath} \Omega^{\mathrm{aux}} = \sum_{k=1}^n \int_{T\left[1\right] \Gamma_k} \mu_{\Gamma_k} \, \tilde{\omega}_{\mathcal{O}_k}. \end{displaymath} The auxiliary BV action is similarly constructed as a sum, \begin{displaymath} S^{\mathrm{aux}} = \sum_{k=1}^n \int_{T\left[1\right] \Gamma_k} \mu_{\Gamma_k} \ \left( \imath_{Q_D}\tilde{\alpha}_{\mathcal{O}_k} + \tilde{\Theta}_{\mathcal{O}_k} \right). \end{displaymath} From now on, unless specified otherwise, a superscript ``amb'' will always denote a quantity associated to the BV formulation of the bare Chern-Simons model, be it a BV structure, a BV action or a BV space of fields, ``aux'' will always denote a quantity associated to the BV formulation of the auxiliary contribution of $n$ Wilson lines, and no superscript will mean a BV quantity of the full model, namely $\mathcal{F}=\mathcal{F}^{\mathrm{amb}}\oplus\mathcal{F}^{\mathrm{aux}}$, $\Omega=\Omega^{\mathrm{amb}} + \Omega^{\mathrm{aux}}$ and $S=S^{\mathrm{amb}} + S^{\mathrm{aux}}$.
Before we consider the case of a source manifold with boundary, we need to compute the Hamiltonian vector field $Q$ generated by $S$, i.e. satisfying \begin{equation} \label{iQOmega} \imath_Q \Omega = \delta S. \end{equation} Once the BV structure $\Omega$ is inverted to form the BV bracket $\left\lbrace\cdot,\cdot\right\rbrace$, this is equivalent to \[ Q = \left\lbrace S,\cdot\right\rbrace. \] This relation is linear in $Q$ and $S$, namely $\imath_{Q^{\mathrm{amb}}} \Omega = \delta S^{\mathrm{amb}}$ and $\imath_{Q^{\mathrm{aux}}} \Omega = \delta S^{\mathrm{aux}}$. We will again use the projection maps $\pi_k: G \rightarrow \mathcal{O}_k$ to pull back the auxiliary action and the auxiliary BV structure to the space of fields $\bigoplus_{k=1}^n \hat{\mathcal{F}}_G^{\mathrm{aux}}$ where the calculations are easier. Note that we need one copy of $\hat{\mathcal{F}}_G^{\mathrm{aux}}$ for each Wilson line. The results can then be easily brought over to the actual auxiliary space of fields by the $n$ projections $\pi_k$. 
For the ambient Hamiltonian vector field we obtain \begin{equation} \begin{split} Q^{\mathrm{amb}} = \left\lbrace S^{\mathrm{amb}}, \cdot \right\rbrace = & \left( \left( d_A A^+ + \left[ \gamma^+,\gamma\right]\right), \frac{\overrightarrow{\delta}}{\delta\gamma^+} \right) - \frac{1}{2} \left( \left[ \gamma, \gamma\right], \frac{\overrightarrow{\delta}}{\delta\gamma} \right) \\ & - \left( \left( dA + \frac{1}{2}\left[ A,A\right] + \left[ A^+,\gamma \right]\right), \frac{\overrightarrow{\delta}}{\delta A^+} \right) + \left( d_A \gamma, \frac{\overrightarrow{\delta}}{\delta A} \right), \end{split} \end{equation} and for its auxiliary counterpart \begin{equation} \begin{split} \hat{Q}_G^{\mathrm{aux}} = \left\lbrace \hat{S}_G^{\mathrm{aux}}, \cdot \right\rbrace = & - \sum_k \left( \mathrm{Ad}_g \,(T_{0,k})\delta(\Gamma_k), \frac{\overrightarrow{\delta}}{\delta A^+} \right) - \left( d_A(\mathrm{Ad}_g \,(T_{0,k})), \frac{\overrightarrow{\delta}}{\delta g^+} \right) \\ & - \left( \gamma, g \frac{\overrightarrow{\delta}}{\delta g} \right) - \sum_k \left( g^+ \delta(\Gamma_k),\frac{\overrightarrow{\delta}}{\delta \gamma^+} \right). \end{split} \end{equation} Here $\delta(\Gamma_k)$ denotes a Dirac distribution two-form centred on $\Gamma_k$ that filters the curve out of the whole manifold $N$. The fields $g$ and $g^+$ that appear in front of these Dirac two-forms are defined only on the Wilson lines. The functional derivatives appearing right after them act on functionals in the bulk, but their results are zero- and one-forms that make sense on the curves $\Gamma_k$. Since the ambient and the total actions solve classical master equations, we know that $Q^{\mathrm{amb}}$ and $Q=Q^{\mathrm{amb}}+Q^{\mathrm{aux}}$ are cohomological, whereas $Q^{\mathrm{aux}}$ alone is not.
If the source space has a boundary, on which open Wilson lines may end along one or more of the curves $\Gamma_k$ (for simplicity, we will assume all of them do), these cohomological vector fields cease to be Hamiltonian, due to boundary effects affecting the variation of the differentiated terms in the action. The integration by parts required to compute the contribution of these terms to the variation of the action now contains a surface integral. The BV-BFV formalism is based on the observation that this correction can be seen as a one-form on the boundary space of fields $\mathcal{F}_\partial$. This boundary space of fields contains the restriction of the fields of the bulk BV theory to the boundary of the source manifold, $\partial N$ in the case of our ambient theory, $\bigcup_{k=1}^n \partial \Gamma_k$ for the auxiliary model describing the Wilson lines. We denote by \[ \pi_\partial: \mathcal{F} \rightarrow \mathcal{F}_\partial \] the projection corresponding to this restriction. The correction to the Hamiltonian condition (\ref{iQOmega}) can be expressed as \begin{equation} \label{iQOmegaplusalpha} \delta S = \imath_Q\Omega + \pi_\partial^\ast (\alpha_\partial). \end{equation} The exterior derivative of this one-form, \[ \Omega_\partial = \delta\alpha_\partial, \] happens to be symplectic and is called the BFV structure. It is a two-form of ghost number 0 on the boundary space of fields. The corrected Hamiltonian condition (\ref{iQOmegaplusalpha}) is linear in $\alpha_\partial$, too, so we may decompose the boundary BFV structure \[ \Omega_\partial = \Omega_\partial^{\mathrm{amb}} + \Omega_\partial^{\mathrm{aux}} \] and compute it in two parts.
The ambient one corresponds to the Chern-Simons model, \begin{equation} \label{2DBFVstrCS} \Omega_\partial^{\mathrm{amb}} = \int_{\partial N} \left( \frac{1}{2}\left( \delta A, \delta A \right) + \left( \delta \gamma, \delta A^+\right) \right), \end{equation} and could actually be derived in a two-dimensional adaptation of the AKSZ construction. To compute the auxiliary part, we make use of the usual trick of doing the calculations in the augmented space of fields. The result \begin{equation} \hat{\Omega}_{\partial,G}^{\mathrm{aux}} = \sum_k \int_{\partial\Gamma_k} \left( \mathrm{Ad}_g(T_{0,k}), \frac{1}{2}\left[ \delta g \, g^{-1}, \delta g \, g^{-1} \right] \right) \end{equation} is the sum of $2n$ copies of the symplectic form $\omega_G$ of the target space of the augmented space of fields $\hat{\mathcal{F}}^{\mathrm{aux}}_{\partial,G}$, one carried by each extremity of every Wilson line. Using the relation (\ref{defKirillov}), we immediately find the BFV structure on the actual auxiliary boundary space of fields $\mathcal{F}^{\mathrm{aux}}_\partial = \bigoplus_{k=1}^n \mathrm{Map}(\partial \Gamma_k, \mathcal{O}_k)$, \begin{equation} \label{auxBFVstr} \Omega_\partial^{\mathrm{aux}} = \sum_k \int_{\partial\Gamma_k} \tilde{\omega}_{\mathcal{O}_k}, \end{equation} where $\tilde{\omega}_{\mathcal{O}_k}$ is the Kirillov symplectic form on the $k$-th (co)adjoint orbit $\mathcal{O}_k$ lifted to the space of fields. The curve $\Gamma_k$ being one-dimensional, we have $\partial \Gamma_k = \left\lbrace z_k, z'_k\right\rbrace \subset \partial N$, and what the last integral really means is $\int_{\partial\Gamma_k} \tilde{\omega}_{\mathcal{O}_k} = \tilde{\omega}_{\mathcal{O}_k}(z_k) - \tilde{\omega}_{\mathcal{O}_k}(z'_k)$.
In the last step of the construction of the boundary BFV model, we know that the restriction of $Q$ to the boundary surface $\partial N$ is Hamiltonian with respect to the BFV structure, and the boundary BFV action is defined as its generating functional, \begin{displaymath} \imath_{Q_\partial}\Omega_\partial = \delta S_\partial. \end{displaymath} Ghost number counting shows that the BFV action has ghost number $1$. In the case of the Chern-Simons model with Wilson lines, we calculate the restriction of $\hat{Q}_G = Q^{\mathrm{amb}} + \hat{Q}^{\mathrm{aux}}_G$ in the extended space of fields, \begin{equation} \begin{split} \hat{Q}_{\partial,G} = & -\left( \frac{1}{2}\left[ \gamma, \gamma\right], \frac{\overrightarrow{\delta}}{\delta\gamma} \right) - \left( \left( dA + \frac{1}{2}\left[ A,A\right] + \left[ A^+,\gamma \right]\right), \frac{\overrightarrow{\delta}}{\delta A^+} \right)\\ & + \left( d_A \gamma, \frac{\overrightarrow{\delta}}{\delta A} \right) - \sum_k \left( \mathrm{Ad}_g(T_{0,k})\delta(\Gamma_k),\frac{\overrightarrow{\delta}}{\delta A^+} \right) - \left( \gamma, g \frac{\overrightarrow{\delta}}{\delta g} \right), \end{split} \end{equation} which leads to the two contributions \begin{equation} S^{\mathrm{amb}}_{\partial} = - \int_{\partial N} \left( \left( dA + \frac{1}{2}\left[ A,A\right] ,\gamma \right) + \left( A^+, \frac{1}{2}\left[ \gamma,\gamma\right] \right) \right) \end{equation} and \begin{equation} \hat{S}^{\mathrm{aux}}_{\partial,G} = - \sum_k \int_{\partial \Gamma_k} \left( \mathrm{Ad}_g (T_{0,k}),\gamma\right) \end{equation} to the boundary BFV action. As expected, the $G$-valued field $g$ appears only in a (co)adjoint action, so the projection to the auxiliary space of fields is straightforward, and we obtain the BFV action \begin{equation} \begin{split} \label{BFVaction} S_\partial =& - \int_{\partial N} \left( \left( dA + \frac{1}{2}\left[ A,A\right] ,\gamma \right) + \left( A^+, \frac{1}{2}\left[ \gamma,\gamma\right] \right) \right. 
\\ & \quad +\left. \sum_k \left( \mathrm{Ad}_g (T_{0,k}),\gamma \right) (\delta^{(2)}(z_k) - \delta^{(2)}(z'_k)) \right). \end{split} \end{equation} In the last line, we have cast everything into the integral over $\partial N$ by making use of Dirac distributions centered on the extremities of the Wilson lines. We recognize in the boundary BFV action of the Chern-Simons model with Wilson lines an odd version of the two-dimensional $BF$ model with sources, where the role of the $B$ field is taken over by the restriction to the boundary of the ghost field $\gamma$ of the bulk theory. We conclude this section with a short remark regarding the insertions (labeled by $z_k$ and $z'_k$) of the boundary model. In our setting, with Wilson lines ending on the boundary, these insertions always come in pairs of points carrying the same representation, one insertion at each end of a Wilson line. If we consider Wilson graphs in the bulk model, which are a natural generalization of Wilson lines, we can obtain any configuration of points and representations as insertions. Wilson graphs are observables modeled after Wilson lines, but based on oriented graphs instead of curves. Each edge carries a representation of $\mathfrak{g}$ and contributes a term similar to that of a Wilson line to the total action, while each vertex carries an intertwining operator between the representations of the attached edges. While the formulation of these intertwining operators is straightforward in the operator formalism, their description is more involved in the path-integral formalism and goes beyond the scope of this paper, where we will for simplicity consider only Wilson loops and open Wilson lines. \section{1D Chern-Simons Theory with a Wilson line} \label{1DCSWL} The AKSZ construction for the Chern-Simons model can also be carried out in one dimension \cite{1DCSAlekseevMnev}.
In this section, we will see how to add a Wilson line to this model by following the same procedure as in the previous section. The main difference comes from the fact that the Wilson line is now a space-filling observable, and that the BV bracket of the auxiliary term with itself will pick up terms from the ambient part of the BV structure. Moreover, as stated before, we will now use a $\mathbb{Z}_2$-grading, since a $\mathbb{Z}$-grading is not possible in one dimension, so instead of denoting the ghost number in square brackets, we will use the parity-reversing operator $\Pi$. Given a one-dimensional manifold $\Gamma$ (in general a disjoint union of circles and open segments), the space of fields is \begin{equation} \mathcal{F}^{\mathrm{amb}}=\mathrm{Map}( \Pi T\Gamma ,\Pi\mathfrak{g}), \end{equation} where $\mathfrak{g}$ is again assumed to be equipped with an invariant scalar product $\left(\cdot,\cdot\right)$. The target space $\Pi\mathfrak{g}$ supports the same geometric structures as before, which we may again transpose to the space of fields. If $x$ is a coordinate on $\Gamma$ and $\theta$ a Grassmann coordinate on the odd fibers of $\Pi T\Gamma$, we can decompose the fields $\Psi\in\mathrm{Map}( \Pi T\Gamma ,\Pi\mathfrak{g})$ into a $\mathfrak{g}$-valued fermion $\psi$ and a $\mathfrak{g}$-valued one-form $A=A(x)dx$, \begin{equation} \Psi = \psi + \theta A(x). \end{equation} We repeat the same procedure to find the BV structure \begin{equation} \label{1D_BV_form} \Omega^{\mathrm{amb}}= - \int_{\Pi \Gamma}\mu \left( \delta\Psi,\delta\Psi \right) = \int_{\Gamma} \left( \delta\psi,\delta A \right) \end{equation} and the BV action \begin{equation} \label{1D_action} S^{\mathrm{amb}} = \int_{\Pi \Gamma}\mu \left( \frac{1}{2}\left( \Psi, D\Psi\right) + \frac{1}{6}\left(\Psi,\left[ \Psi,\Psi\right]\right)\right) = \int_\Gamma \frac{1}{2} \left( \psi,d_A\psi\right).
\end{equation} The $\mathfrak{g}$-valued one-form $A$ can be interpreted as a connection for some principal $G$-bundle over $\Gamma$, where $G$ is a Lie group integrating $\mathfrak{g}$. The odd $\mathfrak{g}$-valued scalar $\psi$ serves simultaneously as a ghost for the gauge symmetry and an antifield for $A$. In order to add Wilson lines to this model, we need to extend the space of fields, the BV structure and the action with precisely the same auxiliary structure $\Omega^{\mathrm{aux}}$ and action $S^{\mathrm{aux}}$ as in the three-dimensional case, except that they are now supported directly by the base manifold of the ambient source space $\Gamma$. We consider a single Wilson line for simplicity; it is easy to add similar terms for additional lines. Furthermore, we assume it covers the whole source space $\Gamma$; in general, it could involve only some of the connected components of $\Gamma$, and the remaining ones would support a bare (in the sense that there are no Wilson lines) one-dimensional Chern-Simons model. Notice also that instead of $\gamma$ we write $\psi$ to emphasize the fact that it plays simultaneously the role of $\gamma$ and $A^+$ of the previous model. We emphasize once more that, since the Wilson line is a space-filling observable, we need to check that the classical master equation is solved, a result which is not guaranteed by the AKSZ construction due to the term $\left\lbrace S^{\mathrm{aux}},S^{\mathrm{aux}}\right\rbrace_{\mathrm{ambient}}$ coming from the auxiliary part of the action and the ambient part of the BV structure.
If we use again the projection map $\pi:G\rightarrow\mathcal{O}$ to pull differential forms from the auxiliary space of fields $\mathcal{F}^{\mathrm{aux}}$ to the extended one $\hat{\mathcal{F}}^{\mathrm{aux}}_G$, we can calculate \begin{equation} \begin{split} \frac{1}{2}\left\lbrace \hat{S}_G,\hat{S}_G \right\rbrace &= \int_\Gamma \left( \left( \frac{\hat{S}_G\overleftarrow{\delta}}{\delta A}, \frac{\overrightarrow{\delta}\hat{S}_G}{\delta \psi} \right) + \left( g\frac{\hat{S}_G\overleftarrow{\delta}}{\delta g}, \frac{\overrightarrow{\delta}\hat{S}_G}{\delta g^+} \right) \right.\\ & \qquad \quad \left. -\frac{1}{2} \left( \frac{\hat{S}_G\overleftarrow{\delta}}{\delta g^+},\left[g^+, \frac{\overrightarrow{\delta}\hat{S}_G}{\delta g^+}\right] \right) \right) \\ &= \int_\Gamma \left( -\frac{1}{2}\left[\psi,\psi\right] + \mathrm{Ad}_g T_0, d_A\psi + g^+\right) + \left( -d_A(\mathrm{Ad}_g T_0),-\psi \right) \\ & \qquad \quad - \frac{1}{2} \left( \psi, \left[ g^+,-\psi\right] \right) \\ &= \int_\Gamma \left( \mathrm{Ad}_g T_0,g^+ \right) \\ &= - \frac{1}{2}\int_{\Pi T\Gamma} \mu \left( \mathbf{H},\mathbf{H} \right) \\ &= - \frac{1}{2}\int_{\Pi T\Gamma} \mu \left( \frac{S^{\mathrm{aux}}\overleftarrow{\delta}}{\delta\Psi},\frac{\overrightarrow{\delta}S^{\mathrm{aux}}}{\delta\Psi} \right), \end{split} \end{equation} where the last line exhibits explicitly the potentially dangerous term $\left\lbrace S^{\mathrm{aux}},S^{\mathrm{aux}}\right\rbrace_{\mathrm{ambient}}$. Nevertheless, this term vanishes, since $g^+$ takes values in the tangent space $T_H\mathcal{O}$ at $H=\mathrm{Ad}_g T_0$ to the adjoint orbit, which is easily seen to be orthogonal to $H$ with respect to the invariant scalar product on $\mathfrak{g}$.
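The orthogonality invoked here is immediate from the invariance of the scalar product: for any $\xi\in\mathfrak{g}$, invariance gives $([\xi,H],H)=-(H,[\xi,H])$, so $(H,[\xi,H])=0$ by symmetry. As a minimal numerical illustration (assuming $\mathfrak{g}=\mathfrak{su}(2)$, with the basis normalisation $t_a=-i\sigma_a/2$ and the product $(X,Y)=-2\,\mathrm{Re}\,\mathrm{tr}(XY)$ chosen here purely for the sketch):

```python
import numpy as np

# Pauli matrices and an orthonormal basis t_a = -i sigma_a/2 of su(2),
# with invariant scalar product (X, Y) = -2 Re tr(XY).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [-0.5j * s for s in sigma]

def inner(X, Y):
    return float(np.real(-2.0 * np.trace(X @ Y)))

rng = np.random.default_rng(0)

def random_su2():
    # exp(i alpha n.sigma) = cos(alpha) I + i sin(alpha) n.sigma
    alpha = rng.uniform(0, np.pi)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    ns = sum(ni * si for ni, si in zip(n, sigma))
    return np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * ns

T0 = t[2]  # reference point of the orbit
for _ in range(5):
    g = random_su2()
    H = g @ T0 @ np.linalg.inv(g)            # H = Ad_g T0
    xi = sum(rng.normal() * ta for ta in t)  # random element of su(2)
    tangent = xi @ H - H @ xi                # [xi, H] spans T_H O
    assert abs(inner(H, tangent)) < 1e-12
```

By cyclicity of the trace the pairing $(H,[\xi,H])$ vanishes identically, so the assertion holds up to round-off for every random choice of $g$ and $\xi$.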
Again, before we turn to the case of a source space with a boundary, we need to compute the Hamiltonian cohomological vector field $Q$ generated by $S$, or more accurately its counterpart in the extended space of fields, namely \begin{equation} \begin{split} \hat{Q}_G &= \left( \left( d_A\psi + g^+\right),\frac{\delta}{\delta A} \right) + \left( \left(-\frac{1}{2}\left[\psi,\psi\right] + \mathrm{Ad}_g(T_0)\right),\frac{\delta}{\delta\psi} \right) \\ & \quad - \left( \psi , g\frac{\delta}{\delta g} \right) - \left( \left(\left[\psi,g^+\right] + d_A(\mathrm{Ad}_g(T_0))\right),\frac{\delta}{\delta g^+}\right). \end{split} \end{equation} If the source space has a boundary, in other words if some of its components are segments, we can repeat the procedure to construct the BFV boundary model. We first calculate the image of the symplectic potential of the boundary BFV structure in the extended space of fields from the variation of the BV action, \begin{equation} \hat{\alpha}_{\partial,G} = \int_{\partial \Gamma} \left( \frac{1}{2} \left( \psi,\delta\psi \right) + \left( T_0, g^{-1}\delta g \right) \right), \end{equation} and the corresponding pre-BFV structure, \begin{equation} \hat{\Omega}_{\partial,G} = \int_{\partial \Gamma} \left( \frac{1}{2} \left( \delta\psi,\delta\psi \right)- \frac{1}{2} \left( \mathrm{Ad}_g(T_0), \left[\delta g\,g^{-1},\delta g\,g^{-1}\right] \right) \right) . \end{equation} We see that the second term is the pullback by the projection map $\pi:G\rightarrow\mathcal{O}$ of the Kirillov-Kostant-Souriau symplectic structure, and we obtain as a BFV structure in the proper boundary space of fields \begin{equation} \Omega_\partial = \int_{\partial \Gamma} \left( \frac{1}{2} \left( \delta\psi,\delta\psi \right) + \tilde{\omega}_{\mathcal{O}} \right).
\end{equation} In all these expressions, the integral over $\partial\Gamma$ is nothing but a sum over the boundary points, with each term carrying a sign given by the orientation of its segment. The restriction of the cohomological vector field $\hat{Q}_G$ to the boundary, \begin{equation} \hat{Q}_{\partial,G} = \left( \left(-\frac{1}{2}\left[\psi,\psi\right] + \mathrm{Ad}_g (T_0)\right),\frac{\delta}{\delta\psi} \right) - \left( \psi , g\frac{\delta}{\delta g} \right) , \end{equation} is Hamiltonian with respect to the BFV structure, and it is generated by the BFV action of the boundary model, \begin{equation} \label{0dimBFV} S_\partial = \int_{\partial \Gamma} \left( -\frac{1}{6} \left( \psi,\left[\psi,\psi\right] \right) + \left( \mathrm{Ad}_g (T_0),\psi \right) \right) . \end{equation} Finally, we show that $S_\partial$ solves the master equation of the BFV model, \begin{equation} \left\lbrace S_\partial , S_\partial\right\rbrace = Q_\partial S_\partial = \int_{\partial\Gamma}\left( T_0,T_0 \right) = 0. \end{equation} As stated before, the last integral is really a sum over the boundary elements of $\partial\Gamma$ with a sign given by their orientation, and since these boundary points come in pairs with opposite signs, one at each end of every segment, the overall sum vanishes. \section{Boundary Quantum States} \label{BoundaryQuantumStates} Upon quantization, the partition function and correlators of a field theory defined on a manifold $N$ without boundary are complex numbers. In the presence of a boundary, one should rather expect quantum states, elements of a Hilbert space associated to each component of the boundary $\partial N$, according to the Atiyah-Segal picture of quantum field theory. The disjoint union of boundary components corresponds to the tensor product of the associated Hilbert spaces. Then gluing together a pair of components of $\partial N$ corresponds to taking the scalar product of the two corresponding factors of the tensor product.
For instance, if $N=\left[ 0,1\right]$ is an interval, the partition function of the BV-BFV model should take values in some Hilbert space of the form $\mathcal{H}\otimes\mathcal{H}$, with one factor for each component of the boundary $\partial N =\left\lbrace 0,1\right\rbrace$, and upon gluing the two ends, contracting this tensor product using the scalar product on $\mathcal{H}$ should yield the BV partition function of the same model constructed on the circle $S^1$. If the bulk theory is studied in the BV formalism, the boundary information is encoded in the associated BFV model, at least at the classical level, as we saw in the particular cases of the Chern-Simons theory in one and three dimensions, possibly with Wilson lines. To pass to the quantum level, we first observe that a BFV boundary model can be canonically quantized. The BFV structure, a symplectic structure of ghost number 0 in the space of fields, is used to define the (anti)commutation rules for the quantized fields. These act on the Hilbert space $\mathcal{H}^{\mathrm{BFV}}_\partial$ associated to the boundary, in which the partition function of the bulk takes values. This Hilbert space inherits a grading from the ghost number of the classical fields. In this picture, the BFV action $S_\partial$, which was the generator of the cohomological vector field on the boundary $Q_\partial$, can be quantized by replacing the classical fields with their quantized counterparts, and we obtain the quantized BFV charge $\hat{S}_\partial$. Its action on the boundary space of states squares to zero, and it roughly encodes the gauge transformations. At the classical level, a physical observable is a functional annihilated by the cohomological vector field $Q$ generated by the BV action in the bulk and the BFV action $S_\partial$ on the boundary, and two observables are gauge equivalent if they differ by a $Q$-exact term.
At the quantum level, the role of $Q$ is taken over by the BFV charge $\hat{S}_\partial$: a gauge invariant boundary state should be annihilated by the BFV charge, and two states are gauge-equivalent if they differ by a BFV-exact term. Moreover, we require physical states to depend only on physical quantum fields, and not on the ghosts or the antifields. In other words, the space of boundary quantum states should correspond to the BFV-cohomology at ghost number zero $H^0_{\hat{S}_\partial}(\mathcal{H}^{\mathrm{BFV}}_\partial)$. The relation with the quantized bulk theory is that the partition function (and all the other correlators) should obviously be gauge invariant and therefore belong to this cohomology $H^0_{\hat{S}_\partial}(\mathcal{H}^{\mathrm{BFV}}_\partial)$. Its determination thus becomes a subject of interest. We will start with the one-dimensional Chern-Simons theory, a simpler model where all calculations can be carried out to the end, before we study the more interesting three-dimensional model. \subsection{1D Chern-Simons Theory} The zero-dimensional boundary model of the one-dimensional Chern-Simons theory contains $\mathfrak{g}$-valued fermions and bosonic fields $H=\mathrm{Ad}_g (T_0)$ which take values in the (co)adjoint orbit $\mathcal{O}$. Once quantized, the fermions form a Clifford algebra $Cl(\mathfrak{g})$. If $(t_a)_{a=1}^{\mathrm{dim}\mathfrak{g}}$ is an orthonormal basis of $\mathfrak{g}$ with structure constants $f_{abc}$, we obtain the anticommutation rules \begin{equation} \left[\hat\psi_a,\hat\psi_b\right]= \hbar\delta_{ab} \end{equation} for the quantized fermions. For the bosonic content of the model, the Kirillov symplectic form on the (co)adjoint orbits is the inverse of the restriction from $\mathfrak{g}^\ast\simeq\mathfrak{g}$ to $\mathcal{O}$ of the Kirillov-Kostant-Souriau Poisson structure, so that the commutator of two $\mathcal{O}$-valued quantized fields is simply given by their Lie bracket.
If we use the basis $(t_a)$ of the Lie algebra to write \begin{displaymath} H = \mathrm{Ad}_g(T_0) = X_a \, t_a, \end{displaymath} we can express the commutation rules with the structure constants of the Lie algebra, \begin{equation} \label{bosonicquantumoperators} \left[\hat X_a,\hat X_b\right]=\hbar f_{abc} \hat X_c. \end{equation} The corresponding sector of the algebra of quantum operators is a representation of the enveloping algebra $\mathcal{U}(\mathfrak{g})$ of the Lie algebra $\mathfrak{g}$, namely $\rho_R(\mathcal{U}(\mathfrak{g}))\subset \mathrm{End}(V_R)$. This representation is simply the representation $R$ in which we computed the Wilson loops in the previous sections. We can use these operators $\hat\psi$ and $\hat X$ to construct the expectation value of the Wilson line $\langle W_{\Gamma, R}\rangle$ in the operator formalism, as in \cite{1DCSAlekseevMnev}, where the partition function for the one-dimensional Chern-Simons model is derived in both the path-integral and the operator formalism. If the curve $\Gamma$ is open, this expectation value maps the space of fields to the boundary space of quantum states, which is the cohomology at ghost number 0 of the quantum BFV charge, \begin{displaymath} \langle W_{I, R}\rangle \in H^0_{\hat{S}_\partial}(\mathcal{H}_\partial). \end{displaymath} We need to find this cohomology. The BFV charge \begin{equation} \hat{S}_\partial = \int_{\partial \Gamma} \hat{X}_a\hat{\psi}_a - \frac{1}{6} f_{abc}\hat{\psi}_a\hat{\psi}_b\hat{\psi}_c \end{equation} carries one copy of the cubic Dirac operator \cite{AlekseevMeinrenken} \begin{displaymath} \mathfrak{D} = \hat{X}_a\hat{\psi}_a - \frac{1}{6}f_{abc}\hat{\psi}_a\hat{\psi}_b\hat{\psi}_c \end{displaymath} at each boundary point of $\Gamma$.
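The (anti)commutation rules and the cubic Dirac operator can be probed in a minimal numerical example. The sketch below assumes $\mathfrak{g}=\mathfrak{su}(2)$ with $f_{abc}=\epsilon_{abc}$, the spin-$\frac{1}{2}$ representation and $\hbar=1$ (all conventions chosen here purely for illustration); it checks the algebra and that $\mathfrak{D}^2$ acts as a scalar, as expected for a central element acting on an irreducible module:

```python
import numpy as np

# Pauli matrices; orthonormal basis of su(2) with f_abc = epsilon_abc, hbar = 1.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Bosonic sector: X_a = -i sigma_a/2 (spin-1/2 rep), so [X_a, X_b] = eps_abc X_c.
X = [-0.5j * s for s in sigma]
# Fermionic sector: psi_a = sigma_a/sqrt(2) on the irreducible Clifford module
# C^2, so psi_a psi_b + psi_b psi_a = delta_ab.
psi = [s / np.sqrt(2) for s in sigma]

for a in range(3):
    for b in range(3):
        comm = X[a] @ X[b] - X[b] @ X[a]
        assert np.allclose(comm, sum(eps[a, b, c] * X[c] for c in range(3)))
        anti = psi[a] @ psi[b] + psi[b] @ psi[a]
        assert np.allclose(anti, (a == b) * np.eye(2))

# Cubic Dirac operator D = X_a psi_a - (1/6) eps_abc psi_a psi_b psi_c
# acting on V ⊗ S = C^2 ⊗ C^2.
I2 = np.eye(2)
D = sum(np.kron(X[a], psi[a]) for a in range(3))
D = D - (1 / 6) * sum(eps[a, b, c] * np.kron(I2, psi[a] @ psi[b] @ psi[c])
                      for a in range(3) for b in range(3) for c in range(3))

# D^2 is scalar on the irreducible module: (1/2) X_a X_a gives -3/8 here
# and -(1/48) f_abc f_abc gives -1/8, so D^2 = -1/2.
assert np.allclose(D @ D, -0.5 * np.eye(4))
```

In this toy model $\mathfrak{D}^2=-\frac{1}{2}$, the sum of $\frac{1}{2}\hat{X}_a\hat{X}_a=-\frac{3}{8}$ on the spin-$\frac{1}{2}$ module and the constant $-\frac{1}{48}f_{abc}f_{abc}=-\frac{1}{8}$.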
This operator squares to \begin{displaymath} \mathfrak{D}^2 = \frac{1}{2}\left[\mathfrak{D},\mathfrak{D}\right] = \frac{1}{2} \hat{X}_a\hat{X}_a - \frac{1}{48}f_{abc}f_{abc}, \end{displaymath} a central element in the quantum Weil algebra $\mathcal{U}(\mathfrak{g})\otimes Cl(\mathfrak{g})$, which guarantees that the action of the BFV charge squares to zero. It is known that this cohomology is trivial~\cite{AlekseevMeinrenken,1DCSAlekseevMnev}. The resulting quantized BV-BFV model is therefore not very interesting, so we should turn to the more involved problem of the three-dimensional Chern-Simons theory. \subsection{3D Chern-Simons Theory} We may now repeat the same procedure for the three-dimensional model. The first observation is that the treatment of the part coming from the extremities of the Wilson lines, namely the terms in the insertion points labeled by $z_k$ and $z'_k$, is essentially the same as in the one-dimensional model. Each insertion contributes to the overall BFV structure with a term in (\ref{auxBFVstr}), which, when canonically quantized, gives the algebra of operators (\ref{bosonicquantumoperators}) we encountered in the quantization of the one-dimensional model. We can formally express the quantization map \[ \mathrm{Ad}_{g(z_k)}(T_{k,0}) = X_{k,a}(z_k) t_a \mapsto \rho_k(\hat{X}_a(z_k)) t^a. \] Even though the orbits might be different for different insertions, the commutation rules (\ref{bosonicquantumoperators}) are identical for all of them; only the representation $\rho_k$ differs, as we emphasized on the right-hand side.
In the next step, if we choose a complex structure on the boundary surface $\Sigma=\partial N$, we get a polarization of the connection \begin{equation} A = A_z dz + A_{\overline{z}}d\overline{z}, \end{equation} which allows us to rewrite the ambient part of the BFV structure (\ref{2DBFVstrCS}) in Darboux coordinates of the corresponding sector $\mathcal{F}^{\mathrm{amb}}_\partial$ of the BFV space of boundary fields, \begin{equation} \Omega^{\mathrm{amb}}_{\partial} = \int_{\partial N} dzd\overline{z} \left( \left( \delta A_z\delta A_{\overline{z}} \right) + \left( \delta\gamma,\delta A^+ \right) \right). \end{equation} Consequently, we may perform the canonical quantization by choosing among each pair of conjugated fields one quantum field and replacing the other one by the corresponding functional derivative, for instance \begin{equation} \begin{array}{rcl} A_{\overline{z}} &\rightarrow & a, \\ A_z &\rightarrow & -\frac{\delta}{\delta a}, \\ \gamma &\rightarrow & \gamma, \\ A^+ &\rightarrow & \frac{\delta}{\delta\gamma}, \end{array} \end{equation} so as to obtain canonical (anti)commutation rules. Note that $a$ is a boson and $\gamma$ a fermion. The Hilbert space $\mathcal{H}^{\mathrm{BFV}}_\partial$ of boundary states on which all these operators act is therefore the space of functionals in $a$ and $\gamma$ with values in the tensor product of the representation spaces associated to the insertions, \begin{displaymath} \mathcal{H}^{\mathrm{BFV}}_\partial = \mathrm{Fun}\left(a,\gamma ; \bigotimes_k V_{\rho_k} \otimes V_{\rho_k}\right). \end{displaymath} We recall that it is graded by the ghost number. Among these states, we want to determine the cohomology $H^0_{\hat{S}_\partial}( \mathcal{H}^{\mathrm{BFV}}_\partial )$ of the BFV charge at ghost number zero, made up of the quantum states of the BV-BFV model.
At ghost number zero, we are considering functionals $\psi$ of the $\mathfrak{g}$-valued $(0,1)$-form $a$, independent of the ghosts $\gamma$, which take values in the tensor product of all representation spaces associated to the extremities of the Wilson lines of the models. The BFV charge \begin{equation} \begin{split} \label{BFVcharge} \hat S_\partial =& - \int_{\partial N}dzd\overline{z} \left( \left(\partial a + \overline{\partial}\frac{\delta}{\delta a} + \left[ a,\frac{\delta}{\delta a}\right],\gamma\right) \right. \\ & \quad - \left. \sum_k \left(\rho_k(\hat{X}_a(z_k))\delta(z-z_k) - \rho_k(\hat{X}_a(z_{k'}))\delta(z-z_{k'})\right) \left( t^a , \gamma\right) \right. \\ & \quad + \left. \left( \frac{1}{2}\left[ \gamma,\gamma\right], \frac{\delta}{\delta\gamma} \right) \right) \end{split} \end{equation} acts on the Hilbert space $\mathcal{H}^{\mathrm{BFV}}_\partial$ via multiplication and differentiation by the quantum fields $a$ and $\gamma$ and via the obvious action of the representation $\rho_k$ on its representation space $V_{\rho_k}$. At ghost number zero, BFV quantum states $\psi$ are therefore subject to the condition \begin{equation} \label{constraintquantumstates} \left( \partial a + \overline{\partial}\frac{\delta}{\delta a} + \left[ a,\frac{\delta}{\delta a}\right] - \sum_k \left( \rho_k(\hat{X}_a(z_k))\delta_{z_k}(z) - \rho_k(\hat{X}_a(z_{k'}))\delta_{z_{k'}}(z) \right)t^a \right) \psi = 0. \end{equation} This actually coincides with the constraint (1) in \cite{CSgenus0} imposed on the Schr\"{o}dinger picture states in the canonical quantization of the Chern-Simons model on $\Sigma\times\mathbb{R}$ at genus 0, or the constraint (2.2) in \cite{CSgenus1} in the same situation at genus 1, where it is found that the cohomology $H^0_{\hat{S}_\partial}(\mathcal{H}^{\mathrm{BFV}}_\partial)$ coincides with the space of conformal blocks in the WZW model for a correlator of fields inserted at the extremities of the Wilson lines.
The condition (\ref{constraintquantumstates}) for quantum states also appears in the geometric quantization framework (see for instance constraint (3.4) in \cite{WittenCS}); therefore, the space of states obtained in geometric quantization coincides with the space of quantum boundary states in BV-BFV quantization.
\section{Introduction} When a compact particle bunch or laser pulse enters a plasma column, this drive beam disturbs the free plasma electrons, which can then set up an oscillatory motion that leads to strong electric fields (``wakefields'') in the direction of the bunch propagation and also transverse to it. By injecting a witness bunch of charged particles into the correct phase of the plasma electron oscillation, the system acts as a particle accelerator. This offers a compelling alternative to conventional microwave radio-frequency acceleration, which is limited to accelerating gradients of about 100\,MV/m, at which point the metallic structures where the particles are accelerated start to break down. As plasma is already ionised, it does not suffer from this limitation and accelerating gradients many orders of magnitude higher are possible. As such, plasma wakefield acceleration is a possible solution to developing accelerators of significantly reduced size for high energy particle physics, or indeed for other applications. The concept of accelerating particles in plasma was first proposed in the 1970s~\cite{prl:43:267}. The field has undergone significant development since~\cite{prl:54:693,pp:14:055501,rmp:81:1229,rast:9:63,NJP23-031101}, with progress experimentally, theoretically and in simulation. This has been aided by technology development in high-power lasers and high-performance computing. Many experiments using a laser pulse as a driver have shown that wakefields of tens of GV/m and beyond are sustainable~\cite{nature:377:606,nature:431:535,nature:431:538,nature:431:541}, with electrons accelerated up to 7.8\,GeV in one acceleration stage of 20\,cm of plasma, the highest final energy achieved so far~\cite{prl:122:084801}.
Similar accelerating gradients have been achieved when an electron bunch is used as a driver~\cite{nature:445:741,nature:515:92}, with energy gains of 42\,GeV achieved for particles in a single bunch where the head of the bunch drives the wakefields~\cite{nature:445:741} and 9\,GeV per particle achieved for a witness bunch of electrons~\cite{ppcf:58:034017}. However, the laser pulses and electron bunches both suffer from a low stored energy, meaning that multiple acceleration stages~\cite{rast:9:63,prab:13:101301} are being investigated in order to achieve the high energies needed for particle physics experiments. The possibility to use proton bunches allows the acceleration to take place in one stage given the high stored energy available in some proton accelerators. The original proposal~\cite{np:5:363} considered, in simulation, TeV protons in bunches of length 100\,$\mu$m, which are not currently available. High energy proton bunches at CERN are typically 10\,cm long; however, such bunches can undergo a process called self-modulation (SM) in plasma~\cite{bib:kumar,prl:107:145002,bib:pukhov} in which the long proton bunch is split into a series of microbunches. These microbunches are regularly spaced and hence can constructively interfere to drive strong wakefields. The SM process allows the use of proton beams that currently exist in order to develop proton-driven plasma wakefield acceleration into a technology for future particle physics experiments. The Advanced Wakefield (AWAKE) experiment at CERN~\cite{ppcf:56:084013,NIMA-829-3,nim:a829:76,bib:muggliready} was developed in order to initially verify the concept of proton-driven plasma wakefield acceleration. Proton bunches from the Super Proton Synchrotron (SPS), in which each proton has an energy of 400\,GeV, have a total energy of 19\,kJ per bunch. The bunches are typically 6--12\,cm long and undergo the SM process in a rubidium plasma.
Witness bunches of electrons can then be accelerated in the wakefields driven by the proton microbunches. Initial experiments were performed in 2016--18 in order to demonstrate the proof of concept (Run\,1). The scheme is now being developed with a series of experiments (Run\,2) to be performed in this decade. These will demonstrate it as a usable technology for high energy particle acceleration, which already has several potential applications in particle physics. The outline of the article is as follows. After this introduction, the highlights of the AWAKE Run\,1 programme are summarised in Section~\ref{sec:run1}. The physics programme of AWAKE Run\,2 is discussed in Section~\ref{sec:run2}, followed by a discussion of the setup for AWAKE Run\,2 in Section~\ref{sec:setup}. Section~\ref{sec:applications} then summarises the possible particle physics experiments that could be realised after Run\,2 with electrons provided by the AWAKE scheme. A brief summary is then given in Section~\ref{sec:summary}. \section{Summary of experimental results from AWAKE Run\,1} \label{sec:run1} In the first round of experiments~\cite{bib:muggliready}, we have demonstrated the existence of the SM process and the possibility to accelerate electrons in SM-driven wakefields. We have also observed a number of expected and unexpected characteristics of SM. % An overview of the experimental setup for Run\,1 is shown in Fig.~\ref{fig:run1-layout}. \begin{figure}[bthp!] \centering \includegraphics[width=0.75\textwidth]{Run1-layout.pdf} \caption{Schematic of the AWAKE Run\,1 (2016--18) layout. The laser and proton beams are merged before entering the plasma source. A beam of 10--20\,MeV electrons is also merged with the beam line and injected into the entrance of the plasma source. The plasma source contains rubidium vapour at about 200\,$^\circ$C with precise temperature control over the full 10\,m. The beams exit the plasma source and a series of diagnostics are used to characterise them.
There are two imaging stations to measure the transverse profile of the proton bunch and screens emitting optical and coherent transition radiation (OTR and CTR) to measure the longitudinal profile of the proton bunch. Electrons are separated from the protons using a dipole magnet, which also induces an energy-dependent spread that is measured on a scintillator screen imaged by a camera. Diagrams of the proton bunch self-modulation and electron capture are shown in the bottom left. A typical image of the accelerated electron bunch as observed on the scintillator screen is shown in the top right. From Ref.~\cite{bib:nature}. } \label{fig:run1-layout} \end{figure} The proton bunch propagates in a plasma created by a relativistic ionisation front (RIF). % The RIF is the result of the propagation of a short and intense laser pulse in a rubidium vapour~\cite{bib:oz,bib:fabiandensity,bib:gabor, pr:a104:033506}. % When the RIF is placed within the proton bunch, the part of the bunch behind the RIF travelling in plasma is transformed into a train of microbunches. This is shown in Fig.~\ref{fig:alongbunch}~(a), where a clear periodic charge density structure at $t>0$\,ps is observed. % The front ($t<0$\,ps in \mbox{Fig.~\ref{fig:alongbunch}(a)}) is unaffected. % The period of the train, or equivalently the modulation frequency, is determined by the plasma electron frequency $f_{pe}$~\cite{bib:karl}, and the scaling $f_{pe}\propto \sqrt{n_{e0}}$ was measured over one order of magnitude in plasma electron density $n_{e0}$~\footnote{The electron plasma frequency in a plasma with electron density $n_{e0}$ is: $\omega_{pe}=\left(\frac{n_{e0}e^2}{\varepsilon_0m_e}\right)^{1/2}$, $f_{pe}=\omega_{pe}/2\pi$, where $e$ is the elementary electric charge, $\varepsilon_0$ is the permittivity of free space and $m_e$ is the electron mass.}. % The train formation is a transverse process; protons between microbunches leave the bunch axis and form an expanding halo.
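The footnote's formula is easy to evaluate numerically. The sketch below (SI constants hardcoded; the density value is the one quoted in the caption of Fig.~\ref{fig:alongbunch}) gives a modulation frequency of roughly 120\,GHz and, for a relativistic bunch, a microbunch spacing $c/f_{pe}$ of about 2.5\,mm:

```python
import math

# CODATA SI constants.
e    = 1.602176634e-19   # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
m_e  = 9.1093837015e-31  # electron mass [kg]
c    = 299792458.0       # speed of light [m/s]

def f_pe(n_e0_cm3):
    """Plasma electron frequency f_pe = omega_pe/(2 pi), density in cm^-3."""
    n = n_e0_cm3 * 1e6  # cm^-3 -> m^-3
    omega_pe = math.sqrt(n * e**2 / (eps0 * m_e))
    return omega_pe / (2 * math.pi)

f = f_pe(1.81e14)   # density quoted in the caption of Fig. 2
print(f / 1e9)      # ~ 120.8 (GHz)
print(c / f * 1e3)  # ~ 2.48 (mm microbunch spacing)

# The scaling f_pe proportional to sqrt(n_e0) is exact in this formula:
assert abs(f_pe(10 * 1.81e14) / f - math.sqrt(10)) < 1e-12
```

The resulting frequency is consistent with the $\sim$100\,GHz-scale modulation expected at these densities, and the square-root scaling is the one measured over one order of magnitude in $n_{e0}$.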
% The halo radius measured 10\,m downstream from the plasma exit indicates that protons gained radial momentum from transverse wakefields with amplitudes reaching hundreds of MV/m~\cite{bib:marleneprl}. % This amplitude exceeds the initial amplitude of the wakefields driven at the RIF at the plasma entrance ($<$10\,MV/m). % This indicates that the wakefields grow along the plasma, while the increase in the radius reached by halo protons along the bunch shows that they also grow along the bunch; both growths are expected~\cite{NIMA-829-3}. % Correspondingly, large longitudinal wakefields lead to a 2\,GeV energy gain of externally injected 19\,MeV test electrons~\cite{bib:nature,bib:marleneroyal}. % Acceleration experiments also suggest that wakefields may break in the back of the bunch, due to the large amplitude of the wakefields and to the finite radial extent of the plasma~\cite{bib:james,PPCF63-055002-halo}. Combined halo radius and acceleration results in experiment and simulations show that the SM process saturates within three to five metres along the plasma~\cite{bib:marleneprab}. % \begin{figure}[bthp!] \centering \includegraphics[width=0.7\textwidth]{117_586-737_70ps_freq_delaymove10_paper_errors22_bw_small.pdf} \caption{ (a) Time-resolved image of the SM proton bunch with the RIF placed 125\,ps (0.5$\sigma_t$, where $\sigma_t$ is the RMS duration of the proton bunch) ahead of bunch centre (front of the bunch at $t<0$\,ps), and plasma electron density $n_{e0}=1.81\times$10$^{14}$\,cm$^{-3}$ (other parameters in~\cite{bib:fabianprl}). % The RIF is at $t=0$\,ps on the image. % (b) Relative RMS phase variation $\Delta\Phi$ of the modulated bunches (in \% of 2$\pi$ or of a modulation period) for each set of images acquired every 50\,ps along the bunch and aligned in time using a reference laser pulse signal visible as a vertical line at the bottom of image (a) ($x>2$\,mm).
% From Ref.~\cite{bib:fabianprl}.} \label{fig:alongbunch} \end{figure} Numerical simulation results suggest that injection and acceleration of electrons are ineffective during the SM growth. If the electrons are injected along the proton beam axis, they are defocused in the region of longitudinally varying plasma density near the entrance to the plasma section~\cite{PoP21-123116, NIMA-829-3, PoP25-063108}. If the electrons are injected at a small angle, they cross the transverse boundary of the plasma column, which reflects or scatters most of the injected charge~\cite{PPCF61-104004}. This is the likely reason why the measured accelerated charge \cite{bib:nature,bib:marleneprab} is significantly smaller than that simulated without the effect of boundary crossing~\cite{NIMA-829-3}. The phase velocity of the wakefields during SM is less than the speed of light $c$~\cite{bib:pukhov,prl:107:145002}, which prevents electrons from being accelerated to high energies \cite{IPAC14-1537}. This also causes microbunch destruction at late stages of SM development~\cite{PoP18-024501, PoP22-103110}. Fortunately, the phase velocity of the wakefields can be influenced by plasma density gradients along the beam trajectory \cite{NIMA-829-3}. % A density gradient also changes the number of protons remaining in the microbunches and the length of the train: more charge and a longer train with positive density gradients, as demonstrated experimentally~\cite{bib:falk} and detailed in simulations~\cite{bib:pablo}. Experimental and simulation results also reveal that with a density gradient the modulation frequency is not unique and varies radially across the time-resolved charge distribution of the train and halo observed 3.5\,m from the exit of the plasma~\cite{bib:pablo}. % The structure of the distribution confirms that protons forming the halo left the wakefields over the first few metres of plasma, their distribution carrying the plasma frequency near the plasma entrance.
% The microbunch train, having travelled through the entire plasma, carries the frequency at the end of the plasma. % We observe this with negative, linear density gradients~\cite{bib:falk}, for which we also observe the largest frequency variations and shorter trains, which are better able to adjust their modulation frequency to the local plasma frequency. The reproducibility of the accelerating structure in the plasma is essential for the controlled acceleration of an externally-injected electron bunch. % The electron bunch must be placed at the proper phase within the wakefields. % That is, at a position within the train where wakefields have reached their maximum, and more precisely, within a fraction of a period of the wakefields, where fields are accelerating and focusing. % The precise location is determined, for example, by loading of the wakefields for minimisation of energy spread and for emittance preservation~\cite{bib:veronica}. While we do not measure the reproducibility of the wakefields from event to event, we demonstrated that when the RIF provides initial wakefields with sufficient amplitude, the phase of the bunch modulation with respect to the RIF is reproducible, despite variation of the incoming bunch parameters~\cite{bib:fabianprl}. % That is, the SM process is seeded, i.e., reproducible and driven away from its instability (SMI) regime~\cite{bib:kumar}. % In this seeded mode, we measure RMS variations of the modulation phase smaller than 8\% of a modulation period (Fig.~\ref{fig:alongbunch}~(b)) all along the bunch train. We observe the instability when the RIF is placed further than $\approx2\sigma_t$ ahead of the centre of the bunch with RMS duration $\sigma_t$ (typically $\cong250$\,ps). % When we place the RIF $\gg2\sigma_t$ ahead of the centre of the bunch, the bunch propagates in a pre-formed plasma whose density is decaying in time because of radial expansion and recombination.
%
Recording the modulation frequency as a function of RIF timing provides a measurement of plasma density as a function of time after ionisation~\cite{bib:gessner}.
%
While the SM process is the lowest-order, symmetric mode of interaction between the long incoming bunch and the plasma, signs of the non-axi-symmetric mode, the hose instability~\cite{bib:whittum,thesis:huether}, were also observed.
%
However, this mode was only observed at low plasma densities, $n_{e0}\le0.5\times10^{14}\,{\rm cm}^{-3}$, much lower than those that led to significant energy gain, $n_{e0}>1.8\times10^{14}\,{\rm cm}^{-3}$.
%
The above results, in particular the occurrence and saturation of the SM process of the 400\,GeV proton bunch over a distance of less than 10\,m of plasma, its phase reproducibility, the possibility to accelerate electrons in the wakefields, the absence of the hosing instability and the generally excellent agreement between experimental and simulation results~\cite{bib:james,bib:gorn,bib:marleneprab,bib:pablo}, allow for the planning of the next experiments~\cite{bib:mugglirun2}.
%
These will be conducted in a number of steps geared towards experiments with two plasmas, aimed at producing a multi-GeV electron bunch with charge, emittance and relative energy spread sufficient for the applications described in this manuscript.
\section{The AWAKE Run\,2 physics programme}
\label{sec:run2}
The Run\,2 physics programme is driven mostly by the long-term goals presented in this paper: producing a high-energy electron bunch with quality sufficient for high-energy or particle physics applications.
%
Run\,2 will again use proton bunches from the SPS.
%
The main difference from Run\,1 is the use of two plasma sources, one for SM and one for acceleration, thereby allowing for on-axis injection of the electron bunch into the accelerator plasma and for better control of parameters~\cite{run2a-plan}.
%
The first part of Run\,2 focuses on the self-modulator, i.e., on the generation of the self-modulated proton bunch to drive the accelerator.
%
The second part of Run\,2 focuses on the accelerator, i.e., on external injection of the electron bunch and on scaling of its energy gain to higher energies.
%
We determined in Run\,1 that, as predicted by numerical simulations~\cite{NIMA-829-3,PoP21-083107}, the SM process saturates over a distance of 3--5\,m~\cite{bib:marleneprab}.
%
Experiments with two plasmas will thus include a 10\,m-long self-modulator plasma, followed by a 10\,m-long accelerator plasma.
%
In the current plan, the two plasma sources will be based on laser ionisation of a Rb vapour~\cite{bib:oz}.
%
These are the only sources known so far that provide a plasma density step in the self-modulator~\cite{bib:plyushchev} and the desired density uniformity~\cite{PoP20-013102} in the accelerator.
%
\subsection{Self-modulator}
The self-modulator will have two new features: the ability to seed the SM process using an electron bunch and the ability to impose a plasma density step.
%
These two features will first be tested independently, and then together.
%
\subsubsection{Electron-bunch seeding}
A major result of Run\,1 was the demonstration of the seeding of the SM process using a RIF~\cite{bib:karl}.
%
However, this seeding method leaves the front of the bunch, ahead of the RIF, un-modulated.
%
Since AWAKE long-term plans call for an accelerator plasma tens to hundreds of metres long, this plasma will have to be pre-formed.
%
The laser ionisation process of Run\,1 does not scale to such a long plasma because of energy depletion of the laser pulse and because of the focusing geometry.
%
In this preformed plasma, the un-modulated front of the bunch could experience SMI in the accelerator plasma.
%
The wakefields driven by this front SMI could interfere with the self-modulated back of the bunch and with the acceleration process.
%
The SM process can also, in principle, be seeded by a preceding driver of wakefields, such as an electron bunch or a laser pulse.
%
In this case, the entire proton bunch would become self-modulated and the possible issue with the un-modulated front would be avoided.
%
The programme thus consists of demonstrating that the SM process can indeed be seeded by the electron bunch available in the Run\,1 experimental setup.
%
The method to be used is similar to that of Run\,1~\cite{bib:fabianprl}, i.e., determining the timing of microbunches appearing along the bunch with respect to the time of the electron bunch, after 10\,m of plasma.
%
\subsubsection{Plasma density step}
\label{sec:density-step}
Numerical simulation results suggest that in a plasma with constant density along the beam path, the continuous evolution of the bunch train and wakefields leads to a decay of the amplitude of the wakefields after their saturation~\cite{PoP18-024501,PoP22-103110}.
%
These results also suggest that when a density step is applied some distance into the plasma, within the growth of the SM process, the wakefields maintain a near-saturation amplitude over a long distance along the plasma.
%
Figure~\ref{fig:field_wwo_step_gap} illustrates how the density step changes the wakefield amplitude in the SM and acceleration plasma sections.
%
These simulations are performed in the axi-symmetric geometry with the quasi-static code LCODE \cite{NIMA-829-350, PRST-AB6-061301}.
%
The parameters of the density step were optimised for the strongest wakefield at $z=20$\,m with no gap between the sections~\cite{PPCF62-115025}. The density step is seen to strongly increase the wakefield at the acceleration stage even in the presence of a 1\,m gap.
%
The AWAKE plasma source is based on a rubidium vapour, along which a uniform temperature is imposed to obtain a correspondingly uniform vapour density.
%
Laser-pulse ionisation then turns this uniform vapour density into an equally uniform plasma density~\cite{bib:karl}.
%
One can therefore simply impose a temperature step along the column to obtain the corresponding plasma density step~\cite{bib:plyushchev}. Measurements of the effect of the plasma density step on the amplitude of the wakefields will include the size and shape of the bunch halo formed by defocused protons, plasma light signals, and electron acceleration.
%
\subsection{Accelerator}
The length of the accelerator plasma is 10\,m for the first experiments.
%
This is much longer than the distance it takes for the amplitude of the wakefields to settle to steady values after the injection point (see Fig.~\ref{fig:field_wwo_step_gap}). This distance is $\sim$2\,m and results from the transverse evolution of the proton bunch in the vacuum gap between the two plasmas.
%
The plasma is thus long enough for the expected energy gain to be in the multi-GeV range along the ensuing length of plasma where the driving of the wakefields is stationary.
%
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{field_wwo_step_gap.pdf}
\caption{The amplitude of the excited wakefield $E_\text{z,\,max} (z)$ in the uniform plasma and in the plasma with the optimised density step, with and without a 1\,m gap between the SM and acceleration plasma sections. The SM process is seeded by an electron bunch. The density step is a linear growth of the plasma density from $7 \times 10^{14}\,\text{cm}^{-3}$ to $7.21 \times 10^{14}\,\text{cm}^{-3}$ over the interval between $z=0.8$\,m and $z=2.8$\,m.}
\label{fig:field_wwo_step_gap}
\end{figure}
\subsubsection{External injection}
In Run\,1, acceleration was obtained with an off-axis injection geometry~\cite{NIMA-829-3}.
%
This geometry was chosen to avoid defocusing of the injected electrons in the density ramp located at the entrance of the plasma~\cite{PoP21-123116,PoP25-063108}.
In this region, the yet un-modulated proton bunch drives transverse fields which are focusing for its own positive-charge sign, but defocusing for injected electrons. % A scheme that avoids these issues is to inject electrons on axis after the SM process has saturated. % The parameters of the electron bunch must be such that it reaches high energies, low final relative energy spread and preserves its incoming emittance. The SM process does not lead to blow-out of plasma electrons from the accelerating structure because the resonant wave drive stops when the plasma wave becomes nonlinear and its period elongates~\cite{PoP20-083119}. The initial electron bunch density $n_{b0}$ must therefore exceed the plasma electron density: $n_{b0}\gg n_{e0}$ to reach blow-out. % Blow-out of plasma electrons is necessary for the focusing force of the plasma to become that of the pure ion column, with strength increasing linearly with radius~\cite{PRA44-6189}. % The normalised emittance $\epsilon_N$ and focus RMS size of the electron bunch at the plasma entrance $\sigma_{r0}$ must be adjusted to satisfy matching to the ion column focusing force: $$\frac{\gamma_0n_{e0}}{\epsilon_N^2}\sigma_{r0}^4=\frac{2\varepsilon_0m_ec^2}{e^2}.$$ % For typical parameters ($\epsilon_N=2$\,mm\,mrad, $n_{e0}=7\times10^{14}\,\text{cm}^{-3}$), electrons must have a large enough energy or relativistic factor $\gamma_0$ to be focused to the small transverse size ($\sigma_{r0}=5.65\,\mu$m, $\gamma_0\cong300$). % In addition, the bunch length and timing in the wakefields must be optimised to load wakefields to minimise the final relative energy spread $\Delta E/E$. % An example set of parameters was developed using a toy-model for the proton bunch~\cite{bib:veronica}. % This example shows that with parameters satisfying the above conditions, about 70\% of the initial 100\,pC bunch charge preserve their emittance and reach 1.67\,GeV/c over 4\,m of plasma with $\Delta E/E\cong1\%$ (core). 
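The matching condition above can be checked numerically. A short sketch (our own arithmetic with CODATA constants, not part of the paper's simulations) solving $\gamma_0 n_{e0}\sigma_{r0}^4/\epsilon_N^2 = 2\varepsilon_0 m_e c^2/e^2$ for the matched spot size with the quoted parameters:

```python
# Solve the ion-column matching condition for the matched RMS spot size sigma_r0,
# using the parameters quoted in the text (eps_N = 2 mm mrad, n_e0 = 7e14 cm^-3,
# gamma0 ~ 300).
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg
C = 299792458.0          # speed of light, m/s
E_CH = 1.602176634e-19   # elementary charge, C

def matched_spot_size(eps_n: float, n_e0: float, gamma0: float) -> float:
    """Matched RMS spot size sigma_r0 [m] for normalised emittance eps_n [m rad],
    plasma density n_e0 [m^-3] and relativistic factor gamma0."""
    rhs = 2 * EPS0 * M_E * C**2 / E_CH**2          # the constant 2*eps0*m_e*c^2/e^2
    return (eps_n**2 * rhs / (gamma0 * n_e0)) ** 0.25

sigma = matched_spot_size(2e-6, 7e14 * 1e6, 300)
print(f"sigma_r0 = {sigma*1e6:.2f} um")  # close to the quoted 5.65 um
```

The result is a spot size of a few micrometres, consistent with the quoted $\sigma_{r0}=5.65\,\mu$m at $\gamma_0\cong300$.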
%
\subsubsection{Scalable plasma sources}
Assuming the success of experiments on electron injection into the wakefields of a 10\,m-long plasma, energy gain suitable for applications can in principle be achieved by extending the length of the accelerator plasma.
%
However, the distance over which laser ionisation can occur is limited by depletion of the energy of the laser pulse and by the focusing geometry. We are therefore developing other plasma sources that do not suffer from length limitations: direct-current electrical discharge in noble gases~\cite{Torrado2022} and helicon argon plasma~\cite{bib:helicon}.
%
The direct-current discharge plasma source (DPS) uses a short ($\sim 10\,\mu$s), high-current ($\sim$\,kA) pulse through the length of a glass tube, filled with a high-atomic-number noble gas at low pressure ($\sim$\,10\,Pa)~\cite{Torrado2022}. This current pulse follows a fast-rising high-voltage pulse ($50 - 100$\,kV) able to ignite long tubes ($L>5$\,m) into plasmas of uniform density. We are currently developing a 10\,m-long plasma source consisting of a double discharge from a mid-length common cathode to two anodes at the extremities. Operation at the plasma densities relevant for AWAKE has been demonstrated. We are currently engaged in demonstrating the ability of the plasma source to reach the required plasma density uniformity. With this source, length scalability can potentially be reached by stacking together multiple plasma sections (with lengths of a few metres to a few tens of metres) using common cathodes and anodes. Helicon plasma sources belong to the class of magnetised, wave-heated plasmas~\cite{helicon1}. They consist of a dielectric vacuum vessel and an external wave-excitation antenna, which is powered at radio frequencies. The excited helicon waves heat the plasma and have been demonstrated to generate discharges with high plasma densities~\cite{helicon2}. The length of such a source can thus in principle be extended by stacking cells.
%
Measurements show that the plasma density typical of AWAKE ($n_{e0}=7\times10^{14}\,{\rm cm}^{-3}$) can be reached~\cite{bib:helicon}.
%
The challenge is to demonstrate that the required plasma density uniformity can also be reached.
%
This demonstration requires highly accurate and localised plasma density measurements. Specific diagnostics such as Thomson scattering~\cite{epfl-paper} and optical emission spectroscopy are being developed.
\section{Overview of AWAKE Run\,2 setup}
\label{sec:setup}
The AWAKE Run\,2 scheme, including the two plasma sources, i.e., a self-modulator and an accelerator, and a new electron beam system, is shown in Fig.~\ref{fig:layout_Run2}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{AWAKE_LayoutRun2.png}
\caption{Layout of AWAKE Run\,2.}
\label{fig:layout_Run2}
\end{figure}
The AWAKE Run\,2 programme is subdivided into four phases, Run\,2a, 2b, 2c and 2d, following the physics programme described above. The current AWAKE experiment is installed upstream of the previous CERN neutrinos to Gran Sasso (CNGS) facility~\cite{bib:cngs}. The $\sim$\,100\,m-long CNGS target cavern, which currently houses the CNGS target and secondary beam line while their activation levels decay, is separated from the AWAKE experiment by a shielding wall. Phases Run\,2a and Run\,2b are carried out in the existing AWAKE facility; they started in 2021 and are foreseen to continue until 2024. For Run\,2c and Run\,2d, however, the CNGS target area needs to be dismantled; around three years are required for the removal of the CNGS target area and the installation and commissioning of the additional equipment for AWAKE Run\,2c. For this, AWAKE takes advantage of the 1--2 year shutdown of the CERN injector complex in 2026/27. It is planned to start AWAKE Run\,2c with first protons after LS3 in 2028.
\subsection{AWAKE Run\,2a} The Run\,2a experiments, focusing on electron bunch seeding of the SM process, use the same infrastructure as that of AWAKE Run\,1; bunches of protons are extracted from the CERN SPS and each consists of $3\times 10^{11}$\,protons, each with energy 400\,GeV. The bunch length can be adjusted between 6 and 12\,cm and the bunch is focused at the plasma entrance to a transverse RMS size of $\sigma_r \approx 0.2\,$mm. The plasma is 10\,m long, has a radius of approximately 1\,mm and a density uniformity better than 0.1\%~\cite{bib:fabiandensity}. To create the plasma, rubidium is evaporated in a heat exchanger and the outermost electron of each rubidium atom is ionised with a laser pulse with a pulse length of 120\,fs and a pulse energy of $\leq$\,450\,mJ. The vapour (and thus also the plasma) density is controlled by the temperature of the source and is adjustable between 0.5 and 10 $\times 10^{14}$\,atoms/cm$^3$. In AWAKE Run\,1, the relativistic ionisation front of the laser was also used to seed the self-modulation process by placing it close to the centre of the proton bunch. In AWAKE Run\,2a the laser pulse is placed in front of the proton bunch so that the entire proton bunch interacts with plasma. In AWAKE Run\,1, electron acceleration in the proton-driven plasma wakefield was demonstrated with externally injected 10--20\,MeV electrons~\cite{bib:nature}. These 100--600\,pC electron bunches have a duration of $\sigma_t \geq4$\,ps and are produced in a RF photo injector based on an S-band structure. In AWAKE Run\,2a, these electrons are used to seed the proton bunch SM. \subsection{AWAKE Run\,2b} \label{sec:density-step2} In the Run\,2b experiments, the effect of the plasma density step on the SM process will be measured. This requires a new vapour source and corresponding new diagnostics. The new vapour source is in its design phase and will be exchanged with the current source for Run\,2b. 
It includes additional observation ports in order to diagnose the electron plasma density that sustains wakefields. The experimental programme will focus on direct measurements of the plasma wakefields. To this end, different diagnostics are currently being evaluated (e.g.\ THz shadowgraphy diagnostics and plasma light diagnostics).
\subsection{AWAKE Run\,2c}
The CNGS cavern needs to be emptied in order to house the baseline AWAKE Run\,2c and Run\,2d experiments, which include the second electron source, beam line, klystron system and the second vapour source. In order to be able to integrate the entire Run\,2c experiment in the AWAKE facility, the first plasma cell will be shifted by around 40\,m downstream of its current location and consequently the new equipment will also be moved downstream accordingly. This change also requires some downstream shifting of the final dipole magnets of the proton beam line; however, despite challenging aperture constraints, no extra magnets are required. The second electron source needs to deliver electrons with 150\,MeV energy, a bunch charge of a few 100\,pC, a beam emittance of 2\,mm\,mrad and a short bunch length of 200\,fs. The baseline proposal is a novel RF gun and two X-band structures for velocity bunching and acceleration. A prototype system is currently being developed in order to demonstrate the required beam parameters for AWAKE and to study the mechanical and integration aspects in the AWAKE facility. The beam line design for the new 150\,MeV electron beam from the electron source to the plasma source is very challenging, given the tight beam specifications~\cite{Ramjiawan:2021gpy}: the beam must be matched to the plasma at the plasma merging point, with an RMS beam size satisfying $\sigma_{x,y} = \sqrt{0.00487\epsilon_{x,y}}$, zero dispersion and $\alpha_{x,y}= 0$, for emittance, $\epsilon_{x,y}$, and Twiss parameter, $\alpha_{x,y}$, where 0.00487 is the required $\beta$ function in metres.
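The quoted $\beta$ function of 0.00487\,m can be cross-checked against the matched spot size discussed in Section~\ref{sec:run2}. A short sketch (our own arithmetic; the 2\,mm\,mrad normalised emittance is assumed from the bunch parameters above):

```python
# Cross-check: the matched RMS beam size from sigma = sqrt(beta * eps_geo),
# with geometric emittance eps_geo = eps_N / gamma for a 150 MeV electron.
import math

gamma0 = 150.0 / 0.511          # relativistic factor of a 150 MeV electron
eps_n = 2e-6                    # normalised emittance, m rad (assumed 2 mm mrad)
beta = 0.00487                  # m, the quoted matched beta function
sigma = math.sqrt(beta * eps_n / gamma0)
print(f"sigma = {sigma*1e6:.1f} um")  # a few micrometres
```

The result is a spot size of a few micrometres, consistent with the matched size obtained from the ion-column focusing condition.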
In addition, the gap between the two vapour sources must be as short as possible~\cite{IPAC16-2557}, i.e., $\le 1\,$m. Also considering integration limits from the width of the tunnel, the baseline proposal of the electron transfer line is a dogleg design. Studies on the injection tolerances of the proton and electron beam are currently ongoing and are key to controlled plasma wakefield acceleration. The accelerator plasma source will have a length of about 10\,m and will be based on the laser ionised Rb vapour source (as used in AWAKE Run\,1 and in the first vapour source). The laser beam for the ionisation in the second vapour source will be injected from its downstream end, counter-propagating to the proton beam. Although the same laser as for the first vapour source can be used in Run\,2c by splitting its output beam on two branches, additional laser transport lines and a compressor chamber need to be integrated. \subsection{AWAKE Run\,2d} Once Run\,2c has demonstrated electron acceleration to high energies while controlling beam quality, the second plasma source can be exchanged with a different plasma technology. These sources will be scalable to long distances in order to accelerate electrons to energies of several 10s of GeV and beyond, allowing for the first particle physics applications. As discussed in the previous section, the plasma technologies currently under study at CERN are helicon plasma sources and discharge plasma sources. With the CNGS target cavern fully dismantled, there is enough space available to install a plasma source of 10s of metres in length. Therefore infrastructure changes for Run\,2d concern mainly the different services such as powering, cooling, etc. needed for the plasma source. First studies also show that enough space is available for a prototype fixed-target experimental setup, allowing the first particle physics experiments to be conducted with electrons accelerated from AWAKE. 
\section{Particle physics applications of AWAKE}
\label{sec:applications}
As outlined in the previous sections, the AWAKE scheme aims to develop an acceleration technology that can be used to provide beams for particle physics experiments. The ultimate goal for novel acceleration schemes is to provide the technology for a high energy, high luminosity, linear electron--positron collider with centre-of-mass energies $O({\rm TeV})$ and luminosities $10^{34}$\,cm$^{-2}$\,s$^{-1}$. In principle, though, any experiment that requires a source of bunched high energy electrons could utilise the AWAKE scheme. Initial applications focus on experiments with less challenging beam parameters than a linear $e^+e^-$ collider, although still having a strong and novel particle physics case. By the conclusion of Run\,2, the AWAKE collaboration should have demonstrated acceleration of electrons with stable GV/m gradients. Scalable plasma sources should have been developed that are extendable to lengths of up to kilometres. The acceleration process should preserve the beam quality, resulting in bunches with transverse emittance below 10\,mm\,mrad. With these developments, using proton bunches from the SPS, acceleration of electrons to 10s of GeV, and even up to $\sim$\,200\,GeV, should be possible~\cite{Lotov:2021nob}. Use of the LHC protons with energy 7\,TeV would enable acceleration of electrons up to about 6\,TeV~\cite{bib:caldwell}. A limitation of the current proton drivers is their repetition rate and hence the luminosity of any application of the AWAKE scheme. Given this, applications are considered that require high energy but where the luminosity is less critical. First ideas for particle physics experiments based on electron bunches from the AWAKE scheme were proposed in previous papers~\cite{Wing:rsta,Caldwell:2018atq}. The applications are discussed below, along with extensions and new ideas that have been developed since.
The first application would be to use a high energy electron beam impinging on a target in order to search for new phenomena related to dark matter, see Section~\ref{sec:dark-photons}. Another potential first application is collision of an electron bunch with a high-power laser pulse to investigate strong-field QED, see Section~\ref{sec:sfqed}. Both of these experiments would require electrons of 10s of GeV, although higher energies could be considered. An electron--proton collider would be the potential first use of the AWAKE scheme for a high-energy collider. There is a physics case for such a collider with electrons of 10s of GeV, and up to the TeV scale, when in collision with protons from the LHC. Each has a strong particle physics case but with less strict demands on beam quality than an $e^+e^-$ collider, see Section~\ref{sec:ep}. Finally, a new development is consideration of the proton beam at Brookhaven National Laboratory (BNL) as the wakefield driver to develop a compact electron injector, which is discussed in Section~\ref{sec:eic}.
\subsection{A beam-dump experiment for dark photon searches}
\label{sec:dark-photons}
Dark photons are postulated particles~\cite{jetp:56:502,pl:b136:279,pl:b166:196} which could provide the link to a dark or hidden sector of particles. This hidden sector could explain a number of issues in particle physics, not least of which is that they are candidates for dark matter, which is expected to make up about 80\% of the matter in the Universe. Dark photons are expected to have low masses (sub-GeV)~\cite{pl:b513:119,pr:d79:015014} and couple only weakly to Standard Model particles and so would not have been seen in previous experiments. The dark photon, labelled $A^\prime$, is a light vector boson which results from a spontaneously broken new gauge symmetry and kinetically mixes with the photon and couples to the electromagnetic current with strength $\epsilon \ll 1$.
A common approach to search for dark photons is through the interaction of an electron with a target in which the dark photon is produced and subsequently decays. Many experiments, current and proposed~\cite{arXiv:1707.04591}, are searching for dark photons and other feebly interacting particles using electrons impinging on a target. The initial electron beam energy varies, although only the NA64 experiment at CERN~\cite{pr:d89:075008,arxiv:1312.3309,prl:118:011802} has access to high energy electrons ($O(100\,{\rm GeV})$), with other experiments limited to lower energies ($O(10\,{\rm GeV})$). A limitation of these experiments is the rate of electrons on target, which in the case of NA64 is about $10^6$ electrons on target per second, as the electrons are produced in secondary interactions of the SPS proton beam. Given the limitations on the number of electrons on target for a high energy beam, the AWAKE acceleration scheme could enable an experiment to extend the search for dark photons, as the number of electrons is expected to be several orders of magnitude higher. Assuming~\cite{awake++} a bunch of $5 \times 10^9$ electrons, each of 50\,GeV in energy, and a running period of 3\,months gives $10^{16}$ electrons on a target of centimetre transverse size. As the AWAKE scheme produces bunches of electrons, an experiment based on this will run as a beam-dump experiment in which electrons are absorbed in a target and a search for dark photons decaying to an $e^+e^-$ pair is performed. This is in contrast to other fixed-target experiments in which single electrons impact on a target and other decay channels can be searched for, such as dark photons decaying to a pair of dark matter particles, which do not leave deposits in the detectors and so have a signal of missing energy.
\begin{figure}[bthp!]
\centering
\includegraphics[width=0.7\textwidth]{DarkPhotonExptLayout.pdf}
\caption{Schematic layout of an experiment to search for dark photons.
In the AWAKE scheme, a bunch of electrons enters from the left and impacts on a target of $O(1\,{\rm m})$ in length. A produced dark photon travels through a vacuum tube of length $O(10\,{\rm m})$ in which it decays to an $e^+e^-$ pair, which is then measured in a detector system such as a tracking detector and calorimeter. }
\label{fig:dark_photons_layout}
\end{figure}
The potential production of dark photons is sensitive to the experimental setup. Using initial electrons of 50\,GeV, an experimental setup, as shown in Fig.~\ref{fig:dark_photons_layout}, has been simulated using GEANT4~\cite{geant4-1,geant4-2,geant4-3}. To characterise the performance of the experiment, the sensitivity to the coupling strength, $\epsilon$, and mass, $m_{A^\prime}$, is considered and usually represented in a plot of the two. Examples are shown in Fig.~\ref{fig:dark_photons_param}, in which the sensitivity is shown as a function of the number of electrons on target and of the thickness of the target. These results show the value of having as many electrons on target as possible: the sensitivity to both $\epsilon$ and $m_{A^\prime}$ increases with an increasing number of electrons. They also show that the sensitivity is reduced with increasing target thickness; however, a thicker target is necessary in order to keep the background rate under control.
\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{EOT.png}
\includegraphics[width=0.36\textwidth]{TargetThickness.png}
\caption{Sensitivity to dark photon production shown for the coupling strength, $\epsilon$, and mass, $m_{A^\prime}$. The varied parameters of the proposed beam-dump experiment are (left) the number of electrons on target and (right) the thickness of the solid metal target which the electrons hit.
The initial electron energy is assumed to be 50\,GeV.}
\label{fig:dark_photons_param}
\end{figure}
Expected sensitivities to dark photons in an experiment using an electron beam from the AWAKE scheme are shown in Fig.~\ref{fig:dark_photons}, in comparison with previous, current and proposed experiments. Limits from previous experiments are shown as grey-shaded areas in the $\epsilon-m_{A^\prime}$ plane. Expected sensitivities from current and future experiments are shown as coloured lines. Using electrons of energy 50\,GeV will allow dark photon searches to be extended towards higher masses in the range of couplings $10^{-5} < \epsilon < 10^{-3}$. As there is the possibility of producing much higher energy electron beams, at the TeV scale, with the AWAKE scheme using the LHC protons as the wakefield driver, the sensitivity is also shown for 1\,TeV electrons; such an experiment could run as part of a future collider facility, such as the very high energy electron--proton (VHEeP) collider~\cite{epj:c76:463}, using the beam after collisions, as it is dumped. The sensitivity is extended to much higher mass values as well as lower $\epsilon$. The mass values reached approach 1\,GeV, far beyond the capability of any other experiment, current or planned. Depending on the future running of the SPS accelerator, which feeds the AWAKE experiment, a larger area of parameter space could be investigated if more electrons on target were possible. Also, recent investigations indicate that higher energy electrons, up to about 200\,GeV, are possible~\cite{Lotov:2021nob} using the SPS protons as the drive beam, which would also extend the sensitivity beyond that shown in Fig.~\ref{fig:dark_photons} for 50\,GeV electrons. Additionally, other decay channels, such as $A^\prime \to \mu^+ \mu^-$ or $A^\prime \to \pi^+ \pi^-$, could also be considered and the experiment optimised to be sensitive to these additional channels.
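The electrons-on-target figure quoted earlier ($10^{16}$ from bunches of $5\times10^9$ electrons over 3\,months) implies a modest required bunch repetition rate. A back-of-envelope check (our own arithmetic, assuming near-continuous running over the 3\,months):

```python
# How often must a 5e9-electron bunch be delivered to accumulate 1e16
# electrons on target over a 3-month running period?
electrons_total = 1e16
electrons_per_bunch = 5e9
seconds = 90 * 24 * 3600                      # ~3 months of running
bunches_needed = electrons_total / electrons_per_bunch
period_s = seconds / bunches_needed
print(f"one bunch every {period_s:.1f} s")    # a few seconds
```

A bunch every few seconds is compatible with the cycle time of a proton-driven scheme, which is why the beam-dump application tolerates the low repetition rate noted above.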
\subsection{Investigation of strong-field QED in electron--laser collisions} \label{sec:sfqed} Progress in high-power laser technology has revived the study of strong-field quantum electrodynamics (QED) since the pioneering experiment, E144~\cite{pr:d60:092004}, that investigated this area of physics in electron--laser collisions in the 1990s. As electrons pass through the intense laser pulse and so experience strong fields (the higher the intensity, the stronger the fields), QED becomes non-linear and experiments mimic the conditions that occur on the surface of neutron stars, at a black hole's event horizon or in atomic physics. The E144 experiment at SLAC investigated electron--laser collisions with bunches of electrons, each of energy $\sim$\,50\,GeV. Experiments (E320 at SLAC and LUXE at DESY with the European XFEL) are underway or planned with high-quality electron bunches, with energies in the 10--20\,GeV range~\cite{luxe,e320}. Given the expectation of bunches of electrons, each of energies in the 10s of GeV range, from the AWAKE scheme, experiments investigating strong-field QED are an obvious initial application with the possibility to also have higher electron energies. As the rate of electron--laser collisions is limited by the roughly 1\,Hz repetition frequency of high-power lasers, high-rate electron bunches are not required. \begin{figure}[bthp!] \centering \includegraphics[trim={5cm 0cm 6cm 1.5cm},clip,width=0.5\textwidth]{Hartin_awake_landscape_fix_90pc_withTeV_expt_comp_output.pdf} \caption{Limits on dark photon production decaying to an $e^+e^-$ pair in terms of the mixing strength, $\epsilon$, and dark photon mass, $m_{A^\prime}$, from previous measurements (light grey shading). The expected sensitivity for the NA64 experiment is shown for a range of electrons on target, $10^{10} - 10^{13}$. Expectations from other potential experiments are shown as coloured lines. 
Expected limits are also shown for $10^{15}$ (orange line) or $10^{16}$ (green line) electrons of 50\,GeV (``AWAKE50'') on target and $10^{16}$ (blue line) electrons of 1\,TeV (``AWAKE1k'') on target provided to an experiment using the future AWAKE accelerator scheme. From Ref.~\cite{Alemany:2019vsk}.}
\label{fig:dark_photons}
\end{figure}
\subsection{High energy electron--proton/ion colliders}
\label{sec:ep}
The HERA accelerator complex at DESY, Hamburg, has so far provided the only electron--proton collider. With electrons and protons at maximal energies of 27.5\,GeV and 920\,GeV, respectively, a centre-of-mass energy of about 318\,GeV was achieved. An electron--ion collider (EIC)~\cite{eic} is to start operation at BNL in about a decade with a lower centre-of-mass energy than HERA, up to about 140\,GeV. However, it will have significantly higher luminosity, highly polarised beams and variation in the centre-of-mass energy and ion species, providing a rich physics programme through its great flexibility. A higher energy electron--proton collider, the LHeC (large hadron--electron collider)~\cite{lhec1,lhec2}, has been proposed using electrons of 50\,GeV in collision with LHC protons, so yielding centre-of-mass energies just above the TeV scale. The possibility of an LHeC-like collider based on the AWAKE scheme to produce 50\,GeV electrons is outlined here. A significantly more compact design should be possible, although with a much reduced luminosity performance. Even more compelling is the possibility of using the AWAKE scheme to provide TeV electrons and so have electron--proton collisions at centre-of-mass energies of 9\,TeV~\cite{epj:c76:463}; this is briefly summarised below. Electrons at 3\,TeV could also be used in fixed-target mode and provide a centre-of-mass energy of $\sim$\,80\,GeV, thereby achieving values similar to those of the EIC. A high energy $ep/eA$ collider could be the first collider application of the AWAKE scheme.
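The centre-of-mass energies quoted in this subsection follow from the standard ultra-relativistic kinematics, $\sqrt{s}\approx\sqrt{4E_eE_p}$ for two colliding beams and $\sqrt{s}\approx\sqrt{2E_em_pc^2}$ for a fixed-target electron beam. A numerical check (our own arithmetic, not from the paper):

```python
# Centre-of-mass energies for the collider options discussed in the text.
import math

M_P = 0.938272  # proton mass, GeV

def cm_energy_collider(e_e_gev: float, e_p_gev: float) -> float:
    """sqrt(s) for two colliding ultra-relativistic beams, in GeV."""
    return math.sqrt(4 * e_e_gev * e_p_gev)

def cm_energy_fixed_target(e_e_gev: float) -> float:
    """sqrt(s) for an electron beam on protons at rest, in GeV."""
    return math.sqrt(2 * e_e_gev * M_P)

print(cm_energy_collider(27.5, 920))   # HERA: ~318 GeV
print(cm_energy_collider(50, 7000))    # LHeC-like / PEPIC: just above 1 TeV
print(cm_energy_collider(3000, 7000))  # VHEeP: ~9 TeV
print(cm_energy_fixed_target(3000))    # 3 TeV fixed target: ~75 GeV, EIC-like
```

All four values reproduce the numbers quoted in the text.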
In comparison to a high energy $e^+e^-$ collider, an $ep/eA$ collider poses fewer challenges to the AWAKE scheme: only one beam is required, such low emittances are not needed (as the proton emittance dominates) and positrons are potentially not needed, although they are desirable. Given the possibility of providing $O(50\,{\rm GeV})$ electrons using the SPS protons to drive wakefields, this was formulated as the PEPIC (plasma electron--proton/ion collider) project. The electrons would collide with protons from the LHC; using the proton parameters expected during the high-luminosity LHC phase and the electron-bunch parameters expected from a future AWAKE facility~\cite{awake++}, this would lead to an instantaneous luminosity of $1.5 \times 10^{27}\,{\rm cm}^{-2}{\rm s}^{-1}$. So although PEPIC has the same energy reach as the LHeC (and possibly even beyond), it would have a luminosity many orders of magnitude lower. Such a low luminosity will not allow investigation of the Higgs sector, detailed measurements of electroweak physics or other phenomena that occur at high $Q^2$, where $Q^2$ is the virtuality of the photon emitted by the electron in the $ep/eA$ collision. However, processes that occur at low $x$, where $x$ is the fraction of the proton's momentum carried by the struck parton, have very high cross sections, and so large event samples will be produced even with a low luminosity. The focus of the physics programme would then be on understanding the structure of matter and quantum chromodynamics (QCD), the theory of the strong force, in a new kinematic regime. The PEPIC collider would be an option for CERN should the LHeC project not go ahead; the initial study shows that it is feasible, could be housed within the current CERN site and would have a novel particle physics programme. Schemes to increase the luminosity should be considered, as the current design is mainly limited by the filling time of the SPS.
Mechanisms to increase the SPS bunch repetition frequency would lead to a corresponding increase in luminosity and so should be studied, along with other parameters relevant for the luminosity such as the electron bunch population and proton bunch size. The possibility to accelerate electrons up to 200\,GeV using the SPS protons as the drive beam would lead to a doubling of the centre-of-mass energy (2.4\,TeV) and so an increased kinematic range. This provides a larger phase space to search for new physics and a larger lever arm to investigate the energy dependence of high-energy cross sections. Using the LHC protons to drive wakefields could lead to electrons at the TeV scale and so an $ep$ collider with centre-of-mass energies of $O(10\,{\rm TeV})$. This has been formulated as the VHEeP collider~\cite{epj:c76:463}, in which electrons at 3\,TeV are collided with the LHC protons at 7\,TeV, giving a centre-of-mass energy of 9\,TeV. This represents a factor of 30 increase compared to HERA and hence an extension in $x$ and $Q^2$ reach of a factor of about 1000. As with PEPIC, the luminosity of VHEeP is relatively low, estimated~\cite{vheep-dis2015} to be $10^{28} - 10^{29}\,{\rm cm}^{-2}{\rm s}^{-1}$, or around 1\,pb$^{-1}$ per year. This is mainly limited by the availability of LHC protons: the proton bunches used to drive wakefields must be dumped afterwards, and so the LHC must be refilled before further electrons can be accelerated. Schemes that would increase the luminosity, such as squeezing the proton or electron bunches or having multiple interaction points, need to be considered. Even though the luminosity of the VHEeP collider is modest, the very high energy provides a compelling particle physics case. At low values of $Q^2$, tens of millions of events are expected, and so high precision measurements and searches for new physics will be possible. This will allow investigation of hadronic cross sections and the structure of matter at very high energies.
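The VHEeP scalings quoted above can be checked with back-of-envelope arithmetic. The sketch below is not from the original; in particular the $10^7$ seconds per operational year is an assumed (though common) accelerator convention:

```python
# Back-of-envelope checks of the VHEeP reach and luminosity figures.
sqrt_s_hera = 318.0    # GeV, HERA centre-of-mass energy
sqrt_s_vheep = 9000.0  # GeV, VHEeP centre-of-mass energy

# ~28, quoted above as "a factor of 30"
energy_ratio = sqrt_s_vheep / sqrt_s_hera

# The accessible x and Q^2 ranges scale with s = (sqrt(s))^2, so the
# kinematic reach grows roughly as the ratio squared: ~800-1000.
kinematic_ratio = energy_ratio ** 2

lumi = 1e29              # cm^-2 s^-1, upper end of the estimated range
seconds_per_year = 1e7   # assumed operational year
picobarn = 1e-36         # 1 pb expressed in cm^2
integrated_per_year = lumi * seconds_per_year * picobarn  # in pb^-1

print(f"{energy_ratio:.0f}x in sqrt(s), ~{kinematic_ratio:.0f}x in x/Q^2 reach, "
      f"{integrated_per_year:.1f} pb^-1 per year")
```

The upper luminosity estimate indeed integrates to about 1\,pb$^{-1}$ per year, consistent with the figure quoted in the text.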
The collisions are also equivalent to a photon of energy 20\,PeV on a fixed target and so have synergy with cosmic-ray physics. Searches for physics beyond the Standard Model, such as quark substructure or leptoquarks, will also be possible. Even with modest luminosities, the very high energy ensures that the sensitivity to leptoquarks exceeds that possible in proton--proton collisions at the LHC. The particle physics case is discussed in more detail elsewhere~\cite{epj:c76:463,Wing:rsta,Caldwell:2018atq}. As well as $ep$ collisions, which have been the focus so far, $eA$ collisions should be considered. \subsection{Use of BNL proton beams for a compact electron injector for a future electron--ion collider} \label{sec:eic} The AWAKE acceleration scheme has been considered as a possible technology for the injector for the high energy electron beam of the future EIC. The EIC is expected to collide electrons of up to 20\,GeV with protons of up to 275\,GeV -- a possible site at BNL in the US already has a circular, 4\,km long proton accelerator, but will require a new electron accelerator of similar size. It has been proposed~\cite{Chappell:2019ovd} to use the AWAKE technology at BNL by using their high-intensity proton bunches to generate large wakefields and hence accelerate electrons to high energies over short distances. Along with CERN, where AWAKE is currently based, BNL is one of the few places in the world with high energy proton bunches that can be used for the AWAKE acceleration scheme. The proton bunch parameters have been assumed to be those expected for the future EIC, namely a proton energy of 275\,GeV, $2 \times 10^{11}$\,protons/bunch, a bunch length of 5\,cm and a bunch radius of $40$ or $100\,\mu$m. These values are similar to those currently used at AWAKE with CERN's SPS accelerator, except that the smaller bunch size allows a higher plasma density to be used, which should yield larger accelerating fields.
Using these parameters, the process has been simulated and the accelerating electric field determined for the two values of the bunch radius. The results are shown in Fig.~\ref{fig:eic}, where the larger beam radius is given by the red line and the smaller beam radius by the blue line. In the more promising scenario, with a bunch radius of $40\,\mu$m, the peak field is almost 7\,GV/m, which, although it falls rapidly, levels off above 1\,GV/m, an accelerating gradient well above current conventional accelerator technology. \begin{figure}[thp!] \centering \includegraphics[width=0.6\textwidth]{EIC_plot_wgrid_largerfont2.pdf} \caption{ Evolution of the peak longitudinal fields driven by the BNL proton drive beams over 10\,m using bunch parameters which differ only in their transverse size, $\sigma_r = 40\,\mu$m or $100\,\mu$m. From Ref.~\cite{Chappell:2019ovd}.} \label{fig:eic} \end{figure} This confirms that high gradients, $> 1$\,GV/m, can be harnessed and that the required maximum energy for the EIC of 20\,GeV could be achieved in under 20\,m. There are indications that the accelerating gradient can be frozen~\cite{PoP22-103110} soon after the peak field is reached, which will be studied at AWAKE (see Sections~\ref{sec:density-step} and~\ref{sec:density-step2}); in that case, values more like $5-6$\,GV/m would be sustained in Fig.~\ref{fig:eic} and electrons could be accelerated to the required 20\,GeV in under 4\,m. Alternatively, significantly higher electron energies than currently planned, of 50\,GeV and beyond, could be achieved, so extending the kinematic reach in the investigation of parton dynamics in the proton at the EIC. \section{Summary} \label{sec:summary} This article summarised the plans for AWAKE Run\,2, which will take the scheme of proton-driven plasma wakefield acceleration from proof of concept to a technology which can provide beams for particle physics experiments.
The proof of concept, AWAKE Run\,1, showed that a long proton bunch can be modulated into a series of microbunches, which are regularly and reproducibly spaced. These microbunches constructively interfere to generate strong electric fields in their wake. The wakefields were sampled by an externally-injected bunch of electrons, which were accelerated from about 20\,MeV up to about 2\,GeV within 10\,m of plasma, representing an average field of $> 200$\,MV/m, with peak fields of up to $\sim 1$\,GV/m expected. In AWAKE Run\,2, the proof of concept will be significantly extended to address the requirements needed to develop beams for use in particle physics experiments. The primary aims of AWAKE Run\,2 are to sustain the expected peak fields of $0.5 - 1$\,GV/m over long distances, thereby increasing the accelerated electron energy; to demonstrate that the emittance of the electron bunch is preserved during acceleration in plasma; and to develop plasma sources that are scalable to 100s of metres and beyond. This will be achieved in a staged approach during the 2020s which will require significant extension of the current AWAKE facility, in particular the development of short witness electron bunches for injection, new plasma sources and a suite of diagnostics to measure the physics of the acceleration process. After completion of AWAKE Run\,2, at the end of the 2020s, the scheme should have been sufficiently demonstrated that it can be used to provide beams for particle physics experiments. Given the challenge of producing high energy electron bunches by conventional means, electrons in the 20--200\,GeV range, driven by protons from the SPS, or even at the TeV scale using LHC protons as the wakefield driver, can be used in a variety of particle physics experiments.
Such electron bunches can be used in experiments to search for dark photons, to measure QED in strong fields or as the injector or main accelerator for the electron arm of an electron--proton or electron--ion collider. These first applications place less stringent requirements on the parameters of the electron bunch than a high energy, high luminosity linear electron--positron collider; they will provide a useful stepping stone, along with continued R\&D, to such ultimate applications whilst also providing beams for novel particle physics experiments. \authorcontributions{ E.G., K.L., P.M. and M.W. conceived, wrote and edited this article. Members of the AWAKE collaboration contributed to the review of the manuscript and to the work summarised here.} \funding{ This work was supported in part by a Leverhulme Trust Research Project Grant RPG-2017-143 and by STFC (AWAKE-UK, Cockcroft Institute core, John Adams Institute core, and UCL consolidated grants), United Kingdom; the Russian Science Foundation, project 20-12-00062, for Novosibirsk's contribution; the National Research Foundation of Korea (Nos.\ NRF-2016R1A5A1013277 and NRF-2020R1A2C1010835); and the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no.\ 05E15CHA); M. Wing acknowledges the support of DESY, Hamburg. Support of the National Office for Research, Development and Innovation (NKFIH) under contract numbers 2019-2.1.6-NEMZ\_KI-2019-00004 and MEC\_R-140947, and the support of the Wigner Datacenter Cloud facility through the Awakelaser project is acknowledged. The work of V. Hafych has been supported by the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014--2020) under the Marie Sklodowska-Curie Grant Agreement No.\ 765710. The TRIUMF contribution is supported by NSERC of Canada.
} \dataavailability{ No new data is provided in this article, which presents a review of previous work.} \acknowledgments{ The AWAKE collaboration acknowledges the SPS team for their excellent proton delivery.} \conflictsofinterest{ The authors declare no conflict of interest.} \abbreviations{The following abbreviations are used in this manuscript:\\ \noindent \begin{tabular}{@{}ll} AWAKE & Advanced wakefield experiment\\ BNL & Brookhaven National Laboratory\\ CERN & European Organisation for Nuclear Research \\ & (Conseil Europ\'{e}en pour la Recherche Nucl\'{e}aire)\\ CNGS & CERN neutrinos to Gran Sasso\\ DESY & Deutsches Elektronen-Synchrotron\\ EIC & Electron--ion collider\\ GEANT & Geometry and tracking\\ HERA & Hadron--electron ring accelerator\\ LHC & Large hadron collider\\ LHeC & Large hadron--electron collider\\ LUXE & Laser und XFEL experiment\\ PEPIC & Plasma electron--proton/ion collider\\ QCD & Quantum chromodynamics\\ QED & Quantum electrodynamics\\ RIF & Relativistic ionisation front\\ RMS & Root mean square\\ SLAC & Stanford Linear Accelerator Center\\ SM & Self-modulation\\ SMI & Self-modulation instability\\ SPS & Super proton synchrotron\\ VHEeP & Very high energy electron--proton\\ XFEL & X-ray free electron laser \end{tabular}} \end{paracol} \reftitle{References}